NASA Technical Reports Server (NTRS)
Lucas, S. H.; Davis, R. C.
1992-01-01
A user's manual is presented for MacPASCO, an interactive graphic preprocessor for panel design. MacPASCO creates input for PASCO, an existing computer code for structural analysis and sizing of longitudinally stiffened composite panels. MacPASCO provides a graphical user interface which simplifies the specification of panel geometry and reduces user input errors. The user draws the initial structural geometry on the computer screen, then uses a combination of graphic and text inputs to: refine the structural geometry; specify information required for analysis, such as panel load and boundary conditions; and define design variables and constraints for minimum-mass optimization. Only the use of MacPASCO is described, since the use of PASCO has been documented elsewhere.
Precision of spiral-bevel gears
NASA Technical Reports Server (NTRS)
Litvin, F. L.; Goldrich, R. N.; Coy, J. J.; Zaretsky, E. V.
1982-01-01
The kinematic errors in spiral bevel gear trains caused by the generation of nonconjugate surfaces, by axial displacements of the gears during assembly, and by eccentricity of the assembled gears were determined. One mathematical model corresponds to the motion of the contact ellipse across the tooth surface (geometry I), and the other along the tooth surface (geometry II). The following results were obtained: (1) kinematic errors induced by errors of manufacture may be minimized by applying special machine settings; the original error may be reduced by an order of magnitude, and the procedure is most effective for geometry II gears; (2) when adjusting the bearing contact pattern between the gear teeth, for geometry I gears it is more desirable to shim the gear axially, while for geometry II gears the pinion should be shimmed axially; (3) the kinematic accuracy of spiral bevel drives is most sensitive to eccentricities of the gear and less sensitive to eccentricities of the pinion. The precision of mounting and manufacture is most crucial for the gear, and less so for the pinion.
Precision of spiral-bevel gears
NASA Technical Reports Server (NTRS)
Litvin, F. L.; Goldrich, R. N.; Coy, J. J.; Zaretsky, E. V.
1983-01-01
The kinematic errors in spiral bevel gear trains caused by the generation of nonconjugate surfaces, by axial displacements of the gears during assembly, and by eccentricity of the assembled gears were determined. One mathematical model corresponds to the motion of the contact ellipse across the tooth surface (geometry I), and the other along the tooth surface (geometry II). The following results were obtained: (1) kinematic errors induced by errors of manufacture may be minimized by applying special machine settings; the original error may be reduced by an order of magnitude, and the procedure is most effective for geometry II gears; (2) when adjusting the bearing contact pattern between the gear teeth, for geometry I gears it is more desirable to shim the gear axially, while for geometry II gears the pinion should be shimmed axially; (3) the kinematic accuracy of spiral bevel drives is most sensitive to eccentricities of the gear and less sensitive to eccentricities of the pinion. The precision of mounting and manufacture is most crucial for the gear, and less so for the pinion. Previously announced in STAR as N82-30552.
Barnette, Daniel W.
2002-01-01
The present invention provides a method of grid generation that uses the geometry of the problem space and the governing relations to generate a grid. The method can generate a grid with minimized discretization errors, and with minimal user interaction. The method of the present invention comprises assigning grid cell locations so that, when the governing relations are discretized using the grid, at least some of the discretization errors are substantially zero. Conventional grid generation is driven by the problem space geometry; grid generation according to the present invention is driven by problem space geometry and by governing relations. The present invention accordingly can provide two significant benefits: more efficient and accurate modeling, since discretization errors are minimized, and reduced-cost grid generation, since less human interaction is required.
A strategy for reducing gross errors in the generalized Born models of implicit solvation
Onufriev, Alexey V.; Sigalov, Grigori
2011-01-01
The “canonical” generalized Born (GB) formula [W. C. Still, A. Tempczyk, R. C. Hawley, and T. Hendrickson, J. Am. Chem. Soc. 112, 6127 (1990)] is known to provide accurate estimates for total electrostatic solvation energies ΔGel of biomolecules if the corresponding effective Born radii are accurate. Here we show that even if the effective Born radii are perfectly accurate, the canonical formula still exhibits a significant number of gross errors (errors larger than 2kBT relative to the numerical Poisson equation reference) in pairwise interactions between individual atomic charges. Analysis of exact analytical solutions of the Poisson equation (PE) for several idealized nonspherical geometries reveals two distinct spatial modes of the PE solution; these modes are also found in realistic biomolecular shapes. The canonical GB Green function misses one of the two modes seen in the exact PE solution, which explains the observed gross errors. To address the problem and reduce gross errors of the GB formalism, we have used exact PE solutions for idealized nonspherical geometries to suggest an alternative analytical Green function to replace the canonical GB formula. The proposed functional form is mathematically nearly as simple as the original, but depends not only on the effective Born radii but also on their gradients, which allows for better representation of details of nonspherical molecular shapes. In particular, the proposed functional form captures both modes of the PE solution seen in nonspherical geometries. Tests on realistic biomolecular structures ranging from small peptides to medium size proteins show that the proposed functional form reduces gross pairwise errors in all cases, with the amount of reduction varying from more than an order of magnitude for small structures to a factor of 2 for the largest ones. PMID:21528947
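For reference, the canonical pairwise GB Green function discussed here has the Still et al. form (written with general interior and exterior dielectric constants; Still's original formula took the interior dielectric to be 1):

$$ \Delta G_{el} \approx -\frac{1}{2}\left(\frac{1}{\epsilon_{in}} - \frac{1}{\epsilon_{out}}\right) \sum_{i,j} \frac{q_i q_j}{f^{GB}_{ij}}, \qquad f^{GB}_{ij} = \left[ r_{ij}^2 + R_i R_j \exp\!\left(-\frac{r_{ij}^2}{4 R_i R_j}\right) \right]^{1/2}, $$

where the $q_i$ are atomic charges, $r_{ij}$ their separation, and $R_i$, $R_j$ the effective Born radii; the paper's criticism concerns this single smooth interpolating function $f^{GB}$, which cannot reproduce both spatial modes of the exact PE solution.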
Adaptive optics system performance approximations for atmospheric turbulence correction
NASA Astrophysics Data System (ADS)
Tyson, Robert K.
1990-10-01
Analysis of adaptive optics system behavior often can be reduced to a few approximations and scaling laws. For atmospheric turbulence correction, the deformable mirror (DM) fitting error is most often used to determine a priori the interactuator spacing and the total number of correction zones required. This paper examines the mirror fitting error in terms of its most commonly used exponential form. The explicit constant in the error term is dependent on deformable mirror influence function shape and actuator geometry. The method of least squares fitting of discrete influence functions to the turbulent wavefront is compared to the linear spatial filtering approximation of system performance. It is found that the spatial filtering method overestimates the correctability of the adaptive optics system by a small amount. By evaluating fitting error for a number of DM configurations, actuator geometries, and influence functions, the resulting fitting error constants verify some earlier investigations.
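For context, the commonly used exponential form of the DM fitting error referred to above is the standard scaling law

$$ \sigma^2_{fit} = a_F \left( \frac{r_s}{r_0} \right)^{5/3} \ \mathrm{rad}^2, $$

where $r_s$ is the interactuator spacing, $r_0$ is the atmospheric coherence length (Fried parameter), and the fitting error constant $a_F$ is the quantity that depends on influence function shape and actuator geometry.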
NASA Astrophysics Data System (ADS)
Endelt, B.
2017-09-01
Forming operations are subject to external disturbances and changing operating conditions, e.g. a new material batch or increasing tool temperature due to plastic work; material properties and lubrication are sensitive to tool temperature. It is generally accepted that forming operations are not stable over time, and it is not uncommon to adjust the process parameters during the first half hour of production, indicating that process instability develops gradually over time. Thus, an in-process feedback control scheme might not be necessary to stabilize the process; an alternative approach is to apply an iterative learning algorithm which can learn from previously produced parts, i.e. a self-learning system which gradually reduces error based on historical process information. What is proposed in this paper is a simple algorithm which can be applied to a wide range of sheet-metal forming processes. The input to the algorithm is the final flange edge geometry, and the basic idea is to reduce the least-square error between the current flange geometry and a reference geometry using a non-linear least-square algorithm. The ILC scheme is applied to a square deep-drawing and the Numisheet'08 S-rail benchmark problem; the numerical tests show that the proposed control scheme is able to control and stabilise both processes.
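As an illustration of this kind of part-to-part learning, the sketch below applies a damped Gauss-Newton update to reduce the least-square error between a measured flange edge geometry and a reference geometry. The process model, parameter names, and gains are hypothetical stand-ins for a real forming process, not the paper's implementation:

    import numpy as np

    def ilc_gauss_newton(process, u0, reference, n_iters=10, fd_step=1e-6, gain=0.5):
        """Iterative learning control: after each produced part, update the
        process parameters u to reduce the least-squares error between the
        measured flange edge geometry and the reference geometry."""
        u = np.asarray(u0, dtype=float)
        for _ in range(n_iters):
            e = process(u) - reference                 # geometry error of current part
            # finite-difference Jacobian of the measured geometry w.r.t. parameters
            J = np.column_stack([
                (process(u + fd_step * np.eye(len(u))[i]) - (e + reference)) / fd_step
                for i in range(len(u))
            ])
            # damped Gauss-Newton step on the least-squares geometry error
            du, *_ = np.linalg.lstsq(J, -e, rcond=None)
            u = u + gain * du
        return u

    # toy stand-in for the forming process: flange geometry depends
    # nonlinearly on two parameters (e.g. blank-holder force, draw depth)
    target = np.linspace(1.0, 2.0, 20)
    toy = lambda u: target + 0.3 * np.sin(u[0]) + 0.1 * u[1] ** 2
    print(ilc_gauss_newton(toy, [0.5, 0.5], target))

In a real deployment the "process" evaluation is one produced part per iteration, so the damping gain trades convergence speed against sensitivity to measurement noise.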
NASA Technical Reports Server (NTRS)
Ulvestad, J. S.
1992-01-01
A consider error covariance analysis was performed in order to investigate the orbit-determination performance attainable using two-way (coherent) 8.4-GHz (X-band) Doppler data for two segments of the planned Mars Observer trajectory. The analysis includes the effects of the current level of calibration errors in tropospheric delay, ionospheric delay, and station locations, with particular emphasis placed on assessing the performance of several candidate elevation-dependent data-weighting functions. One weighting function was found that yields good performance for a variety of tracking geometries. This weighting function is simple and robust; it reduces the danger of error that might exist if an analyst had to select one of several different weighting functions that are highly sensitive to the exact choice of parameters and to the tracking geometry. Orbit-determination accuracy improvements that may be obtained through the use of calibration data derived from Global Positioning System (GPS) satellites also were investigated, and can be as much as a factor of three in some components of the spacecraft state vector. Assuming that both station-location errors and troposphere calibration errors are reduced simultaneously, the recommended data-weighting function need not be changed when GPS calibrations are incorporated in the orbit-determination process.
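Elevation-dependent weighting functions of the kind evaluated here down-weight low-elevation Doppler data, where troposphere calibration errors dominate; one widely used family (an illustrative assumption, not necessarily the function recommended in this study) is

$$ \sigma(\gamma) = \sigma_0 \left( 1 + \frac{k}{\sin\gamma} \right), $$

where $\gamma$ is the station elevation angle and $\sigma_0$, $k$ are tuning constants that set the floor weight and the rate of de-weighting at low elevation.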
NASA Astrophysics Data System (ADS)
Ragon, Théa; Sladen, Anthony; Simons, Mark
2018-05-01
The ill-posed nature of earthquake source estimation derives from several factors including the quality and quantity of available observations and the fidelity of our forward theory. Observational errors are usually accounted for in the inversion process. Epistemic errors, which stem from our simplified description of the forward problem, are rarely dealt with despite their potential to bias the estimate of a source model. In this study, we explore the impact of uncertainties related to the choice of a fault geometry in source inversion problems. The geometry of a fault structure is generally reduced to a set of parameters, such as position, strike and dip, for one or a few planar fault segments. While some of these parameters can be solved for, more often they are fixed to an uncertain value. We propose a practical framework to address this limitation by following a previously implemented method exploring the impact of uncertainties on the elastic properties of our models. We develop a sensitivity analysis to small perturbations of fault dip and position. The uncertainties in fault geometry are included in the inverse problem under the formulation of the misfit covariance matrix that combines both prediction and observation uncertainties. We validate this approach with the simplified case of a fault that extends infinitely along strike, using both Bayesian and optimization formulations of a static inversion. If epistemic errors are ignored, predictions are overconfident in the data and source parameters are not reliably estimated. In contrast, inclusion of uncertainties in fault geometry allows us to infer a robust posterior source model. Epistemic uncertainties can be many orders of magnitude larger than observational errors for great earthquakes (Mw > 8). Not accounting for uncertainties in fault geometry may partly explain observed shallow slip deficits for continental earthquakes. Similarly, ignoring the impact of epistemic errors can also bias estimates of near-surface slip and predictions of tsunamis induced by megathrust earthquakes.
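The formulation mentioned above combines observational and epistemic terms in a single misfit covariance; schematically (following the paper's description, with the detailed construction of the prediction term left to the source),

$$ C_{\chi} = C_d + C_p, \qquad C_p \approx K_{\psi}\, C_{\psi}\, K_{\psi}^{T}, $$

where $C_d$ is the observation covariance, $C_{\psi}$ describes the uncertainty in the fault geometry parameters $\psi$ (e.g. dip and position), and $K_{\psi}$ collects the sensitivities of the predicted data to small perturbations of $\psi$.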
Students’ Errors in Geometry Viewed from Spatial Intelligence
NASA Astrophysics Data System (ADS)
Riastuti, N.; Mardiyana, M.; Pramudya, I.
2017-09-01
Geometry is one of the difficult topics because students must have the ability to visualize, describe images, draw shapes, and know the kinds of shapes. The aim of this study is to describe student errors based on Newman's Error Analysis in solving geometry problems, viewed from spatial intelligence. This research uses a descriptive qualitative method with a purposive sampling technique. The data in this research are the results of a geometry material test and interviews with 8th graders of a Junior High School in Indonesia. The results of this study show that each category of spatial intelligence has a different type of error in solving problems on geometry material. Errors are mostly made by students with low spatial intelligence because they have deficiencies in visual abilities. Analysis of student errors viewed from spatial intelligence is expected to help students reflect when solving geometry problems.
Errors Analysis of Students in Mathematics Department to Learn Plane Geometry
NASA Astrophysics Data System (ADS)
Mirna, M.
2018-04-01
This article describes the results of qualitative descriptive research that reveals the locations, types and causes of student errors in answering plane geometry problems at the problem-solving level. Answers from 59 students on three test items showed that students made errors ranging from understanding the concepts and principles of geometry itself to applying them to problem solving. The types of errors consist of concept errors, principle errors and operational errors. The results of reflection with four subjects reveal the causes of the errors: 1) student learning motivation is very low, 2) in their high school learning experience, geometry was seen as unimportant, 3) the students' experience of using their own reasoning in problem solving is very limited, and 4) students' reasoning ability is still very low.
Accurate characterisation of hole size and location by projected fringe profilometry
NASA Astrophysics Data System (ADS)
Wu, Yuxiang; Dantanarayana, Harshana G.; Yue, Huimin; Huntley, Jonathan M.
2018-06-01
The ability to accurately estimate the location and geometry of holes is often required in the field of quality control and automated assembly. Projected fringe profilometry is a potentially attractive technique on account of being non-contacting, of lower cost, and orders of magnitude faster than the traditional coordinate measuring machine. However, we demonstrate in this paper that fringe projection is susceptible to significant (hundreds of µm) measurement artefacts in the neighbourhood of hole edges, which give rise to errors of a similar magnitude in the estimated hole geometry. A mechanism for the phenomenon is identified based on the finite size of the imaging system’s point spread function and the resulting bias produced near to sample discontinuities in geometry and reflectivity. A mathematical model is proposed, from which a post-processing compensation algorithm is developed to suppress such errors around the holes. The algorithm includes a robust and accurate sub-pixel edge detection method based on a Fourier descriptor of the hole contour. The proposed algorithm was found to reduce significantly the measurement artefacts near the hole edges. As a result, the errors in estimated hole radius were reduced by up to one order of magnitude, to a few tens of µm for hole radii in the range 2–15 mm, compared to those from the uncompensated measurements.
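A minimal sketch of sub-pixel hole-edge estimation via a truncated Fourier descriptor, in the spirit of the contour method described above (the harmonic count, noise level, and algebraic circle fit are illustrative assumptions, not the authors' exact algorithm):

    import numpy as np

    def fourier_smooth_contour(points, n_harmonics=8):
        """Represent a closed pixel-level hole contour by a truncated Fourier
        descriptor, giving a smooth sub-pixel estimate of the edge."""
        z = points[:, 0] + 1j * points[:, 1]       # contour as complex samples
        Z = np.fft.fft(z)
        Z[n_harmonics + 1 : -n_harmonics] = 0.0    # keep low-order harmonics only
        return np.fft.ifft(Z)

    def fit_circle(z):
        """Algebraic (Kasa) circle fit; returns centre (x, y) and radius."""
        x, y = z.real, z.imag
        A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
        c, *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
        cx, cy = c[0], c[1]
        return (cx, cy), np.sqrt(c[2] + cx**2 + cy**2)

    # noisy pixel-level samples of a circular hole edge
    t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
    pts = np.column_stack([5 + 10 * np.cos(t), -3 + 10 * np.sin(t)])
    pts += np.random.normal(0, 0.3, pts.shape)
    centre, r = fit_circle(fourier_smooth_contour(pts))
    print(centre, r)

Truncating the descriptor suppresses pixel-level jitter along the contour, so the subsequent fit recovers hole radius and location with sub-pixel precision.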
Rotation Matrix Method Based on Ambiguity Function for GNSS Attitude Determination.
Yang, Yingdong; Mao, Xuchu; Tian, Weifeng
2016-06-08
Global navigation satellite systems (GNSS) are well suited for attitude determination. In this study, we use the rotation matrix method to resolve the attitude angle. This method achieves better performance in reducing computational complexity and selecting satellites. The condition of the baseline length is combined with the ambiguity function method (AFM) to search for the integer ambiguity, and it is validated in reducing the span of candidates. The noise error is always the key factor in the success rate, and it is closely related to the satellite geometry model. In contrast to the AFM, the LAMBDA (Least-squares AMBiguity Decorrelation Adjustment) method gets better results in solving the relationship between the geometric model and the noise error. Although the AFM is more flexible, it lacks analysis of this aspect. In this study, the influence of the satellite geometry model on the success rate is analyzed in detail, and the computation error and the noise error are effectively treated. Not only is the flexibility of the AFM inherited, but the success rate is also increased. An experiment is conducted on a selected campus, and the performance is proved to be effective. Our results are based on simulated and real-time GNSS data and are applied to single-frequency processing, which is known as one of the challenging cases of GNSS attitude determination.
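For background, the ambiguity function method scores candidate attitude solutions directly from the fractional parts of the carrier-phase residuals, so no explicit integer search is required; a common form of the cost function (quoted as general background, not this paper's exact implementation) is

$$ AF(\mathbf{x}) = \frac{1}{n} \sum_{i=1}^{n} \cos\!\big( 2\pi \, (\Delta\phi_i - \Delta\hat{\phi}_i(\mathbf{x})) \big), $$

maximized over the baseline vector $\mathbf{x}$, where $\Delta\phi_i$ are the measured and $\Delta\hat{\phi}_i$ the predicted single-difference carrier phases in cycles; the insensitivity of the cosine to integer cycle offsets is what makes the method immune to the integer ambiguities.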
Patterned wafer geometry grouping for improved overlay control
NASA Astrophysics Data System (ADS)
Lee, Honggoo; Han, Sangjun; Woo, Jaeson; Park, Junbeom; Song, Changrock; Anis, Fatima; Vukkadala, Pradeep; Jeon, Sanghuck; Choi, DongSub; Huang, Kevin; Heo, Hoyoung; Smith, Mark D.; Robinson, John C.
2017-03-01
Process-induced overlay errors from outside the litho cell, including non-uniform wafer stress, have become a significant contributor to the overlay error budget. Previous studies have shown the correlation between process-induced stress and overlay and the opportunity for improvement in process control, including the use of patterned wafer geometry (PWG) metrology to reduce stress-induced overlay signatures. A key challenge of volume semiconductor manufacturing is how to improve not only the magnitude of these signatures, but also the wafer-to-wafer variability. This work involves a novel technique of using PWG metrology to provide improved litho control by wafer-level grouping based on incoming process-induced overlay, relevant for both 3D NAND and DRAM. Examples shown in this study are from 19 nm DRAM manufacturing.
Constitutive parameter measurements of lossy materials
NASA Technical Reports Server (NTRS)
Dominek, A.; Park, A.
1989-01-01
The electrical constitutive parameters of lossy materials are considered. A discussion of the NRL arch for lossy coatings is presented, involving analysis of the reflected field using the geometrical theory of diffraction (GTD) and physical optics (PO). The actual values for these parameters can be obtained through a traditional transmission technique, which is examined from an error analysis standpoint. Alternate sample geometries are suggested for this technique to reduce sample tolerance requirements for accurate parameter determination. The performance for one alternate geometry is given.
Determination of head conductivity frequency response in vivo with optimized EIT-EEG.
Dabek, Juhani; Kalogianni, Konstantina; Rotgans, Edwin; van der Helm, Frans C T; Kwakkel, Gert; van Wegen, Erwin E H; Daffertshofer, Andreas; de Munck, Jan C
2016-02-15
Electroencephalography (EEG) benefits from accurate head models. Dipole source modelling errors can be reduced from over 1 cm to a few millimetres by replacing generic head geometry and conductivity with tailored ones. When adequate head geometry is available, electrical impedance tomography (EIT) can be used to infer the conductivities of head tissues. In this study, the boundary element method (BEM) is applied with three-compartment (scalp, skull and brain) subject-specific head models. The optimal injection of small currents to the head with a modular EIT current injector, and voltage measurement by an EEG amplifier, is first sought by simulations. The measurement with a 64-electrode EEG layout is studied with respect to three noise sources affecting EIT: background EEG, deviations from the fitting assumption of equal scalp and brain conductivities, and smooth model geometry deviations from the true head geometry. The noise source effects were investigated depending on the positioning of the injection and extraction electrodes and the number of their combinations used sequentially. The deviation from equal scalp and brain conductivities produces rather deterministic errors in the three conductivities irrespective of the current injection locations. With a realistic measurement of around 2 min and around 8 distant distinct current injection pairs, the error from the other noise sources is reduced to around 10% or less in the skull conductivity. The analysis of subsequent real measurements, however, suggests that there could be subject-specific local thinnings in the skull, which could amplify the conductivity fitting errors. With proper analysis of multiplexed sinusoidal EIT current injections, the measurements on average yielded conductivities of 340 mS/m (scalp and brain) and 6.6 mS/m (skull) at 2 Hz. From 11 to 127 Hz, the conductivities increased by 1.6% (scalp and brain) and 6.7% (skull) on average. The proper analysis was ensured by recombining the current injections into virtual ones, avoiding problems in location-specific skull morphology variations. The observed large intersubject variations support the need for in vivo measurement of skull conductivity, resulting in calibrated subject-specific head models.
SU-E-J-15: A Patient-Centered Scheme to Mitigate Impacts of Treatment Setup Error
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, L; Southern Medical University, Guangzhou; Tian, Z
2014-06-01
Purpose: Current intensity modulated radiation therapy (IMRT) is plan-centered. At each treatment fraction, we position the patient to match the setup in the treatment plan. Inaccurate setup can compromise the delivered dose distribution and hence lead to suboptimal treatments. Moreover, the current setup approach via couch shift under image guidance can correct translational errors, while rotational and deformation errors are hard to address. To overcome these problems, we propose in this abstract a patient-centered scheme to mitigate the impacts of treatment setup errors. Methods: In the patient-centered scheme, we first position the patient on the couch approximately matching the planned setup. Our Supercomputing Online Replanning Environment (SCORE) is then employed to design an optimal treatment plan based on the daily patient geometry. It hence mitigates the impacts of treatment setup error and reduces the requirements on setup accuracy. We have conducted simulation studies on 10 head-and-neck (HN) patients to investigate the feasibility of this scheme. Rotational and deformation setup errors were simulated. Specifically, 1, 3, 5, and 7 degrees of rotation were applied in the pitch, roll, and yaw directions; deformation errors were simulated by splitting neck movements into four basic types: rotation, lateral bending, flexion and extension. Setup variation ranges are based on numbers observed in previous studies. Dosimetric impacts of our scheme were evaluated on PTVs and OARs in comparison with the original plan dose with the original geometry and the original plan dose recalculated with the new setup geometries. Results: With the conventional plan-centered approach, setup error could lead to significant PTV D99 decrease (−0.25∼+32.42%) and contralateral-parotid Dmean increase (−35.09∼+42.90%). The patient-centered approach is effective in mitigating such impacts to 0∼+0.20% and −0.03∼+5.01%, respectively. Computation time is <128 s. Conclusion: A patient-centered scheme is proposed to mitigate setup error impacts using replanning. Its superiority in terms of dosimetric impacts and feasibility has been shown through simulation studies on HN cases.
NASA Technical Reports Server (NTRS)
Snyder, G. Jeffrey (Inventor)
2015-01-01
A high temperature Seebeck coefficient measurement apparatus and method with various features to minimize typical sources of errors is described. Common sources of temperature and voltage measurement errors which may impact accurate measurement are identified and reduced. Applying the identified principles, a high temperature Seebeck measurement apparatus and method employing a uniaxial, four-point geometry is described to operate from room temperature up to 1300K. These techniques for non-destructive Seebeck coefficient measurements are simple to operate, and are suitable for bulk samples with a broad range of physical types and shapes.
NASA Technical Reports Server (NTRS)
Litvin, Faydor L.; Feng, Pin-Hao; Lagutin, Sergei A.
2000-01-01
In this report, we propose a new geometry for low-noise, increased-strength helical gears of the Novikov-Wildhaber type. Contact stresses are reduced as a result of their convex-concave gear tooth surfaces. The gear tooth surfaces are crowned in the profile direction to localize bearing contact and in the longitudinal direction to obtain a parabolic function of transmission errors. Such a function results in the reduction of noise and vibrations. Methods for the generation of the proposed gear tooth surfaces by grinding and hobbing are considered, and a tooth contact analysis (TCA) computer program to simulate meshing and contact is applied. The report also investigates the influence of misalignment on transmission errors and shift of bearing contact. Numerical examples to illustrate the developed approaches are proposed. The proposed geometry was patented by Ford/UIC (Serial Number 09-340-824, pending) on June 28, 1999.
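For context, the predesigned parabolic function of transmission errors referred to above has the form used throughout Litvin's work,

$$ \Delta\phi_2(\phi_1) = -a\,\phi_1^2, $$

where $\phi_1$ is the pinion rotation angle and $a$ is a chosen parabola coefficient; a parabolic function of this type can absorb the nearly linear, discontinuous transmission error functions caused by misalignment, which are the main source of noise and vibration in misaligned gear drives.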
The slip-and-slide algorithm: a refinement protocol for detector geometry
Ginn, Helen Mary; Stuart, David Ian
2017-01-01
Geometry correction is traditionally plagued by mis-fitting of correlated parameters, leading to local minima which prevent further improvements. Segmented detectors pose an enhanced risk of mis-fitting: even a minor confusion of detector distance and panel separation can prevent improvement in data quality. The slip-and-slide algorithm breaks down the effects of the correlated parameters and their associated target functions in a fundamental shift in the approach to the problem. Parameters are never refined against the components of the data to which they are insensitive, providing a dramatic boost in the exploitation of information from a very small number of diffraction patterns. This algorithm can be applied before indexing, exploiting the adherence of the spot-finding results to a given lattice, with unit-cell dimensions as a restraint. Alternatively, it can be applied to the predicted spot locations and the observed reflection positions after indexing from a smaller number of images. Thus, the indexing rate can be boosted by 5.8% using geometry refinement from only 125 indexed patterns or 500 unindexed patterns. In one example of cypovirus type 17 polyhedrin diffraction at the Linac Coherent Light Source, this geometry refinement reveals a detector tilt of 0.3° (resulting in a maximal Z-axis error of ∼0.5 mm from an average detector distance of ∼90 mm) whilst treating all panels independently. Re-indexing and integrating with updated detector geometry reduces systematic errors, providing a boost in the anomalous signal of sulfur atoms by 20%. Due to the refinement of decoupled parameters, this geometry method also reaches convergence. PMID:29091058
NASA Astrophysics Data System (ADS)
Chang, Jina; Tian, Zhen; Lu, Weiguo; Gu, Xuejun; Chen, Mingli; Jiang, Steve B.
2017-05-01
Multi-atlas segmentation (MAS) has been widely used to automate the delineation of organs at risk (OARs) for radiotherapy. Label fusion is a crucial step in MAS to cope with the segmentation variabilities among multiple atlases. However, most existing label fusion methods do not consider the potential dosimetric impact of the segmentation result. In this proof-of-concept study, we propose a novel geometry-dosimetry label fusion method for MAS-based OAR auto-contouring, which evaluates the segmentation performance in terms of both geometric accuracy and the dosimetric impact of the segmentation accuracy on the resulting treatment plan. Unlike the original selective and iterative method for performance level estimation (SIMPLE), we evaluated and rejected the atlases based on both the Dice similarity coefficient and the predicted error of the dosimetric endpoints. The dosimetric error was predicted using our previously developed geometry-dosimetry model. We tested our method in MAS-based rectum auto-contouring on 20 prostate cancer patients. The accuracy in the rectum sub-volume close to the planning target volume (PTV), which was found to be a dosimetrically sensitive region of the rectum, was greatly improved. The mean absolute distance between the obtained contour and the physician-drawn contour in the rectum sub-volume 2 mm away from the PTV was reduced from 3.96 mm to 3.36 mm on average for the 20 patients, with the maximum decrease found to be from 9.22 mm to 3.75 mm. We also compared the dosimetric endpoints predicted for the obtained contours with those predicted for the physician-drawn contours. Our method led to smaller dosimetric endpoint errors than the SIMPLE method in 15 patients, comparable errors in 2 patients, and slightly larger errors in 3 patients. These results indicated the efficacy of our method in terms of considering both geometric accuracy and dosimetric impact during label fusion. Our algorithm can be applied to different tumor sites and radiation treatments, given a specifically trained geometry-dosimetry model.
Chang, Jina; Tian, Zhen; Lu, Weiguo; Gu, Xuejun; Chen, Mingli; Jiang, Steve B
2017-05-07
Multi-atlas segmentation (MAS) has been widely used to automate the delineation of organs at risk (OARs) for radiotherapy. Label fusion is a crucial step in MAS to cope with the segmentation variabilities among multiple atlases. However, most existing label fusion methods do not consider the potential dosimetric impact of the segmentation result. In this proof-of-concept study, we propose a novel geometry-dosimetry label fusion method for MAS-based OAR auto-contouring, which evaluates the segmentation performance in terms of both geometric accuracy and the dosimetric impact of the segmentation accuracy on the resulting treatment plan. Unlike the original selective and iterative method for performance level estimation (SIMPLE), we evaluated and rejected the atlases based on both the Dice similarity coefficient and the predicted error of the dosimetric endpoints. The dosimetric error was predicted using our previously developed geometry-dosimetry model. We tested our method in MAS-based rectum auto-contouring on 20 prostate cancer patients. The accuracy in the rectum sub-volume close to the planning target volume (PTV), which was found to be a dosimetrically sensitive region of the rectum, was greatly improved. The mean absolute distance between the obtained contour and the physician-drawn contour in the rectum sub-volume 2 mm away from the PTV was reduced from 3.96 mm to 3.36 mm on average for the 20 patients, with the maximum decrease found to be from 9.22 mm to 3.75 mm. We also compared the dosimetric endpoints predicted for the obtained contours with those predicted for the physician-drawn contours. Our method led to smaller dosimetric endpoint errors than the SIMPLE method in 15 patients, comparable errors in 2 patients, and slightly larger errors in 3 patients. These results indicated the efficacy of our method in terms of considering both geometric accuracy and dosimetric impact during label fusion. Our algorithm can be applied to different tumor sites and radiation treatments, given a specifically trained geometry-dosimetry model.
NASA Astrophysics Data System (ADS)
Ding, Lei; Lai, Yuan; He, Bin
2005-01-01
It is of importance to localize neural sources from scalp-recorded EEG. Low resolution brain electromagnetic tomography (LORETA) has received considerable attention for localizing brain electrical sources. However, most such efforts have used spherical head models in representing the head volume conductor. Investigation of the performance of LORETA in a realistic geometry head model, as compared with the spherical model, will provide useful information guiding interpretation of data obtained by using the spherical head model. The performance of LORETA was evaluated by means of computer simulations. The boundary element method was used to solve the forward problem. A three-shell realistic geometry (RG) head model was constructed from MRI scans of a human subject. Dipole source configurations of a single dipole located at different regions of the brain with varying depth were used to assess the performance of LORETA in different regions of the brain. A three-sphere head model was also used to approximate the RG head model; similar simulations were performed, and the results were compared with those of the RG-LORETA with reference to the locations of the simulated sources. Multi-source localizations were discussed and examples were given in the RG head model. Localization errors employing the spherical LORETA, with reference to the source locations within the realistic geometry head, were about 20-30 mm for the four brain regions evaluated: frontal, parietal, temporal and occipital regions. Localization errors employing the RG head model were about 10 mm over the same four brain regions. The present simulation results suggest that the use of the RG head model reduces the localization error of LORETA, and that the RG head model based LORETA is desirable if high localization accuracy is needed.
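For background, LORETA belongs to the family of weighted minimum-norm inverse solutions; in generic form (the weighting in LORETA incorporates a discrete spatial Laplacian to enforce smoothness), with lead-field matrix $K$ computed here by BEM, scalp measurements $\phi$, and weighting matrix $W$, the source estimate is

$$ \hat{j} = (W^{T}W)^{-1} K^{T} \big( K (W^{T}W)^{-1} K^{T} \big)^{+} \, \phi, $$

where $(\cdot)^{+}$ denotes the Moore-Penrose pseudoinverse. The head model enters only through $K$, which is how the spherical-versus-realistic geometry difference maps into the localization errors reported above.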
NASA Astrophysics Data System (ADS)
Koon, Daniel W.; Heřmanová, Martina; Náhlík, Josef
2015-11-01
We have undertaken the first systematic computational and experimental study of the sensitivity of charge transport measurement to local physical defects for van der Pauw circular and square cloverleafs with rounded internal corners and unclovered geometries, using copper-foil specimens. Cloverleafs with rounded internal corners are in common use and reduce sampling of the material near their boundaries, an advantage over sharp corners. We have defined two parameters for these cloverleafs, one of which, the ‘admittance’, is the best predictor of the sensitivity at the center of these specimens, with this sensitivity depending only weakly on the central ‘core’ size when its diameter is less than about 60% of the specimen’s lateral size. Resistive measurement errors in all four geometries are linear in areas for errors up to about 50% in sheet resistance, and superlinear above. An ASTM-based ‘standard’ cloverleaf geometry, in which the central core diameter of the specimen is 1/5 the overall length and the slit widths are 1/10 the overall length, narrows the effective area sampled by the resistive measurement by a factor of about 16 × in the small-hole limit and over 40 × for larger holes, relative to unclovered geometries, whether square or circular, with a smooth transition in these numbers for geometries intermediate between the standard cloverleaf and unclovered specimens. We believe that this work will allow materials scientists to better estimate the impact of factors such as the uniformity of film thickness and of material purity on their measurements, and allow sensor designers to better choose an optimal specimen geometry.
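For background, the sheet resistance $R_s$ in the van der Pauw method is obtained from two four-point resistance measurements $R_{AB,CD}$ and $R_{BC,DA}$ taken around the specimen boundary via

$$ \exp\!\left( -\frac{\pi R_{AB,CD}}{R_s} \right) + \exp\!\left( -\frac{\pi R_{BC,DA}}{R_s} \right) = 1; $$

local defects perturb these boundary resistances, and it is this sensitivity that the cloverleaf geometries studied above are designed to confine toward the center of the specimen.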
An approach to develop an algorithm to detect the climbing height in radial-axial ring rolling
NASA Astrophysics Data System (ADS)
Husmann, Simon; Hohmann, Magnus; Kuhlenkötter, Bernd
2017-10-01
Radial-axial ring rolling is the mainly used forming process to produce seamless rings, which are applied in miscellaneous industries like the energy sector, aerospace technology, and the automotive industry. Due to the simultaneous forming in two opposite rolling gaps and the fact that ring rolling is a mass forming process, different errors can occur during the rolling process. Ring climbing is one of the most common process errors, leading to a distortion of the ring's cross section and a deformation of the ring's geometry. The conventional sensors of a radial-axial rolling machine cannot detect this error. Therefore, it is a common strategy to roll a slightly bigger ring, so that randomly occurring process errors can be reduced afterwards by removing the additional material. The LPS installed an image processing system at the radial rolling gap of their ring rolling machine to enable the recognition and measurement of climbing rings and, by this, to reduce the additional material. This paper presents the algorithm which enables the image processing system to detect the error of a climbing ring and ensures comparably reliable results for the measurement of the climbing height of the rings.
An optical fiber spool for laser stabilization with reduced acceleration sensitivity to 10⁻¹²/g
NASA Astrophysics Data System (ADS)
Hu, Yong-Qi; Dong, Jing; Huang, Jun-Chao; Li, Tang; Liu, Liang
2015-10-01
Environmental vibration causes mechanical deformation in optical fibers, which induces excess frequency noise in fiber-stabilized lasers. In order to solve such a problem, we propose an ultralow acceleration sensitivity fiber spool with a symmetrically mounted structure. By numerical analysis with the finite element method, we obtain the optimal geometry parameters of the spool, with which the horizontal and vertical acceleration sensitivity can be reduced to 3.25 × 10⁻¹²/g and 5.38 × 10⁻¹²/g respectively. Moreover, the structure features insensitivity to variation of the geometry parameters, which will minimize the influence of numerical simulation error and manufacturing tolerance. Project supported by the National Natural Science Foundation of China (Grant Nos. 11034008 and 11274324) and the Key Research Program of the Chinese Academy of Sciences (Grant No. KJZD-EW-W02).
Progress in NEXT Ion Optics Modeling
NASA Technical Reports Server (NTRS)
Emhoff, Jerold W.; Boyd, Iain D.
2004-01-01
Results are presented from an ion optics simulation code applied to the NEXT ion thruster geometry. The error in the potential field solver of the code is characterized, and methods and requirements for reducing this error are given. Results from a study on electron backstreaming using the improved field solver are given and shown to compare much better to experimental results than previous studies. Results are also presented on a study of the beamlet behavior in the outer radial apertures of the NEXT thruster. The low beamlet currents in this region allow over-focusing of the beam, causing direct impingement of ions on the accelerator grid aperture wall. Different possibilities for reducing this direct impingement are analyzed, with the conclusion that, of the methods studied, decreasing the screen grid aperture diameter eliminates direct impingement most effectively.
A Novel Multi-Camera Calibration Method based on Flat Refractive Geometry
NASA Astrophysics Data System (ADS)
Huang, S.; Feng, M. C.; Zheng, T. X.; Li, F.; Wang, J. Q.; Xiao, L. F.
2018-03-01
Multi-camera calibration plays an important role in many fields. In this paper, we present a novel multi-camera calibration method based on flat refractive geometry. All cameras can acquire calibration images of a transparent glass calibration board (TGCB) at the same time. The application of the TGCB leads to a refractive phenomenon which can generate calibration error. The theory of flat refractive geometry is employed to eliminate this error. The new method can account for the refractive phenomenon of the TGCB. Moreover, the bundle adjustment method is used to minimize the reprojection error and obtain optimized calibration results. Finally, the four-camera calibration results on real data show that the mean value and standard deviation of the reprojection error of our method are 4.3411e-05 and 0.4553 pixel, respectively. The experimental results show that the proposed method is accurate and reliable.
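The bundle adjustment step mentioned above minimizes the total reprojection error over all cameras and calibration points; in standard form (general background, not the paper's exact parameterization),

$$ \min_{\{K_c,\, R_c,\, t_c,\, X_j\}} \sum_{c} \sum_{j} \big\| \mathbf{x}_{cj} - \pi\big( K_c (R_c X_j + t_c) \big) \big\|^2, $$

where $\mathbf{x}_{cj}$ is the observed image point of board corner $X_j$ in camera $c$, $K_c$, $R_c$, $t_c$ are the intrinsics and pose of camera $c$, and $\pi$ is the perspective projection.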
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Morelli, Eugene A.
2013-01-01
The NASA Generic Transport Model (GTM) nonlinear simulation was used to investigate the effects of errors in sensor measurements, mass properties, and aircraft geometry on the accuracy of identified parameters in mathematical models describing the flight dynamics and determined from flight data. Measurements from a typical flight condition and system identification maneuver were systematically and progressively deteriorated by introducing noise, resolution errors, and bias errors. The data were then used to estimate nondimensional stability and control derivatives within a Monte Carlo simulation. Based on these results, recommendations are provided for maximum allowable errors in sensor measurements, mass properties, and aircraft geometry to achieve desired levels of dynamic modeling accuracy. Results using additional flight conditions and parameter estimation methods, as well as a nonlinear flight simulation of the General Dynamics F-16 aircraft, were compared with these recommendations.
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Morelli, Eugene A.
2013-01-01
A nonlinear simulation of the NASA Generic Transport Model was used to investigate the effects of errors in sensor measurements, mass properties, and aircraft geometry on the accuracy of dynamic models identified from flight data. Measurements from a typical system identification maneuver were systematically and progressively deteriorated and then used to estimate stability and control derivatives within a Monte Carlo analysis. Based on the results, recommendations were provided for maximum allowable errors in sensor measurements, mass properties, and aircraft geometry to achieve desired levels of dynamic modeling accuracy. Results using other flight conditions, parameter estimation methods, and a full-scale F-16 nonlinear aircraft simulation were compared with these recommendations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Egan, A; Laub, W
2014-06-15
Purpose: Several shortcomings of the current implementation of the analytic anisotropic algorithm (AAA) may lead to dose calculation errors in highly modulated treatments delivered to highly heterogeneous geometries. Here we introduce a set of dosimetric error predictors that can be applied to a clinical treatment plan and patient geometry in order to identify high-risk plans. Once a problematic plan is identified, the treatment can be recalculated with a more accurate algorithm in order to better assess its viability. Methods: Here we focus on three distinct sources of dosimetric error in the AAA algorithm. First, due to a combination of discrepancies in small-field beam modeling as well as volume averaging effects, dose calculated through small MLC apertures can be underestimated, while that behind small MLC blocks can be overestimated. Second, due to the rectilinear scaling of the Monte Carlo generated pencil beam kernel, energy is not properly transported through heterogeneities near, but not impeding, the central axis of the beamlet. And third, AAA overestimates dose in regions of very low density (< 0.2 g/cm³). We have developed an algorithm to detect the location and magnitude of each scenario within the patient geometry, namely the field-size index (FSI), the heterogeneous scatter index (HSI), and the low-density index (LDI), respectively. Results: The error indices successfully identify deviations between AAA and Monte Carlo dose distributions in simple phantom geometries. The algorithms are currently implemented in the MATLAB computing environment and are able to run on a typical RapidArc head-and-neck geometry in less than an hour. Conclusion: Because these error indices successfully identify each type of error in contrived cases, with sufficient benchmarking, this method can be developed into a clinical tool that may be able to help estimate AAA dose calculation errors and indicate when it might be advisable to use Monte Carlo calculations.
Computer-Aided Evaluation of Blood Vessel Geometry From Acoustic Images.
Lindström, Stefan B; Uhlin, Fredrik; Bjarnegård, Niclas; Gylling, Micael; Nilsson, Kamilla; Svensson, Christina; Yngman-Uhlin, Pia; Länne, Toste
2018-04-01
A method for computer-aided assessment of blood vessel geometries based on shape-fitting algorithms from metric vision was evaluated. Acoustic images of cross sections of the radial artery and cephalic vein were acquired, and medical practitioners used a computer application to measure the wall thickness and nominal diameter of these blood vessels with a caliper method and the shape-fitting method. The methods performed equally well for wall thickness measurements. The shape-fitting method was preferable for measuring the diameter, since it reduced systematic errors by up to 63% in the case of the cephalic vein because of its eccentricity.
The effect of surface anisotropy and viewing geometry on the estimation of NDVI from AVHRR
Meyer, David; Verstraete, M.; Pinty, B.
1995-01-01
Since terrestrial surfaces are anisotropic, all spectral reflectance measurements obtained with a small instantaneous field of view instrument are specific to these angular conditions, and the value of the corresponding NDVI, computed from these bidirectional reflectances, is relative to the particular geometry of illumination and viewing at the time of the measurement. This paper documents the importance of these geometric effects through simulations of the AVHRR data acquisition process, and investigates the systematic biases that result from the combination of ecosystem-specific anisotropies with instrument-specific sampling capabilities. Typical errors in the value of NDVI are estimated, and strategies to reduce these effects are explored.
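For reference, NDVI is computed from the red and near-infrared bidirectional reflectances (AVHRR channels 1 and 2, respectively):

$$ NDVI = \frac{\rho_{NIR} - \rho_{red}}{\rho_{NIR} + \rho_{red}}, $$

which is why angular effects on the two channel reflectances propagate directly into the index value.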
Polyhedral Interpolation for Optimal Reaction Control System Jet Selection
NASA Technical Reports Server (NTRS)
Gefert, Leon P.; Wright, Theodore
2014-01-01
An efficient algorithm is described for interpolating optimal values for spacecraft Reaction Control System jet firing duty cycles. The algorithm uses the symmetrical geometry of the optimal solution to reduce the number of calculations and data storage requirements to a level that enables implementation on the small real time flight control systems used in spacecraft. The process minimizes acceleration direction errors, maximizes control authority, and minimizes fuel consumption.
Robust quantum logic in neutral atoms via adiabatic Rydberg dressing
Keating, Tyler; Cook, Robert L.; Hankin, Aaron M.; ...
2015-01-28
We study a scheme for implementing a controlled-Z (CZ) gate between two neutral-atom qubits based on the Rydberg blockade mechanism in a manner that is robust to errors caused by atomic motion. By employing adiabatic dressing of the ground electronic state, we can protect the gate from decoherence due to random phase errors that typically arise because of atomic thermal motion. In addition, the adiabatic protocol allows for a Doppler-free configuration that involves counterpropagating lasers in a σ+/σ− orthogonal polarization geometry that further reduces motional errors due to Doppler shifts. The residual motional error is dominated by dipole-dipole forces acting on doubly-excited Rydberg atoms when the blockade is imperfect. As a result, for reasonable parameters, with qubits encoded into the clock states of 133Cs, we predict that our protocol could produce a CZ gate in < 10 μs with error probability on the order of 10⁻³.
Paton, Robert S; Goodman, Jonathan M
2009-04-01
We have evaluated the performance of a set of widely used force fields by calculating the geometries and stabilization energies for a large collection of intermolecular complexes. These complexes are representative of a range of chemical and biological systems for which hydrogen bonding, electrostatic, and van der Waals interactions play important roles. Benchmark energies are taken from the high-level ab initio values in the JSCH-2005 and S22 data sets. All of the force fields underestimate stabilization resulting from hydrogen bonding, but the energetics of electrostatic and van der Waals interactions are described more accurately. OPLSAA gave a mean unsigned error of 2 kcal mol⁻¹ for all 165 complexes studied, and outperforms DFT calculations employing very large basis sets for the S22 complexes. The magnitude of hydrogen bonding interactions are severely underestimated by all of the force fields tested, which contributes significantly to the overall mean error; if complexes which are predominantly bound by hydrogen bonding interactions are discounted, the mean unsigned error of OPLSAA is reduced to 1 kcal mol⁻¹. For added clarity, web-based interactive displays of the results have been developed which allow comparisons of force field and ab initio geometries to be performed and the structures viewed and rotated in three dimensions.
Quantum error-correcting codes from algebraic geometry codes of Castle type
NASA Astrophysics Data System (ADS)
Munuera, Carlos; Tenório, Wanderson; Torres, Fernando
2016-10-01
We study algebraic geometry codes producing quantum error-correcting codes by the CSS construction. We pay particular attention to the family of Castle codes. We show that many of the examples known in the literature in fact belong to this family of codes. We systematize these constructions by showing the common theory that underlies all of them.
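For background, the CSS construction referred to above takes nested linear codes $C_2 \subseteq C_1 \subseteq \mathbb{F}_q^n$ with $k_i = \dim C_i$ and yields a quantum code

$$ [[\,n,\; k_1 - k_2,\; d\,]]_q, \qquad d \ge \min\{\operatorname{wt}(c) : c \in (C_1 \setminus C_2) \cup (C_2^{\perp} \setminus C_1^{\perp})\}; $$

algebraic geometry codes such as the Castle codes studied here supply such nested pairs with good, computable parameters.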
Active full-shell grazing-incidence optics
NASA Astrophysics Data System (ADS)
Roche, Jacqueline M.; Elsner, Ronald F.; Ramsey, Brian D.; O'Dell, Stephen L.; Kolodziejczak, Jeffrey J.; Weisskopf, Martin C.; Gubarev, Mikhail V.
2016-09-01
MSFC has a long history of developing full-shell grazing-incidence x-ray optics for both narrow (pointed) and wide field (surveying) applications. The concept presented in this paper shows the potential to use active optics to switch between narrow and wide-field geometries, while maintaining large effective area and high angular resolution. In addition, active optics has the potential to reduce errors due to mounting and manufacturing lightweight optics. The design presented corrects low spatial frequency error and has significantly fewer actuators than other concepts presented thus far in the field of active x-ray optics. Using a finite element model, influence functions are calculated using active components on a full-shell grazing-incidence optic. Next, the ability of the active optic to effect a change of optical prescription and to correct for errors due to manufacturing and mounting is modeled.
Parallel Anisotropic Tetrahedral Adaptation
NASA Technical Reports Server (NTRS)
Park, Michael A.; Darmofal, David L.
2008-01-01
An adaptive method that robustly produces high aspect ratio tetrahedra to a general 3D metric specification without introducing hybrid semi-structured regions is presented. The elemental operators and higher-level logic are described with their respective domain-decomposed parallelizations. An anisotropic tetrahedral grid adaptation scheme is demonstrated for 1000:1 stretching for a simple cube geometry. This form of adaptation is applicable to more complex domain boundaries via a cut-cell approach as demonstrated by a parallel 3D supersonic simulation of a complex fighter aircraft. To avoid the assumptions and approximations required to form a metric to specify adaptation, an approach is introduced that directly evaluates interpolation error. The grid is adapted to reduce and equidistribute this interpolation error calculation without the use of an intervening anisotropic metric. Direct interpolation error adaptation is illustrated for 1D and 3D domains.
Active Full-Shell Grazing-Incidence Optics
NASA Technical Reports Server (NTRS)
Davis, Jacqueline M.; Elsner, Ronald F.; Ramsey, Brian D.; O'Dell, Stephen L.; Kolodziejczak, Jeffery; Weisskopf, Martin C.; Gubarev, Mikhail V.
2016-01-01
MSFC has a long history of developing full-shell grazing-incidence x-ray optics for both narrow (pointed) and wide field (surveying) applications. The concept presented in this paper shows the potential to use active optics to switch between narrow and wide-field geometries, while maintaining large effective area and high angular resolution. In addition, active optics has the potential to reduce errors due to mounting and manufacturing lightweight optics. The design presented corrects low spatial frequency error and has significantly fewer actuators than other concepts presented thus far in the field of active x-ray optics. Using a finite element model, influence functions are calculated using active components on a full-shell grazing-incidence optic. Next, the ability of the active optic to effect a change of optical prescription and to correct for errors due to manufacturing and mounting is modeled.
Minimizing pulling geometry errors in atomic force microscope single molecule force spectroscopy.
Rivera, Monica; Lee, Whasil; Ke, Changhong; Marszalek, Piotr E; Cole, Daniel G; Clark, Robert L
2008-10-01
In atomic force microscopy-based single molecule force spectroscopy (AFM-SMFS), it is assumed that the pulling angle is negligible and that the force applied to the molecule is equivalent to the force measured by the instrument. Recent studies, however, have indicated that the pulling geometry errors can drastically alter the measured force-extension relationship of molecules. Here we describe a software-based alignment method that repositions the cantilever such that it is located directly above the molecule's substrate attachment site. By aligning the applied force with the measurement axis, the molecule is no longer undergoing combined loading, and the full force can be measured by the cantilever. Simulations and experimental results verify the ability of the alignment program to minimize pulling geometry errors in AFM-SMFS studies.
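In the idealized geometry underlying this correction, a pulling angle θ between the molecule's stretch axis and the cantilever's (vertical) measurement axis means the cantilever senses only the vertical component of the applied force,

$$ F_{measured} = F_{molecule}\cos\theta, $$

so aligning the tip directly above the attachment site (θ → 0) removes the combined-loading bias; this simple relation neglects lateral cantilever compliance, which the full analysis must also consider.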
NASA Technical Reports Server (NTRS)
Litvin, Faydor L.; Tsay, Chung-Biau
1987-01-01
The authors have proposed a method for the generation of circular arc helical gears which is based on the application of standard equipment, worked out all aspects of the geometry of the gears, proposed methods for the computer aided simulation of conditions of meshing and bearing contact, investigated the influence of manufacturing and assembly errors, and proposed methods for the adjustment of gears to these errors. The results of computer aided solutions are illustrated with computer graphics.
Ultrawideband asynchronous tracking system and method
NASA Technical Reports Server (NTRS)
Arndt, G. Dickey (Inventor); Ngo, Phong H. (Inventor); Phan, Chau T. (Inventor); Gross, Julia A. (Inventor); Ni, Jianjun (Inventor); Dusl, John (Inventor)
2012-01-01
A passive tracking system is provided with a plurality of ultrawideband (UWB) receivers that is asynchronous with respect to a UWB transmitter. A geometry of the tracking system may utilize a plurality of clusters with each cluster comprising a plurality of antennas. Time Difference of Arrival (TDOA) may be determined for the antennas in each cluster and utilized to determine Angle of Arrival (AOA) based on a far field assumption regarding the geometry. Parallel software communication sockets may be established with each of the plurality of UWB receivers. Transfer of waveform data may be processed by alternately receiving packets of waveform data from each UWB receiver. Cross Correlation Peak Detection (CCPD) is utilized to estimate TDOA information to reduce errors in a noisy, multipath environment.
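A minimal sketch of the far-field AOA computation implied above: with two antennas of a cluster separated by a short baseline, the plane-wave (far-field) assumption reduces the measured TDOA to a single arrival angle. The baseline length and TDOA value below are illustrative, not parameters of the patented system:

    import numpy as np

    C = 299_792_458.0  # speed of light, m/s

    def aoa_from_tdoa(tdoa_s, baseline_m):
        """Far-field angle of arrival (radians) from the time difference of
        arrival between two antennas separated by baseline_m."""
        return np.arcsin(np.clip(C * tdoa_s / baseline_m, -1.0, 1.0))

    print(np.degrees(aoa_from_tdoa(1e-10, 0.2)))  # ~8.6 deg for a 20 cm baseline

Because only time differences within a cluster are used, the receivers need no common clock with the transmitter, which is what makes the tracking asynchronous.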
Multi-spectral pyrometer for gas turbine blade temperature measurement
NASA Astrophysics Data System (ADS)
Gao, Shan; Wang, Lixin; Feng, Chi
2014-09-01
Achieving the highest possible turbine inlet temperature requires accurately measuring the turbine blade temperature. If the blade temperature frequently exceeds the design limits, service life is seriously reduced. The problems for the accuracy of the temperature measurement include that the value of the target surface emissivity is unknown, that the emissivity model is variable, and the thermal radiation of the high-temperature environment. In this paper, a multi-spectral pyrometer is designed, provided mainly for the range 500-1000°, and a model is presented that corrects for the error due to reflected radiation, based only on the turbine geometry and the physical properties of the material. Under different working conditions, the method can reduce the measurement error from the reflected radiation of the vanes, making the measurement closer to the actual temperature of the blade, with the corresponding model calculated through a genetic algorithm. The experiment shows that this method achieves higher-accuracy measurements.
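The reflection correction problem described above can be stated radiometrically: the radiance collected from the blade at wavelength λ combines emitted and reflected components (a standard relation, with the geometric view factors of the specific turbine left to the paper's model),

$$ L_{meas}(\lambda) = \varepsilon(\lambda)\, L_{bb}(\lambda, T_{blade}) + \big(1 - \varepsilon(\lambda)\big)\, L_{env}(\lambda), $$

where $L_{bb}$ is the Planck blackbody radiance and $L_{env}$ the effective radiance irradiating the blade from the surrounding vanes; measuring at multiple wavelengths provides enough equations to estimate both $T_{blade}$ and the emissivity behavior.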
Error analysis of the crystal orientations obtained by the dictionary approach to EBSD indexing.
Ram, Farangis; Wright, Stuart; Singh, Saransh; De Graef, Marc
2017-10-01
The efficacy of the dictionary approach to Electron Back-Scatter Diffraction (EBSD) indexing was evaluated through the analysis of the error in the retrieved crystal orientations. EBSPs simulated by the Callahan-De Graef forward model were used for this purpose. Patterns were noised, distorted, and binned prior to dictionary indexing. Patterns with a high level of noise, with optical distortions, and with a 25 × 25 pixel size, when the error in projection center was 0.7% of the pattern width and the error in specimen tilt was 0.8°, were indexed with a 0.8° mean error in orientation. The same patterns, but 60 × 60 pixel in size, were indexed by the standard 2D Hough transform based approach with almost the same orientation accuracy. Optimal detection parameters in the Hough space were obtained by minimizing the orientation error. It was shown that if the error in detector geometry can be reduced to 0.1% in projection center and 0.1° in specimen tilt, the dictionary approach can retrieve a crystal orientation with a 0.2° accuracy.
Synergism of the method of characteristics and CAD technology for neutron transport calculation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Z.; Wang, D.; He, T.
2013-07-01
The method of characteristics (MOC) is a very popular methodology in neutron transport calculation and numerical simulation in recent decades for its unique advantages. One of the key problems determining whether MOC can be applied in complicated and highly heterogeneous geometry is how to combine an effective geometry processing method with MOC. Most of the existing MOC codes describe the geometry by lines and arcs with extensive input data, such as circles, ellipses, regular polygons and combinations of them; thus they have difficulty in geometry modeling, background meshing and ray tracing for complicated geometry domains. In this study, a new idea making use of the CAD solid modeler MCAM, which is a CAD/Image-based Automatic Modeling Program for Neutronics and Radiation Transport developed by the FDS Team in China, was introduced for geometry modeling and ray tracing of particle transport to remove the geometrical limitations mentioned above. The diamond-difference scheme was applied to MOC to reduce the spatial discretization error of the flat flux approximation in theory. Based on MCAM and MOC, a new MOC code was developed and integrated into the SuperMC system, which is a Super Multi-function Computational system for neutronics and radiation simulation. The numerical testing results demonstrated the feasibility and effectiveness of the new idea for geometry treatment in SuperMC. (authors)
SABRINA - an interactive geometry modeler for MCNP
DOE Office of Scientific and Technical Information (OSTI.GOV)
West, J.T.; Murphy, J.
One of the most difficult tasks when analyzing a complex three-dimensional system with Monte Carlo is geometry model development. SABRINA attempts to make the modeling process more user-friendly and less of an obstacle. It accepts both combinatorial solid bodies and MCNP surfaces and produces MCNP cells. The model development process in SABRINA is highly interactive and gives the user immediate feedback on errors. Users can view their geometry from arbitrary perspectives while the model is under development and interactively find and correct modeling errors. An example of a SABRINA display is shown. It represents a complex three-dimensional shape.
NASA Astrophysics Data System (ADS)
Yang, Huanhuan; Gunzburger, Max
2017-06-01
Simulation-based optimization of acoustic liner design in a turbofan engine nacelle for noise reduction purposes can dramatically reduce the cost and time needed for experimental designs. Because uncertainties are inevitable in the design process, a stochastic optimization algorithm is posed based on the conditional value-at-risk measure so that an ideal acoustic liner impedance is determined that is robust in the presence of uncertainties. A parallel reduced-order modeling framework is developed that dramatically improves the computational efficiency of the stochastic optimization solver for a realistic nacelle geometry. The reduced stochastic optimization solver takes less than 500 seconds to execute. In addition, well-posedness and finite element error analyses of the state system and optimization problem are provided.
Traveltime inversion and error analysis for layered anisotropy
NASA Astrophysics Data System (ADS)
Jiang, Fan; Zhou, Hua-wei
2011-02-01
While tilted transverse isotropy (TTI) is a good approximation of the velocity structure for many dipping and fractured strata, it is still challenging to estimate anisotropic depth models even when the tilted angle is known. With the assumption of weak anisotropy, we present a TTI traveltime inversion approach for models consisting of several thickness-varying layers where the anisotropic parameters are constant for each layer. For each model layer the inversion variables consist of the anisotropic parameters ɛ and δ, the tilted angle φ of its symmetry axis, layer velocity along the symmetry axis, and thickness variation of the layer. Using this method and synthetic data, we evaluate the effects of errors in some of the model parameters on the inverted values of the other parameters in crosswell and Vertical Seismic Profile (VSP) acquisition geometry. The analyses show that the errors in the layer symmetry axes sensitively affect the inverted values of other parameters, especially δ. However, the impact of errors in δ on the inversion of other parameters is much less than the impact on δ from the errors in other parameters. Hence, a practical strategy is first to invert for the most error-tolerant parameter layer velocity, then progressively invert for ɛ in crosswell geometry or δ in VSP geometry.
Maradzike, Elvis; Gidofalvi, Gergely; Turney, Justin M; Schaefer, Henry F; DePrince, A Eugene
2017-09-12
Analytic energy gradients are presented for a variational two-electron reduced-density-matrix (2-RDM)-driven complete active space self-consistent field (CASSCF) method. The active-space 2-RDM is determined using a semidefinite programming (SDP) algorithm built upon an augmented Lagrangian formalism. Expressions for analytic gradients are simplified by the fact that the Lagrangian is stationary with respect to variations in both the primal and the dual solutions to the SDP problem. Orbital response contributions to the gradient are identical to those that arise in conventional CASSCF methods in which the electronic structure of the active space is described by a full configuration interaction (CI) wave function. We explore the relative performance of variational 2-RDM (v2RDM)- and CI-driven CASSCF for the equilibrium geometries of 20 small molecules. When enforcing two-particle N-representability conditions, full-valence v2RDM-CASSCF-optimized bond lengths display a mean unsigned error of 0.0060 Å and a maximum unsigned error of 0.0265 Å, relative to those obtained from full-valence CI-CASSCF. When enforcing partial three-particle N-representability conditions, the mean and maximum unsigned errors are reduced to only 0.0006 and 0.0054 Å, respectively. For these same molecules, full-valence v2RDM-CASSCF bond lengths computed in the cc-pVQZ basis set deviate from experimentally determined ones on average by 0.017 and 0.011 Å when enforcing two- and three-particle conditions, respectively, whereas CI-CASSCF displays an average deviation of 0.010 Å. The v2RDM-CASSCF approach with two-particle conditions is also applied to the equilibrium geometry of pentacene; optimized bond lengths deviate from those derived from experiment, on average, by 0.015 Å when using a cc-pVDZ basis set and a (22e,22o) active space.
Three-dimensional ray-tracing model for the study of advanced refractive errors in keratoconus.
Schedin, Staffan; Hallberg, Per; Behndig, Anders
2016-01-20
We propose a numerical three-dimensional (3D) ray-tracing model for the analysis of advanced corneal refractive errors. The 3D modeling was based on measured corneal elevation data by means of Scheimpflug photography. A mathematical description of the measured corneal surfaces from a keratoconus (KC) patient was used for the 3D ray tracing, based on Snell's law of refraction. A model of a commercial intraocular lens (IOL) was included in the analysis. By modifying the posterior IOL surface, it was shown that the imaging quality could be significantly improved. The RMS values were reduced by approximately 50% close to the retina, both for on- and off-axis geometries. The 3D ray-tracing model can constitute a basis for simulation of customized IOLs that are able to correct the advanced, irregular refractive errors in KC.
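The core refraction step of such a ray tracer can be sketched with the vector form of Snell's law (a generic sketch, not the authors' code; the corneal refractive index value is a textbook assumption):

import numpy as np

def refract(i, n, n1, n2):
    """Refract unit direction i at a surface with unit normal n
    (pointing against the incident ray), via vector Snell's law."""
    mu = n1 / n2
    cos_i = -np.dot(i, n)
    sin2_t = mu**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None  # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return mu * i + (mu * cos_i - cos_t) * n

# Illustrative example: ray entering a cornea-like interface (n: 1.0 -> 1.376).
i = np.array([0.0, np.sin(np.radians(20)), -np.cos(np.radians(20))])
n = np.array([0.0, 0.0, 1.0])
t = refract(i, n, 1.0, 1.376)
print(t, np.linalg.norm(t))  # refracted unit direction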
NASA Astrophysics Data System (ADS)
Schmitz, Gunnar; Christiansen, Ove
2018-06-01
We study how geometry optimizations that rely on numerical gradients can be accelerated by means of Gaussian Process Regression (GPR). The GPR interpolates a local potential energy surface on which the structure is optimized. It is found to be efficient to combine results from a low computational level (HF or MP2) with a GPR-calculated gradient of the difference between the low-level method and the target method, which in this study is a variant of explicitly correlated coupled cluster singles and doubles with perturbative triples, CCSD(F12*)(T). Overall convergence is achieved when both the potential and the geometry are converged. Compared to numerical-gradient-based algorithms, the number of required single-point calculations is reduced. Although the interpolation introduces an error, the optimized structures are sufficiently close to the minimum of the target level of theory, meaning that the reference and predicted minima differ energetically only in the μEh regime.
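A toy sketch of the difference-learning idea on a 1D surrogate (the Morse-like "target" and harmonic "low level" stand in for CCSD(F12*)(T) and HF/MP2; all names and numbers are illustrative):

import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy 1D stand-ins: cheap "low level" and expensive "target" surfaces.
e_low    = lambda r: 0.5 * (r - 1.0)**2
e_target = lambda r: (1.0 - np.exp(-1.2 * (r - 1.1)))**2  # Morse-like

# A handful of single-point calculations at the target level.
r_train = np.linspace(0.7, 1.6, 6).reshape(-1, 1)
delta = e_target(r_train.ravel()) - e_low(r_train.ravel())

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.3),
                               normalize_y=True).fit(r_train, delta)

# Optimize on the corrected surrogate E_low + GPR(E_target - E_low).
surrogate = lambda r: e_low(r[0]) + gpr.predict(np.atleast_2d(r))[0]
res = minimize(surrogate, x0=[0.9], method="BFGS")
print(res.x)  # close to the target minimum at r ~ 1.1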
Computerized Design of Low-noise Face-milled Spiral Bevel Gears
NASA Technical Reports Server (NTRS)
Litvin, Faydor L.; Zhang, YI; Handschuh, Robert F.
1994-01-01
An advanced design methodology is proposed for the face-milled spiral bevel gears with modified tooth surface geometry that provides a reduced level of noise and has a stabilized bearing contact. The approach is based on the local synthesis of the gear drive that provides the 'best' machine-tool settings. The theoretical aspects of the local synthesis approach are based on the application of a predesigned parabolic function for absorption of undesirable transmission errors caused by misalignment and the direct relations between principal curvatures and directions for mating surfaces. The meshing and contact of the gear drive is synthesized and analyzed by a computer program. The generation of gears with the proposed geometry design can be accomplished by application of existing equipment. A numerical example that illustrates the proposed theory is presented.
Computerized design of low-noise face-milled spiral bevel gears
NASA Astrophysics Data System (ADS)
Litvin, Faydor L.; Zhang, Yi; Handschuh, Robert F.
1994-08-01
An advanced design methodology is proposed for the face-milled spiral bevel gears with modified tooth surface geometry that provides a reduced level of noise and has a stabilized bearing contact. The approach is based on the local synthesis of the gear drive that provides the 'best' machine-tool settings. The theoretical aspects of the local synthesis approach are based on the application of a predesigned parabolic function for absorption of undesirable transmission errors caused by misalignment and the direct relations between principal curvatures and directions for mating surfaces. The meshing and contact of the gear drive is synthesized and analyzed by a computer program. The generation of gears with the proposed geometry design can be accomplished by application of existing equipment. A numerical example that illustrates the proposed theory is presented.
Nondimensional parameter for conformal grinding: combining machine and process parameters
NASA Astrophysics Data System (ADS)
Funkenbusch, Paul D.; Takahashi, Toshio; Gracewski, Sheryl M.; Ruckman, Jeffrey L.
1999-11-01
Conformal grinding of optical materials with CNC (Computer Numerical Control) machining equipment can be used to achieve precise control over complex part configurations. However, complications can arise from the need to fabricate complex geometrical shapes at reasonable production rates. For example, high machine stiffness is essential, but the need to grind 'inside' small or highly concave surfaces may require use of tooling with less than ideal stiffness characteristics. If grinding generates loads sufficient for significant tool deflection, the programmed removal depth will not be achieved. Moreover, since the grinding load is a function of the volumetric removal rate, the amount of load deflection can vary with location on the part, potentially producing complex figure errors. In addition to machine/tool stiffness and removal rate, load generation is a function of the process parameters. For example, by reducing the feed rate of the tool into the part, both the load and the resultant deflection/removal error can be decreased. However, this must be balanced against the need for part throughput. In this paper a simple model which permits combination of machine stiffness and process parameters into a single nondimensional parameter is adapted for a conformal grinding geometry. Errors in removal can be minimized by maintaining this parameter above a critical value. Moreover, since the value of this parameter depends on the local part geometry, it can be used to optimize process settings during grinding. For example, it may be used to guide adjustment of the feed rate as a function of location on the part to eliminate figure errors while minimizing the total grinding time required.
NASA Astrophysics Data System (ADS)
Wu, Jie; Yan, Quan-sheng; Li, Jian; Hu, Min-yi
2016-04-01
In bridge construction, geometry control is critical to ensure that the final constructed bridge has a shape consistent with the design. A common method is to predict the deflections of the bridge during each construction phase through the associated finite element models, so that the cambers of the bridge during different construction phases can be determined beforehand. These finite element models are mostly based on the design drawings and nominal material properties. However, the errors of these models can be large due to significant uncertainties in the actual properties of the materials used in construction. Therefore, the predicted cambers may not be accurate enough to ensure agreement of the bridge geometry with the design, especially for long-span bridges. In this paper, an improved geometry control method is described, which incorporates finite element (FE) model updating during the construction process based on measured bridge deflections. A method based on the Kriging model and Latin hypercube sampling is proposed to perform the FE model updating due to its simplicity and efficiency. The proposed method has been applied to a long-span continuous girder concrete bridge during its construction. Results show that the method is effective in reducing construction error and ensuring the accuracy of the geometry of the final constructed bridge.
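A minimal sketch of the Kriging-plus-Latin-hypercube updating loop (the FE response, parameter names and bounds are stand-ins; a real update would match measured deflections at many points and construction stages):

import numpy as np
from scipy.stats import qmc
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor

def fe_deflection(theta):
    """Stand-in for a construction-phase FE run: returns a mid-span
    deflection (mm) for parameters theta = (E_scale, density_scale)."""
    e, rho = theta
    return 12.0 * rho / e  # hypothetical monotone response

# Latin hypercube sample of the uncertain parameter space.
sampler = qmc.LatinHypercube(d=2, seed=1)
theta_s = qmc.scale(sampler.random(30), [0.8, 0.9], [1.2, 1.1])
y_s = np.array([fe_deflection(t) for t in theta_s])

# Kriging surrogate of the FE response, then update against measurement.
# (One scalar measurement cannot pin both parameters uniquely; real
# updates use deflections measured at many locations.)
krig = GaussianProcessRegressor(normalize_y=True).fit(theta_s, y_s)
measured = 12.5  # measured deflection, mm
loss = lambda t: (krig.predict(np.atleast_2d(t))[0] - measured)**2
theta_hat = minimize(loss, x0=[1.0, 1.0],
                     bounds=[(0.8, 1.2), (0.9, 1.1)]).x
print(theta_hat)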
NASA Astrophysics Data System (ADS)
Moreno, R.; Bazán, A. M.
2017-10-01
The main purpose of this work is to study improvements to the learning of technical drawing and descriptive geometry, in which exercises traditionally solved manually are treated with automated processes assisted by high-level CAD templates (HLCts). Given that an exercise can be solved with traditional procedures, detailed step by step in technical drawing and descriptive geometry manuals, CAD applications allow us to do the same and then generalize it by incorporating references. Traditional teaching methods have become obsolete and have been relegated in current curricula; however, they can still be exploited in certain automation processes. The use of geometric references (through variables in script languages) and their incorporation into HLCts allow drawing processes to be automated. Instead of repeatedly creating similar exercises or modifying data in the same exercises, users can employ HLCts to generate future variants of these exercises. This paper introduces the automation process for generating exercises from CAD script files, aided by parametric geometry calculation tools. The proposed method allows new exercises to be designed without user intervention. The integration of CAD, mathematics, and descriptive geometry facilitates their joint learning, and automation in the generation of exercises not only saves time but also increases the quality of the problem statements and reduces the possibility of human error.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huan, Xun; Safta, Cosmin; Sargsyan, Khachik
The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. In conclusion, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huan, Xun; Safta, Cosmin; Sargsyan, Khachik
The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. Finally, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
NASA Astrophysics Data System (ADS)
Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; Geraci, Gianluca; Eldred, Michael S.; Vane, Zachary P.; Lacaze, Guilhem; Oefelein, Joseph C.; Najm, Habib N.
2018-03-01
The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. These methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; ...
2018-02-09
The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. In conclusion, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
Mathematics skills in good readers with hydrocephalus.
Barnes, Marcia A; Pengelly, Sarah; Dennis, Maureen; Wilkinson, Margaret; Rogers, Tracey; Faulkner, Heather
2002-01-01
Children with hydrocephalus have poor math skills. We investigated the nature of their arithmetic computation errors by comparing written subtraction errors in good readers with hydrocephalus, typically developing good readers of the same age, and younger children matched for math level to the children with hydrocephalus. Children with hydrocephalus made more procedural errors (although not more fact retrieval or visual-spatial errors) than age-matched controls; they made the same number of procedural errors as younger, math-level matched children. We also investigated a broad range of math abilities, and found that children with hydrocephalus performed more poorly than age-matched controls on tests of geometry and applied math skills such as estimation and problem solving. Computation deficits in children with hydrocephalus reflect delayed development of procedural knowledge. Problems in specific math domains, such as geometry and applied math, were associated with deficits in constituent cognitive skills such as visual-spatial competence, memory, and general knowledge.
Cui, Xiao-Yan; Huo, Zhong-Gang; Xin, Zhong-Hua; Tian, Xiao; Zhang, Xiao-Dong
2013-07-01
Three-dimensional (3D) copying of artificial ears and 3D-printed pistols are pushing laser 3D copying techniques to a new stage. Laser 3D scanning is a fresh field in laser applications and plays an irreplaceable part in 3D copying; its accuracy is the highest among all present copying techniques. The degree of reproducibility marks the geometric agreement of the copied object with the original and is the most important property of a laser 3D copying technique. In the present paper, the error of laser 3D copying was analyzed. The conclusion is that processing of the laser-scanned point cloud is the key to reducing the error and increasing the degree of reproducibility. The main innovation of this paper is as follows: on the basis of traditional ant colony optimization, the rational ant colony optimization algorithm proposed by the author was applied to laser 3D copying as a new algorithm and put into practice. Compared with the customary algorithm, rational ant colony optimization shows distinct advantages in the data processing of laser 3D copying, reducing the error and increasing the degree of reproducibility of the copy.
3DHZETRN: Inhomogeneous Geometry Issues
NASA Technical Reports Server (NTRS)
Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.
2017-01-01
Historical methods for assessing radiation exposure inside complicated geometries for space applications were limited by computational constraints and lack of knowledge associated with nuclear processes occurring over a broad range of particles and energies. Various methods were developed and utilized to simplify geometric representations and enable coupling with simplified but efficient particle transport codes. Recent transport code development efforts, leading to 3DHZETRN, now enable such approximate methods to be carefully assessed to determine if past exposure analyses and validation efforts based on those approximate methods need to be revisited. In this work, historical methods of representing inhomogeneous spacecraft geometry for radiation protection analysis are first reviewed. Two inhomogeneous geometry cases, previously studied with 3DHZETRN and Monte Carlo codes, are considered with various levels of geometric approximation. Fluence, dose, and dose equivalent values are computed in all cases and compared. It is found that although these historical geometry approximations can induce large errors in neutron fluences up to 100 MeV, errors on dose and dose equivalent are modest (<10%) for the cases studied here.
NASA Astrophysics Data System (ADS)
Poirier, Vincent
Mesh deformation schemes play an important role in numerical aerodynamic optimization. As the aerodynamic shape changes, the computational mesh must adapt to conform to the deformed geometry. In this work, an extension to an existing fast and robust Radial Basis Function (RBF) mesh movement scheme is presented. Using a reduced set of surface points to define the mesh deformation increases the efficiency of the RBF method, at the cost of introducing errors into the parameterization by not recovering the exact displacement of all surface points. A secondary mesh movement is implemented, within an adjoint-based optimization framework, to eliminate these errors. The proposed scheme is tested within a 3D Euler flow by reducing the pressure drag while maintaining lift of a wing-body configured Boeing-747 and an Onera-M6 wing. As well, an inverse pressure design is executed on the Onera-M6 wing and an inverse span loading case is presented for a wing-body configured DLR-F6 aircraft.
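A bare-bones sketch of the primary RBF mesh movement (the Gaussian kernel and point sets are illustrative choices; the secondary corrective movement and adjoint machinery described above are not shown):

import numpy as np

def rbf(r, eps=1.0):
    """Radial kernel; a Gaussian is used here for simplicity."""
    return np.exp(-(eps * r)**2)

def rbf_mesh_move(xs, ds, xv):
    """Interpolate surface displacements ds, given at the reduced set of
    surface points xs, onto volume nodes xv (each array: n_points x 3)."""
    A = rbf(np.linalg.norm(xs[:, None, :] - xs[None, :, :], axis=-1))
    w = np.linalg.solve(A, ds)             # one weight set per coordinate
    B = rbf(np.linalg.norm(xv[:, None, :] - xs[None, :, :], axis=-1))
    return xv + B @ w                      # deformed volume nodes

# Illustrative example: push one surface point up; nearby volume nodes
# follow smoothly, distant nodes barely move.
xs = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
ds = np.array([[0., 0., 0.1], [0., 0., 0.], [0., 0., 0.]])
xv = np.array([[0.2, 0.2, 0.5], [2.0, 2.0, 1.0]])
print(rbf_mesh_move(xs, ds, xv))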
Generation and Computerized Simulation of Meshing and Contact of Modified Involute Helical Gears
NASA Technical Reports Server (NTRS)
Litvin, Faydor L.; Chen, Ningxin; Lu, Jian
1995-01-01
The design and generation of modified involute helical gears that have a localized and stable bearing contact, and reduced noise and vibration characteristics are described. The localization of the bearing contact is achieved by the mismatch of the two generating surfaces that are used for generation of the pinion and the gear. The reduction of noise and vibration will be achieved by application of a parabolic function of transmission errors that is able to absorb the almost linear function of transmission errors caused by gear misalignment. The meshing and contact of misaligned gear drives can be analyzed by application of computer programs that have been developed. The computations confirmed the effectiveness of the proposed modification of the gear geometry. A numerical example that illustrates the developed theory is provided.
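The absorption mechanism invoked here admits a one-line check. If the drive is predesigned with a parabolic transmission-error function and misalignment adds a near-linear term, the sum is again a parabola with the same second-order coefficient (a standard identity in Litvin's local synthesis; the symbols follow the usual convention of pinion rotation φ₁ and transmission error Δφ₂):

\Delta\phi_2(\phi_1) = -a\,\phi_1^{2} + b\,\phi_1 = -a\left(\phi_1 - \frac{b}{2a}\right)^{2} + \frac{b^{2}}{4a}

The misalignment-induced linear term b φ₁ merely shifts the parabola and offsets it by b²/(4a); the shape of the transmission-error function is preserved, which is why the parabolic predesign absorbs the almost linear errors caused by misalignment.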
NASA Astrophysics Data System (ADS)
Muguet, Francis F.; Robinson, G. Wilse; Bassez-Muguet, M. Palmyre
1995-03-01
With the help of a new scheme to correct for the basis set superposition error (BSSE), we find that an eclipsed nonlinear geometry becomes energetically favored over the eclipsed linear hydrogen-bonded geometry. From a normal mode analysis of the potential energy surface (PES) in the vicinity of the nonlinear geometry, we suggest that several dynamical interchange pathways must be taken into account. The minimal molecular symmetry group to be considered should be the double group of G36, but still larger multiple groups may be required. An interpretation of experimental vibration-rotation-tunneling (VRT) data in terms of the G144 group, which implies monomer inversions, may not be the only alternative. It appears that group theoretical considerations alone are insufficient for understanding the complex VRT dynamics of the ammonia dimer.
Atmospheric refraction effects on baseline error in satellite laser ranging systems
NASA Technical Reports Server (NTRS)
Im, K. E.; Gardner, C. S.
1982-01-01
Because of the mathematical complexities involved in exact analyses of baseline errors, it is not easy to isolate atmospheric refraction effects; however, by making certain simplifying assumptions about the ranging system geometry, relatively simple expressions can be derived which relate the baseline errors directly to the refraction errors. The results indicate that even in the absence of other errors, the baseline error for intercontinental baselines can be more than an order of magnitude larger than the refraction error.
Lower extremity EMG-driven modeling of walking with automated adjustment of musculoskeletal geometry
Meyer, Andrew J.; Patten, Carolynn
2017-01-01
Neuromusculoskeletal disorders affecting walking ability are often difficult to manage, in part due to limited understanding of how a patient’s lower extremity muscle excitations contribute to the patient’s lower extremity joint moments. To assist in the study of these disorders, researchers have developed electromyography (EMG) driven neuromusculoskeletal models utilizing scaled generic musculoskeletal geometry. While these models can predict individual muscle contributions to lower extremity joint moments during walking, the accuracy of the predictions can be hindered by errors in the scaled geometry. This study presents a novel EMG-driven modeling method that automatically adjusts surrogate representations of the patient’s musculoskeletal geometry to improve prediction of lower extremity joint moments during walking. In addition to commonly adjusted neuromusculoskeletal model parameters, the proposed method adjusts model parameters defining muscle-tendon lengths, velocities, and moment arms. We evaluated our EMG-driven modeling method using data collected from a high-functioning hemiparetic subject walking on an instrumented treadmill at speeds ranging from 0.4 to 0.8 m/s. EMG-driven model parameter values were calibrated to match inverse dynamic moments for five degrees of freedom in each leg while keeping musculoskeletal geometry close to that of an initial scaled musculoskeletal model. We found that our EMG-driven modeling method incorporating automated adjustment of musculoskeletal geometry predicted net joint moments during walking more accurately than did the same method without geometric adjustments. Geometric adjustments improved moment prediction errors by 25% on average and up to 52%, with the largest improvements occurring at the hip. Predicted adjustments to musculoskeletal geometry were comparable to errors reported in the literature between scaled generic geometric models and measurements made from imaging data. Our results demonstrate that with appropriate experimental data, joint moment predictions for walking generated by an EMG-driven model can be improved significantly when automated adjustment of musculoskeletal geometry is included in the model calibration process. PMID:28700708
A highly accurate ab initio potential energy surface for methane.
Owens, Alec; Yurchenko, Sergei N; Yachmenev, Andrey; Tennyson, Jonathan; Thiel, Walter
2016-09-14
A new nine-dimensional potential energy surface (PES) for methane has been generated using state-of-the-art ab initio theory. The PES is based on explicitly correlated coupled cluster calculations with extrapolation to the complete basis set limit and incorporates a range of higher-level additive energy corrections. These include core-valence electron correlation, higher-order coupled cluster terms beyond perturbative triples, scalar relativistic effects, and the diagonal Born-Oppenheimer correction. Sub-wavenumber accuracy is achieved for the majority of experimentally known vibrational energy levels with the four fundamentals of ¹²CH₄ reproduced with a root-mean-square error of 0.70 cm⁻¹. The computed ab initio equilibrium C-H bond length is in excellent agreement with previous values despite pure rotational energies displaying minor systematic errors as J (rotational excitation) increases. It is shown that these errors can be significantly reduced by adjusting the equilibrium geometry. The PES represents the most accurate ab initio surface to date and will serve as a good starting point for empirical refinement.
Mission strategy for cometary exploration in the 1980's
NASA Technical Reports Server (NTRS)
Farquhar, R. W.
1976-01-01
A specific plan for a sequence of cometary intercept missions in the 1980's is reported. Each mission is described in detail and the supporting role of ground-based cometary observations is included. Only three launches are required in the proposed mission sequence for six cometary encounters with comets Encke, Giacobini-Zinner, Borrelly and Halley. Cometary ephemeris errors are reduced to very small values because of a favorable earth-comet orbital geometry for Encke 1980, and excellent earth-based sighting conditions exist for the entire 1985 mission set.
An Instrument for Measuring Performance in Geometry Based on the Van Hiele Model
ERIC Educational Resources Information Center
Sánchez-García, Ana B.; Cabello, Ana Belén
2016-01-01
In this paper we present the process of constructing a test for assessing student performance in geometry corresponding to the first year of Secondary Education. The main goal was to detect student errors in the understanding of geometry in order to develop a proposal according to the Van Hiele teaching model, explained in this paper. Our research…
Fast mix table construction for material discretization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, S. R.
2013-07-01
An effective hybrid Monte Carlo-deterministic implementation typically requires the approximation of a continuous geometry description with a discretized piecewise-constant material field. The inherent geometry discretization error can be reduced somewhat by using material mixing, where multiple materials inside a discrete mesh voxel are homogenized. Material mixing requires the construction of a 'mix table,' which stores the volume fractions in every mixture so that multiple voxels with similar compositions can reference the same mixture. Mix table construction is a potentially expensive serial operation for large problems with many materials and voxels. We formulate an efficient algorithm to construct a sparse mix table in O(number of voxels × log number of mixtures) time. The new algorithm is implemented in ADVANTG and used to discretize continuous geometries onto a structured Cartesian grid. When applied to an end-of-life MCNP model of the High Flux Isotope Reactor with 270 distinct materials, the new method improves the material mixing time by a factor of 100 compared to a naive mix table implementation. (authors)
Fast Mix Table Construction for Material Discretization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Seth R
2013-01-01
An effective hybrid Monte Carlo-deterministic implementation typically requires the approximation of a continuous geometry description with a discretized piecewise-constant material field. The inherent geometry discretization error can be reduced somewhat by using material mixing, where multiple materials inside a discrete mesh voxel are homogenized. Material mixing requires the construction of a 'mix table,' which stores the volume fractions in every mixture so that multiple voxels with similar compositions can reference the same mixture. Mix table construction is a potentially expensive serial operation for large problems with many materials and voxels. We formulate an efficient algorithm to construct a sparse mix table in O(number of voxels × log number of mixtures) time. The new algorithm is implemented in ADVANTG and used to discretize continuous geometries onto a structured Cartesian grid. When applied to an end-of-life MCNP model of the High Flux Isotope Reactor with 270 distinct materials, the new method improves the material mixing time by a factor of 100 compared to a naive mix table implementation.
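A hash-keyed sketch of mix table construction (the paper's algorithm uses a sorted structure for its O(voxels × log mixtures) bound; the quantization tolerance used here for mixture matching is our assumption):

import numpy as np

def build_mix_table(voxel_fracs, tol=1e-4):
    """Build a sparse mix table: the unique material mixtures plus a
    voxel -> mixture index map.  Quantizing the volume fractions to
    'tol' lets near-identical voxels share one mixture; dict lookup
    keeps the cost near O(n_voxels) rather than O(n_voxels * n_mixtures)."""
    table = {}          # quantized fractions -> mixture id
    mixtures = []       # mixture id -> volume-fraction vector
    voxel_to_mix = np.empty(len(voxel_fracs), dtype=np.int64)
    for i, f in enumerate(voxel_fracs):
        key = tuple(np.round(np.asarray(f) / tol).astype(np.int64))
        mid = table.get(key)
        if mid is None:
            mid = len(mixtures)
            table[key] = mid
            mixtures.append(np.asarray(f, dtype=float))
        voxel_to_mix[i] = mid
    return mixtures, voxel_to_mix

# Three voxels, two distinct mixtures of three materials.
fracs = [[0.5, 0.5, 0.0], [0.50004, 0.49996, 0.0], [0.0, 0.2, 0.8]]
mixes, idx = build_mix_table(fracs)
print(len(mixes), idx)   # 2 mixtures; voxels 0 and 1 share mixture 0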
Gravity field recovery in the framework of a Geodesy and Time Reference in Space (GETRIS)
NASA Astrophysics Data System (ADS)
Hauk, Markus; Schlicht, Anja; Pail, Roland; Murböck, Michael
2017-04-01
The study "Geodesy and Time Reference in Space" (GETRIS), funded by the European Space Agency (ESA), evaluates the potential and opportunities coming along with a global space-borne infrastructure for data transfer, clock synchronization and ranging. Gravity field recovery could be one of the first applications to benefit from such an infrastructure. This paper analyzes and evaluates two-way high-low satellite-to-satellite tracking as a novel method and long-term perspective for the determination of the Earth's gravitational field, using it in synergy with one-way high-low combined with low-low satellite-to-satellite tracking in order to generate adequate de-aliasing products. Although first planned as a constellation of geostationary satellites, it turned out that integrating European Union Global Navigation Satellite System (Galileo) satellites (equipped with inter-Galileo links) into a Geostationary Earth Orbit (GEO) constellation would remarkably extend the capability of such a mission constellation. We report about simulations of different Galileo and Low Earth Orbiter (LEO) satellite constellations, computed using time-variable geophysical background models, to determine temporal changes in the Earth's gravitational field. Our work aims at an error analysis of this new satellite/instrument scenario by investigating the impact of different error sources. Compared to a low-low satellite-to-satellite tracking mission, results show reduced temporal aliasing errors due to a more isotropic error behavior caused by an improved observation geometry, predominantly in the near-radial direction of the inter-satellite links, as well as the potential of improved gravity recovery with higher spatial and temporal resolution. The major contributors to temporal gravity retrieval error are aliasing errors due to undersampling of high-frequency signals (mainly atmosphere, ocean and ocean tides). In this context, we investigate adequate methods to reduce these errors. We vary the number of Galileo and LEO satellites and show reduced errors in the temporal gravity field solutions for these enhanced inter-satellite links. Based on the GETRIS infrastructure, the multiplicity of satellites enables co-estimating short-period long-wavelength gravity field signals, indicating this to be a powerful method for non-tidal aliasing reduction.
Accuracy assessment of 3D bone reconstructions using CT: an in vitro comparison.
Lalone, Emily A; Willing, Ryan T; Shannon, Hannah L; King, Graham J W; Johnson, James A
2015-08-01
Computed tomography provides high contrast imaging of the joint anatomy and is used routinely to reconstruct 3D models of the osseous and cartilage geometry (CT arthrography) for use in the design of orthopedic implants, for computer assisted surgeries and computational dynamic and structural analysis. The objective of this study was to assess the accuracy of bone and cartilage surface model reconstructions by comparing reconstructed geometries with bone digitizations obtained using an optical tracking system. Bone surface digitizations obtained in this study determined the ground truth measure for the underlying geometry. We evaluated the use of a commercially available reconstruction technique with clinical CT scanning protocols, using the elbow joint as an example of a surface with complex geometry. To assess the accuracies of the reconstructed models (8 fresh frozen cadaveric specimens) against the ground truth bony digitization (as defined by this study), proximity mapping was used to calculate residual error. The overall mean error was less than 0.4 mm in the cortical region and 0.3 mm in the subchondral region of the bone. Similarly, creating 3D cartilage surface models from CT scans using air contrast had a mean error of less than 0.3 mm. Results from this study indicate that clinical CT scanning protocols and commonly used and commercially available reconstruction algorithms can create models which accurately represent the true geometry.
Ultrasonic density measurement cell design and simulation of non-ideal effects.
Higuti, Ricardo Tokio; Buiochi, Flávio; Adamowski, Júlio Cezar; de Espinosa, Francisco Montero
2006-07-01
This paper presents a theoretical analysis of a density measurement cell using a one-dimensional model composed of acoustic and electroacoustic transmission lines in order to simulate non-ideal effects. The model is implemented using matrix operations and is used to design the cell considering its geometry, the materials used in sensor assembly, the range of liquid sample properties, and the signal analysis techniques. The sensor performance under non-ideal conditions is studied, considering the thicknesses of the adhesive and metallization layers and the effect of liquid-sample residue that can impregnate the sample chamber surfaces. These layers are taken into account in the model, and their effects are compensated to reduce the error in the density measurement. The results show the contribution of the residue layer thickness to the density error and its behavior when two signal analysis methods are used.
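The building block of such a one-dimensional model is the transmission-line (ABCD) matrix of each layer, cascaded by matrix multiplication; a sketch with illustrative material values (the electroacoustic stage of the real cell is omitted):

import numpy as np

def layer_matrix(freq, thickness, c, rho):
    """Acoustic transmission-line (ABCD) matrix of one lossless layer:
    relates pressure/velocity at the front face to the back face."""
    k = 2 * np.pi * freq / c          # wavenumber
    z = rho * c                       # characteristic acoustic impedance
    kd = k * thickness
    return np.array([[np.cos(kd),           1j * z * np.sin(kd)],
                     [1j * np.sin(kd) / z,  np.cos(kd)]])

# Cascade: buffer rod + adhesive layer + liquid column, in propagation order.
f = 5e6  # 5 MHz, an illustrative operating frequency
M = (layer_matrix(f, 10e-3, 5900., 7850.)    # steel buffer
     @ layer_matrix(f, 20e-6, 2500., 1100.)  # adhesive (non-ideal layer)
     @ layer_matrix(f, 5e-3, 1480., 1000.))  # water-like sample
print(M)  # overall two-port; reflection/transmission follow from M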
Modelling the penumbra in Computed Tomography
Kueh, Audrey; Warnett, Jason M.; Gibbons, Gregory J.; Brettschneider, Julia; Nichols, Thomas E.; Williams, Mark A.; Kendall, Wilfrid S.
2016-01-01
BACKGROUND: In computed tomography (CT), the spot geometry is one of the main sources of error in CT images. Since X-rays do not arise from a point source, artefacts are produced. In particular there is a penumbra effect, leading to poorly defined edges within a reconstructed volume. Penumbra models can be simulated given a fixed spot geometry and the known experimental setup. OBJECTIVE: This paper proposes to use a penumbra model, derived from Beer’s law, both to confirm spot geometry from penumbra data, and to quantify blurring in the image. METHODS: Two models for the spot geometry are considered; one consists of a single Gaussian spot, the other is a mixture model consisting of a Gaussian spot together with a larger uniform spot. RESULTS: The model consisting of a single Gaussian spot has a poor fit at the boundary. The mixture model (which adds a larger uniform spot) exhibits a much improved fit. The parameters corresponding to the uniform spot are similar across all powers, and further experiments suggest that the uniform spot produces only soft X-rays of relatively low-energy. CONCLUSIONS: Thus, the precision of radiographs can be estimated from the penumbra effect in the image. The use of a thin copper filter reduces the size of the effective penumbra. PMID:27232198
CTF Preprocessor User's Manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Avramova, Maria; Salko, Robert K.
2016-05-26
This document describes how a user should go about using the CTF pre-processor tool to create an input deck for modeling rod-bundle geometry in CTF. The tool was designed to generate input decks for CTF in a quick and less error-prone manner. The pre-processor is a completely independent utility, written in Fortran, that takes a reduced amount of input from the user. The information that the user must supply is basic information on bundle geometry, such as rod pitch, clad thickness, and axial location of spacer grids; the pre-processor takes this basic information and determines channel placement and connection information to be written to the input deck, which is the most time-consuming and error-prone segment of creating a deck. Creation of the model is also more intuitive, as the user can specify assembly and water-tube placement using visual maps instead of having to place them by determining channel/channel and rod/channel connections. As an example of the benefit of the pre-processor, a quarter-core model that contains 500,000 scalar-mesh cells was read into CTF from an input deck containing 200,000 lines of data. This 200,000 line input deck was produced automatically from a set of pre-processor decks that contained only 300 lines of data.
NASA Technical Reports Server (NTRS)
Trosin, J.
1985-01-01
Use of the Display AButments (DAB) program, which plots PAN AIR geometries, is presented. The DAB program creates hidden-line displays of PAN AIR geometries and labels specified geometry components, such as abutments, networks, and network edges. It is used to alleviate the very time-consuming and error-prone abutment-list checking phase of developing a valid PAN AIR geometry, and therefore represents a valuable tool for debugging complex PAN AIR geometry definitions. DAB is written in FORTRAN 77 and runs on a Digital Equipment Corporation VAX 11/780 under VMS. It utilizes a special color version of the SKETCH hidden-line analysis routine.
Berke, Ethan M; Shi, Xun
2009-04-29
Travel time is an important metric of geographic access to health care. We compared strategies for estimating travel times when only subject ZIP code data were available. Using simulated data from New Hampshire and Arizona, we estimated travel times to nearest cancer centers by using: 1) geometric centroids of ZIP code polygons as origins; 2) population centroids as origins; 3) service area rings around each cancer center, assigning subjects to rings by assuming they are evenly distributed within their ZIP code; 4) service area rings around each center, assuming the subjects follow the population distribution within the ZIP code. We used travel times based on street addresses as true values to validate estimates. Population-based methods have smaller errors than geometry-based methods. Within categories (geometry or population), centroid and service area methods have similar errors. Errors are smaller in urban areas than in rural areas. Population-based methods are superior to the geometry-based methods, with the population centroid method appearing to be the best choice for estimating travel time. Estimates in rural areas are less reliable.
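A minimal sketch of the centroid-based estimate (straight-line distance at an assumed speed stands in for the street-network travel times the study validates against; all coordinates and values are hypothetical):

import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dlat = p2 - p1
    dlon = np.radians(lon2 - lon1)
    a = np.sin(dlat/2)**2 + np.cos(p1)*np.cos(p2)*np.sin(dlon/2)**2
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

def est_travel_minutes(origin, centers, speed_kmh=60.0):
    """Travel-time proxy: straight-line distance from a ZIP centroid
    (geometric or population-weighted) to the nearest center."""
    d = min(haversine_km(*origin, *c) for c in centers)
    return 60.0 * d / speed_kmh

geom_centroid = (43.20, -71.54)   # hypothetical ZIP polygon centroid
pop_centroid  = (43.22, -71.50)   # hypothetical population centroid
centers = [(43.07, -70.76), (44.26, -72.58)]
print(est_travel_minutes(geom_centroid, centers),
      est_travel_minutes(pop_centroid, centers))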
Parrett, Charles; Omang, R.J.; Hull, J.A.
1983-01-01
Equations for estimating mean annual runoff and peak discharge from measurements of channel geometry were developed for western and northeastern Montana. The study area was divided into two regions for the mean annual runoff analysis, and separate multiple-regression equations were developed for each region. The active-channel width was determined to be the most important independent variable in each region. The standard error of estimate for the estimating equation using active-channel width was 61 percent in the Northeast Region and 38 percent in the West region. The study area was divided into six regions for the peak discharge analysis, and multiple regression equations relating channel geometry and basin characteristics to peak discharges having recurrence intervals of 2, 5, 10, 25, 50 and 100 years were developed for each region. The standard errors of estimate for the regression equations using only channel width as an independent variable ranged from 35 to 105 percent. The standard errors improved in four regions as basin characteristics were added to the estimating equations. (USGS)
Diamond-anvil cell for radial x-ray diffraction.
Chesnut, G N; Schiferl, D; Streetman, B D; Anderson, W W
2006-06-28
We have designed a new diamond-anvil cell capable of radial x-ray diffraction to pressures of a few hundred GPa. The diffraction geometry allows access to multiple angles of Ψ, which is the angle between each reciprocal lattice vector g(hkl) and the compression axis of the cell. At the 'magic angle', Ψ≈54.7°, the effects of deviatoric stresses on the interplanar spacings, d(hkl), are significantly reduced. Because the systematic errors, which are different for each d(hkl), are significantly reduced, the crystal structures and the derived equations of state can be determined reliably. At other values of Ψ, the effects of deviatoric stresses on the diffraction pattern could eventually be used to determine elastic constants.
An Examination of the Spatial Distribution of Carbon Dioxide and Systematic Errors
NASA Technical Reports Server (NTRS)
Coffey, Brennan; Gunson, Mike; Frankenberg, Christian; Osterman, Greg
2011-01-01
The industrial period and modern age are characterized by combustion of coal, oil, and natural gas for primary energy and transportation, leading to rising levels of atmospheric CO2. This increase, which is being carefully measured, has ramifications throughout the biological world. Through remote sensing, it is possible to measure how many molecules of CO2 lie in a defined column of air. However, other gases and particles are present in the atmosphere, such as aerosols and water, which make such measurements more complicated. Understanding the detailed geometry and path length of the observation is vital to computing the concentration of CO2. By comparing these satellite readings with ground-truth data (TCCON), the systematic errors arising from these sources can be assessed. Once the error is understood, it can be accounted for in the retrieval algorithms to create a set of data that is closer to the TCCON measurements. Using this process, the algorithms are being developed to reduce bias to within 0.1% of the true value worldwide. At this stage, the accuracy is within 1%, but by correcting small errors in the algorithms, such as accounting for the scattering of sunlight, the desired accuracy can be achieved.
Influence of incident angle on the decoding in laser polarization encoding guidance
NASA Astrophysics Data System (ADS)
Zhou, Muchun; Chen, Yanru; Zhao, Qi; Xin, Yu; Wen, Hongyuan
2009-07-01
Dynamic detection of polarization states is very important for laser polarization coding guidance systems. In this paper, a dynamic polarization decoding and detection system for laser polarization coding guidance was designed. The detection process for normally incident polarized light is analyzed with Jones matrices; the system can effectively detect changes in polarization. The influence of non-normally incident light on the performance of the decoding and detection system is also studied. The analysis shows that changes in incident angle have a negative impact on the measured results; the influence of non-normal incidence is mainly caused by second-order birefringence and by the polarization sensitivity generated in the phase retarder and the polarizing beam-splitter prism. Combined with the Fresnel formulas, the decoding errors of linearly, elliptically and circularly polarized light entering the detector at different incident angles are calculated; the results show that the decoding errors increase with increasing incident angle. The decoding errors depend on the geometric parameters and refractive indices of the wave plate and the polarizing beam-splitter prism, and can be reduced by using a thin low-order wave plate. Simulations of the detection of polarized light at different incident angles confirmed these conclusions.
Thermal error analysis and compensation for digital image/volume correlation
NASA Astrophysics Data System (ADS)
Pan, Bing
2018-02-01
Digital image/volume correlation (DIC/DVC) rely on the digital images acquired by digital cameras and x-ray CT scanners to extract the motion and deformation of test samples. Regrettably, these imaging devices are unstable optical systems, whose imaging geometry may undergo unavoidable slight and continual changes due to self-heating effect or ambient temperature variations. Changes in imaging geometry lead to both shift and expansion in the recorded 2D or 3D images, and finally manifest as systematic displacement and strain errors in DIC/DVC measurements. Since measurement accuracy is always the most important requirement in various experimental mechanics applications, these thermal-induced errors (referred to as thermal errors) should be given serious consideration in order to achieve high accuracy, reproducible DIC/DVC measurements. In this work, theoretical analyses are first given to understand the origin of thermal errors. Then real experiments are conducted to quantify thermal errors. Three solutions are suggested to mitigate or correct thermal errors. Among these solutions, a reference sample compensation approach is highly recommended because of its easy implementation, high accuracy and in-situ error correction capability. Most of the work has appeared in our previously published papers, thus its originality is not claimed. Instead, this paper aims to give a comprehensive overview and more insights of our work on thermal error analysis and compensation for DIC/DVC measurements.
NASA Astrophysics Data System (ADS)
Fasnacht, Z.; Qin, W.; Haffner, D. P.; Loyola, D. G.; Joiner, J.; Krotkov, N. A.; Vasilkov, A. P.; Spurr, R. J. D.
2017-12-01
In order to estimate surface reflectance used in trace gas retrieval algorithms, radiative transfer models (RTM) such as the Vector Linearized Discrete Ordinate Radiative Transfer Model (VLIDORT) can be used to simulate the top of the atmosphere (TOA) radiances with advanced models of surface properties. With large volumes of satellite data, these model simulations can become computationally expensive. Look up table interpolation can improve the computational cost of the calculations, but the non-linear nature of the radiances requires a dense node structure if interpolation errors are to be minimized. In order to reduce our computational effort and improve the performance of look-up tables, neural networks can be trained to predict these radiances. We investigate the impact of using look-up table interpolation versus a neural network trained using the smart sampling technique, and show that neural networks can speed up calculations and reduce errors while using significantly less memory and RTM calls. In future work we will implement a neural network in operational processing to meet growing demands for reflectance modeling in support of high spatial resolution satellite missions.
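A toy comparison of the two approaches (the closed-form "RTM" stands in for VLIDORT, and plain random sampling stands in for the smart sampling technique; all functional forms are illustrative):

import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.interpolate import RegularGridInterpolator

# Toy stand-in for an RTM: TOA radiance vs. (surface albedo, cos view angle).
rtm = lambda a, mu: a * mu / (1.0 - 0.3 * a) + 0.05 * mu

# Look-up table on a coarse grid (interpolation error grows off-node)...
alb = np.linspace(0, 1, 8); mu = np.linspace(0.2, 1, 8)
A, M = np.meshgrid(alb, mu, indexing="ij")
lut = RegularGridInterpolator((alb, mu), rtm(A, M))

# ...versus a small neural net trained on scattered RTM samples.
rng = np.random.default_rng(0)
X = rng.uniform([0, 0.2], [1, 1], size=(2000, 2))
y = rtm(X[:, 0], X[:, 1])
net = MLPRegressor((32, 32), max_iter=3000, random_state=0).fit(X, y)

# Compare both surrogates against the "true" RTM off the LUT nodes.
test = np.array([[0.37, 0.63]])
truth = rtm(0.37, 0.63)
print(abs(lut(test)[0] - truth), abs(net.predict(test)[0] - truth))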
Absolute vs. relative error characterization of electromagnetic tracking accuracy
NASA Astrophysics Data System (ADS)
Matinfar, Mohammad; Narayanasamy, Ganesh; Gutierrez, Luis; Chan, Raymond; Jain, Ameet
2010-02-01
Electromagnetic (EM) tracking systems are often used for real time navigation of medical tools in an Image Guided Therapy (IGT) system. They are specifically advantageous when the medical device requires tracking within the body of a patient where line of sight constraints prevent the use of conventional optical tracking. EM tracking systems are however very sensitive to electromagnetic field distortions. These distortions, arising from changes in the electromagnetic environment due to the presence of conductive ferromagnetic surgical tools or other medical equipment, limit the accuracy of EM tracking, in some cases potentially rendering tracking data unusable. We present a mapping method for the operating region over which EM tracking sensors are used, allowing for characterization of measurement errors, in turn providing physicians with visual feedback about measurement confidence or reliability of localization estimates. In this instance, we employ a calibration phantom to assess distortion within the operating field of the EM tracker and to display in real time the distribution of measurement errors, as well as the location and extent of the field associated with minimal spatial distortion. The accuracy is assessed relative to successive measurements. Error is computed for a reference point and consecutive measurement errors are displayed relative to the reference in order to characterize the accuracy in near-real-time. In an initial set-up phase, the phantom geometry is calibrated by registering the data from a multitude of EM sensors in a non-ferromagnetic ("clean") EM environment. The registration results in the locations of sensors with respect to each other and defines the geometry of the sensors in the phantom. In a measurement phase, the position and orientation data from all sensors are compared with the known geometry of the sensor spacing, and localization errors (displacement and orientation) are computed. Based on error thresholds provided by the operator, the spatial distribution of localization errors are clustered and dynamically displayed as separate confidence zones within the operating region of the EM tracker space.
Estimation of attitude sensor timetag biases
NASA Technical Reports Server (NTRS)
Sedlak, J.
1995-01-01
This paper presents an extended Kalman filter for estimating attitude sensor timing errors. Spacecraft attitude is determined by finding the mean rotation from a set of reference vectors in inertial space to the corresponding observed vectors in the body frame. Any timing errors in the observations can lead to attitude errors if either the spacecraft is rotating or the reference vectors themselves vary with time. The state vector here consists of the attitude quaternion, timetag biases, and, optionally, gyro drift rate biases. The filter models the timetags as random walk processes: their expectation values propagate as constants and white noise contributes to their covariance. Thus, this filter is applicable to cases where the true timing errors are constant or slowly varying. The observability of the state vector is studied first through an examination of the algebraic observability condition and then through several examples with simulated star tracker timing errors. The examples use both simulated and actual flight data from the Extreme Ultraviolet Explorer (EUVE). The flight data come from times when EUVE had a constant rotation rate, while the simulated data feature large angle attitude maneuvers. The tests include cases with timetag errors on one or two sensors, both constant and time-varying, and with and without gyro bias errors. Due to EUVE's sensor geometry, the observability of the state vector is severely limited when the spacecraft rotation rate is constant. In the absence of attitude maneuvers, the state elements are highly correlated, and the state estimate is unreliable. The estimates are particularly sensitive to filter mistuning in this case. The EUVE geometry, though, is a degenerate case having coplanar sensors and rotation vector. Observability is much improved and the filter performs well when the rate is either varying or noncoplanar with the sensors, as during a slew. Even with bad geometry and constant rates, if gyro biases are independently known, the timetag error for a single sensor can be accurately estimated as long as its boresight is not too close to the spacecraft rotation axis.
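A minimal scalar sketch of the random-walk timetag-bias model described above: the expectation propagates as a constant while process noise inflates the covariance, and a residual proportional to rotation rate times bias drives the update. This is a one-state toy, not the full quaternion/timetag/gyro-bias filter; all numbers are illustrative.

```python
# Random-walk timetag-bias model: x_k+1 = x_k, P_k+1 = P_k + Q.
# A scalar toy state, not the paper's full attitude filter.
import numpy as np

q_tt = 1e-6                              # process-noise density (illustrative)

def propagate(bias, P, dt):
    return bias, P + q_tt * dt           # constant mean, growing variance

def update(bias, P, resid, H, r):
    """Scalar Kalman update; resid = measured minus predicted bias effect."""
    S = H * P * H + r
    K = P * H / S
    return bias + K * resid, (1 - K * H) * P

bias, P = 0.0, 1.0                       # initial guess: 0 s, variance 1 s^2
rate = 0.01                              # rad/s; attitude residual ~ rate * bias
true_bias = 0.25
rng = np.random.default_rng(2)
for _ in range(50):
    bias, P = propagate(bias, P, dt=1.0)
    resid = rate * true_bias + rng.normal(0, 1e-4) - rate * bias
    bias, P = update(bias, P, resid, H=rate, r=1e-8)
print(f"estimated timetag bias: {bias:.3f} s (true {true_bias} s)")
```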
Computer Generated Hologram System for Wavefront Measurement System Calibration
NASA Technical Reports Server (NTRS)
Olczak, Gene
2011-01-01
Computer Generated Holograms (CGHs) have been used for some time to calibrate interferometers that require nulling optics. A typical scenario is the testing of aspheric surfaces with an interferometer placed near the paraxial center of curvature. Existing CGH technology suffers from a reduced capacity to calibrate middle and high spatial frequencies. The root cause of this shortcoming is as follows: the CGH is not placed at an image conjugate of the asphere due to limitations imposed by the geometry of the test and the allowable size of the CGH. This innovation provides a calibration system where the imaging properties in calibration can be made comparable to the test configuration. Thus, if the test is designed to have good imaging properties, then middle and high spatial frequency errors in the test system can be well calibrated. The improved imaging properties are provided by a rudimentary auxiliary optic as part of the calibration system. The auxiliary optic is simple to characterize and align to the CGH. Use of the auxiliary optic also reduces the size of the CGH required for calibration and the density of the lines required for the CGH. The resulting CGH is less expensive than the existing technology and has reduced write error and alignment error sensitivities. This CGH system is suitable for any kind of calibration using an interferometer when high spatial resolution is required. It is especially well suited for tests that include segmented optical components or large apertures.
Impact of a counter-rotating planetary rotation system on thin-film thickness and uniformity
Oliver, J. B.
2017-06-12
Planetary rotation systems incorporating forward- and counter-rotating planets are used as a means of increasing coating-system capacity for large oblong substrates. Comparisons of planetary motion for the two types of rotating systems are presented based on point tracking for multiple revolutions, as well as comparisons of quantitative thickness and uniformity. Counter-rotation system geometry is shown to result in differences in thin-film thickness relative to standard planetary rotation for precision optical coatings. As a result, this systematic error in thin-film thickness will reduce deposition yields for sensitive coating designs.
Impact of a counter-rotating planetary rotation system on thin-film thickness and uniformity.
Oliver, J B
2017-06-20
Planetary rotation systems incorporating forward- and counter-rotating planets are used as a means of increasing coating-system capacity for large oblong substrates. Comparisons of planetary motion for the two types of rotating systems are presented based on point tracking for multiple revolutions as well as comparisons of quantitative thickness and uniformity. Counter-rotation system geometry is shown to result in differences in thin-film thickness relative to standard planetary rotation for precision optical coatings. This systematic error in thin-film thickness will reduce deposition yields for sensitive coating designs.
Impact of a counter-rotating planetary rotation system on thin-film thickness and uniformity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oliver, J. B.
Planetary rotation systems incorporating forward- and counter-rotating planets are used as a means of increasing coating-system capacity for large oblong substrates. Comparisons of planetary motion for the two types of rotating systems are presented based on point tracking for multiple revolutions, as well as comparisons of quantitative thickness and uniformity. Counter-rotation system geometry is shown to result in differences in thin-film thickness relative to standard planetary rotation for precision optical coatings. As a result, this systematic error in thin-film thickness will reduce deposition yields for sensitive coating designs.
New analysis strategies for micro aspheric lens metrology
NASA Astrophysics Data System (ADS)
Gugsa, Solomon Abebe
Effective characterization of an aspheric micro lens is critical for understanding and improving processing in micro-optic manufacturing. Since most microlenses are plano-convex, where the convex geometry is a conic surface, current practice is often limited to obtaining an estimate of the lens conic constant, which averages out the surface geometry that departs from an exact conic surface and any additional surface irregularities. We have developed a comprehensive approach to estimating the best-fit conic and its uncertainty, and in addition propose an alternative analysis that focuses on surface errors rather than the best-fit conic constant. We describe our new analysis strategy based on the two most dominant micro lens metrology methods in use today, namely, scanning white light interferometry (SWLI) and phase shifting interferometry (PSI). We estimate several parameters from the measurement. The major uncertainty contributors for SWLI are the estimates of base radius of curvature, the aperture of the lens, the sag of the lens, noise in the measurement, and the center of the lens. In the case of PSI the dominant uncertainty contributors are noise in the measurement, the radius of curvature, and the aperture. Our best-fit conic procedure uses least squares minimization to extract a best-fit conic value, which is then subjected to a Monte Carlo analysis to capture combined uncertainty. In our surface errors analysis procedure, we consider the surface errors as the difference between the measured geometry and the best-fit conic surface or as the difference between the measured geometry and the design specification for the lens. We focus on a Zernike polynomial description of the surface error, and again a Monte Carlo analysis is used to estimate a combined uncertainty, which in this case is an uncertainty for each Zernike coefficient. Our approach also allows us to investigate the effect of individual uncertainty parameters and measurement noise on both the best-fit conic constant analysis and the surface errors analysis, and compare the individual contributions to the overall uncertainty.
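A minimal sketch of the best-fit-conic procedure under stated assumptions: the conic sag equation is fit by least squares for the vertex radius R and conic constant k, and a Monte Carlo loop over resampled noise yields a combined uncertainty on k. Synthetic data stand in for the SWLI/PSI measurements.

```python
# Least-squares fit of the conic constant k (and vertex radius R) to sag
# data, followed by a Monte Carlo loop to propagate measurement noise into
# an uncertainty on k. Data are synthetic, not SWLI/PSI maps.
import numpy as np
from scipy.optimize import least_squares

def conic_sag(r, R, k):
    return r**2 / (R * (1 + np.sqrt(1 - (1 + k) * r**2 / R**2)))

rng = np.random.default_rng(3)
r = np.linspace(0, 0.4, 200)                       # mm, lens aperture
z_meas = conic_sag(r, R=1.0, k=-0.6) + rng.normal(0, 5e-6, r.size)

def fit_conic(z):
    res = least_squares(lambda p: conic_sag(r, *p) - z, x0=[1.1, -0.5])
    return res.x                                    # [R, k]

R_hat, k_hat = fit_conic(z_meas)

# Monte Carlo: refit under resampled noise for a combined uncertainty on k
ks = [fit_conic(z_meas + rng.normal(0, 5e-6, r.size))[1] for _ in range(200)]
print(f"k = {k_hat:.4f} +/- {np.std(ks):.4f}")
```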
Concentration solar power optimization system and method of using the same
Andraka, Charles E
2014-03-18
A system and method for optimizing at least one mirror of at least one CSP system is provided. The system has a screen for displaying light patterns for reflection by the mirror, a camera for receiving a reflection of the light patterns from the mirror, and a solar characterization tool. The solar characterization tool has a characterizing unit for determining at least one mirror parameter of the mirror based on an initial position of the camera and the screen, and a refinement unit for refining the determined parameter(s) based on an adjusted position of the camera and screen whereby the mirror is characterized. The system may also be provided with a solar alignment tool for comparing at least one mirror parameter of the mirror to a design geometry whereby an alignment error is defined, and at least one alignment unit for adjusting the mirror to reduce the alignment error.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berkel, M. van; Fellow of the Japan Society for the Promotion of Science; FOM Institute DIFFER-Dutch Institute for Fundamental Energy Research, Association EURATOM- FOM, Trilateral Euregio Cluster, PO Box 1207, 3430 BE Nieuwegein
2014-11-15
In this paper, a number of new approximations are introduced to estimate the perturbative diffusivity (χ), convectivity (V), and damping (τ) in cylindrical geometry. For this purpose, the harmonic components of heat waves induced by localized deposition of modulated power are used. The approximations are based on semi-infinite slab approximations of the heat equation. The main result is the approximation of χ under the influence of V and τ based on the phase of two harmonics, making the estimate less sensitive to calibration errors. To understand why the slab approximations can estimate χ well in cylindrical geometry, the relationships between heat transport models in slab and cylindrical geometry are studied. In addition, the relationship between amplitude and phase with respect to their derivatives, used to estimate χ, is discussed. The results are presented in terms of the relative error for the different derived approximations for different values of frequency, transport coefficients, and dimensionless radius. The approximations show a significant region in which χ, V, and τ can be estimated well, but also regions in which the error is large. Also, it is shown that some compensation is necessary to estimate V and τ in a cylindrical geometry. On the other hand, errors resulting from the simplified assumptions are also discussed, showing that estimating realistic values for V and τ based on infinite domains will be difficult in practice. This paper is the first part (Part I) of a series of three papers. In Part II and Part III, cylindrical approximations based directly on a semi-infinite cylindrical domain (outward propagating heat pulses) and inward propagating heat pulses in a cylindrical domain, respectively, will be treated.
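For orientation, the simplest member of this family of slab approximations: for a semi-infinite slab with pure diffusion (no convection or damping), a harmonic at angular frequency ω has phase slope dφ/dρ = sqrt(ω/(2χ)), so χ = ω/(2 (dφ/dρ)²). The sketch below uses only this single-harmonic relation; the paper's two-harmonic approximations refine it to reduce sensitivity to calibration errors.

```python
# Basic slab estimate: chi = w / (2 * (dphi/dx)^2) for a semi-infinite slab
# with pure diffusion (V = 0, no damping). Synthetic phases are generated
# from the known solution and the slope is recovered by a linear fit.
import numpy as np

chi_true = 2.0                                    # m^2/s, illustrative
w = 2 * np.pi * 25.0                              # 25 Hz modulation

x = np.array([0.02, 0.04, 0.06])                  # sensor positions (m)
phase = -x * np.sqrt(w / (2 * chi_true))          # harmonic phases (rad)

slope = np.polyfit(x, phase, 1)[0]                # dphi/dx from the channels
chi_est = w / (2 * slope**2)
print(f"estimated chi = {chi_est:.3f} m^2/s (true {chi_true})")
```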
Agujetas, R; González-Fernández, M R; Nogales-Asensio, J M; Montanero, J M
2018-05-30
Fractional flow reserve (FFR) is the gold-standard assessment of the hemodynamic significance of coronary stenoses. However, it requires catheterization of the coronary artery to determine the pressure waveforms proximal and distal to the stenosis. In contrast, computational fluid dynamics enables the calculation of the FFR value from relatively non-invasive computed tomography angiography (CTA). We analyze the flow across idealized highly-eccentric coronary stenoses by solving the Navier-Stokes equations. We examine the influence of several aspects (approximations) of the simulation method on the calculation of the FFR value. We study the effects on the FFR value of errors made in the segmentation of clinical images. For this purpose, we compare the FFR value for the nominal geometry with that calculated for other shapes that slightly deviate from that geometry. This analysis is conducted for a range of stenosis severities and different inlet velocity and pressure waveforms. The errors made in assuming a uniform velocity profile in front of the stenosis, as well as those due to the Newtonian and laminar approximations, are negligible for stenosis severities leading to FFR values around the threshold 0.8. The limited resolution of the stenosis geometry reconstruction is the major source of error when predicting the FFR value. Both systematic errors in the contour detection of just 1-pixel size in the CTA images and a low-quality representation of the stenosis surface (coarse faceted geometry) may yield wrong outcomes of the FFR assessment for an important set of eccentric stenoses. In contrast, the spatial resolution of images acquired with optical coherence tomography may be sufficient to ensure accurate predictions for the FFR value.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shang, Yu; Lin, Yu; Yu, Guoqiang, E-mail: guoqiang.yu@uky.edu
2014-05-12
The conventional semi-infinite solution for extracting blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements may cause errors in estimation of BFI (αD_B) in tissues with small volume and large curvature. We proposed an algorithm integrating an Nth-order linear model of the autocorrelation function with Monte Carlo simulation of photon migrations in tissue for the extraction of αD_B. The volume and geometry of the measured tissue were incorporated in the Monte Carlo simulation, which overcomes the semi-infinite restrictions. The algorithm was tested using computer simulations on four tissue models with varied volumes/geometries and applied to an in vivo mouse model of stroke. Computer simulations show that the high-order (N ≥ 5) linear algorithm was more accurate in extracting αD_B (errors < ±2%) from the noise-free DCS data than the semi-infinite solution (errors: −5.3% to −18.0%) for different tissue models. Although adding random noises to DCS data resulted in αD_B variations, the mean values of errors in extracting αD_B were similar to those reconstructed from the noise-free DCS data. In addition, the errors in extracting the relative changes of αD_B using both the linear algorithm and the semi-infinite solution were fairly small (errors < ±2.0%) and did not rely on the tissue volume/geometry. The experimental results from the in vivo stroke mice agreed with those in simulations, demonstrating the robustness of the linear algorithm. DCS with the high-order linear algorithm shows the potential for the inter-subject comparison and longitudinal monitoring of absolute BFI in a variety of tissues/organs with different volumes/geometries.
Fictitious Domain Methods for Fracture Models in Elasticity.
NASA Astrophysics Data System (ADS)
Court, S.; Bodart, O.; Cayol, V.; Koko, J.
2014-12-01
As surface displacements depend nonlinearly on source location and shape, simplifying assumptions are generally required to reduce computation time when inverting geodetic data. We present a generic Finite Element Method designed for pressurized or sheared cracks inside a linear elastic medium. A fictitious domain method is used to take the crack into account independently of the mesh. Besides the possibility of considering heterogeneous media, the approach permits the evolution of the crack through time or more generally through iterations: the goal is to change as little as possible when the crack geometry is modified; in particular, no re-meshing is required (the boundary conditions at the level of the crack are imposed by Lagrange multipliers), leading to a gain in computation time and resources with respect to classic finite element methods. This method is also robust with respect to the geometry, since we expect to observe the same behavior whatever the shape and the position of the crack. We present numerical experiments which highlight the accuracy of our method (using convergence curves), the optimality of errors, and the robustness with respect to the geometry (with computation of errors on some quantities for all kinds of geometric configurations). We also provide 2D benchmark tests. The method is then applied to Piton de la Fournaise volcano, considering a pressurized crack - inside a 3-dimensional domain - and the corresponding computation time and accuracy are compared with results from a mixed boundary element method. In order to determine the crack geometrical characteristics and pressure, inversions are performed combining fictitious domain computations with a near-neighborhood algorithm. Performances are compared with those obtained combining a mixed boundary element method with the same inversion algorithm.
NASA Astrophysics Data System (ADS)
Al-Mayah, Adil; Moseley, Joanne; Velec, Mike; Brock, Kristy
2011-08-01
Both accuracy and efficiency are critical for the implementation of biomechanical model-based deformable registration in clinical practice. The focus of this investigation is to evaluate the potential of improving the efficiency of the deformable image registration of the human lungs without loss of accuracy. Three-dimensional finite element models have been developed using image data of 14 lung cancer patients. Each model consists of two lungs, tumor and external body. Sliding of the lungs inside the chest cavity is modeled using a frictionless surface-based contact model. The effect of the type of element, finite deformation and elasticity on the accuracy and computing time is investigated. Linear and quadratic tetrahedral elements are used with linear and nonlinear geometric analysis. Two types of material properties are applied, namely elastic and hyperelastic. The accuracy of each of the four models is examined using a number of anatomical landmarks representing the vessel bifurcation points distributed across the lungs. The registration error is not significantly affected by the element type or linearity of analysis, with an average vector error of around 2.8 mm. The displacement differences between linear and nonlinear analysis methods are calculated for all lung nodes and a maximum value of 3.6 mm is found in one of the nodes near the entrance of the bronchial tree into the lungs. The 95th percentile of displacement difference ranges between 0.4 and 0.8 mm. However, the time required for the analysis is reduced from 95 min in the quadratic element, nonlinear geometry model to 3.4 min in the linear element, linear geometry model. Therefore using linear tetrahedral elements with linear elastic materials and linear geometry is preferable for modeling the breathing motion of lungs for image-guided radiotherapy applications.
A study of modelling simplifications in ground vibration predictions for railway traffic at grade
NASA Astrophysics Data System (ADS)
Germonpré, M.; Degrande, G.; Lombaert, G.
2017-10-01
Accurate computational models are required to predict ground-borne vibration due to railway traffic. Such models generally require a substantial computational effort. Therefore, much research has focused on developing computationally efficient methods, by either exploiting the regularity of the problem geometry in the direction along the track or assuming a simplified track structure. This paper investigates the modelling errors caused by commonly made simplifications of the track geometry. A case study is presented investigating a ballasted track in an excavation. The soil underneath the ballast is stiffened by a lime treatment. First, periodic track models with different cross sections are analyzed, revealing that a prediction of the rail receptance only requires an accurate representation of the soil layering directly underneath the ballast. A much more detailed representation of the cross sectional geometry is required, however, to calculate vibration transfer from track to free field. Second, simplifications in the longitudinal track direction are investigated by comparing 2.5D and periodic track models. This comparison shows that the 2.5D model slightly overestimates the track stiffness, while the transfer functions between track and free field are well predicted. Using a 2.5D model to predict the response during a train passage leads to an overestimation of both train-track interaction forces and free field vibrations. A combined periodic/2.5D approach is therefore proposed in this paper. First, the dynamic axle loads are computed by solving the train-track interaction problem with a periodic model. Next, the vibration transfer to the free field is computed with a 2.5D model. This combined periodic/2.5D approach only introduces small modelling errors compared to an approach in which a periodic model is used in both steps, while significantly reducing the computational cost.
Fletcher, Timothy L; Popelier, Paul L A
2016-06-14
A machine learning method called kriging is applied to the set of all 20 naturally occurring amino acids. Kriging models are built that predict electrostatic multipole moments for all topological atoms in any amino acid based on molecular geometry only. These models then predict molecular electrostatic interaction energies. On the basis of 200 unseen test geometries for each amino acid, no amino acid shows a mean prediction error above 5.3 kJ mol^-1, while the lowest error observed is 2.8 kJ mol^-1. The mean error across the entire set is only 4.2 kJ mol^-1 (or 1 kcal mol^-1). Charged systems are created by protonating or deprotonating selected amino acids, and these show no significant deviation in prediction error over their neutral counterparts. Similarly, the proposed methodology can also handle amino acids with aromatic side chains, without the need for modification. Thus, we present a generic method capable of accurately capturing multipolar polarizable electrostatics in amino acids.
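A hedged sketch of the kriging idea using a Gaussian-process regressor (the statistical core of kriging) on synthetic features; the feature set and target are stand-ins for the paper's topological-atom moments, and the kJ/mol units are nominal.

```python
# Gaussian-process (kriging) regression from geometric features to an
# "interaction energy", reporting the mean prediction error on unseen
# geometries. Features and target are synthetic stand-ins.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, size=(400, 6))             # internal-coordinate features
y = 20 * np.sin(X[:, 0]) + 5 * X[:, 1] * X[:, 2] + rng.normal(0, 0.3, 400)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-2),
                              normalize_y=True).fit(X[:200], y[:200])
pred = gp.predict(X[200:])
print(f"mean prediction error: {np.abs(pred - y[200:]).mean():.2f} kJ/mol")
```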
Effects of line fiducial parameters and beamforming on ultrasound calibration
Ameri, Golafsoun; Baxter, John S. H.; McLeod, A. Jonathan; Peters, Terry M.; Chen, Elvis C. S.
2017-01-01
Ultrasound (US)-guided interventions are often enhanced via integration with an augmented reality environment, a necessary component of which is US calibration. Calibration requires the segmentation of fiducials, i.e., a phantom, in US images. Fiducial localization error (FLE) can decrease US calibration accuracy, which fundamentally affects the total accuracy of the interventional guidance system. Here, we investigate the effects of US image reconstruction techniques as well as phantom material and geometry on US calibration. It was shown that the FLE was reduced by 29% with synthetic transmit aperture imaging compared with conventional B-mode imaging in a Z-bar calibration, resulting in a 10% reduction of calibration error. In addition, an evaluation of a variety of calibration phantoms with different geometrical and material properties was performed. The phantoms included braided wire, plastic straws, and polyvinyl alcohol cryogel tubes with different diameters. It was shown that these properties have a significant effect on calibration error, which is a variable based on US beamforming techniques. These results would have important implications for calibration procedures and their feasibility in the context of image-guided procedures. PMID:28331886
Effects of line fiducial parameters and beamforming on ultrasound calibration.
Ameri, Golafsoun; Baxter, John S H; McLeod, A Jonathan; Peters, Terry M; Chen, Elvis C S
2017-01-01
Ultrasound (US)-guided interventions are often enhanced via integration with an augmented reality environment, a necessary component of which is US calibration. Calibration requires the segmentation of fiducials, i.e., a phantom, in US images. Fiducial localization error (FLE) can decrease US calibration accuracy, which fundamentally affects the total accuracy of the interventional guidance system. Here, we investigate the effects of US image reconstruction techniques as well as phantom material and geometry on US calibration. It was shown that the FLE was reduced by 29% with synthetic transmit aperture imaging compared with conventional B-mode imaging in a Z-bar calibration, resulting in a 10% reduction of calibration error. In addition, an evaluation of a variety of calibration phantoms with different geometrical and material properties was performed. The phantoms included braided wire, plastic straws, and polyvinyl alcohol cryogel tubes with different diameters. It was shown that these properties have a significant effect on calibration error, which is a variable based on US beamforming techniques. These results would have important implications for calibration procedures and their feasibility in the context of image-guided procedures.
Junwei Ma; Han Yuan; Sunderam, Sridhar; Besio, Walter; Lei Ding
2017-07-01
Neural activity inside the human brain generates electrical signals that can be detected on the scalp. Electroencephalography (EEG) is one of the most widely utilized techniques helping physicians and researchers to diagnose and understand various brain diseases. By its nature, EEG has very high temporal resolution but poor spatial resolution. To achieve higher spatial resolution, a novel tri-polar concentric ring electrode (TCRE) has been developed to directly measure the Surface Laplacian (SL). The objective of the present study is to accurately calculate the SL for the TCRE based on a realistic-geometry head model. A locally dense mesh was proposed to represent the head surface, where the locally dense parts match the small structural components of the TCRE. Other areas without dense mesh were used for the purpose of reducing the computational load. We conducted computer simulations to evaluate the performance of the proposed mesh and evaluated possible numerical errors as compared with a low-density model. Finally, with the achieved accuracy, we presented the computed forward lead field of the SL for the TCRE for the first time in a realistic-geometry head model and demonstrated that it has better spatial resolution than the SL computed from classic EEG recordings.
Application of a low order panel method to complex three-dimensional internal flow problems
NASA Technical Reports Server (NTRS)
Ashby, D. L.; Sandlin, D. R.
1986-01-01
An evaluation of the ability of a low order panel method to predict complex three-dimensional internal flow fields was made. The computer code VSAERO was used as a basis for the evaluation. Guidelines for modeling internal flow geometries were determined and the effects of varying the boundary conditions and the use of numerical approximations on solution accuracy were studied. Several test cases were run and the results were compared with theoretical or experimental results. Modeling an internal flow geometry as a closed box with normal velocities specified on an inlet and exit face provided accurate results and gave the user control over the boundary conditions. The values of the boundary conditions greatly influenced the amount of leakage an internal flow geometry suffered and could be adjusted to eliminate leakage. The use of the far-field approximation to reduce computation time influenced the accuracy of a solution and was coupled with the values of the boundary conditions needed to eliminate leakage. The error induced in the influence coefficients by using the far-field approximation was found to be dependent on the type of influence coefficient, the far-field radius, and the aspect ratio of the panels.
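The far-field trade-off described above can be made concrete with a 2-D constant-strength source panel (VSAERO's panels are 3-D quadrilaterals, so this is only an analog): beyond a chosen radius the panel is lumped into a point source, and the induced error shrinks with distance measured in panel lengths.

```python
# Exact 2-D source-panel potential (midpoint-rule integral) versus the
# far-field model in which the panel is lumped into a single point source.
import numpy as np

def exact_panel_potential(x, z, length=1.0, n=2000):
    """Midpoint-rule potential of a unit-strength 2-D source panel."""
    ds = length / n
    s = (np.arange(n) + 0.5) * ds - length / 2
    return np.sum(np.log(np.hypot(x - s, z))) * ds / (2 * np.pi)

def far_field_potential(x, z, length=1.0):
    """Panel lumped into a point source of the same total strength."""
    return length * np.log(np.hypot(x, z)) / (2 * np.pi)

for r_over_L in [2, 5, 10, 20]:              # field-point distance in panel lengths
    ex = exact_panel_potential(r_over_L, 0.5)
    ff = far_field_potential(r_over_L, 0.5)
    print(f"r/L = {r_over_L:2d}: relative error {abs(ff - ex) / abs(ex):.1e}")
```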
Fully implicit moving mesh adaptive algorithm
NASA Astrophysics Data System (ADS)
Serazio, C.; Chacon, L.; Lapenta, G.
2006-10-01
In many problems of interest, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. The former is best dealt with by fully implicit methods, which are able to step over fast frequencies to resolve the dynamical time scale of interest. The latter requires grid adaptivity for efficiency. Moving-mesh grid adaptive methods are attractive because they can be designed to minimize the numerical error for a given resolution. However, the required grid governing equations are typically very nonlinear and stiff, and of considerable numerical difficulty. Not surprisingly, fully coupled, implicit approaches where the grid and the physics equations are solved simultaneously are rare in the literature, and circumscribed to 1D geometries. In this study, we present a fully implicit algorithm for moving mesh methods that is feasible for multidimensional geometries. Crucial elements are the development of an effective multilevel treatment of the grid equation, and a robust, rigorous error estimator. For the latter, we explore the effectiveness of a coarse grid correction error estimator, which faithfully reproduces spatial truncation errors for conservative equations. We will show that the moving mesh approach is competitive vs. uniform grids both in accuracy (due to adaptivity) and efficiency. Results for a variety of models in 1D and 2D geometries will be presented. L. Chacón, G. Lapenta, J. Comput. Phys., 212 (2), 703 (2006); G. Lapenta, L. Chacón, J. Comput. Phys., accepted (2006)
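A 1-D sketch of a coarse-grid-correction truncation-error estimator of the kind mentioned above, assuming a second-order central Laplacian: the correction tau = A_2h(R u_h) - R(A_h u_h) is proportional to the fine-grid truncation error, with factor (2^p - 1), p = 2.

```python
# Coarse-grid-correction estimate of the spatial truncation error for a
# 1-D central Laplacian; R is injection onto the 2h grid.
import numpy as np

def laplacian(u, h):
    return (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2

N = 65
x = np.linspace(0, 1, N)
h = x[1] - x[0]
u = np.sin(2 * np.pi * x)                 # stand-in for the fine-grid solution

u_c = u[::2]                              # injection restriction to the 2h grid
A2h_Ru = laplacian(u_c, 2 * h)            # coarse operator on restricted solution
R_Ahu = laplacian(u, h)[1::2]             # fine operator, restricted to 2h nodes

tau = A2h_Ru - R_Ahu                      # ~ (2^p - 1) x truncation error, p = 2
est = tau / 3
exact = h**2 / 12 * (2 * np.pi)**4 * np.sin(2 * np.pi * x[2:-2:2])
print("max estimated  truncation error:", np.abs(est).max())
print("max analytical truncation error:", np.abs(exact).max())
```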
Attitude Determination Error Analysis System (ADEAS) mathematical specifications document
NASA Technical Reports Server (NTRS)
Nicholson, Mark; Markley, F.; Seidewitz, E.
1988-01-01
The mathematical specifications of Release 4.0 of the Attitude Determination Error Analysis System (ADEAS), which provides a general-purpose linear error analysis capability for various spacecraft attitude geometries and determination processes, are presented. The analytical basis of the system is presented, and detailed equations are provided for both three-axis-stabilized and spin-stabilized attitude sensor models.
Global Warming Estimation from MSU: Correction for Drift and Calibration Errors
NASA Technical Reports Server (NTRS)
Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.; Einaudi, Franco (Technical Monitor)
2000-01-01
Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz), made in the nadir direction from sequential, sun-synchronous, polar-orbiting NOAA morning satellites (NOAA 6, 10 and 12 that have about 7am/7pm orbital geometry) and afternoon satellites (NOAA 7, 9, 11 and 14 that have about 2am/2pm orbital geometry) are analyzed in this study to derive the global temperature trend from 1980 to 1998. In order to remove the discontinuities between the data of the successive satellites and to get a continuous time series, first we have used the shortest possible time record of each satellite. In this way we get a preliminary estimate of the global temperature trend of 0.21 K/decade. However, this estimate is affected by systematic time-dependent errors. One such error is the instrument calibration error. This error can be inferred whenever there are overlapping measurements made by two satellites over an extended period of time. From the available successive satellite data we have taken the longest possible time record of each satellite to form the time series during the period 1980 to 1998 and to assess this error. We find we can decrease the global temperature trend by about 0.07 K/decade. In addition there are systematic time-dependent errors present in the data that are introduced by the drift in the satellite orbital geometry: one arises from the diurnal cycle in temperature, and the other is the drift-related change in the calibration of the MSU. In order to analyze the nature of these drift-related errors the multi-satellite Ch 2 data set is partitioned into am and pm subsets to create two independent time series. The diurnal-cycle error can be assessed in the am and pm data of Ch 2 on land and can be eliminated. Observations made in the MSU Ch 1 (50.3 GHz) support this approach. The calibration-drift error is obvious only in the difference between the pm and am observations of Ch 2 over the ocean. We have followed two different paths to assess the impact of the errors on the global temperature trend. In one path the entire error is placed in the am data while in the other it is placed in the pm data. The global temperature trend is increased or decreased by about 0.03 K/decade depending upon this placement. Taking into account all random errors and systematic errors our analysis of MSU observations leads us to conclude that a conservative estimate of the global warming is 0.11 (+-) 0.04 K/decade during 1980 to 1998.
Low Density Parity Check Codes Based on Finite Geometries: A Rediscovery and More
NASA Technical Reports Server (NTRS)
Kou, Yu; Lin, Shu; Fossorier, Marc
1999-01-01
Low density parity check (LDPC) codes with iterative decoding based on belief propagation achieve astonishing error performance close to the Shannon limit. No algebraic or geometric method for constructing these codes has been reported and they are largely generated by computer search. As a result, encoding of long LDPC codes is in general very complex. This paper presents two classes of high rate LDPC codes whose constructions are based on finite Euclidean and projective geometries, respectively. These classes of codes are cyclic and have good constraint parameters and minimum distances. Cyclic structure allows the use of linear feedback shift registers for encoding. These finite geometry LDPC codes achieve very good error performance with either soft-decision iterative decoding based on belief propagation or Gallager's hard-decision bit flipping algorithm. These codes can be punctured or extended to obtain other good LDPC codes. A generalization of these codes is also presented.
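Gallager's hard-decision bit-flipping decoder mentioned above admits a very short sketch: flip the bits participating in the largest number of failed parity checks until the syndrome vanishes. The parity-check matrix below is a toy, not a finite-geometry code.

```python
# Gallager-style hard-decision bit-flipping decoding on a toy parity-check
# matrix (not a finite-geometry LDPC code).
import numpy as np

def bit_flip_decode(H, y, max_iter=50):
    x = y.copy()
    for _ in range(max_iter):
        syndrome = H @ x % 2
        if not syndrome.any():
            return x, True                      # all parity checks satisfied
        fails = H.T @ syndrome                  # failed-check count per bit
        x[fails == fails.max()] ^= 1            # flip the worst offenders
    return x, False

H = np.array([[1, 1, 0, 1, 0, 0],               # toy (6, 3) parity-check matrix
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
codeword = np.array([1, 0, 1, 1, 1, 0])         # satisfies H @ c mod 2 = 0
received = codeword.copy()
received[2] ^= 1                                # inject a single bit error
decoded, ok = bit_flip_decode(H, received)
print("decoded:", decoded, "success:", ok)
```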
Current measurement by Faraday effect on GEPOPU
NASA Astrophysics Data System (ADS)
N, Correa; H, Chuaqui; E, Wyndham; F, Veloso; J, Valenzuela; M, Favre; H, Bhuyan
2014-05-01
The design and calibration of an optical current sensor using BK7 glass is presented. The current sensor is based on polarization rotation by the Faraday effect. GEPOPU is a pulsed power generator (double transit time 120 ns, 1.5 Ohm impedance, coaxial geometry) where Z-pinch experiments are performed. The measurements were performed at the Optics and Plasma Physics Laboratory of Pontificia Universidad Catolica de Chile. The Verdet constant for two different optical materials was obtained using a He-Ne laser. The values obtained are within the experimental error bars of measurements published in the literature (less than 15% difference). Two different sensor geometries were tried. We present the preliminary results for one of the geometries. The values obtained for the current agree within the measurement error with those obtained by means of a Spice simulation of the generator. Signal traces obtained are completely noise free.
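A hedged sketch of the underlying measurement relation: the rotation is theta = V * integral(B . dl) along the optical path, and for a path forming N closed loops around the conductor, Ampere's law gives theta = V * mu0 * N * I. The Verdet value below is only an order-of-magnitude figure for BK7 at 633 nm, not the paper's calibrated value.

```python
# Faraday-effect current inference: I = theta / (V * mu0 * N) for an optical
# path that encircles the conductor N times. Numbers are illustrative.
import numpy as np

MU0 = 4e-7 * np.pi            # vacuum permeability (T*m/A)
V_BK7 = 3.8                   # rad/(T*m); order of magnitude for BK7 at 633 nm

def current_from_rotation(theta_rad, n_loops=1, verdet=V_BK7):
    return theta_rad / (verdet * MU0 * n_loops)

theta = np.radians(0.5)       # measured polarization rotation: 0.5 degrees
print(f"I = {current_from_rotation(theta) / 1e3:.1f} kA")
```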
ERIC Educational Resources Information Center
Roll, Ido; Aleven, Vincent; McLaren, Bruce M.; Koedinger, Kenneth R.
2011-01-01
The present research investigated whether immediate metacognitive feedback on students' help-seeking errors can help students acquire better help-seeking skills. The Help Tutor, an intelligent tutor agent for help seeking, was integrated into a commercial tutoring system for geometry, the Geometry Cognitive Tutor. Study 1, with 58 students, found…
The Influence of Gantry Geometry on Aliasing and Other Geometry Dependent Errors
NASA Astrophysics Data System (ADS)
Joseph, Peter M.
1980-06-01
At least three gantry geometries are widely used in medical CT scanners: (1) rotate-translate, (2) rotating detectors, (3) stationary detectors. There are significant geometrical differences between these designs, especially regarding (a) the region of space scanned by any given detector and (b) the sample density of rays which scan the patient. It is imperative to distinguish between "views" and "rays" in analyzing this situation. In particular, views are defined by the x-ray source in type 2 and by the detector in type 3 gantries. It is known that ray dependent errors are generally much more important than view dependent errors. It is shown that spatial resolution is primarily limited by the spacing between rays in any view, while the number of ray samples per beam width determines the extent of aliasing artifacts. Rotating detector gantries are especially susceptible to aliasing effects. It is shown that aliasing effects can distort the point spread function in a way that is highly dependent on the position of the point in the scanned field. Such effects can cause anomalies in the MTF functions as derived from points in machines with significant aliasing problems.
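The ray-sampling criterion above reduces to counting samples per beam width; a Nyquist-style rule of thumb of two samples per beam width is used in the sketch below, with illustrative geometry numbers that do not correspond to any specific scanner generation.

```python
# Aliasing check: number of ray samples per beam width in a view, with a
# ~Nyquist threshold of 2 samples per beam width. All numbers illustrative.
def samples_per_beam_width(ray_spacing_mm, beam_width_mm):
    return beam_width_mm / ray_spacing_mm

for gantry, spacing in [("rotate-translate", 0.5),
                        ("rotating detectors", 1.5),
                        ("stationary detectors", 0.75)]:
    n = samples_per_beam_width(spacing, beam_width_mm=1.5)
    flag = "OK" if n >= 2.0 else "aliasing risk"
    print(f"{gantry:20s}: {n:.1f} samples/beam width -> {flag}")
```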
An optimal design of wind turbine and ship structure based on neuro-response surface method
NASA Astrophysics Data System (ADS)
Lee, Jae-Chul; Shin, Sung-Chul; Kim, Soo-Young
2015-07-01
The geometry of engineering systems affects their performance. For this reason, the shape of engineering systems needs to be optimized in the initial design stage. However, engineering system design problems consist of multi-objective optimization, and performance analysis using commercial codes or numerical analysis is generally time-consuming. To solve these problems, many engineers perform the optimization using an approximation model (response surface). The Response Surface Method (RSM) is generally used to predict system performance in the engineering research field, but RSM presents some prediction errors for highly nonlinear systems. The major objective of this research is to establish an optimal design method for multi-objective problems and confirm its applicability. The proposed process is composed of three parts: definition of geometry, generation of the response surface, and the optimization process. To reduce the time for performance analysis and minimize the prediction errors, the approximation model is generated using the Backpropagation Artificial Neural Network (BPANN), which is considered a Neuro-Response Surface Method (NRSM). The optimization is done on the generated response surface by the non-dominated sorting genetic algorithm II (NSGA-II). Through case studies of a marine system and ship structure (the substructure of a floating offshore wind turbine considering hydrodynamic performance, and bulk carrier bottom stiffened panels considering structural performance), we have confirmed the applicability of the proposed method for multi-objective side-constraint optimization problems.
Ambler, Michael; Vorselaars, Bart; Allen, Michael P; Quigley, David
2017-02-21
We apply the capillary wave method, based on measurements of fluctuations in a ribbon-like interfacial geometry, to determine the solid-liquid interfacial free energy for both polytypes of ice I and the recently proposed ice 0 within a mono-atomic model of water. We discuss various choices for the molecular order parameter, which distinguishes solid from liquid, and demonstrate the influence of this choice on the interfacial stiffness. We quantify the influence of discretisation error when sampling the interfacial profile and the limits on accuracy imposed by the assumption of quasi one-dimensional geometry. The interfacial free energies of the two ice I polytypes are indistinguishable to within achievable statistical error and the small ambiguity which arises from the choice of order parameter. In the case of ice 0, we find that the large surface unit cell for low index interfaces constrains the width of the interfacial ribbon such that the accuracy of results is reduced. Nevertheless, we establish that the interfacial free energy of ice 0 at its melting temperature is similar to that of ice I under the same conditions. The rationality of a core-shell model for the nucleation of ice I within ice 0 is questioned within the context of our results.
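A sketch of the capillary-wave relation underlying the method: for an interface of area A, each allowed ribbon mode obeys <|h(q)|^2> = kB*T / (A * stiffness * q^2), so the interfacial stiffness follows from the measured spectrum. Synthetic mode amplitudes are used here in place of simulated interface profiles.

```python
# Interfacial stiffness from the capillary-wave fluctuation spectrum,
# <|h(q)|^2> = kB*T / (A * stiffness * q^2), using synthetic mode data.
import numpy as np

kB_T = 1.0                          # reduced units
A = 400.0                           # interface area
stiffness_true = 0.35

q = 2 * np.pi * np.arange(1, 11) / 40.0          # allowed ribbon wavevectors
rng = np.random.default_rng(5)
# each thermal mode is exponentially distributed about its average
h2 = kB_T / (A * stiffness_true * q**2) * rng.exponential(1.0, q.size)

c = np.mean(h2 * q**2)              # average of q^2 * <|h(q)|^2>
stiffness_fit = kB_T / (A * c)
print(f"fitted stiffness: {stiffness_fit:.3f} (true {stiffness_true})")
```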
Use of Existing CAD Models for Radiation Shielding Analysis
NASA Technical Reports Server (NTRS)
Lee, K. T.; Barzilla, J. E.; Wilson, P.; Davis, A.; Zachman, J.
2015-01-01
The utility of a radiation exposure analysis depends not only on the accuracy of the underlying particle transport code, but also on the accuracy of the geometric representations of both the vehicle used as radiation shielding mass and the phantom representation of the human form. The current NASA/Space Radiation Analysis Group (SRAG) process to determine crew radiation exposure in a vehicle design incorporates both output from an analytic High Z and Energy Particle Transport (HZETRN) code and the properties (i.e., material thicknesses) of a previously processed drawing. This geometry pre-process can be time-consuming, and the results are less accurate than those determined using a Monte Carlo-based particle transport code. The current work aims to improve this process. Although several Monte Carlo programs (FLUKA, Geant4) are readily available, most use an internal geometry engine. The lack of an interface with the standard CAD formats used by the vehicle designers limits the ability of the user to communicate complex geometries. Translation of native CAD drawings into a format readable by these transport programs is time consuming and prone to error. The Direct Accelerated Geometry-United (DAGU) project is intended to provide an interface between the native vehicle or phantom CAD geometry and multiple particle transport codes to minimize problem setup time, computing time, and analysis error.
NASA Technical Reports Server (NTRS)
Leberl, F. W.
1979-01-01
The geometry of the radar stereo model and factors affecting visual radar stereo perception are reviewed. Limits to the vertical exaggeration factor of stereo radar are defined. Radar stereo model accuracies are analyzed with respect to coordinate errors caused by errors of radar sensor position and of range, and with respect to errors of coordinate differences, i.e., cross-track distances and height differences.
Spin Contamination Error in Optimized Geometry of Singlet Carbene (1A1) by Broken-Symmetry Method
NASA Astrophysics Data System (ADS)
Kitagawa, Yasutaka; Saito, Toru; Nakanishi, Yasuyuki; Kataoka, Yusuke; Matsui, Toru; Kawakami, Takashi; Okumura, Mitsutaka; Yamaguchi, Kizashi
2009-10-01
Spin contamination errors of a broken-symmetry (BS) method in optimized structural parameters of the singlet methylene (1A1) molecule are quantitatively estimated for the Hartree-Fock (HF) method, post-HF methods (CID, CCD, MP2, MP3, MP4(SDQ)), and a hybrid DFT (B3LYP) method. For this purpose, the optimized geometry by the BS method is compared with that of an approximate spin projection (AP) method. The difference between the BS and the AP methods is about 10-20° in the HCH angle. In order to examine the basis set dependency of the spin contamination error, calculated results by STO-3G, 6-31G*, and 6-311++G** are compared. The error depends on the basis sets, but the tendencies of each method are classified into two types. Calculated energy splitting values between the triplet and the singlet states (ST gap) indicate that the contamination of the stable triplet state makes the BS singlet solution stable and the ST gap becomes small. The magnitude of the spin contamination error in the ST gap is estimated to be on the order of 10^-1 eV.
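For reference, the AP correction used in this family of studies is commonly written in the standard Yamaguchi form below; the paper's implementation may differ in detail.

```latex
% Approximate spin projection (AP) of the broken-symmetry (BS) energy,
% standard Yamaguchi form; quoted as the generic expression, not necessarily
% the exact variant used in the paper.
E_{\mathrm{AP}}^{\,\mathrm{singlet}}
  = \alpha\, E_{\mathrm{BS}} - \beta\, E_{\mathrm{T}},
\qquad
\alpha = \frac{\langle S^2 \rangle_{\mathrm{T}}}
              {\langle S^2 \rangle_{\mathrm{T}} - \langle S^2 \rangle_{\mathrm{BS}}},
\qquad
\beta = \frac{\langle S^2 \rangle_{\mathrm{BS}}}
             {\langle S^2 \rangle_{\mathrm{T}} - \langle S^2 \rangle_{\mathrm{BS}}}
```

In the limit of a spin-pure BS solution (⟨S²⟩_BS = 0) the correction vanishes, α = 1 and β = 0; geometry optimization with the AP method follows the gradients of E_AP rather than E_BS.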
Compensated infrared absorption sensor for carbon dioxide and other infrared absorbing gases
Owen, Thomas E.
2005-11-29
A gas sensor, whose chamber uses filters and choppers in either a semicircular geometry or annular geometry, and incorporates separate infrared radiation filters and optical choppers. This configuration facilitates the use of a single infrared radiation source and a single detector for infrared measurements at two wavelengths, such that measurement errors may be compensated.
Bagheri, Zahra S; Melancon, David; Liu, Lu; Johnston, R Burnett; Pasini, Damiano
2017-06-01
The accuracy of Additive Manufacturing processes in fabricating porous biomaterials is currently limited by their capacity to render pore morphology that precisely matches its design. In a porous biomaterial, a geometric mismatch can result in pore occlusion and strut thinning, drawbacks that can inherently compromise bone ingrowth and severely impact mechanical performance. This paper focuses on Selective Laser Melting of porous microarchitecture and proposes a compensation scheme that reduces the morphology mismatch between as-designed and as-manufactured geometry, in particular that of the pore. A spider web analog is introduced, built out of Ti-6Al-4V powder via SLM, and morphologically characterized. Results from error analysis of strut thickness are used to generate thickness compensation relations expressed as a function of the angle each strut forms with the build plane. The scheme is applied to fabricate a set of three-dimensional porous biomaterials, which are morphologically characterized via micro computed tomography, mechanically tested, and numerically analyzed. For strut thickness, the results show that the largest mismatch (60% from the design), occurring for horizontal members, reduces to 3.1% upon application of the compensation. Similar improvement is observed also for the mechanical properties, a factor that further corroborates the merit of the design-oriented scheme here introduced.
Progress in The Semantic Analysis of Scientific Code
NASA Technical Reports Server (NTRS)
Stewart, Mark
2000-01-01
This paper concerns a procedure that analyzes aspects of the meaning or semantics of scientific and engineering code. This procedure involves taking a user's existing code, adding semantic declarations for some primitive variables, and parsing this annotated code using multiple, independent expert parsers. These semantic parsers encode domain knowledge and recognize formulae in different disciplines including physics, numerical methods, mathematics, and geometry. The parsers will automatically recognize and document some static, semantic concepts and help locate some program semantic errors. These techniques may apply to a wider range of scientific codes. If so, the techniques could reduce the time, risk, and effort required to develop and modify scientific codes.
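A toy illustration of the idea, assuming dimension declarations for primitive variables: dimensions are stored as exponents of (kg, m, s) and an assignment is flagged when they fail to balance. Real semantic parsers recognize far richer formula patterns than this.

```python
# Toy semantic check: physical dimensions as (kg, m, s) exponent tuples
# attached to primitive variables; a product assignment is flagged when
# dimensions do not balance.
DIM = {"F": (1, 1, -2), "m": (1, 0, 0), "a": (0, 1, -2), "v": (0, 1, -1)}

def product_dim(*names):
    return tuple(sum(d) for d in zip(*(DIM[n] for n in names)))

def check_assignment(lhs, rhs_factors):
    ok = DIM[lhs] == product_dim(*rhs_factors)
    print(f"{lhs} = {'*'.join(rhs_factors)}: "
          f"{'dimensions balance' if ok else 'SEMANTIC ERROR'}")

check_assignment("F", ["m", "a"])    # kg*m/s^2 == kg * m/s^2 -> balances
check_assignment("F", ["m", "v"])    # kg*m/s   != kg*m/s^2   -> flagged
```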
Low-noise, high-strength, spiral-bevel gears for helicopter transmissions
NASA Technical Reports Server (NTRS)
Lewicki, David G.; Handschuh, Robert F.; Henry, Zachary S.; Litvin, Faydor L.
1993-01-01
Improvements in spiral-bevel gear design were investigated to support the Army/NASA Advanced Rotorcraft Transmission program. Program objectives were to reduce weight by 25 percent, reduce noise by 10 dB, and increase life to 5000 hr mean-time-between-removal. To help meet these goals, advanced-design spiral-bevel gears were tested in an OH-58D helicopter transmission using the NASA 500-hp Helicopter Transmission Test Stand. Three different gear designs tested included: (1) the current design of the OH-58D transmission except gear material X-53 instead of AISI 9310; (2) a higher-strength design the same as the current but with a full fillet radius to reduce gear tooth bending stress (and thus, weight); and (3) a lower-noise design the same as the high-strength but with modified tooth geometry to reduce transmission error and noise. Noise, vibration, and tooth strain tests were performed and significant gear stress and noise reductions were achieved.
ERIC Educational Resources Information Center
Lewis, Virginia Vimpeny
2011-01-01
Number Concepts; Measurement; Geometry; Probability; Statistics; and Patterns, Functions and Algebra. Procedural Errors were further categorized into the following content categories: Computation; Measurement; Statistics; and Patterns, Functions, and Algebra. The results of the analysis showed the main sources of error for 6th, 7th, and 8th…
Addressing Misconceptions in Geometry through Written Error Analyses
ERIC Educational Resources Information Center
Kembitzky, Kimberle A.
2009-01-01
This study examined the improvement of students' comprehension of geometric concepts through analytical writing about their own misconceptions using a reflective tool called an ERNIe (acronym for ERror aNalysIs). The purpose of this study was to determine whether the ERNIe process could be used to correct geometric misconceptions, as well as how…
Thierry-Chef, I; Pernicka, F; Marshall, M; Cardis, E; Andreo, P
2002-01-01
An international collaborative study of cancer risk among workers in the nuclear industry is under way to estimate directly the cancer risk following protracted low-dose exposure to ionising radiation. An essential aspect of this study is the characterisation and quantification of errors in available dose estimates. One major source of errors is dosemeter response in workplace exposure conditions. Little information is available on energy and geometry response for most of the 124 different dosemeters used historically in participating facilities. Experiments were therefore set up to assess this, using 10 dosemeter types representative of those used over time. Results show that the largest errors were associated with the response of early dosemeters to low-energy photon radiation. Good response was found with modern dosemeters, even at low energy. These results are being used to estimate errors in the response for each dosemeter type used in the participating facilities, so that these can be taken into account in the estimates of cancer risk.
Morse Monte Carlo Radiation Transport Code System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Emmett, M.B.
1975-02-01
The report contains sections containing descriptions of the MORSE and PICTURE codes, input descriptions, sample problems, derivations of the physical equations, and explanations of the various error messages. The MORSE code is a multipurpose neutron and gamma-ray transport Monte Carlo code. Time dependence for both shielding and criticality problems is provided. General three-dimensional geometry may be used with an albedo option available at any material surface. The PICTURE code provides aid in preparing correct input data for the combinatorial geometry package CG. It provides a printed view of arbitrary two-dimensional slices through the geometry. By inspecting these pictures one may determine if the geometry specified by the input cards is indeed the desired geometry. 23 refs. (WRF)
Global Warming Estimation from MSU: Correction for Drift and Calibration Errors
NASA Technical Reports Server (NTRS)
Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.
2000-01-01
Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz), made in the nadir direction from sequential, sun-synchronous, polar-orbiting NOAA morning satellites (NOAA 6, 10 and 12 that have approximately 7am/7pm orbital geometry) and afternoon satellites (NOAA 7, 9, 11 and 14 that have approximately 2am/2pm orbital geometry) are analyzed in this study to derive the global temperature trend from 1980 to 1998. In order to remove the discontinuities between the data of the successive satellites and to get a continuous time series, first we have used the shortest possible time record of each satellite. In this way we get a preliminary estimate of the global temperature trend of 0.21 K/decade. However, this estimate is affected by systematic time-dependent errors. One such error is the instrument calibration error eo. This error can be inferred whenever there are overlapping measurements made by two satellites over an extended period of time. From the available successive satellite data we have taken the longest possible time record of each satellite to form the time series during the period 1980 to 1998 and to assess this error eo. We find eo can decrease the global temperature trend by approximately 0.07 K/decade. In addition there are systematic time-dependent errors ed and ec present in the data that are introduced by the drift in the satellite orbital geometry. ed arises from the diurnal cycle in temperature and ec is the drift-related change in the calibration of the MSU. In order to analyze the nature of these drift-related errors the multi-satellite Ch 2 data set is partitioned into am and pm subsets to create two independent time series. The error ed can be assessed in the am and pm data of Ch 2 on land and can be eliminated. Observations made in the MSU Ch 1 (50.3 GHz) support this approach. The error ec is obvious only in the difference between the pm and am observations of Ch 2 over the ocean. We have followed two different paths to assess the impact of the error ec on the global temperature trend. In one path the entire error ec is placed in the am data while in the other it is placed in the pm data. The global temperature trend is increased or decreased by approximately 0.03 K/decade depending upon this placement. Taking into account all random errors and systematic errors our analysis of MSU observations leads us to conclude that a conservative estimate of the global warming is 0.11 (+/-) 0.04 K/decade during 1980 to 1998.
NASA Technical Reports Server (NTRS)
Bayless, E. O.; Lawless, K. G.; Kurgan, C.; Nunes, A. C.; Graham, B. F.; Hoffman, D.; Jones, C. S.; Shepard, R.
1993-01-01
Fully automated variable-polarity plasma arc (VPPA) welding system developed at Marshall Space Flight Center. The system eliminates defects caused by human error. It integrates many sensors with a mathematical model of the weld and computer-controlled welding equipment. Sensors provide real-time information on the geometry of the weld bead, the location of the weld joint, and wire-feed entry. The mathematical model relates the geometry of the weld to critical parameters of the welding process.
Geometry of Quantum Computation with Qudits
Luo, Ming-Xing; Chen, Xiu-Bo; Yang, Yi-Xian; Wang, Xiaojun
2014-01-01
The circuit complexity of quantum qubit system evolution is a primitive problem in quantum computation and has been discussed widely. We investigate this problem for qudit systems. Using Riemannian geometry, the optimal quantum circuits are shown to be equivalent to geodesic evolutions in a specially curved parametrization of SU(d^n), and the quantum circuit complexity depends explicitly on a controllable approximation error bound. PMID:24509710
Effect of wafer geometry on lithography chucking processes
NASA Astrophysics Data System (ADS)
Turner, Kevin T.; Sinha, Jaydeep K.
2015-03-01
Wafer flatness during exposure in lithography tools is critical and is becoming more important as feature sizes in devices shrink. While chucks are used to support and flatten the wafer during exposure, it is essential that wafer geometry be controlled as well. Thickness variations of the wafer and high-frequency wafer shape components can lead to poor flatness of the chucked wafer and ultimately patterning problems, such as defocus errors. The objective of this work is to understand how process-induced wafer geometry, resulting from deposited films with non-uniform stress, can lead to high-frequency wafer shape variations that prevent complete chucking in lithography scanners. In this paper, we discuss both the acceptable limits of wafer shape that permit complete chucking to be achieved, and how non-uniform residual stresses in films, either due to patterning or process non-uniformity, can induce high spatial frequency wafer shape components that prevent chucking. This paper describes mechanics models that relate non-uniform film stress to wafer shape and presents results for two example cases. The models and results can be used as a basis for establishing control strategies for managing process-induced wafer geometry in order to avoid wafer flatness-induced errors in lithography processes.
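The film-stress/wafer-shape mechanics referenced above is often anchored by Stoney's equation, kappa = 6*sigma_f*t_f/(M_s*t_s^2) with biaxial modulus M_s = E_s/(1 - nu_s); non-uniform film stress then makes the curvature, and hence the shape, spatially varying. The numbers below are illustrative values for a silicon wafer, not taken from the paper.

```python
# Stoney's equation: uniform film stress -> uniform wafer curvature.
# Spatially varying sigma_f produces the high-frequency shape components
# discussed above. Material values are illustrative for (100) silicon.
E_SI = 130e9        # Pa, Young's modulus (illustrative)
NU_SI = 0.28        # Poisson's ratio

def stoney_curvature(sigma_f, t_f, t_s, E_s=E_SI, nu_s=NU_SI):
    M_s = E_s / (1 - nu_s)                 # biaxial modulus of the substrate
    return 6 * sigma_f * t_f / (M_s * t_s**2)

kappa = stoney_curvature(sigma_f=200e6, t_f=500e-9, t_s=775e-6)   # 1/m
bow_um = kappa * 0.15**2 / 2 * 1e6        # bow over a 300 mm wafer radius
print(f"curvature {kappa:.2e} 1/m -> bow over 300 mm wafer: {bow_um:.1f} um")
```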
Modeling of Wake-vortex Aircraft Encounters. Appendix B
NASA Technical Reports Server (NTRS)
Smith, Sonya T.
1999-01-01
There are more people passing through the world's airports today than at any other time in history. With this increase in civil transport, airports are becoming capacity limited. In order to increase capacity and thus meet the demands of the flying public, the number of runways and number of flights per runway must be increased. In response to the demand, the National Aeronautics and Space Administration (NASA), in conjunction with the Federal Aviation Administration (FAA), airport operators, and the airline industry are taking steps to increase airport capacity without jeopardizing safety. Increasing the production per runway increases the likelihood that an aircraft will encounter the trailing wake-vortex of another aircraft. The hazard of a wake-vortex encounter is that heavy load aircraft can produce high intensity wake turbulence, through the development of its wing-tip vortices. A smaller aircraft following in the wake of the heavy load aircraft will experience redistribution of its aerodynamic load. This creates a safety hazard for the smaller aircraft. Understanding this load redistribution is of great importance, particularly during landing and take-off. In this research wake-vortex effects on an encountering 10% scale model of the B737-100 aircraft are modeled using both strip theory and vortex-lattice modeling methods. The models are then compared to wind tunnel data that was taken in the 30ft x 60ft wind tunnel at NASA Langley Research Center (LaRC). Comparisons are made to determine if the models will have acceptable accuracy when parts of the geometry are removed, such as the horizontal stabilizer and the vertical tail. A sensitivity analysis was also performed to observe how accurately the models could match the experimental data if there was a 10% error in the circulation strength. It was determined that both models show accurate results when the wing, horizontal stabilizer, and vertical tail were a part of the geometry. When the horizontal stabilizer and vertical tail were removed there were difficulties modeling the sideforce coefficient and pitching moment. With the removal of only the vertical tail unacceptable errors occurred when modeling the sideforce coefficient and yawing moment. Lift could not be modeled with either the full geometry or the reduced geometry attempts.
NASA Astrophysics Data System (ADS)
Klapa, Przemyslaw; Mitka, Bartosz; Zygmunt, Mariusz
2017-12-01
The capability of obtaining a multimillion point cloud in a very short time has made Terrestrial Laser Scanning (TLS) a widely used tool in many fields of science and technology. TLS accuracy matches traditional devices used in land surveying (tacheometry, GNSS - RTK), but like any measurement it is burdened with error, which affects the precise identification of objects based on their image in the form of a point cloud. A point's coordinates are determined indirectly by measuring the angles and calculating the time of travel of the electromagnetic wave. Each such component has a measurement error which propagates into the final result. The XYZ coordinates of a measured point are therefore determined with some uncertainty, and the accuracy of these coordinates decreases as the distance to the instrument increases. The paper presents the results of an examination of the geometrical stability of a point cloud obtained by means of a terrestrial laser scanner, and an accuracy evaluation of solids determined using the cloud. A Leica P40 scanner and two different arrangements of measuring points were used in the tests. The first concept involved placing a few balls in the field and then scanning them from various sides at similar distances. The second part of the measurement involved placing balls and scanning them a few times from one side but at varying distances from the instrument to the object. Each measurement encompassed a scan of the object with automatic determination of its position and geometry. The desk study involved a semiautomatic fitting of solids, measurement of their geometrical elements, and comparison of the parameters that determine their geometry and location in space. The differences in the measures of the geometrical elements of the balls and the translation vectors of the solids' centres indicate geometrical changes of the point cloud depending on the scanning distance and parameters. The results indicate changes in the geometry of scanned objects depending on the point cloud quality and the distance from the measuring instrument. Varying geometrical dimensions of the same element also suggest that the point cloud does not keep a stable geometry of measured objects.
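The distance dependence described above can be illustrated with first-order error propagation through the polar-to-Cartesian conversion of a TLS measurement. The noise figures below are generic instrument-grade assumptions, not Leica P40 specifications.

```python
import numpy as np

# x = r cosV cosH, y = r cosV sinH, z = r sinV; propagate range and angle
# noise through the Jacobian to get the 3D point covariance.

def tls_xyz_sigma(r, H, V, sig_r, sig_ang):
    """Position and covariance for range r [m], horizontal angle H and
    vertical angle V [rad], range noise sig_r [m], angle noise sig_ang [rad]."""
    xyz = np.array([r * np.cos(V) * np.cos(H),
                    r * np.cos(V) * np.sin(H),
                    r * np.sin(V)])
    # Jacobian of (x, y, z) with respect to (r, H, V)
    J = np.array([
        [np.cos(V) * np.cos(H), -r * np.cos(V) * np.sin(H), -r * np.sin(V) * np.cos(H)],
        [np.cos(V) * np.sin(H),  r * np.cos(V) * np.cos(H), -r * np.sin(V) * np.sin(H)],
        [np.sin(V),              0.0,                        r * np.cos(V)],
    ])
    C_obs = np.diag([sig_r**2, sig_ang**2, sig_ang**2])
    return xyz, J @ C_obs @ J.T

for r in (10.0, 50.0, 120.0):          # scanning distances [m]
    _, C = tls_xyz_sigma(r, np.radians(30), np.radians(10),
                         sig_r=1.2e-3, sig_ang=np.radians(8 / 3600))  # ~8" angle noise
    print(f"r = {r:5.1f} m -> 3D point sigma = {np.sqrt(np.trace(C))*1e3:.2f} mm")
```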
GNSS triple-frequency geometry-free and ionosphere-free track-to-track ambiguities
NASA Astrophysics Data System (ADS)
Wang, Kan; Rothacher, Markus
2015-06-01
During the last few years, more and more GNSS satellites have become available transmitting signals on three or even more frequencies. Examples are the GPS Block IIF and the Galileo In-Orbit Validation (IOV) satellites. Various investigations have been performed to make use of the increasing number of frequencies and to find a compromise between eliminating different error sources and minimizing the noise level, including investigations of the triple-frequency geometry-free (GF) and ionosphere-free (IF) linear combinations, which eliminate all the geometry-related errors and the first-order term of the ionospheric delays. In contrast to double-difference GF and IF ambiguity resolution, the resolution of the so-called track-to-track GF and IF ambiguities between two tracks of a satellite observed by the same station requires only one receiver and one satellite. Most of the remaining errors, like receiver and satellite delays (electronics, cables, etc.), are eliminated, if they are not changing rapidly in time, and the noise level is reduced theoretically by a factor of √2 compared to double-differences. This paper presents first results concerning track-to-track ambiguity resolution using triple-frequency GF and IF linear combinations based on data from the Multi-GNSS Experiment (MGEX) from April 29 to May 9, 2012 and from December 23 to December 29, 2012. This includes triple-frequency phase and code observations with different combinations of receiver tracking modes. The results show that it is possible to resolve the combined track-to-track ambiguities of the best two triple-frequency GF and IF linear combinations for the Galileo frequency triplet E1, E5b and E5a, with more than 99.6% of the fractional ambiguities for the best linear combination located within ±0.03 cycles and more than 98.8% of the fractional ambiguities for the second-best linear combination within ±0.2 cycles. The fractional parts of the ambiguities for the GPS frequency triplet L1, L2 and L5 are more disturbed by errors such as the uncalibrated Phase Center Offsets (PCOs) and Phase Center Variations (PCVs), which have not been considered. The best two GF and IF linear combinations between tracks are helpful for detecting problems in data and receivers. Furthermore, resolving the track-to-track ambiguities helps to connect single-receiver ambiguities on the normal-equation level and to improve ambiguity resolution.
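The defining conditions of such a combination are easy to reproduce: with phases expressed in range units, geometry cancels when the coefficients sum to zero, and the first-order ionosphere (proportional to 1/f²) cancels when the frequency-weighted sum vanishes. The sketch below solves these two conditions for the Galileo E1/E5b/E5a triplet; it illustrates only the combination's definition, not the paper's ambiguity-resolution procedure.

```python
import numpy as np

# Triple-frequency geometry-free, ionosphere-free (GF-IF) combination:
#   sum(c_i) = 0            -> geometry (and clocks, troposphere) cancels
#   sum(c_i / f_i^2) = 0    -> first-order ionosphere cancels
# The 1D null space fixes the coefficients up to an arbitrary scale.

f = np.array([1575.42e6, 1207.14e6, 1176.45e6])   # Galileo E1, E5b, E5a [Hz]

A = np.vstack([np.ones(3),       # geometry-free condition
               1.0 / f**2])      # first-order ionosphere-free condition

_, _, Vt = np.linalg.svd(A)      # null space = last right singular vector
c = Vt[-1]
c /= np.max(np.abs(c))           # arbitrary normalisation

print("GF-IF coefficients:", c)
print("residuals of both conditions:", A @ c)   # both ~ 0
```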
Dimensional control of die castings
NASA Astrophysics Data System (ADS)
Karve, Aniruddha Ajit
The demand for net shape die castings, which require little or no machining, is steadily increasing. Stringent customer requirements are forcing die casters to deliver high quality castings in increasingly short lead times. Dimensional conformance to customer specifications is an inherent part of die casting quality. The dimensional attributes of a die casting are essentially dependent upon many factors, the quality of the die and the degree of control over the process variables being the two major sources of dimensional error in die castings. This study focused on investigating the nature and the causes of dimensional error in die castings. The two major components of dimensional error, i.e., dimensional variability and die allowance, were studied. The major effort of this study was to qualitatively and quantitatively study the effects of casting geometry and process variables on die casting dimensional variability and die allowance. This was accomplished by detailed dimensional data collection at production die casting sites. Robust feature characterization schemes were developed to describe complex casting geometry in quantitative terms. Empirical modeling was utilized to quantify the effects of the casting variables on dimensional variability and die allowance for die casting features. A number of casting geometry and process variables were found to affect dimensional variability in die castings. The dimensional variability was evaluated by comparisons with current published dimensional tolerance standards. The casting geometry was found to play a significant role in influencing the die allowance of the features measured. The predictive models developed for dimensional variability and die allowance were evaluated to test their effectiveness. Finally, the relative impact of all the components of dimensional error in die castings was put into perspective, and general guidelines for effective dimensional control in the die casting plant were laid out. The results of this study will contribute to the enhancement of dimensional quality and lead-time compression in the die casting industry, thus making it competitive with other net shape manufacturing processes.
Reconstruction of lightning channel geometry by localizing thunder sources
NASA Astrophysics Data System (ADS)
Bodhika, J. A. P.; Dharmarathna, W. G. D.; Fernando, Mahendra; Cooray, Vernon
2013-09-01
Thunder is generated as a result of a shock wave created by the sudden expansion of air in the lightning channel due to high temperature variations. Even though the highest amplitudes of thunder signatures are generated at the return stroke stage, thunder signals generated at other events, such as preliminary breakdown pulses, can also have amplitudes large enough to be recorded with a sensitive system. In this study, an attempt was made to reconstruct the lightning channel geometry of cloud and ground flashes by locating the temporal and spatial variations of thunder sources. Six lightning flashes were reconstructed using the recorded thunder signatures. Possible effects due to atmospheric conditions were neglected. Numerical calculations suggest that the time resolution of the recorded signal and a 10 m s⁻¹ error in the speed of sound lead to 2% and 3% errors, respectively, in the calculated coordinates. Reconstructed channel geometries for cloud and ground flashes agreed with the visual observations. Results suggest that the lightning channel can be successfully reconstructed using this technique.
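A minimal sketch of the underlying localization step, under the paper's constant-sound-speed assumption: solve for the source position and emission time that best fit the arrival times at a small sensor array. The array layout, noise level, and the use of scipy's least_squares solver are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

c = 343.0                                        # speed of sound [m/s] (assumed constant)
sensors = np.array([[0, 0, 0], [100, 0, 2], [0, 120, 1],
                    [90, 110, 0], [50, 60, 30]], dtype=float)

# synthetic "truth" and noisy arrival times for the demonstration
rng = np.random.default_rng(0)
true_src, true_t0 = np.array([400.0, 250.0, 1200.0]), 0.5
t_obs = true_t0 + np.linalg.norm(sensors - true_src, axis=1) / c
t_obs += rng.normal(0, 1e-4, t_obs.size)         # timing noise [s]

def residuals(p):
    """Misfit t_i - t0 - |x_i - x0| / c for source x0 = p[:3], time t0 = p[3]."""
    return t_obs - p[3] - np.linalg.norm(sensors - p[:3], axis=1) / c

sol = least_squares(residuals, x0=[100.0, 100.0, 500.0, 0.0])
print("estimated source:", sol.x[:3].round(1), " t0:", round(sol.x[3], 4))
```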
Evaluation of LANDSAT-4 TM and MSS ground geometry performance without ground control
NASA Technical Reports Server (NTRS)
Bryant, N. A.; Zobrist, A.
1983-01-01
LANDSAT thematic mapper P-data of Washington, D.C., Harrisburg, PA, and Salton Sea, CA were analyzed to determine magnitudes and causes of error in the geometric conformity of the data to known earth-surface geometry. Several tests of data geometry were performed. Intra-band and inter-band correlation and registration were investigated, exclusive of map-based ground truth. Specifically, the magnitudes and statistical trends of pixel offsets between a single band's mirror scans (due to processing procedures) were computed, and the inter-band integrity of registration was analyzed.
NASA Technical Reports Server (NTRS)
Martin, D. L.; Perry, M. J.
1994-01-01
Water-leaving radiances and phytoplankton pigment concentrations are calculated from coastal zone color scanner (CZCS) radiance measurements by removing atmospheric Rayleigh and aerosol radiances from the total radiance signal measured at the satellite. The single greatest source of error in CZCS atmospheric correction algorithms is the assumption that these Rayleigh and aerosol radiances are separable. Multiple-scattering interactions between Rayleigh and aerosol components cause systematic errors in calculated aerosol radiances, and the magnitude of these errors is dependent on aerosol type and optical depth and on satellite viewing geometry. A technique was developed which extends the results of previous radiative transfer modeling by Gordon and Castano to predict the magnitude of these systematic errors for simulated CZCS orbital passes in which the ocean is viewed through a modeled, physically realistic atmosphere. The simulated image mathematically duplicates the exact satellite, Sun, and pixel locations of an actual CZCS image. Errors in the aerosol radiance at 443 nm are calculated for a range of aerosol optical depths. When pixels in the simulated image exceed an error threshold, the corresponding pixels in the actual CZCS image are flagged and excluded from further analysis or from use in image compositing or compilation of pigment concentration databases. Studies based on time series analyses or compositing of CZCS imagery which do not address Rayleigh-aerosol multiple scattering should be interpreted cautiously, since the fundamental assumption used in their atmospheric correction algorithm is flawed.
Asymmetric soft-error resistant memory
NASA Technical Reports Server (NTRS)
Buehler, Martin G. (Inventor); Perlman, Marvin (Inventor)
1991-01-01
A memory system is provided, of the type that includes an error-correcting circuit that detects and corrects errors, that more efficiently utilizes the capacity of a memory formed of groups of binary cells whose states can be inadvertently switched by ionizing radiation. Each memory cell has an asymmetric geometry, so that ionizing radiation causes a significantly greater probability of errors in one state than in the opposite state (e.g., an erroneous switch from '1' to '0' is far more likely than a switch from '0' to '1'). An asymmetric error-correcting coding circuit can be used with the asymmetric memory cells, which requires fewer bits than an efficient symmetric error-correcting code.
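A classic construction for such asymmetric (unidirectional) errors is the Berger code, in which the check symbol counts the zeros of the data word; any error pattern that only flips bits in one direction is detectable. The sketch below illustrates that general idea only; it is not the specific circuit described in the patent.

```python
# Berger code: check symbol = number of 0s in the data word. A unidirectional
# error (e.g. radiation-induced 1 -> 0 upsets) changes the data's zero-count
# and the stored check in ways that can never agree, so it is always detected.

def berger_encode(bits):
    k = len(bits)
    check_len = k.bit_length()                 # enough bits to count 0..k zeros
    check = [int(b) for b in format(bits.count(0), f"0{check_len}b")]
    return bits + check

def berger_check(word, k):
    data, check = word[:k], word[k:]
    return data.count(0) == int("".join(map(str, check)), 2)

data = [1, 0, 1, 1, 0, 1, 1, 1]
word = berger_encode(data)

corrupted = word.copy()
corrupted[0] = 0                               # a 1 -> 0 upset in the data bits
print(berger_check(word, 8))                   # True: consistent
print(berger_check(corrupted, 8))              # False: unidirectional error detected
```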
NASA Astrophysics Data System (ADS)
Gao, X.; Li, T.; Zhang, X.; Geng, X.
2018-04-01
In this paper, we propose a stochastic model of InSAR height measurement derived from the interferometric geometry. The model directly describes the relationship between baseline error and height measurement error. A simulation analysis based on TanDEM-X parameters was then implemented to quantitatively evaluate the influence of baseline error on height measurement. Furthermore, a full simulation-based validation of the InSAR stochastic model was performed on the basis of SRTM DEM data and TanDEM-X parameters. The spatial distribution characteristics and error propagation rule of InSAR height measurement were fully evaluated.
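For orientation, the textbook first-order relation between a parallel-baseline error and the resulting height error, dh ≈ (R sinθ / B⊥) dB∥, can be evaluated with TanDEM-X-like numbers. The values below are assumptions for illustration, not the paper's stochastic model.

```python
import numpy as np

# First-order InSAR sensitivities (textbook relations, bistatic pair):
#   height of ambiguity:  h_amb = lambda * R * sin(theta) / (2 * B_perp)
#   baseline-error effect: dh ~ (R * sin(theta) / B_perp) * dB_parallel

lam = 0.031                     # X-band wavelength [m]
R = 600e3                       # slant range [m] (illustrative)
theta = np.radians(35.0)        # incidence angle
B_perp = 200.0                  # perpendicular baseline [m]

h_amb = lam * R * np.sin(theta) / (2 * B_perp)
for dB_par in (1e-3, 5e-3, 1e-2):                 # baseline error [m]
    dh = R * np.sin(theta) / B_perp * dB_par
    print(f"dB_par = {dB_par*1e3:4.1f} mm -> height error ~ {dh:5.2f} m "
          f"(h_amb = {h_amb:.1f} m)")
```

The millimetre-level baseline knowledge this implies is consistent with the stringent baseline-determination requirements usually quoted for TanDEM-X-class missions.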
Computations of Aerodynamic Performance Databases Using Output-Based Refinement
NASA Technical Reports Server (NTRS)
Nemec, Marian; Aftosmis, Michael J.
2009-01-01
Objectives: handle complex-geometry problems; control discretization errors via solution-adaptive mesh refinement; and focus on aerodynamic databases for parametric and optimization studies, which demand (1) accuracy: satisfy prescribed error bounds; (2) robustness and speed: may require over 10^5 mesh generations; and (3) automation: avoid user supervision, obtain "expert meshes" independent of user skill, and run every case adaptively in production settings.
NASA Astrophysics Data System (ADS)
Ostrowski, Ziemowit; Rojczyk, Marek
2017-11-01
The energy balance and heat exchange for a newborn baby in a radiant warmer environment are considered. The present study was performed to assess the dry heat loss from an infant in a radiant warmer, using a copper cast anthropomorphic thermal manikin and a controlled climate chamber laboratory setup. The total body dry heat losses were measured for varying manikin surface temperatures (nine levels between 32.5 °C and 40.1 °C) and ambient air temperatures (five levels between 23.5 °C and 29.7 °C). Radiant heat losses were estimated based on measured climate chamber wall temperatures. After subtracting the radiant part, the resulting convective heat losses were compared with computed ones (based on Nu correlations for common geometries). The simplified geometry of a newborn baby was represented as: (a) a single cylinder and (b) a weighted sum of 5 cylinders and a sphere. The predicted values significantly overestimate the measured ones: by 28.8% (SD 23.5%) for (a) and by 40.9% (SD 25.2%) for (b). This showed that the use of adopted general-purpose correlations for approximating the convective heat losses of a newborn baby can lead to substantial errors. Hence, a new Nu correlating equation is proposed. The mean error introduced by the proposed correlation was reduced to 1.4% (SD 11.97%), i.e., no significant overestimation. The thermal manikin appears to provide a precise method for the noninvasive assessment of thermal conditions in neonatal care.
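The kind of general-purpose correlation being tested can be sketched as follows, using the Churchill-Chu equation for natural convection from a horizontal cylinder standing in for one body segment. The air properties and cylinder dimensions are rough assumptions, not the study's values.

```python
import numpy as np

# Natural convection from a horizontal cylinder, Churchill-Chu correlation:
# Nu = {0.60 + 0.387 Ra^(1/6) / [1 + (0.559/Pr)^(9/16)]^(8/27)}^2

g, beta = 9.81, 1 / 303.0                 # gravity; ideal-gas expansion coeff. [1/K]
nu, alpha, k = 1.6e-5, 2.2e-5, 0.0265     # air viscosity, diffusivity, conductivity (SI)
Pr = nu / alpha

def h_churchill_chu(D, dT):
    """Convective coefficient h [W/m^2 K] for diameter D [m], temp. diff. dT [K]."""
    Ra = g * beta * dT * D**3 / (nu * alpha)
    Nu = (0.60 + 0.387 * Ra**(1 / 6) /
          (1 + (0.559 / Pr)**(9 / 16))**(8 / 27))**2
    return Nu * k / D

D, L, dT = 0.10, 0.45, 7.0                # trunk-like cylinder; surface-air dT [K]
h = h_churchill_chu(D, dT)
Q = h * np.pi * D * L * dT                # dry convective loss from the segment
print(f"h = {h:.2f} W/m^2K, Q = {Q:.2f} W")
```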
Fully automatic segmentation of arbitrarily shaped fiducial markers in cone-beam CT projections
NASA Astrophysics Data System (ADS)
Bertholet, J.; Wan, H.; Toftegaard, J.; Schmidt, M. L.; Chotard, F.; Parikh, P. J.; Poulsen, P. R.
2017-02-01
Radio-opaque fiducial markers of different shapes are often implanted in or near abdominal or thoracic tumors to act as surrogates for the tumor position during radiotherapy. They can be used for real-time treatment adaptation, but this requires a robust, automatic segmentation method able to handle arbitrarily shaped markers in a rotational imaging geometry such as cone-beam computed tomography (CBCT) projection images and intra-treatment images. In this study, we propose a fully automatic dynamic programming (DP) assisted template-based (TB) segmentation method. Based on an initial DP segmentation, the DPTB algorithm generates and uses a 3D marker model to create 2D templates at any projection angle. The 2D templates are used to segment the marker position as the position with highest normalized cross-correlation in a search area centered at the DP segmented position. The accuracy of the DP algorithm and the new DPTB algorithm was quantified as the 2D segmentation error (pixels) compared to a manual ground truth segmentation for 97 markers in the projection images of CBCT scans of 40 patients. Also the fraction of wrong segmentations, defined as 2D errors larger than 5 pixels, was calculated. The mean 2D segmentation error of DP was reduced from 4.1 pixels to 3.0 pixels by DPTB, while the fraction of wrong segmentations was reduced from 17.4% to 6.8%. DPTB allowed rejection of uncertain segmentations as deemed by a low normalized cross-correlation coefficient and contrast-to-noise ratio. For a rejection rate of 9.97%, the sensitivity in detecting wrong segmentations was 67% and the specificity was 94%. The accepted segmentations had a mean segmentation error of 1.8 pixels and 2.5% wrong segmentations.
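The core template-matching step, taking the position of highest normalized cross-correlation within a search area, can be sketched in a few lines of plain numpy. This illustrates the NCC criterion only, not the full DPTB pipeline.

```python
import numpy as np

def ncc_match(search, template):
    """Slide template over search; return (row, col) offset with highest NCC."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t**2).sum())
    best, best_pos = -np.inf, (0, 0)
    for i in range(search.shape[0] - th + 1):
        for j in range(search.shape[1] - tw + 1):
            patch = search[i:i + th, j:j + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p**2).sum()) * t_norm
            if denom == 0:
                continue
            score = (p * t).sum() / denom       # NCC in [-1, 1]
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos, best

rng = np.random.default_rng(0)
img = rng.normal(0, 1, (60, 60))
img[20:28, 30:36] += 4.0                        # a bright "marker"
pos, score = ncc_match(img, img[20:28, 30:36].copy())
print(pos, round(score, 3))                     # (20, 30) 1.0
```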
Robust simulation of buckled structures using reduced order modeling
NASA Astrophysics Data System (ADS)
Wiebe, R.; Perez, R. A.; Spottswood, S. M.
2016-09-01
Lightweight metallic structures are a mainstay in aerospace engineering. For these structures, stability, rather than strength, is often the critical limit state in design. For example, buckling of panels and stiffeners may occur during emergency high-g maneuvers, while in supersonic and hypersonic aircraft, it may be induced by thermal stresses. The longstanding solution to such challenges was to increase the sizing of the structural members, which is counter to the ever-present need to minimize weight for reasons of efficiency and performance. In this work we present some recent results in the area of reduced order modeling of post-buckled thin beams. A thorough parametric study of the response of a beam to changing harmonic loading parameters, which is useful in exposing complex phenomena and exercising numerical models, is presented. Two error metrics that use, but require no time stepping of, a (computationally expensive) truth model are also introduced. The error metrics are applied to several interesting forcing parameter cases identified from the parametric study and are shown to yield useful information about the quality of a candidate reduced order model. Parametric studies, especially when considering forcing and structural geometry parameters, coupled environments, and uncertainties, would be computationally intractable with finite element models. The goal is to make rapid simulation of complex nonlinear dynamic behavior possible for distributed systems via fast and accurate reduced order models. This ability is crucial in allowing designers to rigorously probe the robustness of their designs to account for variations in loading, structural imperfections, and other uncertainties.
Error-correcting pairs for a public-key cryptosystem
NASA Astrophysics Data System (ADS)
Pellikaan, Ruud; Márquez-Corbella, Irene
2017-06-01
Code-based Cryptography (CBC) is a powerful and promising alternative for quantum-resistant cryptography. Indeed, together with lattice-based cryptography, multivariate cryptography and hash-based cryptography, it is one of the principal available techniques for post-quantum cryptography. CBC was first introduced by McEliece, who designed one of the most efficient public-key encryption schemes, with exceptionally strong security guarantees and other desirable properties, that still resists attacks based on the Quantum Fourier Transform and Amplitude Amplification. The original proposal, which remains unbroken, was based on binary Goppa codes. Later, several families of codes were proposed in order to reduce the key size; some of these alternatives have already been broken. One of the main requirements of a code-based cryptosystem is having high-performance t-bounded decoding algorithms, which is achieved when the code has a t-error-correcting pair (ECP). Indeed, those McEliece schemes that use GRS, BCH, Goppa and algebraic geometry codes are in fact using an error-correcting pair as a secret key. That is, the security of these public-key cryptosystems is based not only on the inherent intractability of bounded distance decoding but also on the assumption that it is difficult to efficiently retrieve an error-correcting pair. In this paper, the class of codes with a t-ECP is proposed for the McEliece cryptosystem. Moreover, we study the hardness of distinguishing arbitrary codes from those having a t-error-correcting pair.
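For readers unfamiliar with the scheme, a toy McEliece instance over the [7,4] Hamming code (t = 1) shows the mechanics: the public key hides a structured code behind a random invertible matrix S and a permutation P, and the legitimate decoder unwinds them. This is purely didactic; real parameters use large Goppa codes, and the paper's t-error-correcting pairs concern the decoding machinery, which this sketch replaces with simple syndrome decoding.

```python
import numpy as np

rng = np.random.default_rng(42)

G = np.array([[1, 0, 0, 0, 1, 1, 0],          # systematic [7,4] Hamming generator
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.hstack([G[:, 4:].T, np.eye(3, dtype=int)])   # parity-check matrix

def gf2_inv(A):
    """Invert a binary matrix over GF(2) by Gauss-Jordan elimination."""
    n = A.shape[0]
    M = np.hstack([A % 2, np.eye(n, dtype=int)])
    for col in range(n):
        piv = np.nonzero(M[col:, col])[0]
        if piv.size == 0:
            raise ValueError("singular over GF(2)")
        p = piv[0] + col
        M[[col, p]] = M[[p, col]]
        for r in range(n):
            if r != col and M[r, col]:
                M[r] ^= M[col]
    return M[:, n:]

# key generation: random invertible scrambler S and permutation P
while True:
    S = rng.integers(0, 2, (4, 4))
    try:
        S_inv = gf2_inv(S)
        break
    except ValueError:
        pass
P = np.eye(7, dtype=int)[rng.permutation(7)]
G_pub = S @ G @ P % 2                         # public key

# encryption: codeword plus one random bit error (Hamming corrects t = 1)
m = np.array([1, 0, 1, 1])
e = np.zeros(7, dtype=int)
e[rng.integers(7)] = 1
c = (m @ G_pub + e) % 2

# decryption: undo P, correct the error by syndrome lookup, undo S
c_p = c @ P.T % 2                             # P^{-1} = P^T for a permutation
s = H @ c_p % 2
if s.any():
    cols = [tuple(col) for col in H.T]
    c_p[cols.index(tuple(s))] ^= 1            # flip the erroneous position
m_hat = c_p[:4] @ S_inv % 2                   # systematic code: message = first 4 bits
print("recovered:", m_hat, "ok:", bool(np.all(m_hat == m)))
```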
Concurrent prediction of muscle and tibiofemoral contact forces during treadmill gait.
Guess, Trent M; Stylianou, Antonis P; Kia, Mohammad
2014-02-01
Detailed knowledge of knee kinematics and dynamic loading is essential for improving the design and outcomes of surgical procedures, tissue engineering applications, prosthetics design, and rehabilitation. This study used publicly available data provided by the "Grand Challenge Competition to Predict in-vivo Knee Loads" for the 2013 American Society of Mechanical Engineers Summer Bioengineering Conference (Fregly et al., 2012, "Grand Challenge Competition to Predict in vivo Knee Loads," J. Orthop. Res., 30, pp. 503-513) to develop a full body, musculoskeletal model with subject specific right leg geometries that can concurrently predict muscle forces, ligament forces, and knee and ground contact forces. The model includes representation of foot/floor interactions and predicted tibiofemoral joint loads were compared to measured tibial loads for two different cycles of treadmill gait. The model used anthropometric data (height and weight) to scale the joint center locations and mass properties of a generic model and then used subject bone geometries to more accurately position the hip and ankle. The musculoskeletal model included 44 muscles on the right leg, and subject specific geometries were used to create a 12 degrees-of-freedom anatomical right knee that included both patellofemoral and tibiofemoral articulations. Tibiofemoral motion was constrained by deformable contacts defined between the tibial insert and femoral component geometries and by ligaments. Patellofemoral motion was constrained by contact between the patellar button and femoral component geometries and the patellar tendon. Shoe geometries were added to the feet, and shoe motion was constrained by contact between three shoe segments per foot and the treadmill surface. Six-axis springs constrained motion between the feet and shoe segments. Experimental motion capture data provided input to an inverse kinematics stage, and the final forward dynamics simulations tracked joint angle errors for the left leg and upper body and tracked muscle length errors for the right leg. The one cycle RMS errors between the predicted and measured tibia contact were 178 N and 168 N for the medial and lateral sides for the first gait cycle and 209 N and 228 N for the medial and lateral sides for the faster second gait cycle. One cycle RMS errors between predicted and measured ground reaction forces were 12 N, 13 N, and 65 N in the anterior-posterior, medial-lateral, and vertical directions for the first gait cycle and 43 N, 15 N, and 96 N in the anterior-posterior, medial-lateral, and vertical directions for the second gait cycle.
Ellipsoidal geometry in asteroid thermal models - The standard radiometric model
NASA Technical Reports Server (NTRS)
Brown, R. H.
1985-01-01
The major consequences of ellipsoidal geometry in an otherwise standard radiometric model for asteroids are explored. It is shown that for small deviations from spherical shape, a spherical model of the same projected area gives a reasonable approximation to the thermal flux from an ellipsoidal body. It is suggested that large departures from spherical shape require that some correction be made for geometry. Systematic differences in the radii of asteroids derived radiometrically at 10 and 20 microns may result partly from nonspherical geometry. It is also suggested that extrapolations of the rotational variation of thermal flux from a nonspherical body based solely on the change in cross-sectional area are in error.
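The geometric quantity at issue reduces to the projected area of a triaxial ellipsoid along the line of sight, for which a closed form exists. The sketch below compares it with the equal-projected-area sphere for a few viewing directions; the axis ratios are arbitrary.

```python
import numpy as np

# Projected area of a triaxial ellipsoid (semi-axes a, b, c) along unit
# direction n: A = pi * sqrt((b c n_x)^2 + (a c n_y)^2 + (a b n_z)^2).
# Thermal flux in a simple radiometric model scales roughly with A.

def projected_area(a, b, c, n):
    n = np.asarray(n, dtype=float)
    n /= np.linalg.norm(n)
    return np.pi * np.sqrt((b * c * n[0])**2 + (a * c * n[1])**2 + (a * b * n[2])**2)

a, b, c = 1.3, 1.0, 0.8               # mildly elongated shape (arbitrary units)
for n in ([1, 0, 0], [0, 0, 1], [1, 1, 1]):
    A = projected_area(a, b, c, n)
    r_eq = np.sqrt(A / np.pi)         # radius of the equal-projected-area sphere
    print(f"view {n}: A = {A:.3f}, equivalent radius = {r_eq:.3f}")
```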
Proposed Standards for Ladar Signatures
1977-04-01
[OCR fragments of the report's figure list and body text: Figure 1, BDR and LRCS geometrical parameters; Figure 2, geometry for sphere LRCS; Figure 3, mirror ... take in the following LRCS definitions. Strictly speaking it is not correct to associate the LRCS of a specular sphere with the "effective ..." Corrections due to near-field geometry or a radius of curvature on the impinging beam have been mentioned before [36]. Also, errors due to surface ...]
Fission cross section of 230Th and 232Th relative to 235U
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meadows, J. W.
1979-01-01
The fission cross sections of 230Th and 232Th were measured relative to 235U from near threshold to near 10 MeV. The weights of the thorium samples were determined by isotopic dilution. The weight of the uranium deposit was based on specific activity measurements of a 234U-235U mixture and low geometry alpha counting. Corrections were made for thermal background, loss of fragments in the deposits, neutron scattering in the detector assembly, sample geometry, sample composition and the spectrum of the neutron source. Generally the systematic errors were approx. 1%. The combined systematic and statistical errors were typically 1.5%. 17 references.
New Methods for Improved Double Circular-Arc Helical Gears
NASA Technical Reports Server (NTRS)
Litvin, Faydor L.; Lu, Jian
1997-01-01
The authors have extended the application of double circular-arc helical gears for internal gear drives. The geometry of the pinion and gear tooth surfaces has been determined. The influence of errors of alignment on the transmission errors and the shift of the bearing contact have been investigated. Application of a predesigned parabolic function for the reduction of transmission errors was proposed. Methods of grinding of the pinion-gear tooth surfaces by a disk-shaped tool and a grinding worm were proposed.
Detailed Uncertainty Analysis of the ZEM-3 Measurement System
NASA Technical Reports Server (NTRS)
Mackey, Jon; Sehirlioglu, Alp; Dynys, Fred
2014-01-01
Measurements of the Seebeck coefficient and electrical resistivity are critical to the investigation of all thermoelectric systems. It follows that the measurement uncertainty must be well understood in order to report ZT values which are accurate and trustworthy. A detailed uncertainty analysis of the ZEM-3 measurement system has been performed. The uncertainty analysis calculates error in the electrical resistivity measurement as a result of sample geometry tolerance, probe geometry tolerance, statistical error, and multi-meter uncertainty. The uncertainty on the Seebeck coefficient includes probe wire correction factors, statistical error, multi-meter uncertainty, and most importantly the cold-finger effect. The cold-finger effect plagues all potentiometric (four-probe) Seebeck measurement systems, as heat parasitically transfers through the thermocouple probes. The effect leads to an asymmetric over-estimation of the Seebeck coefficient. A thermal finite element analysis allows for quantification of the phenomenon and provides an estimate of the uncertainty of the Seebeck coefficient. The thermoelectric power factor has been found to have an uncertainty of ±9-14% at high temperature and ±9% near room temperature.
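For the power factor PF = S²/ρ, the standard propagation of the Seebeck and resistivity uncertainties is a one-liner; the numbers below are illustrative, not ZEM-3 results.

```python
import numpy as np

# PF = S^2 / rho  =>  (dPF/PF)^2 = (2 dS/S)^2 + (drho/rho)^2

S, dS = 180e-6, 9e-6            # Seebeck coefficient [V/K] and its uncertainty (~5%)
rho, drho = 1.2e-5, 0.5e-6      # resistivity [ohm m] and its uncertainty (~4%)

PF = S**2 / rho
rel = np.sqrt((2 * dS / S)**2 + (drho / rho)**2)
print(f"PF = {PF*1e3:.2f} mW m^-1 K^-2  +/- {rel*100:.1f} %")
```

Note how the factor of 2 on the Seebeck term makes the power factor uncertainty dominated by the Seebeck measurement, which is why the cold-finger effect matters so much.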
Probable errors in width distributions of sea ice leads measured along a transect
NASA Technical Reports Server (NTRS)
Key, J.; Peckham, S.
1991-01-01
The degree of error expected in the measurement of widths of sea ice leads along a single transect is examined in a probabilistic sense under assumed orientation and width distributions; both isotropic and anisotropic lead orientations are considered. Methods are developed for estimating the distribution of 'actual' widths (measured perpendicular to the local lead orientation) knowing the 'apparent' width distribution (measured along the transect), and vice versa. The distribution of errors, defined as the difference between the actual and apparent lead width, can be estimated from the two width distributions, and all moments of this distribution can be determined. The problem is illustrated with Landsat imagery and the procedure is applied to a submarine sonar transect. Results are determined for a range of geometries and indicate the importance of orientation information if data sampled along a transect are to be used for the description of lead geometries. While the application here is to sea ice leads, the methodology can be applied to measurements of any linear feature.
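A quick Monte Carlo makes the apparent-versus-actual distinction concrete: a lead of perpendicular width w crossed at angle θ appears with width w/sin θ along the transect. The exponential width distribution and the sin-weighted crossing-angle density (a transect preferentially intersects leads oriented perpendicular to it) are standard modeling assumptions here, not the paper's fitted distributions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
w_true = rng.exponential(scale=100.0, size=n)    # actual (perpendicular) widths [m]

# crossing angles for isotropic leads: intersection probability ~ sin(theta),
# so sample theta with density sin(theta)/2 on (0, pi) via theta = arccos(1 - 2u)
theta = np.arccos(1.0 - 2.0 * rng.random(n))

w_app = w_true / np.sin(theta)                   # apparent width along the transect
err = w_app - w_true

print(f"mean actual width:   {w_true.mean():7.1f} m")
print(f"mean apparent width: {w_app.mean():7.1f} m   (bias factor -> pi/2)")
print(f"median error: {np.median(err):5.1f} m, 90th percentile: {np.percentile(err, 90):6.1f} m")
```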
Optimization of the Hartmann-Shack microlens array
NASA Astrophysics Data System (ADS)
de Oliveira, Otávio Gomes; de Lima Monteiro, Davies William
2011-04-01
In this work we propose to optimize the microlens-array geometry for a Hartmann-Shack wavefront sensor. The optimization makes it possible to replace regular microlens arrays having a larger number of microlenses with arrays of fewer microlenses located at optimal sampling positions, with no increase in the reconstruction error. The goal is to propose a straightforward and widely accessible numerical method to calculate an optimized microlens array for a known aberration statistics. The optimization comprises the minimization of the wavefront reconstruction error and/or the number of necessary microlenses in the array. We numerically generate, sample and reconstruct the wavefront, and use a genetic algorithm to discover the optimal array geometry. Within an ophthalmological context, as a case study, we demonstrate that an array with only 10 suitably located microlenses can be used to produce reconstruction errors as small as those of a 36-microlens regular array. The same optimization procedure can be employed for any application where the wavefront statistics is known.
Aoyagi, Miki; Nagata, Kenji
2012-06-01
The term algebraic statistics arises from the study of probabilistic models and techniques for statistical inference using methods from algebra and geometry (Sturmfels, 2009). The purpose of our study is to consider the generalization error and stochastic complexity in learning theory by using the log-canonical threshold in algebraic geometry. Such thresholds correspond to the main term of the generalization error in Bayesian estimation, which is called a learning coefficient (Watanabe, 2001a, 2001b). The learning coefficient serves to measure the learning efficiencies in hierarchical learning models. In this letter, we consider learning coefficients for Vandermonde matrix-type singularities, by using a new approach: focusing on the generators of the ideal, which defines singularities. We give tight new bound values of learning coefficients for the Vandermonde matrix-type singularities and the explicit values with certain conditions. By applying our results, we can show the learning coefficients of three-layered neural networks and normal mixture models.
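For context, the standard singular-learning-theory asymptotics in which these learning coefficients appear (Watanabe's results, stated here in generic notation) are:

```latex
% lambda: learning coefficient (log-canonical threshold of the Kullback function),
% m: its multiplicity, n: sample size, S: empirical entropy term.
\begin{align*}
  F_n &= nS + \lambda \log n - (m-1)\log\log n + O_p(1)
      && \text{(stochastic complexity)} \\
  \mathbb{E}[G_n] &= \frac{\lambda}{n} + o\!\left(\frac{1}{n}\right)
      && \text{(generalization error)}
\end{align*}
```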
Effects of Mesh Irregularities on Accuracy of Finite-Volume Discretization Schemes
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
2012-01-01
The effects of mesh irregularities on the accuracy of unstructured node-centered finite-volume discretizations are considered. The focus is on an edge-based approach that uses unweighted least-squares gradient reconstruction with a quadratic fit. For inviscid fluxes, the discretization is nominally third-order accurate on general triangular meshes. For viscous fluxes, the scheme is an average-least-squares formulation that is nominally second-order accurate and is contrasted with a common Green-Gauss discretization scheme. Gradient errors, truncation errors, and discretization errors are separately studied according to a previously introduced comprehensive methodology. The methodology considers three classes of grids: isotropic grids in a rectangular geometry, anisotropic grids typical of adapted grids, and anisotropic grids over a curved surface typical of advancing-layer grids. The meshes within the classes range from regular to extremely irregular, including meshes with random perturbation of nodes. Recommendations are made concerning the discretization schemes that are expected to be least sensitive to mesh irregularities in applications to turbulent flows in complex geometries.
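The gradient-reconstruction building block is compact enough to sketch: a linear unweighted least-squares fit over the edge neighbours (the paper's scheme extends this to a quadratic fit).

```python
import numpy as np

# Unweighted least-squares gradient at a node: find g minimizing
# sum_i (u0 + g . (x_i - x0) - u_i)^2 over the edge neighbours i.

def lsq_gradient(x0, u0, xs, us):
    A = np.asarray(xs, dtype=float) - np.asarray(x0, dtype=float)  # neighbour offsets
    b = np.asarray(us, dtype=float) - u0
    g, *_ = np.linalg.lstsq(A, b, rcond=None)
    return g

# exact-for-linear-fields check on an irregular stencil: u = 3x - 2y + 1
x0 = (0.2, 0.1)
xs = [(1.1, 0.0), (0.0, 0.9), (-1.2, 0.3), (0.4, -0.8), (0.9, 1.1)]
u = lambda p: 3 * p[0] - 2 * p[1] + 1
print(lsq_gradient(x0, u(x0), xs, [u(p) for p in xs]))   # ~ [ 3. -2.]
```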
Simulation of wave propagation in three-dimensional random media
NASA Astrophysics Data System (ADS)
Coles, Wm. A.; Filice, J. P.; Frehlich, R. G.; Yadlowsky, M.
1995-04-01
Quantitative error analyses for the simulation of wave propagation in three-dimensional random media, when narrow angular scattering is assumed, are presented for plane-wave and spherical-wave geometry. This includes the errors that result from finite grid size, finite simulation dimensions, and the separation of the two-dimensional screens along the propagation direction. Simple error scalings are determined for power-law spectra of the random refractive indices of the media. The effects of a finite inner scale are also considered. The spatial spectra of the intensity errors are calculated and compared with the spatial spectra of the intensity itself.
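The simulation framework whose errors are analyzed here is the classic split-step (multiple phase screen) method; a minimal sketch follows, with an approximate FFT-based Kolmogorov screen synthesis and illustrative grid, wavelength, and turbulence-strength parameters.

```python
import numpy as np

N, dx = 256, 0.01            # grid points and spacing [m]
lam = 1e-6                   # wavelength [m]
dz = 1e3                     # screen separation [m]
n_screens = 10

fx = np.fft.fftfreq(N, dx)
FX, FY = np.meshgrid(fx, fx)
f2 = FX**2 + FY**2
prop = np.exp(-1j * np.pi * lam * dz * f2)       # Fresnel transfer function

def phase_screen(rng, r0=0.1):
    """Kolmogorov-like random phase screen (Fried parameter r0 [m]).
    FFT-synthesis normalisation is approximate, for illustration only."""
    df = 1.0 / (N * dx)
    psd = np.zeros_like(f2)
    m = f2 > 0
    psd[m] = 0.023 * r0**(-5 / 3) * f2[m]**(-11 / 6)
    cn = (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) * np.sqrt(psd) * df
    return np.real(np.fft.ifft2(cn)) * N**2

rng = np.random.default_rng(0)
u = np.ones((N, N), dtype=complex)               # unit-amplitude plane wave
for _ in range(n_screens):
    u *= np.exp(1j * phase_screen(rng))          # thin-screen phase kick
    u = np.fft.ifft2(np.fft.fft2(u) * prop)      # angular-spectrum step over dz

I = np.abs(u)**2
print("scintillation index:", I.var() / I.mean()**2)
```

Finite grid size, finite screen separation dz, and the outer dimensions N*dx are exactly the error sources whose scalings the paper quantifies.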
Gao, Jun; Wang, Shu-Peng; Gu, Xing-Fa; Yu, Tao; Fang, Li
2012-06-01
With the development of quantitative research using ocean color remote sensing data sets, the problem of reducing the uncertainty in the response of ocean color remote sensors to the polarization characteristics of the target has been attracting more and more attention recently. Taking MODIS as an example, the polarization distribution over the whole field of view was analyzed. For the atmospheric path radiance and the apparent radiance considering the coupling between the ocean surface and the atmosphere, the polarization distribution has a strong relation with the imaging geometry. Compared with the contribution of the polarization from the rough sea surface, the contribution from the atmosphere is dominant. Based on the polarization characteristics in the field of view, the influence of the polarization coupling error on the quality of the satellite data was studied under the assumption of different polarization sensitivities. It was found that errors due to polarization sensitivity in the field of view are lower than the water-leaving radiance only when the polarization sensitivity is less than 2%; in this case the data can meet the needs of the retrieval of water-leaving radiance products. A method of compensating for the polarization coupling error due to the atmosphere is proposed, which proved to be effective in improving the utilization of satellite data and the accuracy of the radiance measured by the remote sensor.
Correction of Rayleigh Scattering Effects in Cloud Optical Thickness Retrievals
NASA Technical Reports Server (NTRS)
Wang, Meng-Hua; King, Michael D.
1997-01-01
We present results that demonstrate the effects of Rayleigh scattering on the retrieval of cloud optical thickness at a visible wavelength (0.66 µm). The sensor-measured radiance at a visible wavelength (0.66 µm) is usually used to infer remotely the cloud optical thickness from aircraft or satellite instruments. For example, we find that without removing Rayleigh scattering effects, errors in the retrieved cloud optical thickness for a thin water cloud layer (τ = 2.0) range from 15 to 60%, depending on solar zenith angle and viewing geometry. For an optically thick cloud (τ = 10), on the other hand, errors can range from 10 to 60% for large solar zenith angles (θ0 ≥ 60°) because of enhanced Rayleigh scattering. It is therefore particularly important to correct for Rayleigh scattering contributions to the reflected signal from a cloud layer both (1) for the case of thin clouds and (2) for large solar zenith angles and all clouds. On the basis of the single-scattering approximation, we propose an iterative method for effectively removing Rayleigh scattering contributions from the measured radiance signal in cloud optical thickness retrievals. The proposed correction algorithm works very well and can easily be incorporated into any cloud retrieval algorithm. The Rayleigh correction method is applicable to cloud at any pressure, provided that the cloud-top pressure is known to within ±100 hPa. With the Rayleigh correction the errors in retrieved cloud optical thickness are usually reduced to within 3%. In cases of both thin cloud layers and thick clouds with large solar zenith angles, the errors are usually reduced by a factor of about 2 to over 10. The Rayleigh correction algorithm has been tested with simulations for realistic cloud optical and microphysical properties with different solar and viewing geometries. We apply the Rayleigh correction algorithm to the cloud optical thickness retrievals from experimental data obtained during the Atlantic Stratocumulus Transition Experiment (ASTEX) conducted near the Azores in June 1992 and compare these results to corresponding retrievals obtained using 0.88 µm. These results provide an example of the Rayleigh scattering effects on thin clouds and further test the Rayleigh correction scheme. Using a nonabsorbing near-infrared wavelength (0.88 µm) in retrieving cloud optical thickness is only applicable over oceans, however, since most land surfaces are highly reflective at 0.88 µm. Hence successful global retrievals of cloud optical thickness should remove Rayleigh scattering effects when using reflectance measurements at 0.66 µm.
Aberration caused by the errors of alignment and adjustment in reflecting telescope
NASA Astrophysics Data System (ADS)
Tan, Hui-Song
The 2-mirror Cassegrain geometry has firmly become a standard tool for modern astronomical research. The alignment and adjustment of an aplanatic (RC) Cassegrain telescope is therefore by far the most important aspect. The errors that arise in a telescope through maladjustment are discussed, and the aberrations are calculated for the 2.4 m telescope which will be mounted at Gaomeigu.
NASA Astrophysics Data System (ADS)
Ogohara, Kazunori; Takagi, Masahiro; Murakami, Shin-ya; Horinouchi, Takeshi; Yamada, Manabu; Kouyama, Toru; Hashimoto, George L.; Imamura, Takeshi; Yamamoto, Yukio; Kashimura, Hiroki; Hirata, Naru; Sato, Naoki; Yamazaki, Atsushi; Satoh, Takehiko; Iwagami, Naomoto; Taguchi, Makoto; Watanabe, Shigeto; Sato, Takao M.; Ohtsuki, Shoko; Fukuhara, Tetsuya; Futaguchi, Masahiko; Sakanoi, Takeshi; Kameda, Shingo; Sugiyama, Ko-ichiro; Ando, Hiroki; Lee, Yeon Joo; Nakamura, Masato; Suzuki, Makoto; Hirose, Chikako; Ishii, Nobuaki; Abe, Takumi
2017-12-01
We provide an overview of data products from observations by the Japanese Venus Climate Orbiter, Akatsuki, and describe the definition and content of each data-processing level. Levels 1 and 2 consist of non-calibrated and calibrated radiance (or brightness temperature), respectively, as well as geometry information (e.g., illumination angles). Level 3 data are global-grid data in the regular longitude-latitude coordinate system, produced from the contents of Level 2. Non-negligible errors in navigational data and instrumental alignment can result in serious errors in the geometry calculations. Such errors cause mismapping of the data and lead to inconsistencies between radiances and illumination angles, along with errors in cloud-motion vectors. Thus, we carefully correct the boresight pointing of each camera by fitting an ellipse to the observed Venusian limb to provide improved longitude-latitude maps for Level 3 products, if possible. The accuracy of the pointing correction is also estimated statistically by simulating observed limb distributions. The results show that our algorithm successfully corrects instrumental pointing and will enable a variety of studies on the Venusian atmosphere using Akatsuki data.
NASA Astrophysics Data System (ADS)
Ansari, Abtin; Chen, Kevin K.; Burrell, Robert R.; Egolfopoulos, Fokion N.
2018-04-01
The opposed-jet counterflow configuration is widely used to measure fundamental flame properties that are essential targets for validating chemical kinetic models. The key assumption of the counterflow configuration in laminar flame experiments is that the flow field is steady and quasi-one-dimensional. In this study, experiments and numerical simulations were carried out to investigate the behavior and controlling parameters of counterflowing isothermal air jets for various nozzle designs, Reynolds numbers, and surrounding geometries. The flow field in the jets' impingement region was analyzed in search of instabilities, asymmetries, and two-dimensional effects that can introduce errors when the data are compared with results of quasi-one-dimensional simulations. The modeling involved transient axisymmetric numerical simulations along with bifurcation analysis, which revealed that when the flow field is confined between walls, local bifurcation occurs, which in turn results in asymmetry, deviation from the one-dimensional assumption, and sensitivity of the flow-field structure to boundary conditions and surrounding geometry. Particle image velocimetry was utilized, and the results revealed that for jets of equal momenta at low Reynolds numbers of the order of 300, the flow field is asymmetric with respect to the middle plane between the nozzles even in the absence of confining walls. The asymmetry was traced to the asymmetric nozzle exit velocity profiles caused by unavoidable imperfections in the nozzle assembly. The asymmetry was not detectable at high Reynolds numbers of the order of 1000, due to the reduced sensitivity of the flow field to boundary conditions. The cases investigated computationally covered a wide range of Reynolds numbers to identify designs that are minimally affected by errors in the experimental procedures or manufacturing imperfections, and the simulation results were used to identify conditions that best conform to the assumptions of quasi-one-dimensional modeling.
Design of forging process variables under uncertainties
NASA Astrophysics Data System (ADS)
Repalle, Jalaja; Grandhi, Ramana V.
2005-02-01
Forging is a complex nonlinear process that is vulnerable to various manufacturing anomalies, such as variations in billet geometry, billet/die temperatures, material properties, and workpiece and forging equipment positional errors. A combination of these uncertainties could induce heavy manufacturing losses through premature die failure, final part geometric distortion, and reduced productivity. Identifying, quantifying, and controlling the uncertainties will reduce variability risk in a manufacturing environment, which will minimize the overall production cost. In this article, various uncertainties that affect the forging process are identified, and their cumulative effect on the forging tool life is evaluated. Because the forging process simulation is time-consuming, a response surface model is used to reduce computation time by establishing a relationship between the process performance and the critical process variables. A robust design methodology is developed by incorporating reliability-based optimization techniques to obtain sound forging components. A case study of an automotive-component forging-process design is presented to demonstrate the applicability of the method.
NASA Astrophysics Data System (ADS)
Chen, Xin; Liu, Li; Zhou, Sida; Yue, Zhenjiang
2016-09-01
Reduced order models (ROMs) based on snapshots from high-fidelity CFD simulations have received great attention recently due to their capability of capturing the features of complex geometries and flow configurations. To improve the efficiency and precision of ROMs, it is indispensable to add extra sampling points to the initial snapshots, since the number of sampling points needed to achieve an adequately accurate ROM is generally unknown a priori, while a large number of initial sampling points reduces the parsimony of the ROMs. A fuzzy-clustering-based adding-point strategy is proposed, in which the fuzzy clustering acts as an indicator of the region where the precision of the ROM is relatively low. The proposed method is applied to construct ROMs for benchmark mathematical examples and a numerical example of hypersonic aerothermodynamics prediction for a typical control surface. The proposed method achieves a 34.5% improvement in efficiency over the estimated mean squared error prediction algorithm while showing the same level of prediction accuracy.
The rationale for intensity-modulated proton therapy in geometrically challenging cases
NASA Astrophysics Data System (ADS)
Safai, S.; Trofimov, A.; Adams, J. A.; Engelsman, M.; Bortfeld, T.
2013-09-01
Intensity-modulated proton therapy (IMPT) delivered with beam scanning is currently available at a limited number of proton centers. However, a simplified form of IMPT, the technique of field ‘patching’, has long been a standard practice in proton therapy centers. In field patching, different parts of the target volume are treated from different directions, i.e., a part of the tumor gets either full dose from a radiation field, or almost no dose. Thus, patching represents a form of binary intensity modulation. This study explores the limitations of the standard binary field patching technique, and evaluates possible dosimetric advantages of continuous dose modulations in IMPT. Specifics of the beam delivery technology, i.e., pencil beam scanning versus passive scattering and modulation, are not investigated. We have identified two geometries of target volumes and organs at risk (OAR) in which the use of field patching is severely challenged. We focused our investigations on two patient cases that exhibit these geometries: a paraspinal tumor case and a skull-base case. For those cases we performed treatment planning comparisons of three-dimensional conformal proton therapy (3DCPT) with field patching versus IMPT, using commercial and in-house software, respectively. We also analyzed the robustness of the resulting plans with respect to systematic setup errors of ±1 mm and range errors of ±2.5 mm. IMPT is able to better spare OAR while providing superior dose coverage for the challenging cases identified above. Both 3DCPT and IMPT are sensitive to setup errors and range uncertainties, with IMPT showing the largest effect. Nevertheless, when delivery uncertainties are taken into account IMPT plans remain superior regarding target coverage and OAR sparing. On the other hand, some clinical goals, such as the maximum dose to OAR, are more likely to be unmet with IMPT under large range errors. IMPT can potentially improve target coverage and OAR sparing in challenging cases, even when compared with the relatively complicated and time consuming field patching technique. While IMPT plans tend to be more sensitive to delivery uncertainties, their dosimetric advantage generally holds. Robust treatment planning techniques may further reduce the sensitivity of IMPT plans.
Minimizing Actuator-Induced Residual Error in Active Space Telescope Primary Mirrors
2010-09-01
[OCR fragments:] Actuator geometry and rib-to-facesheet intersection geometry are exploited to achieve improved performance in silicon carbide (SiC) mirrors. A parametric finite element model is used to explore the trade space ... (MOST) finite element model. The move to lightweight actively-controlled silicon carbide (SiC) mirrors is traced back to previous generations of space ...
Nonlinear growth of zonal flows by secondary instability in general magnetic geometry
Plunk, G. G.; Navarro, A. Banon
2017-02-23
Here we present a theory of the nonlinear growth of zonal flows in magnetized plasma turbulence, by the mechanism of secondary instability. The theory is derived for general magnetic geometry, and is thus applicable to both tokamaks and stellarators. The predicted growth rate is shown to compare favorably with nonlinear gyrokinetic simulations, with the error scaling as expected with the small parameter of the theory.
Errors in the Calculation of 27Al Nuclear Magnetic Resonance Chemical Shifts
Wang, Xianlong; Wang, Chengfei; Zhao, Hui
2012-01-01
Computational chemistry is an important tool for signal assignment of 27Al nuclear magnetic resonance spectra in order to elucidate the species of aluminum(III) in aqueous solutions. The accuracy of the popular theoretical models for computing the 27Al chemical shifts was evaluated by comparing the calculated and experimental chemical shifts in more than one hundred aluminum(III) complexes. In order to differentiate the error due to the chemical shielding tensor calculation from that due to the inadequacy of the molecular geometry prediction, single-crystal X-ray diffraction determined structures were used to build the isolated molecule models for calculating the chemical shifts. The results were compared with those obtained using the calculated geometries at the B3LYP/6-31G(d) level. The isotropic chemical shielding constants computed at different levels have strong linear correlations even though the absolute values differ in tens of ppm. The root-mean-square difference between the experimental chemical shifts and the calculated values is approximately 5 ppm for the calculations based on the X-ray structures, but more than 10 ppm for the calculations based on the computed geometries. The result indicates that the popular theoretical models are adequate in calculating the chemical shifts while an accurate molecular geometry is more critical. PMID:23203134
A dual-phantom system for validation of velocity measurements in stenosis models under steady flow.
Blake, James R; Easson, William J; Hoskins, Peter R
2009-09-01
A dual-phantom system is developed for validation of velocity measurements in stenosis models. Pairs of phantoms with identical geometry and flow conditions are manufactured, one for ultrasound and one for particle image velocimetry (PIV). The PIV model is made from silicone rubber, and a new PIV fluid is made that matches the refractive index of 1.41 of silicone. Dynamic scaling was performed to correct for the increased viscosity of the PIV fluid compared with that of the ultrasound blood mimic. The degree of stenosis in the model pairs agreed to less than 1%. The velocities in the laminar flow region up to the peak velocity location agreed to within 15%, and the difference could be explained by errors in ultrasound velocity estimation. At low flow rates and in mild stenoses, good agreement was observed in the distal flow fields, excepting the maximum velocities. At high flow rates, there was considerable difference in velocities in the poststenosis flow field (maximum centreline differences of 30%), which would seem to represent real differences in hydrodynamic behavior between the two models. Sources of error included: variation of viscosity because of temperature (random error, which could account for differences of up to 7%); ultrasound velocity estimation errors (systematic errors); and geometry effects in each model, particularly because of imperfect connectors and corners (systematic errors, potentially affecting the inlet length and flow stability). The current system is best placed to investigate measurement errors in the laminar flow region rather than the poststenosis turbulent flow region.
Ball bearing vibrations amplitude modeling and test comparisons
NASA Technical Reports Server (NTRS)
Hightower, Richard A., III; Bailey, Dave
1995-01-01
Bearings generate disturbances that, when combined with structural gains of a momentum wheel, contribute to induced vibration in the wheel. The frequencies generated by a ball bearing are defined by the bearing's geometry and defects. The amplitudes at these frequencies are dependent upon the actual geometry variations from perfection; therefore, a geometrically perfect bearing will produce no amplitudes at the kinematic frequencies that the design generates. Because perfect geometry can only be approached, emitted vibrations do occur. The most significant vibration is at the spin frequency and can be balanced out in the build process. Other frequencies' amplitudes, however, cannot be balanced out. Momentum wheels are usually the single largest source of vibrations in a spacecraft and can contribute to pointing inaccuracies if emitted vibrations ring the structure or are in the high-gain bandwidth of a sensitive pointing control loop. It is therefore important to be able to provide an a priori knowledge of possible amplitudes that are singular in source or are a result of interacting defects that do not reveal themselves in normal frequency prediction equations. This paper will describe the computer model that provides for the incorporation of bearing geometry errors and then develops an estimation of actual amplitudes and frequencies. Test results were correlated with the model. A momentum wheel was producing an unacceptable 74 Hz amplitude. The model was used to simulate geometry errors and proved successful in identifying a cause that was verified when the parts were inspected.
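The kinematic frequencies referred to above follow from the bearing geometry alone; the standard expressions can be coded directly, with generic parameters rather than those of the tested wheel.

```python
import numpy as np

# Classic ball-bearing defect frequencies (fixed outer race, rotating inner):
# fr = shaft rate [Hz], n = number of balls, d = ball diameter,
# D = pitch diameter, phi = contact angle.

def bearing_freqs(fr, n, d, D, phi_deg=0.0):
    g = (d / D) * np.cos(np.radians(phi_deg))
    return {
        "BPFO": n / 2 * fr * (1 - g),            # outer-race defect frequency
        "BPFI": n / 2 * fr * (1 + g),            # inner-race defect frequency
        "FTF":  fr / 2 * (1 - g),                # cage (fundamental train) frequency
        "BSF":  D / (2 * d) * fr * (1 - g**2),   # ball-spin frequency
    }

# illustrative wheel bearing at 6000 rpm
for name, freq in bearing_freqs(fr=100.0, n=9, d=7.9e-3, D=38.5e-3, phi_deg=15).items():
    print(f"{name}: {freq:7.2f} Hz")
```

A geometrically perfect bearing would emit nothing at these lines; amplitudes appear only through geometry errors, which is what the amplitude model described above estimates.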
Iso-geometric analysis for neutron diffusion problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hall, S. K.; Eaton, M. D.; Williams, M. M. R.
Iso-geometric analysis can be viewed as a generalisation of the finite element method. It permits the exact representation of a wider range of geometries, including conic sections. This is possible due to the use of concepts employed in computer-aided design. The underlying mathematical representations from computer-aided design are used to capture both the geometry and approximate the solution. In this paper the neutron diffusion equation is solved using iso-geometric analysis. The practical advantages are highlighted by looking at the problem of a circular fuel pin in a square moderator. For this problem the finite element method requires the geometry to be approximated. This leads to errors in the shape and size of the interface between the fuel and the moderator. In contrast to this, iso-geometric analysis allows the interface to be represented exactly. It is found that, due to a cancellation of errors, the finite element method converges more quickly than iso-geometric analysis for this problem. A fuel pin in a vacuum was then considered, as this problem is highly sensitive to the leakage across the interface. In this case iso-geometric analysis greatly outperforms the finite element method. Due to the improvement in the representation of the geometry, iso-geometric analysis can outperform traditional finite element methods. It is proposed that the use of iso-geometric analysis on neutron transport problems will allow deterministic solutions to be obtained for exact geometries, something that is currently possible only with Monte Carlo techniques. (authors)
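The conic-section exactness that iso-geometric analysis exploits is easy to verify numerically: a rational quadratic Bézier segment (the NURBS building block) reproduces a quarter circle to machine precision, unlike any polynomial finite-element boundary.

```python
import numpy as np

# Rational quadratic Bezier for an exact quarter circle: control points at
# (1,0), (1,1), (0,1) with the middle weight sqrt(2)/2.

P = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])   # control points
w = np.array([1.0, np.sqrt(2) / 2, 1.0])             # weights

def curve(t):
    B = np.array([(1 - t)**2, 2 * t * (1 - t), t**2])  # Bernstein basis
    return (B * w) @ P / (B * w).sum()

ts = np.linspace(0.0, 1.0, 1000)
radii = np.array([np.linalg.norm(curve(t)) for t in ts])
print("max radius deviation:", np.abs(radii - 1.0).max())   # ~ machine epsilon
```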
Performance Evaluation of Various STL File Mesh Refining Algorithms Applied for FDM-RP Process
NASA Astrophysics Data System (ADS)
Ledalla, Siva Rama Krishna; Tirupathi, Balaji; Sriram, Venkatesh
2018-06-01
Layered manufacturing machines use the stereolithography (STL) file format to build parts. When a curved surface is converted from a computer aided design (CAD) file to STL, the result is geometrical distortion and chordal error. Parts manufactured from such a file might not satisfy geometric dimensioning and tolerance requirements due to the approximated geometry. Current algorithms built into CAD packages have export options to globally reduce this distortion, which leads to an increase in the file size and pre-processing time. In this work, different mesh subdivision algorithms are applied to the STL file of a part with complex geometric features using MeshLab software. The mesh subdivision algorithms considered in this work are the modified butterfly subdivision technique, the Loop subdivision technique and the general triangular midpoint subdivision technique. A comparative study is made with respect to volume and build time using the above techniques. It is found that the triangular midpoint subdivision algorithm is more suitable for the geometry under consideration. The wheel cap part was then manufactured on a Stratasys MOJO FDM machine. The surface roughness of the part was measured on a Talysurf surface roughness tester.
Springback compensation for a vehicle's steel body panel
NASA Astrophysics Data System (ADS)
Bałon, Paweł; Świątoniowski, Andrzej; Szostak, Janusz; Kiełbasa, Bartłomiej
2017-10-01
This paper presents a structural element of a vehicle made from high-strength steel. The application of such materials considerably reduces the mass of the structure owing to their high strength. Nevertheless, it leads to springback, which depends mainly on the material used as well as on the part geometry. Springback compensation helps reach the reference geometry of the element by using Finite Element Method software. The authors compared two methods of optimizing the die shape: the first defines the compensation of the die shape only for operation OP-20, while the second, multi-operation method defines the compensation of the die shape for both the OP-20 and OP-50 operations. Predicting springback by trial and error is difficult and labor-intensive, so economical and timely die design requires appropriate FEM software; virtual compensation methods make it possible to obtain precise results in a short time. The software-based die compensation was verified experimentally with a prototype die. Springback deformation is thus a critical problem, especially for HSS steels when the geometry is complex.
NASA Astrophysics Data System (ADS)
Kaufman, Lloyd; Williamson, Samuel J.; Costaribeiro, P.
1988-02-01
Recently developed small arrays of SQUID-based magnetic sensors can, if appropriately placed, locate the position of a confined biomagnetic source without moving the array. The authors present a technique with a relative accuracy of about 2 percent for calibrating such sensors having detection coils with the geometry of a second-order gradiometer. The effects of calibration error and magnetic noise on the accuracy of locating an equivalent current dipole source in the human brain are investigated for 5- and 7-sensor probes and for a pair of 7-sensor probes. With a noise level of 5 percent of peak signal, uncertainties of about 20 percent in source strength and depth for a 5-sensor probe are reduced to 8 percent for a pair of 7-sensor probes, and uncertainties of about 15 mm in lateral position are reduced to 1 mm, for the configuration considered.
Simultaneous calibration phantom commission and geometry calibration in cone beam CT
NASA Astrophysics Data System (ADS)
Xu, Yuan; Yang, Shuai; Ma, Jianhui; Li, Bin; Wu, Shuyu; Qi, Hongliang; Zhou, Linghong
2017-09-01
Geometry calibration is a vital step for describing the geometry of a cone beam computed tomography (CBCT) system and is a prerequisite for CBCT reconstruction. In current methods, calibration phantom commission and geometry calibration are divided into two independent tasks. Small errors in ball-bearing (BB) positioning in the phantom-making step will severely degrade the quality of phantom calibration. To solve this problem, we propose an integrated method to simultaneously realize geometry phantom commission and geometry calibration. Instead of assuming the accuracy of the geometry phantom, the integrated method considers BB centers in the phantom as an optimized parameter in the workflow. Specifically, an evaluation phantom and the corresponding evaluation contrast index are used to evaluate geometry artifacts for optimizing the BB coordinates in the geometry phantom. After utilizing particle swarm optimization, the CBCT geometry and BB coordinates in the geometry phantom are calibrated accurately and are then directly used for the next geometry calibration task in other CBCT systems. To evaluate the proposed method, both qualitative and quantitative studies were performed on simulated and realistic CBCT data. The spatial resolution of reconstructed images using dental CBCT can reach up to 15 line pairs cm^{-1}. The proposed method is also superior to the Wiesent method in experiments. This paper shows that the proposed method is attractive for simultaneous and accurate geometry phantom commission and geometry calibration.
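As a sketch of the optimization step, the generic particle swarm optimizer below (plain NumPy, not the authors' implementation) minimizes a placeholder objective standing in for the geometry-artifact index that would be computed from a reconstruction of the evaluation phantom:

```python
import numpy as np

def pso(objective, dim, n_particles=30, n_iter=200, bounds=(-1.0, 1.0),
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer (global-best topology)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                          # particle velocities
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()          # global best
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# Placeholder objective: in the paper's setting this would reconstruct the
# evaluation phantom with the candidate BB coordinates and return the
# geometry-artifact index of the reconstruction.
true_bb = np.array([0.12, -0.34, 0.56])
artifact_index = lambda p: np.sum((p - true_bb) ** 2)
best, best_f = pso(artifact_index, dim=3)
print(best, best_f)
```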
3D acquisition and modeling for flint artefacts analysis
NASA Astrophysics Data System (ADS)
Loriot, B.; Fougerolle, Y.; Sestier, C.; Seulin, R.
2007-07-01
In this paper, we are interested in the accurate acquisition and modeling of flint artefacts. Archaeologists need accurate geometry measurements to refine their understanding of the flint artefact manufacturing process. Current techniques require several operations. First, a copy of a flint artefact is reproduced. The copy is then sliced, and a picture is taken of each slice. Eventually, geometric information is manually determined from the pictures. Such a technique is very time consuming, and the processing applied to the original, as well as to the reproduced object, induces several measurement errors (prototyping approximations, slicing, image acquisition, and measurement). By using 3D scanners, we significantly reduce the number of operations related to data acquisition and completely suppress the prototyping step to obtain an accurate 3D model. The 3D models are segmented into sliced parts that are then analyzed. Each slice is then automatically fitted with a mathematical representation. Such a representation offers several interesting properties: geometric features can be characterized (e.g. shape, curvature, sharp edges, etc.), and the shape of the original piece of stone can be extrapolated. The contributions of this paper are an acquisition technique using 3D scanners that strongly reduces human intervention, acquisition time and measurement errors, and the representation of flint artefacts as mathematical 2D sections that enable accurate analysis.
NASA Astrophysics Data System (ADS)
Hodge, R.; Brasington, J.; Richards, K.
2009-04-01
The ability to collect 3D elevation data at mm resolution from in-situ natural surfaces, such as fluvial and coastal sediments, rock surfaces, soils and dunes, is beneficial for a range of geomorphological and geological research. From these data the properties of the surface can be measured, and Digital Terrain Models (DTMs) can be constructed. Terrestrial Laser Scanning (TLS) can quickly collect such 3D data with mm precision and mm spacing. This paper presents a methodology for the collection and processing of such TLS data, and considers how the errors in these TLS data can be quantified. TLS has been used to collect elevation data from fluvial gravel surfaces. Data were collected from areas of approximately 1 m^2, with median grain sizes ranging from 18 to 63 mm. Errors are inherent in such data as a result of the precision of the TLS and the interaction of factors including laser footprint, surface topography, surface reflectivity and scanning geometry. The methodology for the collection and processing of TLS data from complex surfaces like these fluvial sediments aims to minimise the occurrence of, and remove, such errors. The methodology incorporates taking scans from multiple scanner locations, averaging repeat scans, and applying a series of filters to remove erroneous points. Analysis of 2.5D DTMs interpolated from the processed data has identified geomorphic properties of the gravel surfaces, including the distribution of surface elevations, preferential grain orientation and grain imbrication. However, validation of the data and interpolated DTMs is limited by the availability of techniques capable of collecting independent elevation data of comparable quality. Instead, two alternative approaches to data validation are presented. The first consists of careful internal validation to optimise filter parameter values during data processing, combined with a series of laboratory experiments. In the experiments, TLS data were collected from a sphere and planes with different reflectivities to measure the accuracy and precision of TLS data of these geometrically simple objects. Whilst this first approach allows the maximum precision of TLS data from complex surfaces to be estimated, it cannot quantify the distribution of errors within the TLS data and across the interpolated DTMs. The second approach enables this by simulating the collection of TLS data from complex surfaces of known geometry. This simulated scanning has been verified through systematic comparison with laboratory TLS data. Two types of surface geometry have been investigated: regular arrays of uniform spheres, used to analyse the effect of sphere size; and irregular beds of spheres with the same grain size distribution as the fluvial gravels, which provide a complex geometry comparable to the field sediment surfaces. A series of simulated scans of these surfaces has enabled the magnitude and spatial distribution of errors in the interpolated DTMs to be quantified, as well as demonstrating the utility of the different processing stages in removing errors from TLS data. As well as demonstrating the application of simulated scanning as a technique to quantify errors, these results can be used to estimate errors in comparable TLS data.
An improved methodology for heliostat testing and evaluation at the Plataforma Solar de Almería
NASA Astrophysics Data System (ADS)
Monterreal, Rafael; Enrique, Raúl; Fernández-Reche, Jesús
2017-06-01
The optical quality of a heliostat basically quantifies the difference between the scattering effects of the actual solar radiation reflected on its optical surface and the so-called canonical dispersion, that is, the one reflected on an optical surface free of constructional errors (the paradigm). However, apart from the uncertainties of the measuring process itself, the value of the optical quality must be independent of the measuring instrument; so, any new measuring technique that provides additional information about the error sources on the heliostat reflecting surface is welcome. Those error sources are responsible for the final optical quality value, with different degrees of influence. For the constructor of heliostats it is extremely useful to know the value of the classical sources of error, such as facet geometry or focal length, and their weight on the overall optical quality of a heliostat, as well as the characteristics of the heliostat as a whole, i.e., its geometry, focal length and facet misalignment, and also the possible dependence of these effects on mechanical and/or meteorological factors. It is the goal of the present paper to unfold these optical quality error sources by exploring directly the reflecting surface of the heliostat with the help of a laser-scanner device and to link the result with the traditional methods of heliostat evaluation at the Plataforma Solar de Almería.
Demonstration of electronic design automation flow for massively parallel e-beam lithography
NASA Astrophysics Data System (ADS)
Brandt, Pieter; Belledent, Jérôme; Tranquillin, Céline; Figueiro, Thiago; Meunier, Stéfanie; Bayle, Sébastien; Fay, Aurélien; Milléquant, Matthieu; Icard, Beatrice; Wieland, Marco
2014-07-01
For proximity effect correction in 5 keV e-beam lithography, three elementary building blocks exist: dose modulation, geometry (size) modulation, and background dose addition. Combinations of these three methods are quantitatively compared in terms of throughput impact and process window (PW). In addition, overexposure in combination with negative bias results in PW enhancement at the cost of throughput. In proximity effect correction by overexposure (PEC-OE), the entire layout is set to a fixed dose and geometry sizes are adjusted. In PEC-dose to size (DTS), both dose and geometry sizes are locally optimized. In PEC-background (BG), a background is added to correct the long-range part of the point spread function. In single e-beam tools (Gaussian or shaped-beam), throughput heavily depends on the number of shots. In raster scan tools such as MAPPER Lithography's FLX 1200 (MATRIX platform) this is not the case: instead of pattern density, the maximum local dose on the wafer limits throughput. The smallest considered half-pitch is 28 nm, which may be considered the 14-nm node for Metal-1 and the 10-nm node for the Via-1 layer, achieved in a single exposure with e-beam lithography. For typical 28-nm-hp Metal-1 layouts, it was shown that dose latitudes (size of process window) of around 10% are realizable with available PEC methods. For 28-nm-hp Via-1 layouts this is even higher, at 14% and up. When the layouts do not reach the highest densities (up to 10:1 in this study), PEC-BG and PEC-OE provide the capability to trade throughput for dose latitude. At the highest densities, PEC-DTS is required for proximity correction, as this method adjusts both geometry edges and doses and will reduce the dose at the densest areas. For 28-nm-hp line critical dimension (CD), hole&dot (CD) and line ends (edge placement error), the data path errors are typically 0.9, 1.0 and 0.7 nm (3σ) and below, respectively. There is no clear data path performance difference between the investigated PEC methods. After the simulations, the methods were successfully validated in exposures on a MAPPER pre-alpha tool. The 28-nm half-pitch Metal-1 and Via-1 layouts show good performance in resist, coinciding with the simulation results. Exposures of soft-edge stitched layouts show that beam-to-beam position errors up to ±7 nm, as specified for the FLX 1200, have no noticeable impact on CD. The research leading to these results has been performed in the frame of the industrial collaborative consortium IMAGINE.
Single plane angiography: Current applications and limitations
NASA Technical Reports Server (NTRS)
Falsetti, H. L.; Carroll, R. J.
1975-01-01
Technical errors in measurements from single-plane cineangiography are identified. Examples of angiographic estimates of left ventricular geometry are given. These estimates of contractility are useful in evaluating myocardial performance.
Computer-Controlled Cylindrical Polishing Process for Large X-Ray Mirror Mandrels
NASA Technical Reports Server (NTRS)
Khan, Gufran S.; Gubarev, Mikhail; Speegle, Chet; Ramsey, Brian
2010-01-01
We are developing high-energy grazing-incidence shell optics for hard-x-ray telescopes. The resolution of a mirror shell depends on the quality of the cylindrical mandrel from which it is replicated. Mid-spatial-frequency axial figure error is a dominant contributor in the error budget of the mandrel. This paper presents our efforts to develop a deterministic cylindrical polishing process in order to keep the mid-spatial-frequency axial figure errors to a minimum. Simulation software was developed to model the residual surface figure errors of a mandrel due to the polishing process parameters and the tools used, as well as to compute the optical performance of the optics. The study carried out using the developed software focused on establishing a relationship between the polishing process parameters and the generation of mid-spatial-frequency errors. The process parameters modeled are the speeds of the lap and the mandrel, the tool's influence function, the contour path (dwell) of the tools, their shape and the distribution of the tools on the polishing lap. Using the inputs from the mathematical model, a mandrel having a conically approximated Wolter-1 geometry has been polished on a newly developed computer-controlled cylindrical polishing machine. The preliminary results of a series of polishing experiments demonstrate qualitative agreement with the developed model. We report our first experimental results and discuss plans for further improvements in the polishing process. The ability to simulate the polishing process is critical to optimizing it, improving the mandrel quality and significantly reducing the cost of mandrel production.
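A common model for this kind of polishing simulation (not necessarily the authors' exact formulation) is Preston-style: the material removed along the mandrel axis is the convolution of the tool influence function with the dwell-time distribution. A one-dimensional sketch with purely illustrative parameter values:

```python
import numpy as np

# Preston-style removal model: material removed is the convolution of the
# tool influence function (removal footprint per unit dwell) with the dwell
# time distribution along the mandrel axis.
x = np.linspace(-50.0, 50.0, 1001)               # axial position, mm
dx = x[1] - x[0]

sigma = 4.0                                      # footprint width, mm
influence = np.exp(-0.5 * (x / sigma) ** 2)      # removal per second (a.u.)

# Hypothetical dwell schedule: linger longer where more material must go.
dwell = 1.0 + 0.5 * np.cos(2 * np.pi * x / 40.0)  # seconds per position

removal = np.convolve(dwell, influence, mode="same") * dx
residual = removal - removal.mean()               # mid-frequency ripple left
print(residual.std())  # figure-error proxy to minimize over the parameters
```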
NASA Astrophysics Data System (ADS)
Thornton, Douglas E.; Spencer, Mark F.; Perram, Glen P.
2017-09-01
The effects of deep turbulence in long-range imaging applications present unique challenges to properly measuring and correcting for aberrations incurred along the atmospheric path. In practice, digital holography can detect the path-integrated wavefront distortions caused by deep turbulence, and different recording geometries offer different benefits depending on the application of interest. Previous studies have evaluated the performance of the off-axis image and pupil plane recording geometries for deep-turbulence sensing. This study models digital holography in the on-axis phase shifting recording geometry using wave optics simulations. In particular, the analysis models spherical-wave propagation through varying deep-turbulence conditions to estimate the complex optical field, and performance is evaluated by calculating the field-estimated Strehl ratio and RMS wavefront error. Altogether, the results show that digital holography in the on-axis phase shifting recording geometry is an effective wavefront-sensing method in the presence of deep turbulence.
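The on-axis phase-shifting recording geometry estimates the complex field from interferograms taken with known reference phase steps. The sketch below demonstrates the standard four-step combination on synthetic fields (it omits the propagation and turbulence that the study's wave-optics simulations add):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 128
obj = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))  # field
R = 10.0                                        # strong real reference beam

# Record four interferograms with reference phase steps of pi/2.
I = [np.abs(obj + R * np.exp(1j * k * np.pi / 2)) ** 2 for k in range(4)]

# Standard 4-step combination: I_k = |obj|^2 + R^2 + 2R*Re(obj*e^{-ik pi/2}),
# so (I0 - I2) = 4R*Re(obj) and (I1 - I3) = 4R*Im(obj).
est = ((I[0] - I[2]) + 1j * (I[1] - I[3])) / (4 * R)

print(np.max(np.abs(est - obj)))  # ~0: exact up to numerical precision
```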
Antenna Deployment for the Localization of Partial Discharges in Open-Air Substations
Robles, Guillermo; Fresno, José Manuel; Sánchez-Fernández, Matilde; Martínez-Tarifa, Juan Manuel
2016-01-01
Partial discharges are ionization processes inside or on the surface of dielectrics that can unveil insulation problems in electrical equipment. The accumulated charge is released under certain environmental and voltage conditions, attacking the insulation both physically and chemically. The final consequence of a continuous occurrence of these events is the breakdown of the dielectric. The electron avalanche produces a time derivative of the electric field, creating an electromagnetic impulse that can be detected with antennas. Localizing the source helps to identify the piece of equipment that has to be decommissioned. This can be done by deploying antennas and calculating the time difference of arrival (TDOA) of the electromagnetic pulses. However, small errors in this parameter can lead to large displacements of the calculated position of the source. Usually, four antennas are used to find the source, but the array geometry has to be correctly deployed to keep localization errors minimal. This paper demonstrates, by an analysis based on simulation and also experimentally, that the most common layouts are not always the best options, and proposes a simple antenna layout to reduce the systematic error in the TDOA calculation due to the positions of the antennas in the array. PMID:27092501
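For reference, source localization from TDOAs is commonly posed as a nonlinear least-squares problem. The sketch below is a plain Gauss-Newton solver; the four-antenna square layout, source position and starting guess are illustrative, not the paper's configurations:

```python
import numpy as np

C = 0.299792458  # propagation speed, m/ns

def tdoa_locate(antennas, tdoas, x0, n_iter=20):
    """Gauss-Newton fit of a source position from TDOAs w.r.t. antenna 0."""
    x = np.asarray(x0, float)
    for _ in range(n_iter):
        d = np.linalg.norm(antennas - x, axis=1)       # distances
        r = (d[1:] - d[0]) / C - tdoas                 # residuals, ns
        # Jacobian of (d_i - d_0)/C with respect to the source position.
        u = (x - antennas) / d[:, None]                # unit vectors
        J = (u[1:] - u[0]) / C
        x = x - np.linalg.lstsq(J, r, rcond=None)[0]
    return x

# Example: four antennas on a 2 m square (an illustrative layout).
ants = np.array([[0, 0, 0], [2, 0, 0], [0, 2, 0], [2, 2, 0]], float)
src = np.array([5.0, 3.0, 1.5])
d = np.linalg.norm(ants - src, axis=1)
tdoas = (d[1:] - d[0]) / C                             # exact TDOAs, ns
print(tdoa_locate(ants, tdoas, x0=[1.0, 1.0, 1.0]))    # ~ [5, 3, 1.5]
```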
Constraining the geometry of AGN outflows with reflection spectroscopy
NASA Astrophysics Data System (ADS)
Parker, M. L.; Buisson, D. J. K.; Jiang, J.; Gallo, L. C.; Kara, E.; Matzeu, G. A.; Walton, D. J.
2018-06-01
We collate active galactic nuclei (AGN) with reported detections of both relativistic reflection and ultra-fast outflows. By comparing the inclination of the inner disc from reflection with the line-of-sight velocity of the outflow, we show that it is possible to meaningfully constrain the geometry of the absorbing material. We find a clear relation between the velocity and inclination, and demonstrate that it can potentially be explained either by simple wind geometries or by absorption from the disc surface. Due to systematic errors and a shortage of high-quality simultaneous measurements our conclusions are tentative, but this study represents a proof-of-concept that has great potential.
Using Density Functional Theory (DFT) for the Calculation of Atomization Energies
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Partridge, Harry; Langhoff, Stephen R. (Technical Monitor)
1995-01-01
The calculation of atomization energies using density functional theory (DFT) with the B3LYP hybrid functional is reported. The sensitivity of the atomization energy to the basis set is studied and compared with the coupled cluster singles and doubles approach with a perturbational estimate of the triples (CCSD(T)). Merging the B3LYP results with the G2(MP2) approach is also considered. It is found that replacing the geometry optimization and the calculation of the zero-point energy by the analogous quantities computed using the B3LYP approach reduces the maximum error in the G2(MP2) approach. In addition to the 55 G2 atomization energies, some results for transition-metal-containing systems are also presented.
NASA Technical Reports Server (NTRS)
Litvin, F. L.; Zhao, X.
1996-01-01
A new method for design and generation of spiral bevel gears of uniform tooth depth with localized bearing contact and low level of transmission errors is considered. The main features of the proposed approach are as follows: (1) The localization of the bearing contact is achieved by the mismatch of the generating surfaces. The bearing contact may be provided in the longitudinal direction, or in the direction across the surface; and (2) The low level of transmission errors is achieved due to application of nonlinear relations between the motions of the gear and the gear head-cutter. Such relations may be provided by application of a CNC machine. The generation of the pinion is based on application of linear relations between the motions of the tool and the pinion being generated. The relations described above permit a parabolic function of transmission errors to be obtained that is able to absorb almost linear functions caused by errors of gear alignment. A computer code has been written for the meshing and contact of the spiral bevel gears with the proposed geometry. The effect of misalignment on the proposed geometry has also been determined. Numerical examples for illustration of the proposed theory have been provided.
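The key property exploited here, that a predesigned parabolic function of transmission errors absorbs almost-linear errors caused by gear misalignment, follows from completing the square (a short derivation sketched for completeness):

```latex
\underbrace{-a\,\phi_1^{2}}_{\text{predesigned}}
\;+\;
\underbrace{b\,\phi_1}_{\text{misalignment}}
\;=\;
-a\left(\phi_1 - \frac{b}{2a}\right)^{2} + \frac{b^{2}}{4a}
```

That is, the sum of the predesigned parabola and a linear misalignment error is the same parabola merely shifted in phase and level, so the shape and magnitude of the resulting transmission-error function are essentially unchanged.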
CCSDT calculations of molecular equilibrium geometries
NASA Astrophysics Data System (ADS)
Halkier, Asger; Jørgensen, Poul; Gauss, Jürgen; Helgaker, Trygve
1997-08-01
CCSDT equilibrium geometries of CO, CH2, F2, HF, H2O and N2 have been calculated using the correlation-consistent cc-pVXZ basis sets. Similar calculations have been performed for SCF, CCSD and CCSD(T). In general, bond lengths decrease when improving the basis set and increase when improving the N-electron treatment. CCSD(T) provides an excellent approximation to CCSDT for bond lengths, as the largest difference between CCSDT and CCSD(T) is 0.06 pm. At the CCSDT/cc-pVQZ level, basis set deficiencies, neglect of higher-order excitations, and incomplete treatment of core correlation all give rise to errors of a few tenths of a pm, but to a large extent these errors cancel. The CCSDT/cc-pVQZ bond lengths deviate on average by only 0.11 pm from experiment.
1986-12-01
poorly written problem statements. We decline to artificially create difficulties for experimentation. Others have encountered these issues and treated... you lose some of the meaning. The method also does not extend well to nonlinear or time-varying systems (sometimes it can be done, but it creates... thereby introduced creates problems and solves nothing. For variable-geometry aircraft, some projects establish reference geometry values that change as
NASA Technical Reports Server (NTRS)
Webb, L. D.; Washington, H. P.
1972-01-01
Static pressure position error calibrations for a compensated and an uncompensated XB-70 nose boom pitot static probe were obtained in flight. The methods (Pacer, acceleration-deceleration, and total temperature) used to obtain the position errors over a Mach number range from 0.5 to 3.0 and an altitude range from 25,000 feet to 70,000 feet are discussed. The error calibrations are compared with the position error determined from wind tunnel tests, theoretical analysis, and a standard NACA pitot static probe. Factors which influence position errors, such as angle of attack, Reynolds number, probe tip geometry, static orifice location, and probe shape, are discussed. Also included are examples showing how the uncertainties caused by position errors can affect the inlet controls and vertical altitude separation of a supersonic transport.
Auble, Gregor T.; Holmquist-Johnson, Christopher L.; Mogen, Jim T.; Kaeding, Lynn R.; Bowen, Zachary H.
2009-01-01
Operation of Sherburne Dam in northcentral Montana has typically reduced winter streamflow in Swiftcurrent Creek downstream of the dam and resulted in passage limitations for bull trout (Salvelinus confluentus). We defined an empirical relation between discharge in Swiftcurrent Creek between Sherburne Dam and the downstream confluence with Boulder Creek and fish passage geometry by considering how the cross-sectional area of water changed as a function of discharge at a set of cross sections likely to limit fish passage. With a minimum passage window of 15 x 45 cm, passage at the cross sections increased strongly with discharge over the range of 1.2 to 24 cfs. Most cross sections did not satisfy the minimum criteria at 1.2 cfs, 25 percent had no passage at 12.7 cfs, whereas at 24 cfs all but one of 26 cross sections had some passage and 90 percent had more than 3 m of width satisfying the minimum criteria. Sensitivity analysis suggests that the overall results are not highly dependent on exact dimensions of the minimum passage window. Combining these results with estimates of natural streamflow in the study reach further suggests that natural streamflow provided adequate passage at some times in most months and locations in the study reach, although not for all individual days and locations. Limitations of our analysis include assumptions about minimum passage geometry, measurement error, limitations of the cross-sectional model we used to characterize passage, the relation of Sherburne Dam releases to streamflow in the downstream study reach in the presence of ephemeral accretions, and the relation of passage geometry as we have measured it to fish responses of movement, stranding, and mortality, especially in the presence of ice cover.
Fully Coupled Simulation of Lithium Ion Battery Cell Performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trembacki, Bradley L.; Murthy, Jayathi Y.; Roberts, Scott Alan
Lithium-ion battery particle-scale (non-porous electrode) simulations applied to resolved electrode geometries predict localized phenomena and can lead to better informed decisions on electrode design and manufacturing. This work develops and implements a fully-coupled finite volume methodology for the simulation of the electrochemical equations in a lithium-ion battery cell. The model implementation is used to investigate 3D battery electrode architectures that offer potential energy density and power density improvements over traditional layer-by-layer particle bed battery geometries. Advancement of micro-scale additive manufacturing techniques has made it possible to fabricate these 3D electrode microarchitectures. A variety of 3D battery electrode geometries are simulated and compared across various battery discharge rates and length scales in order to quantify performance trends and investigate geometrical factors that improve battery performance. The energy density and power density of the 3D battery microstructures are compared in several ways, including a uniform surface area to volume ratio comparison as well as a comparison requiring a minimum manufacturable feature size. Significant performance improvements over traditional particle bed electrode designs are observed, and electrode microarchitectures derived from minimal surfaces are shown to be superior. A reduced-order volume-averaged porous electrode theory formulation for these unique 3D batteries is also developed, allowing simulations on the full-battery scale. Electrode concentration gradients are modeled using the diffusion length method, and results for plate and cylinder electrode geometries are compared to particle-scale simulation results. Additionally, effective diffusion lengths that minimize error with respect to particle-scale results for gyroid and Schwarz P electrode microstructures are determined.
1994-05-01
parameters and geometry factor... Laminar sublayer and buffer layer thicknesses for the geometry of Mudawar and Maddox... Correlation constants... transfer from simulated electronic chip heat sources that are flush with the flow channel wall. Mudawar and Maddox have studied enhanced surfaces... bias error was not estimated; however, the percentage of heat loss measured compares with that previously reported by Mudawar and Maddox for a
Efficient road geometry identification from digital vector data
NASA Astrophysics Data System (ADS)
Andrášik, Richard; Bíl, Michal
2016-07-01
A new method for the automatic identification of road geometry from digital vector data is presented. The method is capable of efficiently identifying circular curves with their radii and tangents (straight sections). The average error of identification ranged from 0.01 to 1.30 % for precisely drawn data and 4.81 % in the case of actual road data with noise in the location of vertices. The results demonstrate that the proposed method is faster and more precise than commonly used techniques. This approach can be used by road administrators to complete their databases with information concerning the geometry of roads. It can also be utilized by transport engineers or traffic safety analysts to investigate the possible dependence of traffic accidents on road geometries. The method presented is applicable as well to railroads and rivers or other line features.
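A common building block for this kind of identification, sketched below under the assumption of a 2-D polyline input (an illustration of the principle, not the authors' algorithm), is the local radius estimated from the circumscribed circle of three consecutive vertices: near-infinite radii flag tangents, stable finite radii flag circular curves.

```python
import numpy as np

def local_radii(points):
    """Radius of the circumscribed circle at each interior polyline vertex.

    Straight sections give very large radii; circular curves give radii
    close to the true curve radius (R = abc / (4 * triangle area)).
    """
    p = np.asarray(points, float)
    v1, v2 = p[1:-1] - p[:-2], p[2:] - p[1:-1]
    a = np.linalg.norm(v1, axis=1)
    b = np.linalg.norm(v2, axis=1)
    c = np.linalg.norm(p[2:] - p[:-2], axis=1)
    area2 = np.abs(v1[:, 0] * v2[:, 1] - v1[:, 1] * v2[:, 0])  # 2 * area
    with np.errstate(divide="ignore"):
        return a * b * c / (2.0 * area2)

# 90-degree arc of radius 100 m followed by a straight tangent section.
t = np.linspace(0, np.pi / 2, 46)
arc = np.c_[100 * np.cos(t), 100 * np.sin(t)]
line = np.c_[np.zeros(30), np.linspace(102, 300, 30)]
path = np.vstack([arc, line])
R = local_radii(path)
print(np.round(R[:3]))   # ~100 on the curve
print(R[-3:] > 1e6)      # effectively infinite on the tangent
```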
Giménez-Alventosa, V; Ballester, F; Vijande, J
2016-12-01
The design and construction of geometries for Monte Carlo calculations is an error-prone, time-consuming, and complex step in simulations describing particle interactions and transport in the field of medical physics. The software VoxelMages has been developed to help the user in this task. It allows the user to design complex geometries and to process DICOM image files for simulations with the general-purpose Monte Carlo code PENELOPE in an easy and straightforward way. VoxelMages can also import DICOM-RT structure contour information as delivered by a treatment planning system. Its main characteristics, usage and performance benchmarking are described in detail. Copyright © 2016 Elsevier Ltd. All rights reserved.
Shape and energy consistent pseudopotentials for correlated electron systems
Needs, R. J.
2017-01-01
A method is developed for generating pseudopotentials for use in correlated-electron calculations. The paradigms of shape and energy consistency are combined and defined in terms of correlated-electron wave functions. The resulting energy-consistent correlated-electron pseudopotentials (eCEPPs) are constructed for H, Li–F, Sc–Fe, and Cu. Their accuracy is quantified by comparing the relaxed molecular geometries and dissociation energies which they provide with all-electron results, with all quantities evaluated using coupled cluster singles, doubles, and triples calculations. Errors inherent in the pseudopotentials are also compared with those arising from a number of approximations commonly used with pseudopotentials. The eCEPPs provide a significant improvement in optimised geometries and dissociation energies for small molecules, with errors for the latter being an order of magnitude smaller than for Hartree-Fock-based pseudopotentials available in the literature. Gaussian basis sets are optimised for use with these pseudopotentials. PMID:28571391
Use of scan overlap redundancy to enhance multispectral aircraft scanner data
NASA Technical Reports Server (NTRS)
Lindenlaub, J. C.; Keat, J.
1973-01-01
Two criteria were suggested for optimizing the resolution error versus signal-to-noise-ratio tradeoff. The first criterion uses equal weighting coefficients and chooses n, the number of lines averaged, so as to make the average resolution error equal to the noise error. The second criterion adjusts both the number and relative sizes of the weighting coefficients so as to minimize the total error (resolution error plus noise error). The optimum set of coefficients depends upon the geometry of the resolution element, the number of redundant scan lines, the scan line increment, and the original signal-to-noise ratio of the channel. Programs were developed to find the optimum number and relative weights of the averaging coefficients. A working definition of signal-to-noise ratio was given and used to try line averaging on a typical set of data. Line averaging was evaluated only with respect to its effect on classification accuracy.
Hessian matrix approach for determining error field sensitivity to coil deviations
NASA Astrophysics Data System (ADS)
Zhu, Caoxiang; Hudson, Stuart R.; Lazerson, Samuel A.; Song, Yuntao; Wan, Yuanxi
2018-05-01
The presence of error fields has been shown to degrade plasma confinement and drive instabilities. Error fields can arise from many sources, but are predominantly attributed to deviations in the coil geometry. In this paper, we introduce a Hessian matrix approach for determining error field sensitivity to coil deviations. A primary cost function used for designing stellarator coils, the surface integral of normalized normal field errors, was adopted to evaluate the deviation of the generated magnetic field from the desired magnetic field. The FOCUS code (Zhu et al 2018 Nucl. Fusion 58 016008) is utilized to provide fast and accurate calculations of the Hessian. The sensitivities of error fields to coil displacements are then determined by the eigenvalues of the Hessian matrix. A proof-of-principle example is given on a CNT-like configuration. We anticipate that this new method could provide information to avoid dominant coil misalignments and simplify coil designs for stellarators.
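In general terms, the approach amounts to a second-order Taylor expansion of the cost function about the optimal coil set, with the eigen-decomposition of the Hessian ranking coil-deviation directions by how strongly they degrade the field. A toy NumPy sketch (with a quadratic stand-in for the normal-field error integral, and a finite-difference Hessian where FOCUS computes it analytically):

```python
import numpy as np

def hessian_fd(f, x0, h=1e-4):
    """Central finite-difference Hessian of a scalar cost f at x0."""
    n = len(x0)
    H = np.empty((n, n))
    e = np.eye(n) * h
    for i in range(n):
        for j in range(n):
            H[i, j] = (f(x0 + e[i] + e[j]) - f(x0 + e[i] - e[j])
                       - f(x0 - e[i] + e[j]) + f(x0 - e[i] - e[j])) / (4 * h * h)
    return H

# Toy stand-in for the normal-field error as a function of a few
# coil-displacement parameters about the design point.
A = np.array([[9.0, 2.0, 0.0], [2.0, 1.0, 0.5], [0.0, 0.5, 0.2]])
cost = lambda d: d @ A @ d
H = hessian_fd(cost, np.zeros(3))            # recovers 2A here
evals, evecs = np.linalg.eigh(H)
print(evals[-1], evecs[:, -1])  # largest eigenvalue: most harmful deviation
```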
Eigenvalue computations with the QUAD4 consistent-mass matrix
NASA Technical Reports Server (NTRS)
Butler, Thomas A.
1990-01-01
The NASTRAN user has the option of using either a lumped-mass matrix or a consistent- (coupled-) mass matrix with the QUAD4 shell finite element. At the Sixteenth NASTRAN Users' Colloquium (1988), Melvyn Marcus and associates of the David Taylor Research Center summarized a study comparing the results of the QUAD4 element with results of other NASTRAN shell elements for a cylindrical-shell modal analysis. Results of this study, in which both the lumped- and consistent-mass matrix formulations were used, implied that the consistent-mass matrix yielded poor results. In an effort to further evaluate the consistent-mass matrix, a study was performed using both a cylindrical-shell geometry and a flat-plate geometry. Modal parameters were extracted for several modes for both geometries, leading to some significant conclusions. First, there do not appear to be any fundamental errors associated with the consistent-mass matrix. However, its accuracy is quite different for the two geometries studied. The consistent-mass matrix yields better results for the flat-plate geometry, and the lumped-mass matrix seems to be the better choice for cylindrical-shell geometries.
Measurement of the PPN parameter γ by testing the geometry of near-Earth space
NASA Astrophysics Data System (ADS)
Luo, Jie; Tian, Yuan; Wang, Dian-Hong; Qin, Cheng-Gang; Shao, Cheng-Gang
2016-06-01
The Beyond Einstein Advanced Coherent Optical Network (BEACON) mission was designed to achieve an accuracy of 10^{-9} in measuring the Eddington parameter γ, which is perhaps the most fundamental Parameterized Post-Newtonian parameter. However, this ideal accuracy was estimated only as the ratio of the measurement accuracy of the inter-spacecraft distances to the magnitude of the departure from Euclidean geometry. Based on the BEACON concept, we construct a measurement model to estimate the parameter γ with the least squares method. Influences of the measurement noise and the out-of-plane error on the estimation accuracy are evaluated based on a white noise model. Though the BEACON mission does not require expensive drag-free systems and avoids physical dynamical models of the spacecraft, the relatively low accuracy of the initial inter-spacecraft distances poses a great challenge, reducing the estimation accuracy by about two orders of magnitude. Thus the noise requirements may need to be more stringent in the design to achieve the target accuracy, as demonstrated in this work. Accordingly, we give the limits on the power spectral density of both noise sources required to reach the accuracy of 10^{-9}.
A tool to convert CAD models for importation into Geant4
NASA Astrophysics Data System (ADS)
Vuosalo, C.; Carlsmith, D.; Dasu, S.; Palladino, K.; LUX-ZEPLIN Collaboration
2017-10-01
The engineering design of a particle detector is usually performed in a Computer Aided Design (CAD) program, and simulation of the detector’s performance can be done with a Geant4-based program. However, transferring the detector design from the CAD program to Geant4 can be laborious and error-prone. SW2GDML is a tool that reads a design in the popular SOLIDWORKS CAD program and outputs Geometry Description Markup Language (GDML), used by Geant4 for importing and exporting detector geometries. Other methods for outputting CAD designs are available, such as the STEP format, and tools exist to convert these formats into GDML. However, these conversion methods produce very large and unwieldy designs composed of tessellated solids that can reduce Geant4 performance. In contrast, SW2GDML produces compact, human-readable GDML that employs standard geometric shapes rather than tessellated solids. This paper will describe the development and current capabilities of SW2GDML and plans for its enhancement. The aim of this tool is to automate importation of detector engineering models into Geant4-based simulation programs to support rapid, iterative cycles of detector design, simulation, and optimization.
Building a LiDAR point cloud simulator: Testing algorithms for high resolution topographic change
NASA Astrophysics Data System (ADS)
Carrea, Dario; Abellán, Antonio; Derron, Marc-Henri; Jaboyedoff, Michel
2014-05-01
Terrestrial laser scanning (TLS) is becoming a common tool in Geosciences, with clear applications ranging from the generation of high resolution 3D models to the monitoring of unstable slopes and the quantification of morphological changes. Nevertheless, like every measurement technique, TLS still has some limitations that are not clearly understood and that affect the accuracy of the dataset (point cloud). A challenge in LiDAR research is to understand the influence of instrumental parameters on measurement errors during LiDAR acquisition. Indeed, several critical parameters affect scan quality at different ranges: the existence of shadow areas, the spatial resolution (point density), the diameter of the laser beam, the incidence angle and the single-point accuracy. The objective of this study is to test the main limitations of different algorithms usually applied in point cloud data treatment, from alignment to monitoring. To this end, we built, in the MATLAB(c) environment, a LiDAR point cloud simulator able to recreate the multiple sources of error related to instrumental settings that we normally observe in real datasets. In a first step, we characterized the error from a single laser pulse by modelling the influence of range and incidence angle on single-point accuracy. In a second step, we simulated the scanning part of the system in order to analyze the shifting and angular error effects. Other parameters have been added to the point cloud simulator, such as point spacing, acquisition window, etc., in order to create point clouds of simple and/or complex geometries. We tested the influence of point density and of the point of view on Iterative Closest Point (ICP) alignment, and also on a deformation tracking algorithm with the same point cloud geometry, in order to determine alignment and deformation detection thresholds. We also generated a series of high resolution point clouds in order to model small changes in different environments (erosion, landslide monitoring, etc.), and we then tested the use of filtering techniques based on 3D moving windows in space and time, which considerably reduce data scattering thanks to data redundancy. In conclusion, the simulator allowed us to improve our algorithms, to understand how instrumental error affects final results, and to improve the scan acquisition methodology by finding the best compromise between point density, positioning and acquisition time for characterizing topographic change with the best possible accuracy.
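The heart of such a simulator can be stated compactly: express each point in the scanner's spherical coordinates, perturb range and beam angles with instrument noise, and convert back to Cartesian. The sketch below implements only this core; the Gaussian noise levels are illustrative, and the full simulator described above also models footprint, incidence angle and shadowing:

```python
import numpy as np

def perturb_scan(points, sigma_range=0.002, sigma_angle=1e-4, seed=0):
    """Add range and angular noise to points, emulating a TLS at the origin.

    sigma_range in metres, sigma_angle in radians (1-sigma values here are
    illustrative, not those of a specific instrument).
    """
    rng = np.random.default_rng(seed)
    x, y, z = points.T
    r = np.sqrt(x**2 + y**2 + z**2)          # range
    az = np.arctan2(y, x)                    # azimuth
    el = np.arcsin(z / r)                    # elevation
    r += rng.normal(0.0, sigma_range, r.shape)
    az += rng.normal(0.0, sigma_angle, az.shape)
    el += rng.normal(0.0, sigma_angle, el.shape)
    return np.c_[r * np.cos(el) * np.cos(az),
                 r * np.cos(el) * np.sin(az),
                 r * np.sin(el)]

# A flat 1 m^2 patch 10 m from the scanner, sampled at cm point spacing.
g = np.linspace(-0.5, 0.5, 101)
X, Y = np.meshgrid(g + 10.0, g)
patch = np.c_[X.ravel(), Y.ravel(), np.zeros(X.size)]
noisy = perturb_scan(patch)
print(np.std(noisy[:, 2] - patch[:, 2]))  # vertical scatter from the noise
```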
Validation of YCAR algorithm over East Asia TCCON sites
NASA Astrophysics Data System (ADS)
Kim, W.; Kim, J.; Jung, Y.; Lee, H.; Goo, T. Y.; Cho, C. H.; Lee, S.
2016-12-01
In order to reduce the error of TANSO-FTS column-averaged CO2 concentration (XCO2) retrievals induced by aerosol, we develop the Yonsei University CArbon Retrieval (YCAR) algorithm using aerosol information from the TANSO-Cloud and Aerosol Imager (TANSO-CAI), which provides aerosol optical depth properties simultaneously for the same geometry and optical path as the FTS. We also validate the retrieved results using ground-based TCCON measurements. In particular, this study is the first to utilize the measurements at Anmyeondo, the only TCCON site located in South Korea, which can improve the quality of validation in East Asia. After the post-screening process, the YCAR algorithm has 33-85% higher data availability than other operational algorithms (NIES, ACOS, UoL). Despite this higher data availability, its agreement with TCCON measurements is better than or similar to the other algorithms: the regression line of the YCAR algorithm is close to the identity line, with an RMSE of 2.05 ppm and a bias of -0.86 ppm. According to the error analysis, the retrieval error of the YCAR algorithm is 1.394-1.478 ppm over East Asia. In addition, a spatio-temporal sampling error of 0.324-0.358 ppm for each single-sounding retrieval is estimated with CarbonTracker-Asia data. These error analysis results demonstrate the reliability and accuracy of the latest version of the YCAR algorithm. Both the XCO2 values retrieved with the YCAR algorithm from TANSO-FTS and the TCCON measurements show a consistent increasing trend of about 2.3-2.6 ppm per year. Compared to the increasing rate of the global background CO2 amount measured at Mauna Loa, Hawaii (2 ppm per year), the trend in East Asia is about 30% higher, due to the rapid increase of CO2 emissions from this source region.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jerban, Saeed, E-mail: saeed.jerban@usherbrooke.ca
2016-08-15
The pore interconnection size of β-tricalcium phosphate scaffolds plays an essential role in the bone repair process. Although the μCT technique is widely used in the biomaterial community, it is rarely used to measure the interconnection size because of the lack of algorithms. In addition, the discrete nature of μCT introduces large systematic errors due to the convex geometry of interconnections. We proposed, verified and validated a novel pore-level algorithm to accurately characterize the individual pores and interconnections. Specifically, pores and interconnections were isolated, labeled, and individually analyzed with high accuracy. The technique was verified thoroughly by visually inspecting and verifying over 3474 properties of randomly selected pores. This extensive verification process passed a one-percent accuracy criterion. Scanning errors inherent in the discretization, which lead to both dummy and significantly overestimated interconnections, were examined using computer-based simulations and additional high-resolution scanning. Accurate correction charts were then developed and used to reduce the scanning errors. Only after the corrections did both the μCT- and SEM-based results converge, and the novel algorithm was validated. Material scientists with access to all geometrical properties of individual pores and interconnections, using the novel algorithm, will have a more detailed and accurate description of the substitute architecture and a potentially deeper understanding of the link between geometry and biological interaction. - Highlights: •An algorithm is developed to individually analyze all pores and interconnections. •After pore isolation, the discretization errors in interconnections were corrected. •Dummy interconnections and overestimated sizes were due to thin material walls. •The isolating algorithm was verified through visual inspection (99% accurate). •After correcting for the systematic errors, the algorithm was validated successfully.
Early sex differences in weighting geometric cues.
Lourenco, Stella F; Addy, Dede; Huttenlocher, Janellen; Fabian, Lydia
2011-11-01
When geometric and non-geometric information are both available for specifying location, men have been shown to rely more heavily on geometry than women. To shed light on the nature and developmental origins of this sex difference, we examined how 18- to 24-month-olds represented the geometry of a surrounding (rectangular) space when direct non-geometric information (i.e. a beacon) was also available for localizing a hidden object. Children were tested on a disorientation task with multiple phases. Across experiments, boys relied more heavily than girls on geometry to guide localization, as indicated by their errors during the initial phase of the task, and by their search choices following transformations that left only geometry available, or that, under limited conditions, created a conflict between beacon and geometry. Analyses of search times suggested that girls, like boys, had encoded geometry, and testing in a square space ruled out explanations concerned with motivational and methodological variables. Taken together, the findings provide evidence for an early sex difference in the weighting of geometry. This sex difference, we suggest, reflects subtle variation in how boys and girls approach the problem of combining multiple sources of location information. © 2011 Blackwell Publishing Ltd.
The role of blood vessels in high-resolution volume conductor head modeling of EEG.
Fiederer, L D J; Vorwerk, J; Lucka, F; Dannhauer, M; Yang, S; Dümpelmann, M; Schulze-Bonhage, A; Aertsen, A; Speck, O; Wolters, C H; Ball, T
2016-03-01
Reconstruction of the electrical sources of human EEG activity at high spatio-temporal accuracy is an important aim in neuroscience and neurological diagnostics. Over the last decades, numerous studies have demonstrated that realistic modeling of head anatomy improves the accuracy of source reconstruction of EEG signals. For example, including a cerebro-spinal fluid compartment and the anisotropy of white matter electrical conductivity were both shown to significantly reduce modeling errors. Here, we for the first time quantify the role of detailed reconstructions of the cerebral blood vessels in volume conductor head modeling for EEG. To study the role of the highly arborized cerebral blood vessels, we created a submillimeter head model based on ultra-high-field-strength (7T) structural MRI datasets. Blood vessels (arteries and emissary/intraosseous veins) were segmented using Frangi multi-scale vesselness filtering. The final head model consisted of a geometry-adapted cubic mesh with over 17×10^6 nodes. We solved the forward model using a finite-element-method (FEM) transfer matrix approach, which allowed reducing computation times substantially and quantified the importance of the blood vessel compartment by computing forward and inverse errors resulting from ignoring the blood vessels. Our results show that ignoring emissary veins piercing the skull leads to focal localization errors of approx. 5 to 15 mm. Large errors (>2 cm) were observed due to the carotid arteries and the dense arterial vasculature in areas such as in the insula or in the medial temporal lobe. Thus, in such predisposed areas, errors caused by neglecting blood vessels can reach similar magnitudes as those previously reported for neglecting white matter anisotropy, the CSF or the dura - structures which are generally considered important components of realistic EEG head models. Our findings thus imply that including a realistic blood vessel compartment in EEG head models will be helpful to improve the accuracy of EEG source analyses particularly when high accuracies in brain areas with dense vasculature are required. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Lidar arc scan uncertainty reduction through scanning geometry optimization
NASA Astrophysics Data System (ADS)
Wang, Hui; Barthelmie, Rebecca J.; Pryor, Sara C.; Brown, Gareth.
2016-04-01
Doppler lidars are frequently operated in a mode referred to as arc scans, wherein the lidar beam scans across a sector with a fixed elevation angle and the resulting measurements are used to derive an estimate of the n minute horizontal mean wind velocity (speed and direction). Previous studies have shown that the uncertainty in the measured wind speed originates from turbulent wind fluctuations and depends on the scan geometry (the arc span and the arc orientation). This paper is designed to provide guidance on optimal scan geometries for two key applications in the wind energy industry: wind turbine power performance analysis and annual energy production prediction. We present a quantitative analysis of the retrieved wind speed uncertainty derived using a theoretical model with the assumption of isotropic and frozen turbulence, and observations from three sites that are onshore with flat terrain, onshore with complex terrain and offshore, respectively. The results from both the theoretical model and observations show that the uncertainty is scaled with the turbulence intensity such that the relative standard error on the 10 min mean wind speed is about 30 % of the turbulence intensity. The uncertainty in both retrieved wind speeds and derived wind energy production estimates can be reduced by aligning lidar beams with the dominant wind direction, increasing the arc span and lowering the number of beams per arc scan. Large arc spans should be used at sites with high turbulence intensity and/or large wind direction variation.
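For context, the arc-scan retrieval itself is commonly a least-squares fit of the horizontal wind components to the measured radial velocities across the sector. A minimal sketch, neglecting vertical velocity and with illustrative geometry and noise levels:

```python
import numpy as np

def fit_arc_scan(azimuths, elevation, v_radial):
    """Least-squares (u, v) from arc-scan radial velocities.

    Model: v_r = u*sin(az)*cos(el) + v*cos(az)*cos(el), with az measured
    clockwise from north and vertical velocity neglected.
    """
    A = np.c_[np.sin(azimuths) * np.cos(elevation),
              np.cos(azimuths) * np.cos(elevation)]
    (u, v), *_ = np.linalg.lstsq(A, v_radial, rcond=None)
    return u, v

# Synthetic example: 8 m/s wind from the south-west, 60-degree arc span.
rng = np.random.default_rng(0)
az = np.radians(np.linspace(15, 75, 21))     # beams across the sector
el = np.radians(5.0)
u_true, v_true = 8 * np.sin(np.radians(45)), 8 * np.cos(np.radians(45))
vr = u_true * np.sin(az) * np.cos(el) + v_true * np.cos(az) * np.cos(el)
vr += rng.normal(0.0, 0.3, vr.size)          # turbulent fluctuations
print(fit_arc_scan(az, el, vr))              # ~ (5.66, 5.66)
```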
Lidar arc scan uncertainty reduction through scanning geometry optimization
NASA Astrophysics Data System (ADS)
Wang, H.; Barthelmie, R. J.; Pryor, S. C.; Brown, G.
2015-10-01
Doppler lidars are frequently operated in a mode referred to as arc scans, wherein the lidar beam scans across a sector with a fixed elevation angle and the resulting measurements are used to derive an estimate of the n minute horizontal mean wind velocity (speed and direction). Previous studies have shown that the uncertainty in the measured wind speed originates from turbulent wind fluctuations and depends on the scan geometry (the arc span and the arc orientation). This paper is designed to provide guidance on optimal scan geometries for two key applications in the wind energy industry: wind turbine power performance analysis and annual energy production. We present a quantitative analysis of the retrieved wind speed uncertainty derived using a theoretical model with the assumption of isotropic and frozen turbulence, and observations from three sites that are onshore with flat terrain, onshore with complex terrain and offshore, respectively. The results from both the theoretical model and observations show that the uncertainty is scaled with the turbulence intensity such that the relative standard error on the 10 min mean wind speed is about 30 % of the turbulence intensity. The uncertainty in both retrieved wind speeds and derived wind energy production estimates can be reduced by aligning lidar beams with the dominant wind direction, increasing the arc span and lowering the number of beams per arc scan. Large arc spans should be used at sites with high turbulence intensity and/or large wind direction variation when arc scans are used for wind resource assessment.
Evaluation of micro-GPS receivers for tracking small-bodied mammals
Shipley, Lisa A.; Forbey, Jennifer S.; Olsoy, Peter J.
2017-01-01
GPS telemetry markedly enhances the temporal and spatial resolution of animal location data, and recent advances in micro-GPS receivers permit their deployment on small mammals. One such technological advance, snapshot technology, allows for improved battery life by reducing the time to first fix via postponing recovery of satellite ephemeris (satellite location) data and processing of locations. However, no previous work has employed snapshot technology for small, terrestrial mammals. We evaluated performance of two types of micro-GPS (< 20 g) receivers (traditional and snapshot) on a small, semi-fossorial lagomorph, the pygmy rabbit (Brachylagus idahoensis), to understand how GPS errors might influence fine-scale assessments of space use and habitat selection. During stationary tests, microtopography (i.e., burrows) and satellite geometry had the largest influence on GPS fix success rate (FSR) and location error (LE). There was no difference between FSR while animals wore the GPS collars above ground (determined via light sensors) and FSR generated during stationary, above-ground trials, suggesting that animal behavior other than burrowing did not markedly influence micro-GPS errors. In our study, traditional micro-GPS receivers demonstrated similar FSR and LE to snapshot receivers, however, snapshot receivers operated inconsistently due to battery and software failures. In contrast, the initial traditional receivers deployed on animals experienced some breakages, but a modified collar design consistently functioned as expected. If such problems were resolved, snapshot technology could reduce the tradeoff between fix interval and battery life that occurs with traditional micro-GPS receivers. Our results suggest that micro-GPS receivers are capable of addressing questions about space use and resource selection by small mammals, but that additional techniques might be needed to identify use of habitat structures (e.g., burrows, tree cavities, rock crevices) that could affect micro-GPS performance and bias study results. PMID:28301495
Cross sections for H(-) and Cl(-) production from HCl by dissociative electron attachment
NASA Technical Reports Server (NTRS)
Orient, O. J.; Srivastava, S. K.
1985-01-01
A crossed target beam-electron beam collision geometry and a quadrupole mass spectrometer have been used to conduct dissociative electron attachment cross section measurements for the case of H(-) and Cl(-) production from HCl. The relative flow technique is used to determine the absolute values of cross sections. A tabulation is given of the attachment energies corresponding to various cross section maxima. Error sources contributing to total errors are also estimated.
Spiral-bevel geometry and gear train precision
NASA Technical Reports Server (NTRS)
Litvin, F. L.; Coy, J. J.
1983-01-01
A new approach to the determination of surface principal curvatures and directions is proposed. Direct relationships between the principal curvatures and directions of the tool surface and those of the generated gear surface are obtained. The principal curvatures and directions of the gear-tooth surface are obtained without using the complicated equations of these surfaces. A general theory of the train kinematical errors caused by manufacturing and assembly errors is discussed. Two methods for the determination of the train kinematical errors can be worked out: (1) with the aid of a computer, and (2) with an approximate method. Results from noise and vibration measurements conducted on a helicopter transmission are used to illustrate the principles contained in the theory of kinematic errors.
Simulation of wave propagation in three-dimensional random media
NASA Technical Reports Server (NTRS)
Coles, William A.; Filice, J. P.; Frehlich, R. G.; Yadlowsky, M.
1993-01-01
A quantitative error analysis for the simulation of wave propagation in three-dimensional random media, assuming narrow-angle scattering, is presented for the plane wave and spherical wave geometries. This includes the errors resulting from finite grid size, finite simulation dimensions, and the separation of the two-dimensional screens along the propagation direction. Simple error scalings are determined for power-law spectra of the random refractive index of the media. The effects of a finite inner scale are also considered. The spatial spectra of the intensity errors are calculated and compared to the spatial spectra of intensity. The numerical requirements for a simulation of given accuracy are determined for realizations of the field. The numerical requirements for accurate estimation of higher moments of the field are less stringent.
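The simulation method referred to is the split-step (phase screen) technique: the medium is collapsed onto discrete two-dimensional screens and the field is propagated between them with the paraxial angular-spectrum kernel. A compact sketch for the plane-wave geometry (white-noise screens for brevity, where a faithful simulation would color them to the power-law refractive-index spectrum; all parameter values illustrative):

```python
import numpy as np

def split_step(n=256, dx=0.01, wavelength=1e-6, dz=1000.0, n_screens=10,
               phase_std=0.5, seed=0):
    """Plane wave through equally spaced random phase screens (paraxial)."""
    rng = np.random.default_rng(seed)
    k = 2 * np.pi / wavelength
    fx = np.fft.fftfreq(n, dx)
    kx, ky = np.meshgrid(2 * np.pi * fx, 2 * np.pi * fx)
    prop = np.exp(-1j * (kx**2 + ky**2) * dz / (2 * k))  # Fresnel kernel
    u = np.ones((n, n), complex)                          # plane wave
    for _ in range(n_screens):
        # Apply a thin random phase screen, then free-space propagate.
        u *= np.exp(1j * rng.normal(0.0, phase_std, (n, n)))
        u = np.fft.ifft2(np.fft.fft2(u) * prop)
    return u

u = split_step()
I = np.abs(u)**2
print(I.mean(), I.var() / I.mean()**2)  # scintillation index estimate
```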
NASA Astrophysics Data System (ADS)
Shabanov, S. V.; Gornushkin, I. B.
2018-01-01
Data processing in calibration-free laser-induced breakdown spectroscopy (LIBS) is usually based on the solution of the radiative transfer equation along a particular line of sight through a plasma plume. Here the LIBS data processing is generalized to the case when the spectral data are collected from large portions of the plume. It is shown that, by adjusting the optical depth and width of the lines, the spectra obtained by collecting light from an entire spherical homogeneous plasma plume can be least-squares fitted to a spectrum obtained by collecting the radiation just along a plume diameter with a relative error of 10^-11 or smaller (for optical depths not exceeding 0.3), so that a mismatch between the geometries of data processing and data collection cannot be detected by fitting. Despite the existence of such a perfect least-squares fit, the errors in the line optical depth and width found by data processing with an inappropriate geometry can be large. It is shown with analytic and numerical examples that the corresponding relative errors in the inferred elemental number densities and concentrations may be as high as 50% and 20%, respectively. Save for a few exceptions, these errors are impossible to eliminate from LIBS data processing unless a proper solution of the radiative transfer equation, corresponding to the ray tracing in the spectral data collection, is used.
A map overlay error model based on boundary geometry
Gaeuman, D.; Symanzik, J.; Schmidt, J.C.
2005-01-01
An error model for quantifying the magnitudes and variability of errors generated in the areas of polygons during spatial overlay of vector geographic information system layers is presented. Numerical simulation of polygon boundary displacements was used to propagate coordinate errors to spatial overlays. The model departs from most previous error models in that it incorporates spatial dependence of coordinate errors at the scale of the boundary segment. It can be readily adapted to match the scale of error-boundary interactions responsible for error generation on a given overlay. The area of error generated by overlay depends on the sinuosity of polygon boundaries, as well as the magnitude of the coordinate errors on the input layers. Asymmetry in boundary shape has relatively little effect on error generation. Overlay errors are affected by real differences in boundary positions on the input layers, as well as errors in the boundary positions. Real differences between input layers tend to compensate for much of the error generated by coordinate errors. Thus, the area of change measured on an overlay layer produced by the XOR overlay operation will be more accurate if the area of real change depicted on the overlay is large. The model presented here considers these interactions, making it especially useful for estimating errors in studies of landscape change over time. © 2005 The Ohio State University.
Interactive three-dimensional visualization and creation of geometries for Monte Carlo calculations
NASA Astrophysics Data System (ADS)
Theis, C.; Buchegger, K. H.; Brugger, M.; Forkel-Wirth, D.; Roesler, S.; Vincke, H.
2006-06-01
The implementation of three-dimensional geometries for the simulation of radiation transport problems is a very time-consuming task. Each particle transport code supplies its own scripting language and syntax for creating the geometries. All of them are based on the Constructive Solid Geometry scheme requiring textual description. This makes the creation a tedious and error-prone task, which is especially hard to master for novice users. The Monte Carlo code FLUKA comes with built-in support for creating two-dimensional cross-sections through the geometry and FLUKACAD, a custom-built converter to the commercial Computer Aided Design package AutoCAD, exists for 3D visualization. For other codes, like MCNPX, a couple of different tools are available, but they are often specifically tailored to the particle transport code and its approach used for implementing geometries. Complex constructive solid modeling usually requires very fast and expensive special purpose hardware, which is not widely available. In this paper SimpleGeo is presented, which is an implementation of a generic versatile interactive geometry modeler using off-the-shelf hardware. It is running on Windows, with a Linux version currently under preparation. This paper describes its functionality, which allows for rapid interactive visualization as well as generation of three-dimensional geometries, and also discusses critical issues regarding common CAD systems.
A simulation of GPS and differential GPS sensors
NASA Technical Reports Server (NTRS)
Rankin, James M.
1993-01-01
The Global Positioning System (GPS) is a revolutionary advance in navigation. Users can determine latitude, longitude, and altitude by receiving range information from at least four satellites. The statistical accuracy of the user's position is directly proportional to the statistical accuracy of the range measurement. Range errors are caused by clock errors, ephemeris errors, atmospheric delays, multipath errors, and receiver noise. Selective Availability, which the military uses to intentionally degrade accuracy for non-authorized users, is a major error source. The proportionality constant relating position errors to range errors is the Dilution of Precision (DOP) which is a function of the satellite geometry. Receivers separated by relatively short distances have the same satellite and atmospheric errors. Differential GPS (DGPS) removes these errors by transmitting pseudorange corrections from a fixed receiver to a mobile receiver. The corrected pseudorange at the moving receiver is now corrupted only by errors from the receiver clock, multipath, and measurement noise. This paper describes a software package that models position errors for various GPS and DGPS systems. The error model is used in the Real-Time Simulator and Cockpit Technology workstation simulations at NASA-LaRC. The GPS/DGPS sensor can simulate enroute navigation, instrument approaches, or on-airport navigation.
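The proportionality between range error and position error via the DOP is easy to illustrate numerically. Below is a minimal sketch (not part of the described software package) that forms the geometry matrix from assumed line-of-sight vectors and compares stand-alone GPS and DGPS position errors; the satellite directions and error magnitudes are hypothetical:

```python
import numpy as np

def dop_matrix(los):
    """Covariance shape factor inv(G^T G) for pseudorange positioning:
    each row of G is a unit line-of-sight vector plus a clock-bias column."""
    G = np.hstack([los, np.ones((los.shape[0], 1))])
    return np.linalg.inv(G.T @ G)

# Hypothetical unit line-of-sight vectors to four satellites (ENU frame).
los = np.array([[0.0, 0.6, 0.8],
                [0.6, -0.3, 0.74],
                [-0.5, -0.5, 0.70],
                [0.3, 0.1, 0.95]])
los /= np.linalg.norm(los, axis=1, keepdims=True)

pdop = np.sqrt(np.trace(dop_matrix(los)[:3, :3]))
sigma_gps = 25.0   # m, illustrative range error with Selective Availability
sigma_dgps = 2.0   # m, illustrative residual range error after corrections

print(f"PDOP = {pdop:.2f}")
print(f"GPS  position error ~ {pdop * sigma_gps:.1f} m")
print(f"DGPS position error ~ {pdop * sigma_dgps:.1f} m")
```

With the same satellite geometry, shrinking the range error from 25 m to 2 m shrinks the position error by the same factor, which is the essence of the differential correction.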
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Shangjie; Department of Radiation Oncology, Stanford University School of Medicine, Palo Alto, California; Hara, Wendy
Purpose: To develop a reliable method to estimate electron density based on anatomic magnetic resonance imaging (MRI) of the brain. Methods and Materials: We proposed a unifying multi-atlas approach for electron density estimation based on standard T1- and T2-weighted MRI. First, a composite atlas was constructed through a voxelwise matching process using multiple atlases, with the goal of mitigating effects of inherent anatomic variations between patients. Next we computed for each voxel 2 kinds of conditional probabilities: (1) electron density given its image intensity on T1- and T2-weighted MR images; and (2) electron density given its spatial location in a reference anatomy, obtained by deformable image registration. These were combined into a unifying posterior probability density function using the Bayesian formalism, which provided the optimal estimates for electron density. We evaluated the method on 10 patients using leave-one-patient-out cross-validation. Receiver operating characteristic analyses for detecting different tissue types were performed. Results: The proposed method significantly reduced the errors in electron density estimation, with a mean absolute Hounsfield unit error of 119, compared with 140 and 144 (P<.0001) using conventional T1-weighted intensity and geometry-based approaches, respectively. For detection of bony anatomy, the proposed method achieved an 89% area under the curve, 86% sensitivity, 88% specificity, and 90% accuracy, which improved upon intensity and geometry-based approaches (area under the curve: 79% and 80%, respectively). Conclusion: The proposed multi-atlas approach provides robust electron density estimation and bone detection based on anatomic MRI. If validated on a larger population, our work could enable the use of MRI as a primary modality for radiation treatment planning.
A quasi-3D wire approach to model pulmonary airflow in human airways.
Kannan, Ravishekar; Chen, Z J; Singh, Narender; Przekwas, Andrzej; Delvadia, Renishkumar; Tian, Geng; Walenga, Ross
2017-07-01
Models of airflow in the human airways are either 0-dimensional compartmental or full 3-dimensional (3D) computational fluid dynamics (CFD) models. In the former, airways are treated as compartments, and the computations are performed with several assumptions, thereby generating a low-fidelity solution. The CFD method displays extremely high fidelity, since the solution is obtained by solving the conservation equations in a physiologically consistent geometry. However, CFD models (1) require millions of degrees of freedom to accurately describe the geometry and to reduce the discretization errors, (2) have convergence problems, and (3) require several days to simulate a few breathing cycles. In this paper, we present a novel, fast-running, and robust quasi-3D wire model for modeling the airflow in the human lung airway. The wire mesh is obtained by contracting the high-fidelity lung airway surface mesh to a system of connected wires with well-defined radii. The conservation equations are then solved in each wire. These wire meshes have around O(1000) degrees of freedom and hence are 3000 to 25 000 times faster than their CFD counterparts. The 3D spatial nature is also preserved, since these wires are contracted out of the actual lung STL surface. Pressure readings from the two approaches showed only minor differences (maximum error = 15%). In general, this formulation is fast and robust, allows geometric changes, and delivers high-fidelity solutions. Hence, this approach has great potential for more complicated problems, including modeling of constricted/diseased lung sections and calibrating the lung flow resistances through parameter inversion. Copyright © 2016 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Ceria, Paul; Ducourtieux, Sebastien; Boukellal, Younes; Allard, Alexandre; Fischer, Nicolas; Feltin, Nicolas
2017-03-01
In order to evaluate the uncertainty budget of the LNE’s mAFM, a reference instrument dedicated to the calibration of nanoscale dimensional standards, a numerical model has been developed to evaluate the measurement uncertainty of the metrology loop involved in the XYZ positioning of the tip relative to the sample. The objective of this model is to overcome difficulties experienced when trying to evaluate some uncertainty components which cannot be experimentally determined and more specifically, the one linked to the geometry of the metrology loop. The model is based on object-oriented programming and developed under Matlab. It integrates one hundred parameters that allow the control of the geometry of the metrology loop without using analytical formulae. The created objects, mainly the reference and the mobile prism and their mirrors, the interferometers and their laser beams, can be moved and deformed freely to take into account several error sources. The Monte Carlo method is then used to determine the positioning uncertainty of the instrument by randomly drawing the parameters according to their associated tolerances and their probability density functions (PDFs). The whole process follows Supplement 2 to ‘The Guide to the Expression of the Uncertainty in Measurement’ (GUM). Some advanced statistical tools like Morris design and Sobol indices are also used to provide a sensitivity analysis by identifying the most influential parameters and quantifying their contribution to the XYZ positioning uncertainty. The approach validated in the paper shows that the actual positioning uncertainty is about 6 nm. As the final objective is to reach 1 nm, we engage in a discussion to estimate the most effective way to reduce the uncertainty.
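The Monte Carlo propagation described above can be sketched generically: draw the metrology-loop parameters from their assumed PDFs, push each draw through a measurement model, and report the spread. The parameter names, tolerances, and the toy measurement model below are invented for illustration; the actual model manipulates some one hundred geometric parameters of the prisms, mirrors, and beams:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # Monte Carlo draws

# Hypothetical metrology-loop parameters: each drawn from its assumed PDF.
# Angles in radians, lengths in metres; values are purely illustrative.
mirror_tilt = rng.normal(0.0, 5e-6, N)        # Gaussian, sigma = 5 urad
beam_offset = rng.uniform(-5e-9, 5e-9, N)     # uniform within tolerance
dead_path   = rng.normal(1e-3, 1e-8, N)       # interferometer dead path

# Toy measurement model mapping parameter draws to an X-position error;
# the real model moves and deforms the optical components geometrically.
x_error = dead_path * mirror_tilt + beam_offset

# Standard uncertainty and a 95% coverage interval, in the spirit of
# GUM Supplement 2.
u_x = x_error.std(ddof=1)
lo, hi = np.percentile(x_error, [2.5, 97.5])
print(f"u(x) = {u_x*1e9:.2f} nm, 95% interval = [{lo*1e9:.2f}, {hi*1e9:.2f}] nm")
```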
Length measurement and spatial orientation reconstruction of single nanowires.
Prestopino, Giuseppe; Orsini, Andrea; Falconi, Christian; Bietti, Sergio; Verona-Rinati, Gianluca; Caselli, Federica; Bisegna, Paolo
2018-06-27
The accurate determination of the geometrical features of quasi one-dimensional nanostructures is mandatory for reducing errors and improving repeatability in the estimation of a number of geometry-dependent properties in nanotechnology. In this paper a method for the reconstruction of length and spatial orientation of single nanowires is presented. Those quantities are calculated from a sequence of scanning electron microscope images taken at different tilt angles using a simple 3D geometric model. The proposed method is evaluated on a collection of scanning electron microscope images of single GaAs nanowires. It is validated through the reconstruction of known geometric features of a standard reference calibration pattern. An overall uncertainty of about 1% in the estimated length of the nanowires is achieved. © 2018 IOP Publishing Ltd.
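As a rough illustration of the reconstruction idea (not the authors' implementation), one can fit a wire's length and orientation to the apparent lengths measured in images taken at several stage tilts; the tilt axis, tilt angles, and measured values below are hypothetical:

```python
import numpy as np
from scipy.optimize import least_squares

def rot_x(t):
    """Rotation of the SEM stage by tilt angle t about the image x-axis."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def projected_length(params, tilts):
    """Apparent wire length in each image for true length L and spherical
    orientation angles (theta, phi); the image plane is x-y."""
    L, th, ph = params
    u = np.array([np.sin(th) * np.cos(ph), np.sin(th) * np.sin(ph), np.cos(th)])
    return np.array([L * np.linalg.norm((rot_x(t) @ u)[:2]) for t in tilts])

# Hypothetical measured projected lengths (nm) at five stage tilts.
tilts = np.deg2rad([-20.0, -10.0, 0.0, 10.0, 20.0])
measured = np.array([812.0, 845.0, 868.0, 881.0, 884.0])

# Least-squares fit; note mirror/sign ambiguities may require a priori
# knowledge of the approximate orientation, as with any tilt series.
fit = least_squares(lambda p: projected_length(p, tilts) - measured,
                    x0=[900.0, np.deg2rad(60.0), np.deg2rad(10.0)])
L, th, ph = fit.x
print(f"length = {L:.0f} nm, polar = {np.degrees(th):.1f} deg, "
      f"azimuth = {np.degrees(ph):.1f} deg")
```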
NASA Technical Reports Server (NTRS)
Hendricks, R. C.; Braun, M. J.; Mullen, R. L.; Burcham, R. E.; Diamond, W. A.
1985-01-01
High-pressure, high-temperature seal flow (leakage) data for nonrotating and rotating Rayleigh-step and convergent-tapered-bore seals were characterized in terms of a normalized flow coefficient. The data for normalized Rayleigh-step and nonrotating tapered-bore seals were in reasonable agreement with theory, but data for the rotating tapered-bore seals were not. The tapered-bore-seal operational clearances estimated from the flow data were significantly larger than calculated. Although clearances are influenced by wear from conical to cylindrical geometry and errors in clearance corrections, the problem was isolated to the shaft temperature - rotational speed clearance correction. The geometric changes support the use of some conical convergence in any seal. Under these conditions rotation reduced the normalized flow coefficient by nearly 10 percent.
NASA Technical Reports Server (NTRS)
Hendricks, R. C.; Mullen, R. L.; Braun, M. J.; Burcham, R. E.; Diamond, W. A.
1987-01-01
High-pressure, high-temperature seal flow (leakage) data for nonrotating and rotating Rayleigh-step and convergent-tapered-bore seals were characterized in terms of a normalized flow coefficient. The data for normalized Rayleigh-step and nonrotating tapered-bore seals were in reasonable agreement with theory, but data for the rotating tapered-bore seals were not. The tapered-bore-seal operational clearances estimated from the flow data were significantly larger than calculated. Although clearances are influenced by wear from conical to cylindrical geometry and errors in clearance corrections, the problem was isolated to the shaft temperature - rotational speed clearance correction. The geometric changes support the use of some conical convergence in any seal. Under these conditions rotation reduced the normalized flow coefficient by nearly 10 percent.
An Experiment in Scientific Program Understanding
NASA Technical Reports Server (NTRS)
Stewart, Mark E. M.; Owen, Karl (Technical Monitor)
2000-01-01
This paper concerns a procedure that analyzes aspects of the meaning or semantics of scientific and engineering code. This procedure involves taking a user's existing code, adding semantic declarations for some primitive variables, and parsing this annotated code using multiple, independent expert parsers. These semantic parsers encode domain knowledge and recognize formulae in different disciplines including physics, numerical methods, mathematics, and geometry. The parsers will automatically recognize and document some static, semantic concepts and help locate some program semantic errors. Results are shown for three intensively studied codes and seven blind test cases; all test cases are state of the art scientific codes. These techniques may apply to a wider range of scientific codes. If so, the techniques could reduce the time, risk, and effort required to develop and modify scientific codes.
Can low-cost VOR and Omega receivers suffice for RNAV - A new computer-based navigation technique
NASA Technical Reports Server (NTRS)
Hollaar, L. A.
1978-01-01
It is shown that although RNAV is particularly valuable for the personal transportation segment of general aviation, it has not gained complete acceptance. This is due, in part, to its high cost and the necessary special-handling air traffic control. VOR/DME RNAV calculations are ideally suited for analog computers, and the use of microprocessor technology has been suggested for reducing RNAV costs. Three navigation systems, VOR, Omega, and DR, are compared for common navigational difficulties, such as station geometry, siting errors, ground disturbances, and terminal area coverage. The Kalman filtering technique is described with reference to the disadvantages when using a system including standard microprocessors. An integrated navigation system, using input data from various low-cost sensor systems, is presented and current simulation studies are noted.
Feasibility Study of Graphite Epoxy Antenna for a Microwave Limb Sounder Radiometer (MLSR)
NASA Technical Reports Server (NTRS)
1979-01-01
Results are presented of a feasibility study to design graphite epoxy antenna reflectors for a Jet Propulsion Laboratory microwave limb sounder instrument (MLSR). Two general configurations of the offset elliptic parabolic reflectors are presented that will meet the requirements on geometry and reflector accuracy. The designs consist of sandwich construction for the primary reflectors, secondary reflector support structure, and cross-tie members between reflector pairs. Graphite epoxy materials of 3 and 6 plies are used in the facesheets of the sandwich. An aluminum honeycomb is used for the core. A built-in adjustment system is proposed to reduce surface distortions during assembly. The manufacturing and environmental effects are expected to result in surface distortions less than 0.0015 inch and pointing errors less than 0.002 degree.
Alternative design consistency rating methods for two-lane rural highways
DOT National Transportation Integrated Search
2000-08-01
Design consistency refers to the conformance of a highway's geometry with driver expectancy. Drivers make fewer errors in the vicinity of geometric features that conform with their expectations. Techniques to evaluate the consistency of a design docu...
NASA Astrophysics Data System (ADS)
Debchoudhury, Shantanab; Earle, Gregory
2017-04-01
Retarding Potential Analyzers (RPA) have a rich flight heritage. Standard curve-fitting analysis techniques exist that can infer state variables in the ionospheric plasma environment from RPA data, but the estimation process is prone to errors arising from a number of sources. Previous work has focused on the effects of grid geometry on uncertainties in estimation; however, no prior study has quantified the estimation errors due to additive noise. In this study, we characterize the errors in estimation of thermal plasma parameters by adding noise to the simulated data derived from the existing ionospheric models. We concentrate on low-altitude, mid-inclination orbits since a number of nano-satellite missions are focused on this region of the ionosphere. The errors are quantified and cross-correlated for varying geomagnetic conditions.
Machine tools error characterization and compensation by on-line measurement of artifact
NASA Astrophysics Data System (ADS)
Wahid Khan, Abdul; Chen, Wuyi; Wu, Lili
2009-11-01
Most manufacturing machine tools are utilized for mass production or batch production with high accuracy under a deterministic manufacturing principle. Volumetric accuracy of machine tools depends on the positional accuracy of the cutting tool, probe, or end effector relative to the workpiece in the workspace volume. In this research paper, a methodology is presented for volumetric calibration of machine tools by on-line measurement of an artifact or an object of a similar type. The machine tool geometric error characterization was carried out with a standard or an artifact having geometry similar to the mass-production or batch-production product. The artifact was measured at an arbitrary position in the volumetric workspace with a calibrated Renishaw touch-trigger probe system. Positional errors were stored in a computer for compensation purposes, so that the manufacturing batch could subsequently be run with compensated codes. This methodology proved effective for manufacturing high-precision components with improved dimensional accuracy and reliability. Calibration by on-line measurement offers the advantage of improving the manufacturing process through the deterministic manufacturing principle, and was found to be efficient and economical, although it is limited to the workspace or envelope surface of the measured artifact's geometry or profile.
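A minimal sketch of the compensation step described above, assuming a hypothetical volumetric error map sampled on a regular grid of probe points (the grid, the error values, and the `compensate` helper are illustrative stand-ins, not the authors' code):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical volumetric error map: measured X/Y/Z positional errors (mm)
# at each commanded grid location. Random values stand in for probe data.
xs = ys = zs = np.linspace(0.0, 500.0, 6)                      # commanded axes, mm
err = np.random.default_rng(0).normal(0.0, 0.01, (6, 6, 6, 3)) # stand-in data

interp = RegularGridInterpolator((xs, ys, zs), err)

def compensate(target):
    """Shift a commanded point by the locally interpolated error so the
    tool arrives (to first order) at the nominal position."""
    return np.asarray(target) - interp(target)[0]

print(compensate([120.0, 250.0, 75.0]))   # compensated G-code coordinates
```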
SU-E-J-128: 3D Surface Reconstruction of a Patient Using Epipolar Geometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kotoku, J; Nakabayashi, S; Kumagai, S
Purpose: Obtaining 3D surface data of a patient in a non-invasive way can substantially reduce the effort required for patient registration in radiation therapy. To achieve this goal, we introduced the multiple view stereo technique, which is known from 'photo tourism' applications on the internet. Methods: 70 images were taken with a digital single-lens reflex camera from different angles and positions. The camera positions and angles were inferred later in the reconstruction step. A sparse 3D reconstruction model was built by locating SIFT features, which are robust to rotation and shift, in each image. We then found a set of correspondences between pairs of images by computing the fundamental matrix using the eight-point algorithm with RANSAC. After the pair matching, we optimized the parameters, including camera positions, to minimize the reprojection error by use of the bundle adjustment technique (non-linear optimization). As a final step, we performed dense reconstruction and associated a color with each point using the PMVS library. Results: Surface data were reconstructed well by visual inspection. The human skin was reconstructed well, although the reconstruction was too time-consuming for direct use in daily clinical practice. Conclusion: 3D reconstruction using multi-view stereo geometry is a promising tool for reducing the effort of patient setup. This work was supported by JSPS KAKENHI (25861128)
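The fundamental-matrix step maps onto a standard OpenCV call (whose internal minimal solver may differ from a textbook eight-point step). A minimal sketch using synthetic correspondences in place of matched SIFT keypoints; the camera intrinsics, poses, and point cloud are made up for the example:

```python
import numpy as np
import cv2

# Synthetic stand-in for matched keypoints: random 3D points seen by two
# hypothetical cameras (pure illustration of the estimation step).
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, (200, 3)) + [0.0, 0.0, 5.0]   # points in front
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])

def project(X, R, t):
    """Pinhole projection of 3D points into pixel coordinates."""
    x = (K @ (R @ X.T + t[:, None])).T
    return (x[:, :2] / x[:, 2:]).astype(np.float32)

R2, _ = cv2.Rodrigues(np.array([0.0, 0.1, 0.0]))          # second camera rotated
pts1 = project(X, np.eye(3), np.zeros(3))
pts2 = project(X, R2, np.array([0.2, 0.0, 0.0]))          # and translated

# RANSAC-based fundamental matrix: inlier threshold in pixels, confidence.
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
print("fundamental matrix:\n", F)
print("inliers kept:", int(mask.sum()), "of", len(pts1))
```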
Hand biometric recognition based on fused hand geometry and vascular patterns.
Park, GiTae; Kim, Soowon
2013-02-28
A hand biometric authentication method based on measurements of the user's hand geometry and vascular pattern is proposed. To acquire the hand geometry, the thickness of the side view of the hand, the K-curvature with a hand-shaped chain code, the lengths and angles of the finger valleys, and the lengths and profiles of the fingers were used, and for the vascular pattern, the direction-based vascular-pattern extraction method was used, and thus, a new multimodal biometric approach is proposed. The proposed multimodal biometric system uses only one image to extract the feature points. This system can be configured for low-cost devices. Our multimodal biometric-approach hand-geometry (the side view of the hand and the back of hand) and vascular-pattern recognition method performs at the score level. The results of our study showed that the equal error rate of the proposed system was 0.06%.
Hand Biometric Recognition Based on Fused Hand Geometry and Vascular Patterns
Park, GiTae; Kim, Soowon
2013-01-01
A hand biometric authentication method based on measurements of the user's hand geometry and vascular pattern is proposed. To acquire the hand geometry, the thickness of the side view of the hand, the K-curvature with a hand-shaped chain code, the lengths and angles of the finger valleys, and the lengths and profiles of the fingers were used, and for the vascular pattern, the direction-based vascular-pattern extraction method was used, and thus, a new multimodal biometric approach is proposed. The proposed multimodal biometric system uses only one image to extract the feature points. This system can be configured for low-cost devices. Our multimodal biometric-approach hand-geometry (the side view of the hand and the back of hand) and vascular-pattern recognition method performs at the score level. The results of our study showed that the equal error rate of the proposed system was 0.06%. PMID:23449119
NASA Astrophysics Data System (ADS)
Motta, Mario; Zhang, Shiwei
2018-05-01
We propose an algorithm for accurate, systematic, and scalable computation of interatomic forces within the auxiliary-field quantum Monte Carlo (AFQMC) method. The algorithm relies on the Hellmann-Feynman theorem and incorporates Pulay corrections in the presence of atomic orbital basis sets. We benchmark the method for small molecules by comparing the computed forces with the derivatives of the AFQMC potential energy surface and by direct comparison with other quantum chemistry methods. We then perform geometry optimizations using the steepest descent algorithm in larger molecules. With realistic basis sets, we obtain equilibrium geometries in agreement, within statistical error bars, with experimental values. The increase in computational cost for computing forces in this approach is only a small prefactor over that of calculating the total energy. This paves the way for a general and efficient approach for geometry optimization and molecular dynamics within AFQMC.
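The geometry-optimization loop itself is straightforward once forces are available. A minimal steepest-descent sketch, with a toy harmonic diatomic force standing in for AFQMC forces (the step size and tolerances are illustrative):

```python
import numpy as np

def steepest_descent(coords, force_fn, step=0.05, f_tol=1e-3, max_iter=200):
    """Minimal steepest-descent geometry optimizer: move each atom along
    the (possibly statistically noisy) force until forces are small."""
    for _ in range(max_iter):
        forces = force_fn(coords)           # shape (n_atoms, 3)
        if np.max(np.linalg.norm(forces, axis=1)) < f_tol:
            break
        coords = coords + step * forces     # force = -dE/dR, so step downhill
    return coords

def toy_force(coords, r0=1.4, k=0.6):
    """Harmonic 'bond' force for a diatomic, a stand-in for AFQMC forces."""
    d = coords[1] - coords[0]
    r = np.linalg.norm(d)
    f = -k * (r - r0) * d / r               # restoring force on atom 1
    return np.array([-f, f])

coords = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.9]])
print(steepest_descent(coords, toy_force))  # bond relaxes toward r0 = 1.4
```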
Computer modeling of Earthshine contamination on the VIIRS solar diffuser
NASA Astrophysics Data System (ADS)
Mills, Stephen P.; Agravante, Hiroshi; Hauss, Bruce; Klein, James E.; Weiss, Stephanie C.
2005-10-01
The Visible/Infrared Imager Radiometer Suite (VIIRS), built by Raytheon Santa Barbara Remote Sensing (SBRS), will be one of the primary earth-observing remote-sensing instruments on the National Polar-Orbiting Operational Environmental Satellite System (NPOESS). It will also be installed on the NPOESS Preparatory Project (NPP). These satellite systems fly in near-circular, sun-synchronous low-earth orbits at altitudes of approximately 830 km. VIIRS has 15 bands designed to measure reflectance at wavelengths between 412 nm and 2250 nm, and an additional 7 bands measuring primarily emissive radiance between 3700 nm and 11450 nm. The calibration source for the reflective bands is a solar diffuser (SD) that is illuminated once per orbit as the satellite passes from the dark side to the light side of the earth near the poles. Sunlight enters VIIRS through an opening in the front of the instrument. An attenuation screen covers the opening, but other than this there are no other optical elements between the SD and the sun. The BRDF of the SD and the transmittance of the attenuation screen are measured pre-flight, so with knowledge of the angles of incidence, the radiance of the sun can be computed and used as a reference to produce calibrated reflectances and radiances. Unfortunately, the opening also allows a significant amount of reflected earthshine to illuminate part of the SD, and this component introduces radiometric error to the calibration process, referred to as earthshine contamination (ESC). The VIIRS radiometric error budget allocated a 0.3% error based on modeling of the ESC done by SBRS during the design phase. This model assumes that the earth has a Lambertian BRDF with a maximum top-of-atmosphere albedo of 1. The Moderate Resolution Imaging Spectroradiometer (MODIS) has an SD with a design similar to VIIRS, and in 2003 the MODIS Science Team reported to Northrop Grumman Space Technology (NGST), the prime contractor for NPOESS, their suspicion that ESC was causing higher than expected radiometric error, and asked whether VIIRS might have a similar problem. The NPOESS Models and Simulation (M&S) team considered whether the Lambertian BRDF assumption would cause an underestimate of the ESC error. In particular, snow, ice, and water show very large BRDFs for forward-scattered, near-grazing angles of incidence; in common parlance this is called glare. The observed earth geometry during the period when the SD is illuminated by the sun includes just such strongly forward-scattering glare geometries. In addition, the SD acquisition occurs in the polar regions, where snow, ice, and water are most prevalent. Using models in their Environmental Products Verification and Remote Sensing Testbed (EVEREST), the M&S team produced a model that meticulously traced the light rays from the attenuation screen to each detector and combined this with a model of the satellite orbit, with solar geometry, and with radiative transfer models that include the effect of the BRDF of various surfaces. This modeling showed radiometric errors of up to 4.5% over water and 1.5% over snow or ice. Clouds produce errors up to 0.8%. The likelihood of these high errors occurring has not been determined. Because of this analysis, various remedial options are now being considered.
SIMPLIFIED CALCULATION OF SOLAR FLUX ON THE SIDE WALL OF CYLINDRICAL CAVITY SOLAR RECEIVERS
NASA Technical Reports Server (NTRS)
Bhandari, P.
1994-01-01
The Simplified Calculation of Solar Flux Distribution on the Side Wall of Cylindrical Cavity Solar Receivers program employs a simple solar flux calculation algorithm for a cylindrical cavity type solar receiver. Applications of this program include the study of solar energy, heat transfer, and space power-solar dynamics engineering. The aperture plate of the receiver is assumed to be located in the focal plane of a paraboloidal concentrator, and the geometry is assumed to be axisymmetric. The concentrator slope error is assumed to be the only surface error; it is assumed that there are no pointing or misalignment errors. Using cone optics, the contour error method is utilized to handle the slope error of the concentrator. The flux distribution on the side wall is calculated by integration of the energy incident from cones emanating from all the differential elements on the concentrator. The calculations are done for any set of dimensions and properties of the receiver and the concentrator, and account for any spillover on the aperture plate. The results of this algorithm compared excellently with those predicted by more complicated programs. Because of the utilization of axial symmetry and overall simplification, it is extremely fast. It can be easily extended to other axisymmetric receiver geometries. The program was written in Fortran 77, compiled using a Ryan-McFarland compiler, and run on an IBM PC-AT with a math coprocessor. It requires 60K of memory and has been implemented under MS-DOS 3.2.1. The program was developed in 1988.
Quantum error correcting codes and 4-dimensional arithmetic hyperbolic manifolds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guth, Larry, E-mail: lguth@math.mit.edu; Lubotzky, Alexander, E-mail: alex.lubotzky@mail.huji.ac.il
2014-08-15
Using 4-dimensional arithmetic hyperbolic manifolds, we construct some new homological quantum error correcting codes. They are low density parity check codes with linear rate and distance n^ε. Their rate is evaluated via Euler characteristic arguments and their distance using Z_2-systolic geometry. This construction answers a question of Zémor [“On Cayley graphs, surface codes, and the limits of homological coding for quantum error correction,” in Proceedings of Second International Workshop on Coding and Cryptology (IWCC), Lecture Notes in Computer Science Vol. 5557 (2009), pp. 259–273], who asked whether homological codes with such parameters could exist at all.
A simplified satellite navigation system for an autonomous Mars roving vehicle.
NASA Technical Reports Server (NTRS)
Janosko, R. E.; Shen, C. N.
1972-01-01
The use of a retroreflecting satellite and a laser rangefinder to navigate a Martian roving vehicle is considered in this paper. It is shown that a simple system can be employed to perform this task. An error analysis is performed on the navigation equations, and it is shown that the error inherent in the proposed scheme can be minimized by the proper choice of measurement geometry. A nonlinear programming approach is used to minimize the navigation error subject to constraints that are due to geometric and laser requirements. The problem is solved for a particular set of laser parameters and the optimal solution is presented.
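Although the paper's exact objective and constraints are not reproduced here, the flavor of minimizing navigation error over measurement geometry under constraints can be sketched with a GDOP-like metric and bounded elevation angles (all values hypothetical):

```python
import numpy as np
from scipy.optimize import minimize

def position_error_metric(angles):
    """GDOP-like error measure for range fixes taken at the given
    azimuth/elevation pairs (flattened [az1, el1, az2, el2, ...])."""
    az, el = angles[0::2], angles[1::2]
    rows = np.column_stack([np.cos(el) * np.cos(az),
                            np.cos(el) * np.sin(az),
                            np.sin(el)])
    return np.sqrt(np.trace(np.linalg.inv(rows.T @ rows)))

# Hypothetical laser constraint: elevations between 10 and 70 degrees.
n = 4
bounds = [(0.0, 2 * np.pi), (np.deg2rad(10), np.deg2rad(70))] * n
x0 = np.ravel([[2 * np.pi * i / n, np.deg2rad(40)] for i in range(n)])

res = minimize(position_error_metric, x0, bounds=bounds)
print("optimized error metric:", res.fun)
print("elevations (deg):", np.degrees(res.x[1::2]).round(1))
```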
The Extended HANDS Characterization and Analysis of Metric Biases
NASA Astrophysics Data System (ADS)
Kelecy, T.; Knox, R.; Cognion, R.
The Extended High Accuracy Network Determination System (Extended HANDS) consists of a network of low cost, high accuracy optical telescopes designed to support space surveillance and development of space object characterization technologies. Comprising off-the-shelf components, the telescopes are designed to provide sub arc-second astrometric accuracy. The design and analysis team are in the process of characterizing the system through development of an error allocation tree whose assessment is supported by simulation, data analysis, and calibration tests. The metric calibration process has revealed 1-2 arc-second biases in the right ascension and declination measurements of reference satellite position, and these have been observed to have fairly distinct characteristics that appear to have some dependence on orbit geometry and tracking rates. The work presented here outlines error models developed to aid in development of the system error budget, and examines characteristic errors (biases, time dependence, etc.) that might be present in each of the relevant system elements used in the data collection and processing, including the metric calibration processing. The relevant reference frames are identified, and include the sensor (CCD camera) reference frame, Earth-fixed topocentric frame, topocentric inertial reference frame, and the geocentric inertial reference frame. The errors modeled in each of these reference frames, when mapped into the topocentric inertial measurement frame, reveal how errors might manifest themselves through the calibration process. The error analysis results that are presented use satellite-sensor geometries taken from periods where actual measurements were collected, and reveal how modeled errors manifest themselves over those specific time periods. These results are compared to the real calibration metric data (right ascension and declination residuals), and sources of the bias are hypothesized. In turn, the actual right ascension and declination calibration residuals are also mapped to other relevant reference frames in an attempt to validate the source of the bias errors. These results will serve as the basis for more focused investigation into specific components embedded in the system and system processes that might contain the source of the observed biases.
Multilevel geometry optimization
NASA Astrophysics Data System (ADS)
Rodgers, Jocelyn M.; Fast, Patton L.; Truhlar, Donald G.
2000-02-01
Geometry optimization has been carried out for three test molecules using six multilevel electronic structure methods, in particular Gaussian-2, Gaussian-3, multicoefficient G2, multicoefficient G3, and two multicoefficient correlation methods based on correlation-consistent basis sets. In the Gaussian-2 and Gaussian-3 methods, various levels are added and subtracted with unit coefficients, whereas the multicoefficient Gaussian-x methods involve noninteger parameters as coefficients. The multilevel optimizations drop the average error in the geometry (averaged over the 18 cases) by a factor of about two when compared to the single most expensive component of a given multilevel calculation, and in all 18 cases the accuracy of the atomization energy for the three test molecules improves, with an average improvement of 16.7 kcal/mol.
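At a single geometry, the multicoefficient idea reduces to a weighted combination of component energies. A toy numerical example with invented energies and illustrative weights (real coefficients are fitted against accurate reference data):

```python
import numpy as np

# Hypothetical component energies (hartree) from three levels of theory
# for one molecule; the labels are only examples.
E_levels = np.array([-76.0107, -76.2412, -76.3089])   # e.g. HF, MP2, CCSD(T)
c = np.array([0.18, -0.32, 1.14])                     # illustrative fitted weights

E_multilevel = np.dot(c, E_levels)                    # weighted combination
print(f"multicoefficient energy: {E_multilevel:.4f} hartree, "
      f"coefficient sum = {c.sum():.2f}")
```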
Error analysis of high-rate GNSS precise point positioning for seismic wave measurement
NASA Astrophysics Data System (ADS)
Shu, Yuanming; Shi, Yun; Xu, Peiliang; Niu, Xiaoji; Liu, Jingnan
2017-06-01
High-rate GNSS precise point positioning (PPP) has been playing an increasingly important role in providing precise positioning information in fast time-varying environments. Although kinematic PPP is commonly known to have a precision of a few centimeters, the precision of high-rate PPP within a short period of time has recently been reported, on the basis of experiments, to reach a few millimeters in the horizontal components and sub-centimeters in the vertical component when measuring seismic motion, which is several times better than conventional kinematic PPP practice. To fully understand the mechanism behind this surprisingly good short-term performance of high-rate PPP, we have carried out a theoretical error analysis of PPP and conducted the corresponding simulations within a short period of time. The theoretical analysis clearly indicates that high-rate PPP errors consist of two types: the residual systematic errors at the starting epoch, which affect high-rate PPP through the change of satellite geometry, and the time-varying systematic errors between the starting epoch and the current epoch. Both the theoretical error analysis and the simulated results are fully consistent with, and thus unambiguously confirm, the reported high precision of high-rate PPP. This is further affirmed here by real data experiments, indicating that high-rate PPP can indeed achieve millimeter-level precision in the horizontal components and sub-centimeter-level precision in the vertical component when measuring motion within a short period of time. The simulation results clearly show that the random noise of carrier phases and higher-order ionospheric errors are the two major factors affecting the precision of high-rate PPP within a short period of time. The experiments with real data also indicate that the precision of PPP solutions can degrade to the cm level in both the horizontal and vertical components if the geometry of satellites is rather poor, with a large DOP value.
Considerations in the design of large space structures
NASA Technical Reports Server (NTRS)
Hedgepeth, J. M.; Macneal, R. H.; Knapp, K.; Macgillivray, C. S.
1981-01-01
Several analytical studies of topics relevant to the design of large space structures are presented. Topics covered are: the types and quantitative evaluation of the disturbances to which large Earth-oriented microwave reflectors would be subjected and the resulting attitude errors of such spacecraft; the influence of errors in the structural geometry on the performance of radio-frequency antennas; the effect of creasing on the flatness of a tensioned reflector membrane surface; and an analysis of the statistics of damage to truss-type structures due to meteoroids.
An Analysis LANDSAT-4 Thematic Mapper Geometric Properties
NASA Technical Reports Server (NTRS)
Walker, R. E.; Zobrist, A. L.; Bryant, N. A.; Gokhman, B.; Friedman, S. Z.; Logan, T. L.
1984-01-01
LANDSAT Thematic Mapper P-data of Washington, D.C., Harrisburg, PA, and Salton Sea, CA are analyzed to determine the magnitudes and causes of error in the geometric conformity of the data to known Earth surface geometry. Several tests of data geometry are performed. Intraband and interband correlation and registration are investigated, exclusive of map-based ground truth. The magnitudes and statistical trends of pixel offsets between a single band's mirror scans (due to processing procedures) are computed, and the interband integrity of registration is analyzed. A line-to-line correlation analysis is included.
Designing a compact high performance brain PET scanner—simulation study
NASA Astrophysics Data System (ADS)
Gong, Kuang; Majewski, Stan; Kinahan, Paul E.; Harrison, Robert L.; Elston, Brian F.; Manjeshwar, Ravindra; Dolinsky, Sergei; Stolin, Alexander V.; Brefczynski-Lewis, Julie A.; Qi, Jinyi
2016-05-01
The desire to understand normal and disordered human brain function of upright, moving persons in natural environments motivates the development of the ambulatory micro-dose brain PET imager (AMPET). An ideal system would be light weight but with high sensitivity and spatial resolution, although these requirements are often in conflict with each other. One potential approach to meet the design goals is a compact brain-only imaging device with a head-sized aperture. However, a compact geometry increases parallax error in peripheral lines of response, which increases bias and variance in region of interest (ROI) quantification. Therefore, we performed simulation studies to search for the optimal system configuration and to evaluate the potential improvement in quantification performance over existing scanners. We used the Cramér-Rao variance bound to compare the performance for ROI quantification using different scanner geometries. The results show that while a smaller ring diameter can increase photon detection sensitivity and hence reduce the variance at the center of the field of view, it can also result in higher variance in peripheral regions when the length of detector crystal is 15 mm or more. This variance can be substantially reduced by adding depth-of-interaction (DOI) measurement capability to the detector modules. Our simulation study also shows that the relative performance depends on the size of the ROI, and a large ROI favors a compact geometry even without DOI information. Based on these results, we propose a compact ‘helmet’ design using detectors with DOI capability. Monte Carlo simulations show the helmet design can achieve four-fold higher sensitivity and resolve smaller features than existing cylindrical brain PET scanners. The simulations also suggest that improving TOF timing resolution from 400 ps to 200 ps also results in noticeable improvement in image quality, indicating better timing resolution is desirable for brain imaging.
Designing a compact high performance brain PET scanner—simulation study
Gong, Kuang; Majewski, Stan; Kinahan, Paul E; Harrison, Robert L; Elston, Brian F; Manjeshwar, Ravindra; Dolinsky, Sergei; Stolin, Alexander V; Brefczynski-Lewis, Julie A; Qi, Jinyi
2016-01-01
The desire to understand normal and disordered human brain function of upright, moving persons in natural environments motivates the development of the ambulatory micro-dose brain PET imager (AMPET). An ideal system would be light weight but with high sensitivity and spatial resolution, although these requirements are often in conflict with each other. One potential approach to meet the design goals is a compact brain-only imaging device with a head-sized aperture. However, a compact geometry increases parallax error in peripheral lines of response, which increases bias and variance in region of interest (ROI) quantification. Therefore, we performed simulation studies to search for the optimal system configuration and to evaluate the potential improvement in quantification performance over existing scanners. We used the Cramér–Rao variance bound to compare the performance for ROI quantification using different scanner geometries. The results show that while a smaller ring diameter can increase photon detection sensitivity and hence reduce the variance at the center of the field of view, it can also result in higher variance in peripheral regions when the length of detector crystal is 15 mm or more. This variance can be substantially reduced by adding depth-of-interaction (DOI) measurement capability to the detector modules. Our simulation study also shows that the relative performance depends on the size of the ROI, and a large ROI favors a compact geometry even without DOI information. Based on these results, we propose a compact ‘helmet’ design using detectors with DOI capability. Monte Carlo simulations show the helmet design can achieve four-fold higher sensitivity and resolve smaller features than existing cylindrical brain PET scanners. The simulations also suggest that improving TOF timing resolution from 400 ps to 200 ps also results in noticeable improvement in image quality, indicating better timing resolution is desirable for brain imaging. PMID:27081753
SABRINA: an interactive solid geometry modeling program for Monte Carlo
DOE Office of Scientific and Technical Information (OSTI.GOV)
West, J.T.
SABRINA is a fully interactive three-dimensional geometry modeling program for MCNP. In SABRINA, a user interactively constructs either body geometry, or surface geometry models, and interactively debugs spatial descriptions for the resulting objects. This enhanced capability significantly reduces the effort in constructing and debugging complicated three-dimensional geometry models for Monte Carlo Analysis.
Quantitative evaluation of statistical errors in small-angle X-ray scattering measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sedlak, Steffen M.; Bruetzel, Linda K.; Lipfert, Jan
A new model is proposed for the measurement errors incurred in typical small-angle X-ray scattering (SAXS) experiments, which takes into account the setup geometry and physics of the measurement process. The model accurately captures the experimentally determined errors from a large range of synchrotron and in-house anode-based measurements. Its most general formulation gives for the variance of the buffer-subtracted SAXS intensity σ²(q) = [I(q) + const.]/(kq), where I(q) is the scattering intensity as a function of the momentum transfer q; k and const. are fitting parameters that are characteristic of the experimental setup. The model gives a concrete procedure for calculating realistic measurement errors for simulated SAXS profiles. In addition, the results provide guidelines for optimizing SAXS measurements, which are in line with established procedures for SAXS experiments, and enable a quantitative evaluation of measurement errors.
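The stated error model is simple enough to apply directly when simulating SAXS profiles. A short sketch, with an invented intensity profile and made-up values for the setup-specific parameters k and const.:

```python
import numpy as np

def saxs_variance(I, q, k, const):
    """Variance model for buffer-subtracted SAXS intensity:
    sigma^2(q) = (I(q) + const) / (k * q)."""
    return (I + const) / (k * q)

# Illustrative intensity decay over q (1/Angstrom); arbitrary units.
q = np.linspace(0.01, 0.5, 50)
I = 1e4 * np.exp(-((q * 30.0) ** 2) / 3.0)   # stand-in I(q)

# k and const. are setup-specific fit parameters; values here are made up.
sigma = np.sqrt(saxs_variance(I, q, k=5e4, const=50.0))
noisy = I + np.random.default_rng(7).normal(0.0, sigma)  # simulated profile
print(noisy[:5].round(1))
```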
Accuracy in GPS/Acoustic positioning on a moored buoy moving around far from the optimal position
NASA Astrophysics Data System (ADS)
Imano, M.; Kido, M.; Ohta, Y.; Takahashi, N.; Fukuda, T.; Ochi, H.; Hino, R.
2015-12-01
For detecting the seafloor crustal deformation and tsunami associated with large earthquakes in real time, it is necessary to monitor them just above the possible source region. For this purpose, we have been developing a real-time continuous observation system using a multi-purpose moored buoy. Sea trials of the system were carried out near the Nankai Trough in 2013 and 2014 (Takahashi et al., 2014). We especially focused on the GPS/Acoustic measurement (GPS/A) component of the system for horizontal crustal movement. GPS/A on a moored buoy has a critical drawback compared with traditional campaign-style measurements, in which data can be stacked over ranging points fixed at an optimal position. Accuracy in positioning with a single ranging from an arbitrary point is the subject to be improved in this study. Here, we report the positioning results for the buoy system using data from the 2014 sea trial and demonstrate the improvement of the result. We also address the potential resolving power of the positioning using synthetic tests. The target GPS/A site consists of six seafloor transponders (PXPs) forming a small inner and a large outer triangle. The bottom of the moored cable is anchored near the center of the triangles. In the sea trial, 11 successive rangings were scheduled once a week, and we plotted positioning results from different buoy positions. We confirmed that the scatter in positioning using all six PXPs simultaneously is ten times smaller than that using each triangle separately. Next, we modified the definition of the PXP array geometry using data obtained in a campaign observation. The definition of the array geometry is insensitive as long as ranging is made from the same position; however, it severely affects the positioning when ranging is made from various positions, as with the moored buoy. The modified PXP array is slightly smaller and 2 m deeper than the original one. We found that the scatter of the positioning results in the sea trial is reduced from 4 m to 1.7 m with the modified geometry. Finally, we produced synthetic data with an artificial error in the array geometry and evaluated its effect on the positioning as a function of ranging point. This is interpreted through the potential resolving power formulated in Kido (2007). In the presentation, we will show the results of synthetic tests for systematic variation of the error condition.
Reconstruction of source location in a network of gravitational wave interferometric detectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cavalier, Fabien; Barsuglia, Matteo; Bizouard, Marie-Anne
2006-10-15
This paper deals with the reconstruction of the direction of a gravitational wave source using the detection made by a network of interferometric detectors, mainly the LIGO and Virgo detectors. We suppose that an event has been seen in coincidence using a filter applied on the three detector data streams. Using the arrival time (and its associated error) of the gravitational signal in each detector, the direction of the source in the sky is computed using a χ² minimization technique. For reasonably large signals (SNR > 4.5 in all detectors), the mean angular error between the real location and the reconstructed one is about 1 deg. We also investigate the effect of the network geometry, assuming the same angular response for all interferometric detectors. It appears that the reconstruction quality is not uniform over the sky and is degraded when the source approaches the plane defined by the three detectors. Adding at least one other detector to the LIGO-Virgo network reduces the blind regions, and in the case of 6 detectors, a precision better than 1 deg. on the source direction can be reached for 99% of the sky.
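A toy version of the χ² arrival-time reconstruction: given detector positions, measured arrival times, and timing errors (all values below invented, detector coordinates only roughly LIGO/Virgo-like), minimize χ² over sky direction assuming a plane wave:

```python
import numpy as np
from scipy.optimize import minimize

C = 299792458.0  # speed of light, m/s

# Hypothetical detector positions (m, Earth-centered frame), measured
# arrival times (s), and 1-sigma timing errors (s).
det = np.array([[-2.16e6, -3.83e6, 4.60e6],    # Hanford-like
                [-7.43e4, -5.50e6, 3.21e6],    # Livingston-like
                [4.55e6, 8.43e5, 4.37e6]])     # Virgo-like
t_obs = np.array([0.0, 0.0021, 0.0049])
sigma_t = np.array([1e-4, 1e-4, 1e-4])

def chi2(sky):
    """Chi-squared of predicted vs measured arrival times for a source
    direction (theta, phi), using plane-wave delays between detectors."""
    th, ph = sky
    n = np.array([np.sin(th) * np.cos(ph), np.sin(th) * np.sin(ph), np.cos(th)])
    t_pred = t_obs[0] + (det[0] - det) @ n / C
    return np.sum(((t_pred - t_obs) / sigma_t) ** 2)

# Multiple random starts to avoid local minima on the sky.
starts = np.random.default_rng(3).uniform([0, 0], [np.pi, 2 * np.pi], (20, 2))
best = min((minimize(chi2, x0) for x0 in starts), key=lambda r: r.fun)
print("theta, phi (deg):", np.degrees(best.x).round(1), " chi2:", round(best.fun, 2))
```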
Shape optimization techniques for musical instrument design
NASA Astrophysics Data System (ADS)
Henrique, Luis; Antunes, Jose; Carvalho, Joao S.
2002-11-01
The design of musical instruments is still mostly based on empirical knowledge and costly experimentation. One interesting improvement is the shape optimization of resonating components, given a number of constraints (allowed parameter ranges, shape smoothness, etc.), so that vibrations occur at specified modal frequencies. Each admissible geometrical configuration generates an error between computed eigenfrequencies and the target set. Typically, error surfaces present many local minima, corresponding to suboptimal designs. This difficulty can be overcome using global optimization techniques, such as simulated annealing. However, these methods are greedy in the number of function evaluations required. Thus, the computational effort can be unacceptable when complex problems, such as bell optimization, are tackled. Those issues are addressed in this paper, and a method for improving optimization procedures is proposed. Instead of using the local geometric parameters as searched variables, the system geometry is modeled in terms of truncated series of orthogonal space-functions, and optimization is performed on their amplitude coefficients. Fourier series and orthogonal polynomials are typical such functions. This technique reduces considerably the number of searched variables, and has a potential for significant computational savings in complex problems. It is illustrated by optimizing the shapes of both current and uncommon marimba bars.
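A sketch of the proposed parameterization under stated assumptions: the searched variables are a few Fourier amplitude coefficients of a thickness profile rather than many local geometric parameters, and a global optimizer (SciPy's dual_annealing, standing in for the paper's simulated annealing) minimizes the tuning error. The Rayleigh-quotient-like frequency proxy and the target frequencies are invented; a real design loop would call a finite element eigensolver:

```python
import numpy as np
from scipy.optimize import dual_annealing

TARGETS = np.array([445.0, 875.0, 1320.0])   # desired modal frequencies, Hz

def eigenfrequencies(coeffs, n_modes=3):
    """Stand-in modal solver: a crude Rayleigh-quotient proxy for a bar
    whose thickness is a truncated Fourier cosine series."""
    x = np.linspace(0.0, 1.0, 400)
    h = 1.0 + sum(c * np.cos(np.pi * (k + 1) * x) for k, c in enumerate(coeffs))
    freqs = []
    for m in range(1, n_modes + 1):
        w = np.sin(np.pi * m * x)             # stand-in mode shape
        stiff = np.trapz(h ** 3 * w ** 2, x)  # bending stiffness ~ h^3
        mass = np.trapz(h * w ** 2, x)        # mass per length ~ h
        freqs.append(440.0 * m * np.sqrt(stiff / mass))
    return np.array(freqs)

def tuning_error(coeffs):
    """Squared mistuning (in cents) between computed and target modes."""
    f = eigenfrequencies(coeffs)
    return np.sum((1200.0 * np.log2(f / TARGETS)) ** 2)

# Search over four Fourier amplitudes instead of hundreds of local
# geometric parameters; bounds keep the thickness strictly positive.
res = dual_annealing(tuning_error, bounds=[(-0.2, 0.2)] * 4, seed=2, maxiter=200)
print("coefficients:", res.x.round(3), " residual error:", round(res.fun, 3))
```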
Accuracy of Gradient Reconstruction on Grids with High Aspect Ratio
NASA Technical Reports Server (NTRS)
Thomas, James
2008-01-01
Gradient approximation methods commonly used in unstructured-grid finite-volume schemes intended for solutions of high Reynolds number flow equations are studied comprehensively. The accuracy of gradients within cells and within faces is evaluated systematically for both node-centered and cell-centered formulations. Computational and analytical evaluations are made on a series of high-aspect-ratio grids with different primal elements, including quadrilateral, triangular, and mixed-element grids, with and without random perturbations to the mesh. Both rectangular and cylindrical geometries are considered; the latter serves to study the effects of geometric curvature. The study shows that the accuracy of gradient reconstruction on high-aspect-ratio grids is determined by a combination of the grid and the solution. The contributors to the error are identified and approaches to reduce errors are given, including the addition of higher-order terms in the direction of larger mesh spacing. A parameter Γ characterizing accuracy on curved high-aspect-ratio grids is discussed, and an approximate-mapped-least-squares method using a commonly available distance function is presented; the method provides accurate gradient reconstruction on general grids. The study is intended to be a reference guide accompanying the construction of accurate and efficient methods for high Reynolds number applications.
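The core least-squares gradient reconstruction is compact. A minimal unweighted sketch on an anisotropic stencil typical of a high-aspect-ratio boundary-layer grid (the stencil and field are illustrative; the paper's mapped variant adds weighting and a distance-function mapping):

```python
import numpy as np

def ls_gradient(xc, uc, xn, un):
    """Unweighted least-squares gradient at a cell: fit du = grad . dx
    over the neighbor stencil. High-aspect-ratio offsets make the system
    ill-conditioned, which is where weighted/mapped variants matter."""
    dX = xn - xc            # (n_neighbors, 2) coordinate offsets
    du = un - uc            # (n_neighbors,) value differences
    g, *_ = np.linalg.lstsq(dX, du, rcond=None)
    return g

# Stencil with spacing 1 in x and 1e-3 in y (aspect ratio 1000).
xc, uc = np.array([0.0, 0.0]), 0.0
xn = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1e-3], [0.0, -1e-3]])
u_exact = lambda p: 2.0 * p[0] + 500.0 * p[1]      # linear test field
un = np.array([u_exact(p) for p in xn])

print(ls_gradient(xc, uc, xn, un))   # recovers [2, 500] for linear data
```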
Masterlark, Timothy; Lu, Zhong; Rykhus, Russell P.
2006-01-01
Interferometric synthetic aperture radar (InSAR) imagery documents the consistent subsidence, during the interval 1992–1999, of a pyroclastic flow deposit (PFD) emplaced during the 1986 eruption of Augustine Volcano, Alaska. We construct finite element models (FEMs) that simulate thermoelastic contraction of the PFD to account for the observed subsidence. Three-dimensional problem domains of the FEMs include a thermoelastic PFD embedded in an elastic substrate. The thickness of the PFD is initially determined from the difference between post- and pre-eruption digital elevation models (DEMs). The initial excess temperature of the PFD at the time of deposition, 640 °C, is estimated from FEM predictions and an InSAR image via standard least-squares inverse methods. Although the FEM predicts the major features of the observed transient deformation, systematic prediction errors (RMSE = 2.2 cm) are most likely associated with errors in the a priori PFD thickness distribution estimated from the DEM differences. We combine an InSAR image, FEMs, and an adaptive mesh algorithm to iteratively optimize the geometry of the PFD with respect to a minimized misfit between the predicted thermoelastic deformation and observed deformation. Prediction errors from an FEM, which includes an optimized PFD geometry and the initial excess PFD temperature estimated from the least-squares analysis, are sub-millimeter (RMSE = 0.3 mm). The average thickness (9.3 m), maximum thickness (126 m), and volume (2.1 × 10^7 m^3) of the PFD, estimated using the adaptive mesh algorithm, are about twice as large as the respective estimations for the a priori PFD geometry. Sensitivity analyses suggest unrealistic PFD thickness distributions are required for initial excess PFD temperatures outside of the range 500–800 °C.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fetterly, K; Mathew, V
Purpose: Transcatheter aortic valve replacement (TAVR) procedures provide a method to implant a prosthetic aortic valve via a minimally invasive, catheter-based procedure. TAVR procedures require use of interventional fluoroscopy c-arm projection angles which are aligned with the aortic valve plane to minimize prosthetic valve positioning error due to x-ray imaging parallax. The purpose of this work is to calculate the continuous range of interventional fluoroscopy c-arm projection angles which are aligned with the aortic valve plane from a single planar image of a valvuloplasty balloon inflated across the aortic valve. Methods: Computational methods to measure the 3D angular orientation of the aortic valve were developed. Required inputs include a planar x-ray image of a known valvuloplasty balloon inflated across the aortic valve and specifications of the x-ray imaging geometry from the DICOM header of the image. A-priori knowledge of the species-specific typical range of aortic orientation is required to specify the sign of the angle of the long axis of the balloon with respect to the x-ray beam. The methods were validated ex-vivo and in a live pig. Results: Ex-vivo experiments demonstrated that the angular orientation of a stationary inflated valvuloplasty balloon can be measured with precision less than 1 degree. In-vivo pig experiments demonstrated that cardiac motion contributed to measurement variability, with precision less than 3 degrees. Error in specification of x-ray geometry directly influences measurement accuracy. Conclusion: This work demonstrates that the 3D angular orientation of the aortic valve can be calculated precisely from a planar image of a valvuloplasty balloon inflated across the aortic valve and known x-ray geometry. This method could be used to determine appropriate c-arm angular projections during TAVR procedures to minimize x-ray imaging parallax and thereby minimize prosthetic valve positioning errors.
Image defects from surface and alignment errors in grazing incidence telescopes
NASA Technical Reports Server (NTRS)
Saha, Timo T.
1989-01-01
The rigid body motions and low frequency surface errors of grazing incidence Wolter telescopes are studied. The analysis is based on the surface error descriptors proposed by Paul Glenn. In his analysis, the alignment and surface errors are expressed in terms of Legendre-Fourier polynomials. Individual terms in the expansion correspond to rigid body motions (decenter and tilt) and low spatial frequency surface errors of the mirrors. With the help of the Legendre-Fourier polynomials and the geometry of grazing incidence telescopes, exact and approximate first-order equations are derived in this paper for the components of the ray intercepts at the image plane. These equations are then used to calculate the sensitivities of Wolter type I and type II telescopes to rigid body motions and surface deformations. The rms spot diameters calculated from this theory and from the OSAC ray-tracing code agree very well. The theory also provides a tool to predict how rigid body motions and surface errors of the mirrors compensate each other.
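In this description a surface error map is a sum of Legendre polynomials in the normalized axial coordinate times Fourier harmonics in azimuth. A minimal sketch of evaluating such an expansion, with illustrative coefficients; the exact normalization used by Glenn is an assumption here:

```python
import numpy as np
from numpy.polynomial import legendre

def lf_surface_error(xbar, phi, a, b):
    """dR(xbar, phi) = sum_{l,m} [a[l,m] cos(m phi) + b[l,m] sin(m phi)] P_l(xbar).
    xbar: axial coordinate scaled to [-1, 1]; phi: azimuth (rad);
    a, b: (lmax+1, mmax+1) coefficient arrays."""
    mmax = a.shape[1] - 1
    dR = np.zeros(np.broadcast(xbar, phi).shape)
    for m in range(mmax + 1):
        dR += legendre.legval(xbar, a[:, m]) * np.cos(m * phi)
        dR += legendre.legval(xbar, b[:, m]) * np.sin(m * phi)
    return dR

# Example: a tilt-like term (l=1, m=1) plus a low-order axial ripple (l=3, m=0).
a = np.zeros((4, 2)); b = np.zeros((4, 2))
a[1, 1] = 50e-9; a[3, 0] = 10e-9             # amplitudes in meters (illustrative)
xbar, phi = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(0, 2 * np.pi, 128))
err_map = lf_surface_error(xbar, phi, a, b)
```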
Error Analysis for High Resolution Topography with Bi-Static Single-Pass SAR Interferometry
NASA Technical Reports Server (NTRS)
Muellerschoen, Ronald J.; Chen, Curtis W.; Hensley, Scott; Rodriguez, Ernesto
2006-01-01
We present a flow-down error analysis from the radar system to topographic height errors for bi-static single-pass SAR interferometry for a satellite tandem pair. Because the baseline length and baseline orientation evolve spatially and temporally with the orbital dynamics, the height accuracy of the system is modeled as a function of the spacecraft position and ground location. Vector sensitivity equations of height and the planar error components due to metrology, media effects, and radar system errors are derived and evaluated globally for a baseline mission. Included in the model are terrain effects that contribute to layover and shadow, and slope effects on height errors. The analysis also accounts for non-overlapping spectra and the non-overlapping bandwidth due to differences between the two platforms' viewing geometries. The model is applied to a 514 km altitude, 97.4 degree inclination tandem satellite mission with a 300 m baseline separation and X-band SAR. Results from our model indicate that global DTED level 3 can be achieved.
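For intuition, the standard repeat-pass InSAR sensitivities give the flavor of such a flow-down: phase noise maps to height through λρ sinθ/(4πB⊥), and a perpendicular-baseline knowledge error rescales height proportionally. A minimal sketch under those textbook assumptions; the bistatic single-pass case changes constants (for a receive-only second platform the 4π becomes 2π), so the numbers are illustrative only:

```python
import numpy as np

def insar_height_errors(wavelength, slant_range, look_deg, b_perp,
                        sigma_phase, sigma_bperp, target_height):
    """First-order height-error terms (textbook repeat-pass form)."""
    theta = np.radians(look_deg)
    # Phase noise (rad) -> height error (m).
    dh_phase = wavelength * slant_range * np.sin(theta) / (4.0 * np.pi * b_perp) * sigma_phase
    # Baseline knowledge error (m) -> height error (m): dh = h * dB / B.
    dh_baseline = target_height * sigma_bperp / b_perp
    return dh_phase, dh_baseline

# Mission-like numbers loosely based on the abstract: X-band, 300 m baseline.
print(insar_height_errors(0.031, 700e3, 35.0, 300.0,
                          sigma_phase=0.3, sigma_bperp=0.002, target_height=500.0))
```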
Hessian matrix approach for determining error field sensitivity to coil deviations.
Zhu, Caoxiang; Hudson, Stuart R.; Lazerson, Samuel A.; ...
2018-03-15
The presence of error fields has been shown to degrade plasma confinement and drive instabilities. Error fields can arise from many sources, but are predominantly attributed to deviations in the coil geometry. In this paper, we introduce a Hessian matrix approach for determining error field sensitivity to coil deviations. A primary cost function used for designing stellarator coils, the surface integral of normalized normal field errors, was adopted to evaluate the deviation of the generated magnetic field from the desired magnetic field. The FOCUS code [Zhu et al., Nucl. Fusion 58(1):016008 (2018)] is utilized to provide fast and accurate calculations of the Hessian. The sensitivities of error fields to coil displacements are then determined by the eigenvalues of the Hessian matrix. A proof-of-principle example is given on a CNT-like configuration. We anticipate that this new method could provide information to avoid dominant coil misalignments and simplify coil designs for stellarators.
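The core idea is that near an optimum the cost grows quadratically, δf ≈ ½ δxᵀHδx, so the largest Hessian eigenvalues flag the most damaging coil-perturbation directions. A minimal sketch using a finite-difference Hessian of a toy quadratic cost standing in for the FOCUS field-error integral (which the paper computes analytically):

```python
import numpy as np

def fd_hessian(f, x0, h=1e-5):
    """Symmetric finite-difference Hessian of a scalar cost f at x0."""
    n = x0.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.eye(n)[i] * h
            ej = np.eye(n)[j] * h
            H[i, j] = (f(x0 + ei + ej) - f(x0 + ei - ej)
                       - f(x0 - ei + ej) + f(x0 - ei - ej)) / (4.0 * h * h)
    return 0.5 * (H + H.T)

# Toy stand-in for the normalized normal-field error as a function of
# coil-parameter deviations (quadratic near the optimum by construction).
A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 0.5], [0.0, 0.5, 1.0]])
cost = lambda x: 0.5 * x @ A @ x

eigval, eigvec = np.linalg.eigh(fd_hessian(cost, np.zeros(3)))
print("most sensitive deviation direction:", eigvec[:, -1],
      "eigenvalue:", eigval[-1])
```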
Grid convergence errors in hemodynamic solution of patient-specific cerebral aneurysms.
Hodis, Simona; Uthamaraj, Susheil; Smith, Andrea L; Dennis, Kendall D; Kallmes, David F; Dragomir-Daescu, Dan
2012-11-15
Computational fluid dynamics (CFD) has become a cutting-edge tool for investigating hemodynamic dysfunctions in the body. It has the potential to help physicians quantify in more detail phenomena that are difficult to capture with in vivo imaging techniques. CFD simulations in anatomically realistic geometries pose challenges in generating accurate solutions due to the grid distortion that may occur when the grid is aligned with complex geometries. In addition, results obtained with computational methods should be trusted only after the solution has been verified on multiple high-quality grids. The objective of this study was to present a comprehensive solution verification of the intra-aneurysmal flow results obtained on different morphologies of patient-specific cerebral aneurysms. We chose five patient-specific brain aneurysm models with different dome morphologies and estimated the grid convergence errors for each model. The grid convergence errors were estimated with respect to an extrapolated solution based on the Richardson extrapolation method, which accounts for the degree of grid refinement. For four of the five models, calculated velocity, pressure, and wall shear stress values at six different spatial locations converged monotonically, with maximum uncertainty magnitudes ranging from 12% to 16% on the finest grids. Due to the geometric complexity of the fifth model, the grid convergence errors showed oscillatory behavior; therefore, each patient-specific model required its own grid convergence study to establish the accuracy of the analysis.
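The Richardson-based error estimate referenced here follows the standard grid-convergence recipe: from solutions on three systematically refined grids, estimate the observed order, extrapolate, and report an uncertainty. A minimal sketch of that procedure (Celik-style grid convergence index; the 1.25 safety factor is the conventional choice, assumed here, and the probe values are placeholders):

```python
import numpy as np

def grid_convergence(f1, f2, f3, r):
    """f1, f2, f3: solutions on fine, medium, coarse grids; r: refinement ratio."""
    p = np.log((f3 - f2) / (f2 - f1)) / np.log(r)         # observed order of accuracy
    f_exact = f1 + (f1 - f2) / (r**p - 1.0)               # Richardson extrapolation
    gci_fine = 1.25 * abs((f1 - f2) / f1) / (r**p - 1.0)  # fine-grid uncertainty
    return p, f_exact, gci_fine

# Illustrative values for a monotonically converging wall-shear-stress probe.
p, f_ext, gci = grid_convergence(2.010, 2.050, 2.160, r=1.5)
print(f"order ~ {p:.2f}, extrapolated value ~ {f_ext:.3f}, GCI ~ {100 * gci:.1f}%")
```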
NASA Astrophysics Data System (ADS)
Zapp, E.; Shelfer, T.; Semones, E.; Johnson, A.; Weyland, M.; Golightly, M.; Smith, G.; Dardano, C.
For roughly the past three decades, combinatorial geometries have been the predominant mode for the development of mass distribution models associated with the estimation of radiological risk for manned space flight. Examples of these are the MEVDP (Modified Elemental Volume Dose Program) vehicle representation of Liley and Hamilton, and the quadratic functional representation of the CAM/CAF (Computerized Anatomical Male/Female) human body models as modified by Billings and Yucker. These geometries have the advantageous characteristics of being simple for a familiarized user to maintain, and because of the relative lack of any operating system or run-time library dependence, they are also easy to transfer from one computing platform to another. Unfortunately, they are also limited in the amount of modeling detail possible, owing to the abstract geometric representation. In addition, combinatorial representations are known to be error-prone in practice, since there is no convenient method for error identification (i.e., overlap, etc.), and extensive calculation and/or manual comparison is often necessary to demonstrate that the geometry is adequately represented. We present an alternate approach linking materials-specific, CAD-based mass models directly to geometric analysis tools, requiring no approximation with respect to materials, nor any meshing (i.e., tessellation) of the representative geometry. A new approach to ray tracing is presented which makes use of the fundamentals of the CAD representation to perform geometric analysis directly on the NURBS (Non-Uniform Rational B-Spline) surfaces themselves. In this way we achieve a framework for the rapid, precise development and analysis of materials-specific mass distribution models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benedicto, A.; Labaume, P.; Seranne, M.
1995-08-01
Fault reconstruction techniques commonly assume a horizontal pre-rift datum to calculate fault geometry from hanging-wall geometry or vice versa. An example from the Camargue basin shows that neglecting pre-rift relief may lead to important errors in calculating the fault and hanging-wall geometries, and the total extension. These errors have direct implications for reconstruction of the thermal history of basins. The Camargue basin results from NW-SE extension and rifting of the Gulf of Lion passive margin. More than 4000 m of Oligo-Aquitanian syn-rift series unconformably overlie a crust previously thickened during the Pyrenean orogeny. The half-graben basin is controlled by the SE-dipping listric Nîmes basement fault, which generated a typical roll-over. As both fault and hanging-wall geometries are constrained, the pre-rift surface topography can be restored using three reconstruction techniques. Both the constant-bed-length and constant-heave techniques produce a depression in the axis of the basin and a relief (1500 m and 1200 m, respectively) atop the roll-over. The simple-shear (α = 60°) technique generates a 1500 m topography atop the roll-over, more coherent with regional data. Testing the hypothesis of a pre-rift horizontal datum leads to a roll-over 1400 m too deep. The pre-rift surface elevation corresponds to the residual topography inherited from the Pyrenean orogeny. Consequently, there has been some 1000 m more subsidence than predicted by the syn-rift sedimentary record.
NASA Astrophysics Data System (ADS)
Radziszewski, Kacper
2017-10-01
The following paper presents the results of research in the field of machine learning, investigating the scope of application of artificial neural network algorithms as a tool in architectural design. The computational experiment was carried out using the backward propagation of errors method to train an artificial neural network on the geometry of the details of a Roman Corinthian order capital. During the experiment, a combination of five local geometry parameters used as the input training set gave the best results: Theta, Phi, and Rho in a spherical coordinate system based on the capital volume centroid, followed by the Z value of the Cartesian coordinate system and the distance from vertical planes created based on the capital symmetry. Additionally, an optimal count and structure of the network's hidden layers was found, giving errors below 0.2% for the input parameters mentioned above. Once successfully trained, the artificial network was able to mimic the composition of the details on any other geometry type given. Despite calculating the transformed geometry locally and separately for each of thousands of surface points, the system could create visually attractive, diverse, and complex patterns. The designed tool, based on the supervised learning method of machine learning, makes it possible to generate new architectural forms free of the bounds of the designer's imagination. Implementing the infinitely broad computational methods of machine learning, or artificial intelligence in general, could not only accelerate and simplify the design process, but also give an opportunity to explore never-seen-before, unpredictable forms for everyday architectural practice.
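In the spirit of the experiment, a small sketch: a multilayer perceptron trained with backpropagation to map the five local geometry parameters to a surface displacement. The data here are random placeholders and the layer sizes are illustrative, not the configuration found optimal in the paper:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)

# Placeholder training set: (theta, phi, rho, z, plane_distance) -> displacement.
X = rng.uniform(-1.0, 1.0, size=(5000, 5))
y = 0.1 * np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1]) + 0.05 * X[:, 4]  # stand-in detail field

model = MLPRegressor(hidden_layer_sizes=(64, 64), activation="tanh",
                     solver="adam", max_iter=2000, random_state=0)
model.fit(X, y)

# "Mimic the details" on new geometry: evaluate the trained net at new points.
X_new = rng.uniform(-1.0, 1.0, size=(10, 5))
print(model.predict(X_new))
```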
Generated spiral bevel gears: Optimal machine-tool settings and tooth contact analysis
NASA Technical Reports Server (NTRS)
Litvin, F. L.; Tsung, W. J.; Coy, J. J.; Heine, C.
1985-01-01
Geometry and kinematic errors were studied for Gleason generated spiral bevel gears. A new method was devised for choosing optimal machine settings. These settings provide zero kinematic errors and an improved bearing contact. The kinematic errors are a major source of noise and vibration in spiral bevel gears. The improved bearing contact gives improved conditions for lubrication. A computer program for tooth contact analysis was developed, and thereby the new generation process was confirmed. The new process is governed by the requirement that during the generation process there is directional constancy of the common normal of the contacting surfaces for generator and generated surfaces of pinion and gear.
Bounded Error Schemes for the Wave Equation on Complex Domains
NASA Technical Reports Server (NTRS)
Abarbanel, Saul; Ditkowski, Adi; Yefet, Amir
1998-01-01
This paper considers the application of the method of boundary penalty terms ("SAT") to the numerical solution of the wave equation on complex shapes with Dirichlet boundary conditions. A theory is developed, in a semi-discrete setting, that allows the use of a Cartesian grid on complex geometries, yet maintains the order of accuracy with only a linear temporal error bound. A numerical example involving the solution of Maxwell's equations inside a 2-D circular waveguide demonstrates the efficacy of this method in comparison to others (e.g., the staggered Yee scheme): we achieve a decrease of two orders of magnitude in the level of the L2-error.
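To illustrate the penalty idea in its simplest setting, here is a sketch of an SBP-SAT semi-discretization for 1-D advection, an analogue chosen for brevity rather than the authors' wave-equation scheme: the boundary condition is enforced weakly through a penalty term scaled by the inverse norm matrix.

```python
import numpy as np

N, L, a = 200, 1.0, 1.0
h = L / N
x = np.linspace(0.0, L, N + 1)

# Second-order SBP first-derivative operator D = H^{-1} Q.
H = h * np.eye(N + 1); H[0, 0] = H[-1, -1] = h / 2.0
Q = 0.5 * (np.diag(np.ones(N), 1) - np.diag(np.ones(N), -1))
Q[0, 0], Q[-1, -1] = -0.5, 0.5
D = np.linalg.solve(H, Q)

e0 = np.zeros(N + 1); e0[0] = 1.0
sat_vec = np.linalg.solve(H, e0)   # H^{-1} e_0, the boundary penalty vector
tau = a                            # penalty strength; tau >= a/2 gives an energy estimate

def rhs(u, t):
    g = np.exp(-200.0 * (0.25 + a * t) ** 2)   # exact inflow data u(0, t)
    return -a * (D @ u) + tau * sat_vec * (g - u[0])

# Advect a Gaussian pulse with classical RK4 time stepping.
u = np.exp(-200.0 * (x - 0.25) ** 2)
dt, T = 0.4 * h / a, 0.5
for n in range(int(T / dt)):
    t = n * dt
    k1 = rhs(u, t); k2 = rhs(u + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = rhs(u + 0.5 * dt * k2, t + 0.5 * dt); k4 = rhs(u + dt * k3, t + dt)
    u += dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

print("max error vs exact solution:",
      np.abs(u - np.exp(-200.0 * (x - 0.25 - a * T) ** 2)).max())
```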
Multifrequency Aperture-Synthesizing Microwave Radiometer System (MFASMR). Volume 2: Appendix
NASA Technical Reports Server (NTRS)
Wiley, C. A.; Chang, M. U.
1981-01-01
A number of topics supporting the systems analysis of a multifrequency aperture-synthesizing microwave radiometer system are discussed. Fellgett's (multiple) advantage, interferometer mapping behavior, mapping geometry, image processing programs, and sampling errors are among the topics discussed. A FORTRAN program code is given.
Simulating Irregular Source Geometries for Ionian Plumes
NASA Astrophysics Data System (ADS)
McDoniel, W. J.; Goldstein, D. B.; Varghese, P. L.; Trafton, L. M.; Buchta, D. A.; Freund, J.; Kieffer, S. W.
2011-05-01
Volcanic plumes on Io represent a complex rarefied flow into a near-vacuum in the presence of gravity. A 3D Direct Simulation Monte Carlo (DSMC) method is used to investigate the gas dynamics of such plumes, with a focus on the effects of source geometry on far-field deposition patterns. A rectangular slit and a semicircular half annulus are simulated to illustrate general principles, especially the effects of vent curvature on deposition ring structure. Then two possible models for the giant plume Pele are presented. One is a curved line source corresponding to an IR image of a particularly hot region in the volcano's caldera, and the other is a large area source corresponding to the entire caldera. The former reproduces the features seen in observations of Pele's ring, but with an error in orientation. The latter corrects the error in orientation, but loses some structure. A hybrid simulation of 3D slit flow is also discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Law, P.R.
In Matsushita (J. Math. Phys. 22, 979–982 (1981); ibid. 24, 36–40 (1983)), an analog of the Petrov classification of curvature endomorphisms for the pseudo-Euclidean space R^{2,2} is provided as a basis for applications to neutral Einstein metrics on compact, orientable, four-dimensional manifolds. This paper points out flaws in Matsushita's classification and, moreover, that an error in Chern's Gauss-Bonnet formula for pseudo-Riemannian geometry ("Pseudo-Riemannian geometry and the Gauss-Bonnet formula," Acad. Brasileira Ciencias 35, 17–26 (1963); and Shiing-Shen Chern: Selected Papers (Springer-Verlag, New York, 1978)) was incorporated in Matsushita's subsequent analysis. A self-contained account of the subject of the title is presented to correct these errors, including a discussion of the validity of an appropriate analog of the Thorpe-Hitchin inequality of the Riemannian case. When the inequality obtains in the neutral case, the Euler characteristic is nonpositive, in contradistinction to Matsushita's deductions.
Improved imaging algorithm for bridge crack detection
NASA Astrophysics Data System (ADS)
Lu, Jingxiao; Song, Pingli; Han, Kaihong
2012-04-01
This paper presents an improved imaging algorithm for bridge crack detection. By optimizing the eight-direction Sobel edge detection operator, the positioning of edge points becomes more accurate than without the optimization, and false edge information is effectively reduced, which facilitates follow-up processing. In calculating the crack geometry characteristics, we use a skeleton extraction method to measure the length of a single crack. In order to calculate the crack area, we construct an area template by applying a logical bitwise AND operation to the crack image. Experiments show that the errors between the crack detection method and actual manual measurement are within an acceptable range and meet the needs of engineering applications. This algorithm is fast and effective for automated crack measurement, and it can provide more valid data for proper planning and appropriate performance of bridge maintenance and rehabilitation processes.
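A minimal sketch of the directional-Sobel idea: convolve with Sobel kernels rotated through eight orientations, keep the maximum response per pixel, and threshold. The kernels beyond the standard pair and the threshold value are illustrative assumptions, not the paper's optimized operator:

```python
import numpy as np
from scipy import ndimage

k0 = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)   # 0 deg (vertical edges)
k45 = np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]], float)  # 45 deg
kernels = [k0, k0.T, k45, k45.T[::-1]]                       # 0, 90, 45, 135 deg

def eight_direction_sobel(img, thresh=0.25):
    img = img.astype(float) / img.max()
    # Absolute responses in 4 orientations; +/- signs cover all 8 directions.
    resp = np.max([np.abs(ndimage.convolve(img, k)) for k in kernels], axis=0)
    return resp > thresh * resp.max()

# Synthetic test image: a thin dark crack on a bright background.
img = np.full((64, 64), 200.0)
rr = np.arange(64); img[rr, np.clip(20 + rr // 4, 0, 63)] = 40.0
edges = eight_direction_sobel(img)
# Crack length can then be estimated from a skeleton of `edges`
# (e.g., skimage.morphology.skeletonize) and area from pixel counts.
```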
Two-dimensional mesh embedding for Galerkin B-spline methods
NASA Technical Reports Server (NTRS)
Shariff, Karim; Moser, Robert D.
1995-01-01
A number of advantages result from using B-splines as basis functions in a Galerkin method for solving partial differential equations. Among them are arbitrary order of accuracy and high resolution similar to that of compact schemes but without the aliasing error. This work develops another property, namely, the ability to treat semi-structured embedded or zonal meshes for two-dimensional geometries. This can drastically reduce the number of grid points in many applications. Both integer and non-integer refinement ratios are allowed. The report begins by developing an algorithm for choosing basis functions that yield the desired mesh resolution. These functions are suitable products of one-dimensional B-splines. Finally, test cases for linear scalar equations such as the Poisson and advection equation are presented. The scheme is conservative and has uniformly high order of accuracy throughout the domain.
Modified Involute Helical Gears: Computerized Design, Simulation of Meshing, and Stress Analysis
NASA Technical Reports Server (NTRS)
Handschuh, Robert (Technical Monitor); Litvin, Faydor L.; Gonzalez-Perez, Ignacio; Carnevali, Luca; Kawasaki, Kazumasa; Fuentes-Aznar, Alfonso
2003-01-01
The computerized design, methods for generation, simulation of meshing, and enhanced stress analysis of modified involute helical gears is presented. The approaches proposed for modification of conventional involute helical gears are based on conjugation of double-crowned pinion with a conventional helical involute gear. Double-crowning of the pinion means deviation of cross-profile from an involute one and deviation in longitudinal direction from a helicoid surface. Using the method developed, the pinion-gear tooth surfaces are in point-contact, the bearing contact is localized and oriented longitudinally, and edge contact is avoided. Also, the influence of errors of alignment on the shift of bearing contact, vibration, and noise are reduced substantially. The theory developed is illustrated with numerical examples that confirm the advantages of the gear drives of the modified geometry in comparison with conventional helical involute gears.
XUV coherent diffraction imaging in reflection geometry with low numerical aperture.
Zürch, Michael; Kern, Christian; Spielmann, Christian
2013-09-09
We present an experimental realization of coherent diffraction imaging in reflection geometry, illuminating the sample with a laser-driven high harmonic generation (HHG) based XUV source. After recording the diffraction pattern in reflection geometry, the data must be corrected before the image can be reconstructed with a hybrid input-output (HIO) algorithm. In this paper we present a detailed investigation of the sources of degradation in the reconstructed image: the nonlinear momentum transfer, errors in estimating the angle of incidence on the sample, and distortions caused by placing the image off center in the computation grid. Finally, we provide guidelines for the parameters necessary to realize a satisfactory reconstruction with a spatial resolution in the range of one micron for an imaging scheme with a numerical aperture NA < 0.03.
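For reference, the HIO reconstruction loop alternates between enforcing the measured Fourier magnitudes and a support constraint, with feedback parameter β. A minimal sketch of the standard Fienup iteration; the reflection-geometry remapping that must precede it is omitted here:

```python
import numpy as np

def hio(magnitude, support, beta=0.9, n_iter=500, seed=0):
    """Fienup hybrid input-output phase retrieval.
    magnitude: measured (corrected) Fourier modulus; support: boolean object mask."""
    rng = np.random.default_rng(seed)
    g = rng.random(magnitude.shape) * support
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = magnitude * np.exp(1j * np.angle(G))   # impose the measured modulus
        g_new = np.real(np.fft.ifft2(G))
        # HIO update: accept g_new inside the support, damp it outside.
        g = np.where(support, g_new, g - beta * g_new)
    return g

# Tiny self-consistent demo: recover a synthetic object from its own modulus.
obj = np.zeros((64, 64)); obj[24:40, 28:36] = 1.0
support = np.zeros((64, 64), dtype=bool); support[20:44, 24:40] = True
rec = hio(np.abs(np.fft.fft2(obj)), support)
```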
Lidar arc scan uncertainty reduction through scanning geometry optimization
Wang, Hui; Barthelmie, Rebecca J.; Pryor, Sara C.; ...
2016-04-13
Doppler lidars are frequently operated in a mode referred to as arc scans, wherein the lidar beam scans across a sector with a fixed elevation angle and the resulting measurements are used to derive an estimate of the n minute horizontal mean wind velocity (speed and direction). Previous studies have shown that the uncertainty in the measured wind speed originates from turbulent wind fluctuations and depends on the scan geometry (the arc span and the arc orientation). This paper is designed to provide guidance on optimal scan geometries for two key applications in the wind energy industry: wind turbine power performance analysis and annual energy production prediction. We present a quantitative analysis of the retrieved wind speed uncertainty derived using a theoretical model with the assumption of isotropic and frozen turbulence, and observations from three sites that are onshore with flat terrain, onshore with complex terrain and offshore, respectively. The results from both the theoretical model and observations show that the uncertainty scales with the turbulence intensity such that the relative standard error on the 10 min mean wind speed is about 30% of the turbulence intensity. The uncertainty in both retrieved wind speeds and derived wind energy production estimates can be reduced by aligning lidar beams with the dominant wind direction, increasing the arc span and lowering the number of beams per arc scan. As a result, large arc spans should be used at sites with high turbulence intensity and/or large wind direction variation.
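The arc-scan retrieval itself is a small least-squares problem: each beam measures the projection of the horizontal wind onto its pointing direction, and the sector of radial velocities is inverted for (u, v). A minimal sketch assuming negligible vertical velocity; the synthetic arc below is illustrative:

```python
import numpy as np

def arc_scan_wind(azimuths_deg, radial_speeds, elevation_deg):
    """Least-squares horizontal wind (u: east, v: north) from one arc scan."""
    az = np.radians(azimuths_deg)
    cos_el = np.cos(np.radians(elevation_deg))
    # Radial speed model: v_r = (u sin(az) + v cos(az)) * cos(el).
    A = np.column_stack([np.sin(az), np.cos(az)]) * cos_el
    (u, v), *_ = np.linalg.lstsq(A, radial_speeds, rcond=None)
    speed = np.hypot(u, v)
    direction = np.degrees(np.arctan2(-u, -v)) % 360.0  # meteorological convention
    return speed, direction

# Synthetic arc: a 30 degree span roughly centered on the mean wind direction.
az = np.linspace(255.0, 285.0, 11)
u_true, v_true = 8.0, 3.0
vr = (u_true * np.sin(np.radians(az)) + v_true * np.cos(np.radians(az))) \
     * np.cos(np.radians(5.0))
print(arc_scan_wind(az, vr, elevation_deg=5.0))
```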
Ziegelwanger, Harald; Majdak, Piotr; Kreuzer, Wolfgang
2015-01-01
Head-related transfer functions (HRTFs) can be numerically calculated by applying the boundary element method to the geometry of a listener's head and pinnae. The calculation results are defined by geometrical, numerical, and acoustical parameters such as the microphone used in acoustic measurements. The scope of this study was to estimate requirements on the size and position of the microphone model and on the discretization of the boundary geometry as a triangular polygon mesh for accurate sound localization. The evaluation involved the analysis of localization errors predicted by a sagittal-plane localization model, the comparison of equivalent head radii estimated by a time-of-arrival model, and the analysis of actual localization errors obtained in a sound-localization experiment. While the average edge length (AEL) of the mesh had a negligible effect on localization performance in the lateral dimension, the localization performance in sagittal planes degraded for larger AELs, with the geometrical error as the dominant factor. A microphone position at an arbitrary position at the entrance of the ear canal, a microphone size of 1 mm radius, and a mesh with 1 mm AEL yielded a localization performance similar to or better than that observed with acoustically measured HRTFs.
Topology of modified helical gears and Tooth Contact Analysis (TCA) program
NASA Technical Reports Server (NTRS)
Litvin, Faydor L.; Zhang, Jiao
1989-01-01
The contents of this report covers: (1) development of optimal geometries for crowned helical gears; (2) a method for their generation; (3) tooth contact analysis (TCA) computer programs for the analysis of meshing and bearing contact of the crowned helical gears; and (4) modelling and simulation of gear shaft deflection. The developed method for synthesis was used to determine the optimal geometry for a crowned helical pinion surface and was directed to localize the bearing contact and guarantee favorable shape and a low level of transmission errors. Two new methods for generation of the crowned helical pinion surface are proposed. One is based on the application of a tool with a surface of revolution that slightly deviates from a regular cone surface. The tool can be used as a grinding wheel or as a shaver. The other is based on a crowning pinion tooth surface with predesigned transmission errors. The pinion tooth surface can be generated by a computer-controlled automatic grinding machine. The TCA program simulates the meshing and bearing contact of the misaligned gears. The transmission errors are also determined. The gear shaft deformation was modelled and investigated. It was found that the deflection of gear shafts has the same effect as gear misalignment.
A dose error evaluation study for 4D dose calculations
NASA Astrophysics Data System (ADS)
Milz, Stefan; Wilkens, Jan J.; Ullrich, Wolfgang
2014-10-01
Previous studies have shown that respiration induced motion is not negligible for Stereotactic Body Radiation Therapy. The intrafractional breathing induced motion influences the delivered dose distribution on the underlying patient geometry such as the lung or the abdomen. If a static geometry is used, a planning process for these indications does not represent the entire dynamic process. The quality of a full 4D dose calculation approach depends on the dose coordinate transformation process between deformable geometries. This article provides an evaluation study that introduces an advanced method to verify the quality of numerical dose transformation generated by four different algorithms. The transformation metric used is based on the deviation of the dose mass histogram (DMH) and the mean dose throughout dose transformation. The study compares the results of four algorithms. In general, two elementary approaches are used: dose mapping and energy transformation. Dose interpolation (DIM) and an advanced concept, the so-called divergent dose mapping model (dDMM), are used for dose mapping. The algorithms are compared to the basic energy transformation model (bETM) and the energy mass congruent mapping (EMCM). For evaluation, 900 small sample regions of interest (ROI) are generated inside an exemplary lung geometry (4DCT). A homogeneous fluence distribution is assumed for dose calculation inside the ROIs. The dose transformations are performed with the four different algorithms. The study investigates the DMH metric and the mean dose metric for different scenarios (voxel sizes: 8 mm, 4 mm, 2 mm, 1 mm; 9 different breathing phases). dDMM achieves the best transformation accuracy in all measured test cases, with 3-5% lower errors than the other models. The results of dDMM are reasonable and most efficient in this study, although the model is simple and easy to implement. The EMCM model also achieved suitable results, but the approach requires a more complex programming structure. The study discloses disadvantages for the bETM and for the DIM: DIM yielded insufficient results for large voxel sizes, while bETM is prone to errors for small voxel sizes.
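The dose mass histogram used as the transformation metric is the mass-weighted analogue of a DVH: each voxel contributes its mass rather than its volume. A minimal sketch of computing a cumulative DMH; all array contents are placeholders:

```python
import numpy as np

def cumulative_dmh(dose, density, voxel_volume_cm3, dose_bins):
    """Fraction of total mass receiving at least each bin dose."""
    mass = density * voxel_volume_cm3            # per-voxel mass
    total = mass.sum()
    return np.array([mass[dose >= d].sum() / total for d in dose_bins])

rng = np.random.default_rng(1)
dose = rng.gamma(4.0, 0.5, size=10000)           # placeholder dose values (Gy)
density = rng.uniform(0.2, 1.1, size=10000)      # lung-like densities (g/cm^3)
bins = np.linspace(0.0, 8.0, 33)
dmh = cumulative_dmh(dose, density, 0.008, bins)
# Comparing DMHs (and mean dose) before and after transformation quantifies
# how well an algorithm conserves dose-mass, in the spirit of the study's metric.
```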
Diagnostic x-ray dosimetry using Monte Carlo simulation.
Ioppolo, J L; Price, R I; Tuchyna, T; Buckley, C E
2002-05-21
An Electron Gamma Shower version 4 (EGS4) based user code was developed to simulate the absorbed dose in humans during routine diagnostic radiological procedures. Measurements of absorbed dose using thermoluminescent dosimeters (TLDs) were compared directly with EGS4 simulations of absorbed dose in homogeneous, heterogeneous and anthropomorphic phantoms. Realistic voxel-based models characterizing the geometry of the phantoms were used as input to the EGS4 code. The voxel geometry of the anthropomorphic Rando phantom was derived from a CT scan of Rando. The 100 kVp diagnostic energy x-ray spectra of the apparatus used to irradiate the phantoms were measured and provided as input to the EGS4 code. The TLDs were placed at evenly spaced points symmetrically about the central beam axis, which was perpendicular to the cathode-anode x-ray axis, at a number of depths. The TLD measurements in the homogeneous and heterogeneous phantoms were on average within 7% of the values calculated by EGS4. Estimates of effective dose with errors less than 10% required fewer photon histories (1 × 10⁷) than required for the calculation of dose profiles (1 × 10⁹). The EGS4 code was able to satisfactorily predict absorbed dose and thereby provide an instrument for reducing the effective dose imparted to patients and staff during radiological investigations.
Investigating the effects of PDC cutters geometry on ROP using the Taguchi technique
NASA Astrophysics Data System (ADS)
Jamaludin, A. A.; Mehat, N. M.; Kamaruddin, S.
2017-10-01
At times, the performance of the polycrystalline diamond compact (PDC) bit drops and affects the rate of penetration (ROP). The objective of this project is to investigate the effect of PDC cutter geometry and to optimize it. An intensive study of cutter geometry can further enhance ROP performance. A relatively extended analysis was carried out, and four significant geometry factors that directly improve ROP were identified: cutter size, back rake angle, side rake angle, and chamfer angle. An appropriate optimization technique that effectively controls all influential geometry factors during cutter manufacturing is introduced and adopted in this project. By adopting an L9 Taguchi orthogonal array, a simulation experiment is conducted using explicit dynamics finite element analysis. Through a structured Taguchi analysis, ANOVA confirms that the most significant geometry factor for improving ROP is cutter size (99.16% percentage contribution). The optimized cutter is expected to drill with a high ROP that can reduce rig time, which in turn may reduce the total drilling cost.
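A sketch of the Taguchi bookkeeping: assign the four three-level factors to the columns of an L9 array, convert each run's ROP to a larger-the-better signal-to-noise ratio, and apportion variance by factor. The response values are placeholders, not the study's simulation results:

```python
import numpy as np

# Standard L9 (3^4) orthogonal array: rows = runs, columns = factor levels (0-2).
L9 = np.array([[0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
               [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
               [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0]])
factors = ["cutter size", "back rake", "side rake", "chamfer"]

rop = np.array([12.1, 13.4, 14.0, 16.2, 17.1, 15.8, 19.5, 18.9, 20.3])  # placeholders
sn = -10.0 * np.log10(np.mean(1.0 / rop[:, None] ** 2, axis=1))  # larger-the-better S/N

grand = sn.mean()
ss = []
for c in range(4):
    level_means = np.array([sn[L9[:, c] == lv].mean() for lv in range(3)])
    ss.append(3.0 * np.sum((level_means - grand) ** 2))  # 3 runs per level
for name, s in zip(factors, ss):
    print(f"{name}: {100.0 * s / sum(ss):.1f}% contribution")
```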
Latin-square three-dimensional gage master
Jones, L.
1981-05-12
A gage master for coordinate measuring machines has an n×n array of objects distributed in the Z coordinate utilizing the concept of a Latin square experimental design. Using analysis-of-variance techniques, the invention may be used to identify sources of error in machine geometry and quantify machine accuracy.
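The Latin-square idea assigns each of n height values so that every height appears exactly once in each row and column, letting row, column, and height effects be separated by ANOVA. A minimal sketch of constructing such an arrangement (a cyclic square; the patented artifact's actual layout is not specified here):

```python
import numpy as np

def cyclic_latin_square(n):
    """n x n Latin square: entry (i, j) = (i + j) mod n."""
    i, j = np.indices((n, n))
    return (i + j) % n

n = 4
heights_mm = np.array([0.0, 5.0, 10.0, 15.0])   # illustrative Z levels
square = heights_mm[cyclic_latin_square(n)]
print(square)
# Measuring each object and fitting row, column, and height effects by
# ANOVA separates axis-dependent machine-geometry errors from noise.
```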
NASA Astrophysics Data System (ADS)
Tarakeshwar, P.; Kim, Kwang S.; Kraka, Elfi; Cremer, Dieter
2001-10-01
The van der Waals complexes benzene-argon (BAr), fluorobenzene-argon (FAr), and p-difluorobenzene-argon (DAr) are investigated at the second-order Møller-Plesset (MP2) level of theory using the 6-31+G(d), cc-pVDZ, aug-cc-pVTZ, and [7s4p2d1f/4s3p1d/3s1p] basis sets. Geometries, binding energies, harmonic vibrational frequencies, and density distributions are calculated, where basis set superposition errors are corrected with the counterpoise method. Binding energies turn out to be almost identical (MP2/[7s4p2d1f/4s3p1d/3s1p]: 408, 409, 408 cm⁻¹) for BAr, FAr, and DAr. Vibrationally corrected binding energies (357, 351, 364 cm⁻¹) agree well with experimental values (340, 344, and 339 cm⁻¹). Symmetry-adapted perturbation theory (SAPT) is used to decompose binding energies and to examine the influence of attractive and repulsive components. Fluorine substituents lead to a contraction of the π density of the benzene ring, thus reducing the destabilizing exchange-repulsion and exchange-induction effects. At the same time, both the polarizing power and the polarizability of the π density of the benzene derivative decrease, thus reducing the stabilizing induction and dispersion interactions. Stabilizing and destabilizing interactions largely cancel each other out to give comparable binding energies. The equilibrium geometry of the Ar complex is also a result of the decisive influence of exchange-repulsion and dispersive interactions.
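For reference, the counterpoise correction evaluates every fragment energy in the full dimer basis; in the usual Boys-Bernardi form (stated here from standard usage, not from the paper):

```latex
% Superscripts denote the basis set, subscripts the system,
% all energies evaluated at the dimer geometry.
\Delta E_{\mathrm{int}}^{\mathrm{CP}}
  = E_{AB}^{AB} - E_{A}^{AB} - E_{B}^{AB},
\qquad
\mathrm{BSSE}
  = \left(E_{A}^{A} - E_{A}^{AB}\right) + \left(E_{B}^{B} - E_{B}^{AB}\right).
```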
NASA Astrophysics Data System (ADS)
Zou, Z.; Scott, M. A.; Borden, M. J.; Thomas, D. C.; Dornisch, W.; Brivadis, E.
2018-05-01
In this paper we develop the isogeometric Bézier dual mortar method. It is based on Bézier extraction and projection and is applicable to any spline space which can be represented in Bézier form (i.e., NURBS, T-splines, LR-splines, etc.). The approach weakly enforces the continuity of the solution at patch interfaces, and the error can be adaptively controlled by leveraging the refineability of the underlying dual spline basis without introducing any additional degrees of freedom. We also develop weakly continuous geometry as a particular application of isogeometric Bézier dual mortaring. Weakly continuous geometry is a geometry description where the weak continuity constraints are built into properly modified Bézier extraction operators. As a result, multi-patch models can be processed in a solver directly without having to employ a mortaring solution strategy. We demonstrate the utility of the approach on several challenging benchmark problems. Keywords: Mortar methods, Isogeometric analysis, Bézier extraction, Bézier projection
Bulk entanglement gravity without a boundary: Towards finding Einstein's equation in Hilbert space
NASA Astrophysics Data System (ADS)
Cao, ChunJun; Carroll, Sean M.
2018-04-01
We consider the emergence from quantum entanglement of spacetime geometry in a bulk region. For certain classes of quantum states in an appropriately factorized Hilbert space, a spatial geometry can be defined by associating areas along codimension-one surfaces with the entanglement entropy between either side. We show how Radon transforms can be used to convert these data into a spatial metric. Under a particular set of assumptions, the time evolution of such a state traces out a four-dimensional spacetime geometry, and we argue using a modified version of Jacobson's "entanglement equilibrium" that the geometry should obey Einstein's equation in the weak-field limit. We also discuss how entanglement equilibrium is related to a generalization of the Ryu-Takayanagi formula in more general settings, and how quantum error correction can help specify the emergence map between the full quantum-gravity Hilbert space and the semiclassical limit of quantum fields propagating on a classical spacetime.
Puzzles in modern biology. V. Why are genomes overwired?
Frank, Steven A
2017-01-01
Many factors affect eukaryotic gene expression. Transcription factors, histone codes, DNA folding, and noncoding RNA modulate expression. Those factors interact in large, broadly connected regulatory control networks. An engineer following classical principles of control theory would design a simpler regulatory network. Why are genomes overwired? Neutrality or enhanced robustness may lead to the accumulation of additional factors that complicate network architecture. Dynamics progresses like a ratchet. New factors get added. Genomes adapt to the additional complexity. The newly added factors can no longer be removed without significant loss of fitness. Alternatively, highly wired genomes may be more malleable. In large networks, most genomic variants tend to have a relatively small effect on gene expression and trait values. Many small effects lead to a smooth gradient, in which traits may change steadily with respect to underlying regulatory changes. A smooth gradient may provide a continuous path from a starting point up to the highest peak of performance. A potential path of increasing performance promotes adaptability and learning. Genomes gain by the inductive process of natural selection, a trial and error learning algorithm that discovers general solutions for adapting to environmental challenge. Similarly, deeply and densely connected computational networks gain by various inductive trial and error learning procedures, in which the networks learn to reduce the errors in sequential trials. Overwiring alters the geometry of induction by smoothing the gradient along the inductive pathways of improving performance. Those overwiring benefits for induction apply to both natural biological networks and artificial deep learning networks.
Factors Governing Surface Form Accuracy In Diamond Machined Components
NASA Astrophysics Data System (ADS)
Myler, J. K.; Page, D. A.
1988-10-01
Manufacturing methods for diamond machined optical surfaces, for application at infrared wavelengths, require that a new set of criteria be recognised for the specification of surface form. Appropriate surface form parameters are discussed with particular reference to an XY cartesian geometry CNC machine. Methods for reducing surface form errors in diamond machining are discussed for certain areas such as tool wear, tool centring, and the fixturing of the workpiece. Examples of achievable surface form accuracy are presented. Traditionally, optical surfaces have been produced by random polishing techniques using polishing compounds and lapping tools. For lens manufacture, the simplest surface which could be created corresponds to a sphere, the natural outcome of a random grinding and polishing process. The measurement of surface form accuracy would most commonly be performed using a contact test gauge plate, polished to a sphere of known radius of curvature. QA would simply be achieved using a diffuse monochromatic source and looking for residual deviations between the polished surface and the test plate. The specifications governing the manufacture of surfaces using these techniques call for the accuracy to which the generated surface should match the test plate, defined by the spherical deviation from the required curvature and a non-spherical astigmatic error. Consequently, optical design software has tolerancing routines which specifically allow the designer to assess the influence of spherical error and astigmatic error on the optical performance. The creation of general aspheric surfaces is not so straightforward using conventional polishing techniques, since the surface profile is non-spherical and a good approximation to a power series. For infrared applications (λ = 8-12 μm), numerically controlled single-point diamond turning is an alternative manufacturing technology capable of creating aspheric profiles as well as simple spheres. It is important, however, to realise that a diamond turning process possesses a new set of criteria which limit the accuracy of the surface profile created, corresponding to a completely new set of specifications. The most important factors are tool centring accuracy, surface waviness, conical form error, and other rotationally symmetric non-spherical errors. The fixturing of the workpiece is very different from that of a conventional lap, since in many cases the diamond machine resembles a conventional lathe geometry where the workpiece rotates at a few thousand rpm. Substrates must be held rigidly for rotation at such speeds, as compared with the more delicate mounting methods for conventional laps. Consequently, the workpiece may suffer from other forms of deformation which are non-rotationally symmetric, due to mounting stresses (static deformation) and stresses induced at the speed of rotation (dynamic deformation). The magnitude of each of these contributions to the overall form error will be a function of the type of machine, the material, the substrate, and the testing design. The following sections describe each of these effects in more detail, based on experience obtained on a Pneumo Precision MSG325 XY CNC machine. Certain in-process measurement techniques have been devised to minimise and quantify each contribution.
SABRINA: an interactive three-dimensional geometry-modeling program for MCNP
DOE Office of Scientific and Technical Information (OSTI.GOV)
West, J.T. III
SABRINA is a fully interactive three-dimensional geometry-modeling program for MCNP, a Los Alamos Monte Carlo code for neutron and photon transport. In SABRINA, a user constructs either body geometry or surface geometry models and debugs spatial descriptions for the resulting objects. This enhanced capability significantly reduces effort in constructing and debugging complicated three-dimensional geometry models for Monte Carlo analysis. 2 refs., 33 figs.
NASA Astrophysics Data System (ADS)
Remy, Charlotte; Lalonde, Arthur; Béliveau-Nadeau, Dominic; Carrier, Jean-François; Bouchard, Hugo
2018-01-01
The purpose of this study is to evaluate the impact of a novel tissue characterization method using dual-energy versus single-energy computed tomography (DECT and SECT) on Monte Carlo (MC) dose calculations for low-dose-rate (LDR) prostate brachytherapy performed in a patient-like geometry. A virtual patient geometry is created using contours from a real patient pelvis CT scan, in which known elemental compositions and varying densities are overwritten in each voxel. A second phantom is made with additional calcifications. Both phantoms are the ground truth with which all results are compared. Simulated CT images are generated from them using attenuation coefficients taken from the XCOM database, with a 100 kVp spectrum for SECT and 80 and 140Sn kVp for DECT. Tissue segmentation for Monte Carlo dose calculation is made using a stoichiometric calibration method for the simulated SECT images. For the DECT images, Bayesian eigentissue decomposition is used. An LDR prostate brachytherapy plan is defined with 125I sources and then calculated using the EGSnrc user code Brachydose for each case. Dose distributions and dose-volume histograms (DVH) are compared to ground truth to assess the accuracy of tissue segmentation. For noiseless images, DECT-based tissue segmentation outperforms the SECT procedure, with a root mean square (RMS) error on relative dose differences of 2.39% versus 7.77%, and provides DVHs closest to the reference DVHs for all tissues. For a medium level of CT noise, Bayesian eigentissue decomposition still performs better on the overall dose calculation, as the RMS error is found to be 7.83% compared to 9.15% for SECT. Both methods give a similar DVH for the prostate, while the DECT segmentation remains more accurate for organs at risk and in the presence of calcifications, with RMS errors of less than 5% within the calcifications versus up to 154% for SECT. In a patient-like geometry, DECT-based tissue segmentation provides dose distributions with the highest accuracy and the least bias compared to SECT. When imaging noise is considered, the benefits of DECT are noticeable if important calcifications are found within the prostate.
NASA Astrophysics Data System (ADS)
Hugot, E.; Ferrari, M.; Riccardi, A.; Xompero, M.; Lemaître, G. R.; Arsenault, R.; Hubin, N.
2011-03-01
Context. Adaptive secondary mirrors (ASM) are, or will be, key components on all modern telescopes, providing improved seeing conditions or diffraction-limited images thanks to the high-order atmospheric turbulence correction obtained by controlling the shape of a thin mirror. Their development is a key milestone towards future extremely large telescopes (ELT), where this technology is mandatory for successful observations. Aims: The key point of current adaptive-secondary technology is the thin glass mirror that acts as a deformable membrane, often aspheric. On 6 m - 8 m class telescopes, these are typically 1 m class with a 2 mm thickness. The optical quality of this shell must be sufficiently good not to degrade the correction, meaning that high spatial frequency errors must be avoided. The innovative method presented here aims at generating aspherical shapes by elastic bending to reach high optical quality. Methods: This method, called stress polishing, allows aspherical optics of large amplitude to be generated by simple spherical polishing with a full-sized lap applied to a warped blank. The main advantage of this technique is the smooth optical quality obtained, free of the high spatial frequency ripples classically caused by sub-aperture tool marks. After describing the manufacturing process we developed, our analytical calculations lead to a preliminary definition of the geometry of the blank, which allows a precise bending of the substrate. Finite element analysis (FEA) can then be performed to refine this geometry, using an iterative method with a criterion based on the power spectral density of the displacement map of the optical surface. Results: Considering the specific case of the Very Large Telescope (VLT) deformable secondary mirror (DSM), extensive FEA was performed to optimise the geometry. The results show that the warping will not introduce surface errors higher than 0.3 nm rms on the minimal spatial scale considered on the mirror. Simulations of the flattening operation of the shell also demonstrate that the actuator system is able to correct manufacturing surface errors arising from the warping of the blank with a residual error lower than 8 nm rms.
NASA Astrophysics Data System (ADS)
Anick, David J.
2003-12-01
A method is described for a rapid prediction of B3LYP-optimized geometries for polyhedral water clusters (PWCs). Starting with a database of 121 B3LYP-optimized PWCs containing 2277 H-bonds, linear regressions yield formulas correlating O-O distances, O-O-O angles, and H-O-H orientation parameters, with local and global cluster descriptors. The formulas predict O-O distances with a rms error of 0.85 pm to 1.29 pm and predict O-O-O angles with a rms error of 0.6° to 2.2°. An algorithm is given which uses the O-O and O-O-O formulas to determine coordinates for the oxygen nuclei of a PWC. The H-O-H formulas then determine positions for two H's at each O. For 15 test clusters, the gap between the electronic energy of the predicted geometry and the true B3LYP optimum ranges from 0.11 to 0.54 kcal/mol or 4 to 18 cal/mol per H-bond. Linear regression also identifies 14 parameters that strongly correlate with PWC electronic energy. These descriptors include the number of H-bonds in which both oxygens carry a non-H-bonding H, the number of quadrilateral faces, the number of symmetric angles in 5- and in 6-sided faces, and the square of the cluster's estimated dipole moment.
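The geometry-prediction step is, at heart, many small linear regressions: each local bond parameter is regressed on cluster descriptors, and the fitted formulas are reused on new clusters. A minimal sketch of one such regression with placeholder descriptors and synthetic data; the paper's actual descriptor set is richer:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)

# Placeholder database: one row per H-bond, columns = local/global descriptors
# (e.g. donor/acceptor coordination flags, face sizes, cluster size).
X = rng.integers(0, 3, size=(2277, 6)).astype(float)
true_w = np.array([1.2, -0.8, 0.5, 0.3, -0.2, 0.1])
d_OO = 277.0 + X @ true_w + rng.normal(0.0, 1.0, size=2277)  # O-O distances (pm)

fit = LinearRegression().fit(X, d_OO)
pred = fit.predict(X)
rms_pm = np.sqrt(np.mean((pred - d_OO) ** 2))
print(f"rms error of fitted O-O distances: {rms_pm:.2f} pm")
```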
Lower limb estimation from sparse landmarks using an articulated shape model.
Zhang, Ju; Fernandez, Justin; Hislop-Jambrich, Jacqui; Besier, Thor F
2016-12-08
Rapid generation of lower limb musculoskeletal models is essential for clinically applicable patient-specific gait modeling. Estimation of muscle and joint contact forces requires accurate representation of bone geometry and pose, as well as their muscle attachment sites, which define muscle moment arms. Motion-capture is a routine part of gait assessment but contains relatively sparse geometric information. Standard methods for creating customized models from motion-capture data scale a reference model without considering natural shape variations. We present an articulated statistical shape model of the left lower limb with embedded anatomical landmarks and muscle attachment regions. This model is used in an automatic workflow, implemented in an easy-to-use software application, that robustly and accurately estimates realistic lower limb bone geometry, pose, and muscle attachment regions from seven commonly used motion-capture landmarks. Estimated bone models were validated on noise-free marker positions to have a lower (p = 0.001) surface-to-surface root-mean-squared error of 4.28 mm, compared to 5.22 mm using standard isotropic scaling. Errors at a variety of anatomical landmarks were also lower (8.6 mm versus 10.8 mm, p = 0.001). We improve upon standard lower limb model scaling methods with shape model-constrained realistic bone geometries, regional muscle attachment sites, and higher accuracy.
Advanced Computational Aeroacoustics Methods for Fan Noise Prediction
NASA Technical Reports Server (NTRS)
Envia, Edmane (Technical Monitor); Tam, Christopher
2003-01-01
Direct computation of fan noise is presently not possible. One of the major difficulties is the geometrical complexity of the problem. In the case of fan noise, the blade geometry is critical to the loading on the blade and hence the intensity of the radiated noise. The precise geometry must be incorporated into the computation. In computational fluid dynamics (CFD), there are two general ways to handle problems with complex geometry. One way is to use unstructured grids. The other is to use body-fitted overset grids. In the overset grid method, accurate data transfer is of utmost importance. For acoustic computation, it is not clear that the currently used data transfer methods are sufficiently accurate so as not to contaminate the very small amplitude acoustic disturbances. In CFD, low order schemes are invariably used in conjunction with unstructured grids. However, low order schemes are known to be numerically dispersive and dissipative. Dispersive and dissipative errors are extremely undesirable for acoustic wave problems. The objective of this project is to develop a high order unstructured grid Dispersion-Relation-Preserving (DRP) scheme that would minimize numerical dispersion and dissipation errors. This report contains the results of the funded portion of the project. A DRP scheme on an unstructured grid has been developed, constructed in the wave number space. The characteristics of the scheme can be improved by the inclusion of additional constraints. The stability of the scheme has been investigated; stability can be improved by adopting an upwinding strategy.
Zonal wavefront reconstruction in quadrilateral geometry for phase measuring deflectometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Lei; Xue, Junpeng; Gao, Bo
2017-06-14
Zonal reconstruction methods are widely applied in slope-based metrology due to their good capability of reconstructing local details of a surface profile. It has been noticed in the literature that large reconstruction errors occur when zonal reconstruction methods designed for rectangular geometry are used to process slopes in a quadrilateral geometry, which is the more general geometry encountered in phase measuring deflectometry. In this paper, we present a new idea for zonal methods in quadrilateral geometry. Instead of employing intermediate slopes to set up height-slope equations, we consider the height increment as a more general connector to establish the height-slope relations for least-squares regression. The classical zonal methods and interpolation-assisted zonal methods are compared with our proposal. Results of both simulation and experiment demonstrate the effectiveness of the proposed idea. In implementation, the modification of the classical zonal methods is addressed. The new methods preserve many good aspects of the classical ones, such as the ability to handle a large incomplete slope dataset in an arbitrary aperture and a low computational complexity comparable to the classical zonal method. Moreover, the accuracy of the new methods is much higher when integrating slopes in quadrilateral geometry.
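A minimal 1-D illustration of the height-increment idea: each neighboring pair of samples contributes one least-squares equation relating the height difference to an increment estimated from the measured slopes. This sketch only shows the regression structure, not the quadrilateral-specific 2-D connectors of the paper:

```python
import numpy as np

# Hypothetical 1-D slope measurements s_i = dh/dx at positions x_i.
x = np.linspace(0.0, 1.0, 50)
h_true = 0.05 * np.sin(2 * np.pi * x)          # surface to recover
s = 0.05 * 2 * np.pi * np.cos(2 * np.pi * x)   # measured slopes (noise-free here)

n = len(x)
A = np.zeros((n, n))                           # n-1 increment equations + 1 datum equation
b = np.zeros(n)
for i in range(n - 1):
    dx = x[i + 1] - x[i]
    # Height increment from the trapezoidal average of the two slopes:
    #   h[i+1] - h[i] = 0.5 * (s[i] + s[i+1]) * dx
    A[i, i], A[i, i + 1] = -1.0, 1.0
    b[i] = 0.5 * (s[i] + s[i + 1]) * dx
A[-1, 0] = 1.0                                 # pin the piston term: h[0] fixed
b[-1] = h_true[0]

h, *_ = np.linalg.lstsq(A, b, rcond=None)
print("rms reconstruction error:", np.sqrt(np.mean((h - h_true) ** 2)))
```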
Integration of multi-sensor data to measure soil surface changes
NASA Astrophysics Data System (ADS)
Eltner, Anette; Schneider, Danilo
2016-04-01
Digital elevation models (DEM) of high resolution and accuracy covering a suitably sized area of interest can be a promising approach to help understand the processes of soil erosion. At the same time, the plot under investigation should remain undisturbed. The fragile marl landscape in Andalusia (Spain) is especially prone to soil detachment and transport, with unique sediment connectivity characteristics due to the soil properties and climatic conditions. A 600 m² field plot is established and monitored during three field campaigns (Sep. 2013, Nov. 2013 and Feb. 2014). Unmanned aerial vehicle (UAV) photogrammetry and terrestrial laser scanning (TLS) are suitable tools to generate high resolution topography data that describe soil surface changes at large field plots, and the advantages of both methods are utilised in a synergetic manner. On the one hand, TLS data is assumed to comprise a higher reliability regarding consistent error behaviour than DEMs derived from overlapping UAV images. Therefore, global errors (e.g. dome effect) and local errors (e.g. DEM blunders due to erroneous image matching) within the UAV data are assessed with the DEMs produced by TLS. Furthermore, TLS point clouds allow for fast and reliable filtering of vegetation spots, which is not as straightforward within the UAV data due to known image matching problems in areas displaying plant cover. On the other hand, systematic DEM errors linked to TLS are detected and possibly corrected utilising the DEMs reconstructed from overlapping UAV images. Furthermore, TLS point clouds are filtered corresponding to the degree of point quality, which is estimated from parameters of the scan geometry (i.e. incidence angle and footprint size). This is especially relevant for this study because the area of interest is located on gentle hillslopes that are prone to soil erosion. Thus, the view of the scanning device onto the surface results in an adverse angle, which is only slightly improved by the usage of a 4 m high tripod. Surface roughness is considered as a further parameter to evaluate the TLS point quality. The filtering tool allows for choosing each data point either from the TLS or UAV data corresponding to the data acquisition geometry and surface properties. The filtered points are merged into one point cloud, which is finally processed to reduce remaining data noise. DEM analysis reveals a continuous decrease of soil surface roughness after tillage, the reappearance of former wheel tracks, and local patterns of erosion as well as accumulation.
Surface Geometry and Chemistry of Hydrothermally Synthesized Single Crystal Thorium Dioxide
2015-03-01
Nearby Exo-Earth Astrometric Telescope (NEAT)
NASA Technical Reports Server (NTRS)
Shao, M.; Nemati, B.; Zhai, C.; Goullioud, R.
2011-01-01
NEAT (Nearby Exo-Earth Astrometric Telescope) is a modest sized (1 m diameter) telescope. It will be capable of searching approximately 100 nearby stars down to 1 Mearth planets in the habitable zone, and 200 stars at 5 Mearth, 1 AU. The concept addresses the major issues for ultra-precise astrometry: (1) photon noise (0.5 deg diameter field of view); (2) optical errors (beam walk), with a long focal length telescope; (3) focal plane errors, with laser metrology of the focal plane; (4) PSF centroiding errors, with measurement of the "true" PSF instead of using a "guess" of the true PSF, and correction for intra-pixel QE non-uniformities. The technology is close to complete: focal plane geometry to 2e-5 pixels and centroiding to approximately 4e-5 pixels.
Green, Michael V.; Ostrow, Harold G.; Seidel, Jurgen; Pomper, Martin G.
2013-01-01
Human and small-animal positron emission tomography (PET) scanners with cylindrical geometry and conventional detectors exhibit a progressive reduction in radial spatial resolution with increasing radial distance from the geometric axis of the scanner. This “depth-of-interaction” (DOI) effect is sufficiently deleterious that many laboratories have devised novel schemes to reduce the magnitude of this effect and thereby yield PET images of greater quantitative accuracy. Here we examine experimentally the effects of a particular DOI correction method (dual-scintillator phoswich detectors with pulse shape discrimination) implemented in a small-animal PET scanner by comparing the same phantom and same mouse images with and without DOI correction. The results suggest that even this relatively coarse, two-level estimate of radial gamma ray interaction position significantly reduces the DOI parallax error. This study also confirms two less appreciated advantages of DOI correction: a reduction in radial distortion and radial source displacement as a source is moved toward the edge of the field of view and a resolution improvement detectable in the central field of view likely owing to improved spatial sampling. PMID:21084028
Green, Michael V; Ostrow, Harold G; Seidel, Jurgen; Pomper, Martin G
2010-12-01
Human and small-animal positron emission tomography (PET) scanners with cylindrical geometry and conventional detectors exhibit a progressive reduction in radial spatial resolution with increasing radial distance from the geometric axis of the scanner. This "depth-of-interaction" (DOI) effect is sufficiently deleterious that many laboratories have devised novel schemes to reduce the magnitude of this effect and thereby yield PET images of greater quantitative accuracy. Here we examine experimentally the effects of a particular DOI correction method (dual-scintillator phoswich detectors with pulse shape discrimination) implemented in a small-animal PET scanner by comparing the same phantom and same mouse images with and without DOI correction. The results suggest that even this relatively coarse, two-level estimate of radial gamma ray interaction position significantly reduces the DOI parallax error. This study also confirms two less appreciated advantages of DOI correction: a reduction in radial distortion and radial source displacement as a source is moved toward the edge of the field of view and a resolution improvement detectable in the central field of view likely owing to improved spatial sampling.
Sampling Analysis of Aerosol Retrievals by Single-track Spaceborne Instrument for Climate Research
NASA Astrophysics Data System (ADS)
Geogdzhayev, I. V.; Cairns, B.; Alexandrov, M. D.; Mishchenko, M. I.
2012-12-01
We examine to what extent the reduced sampling of along-track instruments such as the Cloud-Aerosol LIdar with Orthogonal Polarization (CALIOP) and the Aerosol Polarimetry Sensor (APS) affects the statistical accuracy of a satellite climatology of retrieved aerosol optical thickness (AOT) by sub-sampling the retrievals from a wide-swath imaging instrument (the MODerate resolution Imaging Spectroradiometer, MODIS). Owing to its global coverage, longevity, and extensive characterization versus ground-based data, the MODIS level-2 aerosol product is an instructive testbed for assessing sampling effects on climatic means derived from along-track instrument data. The advantage of using daily pixel-level aerosol retrievals from MODIS is that limitations caused by the presence of clouds are implicit in the sample, so that their seasonal and regional variations are captured coherently. However, imager data can exhibit cross-track variability of monthly global mean AOTs caused by a scattering-angle dependence. We found that single along-track values can deviate from the imager mean by 15% over land and by more than 20% over ocean. This makes it difficult to separate natural variability from viewing-geometry artifacts, complicating direct comparisons of an along-track sub-sample with the full imager data. To work around this problem, we introduce "flipped-track" sampling which, by design, is statistically equivalent to along-track sampling while closely approximating the imager in terms of angular artifacts. We show that the flipped-track variability of global monthly mean AOT is much smaller than the cross-track one for the 7-year period considered. Over the ocean the flipped-track standard error is 85% smaller than the cross-track one (absolute values 0.0012 versus 0.0079), and over land it is about one third of the cross-track value (0.0054 versus 0.0188) on average. This allows us to attribute the difference between the two errors to the viewing-geometry artifacts and to obtain an upper limit on AOT errors caused by along-track sampling. Our results show that using along-track subsets of MODIS aerosol data directly to analyze the sampling adequacy of single-track instruments can lead to false conclusions owing to the apparent enhancement of natural aerosol variability by the track-to-track artifacts. The analysis based on the statistics of the flipped-track means yields better estimates because it allows for better separation of the viewing-geometry artifacts and true natural variability. Published assessments estimate that a global AOT change of 0.01 would yield a climatically important flux change of 0.25 W/m2. Since the standard error estimates that we have obtained are comfortably below 0.01, we conclude that along-track instruments flown on a sun-synchronous orbiting platform have sufficient spatial sampling for estimating aerosol effects on climate. Since AOT is believed to be the most variable characteristic of tropospheric aerosols, our results imply that pixel-wide along-track coverage also provides adequate statistical representation of the global distribution of aerosol microphysical parameters.
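The cross-track artifact check described above reduces to computing a monthly mean AOT separately from each cross-track position of the imager and examining the spread. A sketch with a synthetic level-2-like array standing in for real MODIS granules:

```python
import numpy as np

rng = np.random.default_rng(1)
n_days, n_along, n_cross = 30, 400, 135   # synthetic month of swath retrievals
aot = 0.15 + 0.05 * rng.standard_normal((n_days, n_along, n_cross))
# Impose a scattering-angle-like bias varying across the swath.
aot += 0.02 * np.cos(np.linspace(-np.pi / 2, np.pi / 2, n_cross))
aot[rng.random(aot.shape) < 0.6] = np.nan    # cloud/quality screening gaps

full_mean = np.nanmean(aot)                  # imager monthly mean
per_track = np.nanmean(aot, axis=(0, 1))     # one mean per cross-track column
rel_dev = (per_track - full_mean) / full_mean * 100.0
print(f"cross-track deviation range: {rel_dev.min():+.1f}% .. {rel_dev.max():+.1f}%")
```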
Evaluation of a cone beam computed tomography geometry for image guided small animal irradiation.
Yang, Yidong; Armour, Michael; Wang, Ken Kang-Hsin; Gandhi, Nishant; Iordachita, Iulian; Siewerdsen, Jeffrey; Wong, John
2015-07-07
The conventional imaging geometry for small animal cone beam computed tomography (CBCT) is that a detector panel rotates around the head-to-tail axis of an imaged animal ('tubular' geometry). Another unusual but possible imaging geometry is that the detector panel rotates around the anterior-to-posterior axis of the animal ('pancake' geometry). The small animal radiation research platform developed at Johns Hopkins University employs the pancake geometry, where a prone-positioned animal is rotated horizontally between an x-ray source and detector panel. This study assesses the CBCT image quality in the pancake geometry and investigates potential methods for improvement. We compared CBCT images acquired in the pancake geometry with those acquired in the tubular geometry when the phantom/animal was placed upright, simulating the conventional CBCT geometry. Results showed signal-to-noise and contrast-to-noise ratios in the pancake geometry were reduced in comparison to the tubular geometry at the same dose level, but the overall spatial resolution within the transverse plane of the imaged cylinder/animal was better in the pancake geometry. A modest exposure increase of up to two-fold in the pancake geometry can improve image quality to a level close to that of the tubular geometry. Image quality can also be improved by inclining the animal, which reduces streak artifacts caused by bony structures. The major factor behind the inferior image quality in the pancake geometry is the elevated beam attenuation along the long axis of the phantom/animal and the consequently increased scatter-to-primary ratio in that orientation. Notwithstanding, the image quality in pancake-geometry CBCT is adequate to support image-guided animal positioning, while providing the unique advantages of non-coplanar and multiple-mouse irradiation. This study also provides useful knowledge about the image quality in the two very different imaging geometries, i.e. the pancake and tubular geometries.
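The signal-to-noise and contrast-to-noise comparisons referred to above reduce to simple region-of-interest statistics. A sketch (the ROI positions and the image data below are placeholders, not the study's reconstructions):

```python
import numpy as np

def snr(roi: np.ndarray) -> float:
    """Mean over standard deviation inside a uniform region of interest."""
    return roi.mean() / roi.std(ddof=1)

def cnr(roi: np.ndarray, background: np.ndarray) -> float:
    """Contrast-to-noise ratio between an insert ROI and the background."""
    noise = np.sqrt(0.5 * (roi.var(ddof=1) + background.var(ddof=1)))
    return abs(roi.mean() - background.mean()) / noise

# Placeholder CBCT slice; in practice these would be reconstructions
# acquired in the pancake and tubular geometries at matched dose.
rng = np.random.default_rng(2)
slice_img = rng.normal(100.0, 5.0, size=(256, 256))
slice_img[100:140, 100:140] += 20.0            # synthetic contrast insert

insert = slice_img[105:135, 105:135]
bg = slice_img[20:60, 20:60]
print(f"SNR = {snr(bg):.1f}, CNR = {cnr(insert, bg):.1f}")
```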
Evaluation of a Cone Beam Computed Tomography Geometry for Image Guided Small Animal Irradiation
Yang, Yidong; Armour, Michael; Wang, Ken Kang-Hsin; Gandhi, Nishant; Iordachita, Iulian; Siewerdsen, Jeffrey; Wong, John
2015-01-01
The conventional imaging geometry for small animal cone beam computed tomography (CBCT) is that a detector panel rotates around the head-to-tail axis of an imaged animal (“tubular” geometry). Another unusual but possible imaging geometry is that the detector panel rotates around the anterior-to-posterior axis of the animal (“pancake” geometry). The small animal radiation research platform (SARRP) developed at Johns Hopkins University employs the pancake geometry, where a prone-positioned animal is rotated horizontally between an x-ray source and detector panel. This study assesses the CBCT image quality in the pancake geometry and investigates potential methods for improvement. We compared CBCT images acquired in the pancake geometry with those acquired in the tubular geometry when the phantom/animal was placed upright, simulating the conventional CBCT geometry. Results showed signal-to-noise and contrast-to-noise ratios in the pancake geometry were reduced in comparison to the tubular geometry at the same dose level, but the overall spatial resolution within the transverse plane of the imaged cylinder/animal was better in the pancake geometry. A modest exposure increase of up to two-fold in the pancake geometry can improve image quality to a level close to that of the tubular geometry. Image quality can also be improved by inclining the animal, which reduces streak artifacts caused by bony structures. The major factor behind the inferior image quality in the pancake geometry is the elevated beam attenuation along the long axis of the phantom/animal and the consequently increased scatter-to-primary ratio in that orientation. Notwithstanding, the image quality in pancake-geometry CBCT is adequate to support image-guided animal positioning, while providing the unique advantages of non-coplanar and multiple-mouse irradiation. This study also provides useful knowledge about the image quality in the two very different imaging geometries, i.e., the pancake and tubular geometries. PMID:26083659
NASA Technical Reports Server (NTRS)
Gubarev, Mikhail V.; Kilaru, Kirenmayee; Ramsey, Brian D.
2009-01-01
We are investigating differential deposition as a way of correcting small figure errors inside full-shell grazing-incidence x-ray optics. The optics in our study are fabricated using the electroformed-nickel-replication technique, and the figure errors arise from fabrication errors in the mandrel, from which the shells are replicated, as well as errors induced during the electroforming process. Combined, these give sub-micron-scale figure deviations which limit the angular resolution of the optics to approx. 10 arcsec. Sub-micron figure errors can be corrected by selectively depositing (physical vapor deposition) material inside the shell. The requirements for this filler material are that it must not degrade the ultra-smooth surface finish necessary for efficient x-ray reflection (approx. 5 A rms), and must not be highly stressed. In addition, a technique must be found to produce well controlled and defined beams within highly constrained geometries, as some of our mirror shells are less than 3 cm in diameter.
Sure, Rebecca; Brandenburg, Jan Gerit
2015-01-01
In quantum chemical computations the combination of Hartree–Fock or a density functional theory (DFT) approximation with relatively small atomic orbital basis sets of double‐zeta quality is still widely used, for example, in the popular B3LYP/6‐31G* approach. In this Review, we critically analyze the two main sources of error in such computations, that is, the basis set superposition error on the one hand and the missing London dispersion interactions on the other. We review various strategies to correct those errors and present exemplary calculations on mainly noncovalently bound systems of widely varying size. Energies and geometries of small dimers, large supramolecular complexes, and molecular crystals are covered. We conclude that it is not justified to rely on fortunate error compensation, as the main inconsistencies can be cured by modern correction schemes which clearly outperform the plain mean‐field methods. PMID:27308221
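In its simplest pairwise form, the London dispersion correction discussed in this review is a damped -C6/R^6 sum added to the mean-field energy. A heavily simplified sketch (the coefficients and damping parameters below are placeholders, not the actual D3 parameterization):

```python
import numpy as np

def pairwise_dispersion(coords, c6, s6=1.0, r0=3.0, alpha=14.0):
    """Damped -C6/R^6 dispersion energy summed over all atom pairs.

    coords : (N, 3) positions in Angstrom
    c6     : (N,) per-atom C6 coefficients (placeholder values)
    A Fermi-type damping function switches the correction off at short range.
    """
    e = 0.0
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(coords[i] - coords[j])
            c6ij = np.sqrt(c6[i] * c6[j])        # simple combination rule
            damp = 1.0 / (1.0 + np.exp(-alpha * (r / r0 - 1.0)))
            e -= s6 * c6ij / r**6 * damp
    return e

# Toy example: two stacked three-atom fragments.
coords = np.array([[0, 0, 0], [1.4, 0, 0], [2.8, 0, 0],
                   [0, 0, 3.5], [1.4, 0, 3.5], [2.8, 0, 3.5]], float)
c6 = np.full(6, 25.0)                             # placeholder C6 values
print(f"dispersion correction: {pairwise_dispersion(coords, c6):.4f} (arbitrary units)")
```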
Enrichment of OpenStreetMap Data Completeness with Sidewalk Geometries Using Data Mining Techniques.
Mobasheri, Amin; Huang, Haosheng; Degrossi, Lívia Castro; Zipf, Alexander
2018-02-08
Tailored routing and navigation services for wheelchair users require certain information about sidewalk geometries and their attributes to execute efficiently. Except for some minor regions/cities, such detailed information is not present in current versions of crowdsourced mapping databases, including OpenStreetMap. The CAP4Access European project aimed to use (and enrich) OpenStreetMap to make it fit for the purpose of wheelchair routing. In this respect, this study presents a modified methodology based on data mining techniques for constructing sidewalk geometries using multiple GPS traces collected by wheelchair users during an urban travel experiment. The derived sidewalk geometries can be used to enrich OpenStreetMap to support wheelchair routing. The proposed method was applied to a case study in Heidelberg, Germany. The constructed sidewalk geometries were compared to an official reference dataset ("ground truth dataset"). The case study shows that the constructed sidewalk network overlays with 96% of the official reference dataset. Furthermore, in terms of positional accuracy, a low Root Mean Square Error (RMSE) value (0.93 m) is achieved. The article presents our discussion of the results as well as the conclusion and future research directions.
Zhang, Hai-Mei; Chen, Shi-Lu
2015-06-09
The lack of dispersion in the B3LYP functional has been proposed to be the main origin of large errors in quantum chemical modeling of several enzymes and transition metal complexes. In this work, the essential dispersion effects that affect quantum chemical modeling are investigated. With binuclear zinc isoaspartyl dipeptidase (IAD) as an example, dispersion is included in the modeling of enzymatic reactions by two different procedures, i.e., (i) geometry optimizations followed by single-point calculations of dispersion (approach I) and (ii) the inclusion of dispersion throughout geometry optimization and energy evaluation (approach II). Based on a 169-atom chemical model, the calculations show a qualitative consistency between approaches I and II in energetics and most key geometries, demonstrating that both approaches are viable, with the latter preferable since both geometry and energy are dispersion-corrected in approach II. When a smaller model without Arg233 (147 atoms) was used, an inconsistency was observed, indicating that the missing dispersion interactions are essentially responsible for determining equilibrium geometries. Other technical issues and mechanistic characteristics of IAD are also discussed, in particular with respect to the effects of Arg233.
The development and evaluation of accident predictive models
NASA Astrophysics Data System (ADS)
Maleck, T. L.
1980-12-01
A mathematical model is developed that predicts the incremental change in the dependent variables (accident types) resulting from changes in the independent variables. The end product is a tool for estimating the expected number and type of accidents for a given highway segment. The data segments (accidents) are separated into mutually exclusive groups via a branching process, and variance is further reduced using stepwise multiple regression. The standard error of the estimate is calculated for each model. The dependent variables are the frequency, density, and rate of 18 types of accidents; the independent variables include district, county, highway geometry, land use, type of zone, speed limit, signal code, type of intersection, number of intersection legs, number of turn lanes, left-turn control, all-red interval, average daily traffic, and outlier code. Models for nonintersectional accidents did not fit or validate as well as models for intersectional accidents.
NASA Technical Reports Server (NTRS)
Stewart, R. B.; Grose, W. L.
1975-01-01
Parametric studies were made with a multilayer atmospheric diffusion model to place quantitative limits on the uncertainty of predicting ground-level toxic rocket-fuel concentrations. Exhaust distributions in the ground cloud, stabilized cloud geometry, atmospheric coefficients, the effects of exhaust plume afterburning of carbon monoxide (CO), the assumed surface mixing-layer division in the model, and model sensitivity to different meteorological regimes were studied. Large-scale differences in ground-level predictions are quantitatively described. Cloud along-wind growth for several meteorological conditions is shown to be in error because of incorrect application of previous diffusion theory. In addition, rocket-plume calculations indicate that almost all of the rocket-motor carbon monoxide is afterburned to carbon dioxide (CO2), thus reducing toxic hazards due to CO. The afterburning is also shown to have a significant effect on cloud stabilization height and on ground-level concentrations of exhaust products.
Calculation of smooth potential energy surfaces using local electron correlation methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mata, Ricardo A.; Werner, Hans-Joachim
2006-11-14
The geometry dependence of excitation domains in local correlation methods can lead to noncontinuous potential energy surfaces. We propose a simple domain merging procedure which eliminates this problem in many situations. The method is applied to heterolytic bond dissociations of ketene and propadienone, to SN2 reactions of Cl- with alkyl chlorides, and in a quantum mechanical/molecular mechanical study of the chorismate mutase enzyme. It is demonstrated that smooth potentials are obtained in all cases. Furthermore, basis set superposition error effects are reduced in local calculations, and it is found that this leads to better basis set convergence when computing barrier heights or weak interactions. When the electronic structure strongly changes between reactants or products and the transition state, the domain merging procedure leads to a balanced description of all structures and accurate barrier heights.
NASA Astrophysics Data System (ADS)
Werner, C. L.; Wegmuller, U.; Strozzi, T.; Wiesmann, A.
2006-12-01
Principal contributors to the noise in differential SAR interferograms are the temporal phase stability of the surface, geometry relating to baseline and surface slope, and propagation path delay variations due to tropospheric water vapor and the ionosphere. Time series analysis of multiple interferograms generated from a stack of SAR SLC images seeks to determine the deformation history of the surface while reducing errors. Only those scatterers within a resolution element that are stable and coherent for each interferometric pair contribute to the desired deformation signal. Interferograms with baselines exceeding 1/3 of the critical baseline have substantial geometrical decorrelation for distributed targets. Short baseline pairs with multiple reference scenes can be combined using least-squares estimation to obtain a global deformation solution. Alternatively, point-like persistent scatterers can be identified in scenes that do not exhibit the geometrical decorrelation associated with large baselines. In this approach interferograms are formed from a stack of SAR complex images using a single reference scene; stable distributed-scatterer pixels are excluded, however, due to the presence of large baselines. We apply both point-based and short-baseline methodologies and compare results for a stack of fine-beam Radarsat data acquired in 2002-2004 over a rapidly subsiding oil field near Lost Hills, CA. We also investigate the density of point-like scatterers with respect to image resolution. The primary difficulty encountered when applying time series methods is phase unwrapping errors due to spatial and temporal gaps. Phase unwrapping requires sufficient spatial and temporal sampling. Increasing the SAR range bandwidth increases the range resolution as well as the critical interferometric baseline that defines the required satellite orbital tube diameter. Sufficient spatial sampling also permits unwrapping because of the reduced phase gradient per pixel. Short time intervals further reduce the differential phase due to deformation when the deformation is continuous. Lower frequency systems (L- vs. C-band) substantially improve the ability to unwrap the phase correctly by directly reducing both the interferometric phase amplitude and temporal decorrelation.
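The least-squares combination of short-baseline pairs mentioned above can be written as a small linear system: each unwrapped interferogram constrains the difference of cumulative deformation between its two acquisition dates. A minimal sketch with a hypothetical network and noise-free phases:

```python
import numpy as np

# Acquisition scenes and short-baseline pairs (i, j) with i < j.
n_scenes = 6
pairs = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4), (4, 5)]
true_defo = np.array([0.0, 1.0, 2.1, 2.9, 4.2, 5.0])   # cumulative, e.g. in radians

# Design matrix: phase of pair (i, j) = defo[j] - defo[i].
A = np.zeros((len(pairs), n_scenes))
for k, (i, j) in enumerate(pairs):
    A[k, i], A[k, j] = -1.0, 1.0
obs = A @ true_defo                                     # unwrapped interferogram phases

# Reference the solution to the first scene to remove the datum ambiguity.
sol, *_ = np.linalg.lstsq(A[:, 1:], obs, rcond=None)
defo = np.concatenate([[0.0], sol])
print("recovered deformation history:", np.round(defo, 2))
```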
Shi, Hongli; Yang, Zhi; Luo, Shuqian
2017-01-01
The beam hardening artifact is one of the most important forms of metal artifact in polychromatic X-ray computed tomography (CT) and can seriously impair image quality. An iterative approach is proposed to reduce the beam hardening artifact caused by metallic components in polychromatic X-ray CT. According to the Lambert-Beer law, the (detected) projections can be expressed as monotonic nonlinear functions of element geometry projections, which are the theoretical projections produced only by the pixel intensities (image grayscale) of a certain element (component). With the help of prior knowledge of the spectrum distribution of the X-ray beam source and the energy-dependent attenuation coefficients, the functions have explicit expressions. The Newton-Raphson algorithm is employed to solve the functions. The solutions are named the synthetical geometry projections, which are the nearly linear weighted sum of the element geometry projections with respect to the mean of each attenuation coefficient. In this process, the attenuation coefficients are modified to make the Newton-Raphson iterative functions satisfy the convergence conditions of fixed point iteration (FPI), so that the solutions approach the true synthetical geometry projections stably. The underlying images are obtained from the projections by general reconstruction algorithms such as filtered back projection (FBP). The image gray values are adjusted according to the attenuation coefficient means to obtain proper CT numbers. Several examples demonstrate that the proposed approach is efficient in reducing beam hardening artifacts and has satisfactory performance in terms of some general criteria. In a simulation example, the normalized root mean square difference (NRMSD) is reduced by 17.52% compared to a recent algorithm. Since the element geometry projections are free from the effect of beam hardening, their nearly linear weighted sum, the synthetical geometry projections, are almost free from the effect of beam hardening as well. By working out the synthetical geometry projections, the proposed approach becomes quite efficient in reducing beam hardening artifacts.
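The core numerical step is inverting a monotonic polychromatic projection function p = f(g): given a detected projection p, solve for the geometry projection g by Newton-Raphson iteration. A sketch with a hypothetical two-energy-bin beam model standing in for the measured spectrum:

```python
import numpy as np

# Hypothetical polychromatic model: two spectral bins with weights w and
# attenuation scalings mu (relative to the reference mean coefficient).
w = np.array([0.6, 0.4])
mu = np.array([1.3, 0.7])

def f(g):
    """Detected (log-normalized) projection for a geometry projection g."""
    return -np.log(np.sum(w * np.exp(-mu * g)))

def f_prime(g):
    e = w * np.exp(-mu * g)
    return np.sum(mu * e) / np.sum(e)

def invert_projection(p, g0=0.0, tol=1e-10, max_iter=50):
    """Newton-Raphson solve of f(g) = p; f is monotone increasing, so the
    iteration converges for any detected projection in its range."""
    g = g0
    for _ in range(max_iter):
        step = (f(g) - p) / f_prime(g)
        g -= step
        if abs(step) < tol:
            break
    return g

g_true = 2.5
p_meas = f(g_true)
print(f"recovered g = {invert_projection(p_meas):.6f} (true {g_true})")
```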
Plessen, Kerstin J.; Allen, Elena A.; Eichele, Heike; van Wageningen, Heidi; Høvik, Marie Farstad; Sørensen, Lin; Worren, Marius Kalsås; Hugdahl, Kenneth; Eichele, Tom
2016-01-01
Background We examined the blood-oxygen level–dependent (BOLD) activation in brain regions that signal errors and their association with intraindividual behavioural variability and adaptation to errors in children with attention-deficit/hyperactivity disorder (ADHD). Methods We acquired functional MRI data during a Flanker task in medication-naive children with ADHD and healthy controls aged 8–12 years and analyzed the data using independent component analysis. For components corresponding to performance monitoring networks, we compared activations across groups and conditions and correlated them with reaction times (RT). Additionally, we analyzed post-error adaptations in behaviour and motor component activations. Results We included 25 children with ADHD and 29 controls in our analysis. Children with ADHD displayed reduced activation to errors in cingulo-opercular regions and higher RT variability, but no differences of interference control. Larger BOLD amplitude to error trials significantly predicted reduced RT variability across all participants. Neither group showed evidence of post-error response slowing; however, post-error adaptation in motor networks was significantly reduced in children with ADHD. This adaptation was inversely related to activation of the right-lateralized ventral attention network (VAN) on error trials and to task-driven connectivity between the cingulo-opercular system and the VAN. Limitations Our study was limited by the modest sample size and imperfect matching across groups. Conclusion Our findings show a deficit in cingulo-opercular activation in children with ADHD that could relate to reduced signalling for errors. Moreover, the reduced orienting of the VAN signal may mediate deficient post-error motor adaptions. Pinpointing general performance monitoring problems to specific brain regions and operations in error processing may help to guide the targets of future treatments for ADHD. PMID:26441332
Kinematic geometry of osteotomies.
Smith, Erin J; Bryant, J Tim; Ellis, Randy E
2005-01-01
This paper presents a novel method for defining an osteotomy that can be used to represent all types of osteotomy procedures. In essence, we model an osteotomy as a lower-pair mechanical joint to derive the kinematic geometry of the osteotomy. This method was implemented using a commercially available animation software suite in order to simulate a variety of osteotomy procedures. Two osteotomy procedures are presented for a femoral malunion in order to demonstrate the advantages of our kinematic model in developing optimal osteotomy plans. The benefits of this kinematic model include the ability to evaluate the effects of various kinds of osteotomy and the elimination of potentially error-prone radiographic assessment of deformities.
New Finger Biometric Method Using Near Infrared Imaging
Lee, Eui Chul; Jung, Hyunwoo; Kim, Daeyeoul
2011-01-01
In this paper, we propose a new finger biometric method. Infrared finger images are first captured, and then feature extraction is performed using a modified Gaussian high-pass filter through binarization, local binary pattern (LBP), and local derivative pattern (LDP) methods. Infrared finger images include the multimodal features of finger veins and finger geometries. Instead of extracting each feature using different methods, the modified Gaussian high-pass filter is fully convolved. Therefore, the extracted binary patterns of finger images include the multimodal features of veins and finger geometries. Experimental results show that the proposed method has an error rate of 0.13%. PMID:22163741
Effects of Correlated Errors on the Analysis of Space Geodetic Data
NASA Technical Reports Server (NTRS)
Romero-Wolf, Andres; Jacobs, C. S.
2011-01-01
As thermal errors are reduced, instrumental and troposphere-correlated errors will become increasingly important. Work in progress shows that troposphere covariance error models improve data analysis results. We expect to see stronger effects with higher data rates. Temperature modeling of delay errors may further reduce temporal correlations in the data.
Spine centerline extraction and efficient spine reading of MRI and CT data
NASA Astrophysics Data System (ADS)
Lorenz, C.; Vogt, N.; Börnert, P.; Brosch, T.
2018-03-01
Radiological assessment of the spine is performed regularly in the context of orthopedics, neurology, oncology, and trauma management. Due to the extension and curved geometry of the spinal column, reading is time-consuming and requires substantial user interaction to navigate through the data during inspection. In this paper a spine-geometry-guided viewing approach is proposed that facilitates reading by reducing the degrees of freedom to be manipulated during inspection of the data. The method uses the spine centerline as a representation of the spine geometry. We assume that the renderings most useful for reading are those that can be locally defined based on a rotation and translation relative to the spine centerline. The resulting renderings locally preserve the relation to the spine and lead to curved planar reformats that can be adjusted using a small set of parameters to minimize user interaction. The spine centerline is extracted by an automated image-to-image foveal fully convolutional neural network (FFCN) based approach. The network consists of three parallel convolutional pathways working on different levels of resolution and processed fields of view. The outputs of the parallel pathways are combined by a subsequent feature integration pathway to yield the final centerline probability map, which is converted into a set of spine centerline points. The network has been trained separately on two data set types, one comprising a mixture of T1- and T2-weighted spine MR images and one using CT image data. We achieve an average centerline position error of 1.7 mm for MR and 0.9 mm for CT, and a DICE coefficient of 0.84 for MR and 0.95 for CT. Based on the centerline thus obtained, viewing and multi-planar reformatting can be easily facilitated.
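Once the network emits a centerline probability map, converting it to an ordered set of centerline points can be as simple as a per-slice probability-weighted centroid above a threshold. A sketch of that post-processing step (the FFCN itself is not reproduced here; the threshold is an assumption):

```python
import numpy as np

def centerline_from_probability(prob: np.ndarray, threshold: float = 0.5):
    """Extract one centerline point per axial slice of a probability volume.

    prob : (n_slices, height, width) map in [0, 1] from the segmentation net.
    Returns a list of (z, y, x) points using probability-weighted centroids.
    """
    points = []
    for z in range(prob.shape[0]):
        sl = prob[z]
        mask = sl > threshold
        if not mask.any():
            continue                       # slice without a spine response
        weights = sl * mask
        ys, xs = np.mgrid[0:sl.shape[0], 0:sl.shape[1]]
        y = (ys * weights).sum() / weights.sum()
        x = (xs * weights).sum() / weights.sum()
        points.append((z, y, x))
    return points

# Toy volume with a slowly drifting bright curve standing in for a spine.
prob = np.zeros((40, 64, 64))
for z in range(40):
    prob[z, 32 + int(6 * np.sin(z / 8.0)), 30:34] = 0.9
print(f"extracted {len(centerline_from_probability(prob))} centerline points")
```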
NASA Astrophysics Data System (ADS)
Bolkas, Dimitrios; Martinez, Aaron
2018-01-01
Point-cloud coordinate information derived from terrestrial Light Detection And Ranging (LiDAR) is important for several applications in surveying and civil engineering. Plane fitting and segmentation of target surfaces are an important step in several applications, such as the monitoring of structures. Reliable parametric modeling and segmentation rely on the underlying quality of the point-cloud. Therefore, understanding how point-cloud errors affect plane fitting and segmentation is important. Point-cloud intensity, which accompanies the point-cloud data, often goes hand-in-hand with point-cloud noise. This study uses industrial particle boards painted with eight different colors (black, white, grey, red, green, blue, brown, and yellow) and two different sheens (flat and semi-gloss) to explore how noise and plane residuals vary with scanning geometry (i.e., distance and incidence angle) and target color. Results show that darker colors, such as black and brown, can produce point clouds that are several times noisier than those of bright targets, such as white. In addition, a semi-gloss sheen reduces noise on dark targets by about 2-3 times. The study of plane residuals with scanning geometry reveals that, in many of the cases tested, residuals decrease with increasing incidence angles, which can assist in understanding the distribution of plane residuals in a dataset. Finally, a scheme is developed to derive survey guidelines based on the data collected in this experiment. Three examples demonstrate that users should consider instrument specification, required precision of plane residuals, required point spacing, target color, and target sheen when selecting scanning locations. Outcomes of this study can aid users in selecting appropriate instrumentation and improving the planning of terrestrial LiDAR data acquisition.
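Plane fitting and residual analysis of the kind used in this study is a small eigenproblem: the best-fit plane normal is the singular vector of the centered point cloud associated with the smallest singular value. A sketch with synthetic points:

```python
import numpy as np

def fit_plane(points: np.ndarray):
    """Least-squares plane through a point cloud; returns centroid, normal, residuals."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # The smallest right singular vector is the plane normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    residuals = centered @ normal          # signed point-to-plane distances
    return centroid, normal, residuals

rng = np.random.default_rng(3)
xy = rng.uniform(0, 1, size=(2000, 2))
noise = 0.002 * rng.standard_normal(2000)      # e.g., darker target -> larger noise
pts = np.column_stack([xy, 0.3 * xy[:, 0] - 0.1 * xy[:, 1] + noise])

_, n, res = fit_plane(pts)
print(f"plane normal: {np.round(n, 3)}, residual RMS: {res.std():.4f} m")
```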
Error Estimates for Approximate Solutions of the Riccati Equation with Real or Complex Potentials
NASA Astrophysics Data System (ADS)
Finster, Felix; Smoller, Joel
2010-09-01
A method is presented for obtaining rigorous error estimates for approximate solutions of the Riccati equation, with real or complex potentials. Our main tool is to derive invariant region estimates for complex solutions of the Riccati equation. We explain the general strategy for applying these estimates and illustrate the method in typical examples, where the approximate solutions are obtained by gluing together WKB and Airy solutions of corresponding one-dimensional Schrödinger equations. Our method is motivated by, and has applications to, the analysis of linear wave equations in the geometry of a rotating black hole.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kutzler, F.W.; Painter, G.S.
1992-02-15
A fully self-consistent series of nonlocal (gradient) density-functional calculations has been carried out using the augmented-Gaussian-orbital method to determine the magnitude of gradient corrections to the potential-energy curves of the first-row diatomics, Li2 through F2. Both the Langreth-Mehl-Hu and the Perdew-Wang gradient-density functionals were used in calculations of the binding energy, bond length, and vibrational frequency for each dimer. Comparison with results obtained in the local-spin-density approximation (LSDA) using the Vosko-Wilk-Nusair functional, and with experiment, reveals that bond lengths and vibrational frequencies are rather insensitive to details of the gradient functionals, including self-consistency effects, but the gradient corrections reduce the overbinding commonly observed in LSDA calculations of first-row diatomics (with the exception of Li2, the gradient-functional binding-energy error is only 12-50% of the LSDA error). The improved binding energies result from a large differential energy lowering, which occurs in open-shell atoms relative to the diatomics. The stabilization of the atom arises from the use of nonspherical charge and spin densities in the gradient-functional calculations. This stabilization is negligibly small in LSDA calculations performed with nonspherical densities.
Layout Slam with Model Based Loop Closure for 3d Indoor Corridor Reconstruction
NASA Astrophysics Data System (ADS)
Baligh Jahromi, A.; Sohn, G.; Jung, J.; Shahbazi, M.; Kang, J.
2018-05-01
In this paper, we extend a recently proposed visual Simultaneous Localization and Mapping (SLAM) technique, known as Layout SLAM, to make it robust against error accumulation, abrupt changes of camera orientation, and mis-association of newly visited parts of the scene with previously visited landmarks. To do so, we present a novel technique of loop closing based on layout model matching; i.e., both model information (topology and geometry of reconstructed models) and image information (photometric features) are used to address loop-closure detection. The advantages of using the layout-related information in the proposed loop-closing technique are twofold. First, it imposes a metric constraint on global map consistency and thus adjusts the mapping scale drift. Second, it can reduce matching ambiguity in the context of indoor corridors, where the scene is homogeneously textured and extracting a sufficient number of distinguishable point features is a challenging task. To test the impact of the proposed technique on the performance of Layout SLAM, we performed experiments on wide-angle videos captured by a handheld camera. This dataset was collected from the indoor corridors of a building at York University. The obtained results demonstrate that the proposed method successfully detects instances of loops while producing very limited trajectory errors.
Machine Learning of Parameters for Accurate Semiempirical Quantum Chemical Calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dral, Pavlo O.; von Lilienfeld, O. Anatole; Thiel, Walter
2015-05-12
We investigate possible improvements in the accuracy of semiempirical quantum chemistry (SQC) methods through the use of machine learning (ML) models for the parameters. For a given class of compounds, ML techniques require sufficiently large training sets to develop ML models that can be used for adapting SQC parameters to reflect changes in molecular composition and geometry. The ML-SQC approach allows the automatic tuning of SQC parameters for individual molecules, thereby improving the accuracy without deteriorating transferability to molecules with molecular descriptors very different from those in the training set. The performance of this approach is demonstrated for the semiempirical OM2 method using a set of 6095 constitutional isomers C7H10O2, for which accurate ab initio atomization enthalpies are available. The ML-OM2 results show improved average accuracy and a much reduced error range compared with those of standard OM2 results, with mean absolute errors in atomization enthalpies dropping from 6.3 to 1.7 kcal/mol. They are also found to be superior to the results from specific OM2 reparameterizations (rOM2) for the same set of isomers. The ML-SQC approach thus holds promise for fast and reasonably accurate high-throughput screening of materials and molecules.
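The underlying idea, regressing molecule-specific corrections from molecular descriptors, can be sketched with any standard regressor. The published approach tunes the SQC parameters themselves; the sketch below applies the simpler delta-learning variant of the same idea, with synthetic descriptors and energies standing in for the OM2/isomer data:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(4)
n_mol, n_feat = 500, 10
X = rng.normal(size=(n_mol, n_feat))            # molecular descriptors (synthetic)
e_ref = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2    # "ab initio" atomization energies
e_sqc = e_ref + 0.3 * X[:, 0] + 0.1             # baseline SQC with systematic error

# Learn the SQC error as a function of the descriptors.
model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.1)
train, test = slice(0, 400), slice(400, None)
model.fit(X[train], (e_ref - e_sqc)[train])

e_corrected = e_sqc[test] + model.predict(X[test])
mae = lambda a, b: np.mean(np.abs(a - b))
print(f"MAE baseline: {mae(e_sqc[test], e_ref[test]):.3f}, "
      f"MAE ML-corrected: {mae(e_corrected, e_ref[test]):.3f}")
```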
Theoretical model for design and analysis of protectional eyewear.
Zelzer, B; Speck, A; Langenbucher, A; Eppig, T
2013-05-01
Protectional eyewear has to fulfill both mechanical and optical stress tests. To pass the optical tests, the surfaces of safety spectacles have to be optimized to minimize optical aberrations. Starting with the surface data of three measured safety spectacles, a theoretical spectacle model (four spherical surfaces) is first recalculated and then optimized while keeping the front surface unchanged. In addition to spherical power, astigmatic power, and prism imbalance, we used the wavefront error (five different viewing directions) to simulate the optical performance and to optimize the safety spectacle geometries. All surfaces were spherical (maximum global deviation 'peak-to-valley' between the measured surface and the best-fit sphere: 0.132 mm). Except for the spherical power of the model Axcont (-0.07 m^-1), all simulated optical performance before optimization was better than the limits defined by standards. The optimization reduced the wavefront error by 1% to 0.150 λ (Windor/Infield), by 63% to 0.194 λ (Axcont/Bolle), and by 55% to 0.199 λ (2720/3M) without dropping below the measured thickness. The simulated optical performance of spectacle designs could be improved by smart optimization. A good optical design counteracts degradation by parameter variation throughout the manufacturing process.
NASA Astrophysics Data System (ADS)
Bergeron, Charles; Labelle, Hubert; Ronsky, Janet; Zernicke, Ronald
2005-04-01
Spinal curvature progression in scoliosis patients is monitored from X-rays, and this serial exposure to harmful radiation increases the incidence of developing cancer. With the aim of reducing the invasiveness of follow-up, this study seeks to relate the three-dimensional external surface to the internal geometry, having assumed that the physiological links between these are sufficiently regular across patients. A database was used of 194 quasi-simultaneous acquisitions of two X-rays and a 3D laser scan of the entire trunk. Data were processed into sets of data points representing the trunk surface and spinal curve. Functional data analyses were performed using generalized Fourier series with a Haar basis and functional minimum noise fractions. The resulting coefficients became inputs and outputs, respectively, to an array of support vector regression (SVR) machines. SVR parameters were set based on theoretical results, and cross-validation increased confidence in the system's performance. Predicted lateral and frontal views of the spinal curve from the back surface demonstrated average L2-errors of 6.13 and 4.38 millimetres, respectively, across the test set; these compared favourably with the measurement error in the data. This constitutes a first robust prediction of the 3D spinal curve from external data using learning techniques.
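The regression stage, mapping back-surface expansion coefficients to spinal-curve coefficients with an array of support vector machines, can be sketched with one SVR per output coefficient. The coefficients below are synthetic; the paper derives them from Haar series and minimum-noise-fraction transforms:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(5)
n_patients, n_in, n_out = 194, 20, 8
X = rng.normal(size=(n_patients, n_in))         # back-surface coefficients (synthetic)
W = rng.normal(size=(n_in, n_out)) * 0.3
Y = X @ W + 0.05 * rng.standard_normal((n_patients, n_out))  # spine coefficients

train, test = slice(0, 160), slice(160, None)
# One epsilon-SVR machine per output coefficient.
machines = [SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[train], Y[train, k])
            for k in range(n_out)]
Y_pred = np.column_stack([m.predict(X[test]) for m in machines])

rmse = np.sqrt(np.mean((Y_pred - Y[test]) ** 2))
print(f"coefficient-space RMSE on held-out patients: {rmse:.3f}")
```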
Machine learning of parameters for accurate semiempirical quantum chemical calculations
Dral, Pavlo O.; von Lilienfeld, O. Anatole; Thiel, Walter
2015-04-14
We investigate possible improvements in the accuracy of semiempirical quantum chemistry (SQC) methods through the use of machine learning (ML) models for the parameters. For a given class of compounds, ML techniques require sufficiently large training sets to develop ML models that can be used for adapting SQC parameters to reflect changes in molecular composition and geometry. The ML-SQC approach allows the automatic tuning of SQC parameters for individual molecules, thereby improving the accuracy without deteriorating transferability to molecules with molecular descriptors very different from those in the training set. The performance of this approach is demonstrated for the semiempirical OM2 method using a set of 6095 constitutional isomers C7H10O2, for which accurate ab initio atomization enthalpies are available. The ML-OM2 results show improved average accuracy and a much reduced error range compared with those of standard OM2 results, with mean absolute errors in atomization enthalpies dropping from 6.3 to 1.7 kcal/mol. They are also found to be superior to the results from specific OM2 reparameterizations (rOM2) for the same set of isomers. The ML-SQC approach thus holds promise for fast and reasonably accurate high-throughput screening of materials and molecules.
Higher Order, Hybrid BEM/FEM Methods Applied to Antenna Modeling
NASA Technical Reports Server (NTRS)
Fink, P. W.; Wilton, D. R.; Dobbins, J. A.
2002-01-01
In this presentation, the authors address topics relevant to higher order modeling using hybrid BEM/FEM formulations. The first of these is the limitation on convergence rates imposed by geometric modeling errors in the analysis of scattering by a dielectric sphere. The second topic is the application of an Incomplete LU Threshold (ILUT) preconditioner to solve the linear system resulting from the BEM/FEM formulation. The final topic is the application of the higher order BEM/FEM formulation to antenna modeling problems. The authors have previously presented work on the benefits of higher order modeling. To achieve these benefits, special attention is required in the integration of singular and near-singular terms arising in the surface integral equation. Several methods for handling these terms have been presented. It is also well known that achieving the high rates of convergence afforded by higher order bases may also require the employment of higher order geometry models. A number of publications have described the use of quadratic elements to model curved surfaces. The authors have shown, in an EFIE formulation applied to scattering by a PEC sphere, that quadratic order elements may be insufficient to prevent the domination of modeling errors. In fact, on a PEC sphere with radius r = 0.58 λ0, a quartic order geometry representation was required to obtain a convergence benefit from quadratic bases when compared to the convergence rate achieved with linear bases. Initial trials indicate that, for a dielectric sphere of the same radius, requirements on the geometry model are not as severe as for the PEC sphere. The authors will present convergence results for higher order bases as a function of the geometry model order in the hybrid BEM/FEM formulation applied to dielectric spheres. It is well known that the system matrix resulting from the hybrid BEM/FEM formulation is ill-conditioned. For many real applications, a good preconditioner is required to obtain usable convergence from an iterative solver. The authors have examined the use of an Incomplete LU Threshold (ILUT) preconditioner to solve linear systems stemming from higher order BEM/FEM formulations in 2D scattering problems. Although the resulting preconditioner provided an excellent approximation to the system inverse, its size in terms of non-zero entries represented only a modest improvement when compared with the fill-in associated with a sparse direct solver. Furthermore, the fill-in of the preconditioner could not be substantially reduced without the occurrence of instabilities. In addition to the results for these 2D problems, the authors will present iterative solution data from the application of the ILUT preconditioner to 3D problems.
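An ILUT-preconditioned iterative solve of the kind examined here can be reproduced with SciPy's incomplete-LU factorization, whose drop tolerance and fill factor play the roles of the ILUT threshold and fill-in limit. The matrix below is a generic sparse stand-in, not a BEM/FEM system:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Generic sparse test matrix standing in for a BEM/FEM system matrix.
n = 2000
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
A = A + sp.random(n, n, density=1e-3, random_state=0, format="csc")
b = np.ones(n)

# ILUT-style preconditioner: drop_tol is the threshold, fill_factor caps fill-in.
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator((n, n), matvec=ilu.solve)

x, info = spla.gmres(A, b, M=M, restart=50)
print("converged" if info == 0 else f"gmres info={info}",
      "| residual norm:", np.linalg.norm(b - A @ x))
```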
Staff Development Project--Mathematics. Grades K-6. Revision.
ERIC Educational Resources Information Center
Shaw, Jean M.; And Others
This manual was designed for use in conducting staff development sessions for elementary teachers of mathematics in Mississippi in grades K-6. The four topical areas treated in the document are: (1) measurement and geometry; (2) fractions; (3) procedural errors in arithmetic; and (4) problem solving. The number of instructional hours necessary for…
Modeling the influence of LASIK surgery on optical properties of the human eye
NASA Astrophysics Data System (ADS)
Szul-Pietrzak, Elżbieta; Hachoł, Andrzej; Cieślak, Krzysztof; Drożdż, Ryszard; Podbielska, Halina
2011-11-01
The aim was to model the influence of LASIK surgery on the optical parameters of the human eye and to ascertain which factors besides the central corneal radius of curvature and central thickness play the major role in postsurgical refractive change. Ten patients were included in the study. Pre- and postsurgical measurements included standard refraction, anterior corneal curvature and pachymetry. The optical model used in the analysis was based on the Le Grand and El Hage schematic eye, modified by the measured individual parameters of corneal geometry. A substantial difference between eye refractive error measured after LASIK and estimated from the eye model was observed. In three patients, full correction of the refractive error was achieved. However, analysis of the visual quality in terms of spot diagrams and optical transfer functions of the eye optical system revealed some differences in these measurements. This suggests that other factors besides corneal geometry may play a major role in postsurgical refraction. In this paper we investigated whether the biomechanical properties of the eyeball and changes in intraocular pressure could account for the observed discrepancies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hassan, Yassin; Anand, Nk
2016-03-30
A 1/16th scaled VHTR experimental model was constructed and a preliminary test was performed in this study. To produce benchmark data for CFD validation in the future, the facility was first run at partial operation with five pipes being heated. PIV was performed to extract the velocity vector field for three adjacent naturally convective jets at statistically steady state. A small recirculation zone was found between the pipes, and the jets entered the merging zone at 3 cm from the pipe outlet but diverged as the flow approached the top of the test geometry. Turbulence analysis shows the turbulence intensity peaked at 41-45% as the jets mixed. A sensitivity analysis confirmed that 1000 frames were sufficient to measure statistically steady state. The results were then validated by extracting the flow rate from the PIV jet velocity profile and comparing it with an analytic flow rate and an ultrasonic flowmeter; all flow rates lie within the uncertainty of the other two methods for Tests 1 and 2. This test facility can be used for further analysis of naturally convective mixing, and eventually to produce benchmark data for CFD validation of the VHTR during a PCC or DCC accident scenario. Next, a PTV study of 3000 images (1500 image pairs) was used to quantify the velocity field in the upper plenum. A sensitivity analysis confirmed that 1500 frames were sufficient to precisely estimate the flow. Subsequently, three Y-lines (3, 9, and 15 cm) from the pipe output were extracted to consider the output differences between 50 and 1500 frames. The average velocity field and the standard deviation error that accrued in the three different tests were calculated to assess repeatability. The error varied from 1 to 14%, depending on Y-elevation, and decreased as the flow moved farther from the output pipe. In addition, turbulent intensity was calculated and found to be high near the output. Reynolds stresses and turbulent intensity were used to validate the data by comparing them with benchmark data; the experimental data gave the same pattern as the benchmark data. A turbulent single buoyant jet study was performed for the case of LOFC in the upper plenum of the scaled VHTR. Time-averaged profiles show that 3000 frames of images were sufficient for the study up to second-order statistics. Self-similarity is an important feature of jets, since the behavior of jets is independent of Reynolds number and a sole function of geometry. Self-similarity profiles were well observed in the axial velocity and velocity magnitude profiles regardless of z/D, whereas the radial velocity did not show any similarity pattern. The normal components of the Reynolds stresses have self-similarity within the expected range. The study shows that large vortices were observed close to the dome wall, indicating that the geometry of the VHTR has a significant impact on its safety and performance. Near the dome surface, large vortices were shown to inhibit the flows, resulting in reduced axial jet velocity. The vortices that develop subsequently reduce the Reynolds stresses and the impact on the integrity of the VHTR upper plenum surface. Multiple-jet configurations, including two, three, and five jets, were investigated.
NASA Astrophysics Data System (ADS)
Dörr, Dominik; Joppich, Tobias; Schirmaier, Fabian; Mosthaf, Tobias; Kärger, Luise; Henning, Frank
2016-10-01
Thermoforming of continuously fiber reinforced thermoplastics (CFRTP) is ideally suited to thin-walled and complex-shaped products. By means of forming simulation, an initial validation of the producibility of a specific geometry, an optimization of the forming process, and the prediction of fiber reorientation due to forming are possible. Nevertheless, the applied methods need to be validated. Therefore a method is presented which enables the calculation of error measures for the mismatch between simulation results and experimental tests, based on measurements with a conventional coordinate measuring device. As a quantitative measure describing the curvature is provided, the presented method is also suitable for numerical or experimental sensitivity studies on wrinkling behavior. The applied methods for forming simulation, implemented in Abaqus/Explicit, are presented and applied to a generic geometry. The same geometry is tested experimentally, and simulation and test results are compared by the proposed validation method.
Online measurement of bead geometry in GMAW-based additive manufacturing using passive vision
NASA Astrophysics Data System (ADS)
Xiong, Jun; Zhang, Guangjun
2013-11-01
Additive manufacturing based on gas metal arc welding is an advanced technique for depositing fully dense components at low cost. Despite this, techniques to achieve accurate control and automation of the process have not yet been fully developed. Online measurement of the deposited bead geometry is a key problem for reliable control. In this work a passive vision-sensing system, comprising two cameras and composite filtering techniques, was proposed for real-time detection of the bead height and width during deposition of thin walls. The nozzle-to-top-surface distance was monitored to eliminate accumulated height errors during the multi-layer deposition process. Various image processing algorithms were applied and discussed for extracting feature parameters. A calibration procedure was presented for the monitoring system. Validation experiments confirmed the effectiveness of the online measurement system for bead geometry in layered additive manufacturing.
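After thresholding and filtering, the geometry extraction reduces to measuring the bead profile in a binary image: width from the extent of foreground columns, height from the top edge relative to a baseline. A sketch with a placeholder image and calibration factor:

```python
import numpy as np

def bead_geometry(binary: np.ndarray, mm_per_px: float):
    """Bead width and height from a binary side-view image (rows = y, cols = x).

    Assumes the substrate baseline is the lowest foreground row and the bead
    is the single connected bright region remaining after filtering.
    """
    rows, cols = np.nonzero(binary)
    width = (cols.max() - cols.min() + 1) * mm_per_px
    height = (rows.max() - rows.min() + 1) * mm_per_px
    return width, height

# Toy binary image of a bead cross-section profile.
img = np.zeros((120, 200), dtype=bool)
img[80:100, 60:140] = True                     # 20 px tall, 80 px wide bead
w, h = bead_geometry(img, mm_per_px=0.05)      # placeholder calibration factor
print(f"bead width = {w:.1f} mm, height = {h:.1f} mm")
```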
Chen, Gang; Xu, Zhengyuan; Ding, Haipeng; Sadler, Brian
2009-03-02
We consider outdoor non-line-of-sight deep ultraviolet (UV) solar blind communications at ranges up to 100 m, with different transmitter and receiver geometries. We propose an empirical channel path loss model, and fit the model based on extensive measurements. We observe range-dependent power decay with a power exponent that varies from 0.4 to 2.4 with varying geometry. We compare with the single scattering model, and show that the single scattering assumption leads to a model that is not accurate for small apex angles. Our model is then used to study fundamental communication system performance trade-offs among transmitted optical power, range, link geometry, data rate, and bit error rate. Both weak and strong solar background radiation scenarios are considered to bound detection performance. These results provide guidelines to system design.
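The range-dependent decay described above is a power law in range, which can be fitted in log-log space. A minimal sketch under stated assumptions (hypothetical arrays of measured range and path loss; the actual empirical model may carry additional geometry-dependent terms):

```python
# Minimal sketch: fit path loss L = xi * r**alpha in log-log space.
# `ranges_m` and `path_loss` stand in for the measured data.
import numpy as np

def fit_power_law(ranges_m, path_loss):
    alpha, log_xi = np.polyfit(np.log(ranges_m), np.log(path_loss), 1)
    return np.exp(log_xi), alpha       # (xi, alpha); alpha ~ 0.4-2.4 above
```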
SU-E-T-558: Monte Carlo Photon Transport Simulations On GPU with Quadric Geometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chi, Y; Tian, Z; Jiang, S
Purpose: Monte Carlo simulation on GPU has experienced rapid advancements over the past few years and tremendous accelerations have been achieved. Yet existing packages were developed only in voxelized geometry. In some applications, e.g. radioactive seed modeling, simulations in more complicated geometry are needed. This abstract reports our initial efforts towards developing a quadric geometry module aiming at expanding the application scope of GPU-based MC simulations. Methods: We defined the simulation geometry as consisting of a number of homogeneous bodies, each specified by its material composition and limiting surfaces characterized by quadric functions. A tree data structure was utilized to define the geometric relationship between different bodies. We modified our GPU-based photon MC transport package to incorporate this geometry. Specifically, geometry parameters were loaded into the GPU's shared memory for fast access. Geometry functions were rewritten to enable the identification of the body that contains the current particle location via a fast searching algorithm based on the tree data structure. Results: We tested our package on an example problem of HDR-brachytherapy dose calculation for a shielded cylinder. The dose under the quadric geometry and that under the voxelized geometry agreed in 94.2% of total voxels within the 20% isodose line based on a statistical t-test (95% confidence level), where the reference dose was defined to be the one at 0.5 cm away from the cylinder surface. It took 243 sec to transport 100 million source photons under this quadric geometry on an NVidia Titan GPU card. Compared with the simulation time of 99.6 sec in the voxelized geometry, including quadric geometry reduced efficiency due to the complicated geometry-related computations. Conclusion: Our GPU-based MC package has been extended to support photon transport simulation in quadric geometry. Satisfactory accuracy was observed with a reduced efficiency. Developments for charged particle transport in this geometry are currently in progress.
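The body-location test at the heart of such a module reduces to evaluating the sign of each bounding quadric at the particle position. A minimal CPU-side sketch, assuming an illustrative flat list of bodies rather than the package's actual tree layout:

```python
# Minimal sketch: locate the body containing a point in quadric geometry.
# Each surface is a symmetric 4x4 matrix Q; with x = (x, y, z, 1), the sign
# of x^T Q x tells which side of the surface the point is on. The flat list
# of bodies here is an illustrative stand-in for the tree data structure.
import numpy as np

def quadric_side(Q, p):
    x = np.append(p, 1.0)
    return x @ Q @ x                   # sign determines the side of the surface

def find_body(bodies, p):
    """bodies: list of (surfaces, required_signs) pairs."""
    for i, (surfaces, signs) in enumerate(bodies):
        if all(np.sign(quadric_side(Q, p)) == s
               for Q, s in zip(surfaces, signs)):
            return i                   # first body whose sign pattern matches
    return -1                          # outside all defined bodies
```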
NASA Astrophysics Data System (ADS)
Wang, Mi; Fang, Chengcheng; Yang, Bo; Cheng, Yufeng
2016-06-01
The low-frequency error is a key factor affecting the uncontrolled geometric processing accuracy of high-resolution optical imagery. To guarantee the geometric quality of imagery, this paper presents an on-orbit calibration method for the low-frequency error based on a geometric calibration field. Firstly, we introduce the overall flow of low-frequency error on-orbit analysis and calibration, which includes optical axis angle variation detection of the star sensor, relative calibration among star sensors, multi-star sensor information fusion, and low-frequency error model construction and verification. Secondly, we use the optical axis angle change detection method to analyze the law of low-frequency error variation. Thirdly, we respectively use relative calibration and information fusion among star sensors to realize datum unification and high-precision attitude output. Finally, we realize the low-frequency error model construction and optimal estimation of model parameters based on the DEM/DOM of the geometric calibration field. To evaluate the performance of the proposed calibration method, real data from a certain satellite type are used. Test results demonstrate that the calibration model in this paper can well describe the law of the low-frequency error variation. The uncontrolled geometric positioning accuracy of the high-resolution optical image in the WGS-84 coordinate system is obviously improved after the step-wise calibration.
In search of periodic signatures in IGS REPRO1 solution
NASA Astrophysics Data System (ADS)
Mtamakaya, J. D.; Santos, M. C.; Craymer, M. R.
2010-12-01
We have been looking for periodic signatures in the REPRO1 solution recently released by the IGS. At this stage, a selected subset of IGS station time series in the position and residual domains is under harmonic analysis. We can learn different things from this analysis. From the position domain, we can learn more about actual station motions. From the residual domain, we can learn more about mis-modelled or un-modelled errors. As far as error sources are concerned, we have investigated effects that may be due to tides, atmospheric loading, the definition of the position of the figure axis, and GPS constellation geometry. This poster discusses our findings and presents insights on errors that need to be modelled or have their models improved.
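For evenly sampled daily coordinate series, a plain periodogram already exposes such periodic signatures. A minimal sketch, assuming a hypothetical daily time series (unevenly sampled series would call for a Lomb-Scargle periodogram instead):

```python
# Minimal sketch: periodogram of one station coordinate/residual series.
import numpy as np

def periodogram(values, spacing_days=1.0):
    """values: evenly sampled station coordinate or residual series."""
    values = np.asarray(values) - np.mean(values)        # remove the offset
    power = np.abs(np.fft.rfft(values)) ** 2             # raw spectral power
    freq = np.fft.rfftfreq(len(values), d=spacing_days)  # cycles per day
    return freq, power

# Peaks near 1/365.25 and 2/365.25 cycles/day flag annual and semi-annual
# signatures; comparing position and residual spectra helps separate real
# station motion from mis-modelled errors.
```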
Adaptive radial basis function mesh deformation using data reduction
NASA Astrophysics Data System (ADS)
Gillebaart, T.; Blom, D. S.; van Zuijlen, A. H.; Bijl, H.
2016-09-01
Radial Basis Function (RBF) mesh deformation is one of the most robust mesh deformation methods available. Using the greedy (data reduction) method in combination with an explicit boundary correction results in an efficient method, as shown in the literature. However, to ensure the method remains robust, two issues are addressed: 1) how to ensure that the set of control points remains an accurate representation of the geometry in time, and 2) how to use/automate the explicit boundary correction while ensuring a high mesh quality. In this paper, we propose an adaptive RBF mesh deformation method which ensures the set of control points always represents the geometry/displacement up to a certain (user-specified) criterion, by keeping track of the boundary error throughout the simulation and re-selecting when needed. As opposed to the unit displacement and prescribed displacement selection methods, the adaptive method is more robust, user-independent and efficient for the cases considered. Secondly, the analysis of a single high-aspect-ratio cell is used to formulate an equation for the correction radius needed, depending on the characteristics of the correction function used, maximum aspect ratio, minimum first cell height and boundary error. Based on this analysis, two new radial basis correction functions are derived and proposed. The proposed automated procedure is verified while varying the correction function, Reynolds number (and thus first cell height and aspect ratio) and boundary error. Finally, the parallel efficiency is studied for the two adaptive methods, unit displacement and prescribed displacement, for both the CPU and the memory formulation, with a 2D oscillating and translating airfoil with oscillating flap, a 3D flexible locally deforming tube and a deforming wind turbine blade. Generally, the memory formulation requires less work (due to the large amount of work required for evaluating RBFs), but the parallel efficiency reduces due to the limited bandwidth available between CPU and memory. In terms of parallel efficiency/scaling the different studied methods perform similarly, with the greedy algorithm being the bottleneck. In terms of absolute computational work the adaptive methods are better for the cases studied due to their more efficient selection of the control points. By automating most of the RBF mesh deformation, a robust, efficient and almost user-independent mesh deformation method is presented.
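The greedy selection that drives both the data reduction and the adaptive re-selection can be sketched compactly: repeatedly add the boundary node with the largest interpolation error until a user-specified tolerance is met. A minimal single-component sketch with an illustrative Gaussian basis (the paper's basis functions and re-selection triggers differ):

```python
# Minimal sketch of greedy control-point selection for RBF interpolation.
import numpy as np

def phi(r, c=0.25):
    return np.exp(-(r / c) ** 2)        # Gaussian basis (illustrative choice)

def greedy_select(nodes, disp, tol):
    """nodes: (N, d) boundary points; disp: (N,) one displacement component."""
    sel = [int(np.argmax(np.abs(disp)))]          # seed with the largest motion
    while True:
        A = phi(np.linalg.norm(nodes[sel][:, None] - nodes[sel][None, :],
                               axis=-1))
        w = np.linalg.solve(A, disp[sel])         # interpolation weights
        pred = phi(np.linalg.norm(nodes[:, None] - nodes[sel][None, :],
                                  axis=-1)) @ w
        err = np.abs(pred - disp)                 # boundary error at all nodes
        worst = int(np.argmax(err))
        if err[worst] < tol or len(sel) == len(nodes):
            return sel, w                         # control points + weights
        sel.append(worst)                         # re-select: add the worst node
```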
Beier, Susann; Ormiston, John; Webster, Mark; Cater, John; Norris, Stuart; Medrano-Gracia, Pau; Young, Alistair; Gilbert, Kathleen; Cowan, Brett
2016-08-01
The majority of patients with angina or heart failure have coronary artery disease. Left main bifurcations are particularly susceptible to pathological narrowing. Flow is a major factor in atheroma development, but limitations in imaging technology such as spatio-temporal resolution, signal-to-noise ratio (SNRv), and imaging artefacts prevent in vivo investigations. Computational fluid dynamics (CFD) modelling is a common numerical approach to study flow, but it requires cautious and rigorous application for meaningful results. Left main bifurcation angles of 40°, 80° and 110° were found to represent the spread of an atlas based on 100 computed tomography angiograms. Three left mains with these bifurcation angles were reconstructed with 1) idealized, 2) stented, and 3) patient-specific geometry. These were then scaled up by approximately 7× and 3D printed as large phantoms. Their flow was reproduced using a blood-analogous, dynamically scaled steady flow circuit, enabling in vitro phase-contrast magnetic resonance (PC-MRI) measurements. After threshold segmentation the image data were registered to true-scale CFD of the same coronary geometry using a coherent point drift algorithm, yielding a small covariance error (σ² < 5.8×10⁻⁴). Natural-neighbour interpolation of the CFD data onto the PC-MRI grid enabled direct flow field comparison, showing very good agreement in magnitude (error 2-12%) and directional changes (r² = 0.87-0.91), and stent-induced flow alterations were measurable for the first time. PC-MRI over-estimated velocities close to the wall, possibly due to partial voluming. Bifurcation shape determined the development of slow flow regions, which created lower-SNRv regions and increased discrepancies. These can likely be minimised in future by testing different similarity parameters to reduce acquisition error and improve correlation further. It was demonstrated that in vitro large-phantom acquisition correlates to true-scale coronary flow simulations when dynamically scaled, and thus can overcome current PC-MRI spatio-temporal limitations. This novel method enables experimental assessment of stent-induced flow alterations, and in future may elevate CFD coronary flow simulations by providing sophisticated boundary conditions, and enable investigations of stenosis phantoms.
Rubin, D.M.
1992-01-01
Forecasting of one-dimensional time series previously has been used to help distinguish periodicity, chaos, and noise. This paper presents two-dimensional generalizations for making such distinctions for spatial patterns. The techniques are evaluated using synthetic spatial patterns and then are applied to a natural example: ripples formed in sand by blowing wind. Tests with the synthetic patterns demonstrate that the forecasting techniques can be applied to two-dimensional spatial patterns, with the same utility and limitations as when applied to one-dimensional time series. One limitation is that some combinations of periodicity and randomness exhibit forecasting signatures that mimic those of chaos. For example, sine waves distorted with correlated phase noise have forecasting errors that increase with forecasting distance, errors that are minimized using nonlinear models at moderate embedding dimensions, and forecasting properties that differ significantly between the original and surrogates. Ripples formed in sand by flowing air or water typically vary in geometry from one to another, even when formed in a flow that is uniform on a large scale; each ripple modifies the local flow or sand-transport field, thereby influencing the geometry of the next ripple downcurrent. Spatial forecasting was used to evaluate the hypothesis that such a deterministic process - rather than randomness or quasiperiodicity - is responsible for the variation between successive ripples. This hypothesis is supported by a forecasting error that increases with forecasting distance, a greater accuracy of nonlinear relative to linear models, and significant differences between forecasts made with the original ripples and those made with surrogate patterns. Forecasting signatures cannot be used to distinguish ripple geometry from sine waves with correlated phase noise, but this kind of structure can be ruled out by two geometric properties of the ripples: successive ripples are highly correlated in wavelength, and ripple crests display dislocations such as branchings and mergers. © 1992 American Institute of Physics.
Szymanski, Eric S; Kimsey, Isaac J; Al-Hashimi, Hashim M
2017-03-29
The replicative and translational machinery utilizes the unique geometry of canonical G·C and A·T/U Watson-Crick base pairs to discriminate against DNA and RNA mismatches in order to ensure high fidelity replication, transcription, and translation. There is growing evidence that spontaneous errors occur when mismatches adopt a Watson-Crick-like geometry through tautomerization and/or ionization of the bases. Studies employing NMR relaxation dispersion recently showed that wobble dG·dT and rG·rU mismatches in DNA and RNA duplexes transiently form tautomeric and anionic species with probabilities (≈0.01-0.40%) that are in concordance with replicative and translational errors. Although computational studies indicate that these exceptionally short-lived and low-abundance species form Watson-Crick-like base pairs, their conformation could not be directly deduced from the experimental data, and alternative pairing geometries could not be ruled out. Here, we report direct NMR evidence that the transient tautomeric and anionic species form hydrogen-bonded Watson-Crick-like base pairs. A guanine-to-inosine substitution, which selectively knocks out a Watson-Crick-type (G)N2H2···O2(T) hydrogen bond, significantly destabilized the transient tautomeric and anionic species, as assessed by the lack of any detectable chemical exchange in imino nitrogen rotating-frame spin relaxation (R1ρ) experiments. A 15N R1ρ NMR experiment targeting the amino nitrogen of guanine (dG-N2) provides direct evidence for Watson-Crick (G)N2H2···O2(T) hydrogen bonding in the transient tautomeric state. The strategy presented in this work can be generally applied to examine hydrogen-bonding patterns in nucleic acid transient states, including in other tautomeric and anionic species that are postulated to play roles in replication and translational errors.
Using warnings to reduce categorical false memories in younger and older adults.
Carmichael, Anna M; Gutchess, Angela H
2016-07-01
Warnings about memory errors can reduce their incidence, although past work has largely focused on associative memory errors. The current study sought to explore whether warnings could be tailored to specifically reduce false recall of categorical information in both younger and older populations. Before encoding word pairs designed to induce categorical false memories, half of the younger and older participants were warned to avoid committing these types of memory errors. Older adults who received a warning committed fewer categorical memory errors, as well as other types of semantic memory errors, than those who did not receive a warning. In contrast, young adults' memory errors did not differ for the warning versus no-warning groups. Our findings provide evidence for the effectiveness of warnings at reducing categorical memory errors in older adults, perhaps by supporting source monitoring, reduction in reliance on gist traces, or through effective metacognitive strategies.
Syed, Faisal F; Rangu, Venu; Bruce, Charles J; Johnson, Susan B; Danielsen, Andrew; Gilles, Emily J; Ladewig, Dorothy J; Mikell, Susan B; Berhow, Steven; Wahnschaffe, Douglas; Suddendorf, Scott H; Asirvatham, Samuel J; Friedman, Paul A
2015-03-01
Debulking of electrically active atrial tissue may reduce the mass of fibrillating tissue during atrial fibrillation, eliminate triggers, and promote maintenance of normal sinus rhythm (NSR). We investigated whether left atrial appendage (LAA) ligation results in modification of the atrial electrical substrate. Healthy male mongrel dogs (N = 20) underwent percutaneous epicardial LAA ligation. The ligation system grabber recorded LAA local electrograms (EGM) continuously before, during, and after closure. Successful ligation with a preloaded looped suture was confirmed intraprocedurally by LAA Doppler flow cessation on transesophageal echocardiography (TEE) and loss of LAA electrical activity, and after the procedure by direct necropsic visualization. P-wave duration on surface electrocardiograms was measured immediately before and after LAA closure. Percent P-wave duration reduction was correlated with preclosure LAA internal dimensions measured by TEE and external dimensions measured on necropsy specimens to investigate associations of LAA geometry with the extent of electrical substrate modification. LAA ligation was successful in all dogs and accompanied by loss of LAA EGM. P-wave duration was reduced immediately upon ligation (mean 75 ms preligation to 63 ms postligation; mean difference ± standard error, 12 ± 1 ms; P < 0.0001). Percent P-wave reduction was associated with larger LAA longitudinal cross-sectional area (R² = 0.263, P = 0.04) and smaller external circumference (R² = 0.687, P = 0.04). All dogs were in sinus rhythm. Percutaneous LAA ligation results in its acute electrical isolation and atrial electrical substrate modification, the degree of which is associated with LAA geometry. These electrical changes raise the possibility that LAA ligation may promote NSR by removing LAA substrate and triggers. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Gao, F.; Zhang, Y.
2017-12-01
A new inverse method is developed to simultaneously estimate aquifer thickness and boundary conditions using borehole and hydrodynamic measurements from a homogeneous confined aquifer under steady-state ambient flow. This method extends a previous groundwater inversion technique which had assumed known aquifer geometry and thickness. In this research, thickness inversion was successfully demonstrated when hydrodynamic data were supplemented with measured thicknesses from boreholes. Based on a set of hybrid formulations which describe approximate solutions to the groundwater flow equation, the new inversion technique can incorporate noisy observed data (i.e., thicknesses, hydraulic heads, Darcy fluxes, or flow rates) at measurement locations as a set of conditioning constraints. Given sufficient quantity and quality of measurements, the inverse method yields a single well-posed system of equations that can be solved efficiently with nonlinear optimization. The method is successfully tested on two-dimensional synthetic aquifer problems with regular geometries. The solution is stable when measurement errors are increased, with error magnitudes reaching up to ±10% of the range of the respective measurement. When error-free observed data are used to condition the inversion, the estimated thickness is within a ±5% error envelope surrounding the true value; when data contain increasing errors, the estimated thickness becomes less accurate, as expected. Different combinations of measurement types are then investigated to evaluate data worth. Thickness can be inverted with the combination of observed heads and at least one of the other types of observations, such as thickness, Darcy fluxes, or flow rates. The data requirement of the new inversion method is thus not much different from that of interpreting classic well tests. Future work will improve upon this research by developing an estimation strategy for heterogeneous aquifers, while drawdown data from hydraulic tests will also be incorporated as conditioning measurements.
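The role of the conditioning data can be illustrated with a much-reduced toy problem: in 1D steady confined flow, head observations fix the hydraulic gradient, and one flow-rate observation then fixes the transmissivity and hence the thickness. A minimal sketch under stated assumptions (an illustrative forward model and numbers, not the paper's hybrid formulations):

```python
# Minimal sketch: estimate thickness b and boundary heads for a toy 1D
# steady confined aquifer by nonlinear least squares on noisy data.
import numpy as np
from scipy.optimize import least_squares

K, L = 1e-4, 1000.0                     # known conductivity (m/s), domain (m)

def forward(params, x):
    hL, hR, b = params                  # boundary heads and thickness
    head = hL + (hR - hL) * x / L       # steady 1D head is linear in x
    flow = K * b * (hL - hR) / L        # Darcy flow per unit width (m^2/s)
    return head, flow

def residuals(params, x_obs, h_obs, q_obs):
    head, flow = forward(params, x_obs)
    return np.concatenate([head - h_obs, [flow - q_obs]])

# Synthetic data from a "true" aquifer, heads perturbed with noise
rng = np.random.default_rng(1)
x_obs = np.linspace(100, 900, 9)
h_true, q_true = forward([50.0, 48.0, 20.0], x_obs)
h_obs = h_true + rng.normal(0, 0.01, x_obs.size)

fit = least_squares(residuals, x0=[49.0, 49.0, 10.0],
                    args=(x_obs, h_obs, q_true))
print(fit.x)                            # recovered (hL, hR, b), b near 20 m
```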
Design Automation Using Script Languages. High-Level CAD Templates in Non-Parametric Programs
NASA Astrophysics Data System (ADS)
Moreno, R.; Bazán, A. M.
2017-10-01
The main purpose of this work is to study the advantages offered by the application of traditional techniques of technical drawing in processes for automation of the design, with non-parametric CAD programs provided with scripting languages. Given that an example drawing can be solved with traditional step-by-step detailed procedures, it is possible to do the same with CAD applications and to generalize it later, incorporating references. In today's modern CAD applications, there are striking absences of solutions for building engineering: oblique projections (military and cavalier), 3D modelling of complex stairs, roofs, furniture, and so on. The use of geometric references (using variables in script languages) and their incorporation into high-level CAD templates allows the automation of processes. Instead of repeatedly creating similar designs or modifying their data, users should be able to use these templates to generate future variations of the same design. This paper presents the automation process of several complex drawing examples based on CAD script files aided with parametric geometry calculation tools. The proposed method allows us to solve complex geometry designs not currently incorporated in current CAD applications and to subsequently create other new derivatives without user intervention. Automation in the generation of complex designs not only saves time but also increases the quality of the presentations and reduces the possibility of human errors.
Error analysis of motion correction method for laser scanning of moving objects
NASA Astrophysics Data System (ADS)
Goel, S.; Lohani, B.
2014-05-01
The limitation of conventional laser scanning methods is that the objects being scanned should be static. The need to scan moving objects has resulted in the development of new methods capable of generating correct 3D geometry of moving objects. Limited literature is available showing the development of the few methods capable of catering to the problem of object motion during scanning. All the existing methods utilize their own models or sensors. Studies on error modelling or analysis of any of these motion correction methods are lacking in the literature. In this paper, we develop the error budget and present the analysis of one such 'motion correction' method. This method assumes the availability of position and orientation information of the moving object, which in general can be obtained by installing a POS system on board or by using tracking devices. It then uses this information along with the laser scanner data to apply a correction to the laser data, thus resulting in correct geometry despite the object being mobile during scanning. The major applications of this method lie in the shipping industry, to scan ships either moving or parked in the sea, and to scan other objects like hot air balloons or aerostats. It is to be noted that the other methods of 'motion correction' explained in the literature cannot be applied to scan the objects mentioned here, making the chosen method quite unique. This paper presents some interesting insights into the functioning of the 'motion correction' method as well as a detailed account of the behavior and variation of the error due to different sensor components, both alone and in combination with each other.
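The correction itself is a per-point rigid-body transform: each laser return is mapped through the object's pose at its acquisition time into an object-fixed frame. A minimal sketch with illustrative pose conventions (the method's actual sensor models and error budget are richer than this):

```python
# Minimal sketch of the motion-correction step: a laser return taken at
# time t in the scanner frame is mapped through the object's POS-reported
# pose at t, so all points land in one object-fixed frame.
import numpy as np
from scipy.spatial.transform import Rotation

def correct_point(p_scanner, pose_t):
    """pose_t: (R, t) = object orientation (Rotation) and position at the
    acquisition time; returns the point in object-fixed coordinates."""
    R, t = pose_t
    return R.inv().apply(p_scanner - t)   # undo the object's rigid motion

# Example with an illustrative pose:
p = correct_point(np.array([10.0, 2.0, 0.5]),
                  (Rotation.from_euler('z', 5.0, degrees=True),
                   np.array([1.0, 0.0, 0.0])))
```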
NASA Technical Reports Server (NTRS)
Sapp, Clyde A.; See, Thomas H.; Zolensky, Michael E.
1992-01-01
During the 3-month deintegration of the LDEF, the M&D SIG generated approximately 5000 digital color stereo image pairs of impact-related features from all space-exposed surfaces. Currently, these images are being processed at JSC to yield more accurate feature information. Work is currently underway to determine the minimum number of data points necessary to parametrically define impact crater morphologies in order to minimize the man-hour-intensive task of tie point selection. Initial attempts at deriving accurate crater depth and diameter measurements from binocular imagery were based on the assumption that the crater geometries were best defined by a paraboloid. We made no assumptions regarding the crater depth/diameter ratios but instead allowed each crater to define its own coefficients by performing a least-squares fit based on user-selected tiepoints. Initial test cases resulted in larger errors than desired, so it was decided to test our basic assumption that the crater geometries could be parametrically defined as paraboloids. The method for testing this assumption was to carefully slice test craters (experimentally produced in an appropriate aluminum alloy) vertically through the center, resulting in a readily visible cross-section of the crater geometry. Initially, five separate craters were cross-sectioned in this fashion. A digital image of each cross-section was then created, and the 2-D crater geometry was hand-digitized to create a table of XY positions for each crater. A 2nd-order polynomial (parabola) was fitted to the data using a least-squares approach. The differences between the fitted equation and the actual data were fairly significant, and easily large enough to account for the errors found in the 3-D fits. The differences between the curve fit and the actual data were consistent between the craters. This consistency suggested that the differences were due to the fact that a parabola did not sufficiently define the generic crater geometry. Fourth- and 6th-order equations were then fitted to each crater cross-section, and significantly better estimates of the crater geometry were obtained with each fit. Work is presently underway to determine the best way to make use of this new parametric crater definition.
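The order-comparison test is easy to reproduce numerically. Below is a minimal sketch with a synthetic, flatter-than-parabolic profile standing in for the digitized cross-sections (the shape and numbers are illustrative only):

```python
# Minimal sketch: fit 2nd-, 4th- and 6th-order polynomials to a digitized
# (x, z) crater profile and compare RMS residuals. Synthetic profile only.
import numpy as np

x = np.linspace(-1.0, 1.0, 101)
z = -0.5 * (1 - x**2) ** 2              # a flatter-than-parabolic "crater"

for order in (2, 4, 6):
    coef = np.polyfit(x, z, order)      # least-squares polynomial fit
    rms = np.sqrt(np.mean((np.polyval(coef, x) - z) ** 2))
    print(f"order {order}: RMS residual {rms:.2e}")

# Residuals drop sharply above 2nd order, mirroring the reported finding
# that a paraboloid under-describes the generic crater geometry.
```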
Mistake proofing: changing designs to reduce error
Grout, J R
2006-01-01
Mistake proofing uses changes in the physical design of processes to reduce human error. It can be used to change designs in ways that prevent errors from occurring, to detect errors after they occur but before harm occurs, to allow processes to fail safely, or to alter the work environment to reduce the chance of errors. Effective mistake proofing design changes should initially be effective in reducing harm, be inexpensive, and easily implemented. Over time these design changes should make life easier and speed up the process. Ideally, the design changes should increase patients' and visitors' understanding of the process. These designs should themselves be mistake proofed and follow the good design practices of other disciplines. PMID:17142609
Effect of squeeze film damper land geometry on damper performance
NASA Astrophysics Data System (ADS)
Wang, Y. H.; Hahn, E. J.
1994-04-01
Variable axial land geometry dampers can significantly alter the unbalance response, and in particular the likelihood of undesirable jump behavior, of circular-orbit-type squeeze film dampers. Assuming end feed, the pressure distribution, the fluid film forces, and the stiffness and damping coefficients are obtained for such variable axial land geometry dampers, as well as the jump-up propensity for vertical squeeze-film-damped rigid rotors. It is shown that variable land geometry dampers can reduce the variation of the stiffness and damping coefficients, thereby reducing the degree of damper force non-linearity, and presumably reducing the likelihood of undesirable bistable operation. However, it is also found that regardless of unbalance and regardless of the depth, width or shape of the profile, parallel land dampers are least likely to experience jump-up to undesirable operation modes. These conflicting conclusions may be accounted for by the reduction in damping. They will need to be qualified for practical dampers, which normally have oil-hole feed rather than end feed.
A Dose of Reality: Radiation Analysis for Realistic Human Spacecraft
NASA Technical Reports Server (NTRS)
Barzilla, J. E.; Lee, K. T.
2017-01-01
INTRODUCTION As with most computational analyses, a tradeoff exists between problem complexity, resource availability and response accuracy when modeling radiation transport from the source to a detector. The largest amount of analyst time for setting up an analysis is often spent ensuring that any simplifications made have minimal impact on the results. The vehicle shield geometry of interest is typically simplified from the original CAD design in order to reduce computation time, but this simplification requires the analyst to "re-draw" the geometry with a limited set of volumes in order to accommodate a specific radiation transport software package. The resulting low-fidelity geometry model cannot be shared with or compared to other radiation transport software packages, and the process can be error prone with increased model complexity. The work presented here demonstrates the use of the DAGMC (Direct Accelerated Geometry for Monte Carlo) Toolkit from the University of Wisconsin to model the impacts of several space radiation sources on a CAD drawing of the US Lab module. METHODS The DAGMC toolkit workflow begins with the export of an existing CAD geometry from the native CAD to the ACIS format. The ACIS format file is then cleaned using SpaceClaim to remove small holes and component overlaps. Metadata is then assigned to the cleaned geometry file using CUBIT/Trelis from csimsoft (Registered Trademark). The DAGMC plugin script removes duplicate shared surfaces, facets the geometry to a specified tolerance, and ensures that the faceted geometry is watertight. This step also writes the material and scoring information to a standard input file format that the analyst can alter as desired prior to running the radiation transport program. The scoring results can be transformed, via python script, into a 3D format that is viewable in a standard graphics program. RESULTS The CAD model of the US Lab module of the International Space Station, inclusive of all the racks and components, was simplified to remove holes and volume overlaps. Problematic features within the drawing were also removed or repaired to prevent runtime issues. The cleaned drawing was then run through the DAGMC workflow to prepare for analysis. Pilot tests modeling transport of 1 GeV proton and 800 MeV/A oxygen sources show that reasonable results are converged upon in an acceptable amount of overall computation time from drawing preparation to data analysis. The FLUKA radiation transport code will next be used to model both a GCR and a trapped radiation source. These results will then be compared with measurements that have been made by the radiation instrumentation deployed inside the US Lab module. DISCUSSION Early analyses have indicated that the DAGMC workflow is a promising toolkit for running vehicle geometries of interest to NASA through multiple radiation transport codes. In addition, recent work has shown that a realistic human phantom, provided via a subcontract with the University of Florida, can be placed inside any vehicle geometry for a combinatorial analysis. This added functionality gives the user the ability to score various parameters at the organ level, and the results can then be used as input for cancer risk models.
The slab geometry laser. I - Theory
NASA Technical Reports Server (NTRS)
Eggleston, J. M.; Kane, T. J.; Kuhn, K.; Byer, R. L.; Unternahrer, J.
1984-01-01
Slab geometry solid-state lasers offer significant performance improvements over conventional rod-geometry lasers. A detailed theoretical description of the thermal, stress, and beam-propagation characteristics of a slab laser is presented. The analysis includes consideration of the effects of the zig-zag optical path, which eliminates thermal and stress focusing and reduces residual birefringence.
Vélez-Díaz-Pallarés, Manuel; Delgado-Silveira, Eva; Carretero-Accame, María Emilia; Bermejo-Vicedo, Teresa
2013-01-01
To identify actions to reduce medication errors in the process of drug prescription, validation and dispensing, and to evaluate the impact of their implementation. A Health Care Failure Mode and Effect Analysis (HFMEA) was supported by a before-and-after medication error study to measure the actual impact on error rate after the implementation of corrective actions in the process of drug prescription, validation and dispensing in wards equipped with computerised physician order entry (CPOE) and unit-dose distribution system (788 beds out of 1080) in a Spanish university hospital. The error study was carried out by two observers who reviewed medication orders on a daily basis to register prescription errors by physicians and validation errors by pharmacists. Drugs dispensed in the unit-dose trolleys were reviewed for dispensing errors. Error rates were expressed as the number of errors for each process divided by the total opportunities for error in that process times 100. A reduction in prescription errors was achieved by providing training for prescribers on CPOE, updating prescription procedures, improving clinical decision support and automating the software connection to the hospital census (relative risk reduction (RRR), 22.0%; 95% CI 12.1% to 31.8%). Validation errors were reduced after optimising time spent in educating pharmacy residents on patient safety, developing standardised validation procedures and improving aspects of the software's database (RRR, 19.4%; 95% CI 2.3% to 36.5%). Two actions reduced dispensing errors: reorganising the process of filling trolleys and drawing up a protocol for drug pharmacy checking before delivery (RRR, 38.5%; 95% CI 14.1% to 62.9%). HFMEA facilitated the identification of actions aimed at reducing medication errors in a healthcare setting, as the implementation of several of these led to a reduction in errors in the process of drug prescription, validation and dispensing.
Circular electrode geometry metal-semiconductor-metal photodetectors
NASA Technical Reports Server (NTRS)
Mcaddo, James A. (Inventor); Towe, Elias (Inventor); Bishop, William L. (Inventor); Wang, Liang-Guo (Inventor)
1994-01-01
The invention comprises a high-speed, metal-semiconductor-metal photodetector which comprises a pair of generally circular, electrically conductive electrodes formed on an optically active semiconductor layer. Various embodiments of the invention include a spiral, intercoiled electrode geometry and an electrode geometry comprised of substantially circular, concentric electrodes which are interposed. These electrode geometries result in photodetectors with lower capacitance, dark current, and inductance, which reduces the ringing seen in the optical pulse response.
NASA Astrophysics Data System (ADS)
Almansa, Julio; Salvat-Pujol, Francesc; Díaz-Londoño, Gloria; Carnicer, Artur; Lallena, Antonio M.; Salvat, Francesc
2016-02-01
The Fortran subroutine package PENGEOM provides a complete set of tools to handle quadric geometries in Monte Carlo simulations of radiation transport. The material structure where radiation propagates is assumed to consist of homogeneous bodies limited by quadric surfaces. The PENGEOM subroutines (a subset of the PENELOPE code) track particles through the material structure, independently of the details of the physics models adopted to describe the interactions. Although these subroutines are designed for detailed simulations of photon and electron transport, where all individual interactions are simulated sequentially, they can also be used in mixed (class II) schemes for simulating the transport of high-energy charged particles, where the effect of soft interactions is described by the random-hinge method. The definition of the geometry and the details of the tracking algorithm are tailored to optimize simulation speed. The use of fuzzy quadric surfaces minimizes the impact of round-off errors. The provided software includes a Java graphical user interface for editing and debugging the geometry definition file and for visualizing the material structure. Images of the structure are generated by using the tracking subroutines and, hence, they describe the geometry actually passed to the simulation code.
Error and uncertainty in Raman thermal conductivity measurements
Thomas Edwin Beechem; Yates, Luke; Graham, Samuel
2015-04-22
We investigated error and uncertainty in Raman thermal conductivity measurements via finite element based numerical simulation of two geometries often employed -- Joule-heating of a wire and laser-heating of a suspended wafer. Using this methodology, the accuracy and precision of the Raman-derived thermal conductivity are shown to depend on (1) assumptions within the analytical model used in the deduction of thermal conductivity, (2) uncertainty in the quantification of heat flux and temperature, and (3) the evolution of thermomechanical stress during testing. Apart from the influence of stress, errors of 5% coupled with uncertainties of ±15% are achievable for most materials under conditions typical of Raman thermometry experiments. Error can increase to >20%, however, for materials having highly temperature dependent thermal conductivities or, in some materials, when thermomechanical stress develops concurrently with the heating. A dimensionless parameter -- termed the Raman stress factor -- is derived to identify when stress effects will induce large levels of error. Together, the results compare the utility of Raman based conductivity measurements relative to more established techniques while at the same time identifying situations where its use is most efficacious.
Tissue resistivity estimation in the presence of positional and geometrical uncertainties.
Baysal, U; Eyüboğlu, B M
2000-08-01
Geometrical uncertainties (organ boundary variation and electrode position uncertainties) are the biggest sources of error in estimating electrical resistivity of tissues from body surface measurements. In this study, in order to decrease estimation errors, the statistically constrained minimum mean squared error estimation algorithm (MiMSEE) is constrained with a priori knowledge of the geometrical uncertainties in addition to the constraints based on geometry, resistivity range, linearization and instrumentation errors. The MiMSEE calculates an optimum inverse matrix, which maps the surface measurements to the unknown resistivity distribution. The required data are obtained from four-electrode impedance measurements, similar to injected-current electrical impedance tomography (EIT). In this study, the surface measurements are simulated by using a numerical thorax model. The data are perturbed with additive instrumentation noise. Simulated surface measurements are then used to estimate the tissue resistivities by using the proposed algorithm. The results are compared with the results of conventional least squares error estimator (LSEE). Depending on the region, the MiMSEE yields an estimation error between 0.42% and 31.3% compared with 7.12% to 2010% for the LSEE. It is shown that the MiMSEE is quite robust even in the case of geometrical uncertainties.
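The core of such an estimator is a linear minimum mean squared error inverse: prior statistics on the unknowns and on the errors define an optimal matrix mapping measurements to resistivities. A minimal zero-mean sketch under stated assumptions (placeholder matrices; the actual algorithm additionally encodes linearization and geometry constraints):

```python
# Minimal sketch of a linear MMSE estimator: with prior resistivity
# covariance C_x, linearized forward map A, and noise covariance C_n,
# x_hat = B @ y maps surface measurements y to resistivity estimates.
import numpy as np

def mimsee_matrix(A, C_x, C_n):
    """Optimal inverse matrix B for the zero-mean linear Gaussian case."""
    return C_x @ A.T @ np.linalg.inv(A @ C_x @ A.T + C_n)

# Geometrical uncertainty can be folded in by inflating C_n with the extra
# measurement scatter that boundary/electrode perturbations produce.
```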
NASA Astrophysics Data System (ADS)
Huo, Ming-Xia; Li, Ying
2017-12-01
Quantum error correction is important to quantum information processing, which allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from the knowledge of error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. Any adaptation of the quantum error correction code or its implementation circuit is not required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. surface code. A Gaussian processes algorithm is used to estimate and predict error rates based on error correction data in the past. We find that using these estimated error rates, the probability of error correction failures can be significantly reduced by a factor increasing with the code distance.
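A minimal sketch of the estimation step, using a generic Gaussian process regressor on a synthetic drifting error-rate series (the kernel, data, and prediction horizon are illustrative assumptions, not the protocol's specifics):

```python
# Minimal sketch: smooth and extrapolate a series of syndrome-derived
# error rates with Gaussian process regression.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

t = np.linspace(0, 10, 50)[:, None]     # rounds of error correction (toy)
rate = (0.01 + 0.002 * np.sin(t.ravel())
        + np.random.default_rng(0).normal(0, 5e-4, 50))  # drifting rate

gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0)
                              + WhiteKernel(noise_level=1e-6))
gp.fit(t, rate)
mean, std = gp.predict(np.array([[11.0]]), return_std=True)
print(mean, std)                        # predicted future error rate + spread
```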
Automated drug dispensing system reduces medication errors in an intensive care setting.
Chapuis, Claire; Roustit, Matthieu; Bal, Gaëlle; Schwebel, Carole; Pansu, Pascal; David-Tchouda, Sandra; Foroni, Luc; Calop, Jean; Timsit, Jean-François; Allenet, Benoît; Bosson, Jean-Luc; Bedouch, Pierrick
2010-12-01
We aimed to assess the impact of an automated dispensing system on the incidence of medication errors related to picking, preparation, and administration of drugs in a medical intensive care unit. We also evaluated the clinical significance of such errors and user satisfaction. Preintervention and postintervention study involving a control and an intervention medical intensive care unit. Two medical intensive care units in the same department of a 2,000-bed university hospital. Adult medical intensive care patients. After a 2-month observation period, we implemented an automated dispensing system in one of the units (study unit) chosen randomly, with the other unit being the control. The overall error rate was expressed as a percentage of total opportunities for error. The severity of errors was classified according to National Coordinating Council for Medication Error Reporting and Prevention categories by an expert committee. User satisfaction was assessed through self-administered questionnaires completed by nurses. A total of 1,476 medications for 115 patients were observed. After automated dispensing system implementation, we observed a reduced percentage of total opportunities for error in the study compared to the control unit (13.5% and 18.6%, respectively; p<.05); however, no significant difference was observed before automated dispensing system implementation (20.4% and 19.3%, respectively; not significant). Before-and-after comparisons in the study unit also showed a significantly reduced percentage of total opportunities for error (20.4% and 13.5%; p<.01). An analysis of detailed opportunities for error showed a significant impact of the automated dispensing system in reducing preparation errors (p<.05). Most errors caused no harm (National Coordinating Council for Medication Error Reporting and Prevention category C). The automated dispensing system did not reduce errors causing harm. Finally, the mean for working conditions improved from 1.0±0.8 to 2.5±0.8 on the four-point Likert scale. The implementation of an automated dispensing system reduced overall medication errors related to picking, preparation, and administration of drugs in the intensive care unit. Furthermore, most nurses favored the new drug dispensation organization.
Ferrero, Alejandro; Rabal, Ana; Campos, Joaquín; Martínez-Verdú, Francisco; Chorro, Elísabet; Perales, Esther; Pons, Alicia; Hernanz, María Luisa
2013-02-01
A reduced set of measurement geometries allows the spectral reflectance of special effect coatings to be predicted for any other geometry. A physical model based on flake-related parameters has been used to determine nonredundant measurement geometries for the complete description of the spectral bidirectional reflectance distribution function (BRDF). The analysis of experimental spectral BRDF was carried out by means of principal component analysis. From this analysis, a set of nine measurement geometries was proposed to characterize special effect coatings. It was shown that, for two different special effect coatings, these geometries provide a good prediction of their complete color shift.
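Principal component analysis of the measured BRDFs underlies the geometry reduction: if a few components explain the variance across geometries, a correspondingly small set of measurement geometries suffices. A minimal sketch with a placeholder data matrix:

```python
# Minimal sketch: PCA (via SVD) of spectral BRDF measurements stacked as
# rows = measurement geometries, columns = wavelengths.
import numpy as np

def pca_variance(brdf):                 # brdf: (n_geometries, n_wavelengths)
    centered = brdf - brdf.mean(axis=0)
    s = np.linalg.svd(centered, compute_uv=False)
    var = s**2 / np.sum(s**2)
    return np.cumsum(var)               # cumulative explained variance

# If ~9 components explain nearly all the variance, roughly 9 well-chosen
# measurement geometries suffice to predict the full BRDF, as reported.
```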
Determining relative error bounds for the CVBEM
Hromadka, T.V.
1985-01-01
The Complex Variable Boundary Element Method (CVBEM) provides a measure of relative error which can be utilized to subsequently reduce the error or provide information for further modeling analysis. By maximizing the relative error norm on each boundary element, a bound on the total relative error for each boundary element can be evaluated. This bound can be utilized to test CVBEM convergence, to analyze the effects of additional boundary nodal points in reducing the modeling error, and to evaluate the sensitivity of the resulting modeling error within a boundary element to the error produced in another boundary element as a function of geometric distance. © 1985.
Pupil geometry and pupil re-imaging in telescope arrays
NASA Technical Reports Server (NTRS)
Traub, Wesley A.
1990-01-01
This paper considers the issues of lateral and longitudinal pupil geometry in ground-based telescope arrays, such as IOTA. In particular, it is considered whether or not pupil re-imaging is required before beam combination. By considering the paths of rays through the system, an expression is derived for the optical path errors in the combined wavefront as a function of array dimensions, telescope magnification factor, viewing angle, and field-of-view. By examining this expression for the two cases of pupil-plane and image-plane combination, operational limits can be found for any array. As a particular example, it is shown that for IOTA no pupil re-imaging optics will be needed.
Reduction of Orifice-Induced Pressure Errors
NASA Technical Reports Server (NTRS)
Plentovich, Elizabeth B.; Gloss, Blair B.; Eves, John W.; Stack, John P.
1987-01-01
Use of porous-plug orifice reduces or eliminates errors, induced by orifice itself, in measuring static pressure on airfoil surface in wind-tunnel experiments. Piece of sintered metal press-fitted into static-pressure orifice so it matches surface contour of model. Porous material reduces orifice-induced pressure error associated with conventional orifice of same or smaller diameter. Also reduces or eliminates additional errors in pressure measurement caused by orifice imperfections. Provides more accurate measurements in regions with very thin boundary layers.
Reducing errors benefits the field-based learning of a fundamental movement skill in children.
Capio, C M; Poolton, J M; Sit, C H P; Holmstrom, M; Masters, R S W
2013-03-01
Proficient fundamental movement skills (FMS) are believed to form the basis of more complex movement patterns in sports. This study examined the development of the FMS of overhand throwing in children through either an error-reduced (ER) or error-strewn (ES) training program. Students (n = 216), aged 8-12 years (M = 9.16, SD = 0.96), practiced overhand throwing in either a program that reduced errors during practice (ER) or one that was ES. ER program reduced errors by incrementally raising the task difficulty, while the ES program had an incremental lowering of task difficulty. Process-oriented assessment of throwing movement form (Test of Gross Motor Development-2) and product-oriented assessment of throwing accuracy (absolute error) were performed. Changes in performance were examined among children in the upper and lower quartiles of the pretest throwing accuracy scores. ER training participants showed greater gains in movement form and accuracy, and performed throwing more effectively with a concurrent secondary cognitive task. Movement form improved among girls, while throwing accuracy improved among children with low ability. Reduced performance errors in FMS training resulted in greater learning than a program that did not restrict errors. Reduced cognitive processing costs (effective dual-task performance) associated with such approach suggest its potential benefits for children with developmental conditions. © 2011 John Wiley & Sons A/S.
Information systems and human error in the lab.
Bissell, Michael G
2004-01-01
Health system costs in clinical laboratories are incurred daily due to human error. Indeed, a major impetus for automating clinical laboratories has always been the opportunity it presents to simultaneously reduce cost and improve quality of operations by decreasing human error. But merely automating these processes is not enough. To the extent that the introduction of these systems results in operators having less practice in dealing with unexpected events or becoming deskilled in problem-solving, however, new kinds of error will likely appear. Clinical laboratories could potentially benefit by integrating findings on human error from modern behavioral science into their operations. Fully understanding human error requires a deep understanding of human information processing and cognition. Predicting and preventing negative consequences requires application of this understanding to laboratory operations. Although the occurrence of a particular error at a particular instant cannot be absolutely prevented, human error rates can be reduced. The following principles are key: an understanding of the process of learning in relation to error; understanding the origin of errors, since this knowledge can be used to reduce their occurrence; optimal systems should be forgiving to the operator by absorbing errors, at least for a time; although much is known by industrial psychologists about how to write operating procedures and instructions in ways that reduce the probability of error, this expertise is hardly ever put to use in the laboratory; and a feedback mechanism must be designed into the system that enables the operator to recognize in real time that an error has occurred.
Spatial calibration of an optical see-through head mounted display
Gilson, Stuart J.; Fitzgibbon, Andrew W.; Glennerster, Andrew
2010-01-01
We present here a method for calibrating an optical see-through Head Mounted Display (HMD) using techniques usually applied to camera calibration (photogrammetry). Using a camera placed inside the HMD to take pictures simultaneously of a tracked object and features in the HMD display, we could exploit established camera calibration techniques to recover both the intrinsic and extrinsic properties of the HMD (width, height, focal length, optic centre and principal ray of the display). Our method gives low re-projection errors and, unlike existing methods, involves no time-consuming and error-prone human measurements, nor any prior estimates about the HMD geometry. PMID:18599125
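Since the HMD is treated exactly like a camera, standard photogrammetric calibration applies once 3D feature positions and their 2D image locations are collected. A minimal OpenCV sketch with synthetic stand-in data so it runs end-to-end (the board layout, camera parameters, and poses are illustrative assumptions):

```python
# Minimal sketch: recover camera/HMD intrinsics with cv2.calibrateCamera.
import numpy as np
import cv2

# Synthetic stand-in data: a planar grid seen from a few poses, projected
# through a known camera, in place of real tracked/display features.
grid = np.zeros((6 * 9, 3), np.float32)
grid[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2)   # 9x6 board, unit pitch
K_true = np.array([[800.0, 0, 640], [0, 800.0, 512], [0, 0, 1]])

obj_pts, img_pts = [], []
for rz, tz in [(0.0, 10.0), (0.2, 12.0), (-0.3, 9.0)]:
    proj, _ = cv2.projectPoints(grid, np.array([0.1, rz, 0.0]),
                                np.array([0.0, 0.0, tz]), K_true, None)
    obj_pts.append(grid)
    img_pts.append(proj.astype(np.float32))

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, (1280, 1024), None, None)
print("re-projection RMS (px):", rms)   # ~0 for noise-free synthetic data
print("recovered focal lengths:", K[0, 0], K[1, 1])
```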
Determination of the spectral behaviour of atmospheric soot using different particle models
NASA Astrophysics Data System (ADS)
Skorupski, Krzysztof
2017-08-01
In the atmosphere, black carbon aggregates interact with both organic and inorganic matter. In many studies they are modeled using different, less complex geometries. However, some common simplifications might lead to many inaccuracies in the subsequent light scattering simulations. The goal of this study was to compare the spectral behavior of different, commonly used soot particle models. For light scattering simulations in the visible spectrum, the ADDA algorithm was used. The results prove that the relative extinction error δCext can, in some cases, be unexpectedly large. Therefore, before starting extensive simulations, it is important to know what error might occur.
2010-08-31
[List-of-figures fragment; only the captions are recoverable:] Figure 5.9: Run 10 - Schlieren image with only the laser-induced air-breakdown glow visible (M=8.77, T∞=68.7 K, P∞=0.15 kPa). Run #13 - Laser-induced blast wave interaction with oblique shock (M=5.95, T∞=263.7 K, P∞=5.62 kPa, Ep=196±20 J). The air-breakdown geometry (M=5.95, T∞=262.3 K, P∞=5.16 kPa, Ep=176±18 J). Figure 5.13: Run #16 - Laser-induced blast wave.
Errors as a Means of Reducing Impulsive Food Choice.
Sellitto, Manuela; di Pellegrino, Giuseppe
2016-06-05
Nowadays, the increasing incidence of eating disorders due to poor self-control has given rise to increased obesity and other chronic weight problems, and ultimately, to reduced life expectancy. The capacity to refrain from automatic responses is usually high in situations in which making errors is highly likely. The protocol described here aims at reducing imprudent preference in women during hypothetical intertemporal choices about appetitive food by associating it with errors. First, participants undergo an error task where two different edible stimuli are associated with two different error likelihoods (high and low). Second, they make intertemporal choices about the two edible stimuli, separately. As a result, this method decreases the discount rate for future amounts of the edible reward that cued higher error likelihood, selectively. This effect is under the influence of the self-reported hunger level. The present protocol demonstrates that errors, well known as motivationally salient events, can induce the recruitment of cognitive control, thus being ultimately useful in reducing impatient choices for edible commodities.
Errors as a Means of Reducing Impulsive Food Choice
Sellitto, Manuela; di Pellegrino, Giuseppe
2016-01-01
Nowadays, the increasing incidence of eating disorders due to poor self-control has given rise to increased obesity and other chronic weight problems, and ultimately, to reduced life expectancy. The capacity to refrain from automatic responses is usually high in situations in which making errors is highly likely. The protocol described here aims at reducing imprudent preference in women during hypothetical intertemporal choices about appetitive food by associating it with errors. First, participants undergo an error task where two different edible stimuli are associated with two different error likelihoods (high and low). Second, they make intertemporal choices about the two edible stimuli, separately. As a result, this method decreases the discount rate for future amounts of the edible reward that cued higher error likelihood, selectively. This effect is under the influence of the self-reported hunger level. The present protocol demonstrates that errors, well known as motivationally salient events, can induce the recruitment of cognitive control, thus being ultimately useful in reducing impatient choices for edible commodities. PMID:27341281
NASA Technical Reports Server (NTRS)
Stoll, John C.
1995-01-01
The performance of an unaided attitude determination system based on GPS interferometry is examined using linear covariance analysis. The modelled system includes four GPS antennae onboard a gravity gradient stabilized spacecraft, specifically the Air Force's RADCAL satellite. The principal error sources are identified and modelled. The optimal system's sensitivities to these error sources are examined through an error budget and by varying system parameters. The effects of two satellite selection algorithms, Geometric and Attitude Dilution of Precision (GDOP and ADOP, respectively) are examined. The attitude performance of two optimal-suboptimal filters is also presented. Based on this analysis, the limiting factors in attitude accuracy are the knowledge of the relative antenna locations, the electrical path lengths from the antennae to the receiver, and the multipath environment. The performance of the system is found to be fairly insensitive to torque errors, orbital inclination, and the two satellite geometry figures-of-merit tested.
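The GDOP and ADOP figures of merit both derive from the geometry matrix of the selected satellites. A minimal sketch of the position-domain GDOP computation (illustrative; an attitude-specific ADOP additionally folds in baseline geometry and carrier wavelength):

```python
# Minimal sketch: Geometric Dilution of Precision from line-of-sight vectors.
import numpy as np

def gdop(los_unit_vectors):
    """los_unit_vectors: (n_sats, 3) receiver-to-satellite unit vectors."""
    H = np.hstack([los_unit_vectors, np.ones((len(los_unit_vectors), 1))])
    return np.sqrt(np.trace(np.linalg.inv(H.T @ H)))

# Selecting the 4+ satellites that minimize GDOP (or an attitude ADOP)
# reduces how strongly measurement errors are amplified into the solution.
```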
Simplified model of pinhole imaging for quantifying systematic errors in image shape
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benedetti, Laura Robin; Izumi, N.; Khan, S. F.
In this paper, we examine systematic errors in x-ray imaging by pinhole optics for quantifying uncertainties in the measurement of convergence and asymmetry in inertial confinement fusion implosions. We present a quantitative model for the total resolution of a pinhole optic with an imaging detector that more effectively describes the effect of diffraction than models that treat geometry and diffraction as independent. This model can be used to predict loss of shape detail due to imaging across the transition from geometric to diffractive optics. We find that fractional error in observable shapes is proportional to the total resolution element we present and inversely proportional to the length scale of the asymmetry being observed. Finally, we have experimentally validated our results by imaging a single object with differently sized pinholes and with different magnifications.
Study of run time errors of the ATLAS pixel detector in the 2012 data taking period
NASA Astrophysics Data System (ADS)
Gandrajula, Reddy Pratap
The high-resolution silicon pixel detector is critical for event vertex reconstruction and particle track reconstruction in the ATLAS detector. During pixel data-taking operation, some modules (silicon pixel sensor + front-end chip + module control chip (MCC)) go into an auto-disabled state, in which the modules do not send data for storage. Modules become operational again after reconfiguration. The source of the problem is not fully understood. One possible source is traced to the occurrence of single event upsets (SEU) in the MCC. Such a module goes into either a Timeout or Busy state. This report presents a study of the different types and rates of errors occurring in pixel data-taking operation, including the dependency of the error rate on the pixel detector geometry.
Simplified model of pinhole imaging for quantifying systematic errors in image shape
Benedetti, Laura Robin; Izumi, N.; Khan, S. F.; ...
2017-10-30
In this paper, we examine systematic errors in x-ray imaging by pinhole optics for quantifying uncertainties in the measurement of convergence and asymmetry in inertial confinement fusion implosions. We present a quantitative model for the total resolution of a pinhole optic with an imaging detector that more effectively describes the effect of diffraction than models that treat geometry and diffraction as independent. This model can be used to predict loss of shape detail due to imaging across the transition from geometric to diffractive optics. We find that fractional error in observable shapes is proportional to the total resolution element we present and inversely proportional to the length scale of the asymmetry being observed. Finally, we have experimentally validated our results by imaging a single object with differently sized pinholes and with different magnifications.
The influence of random element displacement on DOA estimates obtained with (Khatri-Rao-)root-MUSIC.
Inghelbrecht, Veronique; Verhaevert, Jo; van Hecke, Tanja; Rogier, Hendrik
2014-11-11
Although a wide range of direction of arrival (DOA) estimation algorithms has been described for a diverse range of array configurations, no specific stochastic analysis framework has been established to assess the probability density function of the error on DOA estimates due to random errors in the array geometry. Therefore, we propose a stochastic collocation method that relies on a generalized polynomial chaos expansion to connect the statistical distribution of random position errors to the resulting distribution of the DOA estimates. We apply this technique to the conventional root-MUSIC and the Khatri-Rao-root-MUSIC methods. According to Monte-Carlo simulations, this novel approach yields a speedup by a factor of more than 100 in terms of CPU-time for a one-dimensional case and by a factor of 56 for a two-dimensional case.
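The collocation idea can be illustrated end to end on a toy problem: build a polynomial chaos surrogate of a DOA estimator from a handful of quadrature evaluations, then sample the cheap surrogate instead of the estimator. A minimal sketch (the two-element array, its parameters, and the error magnitude are all assumed stand-ins, not the paper's root-MUSIC setup):

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermegauss, hermeval

# toy two-element array; spacing error delta ~ N(0, sigma) (values assumed)
lam, d0, theta_true, sigma = 0.3, 0.15, np.deg2rad(20.0), 0.005

def doa_estimate(delta):
    # phase acquired with the true (perturbed) spacing, inverted with the
    # nominal spacing -> biased DOA estimate
    phi = 2 * np.pi * (d0 + delta) * np.sin(theta_true) / lam
    return np.arcsin(np.clip(phi * lam / (2 * np.pi * d0), -1.0, 1.0))

# stochastic collocation: Gauss-Hermite nodes for a standard-normal germ
order, nq = 6, 12
x, w = hermegauss(nq)              # nodes/weights for weight exp(-x^2/2)
w = w / np.sqrt(2.0 * np.pi)       # renormalize to the N(0,1) density
y = doa_estimate(sigma * x)        # only nq expensive evaluations

# project onto probabilists' Hermite polynomials He_k (E[He_k^2] = k!)
coef = np.array([np.sum(w * y * hermeval(x, np.eye(order + 1)[k]))
                 / factorial(k) for k in range(order + 1)])

# cheap surrogate sampling vs. direct Monte Carlo
xi = np.random.default_rng(0).standard_normal(100_000)
print(np.std(hermeval(xi, coef)), np.std(doa_estimate(sigma * xi)))
```

The speedup reported in the abstract comes from exactly this substitution: the expensive estimator is evaluated only at the quadrature nodes, while the output distribution is sampled from the polynomial expansion.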
Unforced errors and error reduction in tennis
Brody, H
2006-01-01
Only at the highest level of tennis is the number of winners comparable to the number of unforced errors. As the average player loses many more points due to unforced errors than due to winners by an opponent, if the rate of unforced errors can be reduced, it should lead to an increase in points won. This article shows how players can improve their game by understanding and applying the laws of physics to reduce the number of unforced errors. PMID:16632568
Three-dimensional modeling of the cochlea by use of an arc fitting approach.
Schurzig, Daniel; Lexow, G Jakob; Majdani, Omid; Lenarz, Thomas; Rau, Thomas S
2016-12-01
A cochlea modeling approach is presented that allows for a user-defined degree of geometry simplification which automatically adjusts to the patient-specific anatomy. Model generation can be performed in a straightforward manner because errors are estimated prior to the actual generation, thus minimizing modeling time. The presented technique is therefore well suited for a wide range of applications, including finite element analyses, where geometrical simplifications are often inevitable. The method is demonstrated for n=5 cochleae, which were segmented using custom software for increased accuracy. The linear basilar membrane cross sections are expanded to areas, while the scalae contours are reconstructed by a predefined number of arc segments. Prior to model generation, geometrical errors are evaluated locally for each cross section as well as globally for the resulting models and their basal turn profiles. The final combination of all reconditioned features into a 3D volume is performed in Autodesk Inventor using the loft feature. Because the volume generation is based on cubic splines, low errors could be achieved even for low numbers of arc segments and provided cross sections, both of which correspond to a strong degree of model simplification. Model generation could be performed in a time-efficient manner. The proposed simplification method was proven to be well suited for the helical cochlea geometry. The generated output data can be imported into commercial software tools for various analyses, representing a time-efficient way to create cochlea models optimally suited for the desired task.
Quantitative evaluation of performance of three-dimensional printed lenses
NASA Astrophysics Data System (ADS)
Gawedzinski, John; Pawlowski, Michal E.; Tkaczyk, Tomasz S.
2017-08-01
We present an analysis of the shape, surface quality, and imaging capabilities of custom three-dimensional (3-D) printed lenses. 3-D printing technology enables lens prototypes to be fabricated without restrictions on surface geometry. Thus, spherical, aspherical, and rotationally nonsymmetric lenses can be manufactured in an integrated production process. This technique serves as a noteworthy alternative to multistage, labor-intensive, abrasive processes, such as grinding, polishing, and diamond turning. Here, we evaluate the quality of lenses fabricated by Luxexcel using patented Printoptical© technology that is based on an inkjet printing technique by comparing them to lenses made with traditional glass processing technologies (grinding, polishing, etc.). The surface geometry and roughness of the lenses were evaluated using white-light and Fizeau interferometers. We have compared peak-to-valley wavefront deviation, root mean square (RMS) wavefront error, radii of curvature, and the arithmetic roughness average (Ra) profile of plastic and glass lenses. In addition, the imaging performance of selected pairs of lenses was tested using a 1951 USAF resolution target. The results indicate that 3-D printed optics can be manufactured with surface roughness comparable to that of injection molded lenses (Ra<20 nm). The RMS wavefront error of 3-D printed prototypes was at a minimum 18.8 times larger than equivalent glass prototypes for a lens with a 12.7 mm clear aperture, but, when measured within 63% of its clear aperture, the 3-D printed components' RMS wavefront error was comparable to glass lenses.
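The two wavefront metrics compared here are straightforward to compute from an optical-path-difference map; a minimal sketch (the toy defocus map and the aperture fractions are illustrative):

```python
import numpy as np

def wavefront_metrics(opd, mask):
    """Peak-to-valley and RMS wavefront error over an aperture mask.
    opd: optical path difference map in waves (2-D array)."""
    vals = opd[mask]
    pv = vals.max() - vals.min()
    rms = np.sqrt(np.mean((vals - vals.mean()) ** 2))
    return pv, rms

# evaluate over the full clear aperture and over 63% of it
n = 512
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
r = np.hypot(x, y)
opd = 0.2 * (2 * r**2 - 1) + 0.05 * np.random.randn(n, n)  # toy defocus + noise
print(wavefront_metrics(opd, r <= 1.00))
print(wavefront_metrics(opd, r <= 0.63))
```

Restricting the mask radius is how the abstract's "within 63% of the clear aperture" comparison is made: the strongly deviated lens edge is excluded from the statistics.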
Quantitative evaluation of performance of 3D printed lenses
Gawedzinski, John; Pawlowski, Michal E.; Tkaczyk, Tomasz S.
2017-01-01
We present an analysis of the shape, surface quality, and imaging capabilities of custom 3D printed lenses. 3D printing technology enables lens prototypes to be fabricated without restrictions on surface geometry. Thus, spherical, aspherical and rotationally non-symmetric lenses can be manufactured in an integrated production process. This technique serves as a noteworthy alternative to multistage, labor-intensive, abrasive processes such as grinding, polishing and diamond turning. Here, we evaluate the quality of lenses fabricated by Luxexcel using patented Printoptical© technology that is based on an inkjet printing technique by comparing them to lenses made with traditional glass processing technologies (grinding, polishing etc.). The surface geometry and roughness of the lenses were evaluated using white-light and Fizeau interferometers. We have compared peak-to-valley wavefront deviation, root-mean-squared wavefront error, radii of curvature and the arithmetic average of the roughness profile (Ra) of plastic and glass lenses. Additionally, the imaging performance of selected pairs of lenses was tested using a 1951 USAF resolution target. The results indicate that 3D printed optics can be manufactured with surface roughness comparable to that of injection molded lenses (Ra < 20 nm). The RMS wavefront error of 3D printed prototypes was at a minimum 18.8 times larger than equivalent glass prototypes for a lens with a 12.7 mm clear aperture, but when measured within 63% of its clear aperture, the 3D printed components' RMS wavefront error was comparable to glass lenses. PMID:29238114
Load Sharing Behavior of Star Gearing Reducer for Geared Turbofan Engine
NASA Astrophysics Data System (ADS)
Mo, Shuai; Zhang, Yidu; Wu, Qiong; Wang, Feiming; Matsumura, Shigeki; Houjoh, Haruo
2017-07-01
Load sharing behavior is very important for power-split gearing systems; the star gearing reducer, as a new and special type of transmission system, can be used in many industrial fields. However, there is little literature on the key multiple-split load sharing issue in the main gearbox used in the new geared turbofan engine. A mechanism analysis is made of the load sharing behavior among the star gears of a star gearing reducer for a geared turbofan engine. A comprehensive meshing error analysis is conducted for the eccentricity error, gear thickness error, base pitch error, assembly error, and bearing error of the star gearing reducer, respectively. The floating meshing error resulting from meshing clearance variation, caused by the simultaneous floating of the sun gear and annular gear, is taken into account. A refined mathematical model for load sharing coefficient calculation is established in consideration of the different meshing stiffnesses and supporting stiffnesses of the components. The curves of the load sharing coefficient under the interaction, single action, and single variation of the various component errors are obtained, and the sensitivity of the load sharing coefficient to the different errors is quantified. The load sharing coefficient of the star gearing reducer is 1.033 and the maximum meshing force on a gear tooth is about 3010 N. This paper provides theoretical evidence for optimal parameter design and proper tolerance distribution in the advanced development and manufacturing process, so as to achieve optimal effects in economy and technology.
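For reference, the load sharing coefficient reported here (1.033) is conventionally defined as the peak branch load relative to the ideal equal share over the $N$ star-gear branches (standard textbook definition assumed; the paper's refined model adds stiffness and error terms on top of this):

$$K=\max_{i}\frac{N\,F_{i}}{\sum_{j=1}^{N}F_{j}},\qquad K=1\ \text{for perfectly even sharing}.$$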
Chen, Yang; Young, Paul M; Murphy, Seamus; Fletcher, David F; Long, Edward; Lewis, David; Church, Tanya; Traini, Daniela
2017-04-01
The aim of this study was to investigate the aerosol plume geometries of pressurised metered dose inhalers (pMDIs) using a high-speed laser imaging system with different actuator nozzle materials and designs. Actuators made from aluminium, PET and PTFE were manufactured with four different nozzle designs: cone, flat, curved cone and curved flat. Plume angles and spans generated using the designed actuator nozzles with four solution-based pMDI formulations were imaged using the Oxford Lasers EnVision system and analysed using EnVision Patternate software. Reduced plume angles for all actuator materials and nozzle designs were observed with pMDI formulations containing drug with a high co-solvent (ethanol) concentration, due to the reduced vapour pressure. Significantly higher plume angles were observed with the PTFE flat nozzle across all formulations, which could be a result of the nozzle geometry and the material's hydrophobicity. The plume geometry of pMDI aerosols can be influenced by the vapour pressure of the formulation, the nozzle geometry and the actuator material's physicochemical properties.
Twice cutting method reduces tibial cutting error in unicompartmental knee arthroplasty.
Inui, Hiroshi; Taketomi, Shuji; Yamagami, Ryota; Sanada, Takaki; Tanaka, Sakae
2016-01-01
Bone cutting error can be one of the causes of malalignment in unicompartmental knee arthroplasty (UKA). The amount of cutting error in total knee arthroplasty has been reported; however, none have investigated cutting error in UKA. The purpose of this study was to reveal the amount of cutting error in UKA when an open cutting guide was used, and to clarify whether cutting the tibia horizontally twice using the same cutting guide reduces the cutting errors in UKA. We measured the alignment of the tibial cutting guides, the first-cut cutting surfaces and the second-cut cutting surfaces using the navigation system in 50 UKAs. Cutting error was defined as the angular difference between the cutting guide and the cutting surface. The mean absolute first-cut cutting error was 1.9° (1.1° varus) in the coronal plane and 1.1° (0.6° anterior slope) in the sagittal plane, whereas the mean absolute second-cut cutting error was 1.1° (0.6° varus) in the coronal plane and 1.1° (0.4° anterior slope) in the sagittal plane. Cutting the tibia horizontally twice reduced the cutting errors in the coronal plane significantly (P<0.05). Our study demonstrated that in UKA, cutting the tibia horizontally twice using the same cutting guide reduced cutting error in the coronal plane. Copyright © 2014 Elsevier B.V. All rights reserved.
Improvement in error propagation in the Shack-Hartmann-type zonal wavefront sensors.
Pathak, Biswajit; Boruah, Bosanta R
2017-12-01
Estimation of the wavefront from measured slope values is an essential step in a Shack-Hartmann-type wavefront sensor. Using an appropriate estimation algorithm, these measured slopes are converted into wavefront phase values. Hence, accuracy in wavefront estimation lies in proper interpretation of these measured slope values using the chosen estimation algorithm. There are two important sources of errors associated with the wavefront estimation process, namely, the slope measurement error and the algorithm discretization error. The former type is due to the noise in the slope measurements or to the detector centroiding error, and the latter is a consequence of solving equations of a basic estimation algorithm adopted onto a discrete geometry. These errors deserve particular attention, because they decide the preference of a specific estimation algorithm for wavefront estimation. In this paper, we investigate these two important sources of errors associated with the wavefront estimation algorithms of Shack-Hartmann-type wavefront sensors. We consider the widely used Southwell algorithm and the recently proposed Pathak-Boruah algorithm [J. Opt. 16, 055403 (2014), doi:10.1088/2040-8978/16/5/055403] and perform a comparative study between the two. We find that the latter algorithm is inherently superior to the Southwell algorithm in terms of the error propagation performance. We also conduct experiments that further establish the correctness of the comparative study between the said two estimation algorithms.
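For context, the Southwell approach mentioned above co-locates phase and slope samples and solves a sparse least-squares system relating adjacent-node phase differences to averaged slopes; a minimal sketch (grid size, spacing, and the quadratic test wavefront are illustrative):

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

def southwell_reconstruct(sx, sy, h=1.0):
    """Zonal least-squares reconstruction in the Southwell geometry:
    (phi[i,j+1]-phi[i,j])/h = (sx[i,j+1]+sx[i,j])/2, same for y."""
    ny, nx = sx.shape
    idx = np.arange(ny * nx).reshape(ny, nx)
    m = ny * (nx - 1) + (ny - 1) * nx
    A, b, r = lil_matrix((m, ny * nx)), np.zeros(m), 0
    for i in range(ny):                 # x-direction equations
        for j in range(nx - 1):
            A[r, idx[i, j + 1]], A[r, idx[i, j]] = 1.0, -1.0
            b[r] = 0.5 * h * (sx[i, j + 1] + sx[i, j]); r += 1
    for i in range(ny - 1):             # y-direction equations
        for j in range(nx):
            A[r, idx[i + 1, j]], A[r, idx[i, j]] = 1.0, -1.0
            b[r] = 0.5 * h * (sy[i + 1, j] + sy[i, j]); r += 1
    phi = lsqr(A.tocsr(), b)[0]
    return (phi - phi.mean()).reshape(ny, nx)   # remove the piston gauge

# quick check on a known quadratic wavefront (slopes 0.02x and 0.02y)
n, h = 16, 1.0
y, x = np.mgrid[0:n, 0:n] * h
true = 0.01 * (x**2 + y**2)
rec = southwell_reconstruct(0.02 * x, 0.02 * y, h)
print(np.abs(rec - (true - true.mean())).max())
```

The solution is defined only up to a piston term, hence the mean removal; how measurement noise on `sx`, `sy` propagates through this system is exactly the error-propagation behavior the paper compares between algorithms.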
An educational and audit tool to reduce prescribing error in intensive care.
Thomas, A N; Boxall, E M; Laha, S K; Day, A J; Grundy, D
2008-10-01
To reduce prescribing errors in an intensive care unit by providing prescriber education in tutorials, ward-based teaching and feedback in 3-monthly cycles with each new group of trainee medical staff. Prescribing audits were conducted three times in each 3-month cycle, once pretraining, once post-training and a final audit after 6 weeks. The audit information was fed back to prescribers with their correct prescribing rates, rates for individual error types and total error rates together with anonymised information about other prescribers' error rates. The percentage of prescriptions with errors decreased over each 3-month cycle (pretraining 25%, 19%, (one missing data point), post-training 23%, 6%, 11%, final audit 7%, 3%, 5% (p<0.0005)). The total number of prescriptions and error rates varied widely between trainees (data collection one; cycle two: range of prescriptions written: 1-61, median 18; error rate: 0-100%; median: 15%). Prescriber education and feedback reduce manual prescribing errors in intensive care.
Optimum Design of Forging Process Parameters and Preform Shape under Uncertainties
NASA Astrophysics Data System (ADS)
Repalle, Jalaja; Grandhi, Ramana V.
2004-06-01
Forging is a highly complex non-linear process that is vulnerable to various uncertainties, such as variations in billet geometry, die temperature, material properties, workpiece and forging equipment positional errors and process parameters. A combination of these uncertainties could induce heavy manufacturing losses through premature die failure, final part geometric distortion and production risk. Identifying the sources of uncertainties, quantifying and controlling them will reduce risk in the manufacturing environment, which will minimize the overall cost of production. In this paper, various uncertainties that affect forging tool life and preform design are identified, and their cumulative effect on the forging process is evaluated. Since the forging process simulation is computationally intensive, the response surface approach is used to reduce time by establishing a relationship between the system performance and the critical process design parameters. Variability in system performance due to randomness in the parameters is computed by applying Monte Carlo Simulations (MCS) on generated Response Surface Models (RSM). Finally, a Robust Methodology is developed to optimize forging process parameters and preform shape. The developed method is demonstrated by applying it to an axisymmetric H-cross section disk forging to improve the product quality and robustness.
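A compact illustration of the response-surface-plus-Monte-Carlo pattern described here (the quadratic surrogate, design points, and input distributions are illustrative stand-ins for the expensive forging simulation):

```python
import numpy as np

def simulate(T_die, friction):
    """Stand-in for an expensive forging FE run (toy response)."""
    return 400 - 0.5 * T_die + 900 * friction + 0.002 * T_die**2

# small design of experiments over assumed parameter ranges
T = np.array([150, 150, 250, 250, 200, 200, 200, 120, 280], float)
f = np.array([0.1, 0.3, 0.1, 0.3, 0.2, 0.05, 0.35, 0.2, 0.2])
y = simulate(T, f)

# quadratic response surface: 1, T, f, T^2, f^2, T*f
X = np.column_stack([np.ones_like(T), T, f, T**2, f**2, T * f])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Monte Carlo on the cheap surrogate instead of the FE model
rng = np.random.default_rng(0)
Ts = rng.normal(200, 15, 100_000)      # random die temperature
fs = rng.normal(0.2, 0.03, 100_000)    # random friction factor
Xs = np.column_stack([np.ones_like(Ts), Ts, fs, Ts**2, fs**2, Ts * fs])
pred = Xs @ beta
print("mean, std of performance:", pred.mean(), pred.std())
```

The variability statistics from the surrogate sampling are what feed the robust-optimization step, at a tiny fraction of the cost of running the full simulation inside the Monte Carlo loop.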
Dynamic Speed Adaptation for Path Tracking Based on Curvature Information and Speed Limits.
Gámez Serna, Citlalli; Ruichek, Yassine
2017-06-14
A critical concern of autonomous vehicles is safety. Different approaches have tried to enhance driving safety to reduce the number of fatal crashes and severe injuries. As an example, Intelligent Speed Adaptation (ISA) systems warn the driver when the vehicle exceeds the recommended speed limit. However, these systems only take into account fixed speed limits, without considering factors like road geometry. In this paper, we consider road curvature together with speed limits to automatically adjust the vehicle's speed to the ideal one through our proposed Dynamic Speed Adaptation (DSA) method. Furthermore, 'curve analysis extraction' and 'speed limits database creation' are also part of our contribution. An algorithm that analyzes GPS information off-line identifies high-curvature segments and estimates the speed for each curve. The speed limit database contains information about the different speed limit zones for each traveled path. Our DSA senses speed limits and curves of the road using GPS information and ensures smooth speed transitions between current and ideal speeds. Through experimental simulations with different control algorithms on real and simulated datasets, we prove that our method is able to significantly reduce lateral errors on sharp curves, to respect speed limits and consequently to increase safety and comfort for the passenger.
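The per-curve speed estimate can be illustrated with the standard lateral-acceleration relation $v=\sqrt{a_{\rm lat}/\kappa}$; a small sketch using three-point (Menger) curvature from GPS track points (the comfort limit and speed cap are assumed values, not the paper's):

```python
import numpy as np

def menger_curvature(p1, p2, p3):
    """Curvature (1/m) of the circle through three planar points."""
    a = np.linalg.norm(p2 - p1)
    b = np.linalg.norm(p3 - p2)
    c = np.linalg.norm(p3 - p1)
    # twice the triangle area via the 2-D cross product
    area2 = abs((p2[0] - p1[0]) * (p3[1] - p1[1])
                - (p2[1] - p1[1]) * (p3[0] - p1[0]))
    return 2.0 * area2 / max(a * b * c, 1e-12)

def curve_speed(kappa, a_lat_max=2.0, v_limit=27.8):
    """Comfortable curve speed (m/s), capped by the zone speed limit.
    a_lat_max and v_limit are assumed illustrative values."""
    if kappa < 1e-6:
        return v_limit
    return min(v_limit, float(np.sqrt(a_lat_max / kappa)))

pts = np.array([[0.0, 0.0], [10.0, 1.0], [20.0, 4.0]])  # local ENU, metres
k = menger_curvature(*pts)
print(f"curvature={k:.4f} 1/m, speed={curve_speed(k):.1f} m/s")
```

Applying this over a sliding window of track points is one plausible way to flag high-curvature segments and attach an ideal speed to each, which is the role of the off-line curve-analysis step described above.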
Reduced-Order Direct Numerical Simulation of Solute Transport in Porous Media
NASA Astrophysics Data System (ADS)
Mehmani, Yashar; Tchelepi, Hamdi
2017-11-01
Pore-scale models are an important tool for analyzing fluid dynamics in porous materials (e.g., rocks, soils, fuel cells). Current direct numerical simulation (DNS) techniques, while very accurate, are computationally prohibitive for sample sizes that are statistically representative of the porous structure. Reduced-order approaches such as pore-network models (PNM) aim to approximate the pore-space geometry and physics to remedy this problem. Predictions from current techniques, however, have not always been successful. This work focuses on single-phase transport of a passive solute under advection-dominated regimes and delineates the minimum set of approximations that consistently produce accurate PNM predictions. Novel network extraction (discretization) and particle simulation techniques are developed and compared to high-fidelity DNS simulations for a wide range of micromodel heterogeneities and a single sphere pack. Moreover, common modeling assumptions in the literature are analyzed and shown to lead to first-order errors under advection-dominated regimes. This work has implications for optimizing material design and operations in manufactured (electrodes) and natural (rocks) porous media pertaining to energy systems. This work was supported by the Stanford University Petroleum Research Institute for Reservoir Simulation (SUPRI-B).
ERIC Educational Resources Information Center
Boedigheimer, Dan
2010-01-01
Approximately 70% of aviation accidents are attributable to human error. The greatest opportunity for further improving aviation safety is found in reducing human errors in the cockpit. The purpose of this quasi-experimental, mixed-method research was to evaluate whether there was a difference in pilot attitudes toward reducing human error in the…
Reducing diagnostic errors in medicine: what's the goal?
Graber, Mark; Gordon, Ruthanna; Franklin, Nancy
2002-10-01
This review considers the feasibility of reducing or eliminating the three major categories of diagnostic errors in medicine: "No-fault errors" occur when the disease is silent, presents atypically, or mimics something more common. These errors will inevitably decline as medical science advances, new syndromes are identified, and diseases can be detected more accurately or at earlier stages. These errors can never be eradicated, unfortunately, because new diseases emerge, tests are never perfect, patients are sometimes noncompliant, and physicians will inevitably, at times, choose the most likely diagnosis over the correct one, illustrating the concept of necessary fallibility and the probabilistic nature of choosing a diagnosis. "System errors" play a role when diagnosis is delayed or missed because of latent imperfections in the health care system. These errors can be reduced by system improvements, but can never be eliminated because these improvements lag behind and degrade over time, and each new fix creates the opportunity for novel errors. Tradeoffs also guarantee system errors will persist, when resources are just shifted. "Cognitive errors" reflect misdiagnosis from faulty data collection or interpretation, flawed reasoning, or incomplete knowledge. The limitations of human processing and the inherent biases in using heuristics guarantee that these errors will persist. Opportunities exist, however, for improving the cognitive aspect of diagnosis by adopting system-level changes (e.g., second opinions, decision-support systems, enhanced access to specialists) and by training designed to improve cognition or cognitive awareness. Diagnostic error can be substantially reduced, but never eradicated.
NASA Astrophysics Data System (ADS)
Foley, Jonathan J.; Mazziotti, David A.
2010-10-01
An efficient method for geometry optimization based on solving the anti-Hermitian contracted Schrödinger equation (ACSE) is presented. We formulate a reduced version of the Hellmann-Feynman theorem (HFT) in terms of the two-electron reduced Hamiltonian operator and the two-electron reduced density matrix (2-RDM). The HFT offers a considerable reduction in computational cost over methods which rely on numerical derivatives. While previous geometry optimizations with numerical gradients required 2M evaluations of the ACSE where M is the number of nuclear degrees of freedom, the HFT requires only a single ACSE calculation of the 2-RDM per gradient. Synthesizing geometry optimization techniques with recent extensions of the ACSE theory to arbitrary electronic and spin states provides an important suite of tools for accurately determining equilibrium and transition-state structures of ground- and excited-state molecules in closed- and open-shell configurations. The ability of the ACSE to balance single- and multi-reference correlation is particularly advantageous in the determination of excited-state geometries where the electronic configurations differ greatly from the ground-state reference. Applications are made to closed-shell molecules N2, CO, H2O, the open-shell molecules B2 and CH, and the excited state molecules N2, B2, and BH. We also study the HCN ↔ HNC isomerization and the geometry optimization of hydroxyurea, a molecule which has a significant role in the treatment of sickle-cell anaemia.
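In compact form (notation assumed), the reduced Hellmann-Feynman expression described here evaluates the energy and its gradient as traces over the 2-RDM ${}^{2}D$ and the two-electron reduced Hamiltonian ${}^{2}K$:

$$E=\operatorname{Tr}\!\left({}^{2}D\,{}^{2}K\right),\qquad \frac{\partial E}{\partial R}=\operatorname{Tr}\!\left({}^{2}D\,\frac{\partial\,{}^{2}K}{\partial R}\right),$$

so each nuclear gradient requires a single converged 2-RDM from one ACSE solution rather than the 2M finite-difference energy evaluations noted above.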
Remote sensing of channels and riparian zones with a narrow-beam aquatic-terrestrial LIDAR
Jim McKean; Dave Nagel; Daniele Tonina; Philip Bailey; Charles Wayne Wright; Carolyn Bohn; Amar Nayegandhi
2009-01-01
The high-resolution Experimental Advanced Airborne Research LIDAR (EAARL) is a new technology for cross-environment surveys of channels and floodplains. EAARL measurements of basic channel geometry, such as wetted cross-sectional area, are within a few percent of those from control field surveys. The largest channel mapping errors are along stream banks. The LIDAR data...
NASA Astrophysics Data System (ADS)
Shneider, Mikhail N.
2017-10-01
The ponderomotive perturbation in the interaction region of laser radiation with a low density and low-temperature plasma is considered. Estimates of the perturbation magnitude are determined from the plasma parameters, geometry, intensity, and wavelength of laser radiation. It is shown that ponderomotive perturbations can lead to large errors in the electron density when measured using Thomson scattering.
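For scale, the standard expressions for the ponderomotive potential of an electron in the laser focus and the resulting quasi-steady (Boltzmann) density response are

$$U_{p}=\frac{e^{2}E^{2}}{4\,m_{e}\,\omega^{2}},\qquad \frac{n_{e}}{n_{0}}\approx\exp\!\left(-\frac{U_{p}}{k_{B}T_{e}}\right),$$

which shows why low electron temperatures, high intensities, and long wavelengths make the Thomson-scattering density bias worst; the paper's estimates may include additional geometry-dependent factors beyond this sketch.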
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thorpe, J. I.; Livas, J.; Maghami, P.
Arm locking is a proposed laser frequency stabilization technique for the Laser Interferometer Space Antenna (LISA), a gravitational-wave observatory sensitive in the milliHertz frequency band. Arm locking takes advantage of the geometric stability of the triangular constellation of three spacecraft that compose LISA to provide a frequency reference with a stability in the LISA measurement band that exceeds that available from a standard reference such as an optical cavity or molecular absorption line. We have implemented a time-domain simulation of a Kalman-filter-based arm-locking system that includes the expected limiting noise sources as well as the effects of imperfect a priori knowledge of the constellation geometry on which the design is based. We use the simulation to study aspects of the system performance that are difficult to capture in a steady-state frequency-domain analysis, such as frequency pulling of the master laser due to errors in estimates of heterodyne frequency. We find that our implementation meets requirements on both the noise and dynamic range of the laser frequency with acceptable tolerances and that the design is sufficiently insensitive to errors in the estimated constellation geometry that the required performance can be maintained for the longest continuous measurement intervals expected for the LISA mission.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, T; Zhou, L; Li, Y
2015-06-15
Purpose: To develop a patient-specific rectal toxicity predictor guided plan quality control tool for prostate SBRT plans. Methods: For prostate SBRT cases, four segments of rectal wall, including the peri-prostatic anterior rectal wall, peri-prostatic lateral rectal walls, peri-prostatic posterior rectal wall, and rectum superior to the prostate, are identified as organs at risk, and the circumference of rectal wall receiving more than 39 Gy (CRW39) and 24 Gy (CRW24) are the rectal toxicity predictors. In this new geometry-dosimetry model, a patient geometry descriptor, the differential circumference of rectal wall (dCRW), is used as the model input geometry parameter, and the plan dosimetric endpoints CRW39 and CRW24 are the output dosimetric parameters. Linear models are built to correlate dCRW to both CRW39 and CRW24 and are established with both a linear regression method and a modified bagging ensemble machine learning method. 27 SBRT prostate cases are retrospectively studied from a dose-escalated clinical trial. 20 cases prescribed 50 Gy are recruited to train the model and the other 7 rescaled cases are used to evaluate model feasibility and accuracy. Results: Each solved linear coefficient sequence related to CRW39 or CRW24 is a one-dimensional decreasing function of the distance from the PTV boundary, indicating that different locations along the rectal circumference contribute differently to each dosimetric endpoint. The fitting errors for the 20 training prostate SBRT cases are small, with mean values of 2.39% and 2.45% relative to the endpoint values for the SBRT rectal toxicity predictors CRW39 and CRW24, respectively. 1 out of the 7 evaluation plans is identified as a poor quality plan. After re-planning, CRW39 and CRW24 can be reduced by 3.34% and 3%, without sacrificing PTV coverage. Conclusion: The proposed patient geometry-plan toxicity predictor model for SBRT plans can be successfully applied to plan quality control for prostate SBRT cases.
Kaufhold, John P; Tsai, Philbert S; Blinder, Pablo; Kleinfeld, David
2012-08-01
A graph of tissue vasculature is an essential requirement to model the exchange of gasses and nutriments between the blood and cells in the brain. Such a graph is derived from a vectorized representation of anatomical data, provides a map of all vessels as vertices and segments, and may include the location of nonvascular components, such as neuronal and glial somata. Yet vectorized data sets typically contain erroneous gaps, spurious endpoints, and spuriously merged strands. Current methods to correct such defects only address the issue of connecting gaps and further require manual tuning of parameters in a high dimensional algorithm. To address these shortcomings, we introduce a supervised machine learning method that (1) connects vessel gaps by "learned threshold relaxation"; (2) removes spurious segments by "learning to eliminate deletion candidate strands"; and (3) enforces consistency in the joint space of learned vascular graph corrections through "consistency learning." Human operators are only required to label individual objects they recognize in a training set and are not burdened with tuning parameters. The supervised learning procedure examines the geometry and topology of features in the neighborhood of each vessel segment under consideration. We demonstrate the effectiveness of these methods on four sets of microvascular data, each with >800³ voxels, obtained with all-optical histology of mouse tissue and vectorization by state-of-the-art techniques in image segmentation. Through statistically validated sampling and analysis in terms of precision recall curves, we find that learning with bagged boosted decision trees reduces equal-error rates for threshold relaxation by 5-21% and for strand elimination by 18-57%. We benchmark generalization performance across datasets; while improvements vary between data sets, learning always leads to a useful reduction in error rates. Overall, learning is shown to more than halve the total error rate, and therefore, human time spent manually correcting such vectorizations. Copyright © 2012 Elsevier B.V. All rights reserved.
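A minimal sketch of the "bagged boosted decision trees" classifier stage on synthetic candidate features (the paper's real features are geometric/topological measurements around each strand, and its labels come from human-vetted vectorizations; the `estimator` keyword assumes scikit-learn >= 1.2):

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import precision_recall_curve

# toy stand-in features for gap/strand candidates
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 8))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 2000) > 0).astype(int)

# bagging over boosted shallow trees
clf = BaggingClassifier(
    estimator=AdaBoostClassifier(
        estimator=DecisionTreeClassifier(max_depth=2), n_estimators=50),
    n_estimators=10, random_state=0)
clf.fit(X[:1500], y[:1500])

# precision-recall analysis; the equal-error point is where prec ~ rec
scores = clf.predict_proba(X[1500:])[:, 1]
prec, rec, _ = precision_recall_curve(y[1500:], scores)
i = np.argmin(np.abs(prec - rec))
print("equal-error operating point:", prec[i], rec[i])
```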
Kaufhold, John P.; Tsai, Philbert S.; Blinder, Pablo; Kleinfeld, David
2012-01-01
A graph of tissue vasculature is an essential requirement to model the exchange of gasses and nutriments between the blood and cells in the brain. Such a graph is derived from a vectorized representation of anatomical data, provides a map of all vessels as vertices and segments, and may include the location of nonvascular components, such as neuronal and glial somata. Yet vectorized data sets typically contain erroneous gaps, spurious endpoints, and spuriously merged strands. Current methods to correct such defects only address the issue of connecting gaps and further require manual tuning of parameters in a high dimensional algorithm. To address these shortcomings, we introduce a supervised machine learning method that (1) connects vessel gaps by "learned threshold relaxation"; (2) removes spurious segments by "learning to eliminate deletion candidate strands"; and (3) enforces consistency in the joint space of learned vascular graph corrections through "consistency learning." Human operators are only required to label individual objects they recognize in a training set and are not burdened with tuning parameters. The supervised learning procedure examines the geometry and topology of features in the neighborhood of each vessel segment under consideration. We demonstrate the effectiveness of these methods on four sets of microvascular data, each with > 800³ voxels, obtained with all-optical histology of mouse tissue and vectorization by state-of-the-art techniques in image segmentation. Through statistically validated sampling and analysis in terms of precision recall curves, we find that learning with bagged boosted decision trees reduces equal-error rates for threshold relaxation by 5 to 21% and for strand elimination by 18 to 57%. We benchmark generalization performance across datasets; while improvements vary between data sets, learning always leads to a useful reduction in error rates. Overall, learning is shown to more than halve the total error rate, and therefore, human time spent manually correcting such vectorizations. PMID:22854035
Worth Longest, P; Hindle, Michael; Das Choudhuri, Suparna
2009-06-01
For most newly developed spray aerosol inhalers, the generation time is a potentially important variable that can be fully controlled. The objective of this study was to determine the effects of spray aerosol generation time on transport and deposition in a standard induction port (IP) and more realistic mouth-throat (MT) geometry. Capillary aerosol generation (CAG) was selected as a representative system in which spray momentum was expected to significantly impact deposition. Sectional and total depositions in the IP and MT geometries were assessed at a constant CAG flow rate of 25 mg/sec for aerosol generation times of 1, 2, and 4 sec using both in vitro experiments and a previously developed computational fluid dynamics (CFD) model. Both the in vitro and numerical results indicated that extending the generation time of the spray aerosol, delivered at a constant mass flow rate, significantly reduced deposition in the IP and more realistic MT geometry. Specifically, increasing the generation time of the CAG system from 1 to 4 sec reduced the deposition fraction in the IP and MT geometries by approximately 60 and 33%, respectively. Furthermore, the CFD predictions of deposition fraction were found to be in good agreement with the in vitro results for all times considered in both the IP and MT geometries. The numerical results indicated that the reduction in deposition fraction over time was associated with temporal dissipation of what was termed the spray aerosol "burst effect." Based on these results, increasing the spray aerosol generation time, at a constant mass flow rate, may be an effective strategy for reducing deposition in the standard IP and in more realistic MT geometries.
Measurement of small lesions near metallic implants with mega-voltage cone beam CT
NASA Astrophysics Data System (ADS)
Grigorescu, Violeta; Prevrhal, Sven; Pouliot, Jean
2008-03-01
Metallic objects severely limit diagnostic CT imaging because of their high X-ray attenuation in the diagnostic energy range. In contrast, radiation therapy linear accelerators now offer CT imaging with X-ray energies in the megavolt range, where the attenuation coefficients of metals are significantly lower. We hypothesized that megavoltage cone-beam CT (MVCT) implemented on a radiation therapy linear accelerator can detect and quantify small features in the vicinity of metallic implants with accuracy comparable to clinical kilovoltage CT (KVCT). Our test application was detection of osteolytic lesions formed near the metallic stem of a hip prosthesis, a condition of severe concern in hip replacement surgery. Both MVCT and KVCT were used to image a phantom containing simulated osteolytic bone lesions centered around a chrome-cobalt hip prosthesis stem, with hemispherical lesions with radii from 0.5 to 4 mm and densities from 0 to 500 mg·cm⁻³. Images from both modalities were visually graded to establish lower limits of lesion visibility as a function of size. Lesion volumes and mean densities were determined and compared to reference values. Volume determination errors were reduced from 34% on KVCT to 20% for all lesions on MVCT, and density determination errors were reduced from 71% on KVCT to 10% on MVCT. Localization and quantification of lesions were improved with MVCT imaging. MVCT offers a viable alternative to clinical CT in cases where accurate 3D imaging of small features near metallic hardware is critical. These results need to be extended to other metallic objects of different composition and geometry.
TOPAS Tool for Particle Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perl, Joseph
2013-05-30
TOPAS lets users simulate the passage of subatomic particles moving through any kind of radiation therapy treatment system, can import a patient geometry, can record dose and other quantities, has advanced graphics, and is fully four-dimensional (3D plus time) to handle the most challenging time-dependent aspects of modern cancer treatments. TOPAS unlocks the power of the most accurate particle transport simulation technique, the Monte Carlo (MC) method, while removing the painstaking coding work such methods used to require. Research physicists can use TOPAS to improve delivery systems towards safer and more effective radiation therapy treatments, easily setting up and running complex simulations that previously used to take months of preparation. Clinical physicists can use TOPAS to increase accuracy while reducing side effects, simulating patient-specific treatment plans at the touch of a button. TOPAS is designed as a user code layered on top of the Geant4 Simulation Toolkit. TOPAS includes the standard Geant4 toolkit, plus additional code to make Geant4 easier to control and to extend Geant4 functionality. TOPAS aims to make proton simulation both reliable and repeatable. Reliable means both accurate physics and a high likelihood to simulate precisely what the user intended to simulate, reducing issues of wrong units, wrong materials, wrong scoring locations, etc. Repeatable means not just getting the same result from one simulation to another, but being able to easily restore a previously used setup and reducing sources of error when a setup is passed from one user to another. The TOPAS control system incorporates key lessons from safety management, proactively removing possible sources of user error such as line-ordering mistakes in control files. TOPAS has been used to model proton therapy treatment examples including the UCSF eye treatment head, the MGH stereotactic alignment in radiosurgery treatment head and the MGH gantry treatment heads in passive scattering and scanning modes, and has demonstrated dose calculation based on patient-specific CT data.
Cunha, A C; da Veiga, A M A; Masterson, D; Mattos, C T; Nojima, L I; Nojima, M C G; Maia, L C
2017-12-01
The aim of this systematic review and meta-analysis was to investigate how parameters related to geometry influence the clinical performance of orthodontic mini-implants (MIs). Systematic searches were performed in electronic databases including MEDLINE, Scopus, Web of Science, Virtual Health Library, and Cochrane Library and reference lists up to March 2016. Eligibility criteria comprised clinical studies involving patients who received MIs for orthodontic anchorage, with data for categories of MI dimension, shape, and thread design and insertion site, and evaluated by assessment of primary and secondary stability. Study selection, data extraction, quality assessment, and a meta-analysis were carried out. Twenty-seven studies were included in the qualitative synthesis: five randomized, eight prospective, and 14 retrospective clinical studies. One study with a serious risk of bias was later excluded. Medium and short MIs (1.4-1.9 mm diameter and 5-8 mm length) presented the highest success rates (0.87, 95% CI 0.80-0.92). A maximum insertion torque of 13.28 Ncm (standard error 0.34) was observed for tapered self-drilling MIs in the mandible, whereas cylindrical MIs in the maxilla presented a maximum removal torque of 10.01 Ncm (standard error 0.17). Moderate evidence indicates that the clinical performance of MIs is influenced by implant geometry parameters and is also related to properties of the insertion site. However, further research is necessary to support these associations. Copyright © 2017 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
Kafieh, Rahele; Shahamoradi, Mahdi; Hekmatian, Ehsan; Foroohandeh, Mehrdad; Emamidoost, Mostafa
2012-10-01
To carry out an in vivo and in vitro comparative pilot study to evaluate the preciseness of a newly proposed digital dental radiography setup. This setup was based on markers placed on an external frame to eliminate the measurement errors due to incorrect geometry in the relative positioning of cone, teeth and sensor. Five patients with previous panoramic images were selected to undergo the proposed periapical digital imaging for the in vivo phase. For the in vitro phase, 40 extracted teeth were replanted in dry mandibular sockets and periapical digital images were prepared. The standard references for the real scales of the teeth were obtained through measurements of the extracted teeth for the in vitro phase and were calculated from panoramic imaging for the in vivo phase. The proposed image processing technique was applied to the periapical digital images to detect the incorrect geometry. The recognized error was inversely applied to the image and the modified images were compared to the correct values. The measurement findings after the distortion removal were compared to our gold standards (results of panoramic imaging or measurements from extracted teeth) and showed an accuracy of 96.45% in the in vivo examinations and 96.0% in the in vitro tests. The proposed distortion removal method is able to identify possible inaccurate geometry during image acquisition and is capable of applying the inverse transform to the distorted radiograph to obtain a correctly modified image. This can be really helpful in applications such as root canal therapy, implant surgical procedures and digital subtraction radiography, which depend essentially on precise measurements.
Kafieh, Rahele; Shahamoradi, Mahdi; Hekmatian, Ehsan; Foroohandeh, Mehrdad; Emamidoost, Mostafa
2012-01-01
To carry out an in vivo and in vitro comparative pilot study to evaluate the preciseness of a newly proposed digital dental radiography setup. This setup was based on markers placed on an external frame to eliminate the measurement errors due to incorrect geometry in the relative positioning of cone, teeth and sensor. Five patients with previous panoramic images were selected to undergo the proposed periapical digital imaging for the in vivo phase. For the in vitro phase, 40 extracted teeth were replanted in dry mandibular sockets and periapical digital images were prepared. The standard references for the real scales of the teeth were obtained through measurements of the extracted teeth for the in vitro phase and were calculated from panoramic imaging for the in vivo phase. The proposed image processing technique was applied to the periapical digital images to detect the incorrect geometry. The recognized error was inversely applied to the image and the modified images were compared to the correct values. The measurement findings after the distortion removal were compared to our gold standards (results of panoramic imaging or measurements from extracted teeth) and showed an accuracy of 96.45% in the in vivo examinations and 96.0% in the in vitro tests. The proposed distortion removal method is able to identify possible inaccurate geometry during image acquisition and is capable of applying the inverse transform to the distorted radiograph to obtain a correctly modified image. This can be really helpful in applications such as root canal therapy, implant surgical procedures and digital subtraction radiography, which depend essentially on precise measurements. PMID:23724372
Multimodal biometric method that combines veins, prints, and shape of a finger
NASA Astrophysics Data System (ADS)
Kang, Byung Jun; Park, Kang Ryoung; Yoo, Jang-Hee; Kim, Jeong Nyeo
2011-01-01
Multimodal biometrics provides high recognition accuracy and population coverage by using various biometric features. A single finger contains finger veins, fingerprints, and finger geometry features; by using multimodal biometrics, information on these multiple features can be simultaneously obtained in a short time and their fusion can outperform the use of a single feature. This paper proposes a new finger recognition method based on the score-level fusion of finger veins, fingerprints, and finger geometry features. This research is novel in the following four ways. First, the performances of the finger-vein and fingerprint recognition are improved by using a method based on a local derivative pattern. Second, the accuracy of the finger geometry recognition is greatly increased by combining a Fourier descriptor with principal component analysis. Third, a fuzzy score normalization method is introduced; its performance is better than the conventional Z-score normalization method. Fourth, finger-vein, fingerprint, and finger geometry recognitions are combined by using three support vector machines and a weighted SUM rule. Experimental results showed that the equal error rate of the proposed method was 0.254%, which was lower than those of the other methods.
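As a sketch of the fusion stage only, here is score normalization followed by a weighted SUM rule (the plain Z-score stands in for the paper's fuzzy normalization, and the weights are illustrative, not the reported SVM-trained combination):

```python
import numpy as np

def zscore_norm(s):
    """Conventional Z-score normalization of a match-score array."""
    return (s - s.mean()) / s.std()

def weighted_sum_fusion(score_sets, weights):
    """Score-level fusion: normalize each modality, then weighted SUM."""
    w = np.asarray(weights, float)
    w = w / w.sum()
    return sum(wi * zscore_norm(si) for wi, si in zip(w, score_sets))

# toy genuine-match scores for the three modalities (assumed)
rng = np.random.default_rng(0)
vein, fprint, geom = (rng.normal(loc, 1.0, 500) for loc in (1.0, 0.8, 0.4))
fused = weighted_sum_fusion([vein, fprint, geom], weights=[0.5, 0.35, 0.15])
print(fused[:5])
```

Normalization matters because the three matchers produce scores on different scales; the fused score is what the equal-error-rate figure of 0.254% is computed from.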
Pegler, Joe; Lehane, Elaine; Livingstone, Vicki; McCarthy, Nora; Sahm, Laura J.; Tabirca, Sabin; O’Driscoll, Aoife; Corrigan, Mark
2016-01-01
Background Patient safety requires optimal management of medications. Electronic systems are encouraged to reduce medication errors. Near field communications (NFC) is an emerging technology that may be used to develop novel medication management systems. Methods An NFC-based system was designed to facilitate prescribing, administration and review of medications commonly used on surgical wards. Final year medical, nursing, and pharmacy students were recruited to test the electronic system in a cross-over observational setting on a simulated ward. Medication errors were compared against errors recorded using a paper-based system. Results A significant difference in the commission of medication errors was seen when NFC and paper-based medication systems were compared. Paper use resulted in a mean of 4.09 errors per prescribing round, while NFC prescribing resulted in a mean of 0.22 errors per simulated prescribing round (P=0.000). Likewise, medication administration errors were reduced from a mean of 2.30 per drug round with the paper system to a mean of 0.80 errors per round using NFC (P<0.015). A mean satisfaction score of 2.30 was reported by users (rated on a seven-point scale with 1 denoting total satisfaction with system use and 7 denoting total dissatisfaction). Conclusions An NFC-based medication system may be used to effectively reduce medication errors in a simulated ward environment. PMID:28293602
O'Connell, Emer; Pegler, Joe; Lehane, Elaine; Livingstone, Vicki; McCarthy, Nora; Sahm, Laura J; Tabirca, Sabin; O'Driscoll, Aoife; Corrigan, Mark
2016-01-01
Patient safety requires optimal management of medications. Electronic systems are encouraged to reduce medication errors. Near field communications (NFC) is an emerging technology that may be used to develop novel medication management systems. An NFC-based system was designed to facilitate prescribing, administration and review of medications commonly used on surgical wards. Final year medical, nursing, and pharmacy students were recruited to test the electronic system in a cross-over observational setting on a simulated ward. Medication errors were compared against errors recorded using a paper-based system. A significant difference in the commission of medication errors was seen when NFC and paper-based medication systems were compared. Paper use resulted in a mean of 4.09 errors per prescribing round, while NFC prescribing resulted in a mean of 0.22 errors per simulated prescribing round (P=0.000). Likewise, medication administration errors were reduced from a mean of 2.30 per drug round with the paper system to a mean of 0.80 errors per round using NFC (P<0.015). A mean satisfaction score of 2.30 was reported by users (rated on a seven-point scale with 1 denoting total satisfaction with system use and 7 denoting total dissatisfaction). An NFC-based medication system may be used to effectively reduce medication errors in a simulated ward environment.
NASA Astrophysics Data System (ADS)
Escobar-Palafox, Gustavo; Gault, Rosemary; Ridgway, Keith
2011-12-01
Shaped Metal Deposition (SMD) is an additive manufacturing process which creates parts layer by layer by weld deposition. In this work, empirical models were developed that predict part geometry (wall thickness and outer diameter) and some metallurgical aspects (i.e. surface texture and the portion of finer Widmanstätten microstructure) for the SMD process. The models are based on an orthogonal fractional factorial design of experiments with four factors at two levels. The factors considered were energy level (a relationship between heat source power and the rate of raw material input), step size, programmed diameter, and travel speed. The models were validated using previous builds; the prediction error for part geometry was under 11%. Several relationships between the factors and responses were identified. Current had a significant effect on wall thickness: thickness increases with increasing current. Programmed diameter had a significant effect on the percentage of shrinkage, which decreased with increasing component size. Surface finish decreased with decreasing step size and current.
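A minimal sketch of how main effects are estimated from a two-level fractional factorial of this kind (the coded design with generator D = ABC and the response values are illustrative, not the paper's data):

```python
import numpy as np

# coded 2^(4-1) fractional factorial; the fourth factor is aliased D = ABC
A = np.array([-1,  1, -1,  1, -1,  1, -1,  1])
B = np.array([-1, -1,  1,  1, -1, -1,  1,  1])
C = np.array([-1, -1, -1, -1,  1,  1,  1,  1])
D = A * B * C

wall = np.array([2.1, 2.6, 2.0, 2.7, 2.2, 2.8, 2.1, 2.9])  # toy responses, mm

# least-squares fit of the linear main-effects model
X = np.column_stack([np.ones(8), A, B, C, D])
beta, *_ = np.linalg.lstsq(X, wall, rcond=None)
print(dict(zip(["mean", "A:energy", "B:step", "C:diameter", "D:speed"],
               np.round(beta, 3))))

# prediction at a new coded setting, e.g. A=+1, B=-1, C=0 (center), D=+1
print(float(np.array([1, 1, -1, 0, 1]) @ beta))
```

Because the design is orthogonal, each coefficient is simply half the corresponding main effect, which is what makes two-level factorials so economical for building predictive part-geometry models.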
Method and system for simulating heat and mass transfer in cooling towers
Bharathan, Desikan; Hassani, A. Vahab
1997-01-01
The present invention is a system and method for simulating the performance of a cooling tower. More precisely, the simulator of the present invention predicts values related to the heat and mass transfer from a liquid (e.g., water) to a gas (e.g., air) when provided with input data related to a cooling tower design. In particular, the simulator accepts input data regarding: (a) cooling tower site environmental characteristics; (b) cooling tower operational characteristics; and (c) geometric characteristics of the packing used to increase the surface area within the cooling tower upon which the heat and mass transfer interactions occur. In providing such performance predictions, the simulator performs computations related to the physics of heat and mass transfer within the packing. Thus, instead of relying solely on trial and error wherein various packing geometries are tested during construction of the cooling tower, the packing geometries for a proposed cooling tower can be simulated for use in selecting a desired packing geometry for the cooling tower.
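Packing performance in such simulators is classically summarized by a Merkel-type transfer integral relating the packing's transfer characteristic to the water-side cooling range; schematically (the patented method may implement a more detailed two-film model than this):

$$\mathrm{Me}=\frac{h_{d}\,a\,V}{\dot m_{w}}=\int_{T_{\rm out}}^{T_{\rm in}}\frac{c_{pw}\,dT}{h_{sw}(T)-h_{a}},$$

where $h_{d}$ is the mass transfer coefficient, $a$ the interfacial area per unit packing volume, $V$ the packing volume, $\dot m_{w}$ the water mass flow, $h_{sw}(T)$ the saturated-air enthalpy at the local water temperature, and $h_{a}$ the bulk air enthalpy.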
A novel scatter separation method for multi-energy x-ray imaging
NASA Astrophysics Data System (ADS)
Sossin, A.; Rebuffel, V.; Tabary, J.; Létang, J. M.; Freud, N.; Verger, L.
2016-06-01
X-ray imaging coupled with recently emerged energy-resolved photon counting detectors provides the ability to differentiate material components and to estimate their respective thicknesses. However, such techniques require highly accurate images. The presence of scattered radiation leads to a loss of spatial contrast and, more importantly, a bias in radiographic material imaging and artefacts in computed tomography (CT). The aim of the present study was to introduce and evaluate a partial attenuation spectral scatter separation approach (PASSSA) adapted for multi-energy imaging. This evaluation was carried out with the aid of numerical simulations provided by an internal simulation tool, Sindbad-SFFD. A simplified numerical thorax phantom placed in a CT geometry was used. The attenuation images and CT slices obtained from corrected data showed a remarkable increase in local contrast and internal structure detectability when compared to uncorrected images. Scatter induced bias was also substantially decreased. In terms of quantitative performance, the developed approach proved to be quite accurate as well. The average normalized root-mean-square error between the uncorrected projections and the reference primary projections was around 23%. The application of PASSSA reduced this error to around 5%. Finally, in terms of voxel value accuracy, an increase by a factor >10 was observed for most inspected volumes-of-interest, when comparing the corrected and uncorrected total volumes.
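For concreteness, the reported improvement (roughly 23% down to 5%) is a normalized RMS error between corrected and reference primary projections; a small helper of the kind used for such comparisons (the range normalization is an assumption, since the paper does not state its convention):

```python
import numpy as np

def nrmse_percent(estimate, reference):
    """Normalized RMSE between an estimated and a reference projection,
    in percent. Normalization by the reference dynamic range is assumed."""
    est = np.asarray(estimate, dtype=float)
    ref = np.asarray(reference, dtype=float)
    rms = np.sqrt(np.mean((est - ref) ** 2))
    return 100.0 * rms / (ref.max() - ref.min())

rng = np.random.default_rng(0)
ref = rng.random((64, 64))                          # "primary" projection
raw = ref + 0.1 * rng.random((64, 64))              # scatter-contaminated
corr = ref + 0.01 * rng.standard_normal((64, 64))   # after correction
print(nrmse_percent(raw, ref), nrmse_percent(corr, ref))
```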
Evaluating a medical error taxonomy.
Brixey, Juliana; Johnson, Todd R; Zhang, Jiajie
2002-01-01
Healthcare has been slow in using human factors principles to reduce medical errors. The Center for Devices and Radiological Health (CDRH) recognizes that a lack of attention to human factors during product development may lead to errors that have the potential for patient injury, or even death. In response to the need for reducing medication errors, the National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP) released the NCC MERP taxonomy that provides a standard language for reporting medication errors. This project maps the NCC MERP taxonomy of medication errors to MedWatch medical errors involving infusion pumps. Of particular interest are human factors associated with medical device errors. The NCC MERP taxonomy of medication errors is limited in mapping information from MedWatch because of its focus on the medical device and the format of reporting.
NASA Technical Reports Server (NTRS)
Fisher, Brad; Wolff, David B.
2010-01-01
Passive and active microwave rain sensors onboard earth-orbiting satellites estimate monthly rainfall from the instantaneous rain statistics collected during satellite overpasses. It is well known that climate-scale rain estimates from meteorological satellites incur sampling errors resulting from the process of discrete temporal sampling and statistical averaging. Sampling and retrieval errors ultimately become entangled in the estimation of the mean monthly rain rate. The sampling component of the error budget effectively introduces statistical noise into climate-scale rain estimates that obscures the error component associated with the instantaneous rain retrieval. Estimating the accuracy of the retrievals on monthly scales therefore necessitates a decomposition of the total error budget into sampling and retrieval error quantities. This paper presents results from a statistical evaluation of the sampling and retrieval errors for five different space-borne rain sensors on board nine orbiting satellites. Using an error decomposition methodology developed by one of the authors, sampling and retrieval errors were estimated at 0.25° resolution within 150 km of ground-based weather radars located at Kwajalein, Marshall Islands, and Melbourne, Florida. Error and bias statistics were calculated according to the land, ocean, and coast classifications of the surface terrain mask developed for the Goddard Profiling (GPROF) rain algorithm. Variations in the comparative error statistics are attributed to factors related to differences in the swath geometry of each rain sensor, the orbital and instrument characteristics of the satellite, and the regional climatology. The most significant result of this study is that each of the satellites incurred negative long-term oceanic retrieval biases of 10 to 30%.
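Schematically, the decomposition treats the two contributions as independent, so the retrieval component can be recovered from the total and sampling variances (a simplified statement of the idea; the cited methodology handles bias terms and correlations in more detail):

$$\sigma_{\rm total}^{2}=\sigma_{\rm sampling}^{2}+\sigma_{\rm retrieval}^{2}\quad\Longrightarrow\quad \sigma_{\rm retrieval}^{2}=\sigma_{\rm total}^{2}-\sigma_{\rm sampling}^{2}.$$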
Clément, Julien; Dumas, Raphaël; Hagemeister, Nicola; de Guise, Jacques A
2015-11-05
Soft tissue artifacts (STA) distort marker-based knee kinematics measures and make them difficult to use in clinical practice. None of the current methods designed to compensate for STA is suitable, but multi-body optimization (MBO) has demonstrated encouraging results and can be improved. The goal of this study was to develop and validate the performance of knee joint models, with anatomical and subject-specific kinematic constraints, used in MBO to reduce STA errors. Twenty subjects were recruited: 10 healthy and 10 osteoarthritis (OA) subjects. Subject-specific knee joint models were evaluated by comparing dynamic knee kinematics recorded by a motion capture system (KneeKG™) and optimized with MBO to quasi-static knee kinematics measured by a low-dose, upright, biplanar radiographic imaging system (EOS®). Errors due to STA ranged from 1.6° to 22.4° for knee rotations and from 0.8 mm to 14.9 mm for knee displacements in healthy and OA subjects. Subject-specific knee joint models were most effective in compensating for STA in terms of abduction-adduction, internal-external rotation and antero-posterior displacement. Root mean square errors with subject-specific knee joint models ranged from 2.2±1.2° to 6.0±3.9° for knee rotations and from 2.4±1.1 mm to 4.3±2.4 mm for knee displacements in healthy and OA subjects, respectively. Our study shows that MBO can be improved with subject-specific knee joint models, and that the quality of the motion capture calibration is critical. Future investigations should focus on more refined knee joint models to reproduce specific OA knee geometry and physiology. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Martin, J. M. L.; Lee, Timothy J.
1993-01-01
The protonation of N2O and the intramolecular proton transfer in N2OH(+) are studied using various basis sets and a variety of methods, including second-order many-body perturbation theory (MP2), singles and doubles coupled cluster (CCSD), the augmented coupled cluster (CCSD(T)), and complete active space self-consistent field (CASSCF) methods. For geometries, MP2 leads to serious errors even for HNNO(+); for the transition state, only CCSD(T) produces a reliable geometry due to serious nondynamical correlation effects. The proton affinity at 298.15 K is estimated at 137.6 kcal/mol, in close agreement with recent experimental determinations of 137.3 +/- 1 kcal/mol.
NASA Astrophysics Data System (ADS)
Andreeva, T.; Bräuer, T.; Bykov, V.; Egorov, K.; Endler, M.; Fellinger, J.; Kißlinger, J.; Köppen, M.; Schauer, F.
2015-06-01
Wendelstein 7-X, currently under commissioning at the Max-Planck-Institut für Plasmaphysik in Greifswald, Germany, is a modular advanced stellarator, combining the modular coil concept with optimized properties of the plasma. Most of the envisaged magnetic configurations of the machine are rather sensitive to symmetry breaking perturbations which are the consequence of unavoidable manufacturing and assembly tolerances. This overview describes the successive tracking of the Wendelstein 7-X magnet system geometry starting from the manufacturing of the winding packs up to the modelling of the influence of operation loads. The deviations found were used to calculate the resulting error fields and to compare them with the compensation capacity of the trim coils.
A Dual Frequency Carrier Phase Error Difference Checking Algorithm for the GNSS Compass.
Liu, Shuo; Zhang, Lei; Li, Jian
2016-11-24
The performance of the Global Navigation Satellite System (GNSS) compass is related to the quality of carrier phase measurement. How to process the carrier phase error properly is important for improving the GNSS compass accuracy. In this work, we propose a dual frequency carrier phase error difference checking algorithm for the GNSS compass. The algorithm aims at eliminating large carrier phase errors in dual frequency double differenced carrier phase measurements according to the error difference between the two frequencies. The advantage of the proposed algorithm is that it does not need additional environment information and performs well against multiple large errors compared with previous research. The core of the proposed algorithm is removing the geometric distance from the dual frequency carrier phase measurement, so that the carrier phase error is separated and detectable. We generate the Double Differenced Geometry-Free (DDGF) measurement according to the characteristic that the carrier phase measurements on different frequencies contain the same geometric distance. Then, we propose the DDGF detection to detect the large carrier phase error difference between the two frequencies. The theoretical performance of the proposed DDGF detection is analyzed. An open sky test, a man-made multipath test and an urban vehicle test were carried out to evaluate the performance of the proposed algorithm. The results show that the proposed DDGF detection is able to detect large errors in dual frequency carrier phase measurements by checking the error difference between the two frequencies. After the DDGF detection, the accuracy of the baseline vector is improved in the GNSS compass.
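A minimal numerical sketch of the geometry-free idea in the abstract above: because double-differenced carrier phases on two frequencies share the same geometric distance, their difference cancels geometry and exposes a large error on either frequency. The data, injected error size, and robust threshold below are assumptions for illustration.

```python
import numpy as np

# Hedged sketch of a geometry-free (DDGF-style) check: double-differenced
# carrier phases on two frequencies, expressed in meters, share the same
# geometric range, so their difference removes geometry and leaves the
# error difference. Thresholds and data here are illustrative, not the
# paper's exact detection statistic.
rng = np.random.default_rng(1)
n = 200
geometry = 5.0 * np.sin(np.linspace(0, 2 * np.pi, n))   # common range term (m)
dd_l1 = geometry + 0.003 * rng.standard_normal(n)       # DD phase, frequency 1
dd_l2 = geometry + 0.003 * rng.standard_normal(n)       # DD phase, frequency 2
dd_l1[120] += 0.19                                      # inject one large error (m)

ddgf = dd_l1 - dd_l2                 # geometry-free: error difference remains
resid = ddgf - np.median(ddgf)       # remove slowly varying bias (e.g. ionosphere)
mad = np.median(np.abs(resid - np.median(resid)))
threshold = 6 * 1.4826 * mad         # robust (MAD-based) detection threshold
flagged = np.where(np.abs(resid) > threshold)[0]
print("flagged epochs:", flagged)    # expect epoch 120
```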
A novel ULA-based geometry for improving AOA estimation
NASA Astrophysics Data System (ADS)
Shirvani-Moghaddam, Shahriar; Akbari, Farida
2011-12-01
Due to its relatively simple implementation, the Uniform Linear Array (ULA) is a popular geometry for array signal processing. Despite this advantage, it does not have a uniform performance in all directions, and Angle of Arrival (AOA) estimation performance degrades considerably at angles close to endfire. In this article, a new configuration is proposed which can solve this problem. The Proposed Array (PA) configuration adds two elements to the ULA at the top and bottom of the array axis. By extending the signal model of the ULA to the new ULA-based array, AOA estimation performance has been compared in terms of angular accuracy and resolution threshold through two well-known AOA estimation algorithms, MUSIC and MVDR. In both algorithms, the Root Mean Square Error (RMSE) of the detected angles decreases as the input Signal to Noise Ratio (SNR) increases. Simulation results show that the proposed array geometry provides uniformly accurate performance and higher resolution at middle angles as well as border ones. The PA also presents lower RMSE than the ULA in endfire directions. Therefore, the proposed array offers better performance for border angles with almost the same array size and simplicity in both MUSIC and MVDR algorithms with respect to the conventional ULA. In addition, AOA estimation performance of the PA geometry is compared with two well-known 2D-array geometries, L-shape and V-shape, and acceptable results are obtained with equivalent or lower complexity.
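The comparison above lends itself to a small simulation. The hedged sketch below evaluates a MUSIC pseudospectrum for an arbitrary planar array, so the same code handles both a conventional ULA and a ULA augmented with two off-axis elements standing in for the proposed PA; element placement, SNR, and the near-endfire source angle are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def music_spectrum(pos, thetas_deg, snr_db=10, n_snap=200, scan=None, seed=0):
    """MUSIC pseudospectrum for a planar array with element positions pos
    (in wavelengths) and true source angles thetas_deg. Toy narrowband model."""
    rng = np.random.default_rng(seed)
    k = 2 * np.pi                                    # wavelength normalized to 1
    scan = np.linspace(0, 180, 721) if scan is None else scan

    def steer(th_deg):
        th = np.radians(th_deg)
        # plane-wave phase at each element position (x, y)
        return np.exp(1j * k * (pos[:, 0] * np.cos(th) + pos[:, 1] * np.sin(th)))

    A = np.stack([steer(t) for t in thetas_deg], axis=1)
    s = (rng.standard_normal((len(thetas_deg), n_snap))
         + 1j * rng.standard_normal((len(thetas_deg), n_snap)))
    na = 10 ** (-snr_db / 20)
    x = A @ s + na * (rng.standard_normal((len(pos), n_snap))
                      + 1j * rng.standard_normal((len(pos), n_snap)))
    R = x @ x.conj().T / n_snap                      # sample covariance
    _, vecs = np.linalg.eigh(R)                      # ascending eigenvalues
    En = vecs[:, : len(pos) - len(thetas_deg)]       # noise subspace
    p = np.array([1 / np.linalg.norm(En.conj().T @ steer(t)) ** 2 for t in scan])
    return scan, p

ula = np.array([[i * 0.5, 0.0] for i in range(8)])       # 8-element ULA
pa = np.vstack([ula, [[1.75, 0.5], [1.75, -0.5]]])       # +2 off-axis elements
for name, arr in [("ULA", ula), ("PA", pa)]:
    scan, p = music_spectrum(arr, [8.0])                 # near-endfire source
    print(name, "peak at", scan[np.argmax(p)], "deg")
```

Swapping in other layouts, such as the L-shape and V-shape arrays mentioned above, only requires a different `pos` array.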
NASA Technical Reports Server (NTRS)
Avis, L. M.; Green, R. N.; Suttles, J. T.; Gupta, S. K.
1984-01-01
Computer simulations of a least squares estimator operating on the ERBE scanning channels are discussed. The estimator is designed to minimize the errors produced by nonideal spectral response to spectrally varying and uncertain radiant input. The three ERBE scanning channels cover a shortwave band, a longwave band and a "total" band, from which the pseudo-inverse spectral filter estimates the radiance components in the shortwave and longwave bands. The radiance estimator draws on instantaneous field of view (IFOV) scene type information supplied by another algorithm of the ERBE software, and on a priori probabilistic models of the responses of the scanning channels to the IFOV scene types for given Sun-scene-spacecraft geometry. It is found that the pseudo-inverse spectral filter is stable, tolerant of errors in scene identification and in channel response modeling, and, in the absence of such errors, yields minimum variance and essentially unbiased radiance estimates.
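The estimator described above is, at its core, a linear least-squares inversion. The sketch below shows the pseudo-inverse step with a stand-in 3x2 channel response matrix; the response values and radiances are illustrative assumptions, not the ERBE channel model.

```python
import numpy as np

# Illustrative sketch of a pseudo-inverse spectral filter: three measured
# channels (shortwave, longwave, total) map to two unknown band radiances.
# The response matrix below is a stand-in, not the ERBE channel model.
H = np.array([[0.95, 0.04],    # SW channel: mostly shortwave response
              [0.03, 0.92],    # LW channel: mostly longwave response
              [1.00, 1.00]])   # total channel: responds to both bands

r_true = np.array([120.0, 240.0])          # assumed scene radiances (W m^-2 sr^-1)
m = H @ r_true + np.random.default_rng(2).normal(0, 1.0, 3)  # noisy measurements

r_hat = np.linalg.pinv(H) @ m              # least-squares band radiance estimate
print("estimated SW, LW radiances:", r_hat)
```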
Using Computational Cognitive Modeling to Diagnose Possible Sources of Aviation Error
NASA Technical Reports Server (NTRS)
Byrne, M. D.; Kirlik, Alex
2003-01-01
We present a computational model of a closed-loop, pilot-aircraft-visual scene-taxiway system created to shed light on possible sources of taxi error. Creating the cognitive aspects of the model using ACT-R required us to conduct studies with subject matter experts to identify the experiential adaptations pilots bring to taxiing. Five decision strategies were found, ranging from cognitively intensive but precise to fast and frugal but robust. We provide evidence for the model by comparing its behavior to a NASA Ames Research Center simulation of Chicago O'Hare surface operations. Decision horizons were highly variable; the model selected the most accurate strategy given the time available. In the simulation data we found a signature of globally robust heuristics being used to cope with short decision horizons: errors occurred most frequently at atypical taxiway geometries or clearance routes. These data provided empirical support for the model.
Stress Recovery and Error Estimation for Shell Structures
NASA Technical Reports Server (NTRS)
Yazdani, A. A.; Riggs, H. R.; Tessler, A.
2000-01-01
The Penalized Discrete Least-Squares (PDLS) stress recovery (smoothing) technique developed for two-dimensional linear elliptic problems is adapted here to three-dimensional shell structures. The surfaces are restricted to those which have a 2-D parametric representation, or which can be built up of such surfaces. The proposed strategy involves mapping the finite element results to the 2-D parametric space which describes the geometry, and smoothing is carried out in the parametric space using the PDLS-based Smoothing Element Analysis (SEA). Numerical results for two well-known shell problems are presented to illustrate the performance of SEA/PDLS for these problems. The recovered stresses are used in the Zienkiewicz-Zhu a posteriori error estimator. The estimated errors are used to demonstrate the performance of SEA-recovered stresses in automated adaptive mesh refinement of shell structures. The numerical results are encouraging. Further testing involving more complex, practical structures is necessary.
Variational bounds on the temperature distribution
NASA Astrophysics Data System (ADS)
Kalikstein, Kalman; Spruch, Larry; Baider, Alberto
1984-02-01
Upper and lower stationary or variational bounds are obtained for functions which satisfy parabolic linear differential equations. (The error in the bound, that is, the difference between the bound on the function and the function itself, is of second order in the error in the input function, and the error is of known sign.) The method is applicable to a range of functions associated with equalization processes, including heat conduction, mass diffusion, electric conduction, fluid friction, the slowing down of neutrons, and certain limiting forms of the random walk problem, under conditions which are not unduly restrictive: in heat conduction, for example, we do not allow the thermal coefficients or the boundary conditions to depend upon the temperature, but the thermal coefficients can be functions of space and time and the geometry is unrestricted. The variational bounds follow from a maximum principle obeyed by the solutions of these equations.
Oliven, A; Zalman, D; Shilankov, Y; Yeshurun, D; Odeh, M
2002-01-01
Computerized prescription of drugs is expected to reduce the number of preventable drug ordering errors. In the present study we evaluated the usefulness of a computerized drug order entry (CDOE) system in reducing prescription errors. A department of internal medicine using a comprehensive CDOE, which also included patient-related drug-laboratory, drug-disease and drug-allergy on-line surveillance, was compared to a similar department in which drug orders were handwritten. CDOE reduced prescription errors to 25-35%. The causes of errors remained similar, and most errors, in both departments, were associated with abnormal renal function and electrolyte balance. Residual errors remaining in the CDOE-using department were due to handwriting on the typed order, failure to enter patients' diseases, and system failures. The use of CDOE was associated with a significant reduction in mean hospital stay and in the number of changes made to the prescription. The findings of this study both quantify the impact of comprehensive CDOE on prescription errors and delineate the causes of the remaining errors.
Salmingo, Remel A; Tadano, Shigeru; Fujisaki, Kazuhiro; Abe, Yuichiro; Ito, Manabu
2012-05-01
Scoliosis is defined as a spinal pathology characterized as a three-dimensional deformity of the spine combined with vertebral rotation. Treatment for severe scoliosis is achieved when the scoliotic spine is surgically corrected and fixed using implanted rods and screws. Several studies have performed biomechanical modeling and corrective force measurements of scoliosis correction. These studies were able to predict the clinical outcome and measure the corrective forces acting on screws; however, they were not able to measure the intraoperative three-dimensional geometry of the spinal rod. As a result, the biomechanical models may not be fully realistic, and the corrective forces applied during the surgical correction procedure remain difficult to measure intraoperatively. Projective geometry has been shown to be successful in the reconstruction of a three-dimensional structure using a series of images obtained from different views. In this study, we propose a new method to measure the three-dimensional geometry of an implant rod using two cameras. The reconstruction method requires only a few parameters: the included angle θ between the two cameras, the actual length of the rod in mm, and the location of points for curve fitting. An implant rod utilized in spine surgery was used to evaluate the accuracy of the method. The three-dimensional geometry of the rod measured from an image obtained by a scanner was compared to the proposed two-camera method. The mean error in the reconstruction measurements ranged from 0.32 to 0.45 mm. The method presented here demonstrates the possibility of intraoperatively measuring the three-dimensional geometry of a spinal rod. The proposed method could be used in surgical procedures to better understand the biomechanics of scoliosis correction through real-time measurement of three-dimensional implant rod geometry in vivo.
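To make the two-view reconstruction concrete, here is a hedged sketch of classic linear (DLT) triangulation of rod points from two calibrated views separated by an included angle theta. The camera intrinsics, the synthetic rod curve, and the use of full projection matrices are illustrative assumptions; the paper's own parameterization (rod length plus curve-fit points) is simpler than a general calibration.

```python
import numpy as np

# Hedged sketch (not the authors' exact method): linear triangulation of rod
# points from two calibrated views with included angle theta between them.
theta = np.radians(60.0)
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])   # shared intrinsics

def camera(R, t):
    return K @ np.hstack([R, t.reshape(3, 1)])

t0 = np.array([0.0, 0.0, 500.0])
Ry = np.array([[np.cos(theta), 0, np.sin(theta)],
               [0, 1, 0],
               [-np.sin(theta), 0, np.cos(theta)]])           # rotate about y
P1, P2 = camera(np.eye(3), t0), camera(Ry, t0)

def triangulate(P1, P2, u1, u2):
    # DLT: stack the linear constraints from both views, solve by SVD
    A = np.vstack([u1[0] * P1[2] - P1[0], u1[1] * P1[2] - P1[1],
                   u2[0] * P2[2] - P2[0], u2[1] * P2[2] - P2[1]])
    X = np.linalg.svd(A)[2][-1]
    return X[:3] / X[3]

# synthetic rod: a planar curve sampled along its length (mm)
s = np.linspace(-100, 100, 11)
rod = np.stack([s, 20 * np.sin(s / 40), np.zeros_like(s)], axis=1)
for X in rod:
    x1 = P1 @ np.append(X, 1); u1 = x1[:2] / x1[2]            # view-1 pixel
    x2 = P2 @ np.append(X, 1); u2 = x2[:2] / x2[2]            # view-2 pixel
    Xr = triangulate(P1, P2, u1, u2)
    assert np.allclose(Xr, X, atol=1e-6)
print("rod points recovered from two views")
```

With noisy image points, the recovered points would be fit with a smooth curve, mirroring the curve-fitting step the abstract describes.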
COMPLEX VARIABLE BOUNDARY ELEMENT METHOD: APPLICATIONS.
Hromadka, T.V.; Yen, C.C.; Guymon, G.L.
1985-01-01
The complex variable boundary element method (CVBEM) is used to approximate several potential problems where analytical solutions are known. A modeling result produced from the CVBEM is a measure of relative error in matching the known boundary condition values of the problem. A CVBEM error-reduction algorithm is used to reduce the relative error of the approximation by adding nodal points in boundary regions where error is large. From the test problems, overall error is reduced significantly by utilizing the adaptive integration algorithm.
NASA Astrophysics Data System (ADS)
Walsh, Braden; Jolly, Arthur; Procter, Jonathan
2017-04-01
Using active seismic sources on Tongariro Volcano, New Zealand, the amplitude source location (ASL) method is calibrated and optimized through a series of sensitivity tests. Applying a geologic medium velocity of 1500 m/s and an attenuation value of Q=60 for surface waves, along with amplification factors computed from regional earthquakes, the ASL produced location discrepancies larger than 1.0 km horizontally and up to 0.5 km in depth. Through sensitivity tests on the input parameters, we show that the velocity and attenuation models have moderate to strong influences on the location results, but can be easily constrained. Changes in location are accommodated through either lateral or depth movements. Station corrections (amplification factors) and station geometry strongly affect the ASL locations both horizontally and in depth. Calibrating the amplification factors through the exploitation of the active seismic source events reduced location errors for the sources by up to 50%.
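A minimal sketch of the ASL principle referenced above: station amplitudes are fit to a geometric-spreading-plus-attenuation model by grid search. The f, Q, and v values follow the abstract; the station layout, the r^-1 spreading exponent, and the noise-free amplitudes are assumptions for illustration.

```python
import numpy as np

# Minimal ASL sketch: grid-search a source position by fitting observed
# station amplitudes to A0 * r^-1 * exp(-pi*f*r/(Q*v)); surface-wave studies
# often use r^-0.5 spreading, so the exponent here is an assumption.
f, Q, v = 5.0, 60.0, 1500.0           # Hz, quality factor, m/s (from the study)
B = np.pi * f / (Q * v)               # attenuation coefficient (1/m)

stations = np.array([[0, 0], [4000, 0], [0, 4000], [4000, 4000],
                     [2000, -1500]], float)
src_true, A0 = np.array([2300.0, 1700.0]), 1.0
r = np.linalg.norm(stations - src_true, axis=1)
A_obs = A0 * np.exp(-B * r) / r       # noise-free synthetic amplitudes

xs = ys = np.arange(25.0, 4000.0, 50.0)
best, best_err = None, np.inf
for x in xs:
    for y in ys:
        ri = np.linalg.norm(stations - [x, y], axis=1)
        pred = np.exp(-B * ri) / ri
        resid = np.log(A_obs) - np.log(pred)
        err = np.var(resid)           # variance is invariant to the A0 offset
        if err < best_err:
            best, best_err = (x, y), err
print("ASL estimate:", best, "true:", tuple(src_true))
```

Station corrections would enter as per-station additive terms in the log-amplitude residual, which is why their calibration moves locations so strongly.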
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hasan, Iftekhar; Husain, Tausif; Uddin, Md Wasi
2015-08-24
This paper presents a nonlinear analytical model of a novel double-sided flux concentrating Transverse Flux Machine (TFM) based on the Magnetic Equivalent Circuit (MEC) model. The analytical model uses a series-parallel combination of flux tubes to predict the flux paths through different parts of the machine including air gaps, permanent magnets, stator, and rotor. The two-dimensional MEC model approximates the complex three-dimensional flux paths of the TFM and includes the effects of magnetic saturation. The model is capable of adapting to any geometry, which makes it a good alternative for evaluating prospective designs of TFM compared to finite element solvers that are numerically intensive and require more computation time. A single-phase, 1-kW, 400-rpm machine is analytically modeled, and its resulting flux distribution, no-load EMF, and torque are verified with finite element analysis. The results are found to be in agreement, with less than 5% error, while reducing the computation time by 25 times.
Analytical Modeling of a Novel Transverse Flux Machine for Direct Drive Wind Turbine Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hasan, Iftekhar; Husain, Tausif; Uddin, Md Wasi
2015-09-02
This paper presents a nonlinear analytical model of a novel double-sided flux concentrating Transverse Flux Machine (TFM) based on the Magnetic Equivalent Circuit (MEC) model. The analytical model uses a series-parallel combination of flux tubes to predict the flux paths through different parts of the machine including air gaps, permanent magnets (PM), stator, and rotor. The two-dimensional MEC model approximates the complex three-dimensional flux paths of the TFM and includes the effects of magnetic saturation. The model is capable of adapting to any geometry, which makes it a good alternative for evaluating prospective designs of TFM as compared to finite element solvers, which are numerically intensive and require more computation time. A single-phase, 1 kW, 400 rpm machine is analytically modeled and its resulting flux distribution, no-load EMF and torque are verified with Finite Element Analysis (FEA). The results are found to be in agreement, with less than 5% error, while reducing the computation time by 25 times.
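The MEC idea in the two records above can be miniaturized to a single saturable series magnetic circuit solved with a fixed-point update, as in the hedged sketch below; the dimensions, MMF, and B-H fit are toy values, not the TFM geometry of the paper.

```python
import numpy as np

# Hedged sketch of the magnetic-equivalent-circuit idea: a tiny series
# reluctance network (MMF source, saturable core leg, air gap) solved like
# a resistive circuit, with a fixed-point iteration on the core permeability.
# All dimensions and the B-H fit are illustrative stand-ins.
mu0 = 4e-7 * np.pi
A_core, l_core, l_gap = 4e-4, 0.20, 1e-3     # m^2, m, m
mmf = 800.0                                   # ampere-turns

def mu_r(B):                                  # simple saturable permeability fit
    return 1.0 + 5000.0 / (1.0 + (abs(B) / 1.5) ** 4)

B = 0.0
for _ in range(60):                           # fixed-point iteration on B
    R_core = l_core / (mu0 * mu_r(B) * A_core)
    R_gap = l_gap / (mu0 * A_core)
    phi = mmf / (R_core + R_gap)              # series circuit: flux = MMF / R
    B_new = phi / A_core
    if abs(B_new - B) < 1e-9:
        break
    B = 0.5 * B + 0.5 * B_new                 # relaxation for stability
print(f"air-gap flux density ~ {B:.3f} T")
```

A full series-parallel flux-tube network generalizes this to a linear system in node magnetic potentials, resolved at each saturation update.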
Minimizing hot spot temperature in asymmetric gradient coil design.
While, Peter T; Forbes, Larry K; Crozier, Stuart
2011-08-01
Heating caused by gradient coils is a considerable concern in the operation of magnetic resonance imaging (MRI) scanners. Hot spots can occur in regions where the gradient coil windings are closely spaced. These problem areas are particularly common in the design of gradient coils with asymmetrically located target regions. In this paper, an extension of an existing coil design method is described to enable the design of asymmetric gradient coils with reduced hot spot temperatures. An improved model is presented for predicting steady-state spatial temperature distributions for gradient coils. The model affords great flexibility to consider a wide range of geometries and system material properties. A feature of the temperature distribution related to the temperature gradient is used in a relaxed fixed-point iteration routine for successively altering coil windings to lower the hot spot temperature. Results show that significant reductions in peak temperature are possible at little or no cost to coil performance when compared to minimum-power coils of equivalent field error.
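The relaxed fixed-point update mentioned above has the generic form x <- (1 - w) x + w g(x). The toy sketch below applies it to a stand-in scalar "spacing vs hot-spot temperature" map purely to show the mechanics; the map g and the target are invented, not the paper's thermal model.

```python
# Toy sketch of a relaxed fixed-point iteration of the kind used to nudge
# coil windings toward a lower hot-spot temperature: x <- (1-w)*x + w*g(x).
# The "temperature" model g here is a hypothetical scalar map, not the
# paper's steady-state thermal model.
def g(spacing):
    peak_temp = 80.0 + 25.0 / spacing        # toy steady-state hot-spot model
    return spacing * peak_temp / 100.0       # push toward a 100-unit target

w, x = 0.4, 1.0                              # relaxation factor, initial spacing
for k in range(100):
    x_new = (1 - w) * x + w * g(x)
    if abs(x_new - x) < 1e-10:
        break
    x = x_new
print(f"converged spacing {x:.4f} after {k + 1} iterations")
```

The relaxation factor w trades convergence speed against stability, which matters when g comes from an expensive thermal solve rather than a closed form.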
Image edge detection based tool condition monitoring with morphological component analysis.
Yu, Xiaolong; Lin, Xin; Dai, Yiquan; Zhu, Kunpeng
2017-07-01
The measurement and monitoring of tool condition are key to product precision in automated manufacturing. To meet this need, this study proposes a novel tool wear monitoring approach based on edge detection in the monitored image. Image edge detection is a fundamental tool for obtaining image features. The approach extracts the tool edge with morphological component analysis. Through the decomposition of the original tool wear image, the approach reduces the influence of texture and noise on edge measurement. Based on sparse representation of the target image and edge detection, the approach can accurately extract the tool wear edge with a continuous and complete contour, and is convenient for characterizing tool condition. Compared to established algorithms in the literature, this approach improves the integrity and connectivity of edges, and the results show that it achieves better geometric accuracy and a lower error rate in the estimation of tool condition.
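A hedged approximation of the decomposition step: the sketch below splits an image into a piecewise-smooth "cartoon" part and an oscillatory "texture" residual using total-variation regularization (a substitute for the paper's morphological component analysis with sparse dictionaries), then runs edge detection on the cartoon part so texture and noise disturb the contour less.

```python
from skimage import data, restoration, feature

# Hedged sketch: a cartoon + texture split in the spirit of morphological
# component analysis, approximated here with TV regularization rather than
# the paper's sparse-dictionary decomposition. The test image is a stand-in
# for a tool-wear photograph.
img = data.camera() / 255.0
cartoon = restoration.denoise_tv_chambolle(img, weight=0.1)  # piecewise-smooth part
texture = img - cartoon                                      # oscillatory residual

edges_raw = feature.canny(img, sigma=1.0)        # edges on the raw image
edges_cartoon = feature.canny(cartoon, sigma=1.0)  # edges after decomposition
print("edge pixels, raw vs cartoon:", edges_raw.sum(), edges_cartoon.sum())
```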
Research on the optimal structure configuration of dither RLG used in skewed redundant INS
NASA Astrophysics Data System (ADS)
Gao, Chunfeng; Wang, Qi; Wei, Guo; Long, Xingwu
2016-05-01
The actual combat effectiveness of weapon systems is constrained by the performance of the Inertial Navigation System (INS), especially in situations requiring high reliability, such as fighters, satellites and submarines. Through the use of skewed sensor geometries, redundancy techniques have been applied to reduce the cost and improve the reliability of the INS. In this paper, the structure configuration and the inertial sensor characteristics of a Skewed Redundant Strapdown Inertial Navigation System (SRSINS) using dithered Ring Laser Gyroscopes (RLG) are analyzed. Owing to dither coupling effects, the system measurement errors can be amplified if the individual gyro dither frequencies are close to one another or if the structure of the SRSINS is poorly configured. Based on the characteristics of the RLG, research on the coupled vibration of dithered RLGs in the SRSINS is carried out. On the principles of optimal navigation performance, optimal reliability and optimal cost-effectiveness, a comprehensive evaluation scheme for the inertial sensor configuration of the SRSINS is given.
Secondary and compound concentrators for parabolic dish solar thermal power systems
NASA Technical Reports Server (NTRS)
Jaffe, L. D.; Poon, P. T.
1981-01-01
A secondary optical element may be added to a parabolic dish solar concentrator to increase the geometric concentration ratio attainable at a given intercept factor. This secondary may be a Fresnel lens or a mirror, such as a compound elliptic concentrator or a hyperbolic trumpet. At a fixed intercept factor, higher overall geometric concentration may be obtainable with a long focal length primary and a suitable secondary matched to it. Use of a secondary to increase the geometric concentration ratio is more likely to be worthwhile if the receiver temperature is high and if errors in the primary are large. Folding the optical path with a secondary may reduce cost by locating the receiver and power conversion equipment closer to the ground and by eliminating the heavy structure needed to support this equipment at the primary focus. Promising folded-path configurations include the Ritchey-Chretien and perhaps some three-element geometries. Folding the optical path may be most useful in systems that provide process heat.
Multi-beam transmitter geometries for free-space optical communications
NASA Astrophysics Data System (ADS)
Tellez, Jason A.; Schmidt, Jason D.
2010-02-01
Free-space optical communications systems provide the opportunity to take advantage of higher data transfer rates and lower probability of intercept compared to radio-frequency communications. However, propagation through atmospheric turbulence, such as for airborne laser communication over long paths, results in intensity variations at the receiver and a corresponding degradation in bit error rate (BER) performance. Previous literature has shown that two transmitters, when separated sufficiently, can effectively average out the intensity varying effects of the atmospheric turbulence at the receiver. This research explores the impacts of adding more transmitters and the marginal reduction in the probability of signal fades while minimizing the overall transmitter footprint, an important design factor when considering an airborne communications system. Analytical results for the cumulative distribution function are obtained for tilt-only results, while wave-optics simulations are used to simulate the effects of scintillation. These models show that the probability of signal fade is reduced as the number of transmitters is increased.
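A quick Monte Carlo illustration of the transmitter-averaging effect described above, under the assumption of independent unit-mean lognormal intensities per beam (the paper itself uses tilt statistics and wave-optics simulation rather than this simplified model):

```python
import numpy as np

# Hedged sketch: Monte Carlo estimate of signal-fade probability when N
# mutually independent transmitter beams average out scintillation. Each
# beam's normalized intensity is modeled lognormal; sigma and the fade
# threshold are illustrative assumptions.
rng = np.random.default_rng(3)
sigma_lnI = 0.8                      # log-intensity std dev (moderate turbulence)
fade_threshold = 0.25                # fade if mean intensity < 25% of nominal
n_trials = 200_000

for n_tx in (1, 2, 4, 8):
    # unit-mean lognormal samples: exp(N(-s^2/2, s^2))
    I = rng.lognormal(-sigma_lnI**2 / 2, sigma_lnI, (n_trials, n_tx))
    p_fade = np.mean(I.mean(axis=1) < fade_threshold)
    print(f"{n_tx} transmitters: P(fade) ~ {p_fade:.4f}")
```

The diminishing improvement from each added transmitter is exactly the marginal-reduction question the paper studies against the overall footprint constraint.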
NASA Astrophysics Data System (ADS)
Rojo, Pilar; Royo, Santiago; Caum, Jesus; Ramírez, Jorge; Madariaga, Ines
2015-02-01
Peripheral refraction, the refractive error present outside the main direction of gaze, has lately attracted interest due to its alleged relationship with the progression of myopia. The ray tracing procedures involved in its calculation need to follow an approach different from those used in conventional ophthalmic lens design, where refractive errors are compensated only in the main direction of gaze. We present a methodology for the evaluation of the peripheral refractive error in ophthalmic lenses, adapting the conventional generalized ray tracing approach to the requirements of the evaluation of peripheral refraction. The nodal point of the eye and a retinal conjugate surface are used to evaluate the three-dimensional distribution of refractive error around the fovea. The proposed approach enables us to calculate the three-dimensional peripheral refraction induced by any ophthalmic lens at any direction of gaze and to personalize the lens design to the requirements of the user. The complete evaluation process is detailed for a user prescribed a -5.76D ophthalmic lens for foveal vision, and comparative results are presented for cases in which the geometry of the lens is modified and in which the central refractive error is over- or undercorrected. The methodology is also applied to an emmetropic eye to show its applicability to refractive errors other than myopia.
Prakash, Varuna; Koczmara, Christine; Savage, Pamela; Trip, Katherine; Stewart, Janice; McCurdie, Tara; Cafazzo, Joseph A; Trbovich, Patricia
2014-11-01
Nurses are frequently interrupted during medication verification and administration; however, few interventions exist to mitigate resulting errors, and the impact of these interventions on medication safety is poorly understood. The study objectives were to (A) assess the effects of interruptions on medication verification and administration errors, and (B) design and test the effectiveness of targeted interventions at reducing these errors. The study focused on medication verification and administration in an ambulatory chemotherapy setting. A simulation laboratory experiment was conducted to determine interruption-related error rates during specific medication verification and administration tasks. Interventions to reduce these errors were developed through a participatory design process, and their error reduction effectiveness was assessed through a postintervention experiment. Significantly more nurses committed medication errors when interrupted than when uninterrupted. With use of interventions when interrupted, significantly fewer nurses made errors in verifying medication volumes contained in syringes (16/18; 89% preintervention error rate vs 11/19; 58% postintervention error rate; p=0.038; Fisher's exact test) and programmed in ambulatory pumps (17/18; 94% preintervention vs 11/19; 58% postintervention; p=0.012). The rate of error commission significantly decreased with use of interventions when interrupted during intravenous push (16/18; 89% preintervention vs 6/19; 32% postintervention; p=0.017) and pump programming (7/18; 39% preintervention vs 1/19; 5% postintervention; p=0.017). No statistically significant differences were observed for other medication verification tasks. Interruptions can lead to medication verification and administration errors. Interventions were highly effective at reducing unanticipated errors of commission in medication administration tasks, but showed mixed effectiveness at reducing predictable errors of detection in medication verification tasks. These findings can be generalised and adapted to mitigate interruption-related errors in other settings where medication verification and administration are required.
Prakash, Varuna; Koczmara, Christine; Savage, Pamela; Trip, Katherine; Stewart, Janice; McCurdie, Tara; Cafazzo, Joseph A; Trbovich, Patricia
2014-01-01
Background Nurses are frequently interrupted during medication verification and administration; however, few interventions exist to mitigate resulting errors, and the impact of these interventions on medication safety is poorly understood. Objective The study objectives were to (A) assess the effects of interruptions on medication verification and administration errors, and (B) design and test the effectiveness of targeted interventions at reducing these errors. Methods The study focused on medication verification and administration in an ambulatory chemotherapy setting. A simulation laboratory experiment was conducted to determine interruption-related error rates during specific medication verification and administration tasks. Interventions to reduce these errors were developed through a participatory design process, and their error reduction effectiveness was assessed through a postintervention experiment. Results Significantly more nurses committed medication errors when interrupted than when uninterrupted. With use of interventions when interrupted, significantly fewer nurses made errors in verifying medication volumes contained in syringes (16/18; 89% preintervention error rate vs 11/19; 58% postintervention error rate; p=0.038; Fisher's exact test) and programmed in ambulatory pumps (17/18; 94% preintervention vs 11/19; 58% postintervention; p=0.012). The rate of error commission significantly decreased with use of interventions when interrupted during intravenous push (16/18; 89% preintervention vs 6/19; 32% postintervention; p=0.017) and pump programming (7/18; 39% preintervention vs 1/19; 5% postintervention; p=0.017). No statistically significant differences were observed for other medication verification tasks. Conclusions Interruptions can lead to medication verification and administration errors. Interventions were highly effective at reducing unanticipated errors of commission in medication administration tasks, but showed mixed effectiveness at reducing predictable errors of detection in medication verification tasks. These findings can be generalised and adapted to mitigate interruption-related errors in other settings where medication verification and administration are required. PMID:24906806
A line-source method for aligning on-board and other pinhole SPECT systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, Susu; Bowsher, James; Yin, Fang-Fang
2013-12-15
Purpose: In order to achieve functional and molecular imaging as patients are in position for radiation therapy, a robotic multipinhole SPECT system is being developed. Alignment of the SPECT system—to the linear accelerator (LINAC) coordinate frame and to the coordinate frames of other on-board imaging systems such as cone-beam CT (CBCT)—is essential for target localization and image reconstruction. An alignment method that utilizes line sources and one pinhole projection is proposed and investigated to achieve this goal. Potentially, this method could also be applied to the calibration of the other pinhole SPECT systems. Methods: An alignment model consisting of multiple alignment parameters was developed which maps line sources in three-dimensional (3D) space to their two-dimensional (2D) projections on the SPECT detector. In a computer-simulation study, 3D coordinates of line-sources were defined in a reference room coordinate frame, such as the LINAC coordinate frame. Corresponding 2D line-source projections were generated by computer simulation that included SPECT blurring and noise effects. The Radon transform was utilized to detect angles (α) and offsets (ρ) of the line-source projections. Alignment parameters were then estimated by a nonlinear least squares method, based on the α and ρ values and the alignment model. Alignment performance was evaluated as a function of number of line sources, Radon transform accuracy, finite line-source width, intrinsic camera resolution, Poisson noise, and acquisition geometry. Experimental evaluations were performed using a physical line-source phantom and a pinhole-collimated gamma camera attached to a robot. Results: In computer-simulation studies, when there was no error in determining angles (α) and offsets (ρ) of the measured projections, six alignment parameters (three translational and three rotational) were estimated perfectly using three line sources. When angles (α) and offsets (ρ) were provided by the Radon transform, estimation accuracy was reduced. The estimation error was associated with rounding errors of Radon transform, finite line-source width, Poisson noise, number of line sources, intrinsic camera resolution, and detector acquisition geometry. Statistically, the estimation accuracy was significantly improved by using four line sources rather than three and by thinner line-source projections (obtained by better intrinsic detector resolution). With five line sources, median errors were 0.2 mm for the detector translations, 0.7 mm for the detector radius of rotation, and less than 0.5° for detector rotation, tilt, and twist. In experimental evaluations, average errors relative to a different, independent registration technique were about 1.8 mm for detector translations, 1.1 mm for the detector radius of rotation (ROR), 0.5° and 0.4° for detector rotation and tilt, respectively, and 1.2° for detector twist. Conclusions: Alignment parameters can be estimated using one pinhole projection of line sources. Alignment errors are largely associated with limited accuracy of the Radon transform in determining angles (α) and offsets (ρ) of the line-source projections. This alignment method may be important for multipinhole SPECT, where relative pinhole alignment may vary during rotation. For pinhole and multipinhole SPECT imaging on-board radiation therapy machines, the method could provide alignment of SPECT coordinates with those of CBCT and the LINAC.
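The Radon-transform step in the method above can be sketched as follows: a synthetic line-source projection image is searched for its peak in sinogram space, whose column indexes the line angle and whose row indexes the offset. Image size, line placement, and angle sampling are illustrative; mapping the peak to the paper's (α, ρ) follows the library's angle convention.

```python
import numpy as np
from skimage.transform import radon

# Hedged sketch of the angle/offset detection step: rasterize a thin line at
# a known angle and offset, then locate the sinogram peak with the Radon
# transform. How the peak's (row, column) maps to (alpha, rho) depends on
# the library's angle convention, so only the raw peak indices are printed.
img = np.zeros((129, 129))
alpha_true, rho_true = 30.0, 14      # degrees, pixels from image center
c = img.shape[0] // 2
a = np.radians(alpha_true)
for x in range(-60, 61):             # points along the line direction
    px = int(round(c + x * np.cos(a) - rho_true * np.sin(a)))
    py = int(round(c + x * np.sin(a) + rho_true * np.cos(a)))
    if 0 <= px < 129 and 0 <= py < 129:
        img[py, px] = 1.0

angles = np.arange(0.0, 180.0, 0.5)
sino = radon(img, theta=angles, circle=False)   # rows: offset, cols: angle
irho, iang = np.unravel_index(np.argmax(sino), sino.shape)
print(f"peak at angle {angles[iang]} deg, offset {irho - sino.shape[0] // 2} px")
```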
A line-source method for aligning on-board and other pinhole SPECT systems
Yan, Susu; Bowsher, James; Yin, Fang-Fang
2013-01-01
Purpose: In order to achieve functional and molecular imaging as patients are in position for radiation therapy, a robotic multipinhole SPECT system is being developed. Alignment of the SPECT system—to the linear accelerator (LINAC) coordinate frame and to the coordinate frames of other on-board imaging systems such as cone-beam CT (CBCT)—is essential for target localization and image reconstruction. An alignment method that utilizes line sources and one pinhole projection is proposed and investigated to achieve this goal. Potentially, this method could also be applied to the calibration of the other pinhole SPECT systems. Methods: An alignment model consisting of multiple alignment parameters was developed which maps line sources in three-dimensional (3D) space to their two-dimensional (2D) projections on the SPECT detector. In a computer-simulation study, 3D coordinates of line-sources were defined in a reference room coordinate frame, such as the LINAC coordinate frame. Corresponding 2D line-source projections were generated by computer simulation that included SPECT blurring and noise effects. The Radon transform was utilized to detect angles (α) and offsets (ρ) of the line-source projections. Alignment parameters were then estimated by a nonlinear least squares method, based on the α and ρ values and the alignment model. Alignment performance was evaluated as a function of number of line sources, Radon transform accuracy, finite line-source width, intrinsic camera resolution, Poisson noise, and acquisition geometry. Experimental evaluations were performed using a physical line-source phantom and a pinhole-collimated gamma camera attached to a robot. Results: In computer-simulation studies, when there was no error in determining angles (α) and offsets (ρ) of the measured projections, six alignment parameters (three translational and three rotational) were estimated perfectly using three line sources. When angles (α) and offsets (ρ) were provided by the Radon transform, estimation accuracy was reduced. The estimation error was associated with rounding errors of Radon transform, finite line-source width, Poisson noise, number of line sources, intrinsic camera resolution, and detector acquisition geometry. Statistically, the estimation accuracy was significantly improved by using four line sources rather than three and by thinner line-source projections (obtained by better intrinsic detector resolution). With five line sources, median errors were 0.2 mm for the detector translations, 0.7 mm for the detector radius of rotation, and less than 0.5° for detector rotation, tilt, and twist. In experimental evaluations, average errors relative to a different, independent registration technique were about 1.8 mm for detector translations, 1.1 mm for the detector radius of rotation (ROR), 0.5° and 0.4° for detector rotation and tilt, respectively, and 1.2° for detector twist. Conclusions: Alignment parameters can be estimated using one pinhole projection of line sources. Alignment errors are largely associated with limited accuracy of the Radon transform in determining angles (α) and offsets (ρ) of the line-source projections. This alignment method may be important for multipinhole SPECT, where relative pinhole alignment may vary during rotation. For pinhole and multipinhole SPECT imaging on-board radiation therapy machines, the method could provide alignment of SPECT coordinates with those of CBCT and the LINAC. PMID:24320537
A line-source method for aligning on-board and other pinhole SPECT systems.
Yan, Susu; Bowsher, James; Yin, Fang-Fang
2013-12-01
In order to achieve functional and molecular imaging as patients are in position for radiation therapy, a robotic multipinhole SPECT system is being developed. Alignment of the SPECT system-to the linear accelerator (LINAC) coordinate frame and to the coordinate frames of other on-board imaging systems such as cone-beam CT (CBCT)-is essential for target localization and image reconstruction. An alignment method that utilizes line sources and one pinhole projection is proposed and investigated to achieve this goal. Potentially, this method could also be applied to the calibration of the other pinhole SPECT systems. An alignment model consisting of multiple alignment parameters was developed which maps line sources in three-dimensional (3D) space to their two-dimensional (2D) projections on the SPECT detector. In a computer-simulation study, 3D coordinates of line-sources were defined in a reference room coordinate frame, such as the LINAC coordinate frame. Corresponding 2D line-source projections were generated by computer simulation that included SPECT blurring and noise effects. The Radon transform was utilized to detect angles (α) and offsets (ρ) of the line-source projections. Alignment parameters were then estimated by a nonlinear least squares method, based on the α and ρ values and the alignment model. Alignment performance was evaluated as a function of number of line sources, Radon transform accuracy, finite line-source width, intrinsic camera resolution, Poisson noise, and acquisition geometry. Experimental evaluations were performed using a physical line-source phantom and a pinhole-collimated gamma camera attached to a robot. In computer-simulation studies, when there was no error in determining angles (α) and offsets (ρ) of the measured projections, six alignment parameters (three translational and three rotational) were estimated perfectly using three line sources. When angles (α) and offsets (ρ) were provided by the Radon transform, estimation accuracy was reduced. The estimation error was associated with rounding errors of Radon transform, finite line-source width, Poisson noise, number of line sources, intrinsic camera resolution, and detector acquisition geometry. Statistically, the estimation accuracy was significantly improved by using four line sources rather than three and by thinner line-source projections (obtained by better intrinsic detector resolution). With five line sources, median errors were 0.2 mm for the detector translations, 0.7 mm for the detector radius of rotation, and less than 0.5° for detector rotation, tilt, and twist. In experimental evaluations, average errors relative to a different, independent registration technique were about 1.8 mm for detector translations, 1.1 mm for the detector radius of rotation (ROR), 0.5° and 0.4° for detector rotation and tilt, respectively, and 1.2° for detector twist. Alignment parameters can be estimated using one pinhole projection of line sources. Alignment errors are largely associated with limited accuracy of the Radon transform in determining angles (α) and offsets (ρ) of the line-source projections. This alignment method may be important for multipinhole SPECT, where relative pinhole alignment may vary during rotation. For pinhole and multipinhole SPECT imaging on-board radiation therapy machines, the method could provide alignment of SPECT coordinates with those of CBCT and the LINAC.
Spatial sampling considerations of the CERES (Clouds and Earth Radiant Energy System) instrument
NASA Astrophysics Data System (ADS)
Smith, G. L.; Manalo-Smith, Natividdad; Priestley, Kory
2014-10-01
The CERES (Clouds and Earth Radiant Energy System) instrument is a scanning radiometer with three channels for measuring the Earth radiation budget. At present, CERES flight models are operating aboard the Terra, Aqua and Suomi/NPP spacecraft, and flights of CERES instruments are planned for the JPSS-1 spacecraft and its successors. CERES scans from one limb of the Earth to the other and back. The footprint size grows with distance from nadir simply due to geometry, so the size of the smallest features that can be resolved from the data increases, and spatial sampling errors grow, with nadir angle. This paper presents an analysis of the effect of nadir angle on the spatial sampling errors of the CERES instrument. The analysis is performed in the Fourier domain. Spatial sampling errors arise from smoothing of features at or below the footprint size (blurring) and from inadequate sampling, which causes aliasing errors. These spatial sampling errors are computed in terms of the system transfer function, which is the Fourier transform of the point response function, the spacing of data points, and the spatial spectrum of the radiance field.
Resolution requirements for aero-optical simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mani, Ali; Wang Meng; Moin, Parviz
2008-11-10
Analytical criteria are developed to estimate the error of aero-optical computations due to inadequate spatial resolution of refractive index fields in high Reynolds number flow simulations. The unresolved turbulence structures are assumed to be locally isotropic and at low turbulent Mach number. Based on the Kolmogorov spectrum for the unresolved structures, the computational error of the optical path length is estimated and linked to the resulting error in the computed far-field optical irradiance. It is shown that in the high Reynolds number limit, for a given geometry and Mach number, the spatial resolution required to capture aero-optics within a pre-specified error margin does not scale with Reynolds number. In typical aero-optical applications this resolution requirement is much lower than the resolution required for direct numerical simulation, and therefore a typical large-eddy simulation can capture the aero-optical effects. The analysis is extended to complex turbulent flow simulations in which non-uniform grid spacings are used to better resolve the local turbulence structures. As a demonstration, the analysis is used to estimate the error of an aero-optical computation for an optical beam passing through the turbulent wake of flow over a cylinder.
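A back-of-envelope companion to the resolution criterion above: under a Kolmogorov spectrum the shell-integrated refractive-index variance scales as k^(-5/3), so the variance left unresolved below a grid's Nyquist wavenumber has a closed form. The outer scale, inner scale, and grid spacings below are assumed values, not the paper's.

```python
import numpy as np

# Hedged back-of-envelope sketch: fraction of refractive-index variance in
# sub-grid scales under a Kolmogorov spectrum, with shell-integrated
# spectrum ~ k^(-5/3) between an outer scale L0 and inner scale l0.
L0, l0 = 0.5, 1e-4                 # outer/inner scales (m), illustrative
k0, k_eta = 2 * np.pi / L0, 2 * np.pi / l0

def band_variance(k_lo, k_hi):     # integral of k^(-5/3) dk, closed form
    return 1.5 * (k_lo ** (-2 / 3) - k_hi ** (-2 / 3))

total = band_variance(k0, k_eta)
for dx in (5e-3, 2e-3, 1e-3):      # candidate grid spacings (m)
    k_cut = np.pi / dx             # grid Nyquist wavenumber
    frac = band_variance(k_cut, k_eta) / total
    print(f"dx = {dx * 1e3:.0f} mm: unresolved variance fraction ~ {frac:.3%}")
```

Because the unresolved fraction depends only on the cutoff relative to the outer scale, the required spacing is insensitive to Reynolds number, mirroring the paper's conclusion.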
Errors Affect Hypothetical Intertemporal Food Choice in Women
Sellitto, Manuela; di Pellegrino, Giuseppe
2014-01-01
Growing evidence suggests that the ability to control behavior is enhanced in contexts in which errors are more frequent. Here we investigated whether pairing desirable food with errors could decrease impulsive choice during hypothetical temporal decisions about food. To this end, healthy women performed a Stop-signal task in which one food cue predicted high-error rate, and another food cue predicted low-error rate. Afterwards, we measured participants’ intertemporal preferences during decisions between smaller-immediate and larger-delayed amounts of food. We expected reduced sensitivity to smaller-immediate amounts of food associated with high-error rate. Moreover, taking into account that deprivational states affect sensitivity for food, we controlled for participants’ hunger. Results showed that pairing food with high-error likelihood decreased temporal discounting. This effect was modulated by hunger, indicating that, the lower the hunger level, the more participants showed reduced impulsive preference for the food previously associated with a high number of errors as compared with the other food. These findings reveal that errors, which are motivationally salient events that recruit cognitive control and drive avoidance learning against error-prone behavior, are effective in reducing impulsive choice for edible outcomes. PMID:25244534
A novel portable energy dispersive X-ray fluorescence spectrometer with triaxial geometry
NASA Astrophysics Data System (ADS)
Pessanha, S.; Alves, M.; Sampaio, J. M.; Santos, J. P.; Carvalho, M. L.; Guerra, M.
2017-01-01
The X-ray fluorescence technique is a powerful analytical tool with a broad range of applications such as quality control, environmental contamination by heavy metals, and cultural heritage, among others. For the first time, a portable energy dispersive X-ray fluorescence spectrometer was assembled with orthogonal triaxial geometry between the X-ray tube, the secondary target, the sample and the detector. This geometry reduces the background of the measured spectra by significantly reducing the Bremsstrahlung produced in the tube, through polarization in the secondary target and in the sample. Consequently, practically monochromatic excitation is obtained. In this way, a better peak-to-background ratio is obtained compared with similar devices, improving the detection limits and leading to superior sensitivity. The performance of this setup is compared with that of a benchtop setup with triaxial geometry and a portable setup with planar geometry. Two case studies are presented, concerning the analysis of an 18th-century paper document and the bone remains of an individual buried in the early 19th century.
Numerical preservation of symmetry properties of continuum problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caramana, E.J.; Whalen, P.
1997-12-31
The authors investigate the problem of perfectly preserving a symmetry associated naturally with one coordinate system when calculating in a different coordinate system. This allows a much wider range of problems, which may be viewed as perturbations of the given symmetry, to be investigated. They study the problem of preserving cylindrical symmetry in two-dimensional Cartesian geometry and spherical symmetry in two-dimensional cylindrical geometry. They show that this can be achieved by a simple modification of the gradient operator used to compute the force in a staggered-grid Lagrangian hydrodynamics algorithm. In the absence of the supposed symmetry, they show that the new operator produces almost no change in the results because it is always close to the original gradient operator. Their technique results in a subtle manipulation of the spatial truncation error in favor of the assumed symmetry, but only to the extent that the symmetry is naturally present in the physical situation. This not only extends the range of previous algorithms and the use of new ones for these studies, but for spherical or cylindrical calculations it reduces the sensitivity of the results to the grid setup with equal angular zoning that has heretofore been necessary for these problems. Although this work is in two dimensions, it points the way to solving this problem in three dimensions. This is particularly important for the ASCI initiative. The manner in which these results can be extended to three dimensions will be discussed.
Development and evaluation of thermal model reduction algorithms for spacecraft
NASA Astrophysics Data System (ADS)
Deiml, Michael; Suderland, Martin; Reiss, Philipp; Czupalla, Markus
2015-05-01
This paper is concerned with the reduction of thermal models of spacecraft. The work presented here has been conducted in cooperation with the company OHB AG, formerly Kayser-Threde GmbH, and the Institute of Astronautics at Technische Universität München, with the goal of shortening and automating the time-consuming and manual process of thermal model reduction. The reduction of thermal models can be divided into the simplification of the geometry model, used for calculation of external heat flows and radiative couplings, and the reduction of the underlying mathematical model. For the simplification, a method has been developed which approximates the reduced geometry model with the help of an optimization algorithm. Different linear and nonlinear model reduction techniques have been evaluated for their applicability to the reduction of the mathematical model. Compatibility with the thermal analysis tool ESATAN-TMS is a major concern that restricts the useful application of these methods. Additional model reduction methods have been developed which account for these constraints. The Matrix Reduction method allows the approximation of the differential equation to reference values exactly, except for numerical errors. The summation method enables a useful, applicable reduction of thermal models that can be used in industry. In this work a framework for the reduction of thermal models has been created, which can be used together with a newly developed graphical user interface for the reduction of thermal models in industry.
Adaptive and iterative methods for simulations of nanopores with the PNP-Stokes equations
NASA Astrophysics Data System (ADS)
Mitscha-Baude, Gregor; Buttinger-Kreuzhuber, Andreas; Tulzer, Gerhard; Heitzinger, Clemens
2017-06-01
We present a 3D finite element solver for the nonlinear Poisson-Nernst-Planck (PNP) equations for electrodiffusion, coupled to the Stokes system of fluid dynamics. The model serves as a building block for the simulation of macromolecule dynamics inside nanopore sensors. The source code is released online at http://github.com/mitschabaude/nanopores. We add to existing numerical approaches by deploying goal-oriented adaptive mesh refinement. To reduce the computation overhead of mesh adaptivity, our error estimator uses the much cheaper Poisson-Boltzmann equation as a simplified model, which is justified on heuristic grounds but shown to work well in practice. To address the nonlinearity in the full PNP-Stokes system, three different linearization schemes are proposed and investigated, with two segregated iterative approaches both outperforming a naive application of Newton's method. Numerical experiments are reported on a real-world nanopore sensor geometry. We also investigate two different models for the interaction of target molecules with the nanopore sensor through the PNP-Stokes equations. In one model, the molecule is of finite size and is explicitly built into the geometry; while in the other, the molecule is located at a single point and only modeled implicitly - after solution of the system - which is computationally favorable. We compare the resulting force profiles of the electric and velocity fields acting on the molecule, and conclude that the point-size model fails to capture important physical effects such as the dependence of charge selectivity of the sensor on the molecule radius.
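The segregated iteration idea can be shown in one dimension. The hedged sketch below runs a damped Gummel-type loop for the steady Poisson-Nernst-Planck system, alternating linear Nernst-Planck solves (for fixed potential) with a Poisson solve. It is a 1D finite-difference analogue in nondimensional units, not the paper's 3D finite element PNP-Stokes solver, and all parameters are assumed.

```python
import numpy as np

# Hedged 1D sketch of a segregated ("Gummel-type") PNP iteration, analogous
# in spirit to the segregated schemes the paper compares against Newton's
# method. Two ion species z = +-1; nondimensional units throughout.
n, V, lam2 = 101, 2.0, 1e-2         # grid points, applied bias, lambda^2
h = 1.0 / (n - 1)

def solve_np(phi, z):
    """Linear solve of d/dx(c' + z c phi') = 0 with bath BCs c(0)=c(1)=1."""
    A = np.zeros((n, n)); b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0; b[0] = b[-1] = 1.0
    for i in range(1, n - 1):
        dR = z * (phi[i + 1] - phi[i]) / (2 * h)   # drift term, right face
        dL = z * (phi[i] - phi[i - 1]) / (2 * h)   # drift term, left face
        A[i, i - 1] = 1 / h - dL
        A[i, i] = -2 / h + dR - dL
        A[i, i + 1] = 1 / h + dR
    return np.linalg.solve(A, b)

def solve_poisson(cp, cm):
    """Linear solve of -lam2 * phi'' = cp - cm with phi(0)=0, phi(1)=V."""
    A = np.zeros((n, n)); b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0; b[0], b[-1] = 0.0, V
    for i in range(1, n - 1):
        A[i, i - 1] = A[i, i + 1] = -lam2 / h**2
        A[i, i] = 2 * lam2 / h**2
        b[i] = cp[i] - cm[i]
    return np.linalg.solve(A, b)

phi = np.linspace(0, V, n)          # initial guess: linear potential
for it in range(500):
    cp, cm = solve_np(phi, +1), solve_np(phi, -1)
    phi_new = solve_poisson(cp, cm)
    if np.max(np.abs(phi_new - phi)) < 1e-9:
        break
    phi = 0.5 * phi + 0.5 * phi_new  # damped (relaxed) outer update
print(f"converged after {it + 1} outer iterations")
```

The damping on the potential update plays the stabilizing role that the paper attributes to its segregated schemes relative to a naive Newton iteration.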
Sentinel-1A - First precise orbit determination results
NASA Astrophysics Data System (ADS)
Peter, H.; Jäggi, A.; Fernández, J.; Escobar, D.; Ayuga, F.; Arnold, D.; Wermuth, M.; Hackel, S.; Otten, M.; Simons, W.; Visser, P.; Hugentobler, U.; Féménias, P.
2017-09-01
Sentinel-1A is the first satellite of the European Copernicus programme. Equipped with a Synthetic Aperture Radar (SAR) instrument, the satellite was launched on April 3, 2014. Operational since October 2014, the satellite has delivered valuable data for more than two years. The orbit accuracy requirement is 5 cm in 3D. In order to fulfill this stringent requirement, the precise orbit determination (POD) is based on dual-frequency GPS observations delivered by an eight-channel GPS receiver. The Copernicus POD (CPOD) Service is in charge of providing the orbital and auxiliary products required by the PDGS (Payload Data Ground Segment). External orbit validation is regularly performed by comparing the CPOD Service orbits to orbit solutions provided by POD expert members of the Copernicus POD Quality Working Group (QWG). The orbit comparisons revealed systematic orbit offsets, mainly in the radial direction (approx. 3 cm). Although no independent observation technique (e.g. DORIS, SLR) is available to validate the GPS-derived orbit solutions, comparisons between the different antenna phase center variations and the different reduced-dynamic orbit determination approaches used in the various software packages helped to identify the cause of the systematic offset: an error in the given geometry information about the satellite. After correction of the geometry, the orbit validation shows a significant reduction of the radial offset to below 5 mm. The 5 cm orbit accuracy requirement in 3D is fulfilled according to the results of the orbit comparisons between the different orbit solutions from the QWG.
Sellers, Benjamin D; James, Natalie C; Gobbi, Alberto
2017-06-26
Reducing internal strain energy in small molecules is critical for designing potent drugs. Quantum mechanical (QM) and molecular mechanical (MM) methods are often used to estimate these energies. In an effort to determine which methods offer an optimal balance of accuracy and performance, we have carried out torsion scan analyses on 62 fragments. We compared nine QM and four MM methods to reference energies calculated at a higher level of theory: CCSD(T)/CBS single-point energies (coupled cluster with single, double, and perturbative triple excitations at the complete basis set limit) calculated on geometries optimized with MP2/6-311+G**. The results show that both the more recent MP2.X perturbation method and MP2/CBS perform quite well. In addition, combining a Hartree-Fock geometry optimization with an MP2/CBS single-point energy calculation offers a fast and accurate compromise when dispersion is not a key energy component. Among MM methods, the OPLS3 force field accurately reproduces CCSD(T)/CBS torsion energies on more test cases than the MMFF94s or Amber12:EHT force fields, which struggle with aryl-amide and aryl-aryl torsions. Using experimental conformations from the Cambridge Structural Database, we highlight three example structures for which OPLS3 significantly overestimates the strain. The energies and conformations presented should enable scientists to estimate the expected error of the methods described, and we hope they will spur further research into QM and MM methods.
Raw data normalization for a multi source inverse geometry CT system
Baek, Jongduk; De Man, Bruno; Harrison, Daniel; Pelc, Norbert J.
2015-01-01
A multi-source inverse-geometry CT (MS-IGCT) system consists of a small 2D detector array and multiple x-ray sources. During data acquisition, each source is activated sequentially and may exhibit random intensity fluctuations about its nominal intensity. While a conventional 3rd-generation CT system uses a reference channel to monitor source intensity fluctuation, each MS-IGCT source illuminates only a small portion of the entire field of view (FOV). It is therefore difficult for all sources to illuminate a reference channel, and projection data computed by standard normalization using each source's flat-field data contain errors that can cause significant artifacts. In this work, we present a raw data normalization algorithm to reduce the image artifacts caused by source intensity fluctuation. The proposed method was tested using computer simulations with a uniform water phantom and a Shepp-Logan phantom, and using experimental data of an ice-filled PMMA phantom and a rabbit. The effects on image resolution and robustness to noise were tested using the MTF and the standard deviation of the reconstructed noise image. With intensity fluctuation and no correction, reconstructed images from simulated and experimental data show high-frequency artifacts and ring artifacts, which are removed effectively by the proposed method. It is also observed that the proposed method does not degrade image resolution and is very robust to the presence of noise. PMID:25837090
Dynamic Tasking of Networked Sensors Using Covariance Information
2010-09-01
A simulation environment has been created under an effort called TASMAN (Tasking Autonomous Sensors in a Multiple Application Network). One of the first studies utilizing this environment was focused on a novel resource management approach, namely covariance-based tasking. Under this scheme, the state error covariance of resident space objects (RSO), sensor characteristics, and sensor-target geometry were used to determine the effectiveness of future observations in reducing state uncertainty.
A Galerkin approximation for linear elastic shallow shells
NASA Astrophysics Data System (ADS)
Figueiredo, I. N.; Trabucho, L.
1992-03-01
This work is a generalization to shallow shell models of previous results for plates by B. Miara (1989). Using the same basis functions as in the plate case, we construct a Galerkin approximation of the three-dimensional linearized elasticity problem, and establish error estimates as a function of the thickness, the curvature, the geometry of the shell, the forces and the Lamé constants.
NASA Astrophysics Data System (ADS)
Martinsson, J.
2013-03-01
We propose methods for robust Bayesian inference of the hypocentre in the presence of poor, inconsistent and insufficient phase arrival times. The objectives are to increase the robustness, the accuracy and the precision by introducing heavy-tailed distributions and an informative prior distribution of the seismicity. The effects of the proposed distributions are studied under real measurement conditions in two underground mine networks and validated using 53 blasts with known hypocentres. To increase the robustness against poor, inconsistent or insufficient arrivals, a Gaussian Mixture Model is used as a hypocentre prior distribution to describe the seismically active areas, where the parameters are estimated from previously located events in the region. The prior is truncated to constrain the solution to valid geometries, for example below the ground surface, excluding known cavities, voids and fractured zones. To reduce the sensitivity to outliers, different heavy-tailed distributions are evaluated to model the likelihood distribution of the arrivals given the hypocentre and the origin time. Among these distributions, the multivariate t-distribution is shown to produce the overall best performance, where the tail-mass adapts to the observed data. Hypocentre and uncertainty region estimates are based on simulations from the posterior distribution using Markov Chain Monte Carlo techniques. Velocity graphs (equivalent to traveltime graphs) are estimated using blasts from known locations, and applied to reduce the main uncertainties and thereby the final estimation error. To focus on the behaviour and the performance of the proposed distributions, a basic single-event Bayesian procedure is considered in this study for clarity. Estimation results are shown with different distributions, with and without a prior distribution of seismicity, with a wrong prior distribution, with and without error compensation, with and without error description, with insufficient arrival times and in the presence of significant outliers. A particular focus is on visual results and comparisons to give a better understanding of the Bayesian advantage and to show the effects of heavy-tailed distributions and informative prior information on real data.
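A hedged sketch of a single-event log-posterior of the kind described (Student-t arrival likelihood plus a Gaussian-mixture hypocentre prior); the function signature, numeric defaults, and the omission of prior truncation are all assumptions:

```python
import numpy as np
from scipy import stats
from scipy.special import logsumexp

def log_posterior(hypo, t0, arrivals, predict_tt,
                  gmm_w, gmm_means, gmm_covs, nu=4.0, scale=0.05):
    """Unnormalized single-event log-posterior (illustrative sketch).

    hypo       : candidate hypocentre (x, y, z) in network coordinates
    t0         : candidate origin time
    arrivals   : observed phase arrival times, shape (n,)
    predict_tt : callable mapping hypo -> predicted travel times, shape (n,)
    Student-t residuals give the heavy-tailed likelihood; the Gaussian
    mixture encodes the seismicity prior (truncation to valid geometries
    is omitted here).
    """
    resid = arrivals - (t0 + predict_tt(hypo))
    log_like = stats.t.logpdf(resid, df=nu, scale=scale).sum()
    comp = [stats.multivariate_normal.logpdf(hypo, m, c)
            for m, c in zip(gmm_means, gmm_covs)]
    log_prior = logsumexp(np.asarray(comp) + np.log(gmm_w))
    return log_like + log_prior  # feed to any MCMC sampler
```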
Geometric validation of MV topograms for patient localization on TomoTherapy
NASA Astrophysics Data System (ADS)
Blanco Kiely, Janid P.; White, Benjamin M.; Low, Daniel A.; Qi, Sharon X.
2016-01-01
Our goal was to geometrically validate the use of mega-voltage orthogonal scout images (MV topograms) as a fast and low-dose alternative to mega-voltage computed tomography (MVCT) for daily patient localization on the TomoTherapy system. To achieve this, anthropomorphic head and pelvis phantoms were imaged on a 16-slice kilo-voltage computed tomography (kVCT) scanner to synthesize kilo-voltage digitally reconstructed topograms (kV-DRT) in the TomoTherapy detector geometry. MV topograms were generated for couch speeds of 1-4 cm s⁻¹ in 1 cm s⁻¹ increments with static gantry angles in the anterior-posterior and left-lateral directions. Phantoms were rigidly translated in the anterior-posterior (AP), superior-inferior (SI), and lateral (LAT) directions to simulate potential setup errors. Image quality improvement was demonstrated by estimating the noise level in the unenhanced and enhanced MV topograms using a principal-component-analysis-based noise level estimation algorithm. Average noise levels for the head phantom were reduced by 2.53 HU (AP) and 0.18 HU (LAT). The pelvis phantom exhibited average noise level reductions of 1.98 HU (AP) and 0.48 HU (LAT). Mattes Mutual Information rigid registration was used to register the enhanced MV topograms with the corresponding kV-DRT. Registration results were compared to the known rigid displacements to assess the sensitivity of MV topogram localization to daily positioning errors. Reduced noise levels in the MV topograms improved the registrations, so that registration errors were <1 mm. The unenhanced head MV topograms had discrepancies <2.1 mm and the pelvis topograms had discrepancies <2.7 mm. Results were found to be consistent regardless of couch speed. In total, 64.7% of the head phantom MV topograms and 60.0% of the pelvis phantom MV topograms exactly measured the phantom offsets. These consistencies demonstrate the potential for daily patient positioning using MV topogram pairs in the context of bony-anatomy-based procedures such as total marrow irradiation, total body irradiation, and cranial-spinal irradiation.
Reducing the Familiarity of Conjunction Lures with Pictures
ERIC Educational Resources Information Center
Lloyd, Marianne E.
2013-01-01
Four experiments were conducted to test whether conjunction errors were reduced after pictorial encoding and whether the semantic overlap between study and conjunction items would impact error rates. Across the 4 experiments, compound words studied with a single picture had lower conjunction error rates during a recognition test than those words…
Recommendations to Improve the Accuracy of Estimates of Physical Activity Derived from Self Report
Ainsworth, Barbara E; Caspersen, Carl J; Matthews, Charles E; Mâsse, Louise C; Baranowski, Tom; Zhu, Weimo
2013-01-01
Context Assessment of physical activity using self-report has the potential for measurement error that can lead to incorrect inferences about physical activity behaviors and bias study results. Objective To provide recommendations to improve the accuracy of physical activity estimates derived from self-report. Process We provide an overview of presentations and a compilation of perspectives shared by the authors of this paper and workgroup members. Findings We identified a conceptual framework for reducing errors when using physical activity self-report questionnaires. The framework identifies six steps to reduce error: (1) identifying the need to measure physical activity, (2) selecting an instrument, (3) collecting data, (4) analyzing data, (5) developing a summary score, and (6) interpreting data. Underlying the first four steps are behavioral parameters of the type, intensity, frequency, and duration of the physical activities performed, the activity domains, and the location where activities are performed. We identified ways to reduce measurement error at each step and made recommendations for practitioners, researchers, and organizational units to reduce error in questionnaire assessment of physical activity. Conclusions Self-report measures of physical activity have a prominent role in research and practice settings. Measurement error can be reduced by applying the framework discussed in this paper. PMID:22287451
Development of a Model Based Technique for Gear Diagnostics using the Wigner-Ville method
NASA Technical Reports Server (NTRS)
Choy, F.; Xu, A.; Polyshchuk, V.
1997-01-01
Imperfections in gear tooth geometry often result from errors in the manufacturing process or excessive material wear during operation. Such faults in the gear tooth geometry can produce large vibrations in the transmission system and, in some cases, may lead to early failure of the gear transmission system. This report presents a study of the effects of imperfections in gear tooth geometry on the dynamic characteristics of a gear transmission system. The faults in the gear tooth geometry are modeled numerically as the deviation of the tooth profile from its original involute geometry. The changes in gear mesh stiffness due to various profile and pattern variations are evaluated numerically. The resulting changes in the mesh stiffness are incorporated into a computer code to simulate the dynamics of the gear transmission system. A parametric study is performed to examine the sensitivity of the vibration of a gear transmission system to gear tooth geometry imperfections. The parameter variations in this study consist of the magnitude of the imperfection, the pattern of the profile variation, and the total number of teeth affected. Numerical results from the dynamic simulations are examined in both the time and the frequency domains. A joint time-frequency analysis procedure using the Wigner-Ville Distribution is also introduced to identify the location of the damaged tooth from the vibration signature. Numerical simulations of the system dynamics with gear faults were compared to experimental results. An optimal tracker was introduced to quantify the level of damage in the gear mesh system. Conclusions are drawn from the results of this numerical study.
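Since the abstract centers on the Wigner-Ville Distribution, here is a minimal discrete pseudo-WVD sketch for a vibration record, using the analytic signal; this is a textbook formulation, not the authors' code:

```python
import numpy as np
from scipy.signal import hilbert

def wigner_ville(x):
    """Discrete Wigner-Ville distribution of a real signal (textbook sketch).

    The analytic signal suppresses interference from negative frequencies.
    Returns an (N, N) array W[time, frequency]; frequency bin k corresponds
    to k/(2N) cycles per sample because of the factor-2 lag compression.
    """
    z = hilbert(np.asarray(x, dtype=float))
    N = z.size
    W = np.zeros((N, N))
    for n in range(N):
        mmax = min(n, N - 1 - n)            # largest usable lag at this time
        m = np.arange(-mmax, mmax + 1)
        kernel = np.zeros(N, dtype=complex)
        kernel[m % N] = z[n + m] * np.conj(z[n - m])  # Hermitian in the lag
        W[n] = np.fft.fft(kernel).real      # real because kernel is Hermitian
    return W
```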
Neradilek, Moni B.; Polissar, Nayak L.; Einstein, Daniel R.; Glenny, Robb W.; Minard, Kevin R.; Carson, James P.; Jiao, Xiangmin; Jacob, Richard E.; Cox, Timothy C.; Postlethwait, Edward M.; Corley, Richard A.
2017-01-01
We examine a previously published branch-based approach for modeling airway diameters that is predicated on the assumption of self-consistency across all levels of the tree. We mathematically formulate this assumption, propose a method to test it and develop a more general model to be used when the assumption is violated. We discuss the effect of measurement error on the estimated models and propose methods that take account of error. The methods are illustrated on data from MRI and CT images of silicone casts of two rats, two normal monkeys, and one ozone-exposed monkey. Our results showed substantial departures from self-consistency in all five subjects. When departures from self-consistency exist, we do not recommend using the self-consistency model, even as an approximation, as we have shown that it is likely to lead to an incorrect representation of the diameter geometry. The new variance model can be used instead. Measurement error has an important impact on the estimated morphometry models and needs to be addressed in the analysis. PMID:22528468
Minimizing finite-volume discretization errors on polyhedral meshes
NASA Astrophysics Data System (ADS)
Mouly, Quentin; Evrard, Fabien; van Wachem, Berend; Denner, Fabian
2017-11-01
Tetrahedral meshes are widely used in CFD to simulate flows in and around complex geometries, as automatic generation tools now allow tetrahedral meshes to represent arbitrary domains in a relatively accessible manner. Polyhedral meshes, however, are an increasingly popular alternative. While tetrahedra have at most four neighbours, the higher number of neighbours per polyhedral cell leads to a more accurate evaluation of gradients, essential for the numerical resolution of PDEs. The use of polyhedral meshes nonetheless introduces discretization errors for finite-volume methods: skewness and non-orthogonality, which occur with all sorts of unstructured meshes, as well as errors due to non-planar faces, specific to polygonal faces with more than three vertices. Indeed, polyhedral mesh generation algorithms cannot, in general, guarantee to produce meshes free of non-planar faces. The presented work focuses on the quantification and optimization of discretization errors on polyhedral meshes in the context of finite-volume methods. A quasi-Newton method is employed to optimize the relevant mesh quality measures. Various meshes are optimized, and CFD results for cases with known solutions are presented to assess the improvements the optimization approach can provide.
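A hedged sketch of the optimization loop implied above, using SciPy's quasi-Newton L-BFGS-B on a face non-planarity measure; the objective, the `penalty` hook for skewness/non-orthogonality terms, and all names are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def face_nonplanarity(verts):
    """RMS distance of a polygonal face's vertices from their best-fit plane."""
    c = verts.mean(axis=0)
    _, _, vt = np.linalg.svd(verts - c)   # last right-singular vector = normal
    return np.sqrt(np.mean(((verts - c) @ vt[-1]) ** 2))

def optimize_nodes(x0, faces, penalty=lambda nodes: 0.0):
    """Quasi-Newton (L-BFGS-B) minimization of a mesh-quality objective.

    x0      : flattened initial node coordinates, shape (3 * n_nodes,)
    faces   : sequence of vertex-index arrays, one per polygonal face
    penalty : hook for further quality terms (skewness, non-orthogonality)
    """
    def objective(x):
        nodes = x.reshape(-1, 3)
        return sum(face_nonplanarity(nodes[f]) ** 2 for f in faces) + penalty(nodes)
    return minimize(objective, x0, method="L-BFGS-B").x.reshape(-1, 3)
```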
Design Considerations of Polishing Lap for Computer-Controlled Cylindrical Polishing Process
NASA Technical Reports Server (NTRS)
Khan, Gufran S.; Gubarev, Mikhail; Speegle, Chet; Ramsey, Brian
2010-01-01
Future X-ray observatory missions, such as the International X-ray Observatory, require grazing-incidence replicated optics of extremely large collecting area (3 m²) in combination with an angular resolution of less than 5 arcsec half-power diameter. The resolution of a mirror shell depends ultimately on the quality of the cylindrical mandrel from which it is replicated. Mid-spatial-frequency axial figure error is a dominant contributor in the error budget of the mandrel. This paper presents our efforts to develop a deterministic cylindrical polishing process that keeps mid-spatial-frequency axial figure errors to a minimum. Simulation studies have been performed to optimize the operational parameters as well as the polishing lap configuration. Furthermore, depending upon the surface error profile, a model for localized polishing based on a dwell-time approach is developed. Using the inputs from the mathematical model, a mandrel having a conically approximated Wolter-1 geometry has been polished on a newly developed computer-controlled cylindrical polishing machine. We report our first experimental results and discuss plans for further improvements in the polishing process.
Methods and apparatus for reducing peak wind turbine loads
Moroz, Emilian Mieczyslaw
2007-02-13
A method for reducing peak loads of wind turbines in a changing wind environment includes measuring or estimating an instantaneous wind speed and direction at the wind turbine and determining a yaw error of the wind turbine relative to the measured instantaneous wind direction. The method further includes comparing the yaw error to a yaw error trigger that has different values at different wind speeds and shutting down the wind turbine when the yaw error exceeds the yaw error trigger corresponding to the measured or estimated instantaneous wind speed.
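A minimal sketch of the claimed logic, with a made-up trigger curve (the patent abstract does not publish one): the allowable yaw error shrinks as wind speed grows, and exceeding it triggers shutdown:

```python
import numpy as np

# Made-up trigger curve: allowable yaw error (deg) shrinks with wind speed (m/s).
TRIGGER_SPEEDS_MS = np.array([4.0, 10.0, 15.0, 25.0])
TRIGGER_ERRORS_DEG = np.array([45.0, 30.0, 20.0, 12.0])

def should_shut_down(yaw_error_deg, wind_speed_ms):
    """Shut the turbine down when yaw error exceeds the speed-dependent trigger."""
    limit = np.interp(wind_speed_ms, TRIGGER_SPEEDS_MS, TRIGGER_ERRORS_DEG)
    return abs(yaw_error_deg) > limit

print(should_shut_down(25.0, 6.0))   # False: well inside the low-wind trigger
print(should_shut_down(25.0, 18.0))  # True: exceeds the high-wind limit
```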
Simulation: learning from mistakes while building communication and teamwork.
Kuehster, Christina R; Hall, Carla D
2010-01-01
Medical errors are one of the leading causes of death annually in the United States. Many of these errors are related to poor communication and/or lack of teamwork. Using simulation as a teaching modality serves a dual role in helping to reduce these errors. Thorough integration of clinical practice with teamwork and communication in a safe environment increases the likelihood of reducing error rates in medicine. By allowing practitioners to make potential errors in a safe environment such as simulation, these valuable lessons improve retention, and the errors will rarely be repeated.
Forward modeling of the Earth's lithospheric field using spherical prisms
NASA Astrophysics Data System (ADS)
Baykiev, Eldar; Ebbing, Jörg; Brönner, Marco; Fabian, Karl
2014-05-01
The ESA satellite mission Swarm consists of three satellites that measure the magnetic field of the Earth at average flight heights of about 450 km and 530 km above the surface. Realistic forward modeling of the expected data is an indispensable first step for both evaluation and inversion of the real data set. This forward modeling requires a precise definition of the spherical geometry of the magnetic sources. At satellite height, only long wavelengths of the magnetic anomalies are reliably measured. Because these are very sensitive to the modeling error incurred by a local flat-Earth approximation, conventional magnetic modeling tools cannot be reliably used. For an improved modeling approach, we start from the existing gravity modeling code "tesseroids" (http://leouieda.github.io/tesseroids/), which calculates gravity gradient tensor components for any collection of spherical prisms (tesseroids). By Poisson's relation, the magnetic field is mathematically equivalent to the gradient of a gravity field. It is therefore directly possible to apply "tesseroids" to magnetic field modeling. To this end, the Earth's crust is covered by spherical prisms, each with its own prescribed magnetic susceptibility and remanent magnetization. Induced magnetizations are then derived from the products of the local geomagnetic fields for the chosen main field model (such as the International Geomagnetic Reference Field) and the corresponding tesseroid susceptibilities. Remanent magnetization vectors are set directly. This method inherits the functionality of the original "tesseroids" code and performs parallel computation of the magnetic field vector components on any given grid. Initial global calculations for a simplified geometry and piecewise constant magnetization for each tesseroid show that the method is self-consistent and reproduces theoretically expected results. Synthetic induced crustal magnetic fields and total field anomalies of the CRUST1.0 model converted to magnetic tesseroids reproduce the results of previous forward modeling methods (e.g. using point dipoles as magnetic sources), while reducing error terms. Moreover, the spherical-prism method can easily be linked to other geophysical forward or inverse modeling tools. Sensitivity analysis over Fennoscandia will be used to estimate if and how induced and remanent magnetization can be distinguished in data from the Swarm satellite mission.
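A hedged sketch of the Poisson-relation step described above, converting a tesseroid's gravity gradient tensor into a magnetic field; sign and unit conventions differ between codes, so this is illustrative rather than the tool's actual implementation:

```python
import numpy as np

MU0 = 4e-7 * np.pi   # vacuum permeability (SI)
G = 6.674e-11        # gravitational constant (SI)

def b_from_gravity_gradient(gamma, magnetization, density):
    """Poisson's relation applied per tesseroid (illustrative sketch).

    gamma         : (..., 3, 3) gravity gradient tensor computed for the
                    prism with density `density`
    magnetization : (3,) uniform magnetization vector (A/m)
    density       : density used in the gravity computation (kg/m^3)

    Follows B_i = -(mu0 / (4 pi G rho)) * Gamma_ij * M_j; sign and unit
    conventions vary between codes, so treat this as schematic.
    """
    coeff = -MU0 / (4.0 * np.pi * G * density)
    return coeff * np.einsum("...ij,j->...i", gamma, magnetization)
```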
Accommodative Behavior of Young Eyes Wearing Multifocal Contact Lenses.
Altoaimi, Basal H; Almutairi, Meznah S; Kollbaum, Pete S; Bradley, Arthur
2018-05-01
The effectiveness of multifocal contact lenses (MFCLs) at slowing myopia progression may hinge on the accommodative behavior of young eyes fit with these presbyopic style lenses. Can they remove hyperopic defocus? Convergence accommodation as well as pupil size and the zonal geometry are likely to contribute to the final accommodative responses. The aim of this study was to examine the accommodation behavior of young adult eyes wearing MFCLs and the effectiveness of these MFCLs at removing foveal hyperopic defocus when viewing near targets binocularly. Using a high-resolution Shack-Hartmann aberrometer, accommodation and pupil behavior of eight young adults (27.25 ± 2.05 years) were measured while subjects fixated a 20/40 character positioned between 2 m and 20 cm (0.50 to 5.00 diopters [D]) in 0.25-D steps. Refractive states were measured while viewing binocularly and monocularly with single-vision and both center-distance and center-near +2.00 D add MFCLs. Refractive state was defined using three criteria: the dioptric power that would (1) minimize the root mean square wavefront error, (2) focus the pupil center, and (3) provide the peak image quality. Refractive state pupil maps reveal the complex optics that exist in eyes wearing MFCLs. Reduced accommodative gain beyond the far point of the near add revealed that young subjects used the added plus power to help focus near targets. During accommodation to stimuli closer than the far point generated by the add power, a midperipheral region of the pupil was approximately focused, resulting in the smallest accommodative errors for the minimum root mean square-defined measures of refractive state. Paraxial images were always hyperopically or myopically defocused in eyes viewing binocularly with center-distance or center-near MFCLs, respectively. Because of zone geometry in the concentric MFCLs tested, the highly aberrated transition zone between the distance and near optics contributed a significant proportion and sometimes the majority of light to the resulting images. Young eyes fit with MFCLs containing significant transition zones accommodated to focus pupil regions between the near and distance optics, which resulted in less than optimal retinal image quality and myopic or hyperopic defocus in either the pupil center or pupil margins.
Commentary: Reducing diagnostic errors: another role for checklists?
Winters, Bradford D; Aswani, Monica S; Pronovost, Peter J
2011-03-01
Diagnostic errors are a widespread problem, although the true magnitude is unknown because they cannot currently be measured validly. These errors have received relatively little attention despite alarming estimates of associated harm and death. One promising intervention to reduce preventable harm is the checklist. This intervention has proven successful in aviation, in which situations are linear and deterministic (one alarm goes off and a checklist guides the flight crew to evaluate the cause). In health care, problems are multifactorial and complex. A checklist has been used to reduce central-line-associated bloodstream infections in intensive care units. Nevertheless, this checklist was incorporated in a culture-based safety program that engaged and changed behaviors and used robust measurement of infections to evaluate progress. In this issue, Ely and colleagues describe how three checklists could reduce the cognitive biases and mental shortcuts that underlie diagnostic errors, but point out that these tools still need to be tested. To be effective, they must reduce diagnostic errors (efficacy) and be routinely used in practice (effectiveness). Such tools must intuitively support how the human brain works, and under time pressures, clinicians rarely think in conditional probabilities when making decisions. To move forward, it is necessary to accurately measure diagnostic errors (which could come from mapping out the diagnostic process as the medication process has done and measuring errors at each step) and pilot test interventions such as these checklists to determine whether they work.
NASA Astrophysics Data System (ADS)
Hill, Peter; Shanahan, Brendan; Dudson, Ben
2017-04-01
We present a technique for handling Dirichlet boundary conditions with the Flux Coordinate Independent (FCI) parallel derivative operator with arbitrary-shaped material geometry in general 3D magnetic fields. The FCI method constructs a finite difference scheme for ∇∥ by following field lines between poloidal planes and interpolating within planes. Doing so removes the need for field-aligned coordinate systems that suffer from singularities in the metric tensor at null points in the magnetic field (or equivalently, when q → ∞). One cost of this method is that, as the field lines are not on the mesh, they may leave the domain at any point between neighbouring planes, complicating the application of boundary conditions. The Leg Value Fill (LVF) boundary condition scheme presented here involves an extrapolation/interpolation of the boundary value onto the field line end point. The usual finite difference scheme can then be used unmodified. We implement the LVF scheme in BOUT++ and use the Method of Manufactured Solutions to verify the implementation in a rectangular domain, and show that it does not modify the error scaling of the finite difference scheme. The use of LVF for arbitrary wall geometry is outlined. We also demonstrate the feasibility of using the FCI approach in non-axisymmetric configurations for a simple diffusion model in a "straight stellarator" magnetic field. A Gaussian blob diffuses along the field lines, tracing out flux surfaces. Dirichlet boundary conditions impose a last closed flux surface (LCFS) that confines the density. Including a poloidal limiter moves the LCFS to a smaller radius. The expected scaling of the numerical perpendicular diffusion, which is a consequence of the FCI method, in stellarator-like geometry is recovered. A novel technique for increasing the parallel resolution during post-processing, in order to reduce artefacts in visualisations, is described.
Using Automated Writing Evaluation to Reduce Grammar Errors in Writing
ERIC Educational Resources Information Center
Liao, Hui-Chuan
2016-01-01
Despite the recent development of automated writing evaluation (AWE) technology and the growing interest in applying this technology to language classrooms, few studies have looked at the effects of using AWE on reducing grammatical errors in L2 writing. This study identified the primary English grammatical error types made by 66 Taiwanese…
Using Six Sigma to reduce medication errors in a home-delivery pharmacy service.
Castle, Lon; Franzblau-Isaac, Ellen; Paulsen, Jim
2005-06-01
Medco Health Solutions, Inc. conducted a project to reduce medication errors in its home-delivery service, which is composed of eight prescription-processing pharmacies, three dispensing pharmacies, and six call-center pharmacies. Medco uses the Six Sigma methodology to reduce process variation, establish procedures to monitor the effectiveness of medication safety programs, and determine when these efforts do not achieve performance goals. A team reviewed the processes in home-delivery pharmacy and suggested strategies to improve the data-collection and medication-dispensing practices. A variety of improvement activities were implemented, including a procedure for developing, reviewing, and enhancing sound-alike/look-alike (SALA) alerts and system enhancements to improve processing consistency across the pharmacies. "External nonconformances" were reduced for several categories of medication errors, including wrong-drug selection (33%), wrong directions (49%), and SALA errors (69%). Control charts demonstrated evidence of sustained process improvement and actual reduction in specific medication error elements. Establishing a continuous quality improvement process to ensure that medication errors are minimized is critical to any health care organization providing medication services.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shang, Yu; Yu, Guoqiang, E-mail: guoqiang.yu@uky.edu
Conventional semi-infinite analytical solutions of the correlation diffusion equation may lead to errors when calculating the blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements in tissues with irregular geometries. Very recently, we created an algorithm integrating an Nth-order linear model of the autocorrelation function with Monte Carlo simulation of photon migration in homogeneous tissues with arbitrary geometries for extraction of BFI (i.e., αD_B). The purpose of this study is to extend the capability of the Nth-order linear algorithm to extracting BFI in heterogeneous tissues with arbitrary geometries. The previous linear algorithm was modified to extract BFIs in different types of tissues simultaneously by utilizing DCS data at multiple source-detector separations. We compared the proposed linear algorithm with the semi-infinite homogeneous solution in a computer model of an adult head with heterogeneous tissue layers of scalp, skull, cerebrospinal fluid, and brain. To test the capability of the linear algorithm for extracting relative changes of cerebral blood flow (rCBF) in deep brain, we assigned ten levels of αD_B in the brain layer with a step decrement of 10% while maintaining αD_B values constant in other layers. Simulation results demonstrate the accuracy (errors < 3%) of the high-order (N ≥ 5) linear algorithm in extracting BFIs in different tissue layers and rCBF in deep brain. By contrast, the semi-infinite homogeneous solution resulted in substantial errors in rCBF (34.5% ≤ errors ≤ 60.2%) and BFIs in different layers. The Nth-order linear model simplifies data analysis, thus allowing for online data processing and display. Future studies will test this linear algorithm in heterogeneous tissues with different levels of blood flow variation and noise.
Shang, Yu; Yu, Guoqiang
2014-09-29
Conventional semi-infinite analytical solutions of the correlation diffusion equation may lead to errors when calculating the blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements in tissues with irregular geometries. Very recently, we created an algorithm integrating an Nth-order linear model of the autocorrelation function with Monte Carlo simulation of photon migration in homogeneous tissues with arbitrary geometries for extraction of BFI (i.e., αD_B). The purpose of this study is to extend the capability of the Nth-order linear algorithm to extracting BFI in heterogeneous tissues with arbitrary geometries. The previous linear algorithm was modified to extract BFIs in different types of tissues simultaneously by utilizing DCS data at multiple source-detector separations. We compared the proposed linear algorithm with the semi-infinite homogeneous solution in a computer model of an adult head with heterogeneous tissue layers of scalp, skull, cerebrospinal fluid, and brain. To test the capability of the linear algorithm for extracting relative changes of cerebral blood flow (rCBF) in deep brain, we assigned ten levels of αD_B in the brain layer with a step decrement of 10% while maintaining αD_B values constant in other layers. Simulation results demonstrate the accuracy (errors < 3%) of the high-order (N ≥ 5) linear algorithm in extracting BFIs in different tissue layers and rCBF in deep brain. By contrast, the semi-infinite homogeneous solution resulted in substantial errors in rCBF (34.5% ≤ errors ≤ 60.2%) and BFIs in different layers. The Nth-order linear model simplifies data analysis, thus allowing for online data processing and display. Future studies will test this linear algorithm in heterogeneous tissues with different levels of blood flow variation and noise.
Differential Geometry Applied To Least-Square Error Surface Approximations
NASA Astrophysics Data System (ADS)
Bolle, Ruud M.; Sabbah, Daniel
1987-08-01
This paper focuses on extraction of the parameters of individual surfaces from noisy depth maps. The basis for this is least-square-error polynomial approximations to the range data and the curvature properties that can be computed from these approximations. The curvature properties are derived using the invariants of the Weingarten Map evaluated at the origin of local coordinate systems centered at the range points. The Weingarten Map is a well-known concept in differential geometry; a brief treatment of the differential geometry pertinent to surface curvature is given. We use these curvature properties to extract certain surface parameters from the approximations. Then we show that curvature properties alone are not enough to obtain all the parameters of the surfaces; higher-order properties (information about change of curvature) are needed to obtain full parametric descriptions. This surface parameter estimation problem arises in the design of a vision system to recognize 3D objects whose surfaces are composed of planar patches and patches of quadrics of revolution. (Quadrics of revolution are quadrics that are surfaces of revolution.) A significant portion of man-made objects can be modeled using these surfaces. The actual process of recognition and parameter extraction is framed as a set of stacked parameter space transforms. The transforms are "stacked" in the sense that any one transform computes only a partial geometric description that forms the input to the next transform. Those who are interested in the organization and control of the recognition and parameter extraction process are referred to [Sabbah86]; this paper briefly touches upon the organization but concentrates mainly on the geometrical aspects of the parameter extraction.
Reducing patient identification errors related to glucose point-of-care testing.
Alreja, Gaurav; Setia, Namrata; Nichols, James; Pantanowitz, Liron
2011-01-01
Patient identification (ID) errors in point-of-care testing (POCT) can cause test results to be transferred to the wrong patient's chart or prevent results from being transmitted and reported. Despite the implementation of patient barcoding and ongoing operator training at our institution, patient ID errors still occur with glucose POCT. The aim of this study was to develop a solution to reduce identification errors with POCT. Glucose POCT was performed by approximately 2,400 clinical operators throughout our health system. Patients are identified by scanning in wristband barcodes or by manual data entry using portable glucose meters. Meters are docked to upload data to a database server which then transmits data to any medical record matching the financial number of the test result. With a new model, meters connect to an interface manager where the patient ID (a nine-digit account number) is checked against patient registration data from admission, discharge, and transfer (ADT) feeds and only matched results are transferred to the patient's electronic medical record. With the new process, the patient ID is checked prior to testing, and testing is prevented until ID errors are resolved. When averaged over a period of a month, ID errors were reduced to 3 errors/month (0.015%) in comparison with 61.5 errors/month (0.319%) before implementing the new meters. Patient ID errors may occur with glucose POCT despite patient barcoding. The verification of patient identification should ideally take place at the bedside before testing occurs so that the errors can be addressed in real time. The introduction of an ADT feed directly to glucose meters reduced patient ID errors in POCT.
Reducing patient identification errors related to glucose point-of-care testing
Alreja, Gaurav; Setia, Namrata; Nichols, James; Pantanowitz, Liron
2011-01-01
Background: Patient identification (ID) errors in point-of-care testing (POCT) can cause test results to be transferred to the wrong patient's chart or prevent results from being transmitted and reported. Despite the implementation of patient barcoding and ongoing operator training at our institution, patient ID errors still occur with glucose POCT. The aim of this study was to develop a solution to reduce identification errors with POCT. Materials and Methods: Glucose POCT was performed by approximately 2,400 clinical operators throughout our health system. Patients are identified by scanning in wristband barcodes or by manual data entry using portable glucose meters. Meters are docked to upload data to a database server which then transmits data to any medical record matching the financial number of the test result. With a new model, meters connect to an interface manager where the patient ID (a nine-digit account number) is checked against patient registration data from admission, discharge, and transfer (ADT) feeds and only matched results are transferred to the patient's electronic medical record. With the new process, the patient ID is checked prior to testing, and testing is prevented until ID errors are resolved. Results: When averaged over a period of a month, ID errors were reduced to 3 errors/month (0.015%) in comparison with 61.5 errors/month (0.319%) before implementing the new meters. Conclusion: Patient ID errors may occur with glucose POCT despite patient barcoding. The verification of patient identification should ideally take place at the bedside before testing occurs so that the errors can be addressed in real time. The introduction of an ADT feed directly to glucose meters reduced patient ID errors in POCT. PMID:21633490
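A minimal sketch of the pre-test check described above, with a hypothetical in-memory stand-in for the ADT registry:

```python
def verify_patient_id(scanned_id, adt_registry):
    """Block a glucose POCT run until the scanned ID matches the ADT feed.

    adt_registry is a hypothetical in-memory stand-in for the set of active
    patient account numbers delivered by the ADT interface.
    """
    if scanned_id not in adt_registry:
        raise ValueError(f"Unknown patient ID {scanned_id!r}; resolve before testing")
    return True

active_accounts = {"123456789", "987654321"}     # illustrative nine-digit IDs
verify_patient_id("123456789", active_accounts)  # passes; testing may proceed
```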
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, L. F.; He, X. T.; HEDPS, Center for Applied Physics and Technology, Peking University, Beijing 100871
2013-04-15
A weakly nonlinear (WN) model has been developed for the incompressible Rayleigh-Taylor instability (RTI) in cylindrical geometry. The transition from linear to nonlinear growth is analytically investigated via third-order solutions for the cylindrical RTI initiated by a single-mode velocity perturbation. The third-order solutions can depict the early stage of the interface asymmetry due to bubble-spike formation, as well as the saturation of the linear (exponential) growth of the fundamental mode. The WN results for planar RTI [Wang et al., Phys. Plasmas 19, 112706 (2012)] are recovered in the limit of high-mode-number perturbations. The difference between the WN growth of the RTI in cylindrical geometry and in planar geometry is discussed. It is found that the interface of the inward (outward) developing spike/bubble is extruded (stretched) by the additional inertial force in cylindrical geometry compared with that in planar geometry. For interfaces with small density ratios, the inward-growing bubble can grow faster than the outward-growing spike in cylindrical RTI. Moreover, a reduced formula is proposed to describe the WN growth of the RTI in cylindrical geometry with acceptable precision, especially for small-amplitude perturbations. Using the reduced formula, the nonlinear saturation amplitude of the fundamental mode and the phases of the Fourier harmonics are studied. Cylindrical geometry effects should therefore be included in applications where converging geometry plays an important role, such as supernova explosions and inertial confinement fusion implosions.
Remote secure proof of identity using biometrics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sengupta, S. K.; Pearson, P.; Strait, R.S.
1997-06-10
Biometric measurements derived from finger- or voiceprints, hand geometry, retinal vessel patterns, iris texture characteristics, etc., can be identifiers of individuals. In each case, the measurements can be coded into a statistically unique bit-string for each individual. While in electronic commerce and other electronic transactions the proof of identity of an individual is provided by the use of either public key cryptography or biometric data, more secure applications can be achieved by employing both. However, the former requires the use of exact bit patterns. An error correction procedure allows us to successfully combine the use of both to provide a general procedure for remote secure proof of identity using a generic biometric device. One such procedure has been demonstrated using a device based on hand geometry.
NASA Technical Reports Server (NTRS)
Litvin, Faydor L.; Nava, Alessandro; Fan, Qi; Fuentes, Alfonso
2002-01-01
New geometry of face worm gear drives with conical and cylindrical worms is proposed. The generation of the face worm-gear is based on application of a tilted head-cutter (grinding tool) instead of the hob applied at present. The generation of a conjugate worm is likewise based on application of a tilted head-cutter (grinding tool). The bearing contact of the gear drive is localized and oriented longitudinally. A predesigned parabolic function of transmission errors is provided for reduction of noise and vibration. The stress analysis of the gear drive is performed using three-dimensional finite element analysis. The contacting model is automatically generated. The developed theory is illustrated with numerical examples.
NASA Astrophysics Data System (ADS)
Selby, Boris P.; Sakas, Georgios; Walter, Stefan; Stilla, Uwe
2008-03-01
Positioning a patient accurately in treatment devices is crucial for radiological treatment, especially if the accuracy advantages of particle beam treatment are to be exploited. To avoid sub-millimeter misalignments, X-ray images acquired from within the device are compared to a CT to compute the respective alignment corrections. Unfortunately, deviations of the underlying geometry model of the imaging system degrade the achievable accuracy. We propose an automatic calibration routine, which is based on the geometry of a phantom and its automatic detection in digital radiographs acquired for various geometric device settings during the calibration. The results from the registration of the phantom's X-ray projections and its known geometry are used to update the model of the respective beamlines, which is used to compute the patient alignment correction. The geometric calibration of a beamline takes all nine relevant degrees of freedom into account, including detector translations in three directions, detector tilt about three axes and three possible translations of the X-ray tube. Introducing a stochastic model for the calibration, we are able to predict the patient alignment deviations resulting from inaccuracies inherent to the phantom design and the calibration. Comparisons of the alignment results for a treatment device without calibrated imaging systems and a calibrated device show that an accurate calibration can enhance alignment accuracy.
NASA Astrophysics Data System (ADS)
Diamond, D. H.; Heyns, P. S.; Oberholster, A. J.
2016-12-01
The measurement of instantaneous angular speed is being increasingly investigated for its use in a wide range of condition monitoring and prognostic applications. Central to many measurement techniques are incremental shaft encoders recording the arrival times of shaft angular increments. The conventional approach to processing these signals assumes that the angular increments are equidistant. This assumption is generally incorrect when working with toothed wheels and especially zebra tape encoders, and has been shown to introduce errors in the estimated shaft speed. Some methods proposed in the literature aim to compensate for this geometric irregularity. Some require the shaft speed to be perfectly constant for calibration, something rarely achieved in practice; others assume the shaft speed to be nearly constant with minor deviations. Existing methods therefore cannot calibrate the entire shaft encoder geometry for arbitrary shaft speeds. This article presents a method to calculate the shaft encoder geometry for arbitrary shaft speed profiles. The method uses Bayesian linear regression to calculate the encoder increment distances. The method is derived and then tested against simulated and laboratory experiments. The results indicate that the proposed method is capable of accurately determining the shaft encoder geometry for any shaft speed profile.
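A hedged sketch in the spirit of the method, though not the paper's exact formulation: model each log inter-arrival interval as a tooth-specific log-increment minus a low-order polynomial log-speed, and solve by ridge-regularized (Bayesian-flavoured) linear regression; all modelling choices here are assumptions:

```python
import numpy as np

def encoder_increments(arrival_times, n_teeth, poly_order=6, ridge=1e-6):
    """Estimate encoder increment angles from pulse arrival times.

    Assumed model (not the paper's exact formulation): for interval k,
    log(dt_k) = log(alpha_{k mod n_teeth}) - log(omega(t_k)), with the
    log-speed a low-order polynomial in time. The ridge term stands in for
    a Gaussian prior; increments are renormalized to sum to 2*pi.
    """
    dt = np.diff(arrival_times)
    t_mid = 0.5 * (arrival_times[:-1] + arrival_times[1:])
    k = np.arange(dt.size)
    teeth_cols = (k[:, None] % n_teeth == np.arange(n_teeth)).astype(float)
    t_norm = (t_mid - t_mid.mean()) / t_mid.std()
    speed_cols = np.vander(t_norm, poly_order + 1)[:, :-1]  # constant absorbed
    A = np.hstack([teeth_cols, speed_cols])
    beta = np.linalg.solve(A.T @ A + ridge * np.eye(A.shape[1]), A.T @ np.log(dt))
    alpha = np.exp(beta[:n_teeth])
    return 2.0 * np.pi * alpha / alpha.sum()
```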
Temporal Decompostion of a Distribution System Quasi-Static Time-Series Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mather, Barry A; Hunsberger, Randolph J
This paper documents the first phase of an investigation into reducing the runtimes of complex OpenDSS models through parallelization. As the method seems promising, future work will quantify - and further mitigate - errors arising from this process. In this initial report, we demonstrate how, through the use of temporal decomposition, the runtime of a complex distribution-system-level quasi-static time-series simulation can be reduced roughly in proportion to the level of parallelization. Using this method, the monolithic model runtime of 51 hours was reduced to a minimum of about 90 minutes. As expected, this comes at the expense of control and voltage errors at the time-slice boundaries. All evaluations were performed using a real distribution circuit model with the addition of 50 PV systems - representing a mock complex PV impact study. We are able to reduce the induced transition errors through the addition of controls initialization, though small errors persist. The time savings with parallelization are so significant that we feel additional investigation to reduce control errors is warranted.
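A minimal sketch of the temporal decomposition itself, with a hypothetical `simulate` stand-in for the actual OpenDSS solve; the warm-up window plays the role of the controls initialization mentioned above:

```python
from concurrent.futures import ProcessPoolExecutor

def simulate(start, hours):
    """Hypothetical stand-in for one OpenDSS quasi-static solve per hour."""
    return [f"solution@{start + h}" for h in range(hours)]

def run_chunk(args):
    start_hour, n_hours, warmup = args
    # Warm-up steps re-initialize regulator/capacitor controls, then discard.
    return simulate(start_hour - warmup, n_hours + warmup)[warmup:]

def parallel_qsts(total_hours=8760, n_chunks=8, warmup=24):
    """Split a year-long time series into chunks solved in parallel."""
    size = total_hours // n_chunks
    jobs = [(i * size, size, warmup if i else 0) for i in range(n_chunks)]
    with ProcessPoolExecutor(max_workers=n_chunks) as pool:
        chunks = list(pool.map(run_chunk, jobs))
    return [step for chunk in chunks for step in chunk]

if __name__ == "__main__":
    print(len(parallel_qsts()))   # 8760 hourly solutions
```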
A knowledge-based design framework for airplane conceptual and preliminary design
NASA Astrophysics Data System (ADS)
Anemaat, Wilhelmus A. J.
The goal of the work described herein is to develop the second generation of Advanced Aircraft Analysis (AAA) into an object-oriented structure which can be used in different environments. One such environment is the third generation of AAA with its own user interface; the other environment, with the same AAA methods (i.e. the knowledge), is the AAA-AML program. AAA-AML automates the initial airplane design process using current AAA methods in combination with AMRaven methodologies for dependency tracking and knowledge management, using the TechnoSoft Adaptive Modeling Language (AML). This will lead to the following benefits: (1) Reduced design time: computer aided design methods can reduce design and development time and replace tedious hand calculations. (2) Better product through improved design: more alternative designs can be evaluated in the same time span, which can lead to improved quality. (3) Reduced design cost: due to less training and fewer calculation errors, substantial savings in design time and related cost can be obtained. (4) Improved efficiency: the design engineer can avoid technically correct but irrelevant calculations on incomplete or out-of-sync information, particularly if the process enables robust geometry earlier. Although numerous advancements in knowledge-based design have been developed for detailed design, currently no such integrated knowledge-based conceptual and preliminary airplane design system exists. The third generation AAA methods have been tested over a ten-year period on many different airplane designs. Using AAA methods will demonstrate significant time savings. The AAA-AML system will be exercised and tested using 27 existing airplanes ranging from single engine propeller, business jets, airliners, and UAVs to fighters. Data for the varied sizing methods will be compared with AAA results to validate these methods. One new design, a Light Sport Aircraft (LSA), will be developed as an exercise to use the tool for designing a new airplane. Using these tools will show an improvement in efficiency over using separate programs, due to the automatic recalculation with any change of input data. The direct visual feedback of 3D geometry in the AAA-AML will lead to quicker resolution of problems as opposed to conventional methods.
Parameter Estimation as a Problem in Statistical Thermodynamics.
Earle, Keith A; Schneider, David J
2011-03-14
In this work, we explore the connections between parameter fitting and statistical thermodynamics using the maxent principle of Jaynes as a starting point. In particular, we show how signal averaging may be described by a suitable one-particle partition function, modified for the case of a variable number of particles. These modifications lead to an entropy that is extensive in the number of measurements in the average. Systematic error may be interpreted as a departure from ideal gas behavior. In addition, we show how to combine measurements from different experiments in an unbiased way in order to maximize the entropy of simultaneous parameter fitting. We suggest that fit parameters may be interpreted as generalized coordinates and that the forces conjugate to them may be derived from the system partition function. From this perspective, the parameter fitting problem may be interpreted as a process where the system (spectrum) does work against internal stresses (non-optimum model parameters) to achieve a state of minimum free energy/maximum entropy. Finally, we show how the distribution function allows us to define a geometry on parameter space, building on previous work [1, 2]. This geometry has implications for error estimation, and we outline a program for incorporating these geometrical insights into an automated parameter fitting algorithm.
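As a hedged sketch of the correspondence assumed above (not the authors' exact formalism), least-squares fitting can be phrased thermodynamically by treating chi-squared as an energy:

```latex
% Sketch assuming Gaussian noise of variance \sigma^2 and residuals
% r_i(\theta) for fit parameters \theta (illustrative notation only):
\begin{align*}
  \chi^2(\theta) &= \sum_{i=1}^{N} \frac{r_i(\theta)^2}{\sigma^2}, &
  P(\theta) &\propto e^{-\chi^2(\theta)/2}, &
  Z &= \int e^{-\chi^2(\theta)/2}\, d\theta, \quad F = -\ln Z.
\end{align*}
% The best fit minimizes the "free energy" F; since \chi^2 grows linearly
% with the number of averaged measurements N, the log-probability (and the
% associated entropy) is extensive in N.
```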
Zaytsev, Anatoly V.; Grishchuk, Ekaterina L.
2015-01-01
Accuracy of chromosome segregation relies on the ill-understood ability of mitotic kinetochores to biorient, whereupon each sister kinetochore forms microtubule (MT) attachments to only one spindle pole. Because initial MT attachments result from chance encounters with the kinetochores, biorientation must rely on specific mechanisms to avoid and resolve improper attachments. Here we use mathematical modeling to critically analyze the error-correction potential of a simplified biorientation mechanism, which involves the back-to-back arrangement of sister kinetochores and the marked instability of kinetochore–MT attachments. We show that a typical mammalian kinetochore operates in a near-optimal regime, in which the back-to-back kinetochore geometry and the indiscriminate kinetochore–MT turnover provide strong error-correction activity. In human cells, this mechanism alone can potentially enable normal segregation of 45 out of 46 chromosomes during one mitotic division, corresponding to a mis-segregation rate in the range of 10⁻¹–10⁻² per chromosome. This theoretical upper limit for chromosome segregation accuracy predicted with the basic mechanism is close to the mis-segregation rate in some cancer cells; however, it cannot explain the relatively low chromosome loss in diploid human cells, consistent with their reliance on additional mechanisms. PMID:26424798
Omang, R.J.; Parrett, Charles; Hull, J.A.
1983-01-01
Equations using channel-geometry measurements were developed for estimating mean runoff and peak flows of ungaged streams in southeastern Montana. Two separate sets of estimating equations were developed for determining mean annual runoff: one for perennial streams and one for ephemeral and intermittent streams. Data from 29 gaged sites on perennial streams and 21 gaged sites on ephemeral and intermittent streams were used in these analyses. Data from 78 gaged sites were used in the peak-flow analyses. Southeastern Montana was divided into three regions, and separate multiple-regression equations for each region were developed that relate channel dimensions to peak discharge having recurrence intervals of 2, 5, 10, 25, 50, and 100 years. Channel-geometry relations were developed using measurements of the active-channel width and bankfull width. Active-channel width and bankfull width were the most significant channel features for estimating mean annual runoff for all types of streams. Use of this method requires that onsite measurements be made of channel width. The standard error of estimate for predicting mean annual runoff ranged from about 38 to 79 percent. The standard error of estimate relating active-channel width or bankfull width to peak flow ranged from about 37 to 115 percent. (USGS)
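A hedged sketch of the log-log regression form typically behind such channel-geometry equations; the made-up data stand in for the report's regional data sets, whose actual coefficients are not reproduced here:

```python
import numpy as np

def fit_channel_geometry(width_m, peakflow_cms):
    """Fit a power law Q = a * W^b by log-log least squares (illustrative).

    width_m      : active-channel or bankfull widths at gaged sites
    peakflow_cms : peak discharge for one recurrence interval at those sites
    """
    b, log_a = np.polyfit(np.log(width_m), np.log(peakflow_cms), 1)
    return np.exp(log_a), b

# Example with made-up data:
a, b = fit_channel_geometry(np.array([3.0, 5.0, 9.0, 14.0]),
                            np.array([6.0, 15.0, 40.0, 90.0]))
print(f"Q ~ {a:.2f} * W^{b:.2f}")
```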
Wu, Wen; Wu, Zhouhu; Song, Zhiwen
2017-07-01
Prediction of the pollutant mixing zone (PMZ) near the discharge outfall in Huangshaxi shows large errors when methods based on the constant-lateral-diffusion assumption are used. The discrepancy is due to the neglected variation of the diffusion coefficient. The lateral diffusion coefficient is therefore proposed to be a function of the longitudinal distance from the outfall. An analytical solution of the two-dimensional advection-diffusion equation for a pollutant is derived and discussed. Formulas characterizing the geometry of the PMZ are derived from this solution, and a standard curve describing the boundary of the PMZ is obtained by proper choices of the normalization scales. The change of PMZ topology due to the variable diffusion coefficient is then discussed using these formulas. A criterion for when the lateral diffusion coefficient can be assumed constant without large error in the PMZ geometry is found. It is also demonstrated how to use these analytical formulas in inverse problems, including estimating the lateral diffusion coefficient in rivers from convenient measurements and determining the maximum allowable discharge load based on limitations of the geometrical scales of the PMZ. Finally, applications of the obtained formulas to onsite PMZ measurements in Huangshaxi show excellent agreement.
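A hedged reading of the governing balance described above (symbols assumed, not taken from the paper): steady two-dimensional advection with a laterally varying diffusivity,

```latex
\begin{equation*}
  u \,\frac{\partial C}{\partial x}
    = \frac{\partial}{\partial y}\!\left( D_y(x)\,\frac{\partial C}{\partial y} \right),
  \qquad D_y = D_y(x),
\end{equation*}
% where u is the longitudinal velocity, C the concentration, and the PMZ
% boundary is the contour on which C equals the water-quality standard.
```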
An editor for the generation and customization of geometry restraints
Moriarty, Nigel W.; Draizen, Eli J.; Adams, Paul D.
2017-02-01
Chemical restraints for use in macromolecular structure refinement are produced by a variety of methods, including a number of programs that use chemical information to generate the required bond, angle, dihedral, chiral and planar restraints. These programs help to automate the process and therefore minimize the errors that could otherwise occur if it were performed manually. Furthermore, restraint-dictionary generation programs can incorporate chemical and other prior knowledge to provide reasonable choices of types and values. However, the use of restraints to define the geometry of a molecule is an approximation introduced with efficiency in mind. The representation of a bond as a parabolic function is a convenience and does not reflect the true variability in even the simplest of molecules. Another complicating factor is the interplay of the molecule with other parts of the macromolecular model. Finally, difficult situations arise from molecules with rare or unusual moieties that may not have their conformational space fully explored. These factors give rise to the need for an interactive editor for WYSIWYG interactions with the restraints and molecule. Restraints Editor, Especially Ligands (REEL) is a graphical user interface for simple and error-free editing along with additional features to provide greater control of the restraint dictionaries in macromolecular refinement.
Curvature correction of retinal OCTs using graph-based geometry detection
NASA Astrophysics Data System (ADS)
Kafieh, Raheleh; Rabbani, Hossein; Abramoff, Michael D.; Sonka, Milan
2013-05-01
In this paper, we present a new algorithm as an enhancement and preprocessing step for acquired optical coherence tomography (OCT) images of the retina. The proposed method is composed of two steps, the first of which is a denoising algorithm with wavelet diffusion based on a circular symmetric Laplacian model, and the second of which can be described in terms of graph-based geometry detection and curvature correction according to the hyper-reflective complex layer in the retina. The proposed denoising algorithm showed an improvement of contrast-to-noise ratio from 0.89 to 1.49 and an increase of signal-to-noise ratio (OCT image SNR) from 18.27 to 30.43 dB. By applying the proposed method for estimation of the interpolated curve using a fully automatic method, the mean ± SD unsigned border positioning error was calculated for normal and abnormal cases. Error values of 2.19 ± 1.25 and 8.53 ± 3.76 µm were obtained for 200 randomly selected slices without pathological curvature and 50 randomly selected slices with pathological curvature, respectively. An important aspect of this algorithm is its ability to detect curvature in strongly pathological images, which surpasses previously introduced methods; the method is also fast, compared to the relatively low speed of similar methods.
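For reference, a minimal sketch of the contrast-to-noise computation behind the reported improvement (0.89 to 1.49); the masks and the exact CNR definition used in the paper are assumptions:

```python
import numpy as np

def cnr(image, signal_mask, background_mask):
    """Contrast-to-noise ratio between a signal region and background.

    Uses the common definition CNR = |mu_s - mu_b| / sigma_b; in an OCT
    B-scan the masks would cover retinal layers vs. vitreous.
    """
    mu_s = image[signal_mask].mean()
    mu_b = image[background_mask].mean()
    return abs(mu_s - mu_b) / image[background_mask].std()
```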
An editor for the generation and customization of geometry restraints.
Moriarty, Nigel W; Draizen, Eli J; Adams, Paul D
2017-02-01
Chemical restraints for use in macromolecular structure refinement are produced by a variety of methods, including a number of programs that use chemical information to generate the required bond, angle, dihedral, chiral and planar restraints. These programs help to automate the process and therefore minimize the errors that could otherwise occur if it were performed manually. Furthermore, restraint-dictionary generation programs can incorporate chemical and other prior knowledge to provide reasonable choices of types and values. However, the use of restraints to define the geometry of a molecule is an approximation introduced with efficiency in mind. The representation of a bond as a parabolic function is a convenience and does not reflect the true variability in even the simplest of molecules. Another complicating factor is the interplay of the molecule with other parts of the macromolecular model. Finally, difficult situations arise from molecules with rare or unusual moieties that may not have their conformational space fully explored. These factors give rise to the need for an interactive editor for WYSIWYG interactions with the restraints and molecule. Restraints Editor, Especially Ligands (REEL) is a graphical user interface for simple and error-free editing along with additional features to provide greater control of the restraint dictionaries in macromolecular refinement.
Replication infidelity via a mismatch with Watson–Crick geometry
Bebenek, Katarzyna; Pedersen, Lars C.; Kunkel, Thomas A.
2011-01-01
In describing the DNA double helix, Watson and Crick suggested that “spontaneous mutation may be due to a base occasionally occurring in one of its less likely tautomeric forms.” Indeed, among many mispairing possibilities, either tautomerization or ionization of bases might allow a DNA polymerase to insert a mismatch with correct Watson–Crick geometry. However, despite substantial progress in understanding the structural basis of error prevention during polymerization, no DNA polymerase has yet been shown to form a natural base–base mismatch with Watson–Crick-like geometry. Here we provide such evidence, in the form of a crystal structure of a human DNA polymerase λ variant poised to misinsert dGTP opposite a template T. All atoms needed for catalysis are present at the active site and in positions that overlay with those for a correct base pair. The mismatch has Watson–Crick geometry consistent with a tautomeric or ionized base pair, with the pH dependence of misinsertion consistent with the latter. The results support the original idea that a base substitution can originate from a mismatch having Watson–Crick geometry, and they suggest a common catalytic mechanism for inserting a correct and an incorrect nucleotide. A second structure indicates that after misinsertion, the now primer-terminal G•T mismatch is also poised for catalysis but in the wobble conformation seen in other studies, indicating the dynamic nature of the pathway required to create a mismatch in fully duplex DNA. PMID:21233421
Replication infidelity via a mismatch with Watson-Crick geometry.
Bebenek, Katarzyna; Pedersen, Lars C; Kunkel, Thomas A
2011-02-01
In describing the DNA double helix, Watson and Crick suggested that "spontaneous mutation may be due to a base occasionally occurring in one of its less likely tautomeric forms." Indeed, among many mispairing possibilities, either tautomerization or ionization of bases might allow a DNA polymerase to insert a mismatch with correct Watson-Crick geometry. However, despite substantial progress in understanding the structural basis of error prevention during polymerization, no DNA polymerase has yet been shown to form a natural base-base mismatch with Watson-Crick-like geometry. Here we provide such evidence, in the form of a crystal structure of a human DNA polymerase λ variant poised to misinsert dGTP opposite a template T. All atoms needed for catalysis are present at the active site and in positions that overlay with those for a correct base pair. The mismatch has Watson-Crick geometry consistent with a tautomeric or ionized base pair, with the pH dependence of misinsertion consistent with the latter. The results support the original idea that a base substitution can originate from a mismatch having Watson-Crick geometry, and they suggest a common catalytic mechanism for inserting a correct and an incorrect nucleotide. A second structure indicates that after misinsertion, the now primer-terminal G•T mismatch is also poised for catalysis but in the wobble conformation seen in other studies, indicating the dynamic nature of the pathway required to create a mismatch in fully duplex DNA.
Gkionis, Konstantinos; Kruse, Holger; Šponer, Jiří
2016-04-12
Modern dispersion-corrected DFT methods have made it possible to perform reliable QM studies on complete nucleic acid (NA) building blocks having hundreds of atoms. Such calculations, although still limited to investigations of potential energy surfaces, enhance the portfolio of computational methods applicable to NAs and offer considerably more accurate intrinsic descriptions of NAs than standard MM. However, in practice such calculations are hampered by the use of implicit solvent environments and truncation of the systems. Conventional QM optimizations are spoiled by spurious intramolecular interactions and severe structural deformations. Here we compare two approaches designed to suppress such artifacts: partially restrained continuum solvent QM and explicit solvent QM/MM optimizations. We report geometry relaxations of a set of diverse double-quartet guanine quadruplex (GQ) DNA stems. Both methods provide neat structures without major artifacts. However, each one also has distinct weaknesses. In restrained optimizations, all errors in the target geometries (i.e., low-resolution X-ray and NMR structures) are transferred to the optimized geometries. In QM/MM, the initial solvent configuration causes some heterogeneity in the geometries. Nevertheless, both approaches represent a decisive step forward compared to conventional optimizations. We refine earlier computations that revealed sizable differences in the relative energies of GQ stems computed with AMBER MM and QM. We also explore the dependence of the QM/MM results on the applied computational protocol.
Investigating the geometry of pig airways using computed tomography
NASA Astrophysics Data System (ADS)
Mansy, Hansen A.; Azad, Md Khurshidul; McMurray, Brandon; Henry, Brian; Royston, Thomas J.; Sandler, Richard H.
2015-03-01
Numerical modeling of sound propagation in the airways requires accurate knowledge of the airway geometry. These models are often validated using human and animal experiments. While many studies have documented the geometric details of the human airways, information about the geometry of pig airways is scarcer. In addition, the morphology of animal airways can be significantly different from that of humans. The objective of this study is to measure the airway diameter, length and bifurcation angles in domestic pigs using computed tomography. After imaging the lungs of 3 pigs, segmentation software tools were used to extract the geometry of the airway lumen. The airway dimensions were then measured from the resulting 3D models for the first 10 airway generations. Results showed that the size and morphology of the airways of different animals were similar. The measured airway dimensions were compared with those of the human airways. While the trachea diameter was found to be comparable to that of the adult human, the diameter, length and branching angles of other airways were noticeably different from those of humans. For example, pigs consistently had an early airway branching from the trachea that feeds the superior (top) right lung lobe proximal to the carina. This branch is absent in the human airways. These results suggest that the human geometry may not be a good approximation of the pig airways, and that using human airway geometric values in computational models of the pig chest may increase errors.
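As a concrete illustration of one measured quantity, a bifurcation angle can be computed from segmented centerline points as the angle between the two daughter-branch directions at the branch point. The sketch below uses made-up coordinates and is not the study's measurement pipeline.

```python
import numpy as np

def bifurcation_angle(child1_end, child2_end, branch_point):
    """Angle (degrees) between two daughter-branch directions, taken
    from centerline endpoints of a segmented airway tree."""
    v1 = child1_end - branch_point
    v2 = child2_end - branch_point
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

carina = np.array([0.0, 0.0, 0.0])       # illustrative coordinates (mm)
left = np.array([-12.0, 4.0, -9.0])
right = np.array([10.0, 2.0, -6.0])
print(f"bifurcation angle: {bifurcation_angle(left, right, carina):.1f} deg")
```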
Transforming BIM to BEM: Generation of Building Geometry for the NASA Ames Sustainability Base BIM
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Donnell, James T.; Maile, Tobias; Rose, Cody
Typical processes of whole Building Energy simulation Model (BEM) generation are subjective, labor intensive, time intensive and error prone. Essentially, these typical processes reproduce already existing data, i.e. building models already created by the architect. Accordingly, Lawrence Berkeley National Laboratory (LBNL) developed a semi-automated process that enables reproducible conversions of Building Information Model (BIM) representations of building geometry into a format required by building energy modeling (BEM) tools. This is a generic process that may be applied to all building energy modeling tools but to date has only been used for EnergyPlus. This report describes and demonstrates each stage in the semi-automated process for building geometry, using the recently constructed NASA Ames Sustainability Base as the example throughout. This example uses ArchiCAD (Graphisoft, 2012) as the originating CAD tool and EnergyPlus as the concluding whole building energy simulation tool. It is important to note that the process is also applicable for professionals that use other CAD tools such as Revit (“Revit Architecture,” 2012) and DProfiler (Beck Technology, 2012) and can be extended to provide geometry definitions for BEM tools other than EnergyPlus. The Geometry Simplification Tool (GST) was used during the NASA Ames project and was the enabling software that facilitated semi-automated data transformations. GST has now been superseded by the Space Boundary Tool (SBT-1) and will be referred to as SBT-1 throughout this report. The benefits of this semi-automated process are fourfold: 1) reduce the amount of time and cost required to develop a whole building energy simulation model, 2) enable rapid generation of design alternatives, 3) improve the accuracy of BEMs and 4) result in significantly better performing buildings with significantly lower energy consumption than those created using the traditional design process, especially if the simulation model was used as a predictive benchmark during operation. Developing BIM-based criteria to support the semi-automated process should result in significant reliable improvements and time savings in the development of BEMs. In order to produce successful BIMs, CAD export of IFC-based BIMs for BEM must adhere to a standard Model View Definition (MVD) for simulation as provided by the concept design BIM MVD (buildingSMART, 2011). In order to ensure wide-scale adoption, companies would also need to develop their own material libraries to support automated activities and undertake a pilot project to improve understanding of modeling conventions and design tool features and limitations.
Improved arrayed-waveguide-grating layout avoiding systematic phase errors.
Ismail, Nur; Sun, Fei; Sengo, Gabriel; Wörhoff, Kerstin; Driessen, Alfred; de Ridder, René M; Pollnau, Markus
2011-04-25
We present a detailed description of an improved arrayed-waveguide-grating (AWG) layout for both low and high diffraction orders. The novel layout features identical bends across the entire array; in this way, systematic phase errors arising from different bends, which are inherent to conventional AWG designs, are completely eliminated. In addition, for high-order AWGs our design results in more than 50% reduction of the occupied area on the wafer. We present an experimental characterization of a low-order device fabricated according to this geometry. The device has a resolution of 5.5 nm, low intrinsic losses (< 2 dB) in the wavelength region of interest for the application, and is polarization insensitive over a wide spectral range of 215 nm.
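For context, in standard AWG theory (not specific to this paper) the arrayed waveguides differ in length by a constant increment set by the diffraction order. The sketch below evaluates that textbook relation with assumed values.

```python
# Standard AWG relation: dL = m * lambda_c / n_eff, where m is the
# diffraction order, lambda_c the center wavelength, and n_eff the
# effective index of the guided mode. All values here are assumptions.
m = 40                # diffraction order (illustrative)
lambda_c = 1.55e-6    # center wavelength (m)
n_eff = 1.45          # assumed effective index
dL = m * lambda_c / n_eff
print(f"incremental array length: {dL * 1e6:.2f} um")
```

A low-order design keeps m, and hence dL, small; the layout improvement described above ensures that whatever bends realize these length increments are identical from waveguide to waveguide, so no systematic phase error accumulates across the array.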
Contrast enhancement in EIT imaging of the brain.
Nissinen, A; Kaipio, J P; Vauhkonen, M; Kolehmainen, V
2016-01-01
We consider electrical impedance tomography (EIT) imaging of the brain. The brain is surrounded by the skull, whose conductivity is low compared with that of the brain tissue. The skull layer causes a partial shielding effect which leads to weak sensitivity for the imaging of the brain tissue. In this paper we propose a method, based on the Bayesian approximation error approach, to enhance the contrast in brain imaging. With this approach, both the (uninteresting) geometry and the conductivity of the skull are embedded in the approximation error statistics, which leads to a computationally efficient algorithm that is able to detect features such as internal haemorrhage with significantly increased sensitivity and specificity. We evaluate the approach with simulations and phantom data.
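The core of the Bayesian approximation error approach is to estimate the statistics of the discrepancy between an accurate forward model and a reduced one from samples, and to fold those statistics into the measurement noise model. A minimal sketch with placeholder forward maps; the real models would be EIT solvers with and without the skull detail.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_meas = 500, 32

def accurate_model(x):   # stand-in for an EIT forward model with skull detail
    return np.sin(x) + 0.1 * x

def reduced_model(x):    # stand-in for the reduced model without skull detail
    return np.sin(x)

# Sample the unknowns and record the modeling error for each sample.
xs = rng.normal(size=(n_samples, n_meas))
eps = np.array([accurate_model(x) - reduced_model(x) for x in xs])

# Approximation error statistics: the likelihood becomes
# y = reduced_model(x) + e + n,  with  e ~ N(eps_mean, eps_cov).
eps_mean = eps.mean(axis=0)
eps_cov = np.cov(eps, rowvar=False)

noise_cov = 1e-4 * np.eye(n_meas)
total_cov = noise_cov + eps_cov   # replaces noise_cov in the inversion
print(eps_mean.shape, total_cov.shape)
```

The inversion then uses total_cov (and subtracts eps_mean from the data residual) in place of the plain noise covariance, which is what allows the uninteresting skull parameters to be marginalized out cheaply.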
Fourier mode analysis of slab-geometry transport iterations in spatially periodic media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larsen, E; Zika, M
1999-04-01
We describe a Fourier analysis of the diffusion-synthetic acceleration (DSA) and transport-synthetic acceleration (TSA) iteration schemes for a spatially periodic, but otherwise arbitrarily heterogeneous, medium. Both DSA and TSA converge more slowly in a heterogeneous medium than in a homogeneous medium with the volume-averaged scattering ratio. In the limit of a homogeneous medium, our heterogeneous analysis contains eigenvalues of multiplicity two at "resonant" wave numbers. In the presence of material heterogeneities, error modes corresponding to these resonant wave numbers are "excited" more than other error modes. For DSA and TSA, the iteration spectral radius may occur at these resonant wave numbers, in which case the material heterogeneities most strongly affect iterative performance.
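The spectral radius referred to here governs the asymptotic error decay of a linear iteration, e_{k+1} = M e_k. A generic way to estimate it numerically for a given discretized iteration matrix is power iteration; this sketch is a standard numerical tool, not the authors' Fourier analysis.

```python
import numpy as np

def spectral_radius(M, iters=200, seed=0):
    """Estimate rho(M) by power iteration; the iteration error
    e_{k+1} = M e_k decays asymptotically like rho(M)**k."""
    rng = np.random.default_rng(seed)
    e = rng.normal(size=M.shape[0])
    rho = 0.0
    for _ in range(iters):
        e_new = M @ e
        rho = np.linalg.norm(e_new) / np.linalg.norm(e)
        e = e_new / np.linalg.norm(e_new)
    return rho

M = np.array([[0.20, 0.05], [0.03, 0.40]])  # toy iteration matrix
print(spectral_radius(M))                   # close to max |eigenvalue| (~0.41)
```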
Low Density Parity Check Codes: Bandwidth Efficient Channel Coding
NASA Technical Reports Server (NTRS)
Fong, Wai; Lin, Shu; Maki, Gary; Yeh, Pen-Shu
2003-01-01
Low Density Parity Check (LDPC) codes provide near-Shannon-capacity performance for NASA missions. These codes have high coding rates R = 0.82 and 0.875 with moderate code lengths, n = 4096 and 8176. Their decoders have inherently parallel structures which allow for high-speed implementation. Two codes based on Euclidean Geometry (EG) were selected for flight ASIC implementation. These codes are cyclic and quasi-cyclic in nature and therefore have a simple encoder structure. This results in power and size benefits. These codes also have a large minimum distance, as large as d_min = 65, giving them powerful error-correcting capabilities and very low error floors. This paper will present development of the LDPC flight encoder and decoder, its applications and status.
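For readers unfamiliar with parity-check codes, the decoder's basic test is that a valid codeword has an all-zero syndrome. The toy sketch below uses the tiny (7,4) Hamming parity-check matrix as a stand-in for the much larger EG-LDPC matrices, which are not reproduced here.

```python
import numpy as np

# (7,4) Hamming parity-check matrix standing in for a large EG-LDPC H.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def syndrome(word):
    """All-zero syndrome <=> word satisfies every parity check (mod 2)."""
    return (H @ word) % 2

codeword = np.array([1, 1, 1, 0, 0, 0, 0])  # valid for this H
received = codeword.copy()
received[4] ^= 1                             # single bit error at position 5
print(syndrome(codeword))   # [0 0 0]
print(syndrome(received))   # [1 0 1] -> reads as binary 5, locating the error
```

For the cyclic EG codes, the rows of the parity-check matrix are cyclic shifts of one another, which is why the encoder reduces to simple shift-register logic in hardware.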
Error reduction in EMG signal decomposition
Kline, Joshua C.
2014-01-01
Decomposition of the electromyographic (EMG) signal into constituent action potentials and the identification of individual firing instances of each motor unit in the presence of ambient noise are inherently probabilistic processes, whether performed manually or with automated algorithms. Consequently, they are subject to errors. We set out to classify and reduce these errors by analyzing 1,061 motor-unit action-potential trains (MUAPTs), obtained by decomposing surface EMG (sEMG) signals recorded during human voluntary contractions. Decomposition errors were classified into two general categories: location errors representing variability in the temporal localization of each motor-unit firing instance and identification errors consisting of falsely detected or missed firing instances. To mitigate these errors, we developed an error-reduction algorithm that combines multiple decomposition estimates to determine a more probable estimate of motor-unit firing instances with fewer errors. The performance of the algorithm is governed by a trade-off between the yield of MUAPTs obtained above a given accuracy level and the time required to perform the decomposition. When applied to a set of sEMG signals synthesized from real MUAPTs, the identification error was reduced by an average of 1.78%, improving the accuracy to 97.0%, and the location error was reduced by an average of 1.66 ms. The error-reduction algorithm in this study is not limited to any specific decomposition strategy. Rather, we propose it be used for other decomposition methods, especially when analyzing precise motor-unit firing instances, as occurs when measuring synchronization. PMID:25210159
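The fusion step described above can be approximated, in greatly simplified form, by clustering firing times that multiple decomposition estimates agree on within a small tolerance and discarding instances seen only once. This sketch is a crude stand-in for the paper's algorithm, with made-up firing times.

```python
import numpy as np

def fuse_firings(estimates, tol=0.002, min_votes=2):
    """Merge several lists of firing times (s): cluster instances that
    agree within `tol` seconds and keep clusters supported by at least
    `min_votes` estimates. A crude stand-in for the paper's fusion step."""
    times = np.sort(np.concatenate(estimates))
    fused, cluster = [], [times[0]]
    for t in times[1:]:
        if t - cluster[-1] <= tol:
            cluster.append(t)
        else:
            if len(cluster) >= min_votes:
                fused.append(float(np.mean(cluster)))
            cluster = [t]
    if len(cluster) >= min_votes:
        fused.append(float(np.mean(cluster)))
    return fused

est1 = [0.100, 0.250, 0.400]
est2 = [0.101, 0.252, 0.555]   # misses 0.400, false detection at 0.555
est3 = [0.099, 0.401]
print(fuse_firings([est1, est2, est3]))  # ~[0.100, 0.251, 0.400]
```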
NASA Technical Reports Server (NTRS)
Enomoto, F.; Keller, P.
1984-01-01
The Computer Aided Design (CAD) system's common geometry database was used to generate input for theoretical programs and numerically controlled (NC) tool paths for wind tunnel part fabrication. This eliminates the duplication of work in generating separate geometry databases for each type of analysis. Another advantage is that it reduces the uncertainty due to geometric differences when comparing theoretical aerodynamic data with wind tunnel data. The system was adapted to aerodynamic research by developing programs written in Design Analysis Language (DAL). These programs reduced the amount of time required to construct complex geometries and to generate input for theoretical programs. Certain shortcomings of the Design, Drafting, and Manufacturing (DDM) software limited the effectiveness of these programs and some of the Calma NC software. The complexity of aircraft configurations suggests that more types of surface and curve geometry should be added to the system. Some of these shortcomings may be eliminated as improved versions of DDM are made available.
Use of machine learning methods to reduce predictive error of groundwater models.
Xu, Tianfang; Valocchi, Albert J; Choi, Jaesik; Amir, Eyal
2014-01-01
Quantitative analyses of groundwater flow and transport typically rely on a physically-based model, which is inherently subject to error. Errors in model structure, parameters, and data lead to both random and systematic error even in the output of a calibrated model. We develop complementary data-driven models (DDMs) to reduce the predictive error of physically-based groundwater models. Two machine learning techniques, instance-based weighting and support vector regression, are used to build the DDMs. This approach is illustrated using two real-world case studies of the Republican River Compact Administration model and the Spokane Valley-Rathdrum Prairie model. The two groundwater models have different hydrogeologic settings, parameterization, and calibration methods. In the first case study, cluster analysis is introduced for data preprocessing to make the DDMs more robust and computationally efficient. The DDMs reduce the root-mean-square error (RMSE) of the temporal, spatial, and spatiotemporal prediction of piezometric head of the groundwater model by 82%, 60%, and 48%, respectively. In the second case study, the DDMs reduce the RMSE of the temporal prediction of piezometric head of the groundwater model by 77%. It is further demonstrated that the effectiveness of the DDMs depends on the existence and extent of the structure in the error of the physically-based model. © 2013, National GroundWater Association.
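The essence of the complementary data-driven model is to learn the calibrated model's residual as a function of input features and subtract the predicted error from the simulation output. A minimal sketch using scikit-learn's support vector regression on synthetic data; the feature choices and magnitudes are placeholders.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X = rng.uniform(size=(200, 3))            # e.g. pumping, recharge, river stage
simulated = 10 + 2 * X[:, 0]              # stand-in for physical-model output
observed = simulated + 0.5 * np.sin(6 * X[:, 1]) + rng.normal(0, 0.05, 200)

# Train the DDM on the residual (structured model error plus noise).
ddm = SVR(kernel="rbf", C=10.0).fit(X, observed - simulated)

corrected = simulated + ddm.predict(X)    # DDM-corrected prediction
rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
print(rmse(simulated, observed), rmse(corrected, observed))
```

As the abstract notes, this only helps to the extent that the physical model's error has learnable structure; a purely random residual cannot be predicted away.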
SU-G-TeP1-08: LINAC Head Geometry Modeling for Cyber Knife System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liang, B; Li, Y; Liu, B
Purpose: Knowledge of the LINAC head geometry is critical for model-based dose calculation algorithms. However, the geometries are difficult to measure precisely. The purpose of this study is to develop LINAC head models for the Cyber Knife system (CKS). Methods: For CKS, the commissioning data were measured in water at 800mm SAD. The measured full width at half maximum (FWHM) for each cone was found to be greater than the nominal value; this was further confirmed by additional film measurement in air. Diameter correction, cone shift and source shift models (DCM, CSM and SSM) are proposed to account for the differences. In DCM, a cone-specific correction is applied. For CSM and SSM, a single shift is applied to the cone or source physical position. All three models were validated with an in-house developed pencil beam dose calculation algorithm, and further evaluated by the collimator scatter factor (Sc) correction. Results: The mean square error (MSE) between the nominal diameter and the FWHM derived from commissioning data and in-air measurement is 0.54mm and 0.44mm, respectively, with the discrepancy increasing with cone size. The optimal shift for CSM and SSM is found to be 9mm upward and 18mm downward, respectively. The MSE in FWHM is reduced to 0.04mm and 0.14mm for DCM and CSM (SSM). Both DCM and CSM result in the same set of Sc values. Combining all cones at SAD 600–1000mm, the average deviation from 1 in Sc of DCM (CSM) and SSM is 2.6% and 2.2%, and is reduced to 0.9% and 0.7% for cones with diameter greater than 15mm. Conclusion: We developed three geometrical models for CKS. All models can handle the discrepancy between vendor specifications and commissioning data, and SSM has the best performance for Sc correction. The study also validated that a point source can be used in CKS dose calculation algorithms.
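The quantity that drives all three models is the FWHM of a measured cone profile. A minimal sketch of extracting FWHM from a 1-D dose or film profile by interpolating the half-maximum crossings; this is a generic method, not the study's commissioning software.

```python
import numpy as np

def fwhm(x, y):
    """Full width at half maximum of a single-peaked profile y(x),
    using linear interpolation at the half-maximum crossings."""
    half = y.max() / 2.0
    above = np.nonzero(y >= half)[0]
    i, j = above[0], above[-1]
    # Interpolate on the rising and falling edges (xp must increase).
    xl = np.interp(half, [y[i - 1], y[i]], [x[i - 1], x[i]])
    xr = np.interp(half, [y[j + 1], y[j]], [x[j + 1], x[j]])
    return xr - xl

x = np.linspace(-30, 30, 601)               # position (mm)
y = np.exp(-0.5 * (x / 8.0) ** 2)            # Gaussian-ish beam profile
print(fwhm(x, y))                            # ~2.355 * 8 = 18.8 mm
```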
Transonic rotor tip design using numerical optimization
NASA Technical Reports Server (NTRS)
Tauber, Michael E.; Langhi, Ronald G.
1985-01-01
The aerodynamic design procedure for a new blade tip suitable for operation at transonic speeds is illustrated. For the first time, three-dimensional numerical optimization was applied to rotor tip design, using program R22OPT, a recent derivative of the ROT22 code that employs an efficient quasi-Newton optimization algorithm. Multiple design objectives were specified. The delocalization of the shock wave was to be eliminated in forward flight for an advance ratio of 0.41 and a tip Mach number of 0.92 at psi = 90 deg. Simultaneously, it was sought to reduce torque requirements while maintaining effective restoring pitching moments. Only the outer 10 percent of the blade span was modified; the blade area was not to be reduced by more than 3 percent. The goal was to combine the advantages of both sweptback and sweptforward blade tips. A planform that featured inboard sweepback was combined with a sweptforward tip and a taper ratio of 0.5. Initially, the ROT22 code was used to find, by trial and error, a planform geometry that met the design goals. This configuration had an inboard section with a leading edge sweep of 20 deg and a tip section swept forward at 25 deg; in addition, the airfoils were modified.
Dynamic Speed Adaptation for Path Tracking Based on Curvature Information and Speed Limits †
Gámez Serna, Citlalli; Ruichek, Yassine
2017-01-01
A critical concern of autonomous vehicles is safety. Different approaches have tried to enhance driving safety to reduce the number of fatal crashes and severe injuries. As an example, Intelligent Speed Adaptation (ISA) systems warn the driver when the vehicle exceeds the recommended speed limit. However, these systems only take into account fixed speed limits, without considering factors like road geometry. In this paper, we combine road curvature with speed limits to automatically adjust the vehicle’s speed to the ideal one through our proposed Dynamic Speed Adaptation (DSA) method. Furthermore, ‘curve analysis extraction’ and ‘speed limits database creation’ are also part of our contribution. An algorithm that analyzes GPS information off-line identifies high-curvature segments and estimates the speed for each curve. The speed limit database contains information about the different speed limit zones for each traveled path. Our DSA senses speed limits and curves of the road using GPS information and ensures smooth speed transitions between current and ideal speeds. Through experimental simulations with different control algorithms on real and simulated datasets, we prove that our method is able to significantly reduce lateral errors on sharp curves, to respect speed limits and consequently increase safety and comfort for the passenger. PMID:28613251
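Two ingredients of such a method can be illustrated compactly: the local turn radius recoverable from three consecutive, locally projected GPS points, and the ideal curve speed from a lateral-acceleration limit. The limit value and the coordinates below are assumptions, not parameters from the paper.

```python
import numpy as np

def turn_radius(p1, p2, p3):
    """Radius of the circle through three planar points (local x-y, m),
    via R = abc / (4 * area). Returns inf for collinear points."""
    a = np.linalg.norm(p2 - p3)
    b = np.linalg.norm(p1 - p3)
    c = np.linalg.norm(p1 - p2)
    area = 0.5 * abs((p2[0] - p1[0]) * (p3[1] - p1[1])
                     - (p2[1] - p1[1]) * (p3[0] - p1[0]))
    return np.inf if area == 0 else a * b * c / (4.0 * area)

def curve_speed(radius, a_lat_max=2.0):
    """Comfortable curve speed: v = sqrt(a_lat_max * R). The lateral
    acceleration limit (m/s^2) is an assumed comfort value."""
    return np.sqrt(a_lat_max * radius)

R = turn_radius(np.array([0.0, 0.0]), np.array([20.0, 2.0]), np.array([40.0, 8.0]))
print(f"R = {R:.0f} m, ideal speed = {curve_speed(R):.1f} m/s")
```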
Space Shuttle Debris Impact Tool Assessment Using the Modern Design of Experiments
NASA Technical Reports Server (NTRS)
DeLoach, Richard; Rayos, Elonsio M.; Campbell, Charles H.; Rickman, Steven L.; Larsen, Curtis E.
2007-01-01
Complex computer codes are used to estimate thermal and structural reentry loads on the Shuttle Orbiter induced by ice and foam debris impact during ascent. Such debris can create cavities in the Shuttle Thermal Protection System. The sizes and shapes of these cavities are approximated to accommodate a code limitation that requires simple "shoebox" geometries to describe the cavities -- rectangular areas and planar walls that are at constant angles with respect to vertical. These approximations induce uncertainty in the code results. The Modern Design of Experiments (MDOE) has recently been applied to develop a series of resource-minimal computational experiments designed to generate low-order polynomial graduating functions to approximate the more complex underlying codes. These polynomial functions were then used to propagate cavity geometry errors to estimate the uncertainty they induce in the reentry load calculations performed by the underlying code. This paper describes a methodological study focused on evaluating the application of MDOE to future operational codes in a rapid and low-cost way to assess the effects of cavity geometry uncertainty.
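The idea of the graduating function is to replace the expensive code with a cheap low-order polynomial fitted to a small designed set of runs, then propagate geometry uncertainty through the polynomial by Monte Carlo. A sketch with a placeholder "code" and assumed input distributions; none of the numbers come from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def expensive_code(depth, angle):   # placeholder for the reentry-load code
    return 100 + 8 * depth + 0.5 * angle + 0.3 * depth * angle

# Design points (e.g. from a small factorial design) and code responses.
D, A = np.meshgrid(np.linspace(0.5, 2.0, 4), np.linspace(5, 25, 4))
d, a = D.ravel(), A.ravel()
y = expensive_code(d, a)

# Least-squares fit of a quadratic graduating function.
X = np.column_stack([np.ones_like(d), d, a, d * a, d**2, a**2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def surrogate(depth, angle):
    return np.column_stack([np.ones_like(depth), depth, angle,
                            depth * angle, depth**2, angle**2]) @ coef

# Propagate cavity-geometry uncertainty (assumed Gaussian) cheaply.
depth_mc = rng.normal(1.2, 0.1, 10000)
angle_mc = rng.normal(15.0, 2.0, 10000)
loads = surrogate(depth_mc, angle_mc)
print(loads.mean(), loads.std())    # induced load uncertainty
```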
MONTE CARLO SIMULATIONS OF PERIODIC PULSED REACTOR WITH MOVING GEOMETRY PARTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, Yan; Gohar, Yousry
2015-11-01
In a periodic pulsed reactor, the reactor state varies periodically from slightly subcritical to slightly prompt supercritical to produce periodic power pulses. Such a periodic state change is accomplished by a periodic movement of specific reactor parts, such as control rods or reflector sections. The analysis of such a reactor is difficult to perform with current reactor physics computer programs. Based on past experience, the point kinetics approximation gives considerable errors in predicting the magnitude and the shape of the power pulse if the reactor has significantly different neutron lifetimes in different zones. To accurately simulate the dynamics of this type of reactor, a Monte Carlo procedure using the TRCL/TR transformation capability of the MCNP/MCNPX computer programs is utilized to model the movable reactor parts. In this paper, two algorithms simulating the geometry part movements during a neutron history tracking have been developed. Several test cases have been developed to evaluate these procedures. The numerical test cases have shown that the developed algorithms can be utilized to simulate the reactor dynamics with movable geometry parts.
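For contrast with the Monte Carlo treatment, the point-kinetics approximation criticized above can be written down in a few lines. This sketch integrates textbook one-delayed-group point kinetics through a train of reactivity pulses; all parameter values are illustrative, and this is exactly the kind of model that ignores zone-dependent neutron lifetimes.

```python
# One-delayed-group point kinetics with a periodic reactivity pulse.
beta, lam, Lam = 0.0065, 0.08, 1e-5   # delayed fraction, decay (1/s), gen. time (s)

def rho(t, period=0.1, width=0.004, peak=0.0068):
    """Periodic reactivity: brief, slightly prompt-supercritical spikes."""
    return peak if (t % period) < width else -0.002

n = 1.0                        # relative power
C = n * beta / (lam * Lam)     # precursors at equilibrium
dt = 1e-5
for step in range(int(0.3 / dt)):              # integrate 0.3 s (explicit Euler)
    t = step * dt
    dn = ((rho(t) - beta) / Lam * n + lam * C) * dt
    dC = (beta / Lam * n - lam * C) * dt
    n, C = n + dn, C + dC
print(f"relative power after 0.3 s: {n:.3g}")
```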
Leblond, Frederic; Tichauer, Kenneth M.; Pogue, Brian W.
2010-01-01
The spatial resolution and recovered contrast of images reconstructed from diffuse fluorescence tomography data are limited by the high scattering properties of light propagation in biological tissue. As a result, the image reconstruction process can be exceedingly vulnerable to inaccurate prior knowledge of tissue optical properties and stochastic noise. In light of these limitations, the optimal source-detector geometry for a fluorescence tomography system is non-trivial, requiring analytical methods to guide design. Analysis of the singular value decomposition of the matrix to be inverted for image reconstruction is one potential approach, providing key quantitative metrics, such as singular image mode spatial resolution and singular data mode frequency as a function of singular mode. In the present study, these metrics are used to analyze the effects of different sources of noise and model errors as related to image quality in the form of spatial resolution and contrast recovery. The image quality is demonstrated to be inherently noise-limited even when detection geometries were increased in complexity to allow maximal tissue sampling, suggesting that detection noise characteristics outweigh detection geometry for achieving optimal reconstructions. PMID:21258566
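The singular-mode metrics described above come from the SVD of the sensitivity (Jacobian) matrix of the forward model. A sketch of the basic calculation, counting the singular modes that survive an assumed detection-noise floor; the Jacobian here is synthetic, with an imposed sensitivity decay.

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic Jacobian: 48 source-detector pairs x 400 image voxels, with a
# decaying sensitivity spectrum standing in for a diffuse forward model.
J = rng.normal(size=(48, 400)) * np.geomspace(1.0, 1e-6, 400)

U, s, Vt = np.linalg.svd(J, full_matrices=False)

noise_floor = 1e-3 * s[0]               # assumed relative detection noise
usable = int(np.sum(s > noise_floor))   # modes recoverable above the noise
print(f"usable singular modes: {usable} of {len(s)}")
```

Comparing this count across candidate source-detector geometries, at a fixed noise level, is one way to quantify the paper's observation that noise characteristics, not geometry alone, limit image quality.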
NASA Technical Reports Server (NTRS)
Schneider, Judy; Nunes, Arthur C., Jr.; Brendel, Michael S.
2010-01-01
Although friction stir welding (FSW) was patented in 1991, process development has been based upon trial and error and the literature still exhibits little understanding of the mechanisms determining weld structure and properties. New concepts emerging from a better understanding of these mechanisms enhance the ability of FSW engineers to think about the FSW process in new ways, inevitably leading to advances in the technology. A kinematic approach in which the FSW flow process is decomposed into several simple flow components has been found to explain the basic structural features of FSW welds and to relate them to tool geometry and process parameters. Using this modelling approach, this study reports on a correlation between the features of the weld nugget, process parameters, weld tool geometry, and weld strength. This correlation presents a way to select process parameters for a given tool geometry so as to optimize weld strength. It also provides clues that may ultimately explain why the weld strength varies within the sample population.
NASA Technical Reports Server (NTRS)
Seyffert, A. S.; Venter, C.; Johnson, T. J.; Harding, A. K.
2012-01-01
Since the launch of the Large Area Telescope (LAT) on board the Fermi spacecraft in June 2008, the number of observed gamma-ray pulsars has increased dramatically. A large number of these are also observed at radio frequencies. Constraints on the viewing geometries of 5 of the 6 gamma-ray pulsars exhibiting single-peaked gamma-ray profiles were derived using high-quality radio polarization data [1]. We obtain independent constraints on the viewing geometries of all 6 by using a geometric emission code to model the Fermi LAT and radio light curves (LCs). We find fits for the magnetic inclination and observer angles by searching the solution space by eye. Our results are generally consistent with those previously obtained [1], although we do find small differences in some cases. We will indicate how the gamma-ray and radio pulse shapes as well as their relative phase lags lead to constraints in the solution space. Values for the flux correction factor (f_Omega) corresponding to the fits are also derived (with errors).
Errors from approximation of ODE systems with reduced order models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vassilevska, Tanya
2016-12-30
This code calculates the error incurred when systems of ordinary differential equations (ODEs) are approximated using Proper Orthogonal Decomposition (POD) Reduced Order Model (ROM) methods, and compares and analyzes the errors for two POD ROM variants. The first variant is the standard POD ROM; the second is a modification of the method that uses the values of the time derivatives (a.k.a. time-derivative snapshots). The code compares the errors from the two variants under different conditions.
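The standard POD variant builds its basis from the SVD of a snapshot matrix and measures how much of the solution the truncated basis can represent. A sketch of that baseline calculation on synthetic snapshots; the time-derivative-snapshot variant would simply append derivative snapshots to the matrix before the SVD.

```python
import numpy as np

# Synthetic ODE/PDE solution snapshots: columns are states at times t.
t = np.linspace(0, 1, 200)
x = np.linspace(0, 1, 100)
snapshots = np.array([np.sin(np.pi * x) * np.exp(-ti) +
                      0.1 * np.sin(3 * np.pi * x) * np.cos(5 * ti)
                      for ti in t]).T                    # shape (n_x, n_t)

U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)

for r in (1, 2, 3):
    Phi = U[:, :r]                                       # POD basis of size r
    err = np.linalg.norm(snapshots - Phi @ (Phi.T @ snapshots))
    print(f"rank {r}: projection error {err:.3e}")
```

The projection error shown here is a lower bound on the ROM error; the full comparison would also integrate the reduced ODE system and compare trajectories.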
Classification and reduction of pilot error
NASA Technical Reports Server (NTRS)
Rogers, W. H.; Logan, A. L.; Boley, G. D.
1989-01-01
Human error is a primary or contributing factor in about two-thirds of commercial aviation accidents worldwide. With the ultimate goal of reducing pilot error accidents, this contract effort is aimed at understanding the factors underlying error events and reducing the probability of certain types of errors by modifying underlying factors such as flight deck design and procedures. A review of the literature relevant to error classification was conducted. Classification includes categorizing types of errors, the information processing mechanisms and factors underlying them, and identifying factor-mechanism-error relationships. The classification scheme developed by Jens Rasmussen was adopted because it provided a comprehensive yet basic error classification shell or structure that could easily accommodate addition of details on domain-specific factors. For these purposes, factors specific to the aviation environment were incorporated. Hypotheses concerning the relationship of a small number of underlying factors, information processing mechanisms, and error types identified in the classification scheme were formulated. ASRS data were reviewed and a simulation experiment was performed to evaluate and quantify the hypotheses.
Brown, Judith A.; Bishop, Joseph E.
2016-07-20
An a posteriori error-estimation framework is introduced to quantify and reduce modeling errors resulting from approximating complex mesoscale material behavior with a simpler macroscale model. Such errors may be prevalent when modeling welds and additively manufactured structures, where spatial variations and material textures may be present in the microstructure. We consider a case where a <100> fiber texture develops in the longitudinal scanning direction of a weld. Transversely isotropic elastic properties are obtained through homogenization of a microstructural model with this texture and are considered the reference weld properties within the error-estimation framework. Conversely, isotropic elastic properties are considered approximate weld properties, since they contain no representation of texture. Errors introduced by using isotropic material properties to represent a weld are assessed through a quantified error bound in the elastic regime. Lastly, an adaptive error reduction scheme is used to determine the optimal spatial variation of the isotropic weld properties to reduce the error bound.
Piecewise compensation for the nonlinear error of fiber-optic gyroscope scale factor
NASA Astrophysics Data System (ADS)
Zhang, Yonggang; Wu, Xunfeng; Yuan, Shun; Wu, Lei
2013-08-01
Fiber-Optic Gyroscope (FOG) scale factor nonlinear error results in errors in a Strapdown Inertial Navigation System (SINS). To reduce the nonlinear error of the FOG scale factor in SINS, a compensation method based on piecewise curve fitting of the FOG output is proposed in this paper. Firstly, the causes of FOG scale factor error are introduced and the definition of nonlinear degree is provided. Then the output range of the FOG is divided into several small pieces, and curve fitting is performed in each piece to obtain its scale factor parameters. Different scale factor parameters are used in different pieces to improve FOG output precision. These parameters are identified using a three-axis turntable, so that the nonlinear error of the FOG scale factor can be reduced. Finally, a three-axis swing experiment verifies that the proposed method can reduce the attitude output errors of SINS by compensating the nonlinear error of the FOG scale factor, and thus improve the precision of navigation. The results of the experiments also demonstrate that the compensation scheme is easy to implement. It can effectively compensate the nonlinear error of the FOG scale factor with only slightly increased computational complexity. This method can be used in FOG-based inertial technology to improve precision.
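A minimal sketch of the piecewise idea: split the operating range into segments, fit a low-order polynomial per segment mapping raw output back to true rate, and select the segment at run time. The rates, segment edges, and synthetic nonlinearity below are illustrative, not the paper's turntable data.

```python
import numpy as np

# Synthetic FOG calibration data: true rates vs. mildly nonlinear output.
rate = np.linspace(-100, 100, 201)                       # deg/s
output = rate + 2e-5 * rate**2 * np.sign(rate)           # nonlinear scale factor

edges = [-100, -30, 30, 100]                             # piece boundaries (deg/s)
fits = []
for lo, hi in zip(edges[:-1], edges[1:]):
    m = (rate >= lo) & (rate <= hi)
    fits.append(((lo, hi), np.polyfit(output[m], rate[m], deg=2)))

def compensate(raw):
    """Map raw FOG output back to rate using the piece it falls in;
    out-of-range readings fall back to the last piece's fit."""
    for (lo, hi), c in fits:
        if lo <= raw <= hi:
            return np.polyval(c, raw)
    return np.polyval(fits[-1][1], raw)

print(compensate(50.05))   # ~50 deg/s after compensation
```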
Representing Misalignments of the STAR Geometry Model using AgML
NASA Astrophysics Data System (ADS)
Webb, Jason C.; Lauret, Jérôme; Perevotchikov, Victor; Smirnov, Dmitri; Van Buren, Gene
2017-10-01
The STAR Heavy Flavor Tracker (HFT) was designed to provide high-precision tracking for the identification of charmed hadron decays in heavy-ion collisions at RHIC. It consists of three independently mounted subsystems, providing four precision measurements along the track trajectory, with the goal of pointing decay daughters back to vertices displaced by less than 100 microns from the primary event vertex. The ultimate efficiency and resolution of the physics analysis will be driven by the quality of the simulation and reconstruction of events in heavy-ion collisions. In particular, it is important that the geometry model properly accounts for the relative misalignments of the HFT subsystems, along with the alignment of the HFT relative to STAR's primary tracking detector, the Time Projection Chamber (TPC). The Abstract Geometry Modeling Language (AgML) provides a single description of the STAR geometry, generating both our simulation (GEANT 3) and reconstruction geometries (ROOT). AgML implements an ideal detector model, while misalignments are stored separately in database tables. These have historically been applied at the hit level. Simulated detector hits are projected from their ideal position along the track's trajectory until they intersect the misaligned detector volume, where the struck detector element is calculated for hit digitization. This scheme has worked well as hit errors have been negligible compared with the size of the sensitive volumes. The precision and complexity of the HFT detector require us to apply misalignments to the detector volumes themselves. In this paper we summarize the extension of the AgML language and support libraries to enable the static misalignment of our reconstruction and simulation geometries, discussing the design goals, limitations and path to full misalignment support in ROOT/VMC-based simulation.