Safety and Performance Analysis of the Non-Radar Oceanic/Remote Airspace In-Trail Procedure
NASA Technical Reports Server (NTRS)
Carreno, Victor A.; Munoz, Cesar A.
2007-01-01
This document presents a safety and performance analysis of the nominal case for the In-Trail Procedure (ITP) in non-radar oceanic/remote airspace. The analysis estimates the risk of collision between the aircraft performing the ITP and a reference aircraft. The risk of collision is estimated only for the ITP maneuver and is based on nominal operating conditions. The analysis does not consider human error, communication error conditions, or the normal risk of flight present in current operations. The hazards associated with human error and communication errors are evaluated in an Operational Hazards Analysis presented elsewhere.
Collaborative recall of details of an emotional film.
Wessel, Ineke; Zandstra, Anna Roos E; Hengeveld, Hester M E; Moulds, Michelle L
2015-01-01
Collaborative inhibition refers to the phenomenon that when several people work together to produce a single memory report, they typically produce fewer items than when the unique items in the individual reports of the same number of participants are combined (i.e., nominal recall). Yet, apart from this negative effect, collaboration may be beneficial in that group members remove errors from a collaborative report. Collaborative inhibition studies on memory for emotional stimuli are scarce. Therefore, the present study examined both collaborative inhibition and collaborative error reduction in the recall of the details of emotional material in a laboratory setting. Female undergraduates (n = 111) viewed a film clip of a fatal accident and subsequently engaged in either collaborative (n = 57) or individual recall (n = 54) in groups of three. The results show that, across several detail categories, collaborating groups recalled fewer details than nominal groups. However, overall, nominal recall produced more errors than collaborative recall. The present results extend earlier findings on both collaborative inhibition and error reduction to the recall of affectively laden material. These findings may have implications for the applied fields of forensic and clinical psychology.
Alternative Attitude Commanding and Control for Precise Spacecraft Landing
NASA Technical Reports Server (NTRS)
Singh, Gurkirpal
2004-01-01
A report proposes an alternative method of control for precision landing on a remote planet. In the traditional method, the attitude of a spacecraft is required to track a commanded translational acceleration vector, which is generated at each time step by solving a two-point boundary value problem. No requirement of continuity is imposed on the acceleration. The translational acceleration does not necessarily vary smoothly. Tracking of a non-smooth acceleration causes the vehicle attitude to exhibit undesirable transients and poor pointing stability behavior. In the alternative method, the two-point boundary value problem is not solved at each time step. A smooth reference position profile is computed. The profile is recomputed only when the control errors get sufficiently large. The nominal attitude is still required to track the smooth reference acceleration command. A steering logic is proposed that controls the position and velocity errors about the reference profile by perturbing the attitude slightly about the nominal attitude. The overall pointing behavior is therefore smooth, greatly reducing the degree of pointing instability.
Real-Time Minimization of Tracking Error for Aircraft Systems
NASA Technical Reports Server (NTRS)
Garud, Sumedha; Kaneshige, John T.; Krishnakumar, Kalmanje S.; Kulkarni, Nilesh V.; Burken, John
2013-01-01
This technology presents a novel, stable, discrete-time adaptive law for flight control in a direct adaptive control (DAC) framework. In the absence of errors, the original control design is tuned for optimal performance. Adaptive control works toward achieving nominal performance whenever the design has modeling uncertainties or errors, or when the vehicle undergoes a substantial flight configuration change. The baseline controller uses dynamic inversion with proportional-integral augmentation. On-line adaptation of this control law is achieved by providing a parameterized augmentation signal to the dynamic inversion block. The parameters of this augmentation signal are updated to achieve the nominal desired error dynamics. If the system senses that at least one aircraft component is experiencing an excursion and that the component's return toward its reference value is not proceeding according to the expected controller characteristics, then the neural network (NN) model of aircraft operation may be changed.
NASA Astrophysics Data System (ADS)
Harmanec, Petr; Prša, Andrej
2011-08-01
The increasing precision of astronomical observations of stars and stellar systems is gradually reaching a level where the use of slightly different values of the solar mass, radius, and luminosity, as well as different values of fundamental physical constants, can lead to measurable systematic differences in the determination of basic physical properties. An equivalent issue with an inconsistent value of the speed of light was resolved by adopting a nominal value that is constant and has no error associated with it. Analogously, we suggest that the systematic error in stellar parameters may be eliminated by (1) replacing the solar radius R⊙ and luminosity L⊙ with nominal values that are by definition exact and expressed in SI units; (2) computing stellar masses in terms of M⊙ by noting that the measurement error of the product GM⊙ is 5 orders of magnitude smaller than the error in G; (3) computing stellar masses and temperatures in SI units by using these derived values; and (4) clearly stating the reference for the values of the fundamental physical constants used. We discuss the need for and demonstrate the advantages of such a paradigm shift.
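As a back-of-the-envelope illustration of point (2), one can compare the relative uncertainties of GM⊙ and G directly. The numerical values below are commonly cited CODATA/ephemeris figures, included for illustration only; they are not taken from this abstract.

```python
# Illustration: the product G*M_sun is known far more precisely than G
# itself, so masses expressed via GM avoid inheriting the uncertainty in G.
# Values are commonly cited figures, for illustration only.

GM_SUN = 1.32712440018e20   # m^3 s^-2, heliocentric gravitational constant
GM_SUN_ERR = 8e9            # m^3 s^-2, approximate standard uncertainty
G = 6.67430e-11             # m^3 kg^-1 s^-2, CODATA 2018
G_ERR = 1.5e-15             # m^3 kg^-1 s^-2, approximate standard uncertainty

M_SUN = GM_SUN / G          # solar mass in kg; its error is dominated by G
print(f"M_sun = {M_SUN:.4e} kg")
print(f"relative error of G*M_sun: {GM_SUN_ERR / GM_SUN:.1e}")  # ~6e-11
print(f"relative error of G:       {G_ERR / G:.1e}")            # ~2e-5
```

The two printed relative errors differ by roughly five orders of magnitude, which is the point made in item (2) above.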
An algorithm for control system design via parameter optimization. M.S. Thesis
NASA Technical Reports Server (NTRS)
Sinha, P. K.
1972-01-01
An algorithm for design via parameter optimization has been developed for linear time-invariant control systems, based on the model reference adaptive control concept. A cost functional is defined to evaluate the system response relative to the nominal response; in general it involves the error between the system and nominal responses, its derivatives, and the control signals. A program for the practical implementation of this algorithm has been developed, with the computational scheme for evaluating the performance index based on Lyapunov's theorem for the stability of linear time-invariant systems.
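The kind of performance index described above can be sketched as a quadratic cost of the error, its derivative, and the control signal. The weights and the trapezoidal quadrature below are illustrative assumptions, not the thesis's actual formulation.

```python
import numpy as np

def cost_functional(t, y, y_nom, u, q=1.0, qd=0.1, r=0.01):
    """Quadratic cost of the error between system and nominal responses,
    its derivative, and the control effort (hypothetical weights)."""
    e = y - y_nom                          # response error
    e_dot = np.gradient(e, t)              # error derivative
    integrand = q * e**2 + qd * e_dot**2 + r * u**2
    # trapezoidal integration over the time grid
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t)))
```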
Sampling command generator corrects for noise and dropouts in recorded data
NASA Technical Reports Server (NTRS)
Anderson, T. O.
1973-01-01
Generator measures period between zero crossings of reference signal and accepts as correct timing points only those zero crossings which occur acceptably close to nominal time predicted from last accepted command. Unidirectional crossover points are used exclusively so errors from analog nonsymmetry of crossover detector are avoided.
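A minimal sketch of the acceptance logic described above, assuming sorted crossing times and hypothetical tolerance handling (the report gives no numerical details):

```python
def track_crossings(crossings, t0, nominal_period, tolerance):
    """Filter noisy zero-crossing times: accept only crossings close to the
    time predicted from the last accepted one; coast through dropouts by
    advancing the prediction a whole number of nominal periods."""
    accepted, t_pred = [t0], t0 + nominal_period
    for t in crossings:                     # crossings assumed sorted
        if abs(t - t_pred) <= tolerance:
            accepted.append(t)
            t_pred = t + nominal_period
        elif t > t_pred + tolerance:
            # dropout: step the prediction forward until it overtakes t
            while t_pred + tolerance < t:
                t_pred += nominal_period
            if abs(t - t_pred) <= tolerance:
                accepted.append(t)
                t_pred = t + nominal_period
        # crossings arriving early (noise) are simply rejected
    return accepted
```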
NASA Technical Reports Server (NTRS)
Powell, R. W.; Stone, H. W.
1980-01-01
A six-degree-of-freedom simulation analysis was performed for the space shuttle orbiter entry from Mach 10 to Mach 2.5 with realistic off-nominal conditions, using the flight control system referred to as the November 1976 Integrated Digital Autopilot. The off-nominal conditions included: (1) aerodynamic uncertainties in extrapolating from wind-tunnel to flight characteristics, (2) errors in deriving angle of attack from onboard instrumentation, (3) failure of two of the four reaction-control-system thrusters on each side (the design specification), and (4) lateral center-of-gravity offset. Many combinations of these off-nominal conditions resulted in loss of the orbiter. Control-system modifications were identified to prevent this possibility.
NASA Astrophysics Data System (ADS)
Li, Shuang; Peng, Yuming
2012-01-01
In order to accurately deliver an entry vehicle through the Martian atmosphere to the prescribed parachute deployment point, active Mars entry guidance is essential. This paper addresses the issue of Mars atmospheric entry guidance using the command generator tracker (CGT) based direct model reference adaptive control to reduce the adverse effect of the bounded uncertainties on atmospheric density and aerodynamic coefficients. Firstly, the nominal drag acceleration profile meeting a variety of constraints is planned off-line in the longitudinal plane as the reference model to track. Then, the CGT based direct model reference adaptive controller and the feed-forward compensator are designed to robustly track the aforementioned reference drag acceleration profile and to effectively reduce the downrange error. Afterwards, the heading alignment logic is adopted in the lateral plane to reduce the crossrange error. Finally, the validity of the guidance algorithm proposed in this paper is confirmed by Monte Carlo simulation analysis.
ERIC Educational Resources Information Center
Thorne, John C.; Coggins, Truman
2008-01-01
Background: Foetal Alcohol Spectrum Disorders (FASD) include the range of disabilities that occur in children exposed to alcohol during pregnancy, with Foetal Alcohol Syndrome (FAS) on the severe end of the spectrum. Clinical research has documented a range of cognitive, social, and communication deficits in FASD and it indicates the need for…
NASA Astrophysics Data System (ADS)
Lee, J.
2013-12-01
Ground-Based Augmentation Systems (GBAS) support aircraft precision approach and landing by providing differential GPS corrections to aviation users. For GBAS applications, most ionospheric errors are removed by applying the differential corrections. However, ionospheric correction errors may remain because of ionosphere spatial decorrelation between the GBAS ground facility and users. Thus, the standard deviation of ionosphere spatial decorrelation (σvig) is estimated and included in the computation of error bounds on the user position solution. The σvig of 4 mm/km, derived for the Conterminous United States (CONUS), bounds one-sigma ionospheric spatial gradients under nominal conditions (including active, but not stormy, conditions) with an adequate safety margin [1]. The conservatism of the current σvig, which is fixed to a constant value for all non-stormy conditions, could be mitigated by subdividing ionospheric conditions into several classes and using a different σvig for each class. This new concept, real-time σvig adaptation, will be possible if the level of ionospheric activity can be well classified based on space weather intensity. This paper studies the correlation between the statistics of nominal ionospheric spatial gradients and space weather indices. The analysis was carried out using two sets of data collected from the Continuously Operating Reference Station (CORS) network: 9 consecutive (nominal and ionospherically active) days in 2004 and 19 consecutive (relatively quiet) days in 2010. Precise ionospheric delay estimates are obtained using the simplified truth processing method, and vertical ionospheric gradients are computed using the well-known 'station pair method' [2]. The remaining biases, which include carrier-phase leveling errors and inter-frequency bias (IFB) calibration errors, are reduced by applying linear slip detection thresholds. The σvig was inflated to overbound the distribution of vertical ionospheric gradients with the required confidence level. Using the daily maximum values of σvig, day-to-day variations of spatial gradients are compared to those of two space weather indices: the Disturbance Storm Time (Dst) index and the Interplanetary Magnetic Field Bz (IMF Bz). The day-to-day variations of both space weather indices showed good agreement with those of the daily maximum σvig. The results demonstrate that ionospheric gradient statistics are highly correlated with space weather indices on nominal and off-nominal days. Further investigation of this relationship would facilitate prediction of upcoming ionospheric behavior based on space weather information and adjustment of σvig in real time. Consequently, it would improve GBAS availability by adding external information to operations. [1] Lee, J., S. Pullen, S. Datta-Barua, and P. Enge (2007), Assessment of ionosphere spatial decorrelation for GPS-based aircraft landing systems, J. Aircraft, 44(5), 1662-1669, doi:10.2514/1.28199. [2] Jung, S., and J. Lee (2012), Long-term ionospheric anomaly monitoring for ground based augmentation systems, Radio Sci., 47, RS4006, doi:10.1029/2012RS005016.
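A minimal sketch of the 'station pair method' and the σvig overbounding step described above; the function names, unit handling, and inflation factor are assumptions, not the paper's implementation.

```python
import numpy as np

def vertical_gradient_mm_per_km(iono_delay_a_m, iono_delay_b_m, baseline_km):
    """Station-pair sketch: difference the vertical ionospheric delays (m)
    estimated at two nearby reference stations and divide by the baseline
    length to obtain a spatial gradient in mm/km."""
    return (iono_delay_a_m - iono_delay_b_m) * 1000.0 / baseline_km

def inflated_sigma_vig(gradients_mm_per_km, inflation=1.0):
    """Overbound the gradient distribution with a zero-mean Gaussian: take
    the sample sigma and apply an inflation factor chosen to meet the
    required confidence (the factor here is a placeholder)."""
    return inflation * np.std(gradients_mm_per_km)
```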
On the representation and estimation of spatial uncertainty. [for mobile robots]
NASA Technical Reports Server (NTRS)
Smith, Randall C.; Cheeseman, Peter
1987-01-01
This paper describes a general method for estimating the nominal relationship and expected error (covariance) between coordinate frames representing the relative locations of objects. The frames may be known only indirectly through a series of spatial relationships, each with its associated error, arising from diverse causes, including positioning errors, measurement errors, or tolerances in part dimensions. This estimation method can be used to answer such questions as whether a camera attached to a robot is likely to have a particular reference object in its field of view. The calculated estimates agree well with those from an independent Monte Carlo simulation. The method makes it possible to decide in advance whether an uncertain relationship is known accurately enough for some task and, if not, how much of an improvement in locational knowledge a proposed sensor will provide. The method presented can be generalized to six degrees of freedom and provides a practical means of estimating the relationships (position and orientation) among objects, as well as estimating the uncertainty associated with the relationships.
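A planar (3-DOF) sketch of compounding two uncertain spatial relationships with first-order covariance propagation, in the spirit of the method described above; the paper itself generalizes this to six degrees of freedom.

```python
import numpy as np

def compound(x1, P1, x2, P2):
    """Compound two uncertain 2-D relationships (x, y, theta), propagating
    covariance to first order via the Jacobians of the composition."""
    x, y, th = x1
    dx, dy, dth = x2
    c, s = np.cos(th), np.sin(th)
    x3 = np.array([x + c * dx - s * dy,
                   y + s * dx + c * dy,
                   th + dth])
    # Jacobian of the composite w.r.t. the first relationship
    J1 = np.array([[1.0, 0.0, -s * dx - c * dy],
                   [0.0, 1.0,  c * dx - s * dy],
                   [0.0, 0.0,  1.0]])
    # Jacobian of the composite w.r.t. the second relationship
    J2 = np.array([[c, -s, 0.0],
                   [s,  c, 0.0],
                   [0.0, 0.0, 1.0]])
    P3 = J1 @ P1 @ J1.T + J2 @ P2 @ J2.T   # first-order covariance
    return x3, P3
```

Chaining such compositions along a series of relationships yields the estimated relative location and its covariance, which can then be tested against a task requirement (for example, whether a reference object falls within a camera's field of view with high probability).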
ERIC Educational Resources Information Center
Pine, Julian M.; Rowland, Caroline F.; Lieven, Elena V. M.; Theakston, Anna L.
2005-01-01
One of the most influential recent accounts of pronoun case-marking errors in young children's speech is Schutze & Wexler's (1996) Agreement/Tense Omission Model (ATOM). The ATOM predicts that the rate of agreeing verbs with non-nominative subjects will be so low that such errors can be reasonably disregarded as noise in the data. The present…
Comparative evaluation of ultrasound scanner accuracy in distance measurement
NASA Astrophysics Data System (ADS)
Branca, F. P.; Sciuto, S. A.; Scorza, A.
2012-10-01
The aim of the present study is to develop and compare two different automatic methods for accuracy evaluation in ultrasound phantom measurements on B-mode images. Both yield the relative error e between distances measured by 14 brand-new medical ultrasound scanners and the nominal distances between nylon wires embedded in a reference test object. The first method is based on least-squares estimation, while the second applies the mean value of the same distance evaluated at different locations in the ultrasound image (same-distance method). Results for both methods are presented and explained.
Modeling and control for closed environment plant production systems
NASA Technical Reports Server (NTRS)
Fleisher, David H.; Ting, K. C.; Janes, H. W. (Principal Investigator)
2002-01-01
A computer program was developed to study multiple crop production and control in controlled environment plant production systems. The program simulates crop growth and development under nominal and off-nominal environments. Time-series crop models for wheat (Triticum aestivum), soybean (Glycine max), and white potato (Solanum tuberosum) are integrated with a model-based predictive controller. The controller evaluates and compensates for effects of environmental disturbances on crop production scheduling. The crop models consist of a set of nonlinear polynomial equations, six for each crop, developed using multivariate polynomial regression (MPR). Simulated data from DSSAT crop models, previously modified for crop production in controlled environments with hydroponics under elevated atmospheric carbon dioxide concentration, were used for the MPR fitting. The model-based predictive controller adjusts light intensity, air temperature, and carbon dioxide concentration set points in response to environmental perturbations. Control signals are determined from minimization of a cost function, which is based on the weighted control effort and squared-error between the system response and desired reference signal.
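A one-step sketch of the set-point adjustment described above: choose environmental set points that minimize a cost combining squared tracking error and weighted control effort. The function names and weights are hypothetical stand-ins for the MPR crop model and controller tuning.

```python
import numpy as np
from scipy.optimize import minimize

def choose_setpoints(u_nominal, predict_response, y_ref, w_u=0.1):
    """Pick light, temperature, and CO2 set points u that minimize squared
    error against the reference crop response plus weighted control effort.
    `predict_response` stands in for the MPR crop model (hypothetical)."""
    def cost(u):
        e = predict_response(u) - y_ref            # squared-error term
        du = u - u_nominal                         # control-effort term
        return float(e @ e + w_u * (du @ du))
    res = minimize(cost, u_nominal, method="Nelder-Mead")
    return res.x
```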
Exploring Reference Group Effects on Teachers' Nominations of Gifted Students
ERIC Educational Resources Information Center
Rothenbusch, Sandra; Zettler, Ingo; Voss, Thamar; Lösch, Thomas; Trautwein, Ulrich
2016-01-01
Teachers are often asked to nominate students for enrichment programs for gifted children, and studies have repeatedly indicated that students' intelligence is related to their likelihood of being nominated as gifted. However, it is unknown whether class-average levels of intelligence influence teachers' nominations as suggested by theory--and…
Aquarius L-Band Radiometers Calibration Using Cold Sky Observations
NASA Technical Reports Server (NTRS)
Dinnat, Emmanuel P.; Le Vine, David M.; Piepmeier, Jeffrey R.; Brown, Shannon T.; Hong, Liang
2015-01-01
An important element in the calibration plan for the Aquarius radiometers is to look at the cold sky. This involves rotating the satellite 180 degrees from its nominal Earth-viewing configuration to point the main beams at the celestial sky. At L-band, the cold sky provides a stable, well-characterized scene to be used as a calibration reference. This paper describes the cold sky calibration for Aquarius and how it is used as part of the absolute calibration. Cold sky observations helped establish the radiometer bias, by correcting for an error in the spillover lobe of the antenna pattern, and helped monitor the long-term radiometer drift.
NASA Astrophysics Data System (ADS)
Swastika, Windra
2017-03-01
A system for recognizing the nominal value of banknotes has been developed using an Artificial Neural Network (ANN). ANN with back propagation has one disadvantage: the learning process is very slow (or never reaches the target) when the numbers of iterations, weights, and samples are large. One way to speed up the learning process is the Quickprop method. Quickprop is based on Newton's method and speeds up learning by assuming that the error E is a parabolic function of each weight adjustment; the goal is to drive the error gradient E' to zero. In our system, we use 5 nominal values of Indonesian currency: 1,000 IDR, 2,000 IDR, 5,000 IDR, 10,000 IDR, and 50,000 IDR. One surface of each denomination was scanned and digitally processed, yielding 40 patterns used as the training set in the ANN system. The effectiveness of the Quickprop method in the ANN system was validated by two factors: (1) the number of iterations required to reach an error below 0.1, and (2) the accuracy of predicting nominal values from the input. Our results show that the Quickprop method successfully shortens the learning process compared to the back propagation method. For 40 input patterns, Quickprop reached an error below 0.1 in only 20 iterations, while back propagation required 2,000 iterations. The prediction accuracy for both methods is higher than 90%.
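For reference, the Quickprop update for a single weight can be sketched as follows; the learning rate, growth limit, and fallback behavior are conventional choices, not details taken from the paper.

```python
def quickprop_step(w, grad, grad_prev, dw_prev, lr=0.1, mu=1.75):
    """One Quickprop weight update (sketch). Assuming the error is locally
    parabolic in the weight, jump toward the parabola's minimum using the
    current and previous gradients; fall back to plain gradient descent on
    the first step, and cap step growth by the factor mu."""
    if dw_prev == 0.0:                     # first step: gradient descent
        dw = -lr * grad
    else:
        denom = grad_prev - grad
        dw = dw_prev * grad / denom if denom != 0.0 else -lr * grad
        if abs(dw) > mu * abs(dw_prev):    # limit growth of the step
            dw = mu * abs(dw_prev) * (1.0 if dw > 0 else -1.0)
    return w + dw, dw
```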
A General Closed-Form Solution for the Lunar Reconnaissance Orbiter (LRO) Antenna Pointing System
NASA Technical Reports Server (NTRS)
Shah, Neerav; Chen, J. Roger; Hashmall, Joseph A.
2010-01-01
The National Aeronautics and Space Administration's (NASA) Lunar Reconnaissance Orbiter (LRO) launched on June 18, 2009 from the Cape Canaveral Air Force Station aboard an Atlas V launch vehicle into a direct insertion trajectory to the Moon. LRO, designed, built, and operated by the NASA Goddard Space Flight Center in Greenbelt, MD, is gathering crucial data on the lunar environment that will help astronauts prepare for long-duration lunar expeditions. During the mission's nominal life of one year, its six instruments and one technology demonstrator will find safe landing sites, locate potential resources, characterize the radiation environment, and test new technology. To date, LRO has been operating well within the bounds of its requirements and has been collecting excellent science data; images of the Apollo landing sites taken by the LRO Camera Narrow Angle Camera (LROC NAC) have appeared on cable news networks. A significant amount of information on LRO's science instruments is provided at the LRO mission webpage. LRO's Attitude Control System (ACS), in addition to controlling the orientation of the spacecraft, is also responsible for pointing the High Gain Antenna (HGA). A dual-axis (double-gimbaled) antenna, deployed on a meter-long boom, is required to point at a selected Earth ground station. Because of signal loss over the distance from the Moon to the Earth, the pointing precision required of the antenna system is very tight. Since the HGA has to be deployed in spaceflight, its exact geometry relative to the spacecraft body is uncertain. In addition, thermal distortions and mechanical errors/tolerances must be characterized and removed to realize the greatest gain from the antenna system. These reasons necessitate an in-flight calibration. Once in orbit around the Moon, a series of attitude maneuvers was conducted to provide the data needed to determine optimal parameters to load onboard, which would account for the environmental and mechanical errors at any antenna orientation. The nominal geometry for the HGA involves an outer gimbal axis that is exactly perpendicular to the inner gimbal axis, and a target direction that is exactly perpendicular to the outer gimbal axis. For this nominal geometry, closed-form solutions for the desired gimbal angles are straightforward to obtain for a desired target direction specified in the spacecraft body frame. If the gimbal axes and the antenna boresight are slightly misaligned, the nominal closed-form solution is not sufficiently accurate for computing the gimbal angles needed to point at a target. In this situation, either a general closed-form solution has to be developed for a mechanism with general geometries, or a correction scheme has to be applied to the nominal closed-form solutions. The latter was adopted for the Solar Dynamics Observatory (SDO), as can be seen in Reference 1; the former has been used for LRO. The advantage of the general closed-form solution is the use of a small number of parameters for the correction of nominal solutions, especially in regions near singularities. Singularities here refer to cases where the nominal closed-form solutions have two or more solutions. Algorithm complexity, however, is the disadvantage of the general closed-form solution.
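For the nominal geometry (mutually perpendicular axes), the closed-form gimbal angles reduce to two arctangents. The sketch below assumes a hypothetical axis convention (outer gimbal about body Z, inner gimbal about the rotated Y axis), which may differ from LRO's actual frames.

```python
import numpy as np

def nominal_gimbal_angles(target_body):
    """Closed-form gimbal angles for the nominal two-axis geometry: outer
    axis perpendicular to inner axis, boresight perpendicular to the outer
    axis. Axis conventions here are hypothetical."""
    x, y, z = target_body / np.linalg.norm(target_body)
    outer = np.arctan2(y, x)               # swing boresight to the target azimuth
    inner = np.arctan2(z, np.hypot(x, y))  # elevate boresight to the target
    return outer, inner
```

Near a singularity (for example, a target along the outer gimbal axis, where x = y = 0) the outer angle is undefined, which is exactly the region where the general closed-form solution described above earns its keep.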
Improved UTE-based attenuation correction for cranial PET-MR using dynamic magnetic field monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aitken, A. P.; Giese, D.; Tsoumpas, C.
2014-01-15
Purpose: Ultrashort echo time (UTE) MRI has been proposed as a way to produce segmented attenuation maps for PET, as it provides contrast between bone, air, and soft tissue. However, UTE sequences require samples to be acquired during rapidly changing gradient fields, which makes the resulting images prone to eddy current artifacts. In this work it is demonstrated that this can lead to misclassification of tissues in segmented attenuation maps (AC maps) and that these effects can be corrected for by measuring the true k-space trajectories using a magnetic field camera. Methods: The k-space trajectories during a dual echo UTE sequence were measured using a dynamic magnetic field camera. UTE images were reconstructed using nominal trajectories and again using the measured trajectories. A numerical phantom was used to demonstrate the effect of reconstructing with incorrect trajectories. Images of an ovine leg phantom were reconstructed and segmented, and the resulting attenuation maps were compared to a segmented map derived from a CT scan of the same phantom, using the Dice similarity measure. The feasibility of the proposed method was demonstrated in in vivo cranial imaging in five healthy volunteers. Simulated PET data were generated for one volunteer to show the impact of misclassifications on the PET reconstruction. Results: Images of the numerical phantom exhibited blurring and edge artifacts on the bone–tissue and air–tissue interfaces when nominal k-space trajectories were used, leading to misclassification of soft tissue as bone and misclassification of bone as air. Images of the tissue phantom and the in vivo cranial images exhibited the same artifacts. The artifacts were greatly reduced when the measured trajectories were used. For the tissue phantom, the Dice coefficient for bone in MR relative to CT was 0.616 using the nominal trajectories and 0.814 using the measured trajectories. The Dice coefficients for soft tissue were 0.933 and 0.934 for the nominal and measured cases, respectively. For air the corresponding figures were 0.991 and 0.993. Compared to an unattenuated reference image, the mean error in simulated PET uptake in the brain was 9.16% when AC maps derived from nominal trajectories were used, with errors in the SUVmax for simulated lesions in the range of 7.17%–12.19%. Corresponding figures when AC maps derived from measured trajectories were used were 0.34% (mean error) and −0.21% to +1.81% (lesions). Conclusions: Eddy current artifacts in UTE imaging can be corrected for by measuring the true k-space trajectories during a calibration scan and using them in subsequent image reconstructions. This improves the accuracy of segmented PET attenuation maps derived from UTE sequences and the subsequent PET reconstruction.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-14
... the Nominating & Governance Committee; (ii) amend the NASDAQ OMX PHLX, Inc. reference to reflect a... Nominating Committee also conducts certain governance functions such as consulting with the Board and the... ``Nominating Committee'' in the By-Laws, to the ``Nominating & Governance Committee'' so that the title of the...
Drug screening in medical examiner casework by high-resolution mass spectrometry (UPLC-MSE-TOF).
Rosano, Thomas G; Wood, Michelle; Ihenetu, Kenneth; Swift, Thomas A
2013-10-01
Postmortem drug findings yield important analytical evidence in medical examiner casework, and chromatography coupled with nominal mass spectrometry (MS) serves as the predominant general unknown screening approach. We report screening by ultra-performance liquid chromatography (UPLC) coupled with a hybrid quadrupole time-of-flight mass spectrometer (MS(E)-TOF), with comparison to previously validated nominal mass UPLC-MS and UPLC-MS-MS methods. UPLC-MS(E)-TOF screening for over 950 toxicologically relevant drugs and metabolites was performed in a full-spectrum (m/z 50-1,000) mode using an MS(E) acquisition of both molecular and fragment ion data at low (6 eV) and ramped (10-40 eV) collision energies. Mass error averaged 1.27 ppm for a large panel of reference drugs and metabolites. The limit of detection by UPLC-MS(E)-TOF ranges from 0.5 to 100 ng/mL and compares closely with UPLC-MS-MS. The influence of column recovery and matrix effect on the limit of detection was demonstrated, with ion suppression by matrix components correlating closely with early- and late-eluting reference analytes. Drug and metabolite findings by UPLC-MS(E)-TOF were compared with UPLC-MS and UPLC-MS-MS analyses of postmortem blood in 300 medical examiner cases. Positive findings by all methods totaled 1,528, with a detection rate of 57% by UPLC-MS, 72% by UPLC-MS-MS, and 80% by combined UPLC-MS and UPLC-MS-MS screening. Compared with the nominal mass screening methods, UPLC-MS(E)-TOF screening resulted in a 99% detection rate and, in addition, offered the potential for the detection of nontargeted analytes via high-resolution acquisition of molecular and fragment ion data.
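Mass accuracy in ppm, the figure of merit quoted above, is a one-line computation; the caffeine example values below are standard monoisotopic figures, not data from this study.

```python
def mass_error_ppm(measured_mz, theoretical_mz):
    """Mass accuracy in parts per million."""
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

# Example: a caffeine [M+H]+ ion measured at m/z 195.0879 against the
# theoretical 195.0877 gives about +1.0 ppm, comparable in scale to the
# 1.27 ppm average reported above.
print(mass_error_ppm(195.0879, 195.0877))
```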
de Castro, Eduardo da S G; Cassella, Ricardo J
2016-05-15
Reference methods for the quality control of vaccines usually require treatment of the samples before analysis. These procedures are expensive, time-consuming, and unhealthy, and they require careful manipulation of the sample, making them a potential source of analytical errors. This work proposes a novel method for the quality control of thermostabilizer samples of the yellow fever vaccine employing attenuated total reflectance Fourier transform infrared spectrometry (ATR-FTIR). The main advantage of the proposed method is the possibility of direct determination of the analytes (sodium glutamate and sorbitol) without any pretreatment of the samples. Operational parameters of the FTIR technique, such as the number of accumulated scans and the nominal resolution, were evaluated. The best conditions for sodium glutamate were achieved when 64 scans were accumulated using a nominal resolution of 4 cm(-1). The measurements for sodium glutamate were performed at 1347 cm(-1) (baseline correction between 1322 and 1369 cm(-1)). In the case of sorbitol, the measurements were done at 890 cm(-1) (baseline correction between 825 and 910 cm(-1)) using a nominal resolution of 2 cm(-1) with 32 accumulated scans. In both cases, the quantitative variable was the band height. Recovery tests were performed to evaluate the accuracy of the method, and recovery percentages in the range of 93-106% were obtained. The methods were also compared with reference methods, and no statistical differences were observed. The limits of detection and quantification were 0.20 and 0.62% (m/v) for sodium glutamate, and 1 and 3.3% (m/v) for sorbitol.
Navigation of the autonomous vehicle reverse movement
NASA Astrophysics Data System (ADS)
Rachkov, M.; Petukhov, S.
2018-02-01
The paper presents a mathematical formulation of vehicle reverse motion along a multi-link polygonal trajectory consisting of rectilinear segments interconnected at nodal points. The problem is relevant to several tasks: recovering the vehicle in the event of a communication break by returning along the trajectory already traveled, avoiding a turn on the ground amid constrained obstacles or dangerous conditions, and executing a partial return to bypass an obstacle before continuing forward movement. The navigation method assumes that landmarks observed during forward movement are used to construct the reverse path. To measure the landmarks, a block of cameras is mounted on the vehicle, which the operator controls over a radio channel. Errors in estimating deviation from the nominal trajectory are determined using multidimensional correlation analysis, based on the dynamics of the lateral deviation error and the vehicle speed error. Experimental results showed relatively high accuracy in determining the state vector, providing reverse motion along the reference trajectory with a practically acceptable error on return to the start point.
Orion Exploration Flight Test-1 Contingency Drogue Deploy Velocity Trigger
NASA Technical Reports Server (NTRS)
Gay, Robert S.; Stochowiak, Susan; Smith, Kelly
2013-01-01
As a backup to the GPS-aided Kalman filter and the barometric altimeter, an "adjusted" velocity trigger is used during entry to initiate the chain of events that leads to drogue chute deploy for the Orion Multi-Purpose Crew Vehicle (MPCV) Exploration Flight Test-1 (EFT-1). Even though this scenario is multiple failures deep, the Orion Guidance, Navigation, and Control (GN&C) software makes use of a clever technique taken from the Mars Science Laboratory (MSL) program, which recently successfully landed the Curiosity rover on Mars; MSL used this technique to jettison the heat shield at the proper time during descent. Originally, Orion used the unadjusted navigated velocity, but the removal of the star tracker to save costs for EFT-1 increased attitude errors, which increased inertial propagation errors to the point where the unadjusted velocity caused altitude dispersions at drogue deploy to be too large. Thus, to reduce dispersions, the velocity vector is projected onto a "reference" vector that represents the nominal "truth" vector at the desired point in the trajectory. Because the navigation errors are largely perpendicular to the truth vector, this projection significantly reduces dispersions in the velocity magnitude. This paper details the evolution of this trigger method for the Orion project and covers the various methods tested to determine the reference "truth" vector and the point in the trajectory at which it should be computed.
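The projection itself is a one-line operation; the sketch below assumes hypothetical variable names and a scalar trigger threshold.

```python
import numpy as np

def adjusted_velocity(v_nav, v_ref_truth):
    """Project the navigated velocity onto the nominal 'truth' direction at
    the desired trajectory point. Navigation errors largely perpendicular
    to the truth vector drop out of the dot product, tightening the
    deploy-trigger dispersion."""
    u_ref = v_ref_truth / np.linalg.norm(v_ref_truth)
    return float(np.dot(v_nav, u_ref))     # scalar trigger quantity

# Drogue deploy would be triggered when adjusted_velocity(...) falls below
# a preset value (threshold hypothetical).
```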
Hybrid adaptive ascent flight control for a flexible launch vehicle
NASA Astrophysics Data System (ADS)
Lefevre, Brian D.
For the purpose of maintaining dynamic stability and improving guidance command tracking performance under off-nominal flight conditions, a hybrid adaptive control scheme is selected and modified for use as a launch vehicle flight controller. This architecture merges a model reference adaptive approach, which utilizes both direct and indirect adaptive elements, with a classical dynamic inversion controller. This structure is chosen for a number of reasons: the properties of the reference model can be easily adjusted to tune the desired handling qualities of the spacecraft, the indirect adaptive element (which consists of an online parameter identification algorithm) continually refines the estimates of the evolving characteristic parameters utilized in the dynamic inversion, and the direct adaptive element (which consists of a neural network) augments the linear feedback signal to compensate for any nonlinearities in the vehicle dynamics. The combination of these elements enables the control system to retain the nonlinear capabilities of an adaptive network while relying heavily on the linear portion of the feedback signal to dictate the dynamic response under most operating conditions. To begin the analysis, the ascent dynamics of a launch vehicle with a single 1st stage rocket motor (typical of the Ares 1 spacecraft) are characterized. The dynamics are then linearized with assumptions that are appropriate for a launch vehicle, so that the resulting equations may be inverted by the flight controller in order to compute the control signals necessary to generate the desired response from the vehicle. Next, the development of the hybrid adaptive launch vehicle ascent flight control architecture is discussed in detail. Alterations of the generic hybrid adaptive control architecture include the incorporation of a command conversion operation which transforms guidance input from quaternion form (as provided by NASA) to the body-fixed angular rate commands needed by the hybrid adaptive flight controller, development of a Newton's method based online parameter update that is modified to include a step size which regulates the rate of change in the parameter estimates, comparison of the modified Newton's method and recursive least squares online parameter update algorithms, modification of the neural network's input structure to accommodate for the nature of the nonlinearities present in a launch vehicle's ascent flight, examination of both tracking error based and modeling error based neural network weight update laws, and integration of feedback filters for the purpose of preventing harmful interaction between the flight control system and flexible structural modes. To validate the hybrid adaptive controller, a high-fidelity Ares I ascent flight simulator and a classical gain-scheduled proportional-integral-derivative (PID) ascent flight controller were obtained from the NASA Marshall Space Flight Center. The classical PID flight controller is used as a benchmark when analyzing the performance of the hybrid adaptive flight controller. Simulations are conducted which model both nominal and off-nominal flight conditions with structural flexibility of the vehicle either enabled or disabled. First, rigid body ascent simulations are performed with the hybrid adaptive controller under nominal flight conditions for the purpose of selecting the update laws which drive the indirect and direct adaptive components. 
With the neural network disabled, the results revealed that the recursive least squares online parameter update caused high frequency oscillations to appear in the engine gimbal commands. This is highly undesirable for long and slender launch vehicles, such as the Ares I, because such oscillation of the rocket nozzle could excite unstable structural flex modes. In contrast, the modified Newton's method online parameter update produced smooth control signals and was thus selected for use in the hybrid adaptive launch vehicle flight controller. In the simulations where the online parameter identification algorithm was disabled, the tracking error based neural network weight update law forced the network's output to diverge despite repeated reductions of the adaptive learning rate. As a result, the modeling error based neural network weight update law (which generated bounded signals) is utilized by the hybrid adaptive controller in all subsequent simulations. Comparing the PID and hybrid adaptive flight controllers under nominal flight conditions in rigid body ascent simulations showed that their tracking error magnitudes are similar for a period of time during the middle of the ascent phase. Though the PID controller performs better for a short interval around the 20 second mark, the hybrid adaptive controller performs far better from roughly 70 to 120 seconds. Elevating the aerodynamic loads by increasing the force and moment coefficients produced results very similar to the nominal case. However, applying a 5% or 10% thrust reduction to the first stage rocket motor causes the tracking error magnitude observed by the PID controller to be significantly elevated and diverge rapidly as the simulation concludes. In contrast, the hybrid adaptive controller steadily maintains smaller errors (often less than 50% of the corresponding PID value). Under the same sets of flight conditions with flexibility enabled, the results exhibit similar trends with the hybrid adaptive controller performing even better in each case. Again, the reduction of the first stage rocket motor's thrust clearly illustrated the superior robustness of the hybrid adaptive flight controller.
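A minimal sketch of a damped (Gauss-)Newton parameter update with the step-size regulation described above; the step value is hypothetical, and the thesis's exact formulation may differ.

```python
import numpy as np

def newton_param_update(theta, residual, jacobian, step=0.1):
    """One damped Gauss-Newton step for online parameter identification.
    The scalar `step` regulates the rate of change of the estimates, which
    is the modification described above; a small step keeps the identified
    parameters, and hence the control signals, smooth."""
    J = jacobian(theta)                     # sensitivity of residuals to parameters
    r = residual(theta)
    delta = np.linalg.solve(J.T @ J, J.T @ r)
    return theta - step * delta
```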
Attitude Control System Design for the Solar Dynamics Observatory
NASA Technical Reports Server (NTRS)
Starin, Scott R.; Bourkland, Kristin L.; Kuo-Chia, Liu; Mason, Paul A. C.; Vess, Melissa F.; Andrews, Stephen F.; Morgenstern, Wendy M.
2005-01-01
The Solar Dynamics Observatory mission, part of the Living With a Star program, will place a geosynchronous satellite in orbit to observe the Sun and relay data to a dedicated ground station at all times. SDO remains Sun-pointing throughout most of its mission for the instruments to take measurements of the Sun. The SDO attitude control system is a single-fault-tolerant design. Its fully redundant attitude sensor complement includes 16 coarse Sun sensors, a digital Sun sensor, 3 two-axis inertial reference units, 2 star trackers, and 4 guide telescopes. Attitude actuation is performed using 4 reaction wheels and 8 thrusters, and a single main engine nominally provides velocity-change thrust. The attitude control software has five nominal control modes: 3 wheel-based modes and 2 thruster-based modes. A wheel-based Safehold running in the attitude control electronics box improves the robustness of the system as a whole. All six modes are designed on the same basic proportional-integral-derivative attitude error structure, with the more robust modes setting their integral gains to zero. The paper details the mode designs and their uses.
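A minimal sketch of the shared PID attitude-error structure, with robust modes zeroing the integral gain as described; all gain values and the scalar-gain simplification are hypothetical.

```python
import numpy as np

class PidAttitudeMode:
    """Common PID attitude-error structure. Robust modes set ki = 0."""
    def __init__(self, kp, ki, kd, robust=False):
        self.kp, self.ki, self.kd = kp, (0.0 if robust else ki), kd
        self.integral = np.zeros(3)

    def torque_command(self, att_err, rate_err, dt):
        self.integral += att_err * dt       # accumulates only when ki != 0 matters
        return -(self.kp * att_err + self.ki * self.integral
                 + self.kd * rate_err)
```

Zeroing the integral gain trades steady-state pointing accuracy for robustness: the controller no longer winds up on persistent errors, which is the safer behavior for contingency modes such as Safehold.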
Analysis of Position Error Headway Protection
DOT National Transportation Integrated Search
1975-07-01
An analysis is developed to determine safe headway on PRT systems that use point-follower control. Periodic measurements of the position error relative to a nominal trajectory provide warning against the hazards of overspeed and unexpected stop. A co...
Validation of geometric accuracy of Global Land Survey (GLS) 2000 data
Rengarajan, Rajagopalan; Sampath, Aparajithan; Storey, James C.; Choate, Michael J.
2015-01-01
The Global Land Survey (GLS) 2000 data were generated from Geocover™ 2000 data with the aim of producing a global data set with accuracy better than 25 m Root Mean Square Error (RMSE). An assessment and validation of the accuracy of the GLS 2000 data set, and of its co-registration with the Geocover™ 2000 data set, is presented here. Because the availability of global data sets with higher nominal accuracy than GLS 2000 is limited, the data sets were assessed in three tiers. In the first tier, the data were compared with the Geocover™ 2000 data. This comparison provided a means of localizing regions of larger differences. In the second tier, the GLS 2000 data were compared with systematically corrected Landsat-7 scenes that were obtained in a time period when the spacecraft pointing information was extremely accurate. These comparisons localize regions where the data are consistently offset, which may indicate regions of higher errors. The third tier consisted of comparing the GLS 2000 data against higher accuracy reference data. The reference data were Digital Ortho Quads over the United States, orthorectified SPOT data over Australia, and high-accuracy check points obtained from triangulation bundle adjustment of Landsat-7 images over selected sites around the world. The study reveals that the geometric errors in the Geocover™ 2000 data have been rectified in the GLS 2000 data, and that the accuracy of the GLS 2000 data can be expected to be better than 25 m RMSE for most of its constituent scenes.
Torus Approach in Gravity Field Determination from Simulated GOCE Gravity Gradients
NASA Astrophysics Data System (ADS)
Liu, Huanling; Wen, Hanjiang; Xu, Xinyu; Zhu, Guangbin
2016-08-01
In the Torus approach, observations are projected onto nominal orbits with constant radius and inclination, and lumped coefficients provide a linear relationship between the observations and the spherical harmonic coefficients. Based on this relationship, a two-dimensional FFT and block-diagonal least-squares adjustment are used to recover the Earth's gravity field model. An Earth gravity field model complete to degree and order 200 is recovered using simulated satellite gravity gradients on a torus grid, and the degree-median error is smaller than 10^-18, which shows the effectiveness of the Torus approach. EGM2008 is employed as the reference model, and the gravity field model is resolved using noise-free simulated observations given on GOCE orbits spanning 61 days. The errors from reduction and interpolation can be mitigated by iteration. Due to the polar gap, the precision of the low-order coefficients is lower. Without considering these coefficients, the maximum geoid degree error and cumulative error are 0.022 mm and 0.099 mm, respectively. The Earth's gravity field model is also recovered from simulated observations with white noise of 5 mE/Hz^(1/2) and compared to that from the direct method. In conclusion, the Torus approach is demonstrated to be a valid method for processing the massive amount of GOCE gravity gradients.
NASA Astrophysics Data System (ADS)
Lin, Xiaomei; Chang, Penghui; Chen, Gehua; Lin, Jingjun; Liu, Ruixiang; Yang, Hao
2015-11-01
Our recent work determined the carbon content in a melting ferroalloy by laser-induced breakdown spectroscopy (LIBS). The emission spectrum of carbon obtained in the laboratory is suitable for carbon content determination in a melting ferroalloy, but the expected results are not obtained when the method is applied under industrial conditions: there is always an unacceptable error of around 4% between the actual value and the measured value. Comparing the measurement conditions in industry with those in the laboratory shows that the temperature of the molten ferroalloy samples is constant under laboratory conditions, whereas it decreases gradually under industrial conditions. Temperature has a considerable impact on the measurement of carbon content, which is why an error always appears between the actual and measured values. In this paper we compare the errors of carbon content determination at different temperatures to find the optimum reference temperature range for industrial conditions and, hence, make the measurement more accurate. The comparative analyses show that the measured value of the carbon content in the molten state (1620 K) is consistent with the nominal value of the solid standard sample (error within 0.7%), the most accurate agreement with the solid-state value obtained. On this basis, we can effectively improve the accuracy of laboratory measurements and provide a reference temperature standard for measurements under industrial conditions. Supported by the National Natural Science Foundation of China (No. 51374040) and by the Laser-Induced Plasma Spectroscopy Equipment Development and Application program, China (No. 2014YQ120351).
A frequency-domain estimator for use in adaptive control systems
NASA Technical Reports Server (NTRS)
Lamaire, Richard O.; Valavani, Lena; Athans, Michael; Stein, Gunter
1991-01-01
This paper presents a frequency-domain estimator that can identify both a parametrized nominal model of a plant as well as a frequency-domain bounding function on the modeling error associated with this nominal model. This estimator, which we call a robust estimator, can be used in conjunction with a robust control-law redesign algorithm to form a robust adaptive controller.
Robustness of Type I Error and Power in Set Correlation Analysis of Contingency Tables.
ERIC Educational Resources Information Center
Cohen, Jacob; Nee, John C. M.
1990-01-01
The analysis of contingency tables via set correlation allows the assessment of subhypotheses involving contrast functions of the categories of the nominal scales. The robustness of such methods with regard to Type I error and statistical power was studied via a Monte Carlo experiment. (TJH)
Multi-Reader ROC studies with Split-Plot Designs: A Comparison of Statistical Methods
Obuchowski, Nancy A.; Gallas, Brandon D.; Hillis, Stephen L.
2012-01-01
Rationale and Objectives: Multi-reader imaging trials often use a factorial design, where study patients undergo testing with all imaging modalities and readers interpret the results of all tests for all patients. A drawback of the design is the large number of interpretations required of each reader. Split-plot designs have been proposed as an alternative, in which one or a subset of readers interprets all images of a sample of patients, while other readers interpret the images of other samples of patients. In this paper we compare three methods of analysis for the split-plot design. Materials and Methods: Three statistical methods are presented: the Obuchowski-Rockette method modified for the split-plot design, a newly proposed marginal-mean ANOVA approach, and an extension of the three-sample U-statistic method. A simulation study using the Roe-Metz model was performed to compare the type I error rate, power, and confidence interval coverage of the three test statistics. Results: The type I error rates for all three methods are close to the nominal level but tend to be slightly conservative. The statistical power is nearly identical for the three methods. The coverage of 95% CIs falls close to the nominal coverage for small and large sample sizes. Conclusions: The split-plot MRMC study design can be statistically efficient compared with the factorial design, reducing the number of interpretations required per reader. Three methods of analysis, shown to have nominal type I error rate, similar power, and nominal CI coverage, are available for this study design. PMID:23122570
Multi-reader ROC studies with split-plot designs: a comparison of statistical methods.
Obuchowski, Nancy A; Gallas, Brandon D; Hillis, Stephen L
2012-12-01
Multireader imaging trials often use a factorial design, in which study patients undergo testing with all imaging modalities and readers interpret the results of all tests for all patients. A drawback of this design is the large number of interpretations required of each reader. Split-plot designs have been proposed as an alternative, in which one or a subset of readers interprets all images of a sample of patients, while other readers interpret the images of other samples of patients. In this paper, the authors compare three methods of analysis for the split-plot design. Three statistical methods are presented: the Obuchowski-Rockette method modified for the split-plot design, a newly proposed marginal-mean analysis-of-variance approach, and an extension of the three-sample U-statistic method. A simulation study using the Roe-Metz model was performed to compare the type I error rate, power, and confidence interval coverage of the three test statistics. The type I error rates for all three methods are close to the nominal level but tend to be slightly conservative. The statistical power is nearly identical for the three methods. The coverage of 95% confidence intervals falls close to the nominal coverage for small and large sample sizes. The split-plot multireader, multicase study design can be statistically efficient compared to the factorial design, reducing the number of interpretations required per reader. Three methods of analysis, shown to have nominal type I error rates, similar power, and nominal confidence interval coverage, are available for this study design.
NASA Technical Reports Server (NTRS)
Patrick, Sean; Oliver, Emerson
2018-01-01
One of the SLS Navigation System's key performance requirements is a constraint on the payload system's delta-v allocation to correct for insertion errors due to vehicle state uncertainty at payload separation. The SLS navigation team has developed a Delta-Delta-V analysis approach to assess the effect of navigation errors on trajectory correction maneuver (TCM) design. This approach differs from traditional covariance-analysis-based methods and makes no assumptions about the propagation of the state dynamics, which allows non-linearity in the propagation of state uncertainties to be considered. The Delta-Delta-V analysis approach re-optimizes perturbed SLS mission trajectories by varying key mission states in accordance with an assumed state error. The state error is developed from detailed vehicle 6-DOF Monte Carlo analysis or generated using covariance analysis. These perturbed trajectories are compared to a nominal trajectory to determine the necessary TCM design. To implement this analysis approach, a tool set was developed that combines the functionality of a 3-DOF trajectory optimization tool, Copernicus, and a detailed 6-DOF vehicle simulation tool, Marshall Aerospace Vehicle Representation in C (MAVERIC). In addition to delta-v allocation constraints on SLS navigation performance, SLS mission requirements dictate successful upper stage disposal. Due to engine and propellant constraints, the SLS Exploration Upper Stage (EUS) must dispose into heliocentric space by means of a lunar fly-by maneuver. As with the payload delta-v allocation, upper stage disposal maneuvers must place the EUS on a trajectory that maximizes the probability of achieving a heliocentric orbit after the lunar fly-by, considering all sources of vehicle state uncertainty prior to the maneuver. To ensure disposal, the SLS navigation team has developed an analysis approach to derive optimal disposal guidance targets. This approach maximizes the state error covariance tolerated prior to the maneuver while developing and re-optimizing a nominal disposal maneuver (DM) target that, if achieved, would maximize the potential for successful upper stage disposal. For EUS disposal analysis, a set of two tools was developed. The first considers only the nominal pre-disposal-maneuver state, vehicle constraints, and an a priori estimate of the state error covariance, and determines the optimal nominal disposal target. This is performed by re-formulating the trajectory optimization to consider constraints on the eigenvectors of the error ellipse applied to the nominal trajectory. A bisection search methodology is implemented in the tool to refine these dispersions, resulting in the maximum dispersion feasible for successful disposal via lunar fly-by, as sketched below. Success is defined by the probability that the vehicle will not impact the lunar surface and will achieve a characteristic energy (C3) relative to the Earth such that it is no longer in the Earth-Moon system. The second tool propagates post-disposal-maneuver states to determine the success of disposal for provided achieved trajectory states. This is performed using the optimized nominal target within the 6-DOF vehicle simulation. This paper discusses the application of the Delta-Delta-V analysis approach for performance evaluation and trajectory re-optimization, demonstrating the system's capability to meet performance constraints, and further discusses the implementation of the disposal analysis.
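The bisection search mentioned above can be sketched as follows, assuming a hypothetical wrapper function that propagates the perturbed trajectory and reports disposal success, and assuming success is monotone in the dispersion scale.

```python
def max_safe_dispersion(disposal_succeeds, lo=0.0, hi=1.0, tol=1e-3):
    """Find the largest dispersion scale factor along an error-ellipse
    eigenvector for which the perturbed trajectory still achieves disposal
    (no lunar impact, required Earth-relative C3). `disposal_succeeds`
    wraps the trajectory propagation and is assumed monotone."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if disposal_succeeds(mid):
            lo = mid          # still safe: try a larger dispersion
        else:
            hi = mid          # failed: shrink the dispersion
    return lo
```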
Rice, Stephen B; Chan, Christopher; Brown, Scott C; Eschbach, Peter; Han, Li; Ensor, David S; Stefaniak, Aleksandr B; Bonevich, John; Vladár, András E; Hight Walker, Angela R; Zheng, Jiwen; Starnes, Catherine; Stromberg, Arnold; Ye, Jia; Grulke, Eric A
2015-01-01
This paper reports an interlaboratory comparison that evaluated a protocol for measuring and analysing the particle size distribution of discrete, metallic, spheroidal nanoparticles using transmission electron microscopy (TEM). The study was focused on automated image capture and automated particle analysis. NIST RM8012 gold nanoparticles (30 nm nominal diameter) were measured for area-equivalent diameter distributions by eight laboratories. Statistical analysis was used to (1) assess the data quality without using size distribution reference models, (2) determine reference model parameters for different size distribution reference models and non-linear regression fitting methods and (3) assess the measurement uncertainty of a size distribution parameter by using its coefficient of variation. The interlaboratory area-equivalent diameter mean, 27.6 nm ± 2.4 nm (computed based on a normal distribution), was quite similar to the area-equivalent diameter, 27.6 nm, assigned to NIST RM8012. The lognormal reference model was the preferred choice for these particle size distributions as, for all laboratories, its parameters had lower relative standard errors (RSEs) than the other size distribution reference models tested (normal, Weibull and Rosin–Rammler–Bennett). The RSEs for the fitted standard deviations were two orders of magnitude higher than those for the fitted means, suggesting that most of the parameter estimate errors were associated with estimating the breadth of the distributions. The coefficients of variation for the interlaboratory statistics also confirmed the lognormal reference model as the preferred choice. From quasi-linear plots, the typical range for good fits between the model and cumulative number-based distributions was 1.9 fitted standard deviations less than the mean to 2.3 fitted standard deviations above the mean. Automated image capture, automated particle analysis and statistical evaluation of the data and fitting coefficients provide a framework for assessing nanoparticle size distributions using TEM for image acquisition. PMID:26361398
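A simple moment-based sketch of fitting the lognormal reference model and reporting relative standard errors; the study itself used non-linear regression fitting of the measured distributions, so this is an illustrative simplification.

```python
import numpy as np

def lognormal_fit_with_rse(diameters_nm):
    """Fit a lognormal size distribution by taking logs and estimating the
    Gaussian mean and standard deviation, then report relative standard
    errors (RSEs) of both parameters (moment-based approximations)."""
    logs = np.log(diameters_nm)
    n = logs.size
    mu, sigma = logs.mean(), logs.std(ddof=1)
    se_mu = sigma / np.sqrt(n)                 # standard error of the mean
    se_sigma = sigma / np.sqrt(2 * (n - 1))    # approx. SE of the std. dev.
    return {"mu": mu, "sigma": sigma,
            "rse_mu": se_mu / abs(mu), "rse_sigma": se_sigma / sigma}
```

Because sigma is small for a narrow distribution, its relative standard error is naturally much larger than that of mu, consistent with the report above that most parameter-estimate error lies in the breadth of the distribution.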
NASA Technical Reports Server (NTRS)
Beck, S. M.
1975-01-01
A mobile, self-contained Faraday cup system for beam current measurements of nominal 600 MeV protons was designed, constructed, and used at the NASA Space Radiation Effects Laboratory. The cup is of reentrant design with a length of 106.7 cm and an outside diameter of 20.32 cm. The inner diameter is 15.24 cm and the base thickness is 30.48 cm. The primary absorber is commercially available lead hermetically sealed in a 0.32-cm-thick copper jacket. Several possible systematic errors in using the cup are evaluated. The largest source of error arises from high-energy electrons that are ejected from the entrance window and enter the cup. A total systematic error of -0.83 percent is calculated as the decrease from the true current value. From data obtained in calibrating helium-filled ion chambers with the Faraday cup, the mean energy required to produce one ion pair in helium is found to be 30.76 ± 0.95 eV for nominal 600 MeV protons. This value agrees well, within experimental error, with the reported values of 29.9 eV and 30.2 eV.
Kinnamon, Daniel D; Lipsitz, Stuart R; Ludwig, David A; Lipshultz, Steven E; Miller, Tracie L
2010-04-01
The hydration of fat-free mass, or hydration fraction (HF), is often defined as a constant body composition parameter in a two-compartment model and then estimated from in vivo measurements. We showed that the widely used estimator for the HF parameter in this model, the mean of the ratios of measured total body water (TBW) to fat-free mass (FFM) in individual subjects, can be inaccurate in the presence of additive technical errors. We then proposed a new instrumental variables estimator that accurately estimates the HF parameter in the presence of such errors. In Monte Carlo simulations, the mean of the ratios of TBW to FFM was an inaccurate estimator of the HF parameter, and inferences based on it had actual type I error rates more than 13 times the nominal 0.05 level under certain conditions. The instrumental variables estimator was accurate and maintained an actual type I error rate close to the nominal level in all simulations. When estimating and performing inference on the HF parameter, the proposed instrumental variables estimator should yield accurate estimates and correct inferences in the presence of additive technical errors, but the mean of the ratios of TBW to FFM in individual subjects may not.
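The contrast between the two estimators is easy to reproduce in a small simulation. A minimal sketch, assuming a second, independently-errored FFM measurement as the instrument (the paper's actual instrument choice may differ):

```python
import numpy as np

rng = np.random.default_rng(2)
n, hf_true = 10_000, 0.732

ffm = rng.normal(50.0, 8.0, n)             # true fat-free mass (kg)
tbw = hf_true * ffm                        # true total body water (kg)

tbw_m = tbw + rng.normal(0.0, 1.0, n)      # measured TBW with additive error
ffm_m = ffm + rng.normal(0.0, 2.0, n)      # measured FFM with additive error
ffm_iv = ffm + rng.normal(0.0, 2.0, n)     # second, independent FFM measurement

mean_ratio = np.mean(tbw_m / ffm_m)        # biased under additive FFM error
iv = np.sum(ffm_iv * tbw_m) / np.sum(ffm_iv * ffm_m)   # IV slope through origin

print(f"mean of ratios: {mean_ratio:.4f}, IV estimate: {iv:.4f}, truth: {hf_true}")
```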
Wang, Ding; Liu, Derong; Zhang, Yun; Li, Hongyi
2018-01-01
In this paper, we aim to tackle the neural robust tracking control problem for a class of nonlinear systems using the adaptive critic technique. The main contribution is that a neural-network-based robust tracking control scheme is established for nonlinear systems involving matched uncertainties. The augmented system considering the tracking error and the reference trajectory is formulated and then addressed under adaptive critic optimal control formulation, where the initial stabilizing controller is not needed. The approximate control law is derived via solving the Hamilton-Jacobi-Bellman equation related to the nominal augmented system, followed by closed-loop stability analysis. The robust tracking control performance is guaranteed theoretically via Lyapunov approach and also verified through simulation illustration. Copyright © 2017 Elsevier Ltd. All rights reserved.
Including robustness in multi-criteria optimization for intensity-modulated proton therapy
NASA Astrophysics Data System (ADS)
Chen, Wei; Unkelbach, Jan; Trofimov, Alexei; Madden, Thomas; Kooy, Hanne; Bortfeld, Thomas; Craft, David
2012-02-01
We present a method to include robustness in a multi-criteria optimization (MCO) framework for intensity-modulated proton therapy (IMPT). The approach allows one to simultaneously explore the trade-off between different objectives as well as the trade-off between robustness and nominal plan quality. In MCO, a database of plans, each emphasizing different treatment planning objectives, is pre-computed to approximate the Pareto surface. An IMPT treatment plan that strikes the best balance between the different objectives can be selected by navigating on the Pareto surface. In our approach, robustness is integrated into MCO by adding robustified objectives and constraints to the MCO problem. Uncertainties (or errors) of the robust problem are modeled by pre-calculated dose-influence matrices for a nominal scenario and a number of pre-defined error scenarios (shifted patient positions, proton beam undershoot and overshoot). Objectives and constraints can be defined for the nominal scenario, thus characterizing nominal plan quality. A robustified objective represents the worst objective function value that can be realized for any of the error scenarios and thus provides a measure of plan robustness. The optimization method is based on a linear projection solver and is capable of handling large problem sizes resulting from a fine dose grid resolution, many scenarios, and a large number of proton pencil beams. A base-of-skull case is used to demonstrate the robust optimization method. It is demonstrated that the robust optimization method reduces the sensitivity of the treatment plan to setup and range errors to a degree that is not achieved by a safety margin approach. A chordoma case is analyzed in more detail to demonstrate the involved trade-offs between target underdose and brainstem sparing as well as between robustness and nominal plan quality. The latter illustrates the advantage of MCO in the context of robust planning. For all cases examined, the robust optimization for each Pareto optimal plan takes less than 5 min on a standard computer, making a computationally friendly interface for the planner possible. In conclusion, the uncertainty pertinent to the IMPT procedure can be reduced during treatment planning by optimizing plans that emphasize different treatment objectives, including robustness, and then interactively selecting the most-preferred one from the solution Pareto surface.
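The robustified objective is simply a worst case over the pre-computed scenario doses. A minimal sketch with synthetic dose-influence matrices; the dimensions, prescription dose, and beam weights are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Dose-influence matrices for a nominal scenario plus error scenarios
# (patient shifts, range over/undershoot); all numbers are synthetic.
n_vox, n_beams, n_scen = 200, 40, 5
D = [np.abs(rng.normal(1.0, 0.3, (n_vox, n_beams))) for _ in range(n_scen)]

target = 60.0                                  # assumed prescription (Gy)

def nominal_objective(w):
    return np.mean((D[0] @ w - target) ** 2)   # scenario 0 = nominal

def robustified_objective(w):
    # Worst objective value over all pre-computed error scenarios.
    return max(np.mean((Ds @ w - target) ** 2) for Ds in D)

w = np.full(n_beams, target / n_beams)         # crude uniform beam weights
print(f"nominal: {nominal_objective(w):.2f}, worst case: {robustified_objective(w):.2f}")
```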
Trofimov, Alexei; Unkelbach, Jan; DeLaney, Thomas F; Bortfeld, Thomas
2012-01-01
Dose-volume histograms (DVH) are the most common tool used in the appraisal of the quality of a clinical treatment plan. However, when delivery uncertainties are present, the DVH may not always accurately describe the dose distribution actually delivered to the patient. We present a method, based on DVH formalism, to visualize the variability in the expected dosimetric outcome of a treatment plan. For a case of chordoma of the cervical spine, we compared 2 intensity modulated proton therapy plans. Treatment plan A was optimized based on dosimetric objectives alone (ie, desired target coverage, normal tissue tolerance). Plan B was created employing a published probabilistic optimization method that considered the uncertainties in patient setup and proton range in tissue. Dose distributions and DVH for both plans were calculated for the nominal delivery scenario, as well as for scenarios representing deviations from the nominal setup, and a systematic error in the estimate of range in tissue. The histograms from various scenarios were combined to create DVH bands to illustrate possible deviations from the nominal plan for the expected magnitude of setup and range errors. In the nominal scenario, the DVH from plan A showed superior dose coverage, higher dose homogeneity within the target, and improved sparing of the adjacent critical structure. However, when the dose distributions and DVH from plans A and B were recalculated for different error scenarios (eg, proton range underestimation by 3 mm), the plan quality, reflected by DVH, deteriorated significantly for plan A, while plan B was only minimally affected. In the DVH-band representation, plan A produced wider bands, reflecting its higher vulnerability to delivery errors, and uncertainty in the dosimetric outcome. The results illustrate that comparison of DVH for the nominal scenario alone does not provide any information about the relative sensitivity of dosimetric outcome to delivery uncertainties. Thus, such comparison may be misleading and may result in the selection of an inferior plan for delivery to a patient. A better-informed decision can be made if additional information about possible dosimetric variability is presented; for example, in the form of DVH bands. Copyright © 2012 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
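A DVH band is just the envelope of the scenario-wise DVHs. A minimal sketch with synthetic per-voxel doses standing in for the recalculated scenario dose distributions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Per-voxel target doses (Gy) for a nominal scenario and three error
# scenarios; synthetic stand-ins for recomputed dose distributions.
scenarios = [rng.normal(70.0, s, 5000) for s in (1.0, 2.5, 4.0, 3.0)]

dose_axis = np.linspace(0.0, 80.0, 161)
dvhs = np.array([[np.mean(d >= t) for t in dose_axis] for d in scenarios])

band_lo = dvhs.min(axis=0)     # lower envelope of the DVH band
band_hi = dvhs.max(axis=0)     # upper envelope of the DVH band
print(f"band width at 65 Gy: {band_hi[130] - band_lo[130]:.3f}")
```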
High-speed clock recovery unit based on a phase aligner
NASA Astrophysics Data System (ADS)
Tejera, Efrain; Esper-Chain, Roberto; Tobajas, Felix; De Armas, Valentin; Sarmiento, Roberto
2003-04-01
Nowadays, clock recovery units are key elements in high-speed digital communication systems. For efficient operation, these units should generate a low-jitter clock based on the NRZ received data and be tolerant to long absences of transitions. Architectures based on Hogge phase detectors have been widely used; nevertheless, they are very sensitive to jitter in the received data and have limited tolerance to the absence of transitions. This paper presents a novel high-speed clock recovery unit based on a phase aligner. The system allows very fast clock recovery with low jitter and, moreover, is very resistant to absences of transitions. The design is based on eight phases obtained from a reference clock running at the nominal frequency of the received signal. This high-speed reference clock is generated using a crystal and a clock multiplier unit. The phase alignment system chooses, as a starting point, the two phases closest to the data phase, which bounds the initial error between the clock and data signal phases to 45 degrees. Furthermore, the system includes a feedback loop that interpolates the chosen phases to reduce the phase error to zero. Due to the high stability and tight tolerance of the local reference clock, the output jitter is greatly reduced and the system is able to operate under long absences of transitions. This performance makes the design suitable for applications such as high-speed serial link technologies. The system has been designed in a 0.25 μm CMOS process at 1.25 GHz and has been verified through HSpice simulations.
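The coarse-acquisition step reduces to picking, among the eight available phases, the reference phase just below the data phase. A minimal sketch of that selection logic; the degree-based representation is an assumption for illustration:

```python
import numpy as np

phases = np.arange(8) * 45.0     # eight reference phases, 45 degrees apart

def bracketing_phases(data_phase_deg):
    # Starting point of the aligner: the reference phase just below the
    # data phase and its neighbour; the residual error (at most 45 degrees)
    # is then interpolated to zero by the feedback loop.
    below = phases[np.argmin((data_phase_deg - phases) % 360.0)]
    return below, (below + 45.0) % 360.0

print(bracketing_phases(100.0))  # -> (90.0, 135.0)
```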
1993-05-01
obtained to provide a nominal control history. The guidance law is found by minimizing the second variation of the suboptimal trajectory... deviations from the suboptimal trajectory to required changes in the nominal control history. The deviations from the suboptimal trajectory, used together... with the precomputed gains, determine the change in the nominal control history required to meet the final constraints while minimizing the change in
A New Test of Linear Hypotheses in OLS Regression under Heteroscedasticity of Unknown Form
ERIC Educational Resources Information Center
Cai, Li; Hayes, Andrew F.
2008-01-01
When the errors in an ordinary least squares (OLS) regression model are heteroscedastic, hypothesis tests involving the regression coefficients can have Type I error rates that are far from the nominal significance level. Asymptotically, this problem can be rectified with the use of a heteroscedasticity-consistent covariance matrix (HCCM)…
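For comparison with the article's proposal, the standard asymptotic remedy is a sandwich covariance estimator such as HC3. A minimal sketch using statsmodels; the data-generating values are arbitrary:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 200
x = rng.uniform(0.0, 2.0, n)
y = 1.0 + 0.5 * x + rng.normal(0.0, 0.3 + 0.8 * x, n)   # heteroscedastic errors

X = sm.add_constant(x)
classical = sm.OLS(y, X).fit()                # assumes homoscedasticity
robust = sm.OLS(y, X).fit(cov_type="HC3")     # heteroscedasticity-consistent
print("classical SEs:", classical.bse, "HC3 SEs:", robust.bse)
```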
Morphological Errors in Spanish Second Language Learners and Heritage Speakers
ERIC Educational Resources Information Center
Montrul, Silvina
2011-01-01
Morphological variability and the source of these errors have been intensely debated in SLA. A recurrent finding is that postpuberty second language (L2) learners often omit or use the wrong affix for nominal and verbal inflections in oral production but less so in written tasks. According to the missing surface inflection hypothesis, L2 learners…
Effects of Mesh Irregularities on Accuracy of Finite-Volume Discretization Schemes
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
2012-01-01
The effects of mesh irregularities on accuracy of unstructured node-centered finite-volume discretizations are considered. The focus is on an edge-based approach that uses unweighted least-squares gradient reconstruction with a quadratic fit. For inviscid fluxes, the discretization is nominally third order accurate on general triangular meshes. For viscous fluxes, the scheme is an average-least-squares formulation that is nominally second order accurate and contrasted with a common Green-Gauss discretization scheme. Gradient errors, truncation errors, and discretization errors are separately studied according to a previously introduced comprehensive methodology. The methodology considers three classes of grids: isotropic grids in a rectangular geometry, anisotropic grids typical of adapted grids, and anisotropic grids over a curved surface typical of advancing layer grids. The meshes within the classes range from regular to extremely irregular including meshes with random perturbation of nodes. Recommendations are made concerning the discretization schemes that are expected to be least sensitive to mesh irregularities in applications to turbulent flows in complex geometries.
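The gradient-reconstruction building block named here, an unweighted least-squares gradient with a quadratic fit, is a small per-node least-squares solve. A minimal sketch on an irregular stencil; the synthetic field and point cloud are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)

# Node at the origin plus an irregular cloud of neighbour points.
pts = rng.normal(0.0, 1.0, (9, 2))
field = lambda p: 1.0 + 2.0 * p[:, 0] - 3.0 * p[:, 1] + 0.5 * p[:, 0] * p[:, 1]

dx, dy = pts[:, 0], pts[:, 1]
# Unweighted least-squares fit with a quadratic reconstruction:
# du = gx*dx + gy*dy + a*dx^2 + b*dx*dy + c*dy^2
A = np.column_stack([dx, dy, dx**2, dx * dy, dy**2])
rhs = field(pts) - 1.0               # differences w.r.t. the node value u(0,0)=1
coef, *_ = np.linalg.lstsq(A, rhs, rcond=None)
print("reconstructed gradient:", coef[:2])   # exact (2, -3) for this field
```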
Bakker, Marjan; Wicherts, Jelte M
2014-09-01
In psychology, outliers are often excluded before running an independent samples t test, and data are often nonnormal because of the use of sum scores based on tests and questionnaires. This article concerns the handling of outliers in the context of independent samples t tests applied to nonnormal sum scores. After reviewing common practice, we present results of simulations of artificial and actual psychological data, which show that the removal of outliers based on commonly used Z value thresholds severely increases the Type I error rate. We found Type I error rates of above 20% after removing outliers with a threshold value of Z = 2 in a short and difficult test. Inflations of Type I error rates are particularly severe when researchers are given the freedom to alter threshold values of Z after having seen the effects thereof on outcomes. We recommend the use of nonparametric Mann-Whitney-Wilcoxon tests or robust Yuen-Welch tests without removing outliers. These alternatives to independent samples t tests are found to have nominal Type I error rates with a minimal loss of power when no outliers are present in the data and to have nominal Type I error rates and good power when outliers are present. PsycINFO Database Record (c) 2014 APA, all rights reserved.
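The recommendation is straightforward to check in simulation. A minimal sketch contrasting a t test after Z-based outlier removal with a Mann-Whitney-Wilcoxon test on identical skewed sum-score-like populations; the distribution and replication count are illustrative:

```python
import numpy as np
from scipy.stats import mannwhitneyu, ttest_ind

rng = np.random.default_rng(7)
n_rej_t = n_rej_mw = 0
for _ in range(2000):
    # Both groups come from the same skewed distribution, so every
    # rejection below is a Type I error.
    a = rng.binomial(10, 0.9, 30).astype(float)
    b = rng.binomial(10, 0.9, 30).astype(float)
    za = a[np.abs((a - a.mean()) / a.std(ddof=1)) <= 2.0]   # |Z| > 2 removed
    zb = b[np.abs((b - b.mean()) / b.std(ddof=1)) <= 2.0]
    n_rej_t += ttest_ind(za, zb).pvalue < 0.05
    n_rej_mw += mannwhitneyu(a, b).pvalue < 0.05
print(f"t after removal: {n_rej_t / 2000:.3f}, Mann-Whitney: {n_rej_mw / 2000:.3f}")
```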
Optical truss and retroreflector modeling for picometer laser metrology
NASA Astrophysics Data System (ADS)
Hines, Braden E.
1993-09-01
Space-based astrometric interferometer concepts typically have a requirement for the measurement of the internal dimensions of the instrument to accuracies in the picometer range. While this level of resolution has already been achieved for certain special types of laser gauges, techniques for picometer-level accuracy need to be developed to enable all the various kinds of laser gauges needed for space-based interferometers. Systematic errors due to retroreflector imperfections become important as soon as the retroreflector is allowed either to translate in position or to articulate in angle away from its nominal zero-point. Also, when combining several laser interferometers to form a three-dimensional laser gauge (a laser optical truss), systematic errors due to imperfect knowledge of the truss geometry are important as the retroreflector translates away from its nominal zero-point. In order to assess the astrometric performance of a proposed instrument, it is necessary to determine how the effects of an imperfect laser metrology system impact the astrometric accuracy. This paper shows the development of an error propagation model from errors in the 1-D metrology measurements through to their impact on the overall astrometric accuracy for OSI. Simulations based on this development are then presented; they were used to define a multiplier which determines the 1-D metrology accuracy required to produce a given amount of fringe position error.
Initial Navigation Alignment of Optical Instruments on GOES-R
NASA Astrophysics Data System (ADS)
Isaacson, P.; DeLuccia, F.; Reth, A. D.; Igli, D. A.; Carter, D.
2016-12-01
The GOES-R satellite is the first in NOAA's next-generation series of geostationary weather satellites. In addition to a number of space weather sensors, it will carry two principal optical earth-observing instruments, the Advanced Baseline Imager (ABI) and the Geostationary Lightning Mapper (GLM). During launch, currently scheduled for November of 2016, the alignment of these optical instruments is anticipated to shift from that measured during pre-launch characterization. While both instruments have image navigation and registration (INR) processing algorithms to enable automated geolocation of the collected data, the launch-derived misalignment may be too large for these approaches to function without an initial adjustment to calibration parameters. The parameters that may require adjustment are for Line of Sight Motion Compensation (LMC), and the adjustments will be estimated on orbit during the post-launch test (PLT) phase. We have developed approaches to estimate the initial alignment errors for both ABI and GLM image products. Our approaches involve comparison of ABI and GLM images collected during PLT to a set of reference ("truth") images using custom image processing tools and other software (the INR Performance Assessment Tool Set, or "IPATS") being developed for other INR assessments of ABI and GLM data. IPATS is based on image correlation approaches to determine offsets between input and reference images, and these offsets are the fundamental input to our estimate of the initial alignment errors. Initial testing of our alignment algorithms on proxy datasets lends high confidence that their application will determine the initial alignment errors to within sufficient accuracy to enable the operational INR processing approaches to proceed in a nominal fashion. We will report on the algorithms, implementation approach, and status of these initial alignment tools being developed for the GOES-R ABI and GLM instruments.
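Image-correlation offset estimation of the IPATS kind can be illustrated with phase correlation. A minimal sketch, not the IPATS implementation, using a synthetic truth image and a known shift:

```python
import numpy as np

rng = np.random.default_rng(8)

truth = rng.normal(0.0, 1.0, (128, 128))              # reference ("truth") image
shifted = np.roll(truth, shift=(3, -5), axis=(0, 1))  # collected image, offset

# FFT cross-correlation: the correlation peak sits at the (row, col) offset.
xcorr = np.fft.ifft2(np.conj(np.fft.fft2(truth)) * np.fft.fft2(shifted))
peak = np.unravel_index(np.argmax(np.abs(xcorr)), xcorr.shape)
offset = [(p + s // 2) % s - s // 2 for p, s in zip(peak, xcorr.shape)]
print("estimated offset:", offset)                    # -> [3, -5]
```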
Micro-mass standards to calibrate the sensitivity of mass comparators
NASA Astrophysics Data System (ADS)
Madec, Tanguy; Mann, Gaëlle; Meury, Paul-André; Rabault, Thierry
2007-10-01
In mass metrology, the standards currently used are calibrated by a chain of comparisons, performed using mass comparators, that extends ultimately from the international prototype (which is the definition of the unit of mass) to the standards in routine use. The differences measured in the course of these comparisons become smaller and smaller as the standards approach the definitions of their units, precisely because of how accurately they have been adjusted. One source of uncertainty in the determination of the difference of mass between the mass compared and the reference mass is the sensitivity error of the comparator used. Unfortunately, in the market there are no mass standards small enough (of the order of a few hundreds of micrograms) for a valid evaluation of this source of uncertainty. The users of these comparators therefore have no choice but to rely on the characteristics claimed by the makers of the comparators, or else to determine this sensitivity error at higher values (at least 1 mg) and interpolate from this result to smaller differences of mass. For this reason, the LNE decided to produce and calibrate micro-mass standards having nominal values between 100 µg and 900 µg. These standards were developed, then tested in multiple comparisons on an A5 type automatic comparator. They have since been qualified and calibrated in a weighing design, repeatedly and over an extended period of time, to establish their stability with respect to oxidation and the harmlessness of the handling and storage procedure associated with their use. Finally, the micro-standards so qualified were used to characterize the sensitivity errors of two of the LNE's mass comparators, including the one used to tie France's Platinum reference standard (Pt 35) to stainless steel and superalloy standards.
The Identity of Nominal, Verbal, and Adjectival Roots in Swahili.
ERIC Educational Resources Information Center
Der-Houssikian, Haig
1970-01-01
This article is a discussion, within the context of transformational grammar, of the formal relationships which exist between nominal, verbal, and adjectival roots in Swahili. The presentation is made with special reference to a set of subcategorizational rules which relate the given lexical categories. (Author/AMM)
Code of Federal Regulations, 2012 CFR
2012-01-01
... identical water-passage design features that use the same path of water in the highest flow mode. Batch... as 8-foot high output lamps) with recessed double contact bases of nominal overall length of 96... (commonly referred to as 4-foot miniature bipin high output lamps) with miniature bipin bases of nominal...
Optimum data weighting and error calibration for estimation of gravitational parameters
NASA Technical Reports Server (NTRS)
Lerch, F. J.
1989-01-01
A new technique was developed for the weighting of data from satellite tracking systems in order to obtain an optimum least squares solution and an error calibration for the solution parameters. Data sets from optical, electronic, and laser systems on 17 satellites in GEM-T1 (Goddard Earth Model, 36x36 spherical harmonic field) were employed toward application of this technique for gravity field parameters. Also, GEM-T2 (31 satellites) was recently computed as a direct application of the method and is summarized here. The method employs subset solutions of the data associated with the complete solution and uses an algorithm to adjust the data weights by requiring the differences of parameters between solutions to agree with their error estimates. With the adjusted weights the process provides for an automatic calibration of the error estimates for the solution parameters. The data weights derived are generally much smaller than corresponding weights obtained from nominal values of observation accuracy or residuals. Independent tests show significant improvement for solutions with optimal weighting as compared to the nominal weighting. The technique is general and may be applied to orbit parameters, station coordinates, or other parameters than the gravity model.
1992-01-01
Table 3.2, Nominal Composition of Explosive D; Table 3.3, Nominal Composition of PBXN-6. The RDX used during Phase C was PBXN-6, a mixture of RDX and Viton A (hereafter referred to as RDX). The nominal composition of this explosive is given in Table 3.3, which lists each ingredient's weight percent and carbon content; RDX constitutes 95.0% by weight.
Orbit Determination Error Analysis Results for the Triana Sun-Earth L2 Libration Point Mission
NASA Technical Reports Server (NTRS)
Marr, G.
2003-01-01
Using the NASA Goddard Space Flight Center's Orbit Determination Error Analysis System (ODEAS), orbit determination error analysis results are presented for all phases of the Triana Sun-Earth L1 libration point mission and for the science data collection phase of a future Sun-Earth L2 libration point mission. The Triana spacecraft was nominally to be released by the Space Shuttle in a low Earth orbit, and this analysis focuses on that scenario. From the release orbit, a transfer trajectory insertion (TTI) maneuver performed using a solid stage would increase the velocity by approximately 3.1 km/sec, sending Triana on a direct trajectory to its mission orbit. The Triana mission orbit is a Sun-Earth L1 Lissajous orbit with a Sun-Earth-vehicle (SEV) angle between 4.0 and 15.0 degrees, which would be achieved after a Lissajous orbit insertion (LOI) maneuver at approximately launch plus 6 months. Because Triana was to be launched by the Space Shuttle, TTI could potentially occur over a 16-orbit range from low Earth orbit. This analysis assumed TTI was performed from a low Earth orbit with an inclination of 28.5 degrees, with support from a combination of three Deep Space Network (DSN) stations, Goldstone, Canberra, and Madrid, and four commercial Universal Space Network (USN) stations, Alaska, Hawaii, Perth, and Santiago. These ground stations would provide coherent two-way range and range rate tracking data usable for orbit determination. Larger range and range rate errors were assumed for the USN stations. Nominally, DSN support would end at TTI+144 hours assuming there were no USN problems. Post-TTI coverage for a range of TTI longitudes for a given nominal trajectory case was analyzed. The orbit determination error analysis after the first correction maneuver would be generally applicable to any libration point mission utilizing a direct trajectory.
Error Generation in CATS-Based Agents
NASA Technical Reports Server (NTRS)
Callantine, Todd
2003-01-01
This research presents a methodology for generating errors from a model of nominally preferred correct operator activities, given a particular operational context, and maintaining an explicit link to the erroneous contextual information to support analyses. It uses the Crew Activity Tracking System (CATS) model as the basis for error generation. This report describes how the process works, and how it may be useful for supporting agent-based system safety analyses. The report presents results obtained by applying the error-generation process and discusses implementation issues. The research is supported by the System-Wide Accident Prevention Element of the NASA Aviation Safety Program.
36 CFR 60.14 - Changes and revisions to properties listed in the National Register.
Code of Federal Regulations, 2010 CFR
2010-07-01
... exist for altering a boundary: Professional error in the initial nomination, loss of historic integrity... previously unrecognized significance in American history, architecture, archeology, engineering or culture...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-03
... purpose of and basis for the proposed rule change and discussed any comments it received on the proposed... the Nominating Committee to the Nominating & Governance Committee; (ii) amended the PHLX reference to... governance functions such as consulting with the Board and the management to determine the characteristics...
Effects of Test Level Discrimination and Difficulty on Answer-Copying Indices
ERIC Educational Resources Information Center
Sunbul, Onder; Yormaz, Seha
2018-01-01
In this study, Type I error and power rates of omega (ω) and GBT (generalized binomial test) indices were investigated for several nominal alpha levels and for 40- and 80-item test lengths with a 10,000-examinee sample size under several test level restrictions. As a result, Type I error rates of both indices were found to be below the acceptable…
Error protection capability of space shuttle data bus designs
NASA Technical Reports Server (NTRS)
Proch, G. E.
1974-01-01
Error protection assurance in the reliability of digital data communications is discussed. The need for error protection on the space shuttle data bus system has been recognized and specified as a hardware requirement. The error protection techniques of particular concern are those designed into the Shuttle Main Engine Interface (MEI) and the Orbiter Multiplex Interface Adapter (MIA). The techniques and circuit design details proposed for this hardware are analyzed in this report to determine their error protection capability. The capability is calculated in terms of the probability of an undetected word error. Calculated results are reported for a noise environment that ranges from the nominal noise level stated in the hardware specifications to burst levels which may occur in extreme or anomalous conditions.
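The reported figure of merit, probability of an undetected word error, can be screened with a coarse random-error model. A minimal sketch, assuming the generic 2^-r approximation for an r-bit check field rather than the MEI/MIA-specific codes; word length and check size below are illustrative:

```python
# Coarse screening model: an undetected word error requires that the word be
# hit AND that the error pattern slip past the check bits. The 2**-r factor
# is the generic random-error approximation for an r-bit check field, not
# the MEI/MIA-specific result reported in the paper.
def p_undetected(n_bits: int, ber: float, r_check: int) -> float:
    p_word_error = 1.0 - (1.0 - ber) ** n_bits
    return p_word_error * 2.0 ** -r_check

for ber in (1e-6, 1e-4, 1e-2):   # nominal through burst-like conditions
    print(f"BER {ber:.0e}: P(undetected) = {p_undetected(32, ber, 8):.3e}")
```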
Venus radar mapper attitude reference quaternion
NASA Technical Reports Server (NTRS)
Lyons, D. T.
1986-01-01
Polynomial functions of time are used to specify the components of the quaternion which represents the nominal attitude of the Venus Radar mapper spacecraft during mapping. The following constraints must be satisfied in order to obtain acceptable synthetic array radar data: the nominal attitude function must have a large dynamic range, the sensor orientation must be known very accurately, the attitude reference function must use as little memory as possible, and the spacecraft must operate autonomously. Fitting polynomials to the components of the desired quaternion function is a straightforward method for providing a very dynamic nominal attitude using a minimum amount of on-board computer resources. Although the attitude from the polynomials may not be exactly the one requested by the radar designers, the polynomial coefficients are known, so they do not contribute to the attitude uncertainty. Frequent coefficient updates are not required, so the spacecraft can operate autonomously.
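Fitting polynomials to quaternion components and renormalizing at evaluation time is easy to prototype. A minimal sketch with an illustrative single-axis attitude profile; the polynomial degree and time span are assumptions:

```python
import numpy as np

t = np.linspace(0.0, 600.0, 200)                 # seconds in a mapping pass
tau = t / t[-1]                                  # scaled time for conditioning
angle = 0.3 * np.sin(2.0 * np.pi * t / 1200.0)   # illustrative single-axis slew
q_true = np.column_stack([np.cos(angle / 2.0), np.sin(angle / 2.0),
                          np.zeros_like(t), np.zeros_like(t)])

# One low-order polynomial per quaternion component, as in the report.
coeffs = [np.polyfit(tau, q_true[:, i], deg=5) for i in range(4)]

def q_poly(ti):
    q = np.array([np.polyval(c, ti / t[-1]) for c in coeffs])
    return q / np.linalg.norm(q, axis=0)         # renormalize to unit length

print("max component error:", np.abs(q_poly(t) - q_true.T).max())
```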
ERIC Educational Resources Information Center
Yue, Ziao Dong; Rudowicz, Elisabeth
2002-01-01
A survey of 489 undergraduates in Beijing, Guangzhou, Hong Kong, and Taipei, found politicians were nominated by all four samples as being the most creative individuals in the past and at present. Scientists and inventors ranked second in position. Artists, musicians, and businessmen were rarely nominated. (Contains references.) (Author/CR)
78 FR 9705 - National Advisory Council on the National Health Service Corps; Request for Nominations
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-11
... (HRSA) is requesting nominations to fill five (5) vacancies on the National Advisory Council (NAC) on... electronically to Njeri Jones at [email protected] or mailed to 5600 Fishers Lane, Room 13-64, Rockville, MD 20857...: The National Advisory Council on the National Health Service Corps (hereafter referred to as NAC) was...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-03
... proposed rule change and discussed any comments it received on the proposed rule change. The text of these... the Nominating & Governance Committee; (ii) amended the Phlx reference to reflect a recent conversion... Article IV, Section 4.13(h), the Nominating Committee also conducts certain governance functions such as...
NASA Astrophysics Data System (ADS)
Hao, Huadong; Shi, Haolei; Yi, Pengju; Liu, Ying; Li, Cunjun; Li, Shuguang
2018-01-01
A volume metrology method based on internal electro-optical distance ranging is established for large vertical energy storage tanks. After analyzing the mathematical model for vertical tank volume calculation, the key point-cloud processing algorithms, such as gross-error elimination, filtering, streamlining, and radius calculation, are studied. The corresponding volume values at different liquid levels are calculated automatically by computing the cross-sectional area along the horizontal direction and integrating in the vertical direction. To design the comparison system, a vertical tank with a nominal capacity of 20,000 m³ is selected as the research object, and the method is shown to have good repeatability and reproducibility. Using the conventional capacity measurement method as a reference, the relative deviation of the calculated volume is less than 0.1%, meeting the measurement requirements and demonstrating the method's feasibility and effectiveness.
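The level-to-volume step is a one-dimensional integration of the fitted cross sections. A minimal sketch with synthetic per-level radii standing in for the point-cloud-derived values:

```python
import numpy as np

# Integrate point-cloud-derived cross sections over height to get volume at
# a given liquid level; radii below are synthetic stand-ins for the values
# extracted from the laser-scan point cloud of a ~20,000 m^3 tank.
heights = np.linspace(0.0, 20.0, 201)          # metres
radii = 18.0 + 0.005 * np.sin(heights)         # fitted radius per level (m)
areas = np.pi * radii ** 2

def volume_at(level_m: float) -> float:
    h, a = heights[heights <= level_m], areas[heights <= level_m]
    return float(np.sum(0.5 * (a[1:] + a[:-1]) * np.diff(h)))  # trapezoid rule

print(f"volume at 15 m: {volume_at(15.0):,.1f} m^3")
```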
[Errors in Peruvian medical journals references].
Huamaní, Charles; Pacheco-Romero, José
2009-01-01
References are fundamental in our studies; an adequate selection is as important as an adequate description. To determine the number of errors in a sample of references found in Peruvian medical journals, we reviewed 515 references from scientific papers, selected by systematic randomized sampling, and corroborated the reference information against the original document or its citation in PubMed, LILACS, or SciELO-Peru. We found errors in 47.6% (245) of the references, identifying 372 errors in total; the most frequent were errors in presentation style (120), authorship (100), and title (100), mainly due to spelling mistakes (91). The percentage of reference errors was high, and the errors were varied and multiple. We suggest systematic review of references in the editorial process, as well as extending the discussion of this theme. Keywords: references, periodicals, research, bibliometrics.
NASA Technical Reports Server (NTRS)
Kibler, J. F.; Green, R. N.; Young, G. R.; Kelly, M. G.
1974-01-01
A method has previously been developed to satisfy terminal rendezvous and intermediate timing constraints for planetary missions involving orbital operations. The method uses impulse factoring, in which a two-impulse transfer is divided into three or four impulses which add one or two intermediate orbits. The periods of the intermediate orbits and the number of revolutions in each orbit are varied to satisfy timing constraints. Techniques are developed to retarget the orbital transfer in the presence of orbit-determination and maneuver-execution errors. Sample results indicate that the nominal transfer can be retargeted with little change in either the magnitude (Delta V) or location of the individual impulses. Additionally, the total Delta V required for the retargeted transfer differs little from that required for the nominal transfer. A digital computer program developed to implement the techniques is described.
Skin Friction at Very High Reynolds Numbers in the National Transonic Facility
NASA Technical Reports Server (NTRS)
Watson, Ralph D.; Anders, John B.; Hall, Robert M.
2006-01-01
Skin friction coefficients were derived from measurements using standard measurement technologies on an axisymmetric cylinder in the NASA Langley National Transonic Facility (NTF) at Mach numbers from 0.2 to 0.85. The pressure gradient was nominally zero, the wall temperature was nominally adiabatic, and the ratio of boundary layer thickness to model diameter within the measurement region was 0.10 to 0.14, varying with distance along the model. Reynolds numbers based on momentum thicknesses ranged from 37,000 to 605,000. The measurements approximately doubled the range of available data for flat plate skin friction coefficients. Three different techniques were used to measure surface shear. The maximum error of Preston tube measurements was estimated to be 2.5 percent, while that of Clauser derived measurements was estimated to be approximately 5 percent. Direct measurements by skin friction balance proved to be subject to large errors and were not considered reliable.
NASA Astrophysics Data System (ADS)
Zhang, Yi
2018-01-01
This study extends a set of unstructured third/fourth-order flux operators on spherical icosahedral grids from two perspectives. First, the fifth-order and sixth-order flux operators of this kind are further extended, and the nominally second-order to sixth-order operators are then compared based on the solid body rotation and deformational flow tests. Results show that increasing the nominal order generally leads to smaller absolute errors. Overall, the standard fifth-order scheme generates the smallest errors in limited and unlimited tests, although it does not enhance the convergence rate. Even-order operators show higher limiter sensitivity than the odd-order operators. Second, a triangular version of these high-order operators is repurposed for transporting the potential vorticity in a space-time-split shallow water framework. Results show that a class of nominally third-order upwind-biased operators generates better results than second-order and fourth-order counterparts. The increase of the potential enstrophy over time is suppressed owing to the damping effect. The grid-scale noise in the vorticity is largely alleviated, and the total energy remains conserved. Moreover, models using high-order operators show smaller numerical errors in the vorticity field because of a more accurate representation of the nonlinear Coriolis term. This improvement is especially evident in the Rossby-Haurwitz wave test, in which the fluid is highly rotating. Overall, high-order flux operators with higher damping coefficients, which essentially behave like the Anticipated Potential Vorticity Method, present better results.
NASA Technical Reports Server (NTRS)
Gordon, Steven C.
1993-01-01
Spacecraft in orbit near libration point L1 in the Sun-Earth system are excellent platforms for research concerning solar effects on the terrestrial environment. One spacecraft mission launched in 1978 used an L1 orbit for nearly 4 years, and future L1 orbital missions are also being planned. Orbit determination and station-keeping are, however, required for these orbits. In particular, orbit determination error analysis may be used to compute the state uncertainty after a predetermined tracking period; the predicted state uncertainty levels then impact the control costs computed in station-keeping simulations. Error sources, such as solar radiation pressure and planetary mass uncertainties, are also incorporated. For future missions, there may be some flexibility in the type and size of the spacecraft's nominal trajectory, but different orbits may produce varying error analysis and station-keeping results. The nominal path, for instance, can be (nearly) periodic or distinctly quasi-periodic. A periodic 'halo' orbit may be constructed to be significantly larger than a quasi-periodic 'Lissajous' path; both may meet mission requirements, but the required control costs for the two orbit types may differ. For this spacecraft tracking and control simulation problem, experimental design methods can also be used to determine the most significant uncertainties. That is, these methods can identify the error sources in the tracking and control problem that most impact the control cost (output); they also produce an equation that gives the approximate functional relationship between the error inputs and the output.
NASA Technical Reports Server (NTRS)
Stone, H. W.; Powell, R. W.
1985-01-01
A six-degree-of-freedom simulation analysis was performed for the space shuttle orbiter during entry from Mach 8 to Mach 1.5 with realistic off-nominal conditions, using the flight control systems defined by the shuttle contractor. The off-nominal conditions included aerodynamic uncertainties in extrapolating from wind-tunnel-derived characteristics to full-scale flight characteristics, uncertainties in the estimates of the reaction control system interaction with the orbiter aerodynamics, an error in deriving the angle of attack from onboard instrumentation, the failure of two of the four reaction control system thrusters on each side, and a lateral center-of-gravity offset coupled with vehicle and flow asymmetries. With combinations of these off-nominal conditions, the flight control system performed satisfactorily. At low hypersonic speeds, a few cases exhibited unacceptable performance when errors in deriving the angle of attack from the onboard instrumentation were modeled. The orbiter was unable to maintain lateral trim in some cases between Mach 5 and Mach 2 and exhibited limit cycle tendencies or residual roll oscillations between Mach 3 and Mach 1. Piloting techniques and changes in some gains and switching times in the flight control system are suggested to help alleviate these problems.
Code of Federal Regulations, 2013 CFR
2013-01-01
... reference; see § 430.3), disregarding the provisions regarding batteries and the determination... Fahrenheit = 8.2, and e = nominal gas or oil water heater recovery efficiency = 0.75, 5.6.1.2 For water... efficiency = 0.75. 5.6.2 Dishwashers that operate with a nominal 120 °F inlet water temperature, only. 5.6.2...
Pilot estimates of glidepath and aim point during simulated landing approaches
NASA Technical Reports Server (NTRS)
Acree, C. W., Jr.
1981-01-01
Pilot perceptions of glidepath angle and aim point were measured during simulated landings. A fixed-base cockpit simulator was used with video recordings of simulated landing approaches shown on a video projector. Pilots estimated the magnitudes of approach errors during observation without attempting to make corrections. Pilots estimated glidepath angular errors well, but had difficulty estimating aim-point errors. The data make plausible the hypothesis that pilots are little concerned with aim point during most of an approach, concentrating instead on keeping close to the nominal glidepath and trusting this technique to guide them to the proper touchdown point.
Time assignment system and its performance aboard the Hitomi satellite
NASA Astrophysics Data System (ADS)
Terada, Yukikatsu; Yamaguchi, Sunao; Sugimoto, Shigenobu; Inoue, Taku; Nakaya, Souhei; Murakami, Maika; Yabe, Seiya; Oshimizu, Kenya; Ogawa, Mina; Dotani, Tadayasu; Ishisaki, Yoshitaka; Mizushima, Kazuyo; Kominato, Takashi; Mine, Hiroaki; Hihara, Hiroki; Iwase, Kaori; Kouzu, Tomomi; Tashiro, Makoto S.; Natsukari, Chikara; Ozaki, Masanobu; Kokubun, Motohide; Takahashi, Tadayuki; Kawakami, Satoko; Kasahara, Masaru; Kumagai, Susumu; Angelini, Lorella; Witthoeft, Michael
2018-01-01
Fast timing capability in x-ray observation of astrophysical objects is one of the key properties for the ASTRO-H (Hitomi) mission. Absolute timing accuracies of 350 μs or 35 μs are required to achieve the nominal scientific goals or to study fast variabilities of specific sources, respectively. The satellite carries a GPS receiver to obtain accurate time information, which is distributed from the central onboard computer through the large and complex SpaceWire network. The details of the hardware and software design of the time system are described. In the distribution of the time information, propagation delays and jitters affect the timing accuracy. Six other items identified within the timing system also contribute to the absolute time error. These error items were measured and checked on the ground to ensure the time error budgets meet the mission requirements. The overall timing performance, combining hardware performance, the software algorithm, and the orbit determination accuracy under nominal conditions, satisfies the mission requirement of 35 μs. This work demonstrates key points for space-use instruments in hardware and software design and in calibration measurements for fine timing accuracy on the order of microseconds for midsized satellites using the SpaceWire (IEEE 1355) network.
NASA Astrophysics Data System (ADS)
Braun, Jaroslav; Štroner, Martin; Urban, Rudolf
2015-05-01
All surveying instruments and their measurements suffer from errors. To refine the measurement results, it is necessary to use procedures that restrict the influence of instrument errors on the measured values or to implement numerical corrections. In precise engineering surveying for industrial applications, the accuracy of distances, usually realized over relatively short ranges, is a key parameter limiting the resulting accuracy of the determined values (coordinates, etc.). To determine the size of the systematic and random errors of the measured distances, tests were made with the idea of suppressing the random error by averaging repeated measurements and reducing the influence of systematic errors by identifying their absolute size on the absolute baseline realized in the geodetic laboratory of the Faculty of Civil Engineering, CTU in Prague. Sixteen concrete pillars with forced centering were set up, and the absolute distances between the points were determined with a standard deviation of 0.02 mm using a Leica Absolute Tracker AT401. For any distance measured by the calibrated instruments (up to the length of the testing baseline, i.e., 38.6 m), the error correction of the distance meter can now be determined in two ways: first, by interpolation on the raw data, or second, using a correction function derived by a previous FFT transformation. The quality of this calibration and correction procedure was tested experimentally on three instruments (Trimble S6 HP, Topcon GPT-7501, Trimble M3) using the Leica Absolute Tracker AT401. The correction procedure reduced the standard deviation of the measured distances significantly, to less than 0.6 mm. For the Topcon GPT-7501, the nominal standard deviation is 2 mm; 2.8 mm was achieved without corrections and 0.55 mm after corrections. For the Trimble M3, the nominal standard deviation is 3 mm; 1.1 mm was achieved without corrections and 0.58 mm after corrections. Finally, for the Trimble S6, the nominal standard deviation is 1 mm; 1.2 mm was achieved without corrections and 0.51 mm after corrections. The proposed calibration and correction procedure is, in our opinion, very suitable for increasing the accuracy of electronic distance measurement and allows common surveying instruments to achieve uncommonly high precision.
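The first of the two correction options, interpolation on the raw calibration data, takes only a few lines. A minimal sketch with made-up baseline values; the real table comes from the tracker comparison:

```python
import numpy as np

# Calibration table: baseline distances (from the absolute tracker) versus
# the instrument's raw readings; all values here are made up.
true_d = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0, 35.0])          # metres
measured = true_d + np.array([1.8, 2.1, 1.2, -0.4, -1.1, 0.3, 1.5]) / 1000.0
corrections = true_d - measured                 # additive corrections (m)

def correct(reading_m: float) -> float:
    # First correction option from the text: interpolate the tabulated
    # correction at the raw reading (the FFT-derived function is the other).
    return reading_m + float(np.interp(reading_m, measured, corrections))

print(f"corrected distance: {correct(12.3456):.4f} m")
```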
Uncertainty Analysis of Seebeck Coefficient and Electrical Resistivity Characterization
NASA Technical Reports Server (NTRS)
Mackey, Jon; Sehirlioglu, Alp; Dynys, Fred
2014-01-01
In order to provide a complete description of a material's thermoelectric power factor, an uncertainty interval is required in addition to the measured nominal value. The uncertainty may contain sources of measurement error including systematic bias error and precision error of a statistical nature. The work focuses specifically on the popular ZEM-3 (Ulvac Technologies) measurement system, but the methods apply to any measurement system. The analysis accounts for sources of systematic error including sample preparation tolerance, measurement probe placement, thermocouple cold-finger effect, and measurement parameters, in addition to uncertainty of a statistical nature. Complete uncertainty analysis of a measurement system allows for more reliable comparison of measurement data between laboratories.
Identification of cascade water tanks using a PWARX model
NASA Astrophysics Data System (ADS)
Mattsson, Per; Zachariah, Dave; Stoica, Petre
2018-06-01
In this paper we consider the identification of a discrete-time nonlinear dynamical model for a cascade water tank process. The proposed method starts with a nominal linear dynamical model of the system, and proceeds to model its prediction errors using a model that is piecewise affine in the data. As data is observed, the nominal model is refined into a piecewise ARX model which can capture a wide range of nonlinearities, such as the saturation in the cascade tanks. The proposed method uses a likelihood-based methodology which adaptively penalizes model complexity and directly leads to a computationally efficient implementation.
Hubble Space Telescope secondary mirror vertex radius/conic constant test
NASA Technical Reports Server (NTRS)
Parks, Robert
1991-01-01
The Hubble Space Telescope backup secondary mirror was tested to determine the vertex radius and conic constant. Three completely independent tests (to the same procedure) were performed. Similar measurements in the three tests were highly consistent. The values obtained for the vertex radius and conic constant were the nominal design values within the error bars associated with the tests. Visual examination of the interferometric data did not show any measurable zonal figure error in the secondary mirror.
Jamali, Jamshid; Ayatollahi, Seyyed Mohammad Taghi; Jafari, Peyman
2017-01-01
Evaluating measurement equivalence (also known as differential item functioning (DIF)) is an important part of the process of validating psychometric questionnaires. This study aimed at evaluating the multiple indicators multiple causes (MIMIC) model for DIF detection when the latent construct distribution is nonnormal and the focal group sample size is small. In this simulation-based study, Type I error rates and power of the MIMIC model for detecting uniform DIF were investigated under different combinations of reference-to-focal-group sample size ratio, magnitude of the uniform-DIF effect, scale length, number of response categories, and latent trait distribution. Moderate and high skewness in the latent trait distribution led to decreases of 0.33% and 0.47%, respectively, in the power of the MIMIC model for detecting uniform DIF. The findings indicated that increasing the scale length, the number of response categories, and the DIF magnitude improved the power of the MIMIC model by 3.47%, 4.83%, and 20.35%, respectively; it also decreased the Type I error of the MIMIC approach by 2.81%, 5.66%, and 0.04%, respectively. This study revealed that the power of the MIMIC model was at an acceptable level when latent trait distributions were skewed. However, the empirical Type I error rate was slightly greater than the nominal significance level. Consequently, the MIMIC model is recommended for detection of uniform DIF when the latent construct distribution is nonnormal and the focal group sample size is small.
Escott-Price, Valentina; Ghodsi, Mansoureh; Schmidt, Karl Michael
2014-04-01
We evaluate the effect of genotyping errors on the type-I error of a general association test based on genotypes, showing that, in the presence of errors in the case and control samples, the test statistic asymptotically follows a scaled non-central χ² distribution. We give explicit formulae for the scaling factor and non-centrality parameter for the symmetric allele-based genotyping error model and for additive and recessive disease models. They show how genotyping errors can lead to a significantly higher false-positive rate, growing with sample size, compared with the nominal significance levels. The strength of this effect depends very strongly on the population distribution of the genotype, with a pronounced effect in the case of rare alleles, and a great robustness against error in the case of large minor allele frequency. We also show how these results can be used to correct p-values.
NASA Astrophysics Data System (ADS)
Zhang, Y.
2017-12-01
The unstructured formulation of the third/fourth-order flux operators used by the Advanced Research WRF is extended twofold on spherical icosahedral grids. First, the fifth- and sixth-order flux operators of WRF are further extended, and the nominally second- to sixth-order operators are then compared based on the solid body rotation and deformational flow tests. Results show that increasing the nominal order generally leads to smaller absolute errors. Overall, the fifth-order scheme generates the smallest errors in limited and unlimited tests, although it does not enhance the convergence rate. The fifth-order scheme also exhibits smaller sensitivity to the damping coefficient than the third-order scheme. Overall, the even-order schemes have higher limiter sensitivity than the odd-order schemes. Second, a triangular version of these high-order operators is repurposed for transporting the potential vorticity in a space-time-split shallow water framework. Results show that a class of nominally third-order upwind-biased operators generates better results than second- and fourth-order counterparts. The increase of the potential enstrophy over time is suppressed owing to the damping effect. The grid-scale noise in the vorticity is largely alleviated, and the total energy remains conserved. Moreover, models using high-order operators show smaller numerical errors in the vorticity field because of a more accurate representation of the nonlinear Coriolis term. This improvement is especially evident in the Rossby-Haurwitz wave test, in which the fluid is highly rotating. Overall, flux operators with higher damping coefficients, which essentially behave like the Anticipated Potential Vorticity Method, present optimal results.
Soulakova, Julia N; Bright, Brianna C
2013-01-01
A large-sample problem of illustrating noninferiority of an experimental treatment over a referent treatment for binary outcomes is considered. The methods of illustrating noninferiority involve constructing the lower two-sided confidence bound for the difference between binomial proportions corresponding to the experimental and referent treatments and comparing it with the negative value of the noninferiority margin. The three considered methods, Anbar, Falk-Koch, and Reduced Falk-Koch, handle the comparison in an asymmetric way, that is, only the referent proportion out of the two, experimental and referent, is directly involved in the expression for the variance of the difference between two sample proportions. Five continuity corrections (including zero) are considered with respect to each approach. The key properties of the corresponding methods are evaluated via simulations. First, the uncorrected two-sided confidence intervals can, potentially, have smaller coverage probability than the nominal level even for moderately large sample sizes, for example, 150 per group. Next, the 15 testing methods are discussed in terms of their Type I error rate and power. In the settings with a relatively small referent proportion (about 0.4 or smaller), the Anbar approach with Yates' continuity correction is recommended for balanced designs and the Falk-Koch method with Yates' correction is recommended for unbalanced designs. For relatively moderate (about 0.6) and large (about 0.8 or greater) referent proportion, the uncorrected Reduced Falk-Koch method is recommended, although in this case, all methods tend to be over-conservative. These results are expected to be used in the design stage of a noninferiority study when asymmetric comparisons are envisioned. Copyright © 2013 John Wiley & Sons, Ltd.
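The comparison at the heart of these methods, a lower confidence bound checked against the negative margin, can be sketched with the plain Wald variant; the Anbar and Falk-Koch variants alter the variance term, and the counts and margin below are illustrative:

```python
import numpy as np
from scipy.stats import norm

def wald_lower_bound(x_e, n_e, x_r, n_r, alpha=0.05, cc=0.0):
    # Lower end of a two-sided (1 - alpha) Wald interval for p_e - p_r with
    # an optional continuity correction cc; the Anbar/Falk-Koch variants
    # differ mainly in which proportions enter the variance term.
    pe, pr = x_e / n_e, x_r / n_r
    se = np.sqrt(pe * (1 - pe) / n_e + pr * (1 - pr) / n_r)
    return (pe - pr) - norm.ppf(1 - alpha / 2) * se - cc

margin = 0.10                                   # assumed noninferiority margin
lower = wald_lower_bound(138, 150, 135, 150)    # illustrative counts
print(f"lower bound {lower:.3f}; noninferiority shown: {lower > -margin}")
```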
Code of Federal Regulations, 2013 CFR
2013-01-01
... Efficiency of Electric Motors B Appendix B to Subpart B of Part 431 Energy DEPARTMENT OF ENERGY ENERGY..., Subpt. B, App. B Appendix B to Subpart B of Part 431—Uniform Test Method for Measuring Nominal Full Load... Std 112-2004 Test Method B, Input-Output With Loss Segregation, (incorporated by reference, see § 431...
In-Bed Accountability Development for a Passively Cooled, Electrically Heated Hydride (PACE) Bed
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klein, J.E.
A nominal 1500 STP-L PAssively Cooled, Electrically heated hydride (PACE) Bed has been developed for implementation into a new Savannah River Site tritium project. The 1.2 meter (four-foot) long process vessel contains an internal 'U-tube' for tritium In-Bed Accountability (IBA) measurements. IBA will be performed on six, 12.6 kg production metal hydride storage beds. IBA tests were done on a prototype bed using electric heaters to simulate the radiolytic decay of tritium. Tests had gas flows from 10 to 100 SLPM through the U-tube or 100 SLPM through the bed's vacuum jacket. IBA inventory measurement errors at the 95% confidence level were calculated using the correlation of IBA gas temperature rise, or (hydride) bed temperature rise above ambient temperature, versus simulated tritium inventory. Prototype bed IBA inventory errors at 100 SLPM were the largest for gas flows through the vacuum jacket: 15.2 grams for the bed temperature rise and 11.5 grams for the gas temperature rise. For a 100 SLPM U-tube flow, the inventory error was 2.5 grams using bed temperature rise and 1.6 grams using gas temperature rise. For 50 to 100 SLPM U-tube flows, the IBA gas temperature rise inventory errors were nominally one to two grams, increasing above four grams for flows less than 50 SLPM. For 50 to 100 SLPM U-tube flows, the IBA bed temperature rise inventory errors were greater than the gas temperature rise errors, but similar errors were found for both methods at gas flows of 20, 30, and 40 SLPM. Electric heater IBA tests were done for six production hydride beds using a 45 SLPM U-tube gas flow. Of the duplicate runs performed on these beds, five of the six beds produced IBA inventory errors of approximately three grams, consistent with results obtained in the laboratory prototype tests.
Symbolic Power, Robotting, and Surveilling
ERIC Educational Resources Information Center
Skovsmose, Ole
2012-01-01
Symbolic power is discussed with reference to mathematics and formal languages. Two distinctions are crucial for establishing mechanical and formal perspectives: one between appearance and reality, and one between sense and reference. These distinctions include a nomination of what to consider primary and secondary. They establish the grammatical…
Miniaturized force/torque sensor for in vivo measurements of tissue characteristics.
Hessinger, M; Pilic, T; Werthschutzky, R; Pott, P P
2016-08-01
This paper presents the development of a surgical instrument to measure interaction forces/torques with organic tissue during operation. The focus is on the design process of the sensor element, consisting of a spoke-wheel deformation element with a diameter of 12 mm and eight inhomogeneously doped piezoresistive silicon strain gauges in an integrated full-bridge assembly with an edge length of 500 μm. The silicon chips are contacted to flex-circuits via flip chip and bonded to the substrate with a single-component adhesive. A signal processing board with an 18 bit serial A/D converter is integrated into the sensor. The design concept of the handheld surgical sensor device consists of an instrument coupling, the six-axis sensor, a wireless communication interface, and a battery. The nominal force of the sensing element is 10 N and the nominal torque is 1 N·m in all spatial directions. A first characterization of the force sensor yields a maximal systematic error of 4.92% and a random error of 1.13%.
NASA Technical Reports Server (NTRS)
Weinstein, Bernice
1999-01-01
A strategy for detecting control law calculation errors in critical flight control computers during laboratory validation testing is presented. This paper addresses Part I of the detection strategy which involves the use of modeling of the aircraft control laws and the design of Kalman filters to predict the correct control commands. Part II of the strategy which involves the use of the predicted control commands to detect control command errors is presented in the companion paper.
Testing the non-unity of rate ratio under inverse sampling.
Tang, Man-Lai; Liao, Yi Jie; Ng, Hong Keung Tony; Chan, Ping Shing
2007-08-01
Inverse sampling is considered a more appropriate sampling scheme than the usual binomial sampling scheme when subjects arrive sequentially, when the underlying response of interest is acute, and when maximum likelihood estimators of some epidemiologic indices are undefined. In this article, we study various statistics for testing non-unity rate ratios in case-control studies under inverse sampling. These include the Wald, unconditional score, likelihood ratio, and conditional score statistics. Three methods (the asymptotic, conditional exact, and mid-P methods) are adopted for P-value calculation. We evaluate the performance of different combinations of test statistics and P-value calculation methods in terms of their empirical sizes and powers via Monte Carlo simulation. In general, the asymptotic score and conditional score tests are preferable because their actual type I error rates are well controlled around the pre-chosen nominal level and their powers are comparatively the largest. The exact version of the Wald test is recommended if one wants to control the actual type I error rate at or below the pre-chosen nominal level. If larger power is expected and fluctuations of size around the pre-chosen nominal level are allowed, then the mid-P version of the Wald test is a desirable alternative. We illustrate the methodologies with a real example from a heart disease study. (c) 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim
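The size behaviour of such tests is straightforward to probe by simulation. Below is a minimal sketch, not the authors' exact statistics: under inverse sampling each group is observed until r index cases occur, so the number of non-cases is negative binomial, and a Wald statistic for the log rate ratio is formed with a delta-method variance of (1 - p)/r per group. All settings are illustrative.

```python
# Monte Carlo check of the empirical type I error of a Wald-type test of
# rate-ratio unity under inverse (negative binomial) sampling.
import numpy as np

rng = np.random.default_rng(0)

def empirical_size(p0, p1, r, n_sim=20000):
    # Y ~ NegBinomial(r, p): non-cases observed before the r-th case
    y0 = rng.negative_binomial(r, p0, n_sim)
    y1 = rng.negative_binomial(r, p1, n_sim)
    p0_hat = r / (r + y0)
    p1_hat = r / (r + y1)
    log_rr = np.log(p1_hat / p0_hat)
    se = np.sqrt((1 - p1_hat) / r + (1 - p0_hat) / r)  # delta method
    z = log_rr / se
    return np.mean(np.abs(z) > 1.959964)  # two-sided 5% test

# Under H0 (equal rates) the rejection rate should sit near 0.05.
for r in (5, 10, 30):
    print(r, empirical_size(0.1, 0.1, r))
```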
NASA Technical Reports Server (NTRS)
Stone, H. W.; Powell, R. W.
1984-01-01
A six-degree-of-freedom simulation analysis has been performed for the Space Shuttle Orbiter during entry from Mach 10 to 2.5 with realistic off-nominal conditions using the entry flight control system specified in May 1978. The off-nominal conditions included the following: (1) aerodynamic uncertainties, (2) an error in deriving the angle of attack from onboard instrumentation, (3) the failure of two of the four reaction control-system thrusters on each side, and (4) a lateral center-of-gravity offset. With combinations of the above off-nominal conditions, the control system performed satisfactorily with a few exceptions. The cases that did not exhibit satisfactory performance displayed the following main weaknesses. Marginal performance was exhibited at hypersonic speeds with a sensed angle-of-attack error of 4 deg. At supersonic speeds the system tended to be oscillatory, and the system diverged for several cases because of the inability to hold lateral trim. Several system modifications were suggested to help solve these problems and to maximize safety on the first flight: alter the elevon-trim and speed-brake schedules, delay switching to rudder trim until the rudder effectiveness is adequate, and reduce the overall rudder loop gain. These and other modifications were incorporated in a flight-control-system redesign in May 1979.
Computational aspects of geometric correction data generation in the LANDSAT-D imagery processing
NASA Technical Reports Server (NTRS)
Levine, I.
1981-01-01
A method is presented for systematic and geodetic correction data calculation. It is based on representing image distortions as a sum of nominal distortions and linear effects caused by variations of the spacecraft position and attitude from their nominals. The method may be used for both MSS and TM image data, and it is incorporated into the processing by means of mostly offline calculations. Modeling shows that the maximal errors of the method are of the order of 5 m at the worst point in a frame; the standard deviations of the average errors are less than 0.8 m.
NASA Technical Reports Server (NTRS)
Fragola, Joseph R.; Maggio, Gaspare; Frank, Michael V.; Gerez, Luis; Mcfadden, Richard H.; Collins, Erin P.; Ballesio, Jorge; Appignani, Peter L.; Karns, James J.
1995-01-01
The application of the probabilistic risk assessment methodology to a Space Shuttle environment, particularly to the potential of losing the Shuttle during nominal operation, is addressed. The different related concerns are identified and combined to determine overall program risks. A fault tree model is used to allocate system probabilities to the subsystem level. Loss of the vehicle due to failure to contain energetic gas and debris or to maintain proper propulsion and configuration is analyzed, along with loss due to Orbiter failure, external tank failure, and landing failure or error.
Magnetic map of the Irish Hills and surrounding areas, San Luis Obispo County, central California
Langenheim, V.E.; Watt, J.T.; Denton, K.M.
2012-01-01
A magnetic map of the Irish Hills and surrounding areas was created as part of a cooperative research and development agreement with the Pacific Gas and Electric Company and is intended to promote further understanding of the areal geology and structure by serving as a basis for geophysical interpretations and by supporting geological mapping, mineral and water resource investigations, and other topical studies. Local spatial variations in the Earth's magnetic field (evident as anomalies on magnetic maps) reflect the distribution of magnetic minerals, primarily magnetite, in the underlying rocks. In many cases the volume content of magnetic minerals can be related to rock type, and abrupt spatial changes in the amount of magnetic minerals can be related to either lithologic or structural boundaries. Magnetic susceptibility measurements from the area indicate that bodies of serpentinite and other mafic and ultramafic rocks tend to produce the most intense magnetic anomalies, but such generalizations must be applied with caution because some sedimentary units also can produce measurable magnetic anomalies. Remanent magnetization does not appear to be a significant source of magnetic anomalies because it is an order of magnitude weaker than the induced magnetization. The map is a mosaic of three separate surveys collected (1) by fixed-wing aircraft at a nominal height of 305 m, (2) by boat with the sensor at sea level, and (3) by helicopter. The helicopter survey was flown by New-Sense Geophysics in October 2009 along flight lines spaced 150 m apart and at a nominal terrain clearance of 50 to 100 m. Tie lines were flown 1,500 m apart. Data were adjusted for lag error and diurnal field variations. Further processing included microleveling using the tie lines and subtraction of the reference field defined by the International Geomagnetic Reference Field (IGRF) 2005 extrapolated to August 1, 2008.
NASA Astrophysics Data System (ADS)
Li, Ruijiang; Lewis, John H.; Cerviño, Laura I.; Jiang, Steve B.
2009-10-01
A major difficulty in conformal lung cancer radiotherapy is respiratory organ motion, which may cause clinically significant targeting errors. Respiratory-gated radiotherapy allows for more precise delivery of prescribed radiation dose to the tumor, while minimizing normal tissue complications. Gating based on external surrogates is limited by its lack of accuracy, while gating based on implanted fiducial markers is limited primarily by the risk of pneumothorax due to marker implantation. Techniques for fluoroscopic gating without implanted fiducial markers (markerless gating) have been developed. These techniques usually require a training fluoroscopic image dataset with marked tumor positions in the images, which limits their clinical implementation. To remove this requirement, this study presents a markerless fluoroscopic gating algorithm based on 4DCT templates. To generate gating signals, we explored the application of three similarity measures or scores between fluoroscopic images and the reference 4DCT template: un-normalized cross-correlation (CC), normalized cross-correlation (NCC) and normalized mutual information (NMI), as well as average intensity (AI) of the region of interest (ROI) in the fluoroscopic images. Performance was evaluated using fluoroscopic and 4DCT data from three lung cancer patients. On average, gating based on CC achieves the highest treatment accuracy given the same efficiency, with a high target coverage (average between 91.9% and 98.6%) for a wide range of nominal duty cycles (20-50%). AI works well for two patients out of three, but failed for the third patient due to interference from the heart. Gating based on NCC and NMI usually failed below 50% nominal duty cycle. Based on this preliminary study with three patients, we found that the proposed CC-based gating algorithm can generate accurate and robust gating signals when using 4DCT reference template. However, this observation is based on results obtained from a very limited dataset, and further investigation on a larger patient population has to be done before its clinical implementation.
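A minimal sketch of how such template-based gating signals can be formed is given below. The CC/NCC scoring and the quantile thresholding rule used to hit a nominal duty cycle are illustrative assumptions, and the random arrays stand in for real fluoroscopic frames and 4DCT-derived templates.

```python
# Template-matching gating sketch: score each fluoroscopic frame's ROI
# against a reference template and gate the beam on the best-matching frames.
import numpy as np

def similarity_scores(frame_roi, template):
    f = frame_roi.ravel().astype(float)
    t = template.ravel().astype(float)
    cc = float(np.dot(f, t))  # un-normalized cross-correlation (CC)
    fz, tz = f - f.mean(), t - t.mean()
    ncc = float(np.dot(fz, tz) /
                (np.linalg.norm(fz) * np.linalg.norm(tz) + 1e-12))  # NCC
    return cc, ncc

def gating_signal(frames, template, duty_cycle=0.3):
    # Beam-on whenever the NCC score is in the top `duty_cycle` fraction,
    # approximating a fixed nominal duty cycle.
    scores = np.array([similarity_scores(f, template)[1] for f in frames])
    threshold = np.quantile(scores, 1.0 - duty_cycle)
    return scores >= threshold  # True = beam on

frames = [np.random.rand(64, 64) for _ in range(100)]  # stand-in fluoroscopy
template = np.random.rand(64, 64)                      # stand-in 4DCT template
print(gating_signal(frames, template).mean())          # ~0.3 beam-on fraction
```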
SU-E-J-112: Intensity-Based Pulmonary Image Registration: An Evaluation Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, F; Meyer, J; Sandison, G
2015-06-15
Purpose: Accurate alignment of thoracic CT images is essential for dose tracking and for safely implementing adaptive radiotherapy in lung cancer, and at the same time it is challenging given the highly elastic nature of lung tissue deformations. The objective of this study was to assess the performance of three state-of-the-art intensity-based algorithms in terms of their ability to register thoracic CT images subject to affine, barrel, and sinusoid transformations. Methods: The intensity similarity measures of the evaluated algorithms were the sum-of-squared difference (SSD), local mutual information (LMI), and residual complexity (RC). Five thoracic CT scans obtained from the EMPIRE10 challenge database were included and served as reference images. Each CT dataset was distorted by realistic affine, barrel, and sinusoid transformations. Registration performance of the three algorithms was evaluated for each distortion type in terms of the intensity root mean square error (IRMSE) between the reference and registered images in the lung regions. Results: For affine distortions, the three algorithms differed significantly in registration of thoracic images, both visually and nominally in terms of IRMSE, with a mean of 0.011 for SSD, 0.039 for RC, and 0.026 for LMI (p<0.01; Kruskal-Wallis test). For barrel distortion, the three algorithms showed nominally no significant difference in terms of IRMSE, with a mean of 0.026 for SSD, 0.086 for RC, and 0.054 for LMI (p=0.16). A significant difference was seen for sinusoid-distorted thoracic CT data, with mean lung IRMSE of 0.039 for SSD, 0.092 for RC, and 0.035 for LMI (p=0.02). Conclusion: Pulmonary deformations in a daily clinical setting may vary widely in nature due to factors ranging from anatomic variation to respiratory motion to image quality. The results of the present study show that the suitability of a particular algorithm for pulmonary image registration is deformation-dependent.
78 FR 9060 - Request for Nominations for Voting Members on Public Advisory Panels or Committees
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-07
... diagnostic assays, e.g., hepatologists; molecular biologists. Molecular and Clinical Genetics: 2, June 1, 2013. Individuals with training in inborn errors of metabolism, biochemical and/or molecular genetics, population genetics, epidemiology and related statistical training, and clinical molecular genetics testing (e.g...
Multiplicity Control in Structural Equation Modeling: Incorporating Parameter Dependencies
ERIC Educational Resources Information Center
Smith, Carrie E.; Cribbie, Robert A.
2013-01-01
When structural equation modeling (SEM) analyses are conducted, significance tests for all important model relationships (parameters including factor loadings, covariances, etc.) are typically conducted at a specified nominal Type I error rate ([alpha]). Despite the fact that many significance tests are often conducted in SEM, rarely is…
NASA Astrophysics Data System (ADS)
Plazas, A. A.; Shapiro, C.; Kannawadi, A.; Mandelbaum, R.; Rhodes, J.; Smith, R.
2016-10-01
Weak gravitational lensing (WL) is one of the most powerful techniques to learn about the dark sector of the universe. To extract the WL signal from astronomical observations, galaxy shapes must be measured and corrected for the point-spread function (PSF) of the imaging system with extreme accuracy. Future WL missions—such as NASA's Wide-Field Infrared Survey Telescope (WFIRST)—will use a family of hybrid near-infrared complementary metal-oxide-semiconductor detectors (HAWAII-4RG) that are untested for accurate WL measurements. Like all image sensors, these devices are subject to conversion gain nonlinearities (voltage response to collected photo-charge) that bias the shape and size of bright objects such as reference stars that are used in PSF determination. We study this type of detector nonlinearity (NL) and show how to derive requirements on it from WFIRST PSF size and ellipticity requirements. We simulate the PSF optical profiles expected for WFIRST and measure the fractional error in the PSF size (ΔR/R) and the absolute error in the PSF ellipticity (Δe) as a function of star magnitude and the NL model. For our nominal NL model (a quadratic correction), we find that, uncalibrated, NL can induce errors of ΔR/R = 1 × 10^-2 and Δe_2 = 1.75 × 10^-3 in the H158 bandpass for the brightest unsaturated stars in WFIRST. In addition, our simulations show that to limit the bias of ΔR/R and Δe in the H158 band to ~10% of the estimated WFIRST error budget, the quadratic NL model parameter β must be calibrated to ~1% and ~2.4%, respectively. We present a fitting formula that can be used to estimate WFIRST detector NL requirements once a true PSF error budget is established.
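The quadratic NL model named above is easy to illustrate: applying Q_meas = Q - βQ² to a bright star image suppresses the peak more than the wings, which biases a second-moment size estimate. The β values and the Gaussian PSF below are toy assumptions, not WFIRST numbers.

```python
# Effect of a quadratic conversion-gain nonlinearity on measured PSF size.
import numpy as np

def gaussian_psf(n=64, sigma=3.0, flux=5e4):
    y, x = np.mgrid[:n, :n] - n / 2.0
    img = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return flux * img / img.sum()          # total signal in electrons (toy)

def apply_quadratic_nl(img, beta):
    return img - beta * img**2             # brighter pixels suppressed more

def second_moment_size(img):
    y, x = np.mgrid[:img.shape[0], :img.shape[1]].astype(float)
    tot = img.sum()
    cx, cy = (x * img).sum() / tot, (y * img).sum() / tot
    r2 = ((x - cx)**2 + (y - cy)**2) * img
    return np.sqrt(r2.sum() / tot)

psf = gaussian_psf()
for beta in (0.0, 1e-6, 1e-5):             # per-electron quadratic coefficient (toy)
    R = second_moment_size(apply_quadratic_nl(psf, beta))
    print(beta, R)                         # size grows as the bright core dims
```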
Development of an errorable car-following driver model
NASA Astrophysics Data System (ADS)
Yang, H.-H.; Peng, H.
2010-06-01
An errorable car-following driver model is presented in this paper. An errorable driver model is one that emulates a human driver's functions and can generate both nominal (error-free) and devious (with error) behaviours. This model was developed for the evaluation and design of active safety systems. The car-following data used for developing and validating the model were obtained from a large-scale naturalistic driving database. The stochastic car-following behaviour was first analysed and modelled as a random process. Three error-inducing behaviours were then introduced. First, human perceptual limitation was studied and implemented. Distraction due to non-driving tasks was then identified based on statistical analysis of the driving data. Finally, the time delay of human drivers was estimated through a recursive least-squares identification process. By including these three error-inducing behaviours, rear-end collisions with the lead vehicle could occur. The simulated crash rate was found to be similar to, but somewhat higher than, that reported in traffic statistics.
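A toy sketch of the three error-inducing behaviours described above (perceptual limitation, distraction, reaction delay) grafted onto a simple car-following law; all gains, thresholds, and rates are illustrative, not the identified model parameters.

```python
# Errorable car-following loop: dead zone + random distraction + fixed delay.
import numpy as np

rng = np.random.default_rng(1)
dt, T = 0.1, 60.0
steps = int(T / dt)
delay_steps = int(0.8 / dt)          # assumed 0.8 s reaction delay

lead_v = 25 + 2 * np.sin(np.linspace(0, 6, steps))  # lead vehicle speed trace
x_lead = np.cumsum(lead_v) * dt
x, v = -30.0, 25.0                   # follower position and speed
gap_history = []
cmd_history = [0.0] * delay_steps    # pre-filled so delayed lookup is valid

for k in range(steps):
    gap = x_lead[k] - x
    closing = lead_v[k] - v
    distracted = rng.random() < 0.02                 # brief random distraction
    if distracted or abs(closing) < 0.5:             # perceptual dead zone
        a_cmd = cmd_history[-1]                      # hold previous command
    else:
        a_cmd = 0.3 * (gap - 30.0) + 0.8 * closing   # nominal following law
    cmd_history.append(a_cmd)
    a = np.clip(cmd_history[-1 - delay_steps], -6, 3)  # delayed, limited accel
    v += a * dt
    x += v * dt
    gap_history.append(gap)

print(min(gap_history))  # gap <= 0 would indicate a rear-end collision
```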
Zhao, Yi-Jiao; Xiong, Yu-Xue; Wang, Yong
2017-01-01
In this study, the practical accuracy (PA) of optical facial scanners for facial deformity patients in the oral clinic was evaluated. Ten patients with a variety of facial deformities from the oral clinic were included in the study. For each patient, a three-dimensional (3D) face model was acquired via a high-accuracy industrial "line-laser" scanner (Faro) as the reference model, and two test models were obtained via a "stereophotography" scanner (3dMD) and a "structured light" facial scanner (FaceScan) separately. Registration based on the iterative closest point (ICP) algorithm was executed to overlap the test models onto the reference models, and "3D error," a new measurement indicator calculated by reverse engineering software (Geomagic Studio), was used to evaluate the 3D global and partial (upper, middle, and lower parts of face) PA of each facial scanner. The respective 3D accuracies of the stereophotography and structured light facial scanners for facial deformities were 0.58±0.11 mm and 0.57±0.07 mm. The 3D accuracy of different facial partitions was inconsistent; the middle face had the best performance. Although the PA of the two facial scanners was lower than their nominal accuracy (NA), they all met the requirement for oral clinic use.
Transmitted wavefront error of a volume phase holographic grating at cryogenic temperature.
Lee, David; Taylor, Gordon D; Baillie, Thomas E C; Montgomery, David
2012-06-01
This paper describes the results of transmitted wavefront error (WFE) measurements on a volume phase holographic (VPH) grating operating at a temperature of 120 K. The VPH grating was mounted in a cryogenically compatible optical mount and tested in situ in a cryostat. The nominal root mean square (RMS) wavefront error at room temperature was 19 nm measured over a 50 mm diameter test aperture. The WFE remained at 18 nm RMS when the grating was cooled. This important result demonstrates that excellent WFE performance can be obtained with cooled VPH gratings, as required for use in future cryogenic infrared astronomical spectrometers planned for the European Extremely Large Telescope.
Linear-quadratic-Gaussian synthesis with reduced parameter sensitivity
NASA Technical Reports Server (NTRS)
Lin, J. Y.; Mingori, D. L.
1992-01-01
We present a method for improving the tolerance of a conventional LQG controller to parameter errors in the plant model. The improvement is achieved by introducing additional terms reflecting the structure of the parameter errors into the LQR cost function, and also the process and measurement noise models. Adjusting the sizes of these additional terms permits a trade-off between robustness and nominal performance. Manipulation of some of the additional terms leads to high gain controllers while other terms lead to low gain controllers. Conditions are developed under which the high-gain approach asymptotically recovers the robustness of the corresponding full-state feedback design, and the low-gain approach makes the closed-loop poles asymptotically insensitive to parameter errors.
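A minimal sketch of the cost-augmentation idea, assuming a toy plant: an extra state-weighting term shaped by an assumed parameter-error structure E is added to the LQR cost, and increasing its scale rho pushes the design toward the high-gain, more robust end of the trade-off.

```python
# LQR with an augmented cost term reflecting an assumed parameter-error
# structure; plant, E, and rho values are illustrative.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q0 = np.eye(2)                 # nominal state weight
R = np.array([[1.0]])
E = np.array([[1.0, 0.0]])     # structure of the anticipated parameter error

def lqr_gain(rho):
    Q = Q0 + rho * (E.T @ E)   # additional term for parameter-error robustness
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)   # K = R^-1 B^T P

for rho in (0.0, 10.0, 100.0):  # larger rho -> higher-gain design
    K = lqr_gain(rho)
    print(rho, K, np.linalg.eigvals(A - B @ K))
```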
2017-07-01
any of the listed reference frequencies may be used provided the requirements for compensation rate of change are satisfied. If the reference...for in present discriminator systems when the nominal response rating of the channels is employed and a reference frequency is recorded with the...Telemetry Standards, RCC Standard 106-17, Chapter 3: Frequency Division Multiplexing Telemetry Standards, July 2017.
2013-04-22
Following for Unmanned Aerial Vehicles Using L1 Adaptive Augmentation of Commercial Autopilots, Journal of Guidance, Control, and Dynamics, (3 2010): 0...Naira Hovakimyan. L1 Adaptive Controller for MIMO System with Unmatched Uncertainties Using Modified Piecewise Constant Adaptation Law, IEEE 51st...This L1 adaptive control architecture uses data from the reference model.
The evaluation of the OSGLR algorithm for restructurable controls
NASA Technical Reports Server (NTRS)
Bonnice, W. F.; Wagner, E.; Hall, S. R.; Motyka, P.
1986-01-01
The detection and isolation of commercial aircraft control surface and actuator failures using the orthogonal series generalized likelihood ratio (OSGLR) test was evaluated. The OSGLR algorithm was chosen as the most promising algorithm based on a preliminary evaluation of three failure detection and isolation (FDI) algorithms (the detection filter, the generalized likelihood ratio test, and the OSGLR test) and a survey of the literature. One difficulty of analytic FDI techniques, and the OSGLR algorithm in particular, is their sensitivity to modeling errors. Therefore, methods of improving the robustness of the algorithm were examined, with the incorporation of age-weighting into the algorithm being the most effective approach, significantly reducing the sensitivity of the algorithm to modeling errors. The steady-state implementation of the algorithm based on a single cruise linear model was evaluated using a nonlinear simulation of a C-130 aircraft. A number of off-nominal no-failure flight conditions, including maneuvers, nonzero flap deflections, different turbulence levels, and steady winds, were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope by scheduling the linear models used by the algorithm on dynamic pressure and flap deflection was also considered. Since simply scheduling the linear models over the entire flight envelope is unlikely to be adequate, scheduling of the steady-state implementation of the algorithm was briefly investigated.
Spacecraft Thermal and Optical Modeling Impacts on Estimation of the GRAIL Lunar Gravity Field
NASA Technical Reports Server (NTRS)
Fahnestock, Eugene G.; Park, Ryan S.; Yuan, Dah-Ning; Konopliv, Alex S.
2012-01-01
We summarize work performed involving thermo-optical modeling of the two Gravity Recovery And Interior Laboratory (GRAIL) spacecraft. We derived several reconciled spacecraft thermo-optical models having varying detail. We used the simplest in calculating SRP acceleration, and used the most detailed to calculate acceleration due to thermal re-radiation. For the latter, we used both the output of pre-launch finite-element-based thermal simulations and downlinked temperature sensor telemetry. The estimation process to recover the lunar gravity field utilizes both a nominal thermal re-radiation acceleration history and an a priori error model derived from that history plus an off-nominal history, which bounds parameter uncertainties as informed by sensitivity studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ofek, Y.
1994-05-01
This work describes a new technique, based on exchanging control signals between neighboring nodes, for constructing a stable and fault-tolerant global clock in a distributed system with an arbitrary topology. It is shown that it is possible to construct a global clock reference with a time step that is much smaller than the propagation delay over the network's links. The synchronization algorithm ensures that the global clock 'tick' has a stable periodicity, and therefore, it is possible to tolerate failures of links and clocks that operate faster and/or slower than nominally specified, as well as hard failures. The approach taken in this work is to generate a global clock from the ensemble of the local transmission clocks and not to directly synchronize these high-speed clocks. The steady-state algorithm, which generates the global clock, is executed in hardware by the network interface of each node. At the network interface, it is possible to measure accurately the propagation delay between neighboring nodes with a small error or uncertainty and thereby to achieve global synchronization that is proportional to these error measurements. It is shown that the local clock drift (or rate uncertainty) has only a secondary effect on the maximum global clock rate. The synchronization algorithm can tolerate any physical failure.
Brooks, Larry M; Kuhlman, Benjamin J; McKesson, Doug W; McCloskey, Leo
2013-01-01
The poor interoperability of anthocyanin glycoside measurements by two pH differential methods is documented. Adams-Harbertson, which was proposed for commercial winemaking, was compared to AOAC Official Method 2005.02 for wine. California bottled wines (Pinot Noir, Merlot, and Cabernet Sauvignon) were assayed in a collaborative study (n=105), which found the mean precision of Adams-Harbertson winery versus reference measurements to be 77 +/- 20%. Maximum error is expected to be 48% for Pinot Noir, 42% for Merlot, and 34% for Cabernet Sauvignon from reproducibility RSD. The range of measurements was actually 30 to 91% for Pinot Noir. An interoperability study (n=30) found that Adams-Harbertson produces measurements that are nominally 150% of the AOAC pH differential method. The large analytical chemistry differences are: the AOAC method uses the Beer-Lambert equation and measures absorbance at pH 1.0 and 4.5, as proposed a priori by Fuleki and Francis; whereas Adams-Harbertson uses a "universal" standard curve and measures absorbance ad hoc at pH 1.8 and 4.9 to reduce the effects of so-called co-pigmentation. Errors relative to AOAC are produced by the Adams-Harbertson standard curve over Beer-Lambert and pH 1.8 over pH 1.0. The study recommends using AOAC Official Method 2005.02 for analysis of wine anthocyanin glycosides.
The Space-Wise Global Gravity Model from GOCE Nominal Mission Data
NASA Astrophysics Data System (ADS)
Gatti, A.; Migliaccio, F.; Reguzzoni, M.; Sampietro, D.; Sanso, F.
2011-12-01
In the framework of the GOCE data analysis, the space-wise approach implements a multi-step collocation solution for the estimation of a global geopotential model in terms of spherical harmonic coefficients and their error covariance matrix. The main idea is to use the collocation technique to exploit the spatial correlation of the gravity field in the GOCE data reduction. In particular the method consists of an along-track Wiener filter, a collocation gridding at satellite altitude and a spherical harmonic analysis by integration. All these steps are iterated, also to account for the rotation between local orbital and gradiometer reference frame. Error covariances are computed by Montecarlo simulations. The first release of the space-wise approach was presented at the ESA Living Planet Symposium in July 2010. This model was based on only two months of GOCE data and partially contained a priori information coming from other existing gravity models, especially at low degrees and low orders. A second release was distributed after the 4th International GOCE User Workshop in May 2011. In this solution, based on eight months of GOCE data, all the dependencies from external gravity information were removed thus giving rise to a GOCE-only space-wise model. However this model showed an over-regularization at the highest degrees of the spherical harmonic expansion due to the combination technique of intermediate solutions (based on about two months of data). In this work a new space-wise solution is presented. It is based on all nominal mission data from November 2009 to mid April 2011, and its main novelty is that the intermediate solutions are now computed in such a way to avoid over-regularization in the final solution. Beyond the spherical harmonic coefficients of the global model and their error covariance matrix, the space-wise approach is able to deliver as by-products a set of spherical grids of potential and of its second derivatives at mean satellite altitude. These grids have an information content that is very similar to the original along-orbit data, but they are much easier to handle. In addition they are estimated by local least-squares collocation and therefore, although computed by a unique global covariance function, they could yield more information at local level than the spherical harmonic coefficients of the global model. For this reason these grids seem to be useful for local geophysical investigations. The estimated grids with their estimated errors are presented in this work together with proposals on possible future improvements. A test to compare the different information contents of the along-orbit data, the gridded data and the spherical harmonic coefficients is also shown.
The impact of multiple endpoint dependency on Q and I² in meta-analysis.
Thompson, Christopher Glen; Becker, Betsy Jane
2014-09-01
A common assumption in meta-analysis is that effect sizes are independent. When correlated effect sizes are analyzed using traditional univariate techniques, this assumption is violated. This research assesses the impact of dependence arising from treatment-control studies with multiple endpoints on the homogeneity measures Q and I² in scenarios using the unbiased standardized-mean-difference effect size. Univariate and multivariate meta-analysis methods are examined. Conditions included different overall outcome effects, study sample sizes, numbers of studies, between-outcomes correlations, dependency structures, and ways of computing the correlation. The univariate approach used typical fixed-effects analyses whereas the multivariate approach used generalized least-squares (GLS) estimates of a fixed-effects model, weighted by the inverse variance-covariance matrix. Increased dependence among effect sizes led to increased Type I error rates from univariate models. When effect sizes were strongly dependent, error rates were drastically higher than nominal levels regardless of study sample size and number of studies. In contrast, using GLS estimation to account for multiple-endpoint dependency maintained error rates within nominal levels. Conversely, mean I² values were not greatly affected by increased amounts of dependency. Last, we point out that the between-outcomes correlation should be estimated as a pooled within-groups correlation rather than using a full-sample estimator that does not consider treatment/control group membership. Copyright © 2014 John Wiley & Sons, Ltd.
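A minimal sketch of the GLS fixed-effects estimate, assuming toy data: effect sizes from the same study share a within-study correlation, the covariance matrix is built block-diagonally, and the pooled mean is weighted by its inverse.

```python
# GLS fixed-effects pooling of correlated multiple-endpoint effect sizes.
import numpy as np

d = np.array([0.40, 0.55, 0.30, 0.45])   # effect sizes (2 studies x 2 endpoints)
v = np.array([0.02, 0.02, 0.03, 0.03])   # sampling variances
rho = 0.6                                 # pooled within-groups correlation (toy)

# Block-diagonal covariance: endpoints within a study are correlated.
V = np.diag(v)
for i, j in [(0, 1), (2, 3)]:
    V[i, j] = V[j, i] = rho * np.sqrt(v[i] * v[j])

X = np.ones((4, 1))                       # common-mean model
W = np.linalg.inv(V)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ d)
se = np.sqrt(np.linalg.inv(X.T @ W @ X))[0, 0]
print(beta[0], se)                        # pooled estimate and its SE
```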
Hwang, Jae Joon; Kim, Kee-Deog; Park, Hyok; Park, Chang Seo; Jeong, Ho-Gul
2014-01-01
Superimposition has been used as a method to evaluate changes after orthodontic or orthopedic treatment in the dental field. With the introduction of cone beam CT (CBCT), evaluating three-dimensional changes after treatment became possible by superimposition. Four-point plane orientation is one of the simplest ways to achieve superimposition of 3D images. To find factors influencing the superimposition error of cephalometric landmarks under the four-point plane orientation method, and to evaluate the reproducibility of cephalometric landmarks for analyzing superimposition error, 20 patients were analyzed who had normal skeletal and occlusal relationships and underwent CBCT for diagnosis of temporomandibular disorder. The nasion, sella turcica, basion, and the midpoint between the left and right most posterior points of the lesser wing of the sphenoid bone were used to define a three-dimensional (3D) anatomical reference coordinate system. Another 15 reference cephalometric points were also determined three times in the same image. The reorientation error of each landmark could be explained substantially (23%) by a linear regression model consisting of three factors describing the position of each landmark relative to the reference axes and the locating error. The four-point plane orientation system may produce an amount of reorientation error that varies according to the perpendicular distance between the landmark and the x-axis; the reorientation error also increases as the locating error and the shift of the reference axes viewed from each landmark increase. Therefore, in order to reduce the reorientation error, the accuracy of all landmarks, including the reference points, is important. Construction of the regression model using reference points of greater precision is required for the clinical application of this model.
ERIC Educational Resources Information Center
Soh, Kaycheng
2013-01-01
Discrepancies between the nominal and attained indicator weights misinform rank consumers as to the relative importance of the indicators. This may lead to unwarranted institutional judgements and misdirected actions, causing resources being wasted unnecessarily. As a follow-up to two earlier studies, data from the Academic Ranking of World…
Assessment of Person Fit Using Resampling-Based Approaches
ERIC Educational Resources Information Center
Sinharay, Sandip
2016-01-01
De la Torre and Deng suggested a resampling-based approach for person-fit assessment (PFA). The approach involves the use of the [math equation unavailable] statistic, a corrected expected a posteriori estimate of the examinee ability, and the Monte Carlo (MC) resampling method. The Type I error rate of the approach was closer to the nominal level…
NASA Astrophysics Data System (ADS)
Zhu, Jing; Wang, Xingshu; Wang, Jun; Dai, Dongkai; Xiong, Hao
2016-10-01
Previous studies have shown that the attitude error in a single-axis rotation INS/GPS integrated system tracks the high-frequency component of the deflections of the vertical (DOV) with a fixed delay and tracking error. This paper analyses the influence of the nominal process noise covariance matrix Q on the tracking error as well as the response delay, and proposes a Q-adjusting technique to obtain an attitude error that tracks the DOV better. Simulation results show that different settings of Q lead to different response delays and tracking errors; there exists an optimal Q which leads to a minimum tracking error and a comparatively short response delay; and for systems with different accuracies, different Q-adjusting strategies should be adopted. In this way, the DOV estimation accuracy obtained by using the attitude error as the observation can be improved. According to the simulation results, the DOV estimation accuracy after using the Q-adjusting technique is improved by approximately 23% and 33%, respectively, compared to that of the Earth model EGM2008 and the direct attitude difference method.
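The trade-off can be illustrated with a scalar Kalman filter tracking a slowly varying signal, a stand-in for the DOV-driven attitude error: the process-noise setting q controls the lag/noise balance, and some intermediate q minimizes the RMS tracking error. All numbers are illustrative.

```python
# Scalar Kalman filter: how the process-noise setting q trades lag vs noise.
import numpy as np

def track(q, r=0.04, n=2000):
    t = np.arange(n)
    truth = np.sin(2 * np.pi * t / 500)  # slowly varying "deflection" signal
    z = truth + np.random.default_rng(2).normal(0, np.sqrt(r), n)
    x, p, est = 0.0, 1.0, np.empty(n)
    for k in range(n):
        p += q                            # predict (random-walk state model)
        kgain = p / (p + r)               # update
        x += kgain * (z[k] - x)
        p *= (1 - kgain)
        est[k] = x
    # RMS tracking error over the second half (after transients settle)
    return np.sqrt(np.mean((est[n // 2:] - truth[n // 2:])**2))

for q in (1e-7, 1e-3, 1e-1):  # too small -> lag; too large -> noisy
    print(q, track(q))
```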
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tanyi, James A.; Nitzling, Kevin D.; Lodwick, Camille J.
2011-02-15
Purpose: Assessment of the fundamental dosimetric characteristics of a novel gated fiber-optic-coupled dosimetry system for clinical electron beam irradiation. Methods: The response of the fiber-optic-coupled dosimetry system to clinical electron beams, with a nominal energy range of 6-20 MeV, was evaluated for reproducibility, linearity, and output dependence on dose rate, dose per pulse, energy, and field size. The validity of the detector system's response was assessed in correspondence with a reference ionization chamber. Results: The fiber-optic-coupled dosimetry system showed little dependence on dose rate variations (coefficient of variation ±0.37%) and dose per pulse changes (within 0.54% of reference chamber measurements). The reproducibility of the system was ±0.55% for dose fractions of ~100 cGy. Energy dependence was within ±1.67% relative to the reference ionization chamber for the 6-20 MeV nominal electron beam energy range. The system exhibited an excellent linear response (R² = 1.000) compared to the reference ionization chamber in the dose range of 1-1000 cGy. The output factors were within ±0.54% of the corresponding reference ionization chamber measurements. Conclusions: The dosimetric properties of the gated fiber-optic-coupled dosimetry system compare favorably to the corresponding reference ionization chamber measurements and show considerable potential for applications in clinical electron beam radiotherapy.
Design and tolerance analysis of a transmission sphere by interferometer model
NASA Astrophysics Data System (ADS)
Peng, Wei-Jei; Ho, Cheng-Fong; Lin, Wen-Lung; Yu, Zong-Ru; Huang, Chien-Yao; Hsu, Wei-Yao
2015-09-01
The design of a 6-in, f/2.2 transmission sphere for Fizeau interferometry is presented in this paper. To predict the actual performance during the design phase, we build an interferometer model combined with tolerance analysis in Zemax. Evaluating focus imaging alone is not sufficient for a double-pass optical system; thus, we study an interferometer model that includes the system error and the wavefronts reflected from the reference surface and the tested surface. Firstly, we generate a deformation map of the tested surface. Using multiple configurations in Zemax, we obtain the test wavefront and the reference wavefront reflected from the tested surface and the reference surface of the transmission sphere, respectively. According to the theory of interferometry, we subtract the two wavefronts to acquire the phase of the tested surface. Zernike polynomials are applied to transfer the map from phase to sag and to remove piston, tilt, and power. The restored map is the same as the original map because no system error exists. Secondly, perturbed tolerances including lens fabrication and assembly are considered. A system error occurs because the test and reference beams are no longer perfectly common-path, and the restored map is inaccurate when the system error is added. Although the system error can be subtracted by calibration, it should still be controlled within a small range to avoid calibration error. Generally, the reference wavefront error, including the system error and the irregularity of the reference surface of a 6-in transmission sphere, is measured within peak-to-valley (PV) 0.1 λ (λ=0.6328 um), which is not easy to achieve. Consequently, it is necessary to predict the value of the system error before manufacture. Finally, a prototype is developed and tested against a reference surface with PV 0.1 λ irregularity.
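The piston/tilt/power removal step can be sketched as a least-squares fit of the first Zernike-like terms over the pupil, subtracted from the restored map. The synthetic surface below stands in for the subtracted test-minus-reference wavefront; all values are illustrative.

```python
# Least-squares removal of piston, tilt, and power from a surface map.
import numpy as np

def remove_piston_tilt_power(sag, mask):
    n = sag.shape[0]
    y, x = (np.mgrid[:n, :n] - n / 2.0) / (n / 2.0)  # normalized pupil coords
    r2 = x**2 + y**2
    cols = [np.ones_like(x), x, y, 2 * r2 - 1]       # piston, tilts, power
    A = np.column_stack([c[mask] for c in cols])
    coef, *_ = np.linalg.lstsq(A, sag[mask], rcond=None)
    fit = sum(w * c for w, c in zip(coef, cols))
    out = sag - fit
    out[~mask] = 0.0
    return out

n = 128
y, x = (np.mgrid[:n, :n] - n / 2.0) / (n / 2.0)
mask = x**2 + y**2 <= 1.0
# Synthetic map: piston + tilt + power + a small astigmatism-like term.
sag = 0.2 + 0.1 * x + 0.05 * (2 * (x**2 + y**2) - 1) + 0.01 * (x**2 - y**2)
residual = remove_piston_tilt_power(sag, mask)
print(np.ptp(residual[mask]))  # only the astigmatism-like term should remain
```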
Effect of patient setup errors on simultaneously integrated boost head and neck IMRT treatment plans
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siebers, Jeffrey V.; Keall, Paul J.; Wu Qiuwen
2005-10-01
Purpose: The purpose of this study is to determine the dose delivery errors that could result from random and systematic setup errors for head-and-neck patients treated using the simultaneous integrated boost (SIB)-intensity-modulated radiation therapy (IMRT) technique. Methods and Materials: Twenty-four patients who participated in an intramural Phase I/II parotid-sparing IMRT dose-escalation protocol using the SIB treatment technique had their dose distributions reevaluated to assess the impact of random and systematic setup errors. The dosimetric effect of random setup error was simulated by convolving the two-dimensional fluence distribution of each beam with the random setup error probability density distribution. Random setup errors of σ = 1, 3, and 5 mm were simulated. Systematic setup errors were simulated by randomly shifting the patient isocenter along each of the three Cartesian axes, with each shift selected from a normal distribution. Systematic setup error distributions with Σ = 1.5 and 3.0 mm along each axis were simulated. Combined systematic and random setup errors were simulated for Σ = σ = 1.5 and 3.0 mm along each axis. For each dose calculation, the gross tumor volume (GTV) dose received by 98% of the volume (D98), clinical target volume (CTV) D90, nodes D90, cord D2, parotid D50, and parotid mean dose were evaluated with respect to the plan used for treatment, both for the structure dose and for an effective planning target volume (PTV) with a 3-mm margin. Results: SIB-IMRT head-and-neck treatment plans were found to be less sensitive to random setup errors than to systematic setup errors. For random-only errors, dose errors exceeded 3% only when the random setup error σ exceeded 3 mm. Simulated systematic setup errors with Σ = 1.5 mm resulted in approximately 10% of plans having more than a 3% dose error, whereas Σ = 3.0 mm resulted in half of the plans having more than a 3% dose error and 28% having a 5% dose error. Combined random and systematic errors with Σ = σ = 3.0 mm resulted in more than 50% of plans having at least a 3% dose error and 38% of plans having at least a 5% dose error. Evaluation with respect to a 3-mm expanded PTV reduced the observed dose deviations greater than 5% for the Σ = σ = 3.0 mm simulations to 5.4% of the plans simulated. Conclusions: Head-and-neck SIB-IMRT dosimetric accuracy would benefit from methods to reduce patient systematic setup errors. When GTV, CTV, or nodal volumes are used for dose evaluation, plans simulated including the effects of random and systematic errors deviate substantially from the nominal plan. The use of PTVs for dose evaluation in the nominal plan improves agreement with evaluated GTV, CTV, and nodal dose values under simulated setup errors. PTV concepts should be used for SIB-IMRT head-and-neck squamous cell carcinoma patients, although the size of the margins may be less than those used with three-dimensional conformal radiation therapy.
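The random-error part of the method is essentially a blurring operation, sketched below with an idealized open-field fluence: convolving with a Gaussian of width σ widens the penumbra, which is the dosimetric signature of random setup error. Pixel size and σ values are illustrative.

```python
# Random setup error as a convolution of the 2D fluence with a Gaussian PDF.
import numpy as np
from scipy.ndimage import gaussian_filter

pixel_mm = 1.0
fluence = np.zeros((101, 101))
fluence[30:70, 30:70] = 1.0          # idealized open-field fluence

for sigma_mm in (1.0, 3.0, 5.0):     # random setup error sigma
    blurred = gaussian_filter(fluence, sigma=sigma_mm / pixel_mm)
    # Count pixels in the 20-80% penumbra region: it grows with sigma.
    print(sigma_mm, (blurred > 0.2).sum() - (blurred > 0.8).sum())
```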
How accurate are quotations and references in medical journals?
de Lacey, G; Record, C; Wade, J
1985-09-28
The accuracy of quotations and references in six medical journals published during January 1984 was assessed. The original author was misquoted in 15% of all references, and most of the errors would have misled readers. Errors in citation of references occurred in 24%, of which 8% were major errors--that is, they prevented immediate identification of the source of the reference. Inaccurate quotations and citations are displeasing for the original author, misleading for the reader, and mean that untruths become "accepted fact." Some suggestions for reducing these high levels of inaccuracy are that papers scheduled for publication with errors of citation should be returned to the author and checked completely and a permanent column specifically for misquotations could be inserted into the journal.
Study on the calibration and optimization of double theodolites baseline
NASA Astrophysics Data System (ADS)
Ma, Jing-yi; Ni, Jin-ping; Wu, Zhi-chao
2018-01-01
Because the baseline of a double-theodolite measurement system serves as the scale benchmark of the measurement system and affects its accuracy, this paper puts forward a method for calibrating and optimizing the double-theodolite baseline. Double theodolites measure a reference ruler of known length, and the baseline is then solved from these measurements. Based on the error propagation law, the analyses show that the baseline error function is an important index of system accuracy, and that the position, posture, and other properties of the reference ruler affect the baseline error. An optimization model is established with the baseline error function as the objective function, and the position and posture of the reference ruler are optimized. The simulation results show that the height of the reference ruler has no effect on the baseline error; the effect of posture is not uniform; and the baseline error is smallest when the reference ruler is placed at x=500mm and y=1000mm in the measurement space. The experimental results are consistent with the theoretical analyses in the measurement space. This study of the placement of the reference ruler provides a reference for improving the accuracy of double-theodolite measurement systems.
Correspondence between Grammatical Categories and Grammatical Functions in Chinese.
ERIC Educational Resources Information Center
Tan, Fu
1993-01-01
A correspondence is shown between grammatical categories and grammatical functions in Chinese. Some syntactic properties distinguish finite verbs from nonfinite verbs, nominals from other categories, and verbs from other categories. (Contains seven references.) (LB)
Metonymy and Reference-Point Errors in Novice Programming
ERIC Educational Resources Information Center
Miller, Craig S.
2014-01-01
When learning to program, students often mistakenly refer to an element that is structurally related to the element that they intend to reference. For example, they may indicate the attribute of an object when their intention is to reference the whole object. This paper examines these reference-point errors through the context of metonymy.…
NASA Astrophysics Data System (ADS)
Derin, Y.; Anagnostou, E. N.; Anagnostou, M.; Kalogiros, J. A.; Casella, D.; Marra, A. C.; Panegrossi, G.; Sanò, P.
2017-12-01
Difficulties in representing high rainfall variability over mountainous areas using ground-based sensors make satellite remote sensing techniques attractive for hydrologic studies over these regions. Even though satellite-based rainfall measurements are quasi-global and available at high spatial resolution, these products have uncertainties that necessitate the use of error characterization and correction procedures based upon more accurate in situ rainfall measurements. Such measurements can be obtained from field campaigns equipped with research-quality sensors such as locally deployed weather radar and in situ weather stations. This study uses high-quality, high-resolution rainfall estimates derived from dual-polarization X-band radar (XPOL) observations from three field experiments in the Mid-Atlantic US East Coast (NASA IPHEX experiment), the Olympic Peninsula of Washington State (NASA OLYMPEX experiment), and the Mediterranean to characterize the error characteristics of multiple passive microwave (PMW) sensor retrievals. The study first conducts an independent error analysis of the XPOL radar reference rainfall fields against in situ rain gauge and disdrometer observations made available by the field experiments. The study then evaluates different PMW precipitation products using the XPOL datasets (GR) over the three aforementioned complex-terrain study areas. We extracted PMW/GR rainfall matchups based on a matching methodology that identifies GR volume scans coincident with PMW field-of-view sampling volumes, and scaled GR parameters to the satellite products' nominal spatial resolution. The following PMW precipitation retrieval algorithms are evaluated: the NASA Goddard PROFiling algorithm (GPROF), standard and climatology-based products (V3, 4, and 5) from four PMW sensors (SSMIS, MHS, GMI, and AMSR2), and the precipitation products based on the Cloud Dynamics and Radiation Database (CDRD) algorithm for SSMIS and the Passive microwave Neural network Precipitation Retrieval (PNPR) for AMSU/MHS, developed at ISAC-CNR within the EUMETSAT H-SAF. We will present error analysis results for the different PMW rainfall retrievals and discuss dependences on precipitation type, elevation, and precipitation microphysics (derived from XPOL).
Teaching concepts of clinical measurement variation to medical students.
Hodder, R A; Longfield, J N; Cruess, D F; Horton, J A
1982-09-01
An exercise in clinical epidemiology was developed for medical students to demonstrate the process and limitations of scientific measurement using models that simulate common clinical experiences. All scales of measurement (nominal, ordinal, and interval) were used to illustrate concepts of intra- and interobserver variation, systematic error, recording error, and procedural error. In a laboratory, students a) determined blood pressures on six videotaped subjects, b) graded the sugar content of unknown solutions from 0 to 4+ using Clinitest tablets, c) measured papules that simulated PPD reactions, d) measured heart and kidney size on X-rays, and e) described a model skin lesion (melanoma). Traditionally, measurement variation is taught in biostatistics or epidemiology courses using previously collected data. Use of these models enables students to produce their own data using measurements commonly employed by the clinician. The exercise provided material for a meaningful discussion of the implications of measurement error in clinical decision-making.
On the assessment of the added value of new predictive biomarkers.
Chen, Weijie; Samuelson, Frank W; Gallas, Brandon D; Kang, Le; Sahiner, Berkman; Petrick, Nicholas
2013-07-29
The surge in biomarker development calls for research on statistical evaluation methodology to rigorously assess emerging biomarkers and classification models. Recently, several authors reported the puzzling observation that, in assessing the added value of new biomarkers to existing ones in a logistic regression model, statistical significance of new predictor variables does not necessarily translate into a statistically significant increase in the area under the ROC curve (AUC). Vickers et al. concluded that this inconsistency is because AUC "has vastly inferior statistical properties," i.e., it is extremely conservative. This statement is based on simulations that misuse the DeLong et al. method. Our purpose is to provide a fair comparison of the likelihood ratio (LR) test and the Wald test versus diagnostic accuracy (AUC) tests. We present a test to compare ideal AUCs of nested linear discriminant functions via an F test. We compare it with the LR test and the Wald test for the logistic regression model. The null hypotheses of these three tests are equivalent; however, the F test is an exact test whereas the LR test and the Wald test are asymptotic tests. Our simulation shows that the F test has the nominal type I error even with a small sample size. Our results also indicate that the LR test and the Wald test have inflated type I errors when the sample size is small, while the type I error converges to the nominal value asymptotically with increasing sample size as expected. We further show that the DeLong et al. method tests a different hypothesis and has the nominal type I error when it is used within its designed scope. Finally, we summarize the pros and cons of all four methods we consider in this paper. We show that there is nothing inherently less powerful or disagreeable about ROC analysis for showing the usefulness of new biomarkers or characterizing the performance of classification models. Each statistical method for assessing biomarkers and classification models has its own strengths and weaknesses. Investigators need to choose methods based on the assessment purpose, the biomarker development phase at which the assessment is being performed, the available patient data, and the validity of assumptions behind the methodologies.
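The small-sample behaviour of the LR test can be checked with a short simulation in the spirit of the comparison above: a truly useless new biomarker is added to a logistic model, and the rejection rate under H0 is compared with the nominal 5% level. Sample sizes, distributions, and the use of statsmodels are illustrative assumptions.

```python
# Monte Carlo check of LR-test type I error for adding a useless biomarker.
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(3)

def lr_pvalue(n):
    x1 = rng.normal(size=n)            # established biomarker
    x2 = rng.normal(size=n)            # new biomarker, truly useless (H0)
    p = 1 / (1 + np.exp(-x1))
    y = rng.binomial(1, p)
    X0 = sm.add_constant(x1)
    X1 = sm.add_constant(np.column_stack([x1, x2]))
    try:
        ll0 = sm.Logit(y, X0).fit(disp=0).llf
        ll1 = sm.Logit(y, X1).fit(disp=0).llf
    except Exception:                  # e.g., perfect separation at small n
        return None
    return chi2.sf(2 * (ll1 - ll0), df=1)

for n in (20, 50, 200):
    ps = [lr_pvalue(n) for _ in range(2000)]
    ps = np.array([p for p in ps if p is not None])
    print(n, np.mean(ps < 0.05))       # should approach 0.05 as n grows
```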
A summary of the Planck constant determinations using the NRC Kibble balance
NASA Astrophysics Data System (ADS)
Wood, B. M.; Sanchez, C. A.; Green, R. G.; Liard, J. O.
2017-06-01
We present a summary of the Planck constant determinations using the NRC watt balance, now referred to as the NRC Kibble balance. The summary includes a reanalysis of the four determinations performed in late 2013, as well as three new determinations performed in 2016. We also present a number of improvements and modifications to the experiment resulting in lower noise and an improved uncertainty analysis, and we identify a previously unrecognized systematic error and quantify its correction. The seven determinations, using three different nominal masses and two different materials, are reanalysed in a manner consistent with that used by the CODATA Task Group on Fundamental Constants (TGFC), including a comprehensive assessment of correlations. The result is a Planck constant of 6.626 070 133(60) × 10^-34 J s and an inferred value of the Avogadro constant of 6.022 140 772(55) × 10^23 mol^-1. These fractional uncertainties of less than 10^-8 are the smallest published to date.
Optical Fiber Power Meter Comparison Between NIST and NIM.
Vayshenker, I; Livigni, D J; Li, X; Lehman, J H; Li, J; Xiong, L M; Zhang, Z X
2010-01-01
We describe the results of a comparison of reference standards between the National Institute of Standards and Technology (NIST-USA) and National Institute of Metrology (NIM-China). We report optical fiber-based power measurements at nominal wavelengths of 1310 nm and 1550 nm. We compare the laboratories' reference standards by means of a commercial optical power meter. Measurement results showed the largest difference to be less than 2.6 parts in 10^3, which is within the combined standard (k = 1) uncertainty for the laboratories' reference standards.
Quantitative determination and classification of energy drinks using near-infrared spectroscopy.
Rácz, Anita; Héberger, Károly; Fodor, Marietta
2016-09-01
Almost a hundred commercially available energy drink samples from Hungary, Slovakia, and Greece were collected for the quantitative determination of their caffeine and sugar content with FT-NIR spectroscopy and high-performance liquid chromatography (HPLC). Calibration models were built with partial least-squares regression (PLSR). An HPLC-UV method was used to measure the reference values for caffeine content, while sugar content was measured with the Schoorl method. Both the nominal sugar content (as indicated on the cans) and the measured sugar concentration were used as references. Although the Schoorl method has larger error and bias, appropriate models could be developed using both references. Validation of the models was based on sevenfold cross-validation and external validation. FT-NIR analysis is a good candidate to replace the HPLC-UV method because it is much cheaper than any chromatographic method and more time-efficient. The combination of FT-NIR with multidimensional chemometric techniques such as PLSR can be a good option for the detection of low caffeine concentrations in energy drinks. Moreover, three types of energy drinks, containing (i) taurine, (ii) arginine, or (iii) neither of these two components, were classified correctly using principal component analysis and linear discriminant analysis. Such classifications are important for the detection of adulterated samples and for quality control as well. In this case, more than a hundred samples were used for the evaluation. The classification was validated with cross-validation and several randomization tests (X-scrambling). Graphical abstract: the path of energy drinks from cans to appropriate chemometric models.
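A minimal PLSR calibration sketch in the spirit of this workflow, with the number of latent variables chosen by sevenfold cross-validation as in the paper; the spectra and reference values below are random stand-ins, not real FT-NIR or HPLC-UV data.

```python
# PLSR calibration with CV-selected number of latent variables.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.normal(size=(90, 700))               # 90 samples x 700 wavelengths
true_profile = rng.normal(size=700)
y = X @ true_profile * 0.01 + rng.normal(scale=0.1, size=90)  # "caffeine"

best = max(
    range(1, 11),
    key=lambda k: cross_val_score(
        PLSRegression(n_components=k), X, y, cv=7,
        scoring="neg_root_mean_squared_error").mean(),
)
model = PLSRegression(n_components=best).fit(X, y)
print(best, model.score(X, y))               # latent variables and R^2
```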
Orbital Signature Analyzer (OSA): A spacecraft health/safety monitoring and analysis tool
NASA Technical Reports Server (NTRS)
Weaver, Steven; Degeorges, Charles; Bush, Joy; Shendock, Robert; Mandl, Daniel
1993-01-01
Fixed or static limit sensing is employed in control centers to ensure that spacecraft parameters remain within a nominal range. However, many critical parameters, such as power system telemetry, are time-varying and, as such, their 'nominal' range is necessarily time-varying as well. Predicted data, manual limits checking, and widened limit-checking ranges are often employed in an attempt to monitor these parameters without generating excessive limits violations. Generating predicted data and manual limits checking are both resource intensive, while broadening limit ranges for time-varying parameters is clearly inadequate to detect all but catastrophic problems. OSA provides a low-cost solution by using analytically selected data as a reference upon which to base its limits. These limits are always defined relative to the time-varying reference data, rather than as fixed upper and lower limits. In effect, OSA provides individual limits tailored to each value throughout all the data. A side benefit of using relative limits is that they automatically adjust to new reference data. In addition, OSA provides a wealth of analytical by-products in its execution.
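A minimal sketch of relative limit sensing as described above, assuming illustrative reference data and tolerances: the limit band rides on the time-varying reference trace, so a small anomaly is flagged that a fixed band wide enough to contain the orbital variation would miss.

```python
# Relative limit sensing: limits defined as offsets around a reference trace.
import numpy as np

t = np.linspace(0, 2 * np.pi, 500)
reference = 28 + 4 * np.sin(t)              # expected bus voltage over an orbit
telemetry = reference + np.random.default_rng(5).normal(0, 0.2, t.size)
telemetry[100:110] -= 2.0                   # injected anomaly near the peak

tol = 1.0                                   # limit relative to the reference
violations = np.abs(telemetry - reference) > tol
print(np.flatnonzero(violations))           # flags only the injected dip

# A fixed band wide enough to contain the sinusoid (24 to 32 here) would
# miss this dip entirely, which is the failure mode relative limits avoid.
```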
2006-03-01
…included zero, there is insufficient evidence to indicate that the error mean is not zero. The Breusch-Pagan test was used to test the constant-variance assumption. The source also covers multicollinearity and testing of the other OLS assumptions, discusses the programming styles used by developers (Stamelos and others, 2003:733), and notes that Kemerer tested how models utilizing SLOC as an independent variable perform.
Lessons learned from the AIRS pre-flight radiometric calibration
NASA Astrophysics Data System (ADS)
Pagano, Thomas S.; Aumann, Hartmut H.; Weiler, Margie
2013-09-01
The Atmospheric Infrared Sounder (AIRS) instrument flies on the NASA Aqua satellite and measures the upwelling hyperspectral earth radiance in the spectral range of 3.7-15.4 μm with a nominal ground resolution at nadir of 13.5 km. The AIRS spectra are acquired using a temperature-controlled grating spectrometer and HgCdTe infrared linear arrays providing 2378 channels with a nominal spectral resolving power of approximately 1200. The AIRS pre-flight tests that impact the radiometric calibration include a full system radiometric response (linearity) test, a polarization response test, and a response vs. scan angle (RVS) test. We re-derive the AIRS instrument radiometric calibration coefficients from the pre-flight polarization measurements, the RVS tests, and the linearity tests, as well as a recent lunar roll test that allowed the AIRS to view the moon. The data and method for deriving the coefficients are discussed in detail and the resulting values compared amongst the different tests. Finally, we examine the residual errors in the reconstruction of the external calibrator blackbody radiances and the efficacy of a new radiometric uncertainty model. Results show the radiometric calibration of AIRS to be excellent, and the radiometric uncertainty model does a reasonable job of characterizing the errors.
Correcting Too Much or Too Little? The Performance of Three Chi-Square Corrections.
Foldnes, Njål; Olsson, Ulf Henning
2015-01-01
This simulation study investigates the performance of three test statistics, T1, T2, and T3, used to evaluate structural equation model fit under non-normal data conditions. T1 is the well-known mean-adjusted statistic of Satorra and Bentler. T2 is the mean-and-variance-adjusted statistic of Satterthwaite type, in which the degrees of freedom are manipulated. T3 is a recently proposed version of T2 that does not manipulate the degrees of freedom. Discrepancies between these statistics and their nominal chi-square distribution in terms of Type I and Type II errors are investigated. All statistics are shown to be sensitive to increasing kurtosis in the data, with Type I error rates often far off the nominal level. Under excess kurtosis, true models are generally over-rejected by T1 and under-rejected by T2 and T3, which have similar performance in all conditions. Under misspecification there is a loss of power with increasing kurtosis, especially for T2 and T3. The coefficient of variation of the nonzero eigenvalues of a certain matrix is shown to be a reliable indicator of the adequacy of these statistics.
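For reference, the standard forms of these adjustments can be written compactly. The block below is a hedged sketch using common SEM notation (T the uncorrected statistic, d its nominal degrees of freedom, U the residual weight matrix, Γ the asymptotic covariance matrix of the sample variances and covariances); the paper's exact notation may differ.

```latex
T_1 = \frac{d}{\operatorname{tr}(U\Gamma)}\,T ,
\qquad
T_2 = \frac{\operatorname{tr}(U\Gamma)}{\operatorname{tr}\!\bigl[(U\Gamma)^{2}\bigr]}\,T
\quad\text{referred to}\quad
d^{*} = \frac{\bigl[\operatorname{tr}(U\Gamma)\bigr]^{2}}{\operatorname{tr}\!\bigl[(U\Gamma)^{2}\bigr]}
```

T1 corrects only the mean of the statistic; T2 matches both mean and variance, which is why its reference degrees of freedom d* are no longer the nominal d.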
Children's comprehension skill and the understanding of nominal metaphors.
Seigneuric, Alix; Megherbi, Hakima; Bueno, Steve; Lebahar, Julie; Bianco, Maryse
2016-10-01
According to Levorato and Cacciari's global elaboration model, understanding figurative language is explained by the same processes and background knowledge that are required for literal language. In this study, we investigated the relation between children's comprehension skill and the ability to understand referential nominal metaphors. Two groups of poor versus good comprehenders (8- to 10-year-olds) matched for word reading and vocabulary skills were invited to identify the referent of nouns used metaphorically or literally in short texts. Compared with good comprehenders, performance of poor comprehenders showed a substantial decrease in the metaphoric condition. Moreover, their performance was strongly affected by the degree of semantic incongruence between the terms of the nominal metaphor. These findings are discussed in relation to several factors, in particular the ability to use contextual information and semantic processing.
NASA Technical Reports Server (NTRS)
1972-01-01
The Reference Design Document of the Preliminary Safety Analysis Report (PSAR) - Reactor System provides the basic design and operations data used in the nuclear safety analysis of the Reactor Power Module as applied to a Space Base program. A description of the power module systems, facilities, launch vehicle, and mission operations, as defined in NASA Phase A Space Base studies, is included. Each of two Zirconium Hydride Reactor Brayton power modules provides 50 kWe for the nominal 50-man Space Base. The INT-21 is the prime launch vehicle. Resupply to the 500 km orbit over the ten-year mission is provided by the Space Shuttle. At the end of the power module lifetime (nominally five years), a reactor disposal system is deployed for boost into a 990 km high-altitude (long decay time) earth orbit.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mouradian, E. M.
1983-12-31
Thermal analyses for the preliminary design phase of the receiver of the Carrizo Plains Solar Power Plant are presented. The sodium reference operating conditions (T_in = 610 °F, T_out = 1050 °F) have been considered. Included are: nominal flux distribution on the receiver panel, energy input to tubes, axial temperature distribution in sodium and tubes, sodium flow distribution, sodium pressure drop and orifice calculations, temperature distribution in tube cut (R-0), backface structure, and nonuniform sodium outlet temperature. Transient conditions and panel front face heat losses are not considered; these are to be addressed in a subsequent design phase. Also to be considered later are the design conditions as variations from the nominal reference (operating) condition. An addendum, designated Appendix C, has been included describing panel heat losses, panel temperature distribution, and the tube-manifold joint thermal model.
Birch, Gabriel Carisle; Griffin, John Clark
2015-07-23
Numerous methods are available to measure the spatial frequency response (SFR) of an optical system. A recent change to the ISO 12233 photography resolution standard includes a sinusoidal Siemens star test target. We take the sinusoidal Siemens star proposed by the ISO 12233 standard, measure system SFR, and perform an analysis of errors induced by incorrectly identifying the center of a test target. We show a closed-form solution for the radial profile intensity measurement given an incorrectly determined center and describe how this error reduces the measured SFR of the system. As a result, using the closed-form solution, we propose a two-step process by which test target centers are corrected and the measured SFR is restored to the nominal, correctly centered values.
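The mechanism is easy to reproduce numerically: sampling an ideal sinusoidal star around a circle whose assumed center is wrong distorts the angular phase and pulls down the modulation recovered at the nominal spoke frequency. The sketch below is an illustration under assumed parameters (spoke count, radius, offsets), not the paper's closed-form solution.

```python
# Modulation loss on a sinusoidal Siemens star from a center-location error.
import numpy as np

N = 36                                              # spoke cycles per revolution (assumed)
theta = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)

def modulation(r, dx):
    """Modulation recovered at the nominal frequency N when the assumed center is off by dx."""
    phi = np.arctan2(r * np.sin(theta), r * np.cos(theta) + dx)
    i = 0.5 + 0.5 * np.cos(N * phi)                 # ideal star sampled on the shifted circle
    return 2.0 * abs(np.mean(i * np.exp(-1j * N * theta)))

m0 = modulation(r=50.0, dx=0.0)
for dx in (0.0, 0.5, 1.0, 2.0):                     # center error in pixels (assumed)
    print(dx, round(modulation(50.0, dx) / m0, 3))
```

The printed ratio is the factor by which the measured SFR at that radius is depressed, which is the effect the proposed correction step is designed to remove.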
SU-F-BRD-05: Robustness of Dose Painting by Numbers in Proton Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Montero, A Barragan; Sterpin, E; Lee, J
Purpose: Proton range uncertainties may cause important dose perturbations within the target volume, especially when steep dose gradients are present as in dose painting. The aim of this study is to assess the robustness against setup and range errors for highly heterogeneous dose prescriptions (i.e., dose painting by numbers), delivered by proton pencil beam scanning. Methods: An automatic workflow, based on MATLAB functions, was implemented through scripting in RayStation (RaySearch Laboratories). It performs a gradient-based segmentation of the dose painting volume from 18FDG-PET images (GTVPET), and calculates the dose prescription as a linear function of the FDG-uptake value on each voxel. The workflow was applied to two patients with head and neck cancer. Robustness against setup and range errors of the conventional PTV margin strategy (prescription dilated by 2.5 mm) versus CTV-based (minimax) robust optimization (2.5 mm setup, 3% range error) was assessed by comparing the prescription with the planned dose for a set of error scenarios. Results: In order to ensure dose coverage above 95% of the prescribed dose in more than 95% of the GTVPET voxels while compensating for the uncertainties, the plans with a PTV generated a high overdose. For the nominal case, up to 35% of the GTVPET received doses 5% beyond prescription. For the worst of the evaluated error scenarios, the volume with 5% overdose increased to 50%. In contrast, for CTV-based plans this 5% overdose was present only in a small fraction of the GTVPET, which ranged from 7% in the nominal case to 15% in the worst of the evaluated scenarios. Conclusion: The use of a PTV leads to non-robust dose distributions with excessive overdose in the painted volume. In contrast, robust optimization yields robust dose distributions with limited overdose. RaySearch Laboratories is sincerely acknowledged for providing us with RayStation treatment planning system and for the support provided.
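A common linear dose-painting-by-numbers mapping (a standard form in the literature; the authors' exact dose bounds and uptake normalization are not given in the abstract) assigns each voxel v in the GTVPET a prescription from its uptake I_v:

```latex
D(I_v) \;=\; D_{\min} \;+\; \frac{I_v - I_{\min}}{I_{\max} - I_{\min}}\,\bigl(D_{\max} - D_{\min}\bigr)
```

Steeper uptake gradients therefore translate directly into steep dose gradients, which is what makes such prescriptions sensitive to range and setup errors.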
A preliminary estimate of geoid-induced variations in repeat orbit satellite altimeter observations
NASA Technical Reports Server (NTRS)
Brenner, Anita C.; Beckley, B. D.; Koblinsky, C. J.
1990-01-01
Altimeter satellites are often maintained in a repeating orbit to facilitate the separation of sea-height variations from the geoid. However, atmospheric drag and solar radiation pressure cause a satellite orbit to drift. For Geosat this drift causes the ground track to vary by ±1 km about the nominal repeat path. This misalignment leads to an error in the estimates of sea surface height variations because of the local slope in the geoid. This error has been estimated globally for the Geosat Exact Repeat Mission using a mean sea surface constructed from Geos 3 and Seasat altimeter data. Over most of the ocean the geoid gradient is small, and the repeat-track misalignment leads to errors of only 1 to 2 cm. However, in the vicinity of trenches, continental shelves, islands, and seamounts, errors can exceed 20 cm. The estimated error is compared with direct estimates from Geosat altimetry, and a strong correlation is found in the vicinity of the Tonga and Aleutian trenches. This correlation increases as the orbit error is reduced because of the increased signal-to-noise ratio.
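To first order, the height error described here is just the local geoid slope times the ground-track offset; a hedged sketch of the relation (notation assumed):

```latex
\delta h \;\approx\; \nabla_{\!\perp} N \cdot \delta x_{\perp}
```

With δx⊥ = ±1 km of cross-track drift, the quoted 1-2 cm errors correspond to cross-track geoid slopes of a few cm/km over the open ocean, while the >20 cm errors near trenches imply slopes above 20 cm/km.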
Design of a Model Reference Adaptive Controller for an Unmanned Air Vehicle
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Matsutani, Megumi; Annaswamy, Anuradha M.
2010-01-01
This paper presents the "Adaptive Control Technology for Safe Flight (ACTS)" architecture, which consists of a non-adaptive controller that provides satisfactory performance under nominal flying conditions, and an adaptive controller that provides robustness under off-nominal ones. The design and implementation procedures of both controllers are presented. The aim of these procedures, which encompass both theoretical and practical considerations, is to develop a controller suitable for flight. The ACTS architecture is applied to the Generic Transport Model (GTM) developed by NASA Langley Research Center. The GTM is a dynamically scaled test model of a transport aircraft for which a flight-test article and a high-fidelity simulation are available. The nominal controller at the core of the ACTS architecture has a multivariable LQR-PI structure while the adaptive one has a direct, model-reference structure. The main control surfaces as well as the throttles are used as control inputs. The inclusion of the latter alleviates the pilot's workload by eliminating the need for cancelling the pitch coupling generated by changes in thrust. Furthermore, the independent usage of the throttles by the adaptive controller enables their use for attitude control. Advantages and potential drawbacks of adaptation are demonstrated by performing high-fidelity simulations of a flight-validated controller and of its adaptive augmentation.
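To make "direct, model-reference structure" concrete, here is a minimal scalar sketch of a direct MRAC loop: the adaptive gains are driven by the error between the plant state and a reference model. All numbers, the plant, and the adaptation gain are illustrative assumptions, not the GTM design.

```python
# Scalar direct model-reference adaptive control (Lyapunov-based gradient laws).
import numpy as np

a, b = -1.0, 3.0            # "unknown" plant:   xdot  = a*x + b*u
am, bm = -4.0, 4.0          # reference model:   xmdot = am*xm + bm*r
gamma, dt = 2.0, 1e-3       # adaptation gain, integration step
x = xm = kx = kr = 0.0
for k in range(20000):
    r = 1.0 if (k * dt) % 4 < 2 else -1.0          # square-wave command
    u = kx * x + kr * r                            # adaptive control law
    e = x - xm                                     # model-following error
    x += dt * (a * x + b * u)
    xm += dt * (am * xm + bm * r)
    kx -= dt * gamma * e * x * np.sign(b)          # adaptive gain updates
    kr -= dt * gamma * e * r * np.sign(b)
print(round(kx, 2), round(kr, 2))                  # ideal gains: (am-a)/b = -1.0, bm/b ≈ 1.33
```

The flight architecture is multivariable and wraps an LQR-PI baseline, but the error-driven gain-adjustment idea is of this kind.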
Intercontinental height datum connection with GOCE and GPS-levelling data
NASA Astrophysics Data System (ADS)
Gruber, T.; Gerlach, C.; Haagmans, R.
2012-12-01
In this study an attempt is made to establish height system datum connections based upon a Gravity field and steady-state Ocean Circulation Explorer (GOCE) gravity field model and a set of global positioning system (GPS) and levelling data. The procedure applied is, in principle, straightforward. First, local geoid heights are obtained pointwise from GPS and levelling data. Then the mean of these geoid heights is computed for regions nominally referring to the same height datum. Subsequently, these local mean geoid heights are compared with a mean global geoid from GOCE for the same region. This way one can identify an offset of the local to the global geoid per region. This procedure is applied to a number of regions distributed worldwide. Results show that the vertical datum offset estimates strongly depend on the nature of the omission error, i.e. the signal not represented in the GOCE model. For a smooth gravity field, the commission error of GOCE, the quality of the GPS and levelling data, and the averaging control the accuracy of the vertical datum offset estimates. In cases where the omission error does not cancel out in the mean value computation, because of a sub-optimal point distribution or a characteristic behaviour of the omitted part of the geoid signal, one needs to estimate a correction for the omission error from other sources. For areas with dense and high-quality ground observations, the EGM2008 global model is a good choice to estimate the omission error correction in these cases. Relative intercontinental height datum offsets are estimated by applying this procedure between the United States of America (USA), Australia, and Germany. These are compared to historical values provided in the literature and computed with the same procedure. The results obtained in this study agree with the historical results at a level of 10 cm. The changes mainly can be attributed to the new global geoid information from GOCE, rather than to the ellipsoidal heights or the levelled heights. These historical levelling data are still in use in many countries. This conclusion is supported by other results on the validation of the GOCE models.
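In compact form, the per-datum offset and the datum connection follow directly from the description above; a hedged sketch with assumed notation (h_i the GPS ellipsoidal height, H_i the levelled height, N_i^GOCE the GOCE geoid height at point i of region k):

```latex
\delta_k \;=\; \frac{1}{n_k}\sum_{i\in k}\bigl(h_i - H_i - N_i^{\mathrm{GOCE}}\bigr),
\qquad
\Delta_{AB} \;=\; \delta_A - \delta_B
```

Averaging over many points suppresses the omission error only if it behaves like noise across the point set, which is exactly the caveat the study raises.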
Using Dispersed Modes During Model Correlation
NASA Technical Reports Server (NTRS)
Stewart, Eric C.; Hathcock, Megan L.
2017-01-01
The model correlation process for the modal characteristics of a launch vehicle is well established. After a test, parameters within the nominal model are adjusted to reflect structural dynamics revealed during testing. However, a full model correlation process for a complex structure can take months of man-hours and many computational resources. If the analyst only has weeks, or even days, of time in which to correlate the nominal model to the experimental results, then the traditional correlation process is not suitable. This paper describes using model dispersions to assist the model correlation process and decrease the overall cost of the process. The process creates thousands of model dispersions from the nominal model prior to the test and then compares each of them to the test data. Using mode shape and frequency error metrics, one dispersion is selected as the best match to the test data. This dispersion is further improved by using a commercial model correlation software. In the three examples shown in this paper, this dispersion based model correlation process performs well when compared to models correlated using traditional techniques and saves time in the post-test analysis.
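The ranking step the paper describes needs only a frequency-error metric and a shape metric; the Modal Assurance Criterion (MAC) is the usual choice for the latter. The sketch below is a hedged illustration of such a scoring function; the weights, mode pairing, and real-valued shapes are assumptions, not the paper's exact metrics.

```python
# Score a dispersed model against test data: frequency error + (1 - MAC) per mode pair.
import numpy as np

def mac(phi_a, phi_t):
    """Modal Assurance Criterion between an analytical and a test mode shape."""
    return float(np.abs(phi_a @ phi_t) ** 2 / ((phi_a @ phi_a) * (phi_t @ phi_t)))

def dispersion_score(freqs_a, shapes_a, freqs_t, shapes_t, w_freq=1.0, w_mac=1.0):
    """Lower is better; assumes modes are already paired analytical-to-test."""
    df = np.abs(np.asarray(freqs_a) - np.asarray(freqs_t)) / np.asarray(freqs_t)
    dm = np.array([1.0 - mac(a, t) for a, t in zip(shapes_a, shapes_t)])
    return float(w_freq * df.sum() + w_mac * dm.sum())
```

Evaluating this score for each of the thousands of pre-computed dispersions and keeping the minimum gives the starting model that the commercial correlation software then refines.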
Guidance and Control strategies for aerospace vehicles
NASA Technical Reports Server (NTRS)
Hibey, J. L.; Naidu, D. S.; Charalambous, C. D.
1989-01-01
A neighboring optimal guidance scheme was devised for a nonlinear dynamic system with stochastic inputs and perfect measurements as applicable to fuel optimal control of an aeroassisted orbital transfer vehicle. For the deterministic nonlinear dynamic system describing the atmospheric maneuver, a nominal trajectory was determined. Then, a neighboring optimal guidance scheme was obtained for open loop and closed loop control configurations. Taking modelling uncertainties into account, a linear, stochastic, neighboring optimal guidance scheme was devised. Finally, the optimal trajectory was approximated as the sum of the deterministic nominal trajectory and the stochastic neighboring optimal solution. Numerical results are presented for a typical vehicle. A fuel-optimal control problem in aeroassisted noncoplanar orbital transfer is also addressed. The equations of motion for the atmospheric maneuver are nonlinear and the optimal (nominal) trajectory and control are obtained. In order to follow the nominal trajectory under actual conditions, a neighboring optimum guidance scheme is designed using linear quadratic regulator theory for onboard real-time implementation. One of the state variables, rather than time, is used as the independent variable. The weighting matrices in the performance index are chosen by a combination of a heuristic method and an optimal modal approach. The necessary feedback control law is obtained in order to minimize the deviations from the nominal conditions.
DOT National Transportation Integrated Search
2001-01-22
Federal Aviation Regulation (FAR) Part 36, Noise Standards: Aircraft Type and Airworthiness Certification, requires that measured aircraft noise certification data be corrected to a nominal reference-day condition. This correction process...
Trajectory Design to Mitigate Risk on the Transiting Exoplanet Survey Satellite (TESS) Mission
NASA Technical Reports Server (NTRS)
Dichmann, Donald
2016-01-01
The Transiting Exoplanet Survey Satellite (TESS) will employ a highly eccentric Earth orbit, in 2:1 lunar resonance, reached with a lunar flyby preceded by 3.5 phasing loops. The TESS mission has limited propellant and several orbit constraints. Based on analysis and simulation, we have designed the phasing loops to reduce delta-V and to mitigate risk due to maneuver execution errors. We have automated the trajectory design process and use distributed processing to generate and to optimize nominal trajectories, check constraint satisfaction, and finally model the effects of maneuver errors to identify trajectories that best meet the mission requirements.
Anselmi, Nicola; Salucci, Marco; Rocca, Paolo; Massa, Andrea
2016-01-01
The sensitivity to both calibration errors and mutual coupling effects of the power pattern radiated by a linear array is addressed. Starting from the knowledge of the nominal excitations of the array elements and the maximum uncertainty on their amplitudes, the bounds of the pattern deviations from the ideal one are analytically derived by exploiting the Circular Interval Analysis (CIA). A set of representative numerical results is reported and discussed to assess the effectiveness and the reliability of the proposed approach also in comparison with state-of-the-art methods and full-wave simulations.
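A simpler, looser bound than the paper's CIA comes from the triangle inequality: with excitation amplitudes known only to within ±Δ_n, the field magnitude deviates from nominal by at most ΣΔ_n. The sketch below illustrates that baseline bound under assumed array parameters; the paper's circular intervals are tighter because they track the complex-plane geometry per element.

```python
# Worst-case pattern bounds for a uniform linear array with bounded amplitude errors.
import numpy as np

N, d = 16, 0.5                              # elements, spacing in wavelengths (assumed)
a = np.ones(N)                              # nominal excitations
delta = 0.05 * a                            # +/-5% amplitude uncertainty (assumed)
theta = np.radians(np.linspace(-90.0, 90.0, 721))
steer = np.exp(1j * 2.0 * np.pi * d * np.outer(np.sin(theta), np.arange(N)))
af_nom = np.abs(steer @ a)                  # nominal array factor magnitude
upper = af_nom + delta.sum()                # triangle-inequality bounds
lower = np.clip(af_nom - delta.sum(), 0.0, None)
print(20 * np.log10(upper.max() / af_nom.max()))   # worst-case mainlobe rise, dB
```

Sidelobe regions, where af_nom is small, are hit hardest in relative terms, which is why tolerance analysis matters most for low-sidelobe designs.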
Radiometric Spacecraft Tracking for Deep Space Navigation
NASA Technical Reports Server (NTRS)
Lanyi, Gabor E.; Border, James S.; Shin, Dong K.
2008-01-01
Interplanetary spacecraft navigation relies on three types of terrestrial tracking observables. 1) Ranging measures the distance between the observing site and the probe. 2) The line-of-sight velocity of the probe is inferred from the Doppler shift, by measuring the frequency shift of the received signal with respect to the unshifted frequency. 3) Differential angular coordinates of the probe with respect to natural radio sources are nominally obtained via the differential delay technique of ΔDOR (Delta Differential One-way Ranging). The accuracy of spacecraft coordinate determination depends on the measurement uncertainties associated with each of these three techniques. We evaluate the corresponding sources of error and present a detailed error budget.
Aeroassisted orbit transfer vehicle trajectory analysis
NASA Technical Reports Server (NTRS)
Braun, Robert D.; Suit, William T.
1988-01-01
The emphasis in this study was on the use of multiple-pass trajectories for aerobraking. However, for comparison, single-pass trajectories, trajectories using ballutes, and trajectories corrupted by atmospheric anomalies were run. A two-pass trajectory was chosen to determine the relation between sensitivity to errors and payload to orbit. Trajectories that used only aerodynamic forces for maneuvering could put more weight into the target orbits but were very sensitive to variations from the planned trajectory. Using some thrust control resulted in less payload to orbit, but greatly reduced the sensitivity to variations from nominal trajectories. When compared to the non-thrusting trajectories investigated, the judicious use of thrusting resulted in multiple-pass trajectories that gave 97 percent of the payload to orbit with almost none of the sensitivity to variations from the nominal.
The influence of student characteristics on the dependability of behavioral observation data.
Briesch, Amy M; Volpe, Robert J; Ferguson, Tyler David
2014-06-01
Although generalizability theory has been used increasingly in recent years to investigate the dependability of behavioral estimates, many of these studies have relied on use of general education populations as opposed to those students who are most likely to be referred for assessment due to problematic classroom behavior (e.g., inattention, disruption). The current study investigated the degree to which differences exist in terms of the magnitude of both variance component estimates and dependability coefficients between students nominated by their teachers for Tier 2 interventions due to classroom behavior problems and a general classroom sample (i.e., including both nominated and non-nominated students). The academic engagement levels of 16 (8 nominated, 8 non-nominated) middle school students were measured by 4 trained observers using momentary time-sampling procedures. A series of G and D studies were then conducted to determine whether the 2 groups were similar in terms of the (a) distribution of rating variance and (b) number of observations needed to achieve an adequate level of dependability. Results suggested that the behavior of students in the teacher-nominated group fluctuated more across time and that roughly twice as many observations would therefore be required to yield similar levels of dependability compared with the combined group. These findings highlight the importance of constructing samples of students that are comparable to those students with whom the measurement method is likely to be applied when conducting psychometric investigations of behavioral assessment tools.
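The "roughly twice as many observations" conclusion follows from the standard D-study dependability formula; here is a hedged toy calculation with assumed variance components (not the study's estimates):

```python
# Absolute dependability (Phi) of the mean of n observations in a D study.
def phi(var_person, var_error, n_obs):
    return var_person / (var_person + var_error / n_obs)

# Doubling the error variance roughly doubles the observations needed for Phi >= 0.80.
for var_error in (0.5, 1.0):          # combined group vs. nominated group (assumed)
    n = 1
    while phi(1.0, var_error, n) < 0.80:
        n += 1
    print(var_error, n)               # prints 2 observations, then 4
```

The qualitative point is general: when behavior fluctuates more across occasions, the occasion-related error variance grows, and the dependability of a fixed-length observation schedule drops.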
Iterative Correction of Reference Nucleotides (iCORN) using second generation sequencing technology.
Otto, Thomas D; Sanders, Mandy; Berriman, Matthew; Newbold, Chris
2010-07-15
The accuracy of reference genomes is important for downstream analysis but a low error rate requires expensive manual interrogation of the sequence. Here, we describe a novel algorithm (Iterative Correction of Reference Nucleotides) that iteratively aligns deep coverage of short sequencing reads to correct errors in reference genome sequences and evaluate their accuracy. Using Plasmodium falciparum (81% A + T content) as an extreme example, we show that the algorithm is highly accurate and corrects over 2000 errors in the reference sequence. We give examples of its application to numerous other eukaryotic and prokaryotic genomes and suggest additional applications. The software is available at http://icorn.sourceforge.net
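The iterate-align-correct loop at the heart of the algorithm is compact; below is a hedged toy sketch of the idea, in which reads carry known offsets and a per-position majority vote stands in for the aligner and variant caller that the real iCORN wraps.

```python
# Toy iterative reference correction: consensus-patch the sequence, repeat to convergence.
def consensus_correct(reference, reads, max_iter=10, min_cov=2):
    ref = list(reference)
    for _ in range(max_iter):
        piles = {i: {} for i in range(len(ref))}
        for offset, seq in reads:                   # reads with known offsets (toy "alignment")
            for j, base in enumerate(seq):
                pile = piles.get(offset + j)
                if pile is not None:
                    pile[base] = pile.get(base, 0) + 1
        changed = False
        for i, counts in piles.items():
            if counts:
                best = max(counts, key=counts.get)
                if best != ref[i] and counts[best] >= min_cov:
                    ref[i], changed = best, True    # correct the reference base
        if not changed:
            break                                   # converged: no corrections called
    return "".join(ref)

reads = [(0, "ACGT"), (2, "GTTA"), (4, "TAGC"), (1, "CGTT")]
print(consensus_correct("ACGAAAGC", reads))         # -> "ACGTTAGC"
```

The real tool re-maps the reads after every patch, which is what lets corrections in one pass expose further errors in the next.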
ERIC Educational Resources Information Center
Polio, Charlene
1995-01-01
Examined how speakers of languages with zero pronouns (Japanese) and without them (English) use zero pronouns when acquiring a second language (L2) that has them (Mandarin Chinese). The findings show that L2 learners do not use zero pronouns as often as native speakers and that their use increases with proficiency. (51 references) (MDM)
Metonymy and reference-point errors in novice programming
NASA Astrophysics Data System (ADS)
Miller, Craig S.
2014-07-01
When learning to program, students often mistakenly refer to an element that is structurally related to the element that they intend to reference. For example, they may indicate the attribute of an object when their intention is to reference the whole object. This paper examines these reference-point errors through the context of metonymy. Metonymy is a rhetorical device where the speaker states a referent that is structurally related to the intended referent. For example, the following sentence states an office bureau but actually refers to a person working at the bureau: The tourist asked the travel bureau for directions to the museum. Drawing upon previous studies, I discuss how student reference errors may be consistent with the use of metonymy. In particular, I hypothesize that students are more likely to reference an identifying element even when a structurally related element is intended. I then present two experiments, which produce results consistent with this analysis. In both experiments, students are more likely to produce reference-point errors that involve identifying attributes than descriptive attributes. Given these results, I explore the possibility that students are relying on habits of communication rather than the mechanistic principles needed for successful programming. Finally I discuss teaching interventions using live examples and how metonymy may be presented to non-computing students as pedagogy for computational thinking.
Gamma heating in reflector heat shield of gas core reactor
NASA Technical Reports Server (NTRS)
Lofthouse, J. H.; Kunze, J. F.; Young, T. E.; Young, R. C.
1972-01-01
Heating rate measurements made in a mock-up of a BeO heat shield for a gas core nuclear rocket engine yield results nominally a factor of two greater than calculated by two different methods. The disparity is thought to be caused by errors in neutron capture cross sections and gamma spectra from the low cross-section elements D, O, and Be.
Autonomous frequency domain identification: Theory and experiment
NASA Technical Reports Server (NTRS)
Yam, Yeung; Bayard, D. S.; Hadaegh, F. Y.; Mettler, E.; Milman, M. H.; Scheid, R. E.
1989-01-01
The analysis, design, and on-orbit tuning of robust controllers require more information about the plant than simply a nominal estimate of the plant transfer function. Information is also required concerning the uncertainty in the nominal estimate, or more generally, the identification of a model set within which the true plant is known to lie. The identification methodology that was developed and experimentally demonstrated makes use of a simple but useful characterization of the model uncertainty based on the output error. This is a characterization of the additive uncertainty in the plant model, which has found considerable use in many robust control analysis and synthesis techniques. The identification process is initiated by a stochastic input u which is applied to the plant p, giving rise to the output y. The spectral estimate ĥ = P_uy/P_uu is used as an estimate of p, and the model order is estimated using the product moment matrix (PMM) method. A parametric model p̂ is then determined by curve fitting the spectral estimate to a rational transfer function. The additive uncertainty δ_m = p − p̂ is then estimated by the cross-spectral estimate δ̂ = P_ue/P_uu, where e = y − ŷ is the output error and ŷ = p̂u is the computed output of the parametric model subjected to the actual input u. The experimental results demonstrate that the curve fitting algorithm produces the reduced-order plant model which minimizes the additive uncertainty. The nominal transfer function estimate p̂ and the estimate δ̂ of the additive uncertainty δ_m are subsequently available to be used for optimization of robust controller performance and stability.
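The two spectral-ratio estimates at the heart of this procedure are one-liners with Welch-style estimators. The sketch below is an illustration on a synthetic plant; the "true" system, the stand-in low-order fit, and all processing parameters are assumptions.

```python
# Nonparametric plant estimate h = Puy/Puu and additive uncertainty delta = Pue/Puu.
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs, n = 100.0, 200000
u = rng.normal(size=n)                               # stochastic input
b_true, a_true = signal.butter(4, 0.2)               # "true" plant (assumed)
y = signal.lfilter(b_true, a_true, u)

f, Puu = signal.welch(u, fs=fs, nperseg=2048)
_, Puy = signal.csd(u, y, fs=fs, nperseg=2048)
h_hat = Puy / Puu                                    # spectral estimate of the plant

b_fit, a_fit = signal.butter(2, 0.2)                 # stand-in reduced-order fit (assumed)
y_hat = signal.lfilter(b_fit, a_fit, u)              # model output under the actual input
_, Pue = signal.csd(u, y - y_hat, fs=fs, nperseg=2048)
delta_hat = Pue / Puu                                # additive-uncertainty estimate
print(float(np.abs(delta_hat).max()))                # peak additive uncertainty
```

|delta_hat(f)| is exactly the frequency-dependent additive-uncertainty bound that robust-control synthesis consumes.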
Modified fast frequency acquisition via adaptive least squares algorithm
NASA Technical Reports Server (NTRS)
Kumar, Rajendra (Inventor)
1992-01-01
A method and the associated apparatus for estimating the amplitude, frequency, and phase of a signal of interest are presented. The method comprises the following steps: (1) inputting the signal of interest; (2) generating a reference signal with adjustable amplitude, frequency and phase at an output thereof; (3) mixing the signal of interest with the reference signal and a signal 90 deg out of phase with the reference signal to provide a pair of quadrature sample signals comprising respectively a difference between the signal of interest and the reference signal and a difference between the signal of interest and the signal 90 deg out of phase with the reference signal; (4) using the pair of quadrature sample signals to compute estimates of the amplitude, frequency, and phase of an error signal comprising the difference between the signal of interest and the reference signal employing a least squares estimation; (5) adjusting the amplitude, frequency, and phase of the reference signal from the numerically controlled oscillator in a manner which drives the error signal towards zero; and (6) outputting the estimates of the amplitude, frequency, and phase of the error signal in combination with the reference signal to produce a best estimate of the amplitude, frequency, and phase of the signal of interest. The preferred method includes the step of providing the error signal as a real time confidence measure as to the accuracy of the estimates wherein the closer the error signal is to zero, the higher the probability that the estimates are accurate. A matrix in the estimation algorithm provides an estimate of the variance of the estimation error.
Reference Accuracy among Research Articles Published in "Research on Social Work Practice"
ERIC Educational Resources Information Center
Wilks, Scott E.; Geiger, Jennifer R.; Bates, Samantha M.; Wright, Amy L.
2017-01-01
Objective: The objective was to examine reference errors in research articles published in Research on Social Work Practice. High rates of reference errors in other top social work journals have been noted in previous studies. Methods: Via a sampling frame of 22,177 total references among 464 research articles published in the previous decade, a…
Tfelt-Hansen, Peer
2015-03-01
There are two types of errors when references are used in the scientific literature: citation errors and quotation errors; in reviews, these errors have mainly been evaluated quantitatively. Quotation errors are the major problem, and one review reported 6% major quotation errors. The objective of this listing of quotation errors is to illustrate, by qualitative analysis of 10 major quotation errors of different types, how and possibly why authors misquote references. The author selected for review the first 10 different consecutive major quotation errors encountered in his reading of the headache literature. The characteristics of the 10 quotation errors ranged considerably. Thus, in a review of migraine therapy in a very prestigious medical journal, the superiority of a new treatment (sumatriptan) vs an old treatment (aspirin plus metoclopramide) was claimed despite no significant difference for the primary efficacy measure in the trial. One author, in a scientific debate, referred to the lack of dilation of the middle meningeal artery in spontaneous migraine despite the fact that only 1 migraine attack was studied. The possibility for creative major quotation errors in the medical literature is most likely infinite. Qualitative evaluations of major quotation errors, such as the present one, will hopefully result in more general awareness of quotation problems in the medical literature. Even if the final responsibility for correct use of quotations is with the authors, the referees, the experts with the knowledge needed to spot quotation errors, should be more involved in ensuring correct and fair use of references. Finally, this paper suggests that major misleading quotations, if pointed out by readers of the journal, should, as a rule, be corrected by way of an erratum statement.
Optical-Fiber Power Meter Comparison Between NIST and PTB.
Vayshenker, I; Haars, H; Li, X; Lehman, J H; Livigni, D J
2003-01-01
We describe the results of a comparison of reference standards between the National Institute of Standards and Technology (NIST-USA) and the Physikalisch-Technische Bundesanstalt (PTB-Germany) at nominal wavelengths of 1300 nm and 1550 nm using an optical-fiber cable. Both laboratories used thermal detectors as reference standards. A novel temperature-controlled optical-trap detector was used as a transfer standard to compare the two reference standards. Measurement results showed differences of less than 1.5 × 10⁻³, which is within the combined uncertainty for both laboratories.
Bailey, Stephanie L.; Bono, Rose S.; Nash, Denis; Kimmel, April D.
2018-01-01
Background Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. Methods We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. Results We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). Conclusions Standard error-checking techniques may not identify all errors in spreadsheet-based models. Comparing parallel model versions can aid in identifying unintentional errors and promoting reliable model projections, particularly when resources are limited.
Cecconi, Maurizio; Rhodes, Andrew; Poloniecki, Jan; Della Rocca, Giorgio; Grounds, R Michael
2009-01-01
Bland-Altman analysis is used for assessing agreement between two measurements of the same clinical variable. In the field of cardiac output monitoring, its results, in terms of bias and limits of agreement, are often difficult to interpret, leading clinicians to use a cutoff of 30% in the percentage error in order to decide whether a new technique may be considered a good alternative. This percentage error of +/- 30% arises from the assumption that the commonly used reference technique, intermittent thermodilution, has a precision of +/- 20% or less. The combination of two precisions of +/- 20% equates to a total error of +/- 28.3%, which is commonly rounded up to +/- 30%. Thus, finding a percentage error of less than +/- 30% should equate to the new tested technique having an error similar to the reference, which therefore should be acceptable. In a worked example in this paper, we discuss the limitations of this approach, in particular in regard to the situation in which the reference technique may be either more or less precise than would normally be expected. This can lead to inappropriate conclusions being drawn from data acquired in validation studies of new monitoring technologies. We conclude that it is not acceptable to present comparison studies quoting percentage error as an acceptability criteria without reporting the precision of the reference technique.
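The arithmetic behind the 30% convention, plus the usual definition of the percentage error, can be written out; a hedged sketch (the 1.96 multiplier is the common Critchley-and-Critchley convention, assumed here):

```latex
\sqrt{(20\%)^2 + (20\%)^2} \;=\; 28.3\% \;\approx\; 30\%,
\qquad
\mathrm{PE} \;=\; \frac{1.96\,\mathrm{SD}_{\mathrm{bias}}}{\overline{\mathrm{CO}}}\times 100\%
```

The paper's point is that the left-hand combination is only valid when the reference method really has ±20% precision; a sloppier or tighter reference silently moves the goalposts for the ±30% verdict.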
Lock-in amplifier error prediction and correction in frequency sweep measurements.
Sonnaillon, Maximiliano Osvaldo; Bonetto, Fabian Jose
2007-01-01
This article proposes an analytical algorithm for predicting errors in lock-in amplifiers (LIAs) working with time-varying reference frequency. Furthermore, a simple method for correcting such errors is presented. The reference frequency can be swept in order to measure the frequency response of a system within a given spectrum. The continuous variation of the reference frequency produces a measurement error that depends on three factors: the sweep speed, the LIA low-pass filters, and the frequency response of the measured system. The proposed error prediction algorithm is based on the final value theorem of the Laplace transform. The correction method uses a double-sweep measurement. A mathematical analysis is presented and validated with computational simulations and experimental measurements.
Model error in covariance structure models: Some implications for power and Type I error
Coffman, Donna L.
2010-01-01
The present study investigated the degree to which violation of the parameter drift assumption affects the Type I error rate for the test of close fit and power analysis procedures proposed by MacCallum, Browne, and Sugawara (1996) for both the test of close fit and the test of exact fit. The parameter drift assumption states that as sample size increases both sampling error and model error (i.e. the degree to which the model is an approximation in the population) decrease. Model error was introduced using a procedure proposed by Cudeck and Browne (1992). The empirical power for both the test of close fit, in which the null hypothesis specifies that the Root Mean Square Error of Approximation (RMSEA) ≤ .05, and the test of exact fit, in which the null hypothesis specifies that RMSEA = 0, is compared with the theoretical power computed using the MacCallum et al. (1996) procedure. The empirical power and theoretical power for both the test of close fit and the test of exact fit are nearly identical under violations of the assumption. The results also indicated that the test of close fit maintains the nominal Type I error rate under violations of the assumption.
Sensitivity analysis of periodic errors in heterodyne interferometry
NASA Astrophysics Data System (ADS)
Ganguly, Vasishta; Kim, Nam Ho; Kim, Hyo Soo; Schmitz, Tony
2011-03-01
Periodic errors in heterodyne displacement measuring interferometry occur due to frequency mixing in the interferometer. These nonlinearities are typically characterized as first- and second-order periodic errors which cause a cyclical (non-cumulative) variation in the reported displacement about the true value. This study implements an existing analytical periodic error model in order to identify sensitivities of the first- and second-order periodic errors to the input parameters, including rotational misalignments of the polarizing beam splitter and mixing polarizer, non-orthogonality of the two laser frequencies, ellipticity in the polarizations of the two laser beams, and different transmission coefficients in the polarizing beam splitter. A local sensitivity analysis is first conducted to examine the sensitivities of the periodic errors with respect to each input parameter about the nominal input values. Next, a variance-based approach is used to study the global sensitivities of the periodic errors by calculating the Sobol' sensitivity indices using Monte Carlo simulation. The effect of variation in the input uncertainty on the computed sensitivity indices is examined. It is seen that the first-order periodic error is highly sensitive to non-orthogonality of the two linearly polarized laser frequencies, while the second-order error is most sensitive to the rotational misalignment between the laser beams and the polarizing beam splitter. A particle swarm optimization technique is finally used to predict the possible setup imperfections based on experimentally generated values for periodic errors.
Sensitivity of planetary cruise navigation to earth orientation calibration errors
NASA Technical Reports Server (NTRS)
Estefan, J. A.; Folkner, W. M.
1995-01-01
A detailed analysis was conducted to determine the sensitivity of spacecraft navigation errors to the accuracy and timeliness of Earth orientation calibrations. Analyses based on simulated X-band (8.4-GHz) Doppler and ranging measurements acquired during the interplanetary cruise segment of the Mars Pathfinder heliocentric trajectory were completed for the nominal trajectory design and for an alternative trajectory with a longer transit time. Several error models were developed to characterize the effect of Earth orientation on navigational accuracy based on current and anticipated Deep Space Network calibration strategies. The navigational sensitivity of Mars Pathfinder to calibration errors in Earth orientation was computed for each candidate calibration strategy with the Earth orientation parameters included as estimated parameters in the navigation solution. In these cases, the calibration errors contributed 23 to 58% of the total navigation error budget, depending on the calibration strategy being assessed. Navigation sensitivity calculations were also performed for cases in which Earth orientation calibration errors were not adjusted in the navigation solution. In these cases, Earth orientation calibration errors contributed from 26 to as much as 227% of the total navigation error budget. The final analysis suggests that, not only is the method used to calibrate Earth orientation vitally important for precision navigation of Mars Pathfinder, but perhaps equally important is the method for inclusion of the calibration errors in the navigation solutions.
Rankin, Richard; Kotter, Dale
1994-01-01
An optical voltage reference for providing an alternative to a battery source. The optical reference apparatus provides a temperature stable, high precision, isolated voltage reference through the use of optical isolation techniques to eliminate current and impedance coupling errors. Pulse rate frequency modulation is employed to eliminate errors in the optical transmission link while phase-lock feedback is employed to stabilize the frequency to voltage transfer function.
Improved ultrasonic standard reference blocks
NASA Technical Reports Server (NTRS)
Eitzen, D. G.
1975-01-01
A program to improve the quality, reproducibility and reliability of nondestructive testing through the development of improved ASTM-type ultrasonic reference standards is described. Reference blocks of aluminum, steel, and titanium alloys were considered. Equipment representing the state-of-the-art in laboratory and field ultrasonic equipment was obtained and evaluated. Some RF and spectral data on ten sets of ultrasonic reference blocks were taken as part of a task to quantify the variability in response from nominally identical blocks. Techniques for residual stress, preferred orientation, and microstructural measurements were refined and are applied to a reference block rejected by the manufacturer during fabrication in order to evaluate the effect of metallurgical condition on block response.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-02
... recommendations for a standardized reference data depository representing the universe of legal and financial... individual transactions, underlying legal documents (including master agreements and credit support... agencies, industry, exchanges, academia, information technology, information systems, and groups...
Jiang, Honghua; Ni, Xiao; Huster, William; Heilmann, Cory
2015-01-01
Hypoglycemia has long been recognized as a major barrier to achieving normoglycemia with intensive diabetic therapies. It is a common safety concern for diabetes patients. Therefore, it is important to apply appropriate statistical methods when analyzing hypoglycemia data. Here, we carried out bootstrap simulations to investigate the performance of the four commonly used statistical models (Poisson, negative binomial, analysis of covariance [ANCOVA], and rank ANCOVA) based on the data from a diabetes clinical trial. A zero-inflated Poisson (ZIP) model and a zero-inflated negative binomial (ZINB) model were also evaluated. Simulation results showed that the Poisson model inflated the type I error, while the negative binomial model was overly conservative. However, after adjusting for dispersion, both the Poisson and negative binomial models yielded slightly inflated type I error rates that were nevertheless close to the nominal level, together with reasonable power. Reasonable control of the type I error was associated with the ANCOVA model. The rank ANCOVA model was associated with the greatest power and with reasonable control of the type I error. Inflated type I errors were observed with the ZIP and ZINB models.
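The Poisson-versus-negative-binomial part of this comparison is easy to reproduce with standard GLM tooling. A hedged sketch on synthetic overdispersed counts (the covariate, dispersion, and sample size are illustrative assumptions; the trial's ANCOVA and rank ANCOVA arms are not shown):

```python
# Poisson vs. negative binomial GLM on overdispersed event counts.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400
treat = rng.integers(0, 2, n).astype(float)          # treatment indicator
X = sm.add_constant(treat)
counts = rng.negative_binomial(2, 0.4, size=n)       # overdispersed hypoglycemia-like counts

poisson = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
negbin = sm.GLM(counts, X, family=sm.families.NegativeBinomial()).fit()
# Pearson chi2 / df well above 1 flags the overdispersion that inflates
# the Poisson model's type I error; the NB fit absorbs it.
print(poisson.pearson_chi2 / poisson.df_resid, negbin.pearson_chi2 / negbin.df_resid)
```

Repeating the fit on bootstrap resamples under the null and counting how often p < 0.05 gives the empirical type I error rates the abstract compares.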
Optimum data weighting and error calibration for estimation of gravitational parameters
NASA Technical Reports Server (NTRS)
Lerch, Francis J.
1989-01-01
A new technique was developed for the weighting of data from satellite tracking systems in order to obtain an optimum least-squares solution and an error calibration for the solution parameters. Data sets from optical, electronic, and laser systems on 17 satellites in the Goddard Earth Model-T1 (GEM-T1) were employed toward application of this technique for gravity field parameters. GEM-T2 (31 satellites), recently computed as a direct application of the method, is also summarized. The method employs subset solutions of the data associated with the complete solution to agree with their error estimates. With the adjusted weights the process provides for an automatic calibration of the error estimates for the solution parameters. The data weights derived are generally much smaller than corresponding weights obtained from nominal values of observation accuracy or residuals. Independent tests show significant improvement for solutions with optimal weighting. The technique is general and may be applied to orbit parameters, station coordinates, or parameters other than those of the gravity model.
NASA Technical Reports Server (NTRS)
Pujar, Vijay V.; Cawley, James D.; Levine, S. (Technical Monitor)
2000-01-01
Earlier results from computer simulation studies suggest a correlation between the spatial distribution of stacking errors in the Beta-SiC structure and features observed in X-ray diffraction patterns of the material. Reported here are experimental results obtained from two types of nominally Beta-SiC specimens, which yield distinct XRD data. These samples were analyzed using high-resolution transmission electron microscopy (HRTEM) and the stacking error distribution was directly determined. The HRTEM results compare well to those deduced by matching the XRD data with simulated spectra, confirming the hypothesis that the XRD data are indicative not only of the presence and density of stacking errors, but can also yield information regarding their distribution. In addition, the stacking error population in both specimens is related to their synthesis conditions, and the relation appears similar to that developed by others to explain the formation of the corresponding polytypes.
Accounting for Relatedness in Family Based Genetic Association Studies
McArdle, P.F.; O’Connell, J.R.; Pollin, T.I.; Baumgarten, M.; Shuldiner, A.R.; Peyser, P.A.; Mitchell, B.D.
2007-01-01
Objective Assess the differences in point estimates, power, and type 1 error rates when accounting for and ignoring family structure in genetic tests of association. Methods We compare by simulation the performance of analytic models using variance components to account for family structure and regression models that ignore relatedness, for a range of possible family-based study designs (i.e., sib pairs vs. large sibships vs. nuclear families vs. extended families). Results Our analyses indicate that effect size estimates and power are not significantly affected by ignoring family structure. Type 1 error rates increase when family structure is ignored, as density of family structures increases, and as trait heritability increases. For discrete traits with moderate levels of heritability and across many common sampling designs, type 1 error rates rise from a nominal 0.05 to 0.11. Conclusion Ignoring family structure may be useful in screening, although it comes at the cost of an increased type 1 error rate, the magnitude of which depends on trait heritability and pedigree configuration.
Dexter, Franklin; Bayman, Emine O; Dexter, Elisabeth U
2017-12-01
We examined type I and II error rates for analysis of (1) mean hospital length of stay (LOS) versus (2) the percentage of hospital stays that are overnight. These 2 end points are suitable for when LOS is treated as a secondary economic end point. We repeatedly resampled LOS for 5052 discharges of thoracoscopic wedge resections and lung lobectomy at 26 hospitals. Unequal variances t test (Welch method) and Fisher exact test both were conservative (ie, type I error rate less than nominal level). The Wilcoxon rank sum test was included as a comparator; the type I error rates did not differ from the nominal level of 0.05 or 0.01. Fisher exact test was more powerful than the unequal variances t test at detecting differences among hospitals; the estimated odds ratio for obtaining P < .05 with Fisher exact test versus unequal variances t test was 1.94 (95% confidence interval, 1.31-3.01). Fisher exact test and Wilcoxon-Mann-Whitney had comparable statistical power in terms of differentiating LOS between hospitals. For studies with LOS to be used as a secondary end point of economic interest, there is currently considerable interest in the planned analysis being for the percentage of patients suitable for ambulatory surgery (ie, hospital LOS equals 0 or 1 midnight). Our results show that there need not be a loss of statistical power when groups are compared using this binary end point, as compared with either Welch method or Wilcoxon rank sum test.
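Both comparisons are two-line calls with standard tooling; a hedged sketch on synthetic LOS data (the distributions, rates, and 0-or-1-midnight dichotomy are illustrative assumptions):

```python
# Welch t test on mean LOS vs. Fisher exact test on the ambulatory/overnight split.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
los_a = rng.poisson(3.0, 200)      # hospital A lengths of stay, days (synthetic)
los_b = rng.poisson(3.6, 200)      # hospital B

t_stat, t_p = stats.ttest_ind(los_a, los_b, equal_var=False)   # Welch method

table = [[int((los_a <= 1).sum()), int((los_a > 1).sum())],    # <=1 midnight vs. longer
         [int((los_b <= 1).sum()), int((los_b > 1).sum())]]
odds, f_p = stats.fisher_exact(table)
print(round(t_p, 4), round(f_p, 4))
```

Resampling such data many times under null and alternative configurations, as the authors did, turns these per-draw p values into empirical type I and II error rates.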
The Extended HANDS Characterization and Analysis of Metric Biases
NASA Astrophysics Data System (ADS)
Kelecy, T.; Knox, R.; Cognion, R.
The Extended High Accuracy Network Determination System (Extended HANDS) consists of a network of low cost, high accuracy optical telescopes designed to support space surveillance and development of space object characterization technologies. Comprising off-the-shelf components, the telescopes are designed to provide sub arc-second astrometric accuracy. The design and analysis team are in the process of characterizing the system through development of an error allocation tree whose assessment is supported by simulation, data analysis, and calibration tests. The metric calibration process has revealed 1-2 arc-second biases in the right ascension and declination measurements of reference satellite position, and these have been observed to have fairly distinct characteristics that appear to have some dependence on orbit geometry and tracking rates. The work presented here outlines error models developed to aid in development of the system error budget, and examines characteristic errors (biases, time dependence, etc.) that might be present in each of the relevant system elements used in the data collection and processing, including the metric calibration processing. The relevant reference frames are identified, and include the sensor (CCD camera) reference frame, Earth-fixed topocentric frame, topocentric inertial reference frame, and the geocentric inertial reference frame. The errors modeled in each of these reference frames, when mapped into the topocentric inertial measurement frame, reveal how errors might manifest themselves through the calibration process. The error analysis results that are presented use satellite-sensor geometries taken from periods where actual measurements were collected, and reveal how modeled errors manifest themselves over those specific time periods. These results are compared to the real calibration metric data (right ascension and declination residuals), and sources of the bias are hypothesized. In turn, the actual right ascension and declination calibration residuals are also mapped to other relevant reference frames in an attempt to validate the source of the bias errors. These results will serve as the basis for more focused investigation into specific components embedded in the system and system processes that might contain the source of the observed biases.
Digital Holography for in Situ Real-Time Measurement of Plasma-Facing-Component Erosion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, C. E., Jr.; Granstedt, E. M.; Biewer, Theodore M
2014-01-01
In situ, real-time measurement of net plasma-facing-component (PFC) erosion/deposition in a real plasma device is challenging due to the need for good spatial and temporal resolution, sufficient sensitivity, and immunity to fringe-jump errors. The design of a high-sensitivity, potentially high-speed, dual-wavelength CO2-laser digital holography system (nominally immune to fringe jumps) for PFC erosion measurement is discussed.
Keene, Keith L.; Mychaleckyj, Josyf C.; Smith, Shelly G.; Leak, Tennille S.; Perlegas, Peter S.; Langefeld, Carl D.; Herrington, David M.; Freedman, Barry I.; Rich, Stephen S.; Bowden, Donald W.; Sale, Michèle M.
2009-01-01
We previously investigated the estrogen receptor α gene (ESR1) as a positional candidate for type 2 diabetes (T2DM), and found evidence for association between the intron 1-intron 2 region of this gene and type 2 diabetes and/or nephropathy in an African American (AA) population. Our objective was to comprehensively evaluate variants across the entire ESR1 gene for association in AA with T2DM and End Stage Renal Disease (T2DM-ESRD). One hundred fifty SNPs in ESR1, spanning 476 kb, were genotyped in 577 AA individuals with T2DM-ESRD and 596 AA controls. Genotypic association tests for dominant, additive, and recessive models, and haplotypic association, were calculated using a χ2 statistic and corresponding P value. Thirty-one SNPs showed nominal evidence for association (P < 0.05) with T2DM-ESRD in one or more genotypic models. After correcting for multiple tests, promoter SNP rs11964281 (nominal P = 0.000291, adjusted P = 0.0289), and intron 4 SNPs rs1569788 (nominal P = 0.000754, adjusted P = 0.0278) and rs9340969 (nominal P = 0.00109, adjusted P = 0.0467) remained significant at experimentwise error rate (EER) P < 0.05 for the dominant class of tests. Twenty-three of the thirty-one associated SNPs cluster within the intron 4-intron 6 region. Gender stratification revealed nominal evidence for association with 35 SNPs in females (352 cases; 306 controls) and seven SNPs in males (225 cases; 290 controls). We have identified a novel region of the ESR1 gene that may contain important functional polymorphisms in relation to susceptibility to T2DM and/or diabetic nephropathy. PMID:18305958
Aircraft system modeling error and control error
NASA Technical Reports Server (NTRS)
Kulkarni, Nilesh V. (Inventor); Kaneshige, John T. (Inventor); Krishnakumar, Kalmanje S. (Inventor); Burken, John J. (Inventor)
2012-01-01
A method for modeling error-driven adaptive control of an aircraft. Normal aircraft plant dynamics is modeled, using an original plant description in which a controller responds to a tracking error e(k) to drive the component to a normal reference value according to an asymptote curve. Where the system senses that (1) at least one aircraft plant component is experiencing an excursion and (2) the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, neural network (NN) modeling of aircraft plant operation may be changed. However, if (1) is satisfied but the error component is returning toward its reference value according to expected controller characteristics, the NN will continue to model operation of the aircraft plant according to an original description.
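A minimal sketch of this decision logic, with an assumed first-order decay factor and illustrative thresholds, follows.

```python
# Hedged sketch of the two-condition test described above: remodel only when
# the component is (1) in an excursion and (2) off the expected asymptote
# e(k+1) ~ lam * e(k). 'lam' and both tolerances are illustrative assumptions.
def needs_remodeling(e_prev, e_curr, lam=0.9, excursion_tol=0.05, margin=1.1):
    in_excursion = abs(e_curr) > excursion_tol            # condition (1)
    expected = lam * abs(e_prev)                          # nominal decay
    off_asymptote = abs(e_curr) > margin * expected       # condition (2)
    return in_excursion and off_asymptote

print(needs_remodeling(e_prev=0.20, e_curr=0.20))  # True: error has stalled
print(needs_remodeling(e_prev=0.20, e_curr=0.17))  # False: decaying as expected
```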
Trajectory Design Enhancements to Mitigate Risk for the Transiting Exoplanet Survey Satellite (TESS)
NASA Technical Reports Server (NTRS)
Dichmann, Donald; Parker, Joel; Nickel, Craig; Lutz, Stephen
2016-01-01
The Transiting Exoplanet Survey Satellite (TESS) will employ a highly eccentric Earth orbit, in 2:1 lunar resonance, which will be reached with a lunar flyby preceded by 3.5 phasing loops. The TESS mission has limited propellant and several constraints on the science orbit and on the phasing loops. Based on analysis and simulation, we have designed the phasing loops to reduce delta-V (DV) and to mitigate risk due to maneuver execution errors. We have automated the trajectory design process and use distributed processing to generate optimal nominal trajectories, to check constraint satisfaction, and finally to model the effects of maneuver errors to identify trajectories that best meet the mission requirements.
Inverting Image Data For Optical Testing And Alignment
NASA Technical Reports Server (NTRS)
Shao, Michael; Redding, David; Yu, Jeffrey W.; Dumont, Philip J.
1993-01-01
Data from images produced by slightly incorrectly figured concave primary mirror in telescope processed into estimate of spherical aberration of mirror, by use of algorithm finding nonlinear least-squares best fit between actual images and synthetic images produced by multiparameter mathematical model of telescope optical system. Estimated spherical aberration, in turn, converted into estimate of deviation of reflector surface from nominal precise shape. Algorithm devised as part of effort to determine error in surface figure of primary mirror of Hubble space telescope, so corrective lens designed. Modified versions of algorithm also used to find optical errors in other components of telescope or of other optical systems, for purposes of testing, alignment, and/or correction.
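A one-dimensional toy of this inversion, with a quartic pupil phase standing in for spherical aberration, shows the nonlinear least-squares fit of synthetic to observed intensity; it sketches the idea only, not the multiparameter Hubble model.

```python
# Hedged 1-D sketch: recover an assumed aberration coefficient by fitting a
# model image to an "observed" image with nonlinear least squares.
import numpy as np
from scipy.optimize import least_squares

x = np.linspace(-1.0, 1.0, 256)
pupil = (np.abs(x) <= 1.0).astype(float)

def image(a):
    # Focal-plane intensity for a quartic (spherical-like) pupil phase a*x^4.
    field = pupil * np.exp(1j * a * x**4)
    return np.abs(np.fft.fftshift(np.fft.fft(field)))**2

a_true = 1.5                                  # "unknown" figuring error
observed = image(a_true)

fit = least_squares(lambda p: image(p[0]) - observed, x0=[0.0])
print(f"recovered coefficient {fit.x[0]:.3f} (truth {a_true})")
```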
Rankin, R.; Kotter, D.
1994-04-26
An optical voltage reference for providing an alternative to a battery source is described. The optical reference apparatus provides a temperature stable, high precision, isolated voltage reference through the use of optical isolation techniques to eliminate current and impedance coupling errors. Pulse rate frequency modulation is employed to eliminate errors in the optical transmission link while phase-lock feedback is employed to stabilize the frequency to voltage transfer function. 2 figures.
NASA Astrophysics Data System (ADS)
Xu, Xianfeng; Cai, Luzhong; Li, Dailin; Mao, Jieying
2010-04-01
In phase-shifting interferometry (PSI) the reference wave is usually assumed to be an on-axis plane wave. In practice, however, a slight tilt of the reference wave often occurs, and this tilt introduces unexpected errors in the reconstructed object wavefront. Usually the least-squares method with iterations, which is time consuming, is employed to analyze the phase errors caused by the tilt of the reference wave. Here a simple, effective algorithm is suggested to detect and then correct this kind of error. The method uses only simple mathematical operations, avoiding the least-squares equations needed in most previously reported methods. It can be used for generalized phase-shifting interferometry with two or more frames, for both smooth and diffusing objects, and its excellent performance has been verified by computer simulations. The numerical simulations show that the wavefront reconstruction errors can be reduced by 2 orders of magnitude.
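The tilt artifact itself is easy to reproduce. The sketch below runs a standard 4-step PSI reconstruction with a tilted reference and then removes the resulting phase ramp with a least-squares plane fit; note that the paper's algorithm deliberately avoids least squares, so this only illustrates the error being corrected, not the proposed method.

```python
# Hedged sketch: 4-step PSI with a tilted reference; the tilt appears as a
# linear phase ramp, removed here by a plane fit (not the paper's algorithm).
import numpy as np

n = 128
y, x = np.mgrid[0:n, 0:n] / n
phi_obj = 1.2 * np.sin(2*np.pi*x) * np.cos(2*np.pi*y)   # test object phase
tilt = 0.8*x + 0.3*y                                    # reference-wave tilt

deltas = [0.0, np.pi/2, np.pi, 3*np.pi/2]
I = [1.0 + 0.8*np.cos(phi_obj + tilt - d) for d in deltas]
phi = np.arctan2(I[1] - I[3], I[0] - I[2])              # 4-step recovery

A = np.column_stack([x.ravel(), y.ravel(), np.ones(n*n)])
coef, *_ = np.linalg.lstsq(A, phi.ravel(), rcond=None)  # fit the ramp
phi_corr = phi - (A @ coef).reshape(n, n)

rms = (phi_corr - (phi_obj - phi_obj.mean())).std()
print(f"residual RMS after tilt removal: {rms:.4f} rad")
```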
ENDOCRINE DISRUPTORS FROM COMBUSTION AND VEHICULAR EMISSIONS: IDENTIFICATION AND SOURCE NOMINATION
During the last decade, concerns have been raised regarding the possible harmful effects of exposure to certain chemicals that are capable of modulating or disrupting the function of the endocrine system. These chemicals, which are referred to as endocrine disrupting chemicals (E...
Economic impact of fuel properties on turbine powered business aircraft
NASA Technical Reports Server (NTRS)
Powell, F. D.
1984-01-01
The principal objective was to estimate the economic impact on the turbine-powered business aviation fleet of potential changes in the composition and properties of aviation fuel. Secondary objectives included estimating the sensitivity of costs to specific fuel properties and assessing the directions in which further research should be directed. The study was based on the published characteristics of typical and specific modern aircraft in three classes: heavy jet, light jet, and turboprop. Missions of these aircraft were simulated by computer methods for each aircraft for several range and payload combinations, and assumed atmospheric temperatures ranging from nominal to extremely cold. Five fuels were selected for comparison with the reference fuel, nominal Jet A. An overview of the data, the mathematical models, the data reduction and analysis procedure, and the results of the study are given. The direct operating costs of the study fuels are compared with that of the reference fuel in the 1990 time-frame, and the anticipated fleet costs and fuel break-even costs are estimated.
Murphy, K E; Beary, E S; Rearick, M S; Vocke, R D
2000-10-01
Lead (Pb) and cadmium (Cd) have been determined in six new environmental standard reference materials (SRMs) using isotope dilution inductively coupled plasma mass spectrometry (ID ICP-MS). The SRMs are the following: SRM 1944, New York-New Jersey Waterway Sediment, SRMs 2583 and 2584, Trace Elements in Indoor Dust, Nominal 90 mg/kg and 10,000 mg/kg Lead, respectively, SRMs 2586 and 2587, Trace Elements in Soil Containing Lead from Paint, Nominal 500 mg/kg and 3,000 mg/kg Lead, respectively, and SRM 2782, Industrial Sludge. The capabilities of ID ICP-MS for the certification of Pb and Cd in these materials are assessed. Sample preparation and ratio measurement uncertainties have been evaluated. Reproducibility and accuracy of the established procedures are demonstrated by determination of gravimetrically prepared primary standard solutions and by comparison with isotope dilution thermal ionization mass spectrometry (ID TIMS). Material heterogeneity was readily demonstrated to be the dominant source of uncertainty in the certified values.
NASA Technical Reports Server (NTRS)
Binkley, David M.; Verma, Nikhil; Crawford, Robert L.; Brandon, Erik; Jackson, Thomas N.
2004-01-01
Organic strain gauge and other sensors require high-gain, precision dc amplification to process their low-level output signals. Ideally, amplifiers would be fabricated using organic thin-film field-effect transistors (OTFT's) adjacent to the sensors. However, OTFT amplifiers exhibit low gain and high input-referred dc offsets that must be effectively managed. This paper presents a four-stage, cascaded differential OTFT amplifier utilizing switched-capacitor auto-zeroing. Each stage provides a nominal voltage gain of four through a differential pair driving low-impedance active loads, which provide common-mode output voltage control. p-type pentacene OTFT's are used for the amplifier devices and auto-zero switches. Simulations indicate the amplifier provides a nominal voltage gain of 280 V/V and effectively amplifies a 1-mV dc signal in the presence of 500-mV amplifier input-referred dc offset voltages. Future work could include the addition of digital gain calibration and offset correction of residual offsets associated with charge injection imbalance in the differential circuits.
Raw data normalization for a multi source inverse geometry CT system
Baek, Jongduk; De Man, Bruno; Harrison, Daniel; Pelc, Norbert J.
2015-01-01
A multi-source inverse-geometry CT (MS-IGCT) system consists of a small 2D detector array and multiple x-ray sources. During data acquisition, each source is activated sequentially and may have random source intensity fluctuations relative to its respective nominal intensity. While a conventional 3rd generation CT system uses a reference channel to monitor the source intensity fluctuation, each MS-IGCT system source illuminates only a small portion of the entire field-of-view (FOV). Therefore, it is difficult for all sources to illuminate the reference channel, and the projection data computed by standard normalization using flat field data of each source contain errors that can cause significant artifacts. In this work, we present a raw data normalization algorithm to reduce the image artifacts caused by source intensity fluctuation. The proposed method was tested using computer simulations with a uniform water phantom and a Shepp-Logan phantom, and experimental data of an ice-filled PMMA phantom and a rabbit. The effect on image resolution and the robustness to noise were tested using the MTF and the standard deviation of the reconstructed noise image. With the intensity fluctuation and no correction, reconstructed images from simulation and experimental data show high frequency artifacts and ring artifacts, which are removed effectively using the proposed method. It is also observed that the proposed method does not degrade the image resolution and is very robust to the presence of noise. PMID:25837090
NASA Technical Reports Server (NTRS)
Bonnice, W. F.; Motyka, P.; Wagner, E.; Hall, S. R.
1986-01-01
The performance of the orthogonal series generalized likelihood ratio (OSGLR) test in detecting and isolating commercial aircraft control surface and actuator failures is evaluated. A modification to incorporate age-weighting which significantly reduces the sensitivity of the algorithm to modeling errors is presented. The steady-state implementation of the algorithm based on a single linear model valid for a cruise flight condition is tested using a nonlinear aircraft simulation. A number of off-nominal no-failure flight conditions including maneuvers, nonzero flap deflections, different turbulence levels and steady winds were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection and isolation performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope by scheduling on dynamic pressure and flap deflection is examined. Based on this testing, the OSGLR algorithm should be capable of detecting control surface failures that would affect the safe operation of a commercial aircraft. Isolation may be difficult if there are several surfaces which produce similar effects on the aircraft. Extending the algorithm over the entire operating envelope of a commercial aircraft appears feasible.
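The age-weighting idea can be sketched with an exponentially forgetting residual sum: old residuals are discounted so slow modeling errors do not accumulate into false alarms, while a genuine failure bias still drives the statistic across its threshold. The noise level, bias size, and threshold below are assumptions, and this is a generic illustration rather than the OSGLR test itself.

```python
# Hedged sketch of age-weighted residual monitoring (not the OSGLR test).
import numpy as np

rng = np.random.default_rng(1)
rho = 0.97                         # forgetting factor: the "age weight"
r = rng.normal(0.0, 1.0, 400)      # innovations under nominal flight
r[250:] += 1.5                     # injected failure: residual bias

stat = 0.0
for k, rk in enumerate(r):
    stat = rho * stat + rk         # exponentially age-weighted sum
    if abs(stat) > 20.0:           # ~5x its standing deviation under no failure
        print(f"failure declared at sample {k}")
        break
```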
Trajectory specification for high capacity air traffic control
NASA Technical Reports Server (NTRS)
Paielli, Russell A. (Inventor)
2010-01-01
Method and system for analyzing and processing information on one or more aircraft flight paths, using a four-dimensional coordinate system including three Cartesian or equivalent coordinates (x, y, z) and a fourth coordinate δ that corresponds to a distance estimated along a reference flight path to a nearest reference path location corresponding to a present location of the aircraft. Use of the coordinate δ, rather than elapsed time t, avoids coupling of along-track error into aircraft altitude and reduces effects of errors on an aircraft landing site. Along-track, cross-track and/or altitude errors are estimated and compared with a permitted error bounding space surrounding the reference flight path.
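A planar toy version of the δ coordinate can be computed by projecting the aircraft position onto a piecewise-linear reference path: the arc length to the nearest path point gives δ, and the perpendicular offset gives the cross-track error. The path and position below are made-up numbers.

```python
# Hedged sketch: compute delta (along-path arc length to the nearest point)
# and cross-track error for a 2-D piecewise-linear reference path.
import numpy as np

path = np.array([[0., 0.], [10., 0.], [20., 5.], [30., 5.]])  # reference (km)
pos = np.array([14., 3.])                                     # aircraft (km)

best_d, best_delta = np.inf, 0.0
s0 = 0.0
for a, b in zip(path[:-1], path[1:]):
    seg = b - a
    t = np.clip(np.dot(pos - a, seg) / np.dot(seg, seg), 0.0, 1.0)
    d = np.linalg.norm(pos - (a + t * seg))        # distance to this segment
    if d < best_d:
        best_d, best_delta = d, s0 + t * np.linalg.norm(seg)
    s0 += np.linalg.norm(seg)

print(f"delta = {best_delta:.2f} km, cross-track error = {best_d:.2f} km")
```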
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-12
... DEPARTMENT OF VETERANS AFFAIRS [OMB Control No. 2900-0253] Proposed Information Collection.... ADDRESSES: Submit written comments on the collection of information through Federal Docket Management System....gov . Please refer to ``OMB Control No. 2900-0253'' in any correspondence. During the comment period...
Code of Federal Regulations, 2010 CFR
2010-04-01
... 24 Housing and Urban Development 3 2010-04-01 2010-04-01 false Governments. 598.505 Section 598....505 Governments. If more than one State or local government seeks to nominate an urban area under this part, any reference to or requirement of this part applies to all such governments. ...
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 1 2013-01-01 2013-01-01 false Governments. 25.501 Section 25.501 Agriculture Office....501 Governments. If more than one State or local government seeks to nominate an area under this part, any reference to or requirement of this part shall apply to all such governments. ...
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 1 2012-01-01 2012-01-01 false Governments. 25.501 Section 25.501 Agriculture Office....501 Governments. If more than one State or local government seeks to nominate an area under this part, any reference to or requirement of this part shall apply to all such governments. ...
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 1 2010-01-01 2010-01-01 false Governments. 25.501 Section 25.501 Agriculture Office....501 Governments. If more than one State or local government seeks to nominate an area under this part, any reference to or requirement of this part shall apply to all such governments. ...
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 1 2014-01-01 2014-01-01 false Governments. 25.501 Section 25.501 Agriculture Office....501 Governments. If more than one State or local government seeks to nominate an area under this part, any reference to or requirement of this part shall apply to all such governments. ...
Code of Federal Regulations, 2013 CFR
2013-04-01
... 24 Housing and Urban Development 3 2013-04-01 2013-04-01 false Governments. 598.505 Section 598....505 Governments. If more than one State or local government seeks to nominate an urban area under this part, any reference to or requirement of this part applies to all such governments. ...
Code of Federal Regulations, 2012 CFR
2012-04-01
... 24 Housing and Urban Development 3 2012-04-01 2012-04-01 false Governments. 598.505 Section 598....505 Governments. If more than one State or local government seeks to nominate an urban area under this part, any reference to or requirement of this part applies to all such governments. ...
Code of Federal Regulations, 2011 CFR
2011-04-01
... 24 Housing and Urban Development 3 2011-04-01 2010-04-01 true Governments. 598.505 Section 598.505... Governments. If more than one State or local government seeks to nominate an urban area under this part, any reference to or requirement of this part applies to all such governments. ...
Code of Federal Regulations, 2014 CFR
2014-04-01
... 24 Housing and Urban Development 3 2014-04-01 2013-04-01 true Governments. 598.505 Section 598.505... Governments. If more than one State or local government seeks to nominate an urban area under this part, any reference to or requirement of this part applies to all such governments. ...
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 1 2011-01-01 2011-01-01 false Governments. 25.501 Section 25.501 Agriculture Office....501 Governments. If more than one State or local government seeks to nominate an area under this part, any reference to or requirement of this part shall apply to all such governments. ...
DESIGN AND PERFORMANCE OF A LOW FLOW RATE INLET
Several ambient air samplers that have been designated by the U. S. EPA as Federal Reference Methods (FRMs) for measuring particulate matter nominally less than 10 um (PM10) include the use of a particular inlet design that aspirates particulate matter from the atmosphere at 1...
ERIC Educational Resources Information Center
Sherwood, David E.
2010-01-01
According to closed-loop accounts of motor control, movement errors are detected by comparing sensory feedback to an acquired reference state. Differences between the reference state and the movement-produced feedback results in an error signal that serves as a basis for a correction. The main question addressed in the current study was how…
Spatial Lattice Modulation for MIMO Systems
NASA Astrophysics Data System (ADS)
Choi, Jiwook; Nam, Yunseo; Lee, Namyoon
2018-06-01
This paper proposes spatial lattice modulation (SLM), a spatial modulation method for multiple-input multiple-output (MIMO) systems. The key idea of SLM is to jointly exploit spatial, in-phase, and quadrature dimensions to modulate information bits into a multi-dimensional signal set that consists of lattice points. One major finding is that SLM achieves a higher spectral efficiency than the existing spatial modulation and spatial multiplexing methods for the MIMO channel under the constraint of M-ary pulse-amplitude-modulation (PAM) input signaling per dimension. In particular, it is shown that when the SLM signal set is constructed by using dense lattices, a significant signal-to-noise-ratio (SNR) gain, i.e., a nominal coding gain, is attainable compared to the existing methods. In addition, closed-form expressions for both the average mutual information and average symbol-vector-error-probability (ASVEP) of generic SLM are derived under Rayleigh-fading environments. To reduce detection complexity, a low-complexity detection method for SLM, which is referred to as lattice sphere decoding, is developed by exploiting lattice theory. Simulation results verify the accuracy of the conducted analysis and demonstrate that the proposed SLM techniques achieve higher average mutual information and lower ASVEP than do existing methods.
Solving the measurement invariance anchor item problem in item response theory.
Meade, Adam W; Wright, Natalie A
2012-09-01
The efficacy of tests of differential item functioning (measurement invariance) has been well established. It is clear that when properly implemented, these tests can successfully identify differentially functioning (DF) items when they exist. However, an assumption of these analyses is that the metric for different groups is linked using anchor items that are invariant. In practice, however, it is impossible to be certain which items are DF and which are invariant. This problem of anchor items, or referent indicators, has long plagued invariance research, and a multitude of suggested approaches have been put forth. Unfortunately, the relative efficacy of these approaches has not been tested. This study compares 11 variations on 5 qualitatively different approaches from recent literature for selecting optimal anchor items. A large-scale simulation study indicates that for nearly all conditions, an easily implemented 2-stage procedure recently put forth by Lopez Rivas, Stark, and Chernyshenko (2009) provided optimal power while maintaining nominal Type I error. With this approach, appropriate anchor items can be easily and quickly located, resulting in more efficacious invariance tests. Recommendations for invariance testing are illustrated using a pedagogical example of employee responses to an organizational culture measure.
Satellite gravity gradient grids for geophysics
Bouman, Johannes; Ebbing, Jörg; Fuchs, Martin; Sebera, Josef; Lieb, Verena; Szwillus, Wolfgang; Haagmans, Roger; Novak, Pavel
2016-01-01
The Gravity field and steady-state Ocean Circulation Explorer (GOCE) satellite aimed at determining the Earth’s mean gravity field. GOCE delivered gravity gradients containing directional information, which are complicated to use because of their error characteristics and because they are given in a rotating instrument frame indirectly related to the Earth. We compute gravity gradients in grids at 225 km and 255 km altitude above the reference ellipsoid corresponding to the GOCE nominal and lower orbit phases respectively, and find that the grids may contain additional high-frequency content compared with GOCE-based global models. We discuss the gradient sensitivity for crustal depth slices using a 3D lithospheric model of the North-East Atlantic region, which shows that the depth sensitivity differs from gradient to gradient. In addition, the relative signal power for the individual gradient component changes comparing the 225 km and 255 km grids, implying that using all components at different heights reduces parameter uncertainties in geophysical modelling. Furthermore, since gravity gradients contain complementary information to gravity, we foresee the use of the grids in a wide range of applications from lithospheric modelling to studies on dynamic topography, and glacial isostatic adjustment, to bedrock geometry determination under ice sheets. PMID:26864314
Improved ultrasonic standard reference blocks
NASA Technical Reports Server (NTRS)
Eitzen, D. G.; Sushinsky, G. F.; Chwirut, D. J.; Bechtoldt, C. J.; Ruff, A. W.
1976-01-01
A program to improve the quality, reproducibility and reliability of nondestructive testing through the development of improved ASTM-type ultrasonic reference standards is described. Reference blocks of aluminum, steel, and titanium alloys are to be considered. Equipment representing the state-of-the-art in laboratory and field ultrasonic equipment was obtained and evaluated. RF and spectral data on ten sets of ultrasonic reference blocks have been taken as part of a task to quantify the variability in response from nominally identical blocks. Techniques for residual stress, preferred orientation, and micro-structural measurements were refined and are applied to a reference block rejected by the manufacturer during fabrication in order to evaluate the effect of metallurgical condition on block response. New fabrication techniques for reference blocks are discussed and ASTM activities are summarized.
Gillard, Jonathan
2015-12-01
This article re-examines parametric methods for the calculation of time specific reference intervals where there is measurement error present in the time covariate. Previous published work has commonly been based on the standard ordinary least squares approach, weighted where appropriate. In fact, this is an incorrect method when there are measurement errors present, and in this article, we show that the use of this approach may, in certain cases, lead to referral patterns that may vary with different values of the covariate. Thus, it would not be the case that all patients are treated equally; some subjects would be more likely to be referred than others, hence violating the principle of equal treatment required by the International Federation for Clinical Chemistry. We show, by using measurement error models, that reference intervals are produced that satisfy the requirement for equal treatment for all subjects. © The Author(s) 2011.
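The attenuation problem the article addresses shows up in a few lines: ordinary least squares on an error-contaminated time covariate biases the fitted slope toward zero, which shifts the resulting reference limits, while a Deming-type fit with an assumed known error-variance ratio recovers the slope. This is a generic measurement-error sketch, not the article's specific model.

```python
# Hedged sketch: OLS slope attenuation under covariate measurement error,
# corrected by a Deming fit (error-variance ratio assumed known).
import numpy as np

rng = np.random.default_rng(2)
n = 2000
t_true = rng.uniform(0.0, 10.0, n)
y = 1.0 + 0.5 * t_true + rng.normal(0.0, 0.5, n)   # analyte vs true time
t_obs = t_true + rng.normal(0.0, 1.0, n)           # time measured with error

sxx = np.var(t_obs, ddof=1)
syy = np.var(y, ddof=1)
sxy = np.cov(t_obs, y, ddof=1)[0, 1]

b_ols = sxy / sxx                                  # attenuated toward zero
lam = 0.5**2 / 1.0**2                              # Var(y err) / Var(x err)
b_dem = (syy - lam*sxx + np.sqrt((syy - lam*sxx)**2 + 4*lam*sxy**2)) / (2*sxy)
print(f"true slope 0.50, OLS {b_ols:.3f}, Deming {b_dem:.3f}")
```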
Gaia Data Release 1. Astrometry: one billion positions, two million proper motions and parallaxes
NASA Astrophysics Data System (ADS)
Lindegren, L.; Lammers, U.; Bastian, U.; Hernández, J.; Klioner, S.; Hobbs, D.; Bombrun, A.; Michalik, D.; Ramos-Lerate, M.; Butkevich, A.; Comoretto, G.; Joliet, E.; Holl, B.; Hutton, A.; Parsons, P.; Steidelmüller, H.; Abbas, U.; Altmann, M.; Andrei, A.; Anton, S.; Bach, N.; Barache, C.; Becciani, U.; Berthier, J.; Bianchi, L.; Biermann, M.; Bouquillon, S.; Bourda, G.; Brüsemeister, T.; Bucciarelli, B.; Busonero, D.; Carlucci, T.; Castañeda, J.; Charlot, P.; Clotet, M.; Crosta, M.; Davidson, M.; de Felice, F.; Drimmel, R.; Fabricius, C.; Fienga, A.; Figueras, F.; Fraile, E.; Gai, M.; Garralda, N.; Geyer, R.; González-Vidal, J. J.; Guerra, R.; Hambly, N. C.; Hauser, M.; Jordan, S.; Lattanzi, M. G.; Lenhardt, H.; Liao, S.; Löffler, W.; McMillan, P. J.; Mignard, F.; Mora, A.; Morbidelli, R.; Portell, J.; Riva, A.; Sarasso, M.; Serraller, I.; Siddiqui, H.; Smart, R.; Spagna, A.; Stampa, U.; Steele, I.; Taris, F.; Torra, J.; van Reeven, W.; Vecchiato, A.; Zschocke, S.; de Bruijne, J.; Gracia, G.; Raison, F.; Lister, T.; Marchant, J.; Messineo, R.; Soffel, M.; Osorio, J.; de Torres, A.; O'Mullane, W.
2016-11-01
Context. Gaia Data Release 1 (DR1) contains astrometric results for more than 1 billion stars brighter than magnitude 20.7 based on observations collected by the Gaia satellite during the first 14 months of its operational phase. Aims: We give a brief overview of the astrometric content of the data release and of the model assumptions, data processing, and validation of the results. Methods: For stars in common with the Hipparcos and Tycho-2 catalogues, complete astrometric single-star solutions are obtained by incorporating positional information from the earlier catalogues. For other stars only their positions are obtained, essentially by neglecting their proper motions and parallaxes. The results are validated by an analysis of the residuals, through special validation runs, and by comparison with external data. Results: For about two million of the brighter stars (down to magnitude 11.5) we obtain positions, parallaxes, and proper motions to Hipparcos-type precision or better. For these stars, systematic errors depending for example on position and colour are at a level of ± 0.3 milliarcsecond (mas). For the remaining stars we obtain positions at epoch J2015.0 accurate to 10 mas. Positions and proper motions are given in a reference frame that is aligned with the International Celestial Reference Frame (ICRF) to better than 0.1 mas at epoch J2015.0, and non-rotating with respect to ICRF to within 0.03 mas yr-1. The Hipparcos reference frame is found to rotate with respect to the Gaia DR1 frame at a rate of 0.24 mas yr-1. Conclusions: Based on less than a quarter of the nominal mission length and on very provisional and incomplete calibrations, the quality and completeness of the astrometric data in Gaia DR1 are far from what is expected for the final mission products. The present results nevertheless represent a huge improvement in the available fundamental stellar data and practical definition of the optical reference frame.
NASA Astrophysics Data System (ADS)
Solve, S.; Chayramy, R.; Maruyama, M.; Urano, C.; Kaneko, N.-H.; Rüfenacht, A.
2018-04-01
BIPM’s new transportable programmable Josephson voltage standard (PJVS) has been used for an on-site comparison at the National Metrology Institute of Japan (NMIJ) and the National Institute of Advanced Industrial Science and Technology (AIST) (NMIJ/AIST, hereafter called just NMIJ unless otherwise noted). This is the first time that an array of niobium-based Josephson junctions with amorphous niobium-silicon (Nb_xSi_(1-x)) barriers, developed by the National Institute of Standards and Technology (NIST), has been directly compared to an array of niobium nitride (NbN)-based junctions (developed by the NMIJ in collaboration with the Nanoelectronics Research Institute (NeRI), AIST). Nominally identical voltages produced by both systems agreed within 5 parts in 10^12 (0.05 nV at 10 V) with a combined relative uncertainty of 7.9 × 10^-11 (0.79 nV). The low side of the NMIJ apparatus is, by design, referred to the ground potential. An analysis of the systematic errors due to the leakage current to ground was conducted for this ground configuration. The influence of a multi-stage low-pass filter installed at the output measurement leads of the NMIJ primary standard was also investigated. The number of capacitances in parallel in the filter and their insulation resistance have a direct impact on the amplitude of the systematic voltage error introduced by the leakage current, even if the current does not necessarily return to ground. The filtering of the output of the PJVS voltage leads has the positive consequence of protecting the array from external sources of noise. Current noise, when coupled to the array, reduces the width or current range of the quantized voltage steps. The voltage error induced by the leakage current in the filter is an order of magnitude larger than the voltage error in the absence of all filtering, even though the current range of steps is significantly decreased without filtering.
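The scale of the leakage effect can be estimated with the usual voltage-divider argument: with a series lead resistance r ahead of an insulation resistance R_ins, the induced error is roughly V·r/R_ins when R_ins is much larger than r. The component values below are illustrative assumptions, not the values of the BIPM or NMIJ systems.

```python
# Hedged order-of-magnitude sketch of a leakage-induced voltage error.
V = 10.0      # volts at the array
r = 4.0       # ohms, assumed total series lead resistance
for R_ins in (1e9, 1e11, 1e13):          # assumed insulation resistances
    dV_nV = V * r / R_ins * 1e9
    print(f"R_ins = {R_ins:.0e} ohm -> error ~ {dV_nV:.3f} nV")
```

With these assumed numbers, only insulation resistances toward the low end of the range produce errors comparable to the sub-nanovolt agreement reported above, which is why the insulation of the filter capacitors matters.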
Terminal altitude maximization for Mars entry considering uncertainties
NASA Astrophysics Data System (ADS)
Cui, Pingyuan; Zhao, Zeduan; Yu, Zhengshi; Dai, Juan
2018-04-01
Uncertainties present in the Mars atmospheric entry process may cause state deviations from the nominal designed values, which will lead to unexpected performance degradation if the trajectory is designed merely based on the deterministic dynamic model. In this paper, a linear covariance based entry trajectory optimization method is proposed considering the uncertainties present in the initial states and parameters. By extending the elements of the state covariance matrix as augmented states, the statistical behavior of the trajectory is captured to reformulate the performance metrics and path constraints. The optimization problem is solved with the GPOPS-II toolbox in the MATLAB environment. Monte Carlo simulations are also conducted to demonstrate the capability of the proposed method. A primary trade between the nominal deployment altitude and its dispersion can be observed by modulating the weights on the dispersion penalty, and a compromise solution, corresponding to maximizing the 3σ lower bound of the terminal altitude, is achieved.
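The weight-modulation trade can be mimicked with a toy objective mean_h(u) - w·sigma_h(u) over a single design parameter: sweeping w moves the optimum from the highest nominal altitude toward the lowest dispersion, with w = 3 corresponding to maximizing a 3σ lower bound. The altitude and dispersion models below are stand-ins, not the entry dynamics.

```python
# Hedged sketch of the mean-vs-dispersion trade (illustrative models only).
import numpy as np

u = np.linspace(0.0, 1.0, 201)            # abstract design parameter
mean_h = 12.0 - 6.0 * (u - 0.8)**2        # km, assumed nominal altitude
sigma_h = 0.4 + 1.2 * u**2                # km, assumed dispersion growth

for w in (0.0, 1.0, 3.0):                 # w = 3 ~ 3-sigma lower bound
    i = int(np.argmax(mean_h - w * sigma_h))
    print(f"w={w:.0f}: u*={u[i]:.2f}, mean={mean_h[i]:.2f} km, "
          f"3-sigma low={mean_h[i] - 3*sigma_h[i]:.2f} km")
```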
NASA Technical Reports Server (NTRS)
Alag, Gurbux S.; Gilyard, Glenn B.
1990-01-01
To develop advanced control systems for optimizing aircraft engine performance, unmeasurable output variables must be estimated. The estimation has to be done in an uncertain environment and be adaptable to varying degrees of modeling errors and other variations in engine behavior over its operational life cycle. This paper presents an approach to estimating unmeasured output variables by explicitly modeling the effects of off-nominal engine behavior as biases on the measurable output variables. A state variable model accommodating off-nominal behavior is developed for the engine, and Kalman filter concepts are used to estimate the required variables. Results are presented from nonlinear engine simulation studies as well as the application of the estimation algorithm to actual flight data. The formulation presented has a wide range of application since it is not restricted or tailored to the particular application described.
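The bias-accommodation idea can be sketched with a scalar plant whose measured output carries a slowly varying bias: augmenting the filter state with a random-walk bias lets a standard Kalman filter estimate the off-nominal offset jointly with the state. The plant model and noise levels below are assumptions, not the engine model.

```python
# Hedged sketch: Kalman filter with an augmented random-walk output bias.
import numpy as np

rng = np.random.default_rng(3)
a, q, qb, rv = 0.95, 0.01, 1e-4, 0.04      # plant pole and noise variances
F = np.array([[a, 0.0], [0.0, 1.0]])       # state = [x, bias]
H = np.array([[1.0, 1.0]])                 # measurement sees x + bias
Q = np.diag([q, qb])
R = np.array([[rv]])

x_true, b_true = 0.0, 0.5                  # off-nominal output bias
xh, P = np.zeros(2), np.eye(2)
for _ in range(300):
    x_true = a * x_true + rng.normal(0.0, np.sqrt(q))
    z = x_true + b_true + rng.normal(0.0, np.sqrt(rv))
    xh = F @ xh                            # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                    # update
    K = P @ H.T @ np.linalg.inv(S)
    xh = xh + K @ (np.array([z]) - H @ xh)
    P = (np.eye(2) - K @ H) @ P

print(f"estimated bias {xh[1]:.3f} (truth {b_true})")
```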
NASA Astrophysics Data System (ADS)
Meng, Bowen; Xing, Lei; Han, Bin; Koong, Albert; Chang, Daniel; Cheng, Jason; Li, Ruijiang
2013-11-01
Non-coplanar beams are important for treatment of both cranial and noncranial tumors. Treatment verification of such beams with couch rotation/kicks, however, is challenging, particularly for the application of cone beam CT (CBCT). In this situation, only limited and unconventional imaging angles are feasible to avoid collision between the gantry, couch, patient, and on-board imaging system. The purpose of this work is to develop a CBCT verification strategy for patients undergoing non-coplanar radiation therapy. We propose an image reconstruction scheme that integrates a prior image constrained compressed sensing (PICCS) technique with image registration. Planning CT or CBCT acquired at the neutral position is rotated and translated according to the nominal couch rotation/translation to serve as the initial prior image. Here, the nominal couch movement is chosen to have a rotational error of 5° and translational error of 8 mm from the ground truth in one or more axes or directions. The proposed reconstruction scheme alternates between two major steps. First, an image is reconstructed using the PICCS technique implemented with total-variation minimization and simultaneous algebraic reconstruction. Second, the rotational/translational setup errors are corrected and the prior image is updated by applying rigid image registration between the reconstructed image and the previous prior image. The PICCS algorithm and rigid image registration are alternated iteratively until the registration results fall below a predetermined threshold. The proposed reconstruction algorithm is evaluated with an anthropomorphic digital phantom and a physical head phantom. The proposed algorithm provides useful volumetric images for patient setup using projections with an angular range as small as 60°. It reduced the translational setup errors from 8 mm to generally <1 mm and the rotational setup errors from 5° to <1°. Compared with the PICCS algorithm alone, the integration of rigid registration significantly improved the reconstructed image quality, with a reduction of typically 2- to 3-fold (up to 100-fold) in root mean square image error. The proposed algorithm provides a remedy for the problem of non-coplanar CBCT reconstruction from a limited angle of projections by combining the PICCS technique and rigid image registration in an iterative framework. In this proof of concept study, non-coplanar beams with couch rotations of 45° can be effectively verified with the CBCT technique.
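The alternating structure can be demonstrated on a 2-D toy with scikit-image: a limited-angle SART reconstruction seeded by a mis-shifted prior, followed by phase-correlation registration to update the prior, repeated a few times. This sketch omits the PICCS total-variation term and handles translation only, so it illustrates the alternation rather than the authors' reconstruction.

```python
# Hedged 2-D toy of the alternating recon/registration loop (no TV term,
# translation-only registration; not the paper's PICCS implementation).
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.data import shepp_logan_phantom
from skimage.registration import phase_cross_correlation
from skimage.transform import iradon_sart, radon, resize

truth = resize(shepp_logan_phantom(), (128, 128))
theta = np.linspace(0.0, 60.0, 60, endpoint=False)   # limited 60-degree arc
sino = radon(truth, theta=theta)

prior = nd_shift(truth, (4.0, -3.0))                 # prior with setup error
for _ in range(3):
    recon = iradon_sart(sino, theta=theta, image=prior.copy())
    offset, _, _ = phase_cross_correlation(recon, prior, upsample_factor=10)
    prior = nd_shift(prior, offset)                  # registration update

print("last registration update (pixels):", np.round(offset, 2))
```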
Avila, F J; Pensado, A; Esteva, C
1996-05-01
To evaluate the accuracy of bibliographic references in REVISTA ESPANOLA DE ANESTESIOLOGIA Y REANIMACION (REDAR) and compare it with other Spanish and international journals. One hundred references were selected at random from those published in REDAR during 1994. A citation was considered correct if there were no differences between it and the original article in any of 6 standard citation elements, and if it complied with REDAR citation style. A citation was considered incorrect if there were in fact differences or if REDAR style was not followed. Errors that interfered with direct access to the original were considered serious. The omission of the first author was also considered serious. Some type of error was detected in 53.9% of the references. Twelve contained a serious error, which on 5 occasions impeded finding the original article and on 6 occasions made direct access difficult. The first author was missing in 1 citation. Errors were found, in order of decreasing frequency, in authors, article titles, journal title, volume, pages and year. A single error was found in 28 citations, 2 were found in 12, 3 were found in 2 and more than 3 were found in 1. REDAR's rate of error in references is comparable to the rates of other Spanish journals, but it is nearly double that of international journals in anesthesiology with higher impact factors (Anesthesiology, Canadian Journal of Anaesthesia). An effort must be made by authors and editors to remedy the situation.
Code of Federal Regulations, 2010 CFR
2010-01-01
... prior to March 1, 2003. Unless otherwise specified, references after that date mean the Director of the... Service forms by one whose remuneration, if any, is nominal and who does not hold himself out as qualified... Office means Executive Office for Immigration Review. (o) The terms director or district director prior...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-31
... DEPARTMENT OF VETERANS AFFAIRS [OMB Control No. 2900-0253] Proposed Information Collection (Non... Docket Management System (FDMS) at www.Regulations.gov or to Nancy J. Kessinger, Veterans Benefits... [email protected] . Please refer to ``OMB Control No. 2900-0253'' in any correspondence. During the...
40 CFR 86.419-2006 - Engine displacement, motorcycle classes.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 19 2014-07-01 2014-07-01 false Engine displacement, motorcycle... displacement, motorcycle classes. (a)(1) Engine displacement shall be calculated using nominal engine values... reference in § 86.1). (2) For rotary engines, displacement means the maximum volume of a combustion chamber...
40 CFR 86.419-2006 - Engine displacement, motorcycle classes.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 19 2012-07-01 2012-07-01 false Engine displacement, motorcycle... displacement, motorcycle classes. (a)(1) Engine displacement shall be calculated using nominal engine values... reference in § 86.1). (2) For rotary engines, displacement means the maximum volume of a combustion chamber...
40 CFR 86.419-2006 - Engine displacement, motorcycle classes.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 19 2013-07-01 2013-07-01 false Engine displacement, motorcycle... displacement, motorcycle classes. (a)(1) Engine displacement shall be calculated using nominal engine values... reference in § 86.1). (2) For rotary engines, displacement means the maximum volume of a combustion chamber...
40 CFR 86.419-2006 - Engine displacement, motorcycle classes.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 18 2010-07-01 2010-07-01 false Engine displacement, motorcycle... displacement, motorcycle classes. (a)(1) Engine displacement shall be calculated using nominal engine values... reference in § 86.1). (2) For rotary engines, displacement means the maximum volume of a combustion chamber...
40 CFR 86.419-2006 - Engine displacement, motorcycle classes.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 18 2011-07-01 2011-07-01 false Engine displacement, motorcycle... displacement, motorcycle classes. (a)(1) Engine displacement shall be calculated using nominal engine values... reference in § 86.1). (2) For rotary engines, displacement means the maximum volume of a combustion chamber...
Environmental Correlates of Individual Differences in Language Acquisition.
ERIC Educational Resources Information Center
Furrow, David; Nelson, Katherine
1984-01-01
Reports on a study of mothers' uses of nouns and pronouns and their references to objects and persons as environmental variables which might relate to children's nominal preferences. Findings suggest that environmental factors do contribute to stylistic differences in language acquisition and that the communicative functions of language are an…
Acea Nebril, B; Gómez Freijoso, C
1997-03-01
To determine the accuracy of bibliographic citations in Revista Española de Enfermedades Digestivas (REED) and compare it with other Spanish and international journals. We reviewed all 1995 volumes of the REED and randomly selected 100 references from these volumes. Nine citations of non-journal articles were excluded and the remaining 91 citations were carefully scrutinized. Each original article was compared for author's name, title of article, name of journal, volume number, year of publication and pages. Some type of error was detected in 61.6% of the references, and on 3 occasions (3.3%) errors impeded finding the original article. Errors were found in authors (37.3%), article title (16.4%), pages (6.6%), journal title (4.4%), volume (2.2%) and year (1%). A single error was found in 42 citations, 2 were found in 13 and 3 were found in 1. REED's rate of error in references is comparable to the rates of other Spanish and international journals. Authors should exercise more care in preparing bibliographies and should invest more effort in verification of quoted references.
Azin, Arash; Saleh, Fady; Cleghorn, Michelle; Yuen, Andrew; Jackson, Timothy; Okrainec, Allan; Quereshy, Fayez A
2017-03-01
Colonoscopy for colorectal cancer (CRC) has a localization error rate as high as 21 %. Such errors can have substantial clinical consequences, particularly in laparoscopic surgery. The primary objective of this study was to compare accuracy of tumor localization at initial endoscopy performed by either the operating surgeon or non-operating referring endoscopist. All patients who underwent surgical resection for CRC at a large tertiary academic hospital between January 2006 and August 2014 were identified. The exposure of interest was the initial endoscopist: (1) surgeon who also performed the definitive operation (operating surgeon group); and (2) referring gastroenterologist or general surgeon (referring endoscopist group). The outcome measure was localization error, defined as a difference in at least one anatomic segment between initial endoscopy and final operative location. Multivariate logistic regression was used to explore the association between localization error rate and the initial endoscopist. A total of 557 patients were included in the study; 81 patients in the operating surgeon cohort and 476 patients in the referring endoscopist cohort. Initial diagnostic colonoscopy performed by the operating surgeon compared to referring endoscopist demonstrated statistically significant lower intraoperative localization error rate (1.2 vs. 9.0 %, P = 0.016); shorter mean time from endoscopy to surgery (52.3 vs. 76.4 days, P = 0.015); higher tattoo localization rate (32.1 vs. 21.0 %, P = 0.027); and lower preoperative repeat endoscopy rate (8.6 vs. 40.8 %, P < 0.001). Initial endoscopy performed by the operating surgeon was protective against localization error on both univariate analysis, OR 7.94 (95 % CI 1.08-58.52; P = 0.016), and multivariate analysis, OR 7.97 (95 % CI 1.07-59.38; P = 0.043). This study demonstrates that diagnostic colonoscopies performed by an operating surgeon are independently associated with a lower localization error rate. Further research exploring the factors influencing localization accuracy and why operating surgeons have lower error rates relative to non-operating endoscopists is necessary to understand differences in care.
Ramirez, Jorge L.; Birindelli, Jose L.; Carvalho, Daniel C.; Affonso, Paulo R. A. M.; Venere, Paulo C.; Ortega, Hernán; Carrillo-Avila, Mauricio; Rodríguez-Pulido, José A.; Galetti, Pedro M.
2017-01-01
Molecular studies have improved our knowledge on the neotropical ichthyofauna. DNA barcoding has successfully been used in fish species identification and in detecting cryptic diversity. Megaleporinus (Anostomidae) is a recently described freshwater fish genus within which taxonomic uncertainties remain. Here we assessed all nominal species of this genus using a DNA barcode approach (Cytochrome Oxidase subunit I) with a broad sampling to generate a reference library, characterize new molecular lineages, and test the hypothesis that some of the nominal species represent species complexes. The analyses identified 16 (ABGD and BIN) to 18 (ABGD, GMYC, and PTP) different molecular operational taxonomic units (MOTUs) within the 10 studied nominal species, indicating cryptic biodiversity and potential candidate species. Only Megaleporinus brinco, Megaleporinus garmani, and Megaleporinus elongatus showed correspondence between nominal species and MOTUs. Within six nominal species, a subdivision in two MOTUs was found, while Megaleporinus obtusidens was divided in three MOTUs, suggesting that DNA barcode is a very useful approach to identify the molecular lineages of Megaleporinus, even in the case of recent divergence (< 0.5 Ma). Our results thus provided molecular findings that can be used along with morphological traits to better define each species, including candidate new species. This is the most complete analysis of DNA barcode in this recently described genus, and considering its economic value, a precise species identification is quite desirable and fundamental for conservation of the whole biodiversity of this fish. PMID:29075287
NASA Astrophysics Data System (ADS)
Rose, Michael Benjamin
A novel trajectory and attitude control and navigation analysis tool for powered ascent is developed. The tool is capable of rapid trade-space analysis and is designed to ultimately reduce turnaround time for launch vehicle design, mission planning, and redesign work. It is streamlined to quickly determine trajectory and attitude control dispersions, propellant dispersions, orbit insertion dispersions, and navigation errors and their sensitivities to sensor errors, actuator execution uncertainties, and random disturbances. The tool is developed by applying both Monte Carlo and linear covariance analysis techniques to a closed-loop, launch vehicle guidance, navigation, and control (GN&C) system. The nonlinear dynamics and flight GN&C software models of a closed-loop, six-degree-of-freedom (6-DOF), Monte Carlo simulation are formulated and developed. The nominal reference trajectory (NRT) for the proposed lunar ascent trajectory is defined and generated. The Monte Carlo truth models and GN&C algorithms are linearized about the NRT, the linear covariance equations are formulated, and the linear covariance simulation is developed. The performance of the launch vehicle GN&C system is evaluated using both Monte Carlo and linear covariance techniques and their trajectory and attitude control dispersion, propellant dispersion, orbit insertion dispersion, and navigation error results are validated and compared. Statistical results from linear covariance analysis are generally within 10% of Monte Carlo results, and in most cases the differences are less than 5%. This is an excellent result given the many complex nonlinearities that are embedded in the ascent GN&C problem. Moreover, the real value of this tool lies in its speed, where the linear covariance simulation is 1036.62 times faster than the Monte Carlo simulation. Although the application and results presented are for a lunar, single-stage-to-orbit (SSTO), ascent vehicle, the tools, techniques, and mathematical formulations that are discussed are applicable to ascent on Earth or other planets as well as other rocket-powered systems such as sounding rockets and ballistic missiles.
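The quoted speed ratio reflects a structural difference: linear covariance propagates one matrix recursion, P(k+1) = F P F^T + Q, where Monte Carlo propagates thousands of sampled trajectories. The toy below contrasts the two on a small linear system with illustrative dynamics and noise, far simpler than the ascent GN&C models.

```python
# Hedged sketch: one covariance recursion vs. many Monte Carlo samples.
import numpy as np

rng = np.random.default_rng(4)
F = np.array([[1.0, 0.1], [0.0, 1.0]])    # position/velocity transition
Q = np.diag([0.0, 1e-4])                  # assumed velocity disturbance

P = np.zeros((2, 2))                      # linear covariance: single pass
for _ in range(600):
    P = F @ P @ F.T + Q

X = np.zeros((2, 5000))                   # Monte Carlo: 5000 sampled runs
for _ in range(600):
    X = F @ X
    X[1] += rng.normal(0.0, 1e-2, 5000)
print(f"position sigma: LinCov {np.sqrt(P[0, 0]):.3f}, MC {X[0].std():.3f}")
```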
An extended sequential goodness-of-fit multiple testing method for discrete data.
Castro-Conde, Irene; Döhler, Sebastian; de Uña-Álvarez, Jacobo
2017-10-01
The sequential goodness-of-fit (SGoF) multiple testing method has recently been proposed as an alternative to the familywise error rate- and the false discovery rate-controlling procedures in high-dimensional problems. For discrete data, the SGoF method may be very conservative. In this paper, we introduce an alternative SGoF-type procedure that takes into account the discreteness of the test statistics. Like the original SGoF, our new method provides weak control of the false discovery rate/familywise error rate but attains false discovery rate levels closer to the desired nominal level, and thus it is more powerful. We study the performance of this method in a simulation study and illustrate its application to a real pharmacovigilance data set.
High coherence plane breaking packaging for superconducting qubits.
Bronn, Nicholas T; Adiga, Vivekananda P; Olivadese, Salvatore B; Wu, Xian; Chow, Jerry M; Pappas, David P
2018-04-01
We demonstrate a pogo pin package for a superconducting quantum processor specifically designed with a nontrivial layout topology (e.g., a center qubit that cannot be accessed from the sides of the chip). Two experiments on two nominally identical superconducting quantum processors in pogo packages, which use commercially available parts and require modest machining tolerances, are performed at low temperature (10 mK) in a dilution refrigerator and both found to behave comparably to processors in standard planar packages with wirebonds where control and readout signals come in from the edges. Single- and two-qubit gate errors are also characterized via randomized benchmarking, exhibiting similar error rates as in standard packages, opening the possibility of integrating pogo pin packaging with extensible qubit architectures.
Evaluation of a load cell model for dynamic calibration of the rotor systems research aircraft
NASA Technical Reports Server (NTRS)
Duval, R. W.; Bahrami, H.; Wellman, B.
1985-01-01
The Rotor Systems Research Aircraft uses load cells to isolate the rotor/transmission system from the fuselage. An analytical model of the relationship between applied rotor loads and the resulting load cell measurements is derived by applying a force-and-moment balance to the isolated rotor/transmission system. The model is then used to estimate the applied loads from measured load cell data, as obtained from a ground-based shake test. Using nominal design values for the parameters, the estimation errors, for the case of lateral forcing, were shown to be on the order of the sensor measurement noise in all but the roll axis. An unmodeled external load appears to be the source of the error in this axis.
Measuring Compositions in Organic Depth Profiling: Results from a VAMAS Interlaboratory Study.
Shard, Alexander G; Havelund, Rasmus; Spencer, Steve J; Gilmore, Ian S; Alexander, Morgan R; Angerer, Tina B; Aoyagi, Satoka; Barnes, Jean-Paul; Benayad, Anass; Bernasik, Andrzej; Ceccone, Giacomo; Counsell, Jonathan D P; Deeks, Christopher; Fletcher, John S; Graham, Daniel J; Heuser, Christian; Lee, Tae Geol; Marie, Camille; Marzec, Mateusz M; Mishra, Gautam; Rading, Derk; Renault, Olivier; Scurr, David J; Shon, Hyun Kyong; Spampinato, Valentina; Tian, Hua; Wang, Fuyi; Winograd, Nicholas; Wu, Kui; Wucher, Andreas; Zhou, Yufan; Zhu, Zihua; Cristaudo, Vanina; Poleunis, Claude
2015-08-20
We report the results of a VAMAS (Versailles Project on Advanced Materials and Standards) interlaboratory study on the measurement of composition in organic depth profiling. Layered samples with known binary compositions of Irganox 1010 and either Irganox 1098 or Fmoc-pentafluoro-l-phenylalanine in each layer were manufactured in a single batch and distributed to more than 20 participating laboratories. The samples were analyzed using argon cluster ion sputtering and either X-ray photoelectron spectroscopy (XPS) or time-of-flight secondary ion mass spectrometry (ToF-SIMS) to generate depth profiles. Participants were asked to estimate the volume fractions in two of the layers and were provided with the compositions of all other layers. Participants using XPS provided volume fractions within 0.03 of the nominal values. Participants using ToF-SIMS either made no attempt, or used various methods that gave results ranging in error from 0.02 to over 0.10 in volume fraction, the latter representing a 50% relative error for a nominal volume fraction of 0.2. Error was predominantly caused by inadequacy in the ability to compensate for primary ion intensity variations and the matrix effect in SIMS. Matrix effects in these materials appear to be more pronounced as the number of atoms in both the primary analytical ion and the secondary ion increase. Using the participants' data we show that organic SIMS matrix effects can be measured and are remarkably consistent between instruments. We provide recommendations for identifying and compensating for matrix effects. Finally, we demonstrate, using a simple normalization method, that virtually all ToF-SIMS participants could have obtained estimates of volume fraction that were at least as accurate and consistent as XPS.
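One normalization of the simple kind referenced in the closing sentence scales each species' characteristic ion intensity by its pure-layer reference intensity and renormalizes the results to sum to one. The intensities below are made-up, and the study's exact procedure may differ.

```python
# Hedged sketch: pure-layer-referenced normalization to volume fractions.
I_pure = {"Irganox1010": 4.0e4, "Irganox1098": 2.5e4}  # counts, pure layers
I_mix = {"Irganox1010": 2.9e4, "Irganox1098": 0.7e4}   # counts, mixed layer

scaled = {k: I_mix[k] / I_pure[k] for k in I_mix}      # matrix-naive scaling
total = sum(scaled.values())
fractions = {k: round(v / total, 3) for k, v in scaled.items()}
print(fractions)   # approximate volume fractions; matrix effects ignored
```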
47 CFR 87.145 - Acceptability of transmitters for licensing.
Code of Federal Regulations, 2014 CFR
2014-10-01
... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...
47 CFR 87.145 - Acceptability of transmitters for licensing.
Code of Federal Regulations, 2013 CFR
2013-10-01
... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...
47 CFR 87.145 - Acceptability of transmitters for licensing.
Code of Federal Regulations, 2012 CFR
2012-10-01
... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...
47 CFR 87.145 - Acceptability of transmitters for licensing.
Code of Federal Regulations, 2011 CFR
2011-10-01
... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...
Formulation of a strategy for monitoring control integrity in critical digital control systems
NASA Technical Reports Server (NTRS)
Belcastro, Celeste M.; Fischl, Robert; Kam, Moshe
1991-01-01
Advanced aircraft will require flight critical computer systems for stability augmentation as well as guidance and control that must perform reliably in adverse, as well as nominal, operating environments. Digital system upset is a functional error mode that can occur in electromagnetically harsh environments, involves no component damage, can occur simultaneously in all channels of a redundant control computer, and is software dependent. A strategy is presented for dynamic upset detection to be used in the evaluation of critical digital controllers during the design and/or validation phases of development. Critical controllers must operate in adverse environments that result from disturbances caused by electromagnetic sources such as lightning, high intensity radiated fields (HIRF), and nuclear electromagnetic pulses (NEMP). The upset detection strategy presented provides dynamic monitoring of a given control computer for degraded functional integrity that can result from redundancy management errors and control command calculation errors that could occur in an electromagnetically harsh operating environment. The use of Kalman filtering, data fusion, and decision theory in monitoring a given digital controller for control calculation errors, redundancy management errors, and control effectiveness is discussed.
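One standard way to realize the Kalman-filter element of such a monitoring scheme is to watch the filter innovation (the gap between each measurement and its prediction) and flag samples whose normalized squared innovation exceeds a decision threshold. The scalar model, gains, injected bias, and threshold below are illustrative assumptions, not the paper's actual monitor.

```python
import numpy as np

# Innovation-based monitoring sketch for a scalar plant.
a, h, q, r = 0.95, 1.0, 1e-4, 1e-2   # state transition, output map, noise vars
x_hat, p = 0.0, 1.0
threshold = 9.0                       # ~3-sigma bound on normalized innovation^2

rng = np.random.default_rng(0)
for k in range(100):
    y = rng.normal(scale=np.sqrt(r))  # measurement of the nominal (zero) state
    if k == 60:
        y += 0.5                      # injected "upset" bias at step 60
    # Kalman predict/update
    x_pred = a * x_hat
    p_pred = a * p * a + q
    s = h * p_pred * h + r            # innovation variance
    nu = y - h * x_pred               # innovation
    if nu**2 / s > threshold:
        print(f"possible upset flagged at step {k}")
    k_gain = p_pred * h / s
    x_hat = x_pred + k_gain * nu
    p = (1 - k_gain * h) * p_pred
```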
Effects of random tooth profile errors on the dynamic behaviors of planetary gears
NASA Astrophysics Data System (ADS)
Xun, Chao; Long, Xinhua; Hua, Hongxing
2018-02-01
In this paper, a nonlinear random model is built to describe the dynamics of planetary gear trains (PGTs), in which the time-varying mesh stiffness, tooth profile modification (TPM), tooth contact loss, and random tooth profile error are considered. A stochastic method based on the method of multiple scales (MMS) is extended to analyze the statistical properties of the dynamic performance of PGTs. By the proposed multiple-scales based stochastic method, the distributions of the dynamic transmission errors (DTEs) are investigated, and the lower and upper bounds are determined based on the 3σ principle. The Monte Carlo method is employed to verify the proposed method. Results indicate that the proposed method can determine the distribution of the DTE of PGTs with high efficiency and provides a link between manufacturing precision and the dynamic response. In addition, the effects of tooth profile modification on the distributions of vibration amplitudes and the probability of tooth contact loss with different manufacturing tooth profile errors are studied. The results show that the manufacturing precision affects the distribution of dynamic transmission errors dramatically and that appropriate TPMs help decrease both the nominal value and the deviation of the vibration amplitudes.
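The 3σ bounds can be sanity-checked by brute force in the way the paper's Monte Carlo verification does: draw many random profile errors, push them through the response model, and compare the empirical spread against mean ± 3σ. The quadratic surrogate below stands in for the actual nonlinear PGT dynamics, and all parameters are invented for illustration.

```python
import numpy as np

# Monte Carlo check of 3-sigma bounds on a response driven by a random
# tooth profile error (surrogate model; values are illustrative).
rng = np.random.default_rng(1)
sigma_e = 5e-6                           # rms tooth profile error, m (assumed)
e = rng.normal(0.0, sigma_e, 100_000)    # random profile errors
dte = 2.0e-6 + 0.8 * e + 40.0 * e**2     # hypothetical DTE response surface

mean, std = dte.mean(), dte.std()
lower, upper = mean - 3 * std, mean + 3 * std
coverage = np.mean((dte >= lower) & (dte <= upper))
print(f"3-sigma bounds: [{lower:.3e}, {upper:.3e}], coverage = {coverage:.4f}")
```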
Computation of misalignment and primary mirror astigmatism figure error of two-mirror telescopes
NASA Astrophysics Data System (ADS)
Gu, Zhiyuan; Wang, Yang; Ju, Guohao; Yan, Changxiang
2018-01-01
At present, active optics usually uses computation models based on numerical methods to correct misalignments and figure errors. These methods can hardly lead to any insight into the aberration field dependencies that arise in the presence of the misalignments. An analytical alignment model based on third-order nodal aberration theory is presented for this problem, which can be utilized to compute the primary mirror astigmatic figure error and misalignments for two-mirror telescopes. Alignment simulations are conducted for an R-C telescope based on this analytical alignment model. It is shown that in the absence of wavefront measurement errors, wavefront measurements at only two field points are enough, and the correction process can be completed with only one alignment action. In the presence of wavefront measurement errors, increasing the number of field points for wavefront measurements can enhance the robustness of the alignment model. Monte Carlo simulation shows that, when -2 mm ≤ linear misalignment ≤ 2 mm, -0.1 deg ≤ angular misalignment ≤ 0.1 deg, and -0.2 λ ≤ astigmatism figure error (expressed as fringe Zernike coefficients C5 / C6, λ = 632.8 nm) ≤ 0.2 λ, the misaligned systems can be corrected close to the nominal state in the absence of wavefront testing error. In addition, the root mean square deviation of the RMS wavefront error of all the misaligned samples after correction is linearly related to the wavefront testing error.
Statistics of the radiated field of a space-to-earth microwave power transfer system
NASA Technical Reports Server (NTRS)
Stevens, G. H.; Leininger, G.
1976-01-01
Statistics such as the average power density pattern, the variance of the power density pattern, and the variance of the beam pointing error are related to hardware parameters such as transmitter rms phase error and rms amplitude error. A limitation on the spectral width of the phase reference for phase control was also established. A 1 km diameter transmitter appears feasible provided the total rms insertion phase errors of the phase control modules do not exceed 10 deg, amplitude errors do not exceed 10% rms, and the phase reference spectral width does not exceed approximately 3 kHz. Under these conditions the expected radiation pattern is virtually the same as the error-free pattern, and the rms beam pointing error would be insignificant (approximately 10 meters).
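The headline numbers can be rationalized with the classic Ruze-type relation, in which random phase errors of rms σ (in radians) reduce the expected on-axis power by a factor of roughly exp(-σ²). This is a textbook approximation offered for intuition, not the paper's full statistical model.

```python
import numpy as np

# Ruze-type estimate of on-axis power loss from random phase errors:
# expected relative power ~ exp(-sigma_phi^2), sigma_phi in radians.
sigma_deg = 10.0
sigma_rad = np.deg2rad(sigma_deg)
relative_power = np.exp(-sigma_rad**2)
print(f"{sigma_deg} deg rms phase error -> {relative_power:.3f} of error-free power")
# ~0.970, i.e. only ~3% loss, consistent with the abstract's conclusion that
# the pattern is virtually the same as the error-free one.
```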
Error assessment of local tie vectors in space geodesy
NASA Astrophysics Data System (ADS)
Falkenberg, Jana; Heinkelmann, Robert; Schuh, Harald
2014-05-01
For the computation of the ITRF, the data of the geometric space-geodetic techniques at co-location sites are combined. The combination increases the redundancy and offers the possibility to utilize the strengths of each technique while mitigating their weaknesses. To enable the combination of co-located techniques, each technique needs to have a well-defined geometric reference point. The linking of the geometric reference points enables the combination of the technique-specific coordinates into a multi-technique site coordinate. The vectors between these reference points are called "local ties". Local ties are usually realized by local surveys of the distances and/or angles between the reference points. Identified temporal variations of the reference points are considered in the local tie determination only indirectly by assuming a mean position. Finally, the local ties measured in the local surveying network are transformed into the ITRF, the global geocentric equatorial coordinate system of the space-geodetic techniques. The current IERS procedure for the combination of the space-geodetic techniques includes the local tie vectors with an error floor of three millimeters plus a distance-dependent component. This error floor, however, significantly underestimates the real accuracy of local tie determination. To fulfill the GGOS goals of 1 mm position and 0.1 mm/yr velocity accuracy, a local tie accuracy at the sub-mm level will be mandatory, which is currently not achievable. To assess the local tie effects on ITRF computations, the error sources will be investigated so that they can be realistically estimated and considered. Hence, a reasonable estimate of all the included errors of the various local ties is needed. An appropriate estimate could also improve the separation of local tie errors from technique-specific error contributions to uncertainties and thus help assess the accuracy of the space-geodetic techniques. Our investigations concern the simulation of the error contribution of each component of the local tie definition and determination. A closer look into the models of reference point definition, of accessibility, of measurement, and of transformation is necessary to properly model the error of the local tie. The effect of temporal variations on the local ties will be studied as well. The transformation of the local survey into the ITRF can be assumed to be the largest error contributor, in particular the orientation of the local surveying network to the ITRF.
Goldmann Tonometer Prism with an Optimized Error Correcting Applanation Surface.
McCafferty, Sean; Lim, Garrett; Duncan, William; Enikov, Eniko; Schwiegerling, Jim
2016-09-01
We evaluate solutions for an applanating surface modification to the Goldmann tonometer prism, which substantially negates the errors due to patient variability in biomechanics. A modified Goldmann or correcting applanation tonometry surface (CATS) prism is presented, which was optimized to minimize the intraocular pressure (IOP) error due to corneal thickness, stiffness, curvature, and tear film. Mathematical modeling with finite element analysis (FEA) and manometric IOP-referenced cadaver eyes were used to optimize and validate the design. Mathematical modeling of the optimized CATS prism indicates an approximate 50% reduction in each of the corneal biomechanical and tear film errors. Manometric IOP-referenced pressure in cadaveric eyes demonstrates substantial equivalence to Goldmann applanation tonometry (GAT) in nominal eyes with the CATS prism, as predicted by modeling theory. A CATS-modified Goldmann prism is theoretically able to significantly improve the accuracy of IOP measurement without changing the Goldmann measurement technique or its interpretation. Clinical validation is needed, but the analysis indicates a reduction in central corneal thickness (CCT) error alone to less than ±2 mm Hg using the CATS prism in 100% of a standard population, compared with only 54% less than ±2 mm Hg error with the present Goldmann prism. This article presents an easily adopted novel approach and critical design parameters to improve the accuracy of a Goldmann applanating tonometer.
NASA Astrophysics Data System (ADS)
Esquinca, Alberto
This is a study of language use in the context of an inquiry-based science curriculum, in which conceptual understanding ratings are used to split texts into groups of "successful" and "unsuccessful" texts. "Successful" texts could include known features of science language. The data sources are 420 texts generated by students in 14 classrooms from three school districts, culled from a prior study on the effectiveness of science notebooks for assessing understanding, together with the aforementioned ratings. In science notebooks, students write in the process of learning (here, a unit on electricity). The analytical framework is systemic functional linguistics (Halliday and Matthiessen, 2004; Eggins, 2004), specifically the concepts of genre, register and nominalization. Genre classification involves an analysis of the purpose and register features of the text (Schleppegrell, 2004). Examining the use of features of the scientific academic register, namely the use of relational processes and nominalization (Halliday and Martin, 1993), requires transitivity analysis and noun analysis. Transitivity analysis, consisting of the identification of the process type, is conducted on 4737 ranking clauses. A manual count of each noun used in the corpus allows for a typology of nouns. Four school science genres are found: procedures, procedural recounts, reports and explanations. Most texts (85.4%) are factual, and 14.1% are classified as explanations, the analytical genre. Logistic regression analysis indicates that there is no significant probability that the texts classified as explanations are placed in the group of "successful" texts. In addition, material process clauses predominate in the corpus, followed by relational process clauses. Results of a logistic regression analysis indicate that there is a significant probability (Chi square = 15.23, p < .0001) that texts with a high rate of relational processes are placed in the group of "successful" texts. In addition, 59.5% of 6511 nouns are references to physical materials, followed by references to abstract concepts (35.54%). Only two of the concept nouns were found to be nominalized referents in definition model sentences. In sum, the corpus has recognizable genres and features of science language, and relational processes are more prevalent in "successful" texts. However, the pervasive feature of science language, nominalization, is scarce.
Adaptive Beamforming Algorithms for High Resolution Microwave Imaging
1991-04-01
frequency- and phase-locked. With a system of radio camera size it must be assumed that oscillators will drift and, similarly, that electronic circuits in...propagation-induced phase errors an array as large as the one under discussion is likely to experience different weather conditions across it. The nominal...human optical system. Such a passing-scene display with human optical resolving power would be available to the airman at night as well as during the
Yuan, Ke-Hai; Tian, Yubin; Yanagihara, Hirokazu
2015-06-01
Survey data typically contain many variables. Structural equation modeling (SEM) is commonly used in analyzing such data. The most widely used statistic for evaluating the adequacy of an SEM model is T_ML, a slight modification to the likelihood ratio statistic. Under the normality assumption, T_ML approximately follows a chi-square distribution when the number of observations (N) is large and the number of items or variables (p) is small. However, in practice, p can be rather large while N is always limited due to not having enough participants. Even with a relatively large N, empirical results show that T_ML rejects the correct model too often when p is not too small. Various corrections to T_ML have been proposed, but they are mostly heuristic. Following the principle of the Bartlett correction, this paper proposes an empirical approach to correct T_ML so that the mean of the resulting statistic approximately equals the degrees of freedom of the nominal chi-square distribution. Results show that the empirically corrected statistics follow the nominal chi-square distribution much more closely than previously proposed corrections to T_ML, and they control type I errors reasonably well whenever N ≥ max(50, 2p). The formulations of the empirically corrected statistics are further used to predict type I errors of T_ML as reported in the literature, and they perform well.
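The Bartlett-style rescaling at the heart of the proposal is easy to illustrate: estimate the mean of the statistic under a correct model, divide by the nominal degrees of freedom to get a correction factor, and rescale. The chi-square draws below are synthetic stand-ins for replications of T_ML; only the rescaling mechanics are the point, not the paper's actual correction formula.

```python
import numpy as np

# Bartlett-style empirical correction: rescale a test statistic so that its
# simulated mean matches the nominal chi-square degrees of freedom.
rng = np.random.default_rng(2)
df = 20
t_sim = rng.chisquare(df, size=5000) * 1.25    # inflated, like T_ML at large p

c = t_sim.mean() / df                          # empirical correction factor
t_corrected = t_sim / c                        # corrected statistic

# Type I error at the nominal 5% level, before and after correction
# (critical value taken from a large reference chi-square sample).
crit = np.percentile(rng.chisquare(df, size=200_000), 95)
print("uncorrected rejection rate:", np.mean(t_sim > crit))
print("corrected rejection rate:  ", np.mean(t_corrected > crit))
```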
Duffull, Stephen B; Graham, Gordon; Mengersen, Kerrie; Eccleston, John
2012-01-01
Information theoretic methods are often used to design studies that aim to learn about pharmacokinetic and linked pharmacokinetic-pharmacodynamic systems. These design techniques, such as D-optimality, provide the optimum experimental conditions. The performance of the optimum design will depend on the ability of the investigator to comply with the proposed study conditions. However, in clinical settings it is not possible to comply exactly with the optimum design and hence some degree of unplanned suboptimality occurs due to error in the execution of the study. In addition, due to the nonlinear relationship of the parameters of these models to the data, the designs are also locally dependent on an arbitrary choice of a nominal set of parameter values. A design that is robust to both study conditions and uncertainty in the nominal set of parameter values is likely to be of use clinically. We propose an adaptive design strategy to account for both execution error and uncertainty in the parameter values. In this study we investigate designs for a one-compartment first-order pharmacokinetic model. We do this in a Bayesian framework using Markov-chain Monte Carlo (MCMC) methods. We consider log-normal prior distributions on the parameters and investigate several prior distributions on the sampling times. An adaptive design was used to find the sampling window for the current sampling time conditional on the actual times of all previous samples.
Pearson, Richard
2011-03-01
To assess the possibility of estimating the refractive index of rigid contact lenses on the basis of measurements of their back vertex power (BVP) in air and when immersed in liquid. First, a spreadsheet model was used to quantify the magnitude of errors arising from simulated inaccuracies in the variables required to calculate refractive index. Then, refractive index was calculated from in-air and in-liquid measurements of the BVP of 21 lenses that had been made in three negative BVPs from materials with seven different nominal refractive index values. The power measurements were made by two operators on two occasions. Intraobserver reliability showed a mean difference of 0.0033±0.0061 (t = 0.544, P = 0.59), interobserver reliability showed a mean difference of 0.0043±0.0061 (t = 0.707, P = 0.48), and the mean difference between the nominal and calculated refractive index values was -0.0010±0.0111 (t = -0.093, P = 0.93). The spreadsheet prediction that low-powered lenses might be subject to greater errors in the calculated values of refractive index was substantiated by the experimental results. This method shows good intra- and interobserver reliability and can be used easily in a clinical setting to provide an estimate of the refractive index of rigid contact lenses having a BVP of 3 D or more.
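A thin-lens sketch shows why the two measurements determine the index: lens power in a medium of index n_m scales with (n - n_m), so the ratio of in-liquid to in-air BVP isolates n. Real rigid lenses require thick-lens and center-thickness corrections, and the saline index and example powers below are assumptions for illustration only.

```python
# Thin-lens approximation: P_liquid / P_air = (n - n_liquid) / (n - 1),
# which can be solved for the lens index n.
def refractive_index(bvp_air, bvp_liquid, n_liquid=1.333):
    r = bvp_liquid / bvp_air          # power ratio
    return (n_liquid - r) / (1.0 - r)

# Hypothetical example: a -4.50 D lens in air reading -1.20 D in saline.
print(round(refractive_index(-4.50, -1.20), 3))   # ~1.454, a plausible RGP index
```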
NASA Astrophysics Data System (ADS)
Pernot, Pascal; Savin, Andreas
2018-06-01
Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users of the expected amplitude of prediction errors attached to these methods. We show that, because the distributions of model errors are neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. These statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful for assessing the statistical reliability of benchmarking conclusions.
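Both recommended statistics fall out of the empirical distribution of unsigned errors in a few lines. The synthetic error sample below is a placeholder for a real benchmark's |calculated - reference| values; the threshold and confidence level are arbitrary choices.

```python
import numpy as np

# Statistics based on the empirical distribution of unsigned errors.
rng = np.random.default_rng(3)
errors = np.abs(rng.normal(0.5, 2.0, size=1000))   # stand-in for |model - reference|

threshold = 1.0
p_below = np.mean(errors < threshold)              # (1) P(|error| < threshold)
q95 = np.quantile(errors, 0.95)                    # (2) amplitude not exceeded
                                                   #     with 95% confidence
print(f"P(|err| < {threshold}) = {p_below:.3f}")
print(f"95th percentile of |err| = {q95:.3f}")
```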
A General Tool for Evaluating High-Contrast Coronagraphic Telescope Performance Error Budgets
NASA Technical Reports Server (NTRS)
Marchen, Luis F.; Shaklan, Stuart B.
2009-01-01
This paper describes a general purpose Coronagraph Performance Error Budget (CPEB) tool that we have developed under the NASA Exoplanet Exploration Program. The CPEB automates many of the key steps required to evaluate the scattered starlight contrast in the dark hole of a space-based coronagraph. It operates in 3 steps: first, a CodeV or Zemax prescription is converted into a MACOS optical prescription. Second, a Matlab program calls ray-trace code that generates linear beam-walk and aberration sensitivity matrices for motions of the optical elements and line-of-sight pointing, with and without controlled coarse and fine-steering mirrors. Third, the sensitivity matrices are imported by macros into Excel 2007 where the error budget is created. Once created, the user specifies the quality of each optic from a predefined set of PSDs. The spreadsheet creates a nominal set of thermal and jitter motions and combines them with the sensitivity matrices to generate an error budget for the system. The user can easily modify the motion allocations to perform trade studies.
Technique-Dependent Errors in the Satellite Laser Ranging Contributions to the ITRF
NASA Astrophysics Data System (ADS)
Pavlis, Erricos C.; Kuzmicz-Cieslak, Magdalena; König, Daniel
2013-04-01
Over the past decade Satellite Laser Ranging (SLR) has focused on its unique strength of providing accurate observations of the origin and scale of the International Terrestrial Reference Frame (ITRF). The origin of the ITRF is defined to coincide with the center of mass of the Earth system (geocenter). SLR realizes this origin as the focal point of the tracked satellite orbits, and, being the only (nominally) unbiased ranging technique, it provides the best realization for it. The goal of GGOS is to provide an ITRF with an accuracy at epoch of 1 mm or better and a stability of 0.1 mm/y. In order to meet this stringent goal, Space Geodesy is taking a two-pronged approach: modernizing the engineering components (ground and space segments), and revising the modeling standards to take advantage of recent improvements in many areas of geophysical modeling for Earth system components. As we gain improved understanding of the Earth system components, space geodesy adjusts its underlying modeling of the system to better and more completely describe it. Similarly, from the engineering side we examine the observational process for improvements of the calibration and reduction procedures that will enhance the accuracy of the individual observations and hence the final SLR products. Two areas that are currently under scrutiny are (a) the station-dependent and tracking-mode-dependent correction of the observations for the "center-of-mass offset" of each satellite target, and (b) the station- and pass-dependent correction for the calibrated delay that refers each measurement to the nominal "zero" of the instrument. The former affects primarily the accuracy of the scale definition, while the latter affects both the scale and the origin. However, because of the non-uniform data volume and non-symmetric geographic locations of the SLR stations, the major impact of the latter is on the definition of the origin. The ILRS is currently investigating the quality of models available for the correction of the center-of-mass offset for the primary targets contributing to the ITRF and the impact of their application on the final products, which we will discuss with examples. The second source of error is more complex, primarily because almost every current station is a unique case, and the quality of the applied delays must be assessed on a case-by-case basis. We will examine typical series of these corrections for some of the most important sites of the network. The current practice in the SLR contribution to the ITRF is to provide a "snapshot" ITRF realization from the analysis of arcs spanning one week, selected as a compromise between the requirement for an accurate enough realization of the site positions and a short enough interval to minimize biasing the estimate from mass redistributions over that interval. A comparison of these weekly realizations to the static definition of the ITRF origin results in the so-called "geocenter variation" time series. Fitting a model for the dominant frequencies in the series allows one to extend this model to future and past time intervals not covered by the observations. We will present and compare geocenter variation series based on different modeling underlying our SLR analysis, using the ITRF2008 as the reference.
Technique-dependent Errors in the Realization of the ITRF Origin From Satellite Laser Ranging
NASA Astrophysics Data System (ADS)
Pavlis, E. C.; Kuzmicz-Cieslak, M.
2012-12-01
Over the past decade Satellite Laser Ranging (SLR) has focused on its unique strength of providing accurate observations of the origin and scale of the International Terrestrial Reference Frame (ITRF). The origin of the ITRF is defined to coincide with the center of mass of the Earth system (geocenter). SLR realizes this origin as the focal point of the tracked satellite orbits, and, being the only (nominally) unbiased ranging technique, it provides the best realization for it. The goal of GGOS is to provide an ITRF with an accuracy at epoch of 1 mm or better and a stability of 0.1 mm/y. In order to meet this stringent goal, Space Geodesy is taking a two-pronged approach: modernizing the engineering components (ground and space segments), and revising the modeling standards to take advantage of recent improvements in many areas of geophysical modeling for Earth system components. As we gain improved understanding of the Earth system components, space geodesy adjusts its underlying modeling of the system to better and more completely describe it. Similarly, from the engineering side we examine the observational process for improvements of the calibration and reduction procedures that will enhance the accuracy of the individual observations and hence the final SLR products. Two areas that are currently under scrutiny are (a) the station-dependent and tracking-mode-dependent correction of the observations for the "center-of-mass offset" of each satellite target, and (b) the station- and pass-dependent correction for the calibrated delay that refers each measurement to the nominal "zero" of the instrument. The former affects primarily the accuracy of the scale definition, while the latter affects both the scale and the origin. However, because of the non-uniform data volume and non-symmetric geographic locations of the SLR stations, the major impact of the latter is on the definition of the origin. The ILRS is currently investigating the quality of models available for the correction of the center-of-mass offset for the primary targets contributing to the ITRF and the impact of their application on the final products, which we will discuss with examples. The second source of error is more complex, primarily because almost every current station is a unique case, and the quality of the applied delays must be assessed on a case-by-case basis. We will examine typical series of these corrections for some of the most important sites of the network. The current practice in the SLR contribution to the ITRF is to provide a "snapshot" ITRF realization from the analysis of arcs spanning one week, selected as a compromise between the requirement for an accurate enough realization of the site positions and a short enough interval to minimize biasing the estimate from mass redistributions over that interval. A comparison of these weekly realizations to the static definition of the ITRF origin results in the so-called "geocenter variation" time series. Fitting a model for the dominant frequencies in the series allows one to extend this model to future and past time intervals not covered by the observations. We will present and compare geocenter variation series based on different modeling underlying our SLR analysis, using the ITRF2008 as the reference.
An Expert System for the Evaluation of Cost Models
1990-09-01
contrast to the condition of equal error variance, called homoscedasticity. (Reference: Applied Linear Regression Models by John Neter - page 423...normal. (Reference: Applied Linear Regression Models by John Neter - page 125)...over time. Error terms correlated over time are said to be autocorrelated or serially correlated. (Reference: Applied Linear Regression Models by John
Processed Thematic Mapper Satellite Imagery for Selected Areas within the U.S.-Mexico Borderlands
Dohrenwend, John C.; Gray, Floyd; Miller, Robert J.
2000-01-01
The study is summarized in the Adobe Acrobat Portable Document Format (PDF) file OF00-309.PDF. This publication also contains satellite full-scene images of selected areas along the U.S.-Mexico border. These images are presented as high-resolution images in JPEG format (IMAGES). The folder LOCATIONS contains TIFF images showing the exact positions of easily identified reference locations for each of the Landsat TM scenes located at least partly within the U.S. A reference location table (BDRLOCS.DOC in MS Word format) lists the latitude and longitude of each reference location with a nominal precision of 0.001 minute of arc.
Apparatus and Method to Enable Precision and Fast Laser Frequency Tuning
NASA Technical Reports Server (NTRS)
Chen, Jeffrey R. (Inventor); Numata, Kenji (Inventor); Wu, Stewart T. (Inventor); Yang, Guangning (Inventor)
2015-01-01
An apparatus and method is provided to enable precision and fast laser frequency tuning. For instance, a fast tunable slave laser may be dynamically offset-locked to a reference laser line using an optical phase-locked loop. The slave laser is heterodyned against a reference laser line to generate a beatnote that is subsequently frequency divided. The phase difference between the divided beatnote and a reference signal may be detected to generate an error signal proportional to the phase difference. The error signal is converted into appropriate feedback signals to phase lock the divided beatnote to the reference signal. The slave laser frequency target may be rapidly changed based on a combination of a dynamically changing frequency of the reference signal, the frequency dividing factor, and an effective polarity of the error signal. Feed-forward signals may be generated to accelerate the slave laser frequency switching through laser tuning ports.
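The frequency bookkeeping of such an offset lock is simple: once the divided beatnote is phase-locked to the reference signal, the slave sits at the reference line plus or minus the divider factor times the reference-signal frequency. The line frequency, divider factor, and reference-signal frequency below are hypothetical values chosen only to show the arithmetic.

```python
# Offset-lock arithmetic: |f_slave - f_line| / n_div is locked to f_ref_sig,
# so f_slave = f_line + polarity * n_div * f_ref_sig. All values hypothetical.
f_line = 281.6e12        # reference laser line, Hz
n_div = 16               # beatnote frequency-divider factor
f_ref_sig = 50e6         # reference signal the divided beatnote is locked to, Hz
polarity = +1            # effective polarity of the error signal

f_slave = f_line + polarity * n_div * f_ref_sig
print(f"slave laser frequency: {f_slave / 1e12:.6f} THz "
      f"(offset {polarity * n_div * f_ref_sig / 1e9:.3f} GHz)")
```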
NASA Technical Reports Server (NTRS)
Huang, Dong; Yang, Wenze; Tan, Bin; Rautiainen, Miina; Zhang, Ping; Hu, Jiannan; Shabanov, Nikolay V.; Linder, Sune; Knyazikhin, Yuri; Myneni, Ranga B.
2006-01-01
The validation of moderate-resolution satellite leaf area index (LAI) products such as those operationally generated from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor data requires reference LAI maps developed from field LAI measurements and fine-resolution satellite data. Errors in field measurements and satellite data determine the accuracy of the reference LAI maps. This paper describes a method by which reference maps of known accuracy can be generated with knowledge of errors in fine-resolution satellite data. The method is demonstrated with data from an international field campaign in a boreal coniferous forest in northern Sweden, and Enhanced Thematic Mapper Plus images. The reference LAI map thus generated is used to assess modifications to the MODIS LAI/fPAR algorithm recently implemented to derive the next generation of the MODIS LAI/fPAR product for this important biome type.
ERIC Educational Resources Information Center
Rousseau, Ronald
1992-01-01
Proposes a mathematical model to explain the observed concentration or diversity of nominal classes in information retrieval systems. The Lorenz Curve is discussed, Information Production Process (IPP) is explained, and a heuristic explanation of circumstances in which the model might be used is offered. (30 references) (LRW)
The purpose of the Mississippi River map series is to provide reference for ecological vulnerability throughout the entire Mississippi River Basin, which is a forthcoming product. This map series product consists of seven 32 inch x 40 inch posters, with a nominal scale of 1 inch ...
Relative Efficacy of Behavioral Interventions in Preschool Children Attending Head Start
ERIC Educational Resources Information Center
Bellone, Katherine M.; Dufrene, Brad A.; Tingstrom, Daniel H.; Olmi, D. Joe; Barry, Christopher
2014-01-01
This study tested the relative efficacy of two interventions for children referred for consultation services due to problem behavior in the classroom. Teachers nominated children for participation due to frequent disruptive behaviors, such as inappropriate vocalizations and off-task behavior. Four Black males from 3 to 4 years old who attended…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-25
... nominal, and for which duplicative fees are not charged. B. Marketing by a Real Estate Broker or Agent Directed to Particular Homebuyers or Sellers In some circumstances, marketing services performed on behalf... position to refer settlement service business and through marketing can affirmatively influence a homebuyer...
Code of Federal Regulations, 2010 CFR
2010-04-01
... 24 Housing and Urban Development 3 2010-04-01 2010-04-01 false Governments. 597.501 Section 597... Special Rules § 597.501 Governments. If more than one State or local government seeks to nominate an urban area under this part, any reference to or requirement of this part shall apply to all such governments. ...
Code of Federal Regulations, 2013 CFR
2013-04-01
... 24 Housing and Urban Development 3 2013-04-01 2013-04-01 false Governments. 597.501 Section 597... Special Rules § 597.501 Governments. If more than one State or local government seeks to nominate an urban area under this part, any reference to or requirement of this part shall apply to all such governments. ...
Code of Federal Regulations, 2012 CFR
2012-04-01
... 24 Housing and Urban Development 3 2012-04-01 2012-04-01 false Governments. 597.501 Section 597... Special Rules § 597.501 Governments. If more than one State or local government seeks to nominate an urban area under this part, any reference to or requirement of this part shall apply to all such governments. ...
Code of Federal Regulations, 2011 CFR
2011-04-01
... 24 Housing and Urban Development 3 2011-04-01 2010-04-01 true Governments. 597.501 Section 597.501... Rules § 597.501 Governments. If more than one State or local government seeks to nominate an urban area under this part, any reference to or requirement of this part shall apply to all such governments. ...
Code of Federal Regulations, 2014 CFR
2014-04-01
... 24 Housing and Urban Development 3 2014-04-01 2013-04-01 true Governments. 597.501 Section 597.501... Rules § 597.501 Governments. If more than one State or local government seeks to nominate an urban area under this part, any reference to or requirement of this part shall apply to all such governments. ...
Strategic Control over Extent and Timing of Distractor-Based Response Activation
ERIC Educational Resources Information Center
Jost, Kerstin; Wendt, Mike; Luna-Rodriguez, Aquiles; Löw, Andreas; Jacobsen, Thomas
2017-01-01
In choice reaction time (RT) tasks, performance is often influenced by the presence of nominally irrelevant stimuli, referred to as distractors. Recent research provided evidence that distractor processing can be adjusted to the utility of the distractors: Distractors predictive of the upcoming target/response were more attended to and also…
Physical Chemistry of Exothermic Gas-Aerosol Calaorimetry.
1985-01-01
CALORIMETRY 1. INTRODUCTION Infrared radiation in the atmosphere above normal background levels can be produced in a variety of ways. For example, combustion...measured by a thermistor is 72 °C, the same as that of the reference targets. This 'nominal' temperature however is not necessarily either the drop or
NASA Technical Reports Server (NTRS)
Seale, R. H.
1979-01-01
The prediction of the SRB and ET impact areas requires six separate processors. The SRB impact prediction processor computes the impact areas and related trajectory data for each SRB element. Output from this processor is stored on a secure file accessible by the SRB impact plot processor, which generates the required plots. Similarly, the ET RTLS impact prediction processor and the ET RTLS impact plot processor generate the ET impact footprints for return-to-launch-site (RTLS) profiles. The ET nominal/AOA/ATO impact prediction processor and the ET nominal/AOA/ATO impact plot processor generate the ET impact footprints for non-RTLS profiles. The SRB and ET impact processors compute the size and shape of the impact footprints by tabular lookup in a stored footprint dispersion data base. The location of each footprint is determined by simulating a reference trajectory and computing the reference impact point location. To ensure consistency among all flight design system (FDS) users, much of the input required by these processors will be obtained from the FDS master data base.
NASA Astrophysics Data System (ADS)
Tiwari, Shivendra N.; Padhi, Radhakant
2018-01-01
Following the philosophy of adaptive optimal control, a neural network-based state feedback optimal control synthesis approach is presented in this paper. First, accounting for a nominal system model, a single network adaptive critic (SNAC) based multi-layered neural network (called NN1) is synthesised offline. Next, another linear-in-weight neural network (called NN2) is trained online and augmented to NN1 in such a manner that their combined output represents the desired optimal costate for the actual plant. To do this, the nominal model needs to be updated online to adapt to the actual plant, which is done by synthesising yet another linear-in-weight neural network (called NN3) online. Training of NN3 is done by utilising the error information between the nominal and actual states and carrying out the necessary Lyapunov stability analysis using a Sobolev norm based Lyapunov function. This helps in training NN2 successfully to capture the required optimal relationship. The overall architecture is named 'dynamically re-optimised single network adaptive critic (DR-SNAC)'. Numerical results for two motivating illustrative problems are presented, including a comparison study with the closed form solution for one problem, which clearly demonstrates the effectiveness and benefit of the proposed approach.
Estimating pixel variances in the scenes of staring sensors
Simonson, Katherine M [Cedar Crest, NM; Ma, Tian J [Albuquerque, NM
2012-01-24
A technique for detecting changes in a scene perceived by a staring sensor is disclosed. The technique includes acquiring a reference image frame and a current image frame of a scene with the staring sensor. A raw difference frame is generated based upon differences between the reference image frame and the current image frame. Pixel error estimates are generated for each pixel in the raw difference frame based at least in part upon spatial error estimates related to spatial intensity gradients in the scene. The pixel error estimates are used to mitigate effects of camera jitter in the scene between the current image frame and the reference image frame.
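A toy version of the idea: build the raw difference frame, then give each pixel an error budget that grows with the local intensity gradient, so jitter-sized shifts in busy regions are not mistaken for scene changes. The jitter magnitude, noise level, and 3-sigma decision rule below are illustrative assumptions, not the patented algorithm.

```python
import numpy as np

# Difference frame plus gradient-based per-pixel error estimates.
rng = np.random.default_rng(4)
reference = rng.random((64, 64))                   # stand-in reference frame
current = reference + rng.normal(0, 0.01, (64, 64))  # stand-in current frame

diff = current - reference                         # raw difference frame

gy, gx = np.gradient(reference)                    # spatial intensity gradients
jitter_px = 0.2                                    # assumed rms jitter, pixels
sigma_noise = 0.01                                 # assumed sensor noise level
pixel_sigma = np.sqrt(sigma_noise**2 + (jitter_px * np.hypot(gx, gy))**2)

significant = np.abs(diff) > 3.0 * pixel_sigma     # changes beyond 3-sigma
print("flagged pixels:", int(significant.sum()))
```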
Guo, Ying; Little, Roderick J; McConnell, Daniel S
2012-01-01
Covariate measurement error is common in epidemiologic studies. Current methods for correcting measurement error with information from external calibration samples are insufficient to provide valid adjusted inferences. We consider the problem of estimating the regression of an outcome Y on covariates X and Z, where Y and Z are observed, X is unobserved, but a variable W that measures X with error is observed. Information about measurement error is provided in an external calibration sample where data on X and W (but not Y and Z) are recorded. We describe a method that uses summary statistics from the calibration sample to create multiple imputations of the missing values of X in the regression sample, so that the regression coefficients of Y on X and Z and associated standard errors can be estimated using simple multiple imputation combining rules, yielding valid statistical inferences under the assumption of a multivariate normal distribution. The proposed method is shown by simulation to provide better inferences than existing methods, namely the naive method, classical calibration, and regression calibration, particularly for correction for bias and achieving nominal confidence levels. We also illustrate our method with an example using linear regression to examine the relation between serum reproductive hormone concentrations and bone mineral density loss in midlife women in the Michigan Bone Health and Metabolism Study. Existing methods fail to adjust appropriately for bias due to measurement error in the regression setting, particularly when measurement error is substantial. The proposed method corrects this deficiency.
Perceived barriers to medical-error reporting: an exploratory investigation.
Uribe, Claudia L; Schweikhart, Sharon B; Pathak, Dev S; Dow, Merrell; Marsh, Gail B
2002-01-01
Medical-error reporting is an essential component of patient safety enhancement. Unfortunately, medical errors are largely underreported across healthcare institutions. This problem can be attributed to different factors and barriers present at organizational and individual levels that ultimately prevent individuals from generating the report. This study explored the factors that affect medical-error reporting among physicians and nurses at a large academic medical center located in the midwestern United States. A nominal group session was conducted to identify the most relevant factors that act as barriers to error reporting. These factors were then used to design a questionnaire that explored the likelihood of the factors to act as barriers and their likelihood to be modified. Using these two parameters, the results were analyzed and combined into a Factor Relevance Matrix. The matrix identifies the factors for which immediate actions should be undertaken to improve medical-error reporting (immediate action factors). It also identifies factors that require long-term strategies (long-term strategy factors) as well as factors that the organization should be aware of but that are of lower priority (awareness factors). The strategies outlined in this study may assist healthcare organizations in improving medical-error reporting as part of the efforts toward patient-safety enhancement. Although factors affecting medical-error reporting may vary between different organizations, the process used in identifying the factors and the Factor Relevance Matrix developed in this study are easily adaptable to any organizational setting.
NASA Technical Reports Server (NTRS)
Mercer, Joey; Callantine, Todd; Martin, Lynne
2012-01-01
A recent human-in-the-loop simulation in the Airspace Operations Laboratory (AOL) at NASA's Ames Research Center investigated the robustness of Controller-Managed Spacing (CMS) operations. CMS refers to AOL-developed controller tools and procedures for enabling arrivals to conduct efficient Optimized Profile Descents with sustained high throughput. The simulation provided a rich data set for examining how a traffic management supervisor and terminal-area controller participants used the CMS tools and coordinated to respond to off-nominal events. This paper proposes quantitative measures for characterizing the participants' responses. Case studies of go-around events, replicated during the simulation, provide insights into the strategies employed and the role the CMS tools played in supporting them.
ROSAT in-orbit attitude measurement recovery
NASA Astrophysics Data System (ADS)
Kaffer, L.; Boeinghoff, A.; Bruederle, E.; Schrempp, W.; Wullstein, P.
After about 7 months of nearly perfect Attitude Measurement and Control System (AMCS) functioning, the ROSAT mission was affected by gyro degradations that complicated operations, and after one year the nominal mission could no longer be maintained. The re-establishment of the nominal mission through a redesign of the attitude measurement, using inertial reference generation from the coarse Sun sensor and magnetometer together with a new star acquisition procedure, is described. This success was possible only because sufficient reprogramming provisions in the onboard computer were available. The new software now occupies nearly the complete Random Access Memory (RAM) area and increases the computation time from about 50 msec to 300 msec per 1 sec cycle. This proves that deficiencies of the hardware can be overcome by more intelligent software.
Accuracy of references and quotations in veterinary journals.
Hinchcliff, K W; Bruce, N J; Powers, J D; Kipp, M L
1993-02-01
The accuracy of references and quotations used to substantiate statements of fact in articles published in 6 frequently cited veterinary journals was examined. Three hundred references were randomly selected, and the accuracy of each citation was examined. A subset of 100 references was examined for quotational accuracy; i.e., the accuracy with which authors represented the work or assertions of the author being cited. Of the 300 references selected, 295 were located, and 125 major errors were found in 88 (29.8%) of them. Sixty-seven (53.6%) of the major errors involved authors, 12 (9.6%) involved the article title, 14 (11.2%) involved the book or journal title, and 32 (25.6%) involved the volume number, date, or page numbers. Sixty-eight minor errors were detected. The accuracy of 111 quotations from 95 citations in 65 articles was examined. Nine quotations were technical and not classified, 86 (84.3%) were classified as correct, 2 (1.9%) contained minor misquotations, and 14 (13.7%) contained major misquotations. We concluded that misquotations and errors in citations occur frequently in veterinary journals, but at a rate similar to that reported for other biomedical journals.
CHNO Energetic Polymer Specific Heat Prediction From The Proposed Nominal/Generic (N/G) CP Concept
2007-02-01
HMX can exist in different solid polymorphic forms. At a certain temperature, T_T, one form may change to another form if the heat energy of...more than 100 K for TNT, HNS and HMX and over 200 K for TETRYL, PETN, and RDX). So based on the above remarks and similar remarks in References...are very close to (or equal to) the RDX CP values and TNT CP values near absolute zero. In Reference 7, two examples (TNT and HMX) were selected for
Dispersion analysis for baseline reference mission 2
NASA Technical Reports Server (NTRS)
Snow, L. S.
1975-01-01
A dispersion analysis considering uncertainties (or perturbations) in platform, vehicle, and environmental parameters was performed for baseline reference mission (BRM) 2. The dispersion analysis is based on the nominal trajectory for BRM 2. The analysis was performed to determine state vector and performance dispersions (or variations) which result from the indicated uncertainties. The dispersions are determined at major mission events and fixed times from liftoff (time slices). The dispersion results will be used to evaluate the capability of the vehicle to perform the mission within a specified level of confidence and to determine flight performance reserves.
Space Trajectories Error Analysis (STEAP) Programs. Volume 1: Analytic manual, update
NASA Technical Reports Server (NTRS)
1971-01-01
Manual revisions are presented for the modified and expanded STEAP series. The STEAP 2 is composed of three independent but related programs: NOMAL for the generation of n-body nominal trajectories performing a number of deterministic guidance events; ERRAN for the linear error analysis and generalized covariance analysis along specific targeted trajectories; and SIMUL for testing the mathematical models used in the navigation and guidance process. The analytic manual provides general problem description, formulation, and solution and the detailed analysis of subroutines. The programmers' manual gives descriptions of the overall structure of the programs as well as the computational flow and analysis of the individual subroutines. The user's manual provides information on the input and output quantities of the programs. These are updates to N69-36472 and N69-36473.
Effect of bird maneuver on frequency-domain helicopter EM response
Fitterman, D.V.; Yin, C.
2004-01-01
Bird maneuver, the rotation of the coil-carrying instrument pod used for frequency-domain helicopter electromagnetic surveys, changes the nominal geometric relationship between the bird-coil system and the ground. These changes affect electromagnetic coupling and can introduce errors in helicopter electromagnetic (HEM) data. We analyze these effects for a layered half-space for three coil configurations: vertical coaxial, vertical coplanar, and horizontal coplanar. Maneuver effect is shown to have two components: one that is purely geometric and another that is inductive in nature; the geometric component is significantly larger. A correction procedure is developed using an iterative approach built on standard HEM inversion routines. The maneuver effect correction reduces inversion misfit error and produces laterally smoother cross sections than obtained from uncorrected data.
Deep space target location with Hubble Space Telescope (HST) and Hipparcos data
NASA Technical Reports Server (NTRS)
Null, George W.
1988-01-01
Interplanetary spacecraft navigation requires accurate a priori knowledge of target positions. A concept is presented for attaining improved target ephemeris accuracy using two future Earth-orbiting optical observatories, the European Space Agency (ESA) Hipparcos observatory and the NASA Hubble Space Telescope (HST). Assuming nominal observatory performance, the Hipparcos data reduction will provide an accurate global star catalog, and HST will provide a capability for accurate angular measurements of stars and solar system bodies. The target location concept employs HST to observe solar system bodies relative to Hipparcos catalog stars and to determine the orientation (frame tie) of these stars to compact extragalactic radio sources. The target location process is described, the major error sources are discussed, the potential target ephemeris error is predicted, and mission applications are identified. Preliminary results indicate that ephemeris accuracy comparable to the errors in individual Hipparcos catalog stars may be possible with a more extensive HST observing program. Possible future ground- and space-based replacements for Hipparcos and HST astrometric capabilities are also discussed.
NASA Technical Reports Server (NTRS)
Marr, Greg C.
2003-01-01
The Triana spacecraft was designed to be launched by the Space Shuttle. The nominal Triana mission orbit will be a Sun-Earth L1 libration point orbit. Using the NASA Goddard Space Flight Center's Orbit Determination Error Analysis System (ODEAS), orbit determination (OD) error analysis results are presented for all phases of the Triana mission from the first correction maneuver through approximately launch plus 6 months. Results are also presented for the science data collection phase of the Fourier Kelvin Stellar Interferometer Sun-Earth L2 libration point mission concept with momentum unloading thrust perturbations during the tracking arc. The Triana analysis includes extensive analysis of an initial short arc orbit determination solution and results using both Deep Space Network (DSN) and commercial Universal Space Network (USN) statistics. These results could be utilized in support of future Sun-Earth libration point missions.
Beam masking to reduce cyclic error in beam launcher of interferometer
NASA Technical Reports Server (NTRS)
Ames, Lawrence L. (Inventor); Bell, Raymond Mark (Inventor); Dutta, Kalyan (Inventor)
2005-01-01
Embodiments of the present invention are directed to reducing cyclic error in the beam launcher of an interferometer. In one embodiment, an interferometry apparatus comprises a reference beam directed along a reference path, and a measurement beam spatially separated from the reference beam and being directed along a measurement path contacting a measurement object. The reference beam and the measurement beam have a single frequency. At least a portion of the reference beam and at least a portion of the measurement beam overlapping along a common path. One or more masks are disposed in the common path or in the reference path and the measurement path to spatially isolate the reference beam and the measurement beam from one another.
Testing non-inferiority of a new treatment in three-arm clinical trials with binary endpoints.
Tang, Nian-Sheng; Yu, Bin; Tang, Man-Lai
2014-12-18
A two-arm non-inferiority trial without a placebo is usually adopted to demonstrate that an experimental treatment is not worse than a reference treatment by a small pre-specified non-inferiority margin due to ethical concerns. Selection of the non-inferiority margin and establishment of assay sensitivity are two major issues in the design, analysis and interpretation of two-arm non-inferiority trials. Alternatively, a three-arm non-inferiority clinical trial including a placebo is usually conducted to assess the assay sensitivity and internal validity of a trial. Recently, some large-sample approaches have been developed to assess the non-inferiority of a new treatment based on the three-arm trial design. However, these methods perform poorly with small sample sizes in the three arms. This manuscript aims to develop reliable small-sample methods for testing three-arm non-inferiority. Saddlepoint approximation, exact and approximate unconditional, and bootstrap-resampling methods are developed to calculate p-values of the Wald-type, score and likelihood ratio tests. Simulation studies are conducted to evaluate their performance in terms of type I error rate and power. Our empirical results show that the saddlepoint approximation method generally behaves better than the asymptotic method based on the Wald-type test statistic. For small sample sizes, approximate unconditional and bootstrap-resampling methods based on the score test statistic perform better in the sense that their corresponding type I error rates are generally closer to the pre-specified nominal level than those of other test procedures. Both approximate unconditional and bootstrap-resampling test procedures based on the score test statistic are generally recommended for three-arm non-inferiority trials with binary outcomes.
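The resampling logic behind bootstrap p-values is generic and can be shown on a much simpler statistic: compute the observed statistic, regenerate many datasets under the null, and take the exceedance fraction. The two-proportion Wald statistic and counts below are stand-ins for the paper's three-arm score test, which is considerably more involved.

```python
import numpy as np

# Parametric bootstrap p-value for a simple two-proportion Wald statistic.
rng = np.random.default_rng(6)

def wald_stat(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    se = np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return (p1 - p2) / se if se > 0 else 0.0

x1, n1, x2, n2 = 18, 30, 12, 30            # hypothetical successes / sample sizes
t_obs = wald_stat(x1, n1, x2, n2)

# Resample under the null of equal proportions (pooled estimate).
p0 = (x1 + x2) / (n1 + n2)
b = 10_000
t_boot = np.array([
    wald_stat(rng.binomial(n1, p0), n1, rng.binomial(n2, p0), n2)
    for _ in range(b)
])
p_value = np.mean(np.abs(t_boot) >= abs(t_obs))
print(f"bootstrap p-value: {p_value:.4f}")
```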
NASA Astrophysics Data System (ADS)
Kestens, Vikram; Bozatzidis, Vassili; De Temmerman, Pieter-Jan; Ramaye, Yannic; Roebben, Gert
2017-08-01
Particle tracking analysis (PTA) is an emerging technique suitable for size analysis of particles with external dimensions in the nano- and sub-micrometre scale range. Only limited attempts have so far been made to investigate and quantify the performance of the PTA method for particle size analysis. This article presents the results of a validation study during which selected colloidal silica and polystyrene latex reference materials with particle sizes in the range of 20 nm to 200 nm were analysed with NS500 and LM10-HSBF NanoSight instruments and video analysis software NTA 2.3 and NTA 3.0. Key performance characteristics such as working range, linearity, limit of detection, limit of quantification, sensitivity, robustness, precision and trueness were examined according to recommendations proposed by EURACHEM. A model for measurement uncertainty estimation following the principles described in ISO/IEC Guide 98-3 was used for quantifying random and systematic variations. For nominal 50 nm and 100 nm polystyrene and nominal 80 nm silica reference materials, the relative expanded measurement uncertainties for the three measurands of interest, namely the mode, median and arithmetic mean of the number-weighted particle size distribution, varied from about 10% to 12%. For the nominal 50 nm polystyrene material, the relative expanded uncertainty of the arithmetic mean of the particle size distribution increased up to 18%, which was due to the presence of agglomerates. Data analysis was performed with software NTA 2.3 and NTA 3.0; the latter proved superior in terms of sensitivity and resolution.
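The ISO/IEC Guide 98-3 (GUM) style combination behind such figures is root-sum-of-squares of the component standard uncertainties, followed by expansion with a coverage factor, typically k = 2 for approximately 95% coverage. The component names and magnitudes below are invented for illustration; with these values the result lands near the ~10% level reported.

```python
import numpy as np

# GUM-style combination of relative standard uncertainty components.
components_rel = {
    "repeatability":  0.035,   # assumed relative standard uncertainties
    "calibration":    0.025,
    "video_analysis": 0.020,
}
u_c = np.sqrt(sum(u**2 for u in components_rel.values()))  # combined, RSS
U_rel = 2.0 * u_c                                          # expanded, k = 2
print(f"relative expanded uncertainty: {100 * U_rel:.1f} %")   # ~9.5 %
```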
ERIC Educational Resources Information Center
Ives, William; Houseworth, Marguerite
Aspects of children's early representational drawing ability may provide evidence for feature marking in non-linguistic symbol systems. To test this assumption children in kindergarten, second, and fourth grade were asked to draw a set of referent objects in three conditions: a nominal or standard condition with no implied relationship ("two…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-25
..., 1201 Eye St., NW., MS 1242, Washington, DC 20005 (mail); or [email protected] (e-mail). Please reference Information Collection 1024- 0018. FOR FURTHER INFORMATION CONTACT: Lisa Deline, NPS Historian, National Register of Historic Places, 1201 Eye St., NW, 20005. You may send an e-mail to Lisa[email protected
75 FR 73080 - Science Advisory Board Staff Office; Request for Nominations of Experts for the SAB...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-29
... are IRIS reference doses (RfDs) for two commercial PCB mixtures: Aroclor 1016 and Aroclor 1254 that... developing a draft assessment of the potential noncancer health hazards of complex PCB mixtures for inclusion... with the goal of establishing an RfD for application to complex PCB mixtures. The EPA's National Center...
Schultze, A E; Irizarry, A R
2017-02-01
Veterinary clinical pathologists are well positioned via education and training to assist in investigations of unexpected results or increased variation in clinical pathology data. Errors in testing and unexpected variability in clinical pathology data are sometimes referred to as "laboratory errors." These alterations may occur in the preanalytical, analytical, or postanalytical phases of studies. Most of the errors or variability in clinical pathology data occur in the preanalytical or postanalytical phases. True analytical errors occur within the laboratory and are usually the result of operator or instrument error. Analytical errors are often ≤10% of all errors in diagnostic testing, and the frequency of these types of errors has decreased in the last decade. Analytical errors and increased data variability may result from instrument malfunctions, failure to follow proper procedures, undetected failures in quality control, sample misidentification, and/or test interference. This article (1) illustrates several different types of analytical errors and situations within laboratories that may result in increased variability in data, (2) provides recommendations regarding prevention of testing errors and techniques to control variation, and (3) provides a list of references that describe and advise how to deal with increased data variability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saleh, Ahmed A., E-mail: asaleh@uow.edu.au
Even with the use of X-ray polycapillary lenses, sample tilting during pole figure measurement results in a decrease in the recorded X-ray intensity. The magnitude of this error is affected by the sample size and/or the finite detector size. These errors can typically be corrected by measuring the intensity loss as a function of the tilt angle using a texture-free reference sample (ideally made of the same alloy as the investigated material). Since texture-free reference samples are not readily available for all alloys, the present study employs an empirical procedure to estimate the correction curve for a particular experimental configuration. It involves the use of real texture-free reference samples that pre-exist in any X-ray diffraction laboratory to first establish the empirical correlations between X-ray intensity, sample tilt and their Bragg angles and thereafter generate correction curves for any Bragg angle. It will be shown that the empirically corrected textures are in very good agreement with the experimentally corrected ones. Highlights: • Sample tilting during X-ray pole figure measurement leads to intensity loss errors. • Texture-free reference samples are typically used to correct the pole figures. • An empirical correction procedure is proposed in the absence of reference samples. • The procedure relies on reference samples that pre-exist in any texture laboratory. • Experimentally and empirically corrected textures are in very good agreement.
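A minimal sketch of how such an empirical correction might be assembled: fit the normalised intensity-versus-tilt curves of available texture-free reference samples, then interpolate the fit coefficients in Bragg angle to generate a curve for an arbitrary reflection. All numbers and the polynomial form are assumptions for illustration.

```python
import numpy as np

# tilt angles (deg) and normalised intensities I(chi)/I(0) measured on
# texture-free reference samples at two Bragg angles (illustrative numbers)
chi = np.array([0, 10, 20, 30, 40, 50, 60, 70], float)
curves = {
    28.4: np.array([1.00, 0.99, 0.97, 0.93, 0.86, 0.76, 0.62, 0.45]),
    43.6: np.array([1.00, 1.00, 0.98, 0.96, 0.91, 0.84, 0.73, 0.58]),
}

# fit a low-order polynomial in chi for each reference Bragg angle
coeffs = {tt: np.polyfit(chi, i_norm, 3) for tt, i_norm in curves.items()}

def correction_curve(two_theta):
    """Interpolate the polynomial coefficients linearly in 2-theta to obtain
    a defocusing correction curve for an arbitrary Bragg angle."""
    keys = sorted(coeffs)
    c = np.array([np.interp(two_theta, keys, [coeffs[k][j] for k in keys])
                  for j in range(4)])
    return lambda x: np.polyval(c, x)

corr = correction_curve(35.0)
measured = 0.80                       # raw pole-figure intensity at chi = 45 deg
print(measured / corr(45.0))          # defocusing-corrected intensity
```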
Higher-order ionospheric error at Arecibo, Millstone, and Jicamarca
NASA Astrophysics Data System (ADS)
Matteo, N. A.; Morton, Y. T.
2010-12-01
The ionosphere is a dominant source of Global Positioning System receiver range measurement error. Although dual-frequency receivers can eliminate the first-order ionospheric error, most second- and third-order errors remain in the range measurements. Higher-order ionospheric error is a function of both electron density distribution and the magnetic field vector along the GPS signal propagation path. This paper expands previous efforts by combining incoherent scatter radar (ISR) electron density measurements, the International Reference Ionosphere model, exponential decay extensions of electron densities, the International Geomagnetic Reference Field, and total electron content maps to compute higher-order error at ISRs in Arecibo, Puerto Rico; Jicamarca, Peru; and Millstone Hill, Massachusetts. Diurnal patterns, dependency on signal direction, seasonal variation, and geomagnetic activity dependency are analyzed. Higher-order error is largest at Arecibo with code phase maxima circa 7 cm for low-elevation southern signals. The maximum variation of the error over all angles of arrival is circa 8 cm.
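For a sense of scale, a commonly cited closed form (following Bassiri and Hajj) for the second-order range error can be evaluated directly; the constant, sign conventions, and the assumed field value vary across references, so treat this as an order-of-magnitude sketch.

```python
# rough magnitude of the second-order ionospheric range error; B_par is the
# TEC-weighted magnetic field component along the line of sight (assumed value)
C = 299792458.0                       # speed of light, m/s
f_L1 = 1575.42e6                      # GPS L1 frequency, Hz

def second_order_range_error(tec, b_parallel, f=f_L1):
    s = 7527.0 * C * b_parallel * tec   # s-term, tec in el/m^2 and B in tesla
    return s / (2.0 * f**3)             # second-order group delay, metres

# 50 TECU slant TEC, ~0.4 gauss field component -> a few millimetres at L1
print(second_order_range_error(50e16, 4e-5))
```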
MFP scanner motion characterization using self-printed target
NASA Astrophysics Data System (ADS)
Kim, Minwoong; Bauer, Peter; Wagner, Jerry K.; Allebach, Jan P.
2015-01-01
Multifunctional printers (MFPs) are products that combine the functions of a printer, scanner, and copier. Our goal is to help customers easily diagnose scanner or print quality issues with their products by developing an automated diagnostic system embedded in the product. We specifically focus on the characterization of scanner motions, which may be defective due to irregular movements of the scan-head. The novel design of our test page and two-stage diagnostic algorithm are described in this paper. The most challenging issue is to evaluate the scanner performance properly when both the printer and scanner units contribute to the motion errors. In the first stage, called the uncorrected-print-error stage, aperiodic and periodic motion behaviors are characterized in both the spatial and frequency domains. Since it is not clear how much of the error is contributed by each unit, the scanned input is statistically analyzed in the second stage, called the corrected-print-error stage. Finally, the described diagnostic algorithms output the estimated scan error and print error separately as RMS values of the displacement of the scan and print lines, respectively, from their nominal positions in the scanner or printer motion direction. We validate our test page design and approaches against ground truth obtained from a high-precision, chrome-on-glass reticle manufactured using semiconductor chip fabrication technologies.
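The final RMS-displacement metric the paper describes is straightforward to compute once line positions are detected; a minimal sketch with made-up detections:

```python
import numpy as np

def rms_displacement(measured_pos, nominal_pitch):
    """RMS displacement of scan (or print) lines from their nominal positions.
    measured_pos: detected line positions along the motion direction (mm)."""
    n = np.arange(measured_pos.size)
    nominal = measured_pos[0] + n * nominal_pitch   # ideal, evenly spaced lines
    return np.sqrt(np.mean((measured_pos - nominal) ** 2))

pos = np.array([0.00, 1.02, 1.99, 3.03, 3.98, 5.01])  # illustrative detections
print(rms_displacement(pos, nominal_pitch=1.0))        # mm RMS
```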
Dynamic diagnostics of the error fields in tokamaks
NASA Astrophysics Data System (ADS)
Pustovitov, V. D.
2007-07-01
The error field diagnostics based on magnetic measurements outside the plasma is discussed. The analysed methods rely on measuring the plasma dynamic response to the finite-amplitude external magnetic perturbations, which are the error fields and the pre-programmed probing pulses. Such pulses can be created by the coils designed for static error field correction and for stabilization of the resistive wall modes, a technique developed and applied in several tokamaks, including DIII-D and JET. Here, the analysis is based on theoretical predictions of resonant field amplification (RFA). To achieve the desired level of the error field correction in tokamaks, the diagnostics must be sensitive to signals of several Gauss. Therefore, part of the measurements should be performed near the plasma stability boundary, where the RFA effect is stronger. While the proximity to the marginal stability is important, the absolute values of plasma parameters are not. This means that the necessary measurements can be done in the diagnostic discharges with parameters below the nominal operating regimes, with the stability boundary intentionally lowered. The estimates for ITER are presented. The discussed diagnostics can be tested in dedicated experiments in existing tokamaks. The diagnostics can be considered as an extension of the 'active MHD spectroscopy' used recently in the DIII-D tokamak and the EXTRAP T2R reversed field pinch.
Radiometric analysis of the longwave infrared channel of the Thematic Mapper on LANDSAT 4 and 5
NASA Technical Reports Server (NTRS)
Schott, John R.; Volchok, William J.; Biegel, Joseph D.
1986-01-01
The first objective was to evaluate the postlaunch radiometric calibration of the LANDSAT Thematic Mapper (TM) band 6 data. The second objective was to determine to what extent surface temperatures could be computed from the TM band 6 data using atmospheric propagation models. To accomplish this, ground truth data were compared to a single TM-4 band 6 data set. This comparison indicated satisfactory agreement over a narrow temperature range. The atmospheric propagation model (modified LOWTRAN 5A) was used to predict surface temperature values based on the radiance at the spacecraft. The aircraft data were calibrated using a multi-altitude profile calibration technique which had been extensively tested in previous studies. This aircraft calibration permitted measurement of surface temperatures based on the radiance reaching the aircraft. When these temperature values are evaluated, an error in the satellite's ability to predict surface temperatures can be estimated. This study indicated that by carefully accounting for various sensor calibration and atmospheric propagation effects, an expected error (1 standard deviation) in surface temperature would be 0.9 K. This assumes no error in surface emissivity and no sampling error due to target location. These results indicate that the satellite calibration is within nominal limits to within this study's ability to measure error.
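Surface (brightness) temperature retrieval from band-6 radiance inverts the band-effective Planck relation. The sketch below uses the widely published post-launch constants for Landsat-5 TM band 6 as nominal values; atmospheric correction and emissivity, which the study treats carefully, are omitted.

```python
import math

# published post-launch calibration constants for Landsat-5 TM band 6,
# treated here as nominal values
K1 = 607.76     # W m^-2 sr^-1 um^-1
K2 = 1260.56    # K

def brightness_temperature(radiance):
    """At-sensor brightness temperature from band-6 spectral radiance,
    by inverting the band-effective Planck relation."""
    return K2 / math.log(K1 / radiance + 1.0)

print(brightness_temperature(10.0))   # ~306 K for L = 10 W m^-2 sr^-1 um^-1
```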
Publication bias was not a good reason to discourage trials with low power.
Borm, George F; den Heijer, Martin; Zielhuis, Gerhard A
2009-01-01
The objective was to investigate whether it is justified to discourage trials with less than 80% power. Trials with low power are unlikely to produce conclusive results, but their findings can be used by pooling them in a meta-analysis. However, such an analysis may be biased, because trials with low power are likely to have a nonsignificant result and are less likely to be published than trials with a statistically significant outcome. We simulated several series of studies with varying degrees of publication bias and then calculated the "real" one-sided type I error and the bias of meta-analyses with a "nominal" error rate (significance level) of 2.5%. In single trials, in which heterogeneity was set at zero, low, and high, the error rates were 2.3%, 4.7%, and 16.5%, respectively. In multiple trials with 80%-90% power and a publication rate of 90% when the results were nonsignificant, the error rates could be as high as 5.1%. When the power was 50% and the publication rate of nonsignificant results was 60%, the error rates did not exceed 5.3%, whereas the bias was at most 15% of the difference used in the power calculation. The impact of publication bias does not warrant the exclusion of trials with 50% power.
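The simulation design is easy to reproduce in outline: generate null trials, publish nonsignificant ones with some probability, pool the published trials with fixed-effect inverse-variance weights, and count rejections. Sample sizes and rates below are illustrative, not the paper's.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def meta_type1_error(n_meta=2000, k=10, n=50, pub_rate_ns=0.6, alpha=0.025):
    """Fraction of fixed-effect meta-analyses of published trials that are
    one-sided significant although the true effect is zero."""
    rejections = 0
    for _ in range(n_meta):
        effects, ses = [], []
        for _ in range(k):
            x = rng.normal(0.0, 1.0, n)            # trial data, true effect = 0
            est, se = x.mean(), x.std(ddof=1) / np.sqrt(n)
            p_two = 2 * stats.norm.sf(abs(est / se))
            if p_two < 0.05 or rng.random() < pub_rate_ns:   # publication filter
                effects.append(est); ses.append(se)
        if effects:
            w = 1.0 / np.square(ses)               # inverse-variance weights
            pooled = np.dot(w, effects) / w.sum()
            z = pooled * np.sqrt(w.sum())
            rejections += z > stats.norm.isf(alpha)
    return rejections / n_meta

print(meta_type1_error())   # > 0.025 when nonsignificant trials go unpublished
```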
Farooqui, Javed Hussain; Sharma, Mansi; Koul, Archana; Dutta, Ranjan; Shroff, Noshir Minoo
2017-01-01
The aim of this study is to compare two different methods of analysis of preoperative reference marking for toric intraocular lens (IOL) implantation after marking with an electronic marker. Setting: Cataract and IOL Implantation Service, Shroff Eye Centre, New Delhi, India. Fifty-two eyes of thirty patients planned for toric IOL implantation were included in the study. All patients had preoperative marking performed with an electronic two-step toric IOL reference marker (ASICO AE-2929). Reference marks were placed at the 3- and 9-o'clock positions. Marks were analyzed with two systems. First, slit-lamp photographs were taken and analyzed using Adobe Photoshop (version 7.0). Second, the Tracey iTrace Visual Function Analyzer (version 5.1.1) was used to capture corneal topography and note the position of the marks. The amount of alignment error was calculated. The mean absolute rotation error was 2.38 ± 1.78° by Photoshop and 2.87 ± 2.03° by iTrace, a difference that was not statistically significant (P = 0.215). Overall, 72.7% of eyes by Photoshop and 61.4% by iTrace had a rotation error ≤3° (P = 0.359), and 90.9% of eyes by Photoshop and 81.8% by iTrace had a rotation error ≤5° (P = 0.344). There was no significant difference in the absolute amount of rotation between eyes when analyzed by either method. The difference in reference mark positions when analyzed by the two systems suggests the presence of varying cyclotorsion at different points in time. Both analysis methods showed approximately 3° of alignment error, which could contribute to a 10% loss of the astigmatic correction of a toric IOL. This can be further compounded by intraoperative marking errors and the final placement of the IOL in the bag.
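The quoted link between roughly 3° of alignment error and a 10% loss of astigmatic correction follows from the standard crossed-cylinder result that the residual astigmatism after rotating a toric correction of magnitude C by theta is about 2C·sin(theta):

```python
import math

def correction_loss(rotation_deg):
    """Fraction of toric correction lost to misalignment; crossed-cylinder
    result: residual astigmatism = 2*C*sin(theta), so loss ~ 2*sin(theta)."""
    return 2.0 * math.sin(math.radians(rotation_deg))

for theta in (1, 3, 5, 10):
    print(theta, f"{100 * correction_loss(theta):.0f}%")   # 3 deg -> ~10%
```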
NASA Astrophysics Data System (ADS)
Roman, D. R.; Smith, D. A.
2017-12-01
In 2022, the National Geodetic Survey will replace all three NAD 83 reference frames with four new terrestrial reference frames. Each frame will be named after a tectonic plate (North American, Pacific, Caribbean and Mariana) and each will be related to the IGS frame through three Euler Pole parameters (EPPs). This talk will focus on three main areas of error propagation when defining coordinates in these four frames. Those areas are (1) the use of the small-angle approximation to relate true rotation about an Euler pole to small rotations about three Cartesian axes; (2) the current state of the art in determining the Euler poles of these four plates; and (3) the combination of both IGS Cartesian coordinate uncertainties and EPP uncertainties into coordinate uncertainties in the four new frames. Discussion will also include recent efforts at improving the Euler poles for these frames and the expected dates when errors in the EPPs will cause an unacceptable level of uncertainty in the four new terrestrial reference frames.
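A small sketch of item (1): converting an Euler pole to a Cartesian rotation vector and applying the small-angle (first-order) rotation whose neglected terms are second order in the angle. The pole values are hypothetical, not NGS numbers.

```python
import numpy as np

def epp_to_omega(lat_deg, lon_deg, rate_deg_per_myr):
    """Euler pole parameters -> Cartesian rotation vector (rad/Myr)."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    w = np.radians(rate_deg_per_myr)
    return w * np.array([np.cos(lat) * np.cos(lon),
                         np.cos(lat) * np.sin(lon),
                         np.sin(lat)])

def small_angle_rotation(omega, t_myr):
    """R ~ I + [omega*t]_x: the first-order (small-angle) form whose neglected
    terms are second order in the rotation angle."""
    wx, wy, wz = omega * t_myr
    return np.array([[1.0, -wz,  wy],
                     [ wz, 1.0, -wx],
                     [-wy,  wx, 1.0]])

omega = epp_to_omega(-2.4, -83.9, 0.2)          # hypothetical pole, not NGS values
site = 6371e3 * np.array([0.50, -0.60, 0.62])   # site position vector, metres
print(np.cross(omega, site) * 1e-6)             # plate velocity in m/yr
print(small_angle_rotation(omega, 10.0))        # frame rotation over 10 Myr
```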
Positional reference system for ultraprecision machining
Arnold, Jones B.; Burleson, Robert R.; Pardue, Robert M.
1982-01-01
A stable positional reference system for use in improving the cutting tool-to-part contour position in numerical controlled-multiaxis metal turning machines is provided. The reference system employs a plurality of interferometers referenced to orthogonally disposed metering bars which are substantially isolated from machine strain induced position errors for monitoring the part and tool positions relative to the metering bars. A microprocessor-based control system is employed in conjunction with the plurality of position interferometers and part contour description data inputs to calculate error components for each axis of movement and output them to corresponding axis drives with appropriate scaling and error compensation. Real-time position control, operating in combination with the reference system, makes possible the positioning of the cutting points of a tool along a part locus with a substantially greater degree of accuracy than has been attained previously in the art by referencing and then monitoring only the tool motion relative to a reference position located on the machine base.
Positional reference system for ultraprecision machining
Arnold, J.B.; Burleson, R.R.; Pardue, R.M.
1980-09-12
A stable positional reference system for use in improving the cutting tool-to-part contour position in numerical controlled-multiaxis metal turning machines is provided. The reference system employs a plurality of interferometers referenced to orthogonally disposed metering bars which are substantially isolated from machine strain induced position errors for monitoring the part and tool positions relative to the metering bars. A microprocessor-based control system is employed in conjunction with the plurality of position interferometers and part contour description data inputs to calculate error components for each axis of movement and output them to corresponding axis drives with appropriate scaling and error compensation. Real-time position control, operating in combination with the reference system, makes possible the positioning of the cutting points of a tool along a part locus with a substantially greater degree of accuracy than has been attained previously in the art by referencing and then monitoring only the tool motion relative to a reference position located on the machine base.
Phase-ambiguity resolution for QPSK modulation systems. Part 2: A method to resolve offset QPSK
NASA Technical Reports Server (NTRS)
Nguyen, Tien Manh
1989-01-01
Part 2 presents a new method to resolve the phase-ambiguity for Offset QPSK modulation systems. When an Offset Quaternary Phase-Shift-Keyed (OQPSK) communications link is utilized, the phase ambiguity of the reference carrier must be resolved. At the transmitter, two different unique words are separately modulated onto the quadrature carriers. At the receiver, the recovered carrier may have one of four possible phases, 0, 90, 180, or 270 degrees, referenced to the nominally correct phase. The IF portion of the channel may cause a phase-sense reversal, i.e., a reversal in the direction of phase rotation for a specified bit pattern. Hence, eight possible phase relationships (the so-called eight ambiguous phase conditions) between input and output of the demodulator must be resolved. Using the In-phase (I)/Quadrature (Q) channel reversal correcting property of an OQPSK Costas loop with integrated symbol synchronization, four ambiguous phase conditions are eliminated. Thus, only four possible ambiguous phase conditions remain. The errors caused by the remaining ambiguous phase conditions can be corrected by monitoring and detecting the polarity of the two unique words. The correction of the unique word polarities results in the complete phase-ambiguity resolution for the OQPSK system.
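The remaining four-fold ambiguity can be resolved exactly as described: apply each candidate phase correction to the received unique-word symbols and keep the one that correlates with both transmitted unique words. The unique words below are illustrative.

```python
import numpy as np

UW_I = np.array([1, -1, 1, 1, -1, -1, 1, -1])   # unique word sent on I (illustrative)
UW_Q = np.array([1, 1, -1, 1, 1, -1, -1, -1])   # a different unique word sent on Q

# corrections that undo each of the four phase ambiguities left after the
# Costas loop (which has already removed the four I/Q-reversal conditions)
CORRECTIONS = {
      0: lambda i, q: ( i,  q),
     90: lambda i, q: ( q, -i),
    180: lambda i, q: (-i, -q),
    270: lambda i, q: (-q,  i),
}

def resolve_ambiguity(rx_i, rx_q):
    """Pick the correction whose output best correlates with both unique words."""
    def score(ph):
        ci, cq = CORRECTIONS[ph](rx_i, rx_q)
        return np.dot(ci, UW_I) + np.dot(cq, UW_Q)
    best = max(CORRECTIONS, key=score)
    return best, CORRECTIONS[best](rx_i, rx_q)

rx_i, rx_q = -UW_Q, UW_I                         # carrier recovered 90 deg off
phase, (i_fix, q_fix) = resolve_ambiguity(rx_i, rx_q)
print(phase, np.array_equal(i_fix, UW_I), np.array_equal(q_fix, UW_Q))  # 90 True True
```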
Improving the analysis of composite endpoints in rare disease trials.
McMenamin, Martina; Berglind, Anna; Wason, James M S
2018-05-22
Composite endpoints are recommended in rare diseases to increase power and/or to sufficiently capture complexity. Often, they take the form of responder indices which contain a mixture of continuous and binary components. Analyses of these outcomes typically treat them as binary, thus only using the dichotomisations of continuous components. The augmented binary method offers a more efficient alternative and is therefore especially useful for rare diseases. Previous work has indicated that the method may have poorer statistical properties when the sample size is small. Here we investigate small-sample properties and implement small-sample corrections. We re-sample from a previous trial with sample sizes varying from 30 to 80. We apply the standard binary and augmented binary methods and determine the power, type I error rate, coverage and average confidence interval width for each of the estimators. We implement Firth's adjustment for the binary component models and a small-sample variance correction for the generalized estimating equations, applying the small-sample adjusted methods to each sub-sample as before for comparison. For the log-odds treatment effect, the power of the augmented binary method is 20-55%, compared to 12-20% for the standard binary method. Both methods have approximately nominal type I error rates. The difference in response probabilities exhibits similar power, but both unadjusted methods demonstrate type I error rates of 6-8%. The small-sample corrected methods have approximately nominal type I error rates. On both scales, the reduction in average confidence interval width when using the adjusted augmented binary method is 17-18%. This is equivalent to requiring a 32% smaller sample size to achieve the same statistical power. The augmented binary method with small-sample corrections provides a substantial improvement for rare disease trials using composite endpoints. We recommend the use of the method for the primary analysis in relevant rare disease trials. We emphasise that the method should be used alongside other efforts to improve the quality of evidence generated from rare disease trials rather than replace them.
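Firth's adjustment, mentioned for the binary component models, is a bias-reduced logistic fit whose adjusted score adds a hat-diagonal term; a compact Newton-iteration sketch (toy data, not the trial's):

```python
import numpy as np

def firth_logistic(X, y, n_iter=50, tol=1e-8):
    """Newton iterations for Firth's bias-reduced logistic regression.
    Adjusted score: U*(b) = X'(y - p + h*(0.5 - p)), h = hat-matrix diagonal."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ beta)))
        w = p * (1.0 - p)
        fisher = X.T @ (X * w[:, None])
        finv = np.linalg.inv(fisher)
        h = w * np.einsum('ij,jk,ik->i', X, finv, X)   # leverages
        step = finv @ (X.T @ (y - p + h * (0.5 - p)))
        beta += step
        if np.max(np.abs(step)) < tol:
            break
    return beta

# toy responder data: intercept + treatment indicator, 40 subjects
rng = np.random.default_rng(0)
trt = np.repeat([0.0, 1.0], 20)
X = np.column_stack([np.ones(40), trt])
y = rng.binomial(1, 0.3 + 0.2 * trt).astype(float)
print(firth_logistic(X, y))   # [intercept, log-odds treatment effect]
```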
CARS Thermometry in a Supersonic Combustor for CFD Code Validation
NASA Technical Reports Server (NTRS)
Cutler, A. D.; Danehy, P. M.; Springer, R. R.; DeLoach, R.; Capriotti, D. P.
2002-01-01
An experiment has been conducted to acquire data for the validation of computational fluid dynamics (CFD) codes used in the design of supersonic combustors. The primary measurement technique is coherent anti-Stokes Raman spectroscopy (CARS), although surface pressures and temperatures have also been acquired. Modern design-of-experiment techniques have been used to maximize the quality of the data set (for the given level of effort) and minimize systematic errors. The combustor consists of a diverging duct with a single downstream-angled wall injector. The nominal entrance Mach number is 2 and the enthalpy nominally corresponds to Mach 7 flight. Temperature maps are obtained at several planes in the flow for two cases: in one case the combustor is piloted by injecting fuel upstream of the main injector; in the second it is not. Boundary conditions and uncertainties are adequately characterized. Accurate CFD calculation of the flow will ultimately require accurate modeling of the chemical kinetics and turbulence-chemistry interactions as well as accurate modeling of the turbulent mixing.
The development of expertise using an intelligent computer-aided training system
NASA Technical Reports Server (NTRS)
Johnson, Debra Steele
1991-01-01
An initial examination was conducted of an Intelligent Tutoring System (ITS) developed for use in industry. The ITS, developed by NASA, simulated a satellite deployment task. More specifically, the PD (Payload Assist Module Deployment)/ICAT (Intelligent Computer Aided Training) System simulated a nominal Payload Assist Module (PAM) deployment. The development of expertise on this task was examined using three Flight Dynamics Officer (FDO) candidates who had no previous experience with this task. The results indicated that performance improved rapidly until Trial 5, followed by more gradual improvements through Trial 12. The performance dimensions measured included performance speed, actions completed, errors, help required, and display fields checked. Suggestions for further refining the software and for deciding when to expose trainees to more difficult task scenarios are discussed. Further, the results provide an initial demonstration of the effectiveness of the PD/ICAT system in training the nominal PAM deployment task and indicate the potential benefits of using ITS's for training other FDO tasks.
The development of expertise on an intelligent tutoring system
NASA Technical Reports Server (NTRS)
Johnson, Debra Steele
1989-01-01
An initial examination was conducted of an Intelligent Tutoring System (ITS) developed for use in industry. The ITS, developed by NASA, simulated a satellite deployment task. More specifically, the PD (Payload Assist Module Deployment)/ICAT (Intelligent Computer Aided Training) System simulated a nominal Payload Assist Module (PAM) deployment. The development of expertise on this task was examined using three Flight Dynamics Officer (FDO) candidates who had no previous experience with this task. The results indicated that performance improved rapidly until Trial 5, followed by more gradual improvements through Trial 12. The performance dimensions measured included performance speed, actions completed, errors, help required, and display fields checked. Suggestions for further refining the software and for deciding when to expose trainees to more difficult task scenarios are discussed. Further, the results provide an initial demonstration of the effectiveness of the PD/ICAT system in training the nominal PAM deployment task and indicate the potential benefits of using ITS's for training other FDO tasks.
NASA Technical Reports Server (NTRS)
Pei, Jing; Wall, John
2013-01-01
This paper describes the techniques involved in determining the aerodynamic stability derivatives for the frequency domain analysis of the Space Launch System (SLS) vehicle. Generally, for launch vehicles, determination of the derivatives is fairly straightforward, since the aerodynamic data are usually linear through a moderate range of angle of attack. However, if the wind tunnel data lack proper corrections, then nonlinearities and asymmetric behavior may appear in the aerodynamic database coefficients. In this case, computing the derivatives becomes a non-trivial task. Errors in computing the nominal derivatives could lead to improper interpretation regarding the natural stability of the system and tuning of the controller parameters, which would impact both stability and performance. The aerodynamic derivatives are also provided at off-nominal operating conditions used for dispersed frequency domain Monte Carlo analysis. Finally, results are shown to illustrate that the effects of aerodynamic cross-axis coupling can be neglected for the SLS configuration studied.
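When the database is tabulated, the derivatives reduce to finite differences of the coefficient tables; nonlinearity or asymmetry shows up directly in the computed derivative. Illustrative values only:

```python
import numpy as np

# illustrative slice of an aero database: normal-force coefficient vs alpha
alpha_deg = np.array([-4.0, -2.0, 0.0, 2.0, 4.0])
cn = np.array([-0.21, -0.10, 0.00, 0.11, 0.23])          # made-up values

# stability derivative CN_alpha (per radian) by central differences;
# asymmetry or noise in the table shows up immediately in the derivative
cn_alpha = np.gradient(cn, np.radians(alpha_deg))
print(cn_alpha)   # roughly constant if the database is linear in alpha
```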
Feng, Jianyuan; Turksoy, Kamuran; Samadi, Sediqeh; Hajizadeh, Iman; Littlejohn, Elizabeth; Cinar, Ali
2017-12-01
Supervision and control systems rely on signals from sensors to receive information to monitor the operation of a system and adjust manipulated variables to achieve the control objective. However, sensor performance is often limited by working conditions, and sensors may also be subjected to interference by other devices. Many different types of sensor errors, such as outliers, missing values, drifts and corruption with noise, may occur during process operation. A hybrid online sensor error detection and functional redundancy system is developed to detect errors in online signals and replace erroneous or missing values with model-based estimates. The proposed hybrid system relies on two techniques, an outlier-robust Kalman filter (ORKF) and a locally-weighted partial least squares (LW-PLS) regression model, which leverage the advantages of automatic measurement error elimination with the ORKF and data-driven prediction with LW-PLS. The system includes a nominal angle analysis (NAA) method to distinguish between signal faults and large changes in sensor values caused by real dynamic changes in process operation. The performance of the system is illustrated with clinical data from continuous glucose monitoring (CGM) sensors worn by people with type 1 diabetes. More than 50,000 CGM sensor errors were added to the original CGM signals from 25 clinical experiments, and the performance of the error detection and functional redundancy algorithms was then analyzed. The results indicate that the proposed system can successfully detect most of the erroneous signals and substitute them with reasonable estimates computed by the functional redundancy system.
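A heavily simplified stand-in for the ORKF plus functional-redundancy idea, for a scalar signal: gate the innovation, and substitute the model estimate when a sample is missing or flagged as an outlier. Tuning values are arbitrary.

```python
import numpy as np

def robust_kalman_1d(z, q=0.05, r=1.0, gate=9.0):
    """Scalar random-walk Kalman filter; innovations failing a chi-square-style
    gate are treated as outliers and replaced by the model prediction
    (a crude stand-in for the ORKF + functional-redundancy combination)."""
    x, p = z[0], 1.0
    out = np.empty_like(z, dtype=float)
    for k, zk in enumerate(z):
        p += q                                  # predict
        s = p + r                               # innovation variance
        nu = zk - x                             # innovation
        if np.isnan(zk) or nu * nu / s > gate:  # missing value or outlier
            out[k] = x                          # functional redundancy: use estimate
        else:
            g = p / s                           # Kalman gain
            x += g * nu
            p *= (1.0 - g)
            out[k] = x
    return out

cgm = np.array([100, 102, 101, 250, 104, np.nan, 107, 108], float)  # toy CGM trace
print(robust_kalman_1d(cgm))
```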
Detecting Multiple Model Components with the Likelihood Ratio Test
NASA Astrophysics Data System (ADS)
Protassov, R. S.; van Dyk, D. A.
2000-05-01
The likelihood ratio test (LRT) and F-test, popularized in astrophysics by Bevington (Data Reduction and Error Analysis in the Physical Sciences) and Cash (1979, ApJ 228, 939), do not (even asymptotically) adhere to their nominal χ2 and F distributions in many statistical tests commonly used in astrophysics. The many legitimate uses of the LRT (see, e.g., the examples given in Cash 1979) notwithstanding, it can be impossible to compute the false positive rate of the LRT or related tests such as the F-test. For example, although Cash (1979) did not suggest the LRT for detecting a line profile in a spectral model, it has become common practice despite the lack of certain required mathematical regularity conditions. Contrary to common practice, the nominal distribution of the LRT statistic should not be used in these situations. In this paper, we characterize an important class of problems where the LRT fails, show the non-standard behavior of the test in this setting, and provide a Bayesian alternative to the LRT, i.e., posterior predictive p-values. We emphasize that there are many legitimate uses of the LRT in astrophysics, and even when the LRT is inappropriate, there remain several statistical alternatives (e.g., judicious use of error bars and Bayes factors). We illustrate this point in our analysis of GRB 970508, studied by Piro et al. (1999, ApJ 514, L73-L77).
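A posterior predictive p-value can be sketched for a simple Poisson binned-spectrum example: draw rates from the conjugate posterior under the null (single constant rate), simulate replicate spectra, and compare an LRT-style deviance statistic. This is a toy illustration of the idea, not the paper's GRB analysis.

```python
import numpy as np

rng = np.random.default_rng(3)
counts = np.array([7, 5, 6, 4, 19, 6, 5, 7])     # binned counts with one suspect bin

def deviance(y):
    """Poisson LRT statistic: saturated model vs a single constant rate."""
    mu = y.mean()
    t = y[y > 0]
    return 2.0 * np.sum(t * np.log(t / mu))

t_obs, n = deviance(counts), counts.size
# conjugate Gamma posterior for the constant rate, then replicate datasets
reps = np.array([deviance(rng.poisson(rng.gamma(counts.sum(), 1.0 / n), n))
                 for _ in range(5000)])
ppp = np.mean(reps >= t_obs)                     # posterior predictive p-value
print(ppp)   # avoids relying on the nominal chi2 reference distribution
```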
SU-E-T-195: Gantry Angle Dependency of MLC Leaf Position Error
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ju, S; Hong, C; Kim, M
Purpose: The aim of this study was to investigate the gantry angle dependency of the multileaf collimator (MLC) leaf position error. Methods: An automatic MLC quality assurance system (AutoMLCQA) was developed to evaluate the gantry angle dependency of the MLC leaf position error using an electronic portal imaging device (EPID). To eliminate the EPID position error due to gantry rotation, we designed a reference marker (RM) that could be inserted into the wedge mount. After setting up the EPID, a reference image was taken of the RM using an open field. Next, an EPID-based picket-fence test (PFT) was performed without the RM. These procedures were repeated at 45° intervals of the gantry angle. A total of eight reference images and PFT image sets were analyzed using in-house software. The average MLC leaf position error was calculated at five pickets (-10, -5, 0, 5, and 10 cm) in accordance with general PFT guidelines. This test was carried out for four linear accelerators. Results: The average MLC leaf position errors were within the set criterion of <1 mm (actual errors ranged from -0.7 to 0.8 mm) for all gantry angles, but significant gantry angle dependency was observed in all machines. The error was smaller at a gantry angle of 0° but increased toward the positive direction with gantry angle increments in the clockwise direction. The error reached a maximum value at a gantry angle of 90° and then gradually decreased until 180°. In the counter-clockwise rotation of the gantry, the same pattern of error was observed but the error increased in the negative direction. Conclusion: The AutoMLCQA system was useful to evaluate the MLC leaf position error for various gantry angles without the EPID position error. The gantry angle dependency should be considered during MLC leaf position error analysis.
Taxonomic status of Myotis occultus
Valdez, E.W.; Choate, Jerry R.; Bogan, M.A.; Yates, Terry L.
1999-01-01
The taxonomic status of the Arizona myotis (Myotis occultus) is uncertain. Although the taxon was described as a distinct species and currently is regarded as such by some authors, others have noted what they interpreted as intergradation with the little brown bat (M. lucifugus carissima) near the Colorado-New Mexico state line. In this study, we used protein electrophoresis to compare bats of these nominal taxa. We examined 20 loci from 142 specimens referable to M. occultus and M. lucifugus from New Mexico, Colorado, and Wyoming. Nine of the 20 loci were polymorphic. Results show that there were high similarities among samples, no fixed alleles, and minor divergence from Hardy-Weinberg equilibrium. Our results suggest that the two nominal taxa represent only one species and that M. occultus should be regarded as a subspecies of M. lucifugus.
Wu, Mixia; Zhang, Dianchen; Liu, Aiyi
2016-01-01
New biomarkers continue to be developed for the purpose of diagnosis, and their diagnostic performances are typically compared with an existing reference biomarker used for the same purpose. A considerable amount of research has focused on receiver operating characteristic curve analysis when the reference biomarker is dichotomous. In the situation where the reference biomarker is measured on a continuous scale and dichotomization is not practically appealing, an index was proposed in the literature to measure the accuracy of a continuous biomarker, which is essentially a linear function of the popular Kendall's tau. We consider the issue of estimating such an accuracy index when the continuous reference biomarker is measured with error. We first investigate the impact of measurement errors on the accuracy index and then propose methods to correct for the bias due to measurement errors. Simulation results show the effectiveness of the proposed estimator in reducing bias. The methods are exemplified with hemoglobin A1c measurements obtained from both a central lab and a local lab, used to evaluate the accuracy of the mean data obtained from metered blood glucose monitoring against the centrally measured hemoglobin A1c, from a behavioral intervention study for families of youth with type 1 diabetes.
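The attenuating effect of reference measurement error on a Kendall's-tau-based index is easy to demonstrate; here the index is taken, as an assumption for illustration, to be A = (tau + 1)/2.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(7)
truth = rng.normal(size=500)                    # latent "true" reference biomarker
new = truth + rng.normal(scale=0.5, size=500)   # new biomarker tracking the truth

for sigma in (0.0, 0.5, 1.0):                   # increasing reference measurement error
    ref_obs = truth + rng.normal(scale=sigma, size=500)
    tau_obs, _ = kendalltau(new, ref_obs)
    # accuracy index assumed here to be A = (tau + 1) / 2
    print(sigma, (tau_obs + 1) / 2)             # attenuates as the error grows
```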
Instrument Pointing Capabilities: Past, Present, and Future
NASA Technical Reports Server (NTRS)
Blackmore, Lars; Murray, Emmanuell; Scharf, Daniel P.; Aung, Mimi; Bayard, David; Brugarolas, Paul; Hadaegh, Fred; Lee, Allan; Milman, Mark; Sirlin, Sam;
2011-01-01
This paper surveys the instrument pointing capabilities of past, present and future space telescopes and interferometers. As an important aspect of this survey, we present a taxonomy for "apples-to-apples" comparisons of pointing performances. First, pointing errors are defined relative to either an inertial frame or a celestial target. Pointing error can then be further subdivided into DC, that is, steady-state, and AC components. We refer to the magnitude of the DC error relative to the inertial frame as absolute pointing accuracy, and we refer to the magnitude of the DC error relative to a celestial target as relative pointing accuracy. The magnitude of the AC error is referred to as pointing stability. While an AC/DC partition is not new, we leverage previous work by some of the authors to quantitatively clarify and compare varying definitions of jitter and time window averages. With this taxonomy, pointing accuracies and stabilities, both required and achieved, are presented for sixteen past, present, and future missions. In addition, we describe the attitude control technologies used (and, for future missions, planned) to achieve these pointing performances.
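The DC/AC split can be written down directly for an error time series: the DC term is the (windowed) mean and the AC term the RMS about it. Window length and signal values below are arbitrary.

```python
import numpy as np

def pointing_budget(err_arcsec, window):
    """Split a pointing-error time series (relative to the target) into a DC
    term (relative pointing accuracy) and an AC term (pointing stability),
    with the AC term computed as the RMS about windowed means."""
    dc = float(np.mean(err_arcsec))
    n = err_arcsec.size // window * window
    w = err_arcsec[:n].reshape(-1, window)
    ac = float(np.sqrt(np.mean((w - w.mean(axis=1, keepdims=True)) ** 2)))
    return dc, ac

t = np.arange(10000)
err = (0.8 + 0.05 * np.sin(2 * np.pi * t / 37)
       + 0.02 * np.random.default_rng(0).normal(size=t.size))
print(pointing_budget(err, window=500))   # ~0.8 arcsec DC, small AC jitter
```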
Geological nominations at UNESCO World Heritage, an upstream struggle
NASA Astrophysics Data System (ADS)
Olive-Garcia, Cécile; van Wyk de Vries, Benjamin
2017-04-01
Drawing on my 10 years' experience in setting up and defending a UNESCO World Heritage geological nomination, this presentation aims to give a personal insight into this international process and the differential use of science, subjective perception (aesthetics and 'naturality'), and politics. At this point in the process, new protocols have been tested in order to improve the dialogue, accountability and transparency between the different stakeholders: the State parties, the IUCN, the scientific community, and UNESCO itself. Our proposal is the Chaîne des Puys-Limagne fault ensemble, which combines tectonics, geomorphological evolution and volcanology. The project's essence is a conjunction of inseparable geological features and processes, set in the context of plate tectonics. This very unity of diverse forms and processes creates the value of the site. However, it is just this that has caused a problem, as the advisory body takes a categorical approach to nominations that separates items and assesses them in an unconnected manner. From the start we proposed a combined approach, where a property is seen in its entirety, and the constituent elements are seen as interlinked expressions of the joint underlying phenomena. At this point, our project has received the first ever open review by an independent technical mission (jointly set up by IUCN, UNESCO and the State party). The subsequent report was broadly supportive of the project's approach and of the value of the ensemble of features. The UNESCO committee in 2016 re-referred the nomination, acknowledging the potential Outstanding Universal Value of the site and requesting the parties to continue the upstream process (e.g. collaborative work), notably on the recommendations and conclusions of the independent technical mission report. Meetings are continuing, and I shall provide you with the hot-off-the-press news as this ground-breaking nomination progresses.
Deductive Error Diagnosis and Inductive Error Generalization for Intelligent Tutoring Systems.
ERIC Educational Resources Information Center
Hoppe, H. Ulrich
1994-01-01
Examines the deductive approach to error diagnosis for intelligent tutoring systems. Topics covered include the principles of the deductive approach to diagnosis; domain-specific heuristics to solve the problem of generalizing error patterns; and deductive diagnosis and the hypertext-based learning environment. (Contains 26 references.) (JLB)
Error Analysis of Wind Measurements for the University of Illinois Sodium Doppler Temperature System
NASA Technical Reports Server (NTRS)
Pfenninger, W. Matthew; Papen, George C.
1992-01-01
Four-frequency lidar measurements of temperature and wind velocity require accurate frequency tuning to an absolute reference and long-term frequency stability. We quantify frequency tuning errors for the Illinois sodium system, which uses a sodium vapor cell to measure absolute frequencies and a reference interferometer to measure relative frequencies. To determine laser tuning errors, we monitor the vapor cell and interferometer during lidar data acquisition and analyze the two signals for variations as functions of time. Both the sodium cell and the interferometer are the same as those used to frequency tune the laser. By quantifying the frequency variations of the laser during data acquisition, an error analysis of temperature and wind measurements can be performed. These error bounds determine the confidence in the calculated temperatures and wind velocities.
Photochemical control of the distribution of Venusian water
NASA Astrophysics Data System (ADS)
Parkinson, Christopher D.; Gao, Peter; Esposito, Larry; Yung, Yuk; Bougher, Stephen; Hirtzig, Mathieu
2015-08-01
We use the JPL/Caltech 1-D photochemical model to solve the continuity-diffusion equation for atmospheric constituent abundances and total number density as a function of radial distance from the planet Venus. Photochemistry of the Venus atmosphere from 58 to 112 km is modeled using an updated and expanded chemical scheme (Zhang et al., 2010, 2012), guided by the results of recent observations; we mainly follow these references in our choice of boundary conditions for 40 species. We model water between 10 and 35 ppm at our 58 km lower boundary, using an SO2 mixing ratio of 25 ppm as our nominal reference value. We then vary the SO2 mixing ratio at the lower boundary between 5 and 75 ppm, holding the water mixing ratio at the lower boundary at 18 ppm, and find that it can control the water distribution at higher altitudes. SO2 and H2O can regulate each other via the formation of H2SO4. In regions of high SO2 mixing ratios there exists a "runaway effect" such that SO2 is oxidized to SO3, which quickly soaks up H2O, causing a major depletion of water between 70 and 100 km. Eddy diffusion sensitivity studies characterizing the variability due to mixing show less of an effect than varying the lower boundary mixing ratio. However, calculations using our nominal eddy diffusion profile multiplied and divided by a factor of four can give an order of magnitude maximum difference in the SO2 mixing ratio and a factor of a few difference in the H2O mixing ratio when compared with the respective nominal mixing ratios for these two species. In addition to explaining some of the observed variability of SO2 and H2O on Venus, our work also sheds light on the observations of dark and bright contrasts at the Venus cloud tops observed in the ultraviolet. Our calculations produce results in agreement with the SOIR Venus Express results of 1 ppm at 70-90 km (Bertaux et al., 2007) when using an SO2 mixing ratio of 25 ppm and a water mixing ratio of 18 ppm as our nominal reference values. Timescales for a chemical bifurcation causing a collapse of water concentrations above the cloud tops (>64 km) are relatively short, on the order of less than a few months, decreasing with altitude to less than a few days.
A Singular Perturbation Approach for Time-Domain Assessment of Phase Margin
NASA Technical Reports Server (NTRS)
Zhu, J. Jim; Yang, Xiaojing; Hodel, A Scottedward
2010-01-01
This paper considers the problem of time-domain assessment of the Phase Margin (PM) of a Single Input Single Output (SISO) Linear Time-Invariant (LTI) system using a singular perturbation approach, where a SISO LTI fast loop system, whose phase lag increases monotonically with frequency, is introduced into the loop as a singular perturbation with a singular perturbation (time-scale separation) parameter Epsilon. First, a bijective relationship between the maximum singular perturbation parameter, called the Singular Perturbation Margin (SPM), and the PM of the nominal (slow) system is established, with an approximation error on the order of Epsilon squared. In proving this result, relationships between the singular perturbation parameter Epsilon, the PM of the perturbed system, the PM and SPM of the nominal system, and the (monotonically increasing) phase of the fast system are also revealed. These results make it possible to assess the PM of the nominal system in the time domain for SISO LTI systems using the SPM with a standardized testing system called the "PM-gauge," as demonstrated by examples. PM is a widely used stability margin for LTI control system design and certification. Unfortunately, it is not applicable to Linear Time-Varying (LTV) and Nonlinear Time-Varying (NLTV) systems. The approach developed here can be used to establish a theoretical as well as practical metric of stability margin for LTV and NLTV systems using a standardized SPM that is backward compatible with PM.
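The paper's idea can be mimicked numerically: perturb a nominal loop with a fast first-order lag 1/(eps·s + 1) and bisect for the largest eps the closed loop tolerates. This sketch assumes the python-control package is available; the loop transfer function is an arbitrary example, not one from the paper.

```python
import numpy as np
import control   # python-control package (assumed available)

L = control.tf([2.0], [1.0, 3.0, 2.0, 0.0])        # example loop: 2 / (s(s+1)(s+2))
gm, pm, wcg, wcp = control.margin(L)
print(f"phase margin of the nominal loop: {pm:.1f} deg")

def stable_with_fast_lag(eps):
    """Close the loop with an added fast lag 1/(eps*s + 1) and check stability
    from the roots of the closed-loop characteristic polynomial."""
    cl = control.feedback(L * control.tf([1.0], [eps, 1.0]))
    return bool(np.all(np.roots(cl.den[0][0]).real < 0))

lo, hi = 1e-6, 5.0                                 # bisect for the largest eps the
for _ in range(60):                                # loop tolerates (the SPM)
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if stable_with_fast_lag(mid) else (lo, mid)
print(f"singular perturbation margin (fast time constant): {lo:.4f} s")
```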
Robust time and frequency domain estimation methods in adaptive control
NASA Technical Reports Server (NTRS)
Lamaire, Richard Orville
1987-01-01
A robust identification method was developed for use in an adaptive control system. The estimator is called a robust estimator, since it is robust to the effects of both unmodeled dynamics and an unmeasurable disturbance. The development of the robust estimator was motivated by a need to provide guarantees in the identification part of an adaptive controller. To enable the design of a robust control system, a nominal model as well as a frequency-domain bounding function on the modeling uncertainty associated with this nominal model must be provided. Two estimation methods are presented for finding parameter estimates and, hence, a nominal model. One of these methods is based on the well-developed field of time-domain parameter estimation. The second method uses a type of weighted least-squares fitting to a frequency-domain estimated model. The frequency-domain estimator is shown to perform better, in general, than the time-domain parameter estimator. In addition, a methodology for finding a frequency-domain bounding function on the disturbance is used to compute a frequency-domain bounding function on the additive modeling error due to the effects of the disturbance and the use of finite-length data. The performance of the robust estimator in both open-loop and closed-loop situations is examined through the use of simulations.
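One classical way to realize a weighted least-squares fit to a frequency-domain estimated model is Levy-style linearisation, which makes the fit linear in the parameters; this is a generic sketch, not necessarily the thesis's exact formulation.

```python
import numpy as np

def levy_fit(w, g_hat, weights):
    """Weighted least-squares fit of G(s) = (b0 + b1*s) / (1 + a1*s) to
    frequency-response estimates, via Levy's linearisation:
    g_hat = b0 + b1*(jw) - a1*(jw)*g_hat (linear in b0, b1, a1)."""
    s = 1j * w
    A = np.column_stack([np.ones_like(s), s, -s * g_hat])
    sw = np.sqrt(weights)[:, None]
    A_ri = np.vstack([(A * sw).real, (A * sw).imag])       # stack real/imag parts
    b_ri = np.concatenate([(g_hat * sw[:, 0]).real, (g_hat * sw[:, 0]).imag])
    theta, *_ = np.linalg.lstsq(A_ri, b_ri, rcond=None)
    return theta                                           # [b0, b1, a1]

rng = np.random.default_rng(4)
w = np.logspace(-1, 2, 60)
g_true = 1.0 / (1.0 + 0.5j * w)                            # true G(s) = 1/(1+0.5s)
g_meas = g_true + 0.02 * (rng.normal(size=60) + 1j * rng.normal(size=60))
print(levy_fit(w, g_meas, weights=1.0 / (1.0 + w)))        # approx [1.0, 0.0, 0.5]
```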
Test Design Optimization in CAT Early Stage with the Nominal Response Model
ERIC Educational Resources Information Center
Passos, Valeria Lima; Berger, Martijn P. F.; Tan, Frans E.
2007-01-01
The early stage of computerized adaptive testing (CAT) refers to the phase of the trait estimation during the administration of only a few items. This phase can be characterized by bias and instability of estimation. In this study, an item selection criterion is introduced in an attempt to lessen this instability: the D-optimality criterion. A…
NASA Technical Reports Server (NTRS)
Fragola, Joseph R.; Maggio, Gaspare; Frank, Michael V.; Gerez, Luis; Mcfadden, Richard H.; Collins, Erin P.; Ballesio, Jorge; Appignani, Peter L.; Karns, James J.
1995-01-01
In this volume, volume 4 (of five volumes), the discussion is focused on the system models and related data references and has the following subsections: space shuttle main engine, integrated solid rocket booster, orbiter auxiliary power units/hydraulics, and electrical power system.
40 CFR 52.128 - Rule for unpaved parking lots, unpaved roads and vacant lots.
Code of Federal Regulations, 2014 CFR
2014-07-01
.... Research Triangle Park, N.C. May 1982. 3. “Method 9—Visible Determination of the Opacity of Emissions from... other dust generating operations which have been terminated for over eight months. (3) The test methods... than or equal to a nominal 10 micrometers as measured by reference or equivalent methods that meet the...
40 CFR 52.128 - Rule for unpaved parking lots, unpaved roads and vacant lots.
Code of Federal Regulations, 2012 CFR
2012-07-01
.... Research Triangle Park, N.C. May 1982. 3. “Method 9—Visible Determination of the Opacity of Emissions from... other dust generating operations which have been terminated for over eight months. (3) The test methods... than or equal to a nominal 10 micrometers as measured by reference or equivalent methods that meet the...
40 CFR 52.128 - Rule for unpaved parking lots, unpaved roads and vacant lots.
Code of Federal Regulations, 2013 CFR
2013-07-01
.... Research Triangle Park, N.C. May 1982. 3. “Method 9—Visible Determination of the Opacity of Emissions from... other dust generating operations which have been terminated for over eight months. (3) The test methods... than or equal to a nominal 10 micrometers as measured by reference or equivalent methods that meet the...
Digital adaptive controllers for VTOL vehicles. Volume 2: Software documentation
NASA Technical Reports Server (NTRS)
Hartmann, G. L.; Stein, G.; Pratt, S. G.
1979-01-01
The VTOL approach and landing test (VALT) adaptive software is documented. Two self-adaptive algorithms, one based on an implicit model reference design and the other on an explicit parameter estimation technique were evaluated. The organization of the software, user options, and a nominal set of input data are presented along with a flow chart and program listing of each algorithm.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-06
... managing agencies to fill key conservation gaps in important ocean areas. DATES: Comments on the... conservation objectives of the Framework. Executive Order 13158 defines an MPA as: ``any area of the marine... term MPA as defined in the Framework refers only to the marine portion of a site (below the mean high...
78 FR 14799 - Solicitation of Nominations to the Presidential Advisory Council on HIV/AIDS
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-07
... Council on HIV/AIDS AGENCY: Office of the Assistant Secretary for Health, Office of the Secretary... Service Act (42 U.S.C. 217a. The Presidential Advisory Council on HIV/AIDS (referred to as PACHA and/or... as members of the Presidential Advisory Council on HIV/AIDS (PACHA). The PACHA is a federal advisory...
Robust Flight Path Determination for Mars Precision Landing Using Genetic Algorithms
NASA Technical Reports Server (NTRS)
Bayard, David S.; Kohen, Hamid
1997-01-01
This paper documents the application of genetic algorithms (GAs) to the problem of robust flight path determination for Mars precision landing. The robust flight path problem is defined here as the determination of the flight path which delivers a low-lift open-loop controlled vehicle to its desired final landing location while minimizing the effect of perturbations due to uncertainty in the atmospheric model and entry conditions. The genetic algorithm was capable of finding solutions which reduced the landing error from 111 km RMS radial (open-loop optimal) to 43 km RMS radial (optimized with respect to perturbations) using 200 hours of computation on an Ultra-SPARC workstation. Further reduction in the landing error is possible by going to closed-loop control which can utilize the GA optimized paths as nominal trajectories for linearization.
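In outline, the GA evolves candidate open-loop paths scored by RMS landing error over sampled perturbations. The sketch below is a generic, toy version (a 3-parameter "path" and a stand-in cost function in place of an entry simulation):

```python
import numpy as np

rng = np.random.default_rng(5)

def landing_error(path, perturbation):
    """Stand-in cost: radial miss of a 3-parameter open-loop 'path' under a
    perturbed environment (a real study would run an entry simulation here)."""
    return np.linalg.norm(path - (np.array([1.0, -0.5, 0.3]) + perturbation))

def robust_cost(path, n_pert=20):
    """RMS landing error over sampled atmosphere/entry dispersions."""
    perts = rng.normal(scale=0.2, size=(n_pert, 3))
    return np.sqrt(np.mean([landing_error(path, p) ** 2 for p in perts]))

pop = rng.uniform(-2.0, 2.0, size=(40, 3))              # initial population
for _ in range(60):
    costs = np.array([robust_cost(ind) for ind in pop])
    elite = pop[np.argsort(costs)[:10]]                  # truncation selection
    pa = elite[rng.integers(0, 10, 30)]                  # crossover: blend random
    pb = elite[rng.integers(0, 10, 30)]                  # pairs of elite parents
    pop = np.vstack([elite, 0.5 * (pa + pb) + rng.normal(scale=0.1, size=(30, 3))])

best = min(pop, key=robust_cost)
print(best, robust_cost(best))                           # best robust path found
```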
Importance of interpolation and coincidence errors in data fusion
NASA Astrophysics Data System (ADS)
Ceccherini, Simone; Carli, Bruno; Tirelli, Cecilia; Zoppetti, Nicola; Del Bianco, Samuele; Cortesi, Ugo; Kujanpää, Jukka; Dragani, Rossana
2018-02-01
The complete data fusion (CDF) method is applied to ozone profiles obtained from simulated measurements in the ultraviolet and in the thermal infrared in the framework of the Sentinel 4 mission of the Copernicus programme. We observe that the quality of the fused products is degraded when the fusing profiles are either retrieved on different vertical grids or referred to different true profiles. To address this shortcoming, a generalization of the complete data fusion method, which takes into account interpolation and coincidence errors, is presented. This upgrade overcomes the encountered problems and provides products of good quality when the fusing profiles are both retrieved on different vertical grids and referred to different true profiles. The impact of the interpolation and coincidence errors on the number of degrees of freedom and on the errors of the fused profile is also analysed. The approach developed here to account for the interpolation and coincidence errors can also be followed to include other error components, such as forward model errors.
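The generalization described can be caricatured by inflating each retrieval covariance with interpolation and coincidence terms before inverse-covariance fusion; the matrices below are toy diagonal examples, and the full CDF also carries averaging kernels, omitted here.

```python
import numpy as np

def fuse_profiles(x1, s1, x2, s2, s_interp, s_coinc):
    """Inverse-covariance fusion of two profiles on a common grid, with the
    interpolation and coincidence error covariances added to the retrieval
    covariances before fusing (schematic version of the generalised CDF)."""
    w1 = np.linalg.inv(s1 + s_interp)      # profile 1: regridding error included
    w2 = np.linalg.inv(s2 + s_coinc)       # profile 2: different-true-profile error
    s_f = np.linalg.inv(w1 + w2)           # fused covariance
    return s_f @ (w1 @ x1 + w2 @ x2), s_f

n = 5
x1, x2 = np.full(n, 300.0), np.full(n, 310.0)       # toy ozone partial columns (DU)
s1, s2 = np.eye(n) * 25.0, np.eye(n) * 16.0
xf, sf = fuse_profiles(x1, s1, x2, s2, np.eye(n) * 4.0, np.eye(n) * 9.0)
print(xf)                                            # fused profile
print(np.diag(sf))                                   # below either inflated input variance
```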
Kim, Haksoo; Park, Samuel B; Monroe, James I; Traughber, Bryan J; Zheng, Yiran; Lo, Simon S; Yao, Min; Mansur, David; Ellis, Rodney; Machtay, Mitchell; Sohn, Jason W
2015-08-01
This article proposes quantitative analysis tools and digital phantoms to quantify intrinsic errors of deformable image registration (DIR) systems and establish quality assurance (QA) procedures for clinical use of DIR systems, utilizing local and global error analysis methods with clinically realistic digital image phantoms. Landmark-based image registration verifications are suitable only for images with significant feature points. To address this shortfall, we adapted a deformation vector field (DVF) comparison approach with new analysis techniques to quantify the results. Digital image phantoms are derived from data sets of actual patient images (a reference image set, R, and a test image set, T). Image sets from the same patient taken at different times are registered with deformable methods, producing a reference DVFref. Applying DVFref to the original reference image deforms T into a new image R'. The data set R', T, and DVFref forms a realistic truth set and therefore can be used to analyze any DIR system and expose intrinsic errors by comparing DVFref and DVFtest. For quantitative error analysis, two methods were used to calculate and delineate differences between DVFs: (1) a local error analysis tool that displays deformation error magnitudes with color mapping on each image slice, and (2) a global error analysis tool that calculates a deformation error histogram, which describes a cumulative probability function of errors for each anatomical structure. Three digital image phantoms were generated from three patients with head and neck, lung, and liver cancers, respectively. The DIR QA was evaluated using the head and neck case. © The Author(s) 2014.
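Given reference and test DVFs on the same grid, the local and global analyses reduce to a vector-magnitude map and a cumulative histogram inside a structure mask; a minimal sketch with synthetic fields:

```python
import numpy as np

def dvf_error_analysis(dvf_test, dvf_ref, mask, bins=50):
    """Local + global DIR error analysis: per-voxel magnitude of the DVF
    difference, and a cumulative error histogram inside a structure mask."""
    err = np.linalg.norm(dvf_test - dvf_ref, axis=-1)     # local error map (mm)
    vals = err[mask]
    hist, edges = np.histogram(vals, bins=bins)
    cdf = np.cumsum(hist) / vals.size                     # cumulative probability
    return err, edges[1:], cdf

shape = (16, 16, 16, 3)
rng = np.random.default_rng(2)
dvf_ref = rng.normal(size=shape)
dvf_test = dvf_ref + rng.normal(scale=0.5, size=shape)    # intrinsic DIR error
mask = np.zeros(shape[:3], bool); mask[4:12, 4:12, 4:12] = True
err_map, edges, cdf = dvf_error_analysis(dvf_test, dvf_ref, mask)
print(edges[np.searchsorted(cdf, 0.95)])                  # 95th-percentile error, mm
```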
Pightling, Arthur W.; Petronella, Nicholas; Pagotto, Franco
2014-01-01
The wide availability of whole-genome sequencing (WGS) and an abundance of open-source software have made detection of single-nucleotide polymorphisms (SNPs) in bacterial genomes an increasingly accessible and effective tool for comparative analyses. Thus, ensuring that real nucleotide differences between genomes (i.e., true SNPs) are detected at high rates and that the influences of errors (such as false positive SNPs, ambiguously called sites, and gaps) are mitigated is of utmost importance. The choices researchers make regarding the generation and analysis of WGS data can greatly influence the accuracy of short-read sequence alignments and, therefore, the efficacy of such experiments. We studied the effects of some of these choices, including: i) depth of sequencing coverage, ii) choice of reference-guided short-read sequence assembler, iii) choice of reference genome, and iv) whether to perform read-quality filtering and trimming, on our ability to detect true SNPs and on the frequencies of errors. We performed benchmarking experiments, during which we assembled simulated and real Listeria monocytogenes strain 08-5578 short-read sequence datasets of varying quality with four commonly used assemblers (BWA, MOSAIK, Novoalign, and SMALT), using reference genomes of varying genetic distances, and with or without read pre-processing (i.e., quality filtering and trimming). We found that assemblies of at least 50-fold coverage provided the most accurate results. In addition, MOSAIK yielded the fewest errors when reads were aligned to a nearly identical reference genome, while using SMALT to align reads against a reference sequence that is ∼0.82% distant from 08-5578 at the nucleotide level resulted in the detection of the greatest numbers of true SNPs and the fewest errors. Finally, we show that whether read pre-processing improves SNP detection depends upon the choice of reference sequence and assembler. In total, this study demonstrates that researchers should test a variety of conditions to achieve optimal results. PMID:25144537
The Influence of the Terrestrial Reference Frame on Studies of Sea Level Change
NASA Astrophysics Data System (ADS)
Nerem, R. S.; Bar-Sever, Y. E.; Haines, B. J.; Desai, S.; Heflin, M. B.
2015-12-01
The terrestrial reference frame (TRF) provides the foundation for the accurate monitoring of sea level using both ground-based (tide gauges) and space-based (satellite altimetry) techniques. For the latter, tide gauges are also used to monitor drifts in the satellite instruments over time. The accuracy of the TRF is thus a critical component for both types of sea level measurements. The TRF is central to the formation of geocentric sea-surface height (SSH) measurements from satellite altimeter data. The computed satellite orbits are linked to a particular TRF via the assumed locations of the ground-based tracking systems. The manner in which TRF errors are expressed in the orbit solution (and thus SSH) is not straightforward, and depends on the models of the forces underlying the satellite's motion. We discuss this relationship, and provide examples of the systematic TRF-induced errors in the altimeter-derived sea-level record. The TRF is also crucial to the interpretation of tide-gauge measurements, as it enables the separation of vertical land motion from volumetric changes in the water level. TRF errors affect tide gauge measurements through GNSS estimates of the vertical land motion at each tide gauge. This talk will discuss the current accuracy of the TRF and how errors in the TRF impact both satellite altimeter and tide gauge sea level measurements. We will also discuss simulations of how the proposed Geodetic Reference Antenna in SPace (GRASP) satellite mission could reduce these errors and revolutionize how reference frames are computed in general.
NASA Technical Reports Server (NTRS)
VanZwieten, Tannen; Zhu, J. Jim; Adami, Tony; Berry, Kyle; Grammar, Alex; Orr, Jeb S.; Best, Eric A.
2014-01-01
Recently, a robust and practical adaptive control scheme for launch vehicles [1] has been introduced. It augments a classical controller with a real-time loop-gain adaptation, and is therefore called Adaptive Augmentation Control (AAC). The loop-gain will be increased from the nominal design when the tracking error between the (filtered) output and the (filtered) command trajectory is large, whereas it will be decreased when excitation of flex or sloshing modes is detected. There is a need to determine the range and rate of the loop-gain adaptation in order to retain (exponential) stability, which is critical in vehicle operation, and to develop some theoretically based heuristic tuning methods for the adaptive law gain parameters. Classical launch vehicle flight controller design techniques are based on gain-scheduling, whereby the launch vehicle dynamics model is linearized at selected operating points along the nominal tracking command trajectory, and Linear Time-Invariant (LTI) controller design techniques are employed to ensure asymptotic stability of the tracking error dynamics, typically by meeting some prescribed Gain Margin (GM) and Phase Margin (PM) specifications. The controller gains at the design points are then scheduled, tuned and sometimes interpolated to achieve good performance and stability robustness under external disturbances (e.g. winds) and structural perturbations (e.g. vehicle modeling errors). While the GM does give a bound for loop-gain variation without losing stability, it applies only to constant dispersions of the loop-gain, because the GM is based on frequency-domain analysis, which is applicable only to LTI systems. The real-time adaptive loop-gain variation of the AAC effectively renders the closed-loop system time-varying, and it is well known that an LTI stability criterion is neither necessary nor sufficient when applied to a Linear Time-Varying (LTV) system in a frozen-time fashion. Therefore, a generalized stability metric for time-varying loop-gain perturbations is needed for the AAC.
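A schematic (non-flight) rendering of the adaptation logic described: raise the loop gain on large low-frequency tracking error, cut it on flex/slosh-band energy, and leak back toward nominal, with hard bounds standing in for the stability-preserving range the paper seeks to establish. All rates and bounds are invented for illustration.

```python
import numpy as np

def aac_gain_step(k, k_nom, err_lp, err_hp, dt=0.02,
                  a_up=4.0, a_dn=40.0, leak=1.0, k_min=0.5, k_max=2.0):
    """One step of a schematic adaptive-augmentation gain law: increase the
    loop gain when the low-pass-filtered tracking error is large, decrease it
    when high-frequency (flex/slosh-band) energy is detected, and leak back
    toward the nominal gain otherwise. Rates and bounds are illustrative."""
    dk = a_up * err_lp**2 - a_dn * err_hp**2 - leak * (k - k_nom)
    return float(np.clip(k + dk * dt, k_min * k_nom, k_max * k_nom))

k = 1.0
for err_lp, err_hp in [(0.5, 0.0), (0.5, 0.0), (0.0, 0.3), (0.0, 0.0)]:
    k = aac_gain_step(k, 1.0, err_lp, err_hp)
    print(round(k, 3))
```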
A dynamic system matching technique for improving the accuracy of MEMS gyroscopes
NASA Astrophysics Data System (ADS)
Stubberud, Peter A.; Stubberud, Stephen C.; Stubberud, Allen R.
2014-12-01
A classical MEMS gyro transforms angular rates into electrical values through Euler's equations of angular rotation. Production models of a MEMS gyroscope will have manufacturing errors in the coefficients of the differential equations. The output signal of a production gyroscope will be corrupted by noise, with a major component of the noise due to the manufacturing errors. As is the case for the components in an analog electronic circuit, one way of controlling the variability of a subsystem is to impose extremely tight control on the manufacturing process so that the coefficient values are within some specified bounds. This can be expensive and may even be impossible as is the case in certain applications of micro-electromechanical (MEMS) sensors. In a recent paper [2], the authors introduced a method for combining the measurements from several nominally equal MEMS gyroscopes using a technique based on a concept from electronic circuit design called dynamic element matching [1]. Because the method in this paper deals with systems rather than elements, it is called a dynamic system matching technique (DSMT). The DSMT generates a single output by randomly switching the outputs of several, nominally identical, MEMS gyros in and out of the switch output. This has the effect of 'spreading the spectrum' of the noise caused by the coefficient errors generated in the manufacture of the individual gyros. A filter can then be used to eliminate that part of the spread spectrum that is outside the pass band of the gyro. A heuristic analysis in that paper argues that the DSMT can be used to control the effects of the random coefficient variations. In a follow-on paper [4], a simulation of a DSMT indicated that the heuristics were consistent. In this paper, analytic expressions of the DSMT noise are developed which confirm that the earlier conclusions are valid. These expressions include the various DSMT design parameters and, therefore, can be used as design tools for DSMT systems.
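A minimal sketch of the switching idea (not the authors' implementation; the gyro error model and filter are assumed): randomly selecting one of several nominally identical gyro outputs at each sample spreads the inter-device manufacturing errors across the spectrum, where a low-pass filter can remove most of them:

    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 100_000, 4                     # samples, number of gyros
    true_rate = np.ones(n)                # constant 1 rad/s input (assumed)
    scale = 1 + 0.01 * rng.standard_normal(m)   # per-gyro coefficient errors
    bias = 0.005 * rng.standard_normal(m)
    gyros = true_rate[:, None] * scale + bias   # (n, m) individual outputs

    pick = rng.integers(0, m, size=n)     # random switch: one gyro per sample
    dsmt = gyros[np.arange(n), pick]      # spread-spectrum composite output

    k = 500                               # moving-average low-pass filter
    smoothed = np.convolve(dsmt, np.ones(k) / k, mode='valid')
    print(dsmt.std(), smoothed.std())     # switching noise strongly attenuated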
Mission Design, Guidance, and Navigation of a Callisto-Io-Ganymede Triple Flyby Jovian Capture
NASA Astrophysics Data System (ADS)
Didion, Alan M.
Use of a triple-satellite-aided capture maneuver to enter Jovian orbit reduces insertion DeltaV and provides close flyby science opportunities at three of Jupiter's four large Galilean moons. This capture can be performed while maintaining appropriate Jupiter standoff distance and setting up a suitable apojove for plotting an extended tour. This paper has three main chapters, the first of which discusses the design and optimization of a triple-flyby capture trajectory. A novel triple-satellite-aided capture uses sequential flybys of Callisto, Io, and Ganymede to reduce the DeltaV required to capture into orbit about Jupiter. An optimal broken-plane maneuver is added between Earth and Jupiter to form a complete chemical/impulsive interplanetary trajectory from Earth to Jupiter. Such a trajectory can yield significant fuel savings over single and double-flyby capture schemes while maintaining a brief and simple interplanetary transfer phase. The second chapter focuses on the guidance and navigation of such trajectories in the presence of spacecraft navigation errors, ephemeris errors, and maneuver execution errors. A powered-flyby trajectory correction maneuver (TCM) is added to the nominal trajectory at Callisto and the nominal Jupiter orbit insertion (JOI) maneuver is modified to both complete the capture and target the Ganymede flyby. A third TCM is employed after all the flybys to act as a JOI cleanup maneuver. A Monte Carlo simulation shows that the statistical DeltaV required to correct the trajectory is quite manageable and the flyby characteristics are very consistent. The developed methods maintain flexibility for adaptation to similar launch, cruise, and capture conditions. The third chapter details the methodology and results behind a completely separate project to design and optimize an Earth-orbiting three satellite constellation to perform very long baseline interferometry (VLBI) as part of the 8th annual Global Trajectory Optimisation Competition (GTOC8). A script is designed to simulate the prescribed constellation and record its observations; the observations made are scored according to a provided performance index.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qian, J.; Assoufid, L.; Macrander, A.
2007-01-01
Long trace profilers (LTPs) have been used at many synchrotron radiation laboratories worldwide for over a decade to measure surface slope profiles of long grazing-incidence x-ray mirrors. Phase measuring interferometers (PMIs) of the Fizeau type, on the other hand, are used by most mirror manufacturers to accomplish the same task. However, large mirrors whose dimensions exceed the aperture of the Fizeau interferometer require measurements to be carried out at grazing incidence, and aspheric optics require the use of a null lens. While an LTP provides a direct measurement of 1D slope profiles, PMIs measure area height profiles from which the slope can be obtained by a differentiation algorithm. Measurements of the two types of instruments have been found by us to be in good agreement, but to our knowledge there is no published work directly comparing the two instruments. This paper documents that comparison. We measured two different nominally flat mirrors with both the LTP in operation at the Advanced Photon Source (a type-II LTP) and a Fizeau-type PMI (Wyko model 6000). One mirror was 500 mm long and made of Zerodur, and the other mirror was 350 mm long and made of silicon. Slope error results with these instruments agree almost exactly (3.11 ± 0.15 µrad for the LTP, and 3.11 ± 0.02 µrad for the Fizeau PMI) for the medium-quality Zerodur mirror with 3 µrad rms nominal slope error. A significant difference was observed with the much higher quality silicon mirror. For the Si mirror, the slope error is 0.39 ± 0.08 µrad from LTP measurements but 0.35 ± 0.01 µrad from PMI measurements. The standard deviations show that the Fizeau PMI has much better measurement repeatability.
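The differentiation step mentioned above is simple in its minimal form (synthetic profile assumed): a PMI height profile is converted to a slope profile by a finite-difference gradient, from which an rms slope error follows:

    import numpy as np

    x = np.linspace(0.0, 0.5, 2001)               # mirror coordinate, m (assumed)
    height = 50e-9 * np.sin(2 * np.pi * x / 0.1)  # synthetic height profile, m

    slope = np.gradient(height, x)                # 1D slope profile, rad
    rms = np.sqrt(np.mean((slope - slope.mean()) ** 2))
    print(rms * 1e6, 'urad rms')                  # ~2.2 urad for this profile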
Sommargren, Gary E.; Campbell, Eugene W.
2004-03-09
To measure a convex mirror, a reference beam and a measurement beam are both provided through a single optical fiber. A positive auxiliary lens is placed in the system to give a converging wavefront onto the convex mirror under test. A measurement is taken that includes the aberrations of the convex mirror as well as the errors due to two transmissions through the positive auxiliary lens. A second measurement provides the information to eliminate this error. A negative lens can also be measured in a similar way. Again, there are two measurement set-ups. A reference beam is provided from a first optical fiber and a measurement beam is provided from a second optical fiber. A positive auxiliary lens is placed in the system to provide a converging wavefront from the reference beam onto the negative lens under test. The measurement beam is combined with the reference wavefront and is analyzed by standard methods. This measurement includes the aberrations of the negative lens, as well as the errors due to a single transmission through the positive auxiliary lens. A second measurement provides the information to eliminate this error.
Sommargren, Gary E.; Campbell, Eugene W.
2005-06-21
To measure a convex mirror, a reference beam and a measurement beam are both provided through a single optical fiber. A positive auxiliary lens is placed in the system to give a converging wavefront onto the convex mirror under test. A measurement is taken that includes the aberrations of the convex mirror as well as the errors due to two transmissions through the positive auxiliary lens. A second measurement provides the information to eliminate this error. A negative lens can also be measured in a similar way. Again, there are two measurement set-ups. A reference beam is provided from a first optical fiber and a measurement beam is provided from a second optical fiber. A positive auxiliary lens is placed in the system to provide a converging wavefront from the reference beam onto the negative lens under test. The measurement beam is combined with the reference wavefront and is analyzed by standard methods. This measurement includes the aberrations of the negative lens, as well as the errors due to a single transmission through the positive auxiliary lens. A second measurement provides the information to eliminate this error.
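The error-elimination arithmetic described in these abstracts reduces to a subtraction once the two measurements are in hand. A toy sketch (the wavefront maps, and the assumption that the second measurement isolates the double-pass lens error, are illustrative, not the patent's exact second setup):

    import numpy as np

    rng = np.random.default_rng(1)
    w_mirror = 0.05 * rng.standard_normal((64, 64))  # mirror figure error, waves
    w_lens2p = 0.02 * rng.standard_normal((64, 64))  # double-pass lens error

    meas1 = w_mirror + w_lens2p   # test through the positive auxiliary lens
    meas2 = w_lens2p              # assumed: second measurement isolates lens error
    recovered = meas1 - meas2     # lens contribution eliminated
    print(np.allclose(recovered, w_mirror))  # True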
Self-correcting electronically scanned pressure sensor
NASA Technical Reports Server (NTRS)
Gross, C. (Inventor)
1983-01-01
A multiple-channel, high-data-rate pressure sensing device is disclosed for pressure measurements in wind tunnels, spacecraft, airborne systems, process control, automotive applications, and similar settings. Data rates in excess of 100,000 measurements per second are offered, with inaccuracies from temperature shifts less than 0.25% (nominal) of full scale over a temperature span of 55 °C. The device consists of thirty-two solid-state sensors, signal multiplexing electronics to electronically address each sensor, and digital electronic circuitry to automatically correct the inherent thermal shift errors of the pressure sensors and their associated electronics.
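A minimal sketch of the digital correction step (coefficient values assumed; in the device they would come from per-sensor calibration data): each channel's thermally induced offset and sensitivity shifts are removed before the reading is reported:

    import numpy as np

    off0 = np.array([0.002, -0.001, 0.003])     # zero-pressure offsets (assumed)
    off1 = np.array([1.5e-4, 2.0e-4, -1.0e-4])  # offset drift per degC (assumed)
    sens1 = np.array([-8e-4, -6e-4, -7e-4])     # sensitivity drift per degC

    def correct(raw, dT):
        # remove temperature-dependent offset, then rescale the gain
        offset = off0 + off1 * dT
        gain = 1.0 + sens1 * dT
        return (raw - offset) / gain

    print(correct(np.array([0.51, 0.48, 0.52]), dT=25.0))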
Entry flight control system downmoding evaluation
NASA Technical Reports Server (NTRS)
Barnes, H. A.
1978-01-01
A method to desensitize the entry flight control system to structural vibration feedback which might induce an oscillatory instability is described. Trends in vehicle response and handling characteristics as a function of gain combinations in the FCS forward and rate feedback loops, as observed in a man-in-the-loop simulation, are described. Among the flight conditions considered are the effects of downmoding with APU failures, off-nominal trajectory conditions, sensed angle-of-attack errors, the impact on RCS fuel consumption, performance in the presence of aero variations, recovery from large FCS upsets, and default gains.
Searching for the Final Answer: Factors Contributing to Medication Administration Errors.
ERIC Educational Resources Information Center
Pape, Tess M.
2001-01-01
Causal factors contributing to errors in medication administration should be thoroughly investigated, focusing on systems rather than individual nurses. Unless systemic causes are addressed, many errors will go unreported for fear of reprisal. (Contains 42 references.) (SK)
NASA Astrophysics Data System (ADS)
Kung, Wei-Ying; Kim, Chang-Su; Kuo, C.-C. Jay
2004-10-01
A multi-hypothesis motion compensated prediction (MHMCP) scheme, which predicts a block from a weighted superposition of more than one reference block in the frame buffer, is proposed and analyzed for error-resilient visual communication in this research. By combining these reference blocks effectively, MHMCP can enhance the error-resilient capability of compressed video as well as achieve a coding gain. In particular, we investigate the error propagation effect in the MHMCP coder and analyze the rate-distortion performance in terms of the hypothesis number and hypothesis coefficients. It is shown that MHMCP suppresses the short-term effect of error propagation more effectively than the intra refresh scheme. Simulation results are given to confirm the analysis. Finally, several design principles for the MHMCP coder are derived based on the analytical and experimental results.
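The prediction step itself is a weighted superposition, as in this minimal sketch (block contents and hypothesis coefficients assumed):

    import numpy as np

    def mhmcp_predict(refs, weights):
        # weighted superposition of reference blocks (hypotheses)
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()                   # hypothesis coefficients sum to one
        return np.tensordot(w, refs, axes=1)

    rng = np.random.default_rng(0)
    refs = rng.integers(0, 256, size=(3, 16, 16)).astype(float)  # 3 hypotheses
    pred = mhmcp_predict(refs, [0.5, 0.3, 0.2])
    print(pred.shape)                     # (16, 16)

Because each hypothesis also dilutes any single corrupted reference, the coefficients control the trade-off between coding gain and error resilience analyzed above.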
Omar, Hazim; Ahmad, Alwani Liyan; Hayashi, Noburo; Idris, Zamzuri; Abdullah, Jafri Malin
2015-12-01
Magnetoencephalography (MEG) has been extensively used to measure small-scale neuronal brain activity. Although it is widely acknowledged as a sensitive tool for deciphering brain activity and source localisation, the accuracy of the MEG system must be critically evaluated. Typically, on-site calibration with the provided phantom (LocalPhantom) is used. However, this method is still questionable due to the uncertainty that may originate from the phantom itself. Ideally, the validation of MEG data measurements would require cross-site comparability. A simple method of phantom testing was used twice, in addition to a measurement taken with a calibrated reference phantom (RefPhantom) obtained from Elekta Oy of Helsinki, Finland. Comparisons were made in terms of the dipole moment (Qpp) and the difference in the dipole distance from the origin (d), after tests of statistically equal means and variance were confirmed. The Qpp measurements for the LocalPhantom and RefPhantom were 978 (SD 24) nAm and 988 (SD 32) nAm, respectively, both optimally within the accepted range of 900 to 1100 nAm. Moreover, the shifted d results for the LocalPhantom and RefPhantom were 1.84 mm (SD 0.53) and 2.14 mm (SD 0.78), respectively, both within the maximum acceptance limit of 5.0 mm from the nominal dipole location. The LocalPhantom seems to outperform the reference phantom, as indicated by the small standard error of the former (SE 0.094) compared with the latter (SE 0.138). The results indicate that the HUSM MEG system was in excellent working condition in terms of dipole magnitude and localisation measurements, as these values passed the acceptance limit criteria of the phantom test.
Lobben, Marit; Bochynska, Agata
2018-03-01
Grammatical categories represent implicit knowledge, and it is not known if such abstract linguistic knowledge can be continuously grounded in real-life experiences, nor is it known what types of mental states can be simulated. A former study showed that attention bias in peripersonal space (PPS) affects reaction times in grammatical congruency judgments of nominal classifiers, suggesting that simulated semantics may include reenactment of attention. In this study, we contrasted a Chinese nominal classifier used with nouns denoting pinch grip objects with a classifier for nouns with big object referents in a pupil dilation experiment. Twenty Chinese native speakers read grammatical and ungrammatical classifier-noun combinations and made grammaticality judgment while their pupillary responses were measured. It was found that their pupils dilated significantly more to the pinch grip classifier than to the big object classifier, indicating attention simulation in PPS. Pupil dilations were also significantly larger with congruent trials on the whole than in incongruent trials, but crucially, congruency and classifier semantics were independent of each other. No such effects were found in controls.
Influence of OPD in wavelength-shifting interferometry
NASA Astrophysics Data System (ADS)
Wang, Hongjun; Tian, Ailing; Liu, Bingcai; Dang, Juanjuan
2009-12-01
Phase-shifting interferometry is a powerful tool for high-accuracy optical measurement. It operates by changing the optical path length in the reference or test arm, which is usually done by moving an optical element; this becomes problematic when the element is very large and heavy. Wavelength-shifting interferometry was introduced to solve this problem: the phase-shifting angle is produced by changing the wavelength of the optical source. The phase-shifting angle is determined by the wavelength and by the optical path difference (OPD) between the test and reference wavefronts, so the OPD is an important factor in the measurement results. Because positional and profile errors of the optical element under test exist, the phase-shifting angle differs from point to point during the wavelength scan, introducing phase-shifting angle errors and hence surface measurement errors. To analyze the influence of the OPD on the surface error, the relation between surface error and OPD was studied. The relation between phase-shifting error and OPD was established by simulation, and an error compensation method was derived from the analysis. After error compensation, the measurement results are improved to a great extent.
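The dependence described above follows from the interferometric phase φ = 2π·OPD/λ, whose increment under a wavelength step Δλ is Δφ ≈ 2π·OPD·Δλ/λ². A small sketch (wavelength, nominal OPD, and per-point OPD deviations assumed):

    import numpy as np

    lam = 632.8e-9                        # He-Ne source wavelength, m (assumed)
    opd0 = 0.02                           # nominal OPD, m (assumed)
    dlam = lam**2 / (4 * opd0)            # step giving a nominal 90-degree shift
    opd = opd0 + np.array([0.0, 50e-6, 100e-6])  # per-point OPD deviations
    dphi = 2 * np.pi * opd * dlam / lam**2       # actual phase step per point
    print(np.degrees(dphi))               # [90.0, 90.225, 90.45]

Points whose OPD deviates from the nominal value therefore see a biased phase step, which is the error the proposed compensation targets.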
Automatic Alignment of Displacement-Measuring Interferometer
NASA Technical Reports Server (NTRS)
Halverson, Peter; Regehr, Martin; Spero, Robert; Alvarez-Salazar, Oscar; Loya, Frank; Logan, Jennifer
2006-01-01
A control system strives to maintain the correct alignment of a laser beam in an interferometer dedicated to measuring the displacement or distance between two fiducial corner-cube reflectors. The correct alignment of the laser beam is parallel to the line between the corner points of the corner-cube reflectors: any deviation from parallelism changes the length of the optical path between the reflectors, thereby introducing a displacement or distance measurement error. On the basis of the geometrical optics of corner-cube reflectors, the length of the optical path can be shown to be L = L₀·cos θ, where L₀ is the distance between the corner points and θ is the misalignment angle. Therefore, the measurement error is given by ΔL = L₀(cos θ − 1). In the usual case in which the misalignment is small, this error can be approximated as ΔL ≈ −L₀θ²/2. The control system (see figure) is implemented partly in hardware and partly in software. The control system includes three piezoelectric actuators for rapid, fine adjustment of the direction of the laser beam. The voltages applied to the piezoelectric actuators include components designed to scan the beam in a circular pattern so that the beam traces out a narrow cone (60 microradians wide in the initial application) about the direction in which it is nominally aimed. This scan is performed at a frequency (2.5 Hz in the initial application) well below the resonance frequency of any vibration of the interferometer. The laser beam makes a round trip to both corner-cube reflectors and then interferes with the launched beam. The interference is detected on a photodiode. The length of the optical path is measured by a heterodyne technique: a 100-kHz frequency shift between the launched beam and a reference beam imposes, on the detected signal, an interferometric phase shift proportional to the length of the optical path. A phase meter comprising analog filters and specialized digital circuitry converts the phase shift to an indication of displacement, generating a digital signal proportional to the path length.
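A quick numeric check of the small-angle approximation quoted above (taking θ equal to the 60-microradian cone width; L₀ is assumed):

    import numpy as np

    L0 = 1.0                      # corner-point separation, m (assumed)
    theta = 60e-6                 # cone width, rad
    exact = L0 * (np.cos(theta) - 1.0)
    approx = -L0 * theta**2 / 2.0
    print(exact, approx)          # both about -1.8e-9 m, i.e. ~1.8 nm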
Using explanatory crop models to develop simple tools for Advanced Life Support system studies
NASA Technical Reports Server (NTRS)
Cavazzoni, J.
2004-01-01
System-level analyses for Advanced Life Support require mathematical models for various processes, such as for biomass production and waste management, which would ideally be integrated into overall system models. Explanatory models (also referred to as mechanistic or process models) would provide the basis for a more robust system model, as these would be based on an understanding of specific processes. However, implementing such models at the system level may not always be practicable because of their complexity. For the area of biomass production, explanatory models were used to generate parameters and multivariable polynomial equations for basic models that are suitable for estimating the direction and magnitude of daily changes in canopy gas-exchange, harvest index, and production scheduling for both nominal and off-nominal growing conditions.
Detecting grouting quality of tendon ducts using the impact-echo method
NASA Astrophysics Data System (ADS)
Qu, Guangzhen; Sun, Min; Zhou, Guangli
2018-06-01
The performance, durability, and safety of a prestressed concrete bridge are directly affected by the compaction of the grout in its prestressing ducts. However, the ducts are hidden in the beam, and their grouting density is difficult to detect. In this work, a test model containing three different grouting conditions was built, and the impact-echo method was applied to detect the grouting quality of the tendon ducts. The findings can be summarized as follows: as the reflection time from the slab bottom and the apparent (nominal) slab thickness increased, the degree of compaction increased; when testing from the half-hole of the web, the reflection time and apparent slab thickness were largest. The reflection times of compacted and uncompacted tendon ducts were the principal distinguishing feature. Finally, the method was verified on an engineering project, which provided a practical reference.
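The impact-echo interpretation rests on the standard thickness-frequency relation T = β·Cp/(2f): a longer reflection time (lower dominant frequency f) maps to a larger apparent thickness. A minimal sketch (wave speed, shape factor, and frequencies assumed):

    import numpy as np

    cp = 4000.0        # P-wave speed in concrete, m/s (assumed)
    beta = 0.96        # plate shape factor (typical value)
    f = np.array([8000.0, 6500.0])    # dominant frequencies, Hz (assumed)
    print(beta * cp / (2 * f))        # apparent thickness at each point, m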
Irregular analytical errors in diagnostic testing - a novel concept.
Vogeser, Michael; Seger, Christoph
2018-02-23
In laboratory medicine, routine periodic analyses for internal and external quality control measurements interpreted by statistical methods are mandatory for batch clearance. Data analysis of these process-oriented measurements allows for insight into random analytical variation and systematic calibration bias over time. However, in such a setting, any individual sample is not under individual quality control. The quality control measurements act only at the batch level. Quantitative or qualitative data derived for many effects and interferences associated with an individual diagnostic sample can compromise any analyte. It is obvious that a process for a quality-control-sample-based approach of quality assurance is not sensitive to such errors. To address the potential causes and nature of such analytical interference in individual samples more systematically, we suggest the introduction of a new term called the irregular (individual) analytical error. Practically, this term can be applied in any analytical assay that is traceable to a reference measurement system. For an individual sample an irregular analytical error is defined as an inaccuracy (which is the deviation from a reference measurement procedure result) of a test result that is so high it cannot be explained by measurement uncertainty of the utilized routine assay operating within the accepted limitations of the associated process quality control measurements. The deviation can be defined as the linear combination of the process measurement uncertainty and the method bias for the reference measurement system. Such errors should be coined irregular analytical errors of the individual sample. The measurement result is compromised either by an irregular effect associated with the individual composition (matrix) of the sample or an individual single sample associated processing error in the analytical process. Currently, the availability of reference measurement procedures is still highly limited, but LC-isotope-dilution mass spectrometry methods are increasingly used for pre-market validation of routine diagnostic assays (these tests also involve substantial sets of clinical validation samples). Based on this definition/terminology, we list recognized causes of irregular analytical error as a risk catalog for clinical chemistry in this article. These issues include reproducible individual analytical errors (e.g. caused by anti-reagent antibodies) and non-reproducible, sporadic errors (e.g. errors due to incorrect pipetting volume due to air bubbles in a sample), which can both lead to inaccurate results and risks for patients.
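Read numerically, the definition suggests a simple flagging rule, sketched below with assumed uncertainty figures and an assumed combination rule (the article defines the limit as a linear combination of process uncertainty and method bias):

    import numpy as np

    def irregular(routine, reference, u_process, bias, k=2.0):
        # flag deviations larger than expanded uncertainty plus method bias
        limit = k * u_process + abs(bias)
        return np.abs(routine - reference) > limit

    routine = np.array([5.1, 4.8, 9.6])
    reference = np.array([5.0, 5.0, 5.0])   # reference procedure results
    print(irregular(routine, reference, u_process=0.2, bias=0.1))
    # [False False  True] -> the third sample is an irregular analytical error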
Hühn, M; Piepho, H P
2003-03-01
Tests for linkage are usually performed using the lod score method. A critical question in linkage analyses is the choice of sample size. The appropriate sample size depends on the desired type-I error and power of the test. This paper investigates the exact type-I error and power of the lod score method in a segregating F2 population with co-dominant markers and a qualitative monogenic dominant-recessive trait. For illustration, a disease-resistance trait is considered, where the susceptible allele is recessive. A procedure is suggested for finding the appropriate sample size. It is shown that recessive plants have about twice the information content of dominant plants, so the former should be preferred for linkage detection. In some cases the exact alpha-values for a given nominal alpha may be rather small due to the discrete nature of the sampling distribution in small samples. We show that a gain in power is possible by using exact methods.
For a new look at 'lexical errors': evidence from semantic approximations with verbs in aphasia.
Duvignau, Karine; Tran, Thi Mai; Manchon, Mélanie
2013-08-01
The ability to understand the similarity between two phenomena is fundamental for humans. Designated by the term analogy in psychology, this ability plays a role in the categorization of phenomena in the world and in the organisation of the linguistic system. The use of analogy in language often results in non-standard utterances, particularly in speakers with aphasia. These non-standard utterances are almost always studied in a nominal context and considered as errors. We propose a study of the verbal lexicon and present findings that measure, by an action-video naming task, the importance of verb-based non-standard utterances made by 17 speakers with aphasia ("la dame déshabille l'orange"/the lady undresses the orange, "elle casse la tomate"/she breaks the tomato). The first results we have obtained allow us to consider this type of utterance from a new perspective: we propose to eliminate the label of "error", suggesting that such utterances may be viewed as semantic approximations based upon a relationship of inter-domain synonymy, ingrained at the heart of the lexical system.
Role of color memory in successive color constancy.
Ling, Yazhu; Hurlbert, Anya
2008-06-01
We investigate color constancy for real 2D paper samples using a successive matching paradigm in which the observer memorizes a reference surface color under neutral illumination and after a temporal interval selects a matching test surface under the same or different illumination. We find significant effects of the illumination, reference surface, and their interaction on the matching error. We characterize the matching error in the absence of illumination change as the "pure color memory shift" and introduce a new index for successive color constancy that compares this shift against the matching error under changing illumination. The index also incorporates the vector direction of the matching errors in chromaticity space, unlike the traditional constancy index. With this index, we find that color constancy is nearly perfect.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-20
...The Food and Drug Administration (FDA or we) is correcting the preamble to a proposed rule that published in the Federal Register of January 16, 2013. That proposed rule would establish science-based minimum standards for the safe growing, harvesting, packing, and holding of produce, meaning fruits and vegetables grown for human consumption. FDA proposed these standards as part of our implementation of the FDA Food Safety Modernization Act. The document published with several technical errors, including some errors in cross references, as well as several errors in reference numbers cited throughout the document. This document corrects those errors. We are also placing a corrected copy of the proposed rule in the docket.
Process for computing geometric perturbations for probabilistic analysis
Fitch, Simeon H. K. [Charlottesville, VA; Riha, David S [San Antonio, TX; Thacker, Ben H [San Antonio, TX
2012-04-10
A method for computing geometric perturbations for probabilistic analysis. The probabilistic analysis is based on finite element modeling, in which uncertainties in the modeled system are represented by changes in the nominal geometry of the model, referred to as "perturbations". These changes are accomplished using displacement vectors, which are computed for each node of a region of interest and are based on mean-value coordinate calculations.
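A minimal sketch of the perturbation step (node coordinates and displacement directions assumed; the mean-value coordinate computation that produces the displacement vectors is omitted):

    import numpy as np

    rng = np.random.default_rng(0)
    nodes = rng.random((100, 3))             # nominal node coordinates (assumed)
    disp = rng.standard_normal((100, 3))     # per-node displacement vectors
    disp /= np.linalg.norm(disp, axis=1, keepdims=True)

    def perturb(nodes, disp, sigma=0.01):
        # one geometric realization for a probabilistic finite element run
        amplitude = rng.normal(0.0, sigma)   # random perturbation amplitude
        return nodes + amplitude * disp

    sample = perturb(nodes, disp)
    print(np.abs(sample - nodes).max())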
NASA Technical Reports Server (NTRS)
Altunin, V.; Alekseev, V.; Akim, E.; Eubanks, M.; Kingham, K.; Treuhaft, R.; Sukhanov, K.
1995-01-01
A proposed new space radio astronomy mission for astrometry is described. The Astrometry VLBI (very long baseline interferometry) in Space (AVS) nominal mission includes two identical spacecraft, each with a 4-m antenna sending data to a 70-m ground station. The goals of AVS are improving astrometry accuracy to the microarcsecond level and improving the accuracy of the transformation between the inertial radio and optical coordinate reference frames.
The Calibration of Gloss Reference Standards
NASA Astrophysics Data System (ADS)
Budde, W.
1980-04-01
In present international and national standards for the measurement of specular gloss, the primary and secondary reference standards are defined for monochromatic radiation. However, the specified glossmeter uses polychromatic radiation (CIE Standard Illuminant C) and the CIE Standard Photometric Observer. This produces errors in practical gloss measurements of up to 0.5%. Although this may be considered small compared to the accuracy of most practical gloss measurements, such an error should not be tolerated in the calibration of secondary standards. Corrections for such errors are presented, and various alternatives for amendments of the existing documentary standards are discussed.
Effect of defuzzification method of fuzzy modeling
NASA Astrophysics Data System (ADS)
Lapohos, Tibor; Buchal, Ralph O.
1994-10-01
Imprecision can arise in fuzzy relational modeling as a result of fuzzification, inference, and defuzzification. These three sources of imprecision are difficult to separate. We have determined through numerical studies that an important source of imprecision is the defuzzification stage. This imprecision adversely affects the quality of the model output. The most widely used defuzzification algorithm is known as 'center of area' (COA) or 'center of gravity' (COG). In this paper, we show that this algorithm not only maps the near-limit values of the variables improperly but also introduces errors for middle-domain values of the same variables. Furthermore, the behavior of this algorithm is a function of the shape of the reference sets. We compare the COA method to the weighted average of cluster centers (WACC) procedure, in which the transformation is carried out based on the values of the cluster centers belonging to each of the reference membership functions instead of using the functions themselves. We show that this procedure is more effective and computationally much faster than the COA. The method is tested for a family of reference sets satisfying certain constraints: for any support value the sum of reference membership function values equals one, and the peak values of the two marginal membership functions project to the boundaries of the universe of discourse. For all member sets of this family of reference sets, the defuzzification errors do not grow as the linguistic variables tend to their extreme values. In addition, the more reference sets that are defined for a certain linguistic variable, the smaller the average defuzzification error becomes. In the case of triangle-shaped reference sets there is no defuzzification error at all. Finally, an alternative solution is provided that improves the performance of the COA method.
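The two defuzzifiers compare as in this sketch (triangular reference sets satisfying the stated constraints; membership degrees assumed):

    import numpy as np

    x = np.linspace(0.0, 1.0, 1001)          # universe of discourse

    def tri(x, a, b, c):
        # triangular membership function with peak at b
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    centers = np.array([0.0, 0.5, 1.0])      # cluster centers of three sets
    mfs = np.array([tri(x, -0.5, 0.0, 0.5),
                    tri(x, 0.0, 0.5, 1.0),
                    tri(x, 0.5, 1.0, 1.5)])
    mu = np.array([0.2, 0.7, 0.1])           # inference output degrees (assumed)

    agg = np.max(np.minimum(mu[:, None], mfs), axis=0)  # clipped aggregation
    coa = (agg * x).sum() / agg.sum()        # center of area (discrete centroid)
    wacc = np.dot(mu, centers) / mu.sum()    # weighted average of cluster centers
    print(coa, wacc)

WACC needs no integration over the aggregated shape, which is the source of the speed advantage reported above.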
Farooqui, Javed Hussain; Sharma, Mansi; Koul, Archana; Dutta, Ranjan; Shroff, Noshir Minoo
2017-01-01
PURPOSE: The aim of this study is to compare two different methods of analysis of preoperative reference marking for toric intraocular lens (IOL) implantation after marking with an electronic marker. SETTING/VENUE: Cataract and IOL Implantation Service, Shroff Eye Centre, New Delhi, India. PATIENTS AND METHODS: Fifty-two eyes of thirty patients planned for toric IOL implantation were included in the study. All patients had preoperative marking performed with an electronic preoperative two-step toric IOL reference marker (ASICO AE-2929). Reference marks were placed at the 3- and 9-o'clock positions. Marks were analyzed with two systems. First, slit-lamp photographs were taken and analyzed using Adobe Photoshop (version 7.0). Second, a Tracey iTrace Visual Function Analyzer (version 5.1.1) was used to capture corneal topography, and the position of the marks was noted. The amount of alignment error was calculated. RESULTS: Mean absolute rotation error was 2.38 ± 1.78° by Photoshop and 2.87 ± 2.03° by iTrace, which was not statistically significant (P = 0.215). Nearly 72.7% of eyes by Photoshop and 61.4% by iTrace had rotation error ≤3° (P = 0.359); and 90.9% of eyes by Photoshop and 81.8% by iTrace had rotation error ≤5° (P = 0.344). There was no significant difference in the absolute amount of rotation between eyes when analyzed by either method. CONCLUSIONS: The difference in reference mark positions when analyzed by the two systems suggests the presence of varying cyclotorsion at different points in time. Both analysis methods showed approximately 3° of alignment error, which could contribute to a 10% loss of astigmatic correction of the toric IOL. This can be further compounded by intraoperative marking errors and final placement of the IOL in the bag. PMID:28757694
An Optimal Control Modification to Model-Reference Adaptive Control for Fast Adaptation
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Krishnakumar, Kalmanje; Boskovic, Jovan
2008-01-01
This paper presents a method that can achieve fast adaptation for a class of model-reference adaptive control. It is well known that standard model-reference adaptive control exhibits high-gain control behaviors when a large adaptive gain is used to achieve fast adaptation in order to reduce tracking error rapidly. High-gain control creates high-frequency oscillations that can excite unmodeled dynamics and can lead to instability. The fast adaptation approach is based on the minimization of the squares of the tracking error, which is formulated as an optimal control problem. The necessary condition of optimality is used to derive an adaptive law using the gradient method. This adaptive law is shown to result in uniform boundedness of the tracking error by means of Lyapunov's direct method. Furthermore, this adaptive law allows a large adaptive gain to be used without causing undesired high-gain control effects. The method is shown to be more robust than standard model-reference adaptive control. Simulations demonstrate the effectiveness of the proposed method.
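A toy scalar simulation in the spirit of the abstract (the plant, gains, and the specific damping term below are assumptions, not the paper's exact modification): a gradient adaptive law reduces tracking error, while a ν-weighted damping term limits the effective adaptive gain:

    import numpy as np

    dt, n = 0.001, 10_000
    a, b = 1.0, 1.0              # unknown unstable plant: xdot = a*x + b*u
    am, bm = -2.0, 2.0           # reference model: xmdot = am*xm + bm*r
    gamma, nu = 50.0, 0.1        # adaptive gain and modification weight (assumed)

    x = xm = 0.0
    kx = kr = 0.0                # adaptive feedback/feedforward gains
    for i in range(n):
        r = np.sign(np.sin(np.pi * i * dt))   # square-wave command
        e = x - xm
        u = kx * x + kr * r
        # gradient law plus a damping ("modification") term on each gain
        kx += dt * (-gamma * e * x - nu * gamma * x * x * kx)
        kr += dt * (-gamma * e * r - nu * gamma * r * r * kr)
        x += dt * (a * x + b * u)
        xm += dt * (am * xm + bm * r)
    print(abs(e))                # tracking error is small after adaptation

Setting ν = 0 recovers the standard gradient law, whose high-gain oscillations at large γ are the motivation for the modification.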
Propagation of stage measurement uncertainties to streamflow time series
NASA Astrophysics Data System (ADS)
Horner, Ivan; Le Coz, Jérôme; Renard, Benjamin; Branger, Flora; McMillan, Hilary
2016-04-01
Streamflow uncertainties due to stage measurement errors are generally overlooked in the promising probabilistic approaches that have emerged in the last decade. We introduce an original error model for propagating stage uncertainties through a stage-discharge rating curve within a Bayesian probabilistic framework. The method takes into account both rating curve uncertainty (parametric and structural errors) and stage uncertainty (systematic and non-systematic errors). Practical ways to estimate the different types of stage errors are also presented: (1) non-systematic errors due to instrument resolution and precision and non-stationary waves and (2) systematic errors due to gauge calibration against the staff gauge. The method is illustrated at a site where the rating-curve-derived streamflow can be compared with an accurate streamflow reference. The agreement between the two time series is overall satisfactory. Moreover, the quantification of uncertainty is also satisfactory, since the streamflow reference is compatible with the streamflow uncertainty intervals derived from the rating curve and the stage uncertainties. Illustrations from other sites are also presented. Results vary considerably depending on site features. In some cases, streamflow uncertainty is mainly due to stage measurement errors. The results also show the importance of discriminating systematic and non-systematic stage errors, especially for long-term flow averages. Perspectives for improving and validating the streamflow uncertainty estimates are finally discussed.
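A bare-bones Monte Carlo version of the propagation idea (power-law rating curve and error magnitudes assumed; the paper's Bayesian treatment of rating-curve parametric and structural errors is not reproduced):

    import numpy as np

    rng = np.random.default_rng(0)
    a, b, c = 10.0, 0.2, 1.7         # rating curve Q = a*(h - b)**c (assumed)
    h = np.array([0.8, 1.2, 2.5])    # observed stages, m

    n = 10_000
    sys = rng.normal(0.0, 0.01, size=n)                # systematic gauge error, m
    nonsys = rng.normal(0.0, 0.005, size=(n, h.size))  # per-reading noise, m

    q = a * (h + sys[:, None] + nonsys - b) ** c       # streamflow realizations
    print(np.percentile(q, [2.5, 97.5], axis=0))       # 95% intervals per stage

Because the systematic draw is shared across readings within a realization, it does not average out in long-term flow means, illustrating the distinction the abstract emphasizes.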
Artificial Vector Calibration Method for Differencing Magnetic Gradient Tensor Systems
Li, Zhining; Zhang, Yingtang; Yin, Gang
2018-01-01
The measurement error of the differencing (i.e., using two homogeneous field sensors at a known baseline distance) magnetic gradient tensor system includes the biases, scale factors, and nonorthogonality of the single magnetic sensor, and the misalignment error between the sensor arrays, all of which can severely affect the measurement accuracy. In this paper, we propose a low-cost artificial vector calibration method for the tensor system. Firstly, the error parameter linear equations are constructed based on the single sensor's system error model to obtain the artificial ideal vector output of the platform, with the total magnetic intensity (TMI) scalar as a reference, by two nonlinear conversions, without any mathematical simplification. Secondly, the Levenberg–Marquardt algorithm is used to compute the integrated model of the 12 error parameters by a nonlinear least-squares fitting method with the artificial vector output as a reference, and a total of 48 parameters of the system are estimated simultaneously. The calibrated system output is expressed in the reference platform-orthogonal coordinate system. The analysis results show that the artificial vector calibrated output can track the orientation fluctuations of TMI accurately, effectively avoiding the "overcalibration" problem. The accuracy of the error parameters' estimation in the simulation is close to 100%. The experimental root-mean-square error (RMSE) of the TMI and tensor components is less than 3 nT and 20 nT/m, respectively, and the estimation of the parameters is highly robust. PMID:29373544
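A simplified version of the scalar-reference fitting step (one three-axis sensor with bias and scale-factor errors only; the paper's full 48-parameter model and its two nonlinear conversions are not reproduced):

    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(0)
    n = 200
    field = rng.standard_normal((n, 3))
    field *= 50_000 / np.linalg.norm(field, axis=1, keepdims=True)  # nT
    tmi = np.linalg.norm(field, axis=1)      # scalar TMI reference

    bias = np.array([30.0, -20.0, 10.0])     # injected sensor errors (assumed)
    scale = np.array([1.01, 0.99, 1.02])
    raw = field * scale + bias               # distorted vector measurements

    def residual(p):
        corrected = (raw - p[:3]) / p[3:]
        return np.linalg.norm(corrected, axis=1) - tmi

    fit = least_squares(residual, np.concatenate([np.zeros(3), np.ones(3)]),
                        method='lm')         # Levenberg-Marquardt, as in the paper
    print(fit.x)                             # recovered biases and scale factors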
Patient identification using a near-infrared laser scanner
NASA Astrophysics Data System (ADS)
Manit, Jirapong; Bremer, Christina; Schweikard, Achim; Ernst, Floris
2017-03-01
We propose a new biometric approach where the tissue thickness of a person's forehead is used as a biometric feature. Given that the spatial registration of two 3D laser scans of the same human face usually produces a low error value, the principle of point cloud registration and its error metric can be applied to human classification techniques. However, by only considering the spatial error, it is not possible to reliably verify a person's identity. We propose to use a novel near-infrared laser-based head tracking system to determine an additional feature, the tissue thickness, and include this in the error metric. Using MRI as a ground truth, data from the foreheads of 30 subjects were collected, from which a 4D reference point cloud was created for each subject. The measurements from the near-infrared system were registered with all reference point clouds using the ICP algorithm. Afterwards, the spatial and tissue thickness errors were extracted, forming a 2D feature space. For all subjects, the lowest feature distance resulted from the registration of a measurement and the reference point cloud of the same person. The combined registration error features yielded two clusters in the feature space, one from the same subject and another from the other subjects. When only the tissue thickness error was considered, these clusters were less distinct but still present. These findings could help to raise safety standards for head and neck cancer patients and lay the foundation for a future human identification technique.
ERIC Educational Resources Information Center
Sun, Wei; And Others
1992-01-01
Identifies types and distributions of errors in text produced by optical character recognition (OCR) and proposes a process using machine learning techniques to recognize and correct errors in OCR texts. Results of experiments indicating that this strategy can reduce human interaction required for error correction are reported. (25 references)…
Reliability issues in human brain temperature measurement
2009-01-01
Introduction The influence of brain temperature on clinical outcome after severe brain trauma is currently poorly understood. When brain temperature is measured directly, different values between the inside and outside of the head can occur. It is not yet clear if these differences are 'real' or due to measurement error. Methods The aim of this study was to assess the performance and measurement uncertainty of body and brain temperature sensors currently in use in neurocritical care. Two organic fixed-point, ultra stable temperature sources were used as the temperature references. Two different types of brain sensor (brain type 1 and brain type 2) and one body type sensor were tested under rigorous laboratory conditions and at the bedside. Measurement uncertainty was calculated using internationally recognised methods. Results Average differences between the 26°C reference temperature source and the clinical temperature sensors were +0.11°C (brain type 1), +0.24°C (brain type 2) and -0.15°C (body type), respectively. For the 36°C temperature reference source, average differences between the reference source and clinical thermometers were -0.02°C, +0.09°C and -0.03°C for brain type 1, brain type 2 and body type sensor, respectively. Repeat calibrations the following day confirmed that these results were within the calculated uncertainties. The results of the immersion tests revealed that the reading of the body type sensor was sensitive to position, with differences in temperature of -0.5°C to -1.4°C observed on withdrawing the thermometer from the base of the isothermal environment by 4 cm and 8 cm, respectively. Taking into account all the factors tested during the calibration experiments, the measurement uncertainty of the clinical sensors against the (nominal) 26°C and 36°C temperature reference sources for the brain type 1, brain type 2 and body type sensors were ± 0.18°C, ± 0.10°C and ± 0.12°C respectively. Conclusions The results show that brain temperature sensors are fundamentally accurate and the measurements are precise to within 0.1 to 0.2°C. Subtle dissociation between brain and body temperature in excess of 0.1 to 0.2°C is likely to be real. Body temperature sensors need to be secured in position to ensure that measurements are reliable. PMID:19573241
Radiative flux and forcing parameterization error in aerosol-free clear skies
Pincus, Robert; Mlawer, Eli J.; Oreopoulos, Lazaros; ...
2015-07-03
This article reports on the accuracy in aerosol- and cloud-free conditions of the radiation parameterizations used in climate models. Accuracy is assessed relative to observationally validated reference models for fluxes under present-day conditions and forcing (flux changes) from quadrupled concentrations of carbon dioxide. Agreement among reference models is typically within 1 W/m², while parameterized calculations are roughly half as accurate in the longwave and even less accurate, and more variable, in the shortwave. Absorption of shortwave radiation is underestimated by most parameterizations in the present day and has relatively large errors in forcing. Error in present-day conditions is essentially unrelated to error in forcing calculations. Recent revisions to parameterizations have reduced error in most cases. As a result, a dependence on atmospheric conditions, including integrated water vapor, means that global estimates of parameterization error relevant for the radiative forcing of climate change will require much more ambitious calculations.
Neural network-based model reference adaptive control system.
Patino, H D; Liu, D
2000-01-01
In this paper, an approach to model reference adaptive control based on neural networks is proposed and analyzed for a class of first-order continuous-time nonlinear dynamical systems. The controller structure can employ either a radial basis function network or a feedforward neural network to compensate adaptively the nonlinearities in the plant. A stable controller-parameter adjustment mechanism, which is determined using the Lyapunov theory, is constructed using a sigma-modification-type updating law. The evaluation of control error in terms of the neural network learning error is performed. That is, the control error converges asymptotically to a neighborhood of zero, whose size is evaluated and depends on the approximation error of the neural network. In the design and analysis of neural network-based control systems, it is important to take into account the neural network learning error and its influence on the control error of the plant. Simulation results showing the feasibility and performance of the proposed approach are given.
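A minimal sketch of an RBF compensator with a sigma-modification update (features, gains, and leakage constant assumed): the σθ leakage keeps the weights bounded even when the error does not vanish:

    import numpy as np

    centers = np.linspace(-2.0, 2.0, 9)

    def rbf(x, width=0.5):
        # Gaussian radial basis features
        return np.exp(-((x - centers) / width) ** 2)

    gamma, sigma, dt = 20.0, 0.05, 0.001     # learning rate and leakage (assumed)

    def update(theta, x, e):
        # sigma-modification: gradient step plus leakage toward zero
        return theta + dt * (-gamma * e * rbf(x) - sigma * gamma * theta)

    theta = np.zeros_like(centers)
    theta = update(theta, x=0.3, e=0.1)
    print(theta)

The neighborhood to which the control error converges shrinks with the network's approximation error, matching the evaluation described above.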
NASA Technical Reports Server (NTRS)
Gundy-Burlet, Karen
2003-01-01
The Neural Flight Control System (NFCS) was developed to address the need for control systems that can be produced and tested at lower cost, easily adapted to prototype vehicles, and for flight systems that can accommodate damaged control surfaces or changes to aircraft stability and control characteristics resulting from failures or accidents. NFCS utilizes a neural network-based flight control algorithm which automatically compensates for a broad spectrum of unanticipated damage or failures of an aircraft in flight. Pilot stick and rudder pedal inputs are fed into a reference model which produces pitch, roll, and yaw rate commands. The reference model frequencies and gains can be set to provide handling quality characteristics suitable for the aircraft of interest. The rate commands are used in conjunction with estimates of the aircraft's stability and control (S&C) derivatives by a simplified Dynamic Inverse controller to produce virtual elevator, aileron, and rudder commands. These virtual surface deflection commands are optimally distributed across the aircraft's available control surfaces using linear programming theory. Sensor data is compared with the reference model rate commands to produce an error signal. A Proportional/Integral (PI) error controller "winds up" on the error signal and adds an augmented command to the reference model output with the effect of zeroing the error signal. In order to provide more consistent handling qualities for the pilot, neural networks learn the behavior of the error controller and add in the augmented command before the integrator winds up. In the case of damage sufficient to affect the handling qualities of the aircraft, an Adaptive Critic is utilized to reduce the reference model frequencies and gains to stay within a flyable envelope of the aircraft.
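The allocation step can be posed as a linear program, sketched here with an assumed effectiveness matrix, deflection limits, and virtual commands (slack variables encode |u| in the objective):

    import numpy as np
    from scipy.optimize import linprog

    B = np.array([[0.0, 0.0, 1.0, 1.0, 0.0],    # pitch effectiveness (assumed)
                  [1.0, -1.0, 0.2, -0.2, 0.0],  # roll
                  [0.0, 0.0, 0.0, 0.0, 1.0]])   # yaw
    v = np.array([0.3, -0.1, 0.05])             # virtual surface commands

    m = B.shape[1]
    c = np.concatenate([np.zeros(m), np.ones(m)])    # minimize sum of |u_i|
    A_ub = np.block([[np.eye(m), -np.eye(m)],        # u_i <= t_i
                     [-np.eye(m), -np.eye(m)]])      # -u_i <= t_i
    b_ub = np.zeros(2 * m)
    A_eq = np.hstack([B, np.zeros((3, m))])          # B u = v
    bounds = [(-0.5, 0.5)] * m + [(0.0, None)] * m   # deflection limits, rad

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=v, bounds=bounds)
    print(res.x[:m])    # surface deflections realizing the virtual commands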
Poster - 53: Improving inter-linac DMLC IMRT dose precision by fine tuning of MLC leaf calibration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakonechny, Keith; Tran, Muoi; Sasaki, David
Purpose: To develop a method to improve the inter-linac precision of DMLC IMRT dosimetry. Methods: The distance between opposing MLC leaf banks ("gap size") can be finely tuned on Varian linacs. The dosimetric effect due to small deviations from the nominal gap size ("gap error") was studied by introducing known errors for several DMLC sliding gap sizes, and for clinical plans based on the TG119 test cases. The plans were delivered on a single Varian linac and the relationship between gap error and the corresponding change in dose was measured. The plans were also delivered on eight Varian 2100 series linacs (at two institutions) in order to quantify the inter-linac variation in dose before and after fine tuning the MLC calibration. Results: The measured dose differences for each field agreed well with the predictions of LoSasso et al. Using the default MLC calibration, the variation in the physical MLC gap size was determined to be less than 0.4 mm between all linacs studied. The dose difference between the linacs with the largest and smallest physical gap was up to 5.4% (spinal cord region of the head and neck TG119 test case). This difference was reduced to 2.5% after fine tuning the MLC gap calibration. Conclusions: The inter-linac dose precision for DMLC IMRT on Varian linacs can be improved using a simple modification of the MLC calibration procedure that involves fine adjustment of the nominal gap size.
Adaptive Controller Adaptation Time and Available Control Authority Effects on Piloting
NASA Technical Reports Server (NTRS)
Trujillo, Anna; Gregory, Irene
2013-01-01
Adaptive control is considered for highly uncertain, and potentially unpredictable, flight dynamics characteristic of adverse conditions. This experiment examined how the time an adaptive controller takes to recover nominal aircraft dynamics affects pilots, and how pilots want information about available control authority transmitted. Results indicate that an adaptive controller with a three-second adaptation time helped pilots with respect to lateral and longitudinal errors. Controllability ratings improved with the adaptive controller, again most for the three-second adaptation time, while workload decreased with the adaptive controller. The effects of the displays showing the percentage of available safe flight envelope used in the maneuver were dominated by the adaptation time. With the displays, the altitude error increased, controllability slightly decreased, and mental demand increased. Therefore, the displays did require some of the subjects' resources, but these negatives may be outweighed by pilots having more situation awareness of their aircraft.
Implications of clinical trial design on sample size requirements.
Leon, Andrew C
2008-07-01
The primary goal in designing a randomized controlled clinical trial (RCT) is to minimize bias in the estimate of treatment effect. Randomized group assignment, double-blinded assessments, and control or comparison groups reduce the risk of bias. The design must also provide sufficient statistical power to detect a clinically meaningful treatment effect and maintain a nominal level of type I error. An attempt to integrate neurocognitive science into an RCT poses additional challenges. Two particularly relevant aspects of such a design often receive insufficient attention in an RCT. Multiple outcomes inflate type I error, and an unreliable assessment process introduces bias and reduces statistical power. Here we describe how both unreliability and multiple outcomes can increase the study costs and duration and reduce the feasibility of the study. The objective of this article is to consider strategies that overcome the problems of unreliability and multiplicity.
A test of inflated zeros for Poisson regression models.
He, Hua; Zhang, Hui; Ye, Peng; Tang, Wan
2017-01-01
Excessive zeros are common in practice and may cause overdispersion and invalidate inference when fitting Poisson regression models. There is a large body of literature on zero-inflated Poisson models. However, methods for testing whether there are excessive zeros are less well developed. The Vuong test comparing a Poisson and a zero-inflated Poisson model is commonly applied in practice. However, the type I error of the test often deviates seriously from the nominal level, casting serious doubt on the validity of the test in such applications. In this paper, we develop a new approach for testing inflated zeros under the Poisson model. Unlike the Vuong test for inflated zeros, our method does not require a zero-inflated Poisson model to perform the test. Simulation studies show that, compared with the Vuong test, our approach not only better controls the type I error rate but also yields more power.
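Not the authors' statistic, but a parametric-bootstrap illustration of checking for excess zeros under an intercept-only Poisson fit (data simulated with injected zero inflation):

    import numpy as np

    rng = np.random.default_rng(0)
    y = rng.poisson(1.2, size=300)
    y[rng.random(300) < 0.15] = 0         # inject zero inflation

    lam = y.mean()                        # Poisson MLE, intercept-only model
    obs_zeros = (y == 0).sum()

    boot = (rng.poisson(lam, size=(10_000, y.size)) == 0).sum(axis=1)
    pval = (boot >= obs_zeros).mean()     # null distribution of zero counts
    print(obs_zeros, pval)                # small p-value -> excess zeros

Estimating λ from the inflated data makes this check conservative; the paper's test addresses exactly such calibration issues in a regression setting.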
A spacecraft attitude and articulation control system design for the Comet Halley intercept mission
NASA Technical Reports Server (NTRS)
Key, R. W.
1981-01-01
An attitude and articulation control system design for the Comet Halley 1986 intercept mission is presented. A spacecraft dynamics model consisting of five hinge-connected rigid bodies is used to analyze the spacecraft attitude and articulation control system performance. Inertial and optical information are combined to generate scan platform pointing commands. The comprehensive spacecraft model has been developed into a digital computer simulation program, which provides performance characteristics and insight pertaining to the control and dynamics of a Halley Intercept spacecraft. It is shown that scan platform pointing error has a maximum value of 1.8 milliradians during the four minute closest approach interval. It is also shown that the jitter or scan platform pointing rate error would have a maximum value of 2.5 milliradians/second for the nominal 1000 km closest approach distance trajectory and associated environment model.
Combined feedforward and feedback control of a redundant, nonlinear, dynamic musculoskeletal system.
Blana, Dimitra; Kirsch, Robert F; Chadwick, Edward K
2009-05-01
A functional electrical stimulation controller is presented that uses a combination of feedforward and feedback for arm control in high-level spinal cord injury. The feedforward controller generates the muscle activations nominally required for desired movements, and the feedback controller corrects for errors caused by muscle fatigue and external disturbances. The feedforward controller is an artificial neural network (ANN) that approximates the inverse dynamics of the arm. The feedback loop includes a PID controller in series with a second ANN representing the nonlinear properties and biomechanical interactions of muscles and joints. The controller was designed and tested using a two-joint musculoskeletal model of the arm that includes four mono-articular and two bi-articular muscles. Its performance during goal-oriented movements of varying amplitudes and durations showed a tracking error of less than 4 degrees in ideal conditions, and less than 10 degrees even in the case of considerable fatigue and external disturbances.
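The control structure described above can be sketched generically as a feedforward term from an inverse model plus a PID correction. The function names and interfaces below are assumptions for illustration, not the authors' implementation (which additionally places a second ANN after the PID).

```python
# Sketch of a combined feedforward/feedback structure; inverse_model stands
# in for the paper's trained ANN approximating arm inverse dynamics.
def make_pid(kp, ki, kd, dt):
    state = {"i": 0.0, "e_prev": 0.0}
    def pid(e):
        state["i"] += e * dt
        d = (e - state["e_prev"]) / dt
        state["e_prev"] = e
        return kp * e + ki * state["i"] + kd * d
    return pid

def control_step(q_des, qd_des, qdd_des, q_meas, inverse_model, pid):
    u_ff = inverse_model(q_des, qd_des, qdd_des)  # nominal activations for the movement
    u_fb = pid(q_des - q_meas)                    # corrects fatigue/disturbance errors
    return u_ff + u_fb
```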
A goodness-of-fit test for capture-recapture model M(t) under closure
Stanley, T.R.; Burnham, K.P.
1999-01-01
A new, fully efficient goodness-of-fit test for the time-specific closed-population capture-recapture model M(t) is presented. This test is based on the residual distribution of the capture-history data given the maximum likelihood parameter estimates under model M(t), is partitioned into informative components, and uses chi-square statistics. Comparison of this test with Leslie's test (Leslie, 1958, Journal of Animal Ecology 27, 84-86) for model M(t), using Monte Carlo simulations, shows that the new test generally outperforms Leslie's test. The new test is frequently computable when Leslie's test is not, has Type I error rates that are closer to nominal error rates than Leslie's test, and is sensitive to behavioral variation and heterogeneity in capture probabilities. Leslie's test is not sensitive to behavioral variation in capture probabilities but, when computable, has greater power to detect heterogeneity than the new test.
Motion compensation and noise tolerance in phase-shifting digital in-line holography.
Stenner, Michael D; Neifeld, Mark A
2006-05-15
We present a technique for phase-shifting digital in-line holography which compensates for lateral object motion. By collecting two frames of interference between object and reference fields with identical reference phase, one can estimate the lateral motion that occurred between frames using the cross-correlation. We also describe a very general linear framework for phase-shifting holographic reconstruction which minimizes additive white Gaussian noise (AWGN) for an arbitrary set of reference field amplitudes and phases. We analyze the technique's sensitivity to noise (AWGN, quantization, and shot), errors in the reference fields, errors in motion estimation, resolution, and depth of field. We also present experimental motion-compensated images achieving the expected resolution.
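The motion-estimation step described above, locating the peak of the cross-correlation between two frames, can be sketched with an FFT-based correlation; this is a generic illustration of the idea, not the authors' code.

```python
# Estimating lateral shift between two frames from the peak of their
# circular cross-correlation (generic sketch of the idea).
import numpy as np

def estimate_shift(frame_a, frame_b):
    F = np.fft.fft2(frame_a)
    G = np.fft.fft2(frame_b)
    xcorr = np.fft.ifft2(F * np.conj(G)).real    # circular cross-correlation
    dy, dx = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    ny, nx = frame_a.shape
    if dy > ny // 2: dy -= ny                    # wrap to signed shifts
    if dx > nx // 2: dx -= nx
    return dy, dx
```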
Space-Based Observations of Satellites From the MOST Microsatellite
2006-11-01
error estimate for these observations. To perform differential photometry, reference magnitudes for the background stars are needed. The Hubble Guide ...
Molecular Genetics of Successful Smoking Cessation: Convergent Genome-Wide Association Study Results
Uhl, George R.; Liu, Qing-Rong; Drgon, Tomas; Johnson, Catherine; Walther, Donna; Rose, Jed E.; David, Sean P.; Niaura, Ray; Lerman, Caryn
2008-01-01
Context: Smoking remains a major public health problem. Twin studies indicate that the ability to quit smoking is substantially heritable, with genetics that overlap modestly with the genetics of vulnerability to dependence on addictive substances. Objectives: To identify replicated genes that facilitate smokers' abilities to achieve and sustain abstinence from smoking (hereinafter referred to as quit-success genes) found in more than 2 genome-wide association (GWA) studies of successful vs unsuccessful abstainers, and, secondarily, to nominate genes for selective involvement in smoking cessation success with bupropion hydrochloride vs nicotine replacement therapy (NRT). Design: GWA results in subjects from 3 centers, with secondary analyses of NRT vs bupropion responders. Setting: Outpatient smoking cessation trial participants from 3 centers. Participants: European American smokers who successfully vs unsuccessfully abstained from smoking, with biochemical confirmation, in a smoking cessation trial using NRT, bupropion, or placebo (N=550). Main Outcome Measures: Quit-success genes, reproducibly identified by clustered nominally positive single-nucleotide polymorphisms (SNPs) in more than 2 independent samples, with significant P values based on Monte Carlo simulation trials. The NRT-selective genes were nominated by clustered SNPs that display much larger t values for NRT vs placebo comparisons; the bupropion-selective genes were nominated by bupropion-selective results. Results: Variants in quit-success genes are likely to alter cell adhesion, enzymatic, transcriptional, structural, and DNA-, RNA-, and/or protein-handling functions. Quit-success genes are identified by clustered nominally positive SNPs from more than 2 samples and are unlikely to represent chance observations (Monte Carlo P < .0003). These genes display modest overlap with genes identified in GWA studies of dependence on addictive substances and memory. Conclusions: These results support polygenic genetics for success in abstaining from smoking, overlap with the genetics of substance dependence and memory, and nominate gene variants for selective influences on therapeutic responses to bupropion vs NRT. Molecular genetics should help match the types and/or intensity of anti-smoking treatments with the smokers most likely to benefit from them. PMID:18519826
Impact of Non-Gaussian Error Volumes on Conjunction Assessment Risk Analysis
NASA Technical Reports Server (NTRS)
Ghrist, Richard W.; Plakalovic, Dragan
2012-01-01
An understanding of how an initially Gaussian error volume becomes non-Gaussian over time is an important consideration for space-vehicle conjunction assessment. Traditional assumptions applied to the error volume artificially suppress the true non-Gaussian nature of the space-vehicle position uncertainties. For typical conjunction assessment objects, representation of the error volume by a state error covariance matrix in a Cartesian reference frame is a more significant limitation than is the assumption of linearized dynamics for propagating the error volume. In this study, the impact of each assumption is examined and isolated for each point in the volume. Limitations arising from representing the error volume in a Cartesian reference frame are corrected by employing a Monte Carlo approach to probability of collision (Pc), using equinoctial samples from the Cartesian position covariance at the time of closest approach (TCA) between the pair of space objects. A set of actual, higher-risk (Pc >= 10^-4) conjunction events in various low-Earth orbits is analyzed using Monte Carlo methods. The impact of non-Gaussian error volumes on Pc for these cases is minimal, even when the deviation from a Gaussian distribution is significant.
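The core of a Monte Carlo Pc estimate is simple to sketch: sample the relative position at TCA from the position covariance and count samples falling inside the combined hard-body radius. The sketch below samples directly in Cartesian coordinates with made-up numbers; the study itself samples in equinoctial elements to avoid the Cartesian-Gaussian limitation.

```python
# Minimal Monte Carlo probability-of-collision sketch (illustrative values).
import numpy as np

rng = np.random.default_rng(1)

def pc_monte_carlo(mean_rel, cov_rel, hard_body_radius, n=1_000_000):
    samples = rng.multivariate_normal(mean_rel, cov_rel, size=n)
    miss = np.linalg.norm(samples, axis=1)          # miss distance per sample
    return np.mean(miss < hard_body_radius)         # fraction of "hits"

mean = np.array([50.0, 0.0, 0.0])                   # relative position at TCA (m)
cov = np.diag([400.0, 2500.0, 100.0])               # position covariance (m^2)
print(pc_monte_carlo(mean, cov, hard_body_radius=20.0))
```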
Murad, Havi; Kipnis, Victor; Freedman, Laurence S
2016-10-01
Assessing interactions in linear regression models when covariates have measurement error (ME) is complex. We previously described regression calibration (RC) methods that yield consistent estimators and standard errors for interaction coefficients of normally distributed covariates having classical ME. Here we extend normal based RC (NBRC) and linear RC (LRC) methods to a non-classical ME model, and describe more efficient versions that combine estimates from the main study and internal sub-study. We apply these methods to data from the Observing Protein and Energy Nutrition (OPEN) study. Using simulations we show that (i) for normally distributed covariates efficient NBRC and LRC were nearly unbiased and performed well with sub-study size ≥200; (ii) efficient NBRC had lower MSE than efficient LRC; (iii) the naïve test for a single interaction had type I error probability close to the nominal significance level, whereas efficient NBRC and LRC were slightly anti-conservative but more powerful; (iv) for markedly non-normal covariates, efficient LRC yielded less biased estimators with smaller variance than efficient NBRC. Our simulations suggest that it is preferable to use: (i) efficient NBRC for estimating and testing interaction effects of normally distributed covariates and (ii) efficient LRC for estimating and testing interactions for markedly non-normal covariates. © The Author(s) 2013.
Cost-effectiveness of the stream-gaging program in North Carolina
Mason, R.R.; Jackson, N.M.
1985-01-01
This report documents the results of a study of the cost-effectiveness of the stream-gaging program in North Carolina. Data uses and funding sources are identified for the 146 gaging stations currently operated in North Carolina with a budget of $777,600 (1984). As a result of the study, eleven stations are nominated for discontinuance and five for conversion from recording to partial-record status. Large parts of North Carolina's Coastal Plain are identified as having sparse streamflow data. This sparsity should be remedied as funds become available. Efforts should also be directed toward defining the effects of drainage improvements on local hydrology and streamflow characteristics. The average standard error of streamflow records in North Carolina is 18.6 percent. This level of accuracy could be improved without increasing cost by increasing the frequency of field visits and streamflow measurements at stations with high standard errors and reducing the frequency at stations with low standard errors. A minimum budget of $762,000 is required to operate the 146-gage program. A budget less than this does not permit proper service and maintenance of the gages and recorders. At the minimum budget, and with the optimum allocation of field visits, the average standard error is 17.6 percent.
Different grades MEMS accelerometers error characteristics
NASA Astrophysics Data System (ADS)
Pachwicewicz, M.; Weremczuk, J.
2017-08-01
The paper presents calibration results for two MEMS accelerometers of different price and quality grades and discusses the types of accelerometer errors. Calibration for error determination is performed using reference centrifuge measurements; the design and measurement errors of the centrifuge are discussed as well. It is shown that the error characteristics of the two sensors are very different, and that the simple calibration methods presented in the literature cannot be applied in both cases.
Thrust Stand Characterization of the NASA Evolutionary Xenon Thruster (NEXT)
NASA Technical Reports Server (NTRS)
Diamant, Kevin D.; Pollard, James E.; Crofton, Mark W.; Patterson, Michael J.; Soulas, George C.
2010-01-01
Direct thrust measurements have been made on the NASA Evolutionary Xenon Thruster (NEXT) ion engine using a standard pendulum style thrust stand constructed specifically for this application. Values have been obtained for the full 40-level throttle table, as well as for a few off-nominal operating conditions. Measurements differ from the nominal NASA throttle table 10 (TT10) values by 3.1 percent at most, while at 30 throttle levels (TLs) the difference is less than 2.0 percent. When measurements are compared to TT10 values that have been corrected using ion beam current density and charge state data obtained at The Aerospace Corporation, they differ by 1.2 percent at most, and by 1.0 percent or less at 37 TLs. Thrust correction factors calculated from direct thrust measurements and from The Aerospace Corporation's plume data agree to within measurement error for all but one TL. Thrust due to cold flow and "discharge only" operation has been measured, and analytical expressions are presented which accurately predict thrust based on thermal thrust generation mechanisms.
NASA Technical Reports Server (NTRS)
Calhoun, Philip C.; Hampton, R. David
2004-01-01
The acceleration environment on the International Space Station (ISS) exceeds the requirements of many microgravity experiments. The Glovebox Integrated Microgravity Isolation Technology (g-LIMIT) has been built by the NASA Marshall Space Flight Center to attenuate the nominal acceleration environment and provide some isolation for microgravity science experiments. The g-LIMIT uses Lorentz (voice-coil) magnetic actuators to isolate a platform, for mounting science payloads, from the nominal acceleration environment. The system utilizes payload-acceleration, relative-position, and relative-orientation measurements in a feedback controller to accomplish the vibration isolation task. The controller provides current commands to six magnetic actuators, producing the required experiment isolation from the ISS acceleration environment. The present work documents the development of a candidate control law to meet the acceleration attenuation requirements for the g-LIMIT experiment platform. The controller design is developed using linear optimal control techniques for frequency-weighted H2 norms. Comparison of performance and robustness to plant uncertainty for this control design approach is included in the discussion. System performance is demonstrated in the presence of plant modeling error.
Direct model reference adaptive control with application to flexible robots
NASA Technical Reports Server (NTRS)
Steinvorth, Rodrigo; Kaufman, Howard; Neat, Gregory W.
1992-01-01
A modification to a direct command generator tracker-based model reference adaptive control (MRAC) system is suggested in this paper. This modification incorporates a feedforward into the reference model's output as well as the plant's output. Its purpose is to eliminate the bounded model following error present in steady state when previous MRAC systems were used. The algorithm was evaluated using the dynamics for a single-link flexible-joint arm. The results of these simulations show a response with zero steady state model following error. These results encourage further use of MRAC for various types of nonlinear plants.
Self-referenced locking of optical coherence by single-detector electronic-frequency tagging
NASA Astrophysics Data System (ADS)
Shay, T. M.; Benham, Vincent; Spring, Justin; Ward, Benjamin; Ghebremichael, F.; Culpepper, Mark A.; Sanchez, Anthony D.; Baker, J. T.; Pilkington, D.; Berdine, Richard
2006-02-01
We report a novel coherent beam combining technique. This is the first actively phase locked optical fiber array that eliminates the need for a separate reference beam. In addition, only a single photodetector is required. The far-field central spot of the array is imaged onto the photodetector to produce the phase control loop signals. Each leg of the fiber array is phase modulated with a separate RF frequency, thus tagging the optical phase shift of each leg with a separate RF frequency. The optical phase errors for the individual array legs are separated in the electronic domain. In contrast with previous active phase locking techniques, in our system the reference beam is spatially overlapped with all the RF-modulated fiber leg beams onto a single detector. The phase shift between the optical wave in the reference leg and in each RF-modulated leg is measured separately in the electronic domain, and the phase error signal is fed back to the LiNbO3 phase modulator for that leg to minimize its phase error relative to the reference leg. The advantages of this technique are 1) the elimination of the reference beam and beam combination optics and 2) the electronic separation of the phase error signals without any degradation of the phase locking accuracy. We will present the first theoretical model for self-referenced LOCSET and describe experimental results for a 3 x 3 array.
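The frequency-tagging idea can be sketched numerically: in a small-modulation approximation, demodulating the single detector signal at each leg's tag frequency recovers a signal proportional to that leg's phase error. The detector model, tag frequencies, and phase errors below are assumptions for illustration, not the authors' system parameters.

```python
# Conceptual sketch of electronic-frequency tagging with lock-in recovery.
import numpy as np

fs, T = 1.0e6, 0.01                          # sample rate (Hz), record length (s)
t = np.arange(0, T, 1 / fs)
tags_hz = [20e3, 27e3, 35e3]                 # hypothetical RF tag frequencies
phase_err = [0.30, -0.15, 0.05]              # unknown leg phase errors (rad)

# Small-modulation detector model: each leg contributes sin(phi_i)*sin(w_i*t);
# cross terms between different tag frequencies average out over the record.
detector = sum(np.sin(p) * np.sin(2 * np.pi * f * t)
               for p, f in zip(phase_err, tags_hz))

for f, p in zip(tags_hz, phase_err):
    demod = 2 * np.mean(detector * np.sin(2 * np.pi * f * t))  # lock-in average
    print(f"tag {f/1e3:.0f} kHz: estimated sin(phi) = {demod:+.3f}, true {np.sin(p):+.3f}")
```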
Embarrassing Pronoun Case Errors [and] When Repeating It's Not Necessary To Use Past Tense.
ERIC Educational Resources Information Center
Arnold, George
2002-01-01
Discusses how to help journalism students avoid pronoun case errors. Notes that many students as well as broadcast journalism professionals make the error of using the past tense when referring to a previous expression or situation that remains current in meaning. (RS)
Brown, Alisa; Uneri, Ali; Silva, Tharindu De; Manbachi, Amir; Siewerdsen, Jeffrey H
2018-04-01
Dynamic reference frames (DRFs) are a common component of modern surgical tracking systems; however, the limited number of commercially available DRFs poses a constraint in developing systems, especially for research and education. This work presents the design and validation of a large, open-source library of DRFs compatible with passive, single-face tracking systems, such as Polaris stereoscopic infrared trackers (NDI, Waterloo, Ontario). An algorithm was developed to create new DRF designs consistent with intra- and intertool design constraints and convert to computer-aided design (CAD) files suitable for three-dimensional printing. A library of 10 such groups, each with 6 to 10 DRFs, was produced and tracking performance was validated in comparison to a standard commercially available reference, including pivot calibration, fiducial registration error (FRE), and target registration error (TRE). Pivot tests showed calibration error [Formula: see text], indistinguishable from the reference. FRE was [Formula: see text], and TRE in a CT head phantom was [Formula: see text], both equivalent to the reference. The library of DRFs offers a useful resource for surgical navigation research and could be extended to other tracking systems and alternative design constraints.
46 CFR 56.30-10 - Flanged joints (modifies 104.5.1(a)).
Code of Federal Regulations, 2011 CFR
2011-10-01
... Figure 56.30-10(b), Method 8. Welding neck flanges may be used on any piping provided the flanges are butt-welded ...; refer to 46 CFR 56.30-5(b) for requirements. (9) Figure 56.30-10(b), Method 9. Welding neck flanges may .... Note to Fig. 56.30-10(b): "T" is the nominal pipe wall thickness used. Consult the text of ...
ERIC Educational Resources Information Center
Siriwittayakorn, Teeranoot
2018-01-01
In typological literature, there has been disagreement as to whether there should be a distinction between relative clauses (RCs) and nominal sentential complements (NSCs) in pro-drop languages such as Japanese, Chinese, Korean, Khmer and Thai. In pro-drop languages, nouns can be dropped when their reference can be retrieved from context. Therefore,…
Design, evaluation and test of an electronic, multivariable control for the F100 turbofan engine
NASA Technical Reports Server (NTRS)
Skira, C. A.; Dehoff, R. L.; Hall, W. E., Jr.
1980-01-01
A digital, multivariable control design procedure for the F100 turbofan engine is described. The controller is based on locally linear synthesis techniques using linear, quadratic regulator design methods. The control structure uses an explicit model reference form with proportional and integral feedback near a nominal trajectory. Modeling issues, design procedures for the control law and the estimation of poorly measured variables are presented.
Rubio-Fernández, Paula
2016-01-01
Color adjectives tend to be used redundantly in referential communication. I propose that redundant color adjectives (RCAs) are often intended to exploit a color contrast in the visual context and hence facilitate object identification, despite not being necessary to establish unique reference. Two language-production experiments investigated two types of factors that may affect the use of RCAs: factors related to the efficiency of color in the visual context and factors related to the semantic category of the noun. The results of Experiment 1 confirmed that people produce RCAs when color may facilitate object recognition; e.g., they do so more often in polychrome displays than in monochrome displays, and more often in English (pre-nominal position) than in Spanish (post-nominal position). RCAs are also used when color is a central property of the object category; e.g., people referred to the color of clothes more often than to the color of geometrical figures (Experiment 1), and they overspecified atypical colors more often than variable and stereotypical colors (Experiment 2). These results are relevant for pragmatic models of referential communication based on Gricean pragmatics and informativeness. An alternative analysis is proposed, which focuses on the efficiency and pertinence of color in a given referential situation. PMID:26924999
NASA Technical Reports Server (NTRS)
Eckstrom, Clinton V.; Murrow, Harold N.; Preisser, John S.
1967-01-01
A ringsail parachute, which had a nominal diameter of 40 feet (12.2 meters) and reference area of 1256 square feet (117 m(exp 2)) and was modified to provide a total geometric porosity of 15 percent of the reference area, was flight tested as part of the rocket launch portion of the NASA Planetary Entry Parachute Program. The payload for the flight test was an instrumented capsule from which the test parachute was ejected by a deployment mortar when the system was at a Mach number of 1.64 and a dynamic pressure of 9.1 pounds per square foot (43.6 newtons per m(exp 2)). The parachute deployed to suspension line stretch in 0.45 second with a resulting snatch force of 1620 pounds (7200 newtons). Canopy inflation began 0.07 second later and the parachute projected area increased slowly to a maximum of 20 percent of that expected for full inflation. During this test, the suspension lines twisted, primarily because the partially inflated canopy could not restrict the twisting to the attachment bridle and risers. This twisting of the suspension lines hampered canopy inflation at a time when velocity and dynamic-pressure conditions were more favorable.
Pressure Loss Predictions of the Reactor Simulator Subsystem at NASA GRC
NASA Technical Reports Server (NTRS)
Reid, Terry V.
2015-01-01
Testing of the Fission Power System (FPS) Technology Demonstration Unit (TDU) is being conducted at NASA GRC. The TDU consists of three subsystems: the Reactor Simulator (RxSim), the Stirling Power Conversion Unit (PCU), and the Heat Exchanger Manifold (HXM). An Annular Linear Induction Pump (ALIP) is used to drive the working fluid. A preliminary version of the TDU system (which excludes the PCU for now), is referred to as the RxSim subsystem and was used to conduct flow tests in Vacuum Facility 6 (VF 6). In parallel, a computational model of the RxSim subsystem was created based on the CAD model and was used to predict loop pressure losses over a range of mass flows. This was done to assess the ability of the pump to meet the design intent mass flow demand. Measured data indicates that the pump can produce 2.333 kg/sec of flow, which is enough to supply the RxSim subsystem with a nominal flow of 1.75 kg/sec. Computational predictions indicated that the pump could provide 2.157 kg/sec (using the Spalart-Allmaras turbulence model), and 2.223 kg/sec (using the k-epsilon turbulence model). The computational error of the predictions for the available mass flow is -0.176 kg/sec (with the S-A turbulence model) and -0.110 kg/sec (with the k-epsilon turbulence model) when compared to measured data.
Jones, J.W.; Jarnagin, T.
2009-01-01
Given the relatively high cost of mapping impervious surfaces at regional scales, substantial effort is being expended in the development of moderate-resolution, satellite-based methods for estimating impervious surface area (ISA). To rigorously assess the accuracy of these data products high quality, independently derived validation data are needed. High-resolution data were collected across a gradient of development within the Mid-Atlantic region to assess the accuracy of National Land Cover Data (NLCD) Landsat-based ISA estimates. Absolute error (satellite-predicted area minus reference area) and relative error [(satellite-predicted area minus reference area)/reference area] were calculated for each of 240 sample regions that are each more than 15 Landsat pixels on a side. The ability to compile and examine ancillary data in a geographic information system environment provided for evaluation of both validation and NLCD data and afforded efficient exploration of observed errors. In a minority of cases, errors could be explained by temporal discontinuities between the date of satellite image capture and validation source data in rapidly changing places. In others, errors were created by vegetation cover over impervious surfaces and by other factors that bias the satellite processing algorithms. On average in the Mid-Atlantic region, the NLCD product underestimates ISA by approximately 5%. While the error range varies between 2 and 8%, this underestimation occurs regardless of development intensity. Through such analyses the errors, strengths, and weaknesses of particular satellite products can be explored to suggest appropriate uses for regional, satellite-based data in rapidly developing areas of environmental significance. © 2009 ASCE.
Fuzzy regulator design for wind turbine yaw control.
Theodoropoulos, Stefanos; Kandris, Dionisis; Samarakou, Maria; Koulouras, Grigorios
2014-01-01
This paper proposes the development of an advanced fuzzy logic controller which aims to perform intelligent automatic control of the yaw movement of wind turbines. The fuzzy controller takes into account both the wind velocity and the acceptable yaw error correlation in order to achieve maximum performance efficacy, making the proposed yaw control system remarkably adaptive to the existing conditions. In this way, the wind turbine is enabled to retain its power output close to its nominal value and at the same time preserve its yaw system from pointless movement. Thorough simulation tests evaluate the effectiveness of the proposed system.
Corenman, Donald S; Strauch, Eric L; Dornan, Grant J; Otterstrom, Eric; Zalepa King, Lisa
2017-09-01
Advancements in surgical navigation technology coupled with 3-dimensional (3D) radiographic data have significantly enhanced the accuracy and efficiency of spinal fusion implant placement. Increased usage of such technology has led to rising concerns regarding maintenance of the sterile field, as makeshift drape systems are fraught with breaches, thus presenting increased risk of surgical site infections (SSIs). A clinical need exists for a sterile draping solution with these techniques. Our objective was to quantify the expected accuracy error associated with the 2MM and 4MM thickness Sterile-Z Patient Drape® using Medtronic O-Arm® Surgical Imaging with the StealthStation® S7® Navigation System. Camera distance to the reference frame was investigated for its contribution to accuracy error. A testing jig was placed on the radiolucent table and the Medtronic passive reference frame was attached to the jig. The StealthStation® S7® navigation camera was placed at various distances from the testing jig and the geometry error of the reference frame was captured for three different drape configurations: no drape, 2MM drape and 4MM drape. The O-Arm® gantry location and StealthStation® S7® camera position were maintained and seven 3D acquisitions for each of the drape configurations were measured. Data were analyzed by a two-factor analysis of variance (ANOVA) and Bonferroni comparisons were used to assess the independent effects of camera distance and drape on accuracy error. Median (and maximum) measurement accuracy error was higher for the 2MM than for the 4MM drape at each camera distance. The most extreme error observed (4.6 mm) occurred when using the 2MM drape at the 'far' camera distance. The 4MM drape was found to induce an accuracy error of 0.11 mm (95% confidence interval, 0.06-0.15; P<0.001) relative to no-drape testing, regardless of camera distance. The medium camera distance produced lower accuracy error than either the close (additional 0.08 mm error; 95% CI, 0-0.15; P=0.035) or far (additional 0.21 mm error; 95% CI, 0.13-0.28; P<0.001) camera distances, regardless of whether a drape was used. In comparison to the 'no drape' condition, the accuracy error of 0.11 mm when using a 4MM film drape is minimal and clinically insignificant.
Image-based overlay measurement using subsurface ultrasonic resonance force microscopy
NASA Astrophysics Data System (ADS)
Tamer, M. S.; van der Lans, M. J.; Sadeghian, H.
2018-03-01
Image Based Overlay (IBO) measurement is one of the most common techniques used in Integrated Circuit (IC) manufacturing to extract the overlay error values. The overlay error is measured using dedicated overlay targets which are optimized to increase the accuracy and the resolution, but these features are much larger than the IC feature size. IBO measurements are realized on the dedicated targets instead of product features, because the current overlay metrology solutions, mainly based on optics, cannot provide sufficient resolution on product features. However, considering the fact that the overlay error tolerance is approaching 2 nm, the overlay error measurement on product features becomes a need for the industry. For sub-nanometer resolution metrology, Scanning Probe Microscopy (SPM) is widely used, though at the cost of very low throughput. The semiconductor industry is interested in non-destructive imaging of buried structures under one or more layers for the application of overlay and wafer alignment, specifically through optically opaque media. Recently an SPM technique has been developed for imaging subsurface features which can be potentially considered as a solution for overlay metrology. In this paper we present the use of Subsurface Ultrasonic Resonance Force Microscopy (SSURFM) used for IBO measurement. We used SSURFM for imaging the most commonly used overlay targets on a silicon substrate and photoresist. As a proof of concept we have imaged surface and subsurface structures simultaneously. The surface and subsurface features of the overlay targets are fabricated with programmed overlay errors of +/-40 nm, +/-20 nm, and 0 nm. The top layer thickness changes between 30 nm and 80 nm. Using SSURFM the surface and subsurface features were successfully imaged and the overlay errors were extracted, via a rudimentary image processing algorithm. The measurement results are in agreement with the nominal values of the programmed overlay errors.
Why Does a Method That Fails Continue To Be Used: The Answer
Templeton, Alan R.
2009-01-01
It has been claimed that hundreds of researchers use nested clade phylogeographic analysis (NCPA) based on what the method promises rather than requiring objective validation of the method. The supposed failure of NCPA is based upon the argument that validating it by using positive controls ignored type I error, and that computer simulations have shown a high type I error. The first argument is factually incorrect: the previously published validation analysis fully accounted for both type I and type II errors. The simulations that indicate a 75% type I error rate have serious flaws and only evaluate outdated versions of NCPA. These outdated type I error rates fall precipitously when the 2003 version of single-locus NCPA is used or when the 2002 multi-locus version of NCPA is used. It is shown that the treewise type I errors in single-locus NCPA can be corrected to the desired nominal level by a simple statistical procedure, and that multilocus NCPA reconstructs a simulated scenario used to discredit NCPA with 100% accuracy. Hence, NCPA is not a failed method at all, but rather has been validated both by actual data and by simulated data in a manner that satisfies the published criteria given by its critics. The critics have come to different conclusions because they have focused on the pre-2002 versions of NCPA and have failed to take into account the extensive developments in NCPA since 2002. Hence, researchers can choose to use NCPA based upon objective critical validation that shows that NCPA delivers what it promises. PMID:19335340
Designing image segmentation studies: Statistical power, sample size and reference standard quality.
Gibson, Eli; Hu, Yipeng; Huisman, Henkjan J; Barratt, Dean C
2017-12-01
Segmentation algorithms are typically evaluated by comparison to an accepted reference standard. The cost of generating accurate reference standards for medical image segmentation can be substantial. Since the study cost and the likelihood of detecting a clinically meaningful difference in accuracy both depend on the size and on the quality of the study reference standard, balancing these trade-offs supports the efficient use of research resources. In this work, we derive a statistical power calculation that enables researchers to estimate the appropriate sample size to detect clinically meaningful differences in segmentation accuracy (i.e. the proportion of voxels matching the reference standard) between two algorithms. Furthermore, we derive a formula to relate reference standard errors to their effect on the sample sizes of studies using lower-quality (but potentially more affordable and practically available) reference standards. The accuracy of the derived sample size formula was estimated through Monte Carlo simulation, demonstrating, with 95% confidence, a predicted statistical power within 4% of simulated values across a range of model parameters. This corresponds to sample size errors of less than 4 subjects and errors in the detectable accuracy difference of less than 0.6%. The applicability of the formula to real-world data was assessed using bootstrap resampling simulations for pairs of algorithms from the PROMISE12 prostate MR segmentation challenge data set. The model predicted the simulated power for the majority of algorithm pairs within 4% for simulated experiments using a high-quality reference standard and within 6% for simulated experiments using a low-quality reference standard. A case study, also based on the PROMISE12 data, illustrates using the formulae to evaluate whether to use a lower-quality reference standard in a prostate segmentation study. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
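For orientation, the following hedged sketch shows a generic normal-approximation sample-size calculation for detecting a difference in mean per-subject accuracy between two algorithms in a paired design; it is not the formula derived in the paper, and all numbers are illustrative.

```python
# Generic paired-design sample-size sketch (not the paper's derived formula).
from scipy.stats import norm

def n_subjects(delta, sd_diff, alpha=0.05, power=0.90):
    """Subjects needed for a paired comparison of per-subject accuracies."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return (z * sd_diff / delta) ** 2

# e.g. detect a 2% accuracy difference when per-subject differences have SD 5%
print(n_subjects(delta=0.02, sd_diff=0.05))   # ~66 subjects
```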
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zamora, D; Moirano, J; Kanal, K
Purpose: A fundamental measure performed during an annual physics CT evaluation confirms that the system-displayed CTDIvol nearly matches the independently measured value in phantom. For wide-beam (z-direction) CT scanners, AAPM Report 111 defined an ideal measurement method; however, the method often lacks practicality. The purpose of this preliminary study is to develop a set of conversion factors for a wide-beam CT scanner, relating the CTDIvol measured with a conventional setup (single CTDI phantom) versus the AAPM Report 111 approach (three abutting CTDI phantoms). Methods: For both the body CTDI and head CTDI, two acquisition setups were used: A) conventional single phantom and B) triple phantom. Of primary concern were the larger nominal beam widths for which a standard CTDI phantom setup would not provide adequate scatter conditions. Nominal beam width (160 or 120 mm) and kVp (100, 120, 140) were modulated based on the underlying clinical protocol. Exposure measurements were taken using a CT pencil ion chamber in the center and 12 o'clock position, and CTDIvol was calculated with 'nT' limited to 100 mm. A conversion factor (CF) was calculated as the ratio of CTDIvol measured in setup B versus setup A. Results: For body CTDI, the CF ranged from 1.04 up to 1.10, indicating a 4-10% difference between usage of one and three phantoms. For a nominal beam width of 160 mm, the CF did vary with selected kVp. For head CTDI at nominal beam widths of 120 and 160 mm, the CF was 1.00 and 1.05, respectively, independent of the kVp used (100, 120, and 140). Conclusions: A clear understanding of the manufacturer's method of estimating the displayed CTDIvol is important when interpreting annual test results, as the acquisition setup may lead to an error of up to 10%. With an appropriately defined CF, single phantom use is feasible.
Evaluation of an in-practice wet-chemistry analyzer using canine and feline serum samples.
Irvine, Katherine L; Burt, Kay; Papasouliotis, Kostas
2016-01-01
A wet-chemistry biochemical analyzer was assessed for in-practice veterinary use. Its small size may mean a cost-effective method for low-throughput in-house biochemical analyses for first-opinion practice. The objectives of our study were to determine imprecision, total observed error, and acceptability of the analyzer for measurement of common canine and feline serum analytes, and to compare clinical sample results to those from a commercial reference analyzer. Imprecision was determined by within- and between-run repeatability for canine and feline pooled samples, and manufacturer-supplied quality control material (QCM). Total observed error (TEobs) was determined for pooled samples and QCM. Performance was assessed for canine and feline pooled samples by sigma metric determination. Agreement and errors between the in-practice and reference analyzers were determined for canine and feline clinical samples by Bland-Altman and Deming regression analyses. Within- and between-run precision was high for most analytes, and TEobs(%) was mostly lower than total allowable error. Performance based on sigma metrics was good (σ > 4) for many analytes and marginal (σ > 3) for most of the remainder. Correlation between the analyzers was very high for most canine analytes and high for most feline analytes. Between-analyzer bias was generally attributed to high constant error. The in-practice analyzer showed good overall performance, with only calcium and phosphate analyses identified as significantly problematic. Agreement for most analytes was insufficient for transposition of reference intervals, and we recommend that in-practice-specific reference intervals be established in the laboratory. © 2015 The Author(s).
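Sigma metrics like those reported above are conventionally computed from allowable total error, bias, and imprecision; the sketch below uses the common laboratory-QC formula, which is assumed rather than taken from this paper.

```python
# Sigma-metric calculation as commonly defined in laboratory QC.
def sigma_metric(tea_pct, bias_pct, cv_pct):
    return (tea_pct - abs(bias_pct)) / cv_pct

# e.g. allowable total error 10%, observed bias 2%, imprecision (CV) 1.8%
print(sigma_metric(10.0, 2.0, 1.8))   # ~4.4 -> "good" performance (sigma > 4)
```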
Workshops Increase Students' Proficiency at Identifying General and APA-Style Writing Errors
ERIC Educational Resources Information Center
Jorgensen, Terrence D.; Marek, Pam
2013-01-01
To determine the effectiveness of 20- to 30-min workshops on recognition of errors in American Psychological Association-style writing, 58 introductory psychology students attended one of the three workshops (on grammar, mechanics, or references) and completed error recognition tests (pretest, initial posttest, and three follow-up tests). As a…
Calibration of remotely sensed proportion or area estimates for misclassification error
Raymond L. Czaplewski; Glenn P. Catts
1992-01-01
Classifications of remotely sensed data contain misclassification errors that bias areal estimates. Monte Carlo techniques were used to compare two statistical methods that correct or calibrate remotely sensed areal estimates for misclassification bias using reference data from an error matrix. The inverse calibration estimator was consistently superior to the...
Attention to Form or Meaning? Error Treatment in the Bangalore Project.
ERIC Educational Resources Information Center
Beretta, Alan
1989-01-01
Reports on an evaluation of the Bangalore/Madras Communicational Teaching Project (CTP), a content-based approach to language learning. Analysis of 21 lesson transcripts revealed a greater incidence of error treatment of content than linguistic error, consonant with the CTP focus on meaning rather than form. (26 references) (Author/CB)
GLAS Spacecraft Pointing Study
NASA Technical Reports Server (NTRS)
Born, George H.; Gold, Kenn; Ondrey, Michael; Kubitschek, Dan; Axelrad, Penina; Komjathy, Attila
1998-01-01
Science requirements for the GLAS mission demand that the laser altimeter be pointed to within 50 m of the location of the previous repeat ground track. The satellite will be flown in a repeat orbit of 182 days. Operationally, the required pointing information will be determined on the ground using the nominal ground track, to which pointing is desired, and the current propagated orbit of the satellite as inputs to the roll computation algorithm developed by CCAR. The roll profile will be used to generate a set of fit coefficients which can be uploaded on a daily basis and used by the on-board attitude control system. In addition, an algorithm has been developed for computation of the associated command quaternions which will be necessary when pointing at targets of opportunity. It may be desirable in the future to perform the roll calculation in an autonomous real-time mode on-board the spacecraft. GPS can provide near real-time tracking of the satellite, and the nominal ground track can be stored in the on-board computer. It will be necessary to choose the spacing of this nominal ground track to meet storage requirements in the on-board environment. Several methods for generating the roll profile from a sparse reference ground track are presented.
Reducing measurement errors during functional capacity tests in elders.
da Silva, Mariane Eichendorf; Orssatto, Lucas Bet da Rosa; Bezerra, Ewertton de Souza; Silva, Diego Augusto Santos; Moura, Bruno Monteiro de; Diefenthaeler, Fernando; Freitas, Cíntia de la Rocha
2018-06-01
Accuracy is essential to the validity of functional capacity measurements. The aim of this study was to evaluate the measurement error of functional capacity tests for elders and to suggest the use of the technical error of measurement and the credibility coefficient. Twenty elders (65.8 ± 4.5 years) completed six functional capacity tests that were simultaneously filmed and timed by four evaluators by means of a chronometer. A fifth evaluator timed the tests by analyzing the videos (reference data). The means of most evaluators for most tests were different from the reference (p < 0.05), except for two evaluators for two different tests. The technical errors of measurement differed between tests and evaluators. The Bland-Altman test showed differences in the concordance of the results between methods. Short-duration tests showed a higher technical error of measurement than longer tests. In summary, tests timed by a chronometer underestimate the real results of the functional capacity tests. Differences in the evaluators' reaction time and in their perception of the start and end of the tests would explain the measurement errors. Calculating the technical error of measurement or using the camera can increase data validity.
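The technical error of measurement referred to above is conventionally computed with the Dahlberg formula; the sketch below assumes that definition (the abstract does not spell it out) and uses made-up timing data.

```python
# Technical error of measurement (TEM) between two raters (Dahlberg formula).
import numpy as np

def tem(x1, x2):
    d = np.asarray(x1, float) - np.asarray(x2, float)
    return np.sqrt(np.sum(d ** 2) / (2 * d.size))

def relative_tem(x1, x2):
    grand_mean = (np.mean(x1) + np.mean(x2)) / 2
    return 100 * tem(x1, x2) / grand_mean     # %TEM

timer = [12.3, 8.1, 15.6, 9.9]                # chronometer times (s), made up
video = [12.6, 8.4, 15.8, 10.3]               # video-reference times (s), made up
print(tem(timer, video), relative_tem(timer, video))
```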
Csanak, George; Inal, Mokhtar K; Fontes, Christopher John; ...
2015-04-15
The present corrigendum is dedicated to correcting unfortunate errors made in certain equations of our paper [1]. We should first stress the point that those errors have no serious consequences on the main results of the paper and most derived equations remain valid. This is a follow-up to the first corrigendum, which was reported in reference [2] to correct errors of a similar nature in another previously reported work [3]. The source of all those errors resides in the treatment of charged-particle scattering and the subtle manipulations made to obtain some of the equations in both references [1, 3]. All equation numbers cited here correspond to those of [1] unless specified otherwise.
ECHO: A reference-free short-read error correction algorithm
Kao, Wei-Chun; Chan, Andrew H.; Song, Yun S.
2011-01-01
Developing accurate, scalable algorithms to improve data quality is an important computational challenge associated with recent advances in high-throughput sequencing technology. In this study, a novel error-correction algorithm, called ECHO, is introduced for correcting base-call errors in short reads, without the need of a reference genome. Unlike most previous methods, ECHO does not require the user to specify parameters whose optimal values are typically unknown a priori. ECHO automatically sets the parameters in the assumed model and estimates error characteristics specific to each sequencing run, while maintaining a running time that is within the range of practical use. ECHO is based on a probabilistic model and is able to assign a quality score to each corrected base. Furthermore, it explicitly models heterozygosity in diploid genomes and provides a reference-free method for detecting bases that originated from heterozygous sites. On both real and simulated data, ECHO is able to improve the accuracy of previous error-correction methods by severalfold to an order of magnitude, depending on the sequence coverage depth and the position in the read. The improvement is most pronounced toward the end of the read, where previous methods become noticeably less effective. Using a whole-genome yeast data set, it is demonstrated here that ECHO is capable of coping with nonuniform coverage. Also, it is shown that using ECHO to perform error correction as a preprocessing step considerably facilitates de novo assembly, particularly in the case of low-to-moderate sequence coverage depth. PMID:21482625
Generation, Validation, and Application of Abundance Map Reference Data for Spectral Unmixing
NASA Astrophysics Data System (ADS)
Williams, McKay D.
Reference data ("ground truth") maps traditionally have been used to assess the accuracy of imaging spectrometer classification algorithms. However, these reference data can be prohibitively expensive to produce, often do not include sub-pixel abundance estimates necessary to assess spectral unmixing algorithms, and lack published validation reports. Our research proposes methodologies to efficiently generate, validate, and apply abundance map reference data (AMRD) to airborne remote sensing scenes. We generated scene-wide AMRD for three different remote sensing scenes using our remotely sensed reference data (RSRD) technique, which spatially aggregates unmixing results from fine scale imagery (e.g., 1-m Ground Sample Distance (GSD)) to co-located coarse scale imagery (e.g., 10-m GSD or larger). We validated the accuracy of this methodology by estimating AMRD in 51 randomly-selected 10 m x 10 m plots, using seven independent methods and observers, including field surveys by two observers, imagery analysis by two observers, and RSRD using three algorithms. Results indicated statistically-significant differences between all versions of AMRD, suggesting that all forms of reference data need to be validated. Given these significant differences between the independent versions of AMRD, we proposed that the mean of all (MOA) versions of reference data for each plot and class were most likely to represent true abundances. We then compared each version of AMRD to MOA. Best case accuracy was achieved by a version of imagery analysis, which had a mean coverage area error of 2.0%, with a standard deviation of 5.6%. One of the RSRD algorithms was nearly as accurate, achieving a mean error of 3.0%, with a standard deviation of 6.3%, showing the potential of RSRD-based AMRD generation. Application of validated AMRD to specific coarse scale imagery involved three main parts: 1) spatial alignment of coarse and fine scale imagery, 2) aggregation of fine scale abundances to produce coarse scale imagery-specific AMRD, and 3) demonstration of comparisons between coarse scale unmixing abundances and AMRD. Spatial alignment was performed using our scene-wide spectral comparison (SWSC) algorithm, which aligned imagery with accuracy approaching the distance of a single fine scale pixel. We compared simple rectangular aggregation to coarse sensor point spread function (PSF) aggregation, and found that the PSF approach returned lower error, but that rectangular aggregation more accurately estimated true abundances at ground level. We demonstrated various metrics for comparing unmixing results to AMRD, including mean absolute error (MAE) and linear regression (LR). We additionally introduced reference data mean adjusted MAE (MA-MAE), and reference data confidence interval adjusted MAE (CIA-MAE), which account for known error in the reference data itself. MA-MAE analysis indicated that fully constrained linear unmixing of coarse scale imagery across all three scenes returned an error of 10.83% per class and pixel, with regression analysis yielding a slope = 0.85, intercept = 0.04, and R2 = 0.81. Our reference data research has demonstrated a viable methodology to efficiently generate, validate, and apply AMRD to specific examples of airborne remote sensing imagery, thereby enabling direct quantitative assessment of spectral unmixing performance.
Lu, Yongtao; Boudiffa, Maya; Dall'Ara, Enrico; Bellantuono, Ilaria; Viceconti, Marco
2015-11-01
In vivo micro-computed tomography (µCT) scanning is an important tool for longitudinal monitoring of the bone adaptation process in animal models. However, the errors associated with the usage of in vivo µCT measurements for the evaluation of bone adaptations remain unclear. The aim of this study was to evaluate the measurement errors using the bone surface distance approach. The right tibiae of eight 14-week-old C57BL/6 J female mice were consecutively scanned four times in an in vivo µCT scanner using a nominal isotropic image voxel size (10.4 µm) and the tibiae were repositioned between each scan. The repeated scan image datasets were aligned to the corresponding baseline (first) scan image dataset using rigid registration and a region of interest was selected in the proximal tibia metaphysis for analysis. The bone surface distances between the repeated and the baseline scan datasets were evaluated. It was found that the average (±standard deviation) median and 95th percentile bone surface distances were 3.10 ± 0.76 µm and 9.58 ± 1.70 µm, respectively. This study indicated that there were inevitable errors associated with the in vivo µCT measurements of bone microarchitecture and these errors should be taken into account for a better interpretation of bone adaptations measured with in vivo µCT. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
Kalman Filter for Spinning Spacecraft Attitude Estimation
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Sedlak, Joseph E.
2008-01-01
This paper presents a Kalman filter using a seven-component attitude state vector comprising the angular momentum components in an inertial reference frame, the angular momentum components in the body frame, and a rotation angle. The relatively slow variation of these parameters makes this parameterization advantageous for spinning spacecraft attitude estimation. The filter accounts for the constraint that the magnitude of the angular momentum vector is the same in the inertial and body frames by employing a reduced six-component error state. Four variants of the filter, defined by different choices for the reduced error state, are tested against a quaternion-based filter using simulated data for the THEMIS mission. Three of these variants choose three of the components of the error state to be the infinitesimal attitude error angles, facilitating the computation of measurement sensitivity matrices and causing the usual 3x3 attitude covariance matrix to be a submatrix of the 6x6 covariance of the error state. These variants differ in their choice for the other three components of the error state. The variant employing the infinitesimal attitude error angles and the angular momentum components in an inertial reference frame as the error state shows the best combination of robustness and efficiency in the simulations. Attitude estimation results using THEMIS flight data are also presented.
Maximum likelihood convolutional decoding (MCD) performance due to system losses
NASA Technical Reports Server (NTRS)
Webster, L.
1976-01-01
A model for predicting the computational performance of a maximum likelihood convolutional decoder (MCD) operating in a noisy carrier reference environment is described. This model is used to develop a subroutine that will be utilized by the Telemetry Analysis Program to compute the MCD bit error rate. When this computational model is averaged over noisy reference phase errors using a high-rate interpolation scheme, the results are found to agree quite favorably with experimental measurements.
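The averaging step described above can be illustrated numerically: an ideal-coherent bit-error-rate curve is degraded by integrating over a phase-error density. The sketch below assumes uncoded BPSK and a Gaussian phase-error model, which is a simplification of the report's decoder and carrier-loop model.

```python
# BPSK bit-error rate averaged over an assumed Gaussian carrier phase-error
# density (illustrative; the report's actual model and interpolation differ).
import numpy as np
from scipy.stats import norm

def ber_with_phase_noise(ebno_db, sigma_phi, n_grid=2001):
    ebno = 10 ** (ebno_db / 10)
    phi = np.linspace(-np.pi, np.pi, n_grid)
    p_phi = norm.pdf(phi, scale=sigma_phi)
    ber_phi = norm.sf(np.sqrt(2 * ebno) * np.cos(phi))   # Q(sqrt(2Eb/N0)*cos(phi))
    return np.trapz(ber_phi * p_phi, phi)

print(ber_with_phase_noise(ebno_db=5.0, sigma_phi=0.0001))  # ~ideal coherent BPSK
print(ber_with_phase_noise(ebno_db=5.0, sigma_phi=0.3))     # degraded by phase noise
```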
Bagherpoor, H M; Salmasi, Farzad R
2015-07-01
In this paper, robust model reference adaptive tracking controllers are considered for Single-Input Single-Output (SISO) and Multi-Input Multi-Output (MIMO) linear systems containing modeling uncertainties, unknown additive disturbances and actuator faults. Two new lemmas are proposed for both SISO and MIMO systems, under which the dead-zone modification rule is improved such that the tracking error for any reference signal tends to zero in such systems. In the conventional approach, adaptation of the controller parameters ceases inside the dead-zone region, which preserves system stability but leaves a residual tracking error. In the proposed scheme, the control signal is reinforced with an additive term based on the tracking error inside the dead-zone, which results in full reference tracking. In addition, no Fault Detection and Diagnosis (FDD) unit is needed in the proposed approach. Closed-loop system stability and zero tracking error are proved by considering a suitable Lyapunov function candidate. It is shown that the proposed control approach can ensure that all the signals of the closed-loop system are bounded in faulty conditions. Finally, the validity and performance of the new schemes are illustrated through numerical simulations of SISO and MIMO systems in the presence of actuator faults, modeling uncertainty and output disturbance. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
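A schematic scalar version of the dead-zone idea is sketched below: a standard gradient adaptation law is frozen inside the dead-zone, where (following the abstract's description, with an assumed form) the control signal is instead reinforced by an error-dependent term. The gains and the k_dz * e term are illustrative placeholders, not the paper's law.

```python
# Schematic scalar MRAC step with a dead-zone on adaptation (conventional form).
def mrac_step(theta, e, phi, gamma, dt, e0, k_dz):
    if abs(e) > e0:
        theta += gamma * e * phi * dt      # standard gradient adaptation
        u_extra = 0.0
    else:
        # Conventional rule: freeze adaptation inside the dead-zone.
        # The paper instead reinforces the control signal with an
        # error-dependent additive term (sketched here as k_dz * e).
        u_extra = k_dz * e
    return theta, u_extra
```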
Mayatepek, E; Zelezny, R; Hoffmann, G F
2000-02-25
Cysteinyl leukotrienes (LTC(4), LTD(4), LTE(4)) are potent lipid mediators derived from arachidonate in the 5-lipoxygenase pathway. Recently, the first inborn error of leukotriene synthesis, LTC(4)-synthesis deficiency, has been identified in association with a fatal developmental syndrome. The absence of leukotrienes in cerebrospinal fluid was one of the most striking biochemical findings in this disorder. We analysed leukotrienes in cerebrospinal fluid of patients with a broad spectrum of other well-defined inborn errors of metabolism, including glutathione synthetase deficiency (n=2), Zellweger syndrome (n=3), mitochondrial disorders (n=8), fatty acid oxidation defects (n=7), organic acidurias (n=7), neurotransmitter defects (n=5) and patients with non-specific neurological symptoms, as a reference population (n=120). The concentrations of leukotrienes were not related to age. Representative percentiles were calculated as reference intervals of each leukotriene. In all patients with an inborn error of metabolism concentration of cysteinyl leukotrienes and LTB(4) did not differ from the reference group. Our results indicate that absence of cysteinyl leukotrienes (<5 pg/ml) in association with normal or increased LTB(4) (50.0-67.3 pg/ml) is pathognomonic for LTC(4)-synthesis deficiency. The unique profile of leukotrienes in cerebrospinal fluid in this new disorder is primarily related to the defect and represents a new diagnostic approach.
Radiant Temperature Nulling Radiometer
NASA Technical Reports Server (NTRS)
Ryan, Robert (Inventor)
2003-01-01
A self-calibrating nulling radiometer for non-contact temperature measurement of an object, such as a body of water, employs a black body source as a temperature reference, an optomechanical mechanism, e.g., a chopper, to switch back and forth between measuring the temperature of the black body source and that of a test source, and an infrared detection technique. The radiometer functions by measuring radiance of both the test and the reference black body sources; adjusting the temperature of the reference black body so that its radiance is equivalent to the test source; and, measuring the temperature of the reference black body at this point using a precision contact-type temperature sensor, to determine the radiative temperature of the test source. The radiation from both sources is detected by an infrared detector that converts the detected radiation to an electrical signal that is fed with a chopper reference signal to an error signal generator, such as a synchronous detector, that creates a precision rectified signal that is approximately proportional to the difference between the temperature of the reference black body and that of the test infrared source. This error signal is then used in a feedback loop to adjust the reference black body temperature until it equals that of the test source, at which point the error signal is nulled to zero. The chopper mechanism operates at one or more Hertz allowing minimization of 1/f noise. It also provides pure chopping between the black body and the test source and allows continuous measurements.
ERIC Educational Resources Information Center
Williamson, Pamela; Bondy, Elizabeth; Langley, Lisa; Mayne, Dina
2005-01-01
In this article, the authors selected two urban teachers to study, one from 3rd grade and one from 5th (hereafter referred to as Ms. Third and Ms. Fifth), whose students, in spite of the school's failing grade, did well on the exam. Both were nominated as exemplary teachers by their principal and other teachers and had been selected as teacher of…
Improved motion correction in PROPELLER by using grouped blades as reference.
Liu, Zhe; Zhang, Zhe; Ying, Kui; Yuan, Chun; Guo, Hua
2014-03-01
To develop a robust reference generation method for improving PROPELLER (Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction) reconstruction. A new reference generation method, grouped-blade reference (GBR), is proposed for calculating rotation angle and translation shift in PROPELLER. Instead of using a single-blade reference (SBR) or combined-blade reference (CBR), our method classifies blades by their relative correlations and groups similar blades together as the reference, to prevent inconsistent data from interfering with the correction process. Numerical simulations and in vivo experiments were used to evaluate the performance of GBR for PROPELLER, which was further compared with SBR and CBR in terms of error level and computation cost. Both simulation and in vivo experiments demonstrate that GBR-based PROPELLER provides better correction for random or bipolar motion compared with SBR or CBR. It not only produces images with a lower error level but also needs fewer iteration steps to converge. A grouped-blade reference selection method was investigated for PROPELLER MRI. It helps to improve the accuracy and robustness of motion correction for various motion patterns. Copyright © 2013 Wiley Periodicals, Inc.
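A minimal sketch of the grouping idea, assuming blades can be compared as same-size magnitude images (real PROPELLER blades are rotated k-space strips, so this is highly simplified); the correlation threshold and the seed-selection rule are illustrative choices, not the paper's algorithm.

```python
import numpy as np

# Sketch of grouped-blade reference (GBR) selection: correlate each blade with
# every other, then average only the mutually similar blades as the reference,
# so an inconsistent (moved) blade does not contaminate the correction target.
def grouped_blade_reference(blades, threshold=0.8):
    """blades: (n_blades, ny, nx) array of gridded blade magnitudes."""
    n = len(blades)
    flat = blades.reshape(n, -1)
    flat = (flat - flat.mean(1, keepdims=True)) / flat.std(1, keepdims=True)
    corr = flat @ flat.T / flat.shape[1]        # pairwise correlation matrix
    seed = int(np.argmax(corr.mean(axis=1)))    # most "typical" blade
    group = np.where(corr[seed] >= threshold)[0]
    return blades[group].mean(axis=0), group    # reference image, member ids

rng = np.random.default_rng(0)
base = rng.standard_normal((32, 32))
blades = np.stack([base + 0.1 * rng.standard_normal((32, 32)) for _ in range(8)])
blades[2] = np.roll(blades[2], 10, axis=0)      # simulate one moved blade
ref, members = grouped_blade_reference(blades)
print("blades used for reference:", members)    # blade 2 is excluded
```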
Thermal error analysis and compensation for digital image/volume correlation
NASA Astrophysics Data System (ADS)
Pan, Bing
2018-02-01
Digital image correlation and digital volume correlation (DIC/DVC) rely on digital images acquired by digital cameras and X-ray CT scanners to extract the motion and deformation of test samples. Regrettably, these imaging devices are unstable optical systems, whose imaging geometry may undergo unavoidable slight and continual changes due to self-heating or ambient temperature variations. Changes in imaging geometry lead to both shift and expansion in the recorded 2D or 3D images, and finally manifest as systematic displacement and strain errors in DIC/DVC measurements. Since measurement accuracy is always the most important requirement in experimental mechanics applications, these thermally induced errors (referred to as thermal errors) should be given serious consideration in order to achieve high-accuracy, reproducible DIC/DVC measurements. In this work, theoretical analyses are first given to understand the origin of thermal errors. Then real experiments are conducted to quantify thermal errors. Three solutions are suggested to mitigate or correct thermal errors. Among these solutions, a reference sample compensation approach is highly recommended because of its easy implementation, high accuracy and in-situ error correction capability. Most of the work has appeared in our previously published papers, so its originality is not claimed. Instead, this paper aims to give a comprehensive overview and further insight into our work on thermal error analysis and compensation for DIC/DVC measurements.
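A minimal sketch of the reference-sample compensation idea: a stress-free reference sample in the field of view sees only the imaging-geometry drift, which can then be subtracted from the test-sample measurement. All numbers below are invented for illustration.

```python
import numpy as np

# Sketch of reference-sample compensation for thermal errors in DIC: the
# apparent displacement measured on a stress-free reference sample estimates
# the shift + expansion drift of the imaging system, which is subtracted
# in situ from the test-sample measurement.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 101)                    # mm, positions across the FOV
true_disp = 0.002 * x                          # real deformation of test sample
drift = 0.005 + 0.0008 * x                     # shift + expansion from heating
noise = lambda: rng.normal(0, 1e-4, x.size)    # DIC matching noise
measured_test = true_disp + drift + noise()    # DIC result on the test sample
measured_ref = drift + noise()                 # stress-free reference sample
corrected = measured_test - measured_ref       # thermal error removed
print("max residual (mm):", float(np.abs(corrected - true_disp).max()))
```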
Performance Evaluation Of The Antares Reference Telescope System
NASA Astrophysics Data System (ADS)
Parker, J. R.; Woodfin, G. L.; Viswanathan, V. K.
1985-11-01
The Antares Reference Telescope System is a complicated electro-optical-mechanical system whose main purpose is to enable positioning of targets used in the Antares Laser System to within 10 μm of a selected nominal position. To date, it has been used successfully to position targets ranging in size from 300 μm to 2 mm. The system consists of two electro-optical systems positioned in a nearly orthogonal manner. This "cross telescope" configuration facilitates accurate positioning in three planes. The results obtained so far in resolution and positioning of targets using this system are discussed. It is shown that a resolution of 200 lp/mm and a positioning precision of 25 μm can be obtained.
GPS Attitude Determination Using Deployable-Mounted Antennas
NASA Technical Reports Server (NTRS)
Osborne, Michael L.; Tolson, Robert H.
1996-01-01
The primary objective of this investigation is to develop a method to solve for spacecraft attitude in the presence of potential incomplete antenna deployment. Most research on the use of the Global Positioning System (GPS) in attitude determination has assumed that the antenna baselines are known to less than 5 centimeters, or one quarter of the GPS signal wavelength. However, if the GPS antennas are mounted on a deployable fixture such as a solar panel, the actual antenna positions will not necessarily be within 5 cm of nominal. Incomplete antenna deployment could cause the baselines to be grossly in error, perhaps by as much as a meter. Overcoming this large uncertainty in order to accurately determine attitude is the focus of this study. To this end, a two-step solution method is proposed. The first step uses a least-squares estimate of the baselines to geometrically calculate the deployment angle errors of the solar panels. For the spacecraft under investigation, the first step determines the baselines to 3-4 cm with 4-8 minutes of data. A Kalman filter is then used to complete the attitude determination process, resulting in typical attitude errors of 0.5 degrees.
NASA Astrophysics Data System (ADS)
Bergese, P.; Bontempi, E.; Depero, L. E.
2006-10-01
X-ray reflectivity (XRR) is a non-destructive, accurate and fast technique for evaluating film density. However, sample-goniometer alignment is a critical experimental factor and the overriding error source in XRR density determination. With commercial single-wavelength X-ray reflectometers, alignment is difficult to control and strongly depends on the operator. In the present work, the contribution of misalignment to the density evaluation error is discussed, and a novel procedure (named the XRR-density evaluation, or XRR-DE, method) to minimize the problem is presented. The method overcomes the alignment step by extrapolating the correct density value from appropriate non-specular XRR data sets. This procedure is operator independent and suitable for commercial single-wavelength X-ray reflectometers. To test the XRR-DE method, single crystals of TiO2 and SrTiO3 were used. In both cases the determined densities differed from the nominal ones by less than 5.5%. Thus, the XRR-DE method can be successfully applied to evaluate the density of thin films for which only optical reflectivity is currently used. The advantage is that the method can be considered thickness independent.
Ring rolling process simulation for geometry optimization
NASA Astrophysics Data System (ADS)
Franchi, Rodolfo; Del Prete, Antonio; Donatiello, Iolanda; Calabrese, Maurizio
2017-10-01
Ring rolling is a complex hot forming process where different rolls are involved in the production of seamless rings. Since each roll must be independently controlled, different speed laws must be set; usually, in the industrial environment, a milling curve is introduced to monitor the shape of the workpiece during deformation, in order to ensure correct ring production. In the present paper a ring rolling process has been studied and optimized in order to obtain annular components for aerospace applications. In particular, the influence of the process input parameters (feed rate of the mandrel and angular speed of the main roll) on the geometrical features of the final ring has been evaluated. For this purpose, a three-dimensional finite element model for HRR (Hot Ring Rolling) has been implemented in SFTC DEFORM V11. The FEM model has been used to formulate a proper optimization problem. The optimization procedure has been implemented in the commercial software DS ISight in order to find the combination of process parameters which minimizes the percentage error of each obtained dimension with respect to its nominal value. The software finds the relationship between input and output parameters by applying Response Surface Methodology (RSM), using the exact values of the output parameters in the control points of the design space explored through FEM simulation. Once this relationship is known, the values of the output parameters can be calculated for each combination of the input parameters. After the calculation of the response surfaces for the selected output parameters, an optimization procedure based on Genetic Algorithms has been applied. In the end, the error between each obtained dimension and its nominal value has been minimized. The constraints imposed were the maximum values of the standard deviations of the dimensions obtained for the final ring.
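A compact sketch of the surrogate-plus-evolutionary-optimization loop described above, with a synthetic stand-in for the FEM response and scipy's differential evolution in place of the genetic algorithm; the parameter ranges and the response function are invented.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Sketch of the RSM + evolutionary optimization loop: fit a quadratic response
# surface to (feed rate, roll speed) -> dimensional error samples, here a
# synthetic stand-in for the FEM results, then minimize the surrogate.
rng = np.random.default_rng(1)
X = rng.uniform([0.5, 1.0], [2.0, 4.0], size=(30, 2))   # feed, speed samples

def fem_error(x):            # stand-in for a DEFORM simulation output (%)
    f, w = x
    return (f - 1.2) ** 2 + 0.5 * (w - 2.5) ** 2 + 0.3

y = np.array([fem_error(x) for x in X])

def features(X):             # quadratic basis: [1, f, w, f^2, w^2, f*w]
    f, w = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(f), f, w, f * f, w * w, f * w])

coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)
surrogate = lambda x: float(features(np.atleast_2d(x)) @ coef)
res = differential_evolution(surrogate, bounds=[(0.5, 2.0), (1.0, 4.0)], seed=1)
print("optimal feed rate, roll speed:", res.x, " predicted error:", res.fun)
```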
Astrometry for New Reductions: The ANR method
NASA Astrophysics Data System (ADS)
Robert, Vincent; Le Poncin-Lafitte, Christophe
2018-04-01
Accurate positional measurements of planets and satellites are used to improve our knowledge of their orbits and dynamics, and to infer the accuracy of planet and satellite ephemerides. With the arrival of the Gaia-DR1 reference star catalog, and its complete release to follow, the traditional methods for ground-based astrometry have become outdated: their formal accuracy is now poorer than that of the catalog used. Systematic and zonal errors of the reference stars are eliminated, and the astrometric process itself now dominates the error budget. We present a set of algorithms for computing the apparent directions of planets, satellites and stars on any date to micro-arcsecond precision. The expressions are consistent with the ICRS reference system and define the transformation between theoretical reference data and ground-based astrometric observables.
Model reference adaptive control of flexible robots in the presence of sudden load changes
NASA Technical Reports Server (NTRS)
Steinvorth, Rodrigo; Kaufman, Howard; Neat, Gregory
1991-01-01
Direct command generator tracker based model reference adaptive control (MRAC) algorithms are applied to the dynamics of a flexible-joint arm in the presence of sudden load changes. Because of the need to satisfy a positive real condition, such MRAC procedures are designed so that a feedforward-augmented output follows the reference model output, resulting in an ultimately bounded rather than zero output error. Modifications are therefore suggested and tested that: (1) incorporate feedforward into the reference model's output as well as the plant's output, and (2) incorporate a derivative term into only the process feedforward loop. The resulting simulations give a response with zero steady-state model-following error, and thus encourage further use of MRAC for more complex flexible robotic systems.
Agogo, George O.; van der Voet, Hilko; Veer, Pieter van’t; Ferrari, Pietro; Leenders, Max; Muller, David C.; Sánchez-Cantalejo, Emilio; Bamia, Christina; Braaten, Tonje; Knüppel, Sven; Johansson, Ingegerd; van Eeuwijk, Fred A.; Boshuizen, Hendriek
2014-01-01
In epidemiologic studies, measurement error in dietary variables often attenuates the association between dietary intake and disease occurrence. To adjust for the attenuation caused by error in dietary intake, regression calibration is commonly used. To apply regression calibration, unbiased reference measurements are required. Short-term reference measurements for foods that are not consumed daily contain excess zeroes that pose challenges in the calibration model. We adapted the two-part regression calibration model, initially developed for multiple replicates of reference measurements per individual, to a single-replicate setting. We show how to handle excess-zero reference measurements with a two-step modeling approach, how to explore heteroscedasticity in the consumed amount with a variance-mean graph, how to explore nonlinearity with generalized additive modeling (GAM) and the empirical logit approach, and how to select covariates in the calibration model. The performance of the two-part calibration model was compared with its one-part counterpart. We used vegetable intake and mortality data from the European Prospective Investigation into Cancer and Nutrition (EPIC) study. In EPIC, reference measurements were taken with 24-hour recalls. For each of the three vegetable subgroups assessed separately, correcting for error with an appropriately specified two-part calibration model resulted in about a threefold increase in the strength of association with all-cause mortality, as measured by the log hazard ratio. We further found that the standard way of including covariates in the calibration model can lead to overfitting the two-part calibration model. Moreover, the extent of adjustment for error is influenced by the number and forms of the covariates in the calibration model. For episodically consumed foods, we advise researchers to pay special attention to the response distribution, nonlinearity, and covariate inclusion when specifying the calibration model. PMID:25402487
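A minimal sketch of the two-part idea on simulated data, assuming a logistic model for the probability of consumption and a lognormal model for the amount given consumption; the calibrated intake is the product of the two parts. Variable names and coefficients are invented.

```python
import numpy as np
import statsmodels.api as sm

# Sketch of a two-part calibration model for an episodically consumed food:
# part 1 models P(consumed > 0) with logistic regression, part 2 models the
# log amount given consumption; calibrated intake = probability * amount.
rng = np.random.default_rng(2)
n = 2000
ffq = rng.gamma(2.0, 50.0, n)                   # questionnaire intake (g/day)
consumed = rng.random(n) < 1 / (1 + np.exp(-(0.01 * ffq - 1)))   # ate that day?
amount = np.exp(3.0 + 0.004 * ffq + rng.normal(0, 0.5, n))       # g, if eaten
recall = np.where(consumed, amount, 0.0)        # 24-h recall with excess zeros

X = sm.add_constant(ffq)
part1 = sm.Logit(consumed.astype(float), X).fit(disp=0)   # consumption model
idx = recall > 0
part2 = sm.OLS(np.log(recall[idx]), X[idx]).fit()         # amount model
p_hat = part1.predict(X)
amount_hat = np.exp(part2.predict(X) + part2.mse_resid / 2)  # lognormal mean
calibrated = p_hat * amount_hat                 # E[intake | FFQ]
print("mean recall vs calibrated:", recall.mean(), calibrated.mean())
```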
NASA Astrophysics Data System (ADS)
Krupka, M.; Kalal, M.; Dostal, J.; Dudzak, R.; Juha, L.
2017-08-01
Classical interferometry has become a widely used method of active optical diagnostics. Its more advanced version, allowing reconstruction of three sets of data from just one specially designed interferogram (a so-called complex interferogram), was developed in the past and became known as complex interferometry. Along with the phase shift, which can also be retrieved using classical interferometry, the complex interferometry approach retrieves the amplitude modifications of the probing part of the diagnostic beam caused by the object under study (referred to as the signal amplitude) as well as the contrast of the interference fringes. In order to partially compensate for reconstruction errors due to imperfections in the diagnostic beam intensity structure, as well as for errors caused by a non-ideal optical setup of the interferometer itself (including the quality of its optical components), a reference interferogram can be put to good use. This method of interferogram analysis of experimental data has been successfully implemented in practice. However, in the majority of interferometer setups (especially those employing wavefront division), the probe and the reference part of the diagnostic beam feature different intensity distributions over their respective cross sections. This introduces an additional error into the reconstruction of the signal amplitude and the fringe contrast which cannot be resolved using the reference interferogram alone. To deal with this error, it was found that additional separately recorded images of the intensity distribution of the probe and the reference part of the diagnostic beam (with no signal present) are needed. For the best results, sufficient shot-to-shot stability of the whole diagnostic system is required. In this paper, the efficiency of the complex interferometry approach in obtaining the highest possible accuracy of the signal amplitude reconstruction is verified using computer-generated complex and reference interferograms containing artificially introduced intensity variations in the probe and the reference part of the diagnostic beam. These data sets are subsequently analyzed and the errors of the signal amplitude reconstruction are evaluated.
ERIC Educational Resources Information Center
Alamin, Abdulamir; Ahmed, Sawsan
2012-01-01
Analyzing errors committed by second language learners during their first year of study at the University of Taif can offer insights into and knowledge of the learners' difficulties in acquiring technical English communication. With reference to the errors analyzed, the researcher found that the learners' failure to understand basic English grammar…
The Frame Constraint on Experimentally Elicited Speech Errors in Japanese
ERIC Educational Resources Information Center
Saito, Akie; Inoue, Tomoyoshi
2017-01-01
The so-called syllable position effect in speech errors has been interpreted as reflecting constraints posed by the frame structure of a given language, which is separately operating from linguistic content during speech production. The effect refers to the phenomenon that when a speech error occurs, replaced and replacing sounds tend to be in the…
The RMI Space Weather and Navigation Systems (SWANS) Project
NASA Astrophysics Data System (ADS)
Warnant, Rene; Lejeune, Sandrine; Wautelet, Gilles; Spits, Justine; Stegen, Koen; Stankov, Stan
The SWANS (Space Weather and Navigation Systems) research and development project (http://swans.meteo.be) is an initiative of the Royal Meteorological Institute (RMI) under the auspices of the Belgian Solar-Terrestrial Centre of Excellence (STCE). The RMI SWANS objectives are: research on space weather and its effects on GNSS applications; permanent monitoring of the local/regional geomagnetic and ionospheric activity; and development/operation of relevant nowcast, forecast, and alert services to help professional GNSS/GALILEO users in mitigating space weather effects. Several SWANS developments have already been implemented and are available for use. The K-LOGIC (Local Operational Geomagnetic Index K Calculation) system is a nowcast system based on a fully automated computer procedure for real-time digital magnetogram data acquisition, data screening, and calculating the local geomagnetic K index. Simultaneously, the planetary Kp index is estimated from solar wind measurements, thus adding to the service reliability and providing forecast capabilities as well. A novel hybrid empirical model, based on these ground- and space-based observations, has been implemented for nowcasting and forecasting the geomagnetic index, issuing also alerts whenever storm-level activity is indicated. A very important feature of the nowcast/forecast system is the strict control on the data input and processing, allowing for an immediate assessment of the output quality. The purpose of the LIEDR (Local Ionospheric Electron Density Reconstruction) system is to acquire and process data from simultaneous ground-based GNSS TEC and digital ionosonde measurements, and subsequently to deduce the vertical electron density distribution. A key module is the real-time estimation of the ionospheric slab thickness, offering additional information on the local ionospheric dynamics. The RTK (Real Time Kinematic) status mapping provides a quick look at the small-scale ionospheric effects on the RTK precision for several GPS stations in Belgium. The service assesses the effect of small-scale ionospheric irregularities by monitoring the high-frequency TEC rate of change at any given station. This assessment results in a (colour) code assigned to each station, code ranging from "quiet" (green) to "extreme" (red) and referring to the local ionospheric conditions. Alerts via e-mail are sent to subscribed users when disturbed conditions are observed. SoDIPE (Software for Determining the Ionospheric Positioning Error) estimates the positioning error due to the ionospheric conditions only (called "ionospheric error") in high-precision positioning applications (RTK in particular). For each of the Belgian Active Geodetic Network (AGN) baselines, SoDIPE computes the ionospheric error and its median value (every 15 minutes). Again, a (colour) code is assigned to each baseline, ranging from "nominal" (green) to "extreme" (red) error level. Finally, all available baselines (drawn in colour corresponding to error level) are displayed on a map of Belgium. The future SWANS work will focus on regional ionospheric monitoring and developing various other nowcast and forecast services.
[Professional error and nursing ethics: from past consideration to future strategy].
Germini, Francesco; Lattarulo, Pio
2008-01-01
In 1960, the National Federation IPASVI issued its first ethical code, which did not deal at all with the prevention of error or how to behave in case one does happen, with the exception of point 6, which recommends scrupulously respecting the prescribed therapy and encouraging patients to trust the physicians and the other health workers. The second ethical code was dated 1977. In this eighteen-year interval the hospital organization had been deeply modified, and the new layout of the code reflected some remarkable changes of thought, but made no precise reference to the matter of error management. The 1999 version of the code reflects the radical changes in the profession, formally recognized by law (42/1999) and by society; it acts as a reference for the regulation of the nursing profession and refers to one of the most ancient principles of medicine, the "primum non nocere". It is important to remember that an ethical code derives from professional considerations applied to the context of "here and now". Some strategic considerations for the future regarding the important role of risk prevention and the management of errors (which do, unfortunately, occur) are therefore expressed.
Exploring the Function Space of Deep-Learning Machines
NASA Astrophysics Data System (ADS)
Li, Bo; Saad, David
2018-06-01
The function space of deep-learning machines is investigated by studying growth in the entropy of functions of a given error with respect to a reference function, realized by a deep-learning machine. Using physics-inspired methods we study both sparsely and densely connected architectures to discover a layerwise convergence of candidate functions, marked by a corresponding reduction in entropy when approaching the reference function, gain insight into the importance of having a large number of layers, and observe phase transitions as the error increases.
Orbital-free bond breaking via machine learning
NASA Astrophysics Data System (ADS)
Snyder, John C.; Rupp, Matthias; Hansen, Katja; Blooston, Leo; Müller, Klaus-Robert; Burke, Kieron
2013-12-01
Using a one-dimensional model, we explore the ability of machine learning to approximate the non-interacting kinetic energy density functional of diatomics. This nonlinear interpolation between Kohn-Sham reference calculations can (i) accurately dissociate a diatomic, (ii) be systematically improved with increased reference data and (iii) generate accurate self-consistent densities via a projection method that avoids directions with no data. With relatively few densities, the error due to the interpolation is smaller than typical errors in standard exchange-correlation functionals.
Bad Science and Its Social Implications.
ERIC Educational Resources Information Center
Zeidler, Dana L.; Sadler, Troy D.; Berson, Michael J.; Fogelman, Aimee L.
2002-01-01
Investigates three types of bad science: (1) cultural prejudice based on scientific errors (polygenism, phrenology, reification through intelligence testing); (2) unethical science (Tuskegee syphilis experiments, tobacco companies and research); and (3) unwitting errors (pesticides, chlorofluorocarbons). (Contains 50 references.) (SK)
Shunt regulation electric power system
NASA Technical Reports Server (NTRS)
Wright, W. H.; Bless, J. J. (Inventor)
1971-01-01
A regulated electric power system having load and return bus lines is described. A plurality of solar cells interconnected in a power supplying relationship and having a power shunt tap point electrically spaced from the bus lines is provided. A power dissipator is connected to the shunt tap point and provides for a controllable dissipation of excess energy supplied by the solar cells. A dissipation driver is coupled to the power dissipator, controls its conductance and dissipation, and is also connected to the solar cells in a power tapping relationship to derive operating power therefrom. An error signal generator is coupled to the load bus and to a reference signal generator to provide an error output signal which is representative of the difference between the electric parameters existing at the load bus and the reference signal generator. An error amplifier is coupled to the error signal generator and the dissipation driver to provide the driver with controlling signals.
Irradiance measurement errors due to the assumption of a Lambertian reference panel
NASA Technical Reports Server (NTRS)
Kimes, D. S.; Kirchner, J. A.
1982-01-01
A technique is presented for determining the error in diurnal irradiance measurements that results from the non-Lambertian behavior of a reference panel under various irradiance conditions. Spectral biconical reflectance factors of a spray-painted barium sulfate panel, along with simulated sky radiance data for clear and hazy skies at six solar zenith angles, were used to calculate the estimated panel irradiances and true irradiances for a nadir-looking sensor in two wavelength bands. The inherent errors in total spectral irradiance (0.68 microns) for a clear sky were 0.60, 6.0, 13.0, and 27.0% for solar zenith angles of 0, 45, 60, and 75 deg, respectively. The technique can be used to characterize the error of a specific panel used in field measurements, and thus eliminate any ambiguity of the effects of the type, preparation, and aging of the paint.
NASA Astrophysics Data System (ADS)
Appleby, Graham; Rodríguez, José; Altamimi, Zuheir
2016-12-01
Satellite laser ranging (SLR) to the geodetic satellites LAGEOS and LAGEOS-2 uniquely determines the origin of the terrestrial reference frame and, jointly with very long baseline interferometry, its scale. Given such a fundamental role in satellite geodesy, it is crucial that any systematic errors in either technique are at an absolute minimum as efforts continue to realise the reference frame at millimetre levels of accuracy to meet the present and future science requirements. Here, we examine the intrinsic accuracy of SLR measurements made by tracking stations of the International Laser Ranging Service using normal point observations of the two LAGEOS satellites in the period 1993 to 2014. The approach we investigate in this paper is to compute weekly reference frame solutions solving for satellite initial state vectors, station coordinates and daily Earth orientation parameters, estimating along with these weekly average range errors for each and every one of the observing stations. Potential issues in any of the large number of SLR stations assumed to have been free of error in previous realisations of the ITRF may have been absorbed in the reference frame, primarily in station height. Likewise, systematic range errors estimated against a fixed frame that may itself suffer from accuracy issues will absorb network-wide problems into station-specific results. Our results suggest that in the past two decades, the scale of the ITRF derived from the SLR technique has been close to 0.7 ppb too small, due to systematic errors either or both in the range measurements and their treatment. We discuss these results in the context of preparations for ITRF2014 and additionally consider the impact of this work on the currently adopted value of the geocentric gravitational constant, GM.
Performance of concatenated Reed-Solomon/Viterbi channel coding
NASA Technical Reports Server (NTRS)
Divsalar, D.; Yuen, J. H.
1982-01-01
The concatenated Reed-Solomon (RS)/Viterbi coding system is reviewed. The performance of the system is analyzed and results are derived with a new, simple approach. A functional model for the input RS symbol error probability is presented. Based on this new functional model, we compute the performance of a concatenated system in terms of RS word error probability, output RS symbol error probability, bit error probability due to decoding failure, and bit error probability due to decoding error. Finally, we analyze the effects of a noisy carrier reference and slow fading on system performance.
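For reference, the RS word error probability follows from the input symbol error probability under the standard assumption of independent symbol errors: a t-error-correcting (n, k) code fails when more than t of the n symbols are wrong. A small sketch, using the common (255, 223) code with t = 16 as an example:

```python
from math import comb

# Word error probability of an (n, k) Reed-Solomon code that corrects up to
# t = (n - k) // 2 symbol errors, assuming independent channel symbol errors
# with probability p (e.g., the Viterbi decoder's output symbol error rate).
def rs_word_error(p, n=255, k=223):
    t = (n - k) // 2
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1, n + 1))

for p in (0.01, 0.02, 0.03):
    print(f"p_symbol={p:.2f}  P_word={rs_word_error(p):.3e}")
```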
The contribution of low-energy protons to the total on-orbit SEU rate
Dodds, Nathaniel Anson; Martinez, Marino J.; Dodd, Paul E.; ...
2015-11-10
Low- and high-energy proton experimental data and error rate predictions are presented for many bulk Si and SOI circuits from the 20-90 nm technology nodes to quantify how much low-energy protons (LEPs) can contribute to the total on-orbit single-event upset (SEU) rate. Every effort was made to predict LEP error rates that are conservatively high; even secondary protons generated in the spacecraft shielding have been included in the analysis. Across all the environments and circuits investigated, and when operating within 10% of the nominal operating voltage, LEPs were found to increase the total SEU rate to up to 4.3 times as high as it would have been in the absence of LEPs. Therefore, the best approach to account for LEP effects may be to calculate the total error rate from high-energy protons and heavy ions, and then multiply it by a safety margin of 5. If that error rate can be tolerated, then our findings suggest that it is justified to waive LEP tests in certain situations. Trends were observed in the LEP angular responses of the circuits tested. As a result, grazing angles were the worst case for the SOI circuits, whereas the worst-case angle was at or near normal incidence for the bulk circuits.
The Robustness of Acoustic Analogies
NASA Technical Reports Server (NTRS)
Freund, J. B.; Lele, S. K.; Wei, M.
2004-01-01
Acoustic analogies for the prediction of flow noise are exact rearrangements of the flow equations N(q) = 0 into a nominal sound source S(q) and a sound propagation operator L such that L(q) = S(q), where q is the vector of flow variables. In practice, the sound source is typically modeled and the propagation operator inverted to make predictions. Since the rearrangement is exact, any sufficiently accurate model of the source will yield the correct sound, so other factors must determine the merits of any particular formulation. Using data from a two-dimensional mixing layer direct numerical simulation (DNS), we evaluate the robustness of two analogy formulations to different errors intentionally introduced into the source. The motivation is that since S cannot be perfectly modeled, analogies that are less sensitive to errors in S are preferable. Our assessment is made within the framework of Goldstein's generalized acoustic analogy, in which different choices of a base flow used in constructing L give different sources S and thus different analogies. A uniform base flow yields a Lighthill-like analogy, which we evaluate against a formulation in which the base flow is the actual mean flow of the DNS. The more complex mean flow formulation is found to be significantly more robust to errors in the energetic turbulent fluctuations, but its advantage is less pronounced when errors are made in the smaller scales.
Analytical and Photogrammetric Characterization of a Planar Tetrahedral Truss
NASA Technical Reports Server (NTRS)
Wu, K. Chauncey; Adams, Richard R.; Rhodes, Marvin D.
1990-01-01
Future space science missions are likely to require near-optical quality reflectors supported by a stiff truss structure. This support truss should conform closely to its intended shape to minimize its contribution to the overall surface error of the reflector. The current investigation was conducted to evaluate the planar surface accuracy of a regular tetrahedral truss structure by comparing predicted and measured node locations. The truss is a 2-ring hexagonal structure composed of 102 equal-length truss members. Each truss member is nominally 2 meters in length between node centers and is comprised of a graphite/epoxy tube with aluminum nodes and joints. The axial stiffness and the length variation of the truss components were determined experimentally and incorporated into a static finite element analysis of the truss. From this analysis, the root mean square (RMS) surface error of the truss was predicted to be 0.11 mm (0.004 in). Photogrammetry tests were performed on the assembled truss to measure the normal displacements of the upper surface nodes and to determine whether the truss would maintain its intended shape when subjected to repeated assembly. Considering the variation in the truss component lengths, the measured RMS error of 0.14 mm (0.006 in) in the assembled truss is relatively small. The test results also indicate that a repeatable truss surface is achievable. Several potential sources of error were identified and discussed.
Global and regional kinematics with GPS
NASA Technical Reports Server (NTRS)
King, Robert W.
1994-01-01
The inherent precision of the doubly differenced phase measurement and the low cost of instrumentation made GPS the space geodetic technique of choice for regional surveys as soon as the constellation reached acceptable geometry in the area of interest: 1985 in western North America, the early 1990's in most of the world. Instrument and site-related errors for horizontal positioning are usually less than 3 mm, so that the dominant source of error is uncertainty in the reference frame defined by the satellite orbits and the tracking stations used to determine them. Prior to about 1992, when the tracking network for most experiments was globally sparse, the number of fiducial sites or the level at which they could be tied to an SLR or VLBI reference frame usually set the accuracy limit. Recently, with a global network of over 30 stations, the limit is set more often by deficiencies in models for non-gravitational forces acting on the satellites. For regional networks in the northern hemisphere, reference frame errors are currently about 3 parts per billion (ppb) in horizontal position, allowing centimeter-level accuracies over intercontinental distances and less than 1 mm for a 100 km baseline. The accuracy of GPS measurements for monitoring height variations is generally 2-3 times worse than for horizontal motions. As for VLBI, the primary source of error is unmodeled fluctuations in atmospheric water vapor, but both reference frame uncertainties and some instrument errors are more serious for vertical than for horizontal measurements. Under good conditions, daily repeatabilities at the level of 10 mm rms were achieved. This paper will summarize the current accuracy of GPS measurements and their implications for the use of SLR to study regional kinematics.
Baxter, Suzanne Domel; Smith, Albert F; Hardin, James W; Nichols, Michele D
2007-04-01
Validation study data are used to illustrate that conclusions about children's reporting accuracy for energy and macronutrients over multiple interviews (ie, time) depend on the analytic approach for comparing reported and reference information: conventional, which disregards accuracy of reported items and amounts, or reporting-error-sensitive, which classifies reported items as matches (eaten) or intrusions (not eaten), and amounts as corresponding or overreported. Children were observed eating school meals on 1 day (n=12), or 2 (n=13) or 3 (n=79) nonconsecutive days separated by >or=25 days, and were interviewed on the morning after each observation day about intake the previous day. Reference (observed) and reported information were transformed to energy and macronutrients (ie, protein, carbohydrate, and fat) and compared. The outcome measures, for energy and each macronutrient, were report rates (reported/reference), correspondence rates (genuine accuracy measures), and inflation ratios (error measures), analyzed with mixed models. Using the conventional approach for analyzing energy and macronutrients, report rates did not vary systematically over interviews (all four P values >0.61). Using the reporting-error-sensitive approach, correspondence rates increased over interviews (all four P values <0.04), indicating that reporting accuracy improved over time; inflation ratios decreased, although not significantly, over interviews, also suggesting that reporting accuracy improved over time. Correspondence rates were lower than report rates, indicating that reporting accuracy was worse than implied by conventional measures. When analyzed using the reporting-error-sensitive approach, children's dietary reporting accuracy for energy and macronutrients improved over time, but the conventional approach masked improvements and overestimated accuracy. The reporting-error-sensitive approach is recommended when analyzing data from validation studies of dietary reporting accuracy for energy and macronutrients.
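A toy illustration of the two analytic approaches for a single interview, with invented item-level energy values; the match/intrusion and corresponding/overreported classifications follow the definitions given above.

```python
# Sketch of the two analytic approaches for one child's interview, using
# illustrative energy values (kcal) per item. Note how the conventional
# report rate looks nearly perfect while a third of the reference energy
# is actually overreported.
reference = {"milk": 120, "pizza": 300, "apple": 80}         # observed eaten
reported  = {"milk": 130, "pizza": 280, "cookie": 150}       # recalled items

ref_total = sum(reference.values())
rep_total = sum(reported.values())
report_rate = rep_total / ref_total            # conventional: ignores accuracy

corresponding = sum(min(reported[i], reference[i])
                    for i in reported if i in reference)     # matched amounts
overreported = rep_total - corresponding       # intrusions + excess amounts
correspondence_rate = corresponding / ref_total   # genuine accuracy measure
inflation_ratio = overreported / ref_total        # error measure
print(report_rate, correspondence_rate, inflation_ratio)   # 1.12, 0.80, 0.32
```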
NASA Technical Reports Server (NTRS)
Kirstetter, Pierre-Emmanuel; Hong, Y.; Gourley, J. J.; Chen, S.; Flamig, Z.; Zhang, J.; Howard, K.; Schwaller, M.; Petersen, W.; Amitai, E.
2011-01-01
Characterization of the error associated with satellite rainfall estimates is a necessary component of deterministic and probabilistic frameworks involving spaceborne passive and active microwave measurements, for applications ranging from water budget studies to forecasting natural hazards related to extreme rainfall events. We focus here on the error structure of NASA's Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR) quantitative precipitation estimation (QPE) at the ground. The problem is addressed by comparison of PR QPEs with reference values derived from ground-based measurements using the NOAA/NSSL ground radar-based National Mosaic and QPE system (NMQ/Q2). A preliminary investigation of this subject has been carried out at the PR estimation scale (instantaneous and 5 km) using a three-month data sample over the southern part of the US. The primary contribution of this study is the presentation of the detailed steps required to derive a trustworthy reference rainfall dataset from Q2 at the PR pixel resolution. It relies on a bias correction and a radar quality index, both of which provide a basis for filtering out the less trustworthy Q2 values. Several aspects of PR errors are revealed and quantified, including sensitivity to the processing steps used for the reference rainfall, comparisons of rainfall detectability and rainfall rate distributions, spatial representativeness of the error, and separation of systematic biases and random errors. The methodology and framework developed herein apply more generally to rainfall rate estimates from other sensors onboard low-earth-orbiting satellites, such as microwave imagers and dual-wavelength radars like those of the Global Precipitation Measurement (GPM) mission.
van Isterdael, C E D; Stilma, J S; Bezemer, P D; Tijmes, N T
2008-05-03
A study into the treatment of refractive errors and cataract in a selected population with learning disabilities. Design: retrospective. In the years 1993-2003, 5205 people (mean age: 39 years) were referred to the visual advisory centre of Bartiméus (one of three institutes for the visually impaired in the Netherlands) by learning disability physicians and were assessed ophthalmologically. This assessment consisted of a measurement of visual acuity and refractive error, slit-lamp examination and retinoscopy, and was performed at the client's accommodation. Advice on spectacle prescriptions and referral for cataract surgery was registered. Refractive errors were found in 35% (1845/5205) of the patients with learning disabilities; 49% (905/1845) already wore spectacles; another 14% (265/1845) were prescribed spectacles for the first time. Of those with presbyopia, 12% (232/1865) had reading glasses and 10% (181/1865) were given a first prescription for spectacles. The most important determinant for not prescribing spectacles was the presence of a severe learning disability (odds ratio (OR): 3.7). Cataract was present in 10% (497/5205) of the population; 399 patients were advised to be referred for surgery, and 55% (219/399) were referred, of whom 26% (57/219) had surgery. Moderately severe bilateral cataract was the only determinant of cataract surgery (OR: 7.8). Refractive errors and cataract were not always treated in this group. One of the reasons for non-treatment of refractive errors was a severe learning disability. The reason for treatment or non-treatment in patients with cataract was less clear.
Luskin, Matthew Scott; Albert, Wido Rizki; Tobler, Mathias W
2018-02-12
In the original version of the Article, reference 18 was incorrectly numbered as reference 30, and references 19 to 30 were incorrectly numbered as 18 to 29. These errors have now been corrected in the PDF and HTML versions of the manuscript.
Reference-free error estimation for multiple measurement methods.
Madan, Hennadii; Pernuš, Franjo; Špiclin, Žiga
2018-01-01
We present a computational framework for selecting the most accurate and precise method of measurement of a certain quantity when there is no access to the true value of the measurand. A typical use case is when several image analysis methods are applied to measure the value of a particular quantitative imaging biomarker from the same images. The accuracy of each measurement method is characterized by systematic error (bias), which is modeled as a polynomial in the true values of the measurand, and the precision by random error, modeled with a Gaussian random variable. In contrast to previous works, the random errors are modeled jointly across all methods, thereby enabling the framework to analyze measurement methods based on similar principles, which may have correlated random errors. Furthermore, the posterior distribution of the error model parameters is estimated from samples obtained by Markov chain Monte Carlo and analyzed to estimate the parameter values and the unknown true values of the measurand. The framework was validated on six synthetic datasets and one clinical dataset containing measurements of total lesion load, a biomarker of neurodegenerative diseases, obtained with four automatic methods by analyzing brain magnetic resonance images. The estimates of bias and random error were in good agreement with the corresponding least squares regression estimates against a reference.
Increasing accuracy of dispersal kernels in grid-based population models
Slone, D.H.
2011-01-01
Dispersal kernels in grid-based population models specify the proportion, distance and direction of movements within the model landscape. Spatial errors in dispersal kernels can have large compounding effects on model accuracy. Circular Gaussian and Laplacian dispersal kernels at a range of spatial resolutions were investigated, and methods for minimizing errors caused by the discretizing process were explored. Kernels of progressively smaller sizes relative to the landscape grid size were calculated using cell-integration and cell-center methods. These kernels were convolved repeatedly, and the final distribution was compared with a reference analytical solution. For large Gaussian kernels (σ > 10 cells), the total kernel error was <10^-11 compared to analytical results. Using an invasion model that tracked the time a population took to reach a defined goal, the discrete model results were comparable to the analytical reference. With Gaussian kernels that had σ ≤ 0.12 using the cell-integration method, or σ ≤ 0.22 using the cell-center method, the kernel error was greater than 10%, which resulted in invasion times that were orders of magnitude different from theoretical results. A goal-seeking routine was developed to adjust the kernels to minimize overall error. With this, corrections for small kernels were found that decreased the overall kernel error to <10^-11 and the invasion time error to <5%.
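A minimal sketch contrasting the two discretization methods for a 1-D Gaussian kernel (σ in cell units), using the error function to integrate the density over each cell; the kernel radius and test widths are arbitrary choices for illustration.

```python
import numpy as np
from scipy.special import erf

# Two discretizations of a 1-D Gaussian dispersal kernel: sampling the density
# at cell centers vs integrating the density over each cell via the Gaussian
# CDF. The integrated form is exact per cell and behaves far better when
# sigma is small relative to the grid spacing.
def cell_center_kernel(sigma, radius=10):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def cell_integrated_kernel(sigma, radius=10):
    edges = np.arange(-radius, radius + 2) - 0.5          # cell boundaries
    cdf = 0.5 * (1 + erf(edges / (sigma * np.sqrt(2))))   # Gaussian CDF
    k = np.diff(cdf)                                      # mass per cell
    return k / k.sum()

for sigma in (0.2, 1.0, 5.0):
    cc, ci = cell_center_kernel(sigma), cell_integrated_kernel(sigma)
    print(f"sigma={sigma}: max |center - integrated| = {np.abs(cc - ci).max():.4f}")
```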
Control of Systems With Slow Actuators Using Time Scale Separation
NASA Technical Reports Server (NTRS)
Stepanyan, Vehram; Nguyen, Nhan
2009-01-01
This paper addresses the problem of controlling a nonlinear plant with a slow actuator using the singular perturbation method. For the known plant-actuator cascaded system, the proposed scheme achieves tracking of a given reference model with considerably less control demand than would otherwise result from conventional design techniques. This is the consequence of excluding the small parameter from the actuator dynamics via time scale separation. The resulting tracking error is of the order of this small parameter. For the unknown system, the adaptive counterpart is developed based on a prediction model, which is driven towards the reference model by the control design. It is proven that the prediction model tracks the reference model with an error proportional to the small parameter, while the prediction error converges to zero. The resulting closed-loop system with all prediction models and adaptive laws remains stable. The benefits of the approach are demonstrated in simulation studies and compared to conventional control approaches.
Improved Statistics for Genome-Wide Interaction Analysis
Ueki, Masao; Cordell, Heather J.
2012-01-01
Recently, Wu and colleagues [1] proposed two novel statistics for genome-wide interaction analysis using case/control or case-only data. In computer simulations, their proposed case/control statistic outperformed competing approaches, including the fast-epistasis option in PLINK and logistic regression analysis under the correct model; however, reasons for its superior performance were not fully explored. Here we investigate the theoretical properties and performance of Wu et al.'s proposed statistics and explain why, in some circumstances, they outperform competing approaches. Unfortunately, we find minor errors in the formulae for their statistics, resulting in tests that have higher than nominal type 1 error. We also find minor errors in PLINK's fast-epistasis and case-only statistics, although theory and simulations suggest that these errors have only negligible effect on type 1 error. We propose adjusted versions of all four statistics that, both theoretically and in computer simulations, maintain correct type 1 error rates under the null hypothesis. We also investigate statistics based on correlation coefficients that maintain similar control of type 1 error. Although designed to test specifically for interaction, we show that some of these previously-proposed statistics can, in fact, be sensitive to main effects at one or both loci, particularly in the presence of linkage disequilibrium. We propose two new “joint effects” statistics that, provided the disease is rare, are sensitive only to genuine interaction effects. In computer simulations we find, in most situations considered, that highest power is achieved by analysis under the correct genetic model. Such an analysis is unachievable in practice, as we do not know this model. However, generally high power over a wide range of scenarios is exhibited by our joint effects and adjusted Wu statistics. We recommend use of these alternative or adjusted statistics and urge caution when using Wu et al.'s originally-proposed statistics, on account of the inflated error rate that can result. PMID:22496670
Quotation accuracy in medical journal articles-a systematic review and meta-analysis.
Jergas, Hannah; Baethge, Christopher
2015-01-01
Background. Quotations and references are an indispensable element of scientific communication. They should support what authors claim or provide important background information for readers. Studies indicate, however, that quotations not serving their purpose (quotation errors) may be prevalent. Methods. We carried out a systematic review, meta-analysis and meta-regression of quotation errors, taking account of differences between studies in error ascertainment. Results. Out of 559 studies screened we included 28 in the main analysis, and estimated major, minor and total quotation error rates of 11.9%, 95% CI [8.4, 16.6], 11.5% [8.3, 15.7], and 25.4% [19.5, 32.4], respectively. While heterogeneity was substantial, even the lowest estimate of total quotation errors was considerable (6.7%). Indirect references accounted for less than one sixth of all quotation problems. The findings remained robust in a number of sensitivity and subgroup analyses (including a risk of bias analysis) and in meta-regression. There was no indication of publication bias. Conclusions. Readers of medical journal articles should be aware of the fact that quotation errors are common. Measures against quotation errors include spot checks by editors and reviewers, correct placement of citations in the text, and declarations by authors that they have checked cited material. Future research should elucidate whether and to what degree quotation errors are detrimental to scientific progress.
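For readers unfamiliar with how such rates are pooled, below is a small sketch of DerSimonian-Laird random-effects pooling of proportions on the logit scale; the (errors, sample size) pairs are invented and do not reproduce the study's data.

```python
import numpy as np

# Sketch of DerSimonian-Laird random-effects pooling of error proportions on
# the logit scale; (k, n) pairs below are invented example studies.
studies = [(12, 100), (30, 150), (8, 80), (25, 120), (18, 90)]
k, n = np.array(studies).T
p = k / n
y = np.log(p / (1 - p))                    # logit-transformed proportions
v = 1 / k + 1 / (n - k)                    # approximate within-study variance
w = 1 / v
y_fixed = (w * y).sum() / w.sum()          # fixed-effect pooled logit
Q = (w * (y - y_fixed) ** 2).sum()         # heterogeneity statistic
tau2 = max(0.0, (Q - (len(y) - 1)) / (w.sum() - (w**2).sum() / w.sum()))
w_re = 1 / (v + tau2)                      # random-effects weights
y_re = (w_re * y).sum() / w_re.sum()
se = np.sqrt(1 / w_re.sum())
expit = lambda z: 1 / (1 + np.exp(-z))
print(f"pooled rate {expit(y_re):.3f}, "
      f"95% CI [{expit(y_re - 1.96 * se):.3f}, {expit(y_re + 1.96 * se):.3f}]")
```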
Research on calibration error of carrier phase against antenna arraying
NASA Astrophysics Data System (ADS)
Sun, Ke; Hou, Xiaomin
2016-11-01
A key technical difficulty of uplink antenna arraying is that the signals from the individual antennas cannot be automatically phase-aligned at a target in deep space. The magnitude of the far-field power combining gain is directly determined by the accuracy of carrier phase calibration, so it is necessary to analyze the entire arraying system in order to improve the calibration accuracy. This paper analyzes the factors affecting the carrier phase calibration error of an uplink antenna arraying system, including the error of the phase measurement and equipment, the error of the uplink channel phase shift, the position errors of the ground antennas, the calibration receiver and the target spacecraft, and the error due to atmospheric turbulence disturbance. The spatial and temporal autocorrelation model of atmospheric disturbances is discussed. The antennas of the uplink array share no common reference signal for continuous calibration, so the system must be calibrated periodically, with calibration referred to communication with one or more spacecraft over a certain period. Because the signals are not automatically aligned at a deep-space target, the alignment must be established in advance on the ground. The data show that, using existing technology, the error can be kept within the range demanded by the required accuracy of carrier phase calibration, and the total error can be controlled within a reasonable range.
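A small sketch of the kind of error budget the abstract describes: independent contributions combined by root-sum-square, followed by a standard small-error estimate of the coherent combining efficiency. The individual contributions are invented placeholders, not measured values.

```python
import numpy as np

# Carrier-phase calibration error budget: independent error sources combined
# by root-sum-square. Contributions are in degrees of phase at the uplink
# frequency and are illustrative placeholders only.
contributions = {
    "phase measurement / equipment":  4.0,
    "uplink channel phase shift":     3.0,
    "ground antenna position":        2.0,
    "receiver / spacecraft position": 2.5,
    "atmospheric turbulence":         6.0,
}
total = np.sqrt(sum(v**2 for v in contributions.values()))
# Small-error approximation for coherent combining with independent Gaussian
# phase errors of rms sigma (radians): power efficiency ~ exp(-sigma^2).
sigma = np.deg2rad(total)
print(f"total rms phase error: {total:.1f} deg, "
      f"combining efficiency ~ {np.exp(-sigma**2):.3f}")
```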
Precision electronic speed controller for an alternating-current motor
Bolie, V.W.
A high-precision controller is described for an alternating-current multi-phase electrical motor that is subject to a large inertial load. The controller was developed for controlling, in a neutron chopper system, a heavy spinning rotor that must be rotated in phase-locked synchronism with a reference pulse train that is representative of an ac power supply signal having a meandering line frequency. The controller includes a shaft revolution sensor which provides a feedback pulse train representative of the actual speed of the motor. An internal digital timing signal generator provides a reference signal which is compared with the feedback signal in a computing unit to provide a motor control signal. The motor control signal is a weighted linear sum of a speed error voltage, a phase error voltage, and a drift error voltage, each of which is computed anew with each revolution of the motor shaft. The speed error signal is generated by a novel vernier-logic circuit which is drift-free and highly sensitive to small speed changes. The phase error is also computed by digital logic, with adjustable sensitivity around a 0 mid-scale value. The drift error signal, generated by long-term counting of the phase error, is used to compensate for any slow changes in the average friction drag on the motor. An auxiliary drift-byte status sensor prevents any disruptive overflow or underflow of the drift-error counter. An adjustable clocked-delay unit is inserted between the controller and the source of the reference pulse train to permit phase alignment of the rotor to any desired offset angle. The stator windings of the motor are driven by two amplifiers which are provided with input signals having the proper quadrature relationship by an exciter unit consisting of a voltage controlled oscillator, a binary counter, a pair of read-only memories, and a pair of digital-to-analog converters.
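A toy sketch of the per-revolution control law described above: the drive command is a weighted linear sum of speed, phase, and drift errors, with the drift term accumulated from the phase error and clamped to mimic the overflow guard. The gains and the one-line rotor model are invented, not the patented design.

```python
# Per-revolution control law: u = weighted sum of speed, phase, drift errors.
K_SPEED, K_PHASE, K_DRIFT = 0.8, 0.4, 0.05
speed_ref = 60.0                     # rev/s, set by the reference pulse train
speed = 59.0                         # measured from shaft revolution sensor
phase = 0.3                          # rev, rotor phase relative to reference
drift = 0.0                          # long-term accumulated phase error
for rev in range(2000):              # recomputed anew with each revolution
    speed_err = speed_ref - speed
    phase_err = -phase               # drive phase offset toward zero
    drift += phase_err               # long-term count compensates friction
    drift = max(-10.0, min(10.0, drift))   # guard against counter overflow
    u = K_SPEED * speed_err + K_PHASE * phase_err + K_DRIFT * drift
    speed += 0.5 * u                 # crude rotor response to drive signal
    phase = (phase + (speed - speed_ref) / speed_ref) % 1.0
    phase = phase if phase < 0.5 else phase - 1.0   # wrap to [-0.5, 0.5)
print(f"speed {speed:.3f} rev/s, phase {phase:.4f} rev")  # locked to reference
```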
Local and global evaluation for remote sensing image segmentation
NASA Astrophysics Data System (ADS)
Su, Tengfei; Zhang, Shengwei
2017-08-01
In object-based image analysis, producing an accurate segmentation is usually an important issue that needs to be solved before image classification or target recognition, and the study of segmentation evaluation methods is key to solving it. Almost all existing evaluation strategies focus only on global performance assessment. However, such methods are ineffective in the situation where two segmentation results with very similar overall performance have very different local error distributions. To overcome this problem, this paper presents an approach that can quantify segmentation incorrectness both locally and globally. In doing so, region-overlap metrics are utilized to quantify each reference geo-object's over- and under-segmentation error. These quantified error values are used to produce segmentation error maps, which are effective in delineating local segmentation error patterns. The error values for all of the reference geo-objects are aggregated through area-weighted summation, so that global indicators can be derived. An experiment using two scenes of very different high-resolution images showed that the global evaluation part of the proposed approach was almost as effective as two other global evaluation methods, and the local part was a useful complement for comparing different segmentation results.
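A minimal sketch of the local-plus-global evaluation described above, assuming label rasters for the reference geo-objects and the segmentation; the overlap formulas are common region-based definitions, and the tiny arrays are invented for illustration.

```python
import numpy as np

# Local + global segmentation evaluation: for each reference geo-object, find
# its best-overlapping segment, compute over- and under-segmentation errors,
# then aggregate globally by area-weighted summation.
def evaluate(reference, segmentation):
    per_object, areas = {}, {}
    for r in np.unique(reference):
        mask_r = reference == r
        labels, counts = np.unique(segmentation[mask_r], return_counts=True)
        s = labels[np.argmax(counts)]            # best-overlapping segment
        mask_s = segmentation == s
        inter = np.logical_and(mask_r, mask_s).sum()
        over = 1 - inter / mask_r.sum()          # reference object split apart
        under = 1 - inter / mask_s.sum()         # segment spills outside
        per_object[r] = (over, under)            # entries for the error map
        areas[r] = mask_r.sum()
    total = sum(areas.values())
    g_over = sum(areas[r] * e[0] for r, e in per_object.items()) / total
    g_under = sum(areas[r] * e[1] for r, e in per_object.items()) / total
    return per_object, (g_over, g_under)         # local map, global indicators

reference = np.array([[1, 1, 2, 2], [1, 1, 2, 2]])
segmentation = np.array([[1, 1, 1, 2], [1, 1, 1, 2]])
print(evaluate(reference, segmentation))
```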
Method and apparatus for correcting eddy current signal voltage for temperature effects
Kustra, Thomas A.; Caffarel, Alfred J.
1990-01-01
An apparatus and method for measuring physical characteristics of an electrically conductive material by the use of eddy-current techniques and compensating for measurement errors caused by changes in temperature includes a switching arrangement connected between the primary and reference coils of an eddy-current probe which allows the probe to be selectively connected between an eddy-current output oscilloscope and a digital ohm-meter for measuring the resistances of the primary and reference coils substantially at the time of eddy-current measurement. In this way, changes in resistance due to temperature effects can be completely taken into account in determining the true error in the eddy-current measurement. The true error can consequently be converted into an equivalent eddy-current measurement correction.
[Tasks and duties of veterinary reference laboratories for food borne zoonoses].
Ellerbroek, Lüppo; Alter, T; Johne, R; Nöckler, K; Beutin, L; Helmuth, R
2009-02-01
Reference laboratories are of central importance for consumer protection. Field expertise and high scientific competence are basic requirements for the nomination of a national reference laboratory. To ensure a common approach in the analysis of zoonotic hazards, standards have been developed by the reference laboratories together with national official laboratories on the basis of Art. 33 of Regulation (EC) No. 882/2004. Reference laboratories function as arbitrative boards in the case of ambivalent or debatable results. New methods for detection of zoonotic agents are developed and validated to provide tools for analysis, e.g., in legal cases, if results from different parties are disputed. Besides these tasks, national reference laboratories offer capacity building and advanced training courses and control the performance of ring trials to ensure consistency in the quality of analyses in official laboratories. All reference laboratories work according to the ISO standard 17025, which defines the grounds for strict laboratory quality rules, and in cooperation with the respective Community Reference Laboratories (CRL). From the group of veterinary reference laboratories for food-borne zoonoses, the national reference laboratories are responsible for Listeria monocytogenes, for Campylobacter, for the surveillance and control of viral and bacterial contamination of bivalve molluscs, for E. coli, for the performance of analysis and tests on zoonoses (Salmonella), and, from the group of parasitological zoonotic agents, the national reference laboratory for Trichinella.
Steward, Christine D.; Stocker, Sheila A.; Swenson, Jana M.; O’Hara, Caroline M.; Edwards, Jonathan R.; Gaynes, Robert P.; McGowan, John E.; Tenover, Fred C.
1999-01-01
Fluoroquinolone resistance appears to be increasing in many species of bacteria, particularly in those causing nosocomial infections. However, the accuracy of some antimicrobial susceptibility testing methods for detecting fluoroquinolone resistance remains uncertain. Therefore, we compared the accuracy of the results of agar dilution, disk diffusion, MicroScan Walk Away Neg Combo 15 conventional panels, and Vitek GNS-F7 cards to the accuracy of the results of the broth microdilution reference method for detection of ciprofloxacin and ofloxacin resistance in 195 clinical isolates of the family Enterobacteriaceae collected from six U.S. hospitals for a national surveillance project (Project ICARE [Intensive Care Antimicrobial Resistance Epidemiology]). For ciprofloxacin, very major error rates were 0% (disk diffusion and MicroScan), 0.9% (agar dilution), and 2.7% (Vitek), while major error rates ranged from 0% (agar dilution) to 3.7% (MicroScan and Vitek). Minor error rates ranged from 12.3% (agar dilution) to 20.5% (MicroScan). For ofloxacin, no very major errors were observed, and major errors were noted only with MicroScan (3.7% major error rate). Minor error rates ranged from 8.2% (agar dilution) to 18.5% (Vitek). Minor errors for all methods were substantially reduced when results with MICs within ±1 dilution of the broth microdilution reference MIC were excluded from analysis. However, the high number of minor errors by all test systems remains a concern. PMID:9986809
NASA Technical Reports Server (NTRS)
Belcastro, Celeste M.
1989-01-01
Control systems for advanced aircraft, especially those with relaxed static stability, will be critical to flight and will, therefore, have very high reliability specifications which must be met for adverse as well as nominal operating conditions. Adverse conditions can result from electromagnetic disturbances caused by lightning, high energy radio frequency transmitters, and nuclear electromagnetic pulses. Tools and techniques must be developed to verify the integrity of the control system in adverse operating conditions. The most difficult and elusive perturbations to computer based control systems caused by an electromagnetic environment (EME) are functional error modes that involve no component damage. These error modes are collectively known as upset, can occur simultaneously in all of the channels of a redundant control system, and are software dependent. A methodology is presented for performing upset tests on a multichannel control system and considerations are discussed for the design of upset tests to be conducted in the lab on fault tolerant control systems operating in a closed loop with a simulated plant.
Use Of Adaptive Optics Element For Wavefront Error Correction In The Gemini CO2 Laser Fusion System
NASA Astrophysics Data System (ADS)
Viswanathan, V. K.; Parker, J. V.; Nussmier, T. A.; Swigert, C. J.; King, W.; Lau, A. S.; Price, K.
1980-11-01
The Gemini two beam CO2 laser fusion system incorporates a complex optical system with nearly 100 surfaces per beam, associated with the generation, transport and focusing of CO2 laser beams for irradiating laser fusion targets. Even though the system is nominally diffraction limited, in practice the departure from the ideal situation drops the Strehl ratio to 0.24. This departure is caused mostly by the imperfections in the large (34 cm optical clear aperture diameter) state-of-the-art components like the sodium chloride windows and micromachined mirrors. While the smaller optical components also contribute to this degradation, the various possible misalignments and nonlinear effects are considered to contribute very little to it. Analysis indicates that removing the static or quasi-static errors can dramatically improve the Strehl ratio. A deformable mirror which can comfortably achieve the design goal Strehl ratio of >= 0.7 is described, along with the various system trade-offs in the design of the mirror and the control system.
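The quoted Strehl ratios can be related to RMS wavefront error through the Maréchal approximation, a standard optics relation not stated in the abstract itself; the numbers below are back-of-the-envelope illustrations, not values from the paper.

```python
import math

def strehl(rms_error_waves: float) -> float:
    """Marechal approximation: Strehl ratio from RMS wavefront error in waves."""
    return math.exp(-(2 * math.pi * rms_error_waves) ** 2)

print(strehl(0.19))    # ~0.24, roughly the degraded value quoted above
print(strehl(1 / 11))  # ~0.72, near the design goal of Strehl >= 0.7
```

By this estimate, reaching the >= 0.7 goal requires correcting the residual wavefront error to roughly a tenth of a wave RMS, which is the kind of static or quasi-static correction a deformable mirror can supply.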
Global Precipitation Measurement Mission Launch and Commissioning
NASA Technical Reports Server (NTRS)
Davis, Nikesha; DeWeese, Keith; Vess, Melissa; O'Donnell, James R., Jr.; Welter, Gary
2015-01-01
During launch and early operation of the Global Precipitation Measurement (GPM) Mission, the Guidance, Navigation, and Control (GN&C) analysis team encountered four main on-orbit anomalies: (1) unexpected shock from Solar Array deployment, (2) momentum buildup from Magnetic Torquer Bar (MTB) phasing errors, (3) a transition into Safehold due to an albedo-induced Coarse Sun Sensor (CSS) anomaly, and (4) a flight software error that could cause a Safehold transition due to a Star Tracker occultation. This paper will discuss how GN&C engineers identified the anomalies and tracked down the root causes. Flight data and GN&C on-board models will be shown to illustrate how each of these anomalies was investigated and mitigated before causing any harm to the spacecraft. On May 29, 2014, GPM was handed over to the Mission Flight Operations Team after a successful commissioning period. Currently, GPM is operating nominally on orbit, collecting meaningful scientific data that will significantly improve our understanding of the Earth's climate and water cycle.
Hedden, Sarra L; Woolson, Robert F; Carter, Rickey E; Palesch, Yuko; Upadhyaya, Himanshu P; Malcolm, Robert J
2009-07-01
"Loss to follow-up" can be substantial in substance abuse clinical trials. When extensive losses to follow-up occur, one must cautiously analyze and interpret the findings of a research study. Aims of this project were to introduce the types of missing data mechanisms and describe several methods for analyzing data with loss to follow-up. Furthermore, a simulation study compared Type I error and power of several methods when missing data amount and mechanism varies. Methods compared were the following: Last observation carried forward (LOCF), multiple imputation (MI), modified stratified summary statistics (SSS), and mixed effects models. Results demonstrated nominal Type I error for all methods; power was high for all methods except LOCF. Mixed effect model, modified SSS, and MI are generally recommended for use; however, many methods require that the data are missing at random or missing completely at random (i.e., "ignorable"). If the missing data are presumed to be nonignorable, a sensitivity analysis is recommended.
Savičiūtė, Eglė; Ambridge, Ben; Pine, Julian M
2018-05-01
Four- and five-year-old children took part in an elicited familiar and novel Lithuanian noun production task to test predictions of input-based accounts of the acquisition of inflectional morphology. Two major findings emerged. First, as predicted by input-based accounts, correct production rates were correlated with the input frequency of the target form, and with the phonological neighbourhood density of the noun. Second, the error patterns were not compatible with the systematic substitution of target forms by either (a) the most frequent form of that noun or (b) a single morphosyntactic default form, as might be predicted by naive versions of a constructivist and generativist account, respectively. Rather, most errors reflected near-miss substitutions of singular for plural, masculine for feminine, or nominative/accusative for a less frequent case. Together, these findings provide support for an input-based approach to morphological acquisition, but are not adequately explained by any single account in its current form.
Alternative stitching method for massively parallel e-beam lithography
NASA Astrophysics Data System (ADS)
Brandt, Pieter; Tranquillin, Céline; Wieland, Marco; Bayle, Sébastien; Milléquant, Matthieu; Renault, Guillaume
2015-03-01
In this study a novel stitching method, distinct from Soft Edge (SE) and Smart Boundary (SB), is introduced and benchmarked against SE. The method is based on locally enhanced Exposure Latitude at no cost in throughput, exploiting the fact that the two beams that pass through the stitching region can together deposit up to 2x the nominal dose. The method requires a complex Proximity Effect Correction that takes a preset stitching dose profile into account. On a Metal clip at a minimum half-pitch of 32 nm for MAPPER FLX 1200 tool specifications, the novel stitching method effectively mitigates Beam to Beam (B2B) position errors such that they do not induce an increase in CD Uniformity (CDU); in other words, the same CDU can be realized inside the stitching region as outside it. For the SE method, the CDU inside is 0.3 nm higher than outside the stitching region. The 5 nm direct overlay impact from B2B position errors cannot be reduced by a stitching strategy.
Screening actuator locations for static shape control
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.
1990-01-01
Correction of shape distortion due to zero-mean, normally distributed random errors in structural sizes is examined. A bound on the maximum improvement in the expected value of the root-mean-square shape error is obtained. The shape correction associated with the optimal actuators is also characterized. An actuator effectiveness index is developed and shown to be helpful in screening actuator locations in the structure. The results are specialized to a simple form for truss structures composed of nominally identical members. The bound and effectiveness index are tested on a 55-m radiometer antenna truss structure. It is found that previously obtained results for optimum actuators had a performance close to the bound obtained here. Furthermore, the actuators associated with the optimum design are shown to have high effectiveness indices. Since only a small fraction of truss elements tend to have high effectiveness indices, the proposed screening procedure can greatly reduce the number of truss members that need to be considered as actuator sites.
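The computation underlying such screening can be illustrated in a few lines: given a matrix whose columns are actuator influence shapes, the best correction of a distortion in the RMS sense is a least-squares fit, and the residual RMS measures how much of the distortion a candidate actuator set can remove. All sizes and values below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_actuators = 50, 4

A = rng.normal(size=(n_nodes, n_actuators))  # columns: actuator influence shapes
d = rng.normal(size=n_nodes)                 # sampled random shape distortion

# Optimal actuator commands minimize ||d - A u||_2 (least squares).
u, *_ = np.linalg.lstsq(A, d, rcond=None)
residual = d - A @ u

rms = lambda v: float(np.sqrt(np.mean(v ** 2)))
print(f"RMS shape error before: {rms(d):.3f}, after correction: {rms(residual):.3f}")
```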
A comparison of exact tests for trend with binary endpoints using Bartholomew's statistic.
Consiglio, J D; Shan, G; Wilding, G E
2014-01-01
Tests for trend are important in a number of scientific fields when trends associated with binary variables are of interest. Implementing the standard Cochran-Armitage trend test requires an arbitrary choice of scores assigned to represent the grouping variable. Bartholomew proposed a test for qualitatively ordered samples using asymptotic critical values, but type I error control can be problematic in finite samples. To our knowledge, use of the exact probability distribution has not been explored, and we study its use in the present paper. Specifically we consider an approach based on conditioning on both sets of marginal totals and three unconditional approaches where only the marginal totals corresponding to the group sample sizes are treated as fixed. While slightly conservative, all four tests are guaranteed to have actual type I error rates below the nominal level. The unconditional tests are found to exhibit far less conservatism than the conditional test and thereby gain a power advantage.
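For reference, the Cochran-Armitage statistic mentioned above depends on an arbitrary choice of group scores; a minimal large-sample sketch (the counts and the equally spaced scores are made up) shows where that choice enters:

```python
import numpy as np
from scipy.stats import norm

def cochran_armitage(events, totals, scores):
    """Cochran-Armitage trend test, normal approximation, two-sided p-value."""
    events, totals, scores = map(np.asarray, (events, totals, scores))
    p = events.sum() / totals.sum()
    t = np.sum(scores * (events - totals * p))      # trend statistic
    sbar = np.sum(totals * scores) / totals.sum()
    var = p * (1 - p) * np.sum(totals * (scores - sbar) ** 2)
    z = t / np.sqrt(var)
    return z, 2 * norm.sf(abs(z))

# Three qualitatively ordered groups, scored 0, 1, 2 by fiat:
print(cochran_armitage([2, 5, 9], [30, 30, 30], [0, 1, 2]))
```

Bartholomew's approach avoids exactly this score assignment, which is what makes exact versions of his test attractive for qualitatively ordered samples.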
THE DiskMass SURVEY. III. STELLAR KINEMATICS VIA CROSS-CORRELATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Westfall, Kyle B.; Bershady, Matthew A.; Verheijen, Marc A. W., E-mail: westfall@astro.rug.nl, E-mail: mab@astro.wisc.edu, E-mail: verheyen@astro.rug.nl
2011-03-15
We describe a new cross-correlation (CC) approach used by our survey to derive stellar kinematics from galaxy-continuum spectroscopy. This approach adopts the formal error analysis derived by Statler, but properly handles spectral masks. Thus, we address the primary concerns regarding application of the CC method to censored data, while maintaining its primary advantage by consolidating kinematic and template-mismatch information toward different regions of the CC function. We identify a systematic error in the nominal CC method of approximately 10% in velocity dispersion incurred by a mistreatment of detector-censored data, which is eliminated by our new method. We derive our approach from first principles, and we use Monte Carlo simulations to demonstrate its efficacy. An identical set of Monte Carlo simulations performed using the well-established penalized-pixel-fitting code of Cappellari and Emsellem compares favorably with the results from our newly implemented software. Finally, we provide a practical demonstration of this software by extracting stellar kinematics from SparsePak spectra of UGC 6918.
Human Behaviour in Long-Term Missions
NASA Technical Reports Server (NTRS)
1997-01-01
In this session, Session WP1, the discussion focuses on the following topics: Psychological Support for International Space Station Mission; Psycho-social Training for Man in Space; Study of the Physiological Adaptation of the Crew During A 135-Day Space Simulation; Interpersonal Relationships in Space Simulation, The Long-Term Bed Rest in Head-Down Tilt Position; Psychological Adaptation in Groups of Varying Sizes and Environments; Deviance Among Expeditioners, Defining the Off-Nominal Act in Space and Polar Field Analogs; Getting Effective Sleep in the Space-Station Environment; Human Sleep and Circadian Rhythms are Altered During Spaceflight; and Methodological Approach to Study of Cosmonauts Errors and Its Instrumental Support.
Impact of deposition-rate fluctuations on thin-film thickness and uniformity
Oliver, Joli B.
2016-11-04
Variations in deposition rate are superimposed on a thin-film deposition model with planetary rotation to determine the impact on film thickness. Variations in the magnitude and frequency of the fluctuations relative to the speed of planetary revolution lead to thickness errors and uniformity variations of up to 3%. Sufficiently rapid oscillations in the deposition rate have a negligible impact, while slow oscillations are found to be problematic, leading to changes in the nominal film thickness. Finally, superimposing noise as random fluctuations in the deposition rate has a negligible impact, confirming the importance of any underlying harmonic oscillations in deposition rate or source operation.
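The qualitative behavior reported above can be reproduced with a toy model: integrate a deposition rate carrying a sinusoidal fluctuation over a fixed run and compare the accumulated thickness with the nominal value. The parameters are illustrative, and the planetary rotation geometry of the actual model is omitted.

```python
import numpy as np

def rel_thickness_error(amplitude, freq_hz, run_time_s=1000.0, rate=1.0):
    """Relative thickness error from a sinusoidal deposition-rate fluctuation."""
    t = np.linspace(0.0, run_time_s, 100_001)
    r = rate * (1.0 + amplitude * np.sin(2.0 * np.pi * freq_hz * t))
    dt = t[1] - t[0]
    thickness = np.sum((r[:-1] + r[1:]) * 0.5) * dt  # trapezoidal integration
    return thickness / (rate * run_time_s) - 1.0

# Rapid oscillations average out; slow ones shift the final thickness.
print(rel_thickness_error(0.03, 1.0))     # fast: essentially zero error
print(rel_thickness_error(0.03, 0.0005))  # slow (half a period): ~2% error
```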
Fuzzy Regulator Design for Wind Turbine Yaw Control
Koulouras, Grigorios
2014-01-01
This paper proposes the development of an advanced fuzzy logic controller which aims to perform intelligent automatic control of the yaw movement of wind turbines. The fuzzy controller takes into account both the wind velocity and the acceptable yaw error correlation in order to achieve maximum performance efficacy. In this way, the proposed yaw control system is remarkably adaptive to the existing conditions, enabling the wind turbine to retain its power output close to its nominal value while preserving its yaw system from pointless movement. Thorough simulation tests evaluate the effectiveness of the proposed system. PMID:24693237
The optimal power puzzle: scrutiny of the monotone likelihood ratio assumption in multiple testing.
Cao, Hongyuan; Sun, Wenguang; Kosorok, Michael R
2013-01-01
In single hypothesis testing, power is a non-decreasing function of type I error rate; hence it is desirable to test at the nominal level exactly to achieve optimal power. The puzzle lies in the fact that for multiple testing, under the false discovery rate paradigm, such a monotonic relationship may not hold. In particular, exact false discovery rate control may lead to a less powerful testing procedure if a test statistic fails to fulfil the monotone likelihood ratio condition. In this article, we identify different scenarios wherein the condition fails and give caveats for conducting multiple testing in practical settings.
A numerical fragment basis approach to SCF calculations.
NASA Astrophysics Data System (ADS)
Hinde, Robert J.
1997-11-01
The counterpoise method is often used to correct for basis set superposition error in calculations of the electronic structure of bimolecular systems. One drawback of this approach is the need to specify a "reference state" for the system; for reactive systems, the choice of an unambiguous reference state may be difficult. An example is the reaction F⁻ + HCl → HF + Cl⁻. Two obvious reference states for this reaction are F⁻ + HCl and HF + Cl⁻; however, different counterpoise-corrected interaction energies are obtained using these two reference states. We outline a method for performing SCF calculations which employs numerical basis functions; this method attempts to eliminate basis set superposition errors in an a priori fashion. We test the proposed method on two one-dimensional, three-center systems and discuss the possibility of extending our approach to include electron correlation effects.
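For context, the standard Boys-Bernardi counterpoise correction referred to above computes the interaction energy with every fragment evaluated in the full dimer basis (a textbook relation, not specific to this paper):

```latex
\[
E_{\mathrm{int}}^{\mathrm{CP}}
  = E_{AB}(AB~\text{basis}) - E_{A}(AB~\text{basis}) - E_{B}(AB~\text{basis})
\]
```

The ambiguity discussed in the abstract arises because, for a reactive system, the fragment partition {A, B} itself differs between the two candidate reference states, so the counterpoise-corrected interaction energy is not unique.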
MRAC Revisited: Guaranteed Performance with Reference Model Modification
NASA Technical Reports Server (NTRS)
Stepanyan, Vahram; Krishnakumar, Kalmaje
2010-01-01
This paper presents a modification of the conventional model reference adaptive control (MRAC) architecture in order to achieve guaranteed transient performance in both the output and input signals of an uncertain system. The proposed modification is based on feeding the tracking error back to the reference model. It is shown that the approach guarantees tracking of a given command and of the ideal control signal (the one that would be designed if the system were known) not only asymptotically but also in transient, by a proper selection of the error feedback gain. The method prevents the generation of high-frequency oscillations that are unavoidable in conventional MRAC systems for large adaptation rates. The provided design guideline makes it possible to track a reference command of any magnitude from any initial position without re-tuning. The benefits of the method are demonstrated in simulations.
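A scalar toy version of the idea (feeding the tracking error back into the reference model) can be sketched as follows. The plant, gains, and the gradient adaptation law are illustrative assumptions, not the paper's exact architecture.

```python
# Scalar MRAC sketch: plant xdot = a*x + b*u with unknown a. The reference
# model is modified with tracking-error feedback of gain ke (the idea above);
# all numeric values and the adaptation law are illustrative assumptions.
a_true, b = 2.0, 1.0          # plant parameters (a_true unknown to controller)
am, bm = -4.0, 4.0            # reference model: xm_dot = am*xm + bm*r + ke*e
ke, gamma = 10.0, 50.0        # error feedback gain and adaptation rate
dt, steps = 1e-3, 5000

x = xm = a_hat = 0.0
for _ in range(steps):
    r = 1.0                                   # step reference command
    e = x - xm                                # tracking error
    u = (-a_hat * x + am * x + bm * r) / b    # certainty-equivalence control
    x += dt * (a_true * x + b * u)
    xm += dt * (am * xm + bm * r + ke * e)    # modified reference model
    a_hat += dt * gamma * e * x               # gradient adaptation law

print(f"x = {x:.3f}, xm = {xm:.3f}, tracking error = {x - xm:.2e}")
```

With ke = 0 this reduces to conventional MRAC; the error feedback term is what allows the transient to be shaped by choosing ke rather than by raising the adaptation rate.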
Refinement of ground reference data with segmented image data
NASA Technical Reports Server (NTRS)
Robinson, Jon W.; Tilton, James C.
1991-01-01
One way to determine ground reference data (GRD) for satellite remote sensing data is to photo-interpret low-altitude aerial photographs, digitize the cover types on a digitizing tablet, and register them to 7.5-minute U.S.G.S. maps (that were themselves digitized). The resulting GRD can be registered to the satellite image or vice versa. Unfortunately, there are many opportunities for error when using a digitizing tablet, and the resolution of the edges of the GRD depends on the spacing of the points selected on the tablet. One consequence is that, when overlaid on the image, errors and missed detail in the GRD become evident. An approach is discussed for correcting these errors and adding detail to the GRD through the use of a highly interactive, visually oriented process. This process involves the use of overlaid visual displays of the satellite image data, the GRD, and a segmentation of the satellite image data. Several prototype programs were implemented which provide a means of taking a segmented image and using the edges from the reference data to mask out those segment edges that are beyond a certain distance from the reference data edges. Then, using the reference data edges as a guide, those segment edges that remain but are judged not to be image versions of the reference edges are manually marked and removed. The prototype programs that were developed and the algorithmic refinements that facilitate execution of this task are described.
Lunar Reconnaissance Orbiter Orbit Determination Accuracy Analysis
NASA Technical Reports Server (NTRS)
Slojkowski, Steven E.
2014-01-01
Results from operational OD produced by the NASA Goddard Flight Dynamics Facility for the LRO nominal and extended mission are presented. During the LRO nominal mission, when LRO flew in a low circular orbit, orbit determination requirements were met nearly 100% of the time. When the extended mission began, LRO returned to a more elliptical frozen orbit where gravity and other modeling errors caused numerous violations of mission accuracy requirements. Prediction accuracy is particularly challenged during periods when LRO is in full-Sun. A series of improvements to LRO orbit determination are presented, including implementation of new lunar gravity models, improved spacecraft solar radiation pressure modeling using a dynamic multi-plate area model, a shorter orbit determination arc length, and a constrained plane method for estimation. The analysis presented in this paper shows that updated lunar gravity models improved accuracy in the frozen orbit, and a multi-plate dynamic area model improves prediction accuracy during full-Sun orbit periods. Implementation of a 36-hour tracking data arc and plane constraints during edge-on orbit geometry also provide benefits. A comparison of the operational solutions to precision orbit determination solutions shows agreement on a 100- to 250-meter level in definitive accuracy.
Evaluation of a scale-model experiment to investigate long-range acoustic propagation
NASA Technical Reports Server (NTRS)
Parrott, Tony L.; Mcaninch, Gerry L.; Carlberg, Ingrid A.
1987-01-01
Tests were conducted to evaluate the feasibility of using a scale-model experiment situated in an anechoic facility to investigate long-range sound propagation over ground terrain. For a nominal scale factor of 100:1, attenuations along a linear array of six microphones collinear with a continuous-wave type of sound source were measured over a wavelength range from 10 to 160 for a nominal test frequency of 10 kHz. Most tests were made for a hard model surface (plywood), but limited tests were also made for a soft model surface (plywood with felt). For grazing-incidence propagation over the hard surface, measured and predicted attenuation trends were consistent for microphone locations out to between 40 and 80 wavelengths. Beyond 80 wavelengths, significant variability was observed that was caused by disturbances in the propagation medium. Also, there was evidence of extraneous propagation-path contributions to data irregularities at the more remote microphones. Sensitivity studies for the hard-surface configuration indicated a 2.5 dB change in the relative excess attenuation for a systematic error in source and microphone elevations on the order of 1 mm. For the soft-surface model, no comparable sensitivity was found.
A Robust Parameterization of Human Gait Patterns Across Phase-Shifting Perturbations
Villarreal, Dario J.; Poonawala, Hasan A.; Gregg, Robert D.
2016-01-01
The phase of human gait is difficult to quantify accurately in the presence of disturbances. In contrast, recent bipedal robots use time-independent controllers relying on a mechanical phase variable to synchronize joint patterns through the gait cycle. This concept has inspired studies to determine if human joint patterns can also be parameterized by a mechanical variable. Although many phase variable candidates have been proposed, it remains unclear which, if any, provide a robust representation of phase for human gait analysis or control. In this paper we analytically derive an ideal phase variable (the hip phase angle) that is provably monotonic and bounded throughout the gait cycle. To examine the robustness of this phase variable, ten able-bodied human subjects walked over a platform that randomly applied phase-shifting perturbations to the stance leg. A statistical analysis found the correlations between nominal and perturbed joint trajectories to be significantly greater when parameterized by the hip phase angle (0.95+) than by time or a different phase variable. The hip phase angle also best parameterized the transient errors about the nominal periodic orbit. Finally, interlimb phasing was best explained by local (ipsilateral) hip phase angles that are synchronized during the double-support period. PMID:27187967
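One common way to construct such a phase variable (consistent with the description above, though the normalization and names here are illustrative) is the angle of the hip trajectory in its phase portrait, with position on one axis and scaled velocity on the other:

```python
import numpy as np

def hip_phase_angle(q, qdot):
    """Monotonic phase angle from the hip's phase portrait (radians).

    q: hip position samples over the gait cycle; qdot: hip velocity samples.
    The velocity axis is scaled so the orbit is roughly circular (a common
    normalization choice; the paper's exact construction may differ).
    """
    q, qdot = np.asarray(q, dtype=float), np.asarray(qdot, dtype=float)
    scale = (q.max() - q.min()) / (qdot.max() - qdot.min())
    return np.unwrap(np.arctan2(-scale * qdot, q))

# Synthetic sinusoidal hip trajectory standing in for one gait cycle:
t = np.linspace(0.0, 1.0, 200)
q = 0.3 * np.sin(2 * np.pi * t)
qdot = 0.3 * 2 * np.pi * np.cos(2 * np.pi * t)
phase = hip_phase_angle(q, qdot)
print(phase[-1] - phase[0])  # advances by ~2*pi over the cycle
```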
Sensitivity of grass and alfalfa reference evapotranspiration to weather station sensor accuracy
USDA-ARS?s Scientific Manuscript database
A sensitivity analysis was conducted to determine the relative effects of measurement errors in climate data input parameters on the accuracy of calculated reference crop evapotranspiration (ET) using the ASCE-EWRI Standardized Reference ET Equation. Data for the period of 1991 to 2008 from an autom...
Improved method for implicit Monte Carlo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, F. B.; Martin, W. R.
2001-01-01
The Implicit Monte Carlo (IMC) method has been used for over 30 years to analyze radiative transfer problems, such as those encountered in stellar atmospheres or inertial confinement fusion. Reference [2] provided an exact error analysis of IMC for 0-D problems and demonstrated that IMC can exhibit substantial errors when timesteps are large. These temporal errors are inherent in the method and are in addition to spatial discretization errors and approximations that address nonlinearities (due to variation of physical constants). In Reference [3], IMC and four other methods were analyzed in detail and compared on both theoretical grounds and the accuracy of numerical tests. As discussed there, two alternative schemes for solving the radiative transfer equations, the Carter-Forest (C-F) method and the Ahrens-Larsen (A-L) method, do not exhibit the errors found in IMC; for 0-D, both of these methods are exact for all time, while for 3-D, A-L is exact for all time and C-F is exact within a timestep. These methods can yield substantially superior results to IMC.
NASA Astrophysics Data System (ADS)
Sun, Dongliang; Huang, Guangtuan; Jiang, Juncheng; Zhang, Mingguang; Wang, Zhirong
2013-04-01
Overpressure is an important cause of the domino effect in accidents involving chemical process equipment. Models for the propagation probability and threshold values of the overpressure-induced domino effect have been proposed in previous studies. In order to test the rationality and validity of the models reported in the references, the two boundary values separating the three reported damage degrees were each treated as random variables in the interval [0, 100%]. Based on the overpressure data for equipment damage and the damage states, and on the calculation method reported in the references, the mean square errors of the four categories of overpressure damage probability models were calculated with random boundary values. This yielded a relationship between the mean square error and the two boundary values, from which the minimum mean square error was obtained; compared with the result of the present work, the mean square error decreases by about 3%. The error is therefore within the acceptable range for engineering applications, and the reported models can be considered reasonable and valid.
Talbot, Clifford B; Lagarto, João; Warren, Sean; Neil, Mark A A; French, Paul M W; Dunsby, Chris
2015-09-01
A correction is proposed to the Delta function convolution method (DFCM) for fitting a multiexponential decay model to time-resolved fluorescence decay data using a monoexponential reference fluorophore. A theoretical analysis of the discretised DFCM multiexponential decay function shows the presence of an extra exponential decay term with the same lifetime as the reference fluorophore, which we denote the residual reference component. This extra decay component arises as a result of the discretised convolution of one of the two terms in the modified model function required by the DFCM. The effect of the residual reference component becomes more pronounced when the fluorescence lifetime of the reference is longer than all of the individual components of the specimen under inspection and when the temporal sampling interval is not negligible compared to the quantity (τR⁻¹ − τ⁻¹)⁻¹, where τR and τ are the fluorescence lifetimes of the reference and the specimen, respectively. It is shown that the unwanted residual reference component results in systematic errors when fitting simulated data and that these errors are not present when the proposed correction is applied. The correction is also verified using real data obtained from experiment.
Prediction of the reference evapotranspiration using a chaotic approach.
Wang, Wei-guang; Zou, Shan; Luo, Zhao-hui; Zhang, Wei; Chen, Dan; Kong, Jun
2014-01-01
Evapotranspiration is one of the most important hydrological variables in the context of water resources management. An attempt was made in this study to understand and predict the dynamics of reference evapotranspiration from a nonlinear dynamical perspective. The reference evapotranspiration data were calculated using the FAO Penman-Monteith equation with the observed daily meteorological data for the period 1966-2005 at four meteorological stations (i.e., Baotou, Zhangbei, Kaifeng, and Shaoguan) representing a wide range of climatic conditions of China. The correlation dimension method was employed to investigate the chaotic behavior of the reference evapotranspiration series. The existence of chaos in the reference evapotranspiration series at the four different locations was proved by the finite and low correlation dimension. A local approximation approach was employed to forecast the daily reference evapotranspiration series. Low root mean square error (RMSE) and mean absolute error (MAE) (for all locations lower than 0.31 and 0.24, resp.), and high correlation coefficient (CC) and modified coefficient of efficiency (for all locations larger than 0.97 and 0.8, resp.) indicate that the predicted reference evapotranspiration agrees well with the observed one. The encouraging results indicate the suitability of the chaotic approach for understanding and predicting the dynamics of the reference evapotranspiration.
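The local approximation forecast used above is, in its simplest (zeroth-order) form, a nearest-neighbour predictor in a delay-embedded phase space. In this sketch the embedding dimension, delay, neighbour count, and the synthetic stand-in series are all illustrative choices:

```python
import numpy as np

def local_approx_forecast(series, m=3, tau=1, k=5):
    """One-step forecast: average the successors of the k nearest
    delay-embedding neighbours of the current state."""
    x = np.asarray(series, dtype=float)
    idx = np.arange((m - 1) * tau, len(x))
    states = np.stack([x[idx - j * tau] for j in range(m)], axis=1)
    current = states[-1]                        # state at the last time point
    cand, succ = states[:-1], x[idx[:-1] + 1]   # states with known successors
    nearest = np.argsort(np.linalg.norm(cand - current, axis=1))[:k]
    return float(np.mean(succ[nearest]))

# Noisy seasonal signal standing in for a daily reference ET series (mm/day):
t = np.arange(3000)
et = 3 + 2 * np.sin(2 * np.pi * t / 365) \
    + 0.1 * np.random.default_rng(1).normal(size=t.size)
print(local_approx_forecast(et))
```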
Technical editing of research reports in biomedical journals.
Wager, Elizabeth; Middleton, Philippa
2008-10-08
Most journals try to improve their articles by technical editing processes such as proof-reading, editing to conform to 'house styles', grammatical conventions and checking accuracy of cited references. Despite the considerable resources devoted to technical editing, we do not know whether it improves the accessibility of biomedical research findings or the utility of articles. This is an update of a Cochrane methodology review first published in 2003. To assess the effects of technical editing on research reports in peer-reviewed biomedical journals, and to assess the level of accuracy of references to these reports. We searched The Cochrane Library Issue 2, 2007; MEDLINE (last searched July 2006); EMBASE (last searched June 2007) and checked relevant articles for further references. We also searched the Internet and contacted researchers and experts in the field. Prospective or retrospective comparative studies of technical editing processes applied to original research articles in biomedical journals, as well as studies of reference accuracy. Two review authors independently assessed each study against the selection criteria and assessed the methodological quality of each study. One review author extracted the data, and the second review author repeated this. We located 32 studies addressing technical editing and 66 surveys of reference accuracy. Only three of the studies were randomised controlled trials. A 'package' of largely unspecified editorial processes applied between acceptance and publication was associated with improved readability in two studies and improved reporting quality in another two studies, while another study showed mixed results after stricter editorial policies were introduced. More intensive editorial processes were associated with fewer errors in abstracts and references. Providing instructions to authors was associated with improved reporting of ethics requirements in one study and fewer errors in references in two studies, but no difference was seen in the quality of abstracts in one randomised controlled trial. Structuring generally improved the quality of abstracts, but increased their length. The reference accuracy studies showed a median citation error rate of 38% and a median quotation error rate of 20%. Surprisingly few studies have evaluated the effects of technical editing rigorously. However there is some evidence that the 'package' of technical editing used by biomedical journals does improve papers. A substantial number of references in biomedical articles are cited or quoted inaccurately.
The Propagation of Solar Energetic Particles as Observed by the Stereo Spacecraft and Near Earth
NASA Astrophysics Data System (ADS)
von Rosenvinge, T. T.; Richardson, I. G.; Cane, H. V.; Christian, E. R.; Cummings, A. C.; Cohen, C. M.; Leske, R. A.; Mewaldt, R. A.; Stone, E. C.; Wiedenbeck, M. E.
2014-12-01
Over 200 Solar Energetic Particle (SEP) events with protons > 25 MeV have been identified using data from the IMPACT HET telescopes on the STEREO A and B spacecraft and similar data from SoHO near Earth. The properties of these events are tabulated in a recent publication in Solar Physics (Richardson, et al., 2014). One of the goals of the STEREO mission is to better understand the propagation of SEPs. The properties of events observed by multiple spacecraft on average are well-organized by the distance of the footpoints of the nominal Parker spiral magnetic field lines passing the observing spacecraft from the parent active regions. However, some events deviate significantly from this pattern. For example, in events observed by three spacecraft, the spacecraft with the best nominal connection does not necessarily observe the highest intensity or earliest particle arrival time. We will search for such events and try to relate their behavior to non-nominal magnetic field patterns. We will look, for example, for the effects of the interplanetary current sheet, the influence of magnetic clouds which are thought to contain large magnetic loops with both ends connected to the sun (a large departure from the Parker spiral), and also whether particle propagation can be disrupted by the presence of interplanetary shocks. Reference: Richardson et al., Solar Phys. 289, 3059, 2014
Alcohol perceptions and behavior in a residential peer social network.
Kenney, Shannon R; Ott, Miles; Meisel, Matthew K; Barnett, Nancy P
2017-01-01
Personalized normative feedback is a recommended component of alcohol interventions targeting college students. However, normative data are commonly collected through campus-based surveys, not through actual participant-referent relationships. In the present investigation, we examined how misperceptions of residence hall peers, both overall using a global question and those designated as important peers using person-specific questions, were related to students' personal drinking behaviors. Participants were 108 students (88% freshman, 54% White, 51% female) residing in a single campus residence hall. Participants completed an online baseline survey in which they reported their own alcohol use and perceptions of peer alcohol use using both an individual peer network measure and a global peer perception measure of their residential peers. We employed network autocorrelation models, which account for the inherent correlation between observations, to test hypotheses. Overall, participants accurately perceived the drinking of nominated friends but overestimated the drinking of residential peers. Consistent with hypotheses, overestimating nominated friend and global residential peer drinking predicted higher personal drinking, although perception of nominated peers was a stronger predictor. Interaction analyses showed that the relationship between global misperception and participant self-reported drinking was significant for heavy drinkers, but not non-heavy drinkers. The current findings explicate how student perceptions of peer drinking within an established social network influence drinking behaviors, which may be used to enhance the effectiveness of normative feedback interventions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wojahn, Christopher K.
2015-10-20
This HDL code (hereafter referred to as "software") implements circuitry in Xilinx Virtex-5QV Field Programmable Gate Array (FPGA) hardware. This software allows the device to self-check the consistency of its own configuration memory for radiation-induced errors. The software then provides the capability to correct any single-bit errors detected in the memory using the device's inherent circuitry, or reload corrupted memory frames when larger errors occur that cannot be corrected with the device's built-in error correction and detection scheme.
A Quatro-Based 65-nm Flip-Flop Circuit for Soft-Error Resilience
NASA Astrophysics Data System (ADS)
Li, Y.-Q.; Wang, H.-B.; Liu, R.; Chen, L.; Nofal, I.; Shi, S.-T.; He, A.-L.; Guo, G.; Baeg, S. H.; Wen, S.-J.; Wong, R.; Chen, M.; Wu, Q.
2017-06-01
A flip-flop circuit hardened against soft errors is presented in this paper. This design is an improved version of Quatro, with further enhanced soft-error resilience obtained by integrating the guard-gate technique. The proposed design, as well as a reference Quatro and a regular flip-flop, was implemented and manufactured in a 65-nm bulk CMOS technology. Experimental characterization of their alpha-particle and heavy-ion soft-error rates verified the superior hardening performance of the proposed design over the other two circuits.
Cha, Dong Ik; Lee, Min Woo; Kang, Tae Wook; Oh, Young-Taek; Jeong, Ja-Yeon; Chang, Jung-Woo; Ryu, Jiwon; Lee, Kyong Joon; Kim, Jaeil; Bang, Won-Chul; Shin, Dong Kuk; Choi, Sung Jin; Koh, Dalkwon; Kim, Kyunga
2017-10-01
To identify the more accurate reference data set for fusion imaging-guided radiofrequency ablation or biopsy of hepatic lesions between computed tomography (CT) and magnetic resonance (MR) images. This study was approved by the institutional review board, and written informed consent was obtained from all patients. Twelve consecutive patients who were referred to assess the feasibility of radiofrequency ablation or biopsy were enrolled. Automatic registration using CT and MR images was performed in each patient. Registration errors during the optimal and opposite respiratory phases, the time required for image fusion, and the number of point locks used were compared using the Wilcoxon signed-rank test. The registration errors during the optimal respiratory phase were not significantly different between image fusion using CT and MR images as reference data sets (p = 0.969). During the opposite respiratory phase, the registration error was smaller with MR images than with CT (p = 0.028). The time and the number of point locks needed for complete image fusion were not significantly different between CT and MR images (p = 0.328 and p = 0.317, respectively). MR images would therefore be more suitable than CT images as the reference data set for fusion imaging-guided procedures of focal hepatic lesions.
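The paired comparisons above rely on the Wilcoxon signed-rank test; a minimal sketch with hypothetical registration errors (mm, one pair per patient) shows the call involved:

```python
from scipy.stats import wilcoxon

# Hypothetical paired registration errors (mm) for 12 patients, with CT and
# then MR used as the reference data set (values are made up).
ct_err = [4.2, 3.8, 5.1, 4.9, 6.0, 3.5, 4.4, 5.7, 4.1, 5.3, 4.8, 3.9]
mr_err = [3.9, 3.6, 4.2, 4.8, 5.1, 3.6, 4.0, 4.9, 4.2, 4.7, 4.1, 3.8]

stat, p = wilcoxon(ct_err, mr_err)
print(f"Wilcoxon statistic = {stat}, p = {p:.3f}")
```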
An explanatory heuristic gives rise to the belief that words are well suited for their referents.
Sutherland, Shelbie L; Cimpian, Andrei
2015-10-01
The mappings between the words of a language and their meanings are arbitrary. There is, for example, nothing inherently dog-like about the word dog. And yet, building on prior evidence (e.g., Brook, 1970; Piaget, 1967), the six studies reported here (N=1062) suggest that both children and (at least to some extent) adults see a special "fit" between objects and their names, as if names were particularly suitable or appropriate for the objects they denote. These studies also provide evidence for a novel proposal concerning the source of these nominal fit beliefs. Specifically, beliefs about nominal fit may be a byproduct of the heuristic processes that people use to make sense of the world more generally (Cimpian & Salomon, 2014a). In sum, the present studies provide new insights into how people conceive of language and demonstrate that these conceptions are rooted in the processes that underlie broader explanatory reasoning.
Transmission Loss Calculation using A and B Loss Coefficients in Dynamic Economic Dispatch Problem
NASA Astrophysics Data System (ADS)
Jethmalani, C. H. Ram; Dumpa, Poornima; Simon, Sishaj P.; Sundareswaran, K.
2016-04-01
This paper analyzes the performance of A loss coefficients for evaluating transmission losses in a Dynamic Economic Dispatch (DED) problem. The performance analysis is carried out by comparing the losses computed using nominal A loss coefficients and nominal B loss coefficients against the load flow solution obtained by the standard Newton-Raphson (NR) method. A density-based clustering method based on connected regions with sufficiently high density (DBSCAN) is employed to identify the best regions of the A and B loss coefficients. Based on the results obtained through cluster analysis, a novel approach to improving the accuracy of network loss calculation is proposed: based on the change in per-unit load values between load intervals, the loss coefficients are updated for calculating the transmission losses. The proposed algorithm is tested and validated on the IEEE 6-bus, IEEE 14-bus, IEEE 30-bus, and IEEE 118-bus systems. All simulations are carried out using SCILAB 5.4 (www.scilab.org), an open-source software package.
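For context, the classical B-coefficient (Kron) loss formula evaluates network losses as a quadratic form in the generator outputs; the sketch below uses made-up coefficients, not values from any IEEE test system:

```python
import numpy as np

# Kron's loss formula: P_loss = P^T B P + B0^T P + B00.
# Coefficient values are illustrative only.
B = np.array([[4.9e-5, 1.4e-5],
              [1.4e-5, 4.5e-5]])
B0 = np.array([2.18e-4, -1.03e-4])
B00 = 3.06e-4

def transmission_loss(P):
    """Network loss for generator outputs P (per unit) via loss coefficients."""
    P = np.asarray(P, dtype=float)
    return float(P @ B @ P + B0 @ P + B00)

print(transmission_loss([1.0, 1.5]))  # loss at one dispatch point
```

Updating the coefficients between load intervals, as proposed above, amounts to recomputing B (or A) around the new operating point so that the quadratic approximation stays close to the NR load flow losses.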
The Language of Scholarship: How to Rapidly Locate and Avoid Common APA Errors.
Freysteinson, Wyona M; Krepper, Rebecca; Mellott, Susan
2015-10-01
This article is relevant for nurses and nursing students who are writing scholarly documents for work, school, or publication and who have a basic understanding of American Psychological Association (APA) style. Common APA errors on the reference list and in citations within the text are reviewed. Methods to quickly find and reduce those errors are shared.
Errors in MR-based attenuation correction for brain imaging with PET/MR scanners
NASA Astrophysics Data System (ADS)
Rota Kops, Elena; Herzog, Hans
2013-02-01
Aim: Attenuation correction of PET data acquired by hybrid MR/PET scanners remains a challenge, even if several methods for brain and whole-body measurements have been developed recently. A template-based attenuation correction for brain imaging proposed by our group is easy to handle and delivers reliable attenuation maps in a short time. However, some potential error sources are analyzed in this study. We investigated the choice of the template reference head among all the available data (error A) and possible skull anomalies of the specific patient, such as discontinuities due to surgery (error B). Materials and methods: An anatomical MR measurement and a 2-bed-position transmission scan covering the whole head and neck region were performed in eight normal subjects (4 females, 4 males). Error A: Taking each of the eight heads in turn as reference, eight different templates were created by nonlinearly registering the images to the reference and calculating the average. Eight patients (4 females, 4 males; 4 with brain lesions, 4 without brain lesions) were measured in the Siemens BrainPET/MR scanner. The eight templates were used to generate the patients' attenuation maps required for reconstruction. ROI and VOI atlas-based comparisons were performed employing all the reconstructed images. Error B: CT-based attenuation maps of two volunteers were manipulated by manually inserting several skull lesions and filling a nasal cavity. The corresponding attenuation coefficients were substituted with that of water (0.096/cm). Results: Error A: The mean SUVs over the eight template pairs for all eight patients and all VOIs did not differ significantly from one another. Standard deviations up to 1.24% were found. Error B: After reconstruction of the volunteers' BrainPET data with the CT-based attenuation maps without and with skull anomalies, a VOI-atlas analysis was performed, revealing very little influence of the skull lesions (less than 3%), while the filled nasal cavity yielded an overestimation in the cerebellum of up to 5%. Conclusions: The present error analysis confirms that our template-based attenuation method provides reliable attenuation corrections for PET brain imaging measured in PET/MR scanners.
NASA Technical Reports Server (NTRS)
Garner, H. D. (Inventor)
1977-01-01
This invention employs a magnetometer as a magnetic heading reference for a vehicle such as a small aircraft. The magnetometer is mounted on a directional dial in the aircraft in the vicinity of the pilot such that it is free to turn with the dial about the yaw axis of the aircraft. The invention includes a circuit for generating a signal proportional to the northerly turning error produced in the magnetometer due to the vertical component of the earth's magnetic field. This generated signal is then subtracted from the output of the magnetometer to compensate for the northerly turning error.
Erratum: "Discovery of a Second Millisecond Accreting Pulsar: XTE J1751-305"
NASA Technical Reports Server (NTRS)
Markwardt, Craig; Swank, J. H.; Strohmayer, T. E.; in 't Zand, J. J. M.; Marshall, F. E.
2007-01-01
The original Table 1 ("Timing Parameters of XTE J1751-305") contains one error. The epoch of pulsar mean longitude 90° is incorrect due to a numerical conversion error in the preparation of the original table text. A corrected version of Table 1 is shown. For reference, the epoch of the ascending node is also included. The correct value was used in all of the analysis leading up to the paper. As T_90 is a purely fiducial reference time, the scientific conclusions of the paper are unchanged.
Constructivism, Factoring, and Beliefs.
ERIC Educational Resources Information Center
Rauff, James V.
1994-01-01
Discusses errors made by remedial intermediate algebra students in factoring polynomials in light of student definitions of factoring. Found certain beliefs about factoring to logically imply many of the errors made. Suggests that belief-based teaching can be successful in teaching factoring. (16 references) (Author/MKR)
Giese, Sven H; Zickmann, Franziska; Renard, Bernhard Y
2014-01-01
Accurate estimation, comparison and evaluation of read mapping error rates is a crucial step in the processing of next-generation sequencing data, as further analysis steps and interpretation assume the correctness of the mapping results. Current approaches are either focused on sensitivity estimation and thereby disregard specificity or are based on read simulations. Although continuously improving, read simulations are still prone to introduce a bias into the mapping error quantitation and cannot capture all characteristics of an individual dataset. We introduce ARDEN (artificial reference driven estimation of false positives in next-generation sequencing data), a novel benchmark method that estimates error rates of read mappers based on real experimental reads, using an additionally generated artificial reference genome. It allows a dataset-specific computation of error rates and the construction of a receiver operating characteristic curve. Thereby, it can be used for optimization of parameters for read mappers, selection of read mappers for a specific problem or for filtering alignments based on quality estimation. The use of ARDEN is demonstrated in a general read mapper comparison, a parameter optimization for one read mapper and an application example in single-nucleotide polymorphism discovery with a significant reduction in the number of false positive identifications. The ARDEN source code is freely available at http://sourceforge.net/projects/arden/.
Two-body potential model based on cosine series expansion for ionic materials
Oda, Takuji; Weber, William J.; Tanigawa, Hisashi
2015-09-23
We examine a method for constructing a two-body potential model for ionic materials from a Fourier (cosine) series basis. In this method, the coefficients of the cosine basis functions are uniquely determined by solving the simultaneous linear equations that minimize the sum of weighted mean square errors in energy, force and stress, with first-principles calculation results used as the reference data. As a validation test of the method, potential models for magnesium oxide are constructed. The mean square errors converge appropriately with respect to the truncation of the cosine series. This result mathematically indicates that the constructed potential model is sufficiently close to the one that would be achieved with the non-truncated Fourier series and demonstrates that this potential provides virtually the minimum error from the reference data within the two-body representation. The constructed potential models work appropriately in both molecular statics and dynamics simulations, especially if a two-step correction to revise errors expected in the reference data is performed, and the models clearly outperform the two existing Buckingham potential models that were tested. Moreover, the good agreement over a broad range of energies and forces with first-principles calculations should enable the prediction of materials behavior away from equilibrium conditions, such as a system under irradiation.
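The coefficient determination described above is a weighted linear least-squares problem. The one-dimensional sketch below fits cosine coefficients to synthetic pair energies only; the basis size, cutoff, and "reference" data are illustrative, and a real fit would also include forces and stresses with their own weights.

```python
import numpy as np

rng = np.random.default_rng(0)
n_basis, r_cut = 8, 6.0

# Synthetic "reference" pair energies at sampled distances (a stand-in for
# first-principles data).
r = np.linspace(1.5, r_cut, 200)
e_ref = 1.0 / r**4 - 0.5 / r**2 + 0.01 * rng.normal(size=r.size)

# Design matrix of cosine basis functions on [0, r_cut].
k = np.arange(n_basis)
Phi = np.cos(np.pi * np.outer(r / r_cut, k))

# Weighted least squares (uniform weights here) for the basis coefficients.
w = np.ones_like(r)
coef, *_ = np.linalg.lstsq(Phi * w[:, None], e_ref * w, rcond=None)
print("fitted cosine coefficients:", np.round(coef, 4))
```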
Tomar, Dheeraj S; Weber, Valéry; Pettitt, B Montgomery; Asthagiri, D
2014-04-17
The hydration thermodynamics of the amino acid X relative to the reference G (glycine) or the hydration thermodynamics of a small-molecule analog of the side chain of X is often used to model the contribution of X to protein stability and solution thermodynamics. We consider the reasons for successes and limitations of this approach by calculating and comparing the conditional excess free energy, enthalpy, and entropy of hydration of the isoleucine side chain in zwitterionic isoleucine, in extended penta-peptides, and in helical deca-peptides. Butane in gauche conformation serves as a small-molecule analog for the isoleucine side chain. Parsing the hydrophobic and hydrophilic contributions to hydration for the side chain shows that both of these aspects of hydration are context-sensitive. Furthermore, analyzing the solute-solvent interaction contribution to the conditional excess enthalpy of the side chain shows that what is nominally considered a property of the side chain includes entirely nonobvious contributions of the background. The context-sensitivity of hydrophobic and hydrophilic hydration and the conflation of background contributions with energetics attributed to the side chain limit the ability of a single scaling factor, such as the fractional solvent exposure of the group in the protein, to map the component energetic contributions of the model-compound data to their value in the protein. But ignoring the origin of cancellations in the underlying components, the group-transfer model may appear to provide a reasonable estimate of the free energy for a given error tolerance.
Ellenberger, David; Friede, Tim
2016-08-05
Methods for change point (also sometimes referred to as threshold or breakpoint) detection in binary sequences are not new and were introduced as early as 1955. Much of the research in this area has focused on asymptotic and exact conditional methods. Here we develop an exact unconditional test, which treats the total number of events as random instead of conditioning on the number of observed events. The new test is shown to be uniformly more powerful than Worsley's exact conditional test, and means for its efficient numerical calculation are given. Adaptations of methods by Berger and Boos are made to deal with the issue that the unknown event probability imposes a nuisance parameter. The methods are compared in a Monte Carlo simulation study and applied to a cohort of patients undergoing traumatic orthopaedic surgery involving external fixators, where a change in pin site infections is investigated. The unconditional test controls the type I error rate at the nominal level and is uniformly more powerful than (or, to be more precise, uniformly at least as powerful as) Worsley's exact conditional test, which is very conservative for small sample sizes. In the application, a beneficial effect associated with the introduction of a new treatment procedure for pin site care could be revealed. We consider the new test an effective and easy-to-use exact test which is recommended for small-sample change point problems in binary sequences.
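The statistic underlying such tests can be sketched as a scan over all split points of a 0/1 sequence, maximizing a Bernoulli likelihood ratio. This toy version computes only the statistic and the estimated change point; it implements neither Worsley's exact conditional p-value nor the new unconditional one, and the example data are made up.

```python
import numpy as np

def scan_change_point(x):
    """Max log-likelihood-ratio statistic over all split points of a 0/1 sequence."""
    x = np.asarray(x, dtype=float)
    n = len(x)

    def loglik(s, m):
        """Bernoulli log-likelihood of s events in m trials at the MLE rate."""
        p = s / m
        if p == 0.0 or p == 1.0:
            return 0.0
        return s * np.log(p) + (m - s) * np.log(1.0 - p)

    ll0 = loglik(x.sum(), n)  # no-change model
    lr = [loglik(x[:k].sum(), k) + loglik(x[k:].sum(), n - k) - ll0
          for k in range(1, n)]
    k_hat = int(np.argmax(lr)) + 1
    return 2.0 * max(lr), k_hat

# Infection indicator per consecutive patient, with an apparent drop mid-series:
x = [1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0]
print(scan_change_point(x))
```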
Determining motor inertia of a stress-controlled rheometer.
Klemuk, Sarah A; Titze, Ingo R
2009-01-01
Viscoelastic measurements made with a stress-controlled rheometer are affected by system inertia. Of all contributors to system inertia, motor inertia is the largest. Its value is usually determined empirically, and its precision is rarely if ever specified. Inertia uncertainty has negligible effects on rheologic measurements below the coupled motor/plate/sample resonant frequency. But above the resonant frequency, G' values of soft viscoelastic materials such as dispersions, gels, biomaterials, and non-Newtonian polymers err quadratically due to inertia uncertainty. In the present investigation, valid rheologic measurements were achieved near and above the coupled resonant frequency for a non-Newtonian reference material. At these elevated frequencies, accuracy in motor inertia is critical. Here we compare two methods for determining motor inertia accurately. For the first (commercially used) phase method, frequency responses of standard fluids were measured. The phase between G' and G" was analyzed at 5-70 Hz for motor inertia values of 50-150% of the manufacturer's nominal value. For a newly devised two-plate method (10 mm and 60 mm parallel plates), dynamic measurements of a non-Newtonian standard were collected. Using a linear equation of motion with inertia, viscosity, and elasticity coefficients, the G' expressions for the two plates were equated and the motor inertia was determined to be accurate (by comparison to the phase method) with a precision of ± 3%. The newly developed two-plate method has the advantages of expressly eliminating dependence on gap, being explicitly derived from basic principles, and quantifying the error, and it requires fewer experiments than the commercially used phase method.