NASA Astrophysics Data System (ADS)
Fulbright, Jon; Anderson, Samuel; Lei, Ning; Efremova, Boryana; Wang, Zhipeng; McIntire, Jeffrey; Chiang, Kwofu; Xiong, Xiaoxiong
2014-11-01
Due to a software error, the solar and lunar vectors reported in the on-board calibrator intermediate product (OBC-IP) files for SNPP VIIRS are incorrect. The magnitude of the error is about 0.2 degrees and is increasing by about 0.01 degrees per year. This error, although small, affects the radiometric calibration of the reflective solar bands (RSB) because accurate solar angles are required for calculating the screen transmission functions and the illumination of the Solar Diffuser (SD) panel. In this paper, we describe the error in the Common GEO code and how it may be fixed. We present evidence for the error from within the OBC-IP data. We also describe the effects of the solar vector error on the RSB calibration and the Sensor Data Record (SDR). To perform this evaluation, we reanalyzed the yaw-maneuver data to compute the vignetting functions required for the on-orbit SD RSB radiometric calibration. After the reanalysis, we find an effect of up to 0.5% on the shortwave infrared (SWIR) RSB calibration.
Amiralizadeh, Siamak; Nguyen, An T; Rusch, Leslie A
2013-08-26
We investigate the performance of digital filter back-propagation (DFBP) using coarse parameter estimation for mitigating SOA nonlinearity in coherent communication systems. We introduce a simple, low-overhead method of parameter estimation for DFBP that uses error vector magnitude (EVM) as a figure of merit. The bit error rate (BER) achieved with this method incurs a negligible penalty compared with DFBP using fine parameter estimation. We examine different bias currents for two commercial SOAs used as booster amplifiers in our experiments to find optimum operating points and experimentally validate our method. The coarse-parameter DFBP efficiently compensates SOA-induced nonlinearity for both SOA types in 80 km propagation of a 16-QAM signal at 22 Gbaud.
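As a concrete illustration of the figure of merit used above, here is a minimal sketch (not the authors' code) of how RMS EVM is commonly computed from received and reference constellation symbols; note that normalization conventions (RMS versus peak reference power) vary between standards.

```python
import numpy as np

def evm_rms(received, reference):
    """RMS error vector magnitude, normalized to RMS reference power.

    received, reference: complex arrays of constellation symbols.
    """
    error = received - reference
    return np.sqrt(np.mean(np.abs(error) ** 2) / np.mean(np.abs(reference) ** 2))

# Example: ideal QPSK symbols with additive complex Gaussian noise
rng = np.random.default_rng(0)
bits = rng.integers(0, 4, 10_000)
ideal = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))   # unit-power QPSK
noisy = ideal + 0.05 * (rng.standard_normal(10_000) + 1j * rng.standard_normal(10_000))
print(f"EVM = {100 * evm_rms(noisy, ideal):.1f}% rms")   # ~7.1% for this noise level
```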
Aerogel Antennas Communications Study Using Error Vector Magnitude Measurements
NASA Technical Reports Server (NTRS)
Miranda, Felix A.; Mueller, Carl H.; Meador, Mary Ann B.
2014-01-01
This presentation discusses an aerogel antennas communication study using error vector magnitude (EVM) measurements. The study was performed using 2x4 element polyimide (PI) aerogel-based phased arrays designed for operation at 5 GHz as transmit (Tx) and receive (Rx) antennas separated by a line-of-sight (LOS) distance of 8.5 meters. The results of the EVM measurements demonstrate that polyimide aerogel antennas work appropriately to support digital communication links with typically used modulation schemes such as QPSK and pi/4 DQPSK. As such, PI aerogel antennas with higher gain, larger bandwidth, and lower mass than typically used microwave laminates could be suitable to enable aerospace-to-ground communication links with enough channel capacity to support voice, data, and video links from CubeSats, unmanned air vehicles (UAV), and commercial aircraft.
Aerogel Antennas Communications Study Using Error Vector Magnitude Measurements
NASA Technical Reports Server (NTRS)
Miranda, Felix A.; Mueller, Carl H.; Meador, Mary Ann B.
2014-01-01
This paper discusses an aerogel antennas communication study using error vector magnitude (EVM) measurements. The study was performed using 4x2 element polyimide (PI) aerogel-based phased arrays designed for operation at 5 GHz as transmit (Tx) and receive (Rx) antennas separated by a line-of-sight (LOS) distance of 8.5 meters. The results of the EVM measurements demonstrate that polyimide aerogel antennas work appropriately to support digital communication links with typically used modulation schemes such as QPSK and pi/4 DQPSK. As such, PI aerogel antennas with higher gain, larger bandwidth, and lower mass than typically used microwave laminates could be suitable to enable aerospace-to-ground communication links with enough channel capacity to support voice, data, and video links from CubeSats, unmanned air vehicles (UAV), and commercial aircraft.
Xi, Lei; Zhang, Chen; He, Yanling
2018-05-09
To evaluate the refractive and visual outcomes of transepithelial photorefractive keratectomy (TransPRK) in the treatment of low to moderate myopic astigmatism. This retrospective study enrolled a total of 47 eyes that had undergone TransPRK. Preoperative cylinder diopters ranged from -0.75 D to -2.25 D (mean -1.11 ± 0.40 D), and the sphere was between -1.50 D and -5.75 D. Visual outcomes and vector analysis of astigmatism that included the error ratio (ER), correction ratio (CR), error of magnitude (EM), and error of angle (EA) were evaluated. At 6 months after TransPRK, all eyes had an uncorrected distance visual acuity of 20/20 or better, no eyes lost ≥2 lines of corrected distance visual acuity (CDVA), and 93.6% had residual refractive cylinder within ±0.50 D of the intended correction. On vector analysis, the mean correction ratio for refractive cylinder was 1.03 ± 0.30. The mean error of magnitude was -0.04 ± 0.36. The mean error of angle was 0.44° ± 7.42°, and 80.9% of eyes had an axis shift within ±10°. The absolute astigmatic error of magnitude was statistically significantly correlated with the intended cylinder correction (r = 0.48, P < 0.01). TransPRK showed safe, effective, and predictable results in the correction of low to moderate astigmatism and myopia.
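The vector quantities reported here follow the standard double-angle construction used in Alpins-style astigmatism analysis. The sketch below is a simplified illustration under those standard definitions, not the authors' code: cylinder/axis pairs are mapped to double-angle vectors, from which the correction ratio, error of magnitude, and error of angle follow.

```python
import numpy as np

def to_double_angle(cyl, axis_deg):
    """Represent an astigmatism (cylinder magnitude, axis) as a 2-D vector
    in double-angle space, where a 90-degree axis change flips the sign."""
    theta = np.deg2rad(2 * axis_deg)
    return np.array([cyl * np.cos(theta), cyl * np.sin(theta)])

def vector_analysis(tia_cyl, tia_axis, sia_cyl, sia_axis):
    """Correction ratio, error of magnitude, and error of angle for
    target-induced (TIA) and surgically induced (SIA) astigmatism magnitudes."""
    tia = to_double_angle(tia_cyl, tia_axis)
    sia = to_double_angle(sia_cyl, sia_axis)
    cr = sia_cyl / tia_cyl                      # correction ratio
    em = sia_cyl - tia_cyl                      # error of magnitude
    ea = 0.5 * np.rad2deg(np.arctan2(sia[1], sia[0]) - np.arctan2(tia[1], tia[0]))
    ea = (ea + 45) % 90 - 45                    # wrap angle error into (-45, 45]
    return cr, em, ea

print(vector_analysis(1.00, 90, 0.95, 93))  # slight under-correction, 3-degree axis shift
```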
Spaceflight Ka-Band High-Rate Radiation-Hard Modulator
NASA Technical Reports Server (NTRS)
Jaso, Jeffery M.
2011-01-01
A document discusses the creation of a Ka-band modulator developed specifically for the NASA/GSFC Solar Dynamics Observatory (SDO). This flight design consists of a high-bandwidth, Quadriphase Shift Keying (QPSK) vector modulator with radiation-hardened, high-rate driver circuitry that receives I and Q channel data. The radiation-hard design enables SDO's Ka-band communications downlink system to transmit 130 Mbps (300 Msps after data encoding) of science instrument data to the ground system continuously throughout the mission's minimum life of five years. The low error vector magnitude (EVM) of the modulator lowers the implementation loss of the transmitter in which it is used, thereby increasing the overall communication system link margin. The modulator comprises a component within the SDO transmitter, and meets the following specifications over a 0 to 40 °C operational temperature range: QPSK/OQPSK modulator, 300-Msps symbol rate, 26.5-GHz center frequency, error vector magnitude less than or equal to 10 percent rms, and compliance with the NTIA (National Telecommunications and Information Administration) spectral mask.
Zhang, Lijun; Sy, Mary Ellen; Mai, Harry; Yu, Fei; Hamilton, D Rex
2015-01-01
To compare the prediction error after toric intraocular lens (IOL) (Acrysof IQ) implantation using corneal astigmatism measurements obtained with an IOLMaster automated keratometer and a Galilei dual rotating camera Scheimpflug-Placido tomographer. Jules Stein Eye Institute, University of California Los Angeles, Los Angeles, California, USA. Retrospective case series. The predicted residual astigmatism after toric IOL implantation was calculated using preoperative astigmatism values from an automated keratometer and the total corneal power (TCP) determined by ray tracing through the measured anterior and posterior corneal surfaces using dual Scheimpflug-Placido tomography. The prediction error was calculated as the difference between the predicted astigmatism and the manifest astigmatism at least 1 month postoperatively. The calculations included vector analysis. The study evaluated 35 eyes (35 patients). The preoperative corneal posterior astigmatism mean magnitude was 0.33 diopter (D) ± 0.16 (SD) (vector mean 0.23 × 176). Twenty-six eyes (74.3%) had with-the-rule (WTR) posterior astigmatism. The postoperative manifest refractive astigmatism mean magnitude was 0.38 ± 0.18 D (vector mean 0.26 × 171). There was no statistically significant difference in the mean magnitude prediction error between the automated keratometer and TCP techniques. However, the automated keratometer method tended to overcorrect WTR astigmatism and undercorrect against-the-rule (ATR) astigmatism. The TCP technique lacked these biases. The automated keratometer and TCP methods for estimating the magnitude of corneal astigmatism gave similar results. However, the automated keratometer method tended to overcorrect WTR astigmatism and undercorrect ATR astigmatism. Dr. Hamilton has received honoraria for educational lectures from Ziemer Ophthalmic Systems. No other author has a financial or proprietary interest in any material or method mentioned. Copyright © 2015 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Daneshgaran, Fred; Mondin, Marina; Olia, Khashayar
This paper is focused on the problem of Information Reconciliation (IR) for continuous variable Quantum Key Distribution (QKD). The main problem is quantization and assignment of labels to the samples of the Gaussian variables observed at Alice and Bob. The trouble is that most of the samples, assuming that the Gaussian variable is zero mean, which is de facto the case, tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses, corresponding to a lower effective Signal-to-Noise Ratio (SNR) and exacerbating the problem. Quantization over higher dimensions is advantageous since it allows for fractional bit-per-sample accuracy, which may be needed at very low SNR conditions whereby the achievable secret key rate is significantly less than one bit per sample. In this paper, we propose to use Permutation Modulation (PM) for quantization of Gaussian vectors potentially containing thousands of samples. PM is applied to the magnitudes of the Gaussian samples, and we explore the dependence of the sign error probability on the magnitude of the samples. At very low SNR, we may transmit the entire label of the PM code from Bob to Alice in Reverse Reconciliation (RR) over a public channel. The side information extracted from this label can then be used by Alice to characterize the sign error probability of her individual samples. Forward Error Correction (FEC) coding can be used by Bob on each subset of samples with similar sign error probability to aid Alice in error correction. This can be done for different subsets of samples with similar sign error probabilities, leading to an Unequal Error Protection (UEP) coding paradigm.
Gordon, H R; Wang, M
1992-07-20
The first step in the coastal zone color scanner (CZCS) atmospheric-correction algorithm is the computation of the Rayleigh-scattering contribution, L_r, to the radiance leaving the top of the atmosphere over the ocean. In the present algorithm L_r is computed by assuming that the ocean surface is flat. Computations of the radiance leaving a Rayleigh-scattering atmosphere overlying a rough Fresnel-reflecting ocean are presented to assess the radiance error caused by the flat-ocean assumption. The surface-roughness model is described in detail for both scalar and vector (including polarization) radiative transfer theory. The computations utilizing the vector theory show that the magnitude of the error depends significantly on the assumptions made in regard to the shadowing of one wave by another. In the case of the coastal zone color scanner bands, we show that for moderate solar zenith angles the error is generally below the 1 digital count level, except near the edge of the scan for high wind speeds. For larger solar zenith angles, the error is generally larger and can exceed 1 digital count at some wavelengths over the entire scan, even for light winds. The error in L_r caused by ignoring surface roughness is shown to be of the same order of magnitude as that caused by uncertainties of ±15 mb in the surface atmospheric pressure or of ±50 Dobson units in the ozone concentration. For future sensors, which will have greater radiometric sensitivity, the error caused by the flat-ocean assumption in the computation of L_r could be as much as an order of magnitude larger than the noise-equivalent spectral radiance in certain situations.
SU-E-J-45: The Correlation Between CBCT Flat Panel Misalignment and 3D Image Guidance Accuracy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kenton, O; Valdes, G; Yin, L
Purpose: To simulate the impact of CBCT flat panel misalignment on image quality and on the calculated correction vectors in 3D image guided proton therapy, and to determine whether these calibration errors can be caught in our QA process. Methods: The X-ray source and detector geometrical calibration (flexmap) file of the CBCT system in the AdaPTinsight software (IBA proton therapy) was edited to induce known changes in the rotational and translational calibrations of the imaging panel. Translations of up to ±10 mm in the x, y, and z directions (see supplemental) and rotational errors of up to ±3° were induced. The calibration files were then used to reconstruct the CBCT image of a pancreatic patient and a CatPhan phantom. Correction vectors were calculated for the patient using the software's auto match system and compared to baseline values. The CatPhan CBCT images were used for quantitative evaluation of image quality for each type of induced error. Results: Translations of 1 to 3 mm in the x and y calibration resulted in corresponding correction vector errors of equal magnitude. Similar shifts were seen for 10 mm offsets in the y-direction; however, in the x-direction the image quality was too degraded for a match. These translational errors can be identified through differences in isocenter from orthogonal kV images taken during routine QA. Errors in the z-direction had no effect on the correction vector or image quality. Rotations of the imaging panel calibration resulted in corresponding correction vector rotations of the patient images. These rotations also degraded image quality, which can be identified through quantitative image quality metrics. Conclusion: Misalignment of CBCT geometry can lead to incorrect translational and rotational patient correction vectors. These errors can be identified through QA of the imaging isocenter as compared to orthogonal images, combined with monitoring of CBCT image quality.
A map overlay error model based on boundary geometry
Gaeuman, D.; Symanzik, J.; Schmidt, J.C.
2005-01-01
An error model for quantifying the magnitudes and variability of errors generated in the areas of polygons during spatial overlay of vector geographic information system layers is presented. Numerical simulation of polygon boundary displacements was used to propagate coordinate errors to spatial overlays. The model departs from most previous error models in that it incorporates spatial dependence of coordinate errors at the scale of the boundary segment. It can be readily adapted to match the scale of error-boundary interactions responsible for error generation on a given overlay. The area of error generated by overlay depends on the sinuosity of polygon boundaries, as well as the magnitude of the coordinate errors on the input layers. Asymmetry in boundary shape has relatively little effect on error generation. Overlay errors are affected by real differences in boundary positions on the input layers, as well as errors in the boundary positions. Real differences between input layers tend to compensate for much of the error generated by coordinate errors. Thus, the area of change measured on an overlay layer produced by the XOR overlay operation will be more accurate if the area of real change depicted on the overlay is large. The model presented here considers these interactions, making it especially useful for estimating errors in studies of landscape change over time. © 2005 The Ohio State University.
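A toy Monte Carlo sketch of the kind of boundary-displacement simulation described, using the shapely library: two noisy digitizations of the same boundary are overlaid with XOR, and the spurious change area is averaged. The displacement model here is simple independent vertex noise, whereas the paper's model imposes spatial correlation along boundary segments.

```python
import numpy as np
from shapely.geometry import Polygon

rng = np.random.default_rng(1)

def perturb(poly, sigma):
    """Displace each vertex by independent Gaussian coordinate error.
    (The paper's model adds spatial correlation along boundary segments.)"""
    coords = np.asarray(poly.exterior.coords)[:-1]   # drop duplicated closing vertex
    return Polygon(coords + rng.normal(0.0, sigma, coords.shape))

truth = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])
# Spurious change area from an XOR overlay of two noisy versions of one boundary
areas = [perturb(truth, 0.1).symmetric_difference(perturb(truth, 0.1)).area
         for _ in range(1000)]
print(f"mean spurious XOR area: {np.mean(areas):.3f} map units^2 (true change is zero)")
```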
Wind estimates from cloud motions: Phase 1 of an in situ aircraft verification experiment
NASA Technical Reports Server (NTRS)
Hasler, A. F.; Shenk, W. E.; Skillman, W.
1974-01-01
An initial experiment was conducted to verify geostationary satellite derived cloud motion wind estimates with in situ aircraft wind velocity measurements. Case histories of one-half hour to two hours were obtained for 3-10 km diameter cumulus cloud systems on 6 days. Also, one cirrus cloud case was obtained. In most cases the clouds were discrete enough that both the cloud motion and the ambient wind could be measured with the same aircraft Inertial Navigation System (INS). Since the INS drift error is the same for both the cloud motion and wind measurements, the drift error subtracts out of the relative motion determinations. The magnitude of the vector difference between the cloud motion and the ambient wind at the cloud base averaged 1.2 m/sec. The wind vector at higher levels in the cloud layer differed by about 3 m/sec to 5 m/sec from the cloud motion vector.
Characterization of a 300-GHz Transmission System for Digital Communications
NASA Astrophysics Data System (ADS)
Hudlička, Martin; Salhi, Mohammed; Kleine-Ostmann, Thomas; Schrader, Thorsten
2017-08-01
The paper presents the characterization of a 300-GHz transmission system for modern digital communications. The quality of the modulated signal at the output of the system (error vector magnitude, EVM) is measured using a vector signal analyzer. A method using a digital real-time oscilloscope and consecutive mathematical processing in a computer is shown for the analysis of signals with bandwidths exceeding that of state-of-the-art vector signal analyzers. The uncertainty of the EVM measured using the real-time oscilloscope is analyzed. The behaviour of the 300-GHz transmission system is studied with respect to various modulation schemes and different signal symbol rates.
Orion Exploration Flight Test-1 Contingency Drogue Deploy Velocity Trigger
NASA Technical Reports Server (NTRS)
Gay, Robert S.; Stochowiak, Susan; Smith, Kelly
2013-01-01
As a backup to the GPS-aided Kalman filter and the barometric altimeter, an "adjusted" velocity trigger is used during entry to trigger the chain of events that leads to drogue chute deploy for the Orion Multi-Purpose Crew Vehicle (MPCV) Exploration Flight Test-1 (EFT-1). Even though this scenario is multiple failures deep, the Orion Guidance, Navigation, and Control (GN&C) software makes use of a clever technique that was taken from the Mars Science Laboratory (MSL) program, which recently successfully landed the Curiosity rover on Mars. MSL used this technique to jettison the heat shield at the proper time during descent. Originally, Orion used the unadjusted navigated velocity, but the removal of the Star Tracker to save costs for EFT-1 increased attitude errors, which increased inertial propagation errors to the point where the unadjusted velocity caused altitude dispersions at drogue deploy to be too large. Thus, to reduce dispersions, the velocity vector is projected onto a "reference" vector that represents the nominal "truth" vector at the desired point in the trajectory. Because the navigation errors are largely perpendicular to the truth vector, this projection significantly reduces dispersions in the velocity magnitude. This paper details the evolution of this trigger method for the Orion project and covers the various methods tested to determine the reference "truth" vector, and at what point in the trajectory it should be computed.
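A sketch of the projection idea with hypothetical numbers (the flight software's actual frames and trigger logic are more involved): projecting the navigated velocity onto a stored reference "truth" direction suppresses the error components perpendicular to that direction.

```python
import numpy as np

def adjusted_speed(v_nav, v_ref):
    """Project the navigated velocity onto the reference 'truth' direction.

    Navigation errors that are largely perpendicular to v_ref contribute
    little to the projected magnitude, reducing trigger dispersions.
    """
    u = v_ref / np.linalg.norm(v_ref)
    return float(np.dot(v_nav, u))

v_ref = np.array([-300.0, 0.0, -120.0])        # nominal velocity at trigger point (made up)
v_nav = v_ref + np.array([2.0, 25.0, -4.0])    # navigated velocity, mostly-perpendicular error
print(adjusted_speed(v_nav, v_ref))            # close to |v_ref| despite the error
print(np.linalg.norm(v_nav))                   # raw magnitude carries more of the error
```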
Peroni, M; Golland, P; Sharp, G C; Baroni, G
2011-01-01
Deformable image registration is a complex optimization problem whose goal is to model a non-rigid transformation between two images. A crucial issue in this field is guaranteeing the user a robust but computationally reasonable algorithm. We rank the performance of four stopping criteria and six stopping value computation strategies for a log-domain deformable registration. The stopping criteria we test are: (a) velocity field update magnitude, (b) vector field Jacobian, (c) mean squared error, and (d) harmonic energy. Experiments demonstrate that comparing the metric value over the last three iterations with the metric minimum of between four and six previous iterations is a robust and appropriate strategy. The harmonic energy and vector field update magnitude metrics give the best results in terms of robustness and speed of convergence.
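A sketch of the winning stopping rule as we read it (a hypothetical helper, not the authors' code): stop when the best metric value over the last three iterations fails to improve on the minimum over the preceding iterations (here, six).

```python
def should_stop(metric_history, recent=3, lookback=6):
    """Stop when the minimum over the last `recent` iterations no longer
    improves on the minimum over the `lookback` iterations before them."""
    if len(metric_history) < recent + lookback:
        return False
    recent_best = min(metric_history[-recent:])
    previous_best = min(metric_history[-(recent + lookback):-recent])
    return recent_best >= previous_best

history = [10.0, 8.1, 7.0, 6.4, 6.1, 6.0, 6.02, 6.01, 6.03]
print(should_stop(history))  # True: the last three values no longer improve
```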
DOE Office of Scientific and Technical Information (OSTI.GOV)
Falconer, David A.; Tiwari, Sanjiv K.; Moore, Ronald L.
Projection errors limit the use of vector magnetograms of active regions (ARs) far from the disk center. In this Letter, for ARs observed up to 60° from the disk center, we demonstrate a method for measuring and reducing the projection error in the magnitude of any whole-AR parameter that is derived from a vector magnetogram that has been deprojected to the disk center. The method assumes that the center-to-limb curve of the average of the parameter's absolute values, measured from the disk passage of a large number of ARs and normalized to each AR's absolute value of the parameter at central meridian, gives the average fractional projection error at each radial distance from the disk center. To demonstrate the method, we use a large set of large-flux ARs and apply the method to a whole-AR parameter that is among the simplest to measure: whole-AR magnetic flux. We measure 30,845 SDO/Helioseismic and Magnetic Imager vector magnetograms covering the disk passage of 272 large-flux ARs, each having whole-AR flux >10^22 Mx. We obtain the center-to-limb radial-distance run of the average projection error in measured whole-AR flux from a Chebyshev fit to the radial-distance plot of the 30,845 normalized measured values. The average projection error in the measured whole-AR flux of an AR at a given radial distance is removed by multiplying the measured flux by the correction factor given by the fit. The correction is important both for the study of the evolution of ARs and for improving the accuracy of forecasts of an AR's major flare/coronal mass ejection productivity.
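A sketch of the correction procedure with synthetic stand-in data (numpy's Chebyshev fitting; the polynomial degree and the synthetic curve here are arbitrary choices, not the authors'):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# r: radial distance from disk center (up to ~0.87 for 60 degrees), one entry per
# magnetogram; f: whole-AR flux normalized by the AR's central-meridian flux
rng = np.random.default_rng(2)
r = rng.uniform(0.0, 0.87, 30_845)
f = 1.0 - 0.4 * r**2 + rng.normal(0.0, 0.05, r.size)   # synthetic stand-in data

coef = C.chebfit(r, f, deg=4)   # center-to-limb curve of the mean projection error

def corrected_flux(measured_flux, radial_distance):
    """Remove the average projection error at this radial distance."""
    return measured_flux / C.chebval(radial_distance, coef)

print(corrected_flux(8.5e21, 0.6))   # Mx, corrected toward central-meridian value
```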
Kalman Filter for Spinning Spacecraft Attitude Estimation
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Sedlak, Joseph E.
2008-01-01
This paper presents a Kalman filter using a seven-component attitude state vector comprising the angular momentum components in an inertial reference frame, the angular momentum components in the body frame, and a rotation angle. The relatively slow variation of these parameters makes this parameterization advantageous for spinning spacecraft attitude estimation. The filter accounts for the constraint that the magnitude of the angular momentum vector is the same in the inertial and body frames by employing a reduced six-component error state. Four variants of the filter, defined by different choices for the reduced error state, are tested against a quaternion-based filter using simulated data for the THEMIS mission. Three of these variants choose three of the components of the error state to be the infinitesimal attitude error angles, facilitating the computation of measurement sensitivity matrices and causing the usual 3x3 attitude covariance matrix to be a submatrix of the 6x6 covariance of the error state. These variants differ in their choice for the other three components of the error state. The variant employing the infinitesimal attitude error angles and the angular momentum components in an inertial reference frame as the error state shows the best combination of robustness and efficiency in the simulations. Attitude estimation results using THEMIS flight data are also presented.
NASA Technical Reports Server (NTRS)
Sylvester, W. B.
1984-01-01
A series of SEASAT repeat orbits over a sequence of best low center positions is simulated by using the Seatrak satellite calculator. These low centers are, upon appropriate interpolation to hourly positions, located at various times during the ± 3 hour assimilation cycle. Error analysis for a sample of best cyclone center positions taken from the Atlantic and Pacific oceans reveals a minimum average error of 1.1 deg of longitude and a standard deviation of 0.9 deg of longitude. The magnitude of the average error seems to suggest that by utilizing the ± 3 hour window in the assimilation cycle, the quality of the SASS data is degraded to the level of the background. A further consequence of this assimilation scheme is the effect which is manifested as a result of the blending of two or more juxtaposed vector winds, generally possessing different properties (vector quantity and time). The outcome of this is to reduce gradients in the wind field and to deform isobaric and frontal patterns of the initial field.
Xue, Min; Pan, Shilong; Zhao, Yongjiu
2015-02-15
A novel optical vector network analyzer (OVNA) based on optical single-sideband (OSSB) modulation and balanced photodetection is proposed and experimentally demonstrated, which can eliminate the measurement error induced by the high-order sidebands in the OSSB signal. According to the analytical model of the conventional OSSB-based OVNA, if the optical carrier in the OSSB signal is fully suppressed, the measurement result is exactly the high-order-sideband-induced measurement error. By splitting the OSSB signal after the optical device-under-test (ODUT) into two paths, removing the optical carrier in one path, and then detecting the two signals in the two paths using a balanced photodetector (BPD), high-order-sideband-induced measurement error can be ideally eliminated. As a result, accurate responses of the ODUT can be achieved without complex post-signal processing. A proof-of-concept experiment is carried out. The magnitude and phase responses of a fiber Bragg grating (FBG) measured by the proposed OVNA with different modulation indices are superimposed, showing that the high-order-sideband-induced measurement error is effectively removed.
Zhang, Jiamei; Wang, Yan; Chen, Xiaoqin
2016-04-01
To evaluate and compare refractive outcomes of moderate- and high-astigmatism correction after wavefront-guided laser in situ keratomileusis (LASIK) and small-incision lenticule extraction (SMILE). This comparative study enrolled a total of 64 eyes that had undergone SMILE (42 eyes) and wavefront-guided LASIK (22 eyes). Preoperative cylindrical diopters were ≤-2.25 D in moderate- and >-2.25 D in high-astigmatism subgroups. The refractive results were analyzed based on the Alpins vector method that included target-induced astigmatism, surgically induced astigmatism, difference vector, correction index, index of success, magnitude of error, angle of error, and flattening index. All subjects completed the 3-month follow-up. No significant differences were found in the target-induced astigmatism, surgically induced astigmatism, and difference vector between SMILE and wavefront-guided LASIK. However, the average angle of error value was -1.00 ± 3.16 after wavefront-guided LASIK and 1.22 ± 3.85 after SMILE with statistical significance (P < 0.05). The absolute angle of error value was statistically correlated with difference vector and index of success after both procedures. In the moderate-astigmatism group, correction index was 1.04 ± 0.15 after wavefront-guided LASIK and 0.88 ± 0.15 after SMILE (P < 0.05). However, in the high-astigmatism group, correction index was 0.87 ± 0.13 after wavefront-guided LASIK and 0.88 ± 0.12 after SMILE (P = 0.889). Both procedures showed preferable outcomes in the correction of moderate and high astigmatism. However, high astigmatism was undercorrected after both procedures. Axial error of astigmatic correction may be one of the potential factors for the undercorrection.
High-accurate optical vector analysis based on optical single-sideband modulation
NASA Astrophysics Data System (ADS)
Xue, Min; Pan, Shilong
2016-11-01
Much of the effort devoted to the area of optical communications has been directed at improving the optical spectral efficiency. Various innovative optical devices have thus been developed to finely manipulate the optical spectrum. Knowing the spectral responses of these devices, including the magnitude, phase, and polarization responses, is of great importance for their fabrication and application. To achieve high-resolution characterization, optical vector analyzers (OVAs) based on optical single-sideband (OSSB) modulation have been proposed and developed. Benefiting from mature and high-resolution microwave technologies, the OSSB-based OVA can potentially achieve a resolution of sub-Hz. However, the accuracy is restricted by the measurement errors induced by the unwanted first-order sideband and the high-order sidebands in the OSSB signal, since electrical-to-optical conversion and optical-to-electrical conversion are essentially required to achieve high-resolution frequency sweeping and to extract the magnitude and phase information in the electrical domain. Recently, great efforts have been devoted to improving the accuracy of the OSSB-based OVA. In this paper, the influence of the unwanted-sideband-induced measurement errors and techniques for implementing highly accurate OSSB-based OVAs are discussed.
Feizi, Sepehr; Delfazayebaher, Siamak; Ownagh, Vahid; Sadeghpour, Fatemeh
To evaluate the agreement between total corneal astigmatism calculated by vector summation of anterior and posterior corneal astigmatism (TCA_vec) and total corneal astigmatism measured by ray tracing (TCA_ray). This study enrolled a total of 204 right eyes of 204 normal subjects. The eyes were measured using a Galilei double Scheimpflug analyzer. The measured parameters included simulated keratometric astigmatism using the keratometric index, anterior corneal astigmatism using the corneal refractive index, posterior corneal astigmatism, and TCA_ray. TCA_vec was derived by vector summation of the astigmatism on the anterior and posterior corneal surfaces. The magnitudes and axes of TCA_vec and TCA_ray were compared. The Pearson correlation coefficient and Bland-Altman plots were used to assess the relationship and agreement between TCA_vec and TCA_ray, respectively. The mean TCA_vec and TCA_ray magnitudes were 0.76 ± 0.57 D and 1.00 ± 0.78 D, respectively (P < 0.001). The mean axis orientations were 85.12 ± 30.26° and 89.67 ± 36.76°, respectively (P = 0.02). Strong correlations were found between the TCA_vec and TCA_ray magnitudes (r = 0.96, P < 0.001). Moderate associations were observed between the TCA_vec and TCA_ray axes (r = 0.75, P < 0.001). Bland-Altman plots produced 95% limits of agreement for the TCA_vec and TCA_ray magnitudes from -0.33 to 0.82 D. The 95% limits of agreement between the TCA_vec and TCA_ray axes were -43.0 to 52.1°. The magnitudes and axes of astigmatism measured by the vector summation and ray tracing methods cannot be used interchangeably. There was a systematic error between the TCA_vec and TCA_ray magnitudes. Copyright © 2017 Spanish General Council of Optometry. Published by Elsevier España, S.L.U. All rights reserved.
Method and system for operating an electric motor
Gallegos-Lopez, Gabriel; Hiti, Silva; Perisic, Milun
2013-01-22
Methods and systems for operating an electric motor having a plurality of windings with an inverter having a plurality of switches coupled to a voltage source are provided. A first plurality of switching vectors is applied to the plurality of switches. The first plurality of switching vectors includes a first ratio of first magnitude switching vectors to second magnitude switching vectors. A direct current (DC) current associated with the voltage source is monitored during the applying of the first plurality of switching vectors to the plurality of switches. A second ratio of the first magnitude switching vectors to the second magnitude switching vectors is selected based on the monitoring of the DC current associated with the voltage source. A second plurality of switching vectors is applied to the plurality of switches. The second plurality of switching vectors includes the second ratio of the first magnitude switching vectors to the second magnitude switching vectors.
Zlotnik, Alexander; Gallardo-Antolín, Ascensión; Cuchí Alfaro, Miguel; Pérez Pérez, María Carmen; Montero Martínez, Juan Manuel
2015-08-01
Although emergency department visit forecasting can be of use for nurse staff planning, previous research has focused on models that lacked sufficient resolution and realistic error metrics for these predictions to be applied in practice. Using data from a 1100-bed specialized care hospital with 553,000 patients assigned to its healthcare area, forecasts with different prediction horizons, from 2 to 24 weeks ahead, with an 8-hour granularity, using support vector regression, M5P, and stratified average time-series models were generated with an open-source software package. As overstaffing and understaffing errors have different implications, error metrics and potential personnel monetary savings were calculated with a custom validation scheme, which simulated subsequent generation of predictions during a 4-year period. Results were then compared with a generalized estimating equation regression. Support vector regression and M5P models were found to be superior to the stratified average model with a 95% confidence interval. Our findings suggest that medium and severe understaffing situations could be reduced in more than an order of magnitude and average yearly savings of up to €683,500 could be achieved if dynamic nursing staff allocation was performed with support vector regression instead of the static staffing levels currently in use.
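A minimal scikit-learn sketch in the spirit of the described setup (the feature choices, hyperparameters, and synthetic data here are illustrative assumptions, not the paper's configuration):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in: visits per 8-hour shift with weekly seasonality (21 shifts/week)
rng = np.random.default_rng(3)
n = 3 * 365 * 3                         # three years of 8-hour shifts
t = np.arange(n)
visits = 120 + 25 * np.sin(2 * np.pi * t / 21) + rng.normal(0, 8, n)

# Features: shift-of-week index plus lagged counts from one and two weeks back
lag = 21
X = np.column_stack([t[2 * lag:] % 21, visits[:-2 * lag], visits[lag:-lag]])
y = visits[2 * lag:]

model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=1.0))
model.fit(X[:-100], y[:-100])
pred = model.predict(X[-100:])
print(f"MAE on held-out shifts: {np.mean(np.abs(pred - y[-100:])):.1f} visits")
```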
Evaluation of the navigation performance of shipboard-VTOL-landing guidance systems
NASA Technical Reports Server (NTRS)
Mcgee, L. A.; Paulk, C. H., Jr.; Steck, S. A.; Schmidt, S. F.; Merz, A. W.
1979-01-01
The objective of this study was to explore the performance of a VTOL aircraft landing approach navigation system that receives data (1) from either a microwave scanning beam (MSB) or a radar-transponder (R-T) landing guidance system, and (2) information data-linked from an aviation facility ship. State-of-the-art low-cost-aided inertial techniques and variable gain filters were used in the assumed navigation system. Compensation for ship motion was accomplished by a landing pad deviation vector concept that is a measure of the landing pad's deviation from its calm sea location. The results show that the landing guidance concepts were successful in meeting all of the current Navy navigation error specifications, provided that vector magnitude of the allowable error, rather than the error in each axis, is a permissible interpretation of acceptable performance. The success of these concepts, however, is strongly dependent on the distance measuring equipment bias. In addition, the 'best possible' closed-loop tracking performance achievable with the assumed point-mass VTOL aircraft guidance concept is demonstrated.
Relativistic Transverse Gravitational Redshift
NASA Astrophysics Data System (ADS)
Mayer, A. F.
2012-12-01
The parametrized post-Newtonian (PPN) formalism is a tool for quantitative analysis of the weak gravitational field based on the field equations of general relativity. This formalism and its ten parameters provide the practical theoretical foundation for the evaluation of empirical data produced by space-based missions designed to map and better understand the gravitational field (e.g., GRAIL, GRACE, GOCE). Accordingly, mission data is interpreted in the context of the canonical PPN formalism; unexpected, anomalous data are explained as similarly unexpected but apparently real physical phenomena, which may be characterized as "gravitational anomalies," or by various sources contributing to the total error budget. Another possibility, which is typically not considered, is a small modeling error in canonical general relativity. The concept of the idealized point-mass spherical equipotential surface, which originates with Newton's law of gravity, is preserved in Einstein's synthesis of special relativity with accelerated reference frames in the form of the field equations. It was not previously realized that the fundamental principles of relativity invalidate this concept and with it the idea that the gravitational field is conservative (i.e., zero net work is done on any closed path). The ideal radial free fall of a material body from arbitrarily-large range to a point on such an equipotential surface (S) determines a unique escape-velocity vector of magnitude v collinear to the acceleration vector of magnitude g at this point. For two such points on S separated by angle dφ, the Equivalence Principle implies distinct reference frames experiencing inertial acceleration of identical magnitude g in different directions in space. The complete equivalence of these inertially-accelerated frames to their analogous frames at rest on S requires evaluation at instantaneous velocity v relative to a local inertial observer. Because these velocity vectors are not parallel, a symmetric energy potential exists between the frames that is quantified by the instantaneous Δv = v·dφ between them; in order for either frame to become indistinguishable from the other, such that their respective velocity and acceleration vectors are parallel, a change in velocity is required. While the qualitative features of general relativity imply this phenomenon (i.e., a symmetric potential difference between two points on a Newtonian 'equipotential surface' that is similar to a friction effect), it is not predicted by the field equations due to a modeling error concerning time. This is an error of omission; time has fundamental geometric properties implied by the principles of relativity that are not reflected in the field equations. Where b is the radius and g is the gravitational acceleration characterizing a spherical geoid S of an ideal point-source gravitational field, an elegant derivation that rests on first principles shows that for two points at rest on S separated by a distance d << b, a symmetric relativistic redshift exists between these points of magnitude z = gd²/(bc²), which over 1 km at Earth sea level yields z ≈ 10⁻¹⁷. It can be tested with a variety of methods, in particular laser interferometry. A more sophisticated derivation yields a considerably more complex predictive formula for any two points in a gravitational field.
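As a quick numerical check of the quoted magnitude, using standard sea-level values:

```python
g, d, b, c = 9.81, 1.0e3, 6.371e6, 2.998e8   # m/s^2, m, m, m/s
z = g * d**2 / (b * c**2)
print(f"z = {z:.2e}")                        # ~1.7e-17, consistent with z ~ 10^-17
```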
Applying EVM to Satellite on Ground and In-Orbit Testing - Better Data in Less Time
NASA Technical Reports Server (NTRS)
Peters, Robert; Lebbink, Elizabeth-Klein; Lee, Victor; Model, Josh; Wezalis, Robert; Taylor, John
2008-01-01
Using Error Vector Magnitude (EVM) in satellite integration and test allows rapid verification of the Bit Error Rate (BER) performance of a satellite link and is particularly well suited to measurement of low-bit-rate satellite links, where it can result in a major reduction in test time (about 3 weeks per satellite for the Geosynchronous Operational Environmental Satellite [GOES] satellites during ground test) and can provide diagnostic information. Empirical techniques developed to predict BER performance from EVM measurements, and lessons learned about applying these techniques during GOES N, O, and P integration and test and post-launch testing, are discussed.
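One common textbook-style mapping from EVM to BER, not necessarily the empirical technique the authors developed, treats RMS EVM as an effective Gaussian noise level, giving for QPSK roughly BER ≈ Q(1/EVM_rms):

```python
import math

def qfunc(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def ber_from_evm_qpsk(evm_rms):
    """Approximate QPSK BER, treating EVM as Gaussian noise: SNR ~ 1/EVM^2."""
    return qfunc(1.0 / evm_rms)

for evm in (0.10, 0.20, 0.30):
    print(f"EVM {evm:.0%} -> BER ~ {ber_from_evm_qpsk(evm):.1e}")
```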
Simulating and Detecting Radiation-Induced Errors for Onboard Machine Learning
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri L.; Bornstein, Benjamin; Granat, Robert; Tang, Benyang; Turmon, Michael
2009-01-01
Spacecraft processors and memory are subjected to high radiation doses and therefore employ radiation-hardened components. However, these components are orders of magnitude more expensive than typical desktop components, and they lag years behind in terms of speed and size. We have integrated algorithm-based fault tolerance (ABFT) methods into onboard data analysis algorithms to detect radiation-induced errors, which ultimately may permit the use of spacecraft memory that need not be fully hardened, reducing cost and increasing capability at the same time. We have also developed a lightweight software radiation simulator, BITFLIPS, that permits evaluation of error detection strategies in a controlled fashion, including the specification of the radiation rate and selective exposure of individual data structures. Using BITFLIPS, we evaluated our error detection methods when using a support vector machine to analyze data collected by the Mars Odyssey spacecraft. We found ABFT error detection for matrix multiplication is very successful, while error detection for Gaussian kernel computation still has room for improvement.
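The classic ABFT construction for matrix multiplication (a generic Huang-Abraham style textbook sketch, not the flight code) appends checksum rows and columns and verifies them after the product; a corrupted entry breaks both its row and column checksums.

```python
import numpy as np

def check(C_ext, tol=1e-8):
    """Verify the row and column checksums of an extended product matrix."""
    col_ok = np.allclose(C_ext[:-1].sum(axis=0), C_ext[-1], atol=tol)
    row_ok = np.allclose(C_ext[:, :-1].sum(axis=1), C_ext[:, -1], atol=tol)
    return col_ok and row_ok

rng = np.random.default_rng(4)
A, B = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
Ac = np.vstack([A, A.sum(axis=0)])                   # column-checksum extension of A
Br = np.hstack([B, B.sum(axis=1, keepdims=True)])    # row-checksum extension of B
C_ext = Ac @ Br                                      # checksums carry through the product

print("clean product passes:", check(C_ext))
C_ext[3, 5] += 1.0                                   # simulated radiation-induced upset
print("corrupted product passes:", check(C_ext))     # False: error detected
```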
Balancing aggregation and smoothing errors in inverse models
Turner, A. J.; Jacob, D. J.
2015-06-30
Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.
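As a concrete illustration of method (1), grid coarsening amounts to applying a fixed aggregation operator to the native-resolution state; the toy sketch below shows where aggregation error enters (the paper's GMM/RBF construction is considerably more elaborate):

```python
import numpy as np

def coarsen_operator(n_native, block):
    """Aggregation matrix G mapping a native state of length n_native to
    n_native // block merged elements by simple block averaging."""
    n_coarse = n_native // block
    G = np.zeros((n_coarse, n_native))
    for k in range(n_coarse):
        G[k, k * block:(k + 1) * block] = 1.0 / block
    return G

x_native = np.sin(np.linspace(0, 2 * np.pi, 64))   # toy native-resolution state
G = coarsen_operator(64, block=4)
x_coarse = G @ x_native                 # reduced state actually optimized
x_back = np.repeat(x_coarse, 4)         # prior relationship imposed within each block
print(f"max aggregation error: {np.abs(x_back - x_native).max():.3f}")
```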
A Worksheet to Enhance Students’ Conceptual Understanding in Vector Components
NASA Astrophysics Data System (ADS)
Wutchana, Umporn; Emarat, Narumon
2017-09-01
With and without physical context, we explored 59 undergraduate students' conceptual and procedural understanding of vector components using both open-ended problems and multiple-choice items designed based on research instruments used in physics education research. The results showed that a number of students produced errors and revealed alternative conceptions, especially when asked to draw the graphical form of vector components. This indicated that most of them had not developed a strong foundation of understanding in vector components and could not apply those concepts to problems with physical context. Based on the findings, we designed a worksheet to enhance the students' conceptual understanding of vector components. The worksheet is composed of three parts which help students to construct their own understanding of the definition, graphical form, and magnitude of vector components. To validate the worksheet, focus group discussions with 3 and 10 graduate students (in-service science teachers) were conducted. The modified worksheet was then distributed to 41 grade 9 students in a science class. The students spent approximately 50 minutes completing the worksheet. They sketched and measured vectors and their components and compared them with the trigonometric ratios to consolidate the concepts of vector components. After completing the worksheet, their conceptual models were verified: 83% of them constructed the correct model of vector components.
Ultra-broad band, low power, highly efficient coherent wavelength conversion in quantum dot SOA.
Contestabile, G; Yoshida, Y; Maruta, A; Kitayama, K
2012-12-03
We report broadband, all-optical wavelength conversion over a 100 nm span, covering the full S- and C-bands, with positive conversion efficiency at low optical input power, exploiting dual-pump four-wave mixing in a quantum dot semiconductor optical amplifier (QD-SOA). We also demonstrate by error vector magnitude analysis the full transparency of the conversion scheme for coherent modulation formats (QPSK, 8-PSK, 16-QAM, OFDM-16QAM) over the whole C-band.
Levin, Dovid; Habets, Emanuël A P; Gannot, Sharon
2010-10-01
An acoustic vector sensor provides measurements of both the pressure and particle velocity of a sound field in which it is placed. These measurements are vectorial in nature and can be used for the purpose of source localization. A straightforward approach towards determining the direction of arrival (DOA) utilizes the acoustic intensity vector, which is the product of pressure and particle velocity. The accuracy of an intensity vector based DOA estimator in the presence of noise has been analyzed previously. In this paper, the effects of reverberation upon the accuracy of such a DOA estimator are examined. It is shown that particular realizations of reverberation differ from an ideal isotropically diffuse field, and induce an estimation bias which is dependent upon the room impulse responses (RIRs). The limited knowledge available pertaining the RIRs is expressed statistically by employing the diffuse qualities of reverberation to extend Polack's statistical RIR model. Expressions for evaluating the typical bias magnitude as well as its probability distribution are derived.
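A minimal sketch of the intensity-based DOA estimator described, under free-field, single-source assumptions (not the paper's reverberant analysis):

```python
import numpy as np

def doa_azimuth_from_intensity(p, v):
    """Azimuth of the source from the time-averaged active intensity I = <p v>.

    p: (N,) pressure samples; v: (N, 3) particle-velocity samples.
    I points along the propagation direction, so the source lies along -I.
    """
    I = (p[:, None] * v).mean(axis=0)
    doa = -I / np.linalg.norm(I)
    return np.degrees(np.arctan2(doa[1], doa[0]))

# Plane wave arriving from 40 degrees azimuth, plus sensor noise
rng = np.random.default_rng(5)
n, az = 4096, np.deg2rad(40.0)
u = np.array([np.cos(az), np.sin(az), 0.0])               # unit vector toward the source
p = rng.standard_normal(n)                                # broadband pressure signal
v = -p[:, None] * u + 0.1 * rng.standard_normal((n, 3))   # particle velocity (rho*c = 1)
print(f"estimated azimuth: {doa_azimuth_from_intensity(p, v):.1f} deg")
```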
Bias and uncertainty in regression-calibrated models of groundwater flow in heterogeneous media
Cooley, R.L.; Christensen, S.
2006-01-01
Groundwater models need to account for detailed but generally unknown spatial variability (heterogeneity) of the hydrogeologic model inputs. To address this problem we replace the large, m-dimensional stochastic vector β that reflects both small and large scales of heterogeneity in the inputs by a lumped or smoothed m-dimensional approximation Yβ*, where Y is an interpolation matrix and β* is a stochastic vector of parameters. Vector β* has small enough dimension to allow its estimation with the available data. The consequence of the replacement is that the model function f(Yβ*) written in terms of the approximate inputs is in error with respect to the same model function written in terms of β, f(β), which is assumed to be nearly exact. The difference f(β) - f(Yβ*), termed model error, is spatially correlated, generates prediction biases, and causes standard confidence and prediction intervals to be too small. Model error is accounted for in the weighted nonlinear regression methodology developed to estimate β* and assess model uncertainties by incorporating the second-moment matrix of the model errors into the weight matrix. Techniques developed by statisticians to analyze classical nonlinear regression methods are extended to analyze the revised method. The analysis develops analytical expressions for bias terms reflecting the interaction of model nonlinearity and model error, for correction factors needed to adjust the sizes of confidence and prediction intervals for this interaction, and for correction factors needed to adjust the sizes of confidence and prediction intervals for possible use of a diagonal weight matrix in place of the correct one. If terms expressing the degree of intrinsic nonlinearity for f(β) and f(Yβ*) are small, then most of the biases are small and the correction factors are reduced in magnitude. Biases, correction factors, and confidence and prediction intervals were obtained for a test problem for which model error is large, to test the robustness of the methodology. Numerical results conform with the theoretical analysis. © 2005 Elsevier Ltd. All rights reserved.
Accelerating 4D flow MRI by exploiting vector field divergence regularization.
Santelli, Claudio; Loecher, Michael; Busch, Julia; Wieben, Oliver; Schaeffter, Tobias; Kozerke, Sebastian
2016-01-01
To improve velocity vector field reconstruction from undersampled four-dimensional (4D) flow MRI by penalizing divergence of the measured flow field. Iterative image reconstruction in which magnitude and phase are regularized separately in alternating iterations was implemented. The approach allows incorporating prior knowledge of the flow field being imaged. In the present work, velocity data were regularized to reduce divergence, using either divergence-free wavelets (DFW) or a finite difference (FD) method using the ℓ1-norm of divergence and curl. The reconstruction methods were tested on a numerical phantom and in vivo data. Results of the DFW and FD approaches were compared with data obtained with standard compressed sensing (CS) reconstruction. Relative to standard CS, directional errors of vector fields and divergence were reduced by 55-60% and 38-48% for three- and six-fold undersampled data with the DFW and FD methods. Velocity vector displays of the numerical phantom and in vivo data were found to be improved upon DFW or FD reconstruction. Regularization of vector field divergence in image reconstruction from undersampled 4D flow data is a valuable approach to improve reconstruction accuracy of velocity vector fields. © 2014 Wiley Periodicals, Inc.
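For reference, the divergence penalty at the heart of the FD variant can be formed with simple finite differences. This is a sketch of the penalty term only, assuming a velocity field on a uniform grid; the paper embeds such a term inside an iterative alternating reconstruction.

```python
import numpy as np

def divergence(vx, vy, vz, dx=1.0):
    """Central-difference divergence of a 3-D velocity field on a uniform grid."""
    return (np.gradient(vx, dx, axis=0)
            + np.gradient(vy, dx, axis=1)
            + np.gradient(vz, dx, axis=2))

# l1 divergence penalty of the kind used in the FD regularizer
rng = np.random.default_rng(6)
vx, vy, vz = (rng.standard_normal((16, 16, 16)) for _ in range(3))
penalty = np.abs(divergence(vx, vy, vz)).sum()
print(f"l1 divergence penalty: {penalty:.1f}")
```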
Regularized estimation of Euler pole parameters
NASA Astrophysics Data System (ADS)
Aktuğ, Bahadir; Yildirim, Ömer
2013-07-01
Euler vectors provide a unified framework to quantify the relative or absolute motions of tectonic plates through various geodetic and geophysical observations. With the advent of space geodesy, Euler parameters of several relatively small plates have been determined through the velocities derived from the space geodesy observations. However, the available data are usually insufficient in number and quality to estimate both the Euler vector components and the Euler pole parameters reliably. Since Euler vectors are defined globally in an Earth-centered Cartesian frame, estimation with the limited geographic coverage of the local/regional geodetic networks usually results in highly correlated vector components. In the case of estimating the Euler pole parameters directly, the situation is even worse, and the position of the Euler pole is nearly collinear with the magnitude of the rotation rate. In this study, a new method, which consists of an analytical derivation of the covariance matrix of the Euler vector in an ideal network configuration, is introduced and a regularized estimation method specifically tailored for estimating the Euler vector is presented. The results show that the proposed method outperforms the least squares estimation in terms of the mean squared error.
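The forward model linking an Euler vector ω to a site velocity is v = ω × r; given several site positions and velocities, ω follows from linear least squares. The sketch below uses these standard definitions and ignores the regularization the paper adds.

```python
import numpy as np

def skew(r):
    """Cross-product matrix [r]_x with [r]_x @ w = r x w."""
    return np.array([[0.0, -r[2], r[1]],
                     [r[2], 0.0, -r[0]],
                     [-r[1], r[0], 0.0]])

def estimate_euler_vector(sites, velocities):
    """Least-squares Euler vector from site positions (ECEF, m) and
    velocities (m/yr), using v = omega x r = -[r]_x omega."""
    A = np.vstack([-skew(r) for r in sites])
    b = np.concatenate(velocities)
    omega, *_ = np.linalg.lstsq(A, b, rcond=None)
    return omega                                   # rad/yr

# Toy check: recover a known rotation from three stations
true_omega = np.array([1e-9, -2e-9, 3e-9])         # rad/yr
R = 6.371e6
sites = [R * np.array(u) / np.linalg.norm(u)
         for u in ([1, 0.2, 0.1], [0.1, 1, 0.3], [0.2, 0.4, 1])]
vels = [np.cross(true_omega, r) for r in sites]
print(estimate_euler_vector(sites, vels))          # ~ true_omega
```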
NASA Astrophysics Data System (ADS)
Zounemat-Kermani, Mohammad
2012-08-01
In this study, the ability of two models, multiple linear regression (MLR) and a Levenberg-Marquardt (LM) feed-forward neural network, was examined to estimate the hourly dew point temperature. Dew point temperature is the temperature at which water vapor in the air condenses into liquid. This temperature can be useful in estimating meteorological variables such as fog, rain, snow, dew, and evapotranspiration and in investigating agronomical issues such as stomatal closure in plants. The availability of hourly records of climatic data (air temperature, relative humidity, and pressure) which could be used to predict dew point temperature initiated the practice of modeling. Additionally, the wind vector (wind speed magnitude and direction) and a conceptual input of weather condition were employed as other input variables. Three quantitative standard statistical performance evaluation measures, i.e., the root mean squared error, mean absolute error, and absolute logarithmic Nash-Sutcliffe efficiency coefficient (|Log(NS)|), were employed to evaluate the performances of the developed models. The results showed that applying the wind vector and weather condition as input vectors along with meteorological variables could slightly increase the ANN and MLR predictive accuracy. The results also revealed that LM-NN was superior to the MLR model, and the best performance was obtained by considering all potential input variables in terms of different evaluation criteria.
A component compensation method for magnetic interferential field
NASA Astrophysics Data System (ADS)
Zhang, Qi; Wan, Chengbiao; Pan, Mengchun; Liu, Zhongyan; Sun, Xiaoyong
2017-04-01
A new component searching with scalar restriction method (CSSRM) is proposed for magnetometers to compensate the magnetic interferential field caused by the ferromagnetic material of a platform and improve measurement performance. In CSSRM, the objective function for parameter estimation is to minimize the magnetic field (components and magnitude) difference between its measurement value and reference value. Two scalar compensation methods are compared with CSSRM, and the simulation results indicate that CSSRM can estimate all interferential parameters and the external magnetic field vector with high accuracy. The magnetic field magnitude and components, compensated with CSSRM, coincide with the true values very well. An experiment is carried out for a tri-axial fluxgate magnetometer, mounted in a measurement system together with inertial sensors. After compensation, the error standard deviations of both the magnetic field components and the magnitude are reduced from more than a thousand nT to less than 20 nT. This suggests that CSSRM provides an effective way to improve the performance of magnetic interferential field compensation.
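A heavily simplified sketch of the component-plus-magnitude fitting idea, using scipy least squares with a generic linear (soft-iron/hard-iron style) error model; the actual CSSRM parameterization and search strategy are richer than this.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(params, B_meas, B_ref):
    """Misfit in components and magnitude between corrected measurements
    A @ B_meas + c and the reference field (A: 3x3 matrix, c: offset)."""
    A = params[:9].reshape(3, 3)
    c = params[9:]
    corrected = B_meas @ A.T + c
    comp = (corrected - B_ref).ravel()
    mag = np.linalg.norm(corrected, axis=1) - np.linalg.norm(B_ref, axis=1)
    return np.concatenate([comp, mag])

# Synthetic data: true field distorted by a ferromagnetic platform
rng = np.random.default_rng(7)
B_ref = rng.normal(0, 30_000, (500, 3))                      # nT, reference vectors
D = np.eye(3) + 0.05 * rng.normal(size=(3, 3))               # soft-iron-like distortion
B_meas = B_ref @ D.T + np.array([200.0, -150.0, 80.0])       # plus hard-iron offset

x0 = np.concatenate([np.eye(3).ravel(), np.zeros(3)])
fit = least_squares(residuals, x0, args=(B_meas, B_ref))
corrected = B_meas @ fit.x[:9].reshape(3, 3).T + fit.x[9:]
print(f"residual std after compensation: {np.std(corrected - B_ref):.2f} nT")
```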
The Alignment of the Mean Wind and Stress Vectors in the Unstable Surface Layer
NASA Astrophysics Data System (ADS)
Bernardes, M.; Dias, N. L.
2010-01-01
A significant non-alignment between the mean horizontal wind vector and the stress vector was observed for turbulence measurements both above the water surface of a large lake and over a land surface (soybean crop). Possible causes for this discrepancy, such as flow distortion, averaging times and the procedure used for extracting the turbulent fluctuations (low-pass filtering, filter widths, etc.), were dismissed after a detailed analysis. Minimum averaging times, always less than 30 min, were established by calculating ogives, and error bounds for the turbulent stresses were derived with three different approaches, based on integral time scales (first-crossing and lag-window estimates) and on a bootstrap technique. It was found that the mean absolute value of the angle between the mean wind and stress vectors is strongly related to atmospheric stability, with the non-alignment increasing distinctly with increasing instability. Given a coordinate rotation that aligns the mean wind with the x direction, this behaviour can be explained by the growth of the relative error of the u-w component with instability. As a result, under more unstable conditions the u-w and the v-w components become of the same order of magnitude, and the local stress vector gives the impression of being non-aligned with the mean wind vector. The relative error of the v-w component is large enough to make it indistinguishable from zero throughout the range of stabilities. Therefore, the standard assumptions of Monin-Obukhov similarity theory hold: it is fair to assume that the v-w stress component is actually zero, and that the non-alignment is a purely statistical effect. An analysis of the dimensionless budgets of the u-w and the v-w components confirms this interpretation, with both shear and buoyant production of u-w decreasing with increasing instability. In the v-w budget, shear production is zero by definition, while buoyancy displays very low-intensity fluctuations around zero. As local free convection is approached, the turbulence becomes effectively axisymmetric, and a practical limit seems to exist beyond which it is not possible to measure the u-w component accurately.
Impact of Orbit Position Errors on Future Satellite Gravity Models
NASA Astrophysics Data System (ADS)
Encarnacao, J.; Ditmar, P.; Klees, R.
2015-12-01
We present the results of a study of the impact of orbit positioning noise (OPN), caused by incomplete knowledge of the Earth's gravity field, on gravity models estimated from satellite gravity data. The OPN is simulated as the difference between two sets of orbits integrated on the basis of different static gravity field models. The OPN is propagated into low-low satellite-to-satellite tracking (ll-SST) data, here computed as averaged inter-satellite accelerations projected onto the Line of Sight (LoS) vector between the two satellites. We consider the cartwheel formation (CF), pendulum formation (PF), and trailing formation (TF), as they produce different dominant orientations of the LoS vector. Given the polar orbits of the formations, the LoS vector is mainly aligned with the North-South direction in the TF, with the East-West direction in the PF (i.e., no along-track offset), and contains a radial component in the CF. An analytical analysis predicts that the CF suffers from a very high sensitivity to the OPN. This is a fundamental characteristic of this formation, which results from the amplification of this noise by diagonal components of the gravity gradient tensor (defined in the local frame) during the propagation into satellite gravity data. In contrast, the OPN in the data from the PF and TF is only scaled by off-diagonal gravity gradient components, which are much smaller than the diagonal tensor components. A numerical analysis shows that the effect of the OPN is similar in the data collected by the TF and the PF. The amplification of the OPN errors for the CF leads to errors in the gravity model that are three orders of magnitude larger than those for the PF. This means that any implementation of the CF will most likely produce data with relatively low quality, since this error dominates the error budget, especially at low frequencies. This is particularly critical for future gravimetric missions that will be equipped with highly accurate ranging sensors.
Student difficulties regarding symbolic and graphical representations of vector fields
NASA Astrophysics Data System (ADS)
Bollen, Laurens; van Kampen, Paul; Baily, Charles; Kelly, Mossy; De Cock, Mieke
2017-12-01
The ability to switch between various representations is an invaluable problem-solving skill in physics. In addition, research has shown that using multiple representations can greatly enhance a person's understanding of mathematical and physical concepts. This paper describes a study of student difficulties regarding interpreting, constructing, and switching between representations of vector fields, using both qualitative and quantitative methods. We first identified to what extent students are fluent with the use of field vector plots, field line diagrams, and symbolic expressions of vector fields by conducting individual student interviews and analyzing in-class student activities. Based on those findings, we designed the Vector Field Representations test, a free response assessment tool that has been given to 196 second- and third-year physics, mathematics, and engineering students from four different universities. From the obtained results we gained a comprehensive overview of typical errors that students make when switching between vector field representations. In addition, the study allowed us to determine the relative prevalence of the observed difficulties. Although the results varied greatly between institutions, a general trend revealed that many students struggle with vector addition, fail to recognize the field line density as an indication of the magnitude of the field, confuse characteristics of field lines and equipotential lines, and do not choose the appropriate coordinate system when writing out mathematical expressions of vector fields.
NASA Astrophysics Data System (ADS)
Ochoa Gutierrez, L. H.; Niño Vasquez, L. F.; Vargas-Jimenez, C. A.
2012-12-01
To minimize the adverse effects of high-magnitude earthquakes, early warning has become a powerful tool for anticipating the arrival of a seismic wave at a specific location, giving people and government agencies timely information to initiate a fast response. To do this, a very fast and accurate characterization of the event must be performed, but this process is usually carried out using seismograms recorded by at least four stations, where the processing time is often greater than the wave travel time to the area of interest, mainly in coarse networks. A faster process is possible if only one three-component seismic station is used, namely the closest unsaturated station with respect to the epicenter. Here we present a Support Vector Regression (SVR) algorithm that calculates magnitude and epicentral distance using only 5 seconds of signal from the P-wave onset. The algorithm was trained with 36 records of historical earthquakes; the inputs were the regression parameters of an exponential function, estimated by least squares from the waveform envelope, and the maximum value of the observed waveform for each component of the single station. Ten-fold cross-validation was applied with a normalized polynomial kernel, obtaining the mean absolute error for different exponents and complexity parameters. Magnitude could be estimated with a mean absolute error of 0.16, and epicentral distance with an error of 7.5 km for distances between 60 and 120 km. This kind of algorithm is easy to implement in hardware and can be run directly at the field station, making it possible to broadcast these estimates and support fast decisions at seismological control centers, increasing the possibility of an effective reaction.
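The following sketch (Python, SciPy/scikit-learn) outlines a feature extraction and regression pipeline of the kind described above. The specific envelope model a·t·exp(-b·t), the 100 Hz sampling rate, and the SVR hyperparameters are assumptions for illustration; the paper's normalized polynomial kernel is approximated here by a standardized polynomial-kernel SVR.

```python
import numpy as np
from scipy.optimize import curve_fit
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

def envelope(t, a, b):
    return a * t * np.exp(-b * t)              # assumed envelope shape

def station_features(window, fs=100.0):
    """5-s, 3-component window (3, n) -> [a, b, max amplitude] per component."""
    t = np.arange(window.shape[1]) / fs
    feats = []
    for comp in window:                        # rows: vertical / NS / EW
        (a, b), _ = curve_fit(envelope, t, np.abs(comp),
                              p0=(1.0, 1.0), maxfev=5000)
        feats += [a, b, np.abs(comp).max()]
    return feats

# X: (n_events, 9) features from historical records; y: magnitudes or distances
# model = make_pipeline(StandardScaler(), SVR(kernel="poly", degree=3))
# mae = -cross_val_score(model, X, y, cv=10,
#                        scoring="neg_mean_absolute_error").mean()
```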
Optimal four-impulse rendezvous between coplanar elliptical orbits
NASA Astrophysics Data System (ADS)
Wang, JianXia; Baoyin, HeXi; Li, JunFeng; Sun, FuChun
2011-04-01
Rendezvous in circular or near-circular orbits has been investigated in great detail, while rendezvous in elliptical orbits of arbitrary eccentricity has not been sufficiently explored. Among the various optimization methods proposed for fuel-optimal orbital rendezvous, Lawden's primer vector theory is favored by many researchers for its clear physical concept and simplicity of solution. Prussing applied primer vector optimization theory to minimum-fuel, multiple-impulse, time-fixed orbital rendezvous in a near-circular orbit with great success. Extending Prussing's work, this paper employs primer vector theory to study trajectory optimization problems for elliptical-orbit rendezvous of arbitrary eccentricity. Based on the linearized equations of relative motion on an elliptical reference orbit (the T-H equations), primer vector theory is used to treat time-fixed multiple-impulse optimal rendezvous between two coplanar, coaxial elliptical orbits of arbitrarily large eccentricity. A parameter adjustment method is developed for the primer vector to satisfy Lawden's necessary conditions for the optimal solution. Finally, the optimal multiple-impulse rendezvous solution, including the times, directions and magnitudes of the impulses, is obtained by solving the two-point boundary value problem. The rendezvous error of the linearized equations is also analyzed. The simulation results confirm the analysis: the rendezvous error is small for small eccentricities and grows for higher eccentricities. For better rendezvous accuracy in high-eccentricity orbits, a multiplier penalty function method combined with the simplex search method is used for local optimization. The simplex search method is sensitive to the initial values of the optimization variables, but the simulations show that, when initialized with the primer vector solution, the local optimization algorithm improves the rendezvous accuracy effectively and converges quickly, because the optimal results obtained from primer vector theory are already very close to the actual optimal solution. If the initial values are taken randomly, it is difficult to converge to the optimal solution.
Wheel speed management control system for spacecraft
NASA Technical Reports Server (NTRS)
Goodzeit, Neil E. (Inventor); Linder, David M. (Inventor)
1991-01-01
A spacecraft attitude control system uses at least four reaction wheels. In order to minimize reaction wheel speed and therefore power, a wheel speed management system is provided. The management system monitors the wheel speeds and generates a wheel speed error vector. The error vector is integrated, and the error vector and its integral are combined to form a correction vector. The correction vector is summed with the attitude control torque command signals for driving the reaction wheels.
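A minimal sketch of the management loop as described (Python/NumPy); the gains, their signs, and the per-wheel speed targets are hypothetical, and the patent's combination rule is reduced here to a proportional-integral correction summed with the attitude torque commands.

```python
import numpy as np

def wheel_speed_management(w, w_target, integ, torque_cmd, kp, ki, dt):
    """One control cycle: monitor speeds, form the error vector, integrate it,
    and sum the correction vector with the attitude control torque commands.

    w, w_target, integ, torque_cmd: arrays with one entry per wheel (4+ wheels).
    kp, ki: hypothetical gains (sign conventions depend on wheel polarity).
    """
    err = w - w_target                 # wheel speed error vector
    integ = integ + err * dt           # integral of the error vector
    corr = kp * err + ki * integ       # correction vector
    return torque_cmd + corr, integ    # summed with attitude torque commands
```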
Electron Beam Propagation Through a Magnetic Wiggler with Random Field Errors
1989-08-21
Another quantity of interest is the vector potential error δA_w(z) associated with the field error δB_w(z). Defining the normalized vector potential errors δa ∝ e δA_w, the correlation of the normalized vector potential errors, ⟨δa_x(z₁) δa_x(z₂)⟩, follows as a double integral over the field-error correlation ⟨δB_x(z′) δB_x(z″)⟩; a similar expression holds for the y-component of the normalized vector potential errors. Throughout the following, terms of order O(z₁/z) are neglected.
Yang, Fan; Zeng, Xiaoping; Mao, Haiwei; Jian, Xin; Tan, Xiaoheng; Du, Derong
2018-01-15
The high demand for multimedia applications in environmental monitoring, intrusion detection, and disaster aid has led to the rise of the wireless sensor network (WSN). With the increasing reliability and diversity of information streams, higher requirements on throughput and quality of service (QoS) have been put forward for data transmission between sensor nodes. However, low spectral efficiency becomes a bottleneck in non-line-of-sight (NLOS) transmission in WSNs. This paper proposes a novel nondata-aided error vector magnitude based adaptive modulation (NDA-EVM-AM) to solve this problem. NDA-EVM is considered as a new metric to evaluate the quality of the NLOS link for adaptive modulation in WSNs. By modeling the NLOS scenario as an η-μ fading channel, a closed-form expression for the NDA-EVM of multilevel quadrature amplitude modulation (MQAM) signals over the η-μ fading channel is derived, and the relationship between SER and NDA-EVM is also formulated. Based on these results, an NDA-EVM state machine is designed for the adaptation strategy. The algorithmic complexity of NDA-EVM-AM is analyzed, and the outage capacity of NDA-EVM-AM in an NLOS scenario is also given. The performance of NDA-EVM-AM is evaluated by simulation, and the results show that NDA-EVM-AM is an effective technique for the NLOS scenarios of WSNs: it accurately reflects the channel variations and efficiently adjusts the modulation order to better match the channel conditions, hence obtaining better average spectral efficiency.
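Since EVM is the central metric here, a small decision-directed (nondata-aided) EVM sketch for square MQAM follows (Python/NumPy). Normalizing by the constellation's average power is one common convention; the paper's closed-form NDA-EVM over the η-μ channel is not reproduced here.

```python
import numpy as np

def qam_constellation(M):
    """Square MQAM constellation normalized to unit average power."""
    m = int(np.sqrt(M))
    re, im = np.meshgrid(np.arange(m), np.arange(m))
    c = ((2 * re - m + 1) + 1j * (2 * im - m + 1)).reshape(-1)
    return c / np.sqrt(np.mean(np.abs(c) ** 2))

def nda_evm_percent(rx, M=16):
    """Decision-directed EVM: error taken against the nearest constellation point."""
    ref = qam_constellation(M)
    nearest = ref[np.argmin(np.abs(rx[:, None] - ref[None, :]), axis=1)]
    err = rx - nearest
    return 100.0 * np.sqrt(np.mean(np.abs(err) ** 2) / np.mean(np.abs(ref) ** 2))

rng = np.random.default_rng(0)
syms = qam_constellation(16)[rng.integers(0, 16, 1000)]
rx = syms + 0.05 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))
print(nda_evm_percent(rx, 16))   # ~7% for this noise level
```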
Testing stellar proper motions of TGAS stars using data from the HSOY, UCAC5 and PMA catalogues
NASA Astrophysics Data System (ADS)
Fedorov, P. N.; Akhmetov, V. S.; Velichko, A. B.
2018-05-01
We analyse the stellar proper motions from the Tycho-Gaia Astrometric Solution (TGAS) and those from the ground-based HSOY, UCAC5 and PMA catalogues derived by combining them with Gaia DR1 space data. Assuming that systematic differences in the stellar proper motions of two catalogues are caused by a mutual rigid-body rotation of the reference catalogue systems, we analyse the components of the rotation vector between the systems. We found that the ωy component of the rotation vector is ~1.5 mas yr-1 and depends non-linearly on stellar magnitude for the objects of 9.5-11.5 mag used in all three comparisons of the HSOY, UCAC5 and PMA catalogues with respect to TGAS. We found that the Tycho-2 stars in TGAS appear to have an inexplicable dependence of proper motion on stellar magnitude. We showed that the proper motions of the TGAS stars derived using AGIS differ from those obtained by the conventional (classical) method. Moreover, the application of both methods has not revealed such a difference between the proper motions of the Hipparcos and TGAS stars. An analysis of the systematic differences between the proper motions of the TGAS stars derived by the classical method and the proper motions of the HSOY, UCAC5 and PMA stars shows that the ωy component in this case does not depend on magnitude. This indicates unambiguously that there is a magnitude-dependent error in the proper motions of the Tycho-2 stars derived with the AGIS.
Error analysis of 3D-PTV through unsteady interfaces
NASA Astrophysics Data System (ADS)
Akutina, Yulia; Mydlarski, Laurent; Gaskin, Susan; Eiff, Olivier
2018-03-01
The feasibility of stereoscopic flow measurements through an unsteady optical interface is investigated. Position errors produced by a wavy optical surface are determined analytically, as are the optimal viewing angles of the cameras that minimize such errors. Two methods of measuring the resulting velocity errors are proposed. These methods are applied to 3D particle tracking velocimetry (3D-PTV) data obtained through the free surface of a water flow within a cavity adjacent to a shallow channel. The experiments were performed using two sets of conditions, one having no strong surface perturbations and the other exhibiting surface gravity waves. In the latter case, the amplitude of the gravity waves was 6% of the water depth, resulting in water surface inclinations of about 0.2°. (The water depth is used herein as the relevant length scale, because the measurements are performed over the entire water column. In a more general case, the relevant scale is the maximum distance from the interface to the measurement plane, H, which here is the same as the water depth.) It was found that the contribution of the waves to the overall measurement error is low. The absolute position errors of the system were moderate (1.2% of H). However, given that the velocity is calculated from the relative displacement of a particle between two frames, the errors in the measured water velocities were reasonably small, because the error in the velocity is the relative position error over the average displacement distance. The relative position error was measured to be 0.04% of H, resulting in small velocity errors of 0.3% of the free-stream velocity (equivalent to 1.1% of the average velocity in the domain). It is concluded that even though the absolute positions to which the velocity vectors are assigned are distorted by the unsteady interface, the magnitudes of the velocity vectors themselves remain accurate as long as the waves are slowly varying (have low curvature). The stronger the disturbances on the interface (higher amplitude, shorter wavelength), the smaller the distance from the interface at which measurements can be performed.
Baron, Charles A.; Awan, Musaddiq J.; Mohamed, Abdallah S. R.; Akel, Imad; Rosenthal, David I.; Gunn, G. Brandon; Garden, Adam S.; Dyer, Brandon A.; Court, Laurence; Sevak, Parag R; Kocak-Uzel, Esengul; Fuller, Clifton D.
2016-01-01
The larynx may alternatively serve as a target or an organ-at-risk (OAR) in head and neck cancer (HNC) image-guided radiotherapy (IGRT). The objective of this study was to estimate the IGRT parameters required for larynx positional error independent of isocentric alignment and to suggest population-based compensatory margins. Ten HNC patients receiving radiotherapy (RT) with daily CT-on-rails imaging were assessed. Seven landmark points were placed on each daily scan. Taking the most superior-anterior point of the C5 vertebra as a reference isocenter for each scan, residual displacement vectors to the other 6 points were calculated post-isocentric alignment. Subsequently, using the first scan as a reference, the magnitudes of the vector differences for all 6 points for all scans over the course of treatment were calculated. Residual systematic and random error, and the necessary compensatory CTV-to-PTV and OAR-to-PRV margins, were calculated using both observational cohort data and a bootstrap-resampled population estimator. The grand mean displacement for all anatomical points was 5.07 mm, with a mean systematic error of 1.1 mm and a mean random setup error of 2.63 mm, while the bootstrapped POI grand mean displacement was 5.09 mm, with a mean systematic error of 1.23 mm and a mean random setup error of 2.61 mm. The required margin for CTV-to-PTV expansion was 4.6 mm for all cohort points, while the bootstrap estimate of the equivalent margin was 4.9 mm. The calculated OAR-to-PRV expansion for the observed residual setup error was 2.7 mm, with a bootstrap-estimated expansion of 2.9 mm. We conclude that interfractional larynx setup error is a significant source of RT setup/delivery error in HNC, whether the larynx is considered a CTV or an OAR. We estimate the need for a uniform expansion of 5 mm to compensate for setup error if the larynx is a target, or 3 mm if the larynx is an OAR, when using a non-laryngeal bony isocenter. PMID:25679151
Baron, Charles A.; Awan, Musaddiq J.; Mohamed, Abdallah S.R.; Akel, Imad; Rosenthal, David I.; Gunn, G. Brandon; Garden, Adam S.; Dyer, Brandon A.; Court, Laurence; Sevak, Parag R.; Kocak‐Uzel, Esengul
2014-01-01
The larynx may alternatively serve as a target or an organ at risk (OAR) in head and neck cancer (HNC) image-guided radiotherapy (IGRT). The objective of this study was to estimate the IGRT parameters required for larynx positional error independent of isocentric alignment and to suggest population-based compensatory margins. Ten HNC patients receiving radiotherapy (RT) with daily CT on-rails imaging were assessed. Seven landmark points were placed on each daily scan. Taking the most superior-anterior point of the C5 vertebra as a reference isocenter for each scan, residual displacement vectors to the other six points were calculated postisocentric alignment. Subsequently, using the first scan as a reference, the magnitude of vector differences for all six points for all scans over the course of treatment was calculated. Residual systematic and random error and the necessary compensatory CTV-to-PTV and OAR-to-PRV margins were calculated, using both observational cohort data and a bootstrap-resampled population estimator. The grand mean displacement for all anatomical points was 5.07 mm, with mean systematic error of 1.1 mm and mean random setup error of 2.63 mm, while the bootstrapped POI grand mean displacement was 5.09 mm, with mean systematic error of 1.23 mm and mean random setup error of 2.61 mm. The required margin for CTV-to-PTV expansion was 4.6 mm for all cohort points, while the bootstrap estimator of the equivalent margin was 4.9 mm. The calculated OAR-to-PRV expansion for the observed residual setup error was 2.7 mm, with a bootstrap-estimated expansion of 2.9 mm. We conclude that the interfractional larynx setup error is a significant source of RT setup/delivery error in HNC, whether the larynx is considered a CTV or an OAR. We estimate the need for a uniform expansion of 5 mm to compensate for setup error if the larynx is a target, or 3 mm if the larynx is an OAR, when using a nonlaryngeal bony isocenter. PACS numbers: 87.55.D-, 87.55.Qr
Yock, Adam D.; Rao, Arvind; Dong, Lei; Beadle, Beth M.; Garden, Adam S.; Kudchadker, Rajat J.; Court, Laurence E.
2014-01-01
Purpose: To create models that forecast longitudinal trends in changing tumor morphology and to evaluate and compare their predictive potential throughout the course of radiation therapy. Methods: Two morphology feature vectors were used to describe 35 gross tumor volumes (GTVs) throughout the course of intensity-modulated radiation therapy for oropharyngeal tumors. The feature vectors comprised the coordinates of the GTV centroids and a description of GTV shape using either interlandmark distances or a spherical harmonic decomposition of these distances. The change in the morphology feature vector observed at 33 time points throughout the course of treatment was described using static, linear, and mean models. Models were adjusted at 0, 1, 2, 3, or 5 different time points (adjustment points) to improve prediction accuracy. The potential of these models to forecast GTV morphology was evaluated using leave-one-out cross-validation, and the accuracy of the models was compared using Wilcoxon signed-rank tests. Results: Adding a single adjustment point to the static model without any adjustment points decreased the median error in forecasting the position of GTV surface landmarks by the largest amount (1.2 mm). Additional adjustment points further decreased the forecast error by about 0.4 mm each. Selection of the linear model decreased the forecast error for both the distance-based and spherical harmonic morphology descriptors (0.2 mm), while the mean model decreased the forecast error for the distance-based descriptor only (0.2 mm). The magnitude and statistical significance of these improvements decreased with each additional adjustment point, and the effect from model selection was not as large as that from adding the initial points. Conclusions: The authors present models that anticipate longitudinal changes in tumor morphology using various models and model adjustment schemes. The accuracy of these models depended on their form, and the utility of these models includes the characterization of patient-specific response with implications for treatment management and research study design. PMID:25086518
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yock, Adam D.; Kudchadker, Rajat J.; Rao, Arvind
2014-08-15
Purpose: To create models that forecast longitudinal trends in changing tumor morphology and to evaluate and compare their predictive potential throughout the course of radiation therapy. Methods: Two morphology feature vectors were used to describe 35 gross tumor volumes (GTVs) throughout the course of intensity-modulated radiation therapy for oropharyngeal tumors. The feature vectors comprised the coordinates of the GTV centroids and a description of GTV shape using either interlandmark distances or a spherical harmonic decomposition of these distances. The change in the morphology feature vector observed at 33 time points throughout the course of treatment was described using static, linear, and mean models. Models were adjusted at 0, 1, 2, 3, or 5 different time points (adjustment points) to improve prediction accuracy. The potential of these models to forecast GTV morphology was evaluated using leave-one-out cross-validation, and the accuracy of the models was compared using Wilcoxon signed-rank tests. Results: Adding a single adjustment point to the static model without any adjustment points decreased the median error in forecasting the position of GTV surface landmarks by the largest amount (1.2 mm). Additional adjustment points further decreased the forecast error by about 0.4 mm each. Selection of the linear model decreased the forecast error for both the distance-based and spherical harmonic morphology descriptors (0.2 mm), while the mean model decreased the forecast error for the distance-based descriptor only (0.2 mm). The magnitude and statistical significance of these improvements decreased with each additional adjustment point, and the effect from model selection was not as large as that from adding the initial points. Conclusions: The authors present models that anticipate longitudinal changes in tumor morphology using various models and model adjustment schemes. The accuracy of these models depended on their form, and the utility of these models includes the characterization of patient-specific response with implications for treatment management and research study design.
A study of attitude control concepts for precision-pointing non-rigid spacecraft
NASA Technical Reports Server (NTRS)
Likins, P. W.
1975-01-01
Attitude control concepts for use onboard structurally nonrigid spacecraft that must be pointed with great precision are examined. The task of determining the eigenproperties of a system of linear time-invariant equations (in terms of hybrid coordinates) representing the attitude motion of a flexible spacecraft is discussed. Literal characteristics are developed for the associated eigenvalues and eigenvectors of the system. A method is presented for determining the poles and zeros of the transfer function describing the attitude dynamics of a flexible spacecraft characterized by hybrid coordinate equations. Alterations are made to linear regulator and observer theory to accommodate modeling errors. The results show that a model error vector, which evolves from an error system, can be added to a reduced system model, estimated by an observer, and used by the control law to render the system less sensitive to uncertain magnitudes and phase relations of truncated modes and external disturbance effects. A hybrid coordinate formulation using assumed mode shapes, rather than the usual finite element approach, is also provided.
Chan, Tommy C Y; Cheng, George P M; Wang, Zheng; Tham, Clement C Y; Woo, Victor C P; Jhanji, Vishal
2015-08-01
To evaluate the outcomes of femtosecond-assisted arcuate keratotomy combined with cataract surgery in eyes with low to moderate corneal astigmatism. Retrospective, interventional case series. This study included patients who underwent combined femtosecond-assisted phacoemulsification and arcuate keratotomy between March 2013 and August 2013. Keratometric astigmatism was evaluated before and 2 months after the surgery. Vector analysis of the astigmatic changes was performed using the Alpins method. Overall, 54 eyes of 54 patients (18 male and 36 female; mean age, 68.8 ± 11.4 years) were included. The mean preoperative astigmatism (the target-induced astigmatism) was 1.33 ± 0.57 diopters (D) and the mean postoperative astigmatism was 0.87 ± 0.56 D (P < .001). The magnitude of error (the difference between surgically induced and target-induced astigmatism; -0.13 ± 0.68 D), as well as the correction index (the ratio of surgically induced to target-induced astigmatism; 0.86 ± 0.52), demonstrated slight undercorrection. The angle of error was very close to 0, indicating no significant systematic error of misaligned treatment. However, the absolute angle of error showed a less favorable range (17.5 ± 19.2 degrees), suggesting variable factors such as healing or alignment at an individual level. There were no intraoperative or postoperative complications. Combined phacoemulsification with arcuate keratotomy using a femtosecond laser appears to be a relatively easy and safe means of managing low to moderate corneal astigmatism in cataract surgery candidates. Misalignment at an individual level can reduce its effectiveness. This issue remains to be elucidated in future studies. Copyright © 2015 Elsevier Inc. All rights reserved.
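A sketch of the Alpins quantities referenced above (Python/NumPy), treating each astigmatism as a double-angle vector. This simplified formulation of the difference vector and angle of error is an assumption, as the full Alpins method involves additional conventions not given in the abstract.

```python
import numpy as np

def astig_vec(mag, axis_deg):
    """Astigmatism (magnitude, axis) as a double-angle Cartesian vector."""
    a = np.deg2rad(2.0 * axis_deg)
    return np.array([mag * np.cos(a), mag * np.sin(a)])

def alpins(tia_mag, tia_axis, sia_mag, sia_axis):
    tia = astig_vec(tia_mag, tia_axis)
    sia = astig_vec(sia_mag, sia_axis)
    me = sia_mag - tia_mag                         # magnitude of error
    ci = sia_mag / tia_mag                         # correction index
    aoe = np.rad2deg(np.arctan2(sia[1], sia[0])
                     - np.arctan2(tia[1], tia[0])) / 2.0
    aoe = (aoe + 90.0) % 180.0 - 90.0              # wrap angle of error to [-90, 90)
    dv = np.linalg.norm(tia - sia)                 # difference vector magnitude
    return me, ci, aoe, dv

print(alpins(1.33, 0.0, 1.20, 5.0))   # slight undercorrection, small axis error
```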
Chebabhi, Ali; Fellah, Mohammed Karim; Kessal, Abdelhalim; Benkhoris, Mohamed F
2016-07-01
In this paper, a new balancing three-level three-dimensional space vector modulation (B3L-3DSVM) strategy is proposed which uses redundant voltage vectors to realize precise, high-performance control of a three-phase three-level four-leg neutral point clamped (NPC) inverter based Shunt Active Power Filter (SAPF). The SAPF eliminates the source current harmonics, reduces the magnitude of the neutral wire current (eliminating the zero-sequence current produced by single-phase nonlinear loads), and compensates the reactive power in three-phase four-wire electrical networks. The strategy simultaneously generates the gate switching pulses, balances the dc bus capacitor voltages (keeping the voltages of the two dc bus capacitors equal), and reduces and fixes the switching frequency of the inverter switches. Nonlinear Back-Stepping Controllers (NBSC) are used to regulate the dc bus capacitor voltages and the SAPF injected currents, providing robustness, stabilizing the system, improving the response, and eliminating the overshoot and undershoot of a traditional PI (Proportional-Integral) controller. The conventional three-level three-dimensional space vector modulation (C3L-3DSVM) and B3L-3DSVM are computed and compared in terms of the error between the two dc bus capacitor voltages, the SAPF output voltages, the total harmonic distortion (THDv, THDi) of the source currents, the magnitude of the source neutral wire current, and the reactive power compensation under unbalanced single-phase nonlinear loads. The success, robustness, and effectiveness of the proposed control strategies are demonstrated through simulation using Sim Power Systems and S-Function of MATLAB/SIMULINK. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Numerical relativity waveform surrogate model for generically precessing binary black hole mergers
NASA Astrophysics Data System (ADS)
Blackman, Jonathan; Field, Scott E.; Scheel, Mark A.; Galley, Chad R.; Ott, Christian D.; Boyle, Michael; Kidder, Lawrence E.; Pfeiffer, Harald P.; Szilágyi, Béla
2017-07-01
A generic, noneccentric binary black hole (BBH) system emits gravitational waves (GWs) that are completely described by seven intrinsic parameters: the black hole spin vectors and the ratio of their masses. Simulating a BBH coalescence by solving Einstein's equations numerically is computationally expensive, requiring days to months of computing resources for a single set of parameter values. Since theoretical predictions of the GWs are often needed for many different source parameters, a fast and accurate model is essential. We present the first surrogate model for GWs from the coalescence of BBHs including all seven dimensions of the intrinsic noneccentric parameter space. The surrogate model, which we call NRSur7dq2, is built from the results of 744 numerical relativity simulations. NRSur7dq2 covers spin magnitudes up to 0.8 and mass ratios up to 2, includes all ℓ ≤ 4 modes, begins about 20 orbits before merger, and can be evaluated in ~50 ms. We find the largest NRSur7dq2 errors to be comparable to the largest errors in the numerical relativity simulations, and more than an order of magnitude smaller than the errors of other waveform models. Our model, and more broadly the methods developed here, will enable studies that were not previously possible when using highly accurate waveforms, such as parameter inference and tests of general relativity with GW observations.
Quartz crystal resonator g sensitivity measurement methods and recent results
NASA Astrophysics Data System (ADS)
Driscoll, M. M.
1990-09-01
A technique for accurate measurements of quartz crystal resonator vibration sensitivity is described. The technique utilizes a crystal oscillator circuit in which a prescribed length of coaxial cable is used to connect the resonator to the oscillator sustaining stage. A method is provided for determination and removal of measurement errors normally introduced as a result of cable vibration. In addition to oscillator-type measurements, it is also possible to perform similar vibration sensitivity measurements using a synthesized signal generator with the resonator installed in a passive phase bridge. Test results are reported for 40 and 50 MHz, fifth overtone AT-cut, and third overtone SC-cut crystals. Acceleration sensitivity (gamma vector) values for the SC-cut resonators were typically four times smaller (5 × 10^-10/g) than for the AT-cut units. However, smaller unit-to-unit gamma vector magnitude variation was exhibited by the AT-cut resonators.
A new technique for the measurement of surface shear stress vectors using liquid crystal coatings
NASA Technical Reports Server (NTRS)
Reda, Daniel C.; Muratore, J. J., Jr.
1994-01-01
Research has recently shown that liquid crystal coating (LCC) color-change response to shear depends on both shear stress magnitude and direction. Additional research was thus conducted to extend the LCC method from a flow-visualization tool to a surface shear stress vector measurement technique. A shear-sensitive LCC was applied to a planar test surface and illuminated by white light from the normal direction. A fiber optic probe was used to capture light scattered by the LCC from a point on the centerline of a turbulent, tangential-jet flow. Both the relative shear stress magnitude and the relative in-plane view angle between the sensor and the centerline shear vector were systematically varied. A spectrophotometer was used to obtain scattered-light spectra, which were used to quantify the LCC color (dominant wavelength) as a function of shear stress magnitude and direction. At any fixed shear stress magnitude, the minimum dominant wavelength was measured when the shear vector was aligned with and directed away from the observer; changes in the relative in-plane view angle to either side of this vector/observer-aligned position resulted in symmetric Gaussian increases in measured dominant wavelength. Based on these results, a vector measurement methodology, involving multiple oblique-view observations of the test surface, was formulated. Under the present test conditions, the measurement resolution of this technique was found to be +/- 1 deg for vector orientations and +/- 5% for vector magnitudes. An approach to extend the present methodology to full-surface applications is proposed.
Fast temporal neural learning using teacher forcing
NASA Technical Reports Server (NTRS)
Toomarian, Nikzad (Inventor); Bahren, Jacob (Inventor)
1992-01-01
A neural network is trained to output a time dependent target vector defined over a predetermined time interval in response to a time dependent input vector defined over the same time interval by applying corresponding elements of the error vector, or difference between the target vector and the actual neuron output vector, to the inputs of corresponding output neurons of the network as corrective feedback. This feedback decreases the error and quickens the learning process, so that a much smaller number of training cycles is required to complete the learning process. A conventional gradient descent algorithm is employed to update the neural network parameters at the end of the predetermined time interval. The foregoing process is repeated in repetitive cycles until the actual output vector corresponds to the target vector. In the preferred embodiment, as the overall error of the neural network output decreases during successive training cycles, the portion of the error fed back to the output neurons is decreased accordingly, allowing the network to learn with greater freedom from teacher forcing as the network parameters converge to their optimum values. The invention may also be used to train a neural network with stationary training and target vectors.
Fast temporal neural learning using teacher forcing
NASA Technical Reports Server (NTRS)
Toomarian, Nikzad (Inventor); Bahren, Jacob (Inventor)
1995-01-01
A neural network is trained to output a time dependent target vector defined over a predetermined time interval in response to a time dependent input vector defined over the same time interval by applying corresponding elements of the error vector, or difference between the target vector and the actual neuron output vector, to the inputs of corresponding output neurons of the network as corrective feedback. This feedback decreases the error and quickens the learning process, so that a much smaller number of training cycles is required to complete the learning process. A conventional gradient descent algorithm is employed to update the neural network parameters at the end of the predetermined time interval. The foregoing process is repeated in repetitive cycles until the actual output vector corresponds to the target vector. In the preferred embodiment, as the overall error of the neural network output decreases during successive training cycles, the portion of the error fed back to the output neurons is decreased accordingly, allowing the network to learn with greater freedom from teacher forcing as the network parameters converge to their optimum values. The invention may also be used to train a neural network with stationary training and target vectors.
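A toy sketch of the teacher-forcing idea in the two patents above (Python/NumPy). The network is reduced to a single tanh layer for brevity, and the feedback gain `lam` and its decay schedule are hypothetical; only the essential elements — the error vector fed back to the output neurons, gradient descent at the end of the interval, and feedback that relaxes as training proceeds — follow the description.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_in, n_out = 50, 3, 2
x = rng.normal(size=(T, n_in))                       # time-dependent input vector
target = np.sin(np.linspace(0, 4 * np.pi, T))[:, None] * np.ones((1, n_out))

W = rng.normal(scale=0.1, size=(n_in, n_out))
lr, lam = 0.05, 1.0                                  # lam: teacher-forcing gain

for cycle in range(500):
    y = np.tanh(x @ W)                               # actual output vector
    err = target - y                                 # error vector
    # corrective feedback: error added to the output neurons' inputs
    y_forced = np.tanh(x @ W + lam * err)
    grad = -x.T @ ((target - y_forced) * (1.0 - y_forced ** 2))
    W -= lr * grad                                   # update at end of interval
    lam *= 0.995                                     # relax forcing over cycles
```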
Chan, Tommy C Y; Wang, Yan; Ng, Alex L K; Zhang, Jiamei; Yu, Marco C Y; Jhanji, Vishal; Cheng, George P M
2018-06-13
To compare the astigmatic correction in high myopic astigmatism between small-incision lenticule extraction and laser in situ keratomileusis (LASIK) using vector analysis. Hong Kong Laser Eye Center, Hong Kong. Retrospective case series. Patients who had correction of myopic astigmatism of 3.0 diopters (D) or more and had either small-incision lenticule extraction or femtosecond laser-assisted LASIK were included. Only the left eye was included for analysis. Visual and refractive results were presented and compared between groups. The study comprised 105 patients (40 eyes in the small-incision lenticule extraction group and 65 eyes in the femtosecond laser-assisted LASIK group). The mean preoperative manifest cylinder was -3.42 ± 0.55 (SD) D in the small-incision lenticule extraction group and -3.47 ± 0.49 D in the LASIK group (P = .655). At 3 months, there was no significant between-group difference in uncorrected distance visual acuity (P = .915) or manifest spherical equivalent (P = .145). Ninety percent and 95.4% of eyes were within ±0.5 D of the attempted cylindrical correction in the small-incision lenticule extraction and LASIK groups, respectively (P = .423). Vector analysis showed comparable target-induced astigmatism (P = .709), surgically induced astigmatism vector (P = .449), difference vector (P = .335), and magnitude of error (P = .413) between groups. The absolute angle of error was 1.88 ± 2.25 degrees in the small-incision lenticule extraction group and 1.37 ± 1.58 degrees in the LASIK group (P = .217). Small-incision lenticule extraction offered astigmatic correction comparable to LASIK in eyes with high myopic astigmatism. Copyright © 2018 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Akhmetov, Volodymyr S.; Fedorov, Peter N.; Velichko, Anna B.
2018-04-01
We combined data from the Gaia DR1 and Two-Micron All Sky Survey (2MASS) catalogues in order to derive the absolute proper motions of more than 420 million stars distributed over the whole sky in the stellar magnitude range 8 mag < G < 21 mag (Gaia magnitude). To eliminate the systematic zonal errors in the positions of 2MASS catalogue objects, a 2-dimensional median filter was used. The PMA system of proper motions has been obtained by a direct link to 1.6 million extragalactic sources. A short analysis of the absolute proper motions of the PMA catalogue stars is presented in this work. From a comparison of these data with the same stars from the TGAS, UCAC4 and PPMXL catalogues, the equatorial components of the mutual rotation vector of these coordinate systems are determined.
A Fast Vector Radiative Transfer Model for Atmospheric and Oceanic Remote Sensing
NASA Astrophysics Data System (ADS)
Ding, J.; Yang, P.; King, M. D.; Platnick, S. E.; Meyer, K.
2017-12-01
A fast vector radiative transfer model is developed in support of atmospheric and oceanic remote sensing. This model is capable of simulating the Stokes vector observed at the top of the atmosphere (TOA) and at the terrestrial surface by considering absorption, scattering, and emission. The gas absorption is parameterized in terms of atmospheric gas concentrations, temperature, and pressure. The parameterization scheme combines a regression method and the correlated-K distribution method, and integrates easily with multiple scattering computations. The approach is more than four orders of magnitude faster than a line-by-line radiative transfer model, with errors of less than 0.5% in terms of transmissivity. A two-component approach is utilized to solve the vector radiative transfer equation (VRTE). The VRTE solver separates the phase matrices of aerosol and cloud into forward and diffuse parts, and thus the solution is also separated. The forward solution can be expressed by a semi-analytical equation based on the small-angle approximation, and serves as the source of the diffuse part. The diffuse part is solved by the adding-doubling method. The adding-doubling implementation is computationally efficient because the diffuse component needs far fewer spherical function expansion terms. The simulated Stokes vectors at both the TOA and the surface have accuracy comparable to counterparts based on numerically rigorous methods.
Permutation modulation for quantization and information reconciliation in CV-QKD systems
NASA Astrophysics Data System (ADS)
Daneshgaran, Fred; Mondin, Marina; Olia, Khashayar
2017-08-01
This paper is focused on the problem of Information Reconciliation (IR) for continuous variable Quantum Key Distribution (QKD). The main problem is the quantization and assignment of labels to the samples of the Gaussian variables observed at Alice and Bob. The trouble is that most of the samples, given that the Gaussian variable is zero mean (which is de facto the case), tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses, corresponding to a lower effective Signal to Noise Ratio (SNR) and exacerbating the problem. Here we propose to use Permutation Modulation (PM) as a means of quantizing Gaussian vectors at Alice and Bob over a d-dimensional space with d ≫ 1. The goal is to achieve the coding efficiency necessary to extend the achievable range of continuous variable QKD by quantizing over larger and larger dimensions. A fractional bit rate per sample is easily achieved using PM at very reasonable computational cost. Ordered statistics are used extensively throughout the development, from the generation of the seed vector in PM to the analysis of error rates associated with the signs of the Gaussian samples at Alice and Bob as a function of the magnitude of the samples observed at Bob.
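A minimal sketch of PM quantization (Python/NumPy): with a fixed seed vector, the nearest (Euclidean) permutation codeword simply matches the order statistics of the observed Gaussian vector, which is what keeps the scheme cheap. The seed values below are placeholders; the paper derives the seed from ordered statistics.

```python
import numpy as np

def pm_quantize(x, seed):
    """Map x to the nearest permutation of `seed` (permutation modulation).

    By the rearrangement inequality, the closest codeword places the k-th
    largest seed entry at the position of the k-th largest entry of x.
    """
    order = np.argsort(x)[::-1]               # positions of x, largest first
    code = np.empty_like(x, dtype=float)
    code[order] = np.sort(seed)[::-1]
    return code

d = 8
x = np.random.default_rng(1).normal(size=d)   # Gaussian samples at Alice or Bob
seed = np.linspace(1.0, -1.0, d)              # placeholder seed vector
print(pm_quantize(x, seed))
# Rate: log2(d!) bits per d samples, i.e., a fractional bit rate per sample.
```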
Zhao, Henan; Bryant, Garnett W.; Griffin, Wesley; Terrill, Judith E.; Chen, Jian
2017-01-01
We designed and evaluated SplitVectors, a new vector field display approach to help scientists perform new discrimination tasks on large-magnitude-range scientific data shown in three-dimensional (3D) visualization environments. SplitVectors uses scientific notation to display vector magnitude, thus improving legibility. We present an empirical study comparing the SplitVectors approach with three other approaches - direct linear representation, logarithmic, and text display commonly used in scientific visualizations. Twenty participants performed three domain analysis tasks: reading numerical values (a discrimination task), finding the ratio between values (a discrimination task), and finding the larger of two vectors (a pattern detection task). Participants used both mono and stereo conditions. Our results suggest the following: (1) SplitVectors improve accuracy by about 10 times compared to linear mapping and by four times to logarithmic in discrimination tasks; (2) SplitVectors have no significant differences from the textual display approach, but reduce cluttering in the scene; (3) SplitVectors and textual display are less sensitive to data scale than linear and logarithmic approaches; (4) using logarithmic can be problematic as participants' confidence was as high as directly reading from the textual display, but their accuracy was poor; and (5) Stereoscopy improved performance, especially in more challenging discrimination tasks. PMID:28113469
Henan Zhao; Bryant, Garnett W; Griffin, Wesley; Terrill, Judith E; Jian Chen
2017-06-01
We designed and evaluated SplitVectors, a new vector field display approach to help scientists perform new discrimination tasks on large-magnitude-range scientific data shown in three-dimensional (3D) visualization environments. SplitVectors uses scientific notation to display vector magnitude, thus improving legibility. We present an empirical study comparing the SplitVectors approach with three other approaches - direct linear representation, logarithmic, and text display commonly used in scientific visualizations. Twenty participants performed three domain analysis tasks: reading numerical values (a discrimination task), finding the ratio between values (a discrimination task), and finding the larger of two vectors (a pattern detection task). Participants used both mono and stereo conditions. Our results suggest the following: (1) SplitVectors improve accuracy by about 10 times compared to linear mapping and by four times to logarithmic in discrimination tasks; (2) SplitVectors have no significant differences from the textual display approach, but reduce cluttering in the scene; (3) SplitVectors and textual display are less sensitive to data scale than linear and logarithmic approaches; (4) using logarithmic can be problematic as participants' confidence was as high as directly reading from the textual display, but their accuracy was poor; and (5) Stereoscopy improved performance, especially in more challenging discrimination tasks.
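The core encoding in SplitVectors can be stated in a few lines (Python/NumPy): a vector's magnitude is split into a mantissa and an integer exponent, which the visualization then maps to two glyph lengths. The split below is the standard scientific-notation one; the glyph mapping itself is omitted.

```python
import numpy as np

def split_magnitude(v):
    """Scientific-notation split of a vector magnitude: (mantissa, exponent)."""
    m = float(np.linalg.norm(v))
    e = int(np.floor(np.log10(m)))
    return m / 10.0 ** e, e          # mantissa in [1, 10), integer exponent

print(split_magnitude([3.0e4, 4.0e4]))   # -> (5.0, 4)
```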
A selective-update affine projection algorithm with selective input vectors
NASA Astrophysics Data System (ADS)
Kong, NamWoong; Shin, JaeWook; Park, PooGyeon
2011-10-01
This paper proposes an affine projection algorithm (APA) with selective input vectors, based on the concept of selective update, in order to reduce estimation errors and computation. The algorithm consists of two procedures: input-vector selection and state decision. The input-vector-selection procedure determines the number of input vectors by checking, via the mean square error (MSE), whether the input vectors carry enough information for an update. The state-decision procedure determines the current state of the adaptive filter by using a state-decision criterion. While the adaptive filter is in the transient state, the algorithm updates the filter coefficients with the selected input vectors. On the other hand, as soon as the adaptive filter reaches the steady state, the update procedure is not performed. Through these two procedures, the proposed algorithm achieves small steady-state estimation errors, low computational complexity and low update complexity for colored input signals.
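A hedged one-iteration sketch of the selective-update idea (Python/NumPy). The paper's MSE-based input-vector-selection and state-decision criteria are simplified here to a fixed a priori error threshold, which is an assumption:

```python
import numpy as np

def apa_selective(x_hist, d_hist, w, mu=0.5, delta=1e-3, thresh=1e-4):
    """One iteration of an APA with selected input vectors (sketch).

    x_hist: (K, L) most recent input vectors; d_hist: (K,) desired outputs.
    Only input vectors with a priori error above `thresh` are used; if none
    qualify (steady state), the update is skipped entirely.
    """
    e = d_hist - x_hist @ w                   # a priori error per candidate
    sel = np.abs(e) > thresh
    if not sel.any():
        return w                              # steady state: no update
    X, es = x_hist[sel], e[sel]
    G = X @ X.T + delta * np.eye(sel.sum())   # regularized Gram matrix
    return w + mu * X.T @ np.linalg.solve(G, es)
```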
Field-programmable gate array implementation of an all-digital IEEE 802.15.4-compliant transceiver
NASA Astrophysics Data System (ADS)
Cornetta, Gianluca; Touhafi, Abdellah; Santos, David J.; Vázquez, José M.
2010-12-01
An architecture for a low-cost, low-complexity digital transceiver is presented in this article. The proposed architecture targets the IEEE 802.15.4 standard for short-range wireless personal area networks and has been implemented as a synthesisable VHDL register transfer level description. The system has been evaluated and tested using a Xilinx 90 nm Virtex-4 field-programmable gate array as the target technology. Bit error rate (BER) and error vector magnitude (EVM) have been used as the figures of merit for modem performance. Simulations show that the recommended minimum BER is achieved at Eb/N0 = 8.7 dB, whereas the EVM is 19.5%. The implemented device occupies 10% of the target FPGA and has a normalised maximum power consumption of 44 mW in transmit mode and 53 mW in receive mode.
Fast higher-order MR image reconstruction using singular-vector separation.
Wilm, Bertram J; Barmet, Christoph; Pruessmann, Klaas P
2012-07-01
Magnetic resonance imaging (MRI) conventionally relies on spatially linear gradient fields for image encoding. However, in practice various sources of nonlinear fields can perturb the encoding process and give rise to artifacts unless they are suitably addressed at the reconstruction level. Accounting for field perturbations that are neither linear in space nor constant over time, i.e., dynamic higher-order fields, is particularly challenging. It was previously shown to be feasible with conjugate-gradient iteration. However, so far this approach has been relatively slow due to the need to carry out explicit matrix-vector multiplications in each cycle. In this work, it is proposed to accelerate higher-order reconstruction by expanding the encoding matrix such that the fast Fourier transform can be employed for more efficient matrix-vector computation. The underlying principle is to represent the perturbing terms as sums of separable functions of space and time. Compact representations with this property are found by singular-vector analysis of the perturbing matrix. Guidelines for balancing the accuracy and speed of the resulting algorithm are derived by error propagation analysis. The proposed technique is demonstrated for the case of higher-order field perturbations due to eddy currents caused by diffusion weighting. In this example, image reconstruction was accelerated by two orders of magnitude.
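A small demonstration of the singular-vector separation step (Python/NumPy): the perturbing phase term is expanded as a short sum of separable (space × time) functions via a truncated SVD, after which each term admits FFT-based evaluation. The perturbation model below (a decaying, spatially quadratic eddy-current phase) is invented for illustration.

```python
import numpy as np

nt, nx = 256, 256
t = np.linspace(0, 1, nt)[:, None]            # time samples (column)
x = np.linspace(-1, 1, nx)[None, :]           # spatial samples (row)

# hypothetical higher-order perturbation: decaying, quadratic in space
dphi = 0.5 * np.exp(-3 * t) * x ** 2
P = np.exp(1j * dphi)                         # perturbing factor of the encoding

U, s, Vh = np.linalg.svd(P, full_matrices=False)
K = 4                                         # number of separable terms kept
P_k = (U[:, :K] * s[:K]) @ Vh[:K]
print(np.linalg.norm(P - P_k) / np.linalg.norm(P))   # relative truncation error

# Each kept term makes the encoding separable:
# E ~ sum_k diag(U[:, k] * s[k]) @ F @ diag(Vh[k]),  F = Fourier matrix,
# so every term can be applied with an FFT instead of an explicit matrix product.
```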
Analysis of phase error effects in multishot diffusion-prepared turbo spin echo imaging
Cervantes, Barbara; Kooijman, Hendrik; Karampinos, Dimitrios C.
2017-01-01
Background To characterize the effect of phase errors on the magnitude and the phase of the diffusion-weighted (DW) signal acquired with diffusion-prepared turbo spin echo (dprep-TSE) sequences. Methods Motion and eddy currents were identified as the main sources of phase errors. An analytical expression for the effect of phase errors on the acquired signal was derived and verified using Bloch simulations, phantom, and in vivo experiments. Results Simulations and experiments showed that phase errors during the diffusion preparation cause both magnitude and phase modulation on the acquired data. When motion-induced phase error (MiPe) is accounted for (e.g., with motion-compensated diffusion encoding), the signal magnitude modulation due to the leftover eddy-current-induced phase error cannot be eliminated by the conventional phase cycling and sum-of-squares (SOS) method. By employing magnitude stabilizers, the phase-error-induced magnitude modulation, regardless of its cause, was removed but the phase modulation remained. The in vivo comparison between pulsed gradient and flow-compensated diffusion preparations showed that MiPe needed to be addressed in multi-shot dprep-TSE acquisitions employing magnitude stabilizers. Conclusions A comprehensive analysis of phase errors in dprep-TSE sequences showed that magnitude stabilizers are mandatory in removing the phase error induced magnitude modulation. Additionally, when multi-shot dprep-TSE is employed the inconsistent signal phase modulation across shots has to be resolved before shot-combination is performed. PMID:28516049
Parallel PWMs Based Fully Digital Transmitter with Wide Carrier Frequency Range
Zhou, Bo; Zhang, Kun; Zhou, Wenbiao; Zhang, Yanjun; Liu, Dake
2013-01-01
Carrier-frequency (CF) and intermediate-frequency (IF) pulse-width modulators (PWMs) based on delay lines are proposed, in which baseband signals are conveyed by both the positions and the pulse widths or densities of the carrier clock. By combining IF-PWM and precorrected CF-PWM, a fully digital transmitter with unit-delay autocalibration is implemented in 180 nm CMOS for high reconfigurability. The proposed architecture achieves a wide CF range of 2 MHz-1 GHz, a high power efficiency of 70%, and a low error vector magnitude (EVM) of 3%, with spectral purity improved by 20 dB in comparison to existing designs. PMID:24223503
A polarization-division multiplexing SSB-OFDM system with beat interference cancellation receivers
NASA Astrophysics Data System (ADS)
Yang, Peiling; Ma, Jianxin; Zhang, Junyi
2018-06-01
In this paper, we propose a polarization-division multiplexing (PDM) single-sideband optical orthogonal frequency division multiplexing (SSB-OOFDM) scheme with signal-signal beat interference cancellation receivers with balanced detection (ICRBD). This system can double the channel capacity and improve the spectral efficiency (SE) through the reduced guard band (GB) enabled by the PDM. A multiple input multiple output (MIMO) technique is used to resolve polarization mode dispersion (PMD) in conjunction with channel estimation and equalization. By simulation, we demonstrate the efficacy of the proposed technique for a 2 × 40 Gbit/s 16-QAM SSB-PDM-OOFDM system in terms of the error vector magnitude (EVM) and the constellation diagrams.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram
The systems resilience research community has developed methods to manually insert additional source-program-level assertions to trap errors, and has also devised tools to conduct fault injection studies for scalar program codes. In this work, we contribute the first vector-oriented LLVM-level fault injector, VULFI, to help study the effects of faults in vector architectures, which are of growing importance, especially for vectorizing loops. Using VULFI, we conduct a resiliency study of nine real-world vector benchmarks using Intel's AVX and SSE extensions as the target vector instruction sets, and offer the first reported understanding of how faults affect vector instruction sets. We take this work further toward automating the insertion of resilience assertions during compilation. This is based on our observation that during intermediate (e.g., LLVM-level) code generation to handle full and partial vectorization, modern compilers exploit (and explicate in their code documentation) critical invariants. These invariants are turned into error-checking code. We confirm the efficacy of these automatically inserted low-overhead error detectors for vectorized for-loops.
Application of Polarization to the MODIS Aerosol Retrieval Over Land
NASA Technical Reports Server (NTRS)
Levy, Robert C.; Remer, Lorraine R.; Kaufman, Yoram J.
2004-01-01
Reflectance measurements in the visible and infrared wavelengths, from the Moderate Resolution Imaging Spectroradiometer (MODIS), are used to derive aerosol optical thicknesses (AOT) and aerosol properties over land surfaces. The measured spectral reflectance is compared with lookup tables, containing theoretical reflectance calculated by radiative transfer (RT) code. Specifically, this RT code calculates top of the atmosphere (TOA) intensities based on a scalar treatment of radiation, neglecting the effects of polarization. In the red and near infrared (NIR) wavelengths the use of the scalar RT code is of sufficient accuracy to model TOA reflectance. However, in the blue, molecular and aerosol scattering dominate the TOA signal. Here, polarization effects can be large, and should be included in the lookup table derivation. Using a RT code that allows for both vector and scalar calculations, we examine the reflectance differences at the TOA, with and without polarization. We find that the differences in blue channel TOA reflectance (vector - scalar) may reach values of 0.01 or greater, depending on the sun/surface/sensor scattering geometry. Reflectance errors of this magnitude translate to AOT differences of 0.1, which is a very large error, especially when the actual AOT is low. As a result of this study, the next version of aerosol retrieval from MODIS over land will include polarization.
Effects of OCR Errors on Ranking and Feedback Using the Vector Space Model.
ERIC Educational Resources Information Center
Taghva, Kazem; And Others
1996-01-01
Reports on the performance of the vector space model in the presence of OCR (optical character recognition) errors in information retrieval. Highlights include precision and recall, a full-text test collection, smart vector representation, impact of weighting parameters, ranking variability, and the effect of relevance feedback. (Author/LRW)
Estimation of attitude sensor timetag biases
NASA Technical Reports Server (NTRS)
Sedlak, J.
1995-01-01
This paper presents an extended Kalman filter for estimating attitude sensor timing errors. Spacecraft attitude is determined by finding the mean rotation from a set of reference vectors in inertial space to the corresponding observed vectors in the body frame. Any timing errors in the observations can lead to attitude errors if either the spacecraft is rotating or the reference vectors themselves vary with time. The state vector here consists of the attitude quaternion, timetag biases, and, optionally, gyro drift rate biases. The filter models the timetags as random walk processes: their expectation values propagate as constants and white noise contributes to their covariance. Thus, this filter is applicable to cases where the true timing errors are constant or slowly varying. The observability of the state vector is studied first through an examination of the algebraic observability condition and then through several examples with simulated star tracker timing errors. The examples use both simulated and actual flight data from the Extreme Ultraviolet Explorer (EUVE). The flight data come from times when EUVE had a constant rotation rate, while the simulated data feature large angle attitude maneuvers. The tests include cases with timetag errors on one or two sensors, both constant and time-varying, and with and without gyro bias errors. Due to EUVE's sensor geometry, the observability of the state vector is severely limited when the spacecraft rotation rate is constant. In the absence of attitude maneuvers, the state elements are highly correlated, and the state estimate is unreliable. The estimates are particularly sensitive to filter mistuning in this case. The EUVE geometry, though, is a degenerate case having coplanar sensors and rotation vector. Observability is much improved and the filter performs well when the rate is either varying or noncoplanar with the sensors, as during a slew. Even with bad geometry and constant rates, if gyro biases are independently known, the timetag error for a single sensor can be accurately estimated as long as its boresight is not too close to the spacecraft rotation axis.
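The random-walk timetag-bias model described above has a simple filter realization: the bias estimate propagates unchanged between measurements while its variance grows with elapsed time. A minimal sketch of that propagation and a scalar measurement update is given below; the noise values and function names are hypothetical, not the flight filter.

```python
import numpy as np

# Minimal sketch: propagate a timetag-bias state modeled as a random walk.
# The expectation value stays constant; white process noise inflates the
# covariance, so the filter keeps "forgetting" and can track slow drifts.
def propagate_bias(bias_est, bias_var, dt, q_walk):
    """bias_est: current bias estimate (s); bias_var: its variance (s^2);
    dt: propagation interval (s); q_walk: random-walk noise density (s^2/s)."""
    bias_est_next = bias_est                 # expectation propagates as a constant
    bias_var_next = bias_var + q_walk * dt   # white noise contributes covariance
    return bias_est_next, bias_var_next

def update_bias(bias_est, bias_var, residual, h, r_meas):
    """Scalar measurement update: residual = measured - predicted,
    h = sensitivity of the measurement to the timetag bias."""
    s = h * bias_var * h + r_meas            # innovation variance
    k = bias_var * h / s                     # Kalman gain
    return bias_est + k * residual, (1.0 - k * h) * bias_var
```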
Novel technique for ST-T interval characterization in patients with acute myocardial ischemia.
Correa, Raúl; Arini, Pedro David; Correa, Lorena Sabrina; Valentinuzzi, Max; Laciar, Eric
2014-07-01
Novel signal processing techniques have enabled and improved the use of vectorcardiography (VCG) to diagnose and characterize myocardial ischemia. Herein, we studied vectorcardiographic dynamic changes of ventricular repolarization in 80 patients before (control) and during Percutaneous Transluminal Coronary Angioplasty (PTCA). We propose four vectorcardiographic ST-T parameters, i.e., (a) ST Vector Magnitude Area (aSTVM); (b) T-wave Vector Magnitude Area (aTVM); (c) ST-T Vector Magnitude Difference (ST-TVD); and (d) T-wave Vector Magnitude Difference (TVD). For comparison, the conventional ST-Change Vector Magnitude (STCVM) and Spatial Ventricular Gradient (SVG) were also calculated. Our results indicate that several vectorcardiographic parameters show significant differences (p-value < 0.05) before starting and during PTCA. A statistical minute-by-minute comparison of PTCA against the control situation showed that ischemic monitoring reached a sensitivity of 90.5% and a specificity of 92.6% at the 5th minute of PTCA when aSTVM and ST-TVD were used as classifiers. We conclude that the sensitivity and specificity of acute ischemia monitoring can be increased with the use of only two vectorcardiographic parameters. Hence, the proposed technique based on vectorcardiography could be used in addition to conventional ST-T analysis for better monitoring of ischemic patients. Copyright © 2014 Elsevier Ltd. All rights reserved.
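For context, a minimal sketch of the vector-magnitude signal and the area-type parameters named above (aSTVM, aTVM) is given below; the lead waveforms, sampling rate, and segment window are hypothetical inputs, not the paper's data.

```python
import numpy as np

def vector_magnitude(x, y, z):
    """Spatial vector magnitude of the VCG from the X, Y, Z leads."""
    return np.sqrt(x**2 + y**2 + z**2)

def segment_area(vm, seg, fs):
    """Area under the vector-magnitude curve over a segment of samples,
    e.g., aSTVM over the ST segment or aTVM over the T wave.
    vm: vector-magnitude signal; seg: slice of samples; fs: sampling rate (Hz)."""
    return np.sum(vm[seg]) / fs   # rectangle-rule approximation of the area

# Hypothetical usage on one beat sampled at 1 kHz:
fs = 1000
t = np.arange(fs) / fs
x, y, z = np.sin(2*np.pi*t), 0.5*np.cos(2*np.pi*t), 0.2*np.sin(4*np.pi*t)
vm = vector_magnitude(x, y, z)
aSTVM = segment_area(vm, slice(400, 600), fs)   # hypothetical ST window
```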
Biases in Time-Averaged Field and Paleosecular Variation Studies
NASA Astrophysics Data System (ADS)
Johnson, C. L.; Constable, C.
2009-12-01
Challenges to constructing time-averaged field (TAF) and paleosecular variation (PSV) models of Earth’s magnetic field over million year time scales are the uneven geographical and temporal distribution of paleomagnetic data and the absence of full vector records of the magnetic field variability at any given site. Recent improvements in paleomagnetic data sets now allow regional assessment of the biases introduced by irregular temporal sampling and the absence of full vector information. We investigate these effects over the past few Myr for regions with large paleomagnetic data sets, where the TAF and/or PSV have been of previous interest (e.g., significant departures of the TAF from the field predicted by a geocentric axial dipole). We calculate the effects of excluding paleointensity data from TAF calculations, and find these to be small. For example, at Hawaii, we find that for the past 50 ka, estimates of the TAF direction are minimally affected if only paleodirectional data versus the full paleofield vector are used. We use resampling techniques to investigate biases incurred by the uneven temporal distribution. Key to the latter issue is temporal information on a site-by-site basis. At Hawaii, resampling of the paleodirectional data onto a uniform temporal distribution, assuming no error in the site ages, reduces the magnitude of the inclination anomaly for the Brunhes, Gauss and Matuyama epochs. However inclusion of age errors in the sampling procedure leads to TAF estimates that are close to those reported for the original data sets. We discuss the implications of our results for global field models.
Changes in the electric dipole vector of human serum albumin due to complexing with fatty acids.
Scheider, W; Dintzis, H M; Oncley, J L
1976-01-01
The magnitude of the electric dipole vector of human serum albumin, as measured by the dielectric increment of the isoionic solution, is found to be a sensitive, monotonic indicator of the number of moles (up to at least 5) of long-chain fatty acid complexed. The sensitivity is about three times as great as it is in bovine albumin. New methods of analysis of the frequency dispersion of the dielectric constant were developed to ascertain whether molecular shape changes also accompany the complexing with fatty acid. Direct two-component rotary diffusion constant analysis is found to be too strongly affected by cross modulation between small systematic errors and physically significant data components to be a reliable measure of structural modification. Multicomponent relaxation profiles are more useful as recognition patterns for structural comparisons, but the equations involved are ill-conditioned, and solutions based on standard least-squares regression contain mathematical artifacts which mask the physically significant spectrum. By constraining the solution to non-negative coefficients, the magnitude of the artifacts is reduced to well below the magnitudes of the spectral components. Profiles calculated in this way show no evidence of significant dipole direction or molecular shape change as the albumin is complexed with 1 mol of fatty acid. In these experiments albumin was defatted by incubation with adipose tissue at physiological pH, which avoids passing the protein through the pH of the N-F transition usually required in defatting. Addition of fatty acid from solution in small amounts of ethanol appears to form a complex indistinguishable from the "native" complex. PMID:6087
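The non-negativity constraint described above is the key step that suppresses the oscillating artifacts in the fitted relaxation spectrum. A minimal sketch of such a constrained fit is shown below; the Debye-type basis matrix, frequencies, and relaxation times are hypothetical stand-ins for the paper's data.

```python
import numpy as np
from scipy.optimize import nnls

# Minimal sketch: fit a multicomponent relaxation profile with coefficients
# constrained to be non-negative, in contrast to ordinary least squares,
# whose unconstrained solution can contain large oscillating artifacts.
rng = np.random.default_rng(1)
freqs = np.logspace(4, 7, 50)            # hypothetical measurement frequencies (Hz)
taus = np.logspace(-7, -4, 20)           # hypothetical relaxation times (s)
# Each column is one relaxation component's dispersion curve.
A = 1.0 / (1.0 + (2 * np.pi * freqs[:, None] * taus[None, :]) ** 2)
true_coeffs = np.zeros(20)
true_coeffs[[5, 12]] = [1.0, 0.5]        # two "physical" components
b = A @ true_coeffs + 0.01 * rng.standard_normal(50)

coeffs, residual = nnls(A, b)            # non-negative least squares
print("recovered non-zero components:", np.nonzero(coeffs > 0.05)[0])
```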
Evaluation of a new parallel numerical parameter optimization algorithm for a dynamical system
NASA Astrophysics Data System (ADS)
Duran, Ahmet; Tuncel, Mehmet
2016-10-01
It is important to have a scalable parallel numerical parameter optimization algorithm for a dynamical system used in financial applications where time limitation is crucial. We use Message Passing Interface parallel programming and present such a new parallel algorithm for parameter estimation. For example, we apply the algorithm to the asset flow differential equations that have been developed and analyzed since 1989 (see [3-6] and references contained therein). We achieved speed-up for some time series on up to 512 cores (see [10]). Unlike [10], in this work we consider more extensive financial market situations, for example, in the presence of low volatility, high volatility, and a stock market price at a discount/premium to its net asset value with varying magnitude. Moreover, we evaluated the convergence of the model parameter vector, the nonlinear least-squares error, and the maximum improvement factor to quantify the success of the optimization process depending on the number of initial parameter vectors.
Quartz crystal resonator g sensitivity measurement methods and recent results.
Driscoll, M M
1990-01-01
A technique for accurate measurements of quartz crystal resonator vibration sensitivity is described. The technique utilizes a crystal oscillator circuit in which a prescribed length of coaxial cable is used to connect the resonator to the oscillator sustaining stage. A method is provided for determination and removal of measurement errors normally introduced as a result of cable vibration. In addition to oscillator-type measurements, it is also possible to perform similar vibration sensitivity measurements using a synthesized signal generator with the resonator installed in a passive phase bridge. Test results are reported for 40 and 50 MHz, fifth overtone AT-cut, and third overtone SC-cut crystals. Acceleration sensitivity (gamma vector) values for the SC-cut resonators were typically four times smaller (5×10⁻¹⁰ per g) than for the AT-cut units. However, smaller unit-to-unit gamma vector magnitude variation was exhibited by the AT-cut resonators. Oscillator sustaining stage vibration sensitivity was characterized by an equivalent open-loop phase modulation of 10⁻⁶ rad/g.
Artificial Vector Calibration Method for Differencing Magnetic Gradient Tensor Systems
Li, Zhining; Zhang, Yingtang; Yin, Gang
2018-01-01
The measurement error of the differencing (i.e., using two homogeneous field sensors at a known baseline distance) magnetic gradient tensor system includes the biases, scale factors, and nonorthogonality of the single magnetic sensor, and the misalignment error between the sensor arrays, all of which can severely affect the measurement accuracy. In this paper, we propose a low-cost artificial vector calibration method for the tensor system. Firstly, the error-parameter linear equations are constructed based on the single sensor's system error model to obtain the artificial ideal vector output of the platform, with the total magnetic intensity (TMI) scalar as a reference, by two nonlinear conversions, without any mathematical simplification. Secondly, the Levenberg-Marquardt algorithm is used to compute the integrated model of the 12 error parameters by a nonlinear least-squares fitting method with the artificial vector output as a reference, and a total of 48 parameters of the system are estimated simultaneously. The calibrated system output is expressed in the reference platform-orthogonal coordinate system. The analysis results show that the artificial-vector-calibrated output can track the orientation fluctuations of the TMI accurately, effectively avoiding the "overcalibration" problem. The accuracy of the error parameters' estimation in the simulation is close to 100%. The experimental root-mean-square error (RMSE) of the TMI and tensor components is less than 3 nT and 20 nT/m, respectively, and the estimation of the parameters is highly robust. PMID:29373544
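A minimal sketch of the kind of scalar-referenced nonlinear least-squares calibration described above is given below. It uses a simplified 9-parameter bias/scale/nonorthogonality model for a single sensor (a stand-in for the paper's 12-parameter per-sensor model), and the raw samples and reference values are placeholders.

```python
import numpy as np
from scipy.optimize import least_squares

# Minimal sketch: estimate a single sensor's biases, scale factors, and
# nonorthogonality angles by requiring the corrected vector output to
# reproduce a scalar total-magnetic-intensity (TMI) reference.
def corrected(params, raw):
    bias = params[:3]
    scale = params[3:6]
    u12, u13, u23 = params[6:9]            # small nonorthogonality angles
    P = np.array([[1.0, u12, u13],
                  [0.0, 1.0, u23],
                  [0.0, 0.0, 1.0]])        # upper-triangular correction
    return P @ ((raw - bias[:, None]) / scale[:, None])

def residuals(params, raw, tmi_ref):
    return np.linalg.norm(corrected(params, raw), axis=0) - tmi_ref

# raw: 3xN raw sensor samples (nT); tmi_ref: N scalar reference magnitudes.
raw = np.random.default_rng(2).normal(0.0, 30000.0, size=(3, 200))
tmi_ref = np.linalg.norm(raw, axis=0)      # placeholder reference values
x0 = np.concatenate([np.zeros(3), np.ones(3), np.zeros(3)])
sol = least_squares(residuals, x0, args=(raw, tmi_ref), method="lm")
```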
Liakhovetskiĭ, V A; Bobrova, E V; Skopin, G N
2012-01-01
Transposition errors during the reproduction of a hand movement sequence make it possible to obtain important information on the internal representation of this sequence in the motor working memory. Analysis of such errors showed that learning to reproduce sequences of left-hand movements improves the system of positional coding (coding of positions), while learning of right-hand movements improves the system of vector coding (coding of movements). Learning of right-hand movements after left-hand performance involved the system of positional coding "imposed" by the left hand. Learning of left-hand movements after right-hand performance activated the system of vector coding. Transposition errors during learning to reproduce movement sequences can be explained by a neural network using either vector coding or both vector and positional coding.
Kim, Haksoo; Park, Samuel B; Monroe, James I; Traughber, Bryan J; Zheng, Yiran; Lo, Simon S; Yao, Min; Mansur, David; Ellis, Rodney; Machtay, Mitchell; Sohn, Jason W
2015-08-01
This article proposes quantitative analysis tools and digital phantoms to quantify the intrinsic errors of deformable image registration (DIR) systems and to establish quality assurance (QA) procedures for the clinical use of DIR systems, utilizing local and global error analysis methods with clinically realistic digital image phantoms. Landmark-based image registration verifications are suitable only for images with significant feature points. To address this shortfall, we adapted a deformation vector field (DVF) comparison approach with new analysis techniques to quantify the results. Digital image phantoms are derived from data sets of actual patient images (a reference image set, R, and a test image set, T). Image sets from the same patient taken at different times are registered with deformable methods, producing a reference DVFref. Applying DVFref to the original reference image deforms T into a new image R'. The data set R', T, and DVFref constitutes a realistic truth set and therefore can be used to analyze any DIR system and expose intrinsic errors by comparing DVFref and DVFtest. For quantitative error analysis, calculating and delineating differences between DVFs, two methods were used: (1) a local error analysis tool that displays deformation-error magnitudes with color mapping on each image slice, and (2) a global error analysis tool that calculates a deformation-error histogram, which describes a cumulative probability function of errors for each anatomical structure. Three digital image phantoms were generated from three patients with head-and-neck, lung, and liver cancer. The DIR QA was evaluated using the head-and-neck case. © The Author(s) 2014.
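A minimal sketch of the two error-analysis tools described above, a per-voxel error-magnitude map and a cumulative error histogram per structure, is given below; the DVF arrays, mask, and function names are illustrative assumptions.

```python
import numpy as np

def local_error_map(dvf_ref, dvf_test):
    """Per-voxel deformation-error magnitude (for color-mapped display).
    dvf_ref, dvf_test: arrays of shape (nx, ny, nz, 3), in mm."""
    return np.linalg.norm(dvf_test - dvf_ref, axis=-1)

def cumulative_error_histogram(err_map, mask):
    """Cumulative probability function of deformation errors within one
    anatomical structure defined by a boolean mask."""
    errors = np.sort(err_map[mask])
    cum_prob = np.arange(1, errors.size + 1) / errors.size
    return errors, cum_prob
```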
NASA Technical Reports Server (NTRS)
Klumpp, A. R.
1976-01-01
A computer algorithm for extracting a quaternion from a direction-cosine matrix (DCM) is described. The quaternion provides a four-parameter representation of rotation, as against the nine-parameter representation afforded by a DCM. Commanded attitude in space shuttle steering is conveniently computed by DCM, while actual attitude is computed most compactly as a quaternion, as is attitude error. The unit length of the rotation quaternion, and the interchangeability of a quaternion and its negative, are used to advantage in the extraction algorithm. Protection of the algorithm against square-root failure and division overflow is considered. Necessary and sufficient conditions for handling the rotation-vector element of largest magnitude are discussed.
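A minimal sketch of such an extraction is shown below, in the spirit described above: the largest-magnitude quaternion component is selected as the pivot, so the square root acts on the largest candidate and the subsequent divisions use a large divisor. This is an illustrative implementation, not the flight algorithm itself.

```python
import numpy as np

def quaternion_from_dcm(C):
    """Extract a unit quaternion q = (w, x, y, z) from a 3x3 rotation
    (direction-cosine) matrix C. Choosing the largest-magnitude component
    first protects against square-root failure and division overflow."""
    tr = C[0, 0] + C[1, 1] + C[2, 2]
    # Candidates: 4w^2 = 1 + tr; 4x^2 = 1 + 2*C00 - tr; etc.
    cand = np.array([1.0 + tr,
                     1.0 + 2.0 * C[0, 0] - tr,
                     1.0 + 2.0 * C[1, 1] - tr,
                     1.0 + 2.0 * C[2, 2] - tr])
    i = int(np.argmax(cand))
    p = 0.5 * np.sqrt(cand[i])   # largest-magnitude component
    d = 4.0 * p                  # large, safe divisor
    if i == 0:
        q = np.array([p, (C[2, 1] - C[1, 2]) / d,
                         (C[0, 2] - C[2, 0]) / d,
                         (C[1, 0] - C[0, 1]) / d])
    elif i == 1:
        q = np.array([(C[2, 1] - C[1, 2]) / d, p,
                      (C[0, 1] + C[1, 0]) / d,
                      (C[0, 2] + C[2, 0]) / d])
    elif i == 2:
        q = np.array([(C[0, 2] - C[2, 0]) / d,
                      (C[0, 1] + C[1, 0]) / d, p,
                      (C[1, 2] + C[2, 1]) / d])
    else:
        q = np.array([(C[1, 0] - C[0, 1]) / d,
                      (C[0, 2] + C[2, 0]) / d,
                      (C[1, 2] + C[2, 1]) / d, p])
    return q / np.linalg.norm(q)   # q and -q represent the same rotation
```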
NASA Astrophysics Data System (ADS)
Hagen, C.; Ellmeier, M.; Piris, J.; Lammegger, R.; Jernej, I.; Magnes, W.; Murphy, E.; Pollinger, A.; Erd, C.; Baumjohann, W.
2017-11-01
Scalar magnetometers measure the magnitude of the magnetic field, while vector magnetometers (mostly fluxgate magnetometers) produce three-component outputs proportional to the magnitude and the direction of the magnetic field. While scalar magnetometers have a high accuracy, vector magnetometers suffer from parameter drifts and need to be calibrated during flight. In some cases, full science return can only be achieved by a combination of vector and scalar magnetometers.
Wang, Tsung-Jen; Lin, Yu-Huang; Chang, David C-K; Chou, Hsiu-Chu; Wang, I-Jong
2012-04-01
To analyse the magnitude of cylindrical corrections over which cyclotorsion compensation with iris recognition (IR) technology is beneficial during wavefront laser-assisted in situ keratomileusis. A retrospective, comparative case series. Fifty-four eyes that underwent wavefront laser-assisted in situ keratomileusis without IR (non-IR group) and 53 eyes that underwent wavefront laser-assisted in situ keratomileusis with IR (IR group) were recruited. Subgroup analyses based on baseline astigmatism were: a low degree of astigmatism (≥1.00 D to <2.00 D), a moderate degree of astigmatism (≥2.00 D to <3.00 D) and a high degree of astigmatism (≥3.00 D). Vector and non-vector analyses were used for comparison. The mean cylinder was -1.89 ± 0.76 D in the non-IR group and -2.00 ± 0.77 D in the IR group. Postoperatively, 38 eyes (74.50%) in the IR group and 31 eyes (57.50%) in the non-IR group were within ±0.50 D of the target induced astigmatism vector (P = 0.063). The difference vector was 0.49 ± 0.28 in the IR group and 0.63 ± 0.40 in the non-IR group (P = 0.031). In the subgroup analysis, the magnitude of error was significantly lower in the moderate IR subgroup than in the moderate non-IR subgroup (P = 0.034). Furthermore, the moderate IR subgroup had a lower mean difference vector (P = 0.0078) and a greater surgically induced astigmatism (P = 0.036) than the moderate non-IR group. Wavefront laser-assisted in situ keratomileusis using IR technology was effective and accurate for the treatment of myopic astigmatism. © 2011 The Authors. Clinical and Experimental Ophthalmology © 2011 Royal Australian and New Zealand College of Ophthalmologists.
Parametric Modulation of Error-Related ERP Components by the Magnitude of Visuo-Motor Mismatch
ERIC Educational Resources Information Center
Vocat, Roland; Pourtois, Gilles; Vuilleumier, Patrik
2011-01-01
Errors generate typical brain responses, characterized by two successive event-related potentials (ERP) following incorrect action: the error-related negativity (ERN) and the positivity error (Pe). However, it is unclear whether these error-related responses are sensitive to the magnitude of the error, or instead show all-or-none effects. We…
An affine projection algorithm using grouping selection of input vectors
NASA Astrophysics Data System (ADS)
Shin, JaeWook; Kong, NamWoong; Park, PooGyeon
2011-10-01
This paper presents an affine projection algorithm (APA) using grouping-based selection of input vectors. To improve the performance of the conventional APA, the proposed algorithm adjusts the number of input vectors using two procedures: a grouping procedure and a selection procedure. In the grouping procedure, input vectors that carry overlapping information for the update are grouped using the normalized inner product. Then, in the selection procedure, the few input vectors that carry sufficient information for the coefficient update are selected using the steady-state mean square error (MSE). Finally, the filter coefficients are updated using the selected input vectors. The experimental results show that the proposed algorithm achieves smaller steady-state estimation errors compared with existing algorithms.
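For context, a minimal sketch of the standard affine projection update that the grouping/selection procedures feed into is shown below (the selection logic itself, which the paper adds on top, is omitted); the step size and regularization constant are hypothetical.

```python
import numpy as np

def apa_update(w, X, d, mu=0.5, delta=1e-4):
    """One standard affine projection (APA) coefficient update.
    w: filter coefficients (L,); X: matrix whose K columns are the
    selected input vectors (L, K); d: desired outputs (K,).
    The paper's grouping/selection procedures decide which columns enter X."""
    e = d - X.T @ w                              # a-priori error vector
    G = X.T @ X + delta * np.eye(X.shape[1])     # regularized Gram matrix
    return w + mu * X @ np.linalg.solve(G, e)
```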
NASA Astrophysics Data System (ADS)
Shinnaka, Shinji
This paper presents a new unified analysis of the estimation errors of model-matching extended-back-EMF estimation methods for sensorless drives of permanent-magnet synchronous motors. The analytical solutions for the estimation errors, whose validity is confirmed by numerical experiments, are highly universal and applicable. As an example of this universality and applicability, a new trajectory-oriented vector control method is proposed, which can directly realize a quasi-optimal strategy minimizing total losses, with no additional computational load, by simply orienting one of the vector-control coordinates along the associated quasi-optimal trajectory. The coordinate-orientation rule, which is derived analytically, is surprisingly simple. Consequently, the trajectory-oriented vector control method can be applied to a number of conventional vector control systems that use model-matching extended-back-EMF estimation methods.
Liu, Xiayi; Yao, Jiafeng; Zhao, Tong; Obara, Hiromichi; Cui, Yahui; Takei, Masahiro
2018-06-01
Contact impedance has a more important effect on micro electrical impedance tomography (EIT) sensors than on conventional macro sensors. In the present work, a complex contact-impedance effect ratio ξ is defined to quantitatively evaluate the effect of contact impedance on the accuracy of images reconstructed by micro EIT. The quality of the reconstructed image under various ξ is estimated by phantom simulation to find the optimum algorithm. The generalized vector sampled pattern matching (GVSPM) method shows the best image quality and the best tolerance to ξ. Moreover, images of the sedimentary distribution of yeast cells in a multilayered microchannel are reconstructed by the GVSPM method under various mean magnitudes of the contact-impedance effect ratio |ξ|. The results show that the best image quality, with the smallest voltage error U_E = 0.581, is achieved at a measurement frequency f = 1 MHz and mean magnitude |ξ| = 26. In addition, the reconstructed images of the cell distribution degrade when f < 10 kHz and the mean value of |ξ| > 2400.
Wheeler, J; Mariani, E; Piazolo, S; Prior, D J; Trimby, P; Drury, M R
2009-03-01
The Weighted Burgers Vector (WBV) is defined here as the sum, over all types of dislocations, of [(density of intersections of dislocation lines with a map) x (Burgers vector)]. Here we show that it can be calculated, for any crystal system, solely from orientation gradients in a map view, unlike the full dislocation density tensor, which requires gradients in the third dimension. No assumption is made about gradients in the third dimension and they may be non-zero. The only assumption involved is that elastic strains are small so the lattice distortion is entirely due to dislocations. Orientation gradients can be estimated from gridded orientation measurements obtained by EBSD mapping, so the WBV can be calculated as a vector field on an EBSD map. The magnitude of the WBV gives a lower bound on the magnitude of the dislocation density tensor when that magnitude is defined in a coordinate invariant way. The direction of the WBV can constrain the types of Burgers vectors of geometrically necessary dislocations present in the microstructure, most clearly when it is broken down in terms of lattice vectors. The WBV has three advantages over other measures of local lattice distortion: it is a vector and hence carries more information than a scalar quantity, it has an explicit mathematical link to the individual Burgers vectors of dislocations and, since it is derived via tensor calculus, it is not dependent on the map coordinate system. If a sub-grain wall is included in the WBV calculation, the magnitude of the WBV becomes dependent on the step size but its direction still carries information on the Burgers vectors in the wall. The net Burgers vector content of dislocations intersecting an area of a map can be simply calculated by an integration round the edge of that area, a method which is fast and complements point-by-point WBV calculations.
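In symbols (notation assumed here, following the prose definition above): with t indexing dislocation types, ρ_t the density of intersections of type-t dislocation lines with the map plane, and b_t the corresponding Burgers vector,

\mathbf{W} \;=\; \sum_{t} \rho_t \, \mathbf{b}_t ,

so W is a vector quantity whose coordinate-invariant magnitude, as stated above, gives a lower bound on the magnitude of the full dislocation density tensor.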
A median filter approach for correcting errors in a vector field
NASA Technical Reports Server (NTRS)
Schultz, H.
1985-01-01
Techniques are presented for detecting and correcting errors in a vector field. These methods employ median filters which are frequently used in image processing to enhance edges and remove noise. A detailed example is given for wind field maps produced by a spaceborne scatterometer. The error detection and replacement algorithm was tested with simulation data from the NASA Scatterometer (NSCAT) project.
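A minimal sketch of such a median-filter test for a two-dimensional wind-vector field is given below; the window size and deviation threshold are hypothetical, and filtering the two components separately is one common choice assumed here.

```python
import numpy as np
from scipy.ndimage import median_filter

def correct_vector_field(u, v, window=5, threshold=3.0):
    """Detect and replace spurious vectors in a 2-D field (u, v).
    Each component is compared with its local median; vectors whose
    deviation magnitude exceeds the threshold are replaced by the local
    median values, which removes outliers while preserving edges."""
    u_med = median_filter(u, size=window)
    v_med = median_filter(v, size=window)
    dev = np.hypot(u - u_med, v - v_med)   # deviation magnitude
    bad = dev > threshold
    u_out, v_out = u.copy(), v.copy()
    u_out[bad], v_out[bad] = u_med[bad], v_med[bad]
    return u_out, v_out, bad
```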
Modeling Single-Event Transient Propagation in a SiGe BiCMOS Direct-Conversion Receiver
NASA Astrophysics Data System (ADS)
Ildefonso, Adrian; Song, Ickhyun; Tzintzarov, George N.; Fleetwood, Zachary E.; Lourenco, Nelson E.; Wachter, Mason T.; Cressler, John D.
2017-08-01
The propagation of single-event transient (SET) signals in a silicon-germanium direct-conversion receiver carrying modulated data is explored. A theoretical analysis of transient propagation, verified by simulation, is presented. A new methodology to characterize and quantify the impact of SETs in communication systems carrying modulated data is proposed. The proposed methodology uses a pulsed radiation source to induce distortions in the signal constellation. The error vector magnitude due to SETs can then be calculated to quantify errors. Two different modulation schemes were simulated: QPSK and 16-QAM. The distortions in the constellation diagram agree with the presented circuit theory. Furthermore, the proposed methodology was applied to evaluate the improvements in the SET response due to a known radiation-hardening-by-design (RHBD) technique, where the common-base device of the low-noise amplifier was operated in inverse mode. The proposed methodology can be a valid technique to determine the most sensitive parts of a system carrying modulated data.
NASA Astrophysics Data System (ADS)
Mandal, Gour Chandra; Mukherjee, Rahul; Das, Binoy; Patra, Ardhendu Sekhar
2018-03-01
An innovative low-cost reflective semiconductor optical amplifier (RSOA) based bidirectional triple-play services (TPS) scheme using a wavelength-division-multiplexed radio-on-free-space-optics passive optical network (WDM-RoFSO-PON) is proposed and experimentally demonstrated, transmitting data, voice and video services simultaneously. In this paper, the TPS (10 Gb/s data/voice and a 1.49 Gb/s HDTV signal) are successfully transmitted over a 500 m free-space link in the downstream direction, and the RSOA is utilized at the receiving site to transmit a 1.25 Gb/s data/voice signal over the same free-space link in the upstream direction by reusing the carrier, which makes the system cost-effective. High receiver sensitivity and signal-to-noise ratio (SNR), low bit-error rate (BER) and error vector magnitude (EVM), and excellent eye diagrams make the proposed network reliable and stable with acceptable performance. Therefore, the proposed WDM-RoFSO-PON could be a viable solution for future ubiquitous multiservice wireless networks delivering TPS.
Ogawa, Takahiro; Haseyama, Miki
2013-03-01
A missing-texture reconstruction method based on an error reduction (ER) algorithm, including a novel scheme for estimating Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase based on the ER algorithm. Specifically, by monitoring the errors to which the ER algorithm converges, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. Then, the Fourier transform magnitude of the target patch is estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitudes and phases to reconstruct the missing areas.
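A minimal sketch of the classical error-reduction iteration that the method builds on is shown below: each pass enforces the (estimated) Fourier magnitude in the frequency domain and the known pixels in the image domain. The magnitude estimate and the known-pixel mask are assumed inputs; the magnitude-estimation scheme itself, which is the paper's contribution, is not reproduced here.

```python
import numpy as np

def error_reduction(patch, known_mask, fourier_mag, n_iter=200):
    """Classical error-reduction iteration for missing-intensity recovery.
    patch: initial patch with arbitrary values in the missing areas;
    known_mask: True where intensities are known; fourier_mag: the
    (estimated) Fourier transform magnitude of the complete patch."""
    x = patch.copy()
    for _ in range(n_iter):
        X = np.fft.fft2(x)
        # Frequency-domain constraint: keep the phase, impose the magnitude.
        X = fourier_mag * np.exp(1j * np.angle(X))
        x = np.real(np.fft.ifft2(X))
        # Image-domain constraint: restore the known intensities.
        x[known_mask] = patch[known_mask]
    return x
```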
NASA Astrophysics Data System (ADS)
Baek, Jong Geun; Jang, Hyun Soo; Oh, Young Kee; Lee, Hyun Jeong; Kim, Eng Chan
2015-07-01
The purpose of this study was to evaluate the setup uncertainties for single-fraction stereotactic radiosurgery (SF-SRS) based on clinical data with two different mask-creation methods, using pretreatment cone-beam computed tomography imaging guidance. Dedicated frameless-fixation BrainLAB masks for 23 patients were created with the routine mask (R-mask) making method, as explained in the BrainLAB user manual. Alternative masks (A-masks), created by modifying the coverage of the R-masks over the patient's head, were used for 23 patients. The systematic errors for each mask and the stereotactic target localizer were analyzed, and the errors were calculated as the means ± standard deviations (SD) of the left-right (LR), superior-inferior (SI), anterior-posterior (AP), and yaw setup corrections. In addition, the frequencies of the three-dimensional (3D) vector length were analyzed. The mean setup corrections for the R-mask in all directions were < 0.7 mm and < 0.1°, whereas the magnitudes of the SDs were relatively large compared to the mean values. In contrast, the means and SDs for the A-mask were smaller than those for the R-mask, with the exception of the SD in the AP direction. The means and SDs in the yaw rotational direction for the R-mask and the A-mask were comparable. 3D vector shifts of larger magnitude occurred more frequently for the R-mask than for the A-mask. The setup uncertainties for each mask with the stereotactic localizing system had an asymmetric offset towards the positive AP direction. The A-mask-creation method, which is capable of covering the top of the patient's head, is superior to that of the R-mask, so the use of the A-mask is encouraged for SF-SRS to reduce setup uncertainties. Moreover, careful mask-making is required to prevent possible setup uncertainties.
NASA Astrophysics Data System (ADS)
Reddy, Ramakrushna; Nair, Rajesh R.
2013-10-01
This work deals with a methodology applied to seismic early-warning systems, which are designed to provide real-time estimation of the magnitude of an event. We reappraise the work of Simons et al. (2006), who, on the basis of a wavelet approach, predicted a magnitude error of ±1. We verify and improve upon the methodology of Simons et al. (2006) by applying an SVM statistical learning machine to the time-scale wavelet decomposition. We used the data of 108 events in central Japan with magnitudes ranging from 3 to 7.4, recorded at KiK-net network stations for source-receiver distances of up to 150 km during the period 1998-2011. We applied a wavelet transform to the seismogram data and calculated scale-dependent threshold wavelet coefficients. These coefficients were then classified into low-magnitude and high-magnitude events by constructing a maximum-margin hyperplane between the two classes, which forms the essence of SVMs. Further, the classified events from both classes were picked up and linear regressions were fitted to determine the relationship between wavelet coefficient magnitude and earthquake magnitude, which in turn helped us to estimate the earthquake magnitude of an event given its threshold wavelet coefficient. At wavelet scale number 7, we predicted the earthquake magnitude of an event within 2.7 seconds; that is, a magnitude determination is available within 2.7 s after the initial onset of the P-wave. These results shed light on the application of SVMs as a way to choose the optimal regression function to estimate the magnitude from a few seconds of an incoming seismogram. This would improve on the approach of Simons et al. (2006), which uses an average of the two regression functions to estimate the magnitude.
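A minimal sketch of the two-step scheme described above, a maximum-margin classification into low- and high-magnitude events followed by per-class linear regression, is given below using scikit-learn; the synthetic coefficient-magnitude relation and the class threshold are hypothetical.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
# Hypothetical features: threshold wavelet coefficients at one scale for
# 108 events with magnitudes from 3 to 7.4, as in the data set above.
mags = rng.uniform(3.0, 7.4, size=108)
coeffs = 0.8 * mags + rng.normal(0.0, 0.3, size=108)  # synthetic relation
X = coeffs.reshape(-1, 1)

# Maximum-margin separation into low- and high-magnitude classes.
labels = (mags >= 5.0).astype(int)                    # hypothetical split
clf = SVC(kernel="linear").fit(X, labels)

# Per-class linear regression from coefficient magnitude to magnitude.
for cls in (0, 1):
    reg = LinearRegression().fit(X[labels == cls], mags[labels == cls])
    print(f"class {cls}: magnitude ~ {reg.coef_[0]:.2f}*coeff + {reg.intercept_:.2f}")
```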
Wind and Temperature Spectrometry of the Upper Atmosphere in Low-Earth Orbit
NASA Technical Reports Server (NTRS)
Herrero, Federico
2011-01-01
Wind and Temperature Spectrometry (WATS) is a new approach to measuring the full wind vector, temperature, and relative densities of major neutral species in the Earth's thermosphere. The method uses an energy-angle spectrometer moving through the tenuous upper atmosphere to measure directly the angular and energy distributions of the air stream that enters the spectrometer. The angular distribution gives the direction of the total velocity of the air entering the spectrometer, and the energy distribution gives the magnitude of the total velocity. The wind velocity vector is uniquely determined since the measured total velocity depends on the wind vector and the orbiting velocity vector. The orbiting spectrometer moves supersonically, Mach 8 or greater, through the air and must point within a few degrees of its orbital velocity vector (the ram direction). Pointing knowledge is critical; for example, pointing errors of 0.1° lead to errors of about 10 m/s in the wind. The WATS method may also be applied without modification to measure the ion-drift vector, ion temperature, and relative ion densities of major ionic species in the ionosphere. In such an application it may be called IDTS: Ion-Drift Temperature Spectrometry. A spectrometer-based coordinate system with one axis instantaneously pointing along the ram direction makes it possible to transform the Maxwellian velocity distribution of the air molecules to a Maxwellian energy-angle distribution for the molecular flux entering the spectrometer. This implementation of WATS is called the gas kinetic method (GKM) because it is applied to the case of the Maxwellian distribution. The WATS method follows from the recognition that, in a supersonic platform moving at 8,000 m/s, the measurement of small wind velocities in the air on the order of a few hundred m/s and less requires precise knowledge of the angle of incidence of the neutral atoms and molecules. The same is true for ion-drift measurements. WATS also provides a general approach that can obtain non-equilibrium distributions as may exist in the upper regions of the thermosphere, above 500 km and into the exosphere. Finally, WATS serves as a mass spectrometer, with very low mass resolution of roughly 1 part in 3, but easily separating atomic oxygen from molecular nitrogen.
Application of Bred Vectors To Data Assimilation
NASA Astrophysics Data System (ADS)
Corazza, M.; Kalnay, E.; Patil, Dj
We introduced a statistic, the BV-dimension, to measure the effective local finite-time dimensionality of the atmosphere. We show that this dimension is often quite low, and suggest that this finding has important implications for data assimilation and the accuracy of weather forecasting (Patil et al, 2001). The original database for this study was the forecasts of the NCEP global ensemble forecasting system. The initial differences between the control forecast and the perturbed forecasts are called bred vectors. The control and perturbed initial conditions valid at time t = nΔt are evolved using the forecast model until time t = (n+1)Δt. The differences between the perturbed and the control forecasts are scaled down to their initial amplitude, and constitute the bred vectors valid at (n+1)Δt. Their growth rate is typically about 1.5/day. The bred vectors are similar by construction to leading Lyapunov vectors except that they have small but finite amplitude, and they are valid at finite times. The original NCEP ensemble data set has 5 independent bred vectors. We define a local bred vector at each grid point by choosing the 5 by 5 grid points centered at the grid point (a region of about 1100 km by 1100 km), and using the north-south and east-west velocity components at the 500 mb pressure level to form a 50-dimensional column vector. Since we have k = 5 global bred vectors, we also have k local bred vectors at each grid point. We estimate the effective dimensionality of the subspace spanned by the local bred vectors by performing a singular value decomposition (EOF analysis). The k local bred vector columns form a 50×k matrix M. The singular values s(i) of M measure the extent to which the k column unit vectors making up the matrix M point in the direction of v(i). We define the bred vector dimension as BVDIM = [Sum s(i)]^2 / Sum [s(i)^2]. For example, if four out of the five vectors lie along v(1) and one lies along v(2), the BV-dimension would be BVDIM[sqrt(4), 1, 0, 0, 0] = 1.8, less than 2 because one direction is more dominant than the other in representing the original data. The results (Patil et al, 2001) show that there are large regions where the bred vectors span a subspace of substantially lower dimension than that of the full space. These low-dimensionality regions are dominant in the baroclinic extratropics, typically have a lifetime of 3-7 days, have a well-defined horizontal and vertical structure that spans most of the atmosphere, and tend to move eastward. New results with a large number of ensemble members confirm these results and indicate that the low-dimensionality regions are quite robust, and depend only on the verification time (i.e., the underlying flow). Corazza et al (2001) have performed experiments with a data assimilation system based on a quasi-geostrophic model and simulated observations (Morss, 1999; Hamill et al, 2000). A 3D-variational data assimilation scheme for a quasi-geostrophic channel model is used to study the structure of the background error and its relationship to the corresponding bred vectors. The "true" evolution of the model atmosphere is defined by an integration of the model, and "rawinsonde observations" are simulated by randomly perturbing the true state at fixed locations. It is found that after 3-5 days the bred vectors develop well-organized structures which are very similar for the two different norms considered in this paper (potential vorticity norm and streamfunction norm).
The results show that the bred vectors do indeed represent well the characteristics of the data assimilation forecast errors, and that the subspace of bred vectors contains most of the forecast error, except in areas where the forecast errors are small. For example, the angle between the 6-hr forecast error and the subspace spanned by 10 bred vectors is less than 10° over 90% of the domain, indicating a pattern correlation of more than 98.5% between the forecast error and its projection onto the bred vector subspace. The presence of low-dimensional regions in the perturbations of the basic flow has important implications for data assimilation. At any given time, there is a difference between the true atmospheric state and the model forecast. Assuming that model errors are not the dominant source of errors, in a region of low BV-dimensionality the difference between the true state and the forecast should lie substantially in the low-dimensional unstable subspace of the few bred vectors that contribute most strongly to the low BV-dimension. This information should yield a substantial improvement in the forecast: the data assimilation algorithm should correct the model state by moving it closer to the observations along the unstable subspace, since this is where the true state most likely lies. Preliminary experiments have been conducted with the quasi-geostrophic data assimilation system testing whether it is possible to add "errors of the day" based on bred vectors to the standard (constant) 3D-Var background error covariance in order to capture these important errors. The results are extremely encouraging, indicating a significant reduction (about 40%) in the analysis errors at a very low computational cost.
References:
Corazza, M., E. Kalnay, D. J. Patil, R. Morss, M. Cai, I. Szunyogh, B. R. Hunt, E. Ott and J. A. Yorke, 2001: Use of the breeding technique to estimate the structure of the analysis "errors of the day". Submitted to Nonlinear Processes in Geophysics.
Hamill, T. M., Snyder, C., and Morss, R. E., 2000: A Comparison of Probabilistic Forecasts from Bred, Singular-Vector and Perturbed Observation Ensembles. Mon. Wea. Rev., 128, 1835-1851.
Kalnay, E., and Z. Toth, 1994: Removing growing errors in the analysis cycle. Preprints of the Tenth Conference on Numerical Weather Prediction, Amer. Meteor. Soc., 1994, 212-215.
Morss, R. E., 1999: Adaptive observations: Idealized sampling strategies for improving numerical weather prediction. PhD thesis, Massachusetts Institute of Technology, 225 pp.
Patil, D. J. S., B. R. Hunt, E. Kalnay, J. A. Yorke, and E. Ott, 2001: Local Low Dimensionality of Atmospheric Dynamics. Phys. Rev. Lett., 86, 5878.
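A minimal sketch of the BV-dimension computation defined in this abstract is given below; the matrix shape follows the 50×k construction in the text, and the example reproduces the singular-value case worked out above.

```python
import numpy as np

def bv_dimension(M):
    """BV-dimension of the subspace spanned by k local bred vectors.
    M: 50 x k matrix whose columns are unit local bred vectors.
    BVDIM = (sum s_i)^2 / (sum s_i^2), with s_i the singular values of M."""
    s = np.linalg.svd(M, compute_uv=False)
    return (s.sum() ** 2) / (s ** 2).sum()

# Example from the text: singular values (sqrt(4), 1, 0, 0, 0) give 1.8.
s = np.array([2.0, 1.0, 0.0, 0.0, 0.0])
print((s.sum() ** 2) / (s ** 2).sum())   # -> 1.8
```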
Design of thrust vectoring exhaust nozzles for real-time applications using neural networks
NASA Technical Reports Server (NTRS)
Prasanth, Ravi K.; Markin, Robert E.; Whitaker, Kevin W.
1991-01-01
Thrust vectoring continues to be an important issue in military aircraft system designs. A recently developed concept of vectoring aircraft thrust makes use of flexible exhaust nozzles. Subtle modifications in the nozzle wall contours produce a non-uniform flow field containing a complex pattern of shock and expansion waves. The end result, due to the asymmetric velocity and pressure distributions, is vectored thrust. Specification of the nozzle contours required for a desired thrust vector angle (an inverse design problem) has been achieved with genetic algorithms. This approach is computationally intensive and prevents the nozzles from being designed in real time, which is necessary for an operational aircraft system. An investigation was conducted into using genetic algorithms to train a neural network in an attempt to obtain two-dimensional nozzle contours in real time. Results show that genetic-algorithm-trained neural networks provide a viable, real-time alternative for designing thrust-vectoring nozzle contours. Thrust vector angles up to 20 deg were obtained within an average error of 0.0914 deg. The error surfaces encountered were highly degenerate, and thus the robustness of genetic algorithms was well suited for minimizing global errors.
Model assessment using a multi-metric ranking technique
NASA Astrophysics Data System (ADS)
Fitzpatrick, P. J.; Lau, Y.; Alaka, G.; Marks, F.
2017-12-01
Validation comparisons of multiple models present challenges when skill levels are similar, especially in regimes dominated by the climatological mean. Assessing skill separation requires advanced validation metrics and the identification of adeptness in extreme events, while maintaining simplicity for management decisions. Flexibility for operations is also an asset. This work postulates a weighted tally and consolidation technique which ranks results by multiple types of metrics. Variables include absolute error, bias, acceptable absolute error percentages, outlier metrics, model efficiency, Pearson correlation, Kendall's tau, reliability index, multiplicative gross error, and root-mean-squared differences. Other metrics, such as root-mean-square difference and rank correlation, were also explored but removed when their information was found to be generally duplicative of other metrics. While equal weights are applied here, the weights could be altered to favor preferred metrics. Two examples are shown comparing ocean models' currents and tropical cyclone products, including experimental products. The importance of using magnitude and direction for tropical cyclone track forecasts, instead of distance, along-track, and cross-track errors, is discussed. Tropical cyclone intensity and structure prediction are also assessed. Vector correlations are not included in the ranking process, but were found useful in an independent context and will be briefly reported.
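A minimal sketch of such a weighted rank tally is given below; the model scores and weights are hypothetical, and each metric is assumed to be oriented so that lower is better.

```python
import numpy as np

# Minimal sketch: rank models per metric, then consolidate with weights.
# Rows: models; columns: metrics, each oriented so that lower is better.
scores = np.array([[0.31, 0.05, 1.2],     # model A
                   [0.28, 0.09, 1.1],     # model B
                   [0.35, 0.02, 1.4]])    # model C
weights = np.array([1.0, 1.0, 1.0])       # equal weights, as in the text

ranks = scores.argsort(axis=0).argsort(axis=0) + 1   # 1 = best per metric
tally = ranks @ weights                              # weighted rank tally
for name, t in zip("ABC", tally):
    print(f"model {name}: consolidated score {t:.1f} (lower is better)")
```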
2011-12-01
encoded as a 64-bit integer number
theta_2massd: Distance in arcsec from the 2MASS source
J: 2MASS J-band magnitude
JErr: 2MASS J-band magnitude error
H: 2MASS H-band magnitude
HErr: 2MASS H-band magnitude error
K: 2MASS K-band magnitude
KErr: 2MASS K-band magnitude error
jh: 2MASS J−H color (corrected for extinction, j − h = (J − 0.327rExt) − (H − 0.209rExt))
hk: 2MASS H−K color (corrected for extinction, h − k = (H − 0.209rExt) − (K − 0.133rExt))
jk
NASA Astrophysics Data System (ADS)
Validi, AbdoulAhad
2014-03-01
This study introduces a non-intrusive approach, in the context of low-rank separated representation, to construct a surrogate of high-dimensional stochastic functions, e.g., PDEs/ODEs, in order to decrease the computational cost of Markov chain Monte Carlo simulations in Bayesian inference. The surrogate model is constructed via a regularized alternating least-squares regression with Tikhonov regularization, using a roughening matrix that computes the gradient of the solution, in conjunction with a perturbation-based error indicator to detect the optimal model complexity. The model approximates a vector of a continuous solution at discrete values of a physical variable. The number of random realizations required to achieve a successful approximation depends linearly on the function dimensionality. The computational cost of the model construction is quadratic in the number of random inputs, which potentially tackles the curse of dimensionality in high-dimensional stochastic functions. Furthermore, this vector-valued separated-representation-based model, in comparison to the available scalar-valued case, leads to a significant reduction in the cost of approximation, by an order of magnitude equal to the vector size. The performance of the method is studied through its application to three numerical examples, including a 41-dimensional elliptic PDE and a 21-dimensional cavity flow.
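A minimal sketch of one Tikhonov-regularized least-squares solve, of the kind each sweep of an alternating scheme performs, is shown below; the first-difference roughening matrix is a simple stand-in for the gradient operator described above, and the matrices are hypothetical.

```python
import numpy as np

def tikhonov_ls(A, b, lam):
    """Regularized least squares: minimize ||A x - b||^2 + lam ||L x||^2,
    where L is a first-difference roughening matrix that penalizes rough
    (large-gradient) solutions. Solved via the normal equations."""
    n = A.shape[1]
    L = (np.eye(n) - np.eye(n, k=1))[:-1]   # (n-1) x n first-difference operator
    lhs = A.T @ A + lam * (L.T @ L)
    return np.linalg.solve(lhs, A.T @ b)

# Hypothetical use: one least-squares sweep of an alternating scheme.
rng = np.random.default_rng(4)
A = rng.standard_normal((60, 20))
b = rng.standard_normal(60)
x = tikhonov_ls(A, b, lam=0.1)
```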
Arrows as anchors: An analysis of the material features of electric field vector arrows
NASA Astrophysics Data System (ADS)
Gire, Elizabeth; Price, Edward
2014-12-01
Representations in physics possess both physical and conceptual aspects that are fundamentally intertwined and can interact to support or hinder sense making and computation. We use distributed cognition and the theory of conceptual blending with material anchors to interpret the roles of conceptual and material features of representations in students' use of representations for computation. We focus on the vector-arrows representation of electric fields and describe this representation as a conceptual blend of electric field concepts, physical space, and the material features of the representation (i.e., the physical writing and the surface upon which it is drawn). In this representation, spatial extent (e.g., distance on paper) is used to represent both distances in coordinate space and magnitudes of electric field vectors. In conceptual blending theory, this conflation is described as a clash between the input spaces in the blend. We explore the benefits and drawbacks of this clash, as well as other features of this representation. This analysis is illustrated with examples from clinical problem-solving interviews with upper-division physics majors. We see that while these intermediate physics students make a variety of errors using this representation, they also use the geometric features of the representation to add electric field contributions and to organize the problem situation productively.
Gravity Compensation Using EGM2008 for High-Precision Long-Term Inertial Navigation Systems
Wu, Ruonan; Wu, Qiuping; Han, Fengtian; Liu, Tianyi; Hu, Peida; Li, Haixia
2016-01-01
The gravity disturbance vector is one of the major error sources in high-precision and long-term inertial navigation applications. Specific to inertial navigation systems (INSs) with high-order horizontal damping networks, analyses of the error propagation show that the gravity-induced errors exist almost exclusively in the horizontal channels and are mostly caused by deflections of the vertical (DOV). Low-frequency components of the DOV propagate into the latitude and longitude errors at a ratio of 1:1, and time-varying fluctuations in the DOV excite the Schuler oscillation. This paper presents two gravity compensation methods using the Earth Gravitational Model 2008 (EGM2008), namely, interpolation from an off-line database and computing gravity vectors directly from the spherical harmonic model. Particular attention is given to the error contributions of the gravity update interval and the computing time delay. It is recommended for marine navigation that a gravity vector be calculated within 1 s and updated at least every 100 s. To meet this demand, the time required to calculate the current gravity vector using EGM2008 has been reduced to less than 1 s by optimizing the calculation procedure. A few off-line experiments were conducted using the data of a shipborne INS collected during an actual sea test. With the aid of EGM2008, most of the low-frequency components of the position errors caused by the gravity disturbance vector were removed and the Schuler oscillation was attenuated effectively. In rugged terrain, the horizontal position error could be reduced by up to 48.85% of its regional maximum. The experimental results match the theoretical analysis and indicate that EGM2008 is suitable for gravity compensation of high-precision, long-term INSs. PMID:27999351
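Of the two compensation methods described, the off-line database approach amounts to table lookup plus interpolation at the vehicle position. A minimal sketch is shown below; the pre-computed DOV grids are hypothetical placeholders, not EGM2008 output.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Minimal sketch: interpolate deflections of the vertical (DOV) from a
# hypothetical off-line grid pre-computed with EGM2008. Grids are indexed
# by latitude and longitude in degrees; values are in arcseconds.
lats = np.linspace(-90.0, 90.0, 181)
lons = np.linspace(0.0, 359.0, 360)
xi_grid = np.zeros((181, 360))     # north-south DOV (placeholder values)
eta_grid = np.zeros((181, 360))    # east-west DOV (placeholder values)

xi_interp = RegularGridInterpolator((lats, lons), xi_grid,
                                    bounds_error=False, fill_value=None)
eta_interp = RegularGridInterpolator((lats, lons), eta_grid,
                                     bounds_error=False, fill_value=None)

def dov_at(lat, lon):
    """Return (xi, eta) at the vehicle position, refreshed e.g. every 100 s."""
    p = np.array([lat, lon % 360.0])
    return float(xi_interp(p)), float(eta_interp(p))
```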
Illés, Tamás; Somoskeöy, Szabolcs
2013-06-01
A new concept of vertebra vectors based on spinal three-dimensional (3D) reconstructions of images from the EOS system, a new low-dose X-ray imaging device, was recently proposed to facilitate the interpretation of EOS 3D data, especially with regard to horizontal-plane images. This retrospective study was aimed at evaluating the spinal layout visualized by EOS 3D and vertebra vectors before and after surgical correction, comparing scoliotic spine measurement values based on 3D vertebra vectors with measurements using conventional two-dimensional (2D) methods, and evaluating horizontal-plane vector parameters for their relationship with the magnitude of the scoliotic deformity. 95 patients with adolescent idiopathic scoliosis operated on according to the Cotrel-Dubousset principle underwent EOS X-ray examinations pre- and postoperatively, followed by 3D reconstructions and the generation of vertebra vectors in a calibrated coordinate system to calculate vector coordinates and parameters, as published earlier. Differences between values from conventional 2D Cobb methods and from vertebra-vector-based methods were evaluated by a means-comparison t-test, and the relationship of corresponding parameters was analysed by bivariate correlation. The relationship of horizontal-plane vector parameters with the magnitude of the scoliotic deformity and the results of surgical correction was analysed by Pearson correlation and linear regression. In comparison to manual 2D methods, a very close relationship was detectable in vertebra-vector-based curvature data for coronal curves (preop r = 0.950, postop r = 0.935) and thoracic kyphosis (preop r = 0.893, postop r = 0.896), while the small difference found in L1-L5 lordosis values (preop r = 0.763, postop r = 0.809) was shown to be strongly related to the magnitude of the corresponding L5 wedge. The correlation analysis revealed a strong correlation between the magnitude of scoliosis and the lateral translation of the apical vertebra in the horizontal plane, represented by the horizontal-plane coordinates of the terminal and initial points of the apical vertebra vectors (r = 0.701; r = 0.667). A weaker correlation was detected between the axial rotation of the apical vertebrae and the magnitude of the frontal curves (r = 0.459). Vertebra vectors provide a key opportunity to visualize spinal deformities in all three planes simultaneously. Measurement methods based on vertebra vectors proved to be just as accurate and reliable as conventional measurement methods for coronal- and sagittal-plane parameters. In addition, the horizontal-plane display of the curves can be studied using the same vertebra vectors. Based on the vertebra vector data, in the surgical treatment of spinal deformities the reduction of the lateral translation of the vertebrae appears to matter more to the result of the surgical correction than the correction of axial rotation.
NASA Astrophysics Data System (ADS)
Shinnaka, Shinji; Sano, Kousuke
This paper presents a new unified analysis of the estimation errors of model-matching phase-estimation methods, such as rotor-flux state observers, back-EMF state observers, and back-EMF disturbance observers, for sensorless drives of permanent-magnet synchronous motors. The analytical solutions for the estimation errors, whose validity is confirmed by numerical experiments, are highly universal and applicable. As an example of this universality and applicability, a new trajectory-oriented vector control method is proposed, which can directly realize a quasi-optimal strategy minimizing total losses, with no additional computational load, by simply orienting one of the vector-control coordinates along the associated quasi-optimal trajectory. The coordinate-orientation rule, which is derived analytically, is surprisingly simple. Consequently, the trajectory-oriented vector control method can be applied to a number of conventional vector control systems that use one of the model-matching phase-estimation methods.
Characterization of dual-polarization LTE radio over a free-space optical turbulence channel.
Bohata, J; Zvanovec, S; Korinek, T; Mansour Abadi, M; Ghassemlooy, Z
2015-08-10
A dual-polarization (DP) radio-over-free-space-optics (FSO) communication link using a long-term evolution (LTE) radio signal is proposed and analyzed under different turbulence channel conditions. Radio signal transmission over the DP FSO channel is experimentally verified by means of error vector magnitude (EVM) statistics. We demonstrate that such a system, employing 64-quadrature-amplitude modulation in the 800 MHz and 2.6 GHz frequency bands, remains reliable with an EVM of <8% in a turbulent channel. Based on the results, we show that transmitting the LTE signal over the FSO channel is a potential solution for last-mile access or backbone networks when using multiple-input multiple-output-based DP signals.
Numerical modelling of multimode fibre-optic communication lines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sidelnikov, O S; Fedoruk, M P; Sygletos, S
The results of numerical modelling of the nonlinear propagation of an optical signal in multimode fibres with a small differential group delay are presented. It is found that the dependence of the error vector magnitude (EVM) on the differential group delay can be reduced by increasing the number of ADC samples per symbol in the numerical implementation of the differential-group-delay compensation algorithm in the receiver. The possibility of using multimode fibres with a small differential group delay for data transmission in modern digital communication systems is demonstrated. It is shown that, as the number of modes increases, the strong coupling regime provides a lower EVM level than the weak coupling one.
King, Paul E [Corvallis, OR]; Woodside, Charles Rigel [Corvallis, OR]
2012-02-07
The disclosure herein provides an apparatus for locating a quantity of current vectors in an electrical device, where each current vector has a known direction and a known magnitude relative to an input current supplied to the electrical device. Mathematical constants used in Biot-Savart superposition equations are determined for the electrical device, the orientation of the apparatus, and the relative magnitude of the current vector and the input current, and the apparatus utilizes magnetic field sensors oriented to a sensing plane to provide the current vector location based on the solution of the Biot-Savart superposition equations. Descriptions of the required orientations between the apparatus and the electrical device are disclosed, and various methods of determining the mathematical constants are presented.
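For intuition, a minimal sketch of the Biot-Savart superposition underlying such equations is given below: the field at a sensor location is the sum of contributions from current segments, each with a known direction and magnitude relative to the input current. The geometry and current values are hypothetical.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def biot_savart(sensor, start, end, current, n_elem=200):
    """Magnetic field (T) at `sensor` from a straight segment carrying
    `current`, by summing Biot-Savart contributions of short elements:
    dB = (mu0 I / 4 pi) (dl x r) / |r|^3."""
    pts = np.linspace(start, end, n_elem + 1)
    mids = 0.5 * (pts[:-1] + pts[1:])          # element midpoints
    dl = np.diff(pts, axis=0)                  # element vectors
    r = sensor - mids
    rmag = np.linalg.norm(r, axis=1, keepdims=True)
    dB = MU0 * current / (4 * np.pi) * np.cross(dl, r) / rmag**3
    return dB.sum(axis=0)

# Superposition over two hypothetical current vectors whose magnitudes
# are known fractions of a 1 A input current.
sensor = np.array([0.0, 0.1, 0.0])
B = (biot_savart(sensor, np.array([-1.0, 0, 0]), np.array([1.0, 0, 0]), 1.0)
     + biot_savart(sensor, np.array([0, 0, -1.0]), np.array([0, 0, 1.0]), 0.5))
print(B)
```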
Constrained motion estimation-based error resilient coding for HEVC
NASA Astrophysics Data System (ADS)
Guo, Weihan; Zhang, Yongfei; Li, Bo
2018-04-01
Unreliable communication channels may introduce packet losses and bit errors into the videos transmitted through them, causing severe video quality degradation. This is even worse for HEVC, since more advanced and powerful motion estimation methods are introduced to further remove inter-frame dependency and thus improve coding efficiency. Once a Motion Vector (MV) is lost or corrupted, it will cause distortion in the decoded frame. More importantly, due to motion compensation, the error will propagate along the motion prediction path, accumulate over time, and significantly degrade the overall video presentation quality. To address this problem, we study encoder-side error resilient coding for HEVC and propose a constrained motion estimation scheme to mitigate error propagation to subsequent frames. The approach is achieved by cutting off MV dependencies and limiting the block regions that are predicted by temporal motion vectors. The experimental results show that the proposed method can effectively suppress the error propagation caused by bit errors in motion vectors and can improve the robustness of the stream over bit-error channels. When the bit error probability is 10^-5, an increase of the decoded video quality (PSNR) by up to 1.310 dB and on average 0.762 dB can be achieved, compared to the reference HEVC.
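The sketch below illustrates the constraint idea only (it is a stand-in, not the paper's exact scheme): a motion vector is clamped so that its reference block stays inside the part of the reference frame assumed to be reliably decodable; the function name and the notion of "refreshed rows" are hypothetical.

```python
# Clamp a motion vector so its reference block lies in a reliable region,
# cutting off prediction dependencies on areas where an error may propagate.
def constrain_mv(mv, block_x, block_y, block_size, refreshed_rows):
    """Clamp the vertical MV so the reference block lies in refreshed rows."""
    mvx, mvy = mv
    ref_top = block_y + mvy                   # top edge of the reference block
    max_top = refreshed_rows - block_size     # lowest allowed top edge
    if ref_top > max_top:
        mvy = max_top - block_y               # pull the reference block back
    return mvx, mvy

# Example: 16x16 block at (x=64, y=128); only the first 96 rows of the
# reference frame are considered reliable.
print(constrain_mv((3, 40), 64, 128, 16, 96))   # -> (3, -48)
```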
Perisic, Milun; Kinoshita, Michael H; Ranson, Ray M; Gallegos-Lopez, Gabriel
2014-06-03
Methods, system and apparatus are provided for controlling third harmonic voltages when operating a multi-phase machine in an overmodulation region. The multi-phase machine can be, for example, a five-phase machine in a vector controlled motor drive system that includes a five-phase PWM controlled inverter module that drives the five-phase machine. Techniques for overmodulating a reference voltage vector are provided. For example, when the reference voltage vector is determined to be within the overmodulation region, an angle of the reference voltage vector can be modified to generate a reference voltage overmodulation control angle, and a magnitude of the reference voltage vector can be modified, based on the reference voltage overmodulation control angle, to generate a modified magnitude of the reference voltage vector. By modifying the reference voltage vector, voltage command signals that control a five-phase inverter module can be optimized to increase output voltages generated by the five-phase inverter module.
Simon, Steven L; Hoffman, F Owen; Hofer, Eduard
2015-01-01
Retrospective dose estimation, particularly dose reconstruction that supports epidemiological investigations of health risk, relies on various strategies that include models of physical processes and exposure conditions with detail ranging from simple to complex. Quantification of dose uncertainty is an essential component of assessments for health risk studies since, as is well understood, it is impossible to retrospectively determine the true dose for each person. To address uncertainty in dose estimation, numerical simulation tools have become commonplace and there is now an increased understanding about the needs and what is required for models used to estimate cohort doses (in the absence of direct measurement) to evaluate dose response. It now appears that for dose-response algorithms to derive the best, unbiased estimate of health risk, we need to understand the type, magnitude and interrelationships of the uncertainties of model assumptions, parameters and input data used in the associated dose estimation models. Heretofore, uncertainty analysis of dose estimates did not always properly distinguish between categories of errors, e.g., uncertainty that is specific to each subject (i.e., unshared error), and uncertainty of doses from a lack of understanding and knowledge about parameter values that are shared to varying degrees by subsets of the cohort. While mathematical propagation of errors by Monte Carlo simulation methods has been used for years to estimate the uncertainty of an individual subject's dose, it was almost always conducted without consideration of dependencies between subjects. In retrospect, these types of simple analyses are not suitable for studies with complex dose models, particularly when important input data are missing or otherwise not available. The dose estimation strategy presented here is a simulation method that corrects the previous deficiencies of analytical or simple Monte Carlo error propagation methods and is termed, due to its capability to maintain separation between shared and unshared errors, the two-dimensional Monte Carlo (2DMC) procedure. Simply put, the 2DMC method simulates alternative, possibly true, sets (or vectors) of doses for an entire cohort rather than a single set that emerges when each individual's dose is estimated independently from other subjects. Moreover, estimated doses within each simulated vector maintain proper inter-relationships such that the estimated doses for members of a cohort subgroup that share common lifestyle attributes and sources of uncertainty are properly correlated. The 2DMC procedure simulates inter-individual variability of possibly true doses within each dose vector and captures the influence of uncertainty in the values of dosimetric parameters across multiple realizations of possibly true vectors of cohort doses. The primary characteristic of the 2DMC approach, as well as its strength, is defined by the proper separation between uncertainties shared by members of the entire cohort or members of defined cohort subsets, and uncertainties that are individual-specific and therefore unshared.
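A minimal numerical sketch of the 2DMC idea follows, assuming a toy dose model in which a lognormal calibration factor is shared by the whole cohort while an intake term is subject-specific; all distributions, sizes, and parameter values are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects = 1000       # cohort size (hypothetical)
n_vectors = 500         # number of simulated "possibly true" dose vectors

# One realization of the SHARED parameter per vector: the same draw applies
# to every subject in that vector, so within-vector doses stay correlated.
shared_cal = rng.lognormal(mean=0.0, sigma=0.3, size=n_vectors)

# UNSHARED, subject-specific uncertainty: redrawn independently for every
# subject in every vector.
unshared = rng.lognormal(mean=1.0, sigma=0.5, size=(n_vectors, n_subjects))

# Each row is one alternative, internally consistent set of cohort doses.
dose_vectors = shared_cal[:, None] * unshared

# Across-vector spread at a fixed subject reflects shared + unshared
# uncertainty; correlation between subjects reflects only the shared part.
print("subject 0 dose, median and 95% interval:",
      np.percentile(dose_vectors[:, 0], [50, 2.5, 97.5]))
print("corr(subject 0, subject 1) across vectors:",
      np.corrcoef(dose_vectors[:, 0], dose_vectors[:, 1])[0, 1])
```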
The computer speed of SMVGEAR II was improved markedly on scalar and vector machines with relatively little loss in accuracy. The improvement was due to a method of frequently recalculating the absolute error tolerance instead of keeping it constant for a given set of chemistry. ...
2016-01-01
Background It is often thought that random measurement error has a minor effect upon the results of an epidemiological survey. Theoretically, errors of measurement should always increase the spread of a distribution. Defining an illness by having a measurement outside an established healthy range will lead to an inflated prevalence of that condition if there are measurement errors. Methods and results A Monte Carlo simulation was conducted of anthropometric assessment of children with malnutrition. Random errors of increasing magnitude were imposed upon the populations; the standard deviation increased with each of the errors, and the increase became exponentially greater with the magnitude of the error. The potential magnitudes of the resulting errors in the reported prevalence of malnutrition were compared with published international data and found to be sufficient to make a number of surveys, and the numerous reports and analyses that used these data, unreliable. Conclusions The effect of random error in public health surveys, and on the data from which diagnostic cut-off points are derived to define "health", has been underestimated. Even quite modest random errors can more than double the reported prevalence of conditions such as malnutrition. Increasing sample size does not address this problem, and may even result in less accurate estimates. More attention needs to be paid to the selection, calibration and maintenance of instruments; measurer selection, training and supervision; routine estimation of the likely magnitude of errors using standardization tests; use of the statistical likelihood of error to exclude data from analysis; and full reporting of these procedures, so that the reliability of survey reports can be judged. PMID:28030627
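A minimal sketch of the simulation logic described above, with assumed numbers: weight-for-height z-scores distributed N(0, 1), malnutrition defined as z < -2, and zero-mean random measurement error of increasing magnitude.

```python
import numpy as np

rng = np.random.default_rng(1)
true_z = rng.normal(0.0, 1.0, size=1_000_000)
true_prev = np.mean(true_z < -2)          # ~2.3% by construction

for err_sd in (0.0, 0.25, 0.5, 0.75):
    observed = true_z + rng.normal(0.0, err_sd, size=true_z.size)
    # Measurement error widens the distribution, pushing more observations
    # past the fixed cut-off even though no child's true status changed.
    print(f"error SD {err_sd:4.2f}: observed SD {observed.std():.2f}, "
          f"prevalence {np.mean(observed < -2):.3%} (true {true_prev:.3%})")
```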
NASA Technical Reports Server (NTRS)
Chadwick, C.
1984-01-01
This paper describes the development and use of an algorithm to compute approximate statistics of the magnitude of a single random trajectory correction maneuver (TCM) Delta v vector. The TCM Delta v vector is modeled as a three-component Cartesian vector, each of whose components is a random variable having a normal (Gaussian) distribution with zero mean and possibly unequal standard deviations. The algorithm uses these standard deviations as input to produce approximations to (1) the mean and standard deviation of the magnitude of Delta v, (2) points of the probability density function of the magnitude of Delta v, and (3) points of the cumulative and inverse cumulative distribution functions of Delta v. The approximations are based on Monte Carlo techniques developed in a previous paper by the author and extended here. The algorithm described is expected to be useful in both pre-flight planning and in-flight analysis of maneuver propellant requirements for space missions.
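A minimal sketch of the Monte Carlo approach described above, with illustrative standard deviations; the paper's algorithm returns analytic approximations, whereas this sketch estimates the same quantities by direct simulation.

```python
import numpy as np

rng = np.random.default_rng(2)
sigmas = np.array([1.0, 2.0, 0.5])        # assumed per-axis std devs, m/s

dv = rng.normal(0.0, sigmas, size=(200_000, 3))   # zero-mean Gaussian components
mag = np.linalg.norm(dv, axis=1)                  # |Delta v| samples

print("mean |dv|:", mag.mean(), " std |dv|:", mag.std())
# Points of the cumulative and inverse-cumulative distribution of |dv|:
for p in (0.5, 0.9, 0.99):
    print(f"{p:.0%} of maneuvers need at most {np.quantile(mag, p):.2f} m/s")
```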
Design of analytical failure detection using secondary observers
NASA Technical Reports Server (NTRS)
Sisar, M.
1982-01-01
The problem of designing analytical failure-detection systems (FDS) for sensors and actuators, using observers, is addressed. The use of observers in FDS is related to the examination of the n-dimensional observer error vector, which carries the necessary information on possible failures. The problem is that in practical systems, in which only some of the components of the state vector are measured, one has access only to the m-dimensional observer-output error vector, with m ≤ n. In order to cope with these cases, a secondary observer is synthesized to reconstruct the entire observer error vector from the observer-output error vector. This approach leads toward the design of highly sensitive and reliable FDS, with the possibility of obtaining a unique fingerprint for every possible failure. In order to keep the observer's (or Kalman filter's) false-alarm rate under a certain specified value, it is necessary to have acceptable matching between the observer (or Kalman filter) models and the system parameters. A previously developed adaptive observer algorithm is used to maintain the desired system-observer model matching, despite initial mismatching or system parameter variations. Conditions for convergence of the adaptive process are obtained, leading to a simple adaptive law (algorithm) with the possibility of an a priori choice of fixed adaptive gains. Simulation results show good tracking performance with small observer output errors, while accurate and fast parameter identification, in both deterministic and stochastic cases, is obtained.
Feedback controlled optics with wavefront compensation
NASA Technical Reports Server (NTRS)
Breckenridge, William G. (Inventor); Redding, David C. (Inventor)
1993-01-01
The sensitivity model of a complex optical system obtained by linear ray tracing is used to compute a control gain matrix by imposing the mathematical condition for minimizing the total wavefront error at the optical system's exit pupil. The most recent deformations or error states of the controlled segments or optical surfaces of the system are then assembled as an error vector, and the error vector is transformed by the control gain matrix to produce the exact control variables which will minimize the total wavefront error at the exit pupil of the optical system. These exact control variables are then applied to the actuators controlling the various optical surfaces in the system, causing an immediate reduction in the total wavefront error observed at the exit pupil of the optical system.
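A minimal sketch of this control law, assuming a linear sensitivity model in which the exit-pupil wavefront responds as W = e + S u to actuator commands u: the least-squares gain that minimizes total wavefront error is the pseudoinverse of the sensitivity matrix, and the commands are u = -G e. Sizes and matrices here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_pupil, n_act = 200, 12                  # pupil samples, actuators (assumed)
S = rng.normal(size=(n_pupil, n_act))     # sensitivity matrix from ray tracing

G = np.linalg.pinv(S)                     # control gain matrix (least squares)

e = rng.normal(size=n_pupil)              # current wavefront error vector
u = -G @ e                                # corrective control variables
residual = e + S @ u                      # wavefront after actuation

print("rms before:", np.sqrt(np.mean(e**2)),
      " rms after:", np.sqrt(np.mean(residual**2)))
```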
Robust vector quantization for noisy channels
NASA Technical Reports Server (NTRS)
Demarca, J. R. B.; Farvardin, N.; Jayant, N. S.; Shoham, Y.
1988-01-01
The paper briefly discusses techniques for making vector quantizers more tolerant to transmission errors. Two algorithms are presented for obtaining an efficient binary word assignment to the vector quantizer codewords without increasing the transmission rate. It is shown that a gain of about 4.5 dB over random assignment can be achieved with these algorithms. It is also proposed to reduce the effects of error propagation in vector-predictive quantizers by appropriately constraining the response of the predictive loop. The constrained system is shown to have about 4 dB of SNR gain over an unconstrained system in a noisy channel, with a small loss of clean-channel performance.
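In the spirit of the index-assignment idea above, here is a minimal sketch (a greedy pairwise-swap search, an illustrative stand-in rather than the paper's exact algorithms): indices that differ by one bit should map to codewords that are close in signal space, so a single channel bit error causes small distortion.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)
codebook = rng.normal(size=(16, 2))       # 16 codewords, 2-D (assumed)
n, bits = len(codebook), 4

def cost(assign):
    # Expected distortion over all single-bit index errors, equal priors.
    total = 0.0
    for i in range(n):
        for b in range(bits):
            j = i ^ (1 << b)              # index received after one bit flip
            total += np.sum((codebook[assign[i]] - codebook[assign[j]])**2)
    return total

assign = np.arange(n)                     # start from the identity mapping
best = cost(assign)
improved = True
while improved:                           # greedy swaps until no improvement
    improved = False
    for i, j in combinations(range(n), 2):
        assign[i], assign[j] = assign[j], assign[i]
        c = cost(assign)
        if c < best:
            best, improved = c, True
        else:
            assign[i], assign[j] = assign[j], assign[i]   # undo the swap

print("expected single-bit-error distortion:", best)
```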
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1994-01-01
The unequal error protection capabilities of convolutional and trellis codes are studied. In certain environments, a discrepancy in the amount of error protection placed on different information bits is desirable. Examples of environments which have data of varying importance are a number of speech coding algorithms, packet switched networks, multi-user systems, embedded coding systems, and high definition television. Encoders which provide more than one level of error protection to information bits are called unequal error protection (UEP) codes. In this work, the effective free distance vector, d, is defined as an alternative to the free distance as a primary performance parameter for UEP convolutional and trellis encoders. For a given (n, k) convolutional encoder G, the effective free distance vector is defined as the k-dimensional vector d = (d_0, d_1, ..., d_{k-1}), where d_j, the jth effective free distance, is the lowest Hamming weight among all code sequences that are generated by input sequences with at least one '1' in the jth position. It is shown that, although the free distance for a code is unique to the code and independent of the encoder realization, the effective distance vector is dependent on the encoder realization.
The effect of saccade metrics on the corollary discharge contribution to perceived eye location
Bansal, Sonia; Jayet Bray, Laurence C.; Peterson, Matthew S.
2015-01-01
Corollary discharge (CD) is hypothesized to provide the movement information (direction and amplitude) required to compensate for the saccade-induced disruptions to visual input. Here, we investigated to what extent these conveyed metrics influence perceptual stability in human subjects with a target-displacement detection task. Subjects made saccades to targets located at different amplitudes (4°, 6°, or 8°) and directions (horizontal or vertical). During the saccade, the target disappeared and then reappeared at a shifted location either in the same direction or opposite to the movement vector. Subjects reported the target displacement direction, and from these reports we determined the perceptual threshold for shift detection and estimate of target location. Our results indicate that the thresholds for all amplitudes and directions generally scaled with saccade amplitude. Additionally, subjects on average produced hypometric saccades with an estimated CD gain <1. Finally, we examined the contribution of different error signals to perceptual performance, the saccade error (movement-to-movement variability in saccade amplitude) and visual error (distance between the fovea and the shifted target location). Perceptual judgment was not influenced by the fluctuations in movement amplitude, and performance was largely the same across movement directions for different magnitudes of visual error. Importantly, subjects reported the correct direction of target displacement above chance level for very small visual errors (<0.75°), even when these errors were opposite the target-shift direction. Collectively, these results suggest that the CD-based compensatory mechanisms for visual disruptions are highly accurate and comparable for saccades with different metrics. PMID:25761955
Performance of 12 DIR algorithms in low-contrast regions for mass and density conserving deformation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yeo, U. J.; Supple, J. R.; Franich, R. D.
2013-10-15
Purpose: Deformable image registration (DIR) has become a key tool for adaptive radiotherapy to account for inter- and intrafraction organ deformation. Of contemporary interest, the application to deformable dose accumulation requires accurate deformation even in low contrast regions where dose gradients may exist within near-uniform tissues. One expects high-contrast features to generally be deformed more accurately by DIR algorithms. The authors systematically assess the accuracy of 12 DIR algorithms and quantitatively examine, in particular, low-contrast regions, where accuracy has not previously been established. Methods: This work investigates DIR algorithms in three dimensions using deformable gel (DEFGEL) [U. J. Yeo, M. L. Taylor, L. Dunn, R. L. Smith, T. Kron, and R. D. Franich, "A novel methodology for 3D deformable dosimetry," Med. Phys. 39, 2203–2213 (2012)], for application to mass- and density-conserving deformations. CT images of DEFGEL phantoms with 16 fiducial markers (FMs) implanted were acquired in deformed and undeformed states for three different representative deformation geometries. Nonrigid image registration was performed using 12 common algorithms in the public domain. The optimum parameter setup was identified for each algorithm and each was tested for deformation accuracy in three scenarios: (I) original images of the DEFGEL with 16 FMs; (II) images with eight of the FMs mathematically erased; and (III) images with all FMs mathematically erased. The deformation vector fields obtained for scenarios II and III were then applied to the original images containing all 16 FMs. The locations of the FMs estimated by the algorithms were compared to actual locations determined by CT imaging. The accuracy of the algorithms was assessed by evaluation of three-dimensional vectors between true marker locations and predicted marker locations. Results: The mean magnitude of 16 error vectors per sample ranged from 0.3 to 3.7, 1.0 to 6.3, and 1.3 to 7.5 mm across algorithms for scenarios I to III, respectively. The greatest accuracy was exhibited by the original Horn and Schunck optical flow algorithm. In this case, for scenario III (erased FMs not contributing to driving the DIR calculation), the mean error was half that of the modified demons algorithm (which exhibited the greatest error), across all deformations. Some algorithms failed to reproduce the geometry at all, while others accurately deformed high contrast features but not low-contrast regions, indicating poor interpolation between landmarks. Conclusions: The accuracy of DIR algorithms was quantitatively evaluated using a tissue equivalent, mass, and density conserving DEFGEL phantom. For the model studied, optical flow algorithms performed better than demons algorithms, with the original Horn and Schunck performing best. The degree of error is influenced more by the magnitude of displacement than the geometric complexity of the deformation. As might be expected, deformation is estimated less accurately for low-contrast regions than for high-contrast features, and the method presented here allows quantitative analysis of the differences. The evaluation of registration accuracy through observation of the same high contrast features that drive the DIR calculation is shown to be circular and hence misleading.
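A minimal sketch of the accuracy metric used above: the mean magnitude of the 3-D error vectors between true fiducial-marker locations (from CT) and the locations predicted by applying a DIR deformation vector field. Marker coordinates here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
true_locs = rng.uniform(0, 50, size=(16, 3))               # mm, 16 markers
predicted = true_locs + rng.normal(0, 1.5, size=(16, 3))   # DIR estimate

error_vectors = predicted - true_locs
magnitudes = np.linalg.norm(error_vectors, axis=1)
print(f"mean 3-D marker error: {magnitudes.mean():.2f} mm "
      f"(max {magnitudes.max():.2f} mm)")
```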
A new method for distortion magnetic field compensation of a geomagnetic vector measurement system
NASA Astrophysics Data System (ADS)
Liu, Zhongyan; Pan, Mengchun; Tang, Ying; Zhang, Qi; Geng, Yunling; Wan, Chengbiao; Chen, Dixiang; Tian, Wugang
2016-12-01
The geomagnetic vector measurement system mainly consists of a three-axis magnetometer and an INS (inertial navigation system), both of which carry many ferromagnetic parts. The magnetometer is always distorted by ferromagnetic parts and other electrical equipment within the system, such as the INS and the power circuit module, which can lead to geomagnetic vector measurement errors of thousands of nT. Thus, the geomagnetic vector measurement system has to be compensated in order to guarantee the measurement accuracy. In this paper, a new distortion magnetic field compensation method is proposed, in which a permanent magnet placed at different relative positions is used to change the ambient magnetic field and thereby construct equations for the error model parameters; the parameters can then be accurately estimated by solving linear equations. In order to verify the effectiveness of the proposed method, an experiment was conducted, and the results demonstrate that, after compensation, the component errors of the measured geomagnetic field are reduced significantly. This demonstrates that the proposed method can effectively improve the accuracy of the geomagnetic vector measurement system.
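A minimal sketch of the linear-calibration idea, assuming the common error model B_measured = K B_true + b (scale/misalignment matrix K and offset b); each placement of the permanent magnet yields one equation set, and stacking placements lets ordinary least squares recover the 12 model parameters. The model form and all numbers are assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(7)
K_true = np.eye(3) + 0.02 * rng.normal(size=(3, 3))   # assumed distortion
b_true = np.array([120.0, -80.0, 40.0])               # assumed offset, nT

B_true = rng.uniform(-50_000, 50_000, size=(30, 3))   # fields for 30 placements
B_meas = B_true @ K_true.T + b_true + rng.normal(0, 2, size=(30, 3))

# Solve B_meas = [B_true, 1] @ [[K.T], [b]] by least squares.
A = np.hstack([B_true, np.ones((30, 1))])
params, *_ = np.linalg.lstsq(A, B_meas, rcond=None)
K_est, b_est = params[:3].T, params[3]

print("max |K error|:", np.abs(K_est - K_true).max())
print("offset estimate (nT):", b_est)
```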
Sex Differences and Neurodevelopmental Variables: A Vector Model
ERIC Educational Resources Information Center
Languis, Marlin; Naour, Paul
For the individual, gender difference falls along the feminine-masculine continuum with strong neurodevelopmental influences at various points throughout the lifespan. Neurodevelopmental influences are conceptualized in a vector model of sex difference. Vector attributes, direction and magnitude, are influenced initially by differences in levels…
NASA Technical Reports Server (NTRS)
Walker, H. F.
1979-01-01
In many pattern recognition problems, data vectors are classified although one or more of the data vector elements are missing. This problem occurs in remote sensing when the ground is obscured by clouds. Optimal linear discrimination procedures for classifying incomplete data vectors are discussed.
Noise level and MPEG-2 encoder statistics
NASA Astrophysics Data System (ADS)
Lee, Jungwoo
1997-01-01
Much of the content in the movie and broadcasting industries is still stored on analog film or tape, which typically contains random noise originating from film grain, CCD cameras, and tape recording. The performance of an MPEG-2 encoder may be significantly degraded by this noise. It is also affected by the scene type, which includes spatial and temporal activity. The statistical properties of noise originating from cameras and tape players are analyzed, and models for the two types of noise are developed. The relationship between the noise, the scene type, and the encoder statistics of a number of MPEG-2 parameters, such as motion vector magnitude, prediction error, and quantizer scale, is discussed. This analysis is intended to be a tool for designing robust MPEG encoding algorithms for tasks such as preprocessing and rate control.
Yu, Huanzhou; Shimakawa, Ann; Hines, Catherine D. G.; McKenzie, Charles A.; Hamilton, Gavin; Sirlin, Claude B.; Brittain, Jean H.; Reeder, Scott B.
2011-01-01
Multipoint water–fat separation techniques rely on the different water–fat phase shifts generated at multiple echo times to decompose water and fat. These methods therefore require complex source images and allow unambiguous separation of water and fat signals. However, complex-based water–fat separation methods are sensitive to phase errors in the source images, which may lead to clinically important errors. An alternative approach to quantifying fat is through "magnitude-based" methods that acquire multiecho magnitude images. Magnitude-based methods are insensitive to phase errors, but cannot estimate fat-fractions greater than 50%. In this work, we introduce a water–fat separation approach that combines the strengths of both complex and magnitude reconstruction algorithms. A magnitude-based reconstruction is applied after complex-based water–fat separation to remove the effect of phase errors. The results from the two reconstructions are then combined. We demonstrate that using this hybrid method, 0–100% fat-fraction can be estimated with improved accuracy at low fat-fractions. PMID:21695724
Are Bred Vectors The Same As Lyapunov Vectors?
NASA Astrophysics Data System (ADS)
Kalnay, E.; Corazza, M.; Cai, M.
Regional loss of predictability is an indication of the instability of the underlying flow, where small errors in the initial conditions (or imperfections in the model) grow to large amplitudes in finite times. The stability properties of evolving flows have been studied using Lyapunov vectors (e.g., Alligood et al, 1996, Ott, 1993, Kalnay, 2002), singular vectors (e.g., Lorenz, 1965, Farrell, 1988, Molteni and Palmer, 1993), and, more recently, with bred vectors (e.g., Szunyogh et al, 1997, Cai et al, 2001). Bred vectors (BVs) are, by construction, closely related to Lyapunov vectors (LVs). In fact, after an infinitely long breeding time, and with the use of infinitesimal amplitudes, bred vectors are identical to leading Lyapunov vectors. In practical applications, however, bred vectors are different from Lyapunov vectors in two important ways: a) bred vectors are never globally orthogonalized and are intrinsically local in space and time, and b) they are finite-amplitude, finite-time vectors. These two differences are very significant in a dynamical system whose size is very large. For example, the atmosphere is large enough to have "room" for several synoptic scale instabilities (e.g., storms) to develop independently in different regions (say, North America and Australia), and it is complex enough to have several different possible types of instabilities (such as barotropic, baroclinic, convective, and even Brownian motion). Bred vectors share some of their properties with leading LVs (Corazza et al, 2001a, 2001b, Toth and Kalnay, 1993, 1997, Cai et al, 2001). For example, 1) Bred vectors are independent of the norm used to define the size of the perturbation. Corazza et al. (2001) showed that bred vectors obtained using a potential enstrophy norm were indistinguishable from bred vectors obtained using a streamfunction squared norm, in contrast with singular vectors. 2) Bred vectors are independent of the length of the rescaling period as long as the perturbations remain approximately linear (for example, for atmospheric models the interval for rescaling could be varied between a single time step and 1 day without affecting qualitatively the characteristics of the bred vectors). However, the finite amplitude, finite time, and lack of orthogonalization of the BVs introduce important differences with LVs: 1) In regions that undergo strong instabilities, the bred vectors tend to be locally dominated by simple, low-dimensional structures. Patil et al (2001) showed that the BV-dim (appendix) gives a good estimate of the number of dominant directions (shapes) of the local k bred vectors. For example, if half of them are aligned in one direction, and half in a different direction, the BV-dim is about two. If the majority of the bred vectors are aligned predominantly in one direction and only a few are aligned in a second direction, then the BV-dim is between 1 and 2. Patil et al. (2001) showed that the regions with low dimensionality cover about 20% of the atmosphere. They also found that these low-dimensionality regions have a very well defined vertical structure, and a typical lifetime of 3-7 days. The low dimensionality identifies regions where the instability of the basic flow has manifested itself in a low number of preferred directions of perturbation growth.
2) Using a quasi-geostrophic simulation system of data assimilation developed by Morss (1999), Corazza et al (2001a, b) found that bred vectors have structures that closely resemble the background (short forecasts used as first guess) errors, which in turn dominate the local analysis errors. This is especially true in regions of low dimensionality, which is not surprising if these are unstable regions where errors grow in preferred shapes. 3) The number of bred vectors needed to represent the unstable subspace in the QG system is small (about 6-10). This was shown by computing the local BV-dim as a function of the number of independent bred vectors. Convergence in the local dimension starts to occur at about 6 BVs, and is essentially complete when the number of vectors is about 10-15 (Corazza et al, 2001a). This should be contrasted with the results of Snyder and Joly (1998) and Palmer et al (1998) who showed that hundreds of Lyapunov vectors with positive Lyapunov exponents are needed to represent the attractor of the system in quasi-geostrophic models. 4) Since only a few bred vectors are needed, and background errors project strongly onto the subspace of bred vectors, Corazza et al (2001b) were able to develop cost-efficient methods to improve the 3D-Var data assimilation by adding to the background error covariance terms proportional to the outer product of the bred vectors, thus representing the "errors of the day". This approach led to a reduction of analysis error variance of about 40% at very low cost. 5) The fact that BVs have finite amplitude provides a natural way to filter out instabilities present in the system that have fast growth, but saturate nonlinearly at such small amplitudes that they are irrelevant for ensemble perturbations. As shown by Lorenz (1996), Lyapunov vectors (and singular vectors) of models including these physical phenomena would be dominated by the fast but small amplitude instabilities, unless they are explicitly excluded from the linearized models. Bred vectors, on the other hand, through the choice of an appropriate size for the perturbation, provide a natural filter based on nonlinear saturation of fast but irrelevant instabilities. 6) Every bred vector is qualitatively similar to the *leading* LV. LVs beyond the leading LV are obtained by orthogonalization after each time step with respect to the previous LVs' subspace. The orthogonalization requires the introduction of a norm. With an enstrophy norm, the successive LVs have larger and larger horizontal scales, and a choice of a stream function norm would lead to successively smaller scales in the LVs. Beyond the first few LVs, there is little qualitative similarity between the background errors and the LVs. In summary, in a system like the atmosphere with enough physical space for several independent local instabilities, BVs and LVs share some properties but they also have significant differences. BVs are finite-amplitude, finite-time, and because they are not globally orthogonalized, they have local properties in space. Bred vectors are akin to the leading LV, but bred vectors derived from different arbitrary initial perturbations remain distinct from each other, instead of collapsing into a single leading vector, presumably because the nonlinear terms and physical parameterizations introduce sufficient stochastic forcing to avoid such convergence.
As a result, there is no need for global orthogonalization, and the number of bred vectors required to describe the natural instabilities in an atmospheric system (from a local point of view) is much smaller than the number of Lyapunov vectors with positive Lyapunov exponents. The BVs are independent of the norm, whereas the LVs beyond the first one do depend on the choice of norm: for example, they become larger in scale with a vorticity norm, and smaller with a stream function norm. These properties of BVs result in significant advantages for data assimilation and ensemble forecasting for the atmosphere. Errors in the analysis have structures very similar to bred vectors, and it is found that they project very strongly on the subspace of a few bred vectors. This is not true for either Lyapunov vectors beyond the leading LVs, or for singular vectors unless they are constructed with a norm based on the analysis error covariance matrix (or a bred vector covariance). The similarity between bred vectors and analysis errors leads to the ability to include "errors of the day" in the background error covariance and a significant improvement of the analysis beyond 3D-Var at a very low cost (Corazza, 2001b).
References
Alligood, K. T., T. D. Sauer and J. A. Yorke, 1996: Chaos: An Introduction to Dynamical Systems. Springer-Verlag, New York.
Buizza, R., J. Tribbia, F. Molteni and T. Palmer, 1993: Computation of optimal unstable structures for numerical weather prediction models. Tellus, 45A, 388-407.
Cai, M., E. Kalnay and Z. Toth, 2001: Potential impact of bred vectors on ensemble forecasting and data assimilation in the Zebiak-Cane model. Submitted to J. of Climate.
Corazza, M., E. Kalnay, D. J. Patil, R. Morss, M. Cai, I. Szunyogh, B. R. Hunt, E. Ott and J. Yorke, 2001: Use of the breeding technique to determine the structure of the "errors of the day". Submitted to Nonlinear Processes in Geophysics.
Corazza, M., E. Kalnay, D. J. Patil, E. Ott, J. Yorke, I. Szunyogh and M. Cai, 2001: Use of the breeding technique in the estimation of the background error covariance matrix for a quasigeostrophic model. AMS Symposium on Observations, Data Assimilation and Predictability, Preprints volume, Orlando, FL, 14-17 January 2002.
Farrell, B., 1988: Small error dynamics and the predictability of atmospheric flow. J. Atmos. Sciences, 45, 163-172.
Kalnay, E., 2002: Atmospheric Modeling, Data Assimilation and Predictability. Chapter 6. Cambridge University Press, UK. In press.
Kalnay, E. and Z. Toth, 1994: Removing growing errors in the analysis. Preprints, Tenth Conference on Numerical Weather Prediction, pp 212-215. Amer. Meteor. Soc., July 18-22, 1994.
Lorenz, E. N., 1965: A study of the predictability of a 28-variable atmospheric model. Tellus, 21, 289-307.
Lorenz, E. N., 1996: Predictability - A problem partly solved. Proceedings of the ECMWF Seminar on Predictability, Reading, England, Vol. 1, 1-18.
Molteni, F. and T. N. Palmer, 1993: Predictability and finite-time instability of the northern winter circulation. Q. J. Roy. Meteorol. Soc., 119, 269-298.
Morss, R. E., 1999: Adaptive observations: Idealized sampling strategies for improving numerical weather prediction. Ph.D. Thesis, Massachusetts Institute of Technology, 225pp.
Ott, E., 1993: Chaos in Dynamical Systems. Cambridge University Press, New York.
Palmer, T. N., R. Gelaro, J. Barkmeijer and R. Buizza, 1998: Singular vectors, metrics and adaptive observations. J. Atmos. Sciences, 55, 633-653.
Patil, D. J., B. R. Hunt, E. Kalnay, J. Yorke, and E. Ott, 2001: Local low dimensionality of atmospheric dynamics. Phys. Rev. Lett., 86, 5878.
Patil, D. J., I. Szunyogh, B. R. Hunt, E. Kalnay, E. Ott, and J. Yorke, 2001: Using large member ensembles to isolate local low dimensionality of atmospheric dynamics. AMS Symposium on Observations, Data Assimilation and Predictability, Preprints volume, Orlando, FL, 14-17 January 2002.
Snyder, C. and A. Joly, 1998: Development of perturbations within growing baroclinic waves. Q. J. Roy. Meteor. Soc., 124, pp 1961.
Szunyogh, I., E. Kalnay and Z. Toth, 1997: A comparison of Lyapunov and singular vectors in a low resolution GCM. Tellus, 49A, 200-227.
Toth, Z. and E. Kalnay, 1993: Ensemble forecasting at NMC - the generation of perturbations. Bull. Amer. Meteorol. Soc., 74, 2317-2330.
Toth, Z. and E. Kalnay, 1997: Ensemble forecasting at NCEP and the breeding method. Mon. Wea. Rev., 125, 3297-3319.
* Corresponding author address: Eugenia Kalnay, Meteorology Department, University of Maryland, College Park, MD 20742-2425, USA; email: ekalnay@atmos.umd.edu
Appendix: BV-dimension
Patil et al. (2001) defined local bred vectors around a point in the 3-dimensional grid of the model by taking the 24 closest horizontal neighbors. If there are k bred vectors available, and N model variables for each grid point, the k local bred vectors form the columns of a 25N x k matrix B. The k x k covariance matrix is C = B^T B. Its eigenvalues are positive, and its eigenvectors v(i) are the singular vectors of the local bred vector subspace. The Bred Vector dimension (BV-dim) measures the local effective dimension:

BV-dim(s_1, ..., s_k) = [ sum_i s_i ]^2 / sum_i s_i^2

where the s_i are the square roots of the eigenvalues of the covariance matrix.
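For concreteness, a minimal numerical sketch of the BV-dimension defined in the appendix: stack k local bred vectors as columns of B, take the singular values s_i (the square roots of the eigenvalues of B^T B), and form (sum s_i)^2 / sum s_i^2. The sizes and the synthetic construction below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
local_dim, k = 25 * 3, 10      # 25 grid points x 3 variables, 10 bred vectors

# Build bred vectors mostly aligned with two directions, so the effective
# local dimension should come out near 2.
basis = rng.normal(size=(local_dim, 2))
B = basis @ rng.normal(size=(2, k)) + 0.05 * rng.normal(size=(local_dim, k))

s = np.linalg.svd(B, compute_uv=False)   # square roots of eigenvalues of B^T B
bv_dim = s.sum()**2 / np.sum(s**2)
print("BV-dimension:", bv_dim)           # close to 2 for this construction
```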
Estimation of chaotic coupled map lattices using symbolic vector dynamics
NASA Astrophysics Data System (ADS)
Wang, Kai; Pei, Wenjiang; Cheung, Yiu-ming; Shen, Yi; He, Zhenya
2010-01-01
In [K. Wang, W.J. Pei, Z.Y. He, Y.M. Cheung, Phys. Lett. A 367 (2007) 316], an original symbolic vector dynamics based method was proposed for initial condition estimation in an additive white Gaussian noise environment. The precision of this estimation method is determined by the symbolic errors of the symbolic vector sequence obtained by symbolizing the received signal. This Letter further develops the symbolic vector dynamical estimation method. We correct symbolic errors using the backward vector and the values estimated under different symbols, and thus the estimation precision can be improved. Both theoretical and experimental results show that this algorithm enables us to recover the initial condition of a coupled map lattice exactly in both noisy and noise-free cases. We thereby provide novel analytical techniques for understanding turbulence in coupled map lattices.
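A minimal sketch of symbolic-dynamics initial-condition estimation on a single tent map (a toy stand-in for the coupled map lattice treated above): symbolize the trajectory, then run the map backward through the symbol sequence, which contracts any seed value onto the true initial condition. The map, length, and seed are illustrative assumptions.

```python
def tent(x):
    """Tent map on [0, 1]."""
    return 2 * x if x < 0.5 else 2 * (1 - x)

def inverse_branch(y, s):
    """Pre-image of y on the branch selected by symbol s."""
    return y / 2 if s == 0 else 1 - y / 2

x0_true = 0.2719
traj, x = [], x0_true
for _ in range(40):                 # forward trajectory
    traj.append(x)
    x = tent(x)
symbols = [0 if v < 0.5 else 1 for v in traj]

est = 0.5                           # arbitrary seed for the backward pass
for s in reversed(symbols):         # each inverse step halves the error
    est = inverse_branch(est, s)
print("estimated x0:", est, " true x0:", x0_true)
```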
NASA Technical Reports Server (NTRS)
Long, S. A. T.
1974-01-01
Formulas are derived for the root-mean-square (rms) displacement, slope, and curvature errors in an azimuth-elevation image trace of an elongated object in space, as functions of the number and spacing of the input data points and the rms elevation error in the individual input data points from a single observation station. Also, formulas are derived for the total rms displacement, slope, and curvature error vectors in the triangulation solution of an elongated object in space due to the rms displacement, slope, and curvature errors, respectively, in the azimuth-elevation image traces from different observation stations. The total rms displacement, slope, and curvature error vectors provide useful figures of merit for determining the relative merits of two or more different triangulation procedures applicable to elongated objects in space.
NASA Technical Reports Server (NTRS)
Mishchenko, M. I.; Lacis, A. A.; Travis, L. D.
1994-01-01
Although neglecting polarization and replacing the rigorous vector radiative transfer equation by its approximate scalar counterpart has no rigorous physical justification, it is a widely used simplification when the incident light is unpolarized and only the intensity of the reflected light is to be computed. We employ accurate vector and scalar multiple-scattering calculations to perform a systematic study of the errors induced by the neglect of polarization in radiance calculations for a homogeneous, plane-parallel Rayleigh-scattering atmosphere (with and without depolarization) above a Lambertian surface. Specifically, we calculate percent errors in the reflected intensity for various directions of light incidence and reflection, optical thicknesses of the atmosphere, single-scattering albedos, depolarization factors, and surface albedos. The numerical data displayed can be used to decide whether or not the scalar approximation may be employed, depending on the parameters of the problem. We show that the errors decrease with increasing depolarization factor and/or increasing surface albedo. For conservative or nearly conservative scattering and small surface albedos, the errors are maximal at optical thicknesses of about 1. The calculated errors may be too large for some practical applications, and, therefore, rigorous vector calculations should be employed whenever possible. However, if approximate scalar calculations are used, we recommend avoiding geometries involving phase angles equal or close to 0 deg and 90 deg, where the errors are especially significant. We propose a theoretical explanation of the large vector/scalar differences in the case of Rayleigh scattering. According to this explanation, the differences are caused by the particular structure of the Rayleigh scattering matrix and come from lower-order (except first-order) light scattering paths involving right scattering angles and right-angle rotations of the scattering plane.
Ahmad, Ahmad F.; Abbas, Zulkifly; Obaiys, Suzan J.; Ibrahim, Norazowa; Hashim, Mansor; Khaleel, Haider
2015-01-01
Bio-composites of oil palm empty fruit bunch (OPEFB) fibres and polycaprolactone (PCL) with a thickness of 1 mm were prepared and characterized. The composites produced from these materials are low in density, inexpensive, environmentally friendly, and possess good dielectric characteristics. The magnitudes of the reflection and transmission coefficients of OPEFB fibre-reinforced PCL composites with different percentages of filler were measured using a rectangular waveguide in conjunction with a microwave vector network analyzer (VNA) in the X-band frequency range. In contrast to the effective medium theory, which states that polymer-based composites with a high dielectric constant can be obtained by doping a filler with a high dielectric constant into a host material with a low dielectric constant, this paper demonstrates that the use of a low filler percentage (12.2% OPEFB) and a high matrix percentage (87.8% PCL) provides excellent results for the dielectric constant and loss factor, whereas 63.8% filler material with 36.2% host material results in lower values for both the dielectric constant and loss factor. The open-ended coaxial (OEC) probe technique, connected to the Agilent vector network analyzer (VNA), is used to determine the dielectric properties of the materials under investigation. The comparative approach indicates that the mean relative error of FEM is smaller than that of NRW in terms of the corresponding S21 magnitude. The present calculation of the matrix/filler percentages indicates the exact amounts of substrate to be used in various physics applications. PMID:26474301
NASA Astrophysics Data System (ADS)
Ochoa Gutierrez, L. H.; Vargas Jimenez, C. A.; Niño Vasquez, L. F.
2011-12-01
The "Sabana de Bogota" (Bogota Savannah) is the most important social and economical center of Colombia. Almost the third of population is concentrated in this region and generates about the 40% of Colombia's Internal Brute Product (IBP). According to this, the zone presents an elevated vulnerability in case that a high destructive seismic event occurs. Historical evidences show that high magnitude events took place in the past with a huge damage caused to the city and indicate that is probable that such events can occur in the next years. This is the reason why we are working in an early warning generation system, using the first few seconds of a seismic signal registered by three components and wide band seismometers. Such system can be implemented using Computational Intelligence tools, designed and calibrated to the particular Geological, Structural and environmental conditions present in the region. The methods developed are expected to work on real time, thus suitable software and electronic tools need to be developed. We used Support Vector Machines Regression (SVMR) methods trained and tested with historic seismic events registered by "EL ROSAL" Station, located near Bogotá, calculating descriptors or attributes as the input of the model, from the first 6 seconds of signal. With this algorithm, we obtained less than 10% of mean absolute error and correlation coefficients greater than 85% in hypocentral distance and Magnitude estimation. With this results we consider that we can improve the method trying to have better accuracy with less signal time and that this can be a very useful model to be implemented directly in the seismological stations to generate a fast characterization of the event, broadcasting not only raw signal but pre-processed information that can be very useful for accurate Early Warning Generation.
Test of Understanding of Vectors: A Reliable Multiple-Choice Vector Concept Test
ERIC Educational Resources Information Center
Barniol, Pablo; Zavala, Genaro
2014-01-01
In this article we discuss the findings of our research on students' understanding of vector concepts in problems without physical context. First, we develop a complete taxonomy of the most frequent errors made by university students when learning vector concepts. This study is based on the results of several test administrations of open-ended…
NASA Astrophysics Data System (ADS)
Cilden-Guler, Demet; Kaymaz, Zerefsan; Hajiyev, Chingiz
2018-01-01
In this study, different geomagnetic field models are compared in order to study the errors resulting from the representation of magnetic fields that affect the satellite attitude system. For this purpose, we used magnetometer data from two Low Earth Orbit (LEO) spacecraft and the IGRF-12 (Thébault et al., 2015) and T89 (Tsyganenko, 1989) geomagnetic models to study the differences between the magnetic field components, the field strength, and the angle between the predicted and observed vector magnetic fields. The comparisons were made during geomagnetically active and quiet days to see the effects of geomagnetic storms and sub-storms on the predicted and observed magnetic fields and angles. The angles, in turn, are used to estimate the spacecraft attitude; hence, the differences between model and observations, as well as between the two models, become important for determining and reducing the errors associated with the models under different space environment conditions. We show that the models differ from the observations even during geomagnetically quiet times, but the associated errors increase during geomagnetically active times. We find that the T89 model gives predictions closer to the observations, especially during active times, and its errors are smaller compared to the IGRF-12 model. The magnitude of the error in the angle under both environmental conditions was found to be less than 1°. For the first time, the geomagnetic models were used to address the effects of the near-Earth space environment on satellite attitude.
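A minimal sketch of the comparison metric described above: the angle between the model-predicted and observed magnetic field vectors, which propagates directly into attitude error. The example vectors are illustrative, not flight data.

```python
import numpy as np

def angle_deg(b_model, b_obs):
    """Angle between two 3-D field vectors, in degrees."""
    c = np.dot(b_model, b_obs) / (np.linalg.norm(b_model) * np.linalg.norm(b_obs))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

b_igrf = np.array([22100.0, 1500.0, 38900.0])   # nT, assumed model prediction
b_obs  = np.array([22240.0, 1620.0, 38760.0])   # nT, assumed measurement

print(f"model-observation angle: {angle_deg(b_igrf, b_obs):.3f} deg")
```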
Hester, Robert; Murphy, Kevin; Brown, Felicity L; Skilleter, Ashley J
2010-11-17
Punishing an error to shape subsequent performance is a major tenet of individual and societal level behavioral interventions. Recent work examining error-related neural activity has identified that the magnitude of activity in the posterior medial frontal cortex (pMFC) is predictive of learning from an error, whereby greater activity in this region predicts adaptive changes in future cognitive performance. It remains unclear how punishment influences error-related neural mechanisms to effect behavior change, particularly in key regions such as pMFC, which previous work has demonstrated to be insensitive to punishment. Using an associative learning task that provided monetary reward and punishment for recall performance, we observed that when recall errors were categorized by subsequent performance--whether the failure to accurately recall a number-location association was corrected at the next presentation of the same trial--the magnitude of error-related pMFC activity predicted future correction. However, the pMFC region was insensitive to the magnitude of punishment an error received and it was the left insula cortex that predicted learning from the most aversive outcomes. These findings add further evidence to the hypothesis that error-related pMFC activity may reflect more than a prediction error in representing the value of an outcome. The novel role identified here for the insular cortex in learning from punishment appears particularly compelling for our understanding of psychiatric and neurologic conditions that feature both insular cortex dysfunction and a diminished capacity for learning from negative feedback or punishment.
Total error shift patterns for daily CT on rails image-guided radiotherapy to the prostate bed
2011-01-01
Background To evaluate the daily total error shift patterns in post-prostatectomy patients undergoing image-guided radiotherapy (IGRT) with a diagnostic-quality computed tomography (CT) on rails system. Methods A total of 17 consecutive post-prostatectomy patients receiving adjuvant or salvage IMRT using CT-on-rails IGRT were analyzed. The prostate bed's daily total error shifts were evaluated for a total of 661 CT scans. Results In the right-left, cranial-caudal, and posterior-anterior directions, 11.5%, 9.2%, and 6.5% of the 661 scans required no position adjustments; 75.3%, 66.1%, and 56.8% required a shift of 1 - 5 mm; 11.5%, 20.9%, and 31.2% required a shift of 6 - 10 mm; and 1.7%, 3.8%, and 5.5% required a shift of more than 10 mm, respectively. There was evidence of correlation between the x and y, x and z, and y and z axes in 3, 3, and 3 of 17 patients, respectively. Univariate (ANOVA) analysis showed that the total error pattern was random in the x, y, and z axes for 10, 5, and 2 of 17 patients, respectively, and systematic for the rest. Multivariate (MANOVA) analysis showed that the (x,y), (x,z), (y,z), and (x, y, z) total error patterns were random in 5, 1, 1, and 1 of 17 patients, respectively, and systematic for the rest. Conclusions The overall daily total error shift pattern for these 17 patients, simulated with an empty bladder and treated with CT-on-rails IGRT, was predominantly systematic. Despite this, the temporal vector trends showed complex behaviors and unpredictable changes in magnitude and direction. These findings highlight the importance of using daily IGRT in post-prostatectomy patients. PMID:22024279
Currency crisis indication by using ensembles of support vector machine classifiers
NASA Astrophysics Data System (ADS)
Ramli, Nor Azuana; Ismail, Mohd Tahir; Wooi, Hooy Chee
2014-07-01
Many methods have been tried in the analysis of currency crises. However, not all of them provide accurate indications. This paper introduces an ensemble of classifiers based on Support Vector Machines, an approach that has not previously been applied to currency crisis analysis, with the aim of increasing indication accuracy. The proposed ensemble classifiers' performance is measured using percentage accuracy, root mean squared error (RMSE), the area under the Receiver Operating Characteristic (ROC) curve, and the Type II error. The performance of the ensemble of Support Vector Machine classifiers is compared with that of a single Support Vector Machine classifier, and both classifiers are tested on a data set covering 27 countries with 12 macroeconomic indicators for each country. Our analyses show that the ensemble of Support Vector Machine classifiers outperforms the single Support Vector Machine classifier at indicating a currency crisis across a range of standard measures of classifier performance.
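A minimal sketch of an SVM ensemble for crisis indication, assuming scikit-learn (1.2+ API) and synthetic stand-ins for the 12 macroeconomic indicators per country mentioned above.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(10)
X = rng.normal(size=(600, 12))                       # 12 indicators (synthetic)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 600) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

single = SVC(kernel="rbf", probability=True).fit(X_tr, y_tr)
ensemble = BaggingClassifier(
    estimator=SVC(kernel="rbf", probability=True),   # bagged SVM ensemble
    n_estimators=25, random_state=0).fit(X_tr, y_tr)

for name, clf in [("single SVM", single), ("SVM ensemble", ensemble)]:
    p = clf.predict(X_te)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{name}: accuracy {accuracy_score(y_te, p):.3f}, ROC AUC {auc:.3f}")
```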
Zhang, Lu; Pang, Xiaodan; Ozolins, Oskars; Udalcovs, Aleksejs; Popov, Sergei; Xiao, Shilin; Hu, Weisheng; Chen, Jiajia
2018-04-01
We propose a spectrally efficient digitized radio-over-fiber (D-RoF) system that groups highly correlated neighboring samples of the analog signals into multidimensional vectors, with the k-means clustering algorithm adopted for adaptive quantization. A 30 Gbit/s D-RoF system is experimentally demonstrated to validate the proposed scheme, reporting a carrier aggregation of up to 40 × 100 MHz orthogonal frequency division multiplexing (OFDM) channels with a quadrature amplitude modulation (QAM) order of 4, and an aggregation of 10 × 100 MHz OFDM channels with a QAM order of 16384. Equivalent common public radio interface rates from 37 to 150 Gbit/s are supported. An error vector magnitude (EVM) of 8% is achieved with 4 quantization bits, and the EVM can be further reduced to 1% by increasing the number of quantization bits to 7. Compared with conventional pulse code modulation-based D-RoF systems, the proposed D-RoF system improves the signal-to-noise ratio by up to ∼9 dB and greatly reduces the EVM for the same number of quantization bits.
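A minimal sketch of the quantization idea described above, assuming scikit-learn: group neighboring waveform samples into short vectors and let k-means learn a codebook, so the quantizer adapts to the correlation between samples. The signal and all sizes are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(11)
t = np.arange(80_000)
signal = np.sin(2 * np.pi * t / 37) + 0.05 * rng.normal(size=t.size)

dim, n_bits = 4, 4                               # vector length, bits per vector
vectors = signal[: t.size - t.size % dim].reshape(-1, dim)

# Learn a 2**n_bits-entry codebook of sample vectors, then quantize.
km = KMeans(n_clusters=2**n_bits, n_init=4, random_state=0).fit(vectors)
quantized = km.cluster_centers_[km.labels_].ravel()

err = quantized - vectors.ravel()
snr_db = 10 * np.log10(np.mean(vectors.ravel()**2) / np.mean(err**2))
print(f"reconstruction SNR: {snr_db:.1f} dB")
```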
NASA Astrophysics Data System (ADS)
Li, Tianxing; Zhou, Junxiang; Deng, Xiaozhong; Li, Jubo; Xing, Chunrong; Su, Jianxin; Wang, Huiliang
2018-07-01
Manufacturing errors of the cycloidal gear are the key factor affecting the transmission accuracy of a robot rotary vector (RV) reducer. A methodology is proposed to realize the digitized measurement and data processing of cycloidal gear manufacturing errors based on the gear measuring center, which can quickly and accurately measure and evaluate the manufacturing error of the cycloidal gear using both whole-tooth-profile and single-tooth-profile measurements. By analyzing the particularity of the cycloidal profile and its effect on the actual meshing characteristics of the RV transmission, the cycloid profile measurement strategy is planned, and the theoretical profile model and error measurement model of the cycloid-pin gear transmission are established. Through digital processing technology, the theoretical trajectory of the probe and the normal vector of the measured point are calculated. By means of precision measurement principles and error compensation theory, a mathematical model for the accurate calculation and data processing of the manufacturing error is constructed, and the actual manufacturing error of the cycloidal gear is obtained by an iterative optimization solution. Finally, measurement experiments on the cycloidal gear tooth profile are carried out on the gear measuring center and on a HEXAGON coordinate measuring machine. The measurement results verify the correctness and validity of the measurement theory and method. This methodology will provide a basis for the accurate evaluation and effective control of the manufacturing precision of cycloidal gears in robot RV reducers.
NASA Astrophysics Data System (ADS)
Rivière, G.; Hua, B. L.
2004-10-01
A new perturbation initialization method is used to quantify error growth due to inaccuracies of the forecast model initial conditions in a quasigeostrophic box ocean model describing a wind-driven double gyre circulation. This method is based on recent analytical results on the Lagrangian alignment dynamics of the perturbation velocity vector in quasigeostrophic flows. More specifically, it consists of initializing a unique perturbation from the sole knowledge of the control flow properties at the initial time of the forecast, whose velocity vector orientation satisfies a Lagrangian equilibrium criterion. This Alignment-based Initialization method is hereafter denoted the AI method. In terms of the spatial distribution of the errors, the AI error forecast compares favorably with the mean error obtained with a Monte Carlo ensemble prediction. It is shown that the AI forecast is on average as efficient as the error forecast initialized with the leading singular vector for the palinstrophy norm, and significantly more efficient than those for the total energy and enstrophy norms. Furthermore, a more precise examination shows that the AI forecast is systematically relevant for all control flows, whereas the palinstrophy singular vector forecast sometimes leads to very good scores and sometimes to very bad ones. A principal component analysis at the final time of the forecast shows that the AI mode spatial structure is comparable to that of the first eigenvector of the error covariance matrix for a "bred mode" ensemble. Furthermore, the kinetic energy of the AI mode grows at the same constant rate as that of the "bred modes" from the initial time to the final time of the forecast and is therefore characterized by a sustained phase of error growth. In this sense, the AI mode based on the Lagrangian dynamics of the perturbation velocity orientation provides a rationale for the "bred mode" behavior.
New Syndrome Decoding Techniques for the (n, K) Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1983-01-01
This paper presents a new syndrome decoding algorithm for the (n, k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset a recursive, Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D). An example, illustrating the new decoding algorithm, is given for the binary nonsystematic (3,1) CC.
Simplified Syndrome Decoding of (n, 1) Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1983-01-01
A new syndrome decoding algorithm for the (n, 1) convolutional codes (CC) that is different from, and simpler than, the previous syndrome decoding algorithm of Schalkwijk and Vinck is presented. The new algorithm uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). This set of Diophantine solutions is a coset of the CC space. A recursive or Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D) in this error coset. An example illustrating the new decoding algorithm is given for the binary nonsystematic (2,1) CC.
Marathe, A R; Taylor, D M
2015-08-01
Decoding algorithms for brain-machine interfacing (BMI) are typically optimized only to reduce the magnitude of decoding errors. Our goal was to systematically quantify how four characteristics of BMI command signals impact closed-loop performance: (1) error magnitude, (2) distribution of different frequency components in the decoding errors, (3) processing delays, and (4) command gain. To systematically evaluate these different command features and their interactions, we used a closed-loop BMI simulator where human subjects used their own wrist movements to command the motion of a cursor to targets on a computer screen. Random noise with three different power distributions and four different relative magnitudes was added to the ongoing cursor motion in real time to simulate imperfect decoding. These error characteristics were tested with four different visual feedback delays and two velocity gains. Participants had significantly more trouble correcting for errors with a larger proportion of low-frequency, slow-time-varying components than they did with jittery, higher-frequency errors, even when the error magnitudes were equivalent. When errors were present, a movement delay often increased the time needed to complete the movement by an order of magnitude more than the delay itself. Scaling down the overall speed of the velocity command can actually speed up target acquisition time when low-frequency errors and delays are present. This study is the first to systematically evaluate how the combination of these four key command signal features (including the relatively unexplored error power distribution) and their interactions impact closed-loop performance independent of any specific decoding method. The equations we derive relating closed-loop movement performance to these command characteristics can provide guidance on how best to balance these different factors when designing BMI systems. The equations reported here also provide an efficient way to compare a diverse range of decoding options offline.
NASA Astrophysics Data System (ADS)
Marathe, A. R.; Taylor, D. M.
2015-08-01
Objective. Decoding algorithms for brain-machine interfacing (BMI) are typically optimized only to reduce the magnitude of decoding errors. Our goal was to systematically quantify how four characteristics of BMI command signals impact closed-loop performance: (1) error magnitude, (2) distribution of different frequency components in the decoding errors, (3) processing delays, and (4) command gain. Approach. To systematically evaluate these different command features and their interactions, we used a closed-loop BMI simulator where human subjects used their own wrist movements to command the motion of a cursor to targets on a computer screen. Random noise with three different power distributions and four different relative magnitudes was added to the ongoing cursor motion in real time to simulate imperfect decoding. These error characteristics were tested with four different visual feedback delays and two velocity gains. Main results. Participants had significantly more trouble correcting for errors with a larger proportion of low-frequency, slow-time-varying components than they did with jittery, higher-frequency errors, even when the error magnitudes were equivalent. When errors were present, a movement delay often increased the time needed to complete the movement by an order of magnitude more than the delay itself. Scaling down the overall speed of the velocity command can actually speed up target acquisition time when low-frequency errors and delays are present. Significance. This study is the first to systematically evaluate how the combination of these four key command signal features (including the relatively unexplored error power distribution) and their interactions impact closed-loop performance independent of any specific decoding method. The equations we derive relating closed-loop movement performance to these command characteristics can provide guidance on how best to balance these different factors when designing BMI systems. The equations reported here also provide an efficient way to compare a diverse range of decoding options offline.
Robust attitude control design for spacecraft under assigned velocity and control constraints.
Hu, Qinglei; Li, Bo; Zhang, Youmin
2013-07-01
A novel robust nonlinear control design under the constraints of assigned velocity and actuator torque is investigated for attitude stabilization of a rigid spacecraft. More specifically, a nonlinear feedback control is first developed by explicitly taking into account the constraints on individual angular velocity components as well as external disturbances. Considering further the actuator misalignments and magnitude deviation, a modified robust least-squares based control allocator is employed to deal with the problem of distributing the previously designed three-axis moments over the available actuators, in which the focus of the control allocation is to find the optimal control vector of actuators by minimizing the worst-case residual error using programming algorithms. The attitude control performance using the controller structure is evaluated through a numerical example. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
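A minimal numerical sketch of robust least-squares control allocation follows. It assumes a hypothetical 3x4 effectiveness matrix B and uses the known reduction (El Ghaoui and Lebret, 1997) of the worst-case residual under a spectral-norm-bounded uncertainty to the regularized objective ||Bu - tau|| + rho*||u||; the paper's actual allocator and solver may differ, and the smooth quasi-Newton solver used here is only adequate for illustration:

```python
import numpy as np
from scipy.optimize import minimize

def robust_allocate(B, tau, rho=0.05, u_max=1.0):
    """Distribute a commanded 3-axis moment tau over redundant actuators.

    For a spectral-norm bound ||dB|| <= rho on the effectiveness-matrix
    uncertainty, the worst-case residual max ||(B + dB)u - tau|| equals
    ||B u - tau|| + rho * ||u|| (El Ghaoui & Lebret, 1997); it is
    minimized here numerically under simple actuator saturation bounds.
    """
    m = B.shape[1]
    u0 = np.linalg.lstsq(B, tau, rcond=None)[0]        # nominal allocation
    obj = lambda u: np.linalg.norm(B @ u - tau) + rho * np.linalg.norm(u)
    res = minimize(obj, u0, bounds=[(-u_max, u_max)] * m, method="L-BFGS-B")
    return res.x

# Hypothetical 3x4 torque-distribution matrix for four reaction wheels:
B = np.array([[1.0, 0.0, 0.0, 0.577],
              [0.0, 1.0, 0.0, 0.577],
              [0.0, 0.0, 1.0, 0.577]])
u = robust_allocate(B, tau=np.array([0.10, -0.05, 0.02]))
print(u, B @ u)   # allocated commands and the realized moment
```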
NASA Astrophysics Data System (ADS)
Kadaj, Roman
2016-12-01
The adjustment problem of the so-called combined (hybrid, integrated) network created with GNSS vectors and terrestrial observations has been the subject of many theoretical and applied works. Network adjustment has been considered in various mathematical spaces: in the Cartesian geocentric system, on a reference ellipsoid, and on a mapping plane. For practical reasons, a geodetic coordinate system associated with the reference ellipsoid is often adopted. In this case, the Cartesian GNSS vectors are converted, for example, into geodesic parameters (azimuth and length) on the ellipsoid, but the simplest form of converted pseudo-observation is the direct difference of the geodetic coordinates. Unfortunately, such an approach may be significantly distorted by a systematic error resulting from the position error of the GNSS vector before its projection on the ellipsoid surface. In this paper, an analysis of the impact of this error on the determined measures of geometric ellipsoid elements, including the differences of geodetic coordinates or geodesic parameters, is presented. The analysis of the adjustment of a combined network on the ellipsoid shows that the optimal functional approach for the satellite observations is to create the observational equations directly for the original GNSS Cartesian vector components, writing them as functions of the geodetic coordinates (in numerical applications, linearized forms of the observational equations with explicitly specified coefficients are used). By retaining the original character of the Cartesian vector, one avoids any systematic errors that may occur in the conversion of the original GNSS vectors to ellipsoid elements, for example the vector of the geodesic parameters. The problem is theoretically developed and numerically tested. An example of the adjustment of a subnet taken from the database of reference stations of the ASG-EUPOS system was considered for the preferred functional model of the GNSS observations.
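As a sketch of what "observational equations directly for the original GNSS Cartesian vector components" means, assuming the standard geodetic-to-Cartesian mapping (the symbols below are generic, not taken from the paper):

```latex
% Geodetic-to-Cartesian mapping for a station with latitude B, longitude L,
% ellipsoidal height h; a: semi-major axis, e^2: first eccentricity squared,
% N = a / sqrt(1 - e^2 sin^2 B):
\[ X = (N+h)\cos B\cos L, \quad Y = (N+h)\cos B\sin L, \quad
   Z = \bigl(N(1-e^{2})+h\bigr)\sin B . \]
% Observation equation for the GNSS vector between stations i and j, kept in
% its original Cartesian form rather than converted to ellipsoid elements:
\[ \Delta\mathbf{X}_{ij} + \mathbf{v}_{ij}
   = \mathbf{X}_{j}(B_{j},L_{j},h_{j}) - \mathbf{X}_{i}(B_{i},L_{i},h_{i}), \]
% linearized about approximate coordinates for the least-squares adjustment:
\[ \mathbf{v}_{ij} = \mathbf{A}_{j}\,d\mathbf{x}_{j}
   - \mathbf{A}_{i}\,d\mathbf{x}_{i} - \boldsymbol{\ell}_{ij}. \]
```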
Shear-Sensitive Liquid Crystal Coating Method Applied Through Transparent Test Surfaces
NASA Technical Reports Server (NTRS)
Reda, Daniel C.; Wilder, Michael C.
1999-01-01
Research conducted at NASA Ames Research Center has shown that the color-change response of a shear-sensitive liquid crystal coating (SSLCC) to aerodynamic shear depends on both the magnitude of the local shear vector and its direction relative to the observer's in-plane line of sight. In conventional applications, the surface of the SSLCC exposed to aerodynamic shear is illuminated with white light from the normal direction and observed from an oblique above-plane view angle of order 30 deg. In this top-light/top-view mode, shear vectors with components directed away from the observer cause the SSLCC to exhibit color-change responses. At any surface point, the maximum color change (measured from the no-shear red or orange color) always occurs when the local vector is aligned with, and directed away from, the observer. The magnitude of the color change at this vector-observer-aligned orientation scales directly with shear stress magnitude. Conversely, any surface point exposed to a shear vector with a component directed toward the observer exhibits a non-color-change response, always characterized by a rusty-red or brown color, independent of both shear magnitude and direction. These unique, highly directional color-change responses of SSLCCs to aerodynamic shear allow for the full-surface visualization and measurement of continuous shear stress vector distributions. The objective of the present research was to investigate application of the SSLCC method through a transparent test surface. In this new back-light/back-view mode, the exposed surface of the SSLCC would be subjected to aerodynamic shear stress while the contact surface between the SSLCC and the solid, transparent wall would be illuminated and viewed in the same geometrical arrangement as applied in conventional applications. It was unknown at the outset whether or not color-change responses would be observable from the contact surface of the SSLCC, and, if seen, how these color-change responses might relate to those observed in standard practice.
Zhang, Jiamei; Wang, Yan; Wu, Wenjing; Xu, Lulu; Li, Xiaojing; Dou, Rui
2015-01-24
To evaluate the refractive outcomes for the correction of low to moderate astigmatism up to 1 year following small incision lenticule extraction (SMILE) surgery. This retrospective study enrolled 98 eyes from 98 patients who underwent SMILE surgery for the correction of myopia and astigmatism. Only right eyes were included in this study to avoid the bias of orientation errors. The vector method was used to analyze the outcomes of astigmatism at 1 month, 6 months and 12 months after the procedure, including the double-angle plots, correction index (CI), index of success (IOS), angle of error (AofE) and magnitude of error (MofE). The effectiveness, safety, stability and predictability were also investigated during the 12-month follow-up. The preoperative cylinder ranged from -2.75 D to -0.25 D (average of -0.90±0.68 D), and the mean postoperative cylinder values were -0.24±0.29 D, -0.24±0.29 D, and -0.20±0.27 D at 1 month, 6 months, and 12 months, respectively. The mean astigmatism in vector form was -0.14 D×27.19° at 1 month, -0.13 D×27.29° at 6 months, and -0.10 D×28.63° at 12 months after surgery. The CI was 1.00±0.32 and IOS was 0.29±0.44 at the 12-month follow-up. Significant negative correlations were found between the CI and absolute target induced astigmatism (TIA) value, and positive correlations were found between the IOS and absolute AofE value (P<0.05). The MofE was limited within ±1.00 D at the 12-month follow-up. Fifty-six eyes (57.1%) gained one line in corrected distance visual acuity (CDVA) and five eyes (5.1%) gained two lines. There were no significant differences observed in the refractive outcomes among time points. SMILE surgery was effective and safe in correcting low to moderate astigmatism, and stable refractive outcomes were observed at the long-term follow-up. The undercorrection of astigmatism could possibly be influenced by attempted astigmatism correction preoperatively, the axis rotation during the surgery or wound healing postoperatively. This study suggested that nomograms should be adjusted in correcting astigmatism with SMILE surgery.
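The vector indices used here (CI, IOS, AofE, MofE) follow Alpins-style double-angle analysis. A minimal sketch, assuming standard double-angle conventions and a full-correction target; the study's exact sign conventions may differ:

```python
import numpy as np

def double_angle(cyl, axis_deg):
    """Cylinder -> double-angle vector (Alpins-style astigmatism analysis)."""
    a = np.radians(2.0 * axis_deg)
    return np.array([cyl * np.cos(a), cyl * np.sin(a)])

def vector_indices(pre, target, post):
    """pre/target/post: (cylinder D, axis deg). Returns CI, IOS, AofE, MofE."""
    TIA = double_angle(*target) - double_angle(*pre)   # intended change
    SIA = double_angle(*post) - double_angle(*pre)     # achieved change
    DV = TIA - SIA                                     # difference vector
    tia, sia = np.linalg.norm(TIA), np.linalg.norm(SIA)
    CI = sia / tia                       # correction index
    IOS = np.linalg.norm(DV) / tia       # index of success
    MofE = sia - tia                     # magnitude of error
    # angle of error, halved back from double-angle space (not wrapped):
    AofE = 0.5 * np.degrees(np.arctan2(SIA[1], SIA[0])
                            - np.arctan2(TIA[1], TIA[0]))
    return CI, IOS, AofE, MofE

# Illustrative values only (not the study's data):
print(vector_indices(pre=(-0.90, 10), target=(0.0, 0), post=(-0.20, 28)))
```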
Extracting remanent magnetization from magnetic data inversion
NASA Astrophysics Data System (ADS)
Liu, S.; Fedi, M.; Baniamerian, J.; Hu, X.
2017-12-01
Remanent magnetization is an important vector property of rock and ore magnetism; it records the intensity and direction of the primary geomagnetic field across geological periods and hence provides critical evidence of tectonic movement and sedimentary evolution. We extract the remanence information from the distributions of the inverted magnetization vector. First, the direction of the total magnetization vector is estimated from the reduced-to-pole anomaly (max-min algorithm) and from its correlations with other magnitude magnetic transforms, such as the magnitude magnetic anomaly and the normalized source strength. Then we invert the data for the magnetization intensity, and finally the intensity and direction of the remanent magnetization are separated from the total magnetization vector with a generalized formula for the apparent susceptibility based on a priori information on the Koenigsberger ratio. Our approach is used to investigate the targeted resources and geologic processes of mining areas in China.
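As a sketch of the separation step, assuming the induced magnetization is parallel to the present geomagnetic field H0 (symbols generic, not taken from the paper):

```latex
% Koenigsberger ratio: remanent over induced magnetization intensity,
% with the induced part along the present field H_0:
\[ Q = \frac{|\mathbf{M}_{r}|}{|\mathbf{M}_{i}|}, \qquad
   \mathbf{M}_{i} = \chi\,\mathbf{H}_{0}. \]
% Given the total magnetization vector M from inversion, the remanence is
\[ \mathbf{M}_{r} = \mathbf{M} - \chi\,\mathbf{H}_{0}, \]
% where a priori information on Q constrains the apparent susceptibility chi.
```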
NASA Technical Reports Server (NTRS)
Lin, Qian; Allebach, Jan P.
1990-01-01
An adaptive vector linear minimum mean-squared error (LMMSE) filter for multichannel images with multiplicative noise is presented. It is shown theoretically that the mean-squared error in the filter output is reduced by making use of the correlation between image bands. The vector and conventional scalar LMMSE filters are applied to a three-band SIR-B SAR image, and their performance is compared. Based on a multiplicative noise model, the per-pel maximum likelihood classifier was derived. The authors extend this to the design of sequential and robust classifiers. These classifiers are also applied to the three-band SIR-B SAR image.
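A minimal sketch of a vector LMMSE filter for unit-mean multiplicative noise, using global band statistics for brevity (practical speckle filters estimate local statistics in a sliding window; the noise variance and three-band covariance below are synthetic stand-ins, not SIR-B values):

```python
import numpy as np

def vector_lmmse(y, noise_var):
    """Vector LMMSE estimate for multichannel pixels y (N x B) under
    unit-mean multiplicative noise y = x * n. With independent per-band
    noise, off-diagonal covariances pass through unchanged while the
    diagonal gains a factor (1 + noise_var)."""
    my = y.mean(axis=0)
    Cy = np.cov(y, rowvar=False)
    mx = my                               # E[n] = 1, so E[y] = E[x]
    Cx = Cy.copy()
    d = np.diag_indices_from(Cx)
    Cx[d] = (Cy[d] - noise_var * mx**2) / (1.0 + noise_var)
    G = Cx @ np.linalg.inv(Cy)            # LMMSE gain (Cxy = Cx here)
    return mx + (y - my) @ G.T

rng = np.random.default_rng(0)
x = rng.multivariate_normal([5, 4, 3],
                            [[1, .8, .6], [.8, 1, .8], [.6, .8, 1]], 5000)
y = x * (1 + 0.2 * rng.standard_normal(x.shape))   # multiplicative speckle
xhat = vector_lmmse(y, noise_var=0.04)
print(np.mean((y - x) ** 2), np.mean((xhat - x) ** 2))  # MSE drops
```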
NASA Astrophysics Data System (ADS)
Hecht-Nielsen, Robert
1997-04-01
A new universal one-chart smooth manifold model for vector information sources is introduced. Natural coordinates (a particular type of chart) for such data manifolds are then defined. Uniformly quantized natural coordinates form an optimal vector quantization code for a general vector source. Replicator neural networks (a specialized type of multilayer perceptron with three hidden layers) are then introduced. As properly configured examples of replicator networks approach minimum mean squared error (e.g., via training and architecture adjustment using randomly chosen vectors from the source), these networks automatically develop a mapping which, in the limit, produces natural coordinates for arbitrary source vectors. The new concept of removable noise (a noise model applicable to a wide variety of real-world noise processes) is then discussed. Replicator neural networks, when configured to approach minimum mean squared reconstruction error (e.g., via training and architecture adjustment on randomly chosen examples from a vector source, each with randomly chosen additive removable noise contamination), in the limit eliminate removable noise and produce natural coordinates for the data vector portions of the noise-corrupted source vectors. Considerations regarding the selection of the dimension of a data manifold source model and the training/configuration of replicator neural networks are discussed.
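A minimal sketch of a replicator-style network in PyTorch. Layer sizes, activation, and training data are illustrative placeholders; the paper's configuration, including how the middle (natural-coordinate) layer is treated, is more specific:

```python
import torch
import torch.nn as nn

# Replicator network: a multilayer perceptron with three hidden layers whose
# narrow middle layer learns, in the limit discussed in the paper, natural
# coordinates for the data manifold. Sizes below are illustrative.
d_in, d_code = 16, 3
net = nn.Sequential(
    nn.Linear(d_in, 32), nn.Tanh(),      # hidden layer 1
    nn.Linear(32, d_code), nn.Tanh(),    # hidden layer 2: natural coordinates
    nn.Linear(d_code, 32), nn.Tanh(),    # hidden layer 3
    nn.Linear(32, d_in),                 # replicate the input vector
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                   # minimum mean squared reconstruction

for step in range(2000):
    x = torch.rand(256, d_in)            # stand-in for source vectors (+ noise)
    loss = loss_fn(net(x), x)            # output should reproduce the input
    opt.zero_grad(); loss.backward(); opt.step()
```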
A short feature vector for image matching: The Log-Polar Magnitude feature descriptor
Hast, Anders; Wählby, Carolina; Sintorn, Ida-Maria
2017-01-01
The choice of an optimal feature detector-descriptor combination for image matching often depends on the application and the image type. In this paper, we propose the Log-Polar Magnitude feature descriptor—a rotation, scale, and illumination invariant descriptor that achieves comparable performance to SIFT on a large variety of image registration problems but with much shorter feature vectors. The descriptor is based on the Log-Polar Transform followed by a Fourier Transform and selection of the magnitude spectrum components. Selecting different frequency components allows optimizing for image patterns specific to a particular application. In addition, by relying only on the coordinates of the found features and (optionally) feature sizes, our descriptor is completely detector independent. We propose feature vectors of length 48 or 56 that can potentially be shortened even further depending on the application. Shorter feature vectors result in better memory usage and faster matching. This, combined with the fact that the descriptor does not require a time-consuming feature orientation estimation (the rotation invariance is achieved solely by using the magnitude spectrum of the Log-Polar Transform), makes it particularly attractive for applications with limited hardware capacity. Evaluation is performed on the standard Oxford dataset and two different microscopy datasets; one with fluorescence and one with transmission electron microscopy images. Our method performs better than SURF and comparably to SIFT on the Oxford dataset, and better than SIFT on both microscopy datasets, indicating that it is particularly useful in applications with microscopy images. PMID:29190737
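A minimal sketch of the descriptor pipeline as described. Grid sizes, radii, and the keep-count of 56 are illustrative, and the simple low-index slice below stands in for the paper's application-specific frequency-component selection:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def log_polar_magnitude(img, center, r_min=2.0, r_max=32.0,
                        n_r=16, n_t=16, n_keep=56):
    """Sample a patch on a log-polar grid around `center`, take the 2D FFT
    magnitude (rotation -> cyclic shift in angle, scale -> shift in
    log-radius, both removed by the magnitude spectrum), then keep a short
    slice of components as the feature vector."""
    radii = np.geomspace(r_min, r_max, n_r)
    angles = np.linspace(0, 2 * np.pi, n_t, endpoint=False)
    rr, tt = np.meshgrid(radii, angles, indexing="ij")
    rows = center[0] + rr * np.sin(tt)
    cols = center[1] + rr * np.cos(tt)
    patch = map_coordinates(img, [rows, cols], order=1, mode="nearest")
    mag = np.abs(np.fft.fft2(patch))
    feat = mag.ravel()[:n_keep]          # low-index components (illustrative)
    return feat / (np.linalg.norm(feat) + 1e-12)   # illumination normalization
```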
NASA Astrophysics Data System (ADS)
Duan, Wansuo; Zhao, Peng
2017-04-01
Within the Zebiak-Cane model, the nonlinear forcing singular vector (NFSV) approach is used to investigate the role of model errors in the "Spring Predictability Barrier" (SPB) phenomenon within ENSO predictions. NFSV-related errors have the largest negative effect on the uncertainties of El Niño predictions. NFSV errors can be classified into two types: the first is characterized by a zonal dipolar pattern of SST anomalies (SSTA), with the western poles centered in the equatorial central-western Pacific exhibiting positive anomalies and the eastern poles in the equatorial eastern Pacific exhibiting negative anomalies; and the second is characterized by a pattern almost opposite the first type. The first type of error tends to have the worst effects on El Niño growth-phase predictions, whereas the latter often yields the largest negative effects on decaying-phase predictions. The evolution of prediction errors caused by NFSV-related errors exhibits prominent seasonality, with the fastest error growth in the spring and/or summer seasons; hence, these errors result in a significant SPB related to El Niño events. The linear counterpart of NFSVs, the (linear) forcing singular vector (FSV), induces a less significant SPB because it contains smaller prediction errors. Random errors cannot generate a SPB for El Niño events. These results show that the occurrence of an SPB is related to the spatial patterns of tendency errors. The NFSV tendency errors cause the most significant SPB for El Niño events. In addition, NFSVs often concentrate these large value errors in a few areas within the equatorial eastern and central-western Pacific, which likely represent those areas sensitive to El Niño predictions associated with model errors. Meanwhile, these areas are also exactly consistent with the sensitive areas related to initial errors determined by previous studies. This implies that additional observations in the sensitive areas would not only improve the accuracy of the initial field but also promote the reduction of model errors to greatly improve ENSO forecasts.
Vector scattering analysis of TPF coronagraph pupil masks
NASA Astrophysics Data System (ADS)
Ceperley, Daniel P.; Neureuther, Andrew R.; Lieber, Michael D.; Kasdin, N. Jeremy; Shih, Ta-Ming
2004-10-01
Rigorous finite-difference time-domain electromagnetic simulation is used to simulate the scattering from prototypical pupil mask cross-section geometries and to quantify the differences from the normally assumed ideal on-off behavior. Shaped pupil plane masks are a promising technology for the TPF coronagraph mission. However, the stringent requirements placed on the optics require that the detailed behavior of the edge-effects of these masks be examined carefully. End-to-end optical system simulation is essential, and an important aspect is the polarization and cross-section dependent edge-effects which are the subject of this paper. Pupil plane masks are similar in many respects to photomasks used in the integrated circuit industry. Simulation capabilities such as the FDTD simulator, TEMPEST, developed for analyzing polarization and intensity imbalance effects in nonplanar phase-shifting photomasks, offer a leg-up in analyzing coronagraph masks. However, the accuracy in magnitude and phase required for modeling a coronagraph system is extremely demanding, and previously inconsequential errors may be of the same order of magnitude as the physical phenomena under study. In this paper, the effects of thick masks, finite-conductivity metals, and various cross-section geometries on the transmission of pupil-plane masks are illustrated. Undercutting the edge shape of Cr masks improves the effective opening width to within λ/5 of the actual opening, but TE and TM polarizations require opposite compensations. The deviation from ideal is examined at the reference plane of the mask opening. Numerical errors in TEMPEST, such as numerical dispersion, perfectly matched layer reflections, and source haze, are also discussed along with techniques for mitigating their impacts.
Experiments With Magnetic Vector Potential
ERIC Educational Resources Information Center
Skinner, J. W.
1975-01-01
Describes the experimental apparatus and method for the study of magnetic vector potential (MVP). Includes a discussion of inherent errors in the calculations involved, precision of the results, and further applications of MVP. (GS)
Burka, Jenna M; Bower, Kraig S; Cute, David L; Stutzman, Richard D; Subramanian, Prem S; Rabin, Jeff C
2005-04-01
To compare two methods of limbal marking used during laser refractive surgery for myopic astigmatism. Retrospective chart review. Forty-two eyes of 42 patients who underwent photorefractive keratectomy (PRK) or laser-assisted in-situ keratomileusis (LASIK) for myopic astigmatism were marked preoperatively to identify the horizontal axis. In 18 eyes, marks were placed at the slit lamp (SL) with the slit beam set at 180 degrees as a reference. In 24 eyes, marks were placed in the laser room (LR) immediately before reclining under the laser. All treatments were performed with the Alcon LADARVision excimer laser system. Vector analysis of postoperative cylinder and reduction in cylinder and uncorrected and best-corrected visual acuity were evaluated for both groups. The mean postoperative magnitude of error was -0.19 ± 0.44 diopters for the LR group and -0.09 ± 0.42 diopters for the SL group (P = .439, NS). Both groups had a mean angle of error indicating an overall counterclockwise rotation of axis, with an angle of error of 6.3 ± 8.7 degrees for the LR group and 8.0 ± 10.2 degrees for the SL group (P = .562, NS). We found no significant difference in outcomes, with an overall trend toward undercorrection of cylinder in both groups, leaving room for improvement after refractive surgery for myopic astigmatism.
New syndrome decoding techniques for the (n, k) convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1984-01-01
This paper presents a new syndrome decoding algorithm for the (n, k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset a recursive, Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D). An example, illustrating the new decoding algorithm, is given for the binary nonsystematic (3, 1) CC. Previously announced in STAR as N83-34964
Correlation between polar values and vector analysis.
Naeser, K; Behrens, J K
1997-01-01
To evaluate the possible correlation between polar value and vector analysis assessment of surgically induced astigmatism. Department of Ophthalmology, Aalborg Sygehus Syd, Denmark. The correlation between polar values and vector analysis was evaluated by simple mathematical and optical methods using accepted principles of trigonometry and first-order optics. Vector analysis and polar values report different aspects of surgically induced astigmatism. Vector analysis describes the total astigmatic change, characterized by both astigmatic magnitude and direction, while the polar value method produces a single, reduced figure that reports flattening or steepening in preselected directions, usually the plane of the surgical meridian. There is a simple Pythagorean correlation between vector analysis and two polar values separated by an arc of 45 degrees. The polar value calculated in the surgical meridian indicates the power or the efficacy of the surgical procedure. The polar value calculated in a plane inclined 45 degrees to the surgical meridian indicates the degree of cylinder rotation induced by surgery. These two polar values can be used to obtain other relevant data such as magnitude, direction, and sphere of an induced cylinder. Consistent use of these methods will enable surgeons to control and in many cases reduce preoperative astigmatism.
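The Pythagorean relation referred to can be made explicit. A sketch assuming the standard polar-value definition KP(phi) = M cos 2(theta - phi) for a cylinder of magnitude M and axis theta (sign conventions vary between authors):

```latex
% Polar values of a cylinder (magnitude M, axis theta) in the meridian phi
% and in the meridian 45 degrees away:
\[ KP(\phi) = M\cos 2(\theta-\phi), \qquad
   KP(\phi+45^{\circ}) = M\sin 2(\theta-\phi). \]
% Hence the Pythagorean relation between the two polar values, and the
% recovery of the full cylinder from them:
\[ M = \sqrt{KP(\phi)^{2} + KP(\phi+45^{\circ})^{2}}, \qquad
   \theta = \phi + \tfrac{1}{2}\arctan\frac{KP(\phi+45^{\circ})}{KP(\phi)}, \]
% with KP(phi) the flattening/steepening (surgical efficacy) in the surgical
% meridian and KP(phi+45) the torque, i.e. cylinder-rotation, component.
```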
Error Patterns in Ordering Fractions among At-Risk Fourth-Grade Students
Malone, Amelia S.; Fuchs, Lynn S.
2016-01-01
The 3 purposes of this study were to: (a) describe fraction ordering errors among at-risk 4th-grade students; (b) assess the effect of part-whole understanding and accuracy of fraction magnitude estimation on the probability of committing errors; and (c) examine the effect of students' ability to explain comparing problems on the probability of committing errors. Students (n = 227) completed a 9-item ordering test. A high proportion (81%) of problems were completed incorrectly. Most (65% of) errors were due to students misapplying whole number logic to fractions. Fraction-magnitude estimation skill, but not part-whole understanding, significantly predicted the probability of committing this type of error. Implications for practice are discussed. PMID:26966153
Observation of Polarization-Locked Vector Solitons in an Optical Fiber
NASA Astrophysics Data System (ADS)
Cundiff, S. T.; Collings, B. C.; Akhmediev, N. N.; Soto-Crespo, J. M.; Bergman, K.; Knox, W. H.
1999-05-01
We observe polarization-locked vector solitons in a mode-locked fiber laser. Temporal vector solitons have components along both birefringent axes. Despite different phase velocities due to linear birefringence, the relative phase of the components is locked at ±π/2. The value of ±π/2 and component magnitudes agree with a simple analysis of the Kerr nonlinearity. These fragile phase-locked vector solitons have been the subject of much theoretical conjecture, but have previously eluded experimental observation.
NASA Astrophysics Data System (ADS)
Milione, Giovanni; Lavery, Martin P. J.; Huang, Hao; Ren, Yongxiong; Xie, Guodong; Nguyen, Thien An; Karimi, Ebrahim; Marrucci, Lorenzo; Nolan, Daniel A.; Alfano, Robert R.; Willner, Alan E.
2015-05-01
Vector modes are spatial modes that have spatially inhomogeneous states of polarization, such as radial and azimuthal polarization. They can produce smaller spot sizes and stronger longitudinal polarization components upon focusing. As a result, they are used for many applications, including optical trapping and nanoscale imaging. In this work, vector modes are used to increase the information capacity of free space optical communication via the method of optical communication referred to as mode division multiplexing. A mode (de)multiplexer for vector modes based on a liquid crystal technology referred to as a q-plate is introduced. As a proof of principle, using the mode (de)multiplexer, four vector modes each carrying a 20 Gbit/s quadrature phase shift keying signal on a single wavelength channel (~1550 nm), comprising an aggregate 80 Gbit/s, were transmitted ~1 m over the lab table with <-16.4 dB (<2%) mode crosstalk. Bit error rates for all vector modes were measured at the forward error correction threshold with power penalties <3.41 dB.
Evaluation of the SPAR thermal analyzer on the CYBER-203 computer
NASA Technical Reports Server (NTRS)
Robinson, J. C.; Riley, K. M.; Haftka, R. T.
1982-01-01
The use of the CYBER 203 vector computer for thermal analysis is investigated. Strengths of the CYBER 203 include the ability to perform, in vector mode using a 64 bit word, 50 million floating point operations per second (MFLOPS) for addition and subtraction, 25 MFLOPS for multiplication and 12.5 MFLOPS for division. The speed of scalar operation is comparable to that of a CDC 7600 and is some 2 to 3 times faster than Langley's CYBER 175s. The CYBER 203 has 1,048,576 64-bit words of real memory with an 80 nanosecond (nsec) access time. Memory is bit addressable and provides single error correction, double error detection (SECDED) capability. The virtual memory capability handles data in either 512 or 65,536 word pages. The machine has 256 registers with a 40 nsec access time. The weaknesses of the CYBER 203 include the amount of vector operation overhead and some data storage limitations. In vector operations there is a considerable amount of time before a single result is produced so that vector calculation speed is slower than scalar operation for short vectors.
New syndrome decoder for (n, 1) convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1983-01-01
The letter presents a new syndrome decoding algorithm for the (n, 1) convolutional codes (CC) that is different from and simpler than the previous syndrome decoding algorithm of Schalkwijk and Vinck. The new technique uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). A recursive, Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D). An example is given for the binary nonsystematic (2, 1) CC.
Jensen, Jonas; Olesen, Jacob Bjerring; Stuart, Matthias Bo; Hansen, Peter Møller; Nielsen, Michael Bachmann; Jensen, Jørgen Arendt
2016-08-01
A method for vector velocity volume flow estimation is presented, along with an investigation of its sources of error and correction of actual volume flow measurements. Volume flow errors are quantified theoretically by numerical modeling, through flow phantom measurements, and studied in vivo. This paper investigates errors in estimating volumetric flow using a commercial ultrasound scanner under the assumptions commonly made in the literature. The theoretical model shows, e.g., that volume flow is underestimated by 15% when the scan plane is off-axis from the vessel center by 28% of the vessel radius. The error sources were also studied in vivo under realistic clinical conditions, and the theoretical results were applied to correct the volume flow errors. Twenty dialysis patients with arteriovenous fistulas were scanned to obtain vector flow maps of the fistulas. When fitting an ellipse to cross-sectional scans of the fistulas, the major axis was on average 10.2 mm, which is 8.6% larger than the minor axis. The ultrasound beam was on average 1.5 mm from the vessel center, corresponding to 28% of the semi-major axis in an average fistula. Estimating volume flow with an elliptical, rather than circular, vessel area and correcting the ultrasound beam for being off-axis gave a significant (p=0.008) reduction in error, from 31.2% to 24.3%. The error is relative to the ultrasound dilution technique, which is considered the gold standard for volume flow estimation for dialysis patients. The study shows the importance of correcting for volume flow errors, which are often made in clinical practice. Copyright © 2016 Elsevier B.V. All rights reserved.
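A numerical sketch of the two geometric corrections discussed, assuming a parabolic (fully developed) velocity profile and the common clinical assumption that the measured chord is a diameter with axisymmetric flow around the beam line; under those assumptions a 28% offset reproduces the quoted 15% underestimation, and the ellipse factor mirrors the reported 8.6% axis difference:

```python
import numpy as np

R, v_max = 1.0, 1.0                 # normalized vessel radius, peak velocity
d = 0.28 * R                        # scan plane offset from the vessel center
a, b = 1.086 * R, R                 # in vivo lumen: major axis 8.6% larger

Q_true = np.pi * R**2 * v_max / 2.0   # parabolic profile: Q = area * v_max/2

# Flow estimated as if the measured chord were a diameter and the flow
# axisymmetric about the beam line:
L = np.sqrt(R**2 - d**2)              # half-length of the measured chord
s = np.linspace(0.0, L, 10001)
v_chord = v_max * (1.0 - (s**2 + d**2) / R**2)
Q_est = 2.0 * np.pi * np.trapz(v_chord * s, s)

print(1.0 - Q_est / Q_true)               # ~0.15: 15% underestimation
print(np.pi * a * b / (np.pi * R**2))     # ellipse/circle area factor ~1.086
```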
A unified development of several techniques for the representation of random vectors and data sets
NASA Technical Reports Server (NTRS)
Bundick, W. T.
1973-01-01
Linear vector space theory is used to develop a general representation of a set of data vectors or random vectors by linear combinations of orthonormal vectors such that the mean squared error of the representation is minimized. The orthonormal vectors are shown to be the eigenvectors of an operator. The general representation is applied to several specific problems involving the use of the Karhunen-Loeve expansion, principal component analysis, and empirical orthogonal functions; and the common properties of these representations are developed.
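A minimal numerical sketch of the representation described, with sample statistics standing in for the ensemble ones (this is the principal-component form, one of the equivalent special cases the paper unifies):

```python
import numpy as np

def kl_representation(X, m):
    """Karhunen-Loeve / principal-component sketch: represent data vectors
    X (N x d) by m orthonormal eigenvectors of the sample covariance, which
    minimizes the mean squared representation error; that error equals the
    sum of the discarded eigenvalues."""
    mu = X.mean(axis=0)
    C = np.cov(X - mu, rowvar=False)
    w, V = np.linalg.eigh(C)               # eigenvalues ascending
    V_m = V[:, -m:]                        # top-m eigenvectors of the operator
    X_hat = mu + (X - mu) @ V_m @ V_m.T    # orthogonal projection
    mse = np.mean(np.sum((X - X_hat) ** 2, axis=1))
    return X_hat, mse, w[:-m].sum()        # mse ~ sum of discarded eigenvalues

rng = np.random.default_rng(1)
X = rng.standard_normal((4000, 6)) @ rng.standard_normal((6, 6))
_, mse, discarded = kl_representation(X, m=3)
print(mse, discarded)                      # nearly equal
```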
Strategies for P2P connectivity in reconfigurable converged wired/wireless access networks.
Puerto, Gustavo; Mora, José; Ortega, Beatriz; Capmany, José
2010-12-06
This paper presents different strategies to define the architecture of Radio-over-Fiber (RoF) access networks enabling Peer-to-Peer (P2P) functionalities. The architectures fully exploit the flexibility of a wavelength router based on the feedback configuration of an Arrayed Waveguide Grating (AWG) and an optical switch to broadcast P2P services among diverse infrastructures featuring dynamic channel allocation and enabling an optical platform for 3G and beyond wireless backhaul requirements. The first architecture incorporates a tunable laser to generate a dedicated wavelength for P2P purposes, and the second architecture takes advantage of reused wavelengths to enable P2P connectivity among Optical Network Units (ONUs) or Base Stations (BS). While these two approaches allow P2P connectivity on a one-at-a-time basis (1:1), the third architecture enables the broadcasting of P2P sessions among different ONUs or BSs at the same time (1:M). Experimental assessment of the proposed architectures shows approximately 0.6% error vector magnitude (EVM) degradation for wireless services and a 1 dB penalty on average at a 1×10⁻¹² bit error rate (BER) for wired baseband services.
NASA Astrophysics Data System (ADS)
Tamilarasan, Ilavarasan; Saminathan, Brindha; Murugappan, Meenakshi
2016-04-01
Orthogonal frequency division multiplexing (OFDM) has seen phenomenal use over the past decade in both the wired and wireless communication domains, and it has also been proposed in the literature as a future-proof technique for implementing flexible resource allocation in cognitive optical networks. Fiber impairment assessment and adaptive compensation become critical in such implementations. A comprehensive analytical model for impairments in OFDM-based fiber links is developed. The proposed model includes the combined impact of laser phase fluctuations, fiber dispersion, self-phase modulation, cross-phase modulation, four-wave mixing, the nonlinear phase noise due to the interaction of amplified spontaneous emission with fiber nonlinearities, and photodetector noise. The bit error rate expression for the proposed model is derived based on error vector magnitude estimation. The performance analysis of the proposed model is presented and compared for dispersion-compensated and uncompensated backbone/backhaul links. The results suggest that OFDM performs better for uncompensated links than for compensated links, owing to negligible FWM effects, and that there is a need for flexible compensation. The proposed model can be employed in cognitive optical networks for accurate assessment of fiber-related impairments.
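The EVM-to-BER step can be sketched as follows, assuming Gray-coded square M-QAM and the common approximation SNR ≈ 1/EVM²_rms; the paper derives its own expression, so this is only the textbook version for orientation:

```python
import numpy as np
from scipy.special import erfc

def ber_from_evm(evm_rms, M):
    """Approximate BER of Gray-coded square M-QAM from the rms error vector
    magnitude (normalized to average symbol power). Uses SNR ~ 1/EVM^2 and
    the standard M-QAM symbol-error formula; one of several conventions in
    the literature, shown here as a sketch."""
    snr = 1.0 / evm_rms**2
    Q = lambda x: 0.5 * erfc(x / np.sqrt(2.0))
    Ps = 4.0 * (1.0 - 1.0 / np.sqrt(M)) * Q(np.sqrt(3.0 * snr / (M - 1.0)))
    return Ps / np.log2(M)        # Gray coding: ~1 bit error per symbol error

print(ber_from_evm(0.1, 16))      # e.g. 10% EVM on 16-QAM -> BER ~ 3e-6
```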
Comparison of surgically induced astigmatism following different glaucoma operations.
Tanito, Masaki; Matsuzaki, Yukari; Ikeda, Yoshifumi; Fujihara, Etsuko
2017-01-01
To compare surgically induced astigmatism (SIA) among glaucomatous eyes treated with trabeculectomy (LEC), EX-PRESS® shunt (EXP), ab externo trabeculotomy (exLOT), or microhook ab interno trabeculotomy (μLOT). Eighty right eyes of 80 subjects who underwent LEC (n=20), EXP (n=20), exLOT (n=20), or μLOT (n=20) were included. The dataset including the best-corrected visual acuity (BCVA), intraocular pressure (IOP), and keratometry recordings preoperatively and 3 months postoperatively was collected by chart review. The means of the vector magnitude, vector meridian, and arithmetic magnitude of the preoperative and postoperative astigmatism and SIA were calculated. The correlations among the SIA magnitude, postoperative BCVA, and IOP were assessed. The mean astigmatic arithmetic magnitudes did not differ significantly (P=0.0732) preoperatively among the four groups, but the magnitude was significantly (P=0.0002) greater in the LEC group than the other groups postoperatively. The mean SIA vectors were calculated to be 1.01 D at 56°, 0.62 D at 74°, 0.23 D at 112°, and 0.12 D at 97° for the LEC, EXP, exLOT, and μLOT groups, respectively. The mean SIA arithmetic magnitudes were significantly (P<0.0001) greater in the LEC group than the other groups. Three months postoperatively, the SIA magnitude was correlated positively with the logarithm of the minimum angle of resolution (logMAR) BCVA (r=0.3538) and negatively with the IOP (r=-0.3265); the logMAR BCVA was correlated negatively with the IOP (r=-0.3105). EXP, exLOT, and μLOT induce less corneal astigmatism than LEC in the early postoperative period.
Comparison of surgically induced astigmatism following different glaucoma operations
Tanito, Masaki; Matsuzaki, Yukari; Ikeda, Yoshifumi; Fujihara, Etsuko
2017-01-01
Aim To compare surgically induced astigmatism (SIA) among glaucomatous eyes treated with trabeculectomy (LEC), EX-PRESS® shunt (EXP), ab externo trabeculotomy (exLOT), or microhook ab interno trabeculotomy (μLOT). Subjects and methods Eighty right eyes of 80 subjects who underwent LEC (n=20), EXP (n=20), exLOT (n=20), or μLOT (n=20) were included. The dataset including the best-corrected visual acuity (BCVA), intraocular pressure (IOP), and keratometry recordings preoperatively and 3 months postoperatively was collected by chart review. The means of the vector magnitude, vector meridian, and arithmetic magnitude of the preoperative and postoperative astigmatism and SIA were calculated. The correlations among the SIA magnitude, postoperative BCVA, and IOP were assessed. Results The mean astigmatic arithmetic magnitudes did not differ significantly (P=0.0732) preoperatively among the four groups, but the magnitude was significantly (P=0.0002) greater in the LEC group than the other groups postoperatively. The mean SIA vectors were calculated to be 1.01 D at 56°, 0.62 D at 74°, 0.23 D at 112°, and 0.12 D at 97° for the LEC, EXP, exLOT, and μLOT groups, respectively. The mean SIA arithmetic magnitudes were significantly (P<0.0001) greater in the LEC group than the other groups. Three months postoperatively, the SIA magnitude was correlated positively with the logarithm of the minimum angle of resolution (logMAR) BCVA (r=0.3538) and negatively with the IOP (r=−0.3265); the logMAR BCVA was correlated negatively with the IOP (r=−0.3105). Conclusion EXP, exLOT, and μLOT induce less corneal astigmatism than LEC in the early postoperative period. PMID:29238159
Correcting systematic errors in high-sensitivity deuteron polarization measurements
NASA Astrophysics Data System (ADS)
Brantjes, N. P. M.; Dzordzhadze, V.; Gebel, R.; Gonnella, F.; Gray, F. E.; van der Hoek, D. J.; Imig, A.; Kruithof, W. L.; Lazarus, D. M.; Lehrach, A.; Lorentz, B.; Messi, R.; Moricciani, D.; Morse, W. M.; Noid, G. A.; Onderwater, C. J. G.; Özben, C. S.; Prasuhn, D.; Levi Sandri, P.; Semertzidis, Y. K.; da Silva e Silva, M.; Stephenson, E. J.; Stockhorst, H.; Venanzoni, G.; Versolato, O. O.
2012-02-01
This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at the KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY at Jülich. By measuring with very high statistical precision, the contributions that are second-order in the systematic errors become apparent. By calibrating the sensitivity of the polarimeter to such errors, it becomes possible to obtain information from the raw count rate values on the size of the errors and to use this information to correct the polarization measurements. During the experiment, it was possible to demonstrate that corrections were satisfactory at the level of 10⁻⁵ for deliberately large errors. This may facilitate the real time observation of vector polarization changes smaller than 10⁻⁶ in a search for an electric dipole moment using a storage ring.
A high-order time-accurate interrogation method for time-resolved PIV
NASA Astrophysics Data System (ADS)
Lynch, Kyle; Scarano, Fulvio
2013-03-01
A novel method is introduced for increasing the accuracy and extending the dynamic range of time-resolved particle image velocimetry (PIV). The approach extends the concept of particle tracking velocimetry by multiple frames to the pattern tracking by cross-correlation analysis as employed in PIV. The working principle is based on tracking the patterned fluid element, within a chosen interrogation window, along its individual trajectory throughout an image sequence. In contrast to image-pair interrogation methods, the fluid trajectory correlation concept deals with variable velocity along curved trajectories and non-zero tangential acceleration during the observed time interval. As a result, the velocity magnitude and its direction are allowed to evolve in a nonlinear fashion along the fluid element trajectory. The continuum deformation (namely spatial derivatives of the velocity vector) is accounted for by adopting local image deformation. The principle offers important reductions of the measurement error based on three main points: by enlarging the temporal measurement interval, the relative error is reduced; secondly, the random and peak-locking errors are reduced by the use of least-squares polynomial fits to individual trajectories; finally, the introduction of high-order (nonlinear) fitting functions provides the basis for reducing the truncation error. Lastly, the instantaneous velocity is evaluated as the temporal derivative of the polynomial representation of the fluid parcel position in time. The principal features of this algorithm are compared with a single-pair iterative image deformation method. Synthetic image sequences are considered with steady flow (translation, shear and rotation), illustrating the increase of measurement precision. An experimental data set obtained by time-resolved PIV measurements of a circular jet is used to verify the robustness of the method on image sequences affected by camera noise and three-dimensional motions. In both cases, it is demonstrated that the measurement time interval can be significantly extended without compromising the correlation signal-to-noise ratio and with no increase of the truncation error. The increase of velocity dynamic range scales more than linearly with the number of frames included in the analysis, exceeding that of pair correlation by window deformation by one order of magnitude. The main factors influencing the performance of the method are discussed, namely the number of images composing the sequence and the polynomial order chosen to represent the motion throughout the trajectory.
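The core idea (fit the tracked window position with a polynomial in time and differentiate it) can be sketched in a few lines on synthetic data; the actual method couples this with cross-correlation tracking and image deformation:

```python
import numpy as np

def trajectory_velocity(t, x, order=2, t_eval=None):
    """Fit a polynomial to a tracked interrogation-window position along an
    image sequence and evaluate the instantaneous velocity as its temporal
    derivative. A higher order captures curved trajectories and tangential
    acceleration; the least-squares fit damps random and peak-locking
    errors."""
    p = np.polyfit(t, x, order)            # least-squares polynomial fit
    return np.polyval(np.polyder(p), t if t_eval is None else t_eval)

t = np.linspace(0.0, 1.0, 9)               # 9-frame sequence (arbitrary units)
x_true = 0.3 * t + 0.8 * t**2              # accelerating fluid element
x_meas = x_true + 0.01 * np.random.default_rng(2).standard_normal(t.size)
print(trajectory_velocity(t, x_meas, t_eval=0.5))   # ~ 0.3 + 1.6 * 0.5 = 1.1
```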
Flow Instability and Wall Shear Stress Oscillation in Intracranial Aneurysms
NASA Astrophysics Data System (ADS)
Baek, Hyoungsu; Jayamaran, Mahesh; Richardson, Peter; Karniadakis, George
2009-11-01
We investigate the flow dynamics and oscillatory behavior of wall shear stress (WSS) vectors in intracranial aneurysms using high-order spectral/hp simulations. We analyze four patient-specific internal carotid arteries laden with aneurysms of different characteristics: a wide-necked saccular aneurysm, a hemisphere-shaped aneurysm, a narrower-necked saccular aneurysm, and a case with two adjacent saccular aneurysms. Simulations show that the pulsatile flow in aneurysms may be subject to a hydrodynamic instability during the decelerating systolic phase, resulting in a high-frequency oscillation in the range of 30-50 Hz. When the aneurysmal flow becomes unstable, both the magnitude and the directions of WSS vectors fluctuate. In particular, the WSS vectors around the flow impingement region exhibit significant spatial and temporal changes in direction as well as in magnitude.
Phillips, J.D.; Nabighian, M.N.; Smith, D.V.; Li, Y.
2007-01-01
The Helbig method for estimating total magnetization directions of compact sources from magnetic vector components is extended so that tensor magnetic gradient components can be used instead. Depths of the compact sources can be estimated using the Euler equation, and their dipole moment magnitudes can be estimated using a least squares fit to the vector component or tensor gradient component data. © 2007 Society of Exploration Geophysicists.
Definition of Contravariant Velocity Components
NASA Technical Reports Server (NTRS)
Hung, Ching-moa; Kwak, Dochan (Technical Monitor)
2002-01-01
In this paper we have reviewed the basics of tensor analysis in an attempt to clarify some misconceptions regarding contravariant and covariant vector components as used in fluid dynamics. We have indicated that contravariant components are components of a given vector expressed as a unique combination of the covariant base vector system and, vice versa, that the covariant components are components of a vector expressed with the contravariant base vector system. Mathematically, expressing a vector with a combination of base vectors is a decomposition process for a specific base vector system. Hence, the contravariant velocity components are decomposed components of the velocity vector along the directions of coordinate lines, with respect to the covariant base vector system. However, the contravariant (and covariant) components are not physical quantities. Their magnitudes and dimensions are controlled by their corresponding covariant (and contravariant) base vectors.
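A compact statement of the decomposition discussed, in standard curvilinear-tensor notation:

```latex
% Covariant base vectors g_i are tangent to the coordinate lines; the
% contravariant (reciprocal) base vectors g^j satisfy g^j . g_i = delta^j_i.
% A velocity vector decomposes on either basis:
\[ \mathbf{u} = u^{i}\,\mathbf{g}_{i} = u_{j}\,\mathbf{g}^{j}, \qquad
   u^{i} = \mathbf{u}\cdot\mathbf{g}^{i}, \qquad
   u_{j} = \mathbf{u}\cdot\mathbf{g}_{j}. \]
% Neither u^i nor u_j is a physical quantity by itself: its magnitude and
% dimension depend on the generally non-unit base vector it multiplies.
```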
A Systematic Approach for Model-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2010-01-01
A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. However, additional development is necessary to fully extend the methodology to Kalman filter-based estimation applications.
NASA Astrophysics Data System (ADS)
Zaouche, Abdelouahib; Dayoub, Iyad; Rouvaen, Jean Michel; Tatkeu, Charles
2008-12-01
We propose a globally convergent baud-spaced blind equalization method in this paper. This method is based on the application of both generalized pattern optimization and channel surfing reinitialization. The unimodal cost function relies on higher-order statistics, and its optimization is achieved using a pattern search algorithm. Since convergence to the global minimum is not unconditionally guaranteed, we make use of a channel surfing reinitialization (CSR) strategy to find the right global minimum. The proposed algorithm is analyzed, and simulation results using a severe frequency-selective propagation channel are given. Detailed comparisons with the constant modulus algorithm (CMA) are highlighted. The proposed algorithm's performance is evaluated in terms of intersymbol interference, normalized received signal constellations, and root mean square error vector magnitude. In the case of nonconstant modulus input signals, our algorithm significantly outperforms the CMA algorithm with a full channel surfing reinitialization strategy. However, comparable performances are obtained for constant modulus signals.
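For context, a minimal sketch of the CMA baseline the proposed method is compared against; the tap count, step size, and center-spike initialization are conventional choices, not the paper's:

```python
import numpy as np

def cma_equalize(x, n_taps=11, mu=1e-3, R2=1.0):
    """Baud-spaced constant modulus algorithm (CMA). x: received samples;
    R2: Godard dispersion constant E|s|^4 / E|s|^2 of the sent
    constellation."""
    w = np.zeros(n_taps, dtype=complex)
    w[n_taps // 2] = 1.0                         # center-spike initialization
    y = np.zeros(x.size - n_taps, dtype=complex)
    for k in range(y.size):
        xk = x[k:k + n_taps][::-1]               # regressor
        y[k] = w @ xk
        e = y[k] * (np.abs(y[k])**2 - R2)        # CMA error term
        w -= mu * e * np.conj(xk)                # stochastic gradient step
    return y, w

rng = np.random.default_rng(3)
s = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], 4000) / np.sqrt(2)   # QPSK, R2 = 1
x = np.convolve(s, [1.0, 0.3 + 0.2j, 0.1], mode="same")         # toy channel
y, w = cma_equalize(x, R2=1.0)
```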
Zhang, Junwen; Wang, Jing; Xu, Yuming; Xu, Mu; Lu, Feng; Cheng, Lin; Yu, Jianjun; Chang, Gee-Kung
2016-05-01
We propose and experimentally demonstrate a novel fiber-wireless integrated mobile backhaul network based on a hybrid millimeter-wave (MMW) and free-space-optics (FSO) architecture using an adaptive combining technique. Both 60 GHz MMW and FSO links are demonstrated and fully integrated with optical fibers in a scalable and cost-effective backhaul system setup. Joint signal processing with an adaptive diversity combining technique (ADCT) is utilized at the receiver side based on a maximum ratio combining algorithm. Mobile backhaul transport of 4-Gb/s 16-QAM orthogonal frequency-division multiplexing (OFDM) data is experimentally demonstrated and tested under various weather conditions synthesized in the lab. Performance improvement in terms of reduced error vector magnitude (EVM) and enhanced link reliability is validated under fog, rain, and turbulence conditions.
Creating analytically divergence-free velocity fields from grid-based data
NASA Astrophysics Data System (ADS)
Ravu, Bharath; Rudman, Murray; Metcalfe, Guy; Lester, Daniel R.; Khakhar, Devang V.
2016-10-01
We present a method, based on B-splines, to calculate a C² continuous analytic vector potential from discrete 3D velocity data on a regular grid. A continuous, analytically divergence-free velocity field can then be obtained from the curl of the potential. This field can be used to robustly and accurately integrate particle trajectories in incompressible flow fields. Based on the method of Finn and Chacon (2005) [10], this new method ensures that the analytic velocity field matches the grid values almost everywhere, with errors that are two to four orders of magnitude lower than those of existing methods. We demonstrate its application to three different problems (each in a different coordinate system) and provide details of the specifics required in each case. We show how the additional accuracy of the method results in qualitatively and quantitatively superior trajectories and more accurate identification of Lagrangian coherent structures.
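The identity that makes the construction exactly divergence-free, independent of how well the fitted potential matches the data:

```latex
% Any sufficiently smooth vector potential A yields an exactly
% divergence-free velocity by the vector-calculus identity:
\[ \mathbf{u} = \nabla\times\mathbf{A}
   \quad\Longrightarrow\quad
   \nabla\cdot\mathbf{u} = \nabla\cdot(\nabla\times\mathbf{A}) \equiv 0 , \]
% so differentiating a C^2 B-spline potential analytically enforces
% incompressibility pointwise, not just at the grid nodes.
```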
NASA Technical Reports Server (NTRS)
Mcclure, P.
1973-01-01
An analytical theory is developed to describe diurnal polar motion in the earth which arises as a forced response due to lunisolar torques and tidal deformation. Doodson's expansion of the tide generating potential is used to represent the lunisolar torques. Both the magnitudes and the rates of change of perturbations in the earth's inertia tensor are included in the dynamical equations for the polar motion so as to account for rotational and tidal deformation. It is found that in a deformable earth with Love's number k = 0.29, the angular momentum vector departs by as much as 20 cm from the rotation axis rather than remaining within 1 or 2 cm as it would in a rigid earth. This 20 cm separation is significant in the interpretation of submeter polar motion observations because it necessitates an additional coordinate transformation in order to remove what would otherwise be a 20 cm error source in the conversion between inertial and terrestrial reference systems.
NASA Astrophysics Data System (ADS)
Keçeli, Murat; Hirata, So
2010-09-01
The mod-n scheme is introduced to the coupled-cluster singles and doubles (CCSD) and third-order Møller-Plesset perturbation (MP3) methods for extended systems of one-dimensional periodicity. By downsampling uniformly the wave vectors in Brillouin-zone integrations, this scheme accelerates these accurate but expensive correlation-energy calculations by two to three orders of magnitude while incurring negligible errors in their total and relative energies. To maintain this accuracy, the number of the nearest-neighbor unit cells included in the lattice sums must also be reduced by the same downsampling rate (n). The mod-n CCSD and MP3 methods are applied to the potential-energy surface of polyethylene in anharmonic frequency calculations of its infrared- and Raman-active vibrations. The calculated frequencies are found to be within 46 cm⁻¹ (CCSD) and 78 cm⁻¹ (MP3) of the observed values.
Correlation and 3D-tracking of objects by pointing sensors
Griesmeyer, J. Michael
2017-04-04
A method and system for tracking at least one object using a plurality of pointing sensors and a tracking system are disclosed herein. In a general embodiment, the tracking system is configured to receive a series of observation data relative to the at least one object over a time base for each of the plurality of pointing sensors. The observation data may include sensor position data, pointing vector data and observation error data. The tracking system may further determine a triangulation point using a magnitude of a shortest line connecting a line of sight value from each of the series of observation data from each of the plurality of sensors to the at least one object, and perform correlation processing on the observation data and triangulation point to determine if at least two of the plurality of sensors are tracking the same object. Observation data may also be branched, associated and pruned using new incoming observation data.
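A minimal sketch of the triangulation step for two sensors, computing the shortest segment between two lines of sight; the patented system additionally weights by observation errors and handles near-parallel geometry, and the example values below are hypothetical:

```python
import numpy as np

def triangulate(p1, u1, p2, u2):
    """Closest-approach triangulation for two pointing sensors. p: sensor
    positions, u: pointing vectors. Returns the midpoint of the shortest
    segment joining the two lines of sight and its length (the miss
    distance used in the correlation test)."""
    u1, u2 = u1 / np.linalg.norm(u1), u2 / np.linalg.norm(u2)
    d = p2 - p1
    a = u1 @ u2
    denom = 1.0 - a * a                  # -> 0 for near-parallel sightlines
    t1 = (d @ u1 - a * (d @ u2)) / denom
    t2 = (a * (d @ u1) - d @ u2) / denom
    q1, q2 = p1 + t1 * u1, p2 + t2 * u2  # closest points on each line
    return 0.5 * (q1 + q2), np.linalg.norm(q1 - q2)

pt, miss = triangulate(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 1.0]),
                       np.array([10.0, 0.0, 0.0]), np.array([-1.0, 0.1, 1.0]))
print(pt, miss)   # small miss distance suggests both sensors see one object
```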
Shim, Jae Kun; Karol, Sohit; Hsu, Jeffrey; de Oliveira, Marcio Alves
2008-04-01
The aim of this study was to investigate the contralateral motor overflow in children during single-finger and multi-finger maximum force production tasks. Forty-five right-handed children, 5-11 years of age, produced maximum isometric pressing force in flexion or extension with single fingers or all four fingers of their right hand. The forces produced by individual fingers of the right and left hands were recorded and analyzed in four-dimensional finger force vector space. The results showed that increases in task (right) hand finger forces were linearly associated with non-task (left) hand finger forces. The ratio of the non-task hand finger force magnitude to the corresponding task hand finger force magnitude, termed motor overflow magnitude (MOM), was greater in extension than flexion. The index finger flexion task showed the smallest MOM values. The similarity between the directions of task hand and non-task hand finger force vectors in four-dimensional finger force vector space, termed motor overflow direction (MOD), was the greatest for index and smallest for little finger tasks. MOM of a four-finger task was greater than the sum of MOMs of single-finger tasks, and this phenomenon was termed motor overflow surplus. Contrary to previous studies, no single-finger or four-finger tasks showed significant changes of MOM or MOD with the age of children. We conclude that the contralateral motor overflow in children during finger maximum force production tasks is dependent upon the task fingers and the magnitude and direction of task finger forces.
Problems in evaluating radiation dose via terrestrial and aquatic pathways.
Vaughan, B E; Soldat, J K; Schreckhise, R G; Watson, E C; McKenzie, D H
1981-01-01
This review is concerned with exposure risk and the environmental pathways models used for predictive assessment of radiation dose. Exposure factors, the adequacy of available data, and the model subcomponents are critically reviewed from the standpoint of absolute error propagation. Although the models are inherently capable of better absolute accuracy, a calculated dose is usually overestimated by from two to six orders of magnitude, in practice. The principal reason for so large an error lies in using "generic" concentration ratios in situations where site specific data are needed. Major opinion of the model makers suggests a number midway between these extremes, with only a small likelihood of ever underestimating the radiation dose. Detailed evaluations are made of source considerations influencing dose (i.e., physical and chemical status of released material); dispersal mechanisms (atmospheric, hydrologic and biotic vector transport); mobilization and uptake mechanisms (i.e., chemical and other factors affecting the biological availability of radioelements); and critical pathways. Examples are shown of confounding in food-chain pathways, due to uncritical application of concentration ratios. Current thoughts of replacing the critical pathways approach to calculating dose with comprehensive model calculations are also shown to be ill-advised, given present limitations in the comprehensive data base. The pathways models may also require improved parametrization, as they are not at present structured adequately to lend themselves to validation. The extremely wide errors associated with predicting exposure stand in striking contrast to the error range associated with the extrapolation of animal effects data to the human being. PMID:7037381
NASA Astrophysics Data System (ADS)
Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.
2015-07-01
Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
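The Sobol' approach described in the abstract above can be sketched compactly. The snippet below is a minimal illustration of first-order Sobol' indices with the Saltelli A/B-matrix estimator, using a toy stand-in for the snow model; the model, factor ranges, and sample count are placeholders, not the paper's setup.

```python
# Minimal sketch of first-order Sobol' indices (Saltelli estimator).
# "Factors" stand in for forcing errors (e.g., temperature/precipitation bias).
import numpy as np

def sobol_first_order(model, n_factors, n_samples=10000, rng=None):
    """Estimate first-order Sobol' indices S_i = V_i / V(Y)."""
    rng = np.random.default_rng(rng)
    A = rng.uniform(-1, 1, (n_samples, n_factors))  # sample matrix A
    B = rng.uniform(-1, 1, (n_samples, n_factors))  # independent matrix B
    yA, yB = model(A), model(B)
    var_y = np.var(np.concatenate([yA, yB]))
    S = np.empty(n_factors)
    for i in range(n_factors):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # replace column i of A with that of B
        yABi = model(ABi)
        S[i] = np.mean(yB * (yABi - yA)) / var_y  # Saltelli (2010) estimator
    return S

# toy "snow model": output dominated by factor 0 (a bias-like term)
model = lambda X: 3.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.1 * X[:, 2]
print(sobol_first_order(model, 3))
```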
Characteristics of a dry, pulsating microburst at Denver Stapleton Airport
NASA Technical Reports Server (NTRS)
Proctor, Fred H.
1994-01-01
This study examines the influence of ambient vertical wind shear on microburst intensity, asymmetry, and translation. Results show that microburst asymmetry is influenced by the magnitude of the low-level ambient vertical shear. The microburst outflow elongates in the direction of the shear vector (which is not necessarily the direction of translation) and generates the greatest hazard (for commercial jet transports) along paths orthogonal to the shear vector. The model results also show that the asymmetry increases with increasing shear magnitude. One implication of these results concerns the detection of a microburst by ground-based Doppler systems. These systems may underestimate the hazard for landing and departing aircraft that are on trajectories orthogonal to both the sensor beam and the shear vector, especially if the magnitude of the shear is large. Another implication is that microbursts are more likely to be asymmetrical in regions (seasons) where there is climatologically a significant low-level shear. The model results also show that rotor microbursts and severe wind damage can be a product of the microburst's interaction with strong ambient wind shear.
Video Vectorization via Tetrahedral Remeshing.
Wang, Chuan; Zhu, Jie; Guo, Yanwen; Wang, Wenping
2017-02-09
We present a video vectorization method that generates a video in vector representation from an input video in raster representation. A vector-based video representation offers the benefits of vector graphics, such as compactness and scalability. The vector video we generate is represented by a simplified tetrahedral control mesh over the spatial-temporal video volume, with color attributes defined at the mesh vertices. We present novel techniques for simplification and subdivision of a tetrahedral mesh that achieve a high simplification ratio while preserving features and ensuring color fidelity. From an input raster video, our method is capable of generating a compact video in vector representation that allows a faithful reconstruction with low reconstruction errors.
Calibration Errors in Interferometric Radio Polarimetry
NASA Astrophysics Data System (ADS)
Hales, Christopher A.
2017-08-01
Residual calibration errors are difficult to predict in interferometric radio polarimetry because they depend on the observational calibration strategy employed, encompassing the Stokes vector of the calibrator and parallactic angle coverage. This work presents analytic derivations and simulations that enable examination of residual on-axis instrumental leakage and position-angle errors for a suite of calibration strategies. The focus is on arrays comprising alt-azimuth antennas with common feeds over which parallactic angle is approximately uniform. The results indicate that calibration schemes requiring parallactic angle coverage in the linear feed basis (e.g., the Atacama Large Millimeter/submillimeter Array) need only observe over 30°, beyond which no significant improvements in calibration accuracy are obtained. In the circular feed basis (e.g., the Very Large Array above 1 GHz), 30° is also appropriate when the Stokes vector of the leakage calibrator is known a priori, but this rises to 90° when the Stokes vector is unknown. These findings illustrate and quantify concepts that were previously obscure rules of thumb.
Dynamic visual attention: motion direction versus motion magnitude
NASA Astrophysics Data System (ADS)
Bur, A.; Wurtz, P.; Müri, R. M.; Hügli, H.
2008-02-01
Defined as an attentive process in the context of visual sequences, dynamic visual attention refers to the selection of the most informative parts of a video sequence. This paper investigates the contribution of motion to dynamic visual attention, and specifically compares computer models designed with the motion component expressed either as the speed magnitude or as the speed vector. Several computer models, including static features (color, intensity, and orientation) and motion features (magnitude and vector), are considered. Qualitative and quantitative evaluations are performed by comparing the computer model output with human saliency maps obtained experimentally from eye movement recordings. The model suitability is evaluated in various situations (synthetic and real sequences, acquired with fixed and moving camera perspective), showing the advantages and drawbacks of each method as well as its preferred domain of application.
Signal location using generalized linear constraints
NASA Astrophysics Data System (ADS)
Griffiths, Lloyd J.; Feldman, D. D.
1992-01-01
This report presents a two-part method for estimating the directions of arrival (DOAs) of uncorrelated narrowband sources when there are arbitrary phase errors and angle-independent gain errors. The signal steering vectors are estimated in the first part of the method; in the second part, the arrival directions are estimated. It should be noted that the second part of the method can be tailored to incorporate additional information about the nature of the phase errors. For example, if the phase errors are known to be caused solely by element misplacement, the element locations can be estimated concurrently with the DOAs by matching the theoretical steering vectors to the estimated ones. Simulation results suggest that, for general perturbations, the method can resolve closely spaced sources under conditions for which a standard high-resolution DOA method such as MUSIC fails.
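For reference, the MUSIC baseline mentioned above can be written in a few lines. This is a minimal sketch for an idealized uniform linear array with no gain or phase perturbations (the very errors the report's method is designed to handle); the array geometry and element spacing are assumptions, not taken from the report.

```python
# Minimal MUSIC pseudospectrum for an idealized uniform linear array.
import numpy as np

def music_spectrum(R, n_sources, n_elements, angles_deg, spacing=0.5):
    """R: sample covariance of array snapshots; spacing in wavelengths."""
    # noise subspace = eigenvectors of the smallest eigenvalues
    w, V = np.linalg.eigh(R)                     # eigenvalues ascending
    En = V[:, : n_elements - n_sources]
    p = np.empty(len(angles_deg))
    for k, th in enumerate(np.deg2rad(angles_deg)):
        # ideal steering vector; real arrays have gain/phase errors here
        a = np.exp(2j * np.pi * spacing * np.arange(n_elements) * np.sin(th))
        p[k] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
    return p  # DOAs are read off as the peaks of this pseudospectrum
```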
AveBoost2: Boosting for Noisy Data
NASA Technical Reports Server (NTRS)
Oza, Nikunj C.
2004-01-01
AdaBoost is a well-known ensemble learning algorithm that constructs its constituent or base models in sequence. A key step in AdaBoost is constructing a distribution over the training examples to create each base model. This distribution, represented as a vector, is constructed to be orthogonal to the vector of mistakes made by the previous base model in the sequence. The idea is to make the next base model's errors uncorrelated with those of the previous model. In previous work, we developed an algorithm, AveBoost, that constructed distributions orthogonal to the mistake vectors of all the previous models, and then averaged them to create the next base model's distribution. Our experiments demonstrated the superior accuracy of our approach. In this paper, we slightly revise our algorithm to allow us to obtain non-trivial theoretical results: bounds on the training error and generalization error (the difference between training and test error). Our averaging process has a regularizing effect which, as expected, leads to a worse training error bound for our algorithm than for AdaBoost but a superior generalization error bound. For this paper, we experimented with the data used in both works, as originally supplied and with added label noise (a small fraction of the data has its original label changed). Noisy data are notoriously difficult for AdaBoost to learn. Our algorithm's performance improvement over AdaBoost is even greater on the noisy data than on the original data.
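The averaging idea can be made concrete with a schematic sketch. The code below shows a plain AdaBoost-style orthogonal distribution update followed by averaging over all previous rounds' distributions; the exact AveBoost2 update rules and normalization are specified in the paper, so treat this as illustrative only.

```python
# Schematic sketch of the distribution-averaging idea (not AveBoost2 exactly).
import numpy as np

def adaboost_update(d, mistakes, eps):
    """AdaBoost-style update for one round. mistakes: boolean vector;
    eps: weighted error of the current base model. The resulting d' is
    orthogonal to the correctness pattern of the current model."""
    beta = eps / (1.0 - eps)
    d_new = d * np.where(mistakes, 1.0, beta)  # downweight correct examples
    return d_new / d_new.sum()

def averaged_distribution(dist_history, new_dist):
    """AveBoost idea: average the newly computed distribution with the
    distributions from all previous rounds, then renormalize."""
    avg = np.mean(dist_history + [new_dist], axis=0)
    return avg / avg.sum()
```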
López-Alvarenga, J C; Sobrino-Cossío, S; Remes-Troche, J M; Chiu-Ugalde, J; Vargas-Romero, J A; Schmulson, M
2013-01-01
Irritable Bowel Syndrome (IBS) is a disorder characterized by abdominal pain or discomfort associated with changes in bowel habit. Currently there are no objective outcome measures for evaluating the effectiveness of treatments for this disorder. To determine the usefulness of a method of analysis that employs polar vectors to evaluate the effectiveness of IBS treatments, data from a Phase IV clinical study with 1677 active IBS-Rome III patients who received 100 mg of pinaverium bromide + 300 mg of simethicone (PB+S) po bid for a period of four weeks were used for the analysis. Using the Bristol Stool Scale as a reference, the consistency and frequency of each type of bowel movement were recorded weekly in a Bristol Matrix (BM) and the data were expressed as polar vectors. The analysis showed a differential response to the PB+S treatment among the IBS subtypes: in the IBS with constipation subtype, the magnitude of the vector increased from 10.2 to 12.5, reaching maximum improvement at two weeks of treatment (p<0.05, Scheffé). In the IBS with diarrhea and mixed IBS subtypes, the magnitude of the vector decreased from 19 to 14 (p<0.05) and from 16.5 to 13 (p<0.05), respectively, with continuous improvement over the four-week period. There was no definable vectorial pattern in the unsubtyped IBS group. Analysis with polar vectors enables treatment response to be measured in the different IBS subtypes. All the groups showed improvement with PB+S, but each one had its own characteristic response in terms of vector magnitude and direction. The proposed method can be implemented in clinical studies to evaluate the efficacy of IBS treatments. Copyright © 2012 Asociación Mexicana de Gastroenterología. Published by Masson Doyma México S.A. All rights reserved.
Sensorimotor Learning of Acupuncture Needle Manipulation Using Visual Feedback
Jung, Won-Mo; Lim, Jinwoong; Lee, In-Seon; Park, Hi-Joon; Wallraven, Christian; Chae, Younbyoung
2015-01-01
Objective: Humans can acquire a wide variety of motor skills using sensory feedback pertaining to discrepancies between intended and actual movements. Acupuncture needle manipulation involves sophisticated hand movements and represents a fundamental skill for acupuncturists. We investigated whether untrained students could improve their motor performance during acupuncture needle manipulation using visual feedback (VF). Methods: Twenty-one untrained medical students were included, randomly divided into concurrent (n = 10) and post-trial (n = 11) VF groups. Both groups were trained in simple lift/thrusting techniques during session 1, and in complicated lift/thrusting techniques in session 2 (eight training trials per session). We compared the motion patterns and error magnitudes of pre- and post-training tests. Results: During motion pattern analysis, both the concurrent and post-trial VF groups exhibited greater improvements in motion patterns during the complicated lifting/thrusting session. In the magnitude error analysis, both groups also exhibited reduced error magnitudes during the simple lifting/thrusting session. For the training period, the concurrent VF group exhibited reduced error magnitudes across all training trials, whereas the post-trial VF group was characterized by greater error magnitudes during initial trials, which gradually reduced during later trials. Conclusions: Our findings suggest that novices can improve the sophisticated hand movements required for acupuncture needle manipulation using sensorimotor learning with VF. Use of two types of VF can be beneficial for untrained students in terms of learning how to manipulate acupuncture needles, using either automatic or cognitive processes. PMID:26406248
Bodwin, Geoffrey T.; Chung, Hee Sok; Ee, June-Haak; ...
2017-12-20
In this addendum to Phys. Rev. D 95, 054018 (2017) we recompute the rates for the decays of the Higgs boson to a vector quarkonium plus a photon, where the vector quarkonium is J/psi, Upsilon(1S), or Upsilon(2S). We correct an error in the Abel-Padé summation formula that was used to carry out the evolution of the quarkonium light-cone distribution amplitude in Phys. Rev. D 95, 054018 (2017). We also correct an error in the scale of the quarkonium wave function at the origin in Phys. Rev. D 95, 054018 (2017) and introduce several additional refinements in the calculation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bodwin, Geoffrey T.; Chung, Hee Sok; Ee, June-Haak
In this addendum to Phys. Rev. D 95, 054018 (2017) we recompute the rates for the decays of the Higgs boson to a vector quarkonium plus a photon, where the vector quarkonium is J/psi, Upsilon(1S), or Upsilon(2S). We correct an error in the Abel-Padé summation formula that was used to carry out the evolution of the quarkonium light-cone distribution amplitude in Phys. Rev. D 95, 054018 (2017). We also correct an error in the scale of the quarkonium wave function at the origin in Phys. Rev. D 95, 054018 (2017) and introduce several additional refinements in the calculation.
NASA Technical Reports Server (NTRS)
Greatorex, Scott (Editor); Beckman, Mark
1996-01-01
Several future, and some current, missions use an on-board computer (OBC) force model that is very limited. The OBC geopotential force model typically includes only the J(2), J(3), J(4), C(2,2) and S(2,2) terms to model non-spherical Earth gravitational effects. The Tropical Rainfall Measuring Mission (TRMM), Wide-field Infrared Explorer (WIRE), Transition Region and Coronal Explorer (TRACE), Submillimeter Wave Astronomy Satellite (SWAS), and X-ray Timing Explorer (XTE) all plan to use this geopotential force model on-board. The Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX) is already flying this geopotential force model. Past analysis has shown that one of the leading sources of error in the OBC propagated ephemeris is the omission of the higher-order geopotential terms. However, these same analyses have shown a wide range of accuracies for the OBC ephemerides. Analysis performed using EUVE state vectors showed that the EUVE four-day OBC propagated ephemerides varied in accuracy from 200 m to 45 km, depending on the initial vector used to start the propagation. The vectors used in the study were from a single EUVE orbit at one-minute intervals in the ephemeris. Since each vector propagated practically the same path as the others, the differences seen had to be due to differences in the initial state vector only. An algorithm was developed that optimizes the epoch of the uploaded state vector. Proper selection can reduce the previous errors of anywhere from 200 m to 45 km to generally less than one km over four days of propagation. This would enable flight projects to minimize state vector uploads to the spacecraft. Additionally, this method is superior to other methods in that no additional orbit estimates need be done. The definitive ephemeris generated on the ground can be used as long as the proper epoch is chosen. This algorithm can be easily coded in software that would pick the epoch, within a specified time range, that minimizes the OBC propagation error. This technique should greatly improve the accuracy of the OBC propagation on-board future spacecraft such as TRMM, WIRE, SWAS, and XTE without increasing complexity in the ground processing.
Calibration of low-temperature ac susceptometers with a copper cylinder standard
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, D.-X.; Skumryev, V.
2010-02-15
A high-quality low-temperature ac susceptometer is calibrated by comparing the measured ac susceptibility of a copper cylinder with its accurately calculated eddy-current ac susceptibility. Different from conventional calibration techniques, which compare the measured results with the known property of a standard sample at certain fixed temperature T, field amplitude H{sub m}, and frequency f to get a magnitude correction factor, here the electromagnetic properties of the copper cylinder are unknown and are determined during the calibration of the ac susceptometer in the entire T, H{sub m}, and f range. It is shown that the maximum magnitude error and the maximum phase error of the susceptometer are less than 0.7% and 0.3 deg., respectively, in the region T=5-300 K and f=111-1111 Hz at H{sub m}=800 A/m, after a magnitude correction by a constant factor as done in a conventional calibration. However, the magnitude and phase errors can reach 2% and 4.3 deg. at 10 000 and 11 Hz, respectively. Since the errors are reproducible, a large portion of them may be further corrected after a calibration, the procedure for which is given. Conceptual discussions concerning the error sources, comparison with other calibration methods, and applications of ac susceptibility techniques are presented.
NASA Technical Reports Server (NTRS)
Mitchell, J. R.
1972-01-01
The frequency response method of analyzing control system performance is discussed, and the difficulty of obtaining the sampled frequency response of the continuous system is considered. An upper bound magnitude error equation is obtained which yields reasonable estimates of the actual error. Finalization of the compensator improvement program is also reported, and the program was used to design compensators for Saturn 5/S1-C dry workshop and Saturn 5/S1-C Skylab.
Zhang, Wanhong; Zhou, Tong
2015-01-01
Motivation: Identifying gene regulatory networks (GRNs), which consist of a large number of interacting units, has become a problem of paramount importance in systems biology. Many situations exist in which causal interaction relationships among these units must be reconstructed from measured expression data and other a priori information. Though numerous classical methods have been developed to unravel the interactions of GRNs, these methods either have high computational complexity or low estimation accuracy. Note that great similarities exist between identifying the genes that directly regulate a specific gene and reconstructing a sparse vector, which often amounts to determining the number, location, and magnitude of the nonzero entries of an unknown vector by solving an underdetermined system of linear equations y = Φx. Based on these similarities, we propose a novel sparse-reconstruction framework to identify the structure of a GRN, so as to increase the accuracy of causal regulation estimates and to reduce their computational complexity. Results: In this paper, a sparse reconstruction framework is proposed on the basis of steady-state experiment data to identify GRN structure. Different from traditional methods, the adopted approach is well suited to the large-scale underdetermined problem of inferring a sparse vector. We investigate how to combine noisy steady-state experiment data and a sparse reconstruction algorithm to identify causal relationships. The efficiency of this method is tested on an artificial linear network, a mitogen-activated protein kinase (MAPK) pathway network, and the in silico networks of the DREAM challenges. The performance of the suggested approach is compared with two state-of-the-art algorithms, the widely adopted total least-squares (TLS) method and the available results of the DREAM project. Results show that, at a lower computational cost, the proposed method can significantly enhance estimation accuracy and greatly reduce false positive and negative errors. Furthermore, numerical calculations demonstrate that the proposed algorithm may have faster convergence speed and smaller fluctuation than other methods when either estimation error or estimation bias is considered. PMID:26207991
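The underdetermined system y = Φx at the heart of this framework can be attacked with any standard sparse solver. Below is a minimal orthogonal matching pursuit sketch (a generic choice for illustration; the paper's own reconstruction algorithm may differ), recovering a 3-sparse regulation vector from 30 measurements of a 100-gene system.

```python
# Minimal orthogonal matching pursuit for a k-sparse x with y ≈ Phi @ x.
import numpy as np

def omp(Phi, y, k):
    """Greedy recovery of a k-sparse vector x from y ≈ Phi @ x."""
    n = Phi.shape[1]
    support, residual = [], y.copy()
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares refit on the current support
        x_s, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ x_s
    x = np.zeros(n)
    x[support] = x_s
    return x

rng = np.random.default_rng(0)
Phi = rng.standard_normal((30, 100))            # underdetermined system
x_true = np.zeros(100); x_true[[3, 40, 77]] = [1.5, -2.0, 0.8]
print(np.sort(omp(Phi, Phi @ x_true, 3).nonzero()[0]))  # -> [ 3 40 77]
```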
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gubbiotti, G.; Tacchi, S.; Montoncello, F.
2015-06-29
The Brillouin light scattering technique has been exploited to study the angle-resolved spin wave band diagrams of a squared Permalloy antidot lattice. The frequency dispersion of spin waves has been measured for a set of fixed wave vector magnitudes, while varying the wave vector in-plane orientation with respect to the applied magnetic field. The magnonic band gap between the two most dispersive modes exhibits a minimum value at an angular position which exclusively depends on the product of the selected wave vector magnitude and the lattice constant of the array. The experimental data are in very good agreement with predictions obtained by dynamical matrix method calculations. The presented results are relevant for magnonic devices where the antidot lattice, acting as a diffraction grating, is exploited to achieve multidirectional spin wave emission.
Test functions for three-dimensional control-volume mixed finite-element methods on irregular grids
Naff, R.L.; Russell, T.F.; Wilson, J.D.
2000-01-01
Numerical methods based on unstructured grids, with irregular cells, usually require discrete shape functions to approximate the distribution of quantities across cells. For control-volume mixed finite-element methods, vector shape functions are used to approximate the distribution of velocities across cells and vector test functions are used to minimize the error associated with the numerical approximation scheme. For a logically cubic mesh, the lowest-order shape functions are chosen in a natural way to conserve intercell fluxes that vary linearly in logical space. Vector test functions, while somewhat restricted by the mapping into the logical reference cube, admit a wider class of possibilities. Ideally, an error minimization procedure to select the test function from an acceptable class of candidates would be the best procedure. Lacking such a procedure, we first investigate the effect of possible test functions on the pressure distribution over the control volume; specifically, we look for test functions that allow for the elimination of intermediate pressures on cell faces. From these results, we select three forms for the test function for use in a control-volume mixed method code and subject them to an error analysis for different forms of grid irregularity; errors are reported in terms of the discrete L2 norm of the velocity error. Of these three forms, one appears to produce optimal results for most forms of grid irregularity.
Selection vector filter framework
NASA Astrophysics Data System (ADS)
Lukac, Rastislav; Plataniotis, Konstantinos N.; Smolka, Bogdan; Venetsanopoulos, Anastasios N.
2003-10-01
We provide a unified framework of nonlinear vector techniques that output the lowest-ranked vector. The proposed framework constitutes a generalized filter class for multichannel signal processing. A new class of nonlinear selection filters is based on robust order-statistic theory and the minimization of the weighted distance function to the other input samples. The proposed method can be designed to perform a variety of filtering operations, including previously developed techniques such as the vector median, basic vector directional filter, directional distance filter, weighted vector median filters, and weighted directional filters. A wide range of filtering operations is guaranteed by the filter structure, with two independent weight vectors for the angular and distance domains of the vector space. In order to adapt the filter parameters to varying signal and noise statistics, we also provide generalized optimization algorithms that take advantage of weighted median filters and of the relationship between the standard median filter and the vector median filter. Thus, we can deal with both statistical and deterministic aspects of the filter design process. It is shown that the proposed method has the required properties: the capability of modelling the underlying system in the application at hand, robustness with respect to errors in the model of the underlying system, the availability of a training procedure, and the simplicity of filter representation, analysis, design, and implementation. Simulation studies also indicate that the new filters are computationally attractive and have excellent performance in environments corrupted by bit errors and impulsive noise.
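The simplest member of this selection-filter class is the vector median filter, which the framework reproduces as a special case. Below is a minimal sketch with uniform weights: the output is the sample in the window that minimizes the aggregate distance to all other samples (the paper's weighted angular/distance variants generalize this).

```python
# Minimal vector median filter: select the window sample minimizing the
# sum of L_p distances to all other samples.
import numpy as np

def vector_median(window, p=2):
    """window: (n, c) array of n color vectors; returns the vector median."""
    d = np.linalg.norm(window[:, None, :] - window[None, :, :], ord=p, axis=2)
    return window[np.argmin(d.sum(axis=1))]

# 3x3 window of RGB vectors with one impulsive outlier
w = np.array([[10, 10, 10]] * 8 + [[255, 0, 255]], dtype=float)
print(vector_median(w))   # -> [10. 10. 10.]; the outlier is rejected
```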
Rossi, Marcel M; Alderson, Jacqueline; El-Sallam, Amar; Dowling, James; Reinbolt, Jeffrey; Donnelly, Cyril J
2016-12-08
The aims of this study were to: (i) establish a new criterion method to validate inertia tensor estimates by setting the experimental angular velocity data of an airborne object as ground truth against simulations run with the estimated tensors, and (ii) test the sensitivity of the simulations to changes in the inertia tensor components. A rigid steel cylinder was covered with reflective kinematic markers and projected through a calibrated motion capture volume. Simulations of the airborne motion were run with two models, using inertia tensors estimated with a geometric formula or with the compound pendulum technique. The deviation angles between experimental (ground truth) and simulated angular velocity vectors and the root mean squared deviation angle were computed for every simulation. Monte Carlo analyses were performed to assess the sensitivity of the simulations to changes in the magnitude of the principal moments of inertia within ±10% and to changes in the orientation of the principal axes of inertia within ±10° (of the geometric-based inertia tensor). Root mean squared deviation angles ranged between 2.9° and 4.3° for the inertia tensor estimated geometrically, and between 11.7° and 15.2° for the compound pendulum values. Errors of up to 10% in the magnitude of the principal moments of inertia yielded root mean squared deviation angles ranging between 3.2° and 6.6°, and between 5.5° and 7.9° when combined with errors of 10° in the orientation of the principal axes of inertia. The proposed technique can effectively validate inertia tensors from novel estimation methods of body segment inertial parameters. The orientation of the principal axes of inertia should not be neglected when modelling human/animal mechanics. Copyright © 2016 Elsevier Ltd. All rights reserved.
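The validation metric itself is straightforward to compute. The sketch below computes the per-sample deviation angle between experimental and simulated angular velocity vectors and its root mean square over a trial; the array shapes are assumptions, not taken from the paper.

```python
# Minimal sketch of the deviation-angle metric between two angular velocity
# time series (experimental ground truth vs. simulation).
import numpy as np

def rms_deviation_angle(omega_exp, omega_sim):
    """omega_*: (n, 3) angular velocity time series; returns degrees."""
    dot = np.sum(omega_exp * omega_sim, axis=1)
    norms = (np.linalg.norm(omega_exp, axis=1)
             * np.linalg.norm(omega_sim, axis=1))
    # clip guards against arccos domain errors from round-off
    ang = np.degrees(np.arccos(np.clip(dot / norms, -1.0, 1.0)))
    return np.sqrt(np.mean(ang ** 2))
```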
NASA Astrophysics Data System (ADS)
Bruni, Marco; Thomas, Daniel B.; Wands, David
2014-02-01
We present the first calculation of an intrinsically relativistic quantity, the leading-order correction to Newtonian theory, in fully nonlinear cosmological large-scale structure studies. Traditionally, nonlinear structure formation in standard ΛCDM cosmology is studied using N-body simulations, based on Newtonian gravitational dynamics on an expanding background. When one derives the Newtonian regime in a way that is a consistent approximation to the Einstein equations, the first relativistic correction to the usual Newtonian scalar potential is a gravitomagnetic vector potential, giving rise to frame dragging. At leading order, this vector potential does not affect the matter dynamics, so it can be computed from Newtonian N-body simulations. We explain how we compute the vector potential from simulations in ΛCDM and examine its magnitude relative to the scalar potential, finding that the power spectrum of the vector potential is of order 10^-5 times the scalar power spectrum over the range of nonlinear scales we consider. On these scales the vector potential is up to two orders of magnitude larger than the value predicted by second-order perturbation theory extrapolated to the same scales. We also discuss some possible observable effects and future developments.
NASA Astrophysics Data System (ADS)
Wu, Jiang; Liao, Fucheng; Tomizuka, Masayoshi
2017-01-01
This paper discusses the design of an optimal preview controller for a linear continuous-time stochastic control system in a finite-time horizon, using the method of the augmented error system. First, an assistant system is introduced for state shifting. Then, to overcome the difficulty that the state equation of the stochastic control system cannot be differentiated because of Brownian motion, an integrator is introduced. Thus, the augmented error system, which contains the integrator vector, control input, reference signal, error vector, and state of the system, is constructed. This transforms the tracking problem of optimal preview control for the linear stochastic control system into an optimal output tracking problem for the augmented error system. Using dynamic programming from the theory of stochastic control, the optimal controller with previewable signals for the augmented error system, which equals the controller of the original system, is obtained. Finally, numerical simulations show the effectiveness of the controller.
Demonstrating the Direction of Angular Velocity in Circular Motion
ERIC Educational Resources Information Center
Demircioglu, Salih; Yurumezoglu, Kemal; Isik, Hakan
2015-01-01
Rotational motion is ubiquitous in nature, from astronomical systems to household devices in everyday life to elementary models of atoms. Unlike the tangential velocity vector that represents the instantaneous linear velocity (magnitude and direction), an angular velocity vector is conceptually more challenging for students to grasp. In physics…
Basic linear algebra subprograms for FORTRAN usage
NASA Technical Reports Server (NTRS)
Lawson, C. L.; Hanson, R. J.; Kincaid, D. R.; Krogh, F. T.
1977-01-01
A package of 38 low level subprograms for many of the basic operations of numerical linear algebra is presented. The package is intended to be used with FORTRAN. The operations in the package are dot products, elementary vector operations, Givens transformations, vector copy and swap, vector norms, vector scaling, and the indices of components of largest magnitude. The subprograms and a test driver are available in portable FORTRAN. Versions of the subprograms are also provided in assembly language for the IBM 360/67, the CDC 6600 and CDC 7600, and the Univac 1108.
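For readers checking ports of the package against a modern array library, the Level 1 operations map directly onto NumPy. The snippet below shows equivalents of a few of them (SDOT, SAXPY, SNRM2, ISAMAX); note that BLAS indices are 1-based while the NumPy result is 0-based.

```python
# NumPy equivalents of a few BLAS Level 1 operations from the package.
import numpy as np

x = np.array([1.0, -7.5, 3.0])
y = np.array([2.0, 0.5, -1.0])
a = 2.0

dot = x @ y                          # SDOT:   dot product
axpy = a * x + y                     # SAXPY:  y := a*x + y
nrm2 = np.linalg.norm(x)             # SNRM2:  Euclidean norm
iamax = int(np.argmax(np.abs(x)))    # ISAMAX: index of largest |x_i| (0-based)
print(dot, axpy, nrm2, iamax)
```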
NASA Technical Reports Server (NTRS)
Jaggi, S.
1993-01-01
A study is conducted to investigate the effects and advantages of data compression techniques on multispectral imagery data acquired by NASA's airborne scanners at the Stennis Space Center. The first technique used was vector quantization. The vector is defined in the multispectral imagery context as an array of pixels from the same location in each channel. The error obtained in substituting the reconstructed images for the original set is compared for different compression ratios. Also, the eigenvalues of the covariance matrix obtained from the reconstructed data set are compared with the eigenvalues of the original set. The effects of varying the size of the vector codebook on the quality of the compression and on subsequent classification are also presented. The output data from the vector quantization algorithm were further compressed by a lossless technique called Difference-mapped Shift-extended Huffman coding. The overall compression for 7 channels of data acquired by the Calibrated Airborne Multispectral Scanner (CAMS) was 195:1 (0.41 bpp) with an RMS error of 15.8 pixels, and 18:1 (0.447 bpp) with an RMS error of 3.6 pixels. The algorithms were implemented in software and interfaced, with the help of dedicated image processing boards, to an 80386 PC compatible computer. Modules were developed for the tasks of image compression and image analysis. Also, supporting software to perform image processing for visual display and interpretation of the compressed/classified images was developed.
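The vector-quantization stage can be sketched briefly. Below, 7-channel pixel vectors are quantized against a codebook built with plain k-means; the study's actual codebook-design procedure and parameters are not specified here, so the training loop is a generic stand-in.

```python
# Minimal vector quantization sketch: k-means codebook, nearest-codeword
# encoding, and reconstruction RMS error.
import numpy as np

def build_codebook(vectors, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), k, replace=False)].copy()
    for _ in range(iters):
        # assign each vector to its nearest codeword
        idx = np.argmin(((vectors[:, None] - codebook[None]) ** 2).sum(2), axis=1)
        for j in range(k):
            if np.any(idx == j):
                codebook[j] = vectors[idx == j].mean(axis=0)
    return codebook

vecs = np.random.default_rng(1).integers(0, 256, (5000, 7)).astype(float)
cb = build_codebook(vecs, 64)                        # 64-entry codebook
idx = np.argmin(((vecs[:, None] - cb[None]) ** 2).sum(2), axis=1)
rms = np.sqrt(np.mean((vecs - cb[idx]) ** 2))        # reconstruction RMS error
print(rms)
```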
A hybrid frame concealment algorithm for H.264/AVC.
Yan, Bo; Gharavi, Hamid
2010-01-01
In packet-based video transmissions, packet loss due to channel errors may result in the loss of a whole video frame. Recently, many error concealment algorithms have been proposed to combat channel errors; however, most existing algorithms can only deal with the loss of macroblocks and are not able to conceal a whole missing frame. To resolve this problem, in this paper we propose a new hybrid motion vector extrapolation (HMVE) algorithm to recover the whole missing frame; it is able to provide more accurate estimation of the motion vectors of the missing frame than other conventional methods. Simulation results show that it is highly effective and significantly outperforms other existing frame recovery methods.
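A simplified sketch of the motion-vector-extrapolation core (not the full hybrid algorithm): each block of the last received frame is projected forward along its own motion vector, and every block of the lost frame averages the vectors of the blocks that land on it. Block size, frame dimensions, and the averaging rule are assumptions for illustration.

```python
# Simplified motion-vector extrapolation for a lost frame (illustrative only;
# HMVE adds pixel-level hybrid decisions on top of this idea).
import numpy as np

def extrapolate_mvs(prev_mvs, block=16, frame_blocks=(9, 11)):
    """prev_mvs: dict {(by, bx): (dy, dx)} motion vectors (pixels) of the
    last received frame; returns estimated vectors for the lost frame."""
    est = {}
    for (by, bx), (dy, dx) in prev_mvs.items():
        # block continues along its own motion vector into the lost frame
        ny, nx = round(by + dy / block), round(bx + dx / block)
        if 0 <= ny < frame_blocks[0] and 0 <= nx < frame_blocks[1]:
            est.setdefault((ny, nx), []).append((dy, dx))
    # average candidate vectors per block; uncovered blocks would fall back
    # to zero motion or spatial neighbors in a fuller implementation
    return {b: tuple(np.mean(v, axis=0)) for b, v in est.items()}
```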
Trial-to-trial adaptation in control of arm reaching and standing posture
Pienciak-Siewert, Alison; Horan, Dylan P.
2016-01-01
Classical theories of motor learning hypothesize that adaptation is driven by sensorimotor error; this is supported by studies of arm and eye movements that have shown that trial-to-trial adaptation increases with error. Studies of postural control have shown that anticipatory postural adjustments increase with the magnitude of a perturbation. However, differences in adaptation have been observed between the two modalities, possibly due to either the inherent instability or sensory uncertainty in standing posture. Therefore, we hypothesized that trial-to-trial adaptation in posture should be driven by error, similar to what is observed in arm reaching, but the nature of the relationship between error and adaptation may differ. Here we investigated trial-to-trial adaptation of arm reaching and postural control concurrently; subjects made reaching movements in a novel dynamic environment of varying strengths, while standing and holding the handle of a force-generating robotic arm. We found that error and adaptation increased with perturbation strength in both arm and posture. Furthermore, in both modalities, adaptation showed a significant correlation with error magnitude. Our results indicate that adaptation scales proportionally with error in the arm and near proportionally in posture. In posture only, adaptation was not sensitive to small error sizes, which were similar in size to errors experienced in unperturbed baseline movements due to inherent variability. This finding may be explained as an effect of uncertainty about the source of small errors. Our findings suggest that in rehabilitation, postural error size should be considered relative to the magnitude of inherent movement variability. PMID:27683888
Trial-to-trial adaptation in control of arm reaching and standing posture.
Pienciak-Siewert, Alison; Horan, Dylan P; Ahmed, Alaa A
2016-12-01
Classical theories of motor learning hypothesize that adaptation is driven by sensorimotor error; this is supported by studies of arm and eye movements that have shown that trial-to-trial adaptation increases with error. Studies of postural control have shown that anticipatory postural adjustments increase with the magnitude of a perturbation. However, differences in adaptation have been observed between the two modalities, possibly due to either the inherent instability or sensory uncertainty in standing posture. Therefore, we hypothesized that trial-to-trial adaptation in posture should be driven by error, similar to what is observed in arm reaching, but the nature of the relationship between error and adaptation may differ. Here we investigated trial-to-trial adaptation of arm reaching and postural control concurrently; subjects made reaching movements in a novel dynamic environment of varying strengths, while standing and holding the handle of a force-generating robotic arm. We found that error and adaptation increased with perturbation strength in both arm and posture. Furthermore, in both modalities, adaptation showed a significant correlation with error magnitude. Our results indicate that adaptation scales proportionally with error in the arm and near proportionally in posture. In posture only, adaptation was not sensitive to small error sizes, which were similar in size to errors experienced in unperturbed baseline movements due to inherent variability. This finding may be explained as an effect of uncertainty about the source of small errors. Our findings suggest that in rehabilitation, postural error size should be considered relative to the magnitude of inherent movement variability. Copyright © 2016 the American Physiological Society.
Video data compression using artificial neural network differential vector quantization
NASA Technical Reports Server (NTRS)
Krishnamurthy, Ashok K.; Bibyk, Steven B.; Ahalt, Stanley C.
1991-01-01
An artificial neural network vector quantizer is developed for use in data compression applications such as digital video. Differential Vector Quantization is used to preserve edge features, and a new adaptive algorithm, known as Frequency-Sensitive Competitive Learning, is used to develop the vector quantizer codebook. To achieve real-time performance, a custom Very Large Scale Integration Application Specific Integrated Circuit (VLSI ASIC) is being developed to realize the associative memory functions needed in the vector quantization algorithm. By using vector quantization, the need for Huffman coding can be eliminated, resulting in superior robustness to channel bit errors compared with methods that use variable-length codes.
Proprioception Is Robust under External Forces
Kuling, Irene A.; Brenner, Eli; Smeets, Jeroen B. J.
2013-01-01
Information from cutaneous, muscle and joint receptors is combined with efferent information to create a reliable percept of the configuration of our body (proprioception). We exposed the hand to several horizontal force fields to examine whether external forces influence this percept. In an end-point task subjects reached visually presented positions with their unseen hand. In a vector reproduction task, subjects had to judge a distance and direction visually and reproduce the corresponding vector by moving the unseen hand. We found systematic individual errors in the reproduction of the end-points and vectors, but these errors did not vary systematically with the force fields. This suggests that human proprioception accounts for external forces applied to the hand when sensing the position of the hand in the horizontal plane. PMID:24019959
Instrument Pointing Capabilities: Past, Present, and Future
NASA Technical Reports Server (NTRS)
Blackmore, Lars; Murray, Emmanuell; Scharf, Daniel P.; Aung, Mimi; Bayard, David; Brugarolas, Paul; Hadaegh, Fred; Lee, Allan; Milman, Mark; Sirlin, Sam;
2011-01-01
This paper surveys the instrument pointing capabilities of past, present, and future space telescopes and interferometers. As an important aspect of this survey, we present a taxonomy for "apples-to-apples" comparisons of pointing performance. First, pointing errors are defined relative to either an inertial frame or a celestial target. Pointing error can then be further subdivided into DC, that is, steady-state, and AC components. We refer to the magnitude of the DC error relative to the inertial frame as absolute pointing accuracy, and we refer to the magnitude of the DC error relative to a celestial target as relative pointing accuracy. The magnitude of the AC error is referred to as pointing stability. While an AC/DC partition is not new, we leverage previous work by some of the authors to quantitatively clarify and compare varying definitions of jitter and time-window averages. With this taxonomy, pointing accuracies and stabilities, both required and achieved, are presented for sixteen past, present, and future missions. In addition, we describe the attitude control technologies used, and for future missions planned, to achieve these pointing performances.
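The DC/AC partition in this taxonomy reduces to a simple computation on an error time series. A minimal sketch, assuming a scalar pointing-error signal already expressed in the chosen frame (inertial or target-relative):

```python
# Minimal sketch of the DC/AC pointing-error decomposition: the DC component
# gives accuracy, the RMS of the AC residual gives stability.
import numpy as np

def pointing_metrics(err, window=None):
    """err: (n,) pointing-error samples (e.g., arcsec); window: optional
    number of samples defining the time-window average."""
    err = np.asarray(err[:window] if window else err, dtype=float)
    dc = err.mean()                       # DC error -> pointing accuracy
    ac = err - dc                         # residual jitter about the mean
    stability = np.sqrt(np.mean(ac ** 2))  # RMS of AC error -> stability
    return abs(dc), stability
```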
A retrospective analysis of children with anisometropic amblyopia in Nepal.
Sapkota, Kishor
2014-06-01
Anisometropia is one of the main causes of amblyopia. This study was conducted to investigate the association between the depth of amblyopia and the magnitude of anisometropia. A retrospective record review was conducted at the Nepal Eye Hospital between July 2006 and June 2011. The children included in this study were aged ≤13 years and diagnosed with unilateral anisometropic amblyopia, with no strabismus or ocular pathology. Associations between the depth of amblyopia and the age and/or gender of the subjects, the laterality of the amblyopic eyes, the type and magnitude of refractive error of the amblyopic eyes, and the magnitude of anisometropia were statistically analyzed. Of the 189 children with unilateral anisometropic amblyopia (mean age 9.1 ± 2.8 years), 59% were boys. Amblyopia was more commonly found in the left eye (p < 0.001). The most common type of refractive error was astigmatism (61%). The depth of amblyopia was not associated with the gender (p = 0.864) or age (p = 0.341) of the subjects or the laterality of the eyes (p = 0.159), but it was associated with the type (p = 0.049) and magnitude (p = 0.013) of refractive error of the amblyopic eye and the magnitude of anisometropia (p = 0.002). Nepalese children with anisometropic amblyopia presented late to the hospital. The depth of amblyopia was highly associated with the type and magnitude of refractive error of the amblyopic eye and the magnitude of anisometropia. Basic vision screening programs may therefore help to identify anisometropic children and refer them to the hospital for timely management of anisometropic amblyopia, if present.
Gong, Ang; Zhao, Xiubin; Pang, Chunlei; Duan, Rong; Wang, Yong
2015-12-02
For Global Navigation Satellite System (GNSS) single-frequency, single-epoch attitude determination, this paper proposes a new reliable method with a baseline vector constraint. First, prior knowledge of baseline length, heading, and pitch obtained from other navigation equipment or sensors is used to rigorously reconstruct the objective function. Then, the search strategy is improved: a gradually enlarged ellipsoidal search space is substituted for the non-ellipsoidal search space to ensure that the correct ambiguity candidates are within it, and to allow the search to be carried out directly by the least-squares ambiguity decorrelation adjustment (LAMBDA) method. Among all vector candidates, some are further eliminated by a derived approximate inequality, which accelerates the search. Experimental results show that, compared to the traditional method with only a baseline length constraint, the new method can use a priori three-dimensional baseline knowledge to fix ambiguities reliably and achieve a high success rate. Experimental tests also verify that it is not very sensitive to baseline vector error and performs robustly when the angular error is not large.
NASA Astrophysics Data System (ADS)
Zhou, Wen; Qin, Chaoyi
2017-09-01
We demonstrate multi-frequency QPSK millimeter-wave (mm-wave) vector signal generation enabled by MZM-based optical carrier suppression (OCS) modulation and in-phase/quadrature (I/Q) modulation. We numerically simulate the generation of 40-, 80- and 120-GHz vector signals, where the three different signals carry the same QPSK modulation information. We also experimentally realize 11-Gbaud QPSK vector signal transmission over 20 km of fiber, and the generation of vector signals at 40 GHz, 80 GHz and 120 GHz. The experimental results show that the bit-error rate (BER) for all three signals can reach the forward-error-correction (FEC) threshold of 3.8×10^-3. The advantage of the proposed system is that it provides high-speed, high-bandwidth, high-capacity seamless access for TDM and wireless networks. These features indicate an important application prospect in wireless access networks for WiMax, Wi-Fi and 5G/LTE.
Three-dimensional flow measurements in a vaneless radial turbine scroll
NASA Technical Reports Server (NTRS)
Tabakoff, W.; Wood, B.; Vittal, B. V. R.
1982-01-01
The flow behavior in a vaneless radial turbine scroll was examined experimentally. The data were obtained using the slant sensor technique of hot-film anemometry. This method uses the asymmetric heat transfer characteristics of a constant-temperature hot-film sensor to detect the flow direction and magnitude. This was achieved by obtaining a velocity vector measurement at three sensor positions with respect to the flow. The true magnitude and direction of the velocity vector were then found using these values and a Newton-Raphson numerical technique. The through-flow and secondary-flow velocity components were measured at various points in three scroll sections.
Dipole interaction of the Quincke rotating particles.
Dolinsky, Yu; Elperin, T
2012-02-01
We study the behavior of particles having a finite electric permittivity and conductivity in a weakly conducting fluid under the action of an external electric field. We consider the case when the strength of the external electric field is above the threshold, and the particles rotate due to the Quincke effect. We determine the magnitude of the dipole interaction of the Quincke rotating particles and the shift of the frequency of Quincke rotation caused by the dipole interaction between the particles. It is demonstrated that, depending on the mutual orientation of the particles' angular velocity vectors, the vector directed along the straight line between the centers of the particles, and the external electric field strength vector, the particles can attract or repel each other. In contrast to the case of nonrotating particles, where the magnitude of the dipole interaction increases with the strength of the external electric field, the magnitude of the dipole interaction of the Quincke rotating particles either does not change or decreases with increasing strength of the external electric field, depending on the field strength and the electrodynamic parameters of the particles.
Dipole interaction of the Quincke rotating particles
NASA Astrophysics Data System (ADS)
Dolinsky, Yu.; Elperin, T.
2012-02-01
We study the behavior of particles having a finite electric permittivity and conductivity in a weakly conducting fluid under the action of an external electric field. We consider the case when the strength of the external electric field is above the threshold, and the particles rotate due to the Quincke effect. We determine the magnitude of the dipole interaction of the Quincke rotating particles and the shift of the frequency of Quincke rotation caused by the dipole interaction between the particles. It is demonstrated that, depending on the mutual orientation of the particles' angular velocity vectors, the vector directed along the straight line between the centers of the particles, and the external electric field strength vector, the particles can attract or repel each other. In contrast to the case of nonrotating particles, where the magnitude of the dipole interaction increases with the strength of the external electric field, the magnitude of the dipole interaction of the Quincke rotating particles either does not change or decreases with increasing strength of the external electric field, depending on the field strength and the electrodynamic parameters of the particles.
Wang, Shun-Yuan; Tseng, Chwan-Lu; Lin, Shou-Chuang; Chiu, Chun-Jung; Chou, Jen-Hsiang
2015-01-01
This paper presents the implementation of an adaptive supervisory sliding fuzzy cerebellar model articulation controller (FCMAC) in the speed sensorless vector control of an induction motor (IM) drive system. The proposed adaptive supervisory sliding FCMAC comprised a supervisory controller, integral sliding surface, and an adaptive FCMAC. The integral sliding surface was employed to eliminate steady-state errors and enhance the responsiveness of the system. The adaptive FCMAC incorporated an FCMAC with a compensating controller to perform a desired control action. The proposed controller was derived using the Lyapunov approach, which guarantees learning-error convergence. The implementation of three intelligent control schemes—the adaptive supervisory sliding FCMAC, adaptive sliding FCMAC, and adaptive sliding CMAC—were experimentally investigated under various conditions in a realistic sensorless vector-controlled IM drive system. The root mean square error (RMSE) was used as a performance index to evaluate the experimental results of each control scheme. The analysis results indicated that the proposed adaptive supervisory sliding FCMAC substantially improved the system performance compared with the other control schemes. PMID:25815450
Wang, Shun-Yuan; Tseng, Chwan-Lu; Lin, Shou-Chuang; Chiu, Chun-Jung; Chou, Jen-Hsiang
2015-03-25
This paper presents the implementation of an adaptive supervisory sliding fuzzy cerebellar model articulation controller (FCMAC) in the speed sensorless vector control of an induction motor (IM) drive system. The proposed adaptive supervisory sliding FCMAC comprised a supervisory controller, integral sliding surface, and an adaptive FCMAC. The integral sliding surface was employed to eliminate steady-state errors and enhance the responsiveness of the system. The adaptive FCMAC incorporated an FCMAC with a compensating controller to perform a desired control action. The proposed controller was derived using the Lyapunov approach, which guarantees learning-error convergence. The implementation of three intelligent control schemes--the adaptive supervisory sliding FCMAC, adaptive sliding FCMAC, and adaptive sliding CMAC--were experimentally investigated under various conditions in a realistic sensorless vector-controlled IM drive system. The root mean square error (RMSE) was used as a performance index to evaluate the experimental results of each control scheme. The analysis results indicated that the proposed adaptive supervisory sliding FCMAC substantially improved the system performance compared with the other control schemes.
Effects of the Ionosphere on Passive Microwave Remote Sensing of Ocean Salinity from Space
NASA Technical Reports Server (NTRS)
LeVine, D. M.; Abaham, Saji; Hildebrand, Peter H. (Technical Monitor)
2001-01-01
Among the remote sensing applications currently being considered from space is the measurement of sea surface salinity. The salinity of the open ocean is important for understanding ocean circulation and for modeling energy exchange with the atmosphere. Passive microwave remote sensors operating near 1.4 GHz (L-band) could provide the data needed to fill the gap in current coverage and to complement the in situ arrays being planned to provide subsurface profiles in the future. However, the dynamic range of the salinity signal in the open ocean is relatively small, and propagation effects along the path from surface to sensor must be taken into account. In particular, Faraday rotation and even attenuation/emission in the ionosphere can be important sources of error. The purpose of this work is to estimate the magnitude of these effects in the context of a future remote sensing system in space to measure salinity at L-band. Data will be presented as a function of time, location, and solar activity, using IRI-95 to model the ionosphere. The ionosphere presents two potential sources of error for the measurement of salinity: rotation of the polarization vector (Faraday rotation) and attenuation/emission. Estimates of the effect of these two phenomena on passive remote sensing over the oceans at L-band (1.4 GHz) are presented.
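The leading effect of Faraday rotation on a dual-polarized L-band radiometer is a mixing of the vertically and horizontally polarized brightness temperatures. A minimal sketch, neglecting the third Stokes parameter and treating the rotation angle as given (in practice it would come from an ionospheric model such as IRI-95 together with a geomagnetic field model); the input values are illustrative ocean-like numbers, not the paper's data:

```python
# Minimal sketch of Faraday-rotation mixing of polarized brightness
# temperatures (third Stokes parameter neglected).
import numpy as np

def faraday_mix(tb_v, tb_h, omega_deg):
    """Brightness temperatures (K) seen after rotation by omega_deg."""
    w = np.deg2rad(omega_deg)
    tb_v_meas = tb_v * np.cos(w) ** 2 + tb_h * np.sin(w) ** 2
    tb_h_meas = tb_h * np.cos(w) ** 2 + tb_v * np.sin(w) ** 2
    return tb_v_meas, tb_h_meas

# even a few degrees of rotation matters: the open-ocean salinity signal
# corresponds to brightness-temperature changes well under 1 K
print(faraday_mix(127.0, 67.0, 10.0))   # illustrative V/H values, 10 deg
```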
Optimizing correlation techniques for improved earthquake location
Schaff, D.P.; Bokelmann, G.H.R.; Ellsworth, W.L.; Zanzerkia, E.; Waldhauser, F.; Beroza, G.C.
2004-01-01
Earthquake location using relative arrival time measurements can lead to dramatically reduced location errors and a view of fault-zone processes with unprecedented detail. There are two principal reasons why this approach reduces location errors. The first is that the use of differenced arrival times to solve for the vector separation of earthquakes removes from the earthquake location problem much of the error due to unmodeled velocity structure. The second reason, on which we focus in this article, is that waveform cross correlation can substantially reduce measurement error. While cross correlation has long been used to determine relative arrival times with subsample precision, we extend correlation measurements to less similar waveforms, and we introduce a general quantitative means to assess when correlation data provide an improvement over catalog phase picks. We apply the technique to local earthquake data from the Calaveras Fault in northern California. Tests for an example streak of 243 earthquakes demonstrate that relative arrival times with normalized cross-correlation coefficients as low as ~70%, interevent separation distances as large as 2 km, and magnitudes up to 3.5, as recorded on the Northern California Seismic Network, are more precise than relative arrival times determined from catalog phase data. Also discussed are improvements made to the correlation technique itself. We find that for large time offsets, our implementation of time-domain cross correlation is often more robust and recovers more observations than the cross-spectral approach. Longer time windows give better results than shorter ones. Finally, we explain how thresholds and empirical weighting functions may be derived to optimize the location procedure for any given region of interest, taking advantage of the respective strengths of diverse correlation and catalog phase data on different length scales.
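The subsample-precision measurement mentioned above is commonly done by interpolating the cross-correlation peak. A minimal sketch of a time-domain version with three-point parabolic refinement (the paper's windowing, weighting, and implementation details are not reproduced here):

```python
# Minimal sketch: relative delay of one seismogram window against another,
# with subsample precision from parabolic interpolation of the peak.
import numpy as np

def relative_delay(x, y, dt):
    """Delay of y relative to x, in seconds (x, y: equal-length windows)."""
    cc = np.correlate(y, x, mode="full")
    k = int(np.argmax(cc))
    # parabolic (three-point) interpolation around the integer-lag peak
    if 0 < k < len(cc) - 1:
        denom = cc[k - 1] - 2 * cc[k] + cc[k + 1]
        frac = 0.5 * (cc[k - 1] - cc[k + 1]) / denom if denom != 0 else 0.0
    else:
        frac = 0.0
    lag = k - (len(x) - 1) + frac   # zero lag sits at index len(x) - 1
    return lag * dt
```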
[Orthogonal Vector Projection Algorithm for Spectral Unmixing].
Song, Mei-ping; Xu, Xing-wei; Chang, Chein-I; An, Ju-bai; Yao, Li
2015-12-01
Spectral unmixing is an important part of hyperspectral technology and is essential for material quantity analysis in hyperspectral imagery. Most linear unmixing algorithms require computations of matrix multiplication and matrix inversion or matrix determinants. These are difficult to program and especially hard to realize on hardware. At the same time, the computational cost of these algorithms increases significantly as the number of endmembers grows. Here, based on the traditional Orthogonal Subspace Projection algorithm, a new method called Orthogonal Vector Projection is proposed using the orthogonality principle. It simplifies the process by avoiding matrix multiplication and inversion. It first computes the final orthogonal vector via the Gram-Schmidt process for each endmember spectrum. These orthogonal vectors are then used as projection vectors for the pixel signature. The unconstrained abundance can be obtained directly by projecting the signature onto the projection vectors and computing the ratio of the projected vector length to the orthogonal vector length. Compared to the Orthogonal Subspace Projection and Least Squares Error algorithms, this method does not need matrix inversion, which is computationally costly and hard to implement on hardware. It completes the orthogonalization process by repeated vector operations, making it easy to apply in both parallel computation and hardware. The reasonableness of the algorithm is proved by its relationship with the Orthogonal Subspace Projection and Least Squares Error algorithms. Its computational complexity is also compared with that of the other two algorithms and is the lowest of the three. Finally, experimental results on synthetic and real images are provided, giving further evidence of the effectiveness of the method.
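The projection step lends itself to a compact sketch. Below, each endmember is orthogonalized against the others (the Gram-Schmidt step, done here via a QR factorization for brevity), and the unconstrained abundance is the ratio of the pixel's projection to the endmember's own projection; variable shapes are assumptions for illustration.

```python
# Minimal sketch of the orthogonal-vector-projection abundance estimate.
import numpy as np

def ovp_abundances(E, x):
    """E: (bands, p) endmember matrix; x: (bands,) pixel; returns (p,)."""
    bands, p = E.shape
    a = np.empty(p)
    for k in range(p):
        w = E[:, k].astype(float).copy()
        # orthogonalize endmember k against all the others
        others = np.delete(E, k, axis=1)
        q, _ = np.linalg.qr(others)         # orthonormal basis of the others
        w -= q @ (q.T @ w)
        a[k] = (w @ x) / (w @ E[:, k])      # unconstrained abundance
    return a

E = np.array([[1.0, 0.2], [0.1, 1.0], [0.3, 0.4]])   # two toy endmembers
x = E @ np.array([0.7, 0.3])                          # noiseless mixture
print(ovp_abundances(E, x))                           # -> [0.7 0.3]
```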
Saunders, Kathryn J; Little, Julie-Anne; McClelland, Julie F; Jackson, A Jonathan
2010-06-01
To describe refractive status in children and young adults with cerebral palsy (CP) and relate refractive error to standardized measures of type and severity of CP impairment and to ocular dimensions. A population-based sample of 118 participants aged 4 to 23 years with CP (mean 11.64 +/- 4.06) and an age-appropriate control group (n = 128; age, 4-16 years; mean, 9.33 +/- 3.52) were recruited. Motor impairment was described with the Gross Motor Function Classification Scale (GMFCS), and subtype was allocated with the Surveillance of Cerebral Palsy in Europe (SCPE) classification. Measures of refractive error were obtained from all participants, and ocular biometry from a subgroup with CP. A significantly higher prevalence and magnitude of refractive error was found in the CP group compared to the control group. Axial length and spherical refractive error were strongly related; this relation did not improve with the inclusion of corneal data. There was no relation between the presence or magnitude of spherical refractive errors in CP and the level of motor impairment, intellectual impairment, or the presence of communication difficulties. Higher spherical refractive errors were significantly associated with the nonspastic CP subtype. The presence and magnitude of astigmatism were greater when intellectual impairment was more severe, and astigmatic errors were explained by corneal dimensions. Conclusions: High refractive errors are common in CP, pointing to impairment of the emmetropization process; the biometric data support this. In contrast to other functional vision measures, spherical refractive error is unrelated to CP severity, but those with nonspastic CP tend to demonstrate the most extreme errors in refraction.
NASA Technical Reports Server (NTRS)
Carson, William; Lindemuth, Kathleen; Mich, John; White, K. Preston; Parker, Peter A.
2009-01-01
Probabilistic engineering design enhances safety and reduces costs by incorporating risk assessment directly into the design process. In this paper, we assess the format of the quantitative metrics for the vehicle which will replace the Space Shuttle, the Ares I rocket. Specifically, we address the metrics for in-flight measurement error in the vector position of the motor nozzle, dictated by limits on guidance, navigation, and control systems. Analyses include the propagation of error from measured to derived parameters, the time-series of dwell points for the duty cycle during static tests, and commanded versus achieved yaw angle during tests. Based on these analyses, we recommend a probabilistic template for specifying the maximum error in angular displacement and radial offset for the nozzle-position vector. Criteria for evaluating individual tests and risky decisions also are developed.
Combined group ECC protection and subgroup parity protection
Gara, Alan G.; Chen, Dong; Heidelberger, Philip; Ohmacht, Martin
2013-06-18
A method and system are disclosed for providing combined error code protection and subgroup parity protection for a given group of n bits. The method comprises the steps of identifying a number, m, of redundant bits for said error protection; and constructing a matrix P, wherein multiplying said given group of n bits with P produces m redundant error correction code (ECC) protection bits, and two columns of P provide parity protection for subgroups of said given group of n bits. In the preferred embodiment of the invention, the matrix P is constructed by generating permutations of m bit wide vectors with three or more, but an odd number of, elements with value one and the other elements with value zero; and assigning said vectors to rows of the matrix P.
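As a rough illustration of the row construction described in the abstract, the short Python sketch below enumerates the m-bit candidate vectors with an odd number (three or more) of ones; which candidates are assigned to which rows so that two columns yield the subgroup parities is a design detail the abstract does not spell out, so the selection here is arbitrary:

    from itertools import combinations

    def candidate_rows(m):
        # all m-bit vectors whose weight is odd and at least 3
        rows = []
        for weight in range(3, m + 1, 2):        # odd weights 3, 5, 7, ...
            for ones in combinations(range(m), weight):
                rows.append(tuple(1 if j in ones else 0 for j in range(m)))
        return rows

    m, n = 8, 64                                 # illustrative sizes only
    P_rows = candidate_rows(m)[:n]               # one row of P per data bit
    print(len(candidate_rows(m)), "candidates; using", len(P_rows), "rows")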
Nonlinear calibration for petroleum water content measurement using PSO
NASA Astrophysics Data System (ADS)
Li, Mingbao; Zhang, Jiawei
2008-10-01
A new algorithm for strapdown inertial navigation system (SINS) state estimation based on neural networks is introduced. In the training strategy, the error vector and its delay are introduced. This error vector comprises the position and velocity differences between the system estimates and the GPS outputs. After state prediction and state update, the states of the system are estimated. After off-line training, the network can approximate the state switching of the SINS, and after on-line training the state-estimation precision can be improved further by reducing the network output errors. The network convergence is then discussed. In the end, several simulations with different noise levels are given. The results show that the neural network state estimator has lower noise sensitivity and better noise immunity than the Kalman filter.
Kaufhold, John P; Tsai, Philbert S; Blinder, Pablo; Kleinfeld, David
2012-08-01
A graph of tissue vasculature is an essential requirement to model the exchange of gases and nutriments between the blood and cells in the brain. Such a graph is derived from a vectorized representation of anatomical data, provides a map of all vessels as vertices and segments, and may include the location of nonvascular components, such as neuronal and glial somata. Yet vectorized data sets typically contain erroneous gaps, spurious endpoints, and spuriously merged strands. Current methods to correct such defects only address the issue of connecting gaps and further require manual tuning of parameters in a high dimensional algorithm. To address these shortcomings, we introduce a supervised machine learning method that (1) connects vessel gaps by "learned threshold relaxation"; (2) removes spurious segments by "learning to eliminate deletion candidate strands"; and (3) enforces consistency in the joint space of learned vascular graph corrections through "consistency learning." Human operators are only required to label individual objects they recognize in a training set and are not burdened with tuning parameters. The supervised learning procedure examines the geometry and topology of features in the neighborhood of each vessel segment under consideration. We demonstrate the effectiveness of these methods on four sets of microvascular data, each with >800³ voxels, obtained with all-optical histology of mouse tissue and vectorization by state-of-the-art techniques in image segmentation. Through statistically validated sampling and analysis in terms of precision-recall curves, we find that learning with bagged boosted decision trees reduces equal-error rates for threshold relaxation by 5-21% and for strand elimination by 18-57%. We benchmark generalization performance across datasets; while improvements vary between data sets, learning always leads to a useful reduction in error rates. Overall, learning is shown to more than halve the total error rate, and therefore the human time spent manually correcting such vectorizations. Copyright © 2012 Elsevier B.V. All rights reserved.
Kaufhold, John P.; Tsai, Philbert S.; Blinder, Pablo; Kleinfeld, David
2012-01-01
A graph of tissue vasculature is an essential requirement to model the exchange of gases and nutriments between the blood and cells in the brain. Such a graph is derived from a vectorized representation of anatomical data, provides a map of all vessels as vertices and segments, and may include the location of nonvascular components, such as neuronal and glial somata. Yet vectorized data sets typically contain erroneous gaps, spurious endpoints, and spuriously merged strands. Current methods to correct such defects only address the issue of connecting gaps and further require manual tuning of parameters in a high dimensional algorithm. To address these shortcomings, we introduce a supervised machine learning method that (1) connects vessel gaps by “learned threshold relaxation”; (2) removes spurious segments by “learning to eliminate deletion candidate strands”; and (3) enforces consistency in the joint space of learned vascular graph corrections through “consistency learning.” Human operators are only required to label individual objects they recognize in a training set and are not burdened with tuning parameters. The supervised learning procedure examines the geometry and topology of features in the neighborhood of each vessel segment under consideration. We demonstrate the effectiveness of these methods on four sets of microvascular data, each with > 800³ voxels, obtained with all-optical histology of mouse tissue and vectorization by state-of-the-art techniques in image segmentation. Through statistically validated sampling and analysis in terms of precision-recall curves, we find that learning with bagged boosted decision trees reduces equal-error rates for threshold relaxation by 5 to 21% and for strand elimination by 18 to 57%. We benchmark generalization performance across datasets; while improvements vary between data sets, learning always leads to a useful reduction in error rates. Overall, learning is shown to more than halve the total error rate, and therefore the human time spent manually correcting such vectorizations. PMID:22854035
Vector Addition: Effect of the Context and Position of the Vectors
NASA Astrophysics Data System (ADS)
Barniol, Pablo; Zavala, Genaro
2010-10-01
In this article we investigate the effect of 1) the context and 2) the position of the vectors on 2D vector addition tasks. We administered a test to 512 students completing introductory physics courses at a private Mexican university. In the first part, we analyze students' responses to three isomorphic problems: displacements, forces, and no physical context. Students were asked to draw two vectors and the vector sum. We analyzed students' procedures, detecting their difficulties when drawing the vector addition, and showed that the context matters, not only compared to the context-free case but also between the contexts. In the second part, we analyze students' responses for three different arrangements of the sum of two vectors: tail-to-tail, head-to-tail, and separated vectors. We compared the frequencies of the errors in the three different positions to deduce students' conceptions of vector addition.
Li, Wei; Liu, Jian Guo; Zhu, Ning Hua
2015-04-15
We report a novel optical vector network analyzer (OVNA) with improved accuracy based on polarization modulation and stimulated Brillouin scattering (SBS) assisted polarization pulling. The beating between adjacent higher-order optical sidebands which are generated because of the nonlinearity of an electro-optic modulator (EOM) introduces considerable error to the OVNA. In our scheme, the measurement error is significantly reduced by removing the even-order optical sidebands using polarization discrimination. The proposed approach is theoretically analyzed and experimentally verified. The experimental results show that the accuracy of the OVNA is greatly improved compared to a conventional OVNA.
NASA Astrophysics Data System (ADS)
Shastri, Niket; Pathak, Kamlesh
2018-05-01
The water vapor content of the atmosphere plays a very important role in climate. In this paper the application of GPS signals in meteorology is discussed, a useful technique for estimating the precipitable water vapor of the atmosphere. Various algorithms, such as artificial neural networks, support vector machines and multiple linear regression, are used to predict the precipitable water vapor. Comparative studies in terms of root mean square error and mean absolute error are also carried out for all the algorithms.
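For readers who want to reproduce this style of comparison, here is a minimal scikit-learn sketch; the synthetic zenith-wet-delay data and all model settings are stand-ins, not the paper's dataset or configuration:

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_absolute_error, mean_squared_error
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    # synthetic stand-in: PWV from zenith wet delay (m) and temperature (K)
    rng = np.random.default_rng(0)
    X = rng.uniform([0.05, 270.0], [0.40, 310.0], (500, 2))
    y = 160.0 * X[:, 0] * (1 + 0.002 * (X[:, 1] - 290.0)) + rng.normal(0, 0.5, 500)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

    models = {
        "MLR": LinearRegression(),
        "SVM": make_pipeline(StandardScaler(), SVR(C=10.0)),
        "ANN": make_pipeline(StandardScaler(),
                             MLPRegressor(hidden_layer_sizes=(32,),
                                          max_iter=3000, random_state=0)),
    }
    for name, model in models.items():
        pred = model.fit(Xtr, ytr).predict(Xte)
        print(name, "RMSE:", mean_squared_error(yte, pred) ** 0.5,
              "MAE:", mean_absolute_error(yte, pred))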
Vector space methods of photometric analysis - Applications to O stars and interstellar reddening
NASA Technical Reports Server (NTRS)
Massa, D.; Lillie, C. F.
1978-01-01
A multivariate vector-space formulation of photometry is developed which accounts for error propagation. An analysis of uvby and H-beta photometry of O stars is presented, with attention given to observational errors, reddening, general uvby photometry, early stars, and models of O stars. The number of observable parameters in O-star continua is investigated, the way these quantities compare with model-atmosphere predictions is considered, and an interstellar reddening law is derived. It is suggested that photospheric expansion affects the formation of the continuum in at least some O stars.
NASA Technical Reports Server (NTRS)
Battin, R. H.; Croopnick, S. R.; Edwards, J. A.
1977-01-01
The formulation of a recursive maximum likelihood navigation system employing reference position and velocity vectors as state variables is presented. Convenient forms of the required variational equations of motion are developed together with an explicit form of the associated state transition matrix needed to refer measurement data from the measurement time to the epoch time. Computational advantages accrue from this design in that the usual forward extrapolation of the covariance matrix of estimation errors can be avoided without incurring unacceptable system errors. Simulation data for earth orbiting satellites are provided to substantiate this assertion.
A new Method for the Estimation of Initial Condition Uncertainty Structures in Mesoscale Models
NASA Astrophysics Data System (ADS)
Keller, J. D.; Bach, L.; Hense, A.
2012-12-01
The estimation of fast growing error modes of a system is a key interest of ensemble data assimilation when assessing uncertainty in initial conditions. Over the last two decades three methods (and variations of these methods) have evolved for global numerical weather prediction models: ensemble Kalman filter, singular vectors and breeding of growing modes (or now ensemble transform). While the former incorporates a priori model error information and observation error estimates to determine ensemble initial conditions, the latter two techniques directly address the error structures associated with Lyapunov vectors. However, in global models these structures are mainly associated with transient global wave patterns. When assessing initial condition uncertainty in mesoscale limited area models, several problems regarding the aforementioned techniques arise: (a) additional sources of uncertainty on the smaller scales contribute to the error and (b) error structures from the global scale may quickly move through the model domain (depending on the size of the domain). To address the latter problem, perturbation structures from global models are often included in the mesoscale predictions as perturbed boundary conditions. However, the initial perturbations (when used) are often generated with a variant of an ensemble Kalman filter which does not necessarily focus on the large scale error patterns. In the framework of the European regional reanalysis project of the Hans-Ertel-Center for Weather Research we use a mesoscale model with an implemented nudging data assimilation scheme which does not support ensemble data assimilation at all. In preparation for an ensemble-based regional reanalysis and for the estimation of three-dimensional atmospheric covariance structures, we implemented a new method for the assessment of fast growing error modes for mesoscale limited area models. The so-called self-breeding is a development based on the breeding of growing modes technique. Initial perturbations are integrated forward for a short time period and then rescaled and added to the initial state again. Iterating this rapid breeding cycle provides estimates for the initial uncertainty structure (or local Lyapunov vectors) given a specific norm. To prevent all ensemble perturbations from converging towards the leading local Lyapunov vector we apply an ensemble transform variant to orthogonalize the perturbations in the sub-space spanned by the ensemble. By choosing different kinds of norms to measure perturbation growth, this technique allows for estimating uncertainty patterns targeted at specific sources of errors (e.g. convection, turbulence). With case study experiments we show applications of the self-breeding method for different sources of uncertainty and different horizontal scales.
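The rapid breeding cycle itself is simple to prototype. The sketch below uses the Lorenz-63 system as a toy stand-in for a mesoscale model and omits the ensemble-transform orthogonalization step; all parameter values are illustrative:

    import numpy as np

    def lorenz_step(x, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
        # one forward-Euler step of the Lorenz-63 toy model
        dx = np.array([s * (x[1] - x[0]),
                       x[0] * (r - x[2]) - x[1],
                       x[0] * x[1] - b * x[2]])
        return x + dt * dx

    def self_breed(x0, cycles=60, steps=25, amp=1e-3, seed=0):
        # integrate control and perturbed runs, rescale the difference to the
        # chosen amplitude (the norm), re-add it, and iterate
        rng = np.random.default_rng(seed)
        pert = rng.standard_normal(3)
        pert *= amp / np.linalg.norm(pert)
        ctrl = np.array(x0, dtype=float)
        for _ in range(cycles):
            run_c, run_p = ctrl.copy(), ctrl + pert
            for _ in range(steps):
                run_c, run_p = lorenz_step(run_c), lorenz_step(run_p)
            diff = run_p - run_c
            pert = diff * (amp / np.linalg.norm(diff))
            ctrl = run_c
        return pert   # estimate of the leading local Lyapunov vector

    print(self_breed([1.0, 1.0, 20.0]))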
Singer product apertures-A coded aperture system with a fast decoding algorithm
NASA Astrophysics Data System (ADS)
Byard, Kevin; Shutler, Paul M. E.
2017-06-01
A new type of coded aperture configuration that enables fast decoding of the coded aperture shadowgram data is presented. Based on the products of incidence vectors generated from the Singer difference sets, we call these Singer product apertures. For a range of aperture dimensions, we compare experimentally the performance of three decoding methods: standard decoding, induction decoding and direct vector decoding. In all cases the induction and direct vector methods are several orders of magnitude faster than the standard method, with direct vector decoding being significantly faster than induction decoding. For apertures of the same dimensions the increase in speed offered by direct vector decoding over induction decoding is better for lower throughput apertures.
NASA Astrophysics Data System (ADS)
Zimina, S. V.
2015-06-01
We present the results of statistical analysis of an adaptive antenna array tuned using the least-mean-square error algorithm with quadratic constraint on the useful-signal amplification with allowance for the weight-coefficient fluctuations. Using the perturbation theory, the expressions for the correlation function and power of the output signal of the adaptive antenna array, as well as the formula for the weight-vector covariance matrix are obtained in the first approximation. The fluctuations are shown to lead to the signal distortions at the antenna-array output. The weight-coefficient fluctuations result in the appearance of additional terms in the statistical characteristics of the antenna array. It is also shown that the weight-vector fluctuations are isotropic, i.e., identical in all directions of the weight-coefficient space.
Multiscale vector fields for image pattern recognition
NASA Technical Reports Server (NTRS)
Low, Kah-Chan; Coggins, James M.
1990-01-01
A uniform processing framework for low-level vision computing in which a bank of spatial filters maps the image intensity structure at each pixel into an abstract feature space is proposed. Some properties of the filters and the feature space are described. Local orientation is measured by a vector sum in the feature space as follows: each filter's preferred orientation along with the strength of the filter's output determine the orientation and the length of a vector in the feature space; the vectors for all filters are summed to yield a resultant vector for a particular pixel and scale. The orientation of the resultant vector indicates the local orientation, and the magnitude of the vector indicates the strength of the local orientation preference. Limitations of the vector sum method are discussed. Investigations show that the processing framework provides a useful, redundant representation of image structure across orientation and scale.
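A minimal sketch of the vector-sum measurement at a single pixel and scale, as described, might look like the following; note, as an added caveat not necessarily among the limitations the paper discusses, that orientation is only defined modulo 180 degrees, so filters at 0 and 135 degrees partially cancel in a plain vector sum:

    import numpy as np

    def vector_sum_orientation(responses, preferred):
        # responses: output strengths of the oriented filters at one pixel/scale
        # preferred: each filter's preferred orientation, in radians
        vx = np.sum(responses * np.cos(preferred))
        vy = np.sum(responses * np.sin(preferred))
        return np.arctan2(vy, vx), np.hypot(vx, vy)  # orientation, strength

    # e.g. four oriented filters at 0, 45, 90 and 135 degrees
    theta = np.deg2rad([0.0, 45.0, 90.0, 135.0])
    resp = np.array([0.2, 1.0, 0.3, 0.1])
    print(vector_sum_orientation(resp, theta))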
Yang, Yanqiang; Zhang, Chunxi; Lu, Jiazhen
2017-01-16
Strapdown inertial navigation system/celestial navigation system (SINS/CNS) integrated navigation is a fully autonomous and high-precision method, which has been widely used to improve the hitting accuracy and quick-reaction capability of near-Earth flight vehicles. The installation errors between the SINS and the star sensors have been one of the main factors restricting the actual accuracy of SINS/CNS. In this paper, an integration algorithm based on star vector observations is derived that accounts for the star sensor installation error. Then, the star sensor installation error is accurately estimated based on Kalman filtering (KF). Meanwhile, a local observability analysis is performed on the rank of the observability matrix obtained from the linearized observation equation, and the observability conditions are presented and validated. The number of star vectors should be greater than or equal to 2, and the number of attitude adjustments should also be greater than or equal to 2. Simulations indicate that the star sensor installation error is readily observable under the maneuvering condition; moreover, the attitude errors of the SINS are less than 7 arc-seconds. This analysis method and conclusion are useful in the ballistic trajectory design of near-Earth flight vehicles.
An LPV Adaptive Observer for Updating a Map Applied to an MAF Sensor in a Diesel Engine.
Liu, Zhiyuan; Wang, Changhui
2015-10-23
In this paper, a new method is developed for compensating mass air flow (MAF) sensor error due to installation and aging in a diesel engine, with an online-updated error map (or lookup table). Since the MAF sensor error is dependent on the engine operating point, the error model is represented as a two-dimensional (2D) map with two inputs, fuel mass injection quantity and engine speed. Meanwhile, the 2D map representing the MAF sensor error is described as a piecewise bilinear interpolation model, which can be written as a dot product between the regression vector and the parameter vector using a membership function. Combining the 2D map regression model with the diesel engine air path system, an LPV adaptive observer with low computational load is designed to estimate states and parameters jointly. The convergence of the proposed algorithm is proven under conditions of persistent excitation and given inequalities. The observer is validated against simulation data from the engine software enDYNA provided by Tesis. The results demonstrate that the operating-point-dependent error of the MAF sensor can be approximated acceptably by the 2D map from the proposed method.
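The dot-product form of the 2D map can be sketched with hat-function memberships as below; the grid ranges and regressor layout are illustrative assumptions, not values from the paper:

    import numpy as np

    def membership(u, grid):
        # 1-D hat-function weights: nonzero only at the two nodes bracketing u
        i = int(np.clip(np.searchsorted(grid, u) - 1, 0, len(grid) - 2))
        t = (u - grid[i]) / (grid[i + 1] - grid[i])
        w = np.zeros(len(grid))
        w[i], w[i + 1] = 1.0 - t, t
        return w

    def regressor(speed, fuel, speed_grid, fuel_grid):
        # 2-D bilinear membership vector phi, so that map value = phi @ theta
        return np.outer(membership(speed, speed_grid),
                        membership(fuel, fuel_grid)).ravel()

    speed_grid = np.linspace(800.0, 4000.0, 9)   # rpm (illustrative)
    fuel_grid = np.linspace(5.0, 60.0, 6)        # mg/stroke (illustrative)
    theta = np.zeros(speed_grid.size * fuel_grid.size)  # node values, updated online
    phi = regressor(1500.0, 20.0, speed_grid, fuel_grid)
    print(phi @ theta)                           # interpolated map value

Because phi is linear in the unknown node values theta, the map fits directly into the joint state-and-parameter estimation of the adaptive observer.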
Adaptive error correction codes for face identification
NASA Astrophysics Data System (ADS)
Hussein, Wafaa R.; Sellahewa, Harin; Jassim, Sabah A.
2012-06-01
Face recognition in uncontrolled environments is greatly affected by fuzziness of face feature vectors as a result of extreme variation in recording conditions (e.g. illumination, poses or expressions) in different sessions. Many techniques have been developed to deal with these variations, resulting in improved performance. This paper aims to model template fuzziness as errors and investigate the use of error detection/correction techniques for face recognition in uncontrolled environments. Error correction codes (ECCs) have recently been used for biometric key generation but not on biometric templates. We have investigated error patterns in binary face feature vectors extracted from different image windows of differing sizes and for different recording conditions. By estimating statistical parameters for the intra-class and inter-class distributions of Hamming distances in each window, we encode with appropriate ECCs. The proposed approach is tested for binarised wavelet templates using two face databases: Extended Yale-B and Yale. We demonstrate that using different combinations of BCH-based ECCs for different blocks and different recording conditions leads to different accuracy rates, and that using ECCs results in significantly improved recognition.
SU-E-T-422: Fast Analytical Beamlet Optimization for Volumetric Intensity-Modulated Arc Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chan, Kenny S K; Lee, Louis K Y; Xing, L
2015-06-15
Purpose: To implement a fast optimization algorithm on a CPU/GPU heterogeneous computing platform and to obtain an optimal fluence for a given target dose distribution from the pre-calculated beamlets in an analytical approach. Methods: The 2D target dose distribution was modeled as an n-dimensional vector and estimated by a linear combination of independent basis vectors. The basis set was composed of the pre-calculated beamlet dose distributions at every 6 degrees of gantry angle, and the cost function was set as the magnitude square of the vector difference between the target and the estimated dose distribution. The optimal weighting of the basis, which corresponds to the optimal fluence, was obtained analytically by the least squares method. Those basis vectors with a positive weighting were selected for entering into the next level of optimization. In total, 7 levels of optimization were implemented in the study. Ten head-and-neck and ten prostate carcinoma cases were selected for the study and mapped to a round water phantom with a diameter of 20 cm. The Matlab computation was performed in a heterogeneous programming environment with an Intel i7 CPU and an NVIDIA GeForce 840M GPU. Results: In all selected cases, the estimated dose distribution was in good agreement with the given target dose distribution, and their correlation coefficients were found to be in the range of 0.9992 to 0.9997. The root-mean-square error was monotonically decreasing and converged after 7 cycles of optimization. The computation took only about 10 seconds, and the optimal fluence maps at each gantry angle throughout an arc were quickly obtained. Conclusion: An analytical approach is derived for finding the optimal fluence for a given target dose distribution, and a fast optimization algorithm implemented on the CPU/GPU heterogeneous computing environment greatly reduces the optimization time.
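A stripped-down NumPy version of the analytical step (least-squares weighting followed by selection of the positive-weight basis vectors) might look like this; the data shapes and the seven-level loop follow the abstract, everything else is an assumption:

    import numpy as np

    def analytic_fluence(B, d_target, n_levels=7):
        # B: (n_voxels, n_beamlets) pre-calculated beamlet dose distributions
        # d_target: (n_voxels,) target dose vector
        active = np.arange(B.shape[1])
        w = np.zeros(0)
        for _ in range(n_levels):
            w, *_ = np.linalg.lstsq(B[:, active], d_target, rcond=None)
            keep = w > 0                     # only positive weightings advance
            if keep.all() or not keep.any():
                break
            active, w = active[keep], w[keep]
        fluence = np.zeros(B.shape[1])
        fluence[active] = w
        return fluence

    # toy check with random beamlets and a realizable target
    rng = np.random.default_rng(0)
    B = rng.random((500, 60))
    d = B @ np.clip(rng.standard_normal(60), 0, None)
    print(np.corrcoef(B @ analytic_fluence(B, d), d)[0, 1])   # ~1.0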
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turner, C. David; Kotulski, Joseph Daniel; Pasik, Michael Francis
This report investigates the feasibility of applying Adaptive Mesh Refinement (AMR) techniques to a vector finite element formulation for the wave equation in three dimensions. Possible error estimators are considered first. Next, approaches for refining tetrahedral elements are reviewed. AMR capabilities within the Nevada framework are then evaluated. We summarize our conclusions on the feasibility of AMR for time-domain vector finite elements and identify a path forward.
Cyclotron-based of plant gravisensing
NASA Astrophysics Data System (ADS)
Kordyum, E.; Kalinina, Ia.; Bogatina, N.; Kondrachuk, A.
Roots exhibit positive gravitropism: they grow in the direction of a gravitational vector, while shoots respond negatively and grow opposite to a gravitational vector. We first demonstrated the inversion of root gravitropism from positive to negative under gravistimulation in a weak combined magnetic field (WCMF) consisting of a permanent magnetic field (PMF) with a magnitude of the order of 50 μT and an alternating magnetic field (AMF) with a 6 μT magnitude and a frequency of 32 Hz. It was found that the effect of inversion has a resonance nature: in the frequency interval 1-45 Hz, inversion of root gravitropism occurs only at a frequency of 32 Hz. 2-3-day-old cress seedlings were gravistimulated in moist chambers placed in mu-metal shields, inside which the combined magnetic fields were created. The magnitude of the magnetic fields was measured by a flux-gate magnetometer. Experiments were performed in darkness at a temperature of 20 ± 1.0 °C. We measured the divergence angle of a growing root from its horizontal position. After 1 h of gravistimulation in the WCMF we observed negative gravitropism of cress roots, i.e., they grew in the direction opposite to a gravitational vector. The frequency of 32 Hz, for the magnitude of the PMF applied, formally corresponds to the cyclotron frequency of Ca²⁺ ions. This indicates possible participation of calcium ions in root gravitropism. There are many evidences of resonance effects of the WCMF on biological processes that involve Ca²⁺, but the nature of
HMI Measured Doppler Velocity Contamination from the SDO Orbit Velocity
NASA Astrophysics Data System (ADS)
Scherrer, Phil; HMI Team
2016-10-01
The Problem: The SDO satellite is in an inclined geosynchronous orbit, which allows uninterrupted views of the Sun nearly 98% of the time. This orbit has a velocity of about 3,500 m/s with the solar line-of-sight component varying with time of day and time of year. Due to remaining calibration errors in the wavelength filters, the orbit velocity leaks into the line-of-sight solar velocity and magnetic field measurements. Since the same model of the filter is used in the Milne-Eddington inversions used to generate the vector magnetic field data, the orbit velocity also contaminates the vector magnetic products. These errors contribute 12 h and 24 h variations to most HMI data products and are known as the 24-hour problem. Early in the mission we made a patch to the calibration that corrected the disk mean velocity. The resulting LOS velocity has been used for helioseismology with no apparent problems. The velocity signal has about a 1% scale error that varies with time of day and with velocity, i.e. it is non-linear for large velocities. This causes leaks into the LOS field (which is simply the difference between velocity measured in LCP and RCP rescaled for the Zeeman splitting). This poster reviews the measurement process, shows examples of the problem, and describes recent work toward resolving the issues. Since the errors are in the filter characterization, it makes most sense to work first on the LOS data products, since they, unlike the vector products, are directly and simply related to the filter profile without assumptions on the solar atmosphere, filling factors, etc. Therefore this poster is strictly limited to better understanding the filter profiles as they vary across the field and with time of day and time of year, resulting in velocity errors of up to a percent and LOS field estimates with errors up to a few percent (of the standard LOS magnetograph method based on measuring the differences in wavelength of the line centroids in LCP and RCP light). We expect that when better filter profiles are available it will be possible to generate improved vector field data products as well.
Using Redundancy To Reduce Errors in Magnetometer Readings
NASA Technical Reports Server (NTRS)
Kulikov, Igor; Zak, Michail
2004-01-01
A method of reducing errors in noisy magnetic-field measurements involves exploitation of redundancy in the readings of multiple magnetometers in a cluster. By "redundancy" is meant that the readings are not entirely independent of each other because the relationships among the magnetic-field components that one seeks to measure are governed by the fundamental laws of electromagnetism as expressed by Maxwell's equations. Assuming that the magnetometers are located outside a magnetic material, that the magnetic field is steady or quasi-steady, and that there are no electric currents flowing in or near the magnetometers, the applicable Maxwell's equations are ∇ × B = 0 and ∇ · B = 0, where B is the magnetic-flux-density vector. By suitable algebraic manipulation, these equations can be shown to impose three independent constraints on the values of the components of B at the various magnetometer positions. In general, the problem of reducing the errors in noisy measurements is one of finding a set of corrected values that minimize an error function. In the present method, the error function is formulated as (1) the sum of squares of the differences between the corrected and noisy measurement values plus (2) a sum of three terms, each comprising the product of a Lagrange multiplier and one of the three constraints. The partial derivatives of the error function with respect to the corrected magnetic-field component values and the Lagrange multipliers are set equal to zero, leading to a set of equations that can be put into matrix-vector form. The matrix can be inverted to solve for a vector that comprises the corrected magnetic-field component values and the Lagrange multipliers.
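The constrained minimization described here reduces to a single linear solve. A sketch follows, assuming the three constraints have already been assembled into a matrix C from the magnetometer geometry (that assembly is the paper's detail and is omitted):

    import numpy as np

    def constrained_correction(b_noisy, C):
        # minimize ||b - b_noisy||^2 subject to C b = 0 via the
        # Lagrange-multiplier (KKT) linear system
        n, k = len(b_noisy), C.shape[0]
        K = np.block([[np.eye(n), C.T],
                      [C, np.zeros((k, k))]])
        rhs = np.concatenate([b_noisy, np.zeros(k)])
        sol = np.linalg.solve(K, rhs)
        return sol[:n]   # corrected components; sol[n:] are the multipliers

    # toy usage with an arbitrary 3 x n constraint matrix
    rng = np.random.default_rng(0)
    n = 12                        # e.g. components from a 4-magnetometer cluster
    C = rng.standard_normal((3, n))
    b = constrained_correction(rng.standard_normal(n), C)
    print(np.abs(C @ b).max())    # ~0: constraints satisfied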
An Enhanced MEMS Error Modeling Approach Based on Nu-Support Vector Regression
Bhatt, Deepak; Aggarwal, Priyanka; Bhattacharya, Prabir; Devabhaktuni, Vijay
2012-01-01
Micro Electro Mechanical System (MEMS)-based inertial sensors have made possible the development of a civilian land vehicle navigation system by offering a low-cost solution. However, the accurate modeling of the MEMS sensor errors is one of the most challenging tasks in the design of low-cost navigation systems. These sensors exhibit significant errors such as biases, drift, and noise, which are negligible for higher-grade units. Different conventional techniques utilizing the Gauss-Markov model and the neural network method have previously been utilized to model the errors. However, the Gauss-Markov model works unsatisfactorily in the case of MEMS units due to the presence of high inherent sensor errors. On the other hand, modeling the random drift utilizing a Neural Network (NN) is time consuming, thereby affecting its real-time implementation. We overcome these existing drawbacks by developing an enhanced Support Vector Machine (SVM) based error model. Unlike NNs, SVMs do not suffer from local minima or over-fitting problems and deliver a reliable global solution. Experimental results proved that the proposed SVM approach reduced the noise standard deviation by 10–35% for gyroscopes and 61–76% for accelerometers. Further, positional error drifts under static conditions improved by 41% and 80% in comparison to the NN and GM approaches. PMID:23012552
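As a rough illustration of the modeling idea (not the authors' exact pipeline), the following scikit-learn sketch fits a Nu-SVR to a synthetic drift signal and subtracts the learned error model:

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import NuSVR

    # synthetic stand-in: a slowly varying gyro error observed with noise
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 600.0, 2000)                   # seconds
    true_err = 0.02 * np.sin(2 * np.pi * t / 300.0) + 0.01 * t / 600.0
    y = true_err + 0.005 * rng.standard_normal(t.size)  # observed sensor error
    X = t.reshape(-1, 1)

    model = make_pipeline(StandardScaler(), NuSVR(nu=0.5, C=1.0, kernel="rbf"))
    model.fit(X, y)
    corrected = y - model.predict(X)        # residual after removing the model
    print("std before:", y.std(), " after:", corrected.std())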
Hollow-core screw dislocations in 6H-SiC single crystals: A test of Frank's theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Si, W.; Dudley, M.; Glass, R.
1997-03-01
Hollow-core screw dislocations, also known as 'micropipes', along the [0001] axis in 6H-SiC single crystals have been studied by synchrotron white beam x-ray topography (SWBXT), scanning electron microscopy (SEM), and Nomarski optical microscopy (NOM). Using SWBXT, the magnitude of the Burgers vector of screw dislocations has been determined by measuring the following four parameters: (1) the diameter of dislocation images in back-reflection topographs; (2) the width of bimodal dislocation images in transmission topographs; (3) the magnitude of the tilt of lattice planes on both sides of the dislocation core in projection topographs; and (4) the magnitude of the tilt of lattice planes in section topographs. The four methods show good agreement. The Burgers vector magnitude of screw dislocations, b, and the diameter of associated micropipes, D, were fitted to Frank's prediction for hollow-core screw dislocations: D = μb²/(4π²γ), where μ is the shear modulus and γ is the specific surface energy. 15 refs., 17 figs.
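Frank's relation is straightforward to evaluate numerically. The sketch below uses illustrative literature-style values for 6H-SiC rather than measurements from this paper:

    import math

    def micropipe_diameter(b, mu, gamma):
        # Frank's relation: D = mu * b^2 / (4 * pi^2 * gamma)
        return mu * b ** 2 / (4.0 * math.pi ** 2 * gamma)

    # illustrative numbers only: c ~ 1.512 nm for 6H-SiC, a 5c Burgers vector,
    # shear modulus ~ 168 GPa, specific surface energy ~ 3 J/m^2
    b = 5 * 1.512e-9
    print(micropipe_diameter(b, mu=168e9, gamma=3.0), "m")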
A computational intelligence approach to the Mars Precision Landing problem
NASA Astrophysics Data System (ADS)
Birge, Brian Kent, III
Various proposed Mars missions, such as the Mars Sample Return Mission (MRSR) and the Mars Smart Lander (MSL), require precise re-entry terminal position and velocity states. This is to achieve mission objectives including rendezvous with a previous landed mission, or reaching a particular geographic landmark. The current state-of-the-art footprint is on the order of kilometers. For this research a Mars Precision Landing is achieved with a landed footprint of no more than 100 meters, for a set of initial entry conditions representing worst-guess dispersions. Obstacles to reducing the landed footprint include trajectory dispersions due to initial atmospheric entry conditions (entry angle, parachute deployment height, etc.), environment (wind, atmospheric density, etc.), parachute deployment dynamics, unavoidable injection error (propagated error from launch on), etc. Weather and atmospheric models have been developed. Three descent scenarios have been examined. First, terminal re-entry is achieved via a ballistic parachute with concurrent thrusting events while on the parachute, followed by a gravity turn. Second, terminal re-entry is achieved via a ballistic parachute followed by a gravity turn to hover and then thrust vectoring to the desired location. Third, a guided parafoil approach followed by vectored thrusting to reach terminal velocity is examined. The guided parafoil is determined to be the best architecture. The purpose of this study is to examine the feasibility of using a computational intelligence strategy to facilitate precision planetary re-entry, specifically to take an approach that is somewhat more intuitive and less rigid, and see where it leads. The test problems used for all research are variations on proposed Mars landing mission scenarios developed by NASA. A relatively recent method of evolutionary computation is Particle Swarm Optimization (PSO), which can be considered to be in the same general class as Genetic Algorithms. An improvement over the regular PSO algorithm, allowing tracking of nonstationary error functions, is detailed. Continued refinement of PSO in the larger research community comes from attempts to understand human-human social interaction as well as analysis of the emergent behavior. Using PSO and the parafoil scenario, optimized reference trajectories are created for an initial condition set of 76 states, representing the convex hull of 2001 states from an early Monte Carlo analysis. The controls are a set series of bank angles followed by a set series of 3DOF thrust vectoring. The reference trajectories are used to train an Artificial Neural Network Reference Trajectory Generator (ANNTraG), with the (marginal) ability to generalize a trajectory from initial conditions it has never been presented with. The controls here allow continuous change in bank angle as well as thrust vector. The optimized reference trajectories represent the best achievable trajectory given the initial condition. Steps toward a closed loop neural controller with online learning updates are examined. The inner loop of the simulation employs the Program to Optimize Simulated Trajectories (POST) as the basic model, containing baseline dynamics and state generation. This is controlled from a MATLAB shell that directs the optimization, learning, and control strategy. Using mainly bank angle guidance coupled with CI strategies, the set of achievable reference trajectories is shown to be 88% under 10 meters, a significant improvement in the state of the art.
Further, the automatic real-time generation of realistic reference trajectories in the presence of unknown initial conditions is shown to have promise. The closed loop CI guidance strategy is outlined. An unexpected advance came from the effort to optimize the optimization, where the PSO algorithm was improved with the capability for tracking a changing error environment.
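For reference, a canonical global-best PSO (not the author's nonstationary-tracking variant) fits in a few lines:

    import numpy as np

    def pso(f, bounds, n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5,
            seed=0):
        # minimize f over the box `bounds`, an array of shape (d, 2)
        rng = np.random.default_rng(seed)
        lo, hi = bounds[:, 0], bounds[:, 1]
        x = rng.uniform(lo, hi, (n_particles, len(lo)))
        v = np.zeros_like(x)
        pbest = x.copy()
        pval = np.apply_along_axis(f, 1, x)
        g = pbest[pval.argmin()].copy()
        for _ in range(n_iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            val = np.apply_along_axis(f, 1, x)
            better = val < pval
            pbest[better], pval[better] = x[better], val[better]
            g = pbest[pval.argmin()].copy()
        return g, pval.min()

    # e.g. minimize a shifted sphere function in 4 dimensions
    best, best_val = pso(lambda z: np.sum((z - 1.3) ** 2),
                         np.array([[-5.0, 5.0]] * 4))
    print(best, best_val)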
Partial pressure analysis in space testing
NASA Technical Reports Server (NTRS)
Tilford, Charles R.
1994-01-01
For vacuum-system or test-article analysis it is often desirable to know the species and partial pressures of the vacuum gases. Residual gas or Partial Pressure Analyzers (PPA's) are commonly used for this purpose. These are mass spectrometer-type instruments, most commonly employing quadrupole filters. These instruments can be extremely useful, but they should be used with caution. Depending on the instrument design, calibration procedures, and conditions of use, measurements made with these instruments can be accurate to within a few percent, or in error by two or more orders of magnitude. Significant sources of error can include relative gas sensitivities that differ from handbook values by an order of magnitude, changes in sensitivity with pressure by as much as two orders of magnitude, changes in sensitivity with time after exposure to chemically active gases, and the dependence of the sensitivity for one gas on the pressures of other gases. However, for most instruments, these errors can be greatly reduced with proper operating procedures and conditions of use. In this paper, data are presented illustrating performance characteristics for different instruments and gases, operating parameters are recommended to minimize some errors, and calibrations procedures are described that can detect and/or correct other errors.
Discovering and understanding the vector field using simulation in android app
NASA Astrophysics Data System (ADS)
Budi, A.; Muliyati, D.
2018-05-01
An understanding of vector field concepts is a fundamental part of an electrodynamics course. In this paper, we use a simple simulation that can show qualitative visual results as the vector field is varied. An Android application packages the simulation, with consideration for efficiency of use during the lecture. In addition, the simulation covers the divergence and curl concepts under the same conditions, so that students gain a complete understanding and can distinguish concepts that have been described only mathematically. The simulation is designed to show the relationship between the field magnitude and its potential. This application can show vector field simulations in various conditions, helping to improve students' understanding of vector field concepts and their relation to the presence of particles around the field vector.
Benchmarking observational uncertainties for hydrology (Invited)
NASA Astrophysics Data System (ADS)
McMillan, H. K.; Krueger, T.; Freer, J. E.; Westerberg, I.
2013-12-01
There is a pressing need for authoritative and concise information on the expected error distributions and magnitudes in hydrological data, to understand its information content. Many studies have discussed how to incorporate uncertainty information into model calibration and implementation, and shown how model results can be biased if uncertainty is not appropriately characterised. However, it is not always possible (for example due to financial or time constraints) to make detailed studies of uncertainty for every research study. Instead, we propose that the hydrological community could benefit greatly from sharing information on likely uncertainty characteristics and the main factors that control the resulting magnitude. In this presentation, we review the current knowledge of uncertainty for a number of key hydrological variables: rainfall, flow and water quality (suspended solids, nitrogen, phosphorus). We collated information on the specifics of the data measurement (data type, temporal and spatial resolution), error characteristics measured (e.g. standard error, confidence bounds) and error magnitude. Our results were primarily split by data type. Rainfall uncertainty was controlled most strongly by spatial scale, flow uncertainty was controlled by flow state (low, high) and gauging method. Water quality presented a more complex picture with many component errors. For all variables, it was easy to find examples where relative error magnitude exceeded 40%. We discuss some of the recent developments in hydrology which increase the need for guidance on typical error magnitudes, in particular when doing comparative/regionalisation and multi-objective analysis. Increased sharing of data, comparisons between multiple catchments, and storage in national/international databases can mean that data-users are far removed from data collection, but require good uncertainty information to reduce bias in comparisons or catchment regionalisation studies. Recently it has become more common for hydrologists to use multiple data types and sources within a single study. This may be driven by complex water management questions which integrate water quantity, quality and ecology; or by recognition of the value of auxiliary data to understand hydrological processes. We discuss briefly the impact of data uncertainty on the increasingly popular use of diagnostic signatures for hydrological process understanding and model development.
Accurate Initial State Estimation in a Monocular Visual–Inertial SLAM System
Chen, Jing; Zhou, Zixiang; Leng, Zhen; Fan, Lei
2018-01-01
The fusion of monocular visual and inertial cues has become popular in robotics, unmanned vehicles and augmented reality fields. Recent results have shown that optimization-based fusion strategies outperform filtering strategies. Robust state estimation is the core capability for optimization-based visual–inertial Simultaneous Localization and Mapping (SLAM) systems. As a result of the nonlinearity of visual–inertial systems, the performance heavily relies on the accuracy of initial values (visual scale, gravity, velocity and Inertial Measurement Unit (IMU) biases). Therefore, this paper aims to propose a more accurate initial state estimation method. On the basis of the known gravity magnitude, we propose an approach to refine the estimated gravity vector by optimizing the two-dimensional (2D) error state on its tangent space, then estimate the accelerometer bias separately, which is difficult to be distinguished under small rotation. Additionally, we propose an automatic termination criterion to determine when the initialization is successful. Once the initial state estimation converges, the initial estimated values are used to launch the nonlinear tightly coupled visual–inertial SLAM system. We have tested our approaches with the public EuRoC dataset. Experimental results show that the proposed methods can achieve good initial state estimation, the gravity refinement approach is able to efficiently speed up the convergence process of the estimated gravity vector, and the termination criterion performs well. PMID:29419751
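The gravity-refinement idea, optimizing only a two-dimensional perturbation in the tangent plane so the known magnitude is preserved, can be sketched as a small Gauss-Newton loop; here residual_fn is a placeholder for the visual-inertial residual, not the paper's formulation:

    import numpy as np

    def tangent_basis(g_unit):
        # two unit vectors spanning the plane orthogonal to the gravity direction
        helper = np.array([1.0, 0.0, 0.0])
        if abs(g_unit @ helper) > 0.9:
            helper = np.array([0.0, 1.0, 0.0])
        b1 = np.cross(g_unit, helper)
        b1 /= np.linalg.norm(b1)
        return b1, np.cross(g_unit, b1)

    def refine_gravity(g_init, residual_fn, g_mag=9.81, iters=5, eps=1e-6):
        # Gauss-Newton over the 2-D error state (w1, w2); the known magnitude
        # g_mag is enforced by re-normalizing after every update
        g = g_mag * np.asarray(g_init, float) / np.linalg.norm(g_init)
        for _ in range(iters):
            b1, b2 = tangent_basis(g / g_mag)
            def res(w):
                g_w = g + w[0] * b1 + w[1] * b2
                return residual_fn(g_mag * g_w / np.linalg.norm(g_w))
            r0 = res(np.zeros(2))
            J = np.column_stack([(res(eps * e) - r0) / eps for e in np.eye(2)])
            w, *_ = np.linalg.lstsq(J, -r0, rcond=None)
            g_new = g + w[0] * b1 + w[1] * b2
            g = g_mag * g_new / np.linalg.norm(g_new)
        return g

    # toy residual: distance to a "true" gravity expressed in the world frame
    d = np.array([0.1, -0.05, -0.99])
    g_true = 9.81 * d / np.linalg.norm(d)
    print(refine_gravity([0.0, 0.0, -1.0], lambda g: g - g_true))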
Research on bearing fault diagnosis of large machinery based on mathematical morphology
NASA Astrophysics Data System (ADS)
Wang, Yu
2018-04-01
To study the automatic diagnosis of large-machinery faults based on the support vector machine, the four common faults of large machinery are combined and the support vector machine is used to classify and identify them. The extracted feature vectors are fed in, and the feature vectors are trained and identified by a multi-classification method. The optimal parameters of the support vector machine are found by trial and error and by cross-validation. The support vector machine is then compared with a BP neural network. The results show that the support vector machine trains quickly and classifies with high accuracy, making it more suitable for fault-diagnosis research on large machinery. It can therefore be concluded that support vector machines (SVMs) train fast and perform well.
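A typical cross-validated parameter search of the kind the abstract describes might be set up as follows; the synthetic four-class data is a stand-in for real vibration features:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV, train_test_split
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # synthetic stand-in for vibration feature vectors with 4 fault classes
    X, y = make_classification(n_samples=400, n_features=12, n_informative=6,
                               n_classes=4, random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

    pipe = Pipeline([("scale", StandardScaler()), ("svm", SVC(kernel="rbf"))])
    search = GridSearchCV(pipe, {"svm__C": [0.1, 1, 10, 100],
                                 "svm__gamma": [0.001, 0.01, 0.1, 1]}, cv=5)
    search.fit(Xtr, ytr)
    print(search.best_params_, "test accuracy:", search.score(Xte, yte))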
Bias-field equalizer for bubble memories
NASA Technical Reports Server (NTRS)
Keefe, G. E.
1977-01-01
Magnetoresistive Permalloy sensor monitors bias field required to maintain bubble memory. Sensor provides error signal that, in turn, corrects magnitude of bias field. Error signal from sensor can be used to control magnitude of bias field either through an auxiliary set of bias-field coils around the permanent magnet, or through the current in small coils used to remagnetize the permanent magnet by an infrequent, short, high-current pulse or a short sequence of pulses.
NASA Astrophysics Data System (ADS)
Lu, Jiazhen; Lei, Chaohua; Yang, Yanqiang; Liu, Ming
2016-12-01
An integrated inertial/celestial navigation system (INS/CNS) has wide applicability in lunar rovers as it provides accurate and autonomous navigational information. Initialization is particularly vital for an INS. This paper proposes a two-position initialization method based on a standard Kalman filter. The difference between the computed star vector and the measured star vector is used as the measurement. With the aid of a star sensor and the two positions, the attitudinal and positional errors can be greatly reduced, and the biases of the three gyros and accelerometers can also be estimated. The semi-physical simulation results show that the attitudinal and positional errors converge within 0.07″ and 0.1 m, respectively, when the given initial positional error is 1 km and the attitudinal error is 10°. These good results show that the proposed method can accomplish alignment, positioning and calibration functions simultaneously. Thus the proposed two-position initialization method has potential for application in lunar rover navigation.
Control method and system for hydraulic machines employing a dynamic joint motion model
Danko, George [Reno, NV]
2011-11-22
A control method and system for controlling a hydraulically actuated mechanical arm to perform a task, the mechanical arm optionally being a hydraulically actuated excavator arm. The method can include determining a dynamic model of the motion of the hydraulic arm for each hydraulic arm link by relating the input signal vector for each respective link to the output signal vector for the same link. Also the method can include determining an error signal for each link as the weighted sum of the differences between a measured position and a reference position and between the time derivatives of the measured position and the time derivatives of the reference position for each respective link. The weights used in the determination of the error signal can be determined from the constant coefficients of the dynamic model. The error signal can be applied in a closed negative feedback control loop to diminish or eliminate the error signal for each respective link.
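The per-link error signal described above is just a weighted sum of tracking differences; a minimal sketch with illustrative weights follows:

    def link_error(x, x_ref, dx, dx_ref, w_pos, w_vel):
        # weighted sum of position and velocity tracking differences
        return w_pos * (x - x_ref) + w_vel * (dx - dx_ref)

    # one illustrative step of the negative-feedback correction
    e = link_error(x=0.52, x_ref=0.50, dx=0.10, dx_ref=0.12,
                   w_pos=4.0, w_vel=1.5)
    u = -e    # command correction acting to diminish the error signal
    print(e, u)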
Quantitative maps of geomagnetic perturbation vectors during substorm onset and recovery
Pothier, N M; Weimer, D R; Moore, W B
2015-01-01
We have produced the first series of spherical harmonic, numerical maps of the time-dependent surface perturbations in the Earth's magnetic field following the onset of substorms. Data from 124 ground magnetometer stations in the Northern Hemisphere at geomagnetic latitudes above 33° were used. Ground station data averaged over 5 min intervals covering 8 years (1998–2005) were used to construct pseudo auroral upper, auroral lower, and auroral electrojet (AU*, AL*, and AE*) indices. These indices were used to generate a list of substorms that extended from 1998 to 2005, through a combination of automated processing and visual checks. Events were sorted by interplanetary magnetic field (IMF) orientation (at the Advanced Composition Explorer (ACE) satellite), dipole tilt angle, and substorm magnitude. Within each category, the events were aligned on substorm onset. A spherical cap harmonic analysis was used to obtain a least-error fit of the substorm disturbance patterns at 5 min intervals up to 90 min after onset. The fits obtained at onset time were subtracted from all subsequent fits, for each group of substorm events. Maps of the three vector components of the averaged magnetic perturbations were constructed to show the effects of substorm currents. These maps are produced for several specific ranges of values for the peak |AL*| index, IMF orientation, and dipole tilt angle. We demonstrate an influence of the dipole tilt angle on the response to substorms. Our results indicate that there are downward currents poleward and upward currents just equatorward of the peak in the substorms' westward electrojet. Key Points: quantitative maps of ground geomagnetic perturbations due to substorms are shown; the three vector components are mapped as a function of time during onset and recovery; results are compared and contrasted for different tilt angles and signs of the IMF Y-component. PMID:26167445
Adib-Moghaddam, Soheil; Soleyman-Jahi, Saeed; Salmanian, Bahram; Omidvari, Amir-Houshang; Adili-Aghdam, Fatemeh; Noorizadeh, Farsad; Eslani, Medi
2016-11-01
To evaluate the long-term quantitative and qualitative optical outcomes of 1-step transepithelial photorefractive keratectomy (PRK) to correct myopia and astigmatism. Bina Eye Hospital, Tehran, Iran. Prospective interventional case series. Eyes with myopia with or without astigmatism were evaluated. One-step transepithelial PRK was performed with an aberration-free aspheric optimized profile and the Amaris 500 laser. Eighteen-month follow-up results for refraction, visual acuities, vector analysis, higher-order aberrations, contrast sensitivity, postoperative pain, and haze grade were assessed. The study enrolled 146 eyes (74 patients). At the end of follow-up, 93.84% of eyes had an uncorrected distance visual acuity of 20/20 or better and 97.94% of eyes were within ±0.5 diopter of the targeted spherical refraction. On vector analysis, the mean correction index value was close to 1 and the mean index of success and magnitude of error values were close to 0. The achieved correction vector was on an axis counterclockwise to the axis of the intended correction. Photopic and mesopic contrast sensitivities and ocular and corneal spherical, cylindrical, and corneal coma aberrations significantly improved (all P < .001). A slight amount of trefoil aberration was induced (P < .001, ocular aberration; P < .01, corneal aberration). No eye lost more than 1 line of corrected distance visual acuity. No eye had a haze grade of 2+ degrees or higher throughout the follow-up. Eighteen-month results indicate the efficacy and safety of transepithelial PRK to correct myopia and astigmatism. It improved refraction and quality of vision. None of the authors has a financial or proprietary interest in any material or method mentioned. Copyright © 2016 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Risk prediction and aversion by anterior cingulate cortex.
Brown, Joshua W; Braver, Todd S
2007-12-01
The recently proposed error-likelihood hypothesis suggests that anterior cingulate cortex (ACC) and surrounding areas will become active in proportion to the perceived likelihood of an error. The hypothesis was originally derived from a computational model prediction. The same computational model now makes a further prediction that ACC will be sensitive not only to predicted error likelihood, but also to the predicted magnitude of the consequences, should an error occur. The product of error likelihood and predicted error consequence magnitude collectively defines the general "expected risk" of a given behavior in a manner analogous but orthogonal to subjective expected utility theory. New fMRI results from an incentive change signal task now replicate the error-likelihood effect, validate the further predictions of the computational model, and suggest why some segments of the population may fail to show an error-likelihood effect. In particular, error-likelihood effects and expected risk effects in general indicate greater sensitivity to earlier predictors of errors and are seen in risk-averse but not risk-tolerant individuals. Taken together, the results are consistent with an expected risk model of ACC and suggest that ACC may generally contribute to cognitive control by recruiting brain activity to avoid risk.
Combined group ECC protection and subgroup parity protection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gara, Alan; Cheng, Dong; Heidelberger, Philip
A method and system are disclosed for providing combined error code protection and subgroup parity protection for a given group of n bits. The method comprises the steps of identifying a number, m, of redundant bits for said error protection; and constructing a matrix P, wherein multiplying said given group of n bits with P produces m redundant error correction code (ECC) protection bits, and two columns of P provide parity protection for subgroups of said given group of n bits. In the preferred embodiment of the invention, the matrix P is constructed by generating permutations of m bit wide vectors with three or more, but an odd number of, elements with value one and the other elements with value zero; and assigning said vectors to rows of the matrix P.
Towards 5G: A Photonic Based Millimeter Wave Signal Generation for Applying in 5G Access Fronthaul.
Alavi, S E; Soltanian, M R K; Amiri, I S; Khalily, M; Supa'at, A S M; Ahmad, H
2016-01-27
5G communications require multi-Gb/s data transmission in their small cells. For this purpose, millimeter-wave (mm-wave) RF signals are the best solution for high-speed data transmission. Generation of these high-frequency RF signals is challenging in the electrical domain; therefore, photonic generation of these signals is more widely studied. In this work, a photonic-based, simple and robust method for generating millimeter waves applicable in 5G access fronthaul is presented. Besides generation of the mm-wave signal in the 60 GHz frequency band, a radio-over-fiber (RoF) system for transmission of orthogonal frequency division multiplexing (OFDM) with 5 GHz bandwidth is presented. For wireless transmission in the 5G application, the required antenna is designed and developed. The total system performance in one small cell was studied and the error vector magnitude (EVM) of the system was evaluated.
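Since EVM is the figure of merit throughout, a small reference implementation may be useful; this sketch uses the common RMS definition normalized by average reference power (normalization conventions vary between standards):

    import numpy as np

    def evm_percent(received, reference):
        # RMS error vector magnitude, relative to RMS reference power
        err = received - reference
        return 100.0 * np.sqrt(np.mean(np.abs(err) ** 2) /
                               np.mean(np.abs(reference) ** 2))

    # e.g. unit-power 16-QAM symbols with additive complex noise
    rng = np.random.default_rng(0)
    levels = np.array([-3.0, -1.0, 1.0, 3.0])
    syms = (rng.choice(levels, 10000)
            + 1j * rng.choice(levels, 10000)) / np.sqrt(10.0)
    noise = 0.02 * (rng.standard_normal(10000)
                    + 1j * rng.standard_normal(10000))
    print(evm_percent(syms + noise, syms), "%")   # a few percent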
Towards 5G: A Photonic Based Millimeter Wave Signal Generation for Applying in 5G Access Fronthaul
Alavi, S. E.; Soltanian, M. R. K.; Amiri, I. S.; Khalily, M.; Supa’at, A. S. M.; Ahmad, H.
2016-01-01
5G communications require multi-Gb/s data transmission in their small cells. For this purpose, millimeter-wave (mm-wave) RF signals are the best solution for high-speed data transmission. Generation of these high-frequency RF signals is challenging in the electrical domain; therefore, photonic generation of these signals is more widely studied. In this work, a photonic-based, simple and robust method for generating millimeter waves applicable in 5G access fronthaul is presented. Besides generation of the mm-wave signal in the 60 GHz frequency band, a radio-over-fiber (RoF) system for transmission of orthogonal frequency division multiplexing (OFDM) with 5 GHz bandwidth is presented. For wireless transmission in the 5G application, the required antenna is designed and developed. The total system performance in one small cell was studied and the error vector magnitude (EVM) of the system was evaluated. PMID:26814621
Towards 5G: A Photonic Based Millimeter Wave Signal Generation for Applying in 5G Access Fronthaul
NASA Astrophysics Data System (ADS)
Alavi, S. E.; Soltanian, M. R. K.; Amiri, I. S.; Khalily, M.; Supa'At, A. S. M.; Ahmad, H.
2016-01-01
5G communications require multi-Gb/s data transmission in their small cells. For this purpose, millimeter-wave (mm-wave) RF signals are the best solution for high-speed data transmission. Generation of these high-frequency RF signals is challenging in the electrical domain; therefore, photonic generation of these signals is more widely studied. In this work, a photonic-based, simple and robust method for generating millimeter waves applicable in 5G access fronthaul is presented. Besides generation of the mm-wave signal in the 60 GHz frequency band, a radio-over-fiber (RoF) system for transmission of orthogonal frequency division multiplexing (OFDM) with 5 GHz bandwidth is presented. For wireless transmission in the 5G application, the required antenna is designed and developed. The total system performance in one small cell was studied and the error vector magnitude (EVM) of the system was evaluated.
NASA Astrophysics Data System (ADS)
Zhu, Ran; Hui, Ming; Shen, Dongya; Zhang, Xiupu
2017-02-01
In this paper, a dual-wavelength linearization (DWL) technique is studied to suppress odd- and even-order nonlinearities simultaneously in a Mach-Zehnder modulator (MZM) modulated radio-over-fiber (RoF) transmission system. A theoretical model is given to analyze the DWL employed for the MZM. In a single-tone test, simultaneous suppression of the second-order harmonic distortion (HD2) and third-order harmonic distortion (HD3) is experimentally verified at different bias voltages of the MZM. The measured spurious-free dynamic ranges (SFDRs) with respect to HD2 and HD3 are improved simultaneously compared to using a single laser. The output P1dB is also improved by the DWL technique. Moreover, a WiFi signal is transmitted in the RoF system to test the linearization for a broadband signal. The result shows that more than 1 dB improvement of the error vector magnitude (EVM) is obtained with the DWL technique.
Bohata, J; Zvanovec, S; Pesek, P; Korinek, T; Mansour Abadi, M; Ghassemlooy, Z
2016-03-10
This paper describes the experimental verification of long-term evolution radio-over-fiber (RoF) and radio-over-free-space-optics (RoFSO) systems using dual-polarization signals for cloud radio access network applications, determining their specific utilization limits. A number of free-space optics configurations are proposed and investigated under different atmospheric turbulence regimes in order to recommend the best setup configuration. We show that the performance of the proposed link, based on the combination of RoF and RoFSO for 64 QAM at 2.6 GHz, is more affected by the turbulence, based on the measured error vector magnitude difference of 5.5%. It is further demonstrated that the proposed systems can offer higher noise immunity under particular scenarios, with a signal-to-noise ratio reliability limit of 5 dB in the radio frequency domain for RoF and 19.3 dB in the optical domain for a combination of RoF and RoFSO links.
24-26 GHz radio-over-fiber and free-space optics for fifth-generation systems.
Bohata, Jan; Komanec, Matěj; Spáčil, Jan; Ghassemlooy, Zabih; Zvánovec, Stanislav; Slavík, Radan
2018-03-01
This Letter outlines radio-over-fiber combined with radio-over-free-space optics (RoFSO) and radio frequency free-space transmission, which is of particular relevance for fifth-generation networks. Here, the frequency band of 24-26 GHz is adopted to demonstrate a low-cost, compact, and highly energy-efficient solution based on the direct intensity modulation and direct detection scheme. For our proof-of-concept demonstration, we use 64 quadrature amplitude modulation with a 100 MHz bandwidth. We assess the link performance by exposing the RoFSO section to atmospheric turbulence conditions. Further, we show that the measured minimum error vector magnitude (EVM) is 4.7% and also verify that the proposed system with a free-space-optics link span of 100 m under strong turbulence can deliver an acceptable EVM of <9% with signal-to-noise ratio levels of 22 dB and 10 dB with and without turbulence, respectively.
Brown, James; Carrington, Tucker
2016-10-14
We demonstrate that it is possible to use a variational method to compute 50 vibrational levels of ethylene oxide (a seven-atom molecule) with convergence errors less than 0.01 cm^-1. This is done by beginning with a small basis and expanding it to include product basis functions that are deemed to be important. For ethylene oxide a basis with fewer than 3 × 10^6 functions is large enough. Because the resulting basis has no exploitable structure, we use a mapping to evaluate the matrix-vector products required to use an iterative eigensolver. The expanded basis is compared to bases obtained from a pre-determined pruning condition. Similar calculations are presented for molecules with 3, 4, 5, and 6 atoms. For the 6-atom molecule, CH3CH, the required expanded basis has about 106,000 functions and is about an order of magnitude smaller than bases made with a pre-determined pruning condition.
Feng, Shaoqi; Qin, Chuan; Shang, Kuanping; Pathak, Shibnath; Lai, Weicheng; Guan, Binbin; Clements, Matthew; Su, Tiehui; Liu, Guangyao; Lu, Hongbo; Scott, Ryan P; Ben Yoo, S J
2017-04-17
This paper demonstrates rapidly reconfigurable, high-fidelity optical arbitrary waveform generation (OAWG) in a heterogeneous photonic integrated circuit (PIC). The heterogeneous PIC combines the advantages of high-speed indium phosphide (InP) modulators and low-loss, high-contrast silicon nitride (Si3N4) arrayed waveguide gratings (AWGs), so that high-fidelity optical waveform synthesis with rapid waveform updates is possible. The generated optical waveforms spanned a 160 GHz spectral bandwidth starting from an optical frequency comb consisting of eight comb lines separated by 20 GHz channel spacing. The error vector magnitude (EVM) values of the generated waveforms were approximately 16.4%. The OAWG module can rapidly and arbitrarily reconfigure waveforms upon every pulse arriving at a 2 ns repetition time. The result of this work indicates the feasibility of truly dynamic optical arbitrary waveform generation, where the reconfiguration rate or the modulator bandwidth must exceed the channel spacing of the AWG and the optical frequency comb.
Acoustic intensity calculations for axisymmetrically modeled fluid regions
NASA Technical Reports Server (NTRS)
Hambric, Stephen A.; Everstine, Gordon C.
1992-01-01
An algorithm for calculating acoustic intensities from a time harmonic pressure field in an axisymmetric fluid region is presented. Acoustic pressures are computed in a mesh of NASTRAN triangular finite elements of revolution (TRIAAX) using an analogy between the scalar wave equation and elasticity equations. Acoustic intensities are then calculated from pressures and pressure derivatives taken over the mesh of TRIAAX elements. Intensities are displayed as vectors indicating the directions and magnitudes of energy flow at all mesh points in the acoustic field. A prolate spheroidal shell is modeled with axisymmetric shell elements (CONEAX) and submerged in a fluid region of TRIAAX elements. The model is analyzed to illustrate the acoustic intensity method and the usefulness of energy flow paths in the understanding of the response of fluid-structure interaction problems. The structural-acoustic analogy used is summarized for completeness. This study uncovered a NASTRAN limitation involving numerical precision issues in the CONEAX stiffness calculation, which causes large errors in the system matrices for nearly cylindrical cones.
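The core step, forming time-averaged intensity from the complex pressure and its gradient, can be sketched for a 1-D plane wave. This is an illustration under an assumed e^{-iwt} convention and invented medium parameters, not the TRIAAX implementation:

```python
import numpy as np

rho, c = 1000.0, 1500.0                     # water-like density and sound speed
omega = 2 * np.pi * 100.0                   # 100 Hz (assumed)
k = omega / c                               # wavenumber
x = np.linspace(0.0, 1.0, 201)
p = np.exp(1j * k * x)                      # complex pressure of a plane wave

# particle velocity from the pressure gradient: v = grad(p) / (i * omega * rho)
v = np.gradient(p, x) / (1j * omega * rho)

# time-averaged (active) intensity: I = 0.5 * Re(p * conj(v))
I = 0.5 * np.real(p * np.conj(v))
print(I[:3])                                # ~ 1/(2*rho*c), uniform for a plane wave
```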
NASA Astrophysics Data System (ADS)
Chen, Ming; Peng, Miao; Zhou, Hui; Zheng, Zhiwei; Tang, Xionggui; Maivan, Lap
2017-12-01
To further improve the receiver sensitivity of spectrally efficient guard-band direct-detection optical orthogonal frequency-division multiplexing (OFDM) with the twin single-sideband (SSB) modulation technique, an optical IQ modulator (IQM) is employed to optimize the optical carrier-to-signal power ratio (CSPR). The CSPRs for the guard-band twin-SSB-OFDM signal generated by using a dual-drive Mach-Zehnder modulator (DD-MZM) and an optical IQM are theoretically analyzed and supported by simulations. The optimal CSPR for the two types of guard-band twin-SSB-OFDM is identified. The simulations show that the error vector magnitude (EVM) performance of the IQM-enabled guard-band twin-SSB-OFDM is improved by more than 4 dB compared to that of the twin-SSB-OFDM enabled by the DD-MZM after 80-km single-mode fiber (SMF) transmission. In addition, receiver sensitivity improvements of more than 3 dB in received optical power (ROP) and 10 dB in optical signal-to-noise ratio (OSNR) are also achieved.
A highly linear power amplifier for WLAN
NASA Astrophysics Data System (ADS)
Jie, Jin; Jia, Shi; Baoli, Ai; Xuguang, Zhang
2016-02-01
A three-stage power amplifier (PA) for WLAN applications in the 2.4-2.5 GHz band is presented. The proposed PA employs an adaptive bias circuit to adjust its operating point and improve linearity. Two methods of shorting the 2nd harmonic are compared in terms of the efficiency and gain of the PA. The PA is taped out in a 2 μm InGaP/GaAs HBT process and tested on an evaluation board. The measured results show 31.5 dB power gain and 29.3 dBm P1dB with an associated 40.4% power-added efficiency (PAE) under a single-tone stimulus. Up to 26.5 dBm output power can be achieved with an error vector magnitude (EVM) lower than 3% under the 64QAM/OFDM WLAN stimulus. Project supported by the National Natural Science Foundation of China (No. 61201244) and the Natural Science Fund of SUES (No. E1-0501-14-0168).
Cost-effective bidirectional digitized radio-over-fiber systems employing sigma delta modulation
NASA Astrophysics Data System (ADS)
Lee, Kyung Woon; Jung, HyunDo; Park, Jung Ho
2016-11-01
We propose a cost-effective digitized radio-over-fiber (D-RoF) system employing sigma-delta modulation (SDM) and a bidirectional transmission technique using a phase-modulated downlink and an intensity-modulated uplink. SDM is transparent to different radio access technologies and modulation formats, and is well suited to the downlink of a wireless system because a digital-to-analog converter (DAC) can be avoided at the base station (BS). Also, the central station and BS share the same light source by using phase modulation for the downlink and intensity modulation for the uplink transmission. Avoiding DACs and extra light sources has advantages in terms of cost reduction, power consumption, and compatibility with conventional wireless network structures. We have designed a cost-effective bidirectional D-RoF system using a low-pass SDM and measured the downlink and uplink transmission performance in terms of error vector magnitude, signal spectra, and constellations, based on the 10 MHz LTE 64-QAM standard.
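A first-order low-pass sigma-delta modulator of the kind the downlink relies on can be sketched as follows. Rates, amplitudes, and the receiver filter are illustrative; the paper's actual modulator order and clocking are not specified here:

```python
import numpy as np

def sigma_delta_1bit(x):
    """First-order sigma-delta modulator: encodes x (|x| < 1) as a +/-1 bit stream."""
    acc = 0.0
    bits = np.empty_like(x)
    for i, s in enumerate(x):
        y = 1.0 if acc >= 0.0 else -1.0       # 1-bit quantizer
        acc += s - y                          # integrator accumulates the error
        bits[i] = y
    return bits

t = np.arange(20000)
x = 0.5 * np.sin(2 * np.pi * t / 2000)        # oversampled baseband test tone
bits = sigma_delta_1bit(x)
recovered = np.convolve(bits, np.ones(64) / 64, mode="same")  # crude low-pass
print(np.max(np.abs(recovered[2000:-2000] - x[2000:-2000])))  # residual error
```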
SNARC-like Congruency Based on Number Magnitude and Response Duration
ERIC Educational Resources Information Center
Kiesel, Andrea; Vierck, Esther
2009-01-01
Recent findings demonstrated that number magnitude affects the perception of display time (B. Xuan, D. Zhang, S. He, & X. Chen, 2007). Participants made fewer errors when display time (e.g., short) and magnitude (e.g., small) matched, suggesting an influence of magnitude on time perception. With the present experiment, the authors aimed to extend…
Effect of limbal marking prior to laser ablation on the magnitude of cyclotorsional error.
Chen, Xiangjun; Stojanovic, Aleksandar; Stojanovic, Filip; Eidet, Jon Roger; Raeder, Sten; Øritsland, Haakon; Utheim, Tor Paaske
2012-05-01
To evaluate the residual registration error after limbal-marking-based manual adjustment in cyclotorsional tracker-controlled laser refractive surgery. Two hundred eyes undergoing custom surface ablation with the iVIS Suite (iVIS Technologies) were divided into limbal marked (marked) and non-limbal marked (unmarked) groups. Iris registration information was acquired preoperatively from all eyes. Preoperatively, the horizontal axis was recorded in the marked group for use in manual cyclotorsional alignment prior to surgical iris registration. During iris registration, the preoperative iris information was compared to the eye-tracker captured image. The magnitudes of the registration error angle and cyclotorsional movement during the subsequent laser ablation were recorded and analyzed. Mean magnitude of registration error angle (absolute value) was 1.82°±1.31° (range: 0.00° to 5.50°) and 2.90°±2.40° (range: 0.00° to 13.50°) for the marked and unmarked groups, respectively (P<.001). Mean magnitude of cyclotorsional movement during the laser ablation (absolute value) was 1.15°±1.34° (range: 0.00° to 7.00°) and 0.68°±0.97° (range: 0.00° to 6.00°) for the marked and unmarked groups, respectively (P=.005). Forty-six percent and 60% of eyes had registration error >2°, whereas 22% and 20% of eyes had cyclotorsional movement during ablation >2° in the marked and unmarked groups, respectively. Limbal-marking-based manual alignment prior to laser ablation significantly reduced cyclotorsional registration error. However, residual registration misalignment and cyclotorsional movements remained during ablation. Copyright 2012, SLACK Incorporated.
Competition between learned reward and error outcome predictions in anterior cingulate cortex.
Alexander, William H; Brown, Joshua W
2010-02-15
The anterior cingulate cortex (ACC) is implicated in performance monitoring and cognitive control. Non-human primate studies of ACC show prominent reward signals, but these are elusive in human studies, which instead show mainly conflict and error effects. Here we demonstrate distinct appetitive and aversive activity in human ACC. The error likelihood hypothesis suggests that ACC activity increases in proportion to the likelihood of an error, and ACC is also sensitive to the consequence magnitude of the predicted error. Previous work further showed that error likelihood effects reach a ceiling as the potential consequences of an error increase, possibly due to reductions in the average reward. We explored this issue by independently manipulating reward magnitude of task responses and error likelihood while controlling for potential error consequences in an Incentive Change Signal Task. The fMRI results ruled out a modulatory effect of expected reward on error likelihood effects in favor of a competition effect between expected reward and error likelihood. Dynamic causal modeling showed that error likelihood and expected reward signals are intrinsic to the ACC rather than received from elsewhere. These findings agree with interpretations of ACC activity as signaling both perceptions of risk and predicted reward. Copyright 2009 Elsevier Inc. All rights reserved.
Pulse Vector-Excitation Speech Encoder
NASA Technical Reports Server (NTRS)
Davidson, Grant; Gersho, Allen
1989-01-01
Proposed pulse vector-excitation speech encoder (PVXC) encodes analog speech signals into digital representation for transmission or storage at rates below 5 kilobits per second. Produces high quality of reconstructed speech, but with less computation than required by comparable speech-encoding systems. Has some characteristics of multipulse linear predictive coding (MPLPC) and of code-excited linear prediction (CELP). System uses mathematical model of vocal tract in conjunction with set of excitation vectors and perceptually-based error criterion to synthesize natural-sounding speech.
Backus Effect and Perpendicular Errors in Harmonic Models of Real vs. Synthetic Data
NASA Technical Reports Server (NTRS)
Voorhies, C. V.; Santana, J.; Sabaka, T.
1999-01-01
Measurements of geomagnetic scalar intensity on a thin spherical shell alone are not enough to separate internal from external source fields; moreover, such scalar data are not enough for accurate modeling of the vector field from internal sources because of unmodeled fields and small data errors. Spherical harmonic models of the geomagnetic potential fitted to scalar data alone therefore suffer from well-understood Backus effect and perpendicular errors. Curiously, errors in some models of simulated 'data' are very much less than those in models of real data. We analyze select Magsat vector and scalar measurements separately to illustrate Backus effect and perpendicular errors in models of real scalar data. By using a model to synthesize 'data' at the observation points, and by adding various types of 'noise', we illustrate such errors in models of synthetic 'data'. Perpendicular errors prove quite sensitive to the maximum degree in the spherical harmonic expansion of the potential field model fitted to the scalar data. Small errors in models of synthetic 'data' are found to be an artifact of matched truncation levels. For example, consider scalar synthetic 'data' computed from a degree 14 model. A degree 14 model fitted to such synthetic 'data' yields negligible error, but amplifies 4 nT (rmss) added noise into a 60 nT error (rmss); however, a degree 12 model fitted to the noisy 'data' suffers a 492 nT error (rmss through degree 12). Geomagnetic measurements remain unaware of model truncation, so the small errors indicated by some simulations cannot be realized in practice. Errors in models fitted to scalar data alone approach 1000 nT (rmss) and several thousand nT (maximum).
Ishihara, Hisashi; Ota, Nobuyuki; Asada, Minoru
2017-11-27
It is quite difficult for android robots to replicate the numerous and various types of human facial expressions owing to limitations in terms of space, mechanisms, and materials. This situation could be improved with greater knowledge regarding these expressions and their deformation rules, i.e. by using the biomimetic approach. In a previous study, we investigated 16 facial deformation patterns and found that each facial point moves almost only in its own principal direction and different deformation patterns are created with different combinations of moving lengths. However, the replication errors caused by moving each control point of a face in only their principal direction were not evaluated for each deformation pattern at that time. Therefore, we calculated the replication errors in this study using the second principal component scores of the 16 sets of flow vectors at each point on the face. More than 60% of the errors were within 1 mm, and approximately 90% of them were within 3 mm. The average error was 1.1 mm. These results indicate that robots can replicate the 16 investigated facial expressions with errors within 3 mm and 1 mm for about 90% and 60% of the vectors, respectively, even if each point on the robot face moves in only its own principal direction. This finding seems promising for the development of robots capable of showing various facial expressions because significantly fewer types of movements than previously predicted are necessary.
1974-08-01
[Abstract fragment, garbled in digitization: the surface irregularities are large in comparison to the wavelength, so the field E and its normal derivative may be approximated on the surface S; once the sum vector (Σ) and the difference vector (Δ) at the radar have been determined, the rough boresight error is computed from |Σ| and |Δ|.]
Attitude estimation from magnetometer and earth-albedo-corrected coarse sun sensor measurements
NASA Astrophysics Data System (ADS)
Appel, Pontus
2005-01-01
For full 3-axis attitude determination the magnetic field vector and the Sun vector can be used. A Coarse Sun Sensor consisting of six solar cells placed on each of the six outer surfaces of the satellite is used for Sun vector determination. This robust and low-cost setup is sensitive to surrounding light sources as it sees the whole sky. To compensate for the largest error source, the Earth, an albedo model is developed. The total albedo light vector has contributions from the Earth surface which is illuminated by the Sun and visible from the satellite. Depending on the reflectivity of the Earth surface, the satellite's position, and the Sun's position, the albedo light changes. This cannot be calculated analytically, and hence a numerical model is developed. For on-board computer use, the Earth albedo model consisting of data tables is transferred into polynomial functions in order to save memory space. For an absolute worst case the attitude determination error can be held below 2°. In a nominal case it is better than 1°.
Data entry errors and design for model-based tight glycemic control in critical care.
Ward, Logan; Steel, James; Le Compte, Aaron; Evans, Alicia; Tan, Chia-Siong; Penning, Sophie; Shaw, Geoffrey M; Desaive, Thomas; Chase, J Geoffrey
2012-01-01
Tight glycemic control (TGC) has shown benefits but has been difficult to achieve consistently. Model-based methods and computerized protocols offer the opportunity to improve TGC quality but require human data entry, particularly of blood glucose (BG) values, which can be significantly prone to error. This study presents the design and optimization of data entry methods to minimize error for a computerized and model-based TGC method prior to pilot clinical trials. To minimize data entry error, two tests were carried out to optimize a method with errors less than the 5%-plus reported in other studies. Four initial methods were tested on 40 subjects in random order, and the best two were tested more rigorously on 34 subjects. The tests measured entry speed and accuracy. Errors were reported as corrected and uncorrected errors, with the sum comprising a total error rate. The first set of tests used randomly selected values, while the second set used the same values for all subjects to allow comparisons across users and direct assessment of the magnitude of errors. These research tests were approved by the University of Canterbury Ethics Committee. The final data entry method tested reduced errors to less than 1-2%, a 60-80% reduction from reported values. The magnitude of errors was clinically significant, typically 10.0 mmol/liter or an order of magnitude, but only for extreme values of BG < 2.0 mmol/liter or BG > 15.0-20.0 mmol/liter, both of which could be easily corrected with automated checking of extreme values for safety. The data entry method selected significantly reduced data entry errors in the limited design tests presented, and is in use on a clinical pilot TGC study. The overall approach and testing methods are easily performed and generalizable to other applications and protocols. © 2012 Diabetes Technology Society.
Bonilla, Manuel G.; Mark, Robert K.; Lienkaemper, James J.
1984-01-01
In order to refine correlations of surface-wave magnitude, fault rupture length at the ground surface, and fault displacement at the surface by including the uncertainties in these variables, the existing data were critically reviewed and a new data base was compiled. Earthquake magnitudes were redetermined as necessary to make them as consistent as possible with the Gutenberg methods and results, which make up much of the data base. Measurement errors were estimated for the three variables for 58 moderate to large shallow-focus earthquakes. Regression analyses were then made utilizing the estimated measurement errors. The regression analysis demonstrates that the relations among the variables magnitude, length, and displacement are stochastic in nature. The stochastic variance, introduced in part by incomplete surface expression of seismogenic faulting, variation in shear modulus, and regional factors, dominates the estimated measurement errors. Thus, it is appropriate to use ordinary least squares for the regression models, rather than regression models based upon an underlying deterministic relation in which the variance results primarily from measurement errors. Significant differences exist in correlations of certain combinations of length, displacement, and magnitude when events are grouped by fault type or by region, including attenuation regions delineated by Evernden and others. Estimates of the magnitude and the standard deviation of the magnitude of a prehistoric or future earthquake associated with a fault can be made by correlating Ms with the logarithms of rupture length, fault displacement, or the product of length and displacement. Fault rupture area could be reliably estimated for about 20 of the events in the data set. Regression of Ms on rupture area did not result in a marked improvement over regressions that did not involve rupture area. Because no subduction-zone earthquakes are included in this study, the reported results do not apply to such zones.
Bonilla, M.G.; Mark, R.K.; Lienkaemper, J.J.
1984-01-01
In order to refine correlations of surface-wave magnitude, fault rupture length at the ground surface, and fault displacement at the surface by including the uncertainties in these variables, the existing data were critically reviewed and a new data base was compiled. Earthquake magnitudes were redetermined as necessary to make them as consistent as possible with the Gutenberg methods and results, which necessarily make up much of the data base. Measurement errors were estimated for the three variables for 58 moderate to large shallow-focus earthquakes. Regression analyses were then made utilizing the estimated measurement errors. The regression analysis demonstrates that the relations among the variables magnitude, length, and displacement are stochastic in nature. The stochastic variance, introduced in part by incomplete surface expression of seismogenic faulting, variation in shear modulus, and regional factors, dominates the estimated measurement errors. Thus, it is appropriate to use ordinary least squares for the regression models, rather than regression models based upon an underlying deterministic relation with the variance resulting from measurement errors. Significant differences exist in correlations of certain combinations of length, displacement, and magnitude when events are grouped by fault type or by region, including attenuation regions delineated by Evernden and others. Subdivision of the data results in too few data for some fault types and regions, and for these only regressions using all of the data as a group are reported. Estimates of the magnitude and the standard deviation of the magnitude of a prehistoric or future earthquake associated with a fault can be made by correlating Ms with the logarithms of rupture length, fault displacement, or the product of length and displacement. Fault rupture area could be reliably estimated for about 20 of the events in the data set. Regression of Ms on rupture area did not result in a marked improvement over regressions that did not involve rupture area. Because no subduction-zone earthquakes are included in this study, the reported results do not apply to such zones.
Pointing error analysis of Risley-prism-based beam steering system.
Zhou, Yuan; Lu, Yafei; Hei, Mo; Liu, Guangcan; Fan, Dapeng
2014-09-01
Based on the vector form of Snell's law, ray tracing is performed to quantify the pointing errors of Risley-prism-based beam steering systems, induced by component errors, prism orientation errors, and assembly errors. Case examples are given to elucidate the pointing error distributions in the field of regard and evaluate the allowances of the error sources for a given pointing accuracy. It is found that the assembly errors of the second prism will result in more remarkable pointing errors in contrast with the first one. The pointing errors induced by prism tilt depend on the tilt direction. The allowances of bearing tilt and prism tilt are almost identical if the same pointing accuracy is planned. All conclusions can provide a theoretical foundation for practical works.
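The vector-form refraction step at the heart of such ray tracing is compact enough to sketch directly. The surface normal and refractive index below are illustrative, not the paper's full two-prism model:

```python
import numpy as np

def refract(d, n, eta):
    """Vector-form Snell's law: refract unit ray d at a surface with unit
    normal n (oriented against d); eta = n1/n2. Returns None on TIR."""
    cos_i = -np.dot(n, d)
    sin2_t = eta**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None                               # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n

d = np.array([0.0, 0.0, 1.0])                     # ray along the optical axis
n = np.array([np.sin(0.1), 0.0, -np.cos(0.1)])    # tilted prism face (0.1 rad, assumed)
t = refract(d, n, 1.0 / 1.517)                    # air into BK7-like glass (assumed)
print(t, np.linalg.norm(t))                       # refracted direction, unit magnitude
```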
Shimokochi, Yohei; Ambegaonkar, Jatin P.; Meyer, Eric G.
2016-01-01
Context: Ground reaction force (GRF) and tibiofemoral force magnitudes and directions have been shown to affect anterior cruciate ligament loading during landing. However, the kinematic and kinetic factors modifying these 2 forces during landing are unknown. Objective: To clarify the intersegmental kinematic and kinetic links underlying the alteration of the GRF and tibiofemoral force vectors secondary to changes in the sagittal-plane body position during single-legged landing. Design: Crossover study. Setting: Laboratory. Patients or Other Participants: Twenty recreationally active participants (age = 23.4 ± 3.6 years, height = 171.0 ± 9.4 cm, mass = 73.3 ± 12.7 kg). Intervention(s): Participants performed single-legged landings using 3 landing styles: self-selected landing (SSL), body leaning forward and landing on the toes (LFL), and body upright with flat-footed landing (URL). Three-dimensional kinetics and kinematics were recorded. Main Outcome Measure(s): Sagittal-plane tibial inclination and knee-flexion angles, GRF magnitude and inclination angles relative to the tibia, and proximal tibial forces at peak tibial axial forces. Results: The URL resulted in less time to peak tibial axial forces, smaller knee-flexion angles, and greater magnitude and a more anteriorly inclined GRF vector relative to the tibia than did the SSL. These changes led to the greatest peak tibial axial and anterior shear forces in the URL among the 3 landing styles. Conversely, the LFL resulted in longer time to peak tibial axial forces, greater knee-flexion angles, and reduced magnitude and a more posteriorly inclined GRF vector relative to the tibia than the SSL. These changes in LFL resulted in the lowest peak tibial axial and largest posterior shear forces among the 3 landing styles. Conclusions: Sagittal-plane intersegmental kinematic and kinetic links strongly affected the magnitude and direction of GRF and tibiofemoral forces during the impact phase of single-legged landing. Therefore, improving sagittal-plane landing mechanics is important in reducing harmful magnitudes and directions of impact forces on the anterior cruciate ligament. PMID:27723362
Exploratory Calibration of Adjustable-Protrusion Surface-Obstacle (APSO) Skin Friction Vector Gage
NASA Technical Reports Server (NTRS)
Hakkinen, Raimo J.; Neubauer, Jeremy S.; Hamory, Philip J.; Bui, Trong T.; Noffz, Gregory K.; Young, Ron (Technical Monitor)
2003-01-01
The design of an adjustable-protrusion surface-obstacle (APSO) skin friction vector gage is presented. Results from exploratory calibrations conducted in laminar and turbulent boundary layers at the Washington University Low-Speed Wind Tunnel and for turbulent boundary layers at speeds up to Mach 2 on the ceiling of the NASA Glenn Research Center 8- X 6-ft Supersonic Wind Tunnel are also discussed. The adjustable-height gage was designed to yield both the magnitude and direction of the surface shear stress vector and to measure the local static pressure distribution. Results from the NASA test show good correlation for subsonic and low supersonic conditions covering several orders of magnitude in terms of the adopted similarity variables. Recommendations for future work in this area consist of identifying the physical parameters responsible for the disagreement between the university and NASA data sets, developing a compressibility correction specific to the APSO geometry, and examining the effect that static pressure distribution and skewed boundary layers have on the results from the APSO.
Determining on-fault earthquake magnitude distributions from integer programming
NASA Astrophysics Data System (ADS)
Geist, Eric L.; Parsons, Tom
2018-02-01
Earthquake magnitude distributions among faults within a fault system are determined from regional seismicity and fault slip rates using binary integer programming. A synthetic earthquake catalog (i.e., list of randomly sampled magnitudes) that spans millennia is first formed, assuming that regional seismicity follows a Gutenberg-Richter relation. Each earthquake in the synthetic catalog can occur on any fault and at any location. The objective is to minimize misfits in the target slip rate for each fault, where slip for each earthquake is scaled from its magnitude. The decision vector consists of binary variables indicating which locations are optimal among all possibilities. Uncertainty estimates in fault slip rates provide explicit upper and lower bounding constraints to the problem. An implicit constraint is that an earthquake can only be located on a fault if it is long enough to contain that earthquake. A general mixed-integer programming solver, consisting of a number of different algorithms, is used to determine the optimal decision vector. A case study is presented for the State of California, where a 4 kyr synthetic earthquake catalog is created and faults with slip ≥3 mm/yr are considered, resulting in >10^6 variables. The optimal magnitude distributions for each of the faults in the system span a rich diversity of shapes, ranging from characteristic to power-law distributions.
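A toy version of the binary decision problem can be brute-forced to make the setup concrete. All numbers are invented, and the study uses a general mixed-integer solver on more than a million variables rather than enumeration:

```python
import numpy as np
from itertools import product

# Toy problem: assign each synthetic earthquake to one of two faults so the
# accumulated slip on each fault best matches its target (illustrative numbers).
mags = np.array([6.0, 6.4, 6.8, 7.2])           # synthetic G-R magnitude sample
slip_per_event = 10 ** (0.5 * mags - 2.5)       # notional magnitude-to-slip scaling
targets = np.array([12.0, 17.0])                # target slip per fault over the span

best, best_misfit = None, np.inf
for assign in product([0, 1], repeat=len(mags)):    # the binary decision vector
    a = np.array(assign)
    slip = np.array([slip_per_event[a == f].sum() for f in (0, 1)])
    misfit = np.abs(slip - targets).sum()
    if misfit < best_misfit:
        best, best_misfit = a, misfit
print(best, best_misfit)
```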
An alternative clinical routine for subjective refraction based on power vectors with trial frames.
María Revert, Antonia; Conversa, Maria Amparo; Albarrán Diego, César; Micó, Vicente
2017-01-01
Subjective refraction determines the final point of refractive error assessment in most clinical environments and its foundations have remained unchanged for decades. The purpose of this paper is to compare the results obtained when monocular subjective refraction is assessed in trial frames by a new clinical procedure based on a pure power vector interpretation with conventional clinical refraction procedures. An alternative clinical routine is described that uses power vector interpretation with implementation in trial frames. Refractive error is determined in terms of: (i) the spherical equivalent (M component), and (ii) a pair of Jackson Crossed Cylinder lenses oriented at 0°/90° (J0 component) and 45°/135° (J45 component) for determination of astigmatism. This vector subjective refraction result (VR) is compared separately for right and left eyes of 25 subjects (mean age, 35 ± 4 years) against conventional sphero-cylindrical subjective refraction (RX) using a phoropter. The VR procedure was applied with both conventional tumbling E optotypes (VR1) and modified optotypes with oblique orientation (VR2). Bland-Altman plots and the intra-class correlation coefficient showed good agreement between VR and RX (with coefficient values above 0.82), and ANOVA showed no significant differences in any of the power vector components between RX and VR. VR1 and VR2 procedure results were similar (p ≥ 0.77). The proposed routine determines the three components of refractive error in power vector notation [M, J0, J45], with a refraction time similar to that of conventional subjective procedures. The proposed routine could be helpful for inexperienced clinicians, for experienced clinicians in cases where it is difficult to get a valid starting point for conventional RX (irregular corneas, media opacities, etc.), and for refractive situations/places with inadequate refractive facilities/equipment. © 2016 The Authors Ophthalmic & Physiological Optics © 2016 The College of Optometrists.
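The conversion from sphero-cylindrical notation (S, C, axis) to the power vector [M, J0, J45] follows the standard Thibos formulas, sketched here with an invented example prescription:

```python
import numpy as np

def power_vector(S, C, axis_deg):
    """Sphero-cylinder (S, C, axis) -> power vector [M, J0, J45]."""
    a = np.deg2rad(axis_deg)
    M = S + C / 2.0                  # spherical equivalent
    J0 = -(C / 2.0) * np.cos(2 * a)  # Jackson crossed cylinder at 0/90 degrees
    J45 = -(C / 2.0) * np.sin(2 * a) # Jackson crossed cylinder at 45/135 degrees
    return M, J0, J45

print(power_vector(-2.00, -1.00, 180))   # e.g. the prescription -2.00 -1.00 x 180
```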
Testing hypotheses of earthquake occurrence
NASA Astrophysics Data System (ADS)
Kagan, Y. Y.; Jackson, D. D.; Schorlemmer, D.; Gerstenberger, M.
2003-12-01
We present a relatively straightforward likelihood method for testing those earthquake hypotheses that can be stated as vectors of earthquake rate density in defined bins of area, magnitude, and time. We illustrate the method as it will be applied to the Regional Earthquake Likelihood Models (RELM) project of the Southern California Earthquake Center (SCEC). Several earthquake forecast models are being developed as part of this project, and additional contributed forecasts are welcome. Various models are based on fault geometry and slip rates, seismicity, geodetic strain, and stress interactions. We would test models in pairs, requiring that both forecasts in a pair be defined over the same set of bins. Thus we offer a standard "menu" of bins and ground rules to encourage standardization. One menu category includes five-year forecasts of magnitude 5.0 and larger. Forecasts would be in the form of a vector of yearly earthquake rates on a 0.05 degree grid at the beginning of the test. Focal mechanism forecasts, when available, would also be archived and used in the tests. The five-year forecast category may be appropriate for testing hypotheses of stress shadows from large earthquakes. Interim progress will be evaluated yearly, but final conclusions would be made on the basis of cumulative five-year performance. The second category includes forecasts of earthquakes above magnitude 4.0 on a 0.05 degree grid, evaluated and renewed daily. Final evaluation would be based on cumulative performance over five years. Other types of forecasts with different magnitude, space, and time sampling are welcome and will be tested against other models with shared characteristics. All earthquakes would be counted, and no attempt made to separate foreshocks, main shocks, and aftershocks. Earthquakes would be considered as point sources located at the hypocenter. For each pair of forecasts, we plan to compute alpha, the probability that the first would be wrongly rejected in favor of the second, and beta, the probability that the second would be wrongly rejected in favor of the first. Computing alpha and beta requires knowing the theoretical distribution of likelihood scores under each hypothesis, which we will estimate by simulations. Each forecast is given equal status; there is no "null hypothesis" which would be accepted by default. Forecasts and test results would be archived and posted on the RELM web site. The same methods can be applied to any region with adequate monitoring and sufficient earthquakes. If fewer than ten events are forecasted, the likelihood tests may not give definitive results. The tests do force certain requirements on the forecast models. Because the tests are based on absolute rates, stress models must be explicit about how stress increments affect past seismicity rates. Aftershocks of triggered events must be accounted for. Furthermore, the tests are sensitive to magnitude, so forecast models must specify the magnitude distribution of triggered events. Models should account for probable errors in magnitude and location by appropriate smoothing of the probabilities, as the tests will be "cold-hearted": near misses won't count.
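A minimal sketch of the bin-based likelihood comparison, with invented rates (the actual RELM tests are more elaborate): the score for each forecast is the joint Poisson log-likelihood of the observed bin counts, and the distribution of the score difference under one hypothesis is estimated by simulation.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)

def log_likelihood(rates, counts):
    """Joint Poisson log-likelihood of observed bin counts under a forecast."""
    return poisson.logpmf(counts, rates).sum()

# two hypothetical forecasts over the same space-magnitude bins
f1 = rng.uniform(0.01, 0.2, 500)                 # expected events per bin
f2 = f1 * rng.uniform(0.5, 1.5, 500)

obs = rng.poisson(f1)                            # pretend nature follows forecast 1
R = log_likelihood(f1, obs) - log_likelihood(f2, obs)

# distribution of the score difference if forecast 1 were true, by simulation;
# the fraction of simulated scores at or below the observed one estimates a p-value
sims = [log_likelihood(f1, s) - log_likelihood(f2, s)
        for s in rng.poisson(f1, size=(1000, 500))]
print(R, np.mean(np.array(sims) <= R))
```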
Ganesh, Sri; Brar, Sheetal; Pawar, Archana
2017-08-01
To study the safety, efficacy, and outcomes of manual cyclotorsion compensation in small incision lenticule extraction (SMILE) for myopic astigmatism. Eligible patients with myopia from -1.00 to -10.00 diopters (D) spherical equivalent with a minimum astigmatism of 0.75 D undergoing SMILE were included. Intraoperative cyclotorsion compensation was performed by gently rotating the cone and aligning the 0° to 180° limbal marks with the horizontal axis of the reticule of the right eye piece of the microscope of the femtosecond laser after activating the suction. In this study, 81 left eyes from 81 patients were analyzed for vector analysis of astigmatism. The mean cyclotorsion was 5.64° ± 2.55° (range: 2° to 12°). No significant differences were found for surgically induced astigmatism, difference vector, angle of error (AE), correction index, magnitude of error, index of success (IOS), and flattening index between 2 weeks and 3 months postoperatively (P > .05). The eyes were categorized into low (≤ 1.50 D, n = 37) and high (> 1.50 D, n = 44) cylinder groups. At 3 months, intergroup analysis showed a comparable correction index of 0.97 for the low and 0.93 for the high cylinder groups, suggesting a slight undercorrection of 3% and 7%, respectively (P = .14). However, the AE and IOS were significantly lower in the high compared to the low cylinder group (P = .032 and .024 for AE and IOS, respectively), suggesting better alignment of the treatment in the high cylinder group. However, the mean uncorrected distance visual acuity of both groups was comparable (P = .21), suggesting good visual outcomes in the low cylinder group despite a less favorable IOS. Manual compensation may be a safe, feasible, and effective approach to refine the results of astigmatism with SMILE, especially in higher degrees of cylinders. [J Refract Surg. 2017;33(8):506-512.]. Copyright 2017, SLACK Incorporated.
Outcome of corneal and laser astigmatic axis alignment in photoastigmatic refractive keratectomy.
Farah, S G; Olafsson, E; Gwynn, D G; Azar, D T; Brightbill, F S
2000-12-01
To compare the refractive results of laser astigmatic treatment in eyes in which the astigmatic axes of the eye and laser are aligned by limbal marking at the 6 o'clock position and in eyes that are not marked. University Hospital and Clinics, Madison, Wisconsin, USA. This retrospective study comprised 143 eyes that had photoastigmatic refractive keratectomy with the VISX Star excimer laser. The eyes were divided into marked (G1) and unmarked (G2) groups. Based on the preoperative astigmatism, each group was subdivided into low astigmatism (≤1.00 diopter [D]) and high astigmatism (≥1.25 D). Early postoperative manifest refractions (1.0 to 2.5 months) were analyzed. The Alpins vector analysis method was used to calculate the target induced astigmatism, surgically induced astigmatism, difference vector (DV), magnitude of error (ME), angle of error (AE), and index of success (IS). There was no significant difference between the groups in DV, ME, and IS. When the subgroups were analyzed, the DV and ME were comparable; the IS in the G1 high astigmatism subgroup was significantly better than that in the G2 high astigmatism subgroup (0.22 ± 0.08 and 0.29 ± 0.04, respectively; P<.0001). There was comparable scatter of AE values; 30% and 36% in G1 and G2, respectively, had an AE of 0. Similar scatter was observed in the subgroups. Of the eyes that had an AE of 0, 90% and 43% in the high astigmatism subgroups of G1 and G2, respectively (P<.05), had full correction of astigmatism. Limbal marking and subsequent eye and laser astigmatic axis alignment improved the refractive outcome of laser astigmatic treatment of ≥1.25 D. A preliminary report of an ongoing prospective randomized study of eyes that had laser in situ keratomileusis is included.
Anketell, Pamela M; Saunders, Kathryn J; Gallagher, Stephen; Bailey, Clare; Little, Julie-Anne
2016-07-01
Autistic Spectrum Disorder (ASD) is a common neurodevelopmental disorder characterised by impairment of communication, social interaction and repetitive behaviours. Only a small number of studies have investigated fundamental clinical measures of vision including refractive error. The aim of this study was to describe the refractive profile of a population of children with ASD compared to typically developing (TD) children. Refractive error was assessed using the Shin-Nippon NVision-K 5001 open-field autorefractor following the instillation of cyclopentolate hydrochloride 1% eye drops. A total of 128 participants with ASD (mean age 10.9 ± 3.3 years) and 206 typically developing participants (11.5 ± 3.1 years) were recruited. There was no significant difference in median refractive error, either by spherical equivalent or most ametropic meridian between the ASD and TD groups (Spherical equivalent, Mann-Whitney U307 = 1.15, p = 0.25; Most Ametropic Meridian, U305 = 0.52, p = 0.60). Median refractive astigmatism was -0.50DC (range 0.00 to -3.50DC) for the ASD group and -0.50DC (range 0.00 to -2.25DC) for the TD group. Magnitude and prevalence of refractive astigmatism (defined as astigmatism ≥1.00DC) was significantly greater in the ASD group compared to the typically developing group (ASD 26%, TD 8%; magnitude U305 = 3.86, p = 0.0001; prevalence χ²(1) = 17.71, p < 0.0001). This is the first study to describe the refractive profile of a population of European Caucasian children with ASD compared to a TD population of children. Unlike other neurodevelopmental conditions, there was no increased prevalence of spherical refractive errors in ASD but astigmatic errors were significantly greater in magnitude and prevalence. This highlights the need to examine refractive errors in this population. © 2016 The Authors Ophthalmic & Physiological Optics © 2016 The College of Optometrists.
Compliant tactile sensor that delivers a force vector
NASA Technical Reports Server (NTRS)
Torres-Jara, Eduardo (Inventor)
2010-01-01
Tactile Sensor. The sensor includes a compliant convex surface disposed above a sensor array, the sensor array adapted to respond to deformation of the convex surface to generate a signal related to an applied force vector. The applied force vector has three components to establish the direction and magnitude of an applied force. The compliant convex surface defines a dome with a hollow interior and has a linear relation between displacement and load including a magnet disposed substantially at the center of the dome above a sensor array that responds to magnetic field intensity.
Model Performance Evaluation and Scenario Analysis ...
This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit measures that capture magnitude-only, sequence-only, and combined magnitude-and-sequence errors. The performance measures include error analysis, coefficient of determination, Nash-Sutcliffe efficiency, and a new weighted rank method. These performance metrics only provide useful information about the overall model performance. Note that MPESA is based on the separation of observed and simulated time series into magnitude and sequence components. The separation of time series into magnitude and sequence components and the reconstruction back to time series provides diagnostic insights to modelers. For example, traditional approaches lack the capability to identify whether the source of uncertainty in the simulated data is due to the quality of the input data or the way the analyst adjusted the model parameters. This report presents a suite of model diagnostics that identify whether mismatches between observed and simulated data result from magnitude- or sequence-related errors. MPESA offers graphical and statistical options that allow HSPF users to compare observed and simulated time series and identify the parameter values to adjust or the input data to modify. The scenario analysis part of the tool…
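The magnitude/sequence separation can be illustrated with a small sketch: comparing the sorted series scores magnitude agreement only (a flow-duration-curve view), so any additional disagreement in the unsorted series is sequence error. This is an illustrative reading of the decomposition, not MPESA's exact algorithm:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency."""
    return 1.0 - np.sum((obs - sim)**2) / np.sum((obs - np.mean(obs))**2)

rng = np.random.default_rng(1)
obs = np.sin(np.linspace(0, 8 * np.pi, 400)) + 2.0          # synthetic observations
sim = np.roll(obs, 5) + 0.1 * rng.standard_normal(400)      # timing + magnitude error

print("NSE (combined):  ", nse(obs, sim))
print("NSE (magnitude): ", nse(np.sort(obs), np.sort(sim)))  # sequence error removed
```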
Comparing Measurement Error between Two Different Methods of Measurement of Various Magnitudes
ERIC Educational Resources Information Center
Zavorsky, Gerald S.
2010-01-01
Measurement error is a common problem in several fields of research such as medicine, physiology, and exercise science. The standard deviation of repeated measurements on the same person is the measurement error. One way of presenting measurement error is called the repeatability, which is 2.77 multiplied by the within subject standard deviation.…
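As a quick numerical illustration of that definition, using made-up repeated measurements with two trials per subject:

```python
import numpy as np

# repeated measurements: rows = subjects, columns = trials (invented data)
x = np.array([[12.1, 12.4], [9.8, 10.1], [11.0, 10.6], [13.2, 13.5]])

sw = np.sqrt(np.mean(np.var(x, axis=1, ddof=1)))   # within-subject standard deviation
repeatability = 2.77 * sw                          # ~95% limit for |trial1 - trial2|
print(sw, repeatability)
```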
Model-based VQ for image data archival, retrieval and distribution
NASA Technical Reports Server (NTRS)
Manohar, Mareboyana; Tilton, James C.
1995-01-01
An ideal image compression technique for image data archival, retrieval, and distribution would be one with the asymmetrical computational requirements of Vector Quantization (VQ), but without the complications arising from VQ codebooks. Codebook generation and maintenance are stumbling blocks which have limited the use of VQ as a practical image compression algorithm. Model-based VQ (MVQ), a variant of VQ described here, has the computational properties of VQ but does not require explicit codebooks. The codebooks are internally generated using a mean-removed error model and a Human Visual System (HVS) model. The assumed error model is a Laplacian distribution whose mean, lambda, is computed from a sample of the input image. Laplacian-distributed random numbers with mean lambda are generated with a uniform random number generator and grouped into vectors. These vectors are further conditioned to make them perceptually meaningful by filtering the DCT coefficients of each vector. The DCT coefficients are filtered by multiplying by a weight matrix that is found to be optimal for human perception, and the inverse DCT is performed to produce the conditioned vectors for the codebook. The only image-dependent parameter used in generating the codebook is the mean, lambda, which is included in the coded file so that the codebook generation process can be repeated for decoding.
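The codebook generation step can be sketched as follows. The vector dimension, codebook size, and the DCT-domain weighting are placeholders; the paper's perceptually optimal weight matrix is not reproduced here:

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)

def mvq_codebook(lam, n_vectors=256, dim=16, weight=None):
    """MVQ-style codebook: Laplacian residual vectors shaped by a DCT-domain
    perceptual weighting (the weight used here is an assumed stand-in)."""
    lap = rng.laplace(0.0, lam, size=(n_vectors, dim))    # residual error model
    if weight is None:
        weight = 1.0 / (1.0 + np.arange(dim))             # assumed HVS-like roll-off
    coeffs = dct(lap, norm="ortho", axis=1) * weight      # filter DCT coefficients
    return idct(coeffs, norm="ortho", axis=1)             # conditioned code vectors

codebook = mvq_codebook(lam=4.0)     # lambda is estimated from the input image in MVQ
print(codebook.shape)
```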
A method of treating the non-grey error in total emittance measurements
NASA Technical Reports Server (NTRS)
Heaney, J. B.; Henninger, J. H.
1971-01-01
In techniques for the rapid determination of total emittance, the sample is generally exposed to surroundings that are at a different temperature than the sample's surface. When the infrared spectral reflectance of the surface is spectrally selective, these techniques introduce an error into the total emittance values. Surfaces of aluminum overcoated with oxides of various thicknesses fall into this class. Because they are often used as temperature control coatings on satellites, their emittances must be accurately known. The magnitude of the error was calculated for Alzak and silicon oxide-coated aluminum and was shown to be dependent on the thickness of the oxide coating. The results demonstrate that, because the magnitude of the error is thickness-dependent, it is generally impossible or impractical to eliminate it by calibrating the measuring device.
Implementation and Assessment of Advanced Analog Vector-Matrix Processor
NASA Technical Reports Server (NTRS)
Gary, Charles K.; Bualat, Maria G.; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
This paper discusses the design and implementation of an analog optical vector-matrix coprocessor with a throughput of 128 Mops for a personal computer. Vector-matrix calculations are inherently parallel, providing a promising domain for the use of optical calculators. However, to date, digital optical systems have proven too cumbersome to replace electronics, and analog processors have not demonstrated sufficient accuracy in large scale systems. The goal of the work described in this paper is to demonstrate a viable optical coprocessor for linear operations. The analog optical processor presented has been integrated with a personal computer to provide full functionality and is the first demonstration of an optical linear algebra processor with a throughput greater than 100 Mops. The optical vector-matrix processor consists of a laser diode source, an acoustooptical modulator array to input the vector information, a liquid crystal spatial light modulator to input the matrix information, an avalanche photodiode array to read out the result vector of the vector-matrix multiplication, as well as transport optics and the electronics necessary to drive the optical modulators and interface to the computer. The intent of this research is to provide a low cost, highly energy efficient coprocessor for linear operations. Measurements of the analog accuracy of the processor performing 128 Mops are presented along with an assessment of the implications for future systems. A range of noise sources, including cross-talk, source amplitude fluctuations, shot noise at the detector, and non-linearities of the optoelectronic components are measured and compared to determine the most significant source of error. The possibilities for reducing these sources of error are discussed. Also, the total error is compared with that expected from a statistical analysis of the individual components and their relation to the vector-matrix operation. The sufficiency of the measured accuracy of the processor is compared with that required for a range of typical problems. Calculations resolving alloy concentrations from spectral plume data of rocket engines are implemented on the optical processor, demonstrating its sufficiency for this problem. We also show how this technology can be easily extended to a 100 x 100 10 MHz (200 Gops) processor.
An LPV Adaptive Observer for Updating a Map Applied to an MAF Sensor in a Diesel Engine
Liu, Zhiyuan; Wang, Changhui
2015-01-01
In this paper, a new method for mass air flow (MAF) sensor error compensation and an online updating error map (or lookup table) due to installation and aging in a diesel engine is developed. Since the MAF sensor error is dependent on the engine operating point, the error model is represented as a two-dimensional (2D) map with two inputs, fuel mass injection quantity and engine speed. Meanwhile, the 2D map representing the MAF sensor error is described as a piecewise bilinear interpolation model, which can be written as a dot product between the regression vector and parameter vector using a membership function. With the combination of the 2D map regression model and the diesel engine air path system, an LPV adaptive observer with low computational load is designed to estimate states and parameters jointly. The convergence of the proposed algorithm is proven under the conditions of persistent excitation and given inequalities. The observer is validated against the simulation data from engine software enDYNA provided by Tesis. The results demonstrate that the operating point-dependent error of the MAF sensor can be approximated acceptably by the 2D map from the proposed method. PMID:26512675
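The map representation the observer relies on, bilinear interpolation written as a dot product between a membership (regression) vector and the parameter vector of grid-node values, can be sketched as follows. The breakpoints and node values are invented:

```python
import numpy as np

def regression_vector(q, w, q_grid, w_grid):
    """Membership (regression) vector for piecewise bilinear interpolation on a
    2D grid; its dot product with the flattened node values gives the map output."""
    i = np.clip(np.searchsorted(q_grid, q) - 1, 0, len(q_grid) - 2)
    j = np.clip(np.searchsorted(w_grid, w) - 1, 0, len(w_grid) - 2)
    a = (q - q_grid[i]) / (q_grid[i + 1] - q_grid[i])
    b = (w - w_grid[j]) / (w_grid[j + 1] - w_grid[j])
    phi = np.zeros(len(q_grid) * len(w_grid))
    for di, dj, m in [(0, 0, (1 - a) * (1 - b)), (1, 0, a * (1 - b)),
                      (0, 1, (1 - a) * b), (1, 1, a * b)]:
        phi[(i + di) * len(w_grid) + (j + dj)] = m
    return phi

q_grid = np.array([0.0, 20.0, 40.0, 60.0])     # fuel injection quantity breakpoints
w_grid = np.array([800.0, 1600.0, 2400.0])     # engine speed breakpoints (assumed)
theta = np.random.randn(q_grid.size * w_grid.size)   # map node values (parameters)
y = regression_vector(25.0, 1200.0, q_grid, w_grid) @ theta   # interpolated output
print(y)
```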
Yang, Yanqiang; Zhang, Chunxi; Lu, Jiazhen
2017-01-01
Strapdown inertial navigation system/celestial navigation system (SINS/CNS) integrated navigation is a fully autonomous and high-precision method, which has been widely used to improve the hitting accuracy and quick reaction capability of near-Earth flight vehicles. The installation errors between SINS and star sensors have been one of the main factors restricting the actual accuracy of SINS/CNS. In this paper, an integration algorithm based on star vector observations is derived considering the star sensor installation error. Then, the star sensor installation error is accurately estimated based on Kalman filtering (KF). Meanwhile, a local observability analysis is performed on the rank of the observability matrix obtained via the linearized observation equation, and the observable conditions are presented and validated. The number of star vectors should be greater than or equal to 2, and the number of posture adjustments should also be greater than or equal to 2. Simulations indicate that the star sensor installation error is readily observable under the maneuvering condition; moreover, the attitude errors of SINS are less than 7 arc-seconds. This analysis method and conclusion are useful in the ballistic trajectory design of near-Earth flight vehicles. PMID:28275211
Definition of Contravariant Velocity Components
NASA Technical Reports Server (NTRS)
Hung, Ching-Mao; Kwak, Dochan (Technical Monitor)
2002-01-01
This is an old issue in computational fluid dynamics (CFD): what is the so-called contravariant velocity, or contravariant velocity component? In this article, we review the basics of tensor analysis and give the contravariant velocity component a rigorous explanation. For a given coordinate system, there exist two uniquely determined base vector systems: one is the covariant and the other is the contravariant base vector system. The two base vector systems are reciprocal. The so-called contravariant velocity component is really the contravariant component of a velocity vector for a time-independent coordinate system, or the contravariant component of the relative velocity between fluid and coordinates for a time-dependent coordinate system. The contravariant velocity components are not physical quantities of the velocity vector. Their magnitudes, dimensions, and associated directions are controlled by their corresponding covariant base vectors. Several 2-D (two-dimensional) linear examples and the 2-D mass-conservation equation are used to illustrate the details of expressing a vector with respect to the covariant and contravariant base vector systems, respectively.
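As a compact statement of the reciprocity described above (standard curvilinear-coordinate notation, not quoted from the article), for coordinates ξ^i(x) the base vectors and the contravariant velocity component are:

```latex
\mathbf{e}_i = \frac{\partial \mathbf{x}}{\partial \xi^i}, \qquad
\mathbf{e}^i = \nabla \xi^i, \qquad
\mathbf{e}^i \cdot \mathbf{e}_j = \delta^i_j, \qquad
U^i = \mathbf{u} \cdot \mathbf{e}^i = \frac{\partial \xi^i}{\partial x_k}\, u_k
```

For a time-dependent grid, u in the last relation is replaced by the velocity of the fluid relative to the moving coordinates, matching the article's description.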
Vector wind profile gust model
NASA Technical Reports Server (NTRS)
Adelfang, S. I.
1979-01-01
Work towards establishing a vector wind profile gust model for the Space Transportation System flight operations and trade studies is reported. To date, all the statistical and computational techniques required were established and partially implemented. An analysis of wind profile gust at Cape Kennedy within the theoretical framework is presented. The variability of theoretical and observed gust magnitude with filter type, altitude, and season is described. Various examples are presented which illustrate agreement between theoretical and observed gust percentiles. The preliminary analysis of the gust data indicates a strong variability with altitude, season, and wavelength regime. An extension of the analyses to include conditional distributions of gust magnitude given gust length, distributions of gust modulus, and phase differences between gust components has begun.
Camps; Prevot
1996-08-09
The statistical characteristics of the local magnetic field of Earth during paleosecular variation, excursions, and reversals are described on the basis of a database that gathers the cleaned mean direction and average remanent intensity of 2741 lava flows that have erupted over the last 20 million years. A model consisting of a normally distributed axial dipole component plus an independent isotropic set of vectors with a Maxwellian distribution that simulates secular variation fits the range of geomagnetic fluctuations, in terms of both direction and intensity. This result suggests that the magnitude of secular variation vectors is independent of the magnitude of Earth's axial dipole moment and that the amplitude of secular variation is unchanged during reversals.
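The generative model described, a normally distributed axial dipole plus an isotropic secular-variation vector with Maxwellian magnitude, is easy to sample; an isotropic Gaussian 3-vector automatically has a Maxwell-distributed magnitude. All parameter values below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_field(n, dipole_mean=40.0, dipole_sd=8.0, sv_sigma=6.0):
    """Local field = normally distributed axial dipole (z-axis) + isotropic
    secular-variation vector (values in microtesla, illustrative only)."""
    dipole = np.zeros((n, 3))
    dipole[:, 2] = rng.normal(dipole_mean, dipole_sd, n)
    sv = rng.normal(0.0, sv_sigma, (n, 3))   # isotropic SV, Maxwellian magnitude
    return dipole + sv

F = sample_field(2741)                       # same count as the lava-flow database
intensity = np.linalg.norm(F, axis=1)
inclination = np.degrees(np.arctan2(F[:, 2], np.hypot(F[:, 0], F[:, 1])))
print(intensity.mean(), inclination.mean())
```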
Atmospheric refraction effects on baseline error in satellite laser ranging systems
NASA Technical Reports Server (NTRS)
Im, K. E.; Gardner, C. S.
1982-01-01
Because of the mathematical complexities involved in exact analyses of baseline errors, it is not easy to isolate atmospheric refraction effects; however, by making certain simplifying assumptions about the ranging system geometry, relatively simple expressions can be derived which relate the baseline errors directly to the refraction errors. The results indicate that even in the absence of other errors, the baseline error for intercontinental baselines can be more than an order of magnitude larger than the refraction error.
An investigation of reports of Controlled Flight Toward Terrain (CFTT)
NASA Technical Reports Server (NTRS)
Porter, R. F.; Loomis, J. P.
1981-01-01
Some 258 reports from more than 23,000 documents in the files of the Aviation Safety Reporting System (ASRS) were found to concern the hazard of flight into terrain with no prior awareness by the crew of impending disaster. Examination of the reports indicates that human error was a causal factor in 64% of the incidents in which some threat of terrain conflict was experienced. Approximately two-thirds of the human errors were attributed to controllers, the most common discrepancy being a radar vector below the Minimum Vector Altitude (MVA). Errors by pilots were of a more diverse nature and included a few instances of gross deviations from assigned altitudes. The ground proximity warning system and the minimum safe altitude warning equipment were the initial recovery factor in some 18 serious incidents and were apparently the sole warning in six reported instances which otherwise would most probably have ended in disaster.
Modeling and simulation for fewer-axis grinding of complex surface
NASA Astrophysics Data System (ADS)
Li, Zhengjian; Peng, Xiaoqiang; Song, Ci
2017-10-01
As the basis of fewer-axis grinding of complex surfaces, the grinding mathematical model is of great importance. A mathematical model of the grinding wheel was established, from which the coordinates and normal vector of the wheel profile can be calculated. Through normal vector matching at the cutter contact point and a coordinate system transformation, the grinding mathematical model was established to compute the coordinates of the cutter location point. Based on the model, an interference analysis was simulated to find the correct position and posture of the workpiece for grinding. The positioning errors of the workpiece, including the translational and rotational positioning errors, were then analyzed, and the main locating datum was obtained. According to the analysis results, the grinding tool path was planned and generated to grind the complex surface, and good form accuracy was obtained. The grinding mathematical model is simple and feasible, and can be widely applied.
Acoustic Biometric System Based on Preprocessing Techniques and Linear Support Vector Machines
del Val, Lara; Izquierdo-Fuente, Alberto; Villacorta, Juan J.; Raboso, Mariano
2015-01-01
Drawing on the results of an acoustic biometric system based on a MSE classifier, a new biometric system has been implemented. This new system preprocesses acoustic images, extracts several parameters and finally classifies them, based on a Support Vector Machine (SVM). The preprocessing techniques used are spatial filtering; segmentation, based on a Gaussian Mixture Model (GMM), to separate the person from the background; masking, to reduce the dimensions of the images; and binarization, to reduce the size of each image. An analysis of classification error and a study of the sensitivity of the error versus the computational burden of each implemented algorithm are presented. This allows the selection of the most relevant algorithms, according to the benefits required by the system. A significant improvement of the biometric system has been achieved by reducing the classification error, the computational burden and the storage requirements. PMID:26091392
Design of an optimal preview controller for linear discrete-time descriptor systems with state delay
NASA Astrophysics Data System (ADS)
Cao, Mengjuan; Liao, Fucheng
2015-04-01
In this paper, the linear discrete-time descriptor system with state delay is studied, and a design method for an optimal preview controller is proposed. First, by using the discrete lifting technique, the original system is transformed into a general descriptor system without state delay in form. Then, taking advantage of the first-order forward difference operator, we construct a descriptor augmented error system, including the state vectors of the lifted system, error vectors, and desired target signals. Rigorous mathematical proofs are given for the regularity, stabilisability, causal controllability, and causal observability of the descriptor augmented error system. Based on these, the optimal preview controller with preview feedforward compensation for the original system is obtained by using the standard optimal regulator theory of the descriptor system. The effectiveness of the proposed method is shown by numerical simulation.
Early Error Detection: An Action-Research Experience Teaching Vector Calculus
ERIC Educational Resources Information Center
Añino, María Magdalena; Merino, Gabriela; Miyara, Alberto; Perassi, Marisol; Ravera, Emiliano; Pita, Gustavo; Waigandt, Diana
2014-01-01
This paper describes an action-research experience carried out with second year students at the School of Engineering of the National University of Entre Ríos, Argentina. Vector calculus students played an active role in their own learning process. They were required to present weekly reports, in both oral and written forms, on the topics studied,…
Error assessment of local tie vectors in space geodesy
NASA Astrophysics Data System (ADS)
Falkenberg, Jana; Heinkelmann, Robert; Schuh, Harald
2014-05-01
For the computation of the ITRF, the data of the geometric space-geodetic techniques at co-location sites are combined. The combination increases the redundancy and offers the possibility to utilize the strengths of each technique while mitigating their weaknesses. To enable the combination of co-located techniques, each technique needs to have a well-defined geometric reference point. Linking the geometric reference points enables the combination of the technique-specific coordinates into a multi-technique site coordinate. The vectors between these reference points are called "local ties". Local ties are usually realized by local surveys of the distances and/or angles between the reference points. Identified temporal variations of the reference points are considered in the local tie determination only indirectly, by assuming a mean position. Finally, the local ties measured in the local surveying network are to be transformed into the ITRF, the global geocentric equatorial coordinate system of the space-geodetic techniques. The current IERS procedure for the combination of the space-geodetic techniques includes the local tie vectors with an error floor of three millimeters plus a distance-dependent component. This error floor, however, significantly overstates the real accuracy of local tie determination. To fulfill the GGOS goals of 1 mm position accuracy and 0.1 mm/yr velocity accuracy, local tie accuracy at the sub-mm level will be mandatory, which is currently not achievable. To assess the local tie effects on ITRF computations, the error sources must be investigated so that they can be realistically quantified and taken into account. Hence, a reasonable estimate of all the errors included in the various local ties is needed. An appropriate estimate could also improve the separation of local tie errors from technique-specific error contributions to the uncertainties, and thus help assess the accuracy of the space-geodetic techniques. Our investigations concern the simulation of the error contribution of each component of the local tie definition and determination. A closer look into the models of reference point definition, accessibility, measurement, and transformation is necessary to properly model the error of the local tie. The effect of temporal variations on the local ties will be studied as well. The transformation of the local survey into the ITRF can be assumed to be the largest error contributor, in particular the orientation of the local surveying network to the ITRF.
Quinn, Kylie M.; Costa, Andreia Da; Yamamoto, Ayako; Berry, Dana; Lindsay, Ross W.B.; Darrah, Patricia A.; Wang, Lingshu; Cheng, Cheng; Kong, Wing-Pui; Gall, Jason G.D.; Nicosia, Alfredo; Folgori, Antonella; Colloca, Stefano; Cortese, Riccardo; Gostick, Emma; Price, David A.; Gomez, Carmen E.; Esteban, Mariano; Wyatt, Linda S.; Moss, Bernard; Morgan, Cecilia; Roederer, Mario; Bailer, Robert T.; Nabel, Gary J.; Koup, Richard A.; Seder, Robert A.
2013-01-01
Recombinant adenoviral vectors (rAds) are the most potent recombinant vaccines for eliciting CD8+ T cell-mediated immunity in humans; however, prior exposure from natural adenoviral infection can decrease such responses. Here we show low seroreactivity in humans against simian- (sAd11, sAd16), or chimpanzee-derived (chAd3, chAd63) compared to human-derived (rAd5, rAd28, rAd35) vectors across multiple geographic regions. We then compared the magnitude, quality, phenotype and protective capacity of CD8+ T cell responses in mice vaccinated with rAds encoding SIV Gag. Using a dose range (1 × 107 to 109 PU), we defined a hierarchy among rAd vectors based on the magnitude and protective capacity of CD8+ T cell responses, from most to least as: rAd5 and chAd3, rAd28 and sAd11, chAd63, sAd16, and rAd35. Selection of rAd vector or dose could modulate the proportion and/or frequency of IFNγ+TNFα+IL-2+ and KLRG1+CD127- CD8+ T cells, but strikingly ~30–80% of memory CD8+ T cells co-expressed CD127 and KLRG1. To further optimise CD8+ T cell responses, we assessed rAds as part of prime-boost regimens. Mice primed with rAds and boosted with NYVAC generated Gag-specific responses that approached ~60% of total CD8+ T cells at peak. Alternatively, priming with DNA or rAd28 and boosting with rAd5 or chAd3 induced robust and equivalent CD8+ T cell responses compared to prime or boost alone. Collectively, these data provide the immunologic basis for using specific rAd vectors alone or as part of prime-boost regimens to induce CD8+ T cells for rapid effector function or robust long-term memory, respectively. PMID:23390298
Nogales-Bueno, Julio; Ayala, Fernando; Hernández-Hierro, José Miguel; Rodríguez-Pulido, Francisco José; Echávarri, José Federico; Heredia, Francisco José
2015-05-06
Characteristic vector analysis has been applied to near-infrared spectra to extract the main spectral information from hyperspectral images. For this purpose, 3, 6, 9, and 12 characteristic vectors have been used to reconstruct the spectra, and root-mean-square errors (RMSEs) have been calculated to measure the differences between characteristic vector reconstructed spectra (CVRS) and hyperspectral imaging spectra (HIS). RMSE values obtained were 0.0049, 0.0018, 0.0012, and 0.0012 [log(1/R) units] for spectra allocated into the validation set, for 3, 6, 9, and 12 characteristic vectors, respectively. After that, calibration models have been developed and validated using the different groups of CVRS to predict skin total phenolic concentration, sugar concentration, titratable acidity, and pH by modified partial least-squares (MPLS) regression. The obtained results have been compared to those previously obtained from HIS. The models developed from the CVRS reconstructed from 12 characteristic vectors present similar values of coefficients of determination (RSQ) and standard errors of prediction (SEP) to the models developed from HIS: RSQ and SEP were 0.84 and 1.13 mg g(-1) of grape skin (expressed as gallic acid equivalents), 0.93 and 2.26 °Brix, 0.97 and 3.87 g L(-1) (expressed as tartaric acid equivalents), and 0.91 and 0.14 for skin total phenolic concentration, sugar concentration, titratable acidity, and pH, respectively.
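A sketch of the reconstruction step under the common interpretation that characteristic vectors are the principal components of the mean-centred spectra; the data here are synthetic stand-ins, not the grape hyperspectral data of the study.

```python
import numpy as np

def characteristic_vector_reconstruction(spectra, k):
    """Reconstruct spectra from their first k characteristic vectors
    (principal components of the mean-centred spectra)."""
    mean = spectra.mean(axis=0)
    centred = spectra - mean
    # Rows of vt are the characteristic vectors (eigenvectors of the covariance).
    u, s, vt = np.linalg.svd(centred, full_matrices=False)
    scores = centred @ vt[:k].T
    return mean + scores @ vt[:k]

rng = np.random.default_rng(2)
spectra = rng.normal(size=(200, 256))    # stand-in for log(1/R) spectra

for k in (3, 6, 9, 12):
    recon = characteristic_vector_reconstruction(spectra, k)
    rmse = np.sqrt(np.mean((spectra - recon) ** 2))
    print(k, rmse)   # RMSE decreases as more characteristic vectors are kept
```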
A predictability study of Lorenz's 28-variable model as a dynamical system
NASA Technical Reports Server (NTRS)
Krishnamurthy, V.
1993-01-01
The dynamics of error growth in a two-layer nonlinear quasi-geostrophic model has been studied to gain an understanding of the mathematical theory of atmospheric predictability. The growth of random errors of varying initial magnitudes has been studied, and the relation between this classical approach and the concepts of the nonlinear dynamical systems theory has been explored. The local and global growths of random errors have been expressed partly in terms of the properties of an error ellipsoid and the Liapunov exponents determined by linear error dynamics. The local growth of small errors is initially governed by several modes of the evolving error ellipsoid but soon becomes dominated by the longest axis. The average global growth of small errors is exponential with a growth rate consistent with the largest Liapunov exponent. The duration of the exponential growth phase depends on the initial magnitude of the errors. The subsequent large errors undergo a nonlinear growth with a steadily decreasing growth rate and attain saturation that defines the limit of predictability. The degree of chaos and the largest Liapunov exponent show considerable variation with change in the forcing, which implies that the time variation in the external forcing can introduce variable character to the predictability.
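A compact twin experiment illustrating the error-growth diagnostics described above; as an assumption, the familiar three-variable Lorenz system stands in for the paper's 28-variable two-layer quasi-geostrophic model.

```python
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def integrate(state, dt, n):
    """Fourth-order Runge-Kutta integration for n steps."""
    for _ in range(n):
        k1 = lorenz63(state)
        k2 = lorenz63(state + 0.5 * dt * k1)
        k3 = lorenz63(state + 0.5 * dt * k2)
        k4 = lorenz63(state + dt * k3)
        state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return state

rng = np.random.default_rng(9)
dt, steps = 0.01, 2000
a = integrate(np.array([1.0, 1.0, 1.0]), dt, 1000)  # spin up onto the attractor
b = a + 1e-8 * rng.normal(size=3)                   # perturbed twin

errors = []
for _ in range(steps):
    a, b = integrate(a, dt, 1), integrate(b, dt, 1)
    errors.append(np.linalg.norm(a - b))

# The slope of log-error during the early, exponential phase approximates the
# largest Liapunov exponent; the growth later saturates at the attractor size.
early = np.log(np.array(errors[:800]))
print(np.polyfit(dt * np.arange(800), early, 1)[0])  # ~0.9 for these parameters
```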
Zumsteg, Zachary; DeMarco, John; Lee, Steve P; Steinberg, Michael L; Lin, Chun Shu; McBride, William; Lin, Kevin; Wang, Pin-Chieh; Kupelian, Patrick; Lee, Percy
2012-06-01
On-board cone-beam computed tomography (CBCT) is currently available for alignment of patients with head-and-neck cancer before radiotherapy. However, daily CBCT is time intensive and increases the overall radiation dose. We assessed the feasibility of using the average couch shifts from the first several CBCTs to estimate and correct for the presumed systematic setup error. 56 patients with head-and-neck cancer who received daily CBCT before intensity-modulated radiation therapy had recorded shift values in the medial-lateral, superior-inferior, and anterior-posterior dimensions. The average displacements in each direction were calculated for each patient based on the first five or 10 CBCT shifts and were presumed to represent the systematic setup error. The residual error after this correction was determined by subtracting the calculated shifts from the shifts obtained using daily CBCT. The magnitude of the average daily residual three-dimensional (3D) error was 4.8 ± 1.4 mm, 3.9 ± 1.3 mm, and 3.7 ± 1.1 mm for uncorrected, five CBCT corrected, and 10 CBCT corrected protocols, respectively. With no image guidance, 40.8% of fractions would have been >5 mm off target. Using the first five CBCT shifts to correct subsequent fractions, this percentage decreased to 19.0% of all fractions delivered and decreased the percentage of patients with average daily 3D errors >5 mm from 35.7% to 14.3% vs. no image guidance. Using an average of the first 10 CBCT shifts did not significantly improve this outcome. Using the first five CBCT shift measurements as an estimation of the systematic setup error improves daily setup accuracy for a subset of patients with head-and-neck cancer receiving intensity-modulated radiation therapy and primarily benefited those with large 3D correction vectors (>5 mm). Daily CBCT is still necessary until methods are developed that more accurately determine which patients may benefit from alternative imaging strategies. Copyright © 2012 Elsevier Inc. All rights reserved.
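A minimal sketch of the correction protocol, with simulated shift data in place of the patient records: the presumed systematic setup error is the mean of the first few daily CBCT shifts, and the residual 3-D error is what remains after subtracting it.

```python
import numpy as np

def residual_3d_errors(daily_shifts, n_calibration=5):
    """Estimate the systematic setup error as the mean of the first
    n_calibration daily CBCT shifts, then return the residual 3-D error
    magnitude for every fraction after applying that correction."""
    daily_shifts = np.asarray(daily_shifts)   # shape (n_fractions, 3): ML, SI, AP in mm
    systematic = daily_shifts[:n_calibration].mean(axis=0)
    residual = daily_shifts - systematic
    return np.linalg.norm(residual, axis=1)

rng = np.random.default_rng(3)
# Toy patient: a 2/-1.5/1 mm systematic offset plus 2 mm random day-to-day setup noise.
shifts = rng.normal(loc=[2.0, -1.5, 1.0], scale=2.0, size=(30, 3))

err5 = residual_3d_errors(shifts, n_calibration=5)
print(err5.mean(), (err5 > 5.0).mean())   # mean residual, fraction of days >5 mm off
```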
Vectorized program architectures for supercomputer-aided circuit design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rizzoli, V.; Ferlito, M.; Neri, A.
1986-01-01
Vector processors (supercomputers) can be effectively employed in MIC or MMIC applications to solve problems of large numerical size such as broad-band nonlinear design or statistical design (yield optimization). In order to fully exploit the capabilities of a vector hardware, any program architecture must be structured accordingly. This paper presents a possible approach to the "semantic" vectorization of microwave circuit design software. Speed-up factors of the order of 50 can be obtained on a typical vector processor (Cray X-MP), with respect to the most powerful scalar computers (CDC 7600), with cost reductions of more than one order of magnitude. This could broaden the horizon of microwave CAD techniques to include problems that are practically out of the reach of conventional systems.
Demonstrating the Direction of Angular Velocity in Circular Motion
NASA Astrophysics Data System (ADS)
Demircioglu, Salih; Yurumezoglu, Kemal; Isik, Hakan
2015-09-01
Rotational motion is ubiquitous in nature, from astronomical systems to household devices in everyday life to elementary models of atoms. Unlike the tangential velocity vector that represents the instantaneous linear velocity (magnitude and direction), an angular velocity vector is conceptually more challenging for students to grasp. In physics classrooms, the direction of an angular velocity vector is taught by the right-hand rule, a mnemonic tool intended to aid memory. A setup constructed for instructional purposes may provide students with a more easily understood and concrete method to observe the direction of the angular velocity. This article attempts to demonstrate the angular velocity vector using the observable motion of a screw mounted to a remotely operated toy car.
Satellite to study earth's magnetic field
NASA Technical Reports Server (NTRS)
1979-01-01
The Magnetic Field Satellite (Magsat) designed to measure the near earth magnetic field and crustal anomalies is briefly described. A scalar magnetometer to measure the magnitude of the earth's crustal magnetic field and a vector magnetometer to measure magnetic field direction as well as magnitude are included. The mission and its objectives are summarized along with the data collection and processing system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, John
This project aims to understand the characteristics of the free-field strong-motion records that have yielded the 100 largest peak accelerations and the 100 largest peak velocities recorded to date. The peak is defined as the maximum magnitude of the acceleration or velocity vector during the strong shaking. This compilation includes 35 records with peak acceleration greater than gravity, and 41 records with peak velocities greater than 100 cm/s. The results represent an estimated 150,000 instrument-years of strong-motion recordings. The mean horizontal acceleration or velocity, as used for the NGA ground motion models, is typically 0.76 times the magnitude of this vector peak. Accelerations in the top 100 come from earthquakes as small as magnitude 5, while velocities in the top 100 all come from earthquakes with magnitude 6 or larger. Records are dominated by crustal earthquakes with thrust, oblique-thrust, or strike-slip mechanisms. Normal faulting mechanisms in crustal earthquakes constitute under 5% of the records in the databases searched, and an even smaller percentage of the exceptional records. All NEHRP site categories have contributed exceptional records, in proportions similar to the extent that they are represented in the larger database.
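A short sketch of the peak definition used above, on a synthetic three-component record (the 0.76 quoted in the abstract is an empirical average over real records, and the geometric-mean horizontal measure below is only one common convention, not necessarily the exact NGA definition):

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0, 20, 2001)
# Toy three-component accelerogram (arbitrary units), not a real record:
# Gaussian-enveloped noise peaking around t = 8 s.
env = np.exp(-((t - 8) / 3) ** 2)
ax, ay, az = (env * rng.normal(size=t.size) for _ in range(3))

# Peak as defined in the abstract: maximum magnitude of the 3-component vector.
vector_peak = np.max(np.sqrt(ax**2 + ay**2 + az**2))

# One common "mean horizontal" convention: geometric mean of component peaks.
mean_horizontal = np.sqrt(np.max(np.abs(ax)) * np.max(np.abs(ay)))

print(vector_peak, mean_horizontal, mean_horizontal / vector_peak)
```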
An efficient calibration method for SQUID measurement system using three orthogonal Helmholtz coils
NASA Astrophysics Data System (ADS)
Hua, Li; Shu-Lin, Zhang; Chao-Xiang, Zhang; Xiang-Yan, Kong; Xiao-Ming, Xie
2016-06-01
For a practical superconducting quantum interference device (SQUID) based measurement system, the Tesla/volt coefficient must be accurately calibrated. In this paper, we propose a highly efficient method of calibrating a SQUID magnetometer system using three orthogonal Helmholtz coils. The Tesla/volt coefficient is regarded as the magnitude of a vector pointing in the normal direction of the pickup coil. By applying magnetic fields through a three-dimensional Helmholtz coil, the Tesla/volt coefficient can be directly calculated from the magnetometer responses to the three orthogonally applied magnetic fields. Calibration with an alternating current (AC) field is normally used for better signal-to-noise ratio in noisy urban environments, and the results are compared with the direct current (DC) calibration to avoid possible effects due to eddy currents. In our experiment, a calibration relative error of about 6.89 × 10^-4 is obtained, and the error is mainly caused by the non-orthogonality of the three axes of the Helmholtz coils. The method does not need precise alignment of the magnetometer inside the Helmholtz coil. It can be used for multichannel magnetometer system calibration effectively and accurately. Project supported by the “Strategic Priority Research Program (B)” of the Chinese Academy of Sciences (Grant No. XDB04020200) and the Shanghai Municipal Science and Technology Commission Project, China (Grant No. 15DZ1940902).
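A sketch of the calibration arithmetic under the stated geometry, with made-up numbers: each orthogonal applied field measures one Cartesian component of the sensitivity vector along the pickup-coil normal, so the Tesla/volt coefficient follows from the vector norm of the three responses, without aligning the magnetometer.

```python
import numpy as np

def tesla_per_volt(applied_field, v_responses):
    """Tesla/volt coefficient from magnetometer responses to fields of equal
    magnitude applied along the three orthogonal Helmholtz axes.

    The sensitivity vector points along the pickup-coil normal; each axis
    measures one Cartesian component of it, so the coefficient follows from
    the norm and no precise alignment of the magnetometer is needed."""
    return applied_field / np.linalg.norm(np.asarray(v_responses))

# Toy numbers: 1 uT applied; pickup-coil normal tilted arbitrarily.
normal = np.array([0.2, 0.3, 0.933])
normal /= np.linalg.norm(normal)
true_coeff = 0.5e-9                        # 0.5 nT per volt, hypothetical
responses = 1e-6 * normal / true_coeff     # simulated per-axis voltages
print(tesla_per_volt(1e-6, responses))     # recovers ~0.5e-9 T/V
```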
A Fully Integrated Sensor SoC with Digital Calibration Hardware and Wireless Transceiver at 2.4 GHz
Kim, Dong-Sun; Jang, Sung-Joon; Hwang, Tae-Ho
2013-01-01
A single-chip sensor system-on-a-chip (SoC) that implements radio for 2.4 GHz, complete digital baseband physical layer (PHY), 10-bit sigma-delta analog-to-digital converter and dedicated sensor calibration hardware for industrial sensing systems has been proposed and integrated in a 0.18-μm CMOS technology. The transceiver's building block includes a low-noise amplifier, mixer, channel filter, receiver signal-strength indicator, frequency synthesizer, voltage-controlled oscillator, and power amplifier. In addition, the digital building block consists of offset quadrature phase-shift keying (OQPSK) modulation, demodulation, carrier frequency offset compensation, auto-gain control, digital MAC function, sensor calibration hardware and embedded 8-bit microcontroller. The digital MAC function supports cyclic redundancy check (CRC), inter-symbol timing check, MAC frame control, and automatic retransmission. The embedded sensor signal processing block consists of calibration coefficient calculator, sensing data calibration mapper and sigma-delta analog-to-digital converter with digital decimation filter. The sensitivity of the overall receiver and the error vector magnitude (EVM) of the overall transmitter are −99 dBm and 18.14%, respectively. The proposed calibration scheme has a reduction of errors by about 45.4% compared with the improved progressive polynomial calibration (PPC) method and the maximum current consumption of the SoC is 16 mA. PMID:23698271
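For reference, a minimal computation of the RMS error vector magnitude figure quoted above, on toy symbols; note that EVM normalization conventions vary (RMS vs. peak reference), so this is one common definition rather than the chip's exact measurement procedure.

```python
import numpy as np

def evm_percent(received, reference):
    """RMS error vector magnitude, in percent, relative to the RMS
    magnitude of the reference (ideal) constellation symbols."""
    received = np.asarray(received, dtype=complex)
    reference = np.asarray(reference, dtype=complex)
    err_rms = np.sqrt(np.mean(np.abs(received - reference) ** 2))
    ref_rms = np.sqrt(np.mean(np.abs(reference) ** 2))
    return 100.0 * err_rms / ref_rms

# Toy QPSK-like symbols with additive noise:
rng = np.random.default_rng(5)
ideal = (rng.choice([-1, 1], 1000) + 1j * rng.choice([-1, 1], 1000)) / np.sqrt(2)
noisy = ideal + rng.normal(0, 0.1, 1000) + 1j * rng.normal(0, 0.1, 1000)
print(evm_percent(noisy, ideal))   # ~14% for this noise level
```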
Object detection in natural backgrounds predicted by discrimination performance and models
NASA Technical Reports Server (NTRS)
Rohaly, A. M.; Ahumada, A. J. Jr; Watson, A. B.
1997-01-01
Many models of visual performance predict image discriminability, the visibility of the difference between a pair of images. We compared the ability of three image discrimination models to predict the detectability of objects embedded in natural backgrounds. The three models were: a multiple channel Cortex transform model with within-channel masking; a single channel contrast sensitivity filter model; and a digital image difference metric. Each model used a Minkowski distance metric (generalized vector magnitude) to summate absolute differences between the background and object plus background images. For each model, this summation was implemented with three different exponents: 2, 4 and infinity. In addition, each combination of model and summation exponent was implemented with and without a simple contrast gain factor. The model outputs were compared to measures of object detectability obtained from 19 observers. Among the models without the contrast gain factor, the multiple channel model with a summation exponent of 4 performed best, predicting the pattern of observer d's with an RMS error of 2.3 dB. The contrast gain factor improved the predictions of all three models for all three exponents. With the factor, the best exponent was 4 for all three models, and their prediction errors were near 1 dB. These results demonstrate that image discrimination models can predict the relative detectability of objects in natural scenes.
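A sketch of the Minkowski pooling step, using a mean-normalized variant of the summation (an assumption; the paper's exact normalization is not restated in the abstract) and a synthetic background/object pair:

```python
import numpy as np

def minkowski_pool(diff, beta):
    """Pool absolute differences with a Minkowski exponent beta (a generalized
    vector magnitude, here in a mean-normalized variant); beta = inf reduces
    to the maximum absolute difference."""
    d = np.abs(np.ravel(diff))
    if np.isinf(beta):
        return d.max()
    return np.mean(d ** beta) ** (1.0 / beta)

rng = np.random.default_rng(6)
background = rng.random((64, 64))
object_plus_background = background.copy()
object_plus_background[20:30, 20:30] += 0.2   # embedded object

for beta in (2, 4, np.inf):                   # the three exponents compared
    print(beta, minkowski_pool(object_plus_background - background, beta))
```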
Waldhauser, F.; Ellsworth, W.L.
2000-01-01
We have developed an efficient method to determine high-resolution hypocenter locations over large distances. The location method incorporates ordinary absolute travel-time measurements and/or cross-correlation P- and S-wave differential travel-time measurements. Residuals between observed and theoretical travel-time differences (or double-differences) are minimized for pairs of earthquakes at each station while linking together all observed event-station pairs. A least-squares solution is found by iteratively adjusting the vector difference between hypocentral pairs. The double-difference algorithm minimizes errors due to unmodeled velocity structure without the use of station corrections. Because catalog and cross-correlation data are combined into one system of equations, interevent distances within multiplets are determined to the accuracy of the cross-correlation data, while the relative locations between multiplets and uncorrelated events are simultaneously determined to the accuracy of the absolute travel-time data. Statistical resampling methods are used to estimate data accuracy and location errors. Uncertainties in double-difference locations are improved by more than an order of magnitude compared to catalog locations. The algorithm is tested, and its performance is demonstrated on two clusters of earthquakes located on the northern Hayward fault, California. There it collapses the diffuse catalog locations into sharp images of seismicity and reveals horizontal lineations of hypocenters that define the narrow regions on the fault where stress is released by brittle failure.
Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial
The model performance evaluation consists of metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit measures that capture magnitude-only, sequence-only, and combined magnitude-and-sequence errors.
Bishop, Peter J; Clemente, Christofer J; Hocknull, Scott A; Barrett, Rod S; Lloyd, David G
2017-03-01
Cancellous bone is very sensitive to its prevailing mechanical environment, and study of its architecture has previously aided interpretations of locomotor biomechanics in extinct animals or archaeological populations. However, quantification of architectural features may be compromised by poor preservation in fossil and archaeological specimens, such as post mortem cracking or fracturing. In this study, the effects of post mortem cracks on the quantification of cancellous bone fabric were investigated through the simulation of cracks in otherwise undamaged modern bone samples. The effect on both scalar (degree of fabric anisotropy, fabric elongation index) and vector (principal fabric directions) variables was assessed through comparing the results of architectural analyses of cracked vs. non-cracked samples. Error was found to decrease as the relative size of the crack decreased, and as the orientation of the crack approached the orientation of the primary fabric direction. However, even in the best-case scenario simulated, error remained substantial, with at least 18% of simulations showing a > 10% error when scalar variables were considered, and at least 6.7% of simulations showing a > 10° error when vector variables were considered. As a 10% (scalar) or 10° (vector) difference is probably too large for reliable interpretation of a fossil or archaeological specimen, these results suggest that cracks should be avoided if possible when analysing cancellous bone architecture in such specimens. © 2016 Anatomical Society.
CCD image sensor induced error in PIV applications
NASA Astrophysics Data System (ADS)
Legrand, M.; Nogueira, J.; Vargas, A. A.; Ventas, R.; Rodríguez-Hidalgo, M. C.
2014-06-01
The readout procedure of charge-coupled device (CCD) cameras is known to generate some image degradation in different scientific imaging fields, especially in astrophysics. In the particular field of particle image velocimetry (PIV), widely extended in the scientific community, the readout procedure of the interline CCD sensor induces a bias in the registered position of particle images. This work proposes simple procedures to predict the magnitude of the associated measurement error. Generally, there are differences in the position bias for the different images of a certain particle at each PIV frame. This leads to a substantial bias error in the PIV velocity measurement (~0.1 pixels). This is the order of magnitude that other typical PIV errors such as peak-locking may reach. Based on modern CCD technology and architecture, this work offers a description of the readout phenomenon and proposes a modeling for the CCD readout bias error magnitude. This bias, in turn, generates a velocity measurement bias error when there is an illumination difference between two successive PIV exposures. The model predictions match the experiments performed with two 12-bit-depth interline CCD cameras (MegaPlus ES 4.0/E incorporating the Kodak KAI-4000M CCD sensor with 4 megapixels). For different cameras, only two constant values are needed to fit the proposed calibration model and predict the error from the readout procedure. Tests by different researchers using different cameras would allow verification of the model, which can be used to optimize acquisition setups. Simple procedures to obtain these two calibration values are also described.
An efficient system for reliably transmitting image and video data over low bit rate noisy channels
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Huang, Y. F.; Stevenson, Robert L.
1994-01-01
This research project is intended to develop an efficient system for reliably transmitting image and video data over low bit rate noisy channels. The basic ideas behind the proposed approach are the following: employ statistical-based image modeling to facilitate pre- and post-processing and error detection, use spare redundancy that the source compression did not remove to add robustness, and implement coded modulation to improve bandwidth efficiency and noise rejection. Over the last six months, progress has been made on various aspects of the project. Through our studies of the integrated system, a list-based iterative Trellis decoder has been developed. The decoder accepts feedback from a post-processor which can detect channel errors in the reconstructed image. The error detection is based on the Huber Markov random field image model for the compressed image. The compression scheme used here is that of JPEG (Joint Photographic Experts Group). Experiments were performed and the results are quite encouraging. The principal ideas here are extendable to other compression techniques. In addition, research was also performed on unequal error protection channel coding, subband vector quantization as a means of source coding, and post processing for reducing coding artifacts. Our studies on unequal error protection (UEP) coding for image transmission focused on examining the properties of the UEP capabilities of convolutional codes. The investigation of subband vector quantization employed a wavelet transform with special emphasis on exploiting interband redundancy. The outcome of this investigation included the development of three algorithms for subband vector quantization. The reduction of transform coding artifacts was studied with the aid of a non-Gaussian Markov random field model. This results in improved image decompression. These studies are summarized and the technical papers included in the appendices.
Random vectors and spatial analysis by geostatistics for geotechnical applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, D.S.
1987-08-01
Geostatistics is extended to the spatial analysis of vector variables by defining the estimation variance and vector variogram in terms of the magnitude of difference vectors. Many random variables in geotechnology are vectorial rather than scalar, and structural analysis requires sample variable interpolations to construct and characterize structural models. A better local estimator results in higher-quality input models; geostatistics can provide such estimators: kriging estimators. The efficiency of geostatistics for vector variables is demonstrated in a case study of rock joint orientations in geological formations. The positive cross-validation encourages application of geostatistics to spatial analysis of random vectors in geoscience as well as various geotechnical fields including optimum site characterization, rock mechanics for mining and civil structures, cavability analysis of block cavings, petroleum engineering, and hydrologic and hydraulic modelings.
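A naive experimental version of the vector variogram defined above, on synthetic locations and orientation vectors (binning by a single lag with tolerance; the kriging step itself is not shown):

```python
import numpy as np

def vector_variogram(points, vectors, lag, tol):
    """Experimental variogram for a vector-valued variable, defined here via
    the squared magnitude of difference vectors between sample pairs whose
    separation distance falls within lag +/- tol."""
    points, vectors = np.asarray(points), np.asarray(vectors)
    sq = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            h = np.linalg.norm(points[i] - points[j])
            if abs(h - lag) <= tol:
                sq.append(np.sum((vectors[i] - vectors[j]) ** 2))
    return 0.5 * np.mean(sq) if sq else np.nan

rng = np.random.default_rng(7)
pts = rng.random((200, 2)) * 100.0    # sample locations (m)
vecs = rng.normal(size=(200, 3))      # e.g., rock joint orientation vectors

for lag in (10, 20, 40):
    print(lag, vector_variogram(pts, vecs, lag, tol=5.0))
```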
The geographical vector in distribution of genetic diversity for Clonorchis sinensis.
Solodovnik, Daria A; Tatonova, Yulia V; Burkovskaya, Polina V
2018-01-01
Clonorchis sinensis, the causative agent of clonorchiasis, is one of the most important parasites inhabiting the countries of East and Southeast Asia. In this study, we validated the existence of a geographical vector for C. sinensis using the partial cox1 mtDNA gene, which includes a conserved region. The parasite samples were divided into groups corresponding to three river basins, and the size of the conserved region showed a strong tendency to increase from the northernmost to the southernmost samples. This indicates the existence of a geographical vector in the distribution of genetic diversity. A vector is a quantity characterized by magnitude and direction; the geographical vector obtained in the cox1 gene of C. sinensis has both of these features. The reasons for the occurrence of this feature, including the influence of intermediate and definitive hosts on vector formation, and the possibility of its use for clonorchiasis monitoring are discussed.
ERRATUM: "Reliability of the Detection of the Baryon Acoustic Peak" (2009, ApJ, 696, L93)
NASA Astrophysics Data System (ADS)
Martínez, Vicent J.; Arnalte-Mur, Pablo; Saar, Enn; de la Cruz, Pablo; Jesús Pons-Bordería, María; Paredes, Silvestre; Fernández-Soto, Alberto; Tempel, Elmo
2009-10-01
Due to an error in applying the passive evolution to transform Mg (z = 0) magnitudes to Mg (z = 0.3), the values of the magnitude limits for the samples DR7-LRG and DR7-LRG-VL quoted in Table 1 were not correct. The corrected Table 1 is appended below. Note that although the redshift limits of the sample DR7-LRG are the same as in Eisenstein et al. (2005), the magnitude limits are therefore slightly shifted (see Table 1). Once this fact is considered, figures and results are completely unaffected. We are very grateful to Eyal Kazin for pointing out the error.
NASA Astrophysics Data System (ADS)
Odinokov, S. B.; Petrov, A. V.
1995-10-01
Mathematical models of the components of a vector-matrix optoelectronic multiplier are considered. Perturbing factors influencing a real optoelectronic system (noise and errors of the radiation sources and detectors, nonlinearity of the analogue-to-digital converter, and nonideal optical systems) are taken into account. Analytic expressions are obtained relating the precision of such a multiplier to the probability of a one-bit error, to the parameters describing the quality of the multiplier components, and to the quality of the optical system of the processor. Various methods of increasing the dynamic range of the multiplier are considered at the technical systems level.
Study on the precision of the guide control system of independent wheel
NASA Astrophysics Data System (ADS)
ji, Y.; Ren, L.; Li, R.; Sun, W.
2016-09-01
The torque ripple of a permanent magnet synchronous motor under active vector control is studied in this paper. The ripple arises from errors in position and current detection, errors generated in the inverter, and the characteristics of the motor itself (flux-linkage harmonics, the cogging effect, and so on). A simulation model of a bogie with a permanent magnet synchronous motor vector control system is then established in MATLAB/Simulink. The stability of the bogie under steering control is studied, and the relationship between the motor errors and the precision of the control system is examined. The results show that the existing motor does not meet the requirements of the control system.
Early error detection: an action-research experience teaching vector calculus
NASA Astrophysics Data System (ADS)
Magdalena Añino, María; Merino, Gabriela; Miyara, Alberto; Perassi, Marisol; Ravera, Emiliano; Pita, Gustavo; Waigandt, Diana
2014-04-01
This paper describes an action-research experience carried out with second year students at the School of Engineering of the National University of Entre Ríos, Argentina. Vector calculus students played an active role in their own learning process. They were required to present weekly reports, in both oral and written forms, on the topics studied, instead of merely sitting and watching as the teacher solved problems on the blackboard. The students were also asked to perform computer assignments, and their learning process was continuously monitored. Among many benefits, this methodology has allowed students and teachers to identify errors and misconceptions that might have gone unnoticed under a more passive approach.
A fingerprint key binding algorithm based on vector quantization and error correction
NASA Astrophysics Data System (ADS)
Li, Liang; Wang, Qian; Lv, Ke; He, Ning
2012-04-01
In recent years, research on seamlessly combining cryptosystems with biometric technologies, e.g. fingerprint recognition, has been conducted by many researchers. In this paper, we propose an algorithm for binding a fingerprint template with a cryptographic key, so that the key is protected and can be accessed through fingerprint verification. To cope with the intrinsic fuzziness of fingerprint variation, vector quantization and error correction techniques are introduced to transform the fingerprint template before binding it with the key, after fingerprint registration and extraction of the global ridge pattern. The key itself is secure because only its hash value is stored, and it is released only when fingerprint verification succeeds. Experimental results demonstrate the effectiveness of our ideas.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yock, A; UT Graduate School of Biomedical Sciences, Houston, TX; Rao, A
2014-06-15
Purpose: To generate, evaluate, and compare models that predict longitudinal changes in tumor morphology throughout the course of radiation therapy. Methods: Two morphology feature vectors were used to describe the size, shape, and position of 35 oropharyngeal GTVs at each treatment fraction during intensity-modulated radiation therapy. The feature vectors comprised the coordinates of the GTV centroids and one of two shape descriptors. One shape descriptor was based on radial distances between the GTV centroid and 614 GTV surface landmarks. The other was based on a spherical harmonic decomposition of these distances. Feature vectors over the course of therapy were described using static, linear, and mean models. The error of these models in forecasting GTV morphology was evaluated with leave-one-out cross-validation, and their accuracy was compared using Wilcoxon signed-rank tests. The effect of adjusting model parameters at 1, 2, 3, or 5 time points (adjustment points) was also evaluated. Results: The addition of a single adjustment point to the static model decreased the median error in forecasting the position of GTV surface landmarks by 1.2 mm (p<0.001). Additional adjustment points further decreased forecast error by about 0.4 mm each. The linear model decreased forecast error compared to the static model for feature vectors based on both shape descriptors (0.2 mm), while the mean model did so only for those based on the inter-landmark distances (0.2 mm). The decrease in forecast error due to adding adjustment points was greater than that due to model selection. Both effects diminished with subsequent adjustment points. Conclusion: Models of tumor morphology that include information from prior patients and/or prior treatment fractions are able to predict the tumor surface at each treatment fraction during radiation therapy. The predicted tumor morphology can be compared with patient anatomy or dose distributions, opening the possibility of anticipatory re-planning. American Legion Auxiliary Fellowship; The University of Texas Graduate School of Biomedical Sciences at Houston.
Long-term follow-up of astigmatic keratotomy for corneal astigmatism after penetrating keratoplasty.
Böhringer, Daniel; Dineva, Nina; Maier, Philip; Birnbaum, Florian; Kirschkamp, Thomas; Reinhard, Thomas; Eberwein, Philipp
2016-11-01
To report the long-term stability of paired arcuate corneal keratotomies (AKs) in patients with high regular post-penetrating keratoplasty astigmatism. Retrospective chart review of best-corrected visual acuity, refraction and keratometric values of 41 eyes with AK between 2003 and 2012. The magnitude of the median target induced astigmatism vector was 9.2 dioptres (Dpt). We reached a median magnitude of the surgically induced astigmatism vector of 9.81 Dpt and a median magnitude of the difference vector of 5.5 Dpt. In keratometry, we achieved a net median astigmatism reduction of 3.3 Dpt. The average correction index was 1.14, showing a slight overcorrection. Irregularity of keratometric astigmatism increased by 0.6 Dpt, and spherical equivalent changed by 1.75 Dpt. Monocular best spectacle corrected visual acuity increased from 20/63 (0.5 logMAR) preoperatively to 20/40 (0.3 logMAR) postoperatively. Median gain on the ETDRS chart was two lines. Long-term follow-up showed a median keratometric astigmatic increase of 0.3 Dpt per year. Arcuate corneal keratotomy is a safe and effective method to reduce high regular corneal astigmatism following penetrating keratoplasty but has limited predictability. The long-term follow-up shows an increase of keratometric astigmatism by 0.3 Dpt/year, equalizing the surgical effect after 10 years. © 2016 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
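A sketch of the vector arithmetic behind the reported TIA, SIA, and difference-vector magnitudes, using the standard double-angle representation of astigmatism and hypothetical example values rather than the study data:

```python
import numpy as np

def double_angle(magnitude, axis_deg):
    """Map an astigmatism (magnitude, axis) to double-angle vector form so
    that astigmatisms add and subtract like ordinary vectors."""
    a = np.deg2rad(2.0 * axis_deg)
    return magnitude * np.array([np.cos(a), np.sin(a)])

preop  = double_angle(9.2, 170.0)   # hypothetical pre-op astigmatism (Dpt, axis)
target = double_angle(0.0, 0.0)     # aim for zero astigmatism
postop = double_angle(1.8, 35.0)    # hypothetical achieved result

tia = target - preop                # target induced astigmatism vector
sia = postop - preop                # surgically induced astigmatism vector
dv  = target - postop               # difference vector

print(np.linalg.norm(tia), np.linalg.norm(sia), np.linalg.norm(dv))
# Correction index as the ratio of SIA to TIA magnitudes
# (values above 1 indicate overcorrection):
print(np.linalg.norm(sia) / np.linalg.norm(tia))
```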
Methods for estimating magnitude and frequency of peak flows for natural streams in Utah
Kenney, Terry A.; Wilkowske, Chris D.; Wright, Shane J.
2007-01-01
Estimates of the magnitude and frequency of peak streamflows are critical for the safe and cost-effective design of hydraulic structures and stream crossings, and for accurate delineation of flood plains. Engineers, planners, resource managers, and scientists need accurate estimates of peak-flow return frequencies for locations on streams with and without streamflow-gaging stations. The 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence-interval flows were estimated for 344 unregulated U.S. Geological Survey streamflow-gaging stations in Utah and nearby in bordering states. These data, along with 23 basin and climatic characteristics computed for each station, were used to develop regional peak-flow frequency and magnitude regression equations for 7 geohydrologic regions of Utah. These regression equations can be used to estimate the magnitude and frequency of peak flows for natural streams in Utah within the presented range of predictor variables. Uncertainty, presented as the average standard error of prediction, was computed for each developed equation. Equations developed using data from more than 35 gaging stations had standard errors of prediction that ranged from 35 to 108 percent, and errors for equations developed using data from fewer than 35 gaging stations ranged from 50 to 357 percent.
Attitude determination using vector observations: A fast optimal matrix algorithm
NASA Technical Reports Server (NTRS)
Markley, F. Landis
1993-01-01
The attitude matrix minimizing Wahba's loss function is computed directly by a method that is competitive with the fastest known algorithm for finding this optimal estimate. The method also provides an estimate of the attitude error covariance matrix. Analysis of the special case of two vector observations identifies those cases for which the TRIAD or algebraic method minimizes Wahba's loss function.
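The abstract does not spell out the algorithm, but a standard direct solution of Wahba's problem is via the SVD of the attitude profile matrix; a minimal sketch, including the two-observation case mentioned above:

```python
import numpy as np

def wahba_svd(body_vecs, ref_vecs, weights):
    """Attitude matrix minimizing Wahba's loss
    L(A) = 0.5 * sum_i a_i * ||b_i - A r_i||^2, via the SVD method."""
    B = sum(a * np.outer(b, r) for a, b, r in zip(weights, body_vecs, ref_vecs))
    U, _, Vt = np.linalg.svd(B)
    d = np.linalg.det(U) * np.linalg.det(Vt)   # enforce a proper rotation
    return U @ np.diag([1.0, 1.0, d]) @ Vt

def rot(axis, angle):
    """Ground-truth rotation from axis-angle via the Rodrigues formula."""
    axis = np.asarray(axis, float)
    axis /= np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * K @ K

# Two-observation test (the TRIAD special case discussed in the abstract):
A_true = rot([1, 2, 3], 0.7)
r1, r2 = np.array([1.0, 0, 0]), np.array([0, 1.0, 0])
b1, b2 = A_true @ r1, A_true @ r2
A_est = wahba_svd([b1, b2], [r1, r2], [1.0, 1.0])
print(np.allclose(A_est, A_true, atol=1e-10))
```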
Maaoui-Ben Hassine, Ikram; Naouar, Mohamed Wissem; Mrabet-Bellaaj, Najiba
2016-05-01
In this paper, Model Predictive Control (MPC) and dead-beat predictive control strategies are proposed for the control of a PMSG-based wind energy system. The proposed MPC considers the model of the converter-based system to forecast the possible future behavior of the controlled variables. It selects the voltage vector to be applied as the one that leads to minimum error, by minimizing a predefined cost function. The main features of the MPC are low current THD and robustness against parameter variations. The dead-beat predictive control is based on the system model to compute the optimum voltage vector that ensures zero steady-state error. The optimum voltage vector is then applied through the Space Vector Modulation (SVM) technique. The main advantages of the dead-beat predictive control are low current THD and constant switching frequency. The proposed control techniques are presented and detailed for the control of the back-to-back converter in a wind turbine system based on a PMSG. Simulation results (in the Matlab-Simulink environment) and experimental results (on a developed prototyping platform) are presented in order to show the performance of the considered control strategies. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
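A minimal finite-control-set MPC sketch in the spirit of the description above, for a two-level inverter and an RL machine model (VDC, R, L, and TS below are hypothetical placeholder values, not parameters from the paper): predict one step ahead for each of the eight switching vectors and apply the one minimizing a squared current-error cost.

```python
import numpy as np

VDC, R, L, TS = 500.0, 0.5, 10e-3, 50e-6   # hypothetical converter/machine values

def inverter_voltage(sw):
    """Alpha-beta voltage (as a complex number) of switching state sw = (Sa, Sb, Sc)."""
    a = np.exp(2j * np.pi / 3)
    return (2.0 / 3.0) * VDC * (sw[0] + a * sw[1] + a**2 * sw[2])

STATES = [(sa, sb, sc) for sa in (0, 1) for sb in (0, 1) for sc in (0, 1)]

def predict(i_now, v, e_back):
    """One-step Euler prediction of the alpha-beta current for an RL load
    with back-EMF e_back: L di/dt = v - R i - e."""
    return i_now + (TS / L) * (v - R * i_now - e_back)

def best_vector(i_now, i_ref, e_back):
    """Select the switching state minimizing the predicted squared current error."""
    costs = [abs(i_ref - predict(i_now, inverter_voltage(s), e_back)) ** 2
             for s in STATES]
    return STATES[int(np.argmin(costs))]

print(best_vector(0 + 0j, 10 + 0j, 100 + 0j))   # e.g. (1, 0, 0) for this operating point
```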
Implementation of a Personal Computer Based Parameter Estimation Program
1992-03-01
[Report documentation page and nomenclature list garbled by OCR; recoverable symbols: L, M, N are the X, Y, Z moment components; u, v, w are the X, Y, Z velocity components; V is the vector velocity and |V| the magnitude of the velocity vector; w_g is the Z velocity due to gust; the x-distance to the normal accelerometer is also listed.]
Determining on-fault earthquake magnitude distributions from integer programming
Geist, Eric L.; Parsons, Thomas E.
2018-01-01
Earthquake magnitude distributions among faults within a fault system are determined from regional seismicity and fault slip rates using binary integer programming. A synthetic earthquake catalog (i.e., list of randomly sampled magnitudes) that spans millennia is first formed, assuming that regional seismicity follows a Gutenberg-Richter relation. Each earthquake in the synthetic catalog can occur on any fault and at any location. The objective is to minimize misfits in the target slip rate for each fault, where slip for each earthquake is scaled from its magnitude. The decision vector consists of binary variables indicating which locations are optimal among all possibilities. Uncertainty estimates in fault slip rates provide explicit upper and lower bounding constraints to the problem. An implicit constraint is that an earthquake can only be located on a fault if it is long enough to contain that earthquake. A general mixed-integer programming solver, consisting of a number of different algorithms, is used to determine the optimal decision vector. A case study is presented for the State of California, where a 4 kyr synthetic earthquake catalog is created and faults with slip ≥3 mm/yr are considered, resulting in >106 variables. The optimal magnitude distributions for each of the faults in the system span a rich diversity of shapes, ranging from characteristic to power-law distributions.
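A sketch of the first step, forming the synthetic catalog by inverse-transform sampling of a doubly truncated Gutenberg-Richter relation (the parameter values below are placeholders; the integer-programming assignment of events to faults is not shown):

```python
import numpy as np

def sample_gr_magnitudes(n, b=1.0, m_min=6.0, m_max=8.0, seed=0):
    """Draw magnitudes from a doubly truncated Gutenberg-Richter relation
    by inverse-transform sampling of its exponential form."""
    rng = np.random.default_rng(seed)
    u = rng.random(n)
    beta = b * np.log(10.0)
    c = 1.0 - np.exp(-beta * (m_max - m_min))   # truncation normalization
    return m_min - np.log(1.0 - u * c) / beta

mags = sample_gr_magnitudes(5000)
print(mags.min(), mags.max(), np.mean(mags >= 7.0))   # ~9% of events at M >= 7
```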
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mauk, F.J.; Christensen, D.H.
1980-09-01
Probabilistic estimations of earthquake detection and location capabilities for the states of Illinois, Indiana, Kentucky, Ohio, and West Virginia are presented in this document. The algorithm used in these epicentrality and minimum-magnitude estimations is a version of the program NETWORTH by Wirth, Blandford, and Husted (DARPA Order No. 2551, 1978), which was modified for local array evaluation at the University of Michigan Seismological Observatory. Estimations of earthquake detection capability for the years 1970 and 1980 are presented in four regional minimum m_b magnitude contour maps. Regional 90% confidence error ellipsoids are included for m_b magnitude events from 2.0 through 5.0 at 0.5 m_b unit increments. The close agreement between these predicted epicentral 90% confidence estimates and the calculated error ellipses associated with actual earthquakes within the studied region suggests that these error determinations can be used to estimate the reliability of epicenter location. 8 refs., 14 figs., 2 tabs.
Systematic error of the Gaia DR1 TGAS parallaxes from data for the red giant clump
NASA Astrophysics Data System (ADS)
Gontcharov, G. A.
2017-08-01
Based on the Gaia DR1 TGAS parallaxes and photometry from the Tycho-2, Gaia, 2MASS, and WISE catalogues, we have produced a sample of 100 000 clump red giants within 800 pc of the Sun. The systematic variations of the mode of their absolute magnitude as a function of the distance, magnitude, and other parameters have been analyzed. We show that these variations reach 0.7 mag and cannot be explained by variations in the interstellar extinction or intrinsic properties of the stars, or by selection. The only explanation seems to be a systematic error of the Gaia DR1 TGAS parallax that depends on the square of the observed distance in kpc: 0.18R^2 mas. Allowance for this error significantly reduces the systematic dependences of the absolute magnitude mode on all parameters. This error reaches 0.1 mas within 800 pc of the Sun and allows an upper limit for the accuracy of the TGAS parallaxes to be estimated as 0.2 mas. A careful allowance for such errors is needed to use clump red giants as "standard candles." This eliminates all discrepancies between the theoretical and empirical estimates of the characteristics of these stars and allows us to obtain the first estimates of the modes of their absolute magnitudes from the Gaia parallaxes: mode(M_H) = -1.49^m ± 0.04^m, mode(M_Ks) = -1.63^m ± 0.03^m, mode(M_W1) = -1.67^m ± 0.05^m, mode(M_W2) = -1.67^m ± 0.05^m, mode(M_W3) = -1.66^m ± 0.02^m, mode(M_W4) = -1.73^m ± 0.03^m, as well as the corresponding estimates of their de-reddened colors.
Method and apparatus for in-situ detection and isolation of aircraft engine faults
NASA Technical Reports Server (NTRS)
Bonanni, Pierino Gianni (Inventor); Brunell, Brent Jerome (Inventor)
2007-01-01
A method for performing a fault estimation based on residuals of detected signals includes determining an operating regime based on a plurality of parameters; extracting predetermined noise standard deviations of the residuals corresponding to the operating regime and scaling the residuals; calculating the magnitude of a measurement vector of the scaled residuals and comparing the magnitude to a decision threshold value; extracting an average (mean) direction and a fault level mapping for each of a plurality of fault types, based on the operating regime; calculating the projection of the measurement vector onto the average direction of each of the plurality of fault types; determining a fault type based on which projection is maximum; and mapping the projection to a continuous-valued fault level using a lookup table.
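A compact sketch of the claimed pipeline with hypothetical fault types, directions, and thresholds (none of these values come from the patent): scale the residuals, gate on the measurement-vector magnitude, and classify by maximum projection onto the per-fault mean directions.

```python
import numpy as np

def estimate_fault(residuals, noise_std, fault_dirs, threshold):
    """Scale residuals by regime-specific noise, test the magnitude against a
    decision threshold, then pick the fault type whose mean direction has the
    largest projection; the projection would map to a fault level via a
    lookup table in the full method."""
    z = np.asarray(residuals) / np.asarray(noise_std)   # scaled residual vector
    if np.linalg.norm(z) < threshold:
        return None, 0.0                                # no fault declared
    projections = {name: float(z @ d) for name, d in fault_dirs.items()}
    fault = max(projections, key=projections.get)
    return fault, projections[fault]

dirs = {  # hypothetical unit mean directions for two fault types
    "fan_speed": np.array([0.8, 0.6, 0.0]),
    "turbine_temp": np.array([0.0, 0.6, 0.8]),
}
print(estimate_fault([0.9, 0.7, 0.1], [0.1, 0.1, 0.1], dirs, threshold=3.0))
```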
Efficient boundary hunting via vector quantization
NASA Astrophysics Data System (ADS)
Diamantini, Claudia; Panti, Maurizio
2001-03-01
A great amount of information about a classification problem is contained in those instances falling near the decision boundary. This intuition dates back to the earliest studies in pattern recognition, and to more recent adaptive approaches to so-called boundary hunting, such as the work of Aha et al. on Instance Based Learning and the work of Vapnik et al. on Support Vector Machines. The latter work is of particular interest, since theoretical and experimental results ensure the accuracy of boundary reconstruction. However, its optimization approach has heavy computational and memory requirements, which limit its application to huge amounts of data. In this paper we describe an alternative approach to boundary hunting based on adaptive labeled quantization architectures. The adaptation is performed by a stochastic gradient algorithm for the minimization of the error probability. Error probability minimization guarantees an accurate approximation of the optimal decision boundary, while the use of a stochastic gradient algorithm provides an efficient method to reach such an approximation. In the paper, comparisons to Support Vector Machines are considered.
Development of a two-dimensional dual pendulum thrust stand for Hall thrusters.
Nagao, N; Yokota, S; Komurasaki, K; Arakawa, Y
2007-11-01
A two-dimensional dual pendulum thrust stand was developed to measure the thrust vectors [axial and horizontal (transverse) direction thrusts] of a Hall thruster. A thruster with a steering mechanism is mounted on the inner pendulum, and thrust is measured from the displacement between the inner and outer pendulums, by which the thermal drift effect is canceled out. Two crossover knife-edges support each pendulum arm: one is set on the other at a right angle. They enable the pendulums to swing in two directions. Thrust calibration using a pulley and weight system showed that the measurement errors were less than 0.25 mN (1.4%) in the main thrust direction and 0.09 mN (1.4%) in its transverse direction. The angle of the thrust vector produced by the thruster was then measured with the stand. Under typical operating conditions, a vector deviation from the main thrust direction of +/-2.3 degrees was measured with an error of +/-0.2 degrees.
Measurement of Systematic Error Effects for a Sensitive Storage Ring EDM Polarimeter
NASA Astrophysics Data System (ADS)
Imig, Astrid; Stephenson, Edward
2009-10-01
The Storage Ring EDM Collaboration used the Cooler Synchrotron (COSY) and the EDDA detector at the Forschungszentrum Jülich to explore systematic errors in very sensitive storage-ring polarization measurements. Polarized deuterons of 235 MeV were used. The analyzer target was a block of 17-mm-thick carbon placed close to the beam so that white noise applied to upstream electrostatic plates increases the vertical phase space of the beam, allowing deuterons to strike the front face of the block. For a detector acceptance that covers laboratory angles larger than 9°, the efficiency for particles to scatter into the polarimeter detectors was about 0.1% (all directions) and the vector analyzing power was about 0.2. Measurements were made of the sensitivity of the polarization measurement to beam position and angle. Both vector and tensor asymmetries were measured using beams with both vector and tensor polarization. Effects were seen that depend upon both the beam geometry and the data rate in the detectors.
Force estimation from OCT volumes using 3D CNNs.
Gessert, Nils; Beringhoff, Jens; Otte, Christoph; Schlaefer, Alexander
2018-07-01
Estimating the interaction forces of instruments and tissue is of interest, particularly to provide haptic feedback during robot-assisted minimally invasive interventions. Different approaches based on external and integrated force sensors have been proposed; these are hampered by friction, sensor size, and sterilizability. We investigate a novel approach to estimate the force vector directly from optical coherence tomography image volumes. We introduce a novel Siamese 3D CNN architecture. The network takes an undeformed reference volume and a deformed sample volume as input and outputs the three components of the force vector. We employ a deep residual architecture with bottlenecks for increased efficiency. We compare the Siamese approach to methods using difference volumes and two-dimensional projections. Data were generated using a robotic setup to obtain ground-truth force vectors for silicone tissue phantoms as well as porcine tissue. Our method achieves a mean average error of [Formula: see text] when estimating the force vector. Our novel Siamese 3D CNN architecture outperforms single-path methods that achieve a mean average error of [Formula: see text]. Moreover, the use of volume data leads to significantly higher performance compared to processing only surface information, which achieves a mean average error of [Formula: see text]. Based on the tissue dataset, our method shows good generalization between different subjects. We propose a novel image-based force estimation method using optical coherence tomography. We illustrate that capturing the deformation of subsurface structures substantially improves force estimation. Our approach can provide accurate force estimates in surgical setups when using intraoperative optical coherence tomography.
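A minimal sketch of a Siamese two-stream 3D CNN of this kind is shown below in PyTorch; the layer sizes and feature widths are assumptions for illustration, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

class Siamese3DForceNet(nn.Module):
    """Shared-weight 3D CNN encoder applied to a reference and a deformed
    OCT volume; concatenated features regress the 3D force vector.
    Layer sizes are illustrative, not those from the paper."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 3))

    def forward(self, ref_vol, def_vol):
        z = torch.cat([self.encoder(ref_vol), self.encoder(def_vol)], dim=1)
        return self.head(z)  # (batch, 3): force components fx, fy, fz

# usage: volumes as (batch, channel, depth, height, width)
net = Siamese3DForceNet()
force = net(torch.randn(2, 1, 32, 32, 32), torch.randn(2, 1, 32, 32, 32))
```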
Background: Exposure measurement error is a concern in long-term PM2.5 health studies using ambient concentrations as exposures. We assessed error magnitude by estimating calibration coefficients as the association between personal PM2.5 exposures from validation studies and typ...
Results from a Sting Whip Correction Verification Test at the Langley 16-Foot Transonic Tunnel
NASA Technical Reports Server (NTRS)
Crawford, B. L.; Finley, T. D.
2002-01-01
In recent years, great strides have been made toward correcting the largest error in inertial Angle of Attack (AoA) measurements in wind tunnel models. This error source is commonly referred to as 'sting whip' and is caused by aerodynamically induced forces imparting dynamics on sting-mounted models. These aerodynamic forces cause the model to whip through an arc section in the pitch and/or yaw planes, thus generating a centrifugal acceleration and creating a bias error in the AoA measurement. It has been shown that, under certain conditions, this induced AoA error can be greater than one third of a degree. An error of this magnitude far exceeds the target AoA goal of 0.01 deg established at NASA Langley Research Center (LaRC) and elsewhere. New sting whip correction techniques being developed at LaRC are able to measure and reduce this sting whip error by an order of magnitude. With this increase of accuracy, the 0.01 deg AoA target is achievable under all but the most severe conditions.
On the accuracy and precision of numerical waveforms: effect of waveform extraction methodology
NASA Astrophysics Data System (ADS)
Chu, Tony; Fong, Heather; Kumar, Prayush; Pfeiffer, Harald P.; Boyle, Michael; Hemberger, Daniel A.; Kidder, Lawrence E.; Scheel, Mark A.; Szilagyi, Bela
2016-08-01
We present a new set of 95 numerical relativity simulations of non-precessing binary black holes (BBHs). The simulations sample both black-hole spins comprehensively up to spin magnitude 0.9 and cover mass ratios 1-3. They contain on average 24 inspiral orbits, plus merger and ringdown, with low initial orbital eccentricities e < 10⁻⁴. A subset of the simulations extends the coverage of non-spinning BBHs up to mass ratio q = 10. Gravitational waveforms at asymptotic infinity are computed with two independent techniques: extrapolation and Cauchy characteristic extraction. An error analysis based on noise-weighted inner products is performed. We find that numerical truncation error, error due to gravitational wave extraction, and error due to the Fourier transformation of the finite-length numerical waveforms are of similar magnitude, with gravitational wave extraction errors dominating at noise-weighted mismatches of ~3 × 10⁻⁴. This set of waveforms will serve to validate and improve aligned-spin waveform models for gravitational wave science.
Aberg, Kristoffer C; Müller, Julia; Schwartz, Sophie
2017-01-01
Anticipation and delivery of rewards improve memory formation, but little effort has been made to disentangle their respective contributions to memory enhancement. Moreover, it has been suggested that the effects of reward on memory are mediated by dopaminergic influences on hippocampal plasticity. Yet, evidence linking memory improvements to actual reward computations reflected in the activity of the dopaminergic system, i.e., prediction errors and expected values, is scarce and inconclusive. For example, previous studies have variously reported that the magnitude of prediction errors during a reinforcement learning task was a positive, negative, or non-significant predictor of successfully encoding simultaneously presented images. Individual sensitivities to reward and punishment have been found to influence the activation of the dopaminergic reward system and could therefore help explain these seemingly discrepant results. Here, we used a novel associative memory task combined with computational modeling and showed independent effects of reward delivery and reward anticipation on memory. Strikingly, the computational approach revealed positive influences from both reward delivery, as mediated by prediction error magnitude, and reward anticipation, as mediated by the magnitude of expected value, even in the absence of behavioral effects when analyzed using standard methods, i.e., by collapsing memory performance across trials within conditions. We additionally measured trait estimates of reward and punishment sensitivity and found that individuals with increased reward (vs. punishment) sensitivity had better memory for associations encoded during positive (vs. negative) prediction errors when tested after 20 min, but a negative trend when tested after 24 h. In conclusion, modeling trial-by-trial fluctuations in the magnitude of reward, as we did here for prediction errors and expected value computations, provides a comprehensive and biologically plausible description of the dynamic interplay between reward, dopamine, and associative memory formation. Our results also underline the importance of considering individual traits when assessing reward-related influences on memory.
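The trial-by-trial quantities referred to here (expected values and prediction errors) are typically produced by a delta-rule model. A minimal Rescorla-Wagner-style sketch, with an assumed learning rate, illustrates the computation:

```python
import numpy as np

def rescorla_wagner(rewards, alpha=0.2, v0=0.0):
    """Trial-by-trial expected values and prediction errors, the two reward
    computations used as memory predictors; alpha is an assumed learning rate."""
    V, values, pes = v0, [], []
    for r in rewards:
        pe = r - V            # prediction error on this trial
        values.append(V)      # expected value before the outcome
        pes.append(pe)
        V += alpha * pe       # delta-rule update
    return np.array(values), np.array(pes)

values, pes = rescorla_wagner(np.random.default_rng(1).binomial(1, 0.7, 40))
```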
Applying integrals of motion to the numerical solution of differential equations
NASA Technical Reports Server (NTRS)
Jezewski, D. J.
1980-01-01
A method is developed for using the integrals of systems of nonlinear, ordinary, differential equations in a numerical integration process to control the local errors in these integrals and reduce the global errors of the solution. The method is general and can be applied to either scalar or vector integrals. A number of example problems, with accompanying numerical results, are used to verify the analysis and support the conjecture of global error reduction.
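As a toy illustration of the idea, the sketch below integrates a harmonic oscillator and enforces its energy integral by rescaling the state after each step; the system, step size, and projection rule are illustrative assumptions rather than the paper's method.

```python
import numpy as np

def rk4_step(f, y, h):
    k1 = f(y); k2 = f(y + 0.5*h*k1); k3 = f(y + 0.5*h*k2); k4 = f(y + h*k3)
    return y + (h / 6.0) * (k1 + 2*k2 + 2*k3 + k4)

def project_to_integral(y, E0):
    """Rescale the state so the energy integral E = (x^2 + v^2)/2 of the
    unit harmonic oscillator is restored to its initial value E0."""
    return y * np.sqrt(E0 / (0.5 * np.dot(y, y)))

f = lambda y: np.array([y[1], -y[0]])      # x' = v, v' = -x
y = np.array([1.0, 0.0])
E0 = 0.5 * np.dot(y, y)
for _ in range(10_000):
    y = project_to_integral(rk4_step(f, y, 0.1), E0)
print(abs(0.5 * np.dot(y, y) - E0))        # integral error held near machine zero
```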
Wu, Jibo
2016-01-01
In this article, a generalized difference-based ridge estimator is proposed for the vector parameter in a partial linear model when the errors are dependent. It is supposed that some additional linear constraints may hold on the whole parameter space. Its mean-squared error matrix is compared with that of the generalized restricted difference-based estimator. Finally, the performance of the new estimator is illustrated by a simulation study and a numerical example.
Applying integrals of motion to the numerical solution of differential equations
NASA Technical Reports Server (NTRS)
Jezewski, D. J.
1979-01-01
A method is developed for using the integrals of systems of nonlinear, ordinary differential equations in a numerical integration process to control the local errors in these integrals and reduce the global errors of the solution. The method is general and can be applied to either scalar or vector integrals. A number of example problems, with accompanying numerical results, are used to verify the analysis and support the conjecture of global error reduction.
NASA Astrophysics Data System (ADS)
Correia, Carlos M.; Bond, Charlotte Z.; Sauvage, Jean-François; Fusco, Thierry; Conan, Rodolphe; Wizinowich, Peter L.
2017-10-01
We build on a long-standing tradition in astronomical adaptive optics (AO) of specifying performance metrics and error budgets using linear systems modeling in the spatial-frequency domain. Our goal is to provide a comprehensive tool for the calculation of error budgets in terms of residual temporally filtered phase power spectral densities and variances. In addition, the fast simulation of AO-corrected point spread functions (PSFs) provided by this method can be used as input for simulations of science observations with next-generation instruments and telescopes, in particular to predict post-coronagraphic contrast improvements for planet-finder systems. We extend previous results and propose the synthesis of a distributed Kalman filter to mitigate both aniso-servo-lag and aliasing errors whilst minimizing the overall residual variance. We discuss applications to (i) analytic AO-corrected PSF modeling in the spatial-frequency domain, (ii) post-coronagraphic contrast enhancement, (iii) filter optimization for real-time wavefront reconstruction, and (iv) PSF reconstruction from system telemetry. Under perfect knowledge of wind velocities, we show that ~60 nm rms error reduction can be achieved with the distributed Kalman filter embodying anti-aliasing reconstructors on 10 m class high-order AO systems, leading to contrast improvement factors of up to three orders of magnitude at a few λ/D separations (~1-5 λ/D) for a 0 magnitude star, and reaching close to one order of magnitude for a 12 magnitude star.
Addressing Phase Errors in Fat-Water Imaging Using a Mixed Magnitude/Complex Fitting Method
Hernando, D.; Hines, C. D. G.; Yu, H.; Reeder, S.B.
2012-01-01
Accurate, noninvasive measurements of liver fat content are needed for the early diagnosis and quantitative staging of nonalcoholic fatty liver disease. Chemical shift-based fat quantification methods acquire images at multiple echo times using a multiecho spoiled gradient echo sequence and provide fat fraction measurements through postprocessing. However, phase errors, such as those caused by eddy currents, can adversely affect fat quantification. These phase errors are typically most significant at the first echo of the echo train and introduce bias into complex-based fat quantification techniques. They can be overcome using a magnitude-based technique (where the phase of all echoes is discarded), but at the cost of significantly degraded signal-to-noise ratio, particularly for certain choices of echo time combinations. In this work, we develop a reconstruction method that overcomes these phase errors without the signal-to-noise ratio penalty incurred by magnitude fitting: it discards the phase of the first echo (which is often corrupted) while retaining the phase of the remaining echoes (which is essentially unaffected). We test the proposed method on 104 patient liver datasets (from 52 patients, each scanned twice), comparing the fat fraction measurements to coregistered spectroscopy measurements. We demonstrate that mixed fitting provides accurate fat fraction measurements with high signal-to-noise ratio and low bias over a wide choice of echo combinations. PMID:21713978
NASA Astrophysics Data System (ADS)
Ibrahim, Ichsan; Malasan, Hakim L.; Kunjaya, Chatief; Timur Jaelani, Anton; Puannandra Putri, Gerhana; Djamal, Mitra
2018-04-01
In astronomy, the brightness of a source is typically expressed in terms of magnitude. Conventionally, the magnitude is defined by the logarithm of the received flux; this relationship is known as the Pogson formula. For received flux with a small signal-to-noise ratio (S/N), however, the formula gives a large magnitude error. We investigate whether the use of the inverse hyperbolic sine function (hereafter the Asinh magnitude) in modified formulae could allow for an alternative calculation of magnitudes for small-S/N flux, and whether the new approach better represents the brightness in that regime. We study the possibility of increasing the detection level of gravitational microlensing using 40 selected microlensing light curves from the 2013 and 2014 seasons and the Asinh magnitude. Photometric data of the selected events are obtained from the Optical Gravitational Lensing Experiment (OGLE). We found that using the Asinh magnitude makes the events brighter than using the logarithmic magnitude by an average of about 3.42 × 10⁻² magnitude, with an average difference in error between the logarithmic and the Asinh magnitude of about 2.21 × 10⁻² magnitude. The microlensing events OB140847 and OB140885 are found to have the largest difference values among the selected events. Using a Gaussian fit to find the peak for OB140847 and OB140885, we conclude statistically that the Asinh magnitude gives better mean squared values of the regression and narrower residual histograms than the Pogson magnitude. Based on these results, we also attempt to propose a limit in magnitude value below which the use of the Asinh magnitude is optimal for small-S/N data.
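A small sketch contrasts the two definitions; the softening parameter b, set here to the flux noise, is an assumption in the spirit of the asinh-magnitude literature rather than the value used in the paper.

```python
import numpy as np

POGSON = 2.5 / np.log(10.0)

def pogson_mag(flux):
    return -2.5 * np.log10(flux)                  # diverges as flux -> 0

def asinh_mag(flux, b):
    """Inverse-hyperbolic-sine magnitude; b is a softening parameter of
    order the flux noise (assumed here), keeping low-S/N values finite."""
    return -POGSON * (np.arcsinh(flux / (2.0 * b)) + np.log(b))

flux, sigma = 0.05, 0.02                          # low-S/N flux (arbitrary units)
b = sigma
for f in (flux - sigma, flux, flux + sigma):
    print(pogson_mag(f), asinh_mag(f, b))         # asinh mags vary more gently
```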
Dealing with Big Numbers: Representation and Understanding of Magnitudes outside of Human Experience
ERIC Educational Resources Information Center
Resnick, Ilyse; Newcombe, Nora S.; Shipley, Thomas F.
2017-01-01
Being able to estimate quantity is important in everyday life and for success in the STEM disciplines. However, people have difficulty reasoning about magnitudes outside of human perception (e.g., nanoseconds, geologic time). This study examines patterns of estimation errors across temporal and spatial magnitudes at large scales. We evaluated the…
Retrovirus-based vectors for transient and permanent cell modification.
Schott, Juliane W; Hoffmann, Dirk; Schambach, Axel
2015-10-01
Retroviral vectors are commonly employed for long-term transgene expression via integrating vector technology. However, three alternative retrovirus-based platforms are currently available that allow transient cell modification. Gene expression can be mediated from either episomal DNA or RNA templates, or selected proteins can be directly transferred through retroviral nanoparticles. The different technologies are functionally graded with respect to safety, expression magnitude and expression duration. Improvement of the initial technologies, including modification of vector designs, targeted increase in expression strength and duration as well as improved safety characteristics, has allowed maturation of retroviral systems into efficient and promising tools that meet the technological demands of a wide variety of potential application areas. Copyright © 2015 Elsevier Ltd. All rights reserved.
Acetaminophen attenuates error evaluation in cortex
Kam, Julia W.Y.; Heine, Steven J.; Inzlicht, Michael; Handy, Todd C.
2016-01-01
Acetaminophen has recently been recognized as having impacts that extend into the affective domain. In particular, double-blind placebo-controlled trials have revealed that acetaminophen reduces the magnitude of reactivity to social rejection, frustration, dissonance, and to both negatively and positively valenced attitude objects. Given this diversity of consequences, it has been proposed that the psychological effects of acetaminophen may reflect a widespread blunting of evaluative processing. We tested this hypothesis using event-related potentials (ERPs). Sixty-two participants received acetaminophen or a placebo in a double-blind protocol and completed the Go/NoGo task. Participants' ERPs were observed following errors on the Go/NoGo task, in particular the error-related negativity (ERN; measured at FCz) and the error-related positivity (Pe; measured at Pz and CPz). Results show that acetaminophen inhibits the Pe, but not the ERN, and that the magnitude of an individual's Pe correlates positively with omission errors, partially mediating the effects of acetaminophen on the error rate. These results suggest that the recently documented affective blunting caused by acetaminophen may best be described as an inhibition of evaluative processing. They also contribute to the growing body of work suggesting that the Pe is more strongly associated with conscious awareness of errors than the ERN. PMID:26892161
Lu, Xinjiang; Liu, Wenbo; Zhou, Chuang; Huang, Minghui
2017-06-13
The least-squares support vector machine (LS-SVM) is a popular data-driven modeling method and has been successfully applied to a wide range of applications. However, it has some disadvantages, including being ineffective at handling non-Gaussian noise and being sensitive to outliers. In this paper, a robust LS-SVM method is proposed and is shown to have more reliable performance when modeling a nonlinear system under conditions where Gaussian or non-Gaussian noise is present. The construction of a new objective function allows for a reduction of the mean of the modeling error as well as the minimization of its variance, and it does not constrain the mean of the modeling error to zero. This differs from the traditional LS-SVM, which uses a worst-case scenario approach to minimize the modeling error and constrains the mean of the modeling error to zero. In doing so, the proposed method takes the modeling-error distribution into consideration and is thus less conservative and more robust with regard to random noise. A solution method is then developed to determine the optimal parameters for the proposed robust LS-SVM. An additional analysis indicates that the proposed LS-SVM gives a smaller weight to a large-error training sample and a larger weight to a small-error training sample, and is thus more robust than the traditional LS-SVM. The effectiveness of the proposed robust LS-SVM is demonstrated using both artificial and real-life cases.
NASA Astrophysics Data System (ADS)
Dzuba, V. A.; Flambaum, V. V.; Stadnik, Y. V.
2017-12-01
In the presence of P-violating interactions, the exchange of vector bosons between electrons and nucleons induces parity-nonconserving (PNC) effects in atoms and molecules, while the exchange of vector bosons between nucleons induces anapole moments of nuclei. We perform calculations of such vector-mediated PNC effects in Cs, Ba+, Yb, Tl, Fr, and Ra+ using the same relativistic many-body approaches as in earlier calculations of standard-model PNC effects, but with the long-range operator of the weak interaction. We calculate nuclear anapole moments due to vector-boson exchange using a simple nuclear model. From measured and predicted (within the standard model) values for the PNC amplitudes in Cs, Yb, and Tl, as well as the nuclear anapole moment of 133Cs, we constrain the P-violating vector-pseudovector nucleon-electron and nucleon-proton interactions mediated by a generic vector boson of arbitrary mass. Our limits improve on existing bounds from other experiments by many orders of magnitude over a very large range of vector-boson masses.
450-nm GaN laser diode enables high-speed visible light communication with 9-Gbps QAM-OFDM.
Chi, Yu-Chieh; Hsieh, Dan-Hua; Tsai, Cheng-Ting; Chen, Hsiang-Yu; Kuo, Hao-Chung; Lin, Gong-Ru
2015-05-18
A TO-38-can packaged gallium nitride (GaN) blue laser diode (LD) based free-space visible light communication (VLC) link with 64-quadrature amplitude modulation (QAM) and 32-subcarrier orthogonal frequency division multiplexing (OFDM) transmission at 9 Gbps is demonstrated over a 5-m free-space link. The 3-dB analog modulation bandwidth of the TO-38-can packaged GaN blue LD, biased at 65 mA and temperature-controlled at 25°C, is only 900 MHz, which can be extended to 1.5 GHz for OFDM encoding after throughput intensity optimization. When delivering 4-Gbps 16-QAM OFDM data within a 1-GHz bandwidth, the error vector magnitude (EVM), signal-to-noise ratio (SNR), and bit error rate (BER) of the received data are 8.4%, 22.4 dB, and 3.5 × 10⁻⁸, respectively. Increasing the encoded bandwidth to 1.5 GHz enlarges the transmission capacity to 6 Gbps but degrades the BER to 1.7 × 10⁻³. The same 6-Gbps capacity can also be achieved with a BER of 1 × 10⁻⁶ by encoding 64-QAM OFDM data within a 1-GHz bandwidth. Using the full 1.5-GHz bandwidth of the TO-38-can packaged GaN blue LD enables 64-QAM OFDM transmission up to 9 Gbps, delivering data with an EVM of 5.1%, an SNR of 22 dB, and a BER of 3.6 × 10⁻³, passing the forward error correction (FEC) criterion.
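The EVM figures quoted here follow from the usual RMS definition; the sketch below, with an illustrative 16-QAM constellation and noise level, shows the computation and the approximate EVM-to-SNR conversion.

```python
import numpy as np

def evm_percent(received, reference):
    """RMS error vector magnitude, normalized to RMS reference power."""
    err = received - reference
    return 100.0 * np.sqrt(np.mean(np.abs(err)**2) / np.mean(np.abs(reference)**2))

rng = np.random.default_rng(0)
levels = np.array([-3.0, -1.0, 1.0, 3.0])              # 16-QAM I/Q levels
ref = rng.choice(levels, 10_000) + 1j * rng.choice(levels, 10_000)
rx = ref + (rng.normal(0, 0.12, ref.shape) + 1j * rng.normal(0, 0.12, ref.shape))
evm = evm_percent(rx, ref)
print(f"EVM = {evm:.1f}%, SNR ~ {-20 * np.log10(evm / 100):.1f} dB")
```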
Peroni, M; Golland, P; Sharp, G C; Baroni, G
2016-02-01
A crucial issue in deformable image registration is achieving a robust registration algorithm at a reasonable computational cost. Given the iterative nature of the optimization procedure, an algorithm must automatically detect convergence and stop the iterative process when most appropriate. This paper ranks the performance of three stopping criteria and six stopping-value computation strategies for a Log-Domain Demons deformable registration method, simulating both a coarse and a fine registration. The analyzed stopping criteria are: (a) velocity field update magnitude, (b) mean squared error, and (c) harmonic energy. Each stopping condition is formulated so that the user defines a threshold ε, which quantifies the residual error that is acceptable for the particular problem and calculation strategy. In this work, we did not aim at assigning a value to ε, but at giving insight into how to evaluate and set the threshold for a given exit strategy in a very popular registration scheme. Experiments on phantom and patient data demonstrate that comparing the optimization-metric minimum over the most recent three iterations with the minimum over the fourth to sixth most recent iterations can be an appropriate stopping strategy (see the sketch below). The harmonic energy provided the best trade-off between robustness and speed of convergence for the analyzed registration method at coarse registration, but was outperformed by the mean squared error when all the original pixel information is used. This suggests the need to develop mathematically sound new convergence criteria in which both image and vector field information are used to detect actual convergence, which could be especially useful for multi-resolution registrations. Further work should also be dedicated to studying the performance of the same strategies in other deformable registration methods and anatomical regions. © The Author(s) 2014.
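A minimal sketch of this stopping rule, assuming the optimization metric is recorded once per iteration:

```python
def should_stop(metric_history, epsilon):
    """Stop when the best (minimum) metric over the last three iterations
    no longer improves on the best over the fourth-to-sixth most recent
    iterations by more than a user-chosen residual threshold epsilon."""
    if len(metric_history) < 6:
        return False
    recent = min(metric_history[-3:])
    older = min(metric_history[-6:-3])
    return (older - recent) <= epsilon

# e.g. mean-squared-error values recorded after each Demons iteration
mse = [10.0, 6.1, 4.0, 3.0, 2.99, 2.985, 2.98]
print(should_stop(mse, epsilon=0.05))  # True: improvement has stalled
```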
Tonutti, Michele; Gras, Gauthier; Yang, Guang-Zhong
2017-07-01
Accurate reconstruction and visualisation of soft tissue deformation in real time is crucial in image-guided surgery, particularly in augmented reality (AR) applications. Current deformation models are characterised by a trade-off between accuracy and computational speed. We propose an approach to derive a patient-specific deformation model for brain pathologies by combining the results of pre-computed finite element method (FEM) simulations with machine learning algorithms. The models can be computed instantaneously and offer accuracy comparable to FEM models. A brain tumour is used as the subject of the deformation model. Load-driven FEM simulations are performed on a tetrahedral brain mesh afflicted by a tumour. Forces of varying magnitudes, positions, and inclination angles are applied onto the brain's surface. Two machine learning algorithms, artificial neural networks (ANNs) and support vector regression (SVR), are employed to derive a model that can predict the resulting deformation for each node in the tumour's mesh. The tumour deformation can be predicted in real time given relevant information about the geometry of the anatomy and the load, all of which can be measured instantly during a surgical operation. The models can predict the position of the nodes with errors below 0.3 mm, better than the generally accepted threshold of surgical accuracy and suitable for high-fidelity AR systems. The SVR models perform better than the ANNs, with positional errors for SVR reaching under 0.2 mm. The results represent an improvement over existing deformation models for real-time applications, providing smaller errors and high patient-specificity. The proposed approach addresses the current needs of image-guided surgical systems and has the potential to be employed to model the deformation of any type of soft tissue. Copyright © 2017 Elsevier B.V. All rights reserved.
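A minimal sketch of the SVR stage is given below with scikit-learn; the feature layout (force magnitude, position, inclination) and the synthetic displacement target are stand-ins for the pre-computed FEM training data. In practice one regressor per node and per displacement component would be trained.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# features: force magnitude, position (x, y), inclination angle (assumed inputs)
X = rng.uniform([0.0, -20.0, -20.0, 0.0], [1.0, 20.0, 20.0, 60.0], size=(500, 4))
# synthetic stand-in for a pre-computed FEM displacement of one mesh node
y = 0.25 * X[:, 0] * np.cos(np.radians(X[:, 3])) / (1 + 0.01 * (X[:, 1]**2 + X[:, 2]**2))

model = SVR(kernel="rbf", C=10.0, epsilon=0.001).fit(X, y)
print(model.predict(X[:3]))   # instantaneous prediction at surgery time
```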
Can different quantum state vectors correspond to the same physical state? An experimental test
NASA Astrophysics Data System (ADS)
Nigg, Daniel; Monz, Thomas; Schindler, Philipp; Martinez, Esteban A.; Hennrich, Markus; Blatt, Rainer; Pusey, Matthew F.; Rudolph, Terry; Barrett, Jonathan
2016-01-01
A century after the development of quantum theory, the interpretation of a quantum state is still discussed. If a physicist claims to have produced a system with a particular quantum state vector, does this represent directly a physical property of the system, or is the state vector merely a summary of the physicist’s information about the system? Assume that a state vector corresponds to a probability distribution over possible values of an unknown physical or ‘ontic’ state. Then, a recent no-go theorem shows that distinct state vectors with overlapping distributions lead to predictions different from quantum theory. We report an experimental test of these predictions using trapped ions. Within experimental error, the results confirm quantum theory. We analyse which kinds of models are ruled out.
NASA Technical Reports Server (NTRS)
Muellerschoen, Ronald J.; Iijima, Byron; Meyer, Robert; Bar-Sever, Yoaz; Accad, Elie
2004-01-01
This paper evaluates the performance of a single-frequency receiver using the 1-Hz differential corrections provided by NASA's global differential GPS system. While a dual-frequency user can eliminate the ionosphere error by taking a linear combination of observables, the single-frequency user must remove or calibrate this error by other means. To remove the ionosphere error, we take advantage of the fact that the ionospheric group delay in the range observable and the carrier phase advance have the same magnitude but opposite sign. A way to calibrate this error is to use a real-time database of grid points computed by JPL's RTI (Real-Time Ionosphere) software. In both cases we evaluate the positional accuracy of a kinematic carrier-phase-based point positioning method on a global scale.
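Because the code delay and phase advance cancel on averaging, the single-frequency ionosphere-free observable reduces to a half-sum, often called the GRAPHIC combination; the sketch below uses illustrative numbers and ignores the carrier-phase ambiguity, which in practice must still be estimated.

```python
def graphic_combination(pseudorange_m, carrier_phase_m):
    """Half-sum of code and carrier on one frequency: the ionospheric group
    delay (+I on code) and phase advance (-I on carrier) cancel exactly."""
    return 0.5 * (pseudorange_m + carrier_phase_m)

rho = 21_000_000.0                      # true geometric range [m] (illustrative)
iono = 4.2                              # ionospheric error [m] (illustrative)
print(graphic_combination(rho + iono, rho - iono))   # -> 21000000.0
```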
ERIC Educational Resources Information Center
Chen, Chau-Kuang
2010-01-01
Artificial Neural Network (ANN) and Support Vector Machine (SVM) approaches have been on the cutting edge of science and technology for pattern recognition and data classification. In the ANN model, classification accuracy can be achieved by using the feed-forward of inputs, back-propagation of errors, and the adjustment of connection weights. In…
JPRS Report, Science & Technology, China
1991-10-22
Interplanetary medium data book, appendix
NASA Technical Reports Server (NTRS)
King, J. H.
1977-01-01
Computer generated listings of hourly average interplanetary plasma and magnetic field parameters are given. Parameters include proton temperature, proton density, bulk speed, an identifier of the source of the plasma data for the hour, average magnetic field magnitude and cartesian components of the magnetic field. Also included are longitude and latitude angles of the vector made up of the average field components, a vector standard deviation, and an identifier of the source of magnetic field data.
Multilayer perceptron, fuzzy sets, and classification
NASA Technical Reports Server (NTRS)
Pal, Sankar K.; Mitra, Sushmita
1992-01-01
A fuzzy neural network model based on the multilayer perceptron, using the back-propagation algorithm, and capable of fuzzy classification of patterns is described. The input vector consists of membership values to linguistic properties while the output vector is defined in terms of fuzzy class membership values. This allows efficient modeling of fuzzy or uncertain patterns with appropriate weights being assigned to the backpropagated errors depending upon the membership values at the corresponding outputs. During training, the learning rate is gradually decreased in discrete steps until the network converges to a minimum error solution. The effectiveness of the algorithm is demonstrated on a speech recognition problem. The results are compared with those of the conventional MLP, the Bayes classifier, and the other related models.
Coherent Doppler Lidar for Boundary Layer Studies and Wind Energy
NASA Astrophysics Data System (ADS)
Choukulkar, Aditya
This thesis outlines the development of a vector retrieval technique, based on data assimilation, for a coherent Doppler lidar (light detection and ranging). A detailed analysis of the optimal interpolation (OI) technique for vector retrieval is presented. Through several modifications to the OI technique, it is shown that the modified technique significantly improves velocity retrieval accuracy. These modifications include changes to innovation covariance partitioning, covariance binning, and the analysis increment calculation (a minimal form of the update step is sketched below). The modified technique makes retrievals with better accuracy, preserves local information better, and compares well with tower measurements. In order to study the error of representativeness and the vector retrieval error, a lidar simulator was constructed and used for a thorough sensitivity analysis of the lidar measurement process and vector retrieval. The error of representativeness as a function of scales of motion, and the sensitivity of vector retrieval to look angle, are quantified. Using the modified OI technique, a study of nocturnal flow in Owens Valley, CA was carried out to identify and understand uncharacteristic events during 1030-1230 UTC (0230-0430 local time) on March 27, 2006. Lidar observations show complex and uncharacteristic flows, such as sudden bursts of westerly cross-valley wind mixing with the dominant up-valley wind. Model results from the Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS) and other in situ instrumentation are used to corroborate and complement these observations. The modified OI technique is also used to identify uncharacteristic and extreme flow events at a wind development site; estimates of turbulence and shear from this technique are compared to tower measurements. Finally, a formulation for equivalent wind speed in the presence of variations in wind speed and direction, combined with shear, is developed and used to determine wind energy content in the presence of turbulence.
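For reference, the OI update has the familiar gain form; the sketch below is a generic single-point illustration with assumed covariances, not the thesis' modified scheme.

```python
import numpy as np

def oi_analysis(xb, y, H, B, R):
    """One optimal-interpolation update: blend background winds xb with
    radial-velocity observations y via the gain K = B H^T (H B H^T + R)^-1."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return xb + K @ (y - H @ xb)           # background plus analysis increment

xb = np.array([5.0, 1.0])                  # background (u, v) at one grid point
H = np.array([[0.94, 0.34]])               # projection onto the lidar look angle
y = np.array([5.8])                        # observed radial velocity [m/s]
B = np.diag([2.0, 2.0]); R = np.diag([0.3])   # assumed error covariances
print(oi_analysis(xb, y, H, B, R))
```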
Error Patterns in Ordering Fractions among At-Risk Fourth-Grade Students
ERIC Educational Resources Information Center
Malone, Amelia Schneider; Fuchs, Lynn S.
2015-01-01
The 3 purposes of this study were to: (a) describe fraction ordering errors among at-risk 4th-grade students; (b) assess the effect of part-whole understanding and accuracy of fraction magnitude estimation on the probability of committing errors; and (c) examine the effect of students' ability to explain comparing problems on the probability of…
Error Patterns in Ordering Fractions among At-Risk Fourth-Grade Students
ERIC Educational Resources Information Center
Malone, Amelia S.; Fuchs, Lynn S.
2017-01-01
The three purposes of this study were to (a) describe fraction ordering errors among at-risk fourth grade students, (b) assess the effect of part-whole understanding and accuracy of fraction magnitude estimation on the probability of committing errors, and (c) examine the effect of students' ability to explain comparing problems on the probability…
NASA Astrophysics Data System (ADS)
Zeng, Y. Y.; Guo, J. Y.; Shang, K.; Shum, C. K.; Yu, J. H.
2015-09-01
Two methods for computing the gravitational potential difference (GPD) between the GRACE satellites using orbit data have been formulated based on the energy integral: one in the geocentric inertial frame (GIF) and another in the Earth-fixed frame (EFF). Here we present a rigorous theoretical formulation in the EFF with particular emphasis on the necessary approximations, provide a computational approach that mitigates the approximations to a negligible level, and verify our approach using simulations. We conclude that a term neglected or ignored in all former work without verification should be retained. In our simulations, 2 cycle-per-revolution (CPR) errors are present in the GPD computed using our formulation, and empirical removal of the 2 CPR and lower-frequency errors can improve the precision of Stokes coefficients (SCs) of degree 3 and above by 1-2 orders of magnitude, even though the result without removing these errors is already reasonably accurate. Furthermore, the relation between data errors and their influence on the GPD is analysed, and a formal examination is made of the possible precision that real GRACE data may attain. The result of removing 2 CPR errors implies that, if not taken care of properly, the values of SCs computed by means of the energy integral method using real GRACE data may be seriously corrupted by aliasing errors from possibly very large 2 CPR errors, based on two facts: (1) errors of $\bar{C}_{2,0}$ manifest as 2 CPR errors in the GPD, and (2) errors of $\bar{C}_{2,0}$ in GRACE data (the differences between the CSR monthly values of $\bar{C}_{2,0}$ independently determined using GRACE and SLR are a reasonable measure of their magnitude) are very large. Our simulations show that, if 2 CPR errors in the GPD vary from day to day as much as those corresponding to month-to-month errors of $\bar{C}_{2,0}$, the aliasing errors of degree 15 and above SCs computed using a month's GPD data may reach a level comparable to the magnitude of the gravitational potential variation signal that GRACE was designed to recover. Consequently, aliasing errors from 2 CPR errors in real GRACE data may be very large if not properly handled, and we therefore propose an approach to reduce aliasing errors from 2 CPR and lower-frequency errors when computing SCs above degree 2.
Observations on Polar Coding with CRC-Aided List Decoding
2016-09-01
Polar codes are a new type of forward error correction (FEC) codes, introduced by Arikan in [1], in which he ... error correction (FEC) currently used and planned for use in Navy wireless communication systems. The project's results from FY14 and FY15 are ... good error-correction performance. We used the Tal/Vardy method of [5]. The polar encoder uses a row vector u of length N. Let uA be the subvector
Star tracker error analysis: Roll-to-pitch nonorthogonality
NASA Technical Reports Server (NTRS)
Corson, R. W.
1979-01-01
An error analysis is described for an anomaly isolated in the star tracker software line-of-sight (LOS) rate test. The LOS rate cosine was found to be greater than one in certain cases, which implied that one or both of the star-tracker-measured end point unit vectors used to compute the LOS rate cosine had lengths greater than unity. The roll/pitch nonorthogonality matrix in the TNB CL module of the IMU software is examined as the source of the error.
A Systematic Approach to Sensor Selection for Aircraft Engine Health Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2009-01-01
A systematic approach for selecting an optimal suite of sensors for on-board aircraft gas turbine engine health estimation is presented. The methodology optimally chooses the engine sensor suite and the model tuning parameter vector to minimize the Kalman filter mean squared estimation error in the engine's health parameters or other unmeasured engine outputs. This technique specifically addresses the underdetermined estimation problem, where there are more unknown system health parameters representing degradation than available sensor measurements. This paper presents the theoretical estimation error equations and describes the optimization approach that is applied to select the sensors and model tuning parameters to minimize these errors. Two different model tuning parameter vector selection approaches are evaluated: the conventional approach of selecting a subset of health parameters to serve as the tuning parameters, and an alternative approach that selects tuning parameters as a linear combination of all health parameters. Results from the application of the technique to an aircraft engine simulation are presented and compared to those from an alternative sensor selection strategy.
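The underlying computation can be sketched as a brute-force subset search over candidate sensors, scoring each suite by the steady-state Kalman error covariance from the filter Riccati equation; all matrices below are toy assumptions, not the engine model from the paper.

```python
import numpy as np
from itertools import combinations
from scipy.linalg import solve_discrete_are

def steady_state_estimation_mse(A, C, Q, R):
    """Trace of the steady-state Kalman error covariance (estimator DARE,
    obtained from the control DARE by duality A -> A.T, B -> C.T)."""
    P = solve_discrete_are(A.T, C.T, Q, R)
    return np.trace(P)

# toy 3-state health model with 4 candidate sensors (all values assumed)
A = np.diag([0.99, 0.98, 0.97])
Q = 0.01 * np.eye(3)
C_all = np.array([[1.0, 0.2, 0.0],
                  [0.0, 1.0, 0.3],
                  [0.4, 0.0, 1.0],
                  [0.5, 0.5, 0.1]])
r_all = np.array([0.05, 0.08, 0.04, 0.10])   # sensor noise variances

best = min(combinations(range(4), 2),
           key=lambda s: steady_state_estimation_mse(A, C_all[list(s)], Q,
                                                     np.diag(r_all[list(s)])))
print("best 2-sensor suite:", best)
```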
Murugesan, Yahini Prabha; Alsadoon, Abeer; Manoranjan, Paul; Prasad, P W C
2018-06-01
Augmented reality-based surgeries have not been successfully implemented in oral and maxillofacial areas due to limitations in geometric accuracy and image registration. This paper aims to improve the accuracy and depth perception of the augmented video. The proposed system consists of a rotational matrix and translation vector algorithm to reduce the geometric error and improve the depth perception by including 2 stereo cameras and a translucent mirror in the operating room. The results on the mandible/maxilla area show that the new algorithm improves the video accuracy by 0.30-0.40 mm (in terms of overlay error) and the processing rate to 10-13 frames/s compared to 7-10 frames/s in existing systems. The depth perception increased by 90-100 mm. The proposed system concentrates on reducing the geometric error. Thus, this study provides an acceptable range of accuracy with a shorter operating time, which provides surgeons with a smooth surgical flow. Copyright © 2018 John Wiley & Sons, Ltd.
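The core registration step, mapping model points into camera space through a rotation matrix and translation vector, can be sketched in a few lines; R, t, and the points below are illustrative.

```python
import numpy as np

def register_points(points, R, t):
    """Map virtual-model points into camera space with rotation matrix R
    and translation vector t; the core step in aligning an AR overlay."""
    return points @ R.T + t

theta = np.radians(5.0)                    # assumed small rotation about z
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.5, -0.8, 120.0])           # mm, illustrative
mandible_pts = np.array([[0.0, 0.0, 0.0], [10.0, 2.0, 5.0]])
print(register_points(mandible_pts, R, t))
```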
Tuning support vector machines for minimax and Neyman-Pearson classification.
Davenport, Mark A; Baraniuk, Richard G; Scott, Clayton D
2010-10-01
This paper studies the training of support vector machine (SVM) classifiers with respect to the minimax and Neyman-Pearson criteria. In principle, these criteria can be optimized in a straightforward way using a cost-sensitive SVM. In practice, however, because these criteria require especially accurate error estimation, standard techniques for tuning SVM parameters, such as cross-validation, can lead to poor classifier performance. To address this issue, we first prove that the usual cost-sensitive SVM, here called the 2C-SVM, is equivalent to another formulation called the 2nu-SVM. We then exploit a characterization of the 2nu-SVM parameter space to develop a simple yet powerful approach to error estimation based on smoothing. In an extensive experimental study, we demonstrate that smoothing significantly improves the accuracy of cross-validation error estimates, leading to dramatic performance gains. Furthermore, we propose coordinate descent strategies that offer significant gains in computational efficiency, with little to no loss in performance.
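In scikit-learn terms, a cost-sensitive (2C-style) SVM corresponds to per-class penalty weights; the sketch below, with assumed data and weights, shows how raising the class-0 cost pushes down its false-alarm rate in a Neyman-Pearson spirit. This is a generic illustration, not the authors' 2nu-SVM parameterization or smoothing procedure.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

# separate misclassification costs per class shift the operating point:
# a heavier class-0 weight approximates a constraint on class-0 false alarms
clf = SVC(kernel="rbf", C=1.0, class_weight={0: 5.0, 1: 1.0}).fit(X, y)
false_alarms = np.mean(clf.predict(X[y == 0]) == 1)
print(f"class-0 false-alarm rate: {false_alarms:.3f}")
```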
Analysis of Measurement Error and Estimator Shape in Three-Point Hydraulic Gradient Estimators
NASA Astrophysics Data System (ADS)
McKenna, S. A.; Wahi, A. K.
2003-12-01
Three spatially separated measurements of head provide a means of estimating the magnitude and orientation of the hydraulic gradient. Previous work with three-point estimators has focused on the effect of the size (area) of the three-point estimator and measurement error on the final estimates of the gradient magnitude and orientation in laboratory and field studies (Mizell, 1980; Silliman and Frost, 1995; Silliman and Mantz, 2000; Ruskauff and Rumbaugh, 1996). However, a systematic analysis of the combined effects of measurement error, estimator shape, and estimator orientation relative to the gradient orientation has not previously been conducted. Monte Carlo simulation with an underlying assumption of a homogeneous transmissivity field is used to examine the effects of uncorrelated measurement error on a series of eleven different three-point estimators having the same size but different shapes as a function of the orientation of the true gradient. Results show that the variance in the estimates of both the magnitude and the orientation increases linearly with the increase in measurement error, in agreement with the results of stochastic theory for estimators that are small relative to the correlation length of transmissivity (Mizell, 1980). Three-point estimator shapes with base-to-height ratios between 0.5 and 5.0 provide accurate estimates of magnitude and orientation across all orientations of the true gradient. As an example, these results are applied to data collected from a monitoring network of 25 wells at the WIPP site during two different time periods. The simulation results are used to reduce the set of all possible combinations of three wells to those combinations with acceptable measurement errors relative to the amount of head drop across the estimator and base-to-height ratios between 0.5 and 5.0. These limitations reduce the set of all possible well combinations by 98 percent and show that size alone, as defined by triangle area, is not a valid discriminator of whether the estimator provides accurate estimates of the gradient magnitude and orientation. This research was funded by WIPP programs administered by the U.S. Department of Energy. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
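A three-point estimator amounts to fitting a plane through the three head measurements; the sketch below, with illustrative well coordinates, recovers the gradient magnitude and the down-gradient orientation.

```python
import numpy as np

def three_point_gradient(xy, heads):
    """Fit the plane h = a + b*x + c*y through three head measurements and
    return the gradient magnitude and the down-gradient orientation
    (degrees counterclockwise from +x)."""
    A = np.column_stack([np.ones(3), xy[:, 0], xy[:, 1]])
    a, b, c = np.linalg.solve(A, heads)
    magnitude = np.hypot(b, c)
    orientation = np.degrees(np.arctan2(-c, -b))   # flow is down-gradient
    return magnitude, orientation

xy = np.array([[0.0, 0.0], [100.0, 0.0], [40.0, 80.0]])   # well coordinates [m]
heads = np.array([10.00, 9.85, 9.90])                      # measured heads [m]
print(three_point_gradient(xy, heads))
```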
Test of understanding of vectors: A reliable multiple-choice vector concept test
NASA Astrophysics Data System (ADS)
Barniol, Pablo; Zavala, Genaro
2014-06-01
In this article we discuss the findings of our research on students' understanding of vector concepts in problems without physical context. First, we develop a complete taxonomy of the most frequent errors made by university students when learning vector concepts. This study is based on the results of several test administrations of open-ended problems in which a total of 2067 students participated. Using this taxonomy, we then designed a 20-item multiple-choice test [Test of understanding of vectors (TUV)] and administered it in English to 423 students who were completing the required sequence of introductory physics courses at a large private Mexican university. We evaluated the test's content validity, reliability, and discriminatory power. The results indicate that the TUV is a reliable assessment tool. We also conducted a detailed analysis of the students' understanding of the vector concepts evaluated in the test. The TUV is included in the Supplemental Material as a resource for other researchers studying vector learning, as well as instructors teaching the material.
Monsalve, Yoman; Panzera, Francisco; Herrera, Leidi; Triana-Chávez, Omar; Gómez-Palacio, Andrés
2016-06-01
The emerging vector of Chagas disease, Triatoma maculata (Hemiptera, Reduviidae), is one of the most widely distributed Triatoma species in northern South America. Despite its increasing relevance as a vector, no consistent picture of the magnitude of its genetic and phenetic diversity has yet been developed. Here, several populations of T. maculata from eleven localities in Colombia and Venezuela were analyzed based on wing morphometry and mitochondrial NADH dehydrogenase subunit 4 (ND4) gene sequences. Our results showed clear morphometric and genetic differences between Colombian and Venezuelan populations, indicating high intraspecific diversity. Inter-population divergence appears related to the Eastern Cordillera in Colombia. Analyses of populations from other distinct eco-geographic regions of Colombia, Venezuela, and Brazil are still needed to understand the species' systematics and phylogeography, as well as its actual role as a vector of Chagas disease. © 2016 The Society for Vector Ecology.
Ben Salem, Samira; Bacha, Khmais; Chaari, Abdelkader
2012-09-01
In this work we suggest an original fault signature based on an improved combination of Hilbert and Park transforms. From this combination we create two fault signatures: the Hilbert modulus current space vector (HMCSV) and the Hilbert phase current space vector (HPCSV). These two fault signatures are subsequently analysed using the classical fast Fourier transform (FFT). The effects of mechanical faults on the HMCSV and HPCSV spectra are described, and the related frequencies are determined. The magnitudes of the spectral components relative to the studied faults (air-gap eccentricity and outer-raceway ball bearing defect) are extracted in order to build the input vector for training and testing a support vector machine, with the aim of automatically classifying the various states of the induction motor. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
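A loose illustration of the signature-building pipeline follows: a Concordia/Park current space vector is formed from assumed three-phase currents carrying a small 7-Hz fault modulation, its modulus is passed through a Hilbert transform, and an FFT exposes the modulation line. All signal parameters are illustrative, not the paper's.

```python
import numpy as np
from scipy.signal import hilbert

fs, f0 = 10_000.0, 50.0
t = np.arange(0, 1.0, 1 / fs)
# three-phase stator currents with an assumed 7-Hz fault-related modulation
mod = 1 + 0.02 * np.cos(2 * np.pi * 7 * t)
ia = mod * np.cos(2 * np.pi * f0 * t)
ib = mod * np.cos(2 * np.pi * f0 * t - 2 * np.pi / 3)
ic = mod * np.cos(2 * np.pi * f0 * t + 2 * np.pi / 3)

# Park (Concordia) current space vector i_d + j*i_q
i_d = np.sqrt(2 / 3) * (ia - 0.5 * ib - 0.5 * ic)
i_q = (1 / np.sqrt(2)) * (ib - ic)
csv = i_d + 1j * i_q

# Hilbert envelope of the modulus signature, then FFT to expose fault lines
env = np.abs(hilbert(np.abs(csv)))
spectrum = np.abs(np.fft.rfft(env - env.mean()))
freqs = np.fft.rfftfreq(len(env), 1 / fs)
print(freqs[np.argmax(spectrum)])        # ~7 Hz modulation peak
```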
NASA Technical Reports Server (NTRS)
Kao, David
1999-01-01
The line integral convolution (LIC) technique has been known to be an effective tool for depicting flow patterns in a given vector field. There have been many extensions to make it run faster and reveal useful flow information such as velocity magnitude, motion, and direction. There are also extensions to unsteady flows and 3D vector fields. Surprisingly, none of these extensions automatically highlight flow features, which often represent the most important and interesting physical flow phenomena. In this sketch, a method for highlighting flow direction in LIC images is presented. The method gives an intuitive impression of flow direction in the given vector field and automatically reveals saddle points in the flow.
Transverse spin and transverse momentum in scattering of plane waves.
Saha, Sudipta; Singh, Ankit K; Ray, Subir K; Banerjee, Ayan; Gupta, Subhasish Dutta; Ghosh, Nirmalya
2016-10-01
We study the near field to the far field evolution of spin angular momentum (SAM) density and the Poynting vector of the scattered waves from spherical scatterers. The results show that at the near field, the SAM density and the Poynting vector are dominated by their transverse components. While the former (transverse SAM) is independent of the helicity of the incident circular polarization state, the latter (transverse Poynting vector) depends upon the polarization state. It is further demonstrated that the interference of the transverse electric and transverse magnetic scattering modes enhances both the magnitudes and the spatial extent of the transverse SAM and the transverse momentum components.
Separable decompositions of bipartite mixed states
NASA Astrophysics Data System (ADS)
Li, Jun-Li; Qiao, Cong-Feng
2018-04-01
We present a practical scheme for the decomposition of a bipartite mixed state into a sum of direct products of local density matrices, using the technique developed in Li and Qiao (Sci. Rep. 8:1442, 2018). In the scheme, the correlation matrix which characterizes the bipartite entanglement is first decomposed into two matrices composed of the Bloch vectors of local states. Then, we show that the symmetries of Bloch vectors are consistent with that of the correlation matrix, and the magnitudes of the local Bloch vectors are lower bounded by the correlation matrix. Concrete examples for the separable decompositions of bipartite mixed states are presented for illustration.
Attitude control with realization of linear error dynamics
NASA Technical Reports Server (NTRS)
Paielli, Russell A.; Bach, Ralph E.
1993-01-01
An attitude control law is derived to realize linear unforced error dynamics with the attitude error defined in terms of rotation group algebra (rather than vector algebra). Euler parameters are used in the rotational dynamics model because they are globally nonsingular, but only the minimal three Euler parameters are used in the error dynamics model because they have no nonlinear mathematical constraints to prevent the realization of linear error dynamics. The control law is singular only when the attitude error angle is exactly pi rad about any eigenaxis, and a simple intuitive modification at the singularity allows the control law to be used globally. The forced error dynamics are nonlinear but stable. Numerical simulation tests show that the control law performs robustly for both initial attitude acquisition and attitude control.
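Euler parameters are the quaternion components, so the three-parameter attitude error used for linear error dynamics can be sketched as the vector part of an error quaternion; the convention here (scalar-first, Hamilton product) is an assumption for illustration, not necessarily the paper's.

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product, scalar-first convention [w, x, y, z]."""
    pw, pv = p[0], p[1:]
    qw, qv = q[0], q[1:]
    return np.concatenate([[pw * qw - pv @ qv],
                           pw * qv + qw * pv + np.cross(pv, qv)])

def attitude_error_vector(q_desired, q_actual):
    """Three-parameter attitude error: vector part of the error quaternion
    q_err = conj(q_desired) * q_actual; feedback on this vector yields
    (locally) linear unforced error dynamics."""
    q_conj = q_desired * np.array([1.0, -1.0, -1.0, -1.0])
    q_err = quat_mul(q_conj, q_actual)
    if q_err[0] < 0:
        q_err = -q_err            # avoid the pi-rotation sign ambiguity
    return q_err[1:]

qd = np.array([1.0, 0.0, 0.0, 0.0])
qa = np.array([np.cos(0.1), np.sin(0.1), 0.0, 0.0])   # 0.2 rad about x
print(attitude_error_vector(qd, qa))                   # ~[0.0998, 0, 0]
```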
NASA Astrophysics Data System (ADS)
Gregorio, Fernando; Cousseau, Juan; Werner, Stefan; Riihonen, Taneli; Wichman, Risto
2011-12-01
The design of predistortion (PD) techniques for broadband multiple-input multiple-output OFDM (MIMO-OFDM) systems raises several implementation challenges. First, the large bandwidth of the OFDM signal requires the introduction of memory effects in the PD model. In addition, the in-phase and quadrature (IQ) modulator that translates the predistorted baseband signal to RF is typically imbalanced. Furthermore, coupling effects, which occur when the MIMO paths are implemented in the same reduced-size chipset, cannot be avoided in MIMO transceiver structures. This study proposes a MIMO-PD system that linearizes the power amplifier response and compensates nonlinear crosstalk and IQ imbalance effects for each branch of the multiantenna system. Efficient recursive algorithms are presented to estimate the complete set of MIMO-PD coefficients, avoiding the high computational complexity of previous solutions based on least-squares estimation. The performance of the proposed MIMO-PD structure is validated by simulations of a two-transmit-antenna MIMO system. Error vector magnitude and adjacent channel power ratio are evaluated, showing significant improvement compared with conventional MIMO-PD systems.
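As background for PD models with memory, the sketch below identifies a memory-polynomial postdistorter by least squares in the indirect-learning arrangement; the model orders, toy PA nonlinearity, and the plain (non-recursive) least-squares fit are assumptions, not the recursive algorithms proposed in the study.

```python
import numpy as np

def mp_basis(x, K=5, M=3):
    """Memory-polynomial regressors phi_{k,m}(n) = x(n-m)|x(n-m)|^(k-1),
    odd orders only; K and M are assumed model orders."""
    N = len(x)
    cols = []
    for m in range(M):
        xm = np.concatenate([np.zeros(m, complex), x[:N - m]])
        for k in range(1, K + 1, 2):
            cols.append(xm * np.abs(xm) ** (k - 1))
    return np.column_stack(cols)

# indirect learning: fit a postdistorter from the PA output y back to the
# PA input x, then copy the coefficients as the predistorter
rng = np.random.default_rng(0)
x = (rng.normal(size=4000) + 1j * rng.normal(size=4000)) / np.sqrt(2)
y = x - 0.05 * x * np.abs(x) ** 2          # toy PA with mild cubic compression
coef, *_ = np.linalg.lstsq(mp_basis(y), x, rcond=None)
x_pd = mp_basis(x) @ coef                  # predistorted drive signal
```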
Kim, Byung Gon; Bae, Sung Hyun; Kim, Hoon; Chung, Yun C
2017-05-29
We propose and demonstrate a simple composite second-order (CSO) cancellation technique based on digital signal processing (DSP) for radio-over-fiber (RoF) transmission systems implemented using directly modulated lasers (DMLs). When the RoF transmission system is implemented using DMLs, its performance can be limited by the CSO distortions caused by the interplay between the DML's chirp and the fiber's chromatic dispersion. We present a theoretical analysis of these nonlinear distortions and show that they can be suppressed at the receiver by using a simple DSP. To verify the effectiveness of the proposed technique, we demonstrate the transmission of twenty-four 100-MHz filtered orthogonal frequency-division multiplexing (f-OFDM) signals in 64-quadrature amplitude modulation (QAM) format over 20 km of standard single-mode fiber (SSMF). The results show that, by using the proposed technique, we can suppress the CSO distortion components by >10 dB and achieve an error-vector magnitude better than 6% even after the 20-km SSMF transmission.
NASA Astrophysics Data System (ADS)
Ma, Jianxin; Wang, Zhao; Zheng, Guoli
2014-04-01
A novel lightwave-centralized full-duplex WDM-PON access network based on single-sideband optical orthogonal frequency-division multiplexing (SSB-OOFDM) is proposed for providing wired and 60-GHz-band wireless access alternately. At the OLT, multiple channels of 10-Gb/s 4-QAM RF-OFDM signals are SSB-modulated onto the optical local oscillators (OLOs). At the RN, one OOFDM signal along with two OLOs is extracted and switched to the corresponding HONU, where the signal can be downconverted to the 10-GHz or 60-GHz band RF-OFDM signal by one OLO for wired or wireless access, while the other OLO is used to bear the uplink signal. Since the HONU needs no light source of its own, system complexity and cost are reduced. Full-duplex transmission over 25 km of fiber has been demonstrated, with the error vector magnitude (EVM) of the down- and uplink signals well below the FEC limit for both the wired and 60-GHz-band wireless access services.
Lei, Yi; Li, Jianqiang; Fan, Yuting; Yu, Dawei; Fu, Songnian; Yin, Feifei; Dai, Yitang; Xu, Kun
2016-12-12
In this paper, we experimentally demonstrate space-division-multiplexed (SDM) transmission of IEEE 802.11ac-compliant 3-spatial-stream WLAN signals over 3 spatial modes of conventional 50-µm graded-index (GI) multimode fiber (MMF) employing a non-mode-selective 3D-waveguide photonic lantern. Two scenarios, fiber-only transmission and fiber-wireless hybrid transmission, were investigated by measuring the error vector magnitude (EVM) for each stream and the condition number (CN) of the channel matrix. The experimental results show that the SDM-based MMF link offers a well-conditioned MIMO channel (CN < 20 dB) over up to 1 km of fiber within 0-6 GHz, achieving EVMs as low as 2.38% and 2.97% for the 1-km MMF link at 2.4 GHz and 5.8 GHz, respectively, and 2.11% for a 200-m MMF link followed by 1 m of air distance at 2.7 GHz. These results indicate the possibility of distributing wireless MIMO signals over existing in-building, commercially available MMFs with enormous cost savings.
Topography-Guided Transepithelial Surface Ablation in the Treatment of Moderate to High Astigmatism.
Chen, Xiangjun; Stojanovic, Aleksandar; Simonsen, David; Wang, Xiaorui; Liu, Yanhua; Utheim, Tor Paaske
2016-06-01
To analyze the outcomes of treatment of astigmatism of 2.00 diopters (D) or greater with topography-guided transepithelial surface ablation. Retrospective analysis of a series of 206 eyes divided into two groups: myopic astigmatism (153 eyes) and mixed astigmatism (53 eyes). All cases were treated with topography-guided transepithelial surface ablation. Efficacy, safety, and predictability were evaluated, and vector analysis of cylindrical correction was performed. The median preoperative spherical equivalent was -2.63 and -0.63 D for the myopic and mixed astigmatism groups, respectively, with median cylinder of -2.50 D. Postoperative uncorrected distance visual acuity was 20/20 or better in 92% and 83% of eyes in the myopic and mixed astigmatism groups, respectively; the corresponding efficacy indices were 1.00 and 0.96 and residual astigmatism of 0.50 D or less was present in 82.4% and 56.7% of eyes in the myopic and mixed astigmatism groups, respectively. The arithmetic mean magnitude of the difference vector was 0.38 (myopic) and 0.65 (mixed) D. Difference vector magnitude was positively correlated with the magnitude of target induced astigmatism in both groups. The geometric mean coefficient of adjustment index was 1.04 and 1.19, representing undercorrection of 4% and 19% in the myopic and mixed astigmatism groups, respectively. Topography-guided transepithelial ablation is a safe, effective, and predictable treatment for moderate to high astigmatism. [J Refract Surg. 2016;32(6):418-425.]. Copyright 2016, SLACK Incorporated.
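The vector analysis quoted here uses double-angle astigmatism vectors; a minimal Alpins-style sketch with illustrative cylinders shows how the difference-vector magnitude is obtained.

```python
import numpy as np

def cyl_to_vector(magnitude_d, axis_deg):
    """Represent a cylinder as a double-angle vector (Alpins-style analysis)."""
    a = np.radians(2.0 * axis_deg)
    return magnitude_d * np.array([np.cos(a), np.sin(a)])

def difference_vector_magnitude(target_induced, surgically_induced):
    """DV = TIA - SIA; its magnitude is the astigmatic correction still needed."""
    dv = target_induced - surgically_induced
    return np.hypot(*dv)

TIA = cyl_to_vector(2.50, 90.0)     # intended correction (illustrative)
SIA = cyl_to_vector(2.20, 85.0)     # achieved correction (illustrative)
print(f"|DV| = {difference_vector_magnitude(TIA, SIA):.2f} D")
```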
Martin, James E.; Solis, Kyle Jameson
2015-11-09
It has recently been reported that two types of triaxial electric or magnetic fields can drive vorticity in dielectric or magnetic particle suspensions, respectively. The first type, symmetry-breaking rational fields, consists of three mutually orthogonal fields, two alternating and one dc; the second type, rational triads, consists of three mutually orthogonal alternating fields. In each case it can be shown through experiment and theory that the fluid vorticity vector is parallel to one of the three field components. For any given set of field frequencies this axis is invariant, but the sign and magnitude of the vorticity (at constant field strength) can be controlled by the phase angles of the alternating components and, at least for some symmetry-breaking rational fields, the direction of the dc field. In short, the locus of possible vorticity vectors is a 1-d set that is symmetric about zero and is along a field direction. In this paper we show that continuous, 3-d control of the vorticity vector is possible by progressively transitioning the field symmetry through a dc bias applied along one of the principal axes. Such biased rational triads are a combination of symmetry-breaking rational fields and rational triads. A surprising aspect of these transitions is that the locus of possible vorticity vectors for any given field bias is extremely complex, encompassing all three spatial dimensions. As a result, the evolution of a vorticity vector as the dc bias is increased is complex, with large components occurring along unexpected directions. More remarkable are the elaborate vorticity vector orbits that occur when one or more of the field frequencies are detuned. These orbits provide the basis for highly effective mixing strategies wherein the vorticity axis periodically explores a range of orientations and magnitudes.
NASA Astrophysics Data System (ADS)
Karimi, Kurosh; Shirzaditabar, Farzad
2017-08-01
The analytic signal of the magnitude of the magnetic field components and of its first derivatives has been employed for locating magnetic structures that can be treated as point dipoles or lines of dipoles. Although similar methods have been used to locate such magnetic anomalies, they cannot estimate the positions of anomalies in noisy conditions with acceptable accuracy, and they are also inexact in determining the depth of deep anomalies. In noisy cases, and in places other than the magnetic poles, the maximum points of the magnitude of the magnetic vector components and of Az are not located exactly above 3D bodies; consequently, the horizontal location estimates of bodies are subject to error. Here, the previous methods are modified and generalized to locate deeper models in the presence of noise, even at lower magnetic latitudes. In addition, a statistical technique is presented for working in noisy areas, and a new 'depths mean' method that is resistant to noise is introduced. Reduction-to-the-pole transformation is also used to find the most probable horizontal body location. Deep models are also estimated well. The method is tested on real magnetic data over an urban gas pipeline in the vicinity of Kermanshah province, Iran; the estimated location of the pipeline agrees with the result of the half-width method.
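As context for this family of methods, the quantity they build on is the analytic signal amplitude of a gridded anomaly, which combines its three first derivatives; maxima of this amplitude cluster near the source, which is why noise that shifts the maxima translates directly into horizontal position error. Below is a minimal NumPy sketch, assuming the vertical derivative has already been computed (e.g., by an FFT-based step); names and the generic formula are illustrative, not the authors' exact implementation.

```python
import numpy as np

def analytic_signal_magnitude(T, dTdz, dx, dy):
    """|A(x, y)| = sqrt((dT/dx)^2 + (dT/dy)^2 + (dT/dz)^2) on a regular grid.

    T      : 2-D array of the gridded magnetic anomaly
    dTdz   : vertical derivative of T (assumed precomputed)
    dx, dy : grid spacings
    """
    dTdy, dTdx = np.gradient(T, dy, dx)  # np.gradient differentiates axis 0 (y) first
    return np.sqrt(dTdx**2 + dTdy**2 + dTdz**2)
```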
The bee's map of the e-vector pattern in the sky.
Rossel, S; Wehner, R
1982-07-01
It has long been known that bees can use the pattern of polarized light in the sky as a compass cue even if they can see only a small part of the whole pattern. How they solve this problem has remained enigmatic. Here we show that the bees rely on a generalized celestial map that is used invariably throughout the day. We reconstruct this map by analyzing the navigation errors made by bees to which single e-vectors are displayed. In addition, we demonstrate how the bee's celestial map can be derived from the e-vector patterns in the sky.
Natung, Tanie; Taye, Trishna; Lyngdoh, Laura Amanda; Dkhar, Begonia; Hajong, Ranendra
2017-01-01
Purpose: To determine the magnitude and pattern of refractive errors among patients attending the ophthalmology department of a new medical college in North-East India. Materials and Methods: A prospective study of new patients (age ≥5 years) who were phakic and whose unaided visual acuities were worse than 20/20 but improved with pinhole was done. Complete ophthalmic examination and refraction with appropriate cycloplegia for age were done for the 4582 eligible patients. Spherical equivalents (SE) of the refractive errors of the right eyes were used for analysis. Results: Of the 4582 eligible patients, 2546 (55.56%) had refractive errors. The proportions of emmetropia (SE −0.50 to +0.50 diopter sphere [DS]), myopia (SE <−0.50 DS), high myopia (SE >−5.0 DS), and hypermetropia (>+0.50 DS for adults and >+2.0 DS for children) were 53.1%, 27.4%, 2.6%, and 16.9%, respectively. The proportion of hyperopia increased up to 59 years of age and then decreased (P = 0.000). The proportions of myopia and high myopia decreased significantly with age after 39 years (P = 0.000 and P = 0.004, respectively). Of the 1510 patients with astigmatism, 17% had with-the-rule (WTR), 23.4% had against-the-rule (ATR), and 19% had oblique astigmatism. The proportions of WTR and ATR astigmatism significantly decreased (P = 0.000) and increased (P = 0.000) with age, respectively. Conclusions: This study has provided the magnitude and pattern of refractive errors in the study population. It will serve as the initial step for conducting community-based studies on the prevalence of refractive errors in this part of the country, since such data are lacking from this region. Moreover, this study will help primary care physicians gain an overview of the magnitude and pattern of refractive errors presenting to a health-care center, as refractive error is an established and significant public health problem worldwide. PMID:29417005
Astrometry of the h and χ Persei clusters based on the processing of digitized photographic plates
NASA Astrophysics Data System (ADS)
Muminov, Muydin; Yuldoshev, Qudrat; Ehgamberdiev, Shukhrat; Kahharov, Bakhtiyor; Relke, Helena; Protsyuk, Yury; Pakuliak, Ludmila; Andruk, Vitaly
2017-01-01
The work was carried out to ascertain the suitability of the Epson Expression 10000XL scanner of the Astronomical Institute of the Academy of Sciences of Uzbekistan for astrometric and photometric work. Photographic plates were obtained with the normal astrograph of the Astronomical Institute (D/F = 330mm/3467mm, M = 59.56 "/mm). The digitizing of photographic plates with linear dimensions of 16 × 16 cm was made with a spatial resolution of 1200 dpi (1px = 1.25"). For this study, images of the first (1935.0) and second (1976.9) epochs were used in a sky area of 4 sq. degrees containing the χ and h Persei open clusters. Positions and B-magnitudes of the stars were obtained in the system of the TYCHO2 reference catalogue. The errors of the differences in positions and proper motions for the 655 reference stars used for the astrometric reduction are σ_{αδ} = ±0.074" and σ_{μαδ} = ±0.0018"/year, respectively. The internal photometric errors σ_m are ±0.065^m. The comparison of the determined B-magnitudes with the B-magnitudes of TYCHO2 gave the error value σ_B = ±0.208^m. The comparison of 8123 common stars down to B ≤ 17.5^m with UCAC4 gave the error values σ_{αδ} = ±0.28", σ_{μαδ} = ±0.0075"/year and σ_m = ±0.139^m for positions, proper motions and stellar magnitudes, respectively.
Massively Parallel Solution of Poisson Equation on Coarse Grain MIMD Architectures
NASA Technical Reports Server (NTRS)
Fijany, A.; Weinberger, D.; Roosta, R.; Gulati, S.
1998-01-01
In this paper a new algorithm, designated the Fast Invariant Imbedding algorithm, for the solution of the Poisson equation on vector and massively parallel MIMD architectures is presented. This algorithm achieves the same optimal computational efficiency as other fast Poisson solvers while offering a much better structure for vector and parallel implementation. Our implementation on the Intel Delta and Paragon shows that a speedup of over two orders of magnitude can be achieved even for moderate-size problems.
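The invariant-imbedding algorithm itself is not reproduced in the abstract; for context, the sketch below shows a standard member of the fast-Poisson-solver family it is compared against, a discrete-sine-transform solver on a square with zero Dirichlet boundaries. This is a generic textbook scheme, not the paper's method.

```python
import numpy as np
from scipy.fft import dstn, idstn

def fast_poisson_dirichlet(f, h):
    """Solve -(u_xx + u_yy) = f on a square, zero Dirichlet boundaries.

    f : (n, m) right-hand side at interior grid points; h : grid spacing.
    The DST-I diagonalizes the 5-point Laplacian, so the solve is O(n m log(n m)).
    """
    n, m = f.shape
    F = dstn(f, type=1)
    i = np.arange(1, n + 1)[:, None]
    j = np.arange(1, m + 1)[None, :]
    lam = (2 - 2 * np.cos(np.pi * i / (n + 1))) + (2 - 2 * np.cos(np.pi * j / (m + 1)))
    return idstn(F * h**2 / lam, type=1)

# Quick check against u = sin(pi x) sin(pi y), for which -lap(u) = 2 pi^2 u:
n = 63; h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
X, Y = np.meshgrid(x, x, indexing="ij")
u_exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
u = fast_poisson_dirichlet(2 * np.pi**2 * u_exact, h)
assert np.max(np.abs(u - u_exact)) < 1e-2
```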
The role of model dynamics in ensemble Kalman filter performance for chaotic systems
Ng, G.-H.C.; McLaughlin, D.; Entekhabi, D.; Ahanin, A.
2011-01-01
The ensemble Kalman filter (EnKF) is susceptible to losing track of observations, or 'diverging', when applied to large chaotic systems such as atmospheric and ocean models. Past studies have demonstrated the adverse impact of sampling error during the filter's update step. We examine how system dynamics affect EnKF performance, and whether the absence of certain dynamic features in the ensemble may lead to divergence. The EnKF is applied to a simple chaotic model, and ensembles are checked against singular vectors of the tangent linear model (corresponding to short-term growth) and Lyapunov vectors (corresponding to long-term growth). Results show that the ensemble strongly aligns itself with the subspace spanned by unstable Lyapunov vectors. Furthermore, the filter avoids divergence only if the full linearized long-term unstable subspace is spanned. However, short-term dynamics also become important as non-linearity in the system increases. Non-linear movement prevents errors in the long-term stable subspace from decaying indefinitely. If these errors then undergo linear intermittent growth, a small ensemble may fail to properly represent all important modes, causing filter divergence. A combination of long- and short-term growth dynamics is thus critical to EnKF performance. These findings can help in developing practical robust filters based on model dynamics. © 2011 The Authors. Tellus A © 2011 John Wiley & Sons A/S.
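For concreteness, a minimal stochastic EnKF analysis step, the step whose sampling error is at issue above, is sketched below; the ensemble X would come from propagating the chaotic model, which is omitted here, and `rng` is assumed to be a NumPy generator (e.g., `np.random.default_rng(0)`).

```python
import numpy as np

def enkf_update(X, y, H, R, rng):
    """Stochastic EnKF analysis step.

    X : (n, N) ensemble of forecast states; y : (m,) observation;
    H : (m, n) observation operator; R : (m, m) observation covariance.
    """
    n, N = X.shape
    A = X - X.mean(axis=1, keepdims=True)            # ensemble anomalies
    Pf = A @ A.T / (N - 1)                           # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, N).T  # perturbed obs
    return X + K @ (Y - H @ X)
```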
Chen, Xiyuan; Wang, Xiying; Xu, Yuan
2014-01-01
This paper deals with the problem of state estimation for the vector-tracking loop of a software-defined Global Positioning System (GPS) receiver. For a nonlinear system subject to model error and white Gaussian noise, a noise statistics estimator is used to estimate the model error, and on this basis a modified iterated extended Kalman filter (IEKF), named the adaptive iterated Kalman filter (AIEKF), is proposed. A vector-tracking GPS receiver utilizing the AIEKF is implemented to evaluate the performance of the proposed method. Road tests show that the proposed method has a clear accuracy advantage over the IEKF and the adaptive extended Kalman filter (AEKF) in position determination, and that it effectively reduces the root-mean-square error (RMSE) of position (longitude, latitude and altitude). Compared with the EKF, the position RMSE values of the AIEKF are reduced by about 45.1%, 40.9% and 54.6% in the east, north and up directions, respectively. Compared with the IEKF, they are reduced by about 25.7%, 19.3% and 35.7%, and compared with the AEKF, by about 21.6%, 15.5% and 30.7%, in the east, north and up directions, respectively. PMID:25502124
Bayesian statistics applied to the location of the source of explosions at Stromboli Volcano, Italy
Saccorotti, G.; Chouet, B.; Martini, M.; Scarpa, R.
1998-01-01
We present a method for determining the location and spatial extent of the source of explosions at Stromboli Volcano, Italy, based on a Bayesian inversion of the slowness vector derived from frequency-slowness analyses of array data. The method searches for source locations that minimize the error between the expected and observed slowness vectors. For a given set of model parameters, the conditional probability density function of slowness vectors is approximated by a Gaussian distribution of expected errors. The method is tested with synthetics using a five-layer velocity model derived for the north flank of Stromboli and a smoothed velocity model derived from a power-law approximation of the layered structure. Application to data from Stromboli allows for a detailed examination of uncertainties in source location due to experimental errors and incomplete knowledge of the Earth model. Although the solutions are not constrained in the radial direction, excellent resolution is achieved in both transverse and depth directions. Under the assumption that the horizontal extent of the source does not exceed the crater dimension, the 90% confidence region in the estimate of the explosive source location corresponds to a small volume extending from a depth of about 100 m to a maximum depth of about 300 m beneath the active vents, with a maximum likelihood source region located in the 120- to 180-m-depth interval.
PREDICTION OF SOLAR FLARE SIZE AND TIME-TO-FLARE USING SUPPORT VECTOR MACHINE REGRESSION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boucheron, Laura E.; Al-Ghraibah, Amani; McAteer, R. T. James
We study the prediction of solar flare size and time-to-flare using 38 features describing the magnetic complexity of the photospheric magnetic field. This work uses support vector regression to formulate a mapping from the 38-dimensional feature space to a continuous-valued label vector representing flare size or time-to-flare. When we consider flaring regions only, we find an average error in estimating flare size of approximately half a Geostationary Operational Environmental Satellite (GOES) class. When we additionally consider non-flaring regions, we find an increased average error of approximately three-fourths of a GOES class. We also consider thresholding the regressed flare size for the experiment containing both flaring and non-flaring regions and find a true positive rate of 0.69 and a true negative rate of 0.86 for flare prediction. The results for both of these size regression experiments are consistent across a wide range of predictive time windows, indicating that the magnetic complexity features may be persistent in appearance long before flare activity. This is supported by our larger error rates, of some 40 hr, in the time-to-flare regression problem. The 38 magnetic complexity features considered here appear to have discriminative potential for flare size, but their persistence in time makes them less discriminative for the time-to-flare problem.
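A minimal version of the regression setup described (a 38-dimensional magnetic-complexity feature vector mapped to a continuous flare-size label with support vector regression) might look like the following; the synthetic data and hyperparameters are placeholders, not the paper's.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 38))                       # stand-in complexity features
y = 0.5 * X[:, 0] + rng.normal(scale=0.3, size=500)  # stand-in flare-size label

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X, y)
size_pred = model.predict(X[:5])
# Thresholding size_pred turns the regressor into the flare/no-flare classifier
# evaluated via the true positive / true negative rates quoted above.
```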
Evaluating and improving the representation of heteroscedastic errors in hydrological models
NASA Astrophysics Data System (ADS)
McInerney, D. J.; Thyer, M. A.; Kavetski, D.; Kuczera, G. A.
2013-12-01
Appropriate representation of residual errors in hydrological modelling is essential for accurate and reliable probabilistic predictions. In particular, residual errors of hydrological models are often heteroscedastic, with large errors associated with high rainfall and runoff events. Recent studies have shown that a weighted least squares (WLS) approach, where the magnitude of the residuals is assumed to be linearly proportional to the magnitude of the flow, captures some of this heteroscedasticity. In this study we explore a range of Bayesian approaches for improving the representation of heteroscedasticity in residual errors. We compare several improved formulations of the WLS approach, the well-known Box-Cox transformation, and the more recent log-sinh transformation. Our results confirm that these approaches are able to stabilize the residual error variance, and that it is possible to improve the representation of heteroscedasticity compared with the linear WLS approach. We also find generally good performance of the Box-Cox and log-sinh transformations, although, as indicated in earlier publications, the Box-Cox transform sometimes produces unrealistically large prediction limits. Our work explores the trade-offs between these different uncertainty characterization approaches, investigates how their performance varies across diverse catchments and models, and recommends practical approaches suitable for large-scale applications.
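The two transformations compared with the WLS approach can be written in a few lines; in the sketch below, residuals would be formed in transformed space so that their variance is (approximately) stabilized. Parameter names follow common usage and are assumptions, not taken from the paper.

```python
import numpy as np

def box_cox(q, lam):
    """Box-Cox transform of flows q; lam = 0 reduces to log(q)."""
    return np.log(q) if lam == 0 else (q**lam - 1.0) / lam

def log_sinh(q, a, b):
    """log-sinh transform, z = log(sinh(a + b q)) / b."""
    return np.log(np.sinh(a + b * q)) / b

# Residual errors are then z(q_obs) - z(q_sim), which can be checked for
# homoscedasticity, e.g., by plotting them against the simulated flow.
```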
Comparing the TYCHO Catalogue with CCD Astrograph Observations
NASA Astrophysics Data System (ADS)
Zacharias, N.; Hoeg, E.; Urban, S. E.; Corbin, T. E.
1997-08-01
Selected fields around radio-optical reference frame sources have been observed with the U.S. Naval Observatory CCD astrograph (UCA). This telescope is equipped with a red-corrected 206 mm five-element lens and a 4k × 4k CCD camera which provides a 1 square degree field of view. Positions with internal precisions of 20 mas for stars in the 7 to 12 magnitude range have been obtained with 30-second exposures. A comparison is made with the Tycho Catalogue, which is accurate to about 5 to 50 mas at the mean epoch J1991.25, depending on the magnitude of the star. Preliminary proper motions are obtained using the Astrographic Catalogue (AC) to update the Tycho positions to the epoch of the UCA observations, which adds an error contribution of about 15 to 20 mas. Individual CCD frames have been reduced with an average of 30 Tycho reference stars per frame. A linear plate model gives an average adjustment standard error of 46 mas, consistent with the internal errors. The UCA is capable of significantly improving the positions of Tycho stars fainter than about visual magnitude 9.5.
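The 'linear plate model' adjustment mentioned above is an ordinary six-constant least-squares fit; a minimal sketch, with hypothetical array names, follows.

```python
import numpy as np

def linear_plate_model(xy, XY):
    """Fit xi = a x + b y + c and eta = d x + e y + f by least squares.

    xy : (k, 2) measured plate coordinates of reference stars
    XY : (k, 2) their standard (tangent-plane) coordinates from the catalogue
    Returns the two coefficient triples and the adjustment standard error.
    """
    A = np.column_stack([xy[:, 0], xy[:, 1], np.ones(len(xy))])
    cx, *_ = np.linalg.lstsq(A, XY[:, 0], rcond=None)
    cy, *_ = np.linalg.lstsq(A, XY[:, 1], rcond=None)
    resid = np.column_stack([A @ cx - XY[:, 0], A @ cy - XY[:, 1]])
    return cx, cy, resid.std(ddof=3)
```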
A review of setup error in supine breast radiotherapy using cone-beam computed tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Batumalai, Vikneswary, E-mail: Vikneswary.batumalai@sswahs.nsw.gov.au; Liverpool and Macarthur Cancer Therapy Centres, New South Wales; Ingham Institute of Applied Medical Research, Sydney, New South Wales
2016-10-01
Setup error in breast radiotherapy (RT) measured with 3-dimensional cone-beam computed tomography (CBCT) is becoming more common. The purpose of this study is to review the literature relating to the magnitude of setup error in breast RT measured with CBCT. The different methods of image registration between CBCT and the planning computed tomography (CT) scan were also explored. A literature search, not limited by date, was conducted using Medline and Google Scholar with the following key words: breast cancer, RT, setup error, and CBCT. This review includes studies that reported on systematic and random errors, and the methods used when registering CBCT scans with the planning CT scan. A total of 11 relevant studies were identified for inclusion in this review. The average magnitude of error is generally less than 5 mm across the studies reviewed. The common registration methods used when registering CBCT scans with the planning CT scan are based on bony anatomy, soft tissue, and surgical clips. No clear relationships between the setup errors detected and the methods of registration were observed in this review. Further studies are needed to assess the benefit of CBCT over electronic portal imaging, as CBCT remains unproven to be of wide benefit in breast RT.
Acetaminophen attenuates error evaluation in cortex.
Randles, Daniel; Kam, Julia W Y; Heine, Steven J; Inzlicht, Michael; Handy, Todd C
2016-06-01
Acetaminophen has recently been recognized as having impacts that extend into the affective domain. In particular, double-blind placebo-controlled trials have revealed that acetaminophen reduces the magnitude of reactivity to social rejection, frustration, dissonance, and to both negatively and positively valenced attitude objects. Given this diversity of consequences, it has been proposed that the psychological effects of acetaminophen may reflect a widespread blunting of evaluative processing. We tested this hypothesis using event-related potentials (ERPs). Sixty-two participants received acetaminophen or a placebo in a double-blind protocol and completed the Go/NoGo task. Participants' ERPs were observed following errors on the Go/NoGo task, in particular the error-related negativity (ERN; measured at FCz) and the error-related positivity (Pe; measured at Pz and CPz). Results show that acetaminophen inhibits the Pe, but not the ERN, and that the magnitude of an individual's Pe correlates positively with omission errors, partially mediating the effects of acetaminophen on the error rate. These results suggest that the recently documented affective blunting caused by acetaminophen may best be described as an inhibition of evaluative processing. They also contribute to the growing work suggesting that the Pe is more strongly associated with conscious awareness of errors than the ERN. © The Author (2016). Published by Oxford University Press.
NASA Astrophysics Data System (ADS)
Wu, X.; Heflin, M. B.; Schotman, H.; Vermeersen, B. L.; Dong, D.; Gross, R. S.; Ivins, E. R.; Moore, A. W.; Owen, S. E.
2009-12-01
Separating the geodetic signatures of the present-day surface mass trend and Glacial Isostatic Adjustment (GIA) requires multiple data types of different physical characteristics. We take a kinematic approach to the global simultaneous estimation problem. Three sets of global spherical harmonic coefficients from degree 1 to 60 (the present-day surface mass trend and the vertical and horizontal GIA-induced surface velocity fields), as well as the rotation vectors of 15 major tectonic plates, are solved for. The estimation is carried out using the GRACE geoid trend, 3-dimensional velocities measured at 664 SLR/VLBI/GPS sites, and the data-assimilated JPL ECCO ocean model. The ICE-5G/IJ05 (VM2) predictions are used as the a priori GIA mean model. An a priori covariance matrix is constructed in the spherical harmonic domain for the GIA model by propagating the covariance matrices of random and geographically correlated ice thickness errors and upper/lower mantle viscosity errors, so that the resulting magnitude and geographic pattern of the geoid uncertainties roughly reflect the difference between two recent GIA models. Unprecedented high-precision results are achieved. For example, geocenter velocities due to the present-day surface mass trend and due to GIA are both determined to uncertainties of better than 0.1 mm/yr without using direct geodetic geocenter information. The information content of the data sets, future improvements, and benefits from new data will also be explored in the global inverse framework.
Wang, Huai-Yung; Chi, Yu-Chieh; Lin, Gong-Ru
2016-08-08
A novel millimeter-wave radio-over-fiber (MMW-RoF) link at a carrier frequency of 35 GHz is proposed, using remotely beating MMW generation from reference master and injected slave colorless laser diode (LD) carriers under orthogonally polarized dual-wavelength injection-locking. The slave colorless LD supports lasing one of the dual-wavelength master modes with orthogonal polarizations, which facilitates single-mode direct modulation of the quadrature amplitude modulation (QAM) orthogonal frequency division multiplexing (OFDM) data. Such injected single-carrier encoding and coupled dual-carrier transmission with orthogonal polarization effectively suppresses the cross-heterodyne mode-beating intensity noise, the nonlinear modulation (NLM), and the four-wave mixing (FWM) sidemodes during injection locking and fiber transmission. In a 25-km single-mode fiber (SMF) based wireline system, the dual carrier under single-mode encoding provides baseband 24-Gbit/s 64-QAM OFDM transmission with an error vector magnitude (EVM) of 8.8%, a bit error rate (BER) of 3.7 × 10^-3, and a power penalty of <1.5 dB. After remote self-beating for wireless transmission, the beat MMW carrier at 35 GHz can deliver passband 16-QAM OFDM at 4 Gbit/s, with a corresponding EVM and BER of 15.5% and 1.4 × 10^-3, respectively, after 25-km SMF and 1.6-m free-space transmission.
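Since the link quality here is quoted as an error vector magnitude, the sketch below shows the usual RMS-normalized EVM computation over demodulated QAM symbols; note that normalization conventions (RMS versus peak constellation power) vary between standards, so this is one common form rather than necessarily the one used in the paper.

```python
import numpy as np

def evm_percent(rx_symbols, ref_symbols):
    """RMS error vector magnitude in percent, normalized to RMS reference power."""
    err_power = np.mean(np.abs(rx_symbols - ref_symbols) ** 2)
    ref_power = np.mean(np.abs(ref_symbols) ** 2)
    return 100.0 * np.sqrt(err_power / ref_power)

# Toy example: 16-QAM symbols with additive complex noise
rng = np.random.default_rng(1)
levels = np.array([-3.0, -1.0, 1.0, 3.0])
ref = rng.choice(levels, 10000) + 1j * rng.choice(levels, 10000)
rx = ref + 0.2 * (rng.normal(size=10000) + 1j * rng.normal(size=10000))
print(evm_percent(rx, ref))  # roughly 9% for this noise level
```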
NASA Astrophysics Data System (ADS)
Saenz, Daniel L.; Kim, Hojin; Chen, Josephine; Stathakis, Sotirios; Kirby, Neil
2016-09-01
The primary purpose of the study was to determine how detailed deformable image registration (DIR) phantoms need to be to adequately simulate human anatomy and accurately assess the quality of DIR algorithms. In particular, how many distinct tissues are required in a phantom to simulate complex human anatomy? Pelvis and head-and-neck patient CT images were used for this study as virtual phantoms. Two data sets from each site were analyzed. The virtual phantoms were warped to create two pairs consisting of undeformed and deformed images. Otsu's method was employed to create additional segmented image pairs of n distinct soft tissue CT number ranges (fat, muscle, etc.). A realistic noise image was added to each image. Deformations were applied in MIM Software (MIM) and Velocity deformable multi-pass (DMP) and compared with the known warping. Images with more simulated tissue levels exhibit more contrast, enabling more accurate results. Deformation error (the magnitude of the vector difference between the known and predicted deformations) was used as a metric to evaluate how many CT number gray levels are needed for a phantom to serve as a realistic patient proxy. Stabilization of the mean deformation error was reached by three soft tissue levels for Velocity DMP and MIM, though MIM exhibited a persisting difference in accuracy between the discrete images and the unprocessed image pair. A minimum detail of three levels allows a realistic patient proxy for use with the Velocity and MIM deformation algorithms.
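The deformation-error metric used for the stabilization analysis is simply the voxel-wise length of the difference between displacement fields; a minimal sketch follows (array names are illustrative).

```python
import numpy as np

def deformation_error(dvf_known, dvf_predicted):
    """Voxel-wise magnitude of the vector difference between the known and the
    DIR-predicted displacement vector fields; both arrays have shape (..., 3)."""
    return np.linalg.norm(dvf_predicted - dvf_known, axis=-1)

# mean_err = deformation_error(known, predicted).mean()
# Stabilization of mean_err as tissue levels are added sets the required detail.
```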
An algorithm for targeting finite burn maneuvers
NASA Technical Reports Server (NTRS)
Barbieri, R. W.; Wyatt, G. H.
1972-01-01
An algorithm was developed to solve the following problem: given the characteristics of the engine to be used to make a finite burn maneuver and given the desired orbit, when must the engine be ignited and what must be the orientation of the thrust vector so as to obtain the desired orbit? The desired orbit is characterized by classical elements and functions of these elements whereas the control parameters are characterized by the time to initiate the maneuver and three direction cosines which locate the thrust vector. The algorithm was built with a Monte Carlo capability whereby samples are taken from the distribution of errors associated with the estimate of the state and from the distribution of errors associated with the engine to be used to make the maneuver.
[Gene therapy for the treatment of inborn errors of metabolism].
Pérez-López, Jordi
2014-06-16
Because of the enzymatic defect in inborn errors of metabolism, there is a blockage in the metabolic pathways and an accumulation of toxic metabolites. Currently available therapies include dietary restriction, enhancement of alternative metabolic pathways, and replacement of the deficient enzyme by cell transplantation, liver transplantation, or administration of the purified enzyme. Gene therapy, in which a vector is used to transfer a correct copy of the altered gene into the body, is emerging as a promising treatment. However, the difficulty the currently used vectors have in crossing the blood-brain barrier, the immune response, cellular toxicity, and potential oncogenesis are limitations that could greatly restrict its clinical application in human beings. Copyright © 2013 Elsevier España, S.L. All rights reserved.
Development of a two-dimensional dual pendulum thrust stand for Hall thrusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nagao, N.; Yokota, S.; Komurasaki, K.
A two-dimensional dual pendulum thrust stand was developed to measure the thrust vectors (axial and horizontal (transverse) direction thrusts) of a Hall thruster. A thruster with a steering mechanism is mounted on the inner pendulum, and thrust is measured from the displacement between the inner and outer pendulums, by which the thermal drift effect is canceled out. Two crossover knife-edges support each pendulum arm: one is set on the other at a right angle. They enable the pendulums to swing in two directions. Thrust calibration using a pulley and weight system showed that the measurement errors were less than 0.25 mN (1.4%) in the main thrust direction and 0.09 mN (1.4%) in its transverse direction. The thrust angle of the thrust vector was measured with the stand using the thruster. Consequently, a vector deviation from the main thrust direction of ±2.3° was measured with an error of ±0.2° under typical operating conditions for the thruster.
Adaptive h-refinement for reduced-order models
Carlberg, Kevin T.
2014-11-05
Our work presents a method to adaptively refine reduced-order models a posteriori without requiring additional full-order-model solves. The technique is analogous to mesh-adaptive h-refinement: it enriches the reduced-basis space online by 'splitting' a given basis vector into several vectors with disjoint support. The splitting scheme is defined by a tree structure constructed offline via recursive k-means clustering of the state variables using snapshot data. This method identifies the vectors to split online using a dual-weighted-residual approach that aims to reduce error in an output quantity of interest. The resulting method generates a hierarchy of subspaces online without requiring large-scale operations or full-order-model solves. Furthermore, it enables the reduced-order model to satisfy any prescribed error tolerance regardless of its original fidelity, as a completely refined reduced-order model is mathematically equivalent to the original full-order model. Experiments on a parameterized inviscid Burgers equation highlight the ability of the method to capture phenomena (e.g., moving shocks) not contained in the span of the original reduced basis.
Gold, Peter O.; Cowgill, Eric; Kreylos, Oliver; Gold, Ryan D.
2012-01-01
Three-dimensional (3D) slip vectors recorded by displaced landforms are difficult to constrain across complex fault zones, and the uncertainties associated with such measurements become increasingly challenging to assess as landforms degrade over time. We approach this problem from a remote sensing perspective by using terrestrial laser scanning (TLS) and 3D structural analysis. We have developed an integrated TLS data collection and point-based analysis workflow that incorporates accurate assessments of aleatoric and epistemic uncertainties using experimental surveys, Monte Carlo simulations, and iterative site reconstructions. Our scanning workflow and equipment requirements are optimized for single-operator surveying, and our data analysis process is largely completed using new point-based computing tools in an immersive 3D virtual reality environment. In a case study, we measured slip vector orientations at two sites along the rupture trace of the 1954 Dixie Valley earthquake (central Nevada, United States), yielding measurements that are the first direct constraints on the 3D slip vector for this event. These observations are consistent with a previous approximation of net extension direction for this event. We find that errors introduced by variables in our survey method result in <2.5 cm of variability in components of displacement, and are eclipsed by the 10–60 cm epistemic errors introduced by reconstructing the field sites to their pre-erosion geometries. Although the higher resolution TLS data sets enabled visualization and data interactivity critical for reconstructing the 3D slip vector and for assessing uncertainties, dense topographic constraints alone were not sufficient to significantly narrow the wide (<26°) range of allowable slip vector orientations that resulted from accounting for epistemic uncertainties.
NASA Astrophysics Data System (ADS)
Lavergne, T.; Eastwood, S.; Teffah, Z.; Schyberg, H.; Breivik, L.-A.
2010-10-01
The retrieval of sea ice motion with the Maximum Cross-Correlation (MCC) method from low-resolution (10-15 km) spaceborne imaging sensors is challenged by a dominating quantization noise as the time span of displacement vectors is shortened. To allow investigating shorter displacements from these instruments, we introduce an alternative sea ice motion tracking algorithm that builds on the MCC method but relies on a continuous optimization step for computing the motion vector. The prime effect of this method is to effectively dampen the quantization noise, an artifact of the MCC. It allows for retrieving spatially smooth 48 h sea ice motion vector fields in the Arctic. Strategies to detect and correct erroneous vectors as well as to optimally merge several polarization channels of a given instrument are also described. A test processing chain is implemented and run with several active and passive microwave imagers (Advanced Microwave Scanning Radiometer-EOS (AMSR-E), Special Sensor Microwave Imager, and Advanced Scatterometer) during three Arctic autumn, winter, and spring seasons. Ice motion vectors are collocated to and compared with GPS positions of in situ drifters. Error statistics are shown to be ranging from 2.5 to 4.5 km (standard deviation for components of the vectors) depending on the sensor, without significant bias. We discuss the relative contribution of measurement and representativeness errors by analyzing monthly validation statistics. The 37 GHz channels of the AMSR-E instrument allow for the best validation statistics. The operational low-resolution sea ice drift product of the EUMETSAT OSI SAF (European Organisation for the Exploitation of Meteorological Satellites Ocean and Sea Ice Satellite Application Facility) is based on the algorithms presented in this paper.
Gibbs, P E; Kilbey, B J; Banerjee, S K; Lawrence, C W
1993-05-01
We have compared the mutagenic properties of a T-T cyclobutane dimer in baker's yeast, Saccharomyces cerevisiae, with those in Escherichia coli by transforming each of these species with the same single-stranded shuttle vector carrying either the cis-syn or the trans-syn isomer of this UV photoproduct at a unique site. The mutagenic properties investigated were the frequency of replicational bypass of the photoproduct, the error rate of bypass, and the mutation spectrum. In SOS-induced E. coli, the cis-syn dimer was bypassed in approximately 16% of the vector molecules, and 7.6% of the bypass products had targeted mutations. In S. cerevisiae, however, bypass occurred in about 80% of these molecules, and the bypass was at least 19-fold more accurate (approximately 0.4% targeted mutations). Each of these yeast mutations was a single unique event, and none were like those in E. coli, suggesting that in fact the difference in error rate is much greater. Bypass of the trans-syn dimer occurred in about 17% of the vector molecules in both species, but with this isomer the error rate was higher in S. cerevisiae (21 to 36% targeted mutations) than in E. coli (13%). However, the spectra of mutations induced by the latter photoproduct were virtually identical in the two organisms. We conclude that bypass and error frequencies are determined both by the structure of the photoproduct-containing template and by the particular replication proteins concerned but that the types of mutations induced depend predominantly on the structure of the template. Unlike E. coli, bypass in S. cerevisiae did not require UV-induced functions.
Support of Mark III Optical Interferometer
1988-11-01
[Abstract text garbled in the source. Recoverable fragments concern pointing-error and low-visibility-pedestal measurements against the surface of a zerodur sphere attached to a mirror, the stellar interferometers at Mt. Wilson Observatory, and the fixed mirrors that direct light from the two siderostats toward the central building while preserving the polarization vectors.]
Defense Mapping Agency (DMA) Raster-to-Vector Analysis
1984-11-30
[Abstract text garbled in the source. Recoverable fragments describe using a model to pinpoint critical deficiencies and understand trade-offs between alternative solutions; note that the raster-to-vector process is prone to errors (e.g., human operator eye/motor-control limitations) and is time consuming as a function of data density; and state that correction is achieved through computer interactive graphics, with each error or anomaly individually identified and corrected by a human operator.]
NASA Astrophysics Data System (ADS)
Byun, Do-Seong; Hart, Deirdre E.
2017-04-01
Regional and/or coastal ocean models can use tidal current harmonic forcing, together with tidal harmonic forcing along open boundaries, to successfully simulate tides and tidal currents. These inputs can be freely generated using online open-access data, but the data produced are not always at the resolution required for regional or coastal models. Subsequent interpolation procedures can produce tidal current forcing data errors for parts of the world's coastal ocean where tidal ellipse inclinations and phases move across the invisible mathematical "boundaries" between 359° and 0° (or 179° and 0°). In nature, such "boundaries" are in fact smooth transitions, but if these mathematical "boundaries" are not treated correctly during interpolation, they can produce inaccurate input data and hamper the accurate simulation of tidal currents in regional and coastal ocean models. These avoidable errors arise due to procedural shortcomings involving vector embodiment problems (i.e., how a vector is represented mathematically, for example as velocities or as coordinates). Automated solutions for producing correct tidal ellipse parameter input data are possible if a series of steps are followed correctly, including the use of Cartesian coordinates during interpolation. This note comprises the first published description of scenarios where tidal ellipse parameter interpolation errors can arise, and of a procedure to successfully avoid these errors when generating tidal inputs for regional and/or coastal ocean numerical models. We explain how a straightforward sequence of data production, format conversion, interpolation, and format reconversion steps may be used to check for the potential occurrence and avoidance of tidal ellipse interpolation and phase errors. This sequence is demonstrated via a case study of the M2 tidal constituent in the seas around Korea but is designed to be universally applicable. We also recommend employing tidal ellipse parameter calculation methods that avoid the use of Foreman's (1978) "northern semi-major axis convention" since, as revealed in our analysis, this commonly used convention can result in inclination interpolation errors even when Cartesian coordinate-based "vector embodiment" solutions are employed.
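The recommended fix can be made concrete in a few lines: embed the angular parameter as (cos, sin) components, interpolate those, and convert back, so a transition from 358° to 2° interpolates through 0° rather than 180°. The sketch below handles both 360°-periodic phases and 180°-periodic inclinations via a period argument; it illustrates the procedure described, not the authors' code.

```python
import numpy as np

def interp_angle_safe(x, xp, angles_deg, period=360.0):
    """Interpolate an angular parameter via its Cartesian embedding.

    period=360 for phases; period=180 for ellipse inclinations.
    """
    t = np.deg2rad(np.asarray(angles_deg) * (360.0 / period))
    c = np.interp(x, xp, np.cos(t))
    s = np.interp(x, xp, np.sin(t))
    return (np.rad2deg(np.arctan2(s, c)) % 360.0) * (period / 360.0)

print(np.interp(0.5, [0, 1], [358.0, 2.0]))          # 180.0: the wrap-around error
print(interp_angle_safe(0.5, [0, 1], [358.0, 2.0]))  # ~0.0: the smooth transition
```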
BLAS- BASIC LINEAR ALGEBRA SUBPROGRAMS
NASA Technical Reports Server (NTRS)
Krogh, F. T.
1994-01-01
The Basic Linear Algebra Subprogram (BLAS) library is a collection of FORTRAN callable routines for employing standard techniques in performing the basic operations of numerical linear algebra. The BLAS library was developed to provide a portable and efficient source of basic operations for designers of programs involving linear algebraic computations. The subprograms available in the library cover the operations of dot product, multiplication of a scalar and a vector, vector plus a scalar times a vector, Givens transformation, modified Givens transformation, copy, swap, Euclidean norm, sum of magnitudes, and location of the largest magnitude element. Since these subprograms are to be used in an ANSI FORTRAN context, the cases of single precision, double precision, and complex data are provided for. All of the subprograms have been thoroughly tested and produce consistent results even when transported from machine to machine. BLAS contains Assembler versions and FORTRAN test code for any of the following compilers: Lahey F77L, Microsoft FORTRAN, or IBM Professional FORTRAN. It requires the Microsoft Macro Assembler and a math co-processor. The PC implementation allows individual arrays of over 64K. The BLAS library was developed in 1979. The PC version was made available in 1986 and updated in 1988.
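The level-1 operations listed above map directly onto one-liners; they are shown below in NumPy form purely for illustration (the library itself supplies FORTRAN-callable routines such as SDOT and SAXPY, in single, double, and complex precision).

```python
import numpy as np

x = np.array([1.0, -2.0, 3.0])
y = np.array([4.0, 5.0, -6.0])
a = 0.5

dot = x @ y                        # dot product (SDOT)
axpy = a * x + y                   # vector plus a scalar times a vector (SAXPY)
scal = a * x                       # multiplication of a scalar and a vector (SSCAL)
nrm2 = np.sqrt(x @ x)              # Euclidean norm (SNRM2)
asum = np.abs(x).sum()             # sum of magnitudes (SASUM)
iamax = int(np.argmax(np.abs(x)))  # location of largest-magnitude element (ISAMAX)
```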
Masking of errors in transmission of VAPC-coded speech
NASA Technical Reports Server (NTRS)
Cox, Neil B.; Froese, Edwin L.
1990-01-01
A subjective evaluation is provided of the bit error sensitivity of the message elements of a Vector Adaptive Predictive (VAPC) speech coder, along with an indication of the amenability of these elements to a popular error masking strategy (cross frame hold over). As expected, a wide range of bit error sensitivity was observed. The most sensitive message components were the short term spectral information and the most significant bits of the pitch and gain indices. The cross frame hold over strategy was found to be useful for pitch and gain information, but it was not beneficial for the spectral information unless severe corruption had occurred.
Correlated and Zonal Errors of Global Astrometric Missions: A Spherical Harmonic Solution
NASA Astrophysics Data System (ADS)
Makarov, V. V.; Dorland, B. N.; Gaume, R. A.; Hennessy, G. S.; Berghea, C. T.; Dudik, R. P.; Schmitt, H. R.
2012-07-01
We propose a computer-efficient and accurate method of estimating spatially correlated errors in astrometric positions, parallaxes, and proper motions obtained by space- and ground-based astrometry missions. In our method, the simulated observational equations are set up and solved for the coefficients of scalar and vector spherical harmonics representing the output errors rather than for individual objects in the output catalog. Both accidental and systematic correlated errors of astrometric parameters can be accurately estimated. The method is demonstrated on the example of the JMAPS mission, but can be used for other projects in space astrometry, such as SIM or JASMINE.
NASA Technical Reports Server (NTRS)
Taylor, B. K.; Casasent, D. P.
1989-01-01
The use of simplified error models to accurately simulate and evaluate the performance of an optical linear-algebra processor is described. The optical architecture used to perform banded matrix-vector products is reviewed, along with a linear dynamic finite-element case study. The laboratory hardware and ac-modulation technique used are presented. The individual processor error-source models and their simulator implementation are detailed. Several significant simplifications are introduced to ease the computational requirements and complexity of the simulations. The error models are verified with a laboratory implementation of the processor, and are used to evaluate its potential performance.
Joint maximum-likelihood magnitudes of presumed underground nuclear test explosions
NASA Astrophysics Data System (ADS)
Peacock, Sheila; Douglas, Alan; Bowers, David
2017-08-01
Body-wave magnitudes (mb) of 606 seismic disturbances caused by presumed underground nuclear test explosions at specific test sites between 1964 and 1996 have been derived from station amplitudes collected by the International Seismological Centre (ISC), by a joint inversion for mb and station-specific magnitude corrections. A maximum-likelihood method was used to reduce the upward bias of network mean magnitudes caused by data censoring, where arrivals at stations that do not report arrivals are assumed to be hidden by the ambient noise at the time. Threshold noise levels at each station were derived from the ISC amplitudes using the method of Kelly and Lacoss, which fits to the observed magnitude-frequency distribution a Gutenberg-Richter exponential decay truncated at low magnitudes by an error function representing the low-magnitude threshold of the station. The joint maximum-likelihood inversion is applied to arrivals from the sites: Semipalatinsk (Kazakhstan) and Novaya Zemlya, former Soviet Union; Singer (Lop Nor), China; Mururoa and Fangataufa, French Polynesia; and Nevada, USA. At sites where eight or more arrivals could be used to derive magnitudes and station terms for 25 or more explosions (Nevada, Semipalatinsk and Mururoa), the resulting magnitudes and station terms were fixed and a second inversion carried out to derive magnitudes for additional explosions with three or more arrivals. 93 more magnitudes were thus derived. During processing for station thresholds, many stations were rejected for sparsity of data, obvious errors in reported amplitude, or great departure of the reported amplitude-frequency distribution from the expected left-truncated exponential decay. Abrupt changes in monthly mean amplitude at a station apparently coincide with changes in recording equipment and/or analysis method at the station.
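The station-threshold model described above, an exponential Gutenberg-Richter decay multiplied by an error-function detection probability, has a tractable likelihood; a sketch of its negative log-likelihood is given below, using the closed-form normalizing constant of the detected-magnitude density. This is a generic rendering of the Kelly-Lacoss form, with parameter names chosen for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_like(params, m):
    """Left-truncated exponential magnitude-frequency model:
    density(m) proportional to exp(-beta m) * Phi((m - mu) / sigma),
    where the normal CDF Phi plays the role of the detection threshold."""
    beta, mu, sigma = params
    if beta <= 0 or sigma <= 0:
        return np.inf
    log_density = -beta * m + norm.logcdf((m - mu) / sigma)
    # Closed-form integral of the unnormalized density over all m:
    log_norm = -beta * mu + 0.5 * (beta * sigma) ** 2 - np.log(beta)
    return -(log_density - log_norm).sum()

# fit = minimize(neg_log_like, x0=[2.0, 4.0, 0.3], args=(station_mags,),
#                method="Nelder-Mead")   # station_mags: observed magnitudes
```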
Network Adjustment of Orbit Errors in SAR Interferometry
NASA Astrophysics Data System (ADS)
Bahr, Hermann; Hanssen, Ramon
2010-03-01
Orbit errors can induce significant long-wavelength error signals in synthetic aperture radar (SAR) interferograms and thus bias estimates of wide-scale deformation phenomena. The presented approach aims at correcting orbit errors in a preprocessing step to deformation analysis by modifying state vectors. Whereas absolute errors in the orbital trajectory are negligible, the influence of relative errors (baseline errors) is parametrised by their parallel and perpendicular components as a linear function of time. As the sensitivity of the interferometric phase is only significant with respect to the perpendicular baseline and the rate of change of the parallel baseline, the algorithm focuses on estimating updates to these two parameters. This is achieved by a least squares approach, where the unwrapped residual interferometric phase is observed and atmospheric contributions are considered to be stochastic with constant mean. To enhance reliability, baseline errors are adjusted in an overdetermined network of interferograms, yielding individual orbit corrections per acquisition.
Forces and moments generated by the human arm: Variability and control
Xu, Y; Terekhov, AV; Latash, ML; Zatsiorsky, VM
2012-01-01
This is an exploratory study of accurate endpoint force vector production by the human arm in isometric conditions. We formulated three common-sense hypotheses and falsified them in the experiment. The subjects (n=10) exerted static forces on a handle in eight directions in a horizontal plane for 25 seconds. The forces were of four magnitude levels (10%, 20%, 30% and 40% of individual MVC). The torsion moment on the handle (grasp moment) was not specified in the instruction. The two force components and the grasp moment were recorded, and the shoulder, elbow, and wrist joint torques were computed. The following main facts were observed: (a) While the grasp moment was not prescribed by the instruction, it was always produced. The moment magnitude and direction depended on the instructed force magnitude and direction. (b) The within-trial angular variability of the exerted force vector (angular precision) did not depend on the target force magnitude (a small negative correlation was observed). (c) Across the target force directions, the variability of the exerted force magnitude and the directional variability exhibited opposite trends: in the directions where the variability of force magnitude was maximal, the directional variability was minimal, and vice versa. (d) The time profiles of joint torques in the trials were always positively correlated, even for the force directions where flexion torque was produced at one joint and extension torque was produced at the other joint. (e) The correlations between the grasp moment and the wrist torque were negative across the tasks and positive within the individual trials. (f) In static serial kinematic chains, the pattern of the joint torque distribution could not be explained by an optimization cost function additive with respect to the torques. Plans for several future experiments have been suggested. PMID:23080084
2011-04-01
[Abstract text garbled in the source. Recoverable fragments state that the solar and magnetic roll rates are estimates of the projectile roll rate with respect to the sun and the local geomagnetic field, respectively; that the aspect angles are defined between a reference vector and a vector originating at the CG and parallel to the local geomagnetic field; that methodologies were employed to obtain these and other airframe states; that relative magnitude information about the side moments was obtained with an independent approach (POINTER); and that VAPP-24 underwent a reversal in coning.]
New GRACE-Derived Storage Change Estimates Using Empirical Mode Extraction
NASA Astrophysics Data System (ADS)
Aierken, A.; Lee, H.; Yu, H.; Ate, P.; Hossain, F.; Basnayake, S. B.; Jayasinghe, S.; Saah, D. S.; Shum, C. K.
2017-12-01
Estimated mass changes from GRACE spherical harmonic solutions have north/south stripes and east/west banded errors due to random noise and modeling errors. Low-pass filters such as decorrelation and Gaussian smoothing are typically applied to reduce noise and errors. However, these filters introduce leakage errors that need to be addressed. GRACE mascon estimates (the JPL and CSR mascon solutions) do not need decorrelation or Gaussian smoothing and offer larger signal magnitudes compared to the filtered GRACE spherical harmonics (SH) results. However, a recent study [Chen et al., JGR, 2017] demonstrated that both the JPL and CSR mascon solutions also have leakage errors. We developed a new postprocessing method based on empirical mode decomposition to estimate mass change from GRACE SH solutions without decorrelation and Gaussian smoothing, the two main sources of leakage errors. We found that, without any postprocessing, the noise and errors in spherical harmonic solutions introduce very clear high-frequency components in the spatial domain. By removing these high-frequency components while preserving the overall pattern of the signal, we obtained better mass estimates with minimal leakage errors. The new global mass change estimates captured all the signals observed by GRACE without the stripe errors. Results were compared with traditional methods over the Tonle Sap Basin in Cambodia, Northwestern India, the Central Valley in California, and the Caspian Sea. Our results provide larger signal magnitudes which are in good agreement with the leakage-corrected (forward-modeled) SH results.
Building a kinetic Monte Carlo model with a chosen accuracy.
Bhute, Vijesh J; Chatterjee, Abhijit
2013-06-28
The kinetic Monte Carlo (KMC) method is a popular modeling approach for reaching large materials length and time scales. The KMC dynamics is erroneous when atomic processes that are relevant to the dynamics are missing from the KMC model. Recently, we developed the first error measure for KMC in Bhute and Chatterjee [J. Chem. Phys. 138, 084103 (2013)]. The error measure, which is given in terms of the probability that a missing process will be selected in the correct dynamics, requires estimation of the missing rate. In this work, we present an improved procedure for estimating the missing rate. The estimate found using the new procedure is within an order of magnitude of the correct missing rate, unlike our previous approach, where the estimate was larger by orders of magnitude. This enables one to find the error in the KMC model more accurately. In addition, we find the time for which the KMC model can be used before a maximum error in the dynamics has been reached.
Adaptive Identification of Fluid-Dynamic Systems
2001-06-14
[Figure and equation residue from the report; the recoverable content is as follows. Fig. 1 depicts the modeling of a SISO system with an adaptive filter: the input u drives both the unknown system and the filter, the filter output y is subtracted from the desired output d, and the error e is fed back to the filter. The cost function is J = E[e^2(n)] (Eq. 12), where E[.] is the expectation operator and e(n) = d(n) - y(n) is the error between the desired system output and the filter output; the input vector is U(n) = [u(n), u(n-1), ..., u(n-N+1)]^T.]
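The reconstructed cost function above is exactly what a least-mean-squares adaptive filter minimizes by stochastic gradient descent; a minimal system-identification sketch follows (the step size and filter length are illustrative, and the LMS rule is a standard choice rather than necessarily the report's algorithm).

```python
import numpy as np

def lms_identify(u, d, N, mu=0.01):
    """Adaptive FIR identification of an unknown SISO system by LMS:
    minimizes J = E[e^2(n)] with e(n) = d(n) - y(n), y(n) = w . U(n),
    where U(n) = [u(n), u(n-1), ..., u(n-N+1)]."""
    w = np.zeros(N)
    e = np.zeros(len(u))
    for n in range(N - 1, len(u)):
        Un = u[n - N + 1:n + 1][::-1]  # input vector, most recent sample first
        e[n] = d[n] - w @ Un
        w += mu * e[n] * Un            # stochastic-gradient weight update
    return w, e
```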
Fast Plane Wave 2-D Vector Flow Imaging Using Transverse Oscillation and Directional Beamforming.
Jensen, Jonas; Villagomez Hoyos, Carlos Armando; Stuart, Matthias Bo; Ewertsen, Caroline; Nielsen, Michael Bachmann; Jensen, Jorgen Arendt
2017-07-01
Several techniques can estimate the 2-D velocity vector in ultrasound. Directional beamforming (DB) estimates blood flow velocities with a higher precision and accuracy than transverse oscillation (TO), but at the cost of a high beamforming load when estimating the flow angle. In this paper, it is proposed to use TO to estimate an initial flow angle, which is then refined in a DB step. Velocity magnitude is estimated along the flow direction using cross correlation. It is shown that the suggested TO-DB method can improve the performance of velocity estimates compared with TO, and with a beamforming load, which is 4.6 times larger than for TO and seven times smaller than for conventional DB. Steered plane wave transmissions are employed for high frame rate imaging, and parabolic flow with a peak velocity of 0.5 m/s is simulated in straight vessels at beam-to-flow angles from 45° to 90°. The TO-DB method estimates the angle with a bias and standard deviation (SD) less than 2°, and the SD of the velocity magnitude is less than 2%. When using only TO, the SD of the angle ranges from 2° to 17° and for the velocity magnitude up to 7%. Bias of the velocity magnitude is within 2% for TO and slightly larger but within 4% for TO-DB. The same trends are observed in measurements although with a slightly larger bias. Simulations of realistic flow in a carotid bifurcation model provide visualization of complex flow, and the spread of velocity magnitude estimates is 7.1 cm/s for TO-DB, while it is 11.8 cm/s using only TO. However, velocities for TO-DB are underestimated at peak systole as indicated by a regression value of 0.97 for TO and 0.85 for TO-DB. An in vivo scanning of the carotid bifurcation is used for vector velocity estimations using TO and TO-DB. The SD of the velocity profile over a cardiac cycle is 4.2% for TO and 3.2% for TO-DB.
Ocular residual astigmatism (ORA) in pre-cataract eyes prior to and after refractive lens exchange.
Katz, Toam; Steinberg, Johannes; Druchkiv, Vasyl; Linke, Stephan J; Frings, Andreas
2017-08-01
The purpose of this study was to analyze ocular residual astigmatism (ORA) before and after implantation of two different optical types of non-toric multifocal intraocular lenses (MIOL) in pre-cataract patients. This retrospective cohort study analyzed 72 eyes from 72 consecutive patients after MIOL surgery. To investigate the magnitude and axis of astigmatic changes, the concepts of true corneal astigmatism and the Alpins vector method were applied. There were no statistically significant between-group differences prior to surgery. The mean refractive surgically induced astigmatism (RSIA) (P = 0.063) and the topographic SIA (TSIA) (P = 0.828) did not differ significantly between the lenses, and the summated vector mean for ORA was reduced in magnitude by approximately 0.30 D. ORA in pseudophakic eyes mainly results from the posterior corneal surface and less from IOL tilting, postoperative posterior capsule shrinkage, or secondary cataract.
Wavelet based approach for posture transition estimation using a waist worn accelerometer.
Bidargaddi, Niranjan; Klingbeil, Lasse; Sarela, Antti; Boyle, Justin; Cheung, Vivian; Yelland, Catherine; Karunanithi, Mohanraj; Gray, Len
2007-01-01
The ability to rise from a chair is considered to be important to achieve functional independence and quality of life. This sit-to-stand task is also a good indicator for assessing the condition of patients with chronic diseases. We developed a wavelet-based algorithm for detecting and calculating the durations of sit-to-stand and stand-to-sit transitions from the signal vector magnitude of the measured acceleration signal. The algorithm was tested on waist-worn accelerometer data collected from young subjects as well as geriatric patients. The tests demonstrate that both transitions can be detected by applying the wavelet transform to the signal vector magnitude. Wavelet analysis produces an estimate of the transition pattern that can be used to calculate the transition duration, which in turn gives clinically significant information on the patient's condition. The method can be applied in a real-life ambulatory monitoring system for assessing the condition of a patient living at home.
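A rough sketch of such a pipeline, assuming the PyWavelets package; the wavelet family, decomposition level, and deviation threshold are our illustrative guesses, not the authors' published choices:

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def transition_candidates(ax, ay, az, fs, wavelet="db4", level=4):
    """Posture-transition candidate times (s) from waist-worn accelerometry."""
    svm = np.sqrt(ax**2 + ay**2 + az**2)          # signal vector magnitude, in g
    coeffs = pywt.wavedec(svm, wavelet, level=level)
    # Keep only the coarsest approximation: slow postural changes survive,
    # gait and tremor are discarded.
    coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    trend = pywt.waverec(coeffs, wavelet)[:len(svm)]
    # A transition shows up as a sustained deviation from 1 g.
    return np.flatnonzero(np.abs(trend - 1.0) > 0.15) / fs

# Synthetic 60 s recording with a transition-like burst around t = 30 s.
fs = 50
t = np.arange(0, 60, 1 / fs)
ax = np.ones_like(t); ax[1500:1600] += 0.5
print(transition_candidates(ax, np.zeros_like(t), np.zeros_like(t), fs)[:3])
```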
Perceptual compression of magnitude-detected synthetic aperture radar imagery
NASA Technical Reports Server (NTRS)
Gorman, John D.; Werness, Susan A.
1994-01-01
A perceptually-based approach for compressing synthetic aperture radar (SAR) imagery is presented. Key components of the approach are a multiresolution wavelet transform, a bit allocation mask based on an empirical human visual system (HVS) model, and hybrid scalar/vector quantization. Specifically, wavelet shrinkage techniques are used to segregate wavelet transform coefficients into three components: local means, edges, and texture. Each of these three components is then quantized separately according to a perceptually-based bit allocation scheme. Wavelet coefficients associated with local means and edges are quantized using high-rate scalar quantization while texture information is quantized using low-rate vector quantization. The impact of the perceptually-based multiresolution compression algorithm on visual image quality, impulse response, and texture properties is assessed for fine-resolution magnitude-detected SAR imagery; excellent image quality is found at bit rates at or above 1 bpp along with graceful performance degradation at rates below 1 bpp.
Vectorcardiographic changes during extended space flight
NASA Technical Reports Server (NTRS)
Smith, R. F.; Stanton, K.; Stoop, D.; Brown, D.; Janusz, W.; King, P.
1974-01-01
To assess the effects of space flight on cardiac electrical properties, vectorcardiograms were taken on the 9 Skylab astronauts during the flights of 28, 59, and 84 days. The Frank lead system was used and observations were made at rest; during 25%, 50% and 75% of maximum exercise; during a short pulse of exercise (150 watts, 2 minutes); and after exercise. Data from 131 in-flight tests were analyzed by computer and compared to preflight and postflight values. Statistically significant increases occurred in QRS vector magnitude (six of nine crewmen), T vector magnitude (five of nine crewmen), and resting PR interval duration (six of nine crewmen). During exercise the PR interval did not differ from preflight. Exercise heart rates in flight were the same as preflight, but increased in the immediate postflight period. With the exception of the arrhythmias, no deleterious vectorcardiographic changes were observed during the Skylab missions.
Iterative inversion of deformation vector fields with feedback control.
Dubey, Abhishek; Iliopoulos, Alexandros-Stavros; Sun, Xiaobai; Yin, Fang-Fang; Ren, Lei
2018-05-14
Often, the inverse deformation vector field (DVF) is needed together with the corresponding forward DVF in four-dimensional (4D) reconstruction and dose calculation, adaptive radiation therapy, and simultaneous deformable registration. This study aims at improving both accuracy and efficiency of iterative algorithms for DVF inversion, and advancing our understanding of divergence and latency conditions. We introduce a framework of fixed-point iteration algorithms with active feedback control for DVF inversion. Based on rigorous convergence analysis, we design control mechanisms for modulating the inverse consistency (IC) residual of the current iterate, to be used as feedback into the next iterate. The control is designed adaptively to the input DVF with the objective to enlarge the convergence area and expedite convergence. Three particular settings of feedback control are introduced: constant value over the domain throughout the iteration; alternating values between iteration steps; and spatially variant values. We also introduce three spectral measures of the displacement Jacobian for characterizing a DVF. These measures reveal the critical role of what we term the nontranslational displacement component (NTDC) of the DVF. We carry out inversion experiments with an analytical DVF pair, and with DVFs associated with thoracic CT images of six patients at end of expiration and end of inspiration. The NTDC-adaptive iterations are shown to attain a larger convergence region at a faster pace compared to previous nonadaptive DVF inversion iteration algorithms. In our numerical experiments, alternating control yields smaller IC residuals and inversion errors than constant control. Spatially variant control renders smaller residuals and errors by at least an order of magnitude, compared to other schemes, in no more than 10 steps. Inversion results also show remarkable quantitative agreement with analysis-based predictions. Our analysis captures properties of DVF data associated with clinical CT images, and provides new understanding of iterative DVF inversion algorithms with a simple residual feedback control. Adaptive control is necessary and highly effective in the presence of nonsmall NTDCs. The adaptive iterations or the spectral measures, or both, may potentially be incorporated into deformable image registration methods. © 2018 American Association of Physicists in Medicine.
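The core iteration can be sketched in one dimension, with a scalar mu standing in for the feedback control (mu = 1 recovers the classical non-adaptive fixed-point iteration; the study itself works with 3-D DVFs and alternating or spatially variant control):

```python
import numpy as np

def invert_dvf(u, x, mu=0.8, steps=10):
    """Fixed-point DVF inversion with residual feedback control (1-D sketch).

    The inverse v must satisfy the inverse-consistency (IC) condition
    v(x) + u(x + v(x)) = 0; each step feeds the IC residual back, scaled by mu.
    """
    v = np.zeros_like(u)
    for _ in range(steps):
        residual = v + np.interp(x + v, x, u)   # IC residual of current iterate
        v = v - mu * residual                   # feedback-controlled update
    return v

# Invert a smooth 1-D deformation and check the IC residual.
x = np.linspace(0.0, 1.0, 200)
u = 0.05 * np.sin(2 * np.pi * x)
v = invert_dvf(u, x)
print(np.max(np.abs(v + np.interp(x + v, x, u))))   # ~0 after convergence
```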
NASA Technical Reports Server (NTRS)
Tuey, R. C.
1972-01-01
Computer solutions of linear programming problems are outlined. Information covers vector spaces, convex sets, and matrix algebra elements for solving simultaneous linear equations. Dual problems, reduced cost analysis, ranges, and error analysis are illustrated.
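As a minimal illustration of such a computer solution, a small linear program solved with SciPy's HiGHS backend, which also exposes the dual values used in reduced-cost and ranging analysis; the numbers are an arbitrary example:

```python
from scipy.optimize import linprog

# Maximize 3*x1 + 2*x2 subject to x1 + x2 <= 4 and 2*x1 + x2 <= 6, x >= 0,
# written as a minimization in the canonical form linprog expects.
c = [-3, -2]
A_ub = [[1, 1], [2, 1]]
b_ub = [4, 6]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
print(res.x, -res.fun)             # optimal point and objective value
print(res.ineqlin.marginals)       # duals (shadow prices) of the constraints
```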
Decoding and optimized implementation of SECDED codes over GF(q)
Ward, H. Lee; Ganti, Anand; Resnick, David R
2013-10-22
A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.
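Restricted to GF(2), the filter-then-select loop of the claim can be rendered as a toy sketch: a candidate column survives the filter only if no combination of up to d-2 already-chosen columns equals it, which preserves the column-wise independence requirement. This is our own illustration, not the patented implementation:

```python
import itertools
import numpy as np

def populate_check_matrix(r, n, d=3):
    """Greedily populate n columns of an r-row binary check matrix so that
    every d-1 columns are linearly independent (a distance-d code)."""
    candidates = [np.array(bits) for bits in itertools.product([0, 1], repeat=r)]
    candidates = [v for v in candidates if v.any()]        # drop the zero vector
    columns = []
    for v in candidates:
        # Filter step: reject v if it is the XOR of up to d-2 chosen columns.
        dependent = any(
            np.array_equal(v, np.bitwise_xor.reduce(np.array(combo)))
            for k in range(1, d - 1)
            for combo in itertools.combinations(columns, k)
        )
        if not dependent:
            columns.append(v)                              # select step
        if len(columns) == n:
            break
    return np.column_stack(columns)

print(populate_check_matrix(3, 7))   # a [7,4] Hamming check matrix (distance 3)
```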
Design, decoding and optimized implementation of SECDED codes over GF(q)
Ward, H Lee; Ganti, Anand; Resnick, David R
2014-06-17
A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.
Decoding and optimized implementation of SECDED codes over GF(q)
Ward, H Lee; Ganti, Anand; Resnick, David R
2014-11-18
A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.
Robust support vector regression networks for function approximation with outliers.
Chuang, Chen-Chia; Su, Shun-Feng; Jeng, Jin-Tsong; Hsiao, Chih-Ching
2002-01-01
Support vector regression (SVR) employs the support vector machine (SVM) to tackle problems of function approximation and regression estimation. SVR has been shown to have good robustness properties against noise. When the parameters used in SVR are improperly selected, overfitting phenomena may still occur. However, the selection of the various parameters is not straightforward. Besides, in SVR, outliers may also possibly be taken as support vectors. Such an inclusion of outliers in the support vectors may lead to serious overfitting. In this paper, a novel regression approach, termed the robust support vector regression (RSVR) network, is proposed to enhance the robustness of SVR. In the approach, traditional robust learning approaches are employed to improve the learning performance for any selected parameters. The simulation results show that RSVR can always improve the performance of the learned systems for all cases. Moreover, even when training lasted for a long period, the testing errors did not go up. In other words, the overfitting phenomenon is indeed suppressed.
Zhou, Wen; Li, Xinying; Yu, Jianjun
2017-10-30
We propose QPSK millimeter-wave (mm-wave) vector signal generation for D-band based on balanced precoding-assisted photonic frequency quadrupling technology employing a single intensity modulator without an optical filter. The intensity MZM is driven by a balanced precoded 37-GHz QPSK RF signal. The modulated optical subcarriers are directly sent into a single-ended photodiode to generate a 148-GHz QPSK vector signal. We experimentally demonstrate 1-Gbaud 148-GHz QPSK mm-wave vector signal generation, and investigate the bit-error-rate (BER) performance of the vector signals at 148 GHz. The experimental results show that a BER as low as 1.448 × 10^-3 can be achieved when the optical power into the photodiode is 8.8 dBm. To the best of our knowledge, this is the first realization of frequency-quadrupled vector mm-wave signal generation at D-band based on only one MZM without an optical filter.
VizieR Online Data Catalog: 1876 open clusters multimembership catalog (Sampedro+, 2017)
NASA Astrophysics Data System (ADS)
Sampedro, L.; Dias, W. S.; Alfaro, E. J.; Monteiro, H.; Molino, A.
2017-10-01
We use version 3.5 of the New Optically Visible Open Clusters and Candidates catalogue (hereafter DAML02; Dias et al., 2002, Cat. B/ocl), to select a sample of 2167 open clusters to be analysed. The stellar positions and the proper motions are taken from the UCAC4 (Zacharias et al., 2013, Cat. I/322). The catalogue contains data for over 113 million stars (105 million of them with proper-motion data), and is complete down to magnitude R=16. The positional accuracy of the listed objects is about 15-100mas per coordinate, depending on the magnitude. Formal errors in proper motions range from about 1 to 10mas/yr, depending on the magnitude and the observational history. Systematic errors in the proper motions are estimated to be about 1-4mas/yr. (2 data files).
Rift Valley Fever Prediction and Risk Mapping: 2014-2015 Season
NASA Technical Reports Server (NTRS)
Anyamba, Assaf
2015-01-01
Extremes in either direction (±) of precipitation and temperature have significant implications for disease vectors and pathogen emergence and spread. The magnitude of ENSO influence on precipitation and temperature cannot currently be predicted; average history and patterns must be relied upon. The gap between the timing of an event and the emergence of disease can be exploited to undertake vector control and preparedness measures. Currently there is no risk of ecologically-coupled RVFV activity; however, vigilance is needed during the coming fall season due to the ongoing buildup of energy in the central Pacific Ocean. There is potential for dual use of the RVF Monitor system for other VBDs. Investment is needed in early ground surveillance and in rapid field diagnostic capabilities for vector identification and virus isolation.
Using a multifrontal sparse solver in a high performance, finite element code
NASA Technical Reports Server (NTRS)
King, Scott D.; Lucas, Robert; Raefsky, Arthur
1990-01-01
We consider the performance of the finite element method on a vector supercomputer. The computationally intensive parts of the finite element method are typically the individual element forms and the solution of the global stiffness matrix, both of which are vectorized in high performance codes. To further increase throughput, new algorithms are needed. We compare a multifrontal sparse solver to a traditional skyline solver in a finite element code on a vector supercomputer. The multifrontal solver uses the Multiple-Minimum Degree reordering heuristic to reduce the number of operations required to factor a sparse matrix and full matrix computational kernels (e.g., BLAS3) to enhance vector performance. The net result is an order-of-magnitude reduction in run time for a finite element application on one processor of a Cray X-MP.
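The payoff of a fill-reducing ordering such as minimum degree can be demonstrated with SciPy's SuperLU interface; this is a stand-in demonstration of the ordering effect, not the multifrontal Cray code of the paper:

```python
from scipy import sparse
from scipy.sparse.linalg import splu

# A 2-D Laplacian as a stand-in for a global stiffness matrix.
n = 100
T = sparse.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sparse.kron(sparse.eye(n), T) + sparse.kron(T, sparse.eye(n))).tocsc()

lu_natural = splu(A, permc_spec="NATURAL")        # no reordering
lu_ordered = splu(A, permc_spec="MMD_AT_PLUS_A")  # minimum-degree ordering
print(lu_natural.L.nnz + lu_natural.U.nnz,        # fill-in without reordering
      "vs", lu_ordered.L.nnz + lu_ordered.U.nnz)  # far fewer nonzeros
```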
Quantitative evaluation of patient-specific quality assurance using online dosimetry system
NASA Astrophysics Data System (ADS)
Jung, Jae-Yong; Shin, Young-Ju; Sohn, Seung-Chang; Min, Jung-Whan; Kim, Yon-Lae; Kim, Dong-Su; Choe, Bo-Young; Suh, Tae-Suk
2018-01-01
In this study, we investigated the clinical performance of an online dosimetry system (Mobius FX system, MFX) by 1) dosimetric plan verification using gamma passing rates and dose volume metrics and 2) evaluation of error-detection capability using deliberately introduced machine errors. Eighteen volumetric modulated arc therapy (VMAT) plans were studied. To evaluate the clinical performance of the MFX, we used gamma analysis and dose volume histogram (DVH) analysis. In addition, to evaluate the error-detection capability, we used gamma analysis and DVH analysis utilizing three types of deliberately introduced errors (Type 1: gantry angle-independent multi-leaf collimator (MLC) error; Type 2: gantry angle-dependent MLC error; and Type 3: gantry angle error). In a dosimetric verification comparison of the physical dosimetry system (Delta4PT) and the online dosimetry system (MFX), the gamma passing rates of the two dosimetry systems showed very good agreement with the treatment planning system (TPS) calculation. For the average dose difference between the TPS calculation and the MFX measurement, most of the dose metrics showed good agreement within a tolerance of 3%. In the error-detection comparison of the Delta4PT and the MFX, the gamma passing rates of the two dosimetry systems did not meet the 90% acceptance criterion when the magnitude of error exceeded 2 mm and 1.5°, respectively, for error plans of Types 1, 2, and 3. For delivery with all error types, the average dose difference of the PTV due to error magnitude showed good agreement between the TPS calculation and the MFX measurement, within 1%. Overall, the results of the online dosimetry system showed very good agreement with those of the physical dosimetry system. Our results suggest that a log file-based online dosimetry system is a very suitable verification tool for accurate and efficient clinical routines for patient-specific quality assurance (QA).
Evaluation of the table Mountain Ronchi telescope for angular tracking
NASA Technical Reports Server (NTRS)
Lanyi, G.; Purcell, G.; Treuhaft, R.; Buffington, A.
1992-01-01
The performance of the University of California at San Diego (UCSD) Table Mountain telescope was evaluated to determine the potential of such an instrument for optical angular tracking. This telescope uses a Ronchi ruling to measure differential positions of stars at the meridian. The Ronchi technique is summarized and the operational features of the Table Mountain instrument are described. Results from an analytic model, simulations, and actual data are presented that characterize the telescope's current performance. For a star pair of visual magnitude 7, the differential uncertainty of a 5-min observation is about 50 nrad (10 marcsec), and tropospheric fluctuations are the dominant error source. At magnitude 11, the current differential uncertainty is approximately 800 nrad (approximately 170 marcsec). This magnitude is equivalent to that of a 2-W laser with a 0.4-m aperture transmitting to Earth from a spacecraft at Saturn. Photoelectron noise is the dominant error source for stars of visual magnitude 8.5 and fainter. If the photoelectron noise is reduced, ultimately tropospheric fluctuations will be the limiting source of error at an average level of 35 nrad (7 marcsec) for stars approximately 0.25 deg apart. Three near-term strategies are proposed for improving the performance of the telescope to the 10-nrad level: improving the efficiency of the optics, masking background starlight, and averaging tropospheric fluctuations over multiple observations.
NASA Astrophysics Data System (ADS)
Ochoa Gutierrez, L. H.; Vargas Jiménez, C. A.; Niño Vasquez, L. F., Sr.
2017-12-01
Early warning generation for earthquakes that occur near the city of Bogotá, Colombia is extremely important. Using the information of a broadband, three-component station of the Servicio Geológico Colombiano (SGC) called El Rosal, which is located very near the city, we developed a model based on support vector machine (SVM) techniques, with a standardized polynomial kernel, using as descriptors or input data seismic signal features, complemented by the hypocentral parameters calculated for each one of the reported events. The model was trained and evaluated by cross-validation and was used to predict, with only five seconds of signal, the magnitude and location of a seismic event. With the proposed model we calculated local magnitude with an accuracy of 0.19 units of magnitude, epicentral distance with an accuracy of about 11 km, depth with a precision of approximately 40 km, and the azimuth of arrival with a precision of 45°. This research made a significant contribution to early warning generation for the country, in particular for the city of Bogotá. These models will be implemented in the future in the "Red Sismológica de la Sabana de Bogotá y sus Alrededores (RSSB)", which belongs to the Universidad Nacional de Colombia.
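A sketch of the model family described, using scikit-learn's SVR with a standardized polynomial kernel; the features below are synthetic stand-ins for the paper's waveform descriptors and hypocentral parameters:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 8))      # stand-in: 8 features per event
y = 2.5 + 0.4 * X[:, 0] - 0.2 * X[:, 3] + 0.1 * rng.standard_normal(300)

model = make_pipeline(StandardScaler(), SVR(kernel="poly", degree=3, C=10.0))
scores = cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error")
print("cross-validated MAE:", -scores.mean())   # cf. the reported ~0.19 units
```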
NASA Technical Reports Server (NTRS)
Bollman, W. E.; Chadwick, C.
1982-01-01
A number of interplanetary missions now being planned involve placing deterministic maneuvers along the flight path to alter the trajectory. Lee and Boain (1973) examined the statistics of trajectory correction maneuver (TCM) magnitude with no deterministic ('bias') component. The Delta v vector magnitude statistics were generated for several values of random Delta v standard deviations using expansions in terms of infinite hypergeometric series. The present investigation uses a different technique (Monte Carlo simulation) to generate Delta v magnitude statistics for a wider selection of random Delta v standard deviations and also extends the analysis to the case of nonzero deterministic Delta v's. These Delta v magnitude statistics are plotted parametrically. The plots are useful in assisting the analyst in quickly answering questions about the statistics of Delta v magnitude for single TCMs consisting of both a deterministic and a random component. The plots provide quick insight into the nature of the Delta v magnitude distribution for the TCM.
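The Monte Carlo technique is simple to sketch: sample the random execution error around the deterministic bias and tabulate magnitude statistics; the values below are arbitrary:

```python
import numpy as np

def dv_magnitude_stats(dv_det, sigma, n=200_000, seed=0):
    """Monte Carlo statistics of |dv| for a TCM whose delta-v is a
    deterministic vector plus an isotropic Gaussian random component."""
    rng = np.random.default_rng(seed)
    dv = dv_det + sigma * rng.standard_normal((n, 3))
    mag = np.linalg.norm(dv, axis=1)
    return mag.mean(), mag.std(), np.percentile(mag, 99)

# Zero bias (the Lee-Boain case) versus a 5 m/s deterministic maneuver,
# both with a 1 m/s per-axis random component.
print(dv_magnitude_stats(np.zeros(3), 1.0))
print(dv_magnitude_stats(np.array([5.0, 0.0, 0.0]), 1.0))
```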
A diagram for evaluating multiple aspects of model performance in simulating vector fields
NASA Astrophysics Data System (ADS)
Xu, Zhongfeng; Hou, Zhaolu; Han, Ying; Guo, Weidong
2016-12-01
Vector quantities, e.g., vector winds, play an extremely important role in climate systems. The energy and water exchanges between different regions are strongly dominated by wind, which in turn shapes the regional climate. Thus, how well climate models can simulate vector fields directly affects model performance in reproducing the nature of a regional climate. This paper devises a new diagram, termed the vector field evaluation (VFE) diagram, which is a generalized Taylor diagram and is able to provide a concise evaluation of model performance in simulating vector fields. The diagram can measure how well two vector fields match each other in terms of three statistical variables, i.e., the vector similarity coefficient, root mean square length (RMSL), and root mean square vector difference (RMSVD). Similar to the Taylor diagram, the VFE diagram is especially useful for evaluating climate models. The pattern similarity of two vector fields is measured by a vector similarity coefficient (VSC) that is defined by the arithmetic mean of the inner product of normalized vector pairs. Examples are provided, showing that VSC can identify how close one vector field resembles another. Note that VSC can only describe the pattern similarity, and it does not reflect the systematic difference in the mean vector length between two vector fields. To measure the vector length, RMSL is included in the diagram. The third variable, RMSVD, is used to identify the magnitude of the overall difference between two vector fields. Examples show that the VFE diagram can clearly illustrate the extent to which the overall RMSVD is attributed to the systematic difference in RMSL and how much is due to the poor pattern similarity.
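The diagram's three statistics are compact enough to sketch directly; VSC is implemented literally from the abstract's wording (the arithmetic mean of inner products of normalized vector pairs), which may differ in detail from the paper's final definition:

```python
import numpy as np

def vfe_stats(u_mod, v_mod, u_obs, v_obs):
    """VSC, RMSLs, and RMSVD for two 2-D vector fields given as components."""
    len_mod = np.hypot(u_mod, v_mod)
    len_obs = np.hypot(u_obs, v_obs)
    # Vector similarity coefficient: 1 means identical directions everywhere.
    vsc = np.mean((u_mod * u_obs + v_mod * v_obs) / (len_mod * len_obs))
    rmsl_mod = np.sqrt(np.mean(len_mod**2))   # root mean square vector length
    rmsl_obs = np.sqrt(np.mean(len_obs**2))
    rmsvd = np.sqrt(np.mean((u_mod - u_obs)**2 + (v_mod - v_obs)**2))
    return vsc, rmsl_mod, rmsl_obs, rmsvd

rng = np.random.default_rng(6)
u_obs, v_obs = rng.standard_normal((2, 500))
u_mod = u_obs + 0.3 * rng.standard_normal(500)
v_mod = v_obs + 0.3 * rng.standard_normal(500)
print(vfe_stats(u_mod, v_mod, u_obs, v_obs))
```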
Nucleon form factors from quenched lattice QCD with domain wall fermions
NASA Astrophysics Data System (ADS)
Sasaki, Shoichi; Yamazaki, Takeshi
2008-07-01
We present a quenched lattice calculation of the weak nucleon form factors: vector [FV(q2)], induced tensor [FT(q2)], axial vector [FA(q2)] and induced pseudoscalar [FP(q2)] form factors. Our simulations are performed on three different lattice sizes L3×T=243×32, 163×32, and 123×32 with a lattice cutoff of a-1≈1.3GeV and light quark masses down to about 1/4 the strange quark mass (mπ≈390MeV) using a combination of the DBW2 gauge action and domain wall fermions. The physical volume of our largest lattice is about (3.6fm)3, where the finite volume effects on form factors become negligible and the lower momentum transfers (q2≈0.1GeV2) are accessible. The q2 dependences of form factors in the low q2 region are examined. It is found that the vector, induced tensor, and axial-vector form factors are well described by the dipole form, while the induced pseudoscalar form factor is consistent with pion-pole dominance. We obtain the ratio of axial to vector coupling gA/gV=FA(0)/FV(0)=1.219(38) and the pseudoscalar coupling gP=mμFP(0.88mμ2)=8.15(54), where the errors are statistical errors only. These values agree with experimental values from neutron β decay and muon capture on the proton. However, the root mean-squared radii of the vector, induced tensor, and axial vector underestimate the known experimental values by about 20%. We also calculate the pseudoscalar nucleon matrix element in order to verify the axial Ward-Takahashi identity in terms of the nucleon matrix elements, which may be called the generalized Goldberger-Treiman relation.
More About Vector Adaptive/Predictive Coding Of Speech
NASA Technical Reports Server (NTRS)
Jedrey, Thomas C.; Gersho, Allen
1992-01-01
Report presents additional information about digital speech-encoding and -decoding system described in "Vector Adaptive/Predictive Encoding of Speech" (NPO-17230). Summarizes development of vector adaptive/predictive coding (VAPC) system and describes basic functions of algorithm. Describes refinements introduced enabling receiver to cope with errors. VAPC algorithm implemented in integrated-circuit coding/decoding processors (codecs). VAPC and other codecs tested under variety of operating conditions. Tests designed to reveal effects of various background quiet and noisy environments and of poor telephone equipment. VAPC found competitive with and, in some respects, superior to other 4.8-kb/s codecs and other codecs of similar complexity.
Differing Air Traffic Controller Responses to Similar Trajectory Prediction Errors
NASA Technical Reports Server (NTRS)
Mercer, Joey; Hunt-Espinosa, Sarah; Bienert, Nancy; Laraway, Sean
2016-01-01
A Human-In-The-Loop simulation was conducted in January of 2013 in the Airspace Operations Laboratory at NASA's Ames Research Center. The simulation airspace included two en route sectors feeding the northwest corner of Atlanta's Terminal Radar Approach Control. The focus of this paper is on how uncertainties in the study's trajectory predictions impacted the controllers' ability to perform their duties. Of particular interest is how the controllers interacted with the delay information displayed in the meter list and data block while managing the arrival flows. Due to wind forecasts with 30-knot over-predictions and 30-knot under-predictions, delay value computations included errors of similar magnitude, albeit in opposite directions. However, when performing their duties in the presence of these errors, did the controllers issue clearances of similar magnitude, albeit in opposite directions?
Extrapolation methods for vector sequences
NASA Technical Reports Server (NTRS)
Smith, David A.; Ford, William F.; Sidi, Avram
1987-01-01
This paper derives, describes, and compares five extrapolation methods for accelerating convergence of vector sequences or transforming divergent vector sequences to convergent ones. These methods are the scalar epsilon algorithm (SEA), vector epsilon algorithm (VEA), topological epsilon algorithm (TEA), minimal polynomial extrapolation (MPE), and reduced rank extrapolation (RRE). MPE and RRE are first derived and proven to give the exact solution for the right 'essential degree' k. Then, Brezinski's (1975) generalization of the Shanks-Schmidt transform is presented; the generalized form leads from systems of equations to TEA. The necessary connections are then made with SEA and VEA. The algorithms are extended to the nonlinear case by cycling, the error analysis for MPE and VEA is sketched, and the theoretical support for quadratic convergence is discussed. Strategies for practical implementation of the methods are considered.
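Of the five methods, MPE is short enough to sketch: a least-squares problem on first differences of the iterates yields weights that annihilate the dominant error modes; the linear iteration below is a toy demonstration:

```python
import numpy as np

def mpe(xs):
    """Minimal polynomial extrapolation from iterates x_0, ..., x_{k+1}."""
    X = np.column_stack(xs)                     # one iterate per column
    U = np.diff(X, axis=1)                      # first differences
    # Solve U[:, :-1] c ~ -U[:, -1], append c_k = 1, normalize to sum 1.
    c, *_ = np.linalg.lstsq(U[:, :-1], -U[:, -1], rcond=None)
    gamma = np.append(c, 1.0)
    gamma /= gamma.sum()
    return X[:, :len(gamma)] @ gamma            # weighted mix of iterates

# Accelerate the linear iteration x <- A x + b toward its fixed point.
rng = np.random.default_rng(2)
A = 0.9 * np.eye(4) + 0.05 * rng.standard_normal((4, 4))
b = rng.standard_normal(4)
x_true = np.linalg.solve(np.eye(4) - A, b)
xs = [np.zeros(4)]
for _ in range(6):
    xs.append(A @ xs[-1] + b)
print(np.linalg.norm(xs[-1] - x_true), np.linalg.norm(mpe(xs) - x_true))
```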
Error Analysis of Deep Sequencing of Phage Libraries: Peptides Censored in Sequencing
Matochko, Wadim L.; Derda, Ratmir
2013-01-01
Next-generation sequencing techniques empower selection of ligands from phage-display libraries because they can detect low abundant clones and quantify changes in the copy numbers of clones without excessive selection rounds. Identification of errors in deep sequencing data is the most critical step in this process because these techniques have error rates >1%. Mechanisms that yield errors in Illumina and other techniques have been proposed, but no reports to date describe error analysis in phage libraries. Our paper focuses on error analysis of 7-mer peptide libraries sequenced by the Illumina method. The low theoretical complexity of this phage library, as compared to the complexity of long genetic reads and genomes, allowed us to describe this library using a convenient linear vector and operator framework. We describe a phage library as an N × 1 frequency vector n = ||n_i||, where n_i is the copy number of the i-th sequence and N is the theoretical diversity, that is, the total number of all possible sequences. Any manipulation of the library is an operator acting on n. Selection, amplification, or sequencing could be described as a product of an N × N matrix and a stochastic sampling operator (S_a). The latter is a random diagonal matrix that describes sampling of a library. In this paper, we focus on the properties of S_a and use them to define the sequencing operator (Seq). Sequencing without any bias and errors is Seq = S_a I_N, where I_N is an N × N unity matrix. Any bias in sequencing changes I_N to a non-unity matrix. We identified a diagonal censorship matrix (CEN), which describes elimination, or statistically significant downsampling, of specific reads during the sequencing process. PMID:24416071
Adams, C N; Kattawar, G W
1993-08-20
We have developed a Monte Carlo program that is capable of calculating both the scalar and the Stokes vector radiances in an atmosphere-ocean system in a single computer run. The correlated sampling technique is used to compute radiance distributions for both the scalar and the Stokes vector formulations simultaneously, thus permitting a direct comparison of the errors induced. We show the effect of the volume-scattering phase function on the errors in radiance calculations when one neglects polarization effects. The model used in this study assumes a conservative Rayleigh-scattering atmosphere above a flat ocean. Within the ocean, the volume-scattering function (the first element in the Mueller matrix) is varied according to both a Henyey-Greenstein phase function, with asymmetry factors G = 0.0, 0.5, and 0.9, and also to a Rayleigh-scattering phase function. The remainder of the reduced Mueller matrix for the ocean is taken to be that for Rayleigh scattering, which is consistent with ocean water measurement.
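The Henyey-Greenstein phase function used above has a closed-form inverse CDF, so scattering angles can be drawn directly; a sketch for the quoted asymmetry factors:

```python
import numpy as np

def sample_hg(g, n, rng):
    """Scattering angles from the Henyey-Greenstein phase function with
    asymmetry factor g, via the standard inverse-CDF formula."""
    u = rng.random(n)
    if abs(g) < 1e-8:
        return np.arccos(1.0 - 2.0 * u)   # isotropic limit
    cos_t = (1 + g**2 - ((1 - g**2) / (1 - g + 2 * g * u))**2) / (2 * g)
    return np.arccos(np.clip(cos_t, -1.0, 1.0))

rng = np.random.default_rng(3)
for g in (0.0, 0.5, 0.9):
    theta = sample_hg(g, 100_000, rng)
    print(g, np.mean(np.cos(theta)))      # the mean cosine recovers g
```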
Multivariate Time Series Forecasting of Crude Palm Oil Price Using Machine Learning Techniques
NASA Astrophysics Data System (ADS)
Kanchymalay, Kasturi; Salim, N.; Sukprasert, Anupong; Krishnan, Ramesh; Raba'ah Hashim, Ummi
2017-08-01
The aim of this paper was to study the correlation between the crude palm oil (CPO) price, selected vegetable oil prices (soybean, coconut, olive, rapeseed, and sunflower oils), the crude oil price, and the monthly exchange rate. Comparative analysis was then performed on CPO price forecasting results using machine learning techniques. Monthly CPO prices, selected vegetable oil prices, crude oil prices and monthly exchange rate data from January 1987 to February 2017 were utilized. Preliminary analysis showed a positive and high correlation between the CPO price and the soybean oil price, and also between the CPO price and the crude oil price. Experiments were conducted using multi-layer perceptron, support vector regression and Holt-Winters exponential smoothing techniques. The results were assessed using the criteria of root mean square error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE) and direction of accuracy (DA). Among these three techniques, support vector regression (SVR) with the sequential minimal optimization (SMO) algorithm showed relatively better results compared to the multi-layer perceptron and Holt-Winters exponential smoothing methods.
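The four assessment criteria are straightforward to compute; a small sketch with a made-up series:

```python
import numpy as np

def forecast_errors(y_true, y_pred):
    """RMSE, MAE, MAPE (%), and direction of accuracy for a forecast."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    mae = np.mean(np.abs(y_true - y_pred))
    mape = 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))
    # Direction of accuracy: share of steps whose predicted month-on-month
    # change has the same sign as the actual change.
    da = np.mean(np.sign(np.diff(y_true)) == np.sign(np.diff(y_pred)))
    return rmse, mae, mape, da

print(forecast_errors([100, 102, 101, 105], [99, 103, 102, 104]))
```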
Cohen, Aaron M
2008-01-01
We participated in the i2b2 smoking status classification challenge task. The purpose of this task was to evaluate the ability of systems to automatically identify patient smoking status from discharge summaries. Our submission included several techniques that we compared and studied, including hot-spot identification, zero-vector filtering, inverse class frequency weighting, error-correcting output codes, and post-processing rules. We evaluated our approaches using the same methods as the i2b2 task organizers, using micro- and macro-averaged F1 as the primary performance metric. Our best performing system achieved a micro-F1 of 0.9000 on the test collection, equivalent to the best performing system submitted to the i2b2 challenge. Hot-spot identification, zero-vector filtering, classifier weighting, and error correcting output coding contributed additively to increased performance, with hot-spot identification having by far the largest positive effect. High performance on automatic identification of patient smoking status from discharge summaries is achievable with the efficient and straightforward machine learning techniques studied here.
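Two of the listed techniques, zero-vector filtering and inverse class frequency weighting, can be sketched with scikit-learn on a made-up four-note corpus; the restricted vocabulary only gestures at the real system's hot-spot extraction:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

docs = ["patient quit smoking two years ago", "denies tobacco use",
        "smokes one pack per day", "no relevant history"]
labels = ["past-smoker", "non-smoker", "current-smoker", "unknown"]

# Hot-spot-like vectorization over smoking-related terms only.
vec = TfidfVectorizer(vocabulary=["smoking", "smokes", "tobacco", "quit", "pack"])
X = vec.fit_transform(docs)

# Zero-vector filtering: notes with no hot spots default to "unknown".
keep = np.asarray(X.sum(axis=1)).ravel() > 0
y_kept = [label for label, k in zip(labels, keep) if k]

# Inverse class frequency weighting counters class imbalance.
clf = LinearSVC(class_weight="balanced").fit(X[keep], y_kept)
print(clf.predict(vec.transform(["stopped smoking last month"])))
```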
Adaptive vector validation in image velocimetry to minimise the influence of outlier clusters
NASA Astrophysics Data System (ADS)
Masullo, Alessandro; Theunissen, Raf
2016-03-01
The universal outlier detection scheme (Westerweel and Scarano in Exp Fluids 39:1096-1100, 2005) and the distance-weighted universal outlier detection scheme for unstructured data (Duncan et al. in Meas Sci Technol 21:057002, 2010) are the most common PIV data validation routines. However, such techniques rely on a spatial comparison of each vector with those in a fixed-size neighbourhood and their performance subsequently suffers in the presence of clusters of outliers. This paper proposes an advancement to render outlier detection more robust while reducing the probability of mistakenly invalidating correct vectors. Velocity fields undergo a preliminary evaluation in terms of local coherency, which parametrises the extent of the neighbourhood with which each vector will be compared subsequently. Such adaptivity is shown to reduce the number of undetected outliers, even when implemented in the aforementioned validation schemes. In addition, the authors present an alternative residual definition considering vector magnitude and angle, adopting a modified Gaussian-weighted distance-based averaging median. This procedure is able to adapt the degree of acceptable background fluctuations in velocity to the local displacement magnitude. The traditional, extended and recommended validation methods are numerically assessed on the basis of flow fields from an isolated vortex, a turbulent channel flow and a DNS simulation of forced isotropic turbulence. The resulting validation method is adaptive, requires no user-defined parameters and is demonstrated to yield the best performances in terms of outlier under- and over-detection. Finally, the novel validation routine is applied to the PIV analysis of experimental studies focused on the near wake behind a porous disc and on a supersonic jet, illustrating the potential gains in spatial resolution and accuracy.
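For reference, the baseline these authors extend, the Westerweel-Scarano normalized median test, can be sketched on a structured grid (threshold 2.0 and eps = 0.1 pixel, the values of the original paper):

```python
import numpy as np

def normalized_median_test(U, V, eps=0.1, thresh=2.0):
    """Flag outlier vectors whose normalized residual with respect to the
    median of their 3x3 neighbourhood exceeds the threshold."""
    flags = np.zeros(U.shape, dtype=bool)
    for comp in (U, V):
        padded = np.pad(comp, 1, mode="edge")
        for i in range(comp.shape[0]):
            for j in range(comp.shape[1]):
                nb = padded[i:i + 3, j:j + 3].ravel()
                nb = np.delete(nb, 4)              # exclude the centre vector
                med = np.median(nb)
                rm = np.median(np.abs(nb - med))   # neighbourhood residual level
                flags[i, j] |= abs(comp[i, j] - med) / (rm + eps) > thresh
    return flags

# A smooth field with one planted outlier.
U = np.ones((8, 8)); V = np.zeros((8, 8))
U[4, 4] = 10.0
print(np.argwhere(normalized_median_test(U, V)))   # -> [[4 4]]
```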
VizieR Online Data Catalog: HST VI Photometry of Six LMC Old Globular Clusters (Olsen+ 1998)
NASA Astrophysics Data System (ADS)
Olsen, K. A. G.; Hodge, P. W.; Mateo, M.; Olszewski, E. W.; Schommer, R. A.; Suntzeff, N. B.; Walker, A. R.
1998-11-01
The following tables contain the results of photometry performed on Hubble Space Telescope WFPC2 images of the Large Magellanic Cloud globular clusters NGC 1754, 1835, 1898, 1916, 2005, and 2019. The magnitudes reported here were measured from Planetary Camera F555W and F814W images using DoPHOT (Schechter, Mateo, & Saha 1993) and afterwards transformed to Johnson V/Kron-Cousins I using equation 9 of Holtzman et al. (1995PASP..107.1065H). We carried out photometry on both long (1500 sec combined in F555W, 1800 sec in F814W) and short (40 sec combined in F555W, 60 sec in F814W) exposures. Where the short exposure photometry produced smaller errors, we report those magnitudes in place of those measured from the long exposures. For each star, we give an integer identifier, its x and y pixel position as measured in the F555W PC image, its V and I magnitude, the photometric errors reported by DoPHOT, both the V and I DoPHOT object types (multiplied by 10 if the reported magnitude was measured in the short exposure frame), and a flag if the star was removed during our procedure for statistical field star subtraction. Summary of data reduction and assessment of photometric accuracy: Cosmic ray rejection, correction for the y-dependent CTE effect (Holtzman et al. 1995a), geometric distortion correction, and bad pixel flagging were applied to the images before performing photometry. For the photometry, we used version 2.5 of DoPHOT, modified by Eric Deutsch to handle floating-point images. We found that there were insufficient numbers of bright, isolated stars in the PC frames for producing aperture corrections. Aperture corrections as a function of position in the frame were instead derived using WFPC2 point spread functions kindly provided by Peter Stetson. As these artificially generated aperture corrections agree well with ones derived from isolated stars in the WF chips, we trust that they are reliable to better than 0.05 mag. In agreement with the report of Whitmore & Heyer (1997), we found an offset in mean magnitudes between the short- and long-exposure photometry. We corrected for this effect by adjusting the short-exposure magnitudes to match, on average, those of the long exposures. Finally, we merged the short- and long- exposure lists of photometry as described above and transformed the magnitudes from the WFPC2 system to Johnson V/Kron-Cousins I, applying the Holtzman et al. (1995PASP..107.1065H) zero points. Statistical field star subtraction was performed using color-magnitude diagrams of the field stars produced from the combined WF frames. Completeness and random and systematic errors in the photometry were extensively modelled through artificial star tests. Crowding causes the completeness to be a strong function of position in the frame, with detection being most difficult near the cluster centers. In addition, we found that crowding introduces systematic errors in the photometry, generally <0.05 mag, that depend on the V-I and V of the star. Fortunately, these errors are well-understood. However, unknown errors in the zero points may persist at the ~0.05 mag level. (5 data files).
VizieR Online Data Catalog: HST VI Photometry of Six LMC Old Globular Clusters (Olsen+ 1998)
NASA Astrophysics Data System (ADS)
Olsen, K. A. G.; Hodge, P. W.; Mateo, M.; Olszewski, E. W.; Schommer, R. A.; Suntzeff, N. B.; Walker, A. R.
1998-11-01
The following tables contain the results of photometry performed on Hubble Space Telescope WFPC2 images of the Large Magellanic Cloud globular clusters NGC 1754, 1835, 1898, 1916, 2005, and 2019. The magnitudes reported here were measured from Planetary Camera F555W and F814W images using DoPHOT (Schechter, Mateo, & Saha 1993) and afterwards transformed to Johnson V/Kron-Cousins I using equation 9 of Holtzman et al. (1995PASP..107.1065H). We carried out photometry on both long (1500 sec combined in F555W, 1800 sec in F814W) and short (40 sec combined in F555W, 60 sec in F814W) exposures. Where the short exposure photometry produced smaller errors, we report those magnitudes in place of those measured from the long exposures. For each star, we give an integer identifier, its x and y pixel position as measured in the F555W PC image, its V and I magnitude, the photometric errors reported by DoPHOT, both the V and I DoPHOT object types (multiplied by 10 if the reported magnitude was measured in the short exposure frame), and a flag if the star was removed during our procedure for statistical field star subtraction. Summary of data reduction and assessment of photometric accuracy: Cosmic ray rejection, correction for the y-dependent CTE effect (Holtzman et al. 1995a), geometric distortion correction, and bad pixel flagging were applied to the images before performing photometry. For the photometry, we used version 2.5 of DoPHOT, modified by Eric Deutsch to handle floating-point images. We found that there were insufficient numbers of bright, isolated stars in the PC frames for producing aperture corrections. Aperture corrections as a function of position in the frame were instead derived using WFPC2 point spread functions kindly provided by Peter Stetson. As these artificially generated aperture corrections agree well with ones derived from isolated stars in the WF chips, we trust that they are reliable to better than 0.05 mag. In agreement with the report of Whitmore & Heyer (1997), we found an offset in mean magnitudes between the short- and long-exposure photometry. We corrected for this effect by adjusting the short-exposure magnitudes to match, on average, those of the long exposures. Finally, we merged the short- and long- exposure lists of photometry as described above and transformed the magnitudes from the WFPC2 system to Johnson V/Kron-Cousins I, applying the Holtzman et al. (1995PASP..107.1065H) zero points. Statistical field star subtraction was performed using color-magnitude diagrams of the field stars produced from the combined WF frames. Completeness and random and systematic errors in the photometry were extensively modelled through artificial star tests. Crowding causes the completeness to be a strong function of position in the frame, with detection being most difficult near the cluster centers. In addition, we found that crowding introduces systematic errors in the photometry, generally <0.05 mag, that depend on the V-I and V of the star. Fortunately, these errors are well-understood. However, unknown errors in the zero points may persist at the ~0.05 mag level. (6 data files).
Heavy and Light Quarks with Lattice Chiral Fermions
NASA Astrophysics Data System (ADS)
Liu, K. F.; Dong, S. J.
The feasibility of using lattice chiral fermions which are free of O(a) errors for both the heavy and light quarks is examined. The fact that the effective quark propagators in these fermions have the same form as that in the continuum with the quark mass being only an additive parameter to a chirally symmetric anti-Hermitian Dirac operator is highlighted. This implies that there is no distinction between the heavy and light quarks and no mass dependent tuning of the action or operators as long as the discretization error O(m2a2) is negligible. Using the overlap fermion, we find that the O(m2a2) (and O(ma2)) errors in the dispersion relations of the pseudoscalar and vector mesons and the renormalization of the axial-vector current and scalar density are small. This suggests that the applicable range of ma may be extended to ~0.56 with only 5% error, which is a factor of ~2.4 larger than the corresponding range of the improved Wilson action. We show that the generalized Gell-Mann-Oakes-Renner relation with unequal masses can be utilized to determine the finite ma corrections in the renormalization of the matrix elements for the heavy-light decay constants and semileptonic decay constants of the B/D meson.
Conical Probe Calibration and Wind Tunnel Data Analysis of the Channeled Centerbody Inlet Experiment
NASA Technical Reports Server (NTRS)
Truong, Samson Siu
2011-01-01
For a multi-hole test probe undergoing wind tunnel tests, the resulting data need to be analyzed for any significant trends. These trends include relating the pressure distributions, the geometric orientation, and the local velocity vector to one another. However, experimental runs always involve some sort of error. As a result, a calibration procedure is required to compensate for this error. In this case, the error consists of the misalignment bias angles resulting from the distortion associated with the angularity of the test probe or the local velocity vector. Through a series of calibration steps presented here, the angular biases are determined and removed from the data sets. By removing the misalignment, smoother pressure distributions contribute to more accurate experimental results, which in turn can be compared to theoretical and actual in-flight results to identify any similarities. Error analyses will also be performed to verify the accuracy of the calibration error reduction. The resulting calibrated data will be implemented into an in-flight RTF script that will output critical flight parameters during future CCIE experimental test runs. All of these tasks support NASA Dryden Flight Research Center's F-15B Research Testbed and its Small Business Innovation Research project, the Channeled Centerbody Inlet Experiment.
Mutation-adapted U1 snRNA corrects a splicing error of the dopa decarboxylase gene.
Lee, Ni-Chung; Lee, Yu-May; Chen, Pin-Wen; Byrne, Barry J; Hwu, Wuh-Liang
2016-12-01
Aromatic l-amino acid decarboxylase (AADC) deficiency is an inborn error of monoamine neurotransmitter synthesis, which results in dopamine, serotonin, epinephrine and norepinephrine deficiencies. The DDC gene founder mutation IVS6 + 4A > T is highly prevalent in Chinese patients with AADC deficiency. In this study, we designed several U1 snRNA vectors to adapt the U1 snRNA binding sequences of the mutated DDC gene. We found that only the modified U1 snRNA (IVS-AAA) that completely matched both the intronic and exonic U1 binding sequences of the mutated DDC gene could correct splicing errors of either the mutated human DDC minigene or the mouse artificial splicing construct in vitro. We further injected an adeno-associated viral (AAV) vector to express IVS-AAA in the brain of a knock-in mouse model. This treatment was well tolerated and improved both the survival and the brain dopamine and serotonin levels of mice with AADC deficiency. Therefore, mutation-adapted U1 snRNA gene therapy can be a promising method to treat genetic diseases caused by splicing errors, but the efficiency of such a treatment still needs improvement. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Thermodynamics of relation-based systems with applications in econophysics, sociophysics, and music
NASA Astrophysics Data System (ADS)
Gündüz, Güngör
2012-10-01
A methodology was developed to analyze relation-based systems evolving in time by using the fundamental concepts of thermodynamics. The behavior of such systems can be tracked from the scattering matrix, which is actually a network of directed vectors (or pathways) connecting subsequent values that characterize an event, such as the index values in stock markets. A system behaves in a rigid (elastic) way to an external effect and resists permanent deformation, or it behaves in a viscous (or soft) way and deforms irreversibly. It was shown in the past that a formula derived using the slope of paths gives a measure of the extent of viscoelastic behavior of relation-based systems (Gündüz, 2009 [5]; Gündüz and Gündüz, 2010 [6]). In this research, the 'work' associated with the 'elastic' component and the 'heat' associated with the 'viscous' component were discussed and elaborated. In a simple system of two subsequent pathways in a scattering diagram, the first vector represents 'the cause' and the second 'the effect'. By using work and heat energy relations that involve force and also storage and loss modulus terms, respectively, one can calculate the energy involved in relation-based systems. The modulus values can be found from the parallel and vertical components of the second vector with respect to the first vector. Once the work-like and heat-like terms are determined, the internal energy is easily found from their summation. The parallel and vertical components can also be used to calculate the magnitude of torque and torque energy in the system. Three cases, (i) the behavior of the NASDAQ-100 index, (ii) a social revolt, and (iii) the structure of a melody, were analyzed for their 'work-like', 'heat-like', and 'torque-like' energies in the course of their evolution. The NASDAQ-100 exhibits highly dissipative behavior, and its work terms are very small while its heat terms are of large magnitude. Its internal energy fluctuates strongly in time. In the social revolt studied, work and heat terms are of comparable magnitude. The melody depicts a highly organized structure and usually has larger work terms than heat terms, but at some intervals heat terms burst out and attain very large magnitudes. Torque terms reach high values when the system is recovering from a minimum value.
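The decomposition behind the work-like and heat-like terms is a plain vector projection of the 'effect' onto the 'cause'; a minimal sketch with arbitrary example vectors:

```python
import numpy as np

def parallel_perpendicular(a, b):
    """Split the 'effect' vector b into components parallel and perpendicular
    to the 'cause' vector a, the raw material of the storage-like (elastic)
    and loss-like (viscous) modulus terms."""
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    para = (a @ b) / (a @ a) * a
    return para, b - para

# Two successive moves in a scattering diagram.
para, perp = parallel_perpendicular([1.0, 0.5], [0.8, 1.2])
print(para, perp)   # elastic-like vs dissipative-like parts of the response
```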
NASA Astrophysics Data System (ADS)
Pang, Hongfeng; Zhu, XueJun; Pan, Mengchun; Zhang, Qi; Wan, Chengbiao; Luo, Shitu; Chen, Dixiang; Chen, Jinfei; Li, Ji; Lv, Yunxiao
2016-12-01
Misalignment error is one key factor influencing the measurement accuracy of a geomagnetic vector measurement system; calibrating it is difficult because the sensors measure different physical quantities and their coordinate frames are not directly observable. A new misalignment calibration method based on rotating a parallelepiped frame is proposed. Simulation and experiment results show the effectiveness of the calibration method. The experimental system mainly contains a DM-050 three-axis fluxgate magnetometer, an INS (inertial navigation system), an aluminium parallelepiped frame, and an aluminium plane base. Misalignment angles are calculated from the data measured by the magnetometer and the INS after rotating the aluminium parallelepiped frame on the aluminium plane base. After calibration, the RMS errors of the geomagnetic north, vertical, and east components are reduced from 349.441 nT, 392.530 nT and 562.316 nT to 40.130 nT, 91.586 nT and 141.989 nT, respectively.
Credit Risk Evaluation Using a C-Variable Least Squares Support Vector Classification Model
NASA Astrophysics Data System (ADS)
Yu, Lean; Wang, Shouyang; Lai, K. K.
Credit risk evaluation is one of the most important issues in financial risk management. In this paper, a C-variable least squares support vector classification (C-VLSSVC) model is proposed for credit risk analysis. The main idea of this model is based on the prior knowledge that different classes may have different importance for modeling and more weight should be given to classes with more importance. The C-VLSSVC model can be constructed by a simple modification of the regularization parameter in LSSVC, whereby more weight is given to the least squares classification errors of important classes than to those of unimportant classes, while keeping the regularized terms in their original form. For illustration purposes, a real-world credit dataset is used to test the effectiveness of the C-VLSSVC model.
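The spirit of class-dependent weighting can be approximated with scikit-learn's class-weighted SVC; note this stand-in modifies the hinge-loss SVC, not the least squares formulation of the paper, and all data are synthetic:

```python
import numpy as np
from sklearn.svm import SVC

# Credit-like data: class 1 (default) is rare but costly to miss.
rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0.0, 1.0, (900, 4)), rng.normal(1.0, 1.0, (100, 4))])
y = np.array([0] * 900 + [1] * 100)

plain = SVC(C=1.0).fit(X, y)
weighted = SVC(C=1.0, class_weight={0: 1, 1: 8}).fit(X, y)  # stress defaults
for name, model in (("plain", plain), ("weighted", weighted)):
    recall = (model.predict(X)[y == 1] == 1).mean()
    print(name, "recall on the important class:", round(recall, 2))
```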
Polarization locked vector solitons and axis instability in optical fiber.
Cundiff, Steven T.; Collings, Brandon C.; Bergman, Keren
2000-09-01
We experimentally observe polarization-locked vector solitons in optical fiber. Polarization-locked vector solitons use nonlinearity to preserve their polarization state despite the presence of birefringence. To achieve conditions where the delicate balance between nonlinearity and birefringence can survive, we studied the polarization evolution of the pulses circulating in a laser constructed entirely of optical fiber. We observe two distinct states with fixed polarization. The first state occurs for very small values of birefringence and is elliptically polarized. We measure the relative phase between orthogonal components along the two principal axes to be +/-pi/2. The relative amplitude varies linearly with the magnitude of the birefringence. This state is a polarization-locked vector soliton. The second, linearly polarized, state occurs for larger values of birefringence. The second state is due to the fast axis instability. We provide complete characterization of these states, and present a physical explanation of both of these states and the stability of the polarization-locked vector solitons. (c) 2000 American Institute of Physics.
Polarization locked vector solitons and axis instability in optical fiber
NASA Astrophysics Data System (ADS)
Cundiff, Steven T.; Collings, Brandon C.; Bergman, Keren
2000-09-01
We experimentally observe polarization-locked vector solitons in optical fiber. Polarization-locked vector solitons use nonlinearity to preserve their polarization state despite the presence of birefringence. To achieve conditions where the delicate balance between nonlinearity and birefringence can survive, we studied the polarization evolution of the pulses circulating in a laser constructed entirely of optical fiber. We observe two distinct states with fixed polarization. The first state occurs for very small values of birefringence and is elliptically polarized. We measure the relative phase between orthogonal components along the two principal axes to be ±π/2. The relative amplitude varies linearly with the magnitude of the birefringence. This state is a polarization-locked vector soliton. The second, linearly polarized, state occurs for larger values of birefringence. The second state is due to the fast axis instability. We provide complete characterization of these states, and present a physical explanation of both of these states and the stability of the polarization-locked vector solitons.
Kallert, Sandra M.; Darbre, Stephanie; Bonilla, Weldy V.; Kreutzfeldt, Mario; Page, Nicolas; Müller, Philipp; Kreuzaler, Matthias; Lu, Min; Favre, Stéphanie; Kreppel, Florian; Löhning, Max; Luther, Sanjiv A.; Zippelius, Alfred; Merkler, Doron; Pinschewer, Daniel D.
2017-01-01
Viral infections lead to alarmin release and elicit potent cytotoxic effector T lymphocyte (CTLeff) responses. Conversely, the induction of protective tumour-specific CTLeff and their recruitment into the tumour remain challenging tasks. Here we show that lymphocytic choriomeningitis virus (LCMV) can be engineered to serve as a replication competent, stably-attenuated immunotherapy vector (artLCMV). artLCMV delivers tumour-associated antigens to dendritic cells for efficient CTL priming. Unlike replication-deficient vectors, artLCMV targets also lymphoid tissue stroma cells expressing the alarmin interleukin-33. By triggering interleukin-33 signals, artLCMV elicits CTLeff responses of higher magnitude and functionality than those induced by replication-deficient vectors. Superior anti-tumour efficacy of artLCMV immunotherapy depends on interleukin-33 signalling, and a massive CTLeff influx triggers an inflammatory conversion of the tumour microenvironment. Our observations suggest that replicating viral delivery systems can release alarmins for improved anti-tumour efficacy. These mechanistic insights may outweigh safety concerns around replicating viral vectors in cancer immunotherapy. PMID:28548102
Moshirfar, Majid; McCaughey, Michael V; Santiago-Caban, Luis
2015-01-01
Postoperative residual refractive error following cataract surgery is not an uncommon occurrence for a large proportion of modern-day patients. Residual refractive errors can be broadly classified into 3 main categories: myopic, hyperopic, and astigmatic. The degree to which a residual refractive error adversely affects a patient is dependent on the magnitude of the error, as well as the specific type of intraocular lens the patient possesses. There are a variety of strategies for resolving residual refractive errors that must be individualized for each specific patient scenario. In this review, the authors discuss contemporary methods for rectification of residual refractive error, along with their respective indications/contraindications, and efficacies. PMID:25663845
Moshirfar, Majid; McCaughey, Michael V; Santiago-Caban, Luis
2014-12-01
Postoperative residual refractive error following cataract surgery is not an uncommon occurrence for a large proportion of modern-day patients. Residual refractive errors can be broadly classified into 3 main categories: myopic, hyperopic, and astigmatic. The degree to which a residual refractive error adversely affects a patient is dependent on the magnitude of the error, as well as the specific type of intraocular lens the patient possesses. There are a variety of strategies for resolving residual refractive errors that must be individualized for each specific patient scenario. In this review, the authors discuss contemporary methods for rectification of residual refractive error, along with their respective indications/contraindications, and efficacies.
Measurement error in environmental epidemiology and the shape of exposure-response curves.
Rhomberg, Lorenz R; Chandalia, Juhi K; Long, Christopher M; Goodman, Julie E
2011-09-01
Both classical and Berkson exposure measurement errors, as encountered in environmental epidemiology data, can result in biases in fitted exposure-response relationships that are large enough to affect the interpretation and use of the apparent exposure-response shapes in risk assessment applications. A variety of sources of potential measurement error exist in the process of estimating individual exposures to environmental contaminants, and the authors review literature evaluations of the magnitudes and patterns of exposure measurement errors that prevail in actual practice. It is well known among statisticians that random errors in the values of independent variables (such as exposure in exposure-response curves) tend to bias regression results. For increasing curves, this effect tends to flatten and apparently linearize what is in truth a steeper and perhaps more curvilinear or even threshold-bearing relationship. The degree of bias is tied to the magnitude of the measurement error in the independent variables. It has been shown that the degree of bias known to apply to actual studies is sufficient to produce a false linear result, and that although nonparametric smoothing and other error-mitigating techniques may assist in identifying a threshold, they do not guarantee its detection. The consequences could be great, as this could lead to a misallocation of resources toward regulations that offer no benefit to public health.
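The flattening effect described above is easy to reproduce. The following minimal simulation (illustrative only, not taken from the paper; all values are assumed) shows how adding classical measurement error to the exposure variable makes a threshold-shaped exposure-response relationship look progressively flatter and more linear:

```python
# Classical measurement error in the exposure flattens and apparently
# linearizes a threshold-shaped exposure-response curve (illustrative).
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
x_true = rng.uniform(0.0, 10.0, n)                    # true exposure
response = np.where(x_true > 5.0, x_true - 5.0, 0.0)  # threshold-shaped truth
y = response + rng.normal(0.0, 0.5, n)                # outcome with noise

for sigma_u in (0.0, 1.0, 2.0):                       # increasing classical error
    x_obs = x_true + rng.normal(0.0, sigma_u, n)
    # crude nonparametric look: mean response within observed-exposure bins
    bins = np.linspace(0, 10, 21)
    idx = np.digitize(x_obs, bins)
    curve = [y[idx == k].mean() for k in range(1, len(bins)) if np.any(idx == k)]
    print(f"sigma_u={sigma_u}: binned curve = {np.round(curve, 2)}")
# With sigma_u = 0 the binned curve is flat, then rises (visible threshold);
# as sigma_u grows the apparent curve flattens toward a shallow, near-linear slope.
```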
Direct discretization of planar div-curl problems
NASA Technical Reports Server (NTRS)
Nicolaides, R. A.
1989-01-01
A control volume method is proposed for planar div-curl systems. The method is independent of potential and least squares formulations, and works directly with the div-curl system. The novelty of the technique lies in its use of a single local vector field component and two control volumes rather than the other way around. A discrete vector field theory comes quite naturally from this idea and is developed. Error estimates are proved for the method, and other ramifications investigated.
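As a rough illustration of the control-volume viewpoint (a simplified sketch on a uniform grid, not Nicolaides' exact dual-mesh construction, which pairs a single vector component with two control volumes), the discrete divergence over a square cell can be formed as the net flux through its faces from one normal component per face:

```python
# Discrete divergence as net flux out of each square control volume, built
# from one normal velocity component per face (staggered placement). The
# discrete curl would use circulations around a second, vertex-centred set
# of control volumes: the "two control volumes per component" idea.
import numpy as np

h, n = 0.05, 40
x = np.arange(n + 1) * h
# u sampled on vertical faces, v on horizontal faces
X_u, Y_u = np.meshgrid(x, x[:-1] + h / 2, indexing="ij")   # shape (n+1, n)
X_v, Y_v = np.meshgrid(x[:-1] + h / 2, x, indexing="ij")   # shape (n, n+1)
u = np.sin(X_u) * np.cos(Y_u)    # test field (u, v): analytically divergence-free
v = -np.cos(X_v) * np.sin(Y_v)

# net flux out of each cell over its area: (u_east - u_west)/h + (v_north - v_south)/h
div = (u[1:, :] - u[:-1, :]) / h + (v[:, 1:] - v[:, :-1]) / h
print("max |div|:", np.abs(div).max())   # small: O(h^2) truncation error only
```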
Vectorial magnetometry with the magneto-optic Kerr effect applied to Co/Cu/Co trilayer structures
NASA Astrophysics Data System (ADS)
Daboo, C.; Bland, J. A. C.; Hicken, R. J.; Ives, A. J. R.; Baird, M. J.; Walker, M. J.
1993-05-01
We describe an arrangement in which the magnetization components parallel and perpendicular to the applied field are both determined from longitudinal magneto-optic Kerr effect measurements. This arrangement differs from the usual procedures in that the same optical geometry is used but the magnet geometry is altered. This leads to two magneto-optic signals that are directly comparable in magnitude, thereby giving the in-plane magnetization vector directly. We show that it is of great value to measure both in-plane magnetization vector components when studying coupled structures in which significant anisotropies are also present. We discuss simulations which show that it is possible to accurately determine the coupling strength in such structures by examining the behavior of the component of magnetization perpendicular to the applied field in the vicinity of the hard in-plane anisotropy axis. We illustrate this technique by examining the magnetization and magnetic anisotropy behavior of ultrathin Co/Cu(111)/Co (dCu=20 Å and 27 Å) trilayer structures prepared by molecular beam epitaxy, in which coherent rotation of the magnetization vector is observed when the magnetic field B is applied along the hard in-plane anisotropy axis, with the magnitude of the magnetization vector constant and close to its bulk value. Results of micromagnetic calculations closely reproduce the observed parallel and perpendicular magnetization loops, and yield strong uniaxial magnetic anisotropies in both layers, while the interlayer coupling appears to be absent or negligible in comparison with the anisotropy strengths.
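The vector reconstruction itself is simple arithmetic once the two Kerr signals are normalized to a common saturation value. A minimal sketch, with illustrative loop points assumed:

```python
# With both longitudinal Kerr signals scaled to the same saturation value,
# the parallel and perpendicular components give the in-plane magnetization
# vector directly (all numbers below are illustrative, not measured data).
import numpy as np

m_par = np.array([1.00, 0.92, 0.71, 0.38])    # M_parallel / M_s
m_perp = np.array([0.00, 0.39, 0.70, 0.92])   # M_perpendicular / M_s

magnitude = np.hypot(m_par, m_perp)                 # |M| / M_s; ~1 for coherent rotation
angle_deg = np.degrees(np.arctan2(m_perp, m_par))   # rotation angle from the field axis
print(magnitude.round(3), angle_deg.round(1))
```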
Bobrova, E V; Bogacheva, I N; Lyakhovetskii, V A; Fabinskaja, A A; Fomina, E V
2017-01-01
In order to test the hypothesis of hemispheric specialization for different types of information coding (the right hemisphere for positional coding; the left for vector coding), we analyzed the errors of right- and left-handers during a task involving the memorization of sequences of movements by the left or the right hand, which activates vector coding by changing the order of movements in the memorized sequences. The task was first performed by the right or the left hand, and then by the opposite hand. It was found that both right- and left-handers use information about the previous movements of the dominant hand, but not of the non-dominant one. After changing hands, right-handers use information about the previous movements of the other hand, while left-handers do not. We compared our results with data from previous experiments in which positional coding was activated, and concluded that both right- and left-handers use vector coding for memorizing sequences performed by the dominant hand and positional coding for memorizing sequences performed by the non-dominant hand. No similar patterns of errors were found between right- and left-handers after changing hands, which suggests that the skills of right- and left-handers are transferred in different ways depending on the type of coding.
NASA Astrophysics Data System (ADS)
Ma, Hongliang; Xu, Shijie
2014-09-01
This paper presents an improved real-time sequential filter (IRTSF) for magnetometer-only attitude and angular velocity estimation of a spacecraft during attitude changes (including fast, large-angle attitude maneuvers, rapid spinning, or uncontrolled tumbling). In this magnetometer-only attitude determination technique, both the attitude dynamics equation and the first time derivative of the measured magnetic field vector are incorporated directly into the filtering equations, building on the traditional gyroless single-vector attitude determination method and the real-time sequential filter (RTSF) for magnetometer-only attitude estimation. The process noise model of the IRTSF includes the attitude kinematics and dynamics equations, and its measurement model consists of the magnetic field vector and its first time derivative. The observability of the IRTSF for spacecraft with small or large angular velocity changes is evaluated by an improved Lie differentiation, and the degree of observability for different initial estimation errors is analyzed using the condition number and the solved covariance matrix. Numerical simulation results indicate that: (1) the attitude and angular velocity of the spacecraft can be estimated with sufficient accuracy using the IRTSF from magnetometer-only data; (2) compared with the RTSF, the estimation accuracy and observability degree of the attitude and angular velocity are both improved; and (3) the IRTSF is universal, remaining observable for any initial state estimation error vector.
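A minimal sketch of the single-vector relation that such filters exploit (a simplification, not the IRTSF itself; all names and numbers are illustrative): if the inertial-frame field changes slowly over one sampling interval, then in the body frame dB/dt ≈ -ω × B, so the magnetometer reading and its first time derivative directly constrain the component of ω perpendicular to B, while the dynamics model in the filter supplies the rest:

```python
import numpy as np

def omega_perp_from_mag(B, dBdt):
    """Component of the angular velocity perpendicular to B (rad/s),
    from dB/dt = -omega x B  =>  omega_perp = (dB/dt x B) / |B|^2."""
    return np.cross(dBdt, B) / np.dot(B, B)

# toy check: pure rotation about z at 0.1 rad/s, field along x (~30 uT)
omega_true = np.array([0.0, 0.0, 0.1])
B = np.array([30e-6, 0.0, 0.0])
dBdt = -np.cross(omega_true, B)
print(omega_perp_from_mag(B, dBdt))   # -> [0. 0. 0.1], since omega is perpendicular to B here
```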
Angular Motion Estimation Using Dynamic Models in a Gyro-Free Inertial Measurement Unit
Edwan, Ezzaldeen; Knedlik, Stefan; Loffeld, Otmar
2012-01-01
In this paper, we summarize the results of using dynamic models borrowed from tracking theory to describe the time evolution of the state vector and thereby estimate the angular motion in a gyro-free inertial measurement unit (GF-IMU). The GF-IMU is a special type of inertial measurement unit (IMU) that uses only a set of accelerometers to infer the angular motion. Using distributed accelerometers, we obtain an angular information vector (AIV) composed of angular acceleration and quadratic angular velocity terms. We use a Kalman filter approach to estimate the angular velocity vector, since it is not expressed explicitly within the AIV. The bias parameters inherent in the accelerometer measurements produce a biased AIV, and hence the AIV bias parameters are estimated within an augmented state vector. Using dynamic models, the appended bias parameters of the AIV become observable, and hence we can obtain an unbiased angular motion estimate. Moreover, a good model is required to extract the maximum amount of information from the observations. An observability analysis is performed to determine the conditions for an observable state-space model. For higher grades of accelerometers and under relatively high sampling frequencies, the error of the accelerometer measurements is dominated by noise. Consequently, simulations are conducted on two models: one with bias parameters appended in the state-space model, and a reduced model without bias parameters. PMID:22778586
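A single-axis sketch of this estimation problem (illustrative only; the paper's filter is multi-axis and augments the state with accelerometer bias parameters): the AIV supplies a quadratic angular-velocity term and an angular acceleration, and an extended Kalman filter recovers the angular velocity itself:

```python
# Minimal single-axis EKF: state [w, w_dot], measurements [w^2, w_dot].
import numpy as np

dt, steps = 0.01, 500
rng = np.random.default_rng(1)
x = np.array([0.5, 0.0])                      # truth: [w, w_dot]
xh, P = np.array([0.2, 0.0]), np.eye(2)       # filter state and covariance
Q = np.diag([1e-6, 1e-4])                     # process noise
R = np.diag([1e-3, 1e-2])                     # measurement noise (w^2, w_dot)
F = np.array([[1.0, dt], [0.0, 1.0]])         # constant-angular-acceleration model

for k in range(steps):
    x = F @ x + [0.0, 0.02 * dt * np.sin(0.01 * k)]   # slowly varying truth
    z = np.array([x[0] ** 2, x[1]]) + rng.normal(0, [0.03, 0.1])
    xh, P = F @ xh, F @ P @ F.T + Q                   # predict
    H = np.array([[2 * xh[0], 0.0], [0.0, 1.0]])      # Jacobian of h(x) = [w^2, w_dot]
    y = z - np.array([xh[0] ** 2, xh[1]])             # innovation
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    xh, P = xh + K @ y, (np.eye(2) - K @ H) @ P

print("true w:", round(x[0], 3), " estimated w:", round(xh[0], 3))
```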
Agogo, George O; van der Voet, Hilko; van 't Veer, Pieter; Ferrari, Pietro; Muller, David C; Sánchez-Cantalejo, Emilio; Bamia, Christina; Braaten, Tonje; Knüppel, Sven; Johansson, Ingegerd; van Eeuwijk, Fred A; Boshuizen, Hendriek C
2016-10-13
Measurement error in self-reported dietary intakes is known to bias the association between dietary intake and a health outcome of interest, such as the risk of a disease. The association can be distorted further by mismeasured confounders, leading to invalid results and conclusions. It is, however, difficult to adjust for the bias in the association when there are no internal validation data. We propose a method to adjust for the bias in the diet-disease association (hereafter, association) due to measurement error in dietary intake and a mismeasured confounder when there are no internal validation data. The method combines prior information on the validity of the self-report instrument with the observed data to adjust for the bias in the association. We compared the proposed method with the method that ignores the confounder effect, and with the method that ignores measurement errors completely. We assessed the sensitivity of the estimates to various magnitudes of measurement error, error correlations, and uncertainty in the literature-reported validation data. We applied the methods to fruit and vegetable (FV) intake, cigarette smoking (confounder), and all-cause mortality data from the European Prospective Investigation into Cancer and Nutrition study. Using the proposed method resulted in about a four-fold increase in the strength of the association between FV intake and mortality. For weakly correlated errors, measurement error in the confounder minimally affected the hazard ratio estimate for FV intake. The effect was more pronounced for strong error correlations. The proposed method permits sensitivity analyses on the measurement error structure and accounts for uncertainties in the reported validity coefficients. The method is useful for assessing the direction and quantifying the magnitude of the bias in the association due to measurement errors in the confounders.
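The core de-attenuation step can be sketched in a few lines (a hedged illustration with assumed numbers, not the authors' full procedure, which also propagates uncertainty in the validity coefficients):

```python
# De-attenuating an observed association with an external validity coefficient.
import numpy as np

beta_obs = np.log(0.94)      # observed log hazard ratio per unit FV intake (assumed)
lam = 0.30                   # literature-reported attenuation factor (assumed)
print("corrected HR:", np.exp(beta_obs / lam).round(3))   # association strengthens

# exposure plus one mismeasured confounder: the scalar division generalizes to
# solving with an assumed attenuation matrix, E[beta_obs] = Lambda @ beta_true
Lambda = np.array([[0.30, 0.05],
                   [0.02, 0.80]])
b_obs = np.array([np.log(0.94), np.log(1.20)])
print("corrected betas:", np.linalg.solve(Lambda, b_obs).round(3))
```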
The values of the parameters of some multilayer distributed RC null networks
NASA Technical Reports Server (NTRS)
Huelsman, L. P.; Raghunath, S.
1974-01-01
In this correspondence, the values of the parameters of some multilayer distributed RC notch networks are determined, and the usually accepted values are shown to be in error. The magnitude of the error is illustrated by graphs of the frequency response of the networks.
Regression-assisted deconvolution.
McIntyre, Julie; Stefanski, Leonard A
2011-06-30
We present a semi-parametric deconvolution estimator for the density function of a random variable X that is measured with error, a common challenge in many epidemiological studies. Traditional deconvolution estimators rely only on assumptions about the distribution of X and the error in its measurement, and ignore information available in auxiliary variables. Our method assumes the availability of a covariate vector statistically related to X by a mean-variance function regression model, where regression errors are normally distributed and independent of the measurement errors. Simulations suggest that the estimator achieves a much lower integrated squared error than the observed-data kernel density estimator when models are correctly specified and the assumption of normal regression errors is met. We illustrate the method using anthropometric measurements of newborns to estimate the density function of newborn length. Copyright © 2011 John Wiley & Sons, Ltd.
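A minimal sketch of the idea under simplifying assumptions (linear mean function, homoscedastic normal regression errors, and a known measurement error variance; the paper treats a general mean-variance model): with X = a + b*Z + eps, eps ~ N(0, s^2), the density of X is estimated as a mixture of normals centered at the fitted values, sidestepping the measurement error in W = X + U:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
Z = rng.normal(0, 1, n)
X = 1.0 + 0.8 * Z + rng.normal(0, 0.5, n)      # true (unobserved) variable
W = X + rng.normal(0, 0.7, n)                  # error-contaminated measurement

# regressing W on Z recovers the mean function of X when the errors are
# independent; the residual variance overstates s^2 by var(U)
b, a = np.polyfit(Z, W, 1)
s2 = np.var(W - (a + b * Z)) - 0.7 ** 2        # assumes var(U) known here
fitted = a + b * Z

def density_x(x):
    """Mixture-of-normals density estimate for X at a point x."""
    return np.mean(np.exp(-(x - fitted) ** 2 / (2 * s2)) / np.sqrt(2 * np.pi * s2))

# compare with the analytic truth: X ~ N(1, 0.8^2 + 0.5^2)
print(density_x(1.0), "vs", 1.0 / np.sqrt(2 * np.pi * (0.8 ** 2 + 0.5 ** 2)))
```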
Bezodis, Neil E; North, Jamie S; Razavet, Jane L
2017-09-01
A more horizontally oriented ground reaction force vector is related to higher levels of sprint acceleration performance across a range of athletes. However, the effects of acute experimental alterations to the force vector orientation within athletes are unknown. Fifteen male team-sport athletes completed maximal-effort 10-m accelerations in three conditions following different verbal instructions intended to manipulate the force vector orientation. Ground reaction forces (GRFs) were collected from the step nearest to the 5-m mark, and stance leg kinematics at touchdown were also analysed to identify specific kinematic features of touchdown technique that may influence the consequent force vector orientation. Magnitude-based inferences were used to compare findings between conditions. There was a likely more horizontally oriented ground reaction force vector and a likely lower peak vertical force in the control condition compared with the experimental conditions. The 10-m sprint time was very likely quickest in the control condition, which confirmed the importance of force vector orientation for acceleration performance on a within-athlete basis. The stance leg kinematics revealed that a more horizontally oriented force vector during stance was preceded at touchdown by a likely more dorsiflexed ankle, a likely more flexed knee, and a possibly or likely greater hip extension velocity.
Errors introduced by dose scaling for relative dosimetry
Watanabe, Yoichi; Hayashi, Naoki
2012-01-01
Some dosimeters require a relationship between detector signal and delivered dose. The relationship (characteristic curve or calibration equation) usually depends on the environment under which the dosimeters are manufactured or stored. To compensate for the difference in radiation response among different batches of dosimeters, the measured dose can be scaled by normalizing it to a specific dose. Such a procedure, often called "relative dosimetry", allows us to skip the time-consuming production of a calibration curve for each irradiation. In this study, the magnitudes of errors due to the dose scaling procedure were evaluated by using the characteristic curves of the BANG3 polymer gel dosimeter, radiographic EDR2 films, and GAFCHROMIC EBT2 films. Several sets of calibration data were obtained for each type of dosimeter, and the calibration equation of one data set was used to estimate the doses of dosimeters from other batches. The scaled doses were then compared with the expected doses, which were obtained using the true calibration equation specific to each batch. In general, the magnitude of the errors increased with increasing deviation of the dose scaling factor from unity. The errors also depended strongly on the difference in shape between the true and reference calibration curves. For example, for the BANG3 polymer gel, whose characteristic curve can be approximated by a linear equation, the error for a batch requiring a dose scaling factor of 0.87 was larger than the errors for batches requiring smaller magnitudes of dose scaling (factors of 0.93 or 1.02). The characteristic curves of the EDR2 and EBT2 films required nonlinear equations. With those dosimeters, errors larger than 5% were commonly observed in dose ranges below 50% and above 150% of the normalization dose. In conclusion, dose scaling for relative dosimetry introduces large errors in the measured doses when a large scaling is applied, and the procedure should be applied with special care. PACS numbers: 87.56.Da, 06.20.Dk, 06.20.fb PMID:22955658
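The mechanism behind these errors can be sketched with assumed curve shapes (illustrative numbers only, not the paper's measured characteristic curves): reading dose through a reference batch's calibration plus a single scaling factor is exact at the normalization dose, but errors grow wherever the true and reference curve shapes diverge:

```python
import numpy as np

def signal_true(d):  return d / 300.0                   # this batch: linear (assumed)
# reference batch: saturating, signal_ref(d) = 1 - exp(-d/200) (assumed)

d_norm = 100.0                                          # normalization dose
doses = np.array([25.0, 50.0, 100.0, 150.0, 200.0])

# relative dosimetry: invert the reference curve, then scale so the
# normalization dose reads correctly
read_ref = -200.0 * np.log(1.0 - signal_true(doses))    # inverse of signal_ref
scale = d_norm / (-200.0 * np.log(1.0 - signal_true(d_norm)))
dose_est = scale * read_ref
print("percent error:", (100 * (dose_est - doses) / doses).round(1))
# ~0 at the normalization dose; errors grow toward the low- and high-dose ends
```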
Martella, Andrea; Matjusaitis, Mantas; Auxillos, Jamie; Pollard, Steven M; Cai, Yizhi
2017-07-21
Mammalian plasmid expression vectors are critical reagents underpinning many facets of research across biology, biomedical research, and the biotechnology industry. Traditional cloning methods often require laborious manual design and assembly of plasmids using tailored sequential cloning steps. This process can be protracted, complicated, expensive, and error-prone. New tools and strategies that facilitate the efficient design and production of bespoke vectors would help relieve a current bottleneck for researchers. To address this, we have developed an extensible mammalian modular assembly kit (EMMA). This enables rapid and efficient modular assembly of mammalian expression vectors in a one-tube, one-step Golden Gate cloning reaction, using a standardized library of compatible genetic parts. The high modularity, flexibility, and extensibility of EMMA provide a simple method for the production of functionally diverse mammalian expression vectors. We demonstrate the value of this toolkit by constructing and validating a range of representative vectors, such as transient and stable expression vectors (transposon-based vectors), targeting vectors, inducible systems, polycistronic expression cassettes, fusion proteins, and fluorescent reporters. The method also supports simple assembly of combinatorial libraries and hierarchical assembly for the production of larger multigenic cargos. In summary, EMMA is compatible with automated production, and novel genetic parts can be easily incorporated, providing new opportunities for mammalian synthetic biology.
T-ray relevant frequencies for osteosarcoma classification
NASA Astrophysics Data System (ADS)
Withayachumnankul, W.; Ferguson, B.; Rainsford, T.; Findlay, D.; Mickan, S. P.; Abbott, D.
2006-01-01
We investigate the classification of the T-ray response of normal human bone cells and human osteosarcoma cells grown in culture. Given the magnitude and phase responses within a reliable spectral range as features for the input vectors, a trained support vector machine can correctly classify the two cell types to some extent. Performance of the support vector machine is degraded by the curse of dimensionality, arising from the comparatively large number of features in the input vectors. Feature subset selection methods are therefore used to select only an optimal number of relevant features as inputs. As a result, an improvement in generalization performance is attainable, and the selected frequencies can be used to further describe the different mechanisms by which the cells respond to T-rays. We demonstrate a consistent classification accuracy of 89.6% while only one-fifth of the original features are retained in the data set.
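A sketch of this pipeline on synthetic stand-in data (the paper used magnitude and phase spectra of cultured cells; the scikit-learn calls are real, the data below are not):

```python
# Feature subset selection before an SVM to counter the curse of dimensionality.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n, p = 120, 100                                # few samples, many spectral features
X = rng.normal(0, 1, (n, p))
ylab = rng.integers(0, 2, n)                   # two cell classes
X[:, : p // 5] += 0.6 * ylab[:, None]          # only one fifth of features carry signal

clf = make_pipeline(StandardScaler(),
                    SelectKBest(f_classif, k=p // 5),   # retain one fifth of features
                    SVC(kernel="rbf"))
print("CV accuracy:", cross_val_score(clf, X, ylab, cv=5).mean().round(3))
```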
Theory of high-resolution tunneling spin transport on a magnetic skyrmion
NASA Astrophysics Data System (ADS)
Palotás, Krisztián; Rózsa, Levente; Szunyogh, László
2018-05-01
Tunneling spin transport characteristics of a magnetic skyrmion are described theoretically in magnetic scanning tunneling microscopy (STM). The spin-polarized charge current in STM (SP-STM) and the tunneling spin transport vector quantities, the longitudinal spin current and the spin transfer torque, are calculated at high spatial resolution within the same theoretical framework. A connection between conventional charge-current SP-STM image contrasts and the magnitudes of the spin transport vectors is demonstrated, which enables the estimation of tunneling spin transport properties from experimentally measured SP-STM images. A considerable tunability of the spin transport vectors by the involved spin polarizations is also highlighted. These possibilities, and the combined theory of tunneling charge and vector spin transport, pave the way for gaining deep insight into electric-current-induced tunneling spin transport in SP-STM and into the related dynamics of complex magnetic textures at surfaces.
Tyo, J Scott; LaCasse, Charles F; Ratliff, Bradley M
2009-10-15
Microgrid polarimeters operate by integrating a focal plane array with an array of micropolarizers. The Stokes parameters are estimated by comparing polarization measurements from pixels in a neighborhood around the point of interest. The main drawback is that the measurements used to estimate the Stokes vector are made at different locations, leading to a false polarization signature owing to instantaneous field-of-view (IFOV) errors. We demonstrate for the first time, to our knowledge, that spatially band-limited polarization images can be ideally reconstructed with no IFOV error by using a linear system framework.
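For reference, the conventional superpixel estimate that the IFOV error afflicts can be sketched as follows (assuming the common 0/45/90/135-degree micropolarizer layout; the paper's band-limited reconstruction itself is not reproduced here):

```python
# Stokes parameters from the four neighboring micropolarizer pixels of one
# superpixel, using I(theta) = 0.5 * (S0 + S1*cos(2*theta) + S2*sin(2*theta)).
import numpy as np

def stokes_from_superpixel(i0, i45, i90, i135):
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal/vertical preference
    s2 = i45 - i135                      # +45/-45 preference
    return np.array([s0, s1, s2])

# check: fully polarized unit-intensity light at 30 degrees
ang = np.radians([0.0, 45.0, 90.0, 135.0])
I = 0.5 * (1 + np.cos(2 * (ang - np.radians(30.0))))
print(stokes_from_superpixel(*I))        # -> [1, 0.5, ~0.866] = [S0, S1, S2]
```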