NASA Astrophysics Data System (ADS)
Kang, Shouqiang; Ma, Danyang; Wang, Yujing; Lan, Chaofeng; Chen, Qingguo; Mikulovich, V. I.
2017-03-01
To effectively assess different fault locations and different degrees of performance degradation of a rolling bearing with a unified assessment index, a novel state assessment method based on the relative compensation distance of multiple-domain features and locally linear embedding is proposed. First, for a single-sample signal, time-domain and frequency-domain indexes can be calculated for the original vibration signal and each sensitive intrinsic mode function obtained by improved ensemble empirical mode decomposition, and the singular values of the sensitive intrinsic mode function matrix can be extracted by singular value decomposition to construct a high-dimensional hybrid-domain feature vector. Second, a feature matrix can be constructed by arranging each feature vector of multiple samples, the dimensions of each row vector of the feature matrix can be reduced by the locally linear embedding algorithm, and the compensation distance of each fault state of the rolling bearing can be calculated using the support vector machine. Finally, the relative distance between different fault locations and different degrees of performance degradation and the normal-state optimal classification surface can be compensated, and on the basis of the proposed relative compensation distance, the assessment model can be constructed and an assessment curve drawn. Experimental results show that the proposed method can effectively assess different fault locations and different degrees of performance degradation of the rolling bearing under certain conditions.
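The dimensionality-reduction and distance steps described above can be sketched as follows; this is a minimal illustration assuming scikit-learn, with a random matrix standing in for the hybrid-domain feature matrix (the EEMD, SVD and time/frequency-index extraction is not reproduced here) and the signed SVM decision value used as a stand-in for the compensation-distance index.

```python
# Hypothetical sketch of the dimensionality-reduction and distance step described
# above, assuming scikit-learn; feature extraction (EEMD, SVD, time/frequency
# indexes) is represented here by a pre-built synthetic feature matrix X.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))            # 200 samples x 30 hybrid-domain features
y = rng.integers(0, 2, size=200)          # 0 = normal state, 1 = fault state

# Reduce the high-dimensional hybrid-domain features with LLE.
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=5)
Z = lle.fit_transform(X)

# Train an SVM separating the normal state from a fault state; the signed
# distance to the optimal classification surface acts as a degradation index.
svm = SVC(kernel="rbf").fit(Z, y)
distance = svm.decision_function(Z)       # per-sample distance to the hyperplane
print(distance[:5])
```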
A new method for distortion magnetic field compensation of a geomagnetic vector measurement system
NASA Astrophysics Data System (ADS)
Liu, Zhongyan; Pan, Mengchun; Tang, Ying; Zhang, Qi; Geng, Yunling; Wan, Chengbiao; Chen, Dixiang; Tian, Wugang
2016-12-01
The geomagnetic vector measurement system mainly consists of a three-axis magnetometer and an INS (inertial navigation system), both of which carry many ferromagnetic parts. The magnetometer is always distorted by ferromagnetic parts and other electrical equipment within the system, such as the INS and the power circuit module, which can lead to geomagnetic vector measurement errors of thousands of nT. Thus, the geomagnetic vector measurement system has to be compensated in order to guarantee the measurement accuracy. In this paper, a new distortion magnetic field compensation method is proposed, in which a permanent magnet placed at different relative positions is used to change the ambient magnetic field and thereby construct equations for the error model parameters, which can then be accurately estimated by solving linear equations. To verify the effectiveness of the proposed method, an experiment is conducted, and the results demonstrate that, after compensation, the component errors of the measured geomagnetic field are reduced significantly. This demonstrates that the proposed method can effectively improve the accuracy of the geomagnetic vector measurement system.
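As a rough illustration of the linear-equation idea, the sketch below fits a generic linear distortion model B_meas = M·B_true + b by least squares from paired field vectors; the model form, noise levels and field values are assumptions for illustration and are not taken from the paper.

```python
# Minimal sketch of fitting a linear distortion model B_meas = M @ B_true + b by
# least squares, assuming pairs of reference/measured field vectors are available
# (e.g. obtained by placing a permanent magnet at known relative positions).
# The matrix M lumps scale/misalignment-like terms and b lumps the offsets.
import numpy as np

rng = np.random.default_rng(1)
B_true = rng.normal(scale=30000.0, size=(50, 3))          # nT
M_ref = np.eye(3) + 0.02 * rng.normal(size=(3, 3))        # unknown distortion
b_ref = np.array([120.0, -80.0, 45.0])                    # unknown offset, nT
B_meas = B_true @ M_ref.T + b_ref + rng.normal(scale=2.0, size=(50, 3))

# Stack the unknowns [M | b] and solve the over-determined linear system.
A = np.hstack([B_true, np.ones((50, 1))])                 # 50 x 4
theta, *_ = np.linalg.lstsq(A, B_meas, rcond=None)        # 4 x 3
M_est, b_est = theta[:3].T, theta[3]

# Compensation: invert the fitted model.
B_comp = (B_meas - b_est) @ np.linalg.inv(M_est).T
print(np.abs(B_comp - B_true).max())
```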
Tracking and disturbance rejection of MIMO nonlinear systems with PI controller
NASA Technical Reports Server (NTRS)
Desoer, C. A.; Lin, C. A.
1985-01-01
The tracking and disturbance rejection of a class of MIMO nonlinear systems with a linear proportional plus integral (PI) compensator is studied. Roughly speaking, it is shown that if the given nonlinear plant is exponentially stable and has a strictly increasing dc steady-state I/O map, then a simple PI compensator can be used to yield a stable unity-feedback closed-loop system which asymptotically tracks reference inputs that tend to constant vectors and asymptotically rejects disturbances that tend to constant vectors.
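A minimal discrete-time sketch of the unity-feedback PI idea follows: a toy exponentially stable plant driven by u = Kp·e + Ki·Σe tracks a constant reference vector. The plant matrices and gains are illustrative choices, not taken from the paper.

```python
# A minimal discrete-time sketch of the unity-feedback PI compensation idea:
# a stable plant with increasing dc gain, driven by u = Kp*e + Ki*sum(e),
# asymptotically tracks a constant reference. Plant and gains are illustrative.
import numpy as np

A = np.array([[0.9, 0.05], [0.0, 0.8]])   # exponentially stable plant
B = np.eye(2)
C = np.eye(2)
Kp, Ki = 0.4 * np.eye(2), 0.2 * np.eye(2)

x = np.zeros(2)
integ = np.zeros(2)
r = np.array([1.0, -0.5])                 # reference tending to a constant vector

for k in range(200):
    y = C @ x
    e = r - y
    integ += e
    u = Kp @ e + Ki @ integ               # PI compensator
    x = A @ x + B @ u

print(np.round(C @ x, 4))                 # approaches the reference vector
```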
Adaptive robust fault-tolerant control for linear MIMO systems with unmatched uncertainties
NASA Astrophysics Data System (ADS)
Zhang, Kangkang; Jiang, Bin; Yan, Xing-Gang; Mao, Zehui
2017-10-01
In this paper, two novel fault-tolerant control design approaches are proposed for linear MIMO systems with actuator additive faults, multiplicative faults and unmatched uncertainties. For time-varying multiplicative and additive faults, new adaptive laws and additive compensation functions are proposed. A set of conditions is developed under which the unmatched uncertainties can be compensated by the actuators in control. On the other hand, for unmatched uncertainties whose projection onto the unmatched space is nonzero, additive functions based on a (vector) relative degree condition are designed to compensate for the uncertainties in the output channels in the presence of actuator faults. The developed fault-tolerant control schemes are applied to two aircraft systems to demonstrate the efficiency of the proposed approaches.
Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam Sm, Jahangir
2016-10-14
A piezo-resistive pressure sensor is made of silicon, whose characteristics are considerably influenced by ambient temperature. The effect of temperature should be eliminated during operation if a linear output is expected. To deal with this issue, an approach consisting of a hybrid kernel Least Squares Support Vector Machine (LSSVM) optimized by a chaotic ions motion algorithm is presented. To achieve excellent learning and generalization performance, a hybrid kernel function, constructed from a local kernel (a Radial Basis Function (RBF) kernel) and a global kernel (a polynomial kernel), is incorporated into the Least Squares Support Vector Machine. The chaotic ions motion algorithm is introduced to find the best hyper-parameters of the Least Squares Support Vector Machine. Temperature data from a calibration experiment are used to validate the proposed method. With attention to algorithm robustness and engineering applications, the compensation results show that the proposed scheme outperforms the other compared methods on several performance measures, such as maximum absolute relative error, minimum absolute relative error, and the mean and variance of the averaged values over fifty runs. Furthermore, the proposed temperature compensation approach lays a foundation for more extensive research.
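The hybrid-kernel LSSVM step can be sketched as below; the RBF/polynomial mixing weight, kernel parameters and regularization value are fixed placeholders for what the chaotic ions motion algorithm would search, and the temperature data are synthetic.

```python
# A sketch of least-squares SVM regression with a hybrid RBF + polynomial kernel,
# as described above; the hyper-parameters (fixed here) are what the chaotic ions
# motion algorithm would tune. Data and values are illustrative placeholders.
import numpy as np

def hybrid_kernel(X1, X2, sigma=1.0, degree=2, w=0.7):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    rbf = np.exp(-d2 / (2 * sigma ** 2))                  # local kernel
    poly = (X1 @ X2.T + 1.0) ** degree                    # global kernel
    return w * rbf + (1 - w) * poly

rng = np.random.default_rng(2)
T = np.sort(rng.uniform(-20, 60, 40))[:, None]            # temperature samples
y = 0.02 * T[:, 0] ** 2 - 0.5 * T[:, 0] + rng.normal(scale=0.3, size=40)

# LSSVM dual system: [[0, 1'], [1, K + I/gamma]] [b; alpha] = [0; y]
gamma = 10.0
K = hybrid_kernel(T, T)
n = len(y)
A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
              [np.ones((n, 1)), K + np.eye(n) / gamma]])
sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
b, alpha = sol[0], sol[1:]

T_new = np.array([[25.0]])
pred = hybrid_kernel(T_new, T) @ alpha + b                # compensated output
print(pred)
```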
Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam SM, Jahangir
2017-01-01
As a high performance-cost ratio solution for differential pressure measurement, piezo-resistive differential pressure sensors are widely used in engineering processes. However, their performance is severely affected by the environmental temperature and the static pressure applied to them. In order to modify the non-linear measuring characteristics of the piezo-resistive differential pressure sensor, compensation actions should synthetically consider these two aspects. Advantages such as nonlinear approximation capability, highly desirable generalization ability and computational efficiency make the kernel extreme learning machine (KELM) a practical approach for this critical task. Since the KELM model is intrinsically sensitive to the regularization parameter and the kernel parameter, a searching scheme combining the coupled simulated annealing (CSA) algorithm and the Nelder-Mead simplex algorithm is adopted to find an optimal KELM parameter set. A calibration experiment at different working pressure levels was conducted within the temperature range to assess the proposed method. In comparison with other compensation models such as the back-propagation neural network (BP), radial basis function neural network (RBF), particle swarm optimization optimized support vector machine (PSO-SVM), particle swarm optimization optimized least squares support vector machine (PSO-LSSVM) and extreme learning machine (ELM), the compensation results show that the presented compensation algorithm exhibits a more satisfactory performance with respect to temperature compensation and synthetic compensation problems. PMID:28422080
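A minimal kernel extreme learning machine (KELM) sketch with an RBF kernel is given below, assuming (temperature, static pressure) inputs and a scalar corrected output; the regularization constant and kernel width are the two parameters the CSA/Nelder-Mead search would tune, fixed here for illustration.

```python
# A minimal KELM sketch with an RBF kernel: the output weights solve
# (K + I/C) beta = y, and prediction is k(x, X) @ beta. Inputs are assumed to be
# normalized (temperature, static pressure) pairs; data are synthetic.
import numpy as np

def rbf_kernel(X1, X2, gamma=0.5):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(60, 2))       # normalized temperature, static pressure
y = np.sin(3 * X[:, 0]) + 0.3 * X[:, 1] + rng.normal(scale=0.02, size=60)

C = 100.0                                 # regularization parameter (placeholder)
K = rbf_kernel(X, X)
beta = np.linalg.solve(K + np.eye(len(y)) / C, y)   # KELM output weights

X_test = rng.uniform(0, 1, size=(5, 2))
y_hat = rbf_kernel(X_test, X) @ beta
print(np.round(y_hat, 3))
```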
Design of an optimal preview controller for linear discrete-time descriptor systems with state delay
NASA Astrophysics Data System (ADS)
Cao, Mengjuan; Liao, Fucheng
2015-04-01
In this paper, the linear discrete-time descriptor system with state delay is studied, and a design method for an optimal preview controller is proposed. First, by using the discrete lifting technique, the original system is transformed into a general descriptor system without state delay in form. Then, taking advantage of the first-order forward difference operator, we construct a descriptor augmented error system, including the state vectors of the lifted system, error vectors, and desired target signals. Rigorous mathematical proofs are given for the regularity, stabilisability, causal controllability, and causal observability of the descriptor augmented error system. Based on these, the optimal preview controller with preview feedforward compensation for the original system is obtained by using the standard optimal regulator theory of the descriptor system. The effectiveness of the proposed method is shown by numerical simulation.
Probe compensation in cylindrical near-field scanning: A novel simulation methodology
NASA Technical Reports Server (NTRS)
Hussein, Ziad A.; Rahmat-Samii, Yahya
1993-01-01
Probe pattern compensation is essential in near-field scanning geometry, where there is a great need to accurately know far-field patterns over a wide angular range. This paper focuses on a novel formulation and computer simulation to determine the precise need for and effect of probe compensation in cylindrical near-field scanning. The methodology is applied to a linear test array antenna and the NASA scatterometer radar antenna. The formulation is based on representing the probe by its equivalent tangential magnetic currents. The interaction between the probe equivalent aperture currents and the test antenna fields is obtained with the application of a reciprocity theorem. This allows us to obtain the probe vector output pickup integral, which is proportional to the amplitude and phase of the electric field induced in the probe aperture with respect to its position relative to the test antenna. The integral is evaluated for each probe position at the required sampling points on a cylindrical near-field surface enclosing the antenna. The use of a hypothetical circular-aperture probe with a different radius permits us to derive closed-form expressions for its far-field radiation patterns. These results, together with the probe vector output pickup, allow us to perform computer-simulated synthetic measurements. The far-field patterns of the test antenna are formulated based on cylindrical wave expansions of both the probe and test antenna fields. In the limit as the probe radius becomes very small, the probe vector output is the direct response of the near field at a point, and no probe compensation is needed. Useful results are generated to compare the far-field pattern of the test antenna constructed from the knowledge of the simulated near field, with and without probe pattern compensation, against the exact results. These results are important since they clearly illustrate the angular range over which probe compensation is needed. It has been found that probes with aperture radii of 0.25λ, 0.5λ, and 1λ need little probe compensation, if any, near the test antenna main beam. In addition, a probe with low directivity may provide a better signal-to-noise ratio than a highly directive one. This is evident in test antenna patterns without probe compensation at wide angles.
Tie, Junbo; Cao, Juliang; Chang, Lubing; Cai, Shaokun; Wu, Meiping; Lian, Junxiang
2018-03-16
Compensation of gravity disturbance can improve the precision of inertial navigation, but the effect of compensation will decrease due to the accelerometer bias, and estimation of the accelerometer bias is a crucial issue in gravity disturbance compensation. This paper first investigates the effect of accelerometer bias on gravity disturbance compensation, and the situation in which the accelerometer bias should be estimated is established. The accelerometer bias is estimated from the gravity vector measurement, and a model of the measurement noise in the gravity vector measurement is built. Based on this model, the accelerometer bias is separated from the gravity vector measurement error by the method of least squares. Horizontal gravity disturbances are calculated through the EGM2008 spherical harmonic model to build the simulation scene, and the simulation results indicate that precise estimates of the accelerometer bias can be obtained with the proposed method.
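A hedged sketch of the least-squares separation step follows: assuming the reference gravity vector is known at each epoch and the residual is a constant bias plus zero-mean noise, the bias follows from an ordinary least-squares fit. All values are synthetic.

```python
# A hedged sketch of separating a constant accelerometer bias from gravity-vector
# measurements by least squares, assuming the reference gravity vector in the
# measurement frame is known at each epoch (e.g. from a gravity model) and the
# residual is bias plus zero-mean noise; the numbers are illustrative.
import numpy as np

rng = np.random.default_rng(4)
n = 500
g_ref = np.tile(np.array([0.0, 0.0, -9.80665]), (n, 1))   # reference gravity
bias = np.array([3e-4, -2e-4, 5e-4])                      # unknown bias (m/s^2)
noise = rng.normal(scale=1e-3, size=(n, 3))               # measurement noise
g_meas = g_ref + bias + noise

# Residual = H @ bias + noise, with H an identity block per epoch; written in
# lstsq form so that a more general design matrix could be substituted.
resid = (g_meas - g_ref).reshape(-1)
H = np.tile(np.eye(3), (n, 1))
bias_est, *_ = np.linalg.lstsq(H, resid, rcond=None)
print(bias_est)
```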
Rigatos, Gerasimos G
2016-06-01
It is proven that the model of the p53-mdm2 protein synthesis loop is a differentially flat one, and using a diffeomorphism (change of state variables) proposed by differential flatness theory it is shown that the protein synthesis model can be transformed into the canonical (Brunovsky) form. This enables the design of a feedback control law that maintains the concentration of the p53 protein at the desirable levels. To estimate the non-measurable elements of the state vector describing the p53-mdm2 system dynamics, the derivative-free non-linear Kalman filter is used. Moreover, to compensate for modelling uncertainties and external disturbances that affect the p53-mdm2 system, the derivative-free non-linear Kalman filter is re-designed as a disturbance observer. The derivative-free non-linear Kalman filter consists of the Kalman filter recursion applied on the linearised equivalent of the protein synthesis model, together with an inverse transformation based on differential flatness theory that makes it possible to retrieve estimates of the state variables of the initial non-linear model. The proposed non-linear feedback control and perturbation compensation method for the p53-mdm2 system can result in more efficient chemotherapy schemes where the infusion of medication will be better administered.
Hao, Li-Ying; Yang, Guang-Hong
2013-09-01
This paper is concerned with the problem of robust fault-tolerant compensation control for uncertain linear systems subject to both state and input signal quantization. By incorporating a novel matrix full-rank factorization technique into the sliding surface design, the total failure of certain actuators can be coped with under a special actuator redundancy assumption. In order to compensate for quantization errors, an adjustment range of quantization sensitivity for a dynamic uniform quantizer is given through flexible choices of design parameters. Compared with existing results, the derived inequality condition yields a stronger fault-tolerance ability and a much wider scope of applicability. With a static adjustment policy of quantization sensitivity, an adaptive sliding mode controller is then designed to maintain the sliding mode, where the gain of the nonlinear unit vector term is updated automatically to compensate for the effects of actuator faults, quantization errors, exogenous disturbances and parameter uncertainties without the need for a fault detection and isolation (FDI) mechanism. Finally, the effectiveness of the proposed design method is illustrated via a rocket fairing structural-acoustic model.
Computation of optimal output-feedback compensators for linear time-invariant systems
NASA Technical Reports Server (NTRS)
Platzman, L. K.
1972-01-01
The control of linear time-invariant systems with respect to a quadratic performance criterion was considered, subject to the constraint that the control vector be a constant linear transformation of the output vector. The optimal feedback matrix, f*, was selected to optimize the expected performance, given the covariance of the initial state. It is first shown that the expected performance criterion can be expressed as the ratio of two multinomials in the elements of f. This expression provides the basis for a feasible method of determining f* in the case of single-input single-output systems. A number of iterative algorithms are then proposed for the calculation of f* for multiple-input multiple-output systems. For two of these, monotone convergence is proved, but they involve the solution of nonlinear matrix equations at each iteration. Another is proposed involving the solution of Lyapunov equations at each iteration and the gradual increase of the magnitude of a penalty function. Experience with this algorithm will be needed to determine whether or not it does, indeed, possess desirable convergence properties, and whether it can be used to determine the globally optimal f*.
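For a fixed output-feedback matrix, the expected quadratic cost can be evaluated through a Lyapunov equation, which is the basic quantity the iterative algorithms above would minimize over f. The sketch below uses SciPy and an illustrative second-order system; it is not the paper's algorithm, only the cost evaluation it relies on.

```python
# Sketch of evaluating the expected quadratic cost for a fixed static output
# feedback F (u = F y) via a Lyapunov equation: J(F) = trace(P X0), where
# A_cl' P + P A_cl + Q + C' F' R F C = 0. System matrices and F are illustrative.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Q, R = np.eye(2), np.array([[1.0]])
X0 = np.eye(2)                        # covariance of the initial state

def lq_cost(F):
    A_cl = A + B @ F @ C              # closed loop must be stable for this F
    W = Q + C.T @ F.T @ R @ F @ C
    P = solve_continuous_lyapunov(A_cl.T, -W)   # A_cl' P + P A_cl = -W
    return np.trace(P @ X0)

print(lq_cost(np.array([[-1.0]])))
```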
Polarization-sensitive descending neurons in the locust: connecting the brain to thoracic ganglia.
Träger, Ulrike; Homberg, Uwe
2011-02-09
Many animal species, in particular insects, exploit the E-vector pattern of the blue sky for sun compass navigation. Like other insects, locusts detect dorsal polarized light via photoreceptors in a specialized dorsal rim area of the compound eye. Polarized light information is transmitted through several processing stages to the central complex, a brain area involved in the control of goal-directed orientation behavior. To investigate how polarized light information is transmitted to thoracic motor circuits, we studied the responses of locust descending neurons to polarized light. Three sets of polarization-sensitive descending neurons were characterized through intracellular recordings from axonal fibers in the neck connectives combined with single-cell dye injections. Two descending neurons from the brain, one with an ipsilaterally and the second with a contralaterally descending axon, are likely to bridge the gap between polarization-sensitive neurons in the brain and thoracic motor centers. In both neurons, E-vector tuning changed linearly with time of day, suggesting that they signal time-compensated spatial directions, an important prerequisite for navigation using celestial signals. The third type connects the suboesophageal ganglion with the prothoracic ganglion. It showed no evidence for time compensation in E-vector tuning and might play a role in flight stabilization and control of head movements.
Feedback controlled optics with wavefront compensation
NASA Technical Reports Server (NTRS)
Breckenridge, William G. (Inventor); Redding, David C. (Inventor)
1993-01-01
The sensitivity model of a complex optical system obtained by linear ray tracing is used to compute a control gain matrix by imposing the mathematical condition for minimizing the total wavefront error at the optical system's exit pupil. The most recent deformations or error states of the controlled segments or optical surfaces of the system are then assembled as an error vector, and the error vector is transformed by the control gain matrix to produce the exact control variables which will minimize the total wavefront error at the exit pupil of the optical system. These exact control variables are then applied to the actuators controlling the various optical surfaces in the system causing the immediate reduction in total wavefront error observed at the exit pupil of the optical system.
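A minimal sketch of the gain-matrix idea follows: with a linear sensitivity matrix mapping actuator commands to wavefront changes at the exit pupil, the commands minimizing the residual wavefront error are the least-squares solution u = -pinv(S)·w. The matrices below are random placeholders, not an actual optical model.

```python
# Minimal sketch: with a linear sensitivity matrix S (wavefront change per unit
# actuator command, e.g. from linear ray tracing), the least-squares commands
# u = -pinv(S) @ w minimize the residual wavefront error. Values are placeholders.
import numpy as np

rng = np.random.default_rng(5)
n_wavefront, n_act = 40, 6
S = rng.normal(size=(n_wavefront, n_act))     # sensitivity model (illustrative)
w = rng.normal(size=n_wavefront)              # current wavefront error vector

G = -np.linalg.pinv(S)                        # control gain matrix
u = G @ w                                     # actuator commands
residual = w + S @ u                          # wavefront after correction
print(np.linalg.norm(w), np.linalg.norm(residual))
```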
The solar vector magnetograph of the Okayama Astrophysical Observatory
NASA Technical Reports Server (NTRS)
Makita, M.; Hamana, S.; Nishi, K.
1985-01-01
The vector magnetograph of the Okayama Astrophysical Observatory is fed by the 65 cm solar coude telescope with a 10 m Littrow spectrograph. The polarimeter placed at the telescope focus analyzes the incident polarization. Photomultipliers (PMT) at the exit of the spectrograph pick up the modulated light signals and send them to the electronic controller. The controller analyzes the frequency and phase of the signal. The analyzer of the polarimeter is a combination of a single wave plate rotating at 40 Hz and a Wollaston prism. Incident linear and circular polarizations are modulated at four times and twice the rotation frequency, respectively. Two compensators minimize the instrumental polarization, mainly caused by the two tilt mirrors in the optical path of the telescope. The four photomultipliers placed on the wings of the FeI 5250 A line give maps of intensity, longitudinal field and transverse field. The main outputs, maps of intensity, and net linear and circular polarizations in the neighboring continuum are obtained by the other two monitor PMTs.
NASA Astrophysics Data System (ADS)
Gregorio, Fernando; Cousseau, Juan; Werner, Stefan; Riihonen, Taneli; Wichman, Risto
2011-12-01
The design of predistortion (PD) techniques for broadband multiple-input multiple-output OFDM (MIMO-OFDM) systems raises several implementation challenges. First, the large bandwidth of the OFDM signal requires the introduction of memory effects in the PD model. In addition, it is usual to consider an imbalanced in-phase and quadrature (IQ) modulator to translate the predistorted baseband signal to RF. Furthermore, the coupling effects, which occur when the MIMO paths are implemented in the same reduced-size chipset, cannot be avoided in MIMO transceiver structures. This study proposes a MIMO-PD system that linearizes the power amplifier response and compensates for nonlinear crosstalk and IQ imbalance effects in each branch of the multiantenna system. Efficient recursive algorithms are presented to estimate the complete set of MIMO-PD coefficients. The algorithms avoid the high computational complexity of previous solutions based on least squares estimation. The performance of the proposed MIMO-PD structure is validated by simulations using a two-transmitter-antenna MIMO system. Error vector magnitude and adjacent channel power ratio are evaluated, showing significant improvement compared with conventional MIMO-PD systems.
NASA Astrophysics Data System (ADS)
Schäfer, D.; Lin, M.; Rao, P. P.; Loffroy, R.; Liapi, E.; Noordhoek, N.; Eshuis, P.; Radaelli, A.; Grass, M.; Geschwind, J.-F. H.
2012-03-01
C-arm based tomographic 3D imaging is applied in an increasing number of minimally invasive procedures. Due to the limited acquisition speed for a complete projection data set required for tomographic reconstruction, breathing motion is a potential source of artifacts. This is the case for patients who cannot comply with breathing commands (e.g. due to anesthesia). Intra-scan motion estimation and compensation is required. Here, a scheme for projection-based local breathing motion estimation is combined with an anatomy-adapted interpolation strategy and subsequent motion-compensated filtered back projection. The breathing motion vector is measured as a displacement vector on the projections of a tomographic short-scan acquisition using the diaphragm as a landmark. Scaling of the displacement to the acquisition iso-center and anatomy-adapted volumetric motion vector field interpolation deliver a 3D motion vector per voxel. Motion-compensated filtered back projection incorporates this motion vector field in the image reconstruction process. This approach is applied in animal experiments on a flat-panel C-arm system, delivering improved image quality (lower artifact levels, improved tumor delineation) in 3D liver tumor imaging.
Improvement of cardiac CT reconstruction using local motion vector fields.
Schirra, Carsten Oliver; Bontus, Claas; van Stevendaal, Udo; Dössel, Olaf; Grass, Michael
2009-03-01
The motion of the heart is a major challenge for cardiac imaging using CT. A novel approach to decrease motion blur and to improve the signal-to-noise ratio is motion-compensated reconstruction, which takes motion vector fields into account in order to correct motion. The presented work deals with the determination of local motion vector fields from high-contrast objects and their utilization within motion-compensated filtered back projection reconstruction. Image registration is applied during the quiescent cardiac phases. Temporal interpolation in parameter space is used in order to estimate motion during strong motion phases. The resulting motion vector fields are used during image reconstruction. The method is assessed using a software phantom and several clinical cases for calcium scoring. As a criterion for reconstruction quality, calcium volume scores were derived from both gated cardiac reconstruction and motion-compensated reconstruction throughout the cardiac phases using low-pitch helical cone beam CT acquisitions. The presented technique is a robust method to determine and utilize local motion vector fields. Motion-compensated reconstruction using the derived motion vector fields leads to superior image quality compared to gated reconstruction. As a result, the gating window can be enlarged significantly, resulting in increased SNR, while reliable Hounsfield units are achieved due to the reduced level of motion artefacts. The enlargement of the gating window can be translated into reduced dose requirements.
Three-dimensional tool radius compensation for multi-axis peripheral milling
NASA Astrophysics Data System (ADS)
Chen, Youdong; Wang, Tianmiao
2013-05-01
Few functions for 3D tool radius compensation are available for generating executable motion control commands in existing computer numerical control (CNC) systems. Once the tool radius is changed, especially in the case of tool size changing with tool wear during machining, a new NC program has to be recreated. A generic 3D tool radius compensation method for multi-axis peripheral milling in CNC systems is presented. The offset path is calculated by offsetting the tool path along the direction of the offset vector with a given distance. The offset vector is perpendicular to both the tangent vector of the tool path and the orientation vector of the tool axis relative to the workpiece. The orientation vector equations of the tool axis relative to the workpiece are obtained through the homogeneous coordinate transformation matrix and the forward kinematics of a generalized kinematics model of multi-axis machine tools. To avoid cutting into the corner formed by two adjacent tool paths, the coordinates of the offset path at the intersection point are calculated according to the transition type, which is determined by the angle between the two tool path tangent vectors at the corner. Through verification by the solid cutting simulation software VERICUT® with different tool radii on a table-tilting type five-axis machine tool, and by a real machining experiment of machining a soup spoon on a five-axis machine tool with the developed CNC system, the effectiveness of the proposed 3D tool radius compensation method is confirmed. The proposed compensation method, in its general form, is suitable for all kinds of three- to five-axis machine tools.
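The offset-vector construction can be sketched directly: the offset direction is the normalized cross product of the tool-path tangent and the tool-axis orientation, scaled by the tool radius. The vectors below are illustrative.

```python
# Sketch of the offset-vector construction described above: the offset direction
# is perpendicular to both the tool-path tangent and the tool-axis orientation,
# obtained from their cross product; the numbers are illustrative.
import numpy as np

def offset_point(p, tangent, tool_axis, radius):
    n = np.cross(tangent, tool_axis)          # perpendicular to both vectors
    n /= np.linalg.norm(n)
    return p + radius * n                     # offset the path by the tool radius

p = np.array([10.0, 5.0, 2.0])                # point on the tool path
tangent = np.array([1.0, 0.0, 0.0])           # tool-path tangent
tool_axis = np.array([0.0, 0.0, 1.0])         # tool axis w.r.t. the workpiece
print(offset_point(p, tangent, tool_axis, radius=3.0))
```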
A multistage motion vector processing method for motion-compensated frame interpolation.
Huang, Ai-Mei; Nguyen, Truong Q
2008-05-01
In this paper, a novel, low-complexity motion vector processing algorithm at the decoder is proposed for motion-compensated frame interpolation or frame rate up-conversion. We address the problems of broken edges and deformed structures in an interpolated frame by hierarchically refining motion vectors on different block sizes. Our method explicitly considers the reliability of each received motion vector and has the capability of preserving the structure information. This is achieved by analyzing the distribution of residual energies and effectively merging blocks that have unreliable motion vectors. The motion vector reliability information is also used as prior knowledge in motion vector refinement using a constrained vector median filter to avoid choosing identical unreliable vectors. We also propose using chrominance information in our method. Experimental results show that the proposed scheme has better visual quality and is also robust, even in video sequences with complex scenes and fast motion.
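For reference, an unconstrained vector median filter (the core of the refinement step, without the reliability weighting described above) can be sketched as follows with illustrative candidates.

```python
# Sketch of a plain vector median filter: among candidate motion vectors, pick
# the one minimizing the sum of Euclidean distances to all others. The reliability
# constraints described above are omitted for brevity; candidates are illustrative.
import numpy as np

def vector_median(candidates):
    d = np.linalg.norm(candidates[:, None, :] - candidates[None, :, :], axis=-1)
    return candidates[np.argmin(d.sum(axis=1))]

mv_candidates = np.array([[2.0, 1.0], [2.5, 1.5], [2.0, 2.0], [15.0, -7.0]])
print(vector_median(mv_candidates))   # the outlier [15, -7] is rejected
```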
Stabilization of business cycles of finance agents using nonlinear optimal control
NASA Astrophysics Data System (ADS)
Rigatos, G.; Siano, P.; Ghosh, T.; Sarno, D.
2017-11-01
Stabilization of the business cycles of interconnected finance agents is performed with the use of a new nonlinear optimal control method. First, the dynamics of the interacting finance agents and of the associated business cycles is described by a model of coupled nonlinear oscillators. Next, this dynamic model undergoes approximate linearization around a temporary operating point which is defined by the present value of the system's state vector and the last value of the control inputs vector that was exerted on it. The linearization procedure is based on Taylor series expansion of the dynamic model and on the computation of Jacobian matrices. The modelling error, which is due to the truncation of higher-order terms in the Taylor series expansion, is considered as a disturbance which is compensated by the robustness of the control loop. Next, for the linearized model of the interacting finance agents, an H-infinity feedback controller is designed. The computation of the feedback control gain requires the solution of an algebraic Riccati equation at each iteration of the control algorithm. Through Lyapunov stability analysis it is proven that the control scheme satisfies an H-infinity tracking performance criterion, which signifies elevated robustness against modelling uncertainty and external perturbations. Moreover, under moderate conditions the global asymptotic stability features of the control loop are proven.
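The linearize-then-Riccati loop can be sketched as below: a numerical Jacobian of a toy coupled-oscillator model is computed at the current operating point and a standard LQR Riccati equation (a stand-in for the paper's H-infinity Riccati, which adds disturbance terms) is solved for the feedback gain. Model, weights and operating point are illustrative assumptions.

```python
# Hedged sketch of one iteration of the linearize-then-Riccati loop; the LQR
# Riccati equation is used as a stand-in for the H-infinity Riccati equation.
import numpy as np
from scipy.linalg import solve_continuous_are

def f(x, u):
    # toy coupled nonlinear oscillators, one scalar control input per oscillator
    x1, v1, x2, v2 = x
    return np.array([v1,
                     -x1 - 0.5 * x1 ** 3 + 0.2 * (x2 - x1) + u[0],
                     v2,
                     -x2 - 0.5 * x2 ** 3 + 0.2 * (x1 - x2) + u[1]])

def jacobians(x, u, eps=1e-6):
    # forward-difference Jacobians A = df/dx, B = df/du at the operating point
    n, m = len(x), len(u)
    A, B = np.zeros((n, n)), np.zeros((n, m))
    f0 = f(x, u)
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x + dx, u) - f0) / eps
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x, u + du) - f0) / eps
    return A, B

x, u = np.array([0.5, 0.0, -0.3, 0.1]), np.zeros(2)   # current operating point
A, B = jacobians(x, u)
Q, R = np.eye(4), 0.1 * np.eye(2)
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)                       # state-feedback gain
print(np.round(K, 3))
```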
The Control System for the X-33 Linear Aerospike Engine
NASA Technical Reports Server (NTRS)
Jackson, Jerry E.; Espenschied, Erich; Klop, Jeffrey
1998-01-01
The linear aerospike engine is being developed for single-stage-to-orbit (SSTO) applications. The primary advantages of a linear aerospike engine over a conventional bell nozzle engine include altitude compensation, which provides enhanced performance, and lower vehicle weight resulting from the integration of the engine into the vehicle structure. A feature of this integration is the ability to provide thrust vector control (TVC) by differential throttling of the engine combustion elements, rather than the more conventional approach of gimballing the entire engine. An analysis of the X-33 flight trajectories has shown that it is necessary to provide +/- 15% roll, pitch and yaw TVC authority with an optional capability of +/- 30% pitch at select times during the mission. The TVC performance requirements for the X-33 engine became a major driver in the design of the engine control system. The thrust level of the X-33 engine as well as the amount of TVC are managed by a control system which consists of electronics, instrumentation, propellant valves, electro-mechanical actuators, spark igniters, and harnesses. The engine control system is responsible for thrust control, mixture ratio control, thrust vector control, engine health monitoring, and communication to the vehicle during all operational modes of the engine (checkout, pre-start, start, main-stage, shutdown and post-shutdown). The methodology for thrust vector control, the health monitoring approach which includes failure detection, isolation, and response, and the basic control system design are the topics of this paper. As an additional point of interest, a brief description of the X-33 engine system is included.
NASA Astrophysics Data System (ADS)
de Wit, Bernard; Reys, Valentin
2017-12-01
Supergravity with eight supercharges in a four-dimensional Euclidean space is constructed at the full non-linear level by performing an off-shell time-like reduction of five-dimensional supergravity. The resulting four-dimensional theory is realized off-shell with the Weyl, vector and tensor supermultiplets and a corresponding multiplet calculus. Hypermultiplets are included as well, but they are themselves only realized with on-shell supersymmetry. We also briefly discuss the non-linear supermultiplet. The off-shell reduction leads to a full understanding of the Euclidean theory. A complete multiplet calculus is presented along the lines of the Minkowskian theory. Unlike in Minkowski space, chiral and anti-chiral multiplets are real and supersymmetric actions are generally unbounded from below. Precisely as in the Minkowski case, where one has different formulations of Poincaré supergravity upon introducing different compensating supermultiplets, one can also obtain different versions of Euclidean supergravity.
NASA Astrophysics Data System (ADS)
Pang, Hongfeng; Chen, Dixiang; Pan, Mengchun; Luo, Shitu; Zhang, Qi; Luo, Feilu
2012-02-01
Fluxgate magnetometers are widely used for magnetic field measurement. However, their accuracy is influenced by temperature. In this paper, a new method was proposed to compensate for the temperature drift of fluxgate magnetometers, in which a least-squares support vector machine (LSSVM) is utilized. The compensation performance was analyzed by simulation, which shows that the LSSVM has better performance and less training time than backpropagation and radial basis function neural networks. The temperature characteristics of a DM fluxgate magnetometer were measured with a temperature experiment box. Forty-five measured data under different magnetic fields and temperatures were obtained and divided into 36 training data and nine test data. The training data were used to obtain the parameters of the LSSVM model, and the compensation performance of the LSSVM model was verified by the test data. Experimental results show that the temperature drift of the magnetometer is reduced from 109.3 to 3.3 nT after compensation, which suggests that this compensation method is effective for the accuracy improvement of fluxgate magnetometers.
Fixed order dynamic compensation for multivariable linear systems
NASA Technical Reports Server (NTRS)
Kramer, F. S.; Calise, A. J.
1986-01-01
This paper considers the design of fixed order dynamic compensators for multivariable time invariant linear systems, minimizing a linear quadratic performance cost functional. Attention is given to robustness issues in terms of multivariable frequency domain specifications. An output feedback formulation is adopted by suitably augmenting the system description to include the compensator states. Either a controller or observer canonical form is imposed on the compensator description to reduce the number of free parameters to its minimal number. The internal structure of the compensator is prespecified by assigning a set of ascending feedback invariant indices, thus forming a Brunovsky structure for the nominal compensator.
Huang, Ai-Mei; Nguyen, Truong
2009-04-01
In this paper, we address the problems of unreliable motion vectors that cause visual artifacts but cannot be detected by high residual energy or bidirectional prediction difference in motion-compensated frame interpolation. A correlation-based motion vector processing method is proposed to detect and correct those unreliable motion vectors by explicitly considering motion vector correlation in the motion vector reliability classification, motion vector correction, and frame interpolation stages. Since our method gradually corrects unreliable motion vectors based on their reliability, we can effectively discover the areas where no reliable motion vector is available, such as occlusions and deformed structures. We also propose an adaptive frame interpolation scheme for the occlusion areas based on an analysis of their surrounding motion distribution. As a result, the interpolated frames using the proposed scheme have clearer structure edges, and ghost artifacts are also greatly reduced. Experimental results show that our interpolated results have better visual quality than other methods. In addition, the proposed scheme is robust even for video sequences that contain multiple and fast motions.
A nonlinear H-infinity approach to optimal control of the depth of anaesthesia
NASA Astrophysics Data System (ADS)
Rigatos, Gerasimos; Rigatou, Efthymia; Zervos, Nikolaos
2016-12-01
Controlling the level of anaesthesia is important for improving the success rate of surgeries and for reducing the risks to which operated patients are exposed. This paper proposes a nonlinear H-infinity approach to optimal control of the level of anaesthesia. The dynamic model of the anaesthesia, which describes the concentration of the anaesthetic drug in different parts of the body, is subjected to linearization at local operating points. These are defined at each iteration of the control algorithm and consist of the present value of the system's state vector and of the last control input that was exerted on it. For this linearization Taylor series expansion is performed and the system's Jacobian matrices are computed. For the linearized model an H-infinity controller is designed. The feedback control gains are found by solving at each iteration of the control algorithm an algebraic Riccati equation. The modelling errors due to this approximate linearization are considered as disturbances which are compensated by the robustness of the control loop. The stability of the control loop is confirmed through Lyapunov analysis.
Faraday rotation measurement method and apparatus
NASA Technical Reports Server (NTRS)
Brockman, M. H. (Inventor)
1981-01-01
A method and device for measuring Faraday rotation of a received RF signal is described. A simultaneous orthogonal polarization receiver compensates for a 3 dB loss due to splitting of a received signal into left circular and right circular polarization channels. The compensation is achieved by RF and modulation arraying utilizing a specific receiver array which also detects and measures Faraday rotation in the presence or absence of spin stabilization effects on a linear polarization vector. Either up-link or down-link measurement of Faraday rotation is possible. Specifically, the Faraday measurement apparatus utilized in conjunction with the specific receiver array provides a means for comparing the phase of a reference signal in the receiver array to the phase of a tracking loop signal related to the incoming signal, and comparing the phase of the reference signal to the phase of the tracking signal shifted in phase by 90 degrees. The averaged and unaveraged signals are compared, the phase changes between the two signals being related to Faraday rotation.
NASA Astrophysics Data System (ADS)
Yashvantrai Vyas, Bhargav; Maheshwari, Rudra Prakash; Das, Biswarup
2016-06-01
Application of series compensation in extra high voltage (EHV) transmission lines makes the protection job difficult for engineers, due to alteration of system parameters and measurements. The problem is amplified by the inclusion of electronically controlled compensation such as thyristor controlled series compensation (TCSC), as it produces harmonics and rapid changes in system parameters during faults associated with TCSC control. This paper presents a pattern-recognition-based fault type identification approach using a support vector machine. The scheme uses only half-cycle post-fault data of the three phase currents to accomplish the task. The change in current signal features during the fault has been considered as the discriminatory measure. The developed scheme is tested over a large set of fault data with variation in system and fault parameters. These fault cases have been generated with PSCAD/EMTDC on a 400 kV, 300 km transmission line model. The developed algorithm has proved well suited for implementation on TCSC-compensated lines owing to its improved accuracy and speed.
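The classification stage can be sketched with scikit-learn as follows; synthetic features stand in for the half-cycle post-fault current features, and the fault-type labels are placeholders rather than the paper's actual class set.

```python
# Hedged sketch of the pattern-recognition step: features of the three phase
# currents (synthetic here) are fed to an SVM that outputs the fault type.
# Real features would come from PSCAD/EMTDC simulations of the compensated line.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(6)
n_per_class, classes = 100, ["AG", "BC", "BCG", "ABC"]    # placeholder fault types
X = np.vstack([rng.normal(loc=i, scale=0.8, size=(n_per_class, 6))
               for i in range(len(classes))])             # 6 current features each
y = np.repeat(classes, n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=10.0).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```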
NASA Astrophysics Data System (ADS)
Bougherara, Salim; Golea, Amar; Benchouia, M. Toufik
2018-05-01
This paper presents a comparative study of the vector control of a three-phase induction motor based on two mathematical models. The first is the conventional model based on the assumption that saturation and iron losses can be neglected; the second model fully accounts for both the fundamental iron loss and main flux saturation, with and without compensation. A rotor resistance identifier is developed so that the compensation of its variation is achieved. The induction motor is fed through a three-level inverter. The simulation results show the performance of the vector control based on both models.
Moving object localization using optical flow for pedestrian detection from a moving vehicle.
Hariyono, Joko; Hoang, Van-Dung; Jo, Kang-Hyun
2014-01-01
This paper presents a pedestrian detection method from a moving vehicle using optical flows and histograms of oriented gradients (HOG). A moving object is extracted from the relative motion by segmenting the region representing the same optical flows after compensating for the egomotion of the camera. To obtain the optical flow, two consecutive images are divided into grid cells of 14 × 14 pixels; then each cell is tracked in the current frame to find the corresponding cell in the next frame. Using at least three corresponding cells, an affine transformation is performed according to each corresponding cell in the consecutive images, so that conforming optical flows are extracted. The regions of the moving object are detected as transformed objects, which are different from the previously registered background. A morphological process is applied to obtain the candidate human regions. In order to recognize the object, the HOG features are extracted on the candidate region and classified using a linear support vector machine (SVM). The HOG feature vectors are used as input of the linear SVM to classify the given input into pedestrian/non-pedestrian. The proposed method was tested in a moving vehicle and also confirmed through experiments using a pedestrian dataset. It shows a significant improvement compared with the original HOG using the ETHZ pedestrian dataset.
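The recognition stage can be sketched as follows, assuming scikit-image for HOG and scikit-learn for the linear SVM; random windows stand in for real candidate regions, and the HOG parameters are common defaults rather than the paper's exact settings.

```python
# Minimal sketch of the classification stage: HOG descriptors of candidate
# windows are classified with a linear SVM as pedestrian / non-pedestrian.
# Random images and labels are placeholders for real candidate regions.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

rng = np.random.default_rng(7)
windows = rng.uniform(size=(40, 128, 64))            # 40 candidate 128x64 windows
labels = rng.integers(0, 2, size=40)                 # 1 = pedestrian (placeholder)

features = np.array([hog(w, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for w in windows])

clf = LinearSVC(C=0.01, max_iter=10000).fit(features, labels)   # linear SVM
print(clf.predict(features[:5]))
```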
A nonlinear optimal control approach for chaotic finance dynamics
NASA Astrophysics Data System (ADS)
Rigatos, G.; Siano, P.; Loia, V.; Tommasetti, A.; Troisi, O.
2017-11-01
A new nonlinear optimal control approach is proposed for stabilization of the dynamics of a chaotic finance model. The dynamic model of the financial system, which expresses the interaction between the interest rate, the investment demand, the price exponent and the profit margin, undergoes approximate linearization around local operating points. These local equilibria are defined at each iteration of the control algorithm and consist of the present value of the system's state vector and the last value of the control inputs vector that was exerted on it. The approximate linearization makes use of Taylor series expansion and of the computation of the associated Jacobian matrices. The truncation of higher-order terms in the Taylor series expansion is considered to be a modelling error that is compensated by the robustness of the control loop. As the control algorithm runs, the temporary equilibrium is shifted towards the reference trajectory and finally converges to it. The control method needs to compute an H-infinity feedback control law at each iteration, and requires the repetitive solution of an algebraic Riccati equation. Through Lyapunov stability analysis it is shown that an H-infinity tracking performance criterion holds for the control loop. This implies elevated robustness against model approximations and external perturbations. Moreover, under moderate conditions the global asymptotic stability of the control loop is proven.
Compensation of Horizontal Gravity Disturbances for High Precision Inertial Navigation
Cao, Juliang; Wu, Meiping; Lian, Junxiang; Cai, Shaokun; Wang, Lin
2018-01-01
Horizontal gravity disturbances are an important factor that affects the accuracy of inertial navigation systems in long-duration ship navigation. In this paper, from the perspective of the coordinate system and vector calculation, the effects of horizontal gravity disturbances on the initial alignment and the navigation calculation are analyzed simultaneously. Horizontal gravity disturbances cause the navigation coordinate frame built in the initial alignment to be inconsistent with the navigation coordinate frame in which the navigation calculation is implemented. The mismatch of coordinate frames violates the vector calculation law, which has an adverse effect on the precision of the inertial navigation system. To address this issue, two compensation methods suitable for two different navigation coordinate frames are proposed: one implements the compensation in the velocity calculation, and the other in the attitude calculation. Finally, simulations and ship navigation experiments confirm the effectiveness of the proposed methods. PMID:29562653
Polarization Catastrophe Contributing to Rotation and Tornadic Motion in Cumulo-Nimbus Clouds
NASA Astrophysics Data System (ADS)
Handel, P. H.
2007-05-01
When the concentration of sub-micron ice particles in a cloud exceeds 2.5E21 per cubic cm, divided by the squared average number of water molecules per crystallite, the polarization catastrophe occurs. Then all ice crystallites nucleated on aerosol dust particles align their dipole moments in the same direction, and a large polarization vector field is generated in the cloud. Often this vector field has a radial component directed away from the vertical axis of the cloud. It is induced by the pre-existing electric field caused by the charged screening layers at the cloud surface, the screening shell of the cloud. The presence of a vertical component of the magnetic field of the earth creates a density of linear momentum G=DxB in the azimuthal direction, where D=eE+P is the electric displacement vector and e is the vacuum permittivity. This linear momentum density yields an angular momentum density vector directed upward in the northern hemisphere, if the polarization vector points away from the vertical axis of the cloud. When the cloud becomes colloidally unstable, the crystallites grow beyond the size limit at which they could still carry a large ferroelectric saturation dipole moment, and the polarization vector quickly disappears. Then the cloud begins to rotate with an angular momentum that has the same direction. Due to the large average number of water molecules in a crystallite, the polarization catastrophe (PC) is present in practically all clouds, and is compensated by masking charges. In cumulo-nimbus (thunder-) clouds the collapse of the PC is rapid, and the masking charges lead to lightning, and in the upper atmosphere also to sprites, elves, and blue jets. In stratus clouds, however, the collapse is slow, and only leads to reverse polarity in dissipating clouds (minus on the bottom), as compared with growing clouds (plus on the bottom, because of the excess polarization charge). References: P.H. Handel: "Polarization Catastrophe Theory of Cloud Electricity", J. Geophysical Research 90, 5857-5863 (1985). P.H. Handel and P.B. James: "Polarization Catastrophe Model of Static Electrification and Spokes in the B-Ring of Saturn", Geophys. Res. Lett. 10, 1-4 (1983).
State-Dependent Pseudo-Linear Filter for Spacecraft Attitude and Rate Estimation
NASA Technical Reports Server (NTRS)
Bar-Itzhack, Itzhack Y.; Harman, Richard R.
2001-01-01
This paper presents the development and performance of a special algorithm for estimating the attitude and angular rate of a spacecraft. The algorithm is a pseudo-linear Kalman filter, which is an ordinary linear Kalman filter that operates on a linear model whose matrices are current state estimate dependent. The nonlinear rotational dynamics equation of the spacecraft is presented in the state space as a state-dependent linear system. Two types of measurements are considered. One type is a measurement of the quaternion of rotation, which is obtained from a newly introduced star tracker based apparatus. The other type of measurement is that of vectors, which permits the use of a variety of vector measuring sensors like sun sensors and magnetometers. While quaternion measurements are related linearly to the state vector, vector measurements constitute a nonlinear function of the state vector. Therefore, in this paper, a state-dependent linear measurement equation is developed for the vector measurement case. The state-dependent pseudo linear filter is applied to simulated spacecraft rotations and adequate estimates of the spacecraft attitude and rate are obtained for the case of quaternion measurements as well as of vector measurements.
Foulger, G.R.; Julian, B.R.; Hill, D.P.; Pitt, A.M.; Malin, P.E.; Shalev, E.
2004-01-01
Most of 26 small (0.4??? M ???3.1) microearthquakes at Long Valley caldera in mid-1997, analyzed using data from a dense temporary network of 69 digital three-component seismometers, have significantly non-double-couple focal mechanisms, inconsistent with simple shear faulting. We determined their mechanisms by inverting P - and S -wave polarities and amplitude ratios using linear-programming methods, and tracing rays through a three-dimensional Earth model derived using tomography. More than 80% of the mechanisms have positive (volume increase) isotropic components and most have compensated linear-vector dipole components with outward-directed major dipoles. The simplest interpretation of these mechanisms is combined shear and extensional faulting with a volume-compensating process, such as rapid flow of water, steam, or CO2 into opening tensile cracks. Source orientations of earthquakes in the south moat suggest extensional faulting on ESE-striking subvertical planes, an orientation consistent with planes defined by earthquake hypocenters. The focal mechanisms show that clearly defined hypocentral planes in different locations result from different source processes. One such plane in the eastern south moat is consistent with extensional faulting, while one near Casa Diablo Hot Springs reflects en echelon right-lateral shear faulting. Source orientations at Mammoth Mountain vary systematically with location, indicating that the volcano influences the local stress field. Events in a 'spasmodic burst' at Mammoth Mountain have practically identical mechanisms that indicate nearly pure compensated tensile failure and high fluid mobility. Five earthquakes had mechanisms involving small volume decreases, but these may not be significant. No mechanisms have volumetric moment fractions larger than that of a force dipole, but the reason for this fact is unknown. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Han, Xiaobao; Li, Huacong; Jia, Qiusheng
2017-12-01
For dynamic decoupling of polynomial linear parameter-varying (PLPV) systems, a robust diagonal dominance pre-compensator design method is given. The parameterized pre-compensator design problem is converted into an optimization problem constrained by parameterized linear matrix inequalities (PLMI) by using the concept of a parameterized Lyapunov function (PLF). To solve the PLMI-constrained optimization problem, the pre-compensator design problem is reduced to a standard convex optimization problem with ordinary linear matrix inequality (LMI) constraints on a newly constructed convex polyhedron. Moreover, a parameter-scheduling pre-compensator is obtained, which satisfies both robust performance and decoupling requirements. Finally, the feasibility and validity of the robust diagonal dominance pre-compensator design method are verified by numerical simulation on a turbofan engine PLPV model.
Issues in the digital implementation of control compensators. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Moroney, P.
1979-01-01
Techniques developed for the finite-precision implementation of digital filters were used, adapted, and extended for digital feedback compensators, with particular emphasis on steady state, linear-quadratic-Gaussian compensators. Topics covered include: (1) the linear-quadratic-Gaussian problem; (2) compensator structures; (3) architectural issues: serialism, parallelism, and pipelining; (4) finite wordlength effects: quantization noise, quantizing the coefficients, and limit cycles; and (5) the optimization of structures.
A robust H.264/AVC video watermarking scheme with drift compensation.
Jiang, Xinghao; Sun, Tanfeng; Zhou, Yue; Wang, Wan; Shi, Yun-Qing
2014-01-01
A robust H.264/AVC video watermarking scheme for copyright protection with self-adaptive drift compensation is proposed. In our scheme, motion vector residuals of macroblocks with the smallest partition size are selected to hide copyright information in order to hold visual impact and distortion drift to a minimum. Drift compensation is also implemented to reduce the influence of watermark to the most extent. Besides, discrete cosine transform (DCT) with energy compact property is applied to the motion vector residual group, which can ensure robustness against intentional attacks. According to the experimental results, this scheme gains excellent imperceptibility and low bit-rate increase. Malicious attacks with different quantization parameters (QPs) or motion estimation algorithms can be resisted efficiently, with 80% accuracy on average after lossy compression.
Thyra Abstract Interface Package
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bartlett, Roscoe A.
2005-09-01
Thyra primarily defines a set of abstract C++ class interfaces needed for the development of abstract numerical algorithms (ANAs) such as iterative linear solvers and transient solvers all the way up to optimization. At the foundation of these interfaces are abstract C++ classes for vectors, vector spaces, linear operators and multi-vectors. Also included in the Thyra package is C++ code for creating concrete vector, vector space, linear operator, and multi-vector subclasses as well as other utilities to aid in the development of ANAs. Currently, very general and efficient concrete subclass implementations exist for serial and SPMD in-core vectors and multi-vectors. Code also currently exists for testing objects and providing composite objects such as product vectors.
Shu, Deming; Kearney, Steven P.; Preissner, Curt A.
2015-02-17
A method and deformation-compensated flexural pivots structured for precision linear nanopositioning stages are provided. A deformation-compensated flexural linear guiding mechanism includes a basic parallel mechanism comprising a U-shaped member and a pair of parallel bars linked to respective pairs of I-link bars, with each of the I-link bars coupled by a respective pair of flexural pivots. The basic parallel mechanism includes substantially evenly distributed flexural pivots, minimizing center-shift dynamic errors.
Linear models to perform treaty verification tasks for enhanced information security
MacGahan, Christopher J.; Kupinski, Matthew A.; Brubaker, Erik M.; ...
2016-11-12
Linear mathematical models were applied to binary-discrimination tasks relevant to arms control verification measurements in which a host party wishes to convince a monitoring party that an item is or is not treaty accountable. These models process data in list-mode format and can compensate for the presence of variability in the source, such as uncertain object orientation and location. The Hotelling observer applies an optimal set of weights to binned detector data, yielding a test statistic that is thresholded to make a decision. The channelized Hotelling observer applies a channelizing matrix to the vectorized data, resulting in a lower dimensional vector available to the monitor to make decisions. We demonstrate how incorporating additional terms in this channelizing-matrix optimization offers benefits for treaty verification. We present two methods to increase shared information and trust between the host and monitor. The first method penalizes individual channel performance in order to maximize the information available to the monitor while maintaining optimal performance. Second, we present a method that penalizes predefined sensitive information while maintaining the capability to discriminate between binary choices. Data used in this study was generated using Monte Carlo simulations for fission neutrons, accomplished with the GEANT4 toolkit. Custom models for plutonium inspection objects were measured in simulation by a radiation imaging system. Model performance was evaluated and presented using the area under the receiver operating characteristic curve.
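The Hotelling observer step described above can be illustrated with a small NumPy sketch: build the optimal linear weights from two classes of training data and threshold the resulting test statistic. The simulated binned data below are toy placeholders, not the GEANT4-based measurements of the study.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy binned detector data: class 0 ("not accountable") and class 1 ("accountable").
    n_bins, n_train = 64, 500
    g0 = rng.normal(loc=1.0, scale=0.3, size=(n_train, n_bins))
    g1 = rng.normal(loc=1.1, scale=0.3, size=(n_train, n_bins))

    # Hotelling observer: w = S^-1 (mean1 - mean0), with S the average class covariance.
    mu0, mu1 = g0.mean(axis=0), g1.mean(axis=0)
    S = 0.5 * (np.cov(g0, rowvar=False) + np.cov(g1, rowvar=False))
    w = np.linalg.solve(S, mu1 - mu0)

    # Test statistic for a new measurement; threshold at the midpoint of the class means.
    g_new = rng.normal(loc=1.1, scale=0.3, size=n_bins)
    t = w @ g_new
    threshold = 0.5 * (w @ mu0 + w @ mu1)
    print("decision:", "treaty accountable" if t > threshold else "not accountable")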
Linear models to perform treaty verification tasks for enhanced information security
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacGahan, Christopher J.; Kupinski, Matthew A.; Brubaker, Erik M.
Linear mathematical models were applied to binary-discrimination tasks relevant to arms control verification measurements in which a host party wishes to convince a monitoring party that an item is or is not treaty accountable. These models process data in list-mode format and can compensate for the presence of variability in the source, such as uncertain object orientation and location. The Hotelling observer applies an optimal set of weights to binned detector data, yielding a test statistic that is thresholded to make a decision. The channelized Hotelling observer applies a channelizing matrix to the vectorized data, resulting in a lower dimensional vector available to the monitor to make decisions. We demonstrate how incorporating additional terms in this channelizing-matrix optimization offers benefits for treaty verification. We present two methods to increase shared information and trust between the host and monitor. The first method penalizes individual channel performance in order to maximize the information available to the monitor while maintaining optimal performance. Second, we present a method that penalizes predefined sensitive information while maintaining the capability to discriminate between binary choices. Data used in this study was generated using Monte Carlo simulations for fission neutrons, accomplished with the GEANT4 toolkit. Custom models for plutonium inspection objects were measured in simulation by a radiation imaging system. Model performance was evaluated and presented using the area under the receiver operating characteristic curve.
Linear models to perform treaty verification tasks for enhanced information security
NASA Astrophysics Data System (ADS)
MacGahan, Christopher J.; Kupinski, Matthew A.; Brubaker, Erik M.; Hilton, Nathan R.; Marleau, Peter A.
2017-02-01
Linear mathematical models were applied to binary-discrimination tasks relevant to arms control verification measurements in which a host party wishes to convince a monitoring party that an item is or is not treaty accountable. These models process data in list-mode format and can compensate for the presence of variability in the source, such as uncertain object orientation and location. The Hotelling observer applies an optimal set of weights to binned detector data, yielding a test statistic that is thresholded to make a decision. The channelized Hotelling observer applies a channelizing matrix to the vectorized data, resulting in a lower dimensional vector available to the monitor to make decisions. We demonstrate how incorporating additional terms in this channelizing-matrix optimization offers benefits for treaty verification. We present two methods to increase shared information and trust between the host and monitor. The first method penalizes individual channel performance in order to maximize the information available to the monitor while maintaining optimal performance. Second, we present a method that penalizes predefined sensitive information while maintaining the capability to discriminate between binary choices. Data used in this study was generated using Monte Carlo simulations for fission neutrons, accomplished with the GEANT4 toolkit. Custom models for plutonium inspection objects were measured in simulation by a radiation imaging system. Model performance was evaluated and presented using the area under the receiver operating characteristic curve.
Unified control/structure design and modeling research
NASA Technical Reports Server (NTRS)
Mingori, D. L.; Gibson, J. S.; Blelloch, P. A.; Adamian, A.
1986-01-01
To demonstrate the applicability of the control theory for distributed systems to large flexible space structures, research was focused on a model of a space antenna which consists of a rigid hub, flexible ribs, and a mesh reflecting surface. The space antenna model used is discussed along with the finite element approximation of the distributed model. The basic control problem is to design an optimal or near-optimal compensator to suppress the linear vibrations and rigid-body displacements of the structure. The application of infinite-dimensional Linear Quadratic Gaussian (LQG) control theory to flexible structures is discussed. Two basic approaches for robustness enhancement were investigated: loop transfer recovery and sensitivity optimization. A third approach synthesized from elements of these two basic approaches is currently under development. The control-driven finite element approximation of flexible structures is discussed. Three sets of finite element basis vectors for computing functional control gains are compared. The possibility of constructing a finite element scheme to approximate the infinite-dimensional Hamiltonian system directly, instead of indirectly, is discussed.
Adaptive Failure Compensation for Aircraft Flight Control Using Engine Differentials: Regulation
NASA Technical Reports Server (NTRS)
Yu, Liu; Xidong, Tang; Gang, Tao; Joshi, Suresh M.
2005-01-01
The problem of using engine thrust differentials to compensate for rudder and aileron failures in aircraft flight control is addressed in this paper in a new framework. A nonlinear aircraft model that incorporates engine differentials in the dynamic equations is employed and linearized to describe the aircraft's longitudinal and lateral motion. In this model two engine thrusts of an aircraft can be adjusted independently so as to provide the control flexibility for rudder or aileron failure compensation. A direct adaptive compensation scheme for asymptotic regulation is developed to handle uncertain actuator failures in the linearized system. A design condition is specified to characterize the system redundancy needed for failure compensation. The adaptive regulation control scheme is applied to the linearized model of a large transport aircraft in which the longitudinal and lateral motions are coupled as the result of using engine thrust differentials. Simulation results are presented to demonstrate the effectiveness of the adaptive compensation scheme.
An H-infinity approach to optimal control of oxygen and carbon dioxide contents in blood
NASA Astrophysics Data System (ADS)
Rigatos, Gerasimos; Siano, Pierluigi; Selisteanu, Dan; Precup, Radu
2016-12-01
Nonlinear H-infinity control is proposed for the regulation of the levels of oxygen and carbon dioxide in the blood of patients undergoing heart surgery and extracorporeal blood circulation. The levels of blood gases are administered through a membrane oxygenator and the control inputs are the externally supplied oxygen, the aggregate gas supply (oxygen plus nitrogen), and the blood flow which is regulated by a blood pump. The proposed control method is based on linearization of the oxygenator's dynamical model through Taylor series expansion and the computation of Jacobian matrices. The local linearization points are defined by the present value of the oxygenator's state vector and the last value of the control input that was exerted on this system. The modelling errors due to linearization are considered as disturbances which are compensated by the robustness of the control loop. Next, for the linearized model of the oxygenator an H-infinity control input is computed at each iteration of the control algorithm through the solution of an algebraic Riccati equation. With the use of Lyapunov stability analysis it is demonstrated that the control scheme satisfies the H-infinity tracking performance criterion, which signifies improved robustness against modelling uncertainty and external disturbances. Moreover, under moderate conditions the asymptotic stability of the control loop is also proven.
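The core loop of the method above (linearize at the current operating point, then solve an algebraic Riccati equation for the feedback gain) can be sketched as follows. This is a generic Riccati-based state feedback in SciPy on a made-up two-state linearized model; it is a simplified stand-in, not the oxygenator model or the exact H-infinity Riccati form used by the authors.

    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Hypothetical Jacobian linearization at the current state/input (placeholder numbers).
    A = np.array([[-0.5, 0.2],
                  [0.1, -0.8]])
    B = np.array([[1.0],
                  [0.3]])
    Q = np.eye(2)          # state weighting
    R = np.array([[1.0]])  # input weighting

    # Solve the continuous-time algebraic Riccati equation and form the feedback gain.
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)

    x = np.array([0.2, -0.1])   # deviation from the linearization point
    u = -K @ x                  # control correction applied at this iteration
    print("gain K =", K, "control u =", u)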
Homotopy approach to optimal, linear quadratic, fixed architecture compensation
NASA Technical Reports Server (NTRS)
Mercadal, Mathieu
1991-01-01
Optimal linear quadratic Gaussian compensators with constrained architecture are a sensible way to generate good multivariable feedback systems meeting strict implementation requirements. The optimality conditions obtained from the constrained linear quadratic Gaussian are a set of highly coupled matrix equations that cannot be solved algebraically except when the compensator is centralized and full order. An alternative to the use of general parameter optimization methods for solving the problem is to use homotopy. The benefit of the method is that it uses the solution to a simplified problem as a starting point and the final solution is then obtained by solving a simple differential equation. This paper investigates the convergence properties and the limitation of such an approach and sheds some light on the nature and the number of solutions of the constrained linear quadratic Gaussian problem. It also demonstrates the usefulness of homotopy on an example of an optimal decentralized compensator.
NASA Technical Reports Server (NTRS)
Walker, H. F.
1979-01-01
In many pattern recognition problems, data vectors must be classified even though one or more of the data vector elements are missing. This problem occurs in remote sensing when the ground is obscured by clouds. Optimal linear discrimination procedures for classifying incomplete data vectors are discussed.
Basáñez, María-Gloria; Razali, Karina; Renz, Alfons; Kelly, David
2007-03-01
The proportion of vector blood meals taken on humans (the human blood index, h) appears as a squared term in classical expressions of the basic reproduction ratio (R(0)) for vector-borne infections. Consequently, R(0) varies non-linearly with h. Estimates of h, however, constitute mere snapshots of a parameter that is predicted, from evolutionary theory, to vary with vector and host abundance. We test this prediction using a population dynamics model of river blindness assuming that, before initiation of vector control or chemotherapy, recorded measures of vector density and human infection accurately represent endemic equilibrium. We obtain values of h that satisfy the condition that the effective reproduction ratio (R(e)) must equal 1 at equilibrium. Values of h thus obtained decrease with vector density, decrease with the vector:human ratio and make R(0) respond non-linearly rather than increase linearly with vector density. We conclude that if vectors are less able to obtain human blood meals as their density increases, antivectorial measures may not lead to proportional reductions in R(0) until very low vector levels are achieved. Density dependence in the contact rate of infectious diseases transmitted by insects may be an important non-linear process with implications for their epidemiology and control.
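As a concrete illustration of the non-linearity, here is a toy Ross-Macdonald-style calculation of my own, not the river-blindness model of the paper: R0 scales with vector density m times h squared, so if the human blood index h itself falls with m, R0 no longer grows linearly with vector density. All functional forms and constants below are illustrative assumptions.

    import numpy as np

    def human_blood_index(m, h0=0.6, k=0.05):
        # Hypothetical density dependence: vectors obtain human blood less easily as m grows.
        return h0 / (1.0 + k * m)

    def r0(m, c=0.02):
        # Toy basic reproduction ratio: proportional to vector density times h squared.
        h = human_blood_index(m)
        return c * m * h**2

    for m in (10, 50, 100, 200):
        print(f"vector density {m:4d}: h = {human_blood_index(m):.3f}, R0 = {r0(m):.3f}")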
Next Generation Robots for STEM Education and Research at Huston Tillotson University
2017-11-10
[Fragmentary abstract excerpt] The exercises use ROS launch files: roslaunch mtb_lab6_feedback_linearization gravity_compensation.launch for gravity compensation, and roslaunch mtb_lab6_feedback_linearization gravity_inversion.launch for Part B: Gravity Inversion. Gravity inversion is just one ...
Gravity Compensation Using EGM2008 for High-Precision Long-Term Inertial Navigation Systems
Wu, Ruonan; Wu, Qiuping; Han, Fengtian; Liu, Tianyi; Hu, Peida; Li, Haixia
2016-01-01
The gravity disturbance vector is one of the major error sources in high-precision and long-term inertial navigation applications. Specific to the inertial navigation systems (INSs) with high-order horizontal damping networks, analyses of the error propagation show that the gravity-induced errors exist almost exclusively in the horizontal channels and are mostly caused by deflections of the vertical (DOV). Low-frequency components of the DOV propagate into the latitude and longitude errors at a ratio of 1:1 and time-varying fluctuations in the DOV excite Schuler oscillation. This paper presents two gravity compensation methods using the Earth Gravitational Model 2008 (EGM2008), namely, interpolation from the off-line database and computing gravity vectors directly using the spherical harmonic model. Particular attention is given to the error contribution of the gravity update interval and computing time delay. It is recommended for the marine navigation that a gravity vector should be calculated within 1 s and updated every 100 s at most. To meet this demand, the time duration of calculating the current gravity vector using EGM2008 has been reduced to less than 1 s by optimizing the calculation procedure. A few off-line experiments were conducted using the data of a shipborne INS collected during an actual sea test. With the aid of EGM2008, most of the low-frequency components of the position errors caused by the gravity disturbance vector have been removed and the Schuler oscillation has been attenuated effectively. In the rugged terrain, the horizontal position error could be reduced at best 48.85% of its regional maximum. The experimental results match with the theoretical analysis and indicate that EGM2008 is suitable for gravity compensation of the high-precision and long-term INSs. PMID:27999351
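The first of the two compensation strategies, interpolating deflections of the vertical (DOV) from an off-line EGM2008-derived database, can be sketched as below. The grid values here are random placeholders and the sign convention is an assumption; a real system would pre-compute the north (xi) and east (eta) DOV components from EGM2008 for the operating area and refresh the lookup on the recommended 100 s interval.

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # Placeholder off-line database: DOV components (arcsec) on a lat/lon grid.
    lats = np.linspace(30.0, 31.0, 61)    # deg
    lons = np.linspace(120.0, 121.0, 61)  # deg
    rng = np.random.default_rng(1)
    xi_grid = rng.normal(0.0, 5.0, (lats.size, lons.size))   # north-south DOV
    eta_grid = rng.normal(0.0, 5.0, (lats.size, lons.size))  # east-west DOV

    xi_interp = RegularGridInterpolator((lats, lons), xi_grid)
    eta_interp = RegularGridInterpolator((lats, lons), eta_grid)

    def gravity_disturbance(lat, lon, g=9.80665):
        """Horizontal gravity disturbance (m/s^2) from interpolated DOV."""
        arcsec = np.pi / (180.0 * 3600.0)
        xi = xi_interp([[lat, lon]])[0] * arcsec
        eta = eta_interp([[lat, lon]])[0] * arcsec
        return -g * xi, -g * eta   # sign convention is an assumption here

    print(gravity_disturbance(30.5, 120.5))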
Gravity Compensation Using EGM2008 for High-Precision Long-Term Inertial Navigation Systems.
Wu, Ruonan; Wu, Qiuping; Han, Fengtian; Liu, Tianyi; Hu, Peida; Li, Haixia
2016-12-18
The gravity disturbance vector is one of the major error sources in high-precision and long-term inertial navigation applications. Specific to the inertial navigation systems (INSs) with high-order horizontal damping networks, analyses of the error propagation show that the gravity-induced errors exist almost exclusively in the horizontal channels and are mostly caused by deflections of the vertical (DOV). Low-frequency components of the DOV propagate into the latitude and longitude errors at a ratio of 1:1 and time-varying fluctuations in the DOV excite Schuler oscillation. This paper presents two gravity compensation methods using the Earth Gravitational Model 2008 (EGM2008), namely, interpolation from the off-line database and computing gravity vectors directly using the spherical harmonic model. Particular attention is given to the error contribution of the gravity update interval and computing time delay. It is recommended for the marine navigation that a gravity vector should be calculated within 1 s and updated every 100 s at most. To meet this demand, the time duration of calculating the current gravity vector using EGM2008 has been reduced to less than 1 s by optimizing the calculation procedure. A few off-line experiments were conducted using the data of a shipborne INS collected during an actual sea test. With the aid of EGM2008, most of the low-frequency components of the position errors caused by the gravity disturbance vector have been removed and the Schuler oscillation has been attenuated effectively. In the rugged terrain, the horizontal position error could be reduced at best 48.85% of its regional maximum. The experimental results match with the theoretical analysis and indicate that EGM2008 is suitable for gravity compensation of the high-precision and long-term INSs.
Electromagnetic energy flux vector for a dispersive linear medium.
Crenshaw, Michael E; Akozbek, Neset
2006-05-01
The electromagnetic energy flux vector in a dispersive linear medium is derived from energy conservation and microscopic quantum electrodynamics and is found to be of the Umov form as the product of an electromagnetic energy density and a velocity vector.
An Algorithm for Converting Static Earth Sensor Measurements into Earth Observation Vectors
NASA Technical Reports Server (NTRS)
Harman, R.; Hashmall, Joseph A.; Sedlak, Joseph
2004-01-01
An algorithm has been developed that converts penetration angles reported by Static Earth Sensors (SESs) into Earth observation vectors. This algorithm allows compensation for variation in the horizon height including that caused by Earth oblateness. It also allows pitch and roll to be computed using any number (greater than 1) of simultaneous sensor penetration angles simplifying processing during periods of Sun and Moon interference. The algorithm computes body frame unit vectors through each SES cluster. It also computes GCI vectors from the spacecraft to the position on the Earth's limb where each cluster detects the Earth's limb. These body frame vectors are used as sensor observation vectors and the GCI vectors are used as reference vectors in an attitude solution. The attitude, with the unobservable yaw discarded, is iteratively refined to provide the Earth observation vector solution.
Computerized method to compensate for breathing body motion in dynamic chest radiographs
NASA Astrophysics Data System (ADS)
Matsuda, H.; Tanaka, R.; Sanada, S.
2017-03-01
Dynamic chest radiography combined with computer analysis allows quantitative analysis of pulmonary function and rib motion. The accuracy of the kinematic analysis is directly linked to diagnostic accuracy, and thus body motion compensation is a major concern. Our purpose in this study was to develop a computerized method to reduce breathing body motion in dynamic chest radiographs. Dynamic chest radiographs of 56 patients were obtained using a dynamic flat-panel detector. The images were divided into 1 cm squares, and the squares on the body contour were used to detect the body motion. The velocity vector was measured on the body contour using a cross-correlation method, and the body motion was then determined on the basis of the summation of the motion vectors. The body motion was then compensated by shifting the images according to the measured vector. With our method, the body motion was accurately detected to within a few pixels in clinical cases, with a mean of 82.5% in the right and left directions. In addition, our method detected slight body motion that could not be identified by human observation. We confirmed that our method worked effectively in the kinematic analysis of rib motion. The present method would be useful for reducing breathing body motion in dynamic chest radiography.
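A minimal version of the motion-detection step, estimating a shift between consecutive frames by correlation and then translating the image back by the accumulated vector, could look like the sketch below. It uses phase correlation on a whole frame rather than the paper's 1 cm-square grid on the body contour, so treat it only as an illustration of the principle.

    import numpy as np

    def estimate_shift(prev, curr):
        """Integer (dy, dx) such that curr is approximately prev shifted by (dy, dx)."""
        F = np.fft.fft2(curr) * np.conj(np.fft.fft2(prev))
        corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
        peak = np.array(np.unravel_index(np.argmax(corr), corr.shape))
        size = np.array(corr.shape)
        peak[peak > size // 2] -= size[peak > size // 2]  # wrap to signed shifts
        return peak

    def compensate(frame, shift):
        """Undo the measured body-motion vector by shifting the frame back."""
        return np.roll(frame, -shift, axis=(0, 1))

    # Toy usage: a frame translated by (3, -2) pixels is detected and shifted back.
    rng = np.random.default_rng(2)
    prev = rng.random((64, 64))
    curr = np.roll(prev, (3, -2), axis=(0, 1))   # simulated body motion
    shift = estimate_shift(prev, curr)           # expected [3, -2]
    corrected = compensate(curr, shift)
    print("estimated shift:", shift, "max residual:", np.abs(corrected - prev).max())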
Lundell, Henrik; Alexander, Daniel C; Dyrby, Tim B
2014-08-01
Stimulated echo acquisition mode (STEAM) diffusion MRI can be advantageous over pulsed-gradient spin-echo (PGSE) for diffusion times that are long compared with T2 . It therefore has potential for biomedical diffusion imaging applications at 7T and above where T2 is short. However, gradient pulses other than the diffusion gradients in the STEAM sequence contribute much greater diffusion weighting than in PGSE and lead to a disrupted experimental design. Here, we introduce a simple compensation to the STEAM acquisition that avoids the orientational bias and disrupted experiment design that these gradient pulses can otherwise produce. The compensation is simple to implement by adjusting the gradient vectors in the diffusion pulses of the STEAM sequence, so that the net effective gradient vector including contributions from diffusion and other gradient pulses is as the experiment intends. High angular resolution diffusion imaging (HARDI) data were acquired with and without the proposed compensation. The data were processed to derive standard diffusion tensor imaging (DTI) maps, which highlight the need for the compensation. Ignoring the other gradient pulses, a bias in DTI parameters from STEAM acquisition is found, due both to confounds in the analysis and the experiment design. Retrospectively correcting the analysis with a calculation of the full B matrix can partly correct for these confounds, but an acquisition that is compensated as proposed is needed to remove the effect entirely. © 2014 The Authors. NMR in Biomedicine published by John Wiley & Sons, Ltd.
Polynomial compensation, inversion, and approximation of discrete time linear systems
NASA Technical Reports Server (NTRS)
Baram, Yoram
1987-01-01
The least-squares transformation of a discrete-time multivariable linear system into a desired one by convolving the first with a polynomial system yields optimal polynomial solutions to the problems of system compensation, inversion, and approximation. The polynomial coefficients are obtained from the solution to a so-called normal linear matrix equation, whose coefficients are shown to be the weighting patterns of certain linear systems. These, in turn, can be used in the recursive solution of the normal equation.
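The least-squares flavor of this idea can be illustrated with a small NumPy sketch: given the impulse response g of a discrete-time system and a desired overall response d, fit FIR compensator coefficients h so that the convolution of g and h approximates d. The normal equations are solved here with a generic least-squares call rather than the recursive scheme of the paper, and the plant and target below are toy assumptions.

    import numpy as np

    def fit_polynomial_compensator(g, d, order):
        """Least-squares FIR coefficients h (length order+1) minimizing ||conv(g, h) - d||."""
        n = len(d)
        # Convolution matrix: column j is g delayed by j samples, truncated to length n.
        G = np.zeros((n, order + 1))
        for j in range(order + 1):
            G[j:, j] = g[:n - j]
        h, *_ = np.linalg.lstsq(G, d, rcond=None)
        return h

    # Toy example: approximately invert a first-order response toward a pure unit delay.
    g = 0.7 ** np.arange(20)           # plant impulse response
    d = np.zeros(20); d[1] = 1.0       # desired overall response: unit delay
    h = fit_polynomial_compensator(g, d, order=6)
    print("compensator coefficients:", np.round(h, 4))
    print("achieved response:", np.round(np.convolve(g, h)[:6], 4))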
Mitsouras, Dimitris; Mulkern, Robert V; Rybicki, Frank J
2008-08-01
A recently developed method for exact density compensation of non uniformly arranged samples relies on the analytically known cross-correlations of Fourier basis functions corresponding to the traced k-space trajectory. This method produces a linear system whose solution represents compensated samples that normalize the contribution of each independent element of information that can be expressed by the underlying trajectory. Unfortunately, linear system-based density compensation approaches quickly become computationally demanding with increasing number of samples (i.e., image resolution). Here, it is shown that when a trajectory is composed of rotationally symmetric interleaves, such as spiral and PROPELLER trajectories, this cross-correlations method leads to a highly simplified system of equations. Specifically, it is shown that the system matrix is circulant block-Toeplitz so that the linear system is easily block-diagonalized. The method is described and demonstrated for 32-way interleaved spiral trajectories designed for 256 image matrices; samples are compensated non iteratively in a few seconds by solving the small independent block-diagonalized linear systems in parallel. Because the method is exact and considers all the interactions between all acquired samples, up to a 10% reduction in reconstruction error concurrently with an up to 30% increase in signal to noise ratio are achieved compared to standard density compensation methods. (c) 2008 Wiley-Liss, Inc.
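The computational trick described above, a circulant structure that the FFT diagonalizes, can be seen in miniature for an ordinary circulant system. The example below is a generic circulant solve in NumPy, not the full block-Toeplitz density-compensation system of the paper, and the numbers are placeholders.

    import numpy as np
    from scipy.linalg import circulant

    # A small circulant system C x = b: C is defined entirely by its first column c.
    c = np.array([4.0, 1.0, 0.5, 0.25, 0.5, 1.0])
    b = np.arange(1.0, 7.0)

    # FFT diagonalization: the eigenvalues of C are the DFT of c, so the solve is elementwise.
    x_fft = np.real(np.fft.ifft(np.fft.fft(b) / np.fft.fft(c)))

    # Check against a dense solve.
    x_dense = np.linalg.solve(circulant(c), b)
    print(np.allclose(x_fft, x_dense))  # True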
NASA Technical Reports Server (NTRS)
Broussard, John R.
1987-01-01
Relationships between observers, Kalman filters and dynamic compensators using feedforward control theory are investigated. In particular, the relationship, if any, between the dynamic compensator state and linear functions of the discrete plant state is investigated. It is shown that, in steady state, a dynamic compensator driven by the plant output can be expressed as the sum of two terms. The first term is a linear combination of the plant state. The second term depends on plant and measurement noise, and the plant control. Thus, the state of the dynamic compensator can be expressed as an estimator of the first term with additive error given by the second term. Conditions under which a dynamic compensator is a Kalman filter are presented, and reduced-order optimal estimators are investigated.
Polarized light use in the nocturnal bull ant, Myrmecia midas.
Freas, Cody A; Narendra, Ajay; Lemesle, Corentin; Cheng, Ken
2017-08-01
Solitary foraging ants have a navigational toolkit, which includes the use of both terrestrial and celestial visual cues, allowing individuals to successfully pilot between food sources and their nest. One such celestial cue is the polarization pattern in the overhead sky. Here, we explore the use of polarized light during outbound and inbound journeys and with different home vectors in the nocturnal bull ant, Myrmecia midas . We tested foragers on both portions of the foraging trip by rotating the overhead polarization pattern by ±45°. Both outbound and inbound foragers responded to the polarized light change, but the extent to which they responded to the rotation varied. Outbound ants, both close to and further from the nest, compensated for the change in the overhead e-vector by about half of the manipulation, suggesting that outbound ants choose a compromise heading between the celestial and terrestrial compass cues. However, ants returning home compensated for the change in the e-vector by about half of the manipulation when the remaining home vector was short (1-2 m) and by more than half of the manipulation when the remaining vector was long (more than 4 m). We report these findings and discuss why weighting on polarization cues change in different contexts.
Polarized light use in the nocturnal bull ant, Myrmecia midas
Lemesle, Corentin; Cheng, Ken
2017-01-01
Solitary foraging ants have a navigational toolkit, which includes the use of both terrestrial and celestial visual cues, allowing individuals to successfully pilot between food sources and their nest. One such celestial cue is the polarization pattern in the overhead sky. Here, we explore the use of polarized light during outbound and inbound journeys and with different home vectors in the nocturnal bull ant, Myrmecia midas. We tested foragers on both portions of the foraging trip by rotating the overhead polarization pattern by ±45°. Both outbound and inbound foragers responded to the polarized light change, but the extent to which they responded to the rotation varied. Outbound ants, both close to and further from the nest, compensated for the change in the overhead e-vector by about half of the manipulation, suggesting that outbound ants choose a compromise heading between the celestial and terrestrial compass cues. However, ants returning home compensated for the change in the e-vector by about half of the manipulation when the remaining home vector was short (1−2 m) and by more than half of the manipulation when the remaining vector was long (more than 4 m). We report these findings and discuss why weighting on polarization cues change in different contexts. PMID:28879002
Nonlinear compensation techniques for magnetic suspension systems. Ph.D. Thesis - MIT
NASA Technical Reports Server (NTRS)
Trumper, David L.
1991-01-01
In aerospace applications, magnetic suspension systems may be required to operate over large variations in air-gap. Thus the nonlinearities inherent in most types of suspensions have a significant effect. Specifically, large variations in operating point may make it difficult to design a linear controller which gives satisfactory stability and performance over a large range of operating points. One way to address this problem is through the use of nonlinear compensation techniques such as feedback linearization. Nonlinear compensators have received limited attention in the magnetic suspension literature. In recent years, progress has been made in the theory of nonlinear control systems, and in the sub-area of feedback linearization. The idea is demonstrated of feedback linearization using a second order suspension system. In the context of the second order suspension, sampling rate issues in the implementation of feedback linearization are examined through simulation.
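A minimal sketch of the idea for a second-order suspension, under the common simplified force model F = c (i/x)^2 (an assumption of this sketch, not necessarily the thesis model): choose the coil current so the closed-loop dynamics become a linear double integrator driven by a virtual input v, then stabilize that linear system with ordinary pole placement.

    import numpy as np

    m, g, c = 0.1, 9.81, 1e-4        # mass (kg), gravity, force constant (assumed model)
    kp, kd = 400.0, 40.0             # gains for the linearized double integrator
    x_ref = 0.01                     # desired air gap (m)

    def control_current(x, xdot):
        """Feedback-linearizing current: makes x'' = v, then places poles with v."""
        v = -kp * (x - x_ref) - kd * xdot
        v = min(v, g - 1e-3)         # keep the square-root argument positive
        return x * np.sqrt(m * (g - v) / c)

    # Simple Euler simulation of m*x'' = m*g - c*(i/x)^2 with the linearizing control.
    dt, x, xdot = 1e-4, 0.015, 0.0
    for _ in range(20000):
        i = control_current(x, xdot)
        xddot = g - c * (i / x) ** 2 / m
        xdot += xddot * dt
        x += xdot * dt
    print("final gap:", round(x, 5))  # should settle near x_ref = 0.01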
Flatness-based embedded adaptive fuzzy control of turbocharged diesel engines
NASA Astrophysics Data System (ADS)
Rigatos, Gerasimos; Siano, Pierluigi; Arsie, Ivan
2014-10-01
In this paper nonlinear embedded control for turbocharged Diesel engines is developed with the use of differential flatness theory and adaptive fuzzy control. It is shown that the dynamic model of the turbocharged Diesel engine is differentially flat and admits dynamic feedback linearization. It is also shown that the dynamic model can be written in the linear Brunovsky canonical form, for which a state feedback controller can be easily designed. To compensate for modeling errors and external disturbances, an adaptive fuzzy control scheme is implemented, making use of the transformed dynamical system of the diesel engine that is obtained through the application of differential flatness theory. Since only the system's output is measurable, the complete state vector has to be reconstructed with the use of a state observer. It is shown that a suitable learning law can be defined for the neuro-fuzzy approximators, which are part of the controller, so as to preserve the closed-loop system stability. With the use of Lyapunov stability analysis it is proven that the proposed observer-based adaptive fuzzy control scheme results in H∞ tracking performance.
NASA Technical Reports Server (NTRS)
Crane, Harold L.
1961-01-01
With an electric analog computer, an investigation has been made of the effects of control frictions and preloads on the transient longitudinal response of a fighter airplane during abrupt small attitude corrections. The simulation included the airplane dynamics, powered control system, feel system, and a simple linearized pseudopilot. Control frictions at the stick pivot and at the servo valve as well as preloads of the stick and valve were considered individually and in combinations. It is believed that the results which are presented in the form of time histories and vector diagrams present a more detailed illustration of the effects of stray forces and compensating forces in the longitudinal control system than has previously been available. Consistent with the results of previous studies, the present results show that any of these four friction and preload forces caused some deterioration of the response. However, even a small amount of valve friction caused an oscillatory pitching response during which the phasing of the valve friction was such that it caused energy to be fed into the pitching oscillation of the airplane. Of the other friction and preload forces which were considered, it was found that stick preload was close to 180 deg. out of phase with valve friction and thus could compensate in large measure for valve friction as long as the cycling of the stick encompassed the trim point. Either stick friction or valve preload provided a smaller stabilizing effect primarily through a reduction in the amplitude of the resultant force vector acting on the control system. Some data were obtained on the effects of friction when the damping or inertia of the control system or the pilot lag was varied.
NASA Technical Reports Server (NTRS)
Crane, Harold L
1957-01-01
With an electric analog computer, an investigation has been made of the effects of control frictions and preloads on the transient longitudinal response of a fighter airplane during abrupt small attitude corrections. The simulation included the airplane dynamics, powered control system, feel system, and a simple linearized pseudopilot. Control frictions at the stick pivot and at the servo valve as well as preloads of the stick and valve were considered individually and in combinations. It is believed that the results which are presented in the form of time histories and vector diagrams present a more detailed illustration of the effects of stray forces and compensating forces in the longitudinal control system than has previously been available. Consistent with the results of previous studies, the present results show that any of these four friction and preload forces caused some deterioration of the response. However, even a small amount of valve friction caused an oscillatory pitching response during which the phasing of the valve friction was such that it caused energy to be fed into the pitching oscillation of the airplane. Of the other friction and preload forces which were considered, it was found that stick preload was close to 180 degrees out of phase with valve friction and thus could compensate in large measure for valve friction as long as the cycling of the stick encompassed the trim point. Either stick friction or valve preload provided a smaller stabilizing effect primarily through a reduction in the amplitude of the resultant force vector acting on the control system. Some data were obtained on the effects of friction when the damping or inertia of the control system or the pilot lag was varied.
A unified development of several techniques for the representation of random vectors and data sets
NASA Technical Reports Server (NTRS)
Bundick, W. T.
1973-01-01
Linear vector space theory is used to develop a general representation of a set of data vectors or random vectors by linear combinations of orthonormal vectors such that the mean squared error of the representation is minimized. The orthonormal vectors are shown to be the eigenvectors of an operator. The general representation is applied to several specific problems involving the use of the Karhunen-Loeve expansion, principal component analysis, and empirical orthogonal functions; and the common properties of these representations are developed.
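A compact NumPy sketch of the common core of these representations: express the data in the eigenvectors of the sample covariance operator (Karhunen-Loeve / principal component analysis) and truncate, which minimizes the mean squared error of the representation. The data below are synthetic placeholders.

    import numpy as np

    rng = np.random.default_rng(3)
    X = rng.normal(size=(200, 10)) @ rng.normal(size=(10, 10))   # 200 data vectors in R^10

    # Eigenvectors of the sample covariance form the optimal orthonormal basis.
    Xc = X - X.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(evals)[::-1]
    evals, evecs = evals[order], evecs[:, order]

    k = 3                                    # keep the k leading components
    coeffs = Xc @ evecs[:, :k]               # expansion coefficients
    X_hat = coeffs @ evecs[:, :k].T + X.mean(axis=0)

    mse = np.mean((X - X_hat) ** 2)
    print("retained variance fraction:", evals[:k].sum() / evals.sum(), "MSE:", mse)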
NASA Astrophysics Data System (ADS)
Gavazzi, Bruno; Le Maire, Pauline; Munschy, Marc; Dechamp, Aline
2017-04-01
The fluxgate three-component magnetometer is the kind of magnetometer that offers the lightest weight and lowest power consumption for measurement of the magnetic field intensity. Moreover, vector measurements make it the only kind of magnetometer allowing compensation of magnetic perturbations due to the equipment carried with it. Unfortunately, fluxgate magnetometers are quite uncommon in near-surface geophysics because of the difficulty of calibrating them precisely. Recent advances in the calibration of the sensors and the magnetic compensation of the devices, achieved through a simple procedure in the field, led the Institut de Physique du Globe de Strasbourg to develop instruments for georeferenced magnetic measurements at different scales - from submetric measurements on the ground to aircraft-borne acquisition, through the wide range offered by unmanned aerial vehicles (UAVs) - with a precision on the order of 1 nT. Such equipment is used for different kinds of applications: structural geology, pipe and UXO detection, and archaeology.
NASA Astrophysics Data System (ADS)
Yihaa Roodhiyah, Lisa’; Tjong, Tiffany; Nurhasan; Sutarno, D.
2018-04-01
In previous research, the linear systems of the vector finite element method for two-dimensional (2-D) magnetotelluric (MT) response modeling were solved with a non-sparse direct solver in TE mode. Nevertheless, that approach has weaknesses that must be improved, notably accuracy at low frequencies (10^-3 Hz to 10^-5 Hz), which has not yet been achieved, and the high computational cost on dense meshes. In this work, a sparse direct solver is used instead of a non-sparse direct solver to overcome these weaknesses. A sparse direct solver is advantageous for the linear systems of the vector finite element method because the matrices are symmetric and sparse. The sparse direct solver has been validated for a homogeneous half-space model and a vertical contact model against analytical solutions. The validation results show that the sparse direct solver is more stable than the non-sparse direct solver for the linear problems of the vector finite element method, especially at low frequencies. In the end, accurate 2-D MT response modeling at low frequencies (10^-3 Hz to 10^-5 Hz) has been achieved with efficient array memory allocation and less computational time.
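For readers unfamiliar with the solver choice, the sketch below contrasts a sparse direct solve against a dense solve on a symmetric sparse test matrix. The matrix is a 2-D Laplacian stand-in, not the actual vector-finite-element MT system.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import spsolve

    # Symmetric, sparse test system: 2-D Laplacian on an n x n grid (stand-in for the FE matrix).
    n = 30
    I = sp.identity(n)
    T = sp.diags([-1, 4, -1], [-1, 0, 1], shape=(n, n))
    A = (sp.kron(I, T) + sp.kron(sp.diags([-1, -1], [-1, 1], shape=(n, n)), I)).tocsc()
    b = np.ones(A.shape[0])

    x_sparse = spsolve(A, b)                    # sparse direct solver
    x_dense = np.linalg.solve(A.toarray(), b)   # dense solve for comparison
    print("agreement:", np.allclose(x_sparse, x_dense))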
Limitation on the use of the horizontal clinostat as a gravity compensator
NASA Technical Reports Server (NTRS)
Brown, A. H.; Dahl, A. O.; Chapman, D. K.
1975-01-01
If the horizontal clinostat effectively compensates for the influence of the gravity vector on the rotating plant, it makes the plant unresponsive to whatever chronic acceleration may be applied transverse to the axis of clinostat rotation. This was tested by centrifuging plants while they were growing on clinostats. For a number of morphological endpoints of development, the results depended on the magnitude of the applied g-force. Gravity compensation by the clinostat was incomplete, and this conclusion is in agreement with results of satellite experiments which are reviewed.
Nori, Francesco; Frezza, Ruggero
2005-11-01
Recent experiments on frogs and rats have led to the hypothesis that sensory-motor systems are organized into a finite number of linearly combinable modules; each module generates a motor command that drives the system to a predefined equilibrium. Surprisingly, in spite of the infiniteness of different movements that can be realized, there seems to be only a handful of these modules. The structure can be thought of as a vocabulary of "elementary control actions". Admissible controls, which in principle belong to an infinite dimensional space, are reduced to the linear vector space spanned by these elementary controls. In the present paper we address some theoretical questions that arise naturally once a similar structure is applied to the control of nonlinear kinematic chains. First of all, we show how to choose the modules so that the system does not lose its capability of generating a "complete" set of movements. Secondly, we realize a "complete" vocabulary with a minimal number of elementary control actions. Subsequently, we show how to modify the control scheme so as to compensate for parametric changes in the system to be controlled. Remarkably, we construct a set of modules with the property of being invariant with respect to the parameters that model the growth of an individual. Robustness against uncertainties is also considered, showing how to optimally choose the module equilibria so as to compensate for errors affecting the system. Finally, the motion primitive paradigm is extended to locomotion and a related formalization of internal (proprioceptive) and external (exteroceptive) variables is given.
A Fixed-Pattern Noise Correction Method Based on Gray Value Compensation for TDI CMOS Image Sensor.
Liu, Zhenwang; Xu, Jiangtao; Wang, Xinlei; Nie, Kaiming; Jin, Weimin
2015-09-16
In order to eliminate the fixed-pattern noise (FPN) in the output image of a time-delay-integration CMOS image sensor (TDI-CIS), an FPN correction method based on gray value compensation is proposed. One hundred images are first captured under uniform illumination. Then, row FPN (RFPN) and column FPN (CFPN) are estimated from the row-mean vector and column-mean vector of all collected images, respectively. Finally, RFPN is corrected by adding the estimated RFPN gray value to the original gray values of pixels in the corresponding row, and CFPN is corrected by subtracting the estimated CFPN gray value from the original gray values of pixels in the corresponding column. Experimental results based on a 128-stage TDI-CIS show that, after correcting the FPN in an image captured under uniform illumination with the proposed method, the standard deviation of the row-mean vector decreases from 5.6798 to 0.4214 LSB, and the standard deviation of the column-mean vector decreases from 15.2080 to 13.4623 LSB. Both kinds of FPN in real images captured by the TDI-CIS are eliminated effectively with the proposed method.
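The row/column decomposition described above can be sketched in NumPy. Sign conventions differ between implementations, so here both row and column offsets are simply estimated relative to the global mean of the flat-field stack and subtracted; treat this as an illustrative simplification rather than the paper's exact correction rule.

    import numpy as np

    def estimate_fpn(flat_stack):
        """Row and column fixed-pattern offsets from a stack of uniform-illumination frames."""
        mean_img = flat_stack.mean(axis=0)
        rfpn = mean_img.mean(axis=1) - mean_img.mean()   # per-row offset
        cfpn = mean_img.mean(axis=0) - mean_img.mean()   # per-column offset
        return rfpn, cfpn

    def correct(img, rfpn, cfpn):
        return img - rfpn[:, None] - cfpn[None, :]

    # Toy data: 100 flat frames with injected row/column FPN plus temporal noise.
    rng = np.random.default_rng(4)
    rows, cols = 128, 256
    true_r = rng.normal(0, 5, rows)
    true_c = rng.normal(0, 15, cols)
    stack = 500 + true_r[:, None] + true_c[None, :] + rng.normal(0, 2, (100, rows, cols))

    rfpn, cfpn = estimate_fpn(stack)
    corrected = correct(stack[0], rfpn, cfpn)
    print("column-mean std before/after:",
          stack[0].mean(axis=0).std().round(3), corrected.mean(axis=0).std().round(3))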
A component compensation method for magnetic interferential field
NASA Astrophysics Data System (ADS)
Zhang, Qi; Wan, Chengbiao; Pan, Mengchun; Liu, Zhongyan; Sun, Xiaoyong
2017-04-01
A new component searching with scalar restriction method (CSSRM) is proposed for magnetometers to compensate the magnetic interferential field caused by the ferromagnetic material of the platform and to improve measurement performance. In CSSRM, the objective function for parameter estimation minimizes the difference in the magnetic field (components and magnitude) between the measured value and the reference value. Two scalar compensation methods are compared with CSSRM, and the simulation results indicate that CSSRM can estimate all interferential parameters and the external magnetic field vector with high accuracy. The magnetic field magnitude and components compensated with CSSRM coincide with the true values very well. An experiment was carried out with a tri-axial fluxgate magnetometer mounted in a measurement system together with inertial sensors. After compensation, the error standard deviations of both the magnetic field components and the magnitude are reduced from more than a thousand nT to less than 20 nT. This suggests that CSSRM provides an effective way to improve the performance of magnetic interferential field compensation.
Drake, Birger; Nádai, Béla
1970-03-01
An empirical measure of viscosity, which is often far from being a linear function of composition, was used together with refractive index to build up a function which bears a linear relationship to the composition of tomato paste-water-sucrose mixtures. The new function can be used directly for rapid composition control by linear vector-vector transformation.
Zhao, Henan; Bryant, Garnett W.; Griffin, Wesley; Terrill, Judith E.; Chen, Jian
2017-01-01
We designed and evaluated SplitVectors, a new vector field display approach to help scientists perform new discrimination tasks on large-magnitude-range scientific data shown in three-dimensional (3D) visualization environments. SplitVectors uses scientific notation to display vector magnitude, thus improving legibility. We present an empirical study comparing the SplitVectors approach with three other approaches - direct linear representation, logarithmic, and text display commonly used in scientific visualizations. Twenty participants performed three domain analysis tasks: reading numerical values (a discrimination task), finding the ratio between values (a discrimination task), and finding the larger of two vectors (a pattern detection task). Participants used both mono and stereo conditions. Our results suggest the following: (1) SplitVectors improve accuracy by about 10 times compared to linear mapping and by four times to logarithmic in discrimination tasks; (2) SplitVectors have no significant differences from the textual display approach, but reduce cluttering in the scene; (3) SplitVectors and textual display are less sensitive to data scale than linear and logarithmic approaches; (4) using logarithmic can be problematic as participants' confidence was as high as directly reading from the textual display, but their accuracy was poor; and (5) Stereoscopy improved performance, especially in more challenging discrimination tasks. PMID:28113469
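The core encoding behind SplitVectors, showing a vector's magnitude as a mantissa plus a power-of-ten exponent rather than one linearly scaled glyph, reduces to a few lines; the 3D rendering itself is omitted here and the helper name is of course my own.

    import math

    def split_magnitude(v):
        """Return (mantissa, exponent) so that |v| = mantissa * 10**exponent, 1 <= mantissa < 10."""
        mag = math.sqrt(sum(c * c for c in v))
        if mag == 0.0:
            return 0.0, 0
        exponent = math.floor(math.log10(mag))
        return mag / 10 ** exponent, exponent

    for v in [(0.002, 0.001, 0.0), (3.0, 4.0, 0.0), (30000.0, 40000.0, 0.0)]:
        m, e = split_magnitude(v)
        print(f"{v}: mantissa {m:.2f}, exponent {e:+d}")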
Henan Zhao; Bryant, Garnett W; Griffin, Wesley; Terrill, Judith E; Jian Chen
2017-06-01
We designed and evaluated SplitVectors, a new vector field display approach to help scientists perform new discrimination tasks on large-magnitude-range scientific data shown in three-dimensional (3D) visualization environments. SplitVectors uses scientific notation to display vector magnitude, thus improving legibility. We present an empirical study comparing the SplitVectors approach with three other approaches - direct linear representation, logarithmic, and text display commonly used in scientific visualizations. Twenty participants performed three domain analysis tasks: reading numerical values (a discrimination task), finding the ratio between values (a discrimination task), and finding the larger of two vectors (a pattern detection task). Participants used both mono and stereo conditions. Our results suggest the following: (1) SplitVectors improve accuracy by about 10 times compared to linear mapping and by four times to logarithmic in discrimination tasks; (2) SplitVectors have no significant differences from the textual display approach, but reduce cluttering in the scene; (3) SplitVectors and textual display are less sensitive to data scale than linear and logarithmic approaches; (4) using logarithmic can be problematic as participants' confidence was as high as directly reading from the textual display, but their accuracy was poor; and (5) Stereoscopy improved performance, especially in more challenging discrimination tasks.
DOMAIN MISMATCH COMPENSATION FOR SPEAKER RECOGNITION USING A LIBRARY OF WHITENERS
2015-05-29
[Fragmentary abstract excerpt] Elliot Singer and Douglas Reynolds, Massachusetts Institute of ... development data is assumed to be unavailable. The method is based on a generalization of data whitening used in association with i-vector length ... normalization and utilizes a library of whitening transforms trained at system development time using strictly out-of-domain data. The approach is ...
NASA Astrophysics Data System (ADS)
Masaud, Tarek
Doubly fed induction generators (DFIGs) have been widely used for the past two decades in large wind farms. However, there are many open-ended problems yet to be solved before they can be implemented in some specific applications. This dissertation deals with the general analysis, modeling, control and applications of the DFIG for large wind farm applications. A detailed "d-q" model of the DFIG along with other applications is simulated using the MATLAB/Simulink platform. The simulation results are discussed in detail in both the sub-synchronous and super-synchronous modes of operation. An improved vector control strategy based on rotor-flux-oriented vector control is proposed to control the active power output of the DFIG. The new vector control strategy is compared with the commonly used stator-flux-oriented vector control. It is observed that the new improved vector control method provides better active power tracking accuracy compared with stator-flux-oriented vector control. The behavior of the DFIG-based wind farm under various grid disturbances is also studied in this dissertation. The implementation of Flexible AC Transmission System (FACTS) devices to overcome the voltage stability issue for such applications is investigated. The study includes the implementation of both a static synchronous compensator (STATCOM) and a static VAR compensator (SVC) as dynamic reactive power compensators at the point of common coupling to support the DFIG-based wind farm during disturbances. Integrating FACTS protects the grid-connected DFIG-based wind farm from going offline during and after the disturbances. It is found that both devices improve the transient performance and therefore help the wind turbine generator system to remain in service during grid faults. A comparison between the performance of the two devices in terms of the amount of reactive power injected, time response and application cost is discussed in this dissertation. Finally, the integration of a battery energy storage system (BESS) into a grid-connected DFIG-based wind turbine is also addressed as a proposed solution to smooth out the output power during wind speed variations.
Material decomposition in an arbitrary number of dimensions using noise compensating projection
NASA Astrophysics Data System (ADS)
O'Donnell, Thomas; Halaweish, Ahmed; Cormode, David; Cheheltani, Rabee; Fayad, Zahi A.; Mani, Venkatesh
2017-03-01
Purpose: Multi-energy CT (e.g., dual energy or photon counting) facilitates the identification of certain compounds via data decomposition. However, the standard approach to decomposition (i.e., solving a system of linear equations) fails if - due to noise - a pixel's vector of HU values falls outside the boundary of values describing possible pure or mixed basis materials. Typically, this is addressed by either throwing away those pixels or projecting them onto the closest point on this boundary. However, when acquiring four (or more) energy volumes, the space bounded by three (or more) materials that may be found in the human body (either naturally or through injection) can be quite small. Noise may significantly limit the number of pixels that fall within it. Therefore, projection onto the boundary becomes an important option. But projection in higher than three-dimensional space is not possible with standard vector algebra: the cross-product is not defined. Methods: We describe a technique which employs Clifford Algebra to perform projection in an arbitrary number of dimensions. Clifford Algebra describes a manipulation of vectors that incorporates the concepts of addition, subtraction, multiplication, and division. Thereby, vectors may be operated on like scalars, forming a true algebra. Results: We tested our approach on a phantom containing inserts of calcium, gadolinium, iodine, gold nanoparticles and mixtures of pairs thereof. Images were acquired on a prototype photon counting CT scanner under a range of threshold combinations. A comparison of the accuracy of different threshold combinations versus ground truth is presented. Conclusions: Material decomposition is possible with three or more materials and four or more energy thresholds using Clifford Algebra projection to mitigate noise.
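To make the underlying decomposition-and-projection problem concrete, the sketch below uses a conventional non-negative least-squares fit as a stand-in: it keeps a noisy four-energy pixel inside the physically admissible cone spanned by the basis materials. This is not the authors' Clifford Algebra projection, and all HU signatures and measurements are made-up numbers.

    import numpy as np
    from scipy.optimize import nnls

    # Columns: HU signatures of pure basis materials at 4 energy thresholds (made-up numbers).
    M = np.array([[300.0, 2000.0, 1500.0],
                  [280.0, 1400.0, 1600.0],
                  [250.0,  900.0, 1700.0],
                  [220.0,  600.0, 1800.0]])   # e.g. calcium, iodine, gold

    pixel = np.array([310.0, 500.0, 600.0, 900.0])   # noisy 4-energy measurement

    # Non-negative least squares keeps the decomposition inside the admissible cone,
    # playing the role of "projection onto the boundary" when noise pushes the pixel outside.
    fractions, residual = nnls(M, pixel)
    print("material fractions:", np.round(fractions, 3), "residual:", round(residual, 1))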
Limitation on the Use of the Horizontal Clinostat as a Gravity Compensator 123
Brown, Allan H.; Dahl, A. O.; Chapman, D. K.
1976-01-01
If the horizontal clinostat effectively compensates for the influence of the gravity vector on the rotating plant, it should make the plant unresponsive to whatever chronic acceleration may be applied transverse to the axis of clinostat rotation. This was tested by centrifuging plants while they were growing on clinostats. For a number of morphological end-points of development the results depended on the magnitude of the applied g-force. Therefore, gravity compensation by the clinostat was incomplete. This conclusion is in agreement with results of satellite experiments which are reviewed. PMID:16659631
Linear Magnetochiral effect in Weyl Semimetals
NASA Astrophysics Data System (ADS)
Cortijo, Alberto
We describe the presence of a linear magnetochiral effect in time-reversal-breaking Weyl semimetals. The magnetochiral effect consists in a simultaneous linear dependence of the magnetotransport coefficients on the magnetic field and a momentum vector. This simultaneous dependence is allowed by the Onsager reciprocity relations, with the separation vector between the Weyl nodes playing that role. This linear magnetochiral effect constitutes a new transport effect associated with the topological structures linked to time-reversal-breaking Weyl semimetals. Supported by European Union structural funds, the Comunidad de Madrid MAD2D-CM Program (S2013/MIT-3007) and MINECO (Spain) Grant No. FIS2015-73454-JIN.
Integrated Power and Attitude Control for a Spacecraft with Flywheels and Control Moment Gyroscopes
NASA Technical Reports Server (NTRS)
Roithmayr, Carlos M.; Karlgaard, Christopher D.; Kumar, Renjith R.; Bose, David M.
2003-01-01
A law is designed for simultaneous control of the orientation of an Earth-pointing spacecraft, the energy stored by counter-rotating flywheels, and the angular momentum of the flywheels and control moment gyroscopes used together as an integrated set of actuators for attitude control. General nonlinear equations of motion are presented in vector-dyadic form and used to obtain approximate expressions, which are then linearized in preparation for the design of control laws that include feedback of the flywheel kinetic energy error as a means of compensating for damping exerted by the rotor bearings. Two flywheel 'steering laws' are developed such that the torque commanded by the attitude control law is achieved while energy is stored or discharged at the required rate. Using the International Space Station as an example, numerical simulations are performed to demonstrate control about a torque equilibrium attitude and illustrate the benefits of kinetic energy error feedback.
Non-linear dynamic compensation system
NASA Technical Reports Server (NTRS)
Lin, Yu-Hwan (Inventor); Lurie, Boris J. (Inventor)
1992-01-01
A non-linear dynamic compensation subsystem is added in the feedback loop of a high precision optical mirror positioning control system to smoothly alter the control system response bandwidth from a relatively wide response bandwidth optimized for speed of control system response to a bandwidth sufficiently narrow to reduce position errors resulting from the quantization noise inherent in the inductosyn used to measure mirror position. The non-linear dynamic compensation system includes a limiter for limiting the error signal within preselected limits, a compensator for modifying the limiter output to achieve the reduced bandwidth response, and an adder for combining the modified error signal with the difference between the limited and unlimited error signals. The adder output is applied to the control system motor so that the system response is optimized for accuracy when the error signal is within the preselected limits, optimized for speed of response when the error signal is substantially beyond the preselected limits, and smoothly varied therebetween as the error signal approaches the preselected limits.
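A stripped-down sketch of the limiter/compensator/adder arrangement described above, with a first-order low-pass used as a placeholder for the narrow-bandwidth compensator (the limit, filter constant, and signal values are all assumptions for illustration):

    import numpy as np

    def nonlinear_compensator(error, state, limit=0.5, alpha=0.05):
        """One sample of: limit the error, narrow-band compensate the limited part,
        then add back the unlimited excess so large errors pass through at full bandwidth."""
        e_lim = np.clip(error, -limit, limit)          # limiter
        state = state + alpha * (e_lim - state)        # placeholder narrow-bandwidth compensator
        return state + (error - e_lim), state          # adder: compensated core + excess

    state = 0.0
    for error in (0.1, 0.2, 2.0, -1.5, 0.05):          # small errors vs. large slews
        command, state = nonlinear_compensator(error, state)
        print(f"error {error:+.2f} -> motor command {command:+.3f}")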
Error compensation for hybrid-computer solution of linear differential equations
NASA Technical Reports Server (NTRS)
Kemp, N. H.
1970-01-01
Z-transform technique compensates for digital transport delay and digital-to-analog hold. Method determines best values for compensation constants in multi-step and Taylor series projections. Technique also provides hybrid-calculation error compared to continuous exact solution, plus system stability properties.
Linear positioning laser calibration setup of CNC machine tools
NASA Astrophysics Data System (ADS)
Sui, Xiulin; Yang, Congjing
2002-10-01
The linear positioning laser calibration setup for CNC machine tools is capable of executing machine tool laser calibration and backlash compensation. Using this setup, hole locations on CNC machine tools will be correct and the machine tool geometry will be evaluated and adjusted. Machine tool laser calibration and backlash compensation is a simple and straightforward process. The first step is to 'find' the stroke limits of the axis; the laser head is then brought into correct alignment. The second step is to move the machine axis to the other extreme, where the laser head is aligned using rotation and elevation adjustments. Finally, the machine is moved to the start position and the final alignment is verified. The stroke of the machine and the machine compensation interval dictate the amount of data required for each axis. These factors determine the amount of time required for a thorough compensation of the linear positioning accuracy. The Laser Calibrator System monitors the material temperature and the air density, thereby taking machine thermal growth and laser beam frequency into consideration. This linear positioning laser calibration setup can be used on CNC machine tools, CNC lathes, horizontal machining centers and vertical machining centers.
Lin, Nan; Wei, Min
2014-01-01
After vestibular labyrinth injury, behavioral deficits partially recover through the process of vestibular compensation. The present study was performed to improve our understanding of the physiology of the macaque vestibular system in the compensated state (>7 wk) after unilateral labyrinthectomy (UL). Three groups of vestibular nucleus neurons were included: pre-UL control neurons, neurons ipsilateral to the lesion, and neurons contralateral to the lesion. The firing responses of neurons sensitive to linear acceleration in the horizontal plane were recorded during sinusoidal horizontal translation directed along six different orientations (30° apart) at 0.5 Hz and 0.2 g peak acceleration (196 cm/s²). These data defined the vector of best response for each neuron in the horizontal plane, along which sensitivity, symmetry, detection threshold, and variability of firing were determined. Additionally, the responses of the same cells to translation over a series of frequencies (0.25–5.0 Hz) either in the interaural or naso-occipital orientation were obtained to define the frequency response characteristics in each group. We found a decrease in sensitivity, increase in threshold, and alteration in orientation of best responses in the vestibular nuclei after UL. Additionally, the phase relationship of the best neural response to translational stimulation changed with UL. The symmetry of individual neuron responses in the excitatory and inhibitory directions was unchanged by UL. Bilateral central utricular neurons still demonstrated two-dimensional tuning after UL, consistent with spatio-temporal convergence from a single vestibular end-organ. These neuronal data correlate with known behavioral deficits after unilateral vestibular compromise. PMID:24717349
Analysis and experiments for delay compensation in attitude control of flexible spacecraft
NASA Astrophysics Data System (ADS)
Sabatini, Marco; Palmerini, Giovanni B.; Leonangeli, Nazareno; Gasbarri, Paolo
2014-11-01
Space vehicles are often characterized by highly flexible appendages, with low natural frequencies which can generate coupling phenomena during orbital maneuvering. The stability and delay margins of the controlled system are deeply affected by the presence of bodies with different elastic properties, assembled to form a complex multibody system. As a consequence, unstable behavior can arise. In this paper the problem is first faced from a numerical point of view, developing accurate multibody mathematical models, as well as relevant navigation and control algorithms. One of the main causes of instability is identified with the unavoidable presence of time delays in the GNC loop. A strategy to compensate for these delays is elaborated and tested using the simulation tool, and finally validated by means of a free floating platform, replicating the flexible spacecraft attitude dynamics (single axis rotation). The platform is equipped with thrusters commanded according to the on-off modulation of the Linear Quadratic Regulator (LQR) control law. The LQR is based on the estimate of the full state vector, i.e. including both rigid (attitude) and elastic variables, which is possible thanks to the online measurement of the flexible displacements, realized by processing the images acquired by a dedicated camera. The accurate mathematical model of the system and the rigid and elastic measurements enable a prediction of the state, so that the control is evaluated taking into account the predicted state at the delayed time. Both the simulations and the experimental campaign demonstrate that by compensating the time delay in this way, the instability is eliminated and the maneuver is performed accurately.
Vector optical fields with bipolar symmetry of linear polarization.
Pan, Yue; Li, Yongnan; Li, Si-Min; Ren, Zhi-Cheng; Si, Yu; Tu, Chenghou; Wang, Hui-Tian
2013-09-15
We focus on a new kind of vector optical field with bipolar symmetry of linear polarization instead of cylindrical and elliptical symmetries, enriching the family of vector optical fields. We design theoretically and generate experimentally the demanded vector optical fields and then explore some novel tightly focusing properties. The geometric configurations of states of polarization provide additional degrees of freedom, assisting in engineering the field distribution at the focus for specific applications such as lithography, optical trapping, and material processing.
Quantum corrections to the generalized Proca theory via a matter field
NASA Astrophysics Data System (ADS)
Amado, André; Haghani, Zahra; Mohammadi, Azadeh; Shahidi, Shahab
2017-09-01
We study the quantum corrections to the generalized Proca theory via matter loops. We consider two types of interactions, linear and nonlinear in the vector field. Calculating the one-loop correction to the vector field propagator, three- and four-point functions, we show that the non-linear interactions are harmless, although they renormalize the theory. The linear matter-vector field interactions introduce ghost degrees of freedom to the generalized Proca theory. Treating the theory as an effective theory, we calculate the energy scale up to which the theory remains healthy.
Decoding and optimized implementation of SECDED codes over GF(q)
Ward, H. Lee; Ganti, Anand; Resnick, David R
2013-10-22
A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.
Design, decoding and optimized implementation of SECDED codes over GF(q)
Ward, H Lee; Ganti, Anand; Resnick, David R
2014-06-17
A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.
Decoding and optimized implementation of SECDED codes over GF(q)
Ward, H Lee; Ganti, Anand; Resnick, David R
2014-11-18
A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.
NASA Technical Reports Server (NTRS)
Kuznetsov, Stephen; Marriott, Darin
2008-01-01
Advances in ultra high speed linear induction electromagnetic launchers over the past decade have focused on magnetic compensation of the exit and entry-edge transient flux wave to produce efficient and compact linear electric machinery. The paper discusses two approaches to edge compensation in long-stator induction catapults with typical end speeds of 150 to 1,500 m/s. In classical linear induction machines, the exit-edge effect is manifest as two auxiliary traveling waves that produce a magnetic drag on the projectile and a loss of magnetic flux over the main surface of the machine. In the new design for the Stator Compensated Induction Machine (SCIM) high velocity launcher, the exit-edge effect is nulled by a dual wavelength machine or alternately the airgap flux is peaked at a location prior to the exit edge. A four (4) stage LIM catapult is presently being constructed for 180 m/s end speed operation using double-sided longitudinal flux machines. Advanced exit and entry edge compensation is being used to maximize system efficiency and minimize stray heating of the reaction armature. Each stage will output approximately 60 kN of force and produce over 500 G's of acceleration on the armature. The advantage of this design is that there is no ablation to the projectile and no sliding contacts, allowing repeated firing of the launcher without maintenance of any sort. The paper shows results of a parametric study for 500 m/s and 1,500 m/s linear induction launchers incorporating two of the latest compensation techniques for an air-core stator primary and an iron-core primary winding. Typical thrust densities for these machines are in the range of 150 kN/sq.m. to 225 kN/sq.m. and these compete favorably with permanent magnet linear synchronous machines. The operational advantages of the high speed SCIM launcher are shown by eliminating the need for pole-angle position sensors as would be required by synchronous systems. The stator power factor is also improved.
A linearization time-domain CMOS smart temperature sensor using a curvature compensation oscillator.
Chen, Chun-Chi; Chen, Hao-Wen
2013-08-28
This paper presents an area-efficient time-domain CMOS smart temperature sensor using a curvature compensation oscillator for linearity enhancement with a -40 to 120 °C temperature range operability. The inverter-based smart temperature sensors can substantially reduce the cost and circuit complexity of integrated temperature sensors. However, a large curvature exists on the temperature-to-time transfer curve of the inverter-based delay line and results in poor linearity of the sensor output. For cost reduction and error improvement, a temperature-to-pulse generator composed of a ring oscillator and a time amplifier was used to generate a thermal sensing pulse with a sufficient width proportional to the absolute temperature (PTAT). Then, a simple but effective on-chip curvature compensation oscillator is proposed to simultaneously count and compensate the PTAT pulse with curvature for linearization. With such a simple structure, the proposed sensor possesses an extremely small area of 0.07 mm² in a TSMC 0.35-μm CMOS 2P4M digital process. By using an oscillator-based scheme design, the proposed sensor achieves a fine resolution of 0.045 °C without significantly increasing the circuit area. With the curvature compensation, the inaccuracy of -1.2 to 0.2 °C is achieved in an operation range of -40 to 120 °C after two-point calibration for 14 packaged chips. The power consumption is measured as 23 mW at a sample rate of 10 samples/s.
Multiple-Point Temperature Gradient Algorithm for Ring Laser Gyroscope Bias Compensation
Li, Geng; Zhang, Pengfei; Wei, Guo; Xie, Yuanping; Yu, Xudong; Long, Xingwu
2015-01-01
To further improve ring laser gyroscope (RLG) bias stability, a multiple-point temperature gradient algorithm is proposed for RLG bias compensation in this paper. Based on the multiple-point temperature measurement system, a complete thermo-image of the RLG block is developed. Combined with the multiple-point temperature gradients between different points of the RLG block, the particle swarm optimization algorithm is used to tune the support vector machine (SVM) parameters, and an optimized design for selecting the thermometer locations is also discussed. The experimental results validate the superiority of the introduced method and enhance the precision and generalizability in the RLG bias compensation model. PMID:26633401
Recent Developments In Theory Of Balanced Linear Systems
NASA Technical Reports Server (NTRS)
Gawronski, Wodek
1994-01-01
Report presents theoretical study of some issues of controllability and observability of a system represented by a linear, time-invariant mathematical model of the form ẋ = Ax + Bu, y = Cx + Du, x(0) = x₀, where x is the n-dimensional vector representing the state of the system; u is the p-dimensional vector representing the control input to the system; y is the q-dimensional vector representing the output of the system; n, p, and q are integers; x(0) is the initial (zero-time) state vector; and the set of matrices (A, B, C, D) is said to constitute the state-space representation of the system.
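The controllability and observability tests that underlie such balanced-realization studies can be illustrated numerically. The following is a minimal NumPy sketch, not taken from the report; the matrices are an arbitrary two-state example of the Kalman rank tests for a state-space model ẋ = Ax + Bu, y = Cx + Du.

```python
import numpy as np

def controllability_matrix(A, B):
    """Stack [B, AB, A^2 B, ...] column-wise (Kalman controllability test)."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

def observability_matrix(A, C):
    """Stack [C; CA; CA^2; ...] row-wise (Kalman observability test)."""
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

# Illustrative 2-state, single-input, single-output system (values made up).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

n = A.shape[0]
print("controllable:", np.linalg.matrix_rank(controllability_matrix(A, B)) == n)
print("observable:  ", np.linalg.matrix_rank(observability_matrix(A, C)) == n)
```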
Basic linear algebra subprograms for FORTRAN usage
NASA Technical Reports Server (NTRS)
Lawson, C. L.; Hanson, R. J.; Kincaid, D. R.; Krogh, F. T.
1977-01-01
A package of 38 low level subprograms for many of the basic operations of numerical linear algebra is presented. The package is intended to be used with FORTRAN. The operations in the package are dot products, elementary vector operations, Givens transformations, vector copy and swap, vector norms, vector scaling, and the indices of components of largest magnitude. The subprograms and a test driver are available in portable FORTRAN. Versions of the subprograms are also provided in assembly language for the IBM 360/67, the CDC 6600 and CDC 7600, and the Univac 1108.
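For readers who want to try the same basic operations today, SciPy exposes low-level BLAS wrappers under the classic names. The snippet below is an illustrative modern usage example, not part of the original FORTRAN package.

```python
import numpy as np
from scipy.linalg import blas

x = np.array([3.0, -1.0, 4.0, 1.0, -5.0])
y = np.array([2.0, 7.0, 1.0, 8.0, 2.0])

dot  = blas.ddot(x, y)           # dot product
axpy = blas.daxpy(x, y, a=2.0)   # elementary vector operation: returns 2*x + y
nrm  = blas.dnrm2(x)             # Euclidean vector norm
imax = blas.idamax(x)            # index of the component of largest magnitude

print(dot, nrm, imax)
print(axpy)
```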
The Stability Region for Feedback Control of the Wake Behind Twin Oscillating Cylinders
NASA Astrophysics Data System (ADS)
Borggaard, Jeff; Gugercin, Serkan; Zietsman, Lizette
2016-11-01
Linear feedback control has the ability to stabilize vortex shedding behind twin cylinders where cylinder rotation is the actuation mechanism. Complete elimination of the wake is only possible for certain Reynolds numbers and cylinder spacing. This is related to the presence of asymmetric unstable modes in the linearized system. We investigate this region of parameter space using a number of closed-loop simulations that bound this region. We then consider the practical issue of designing feedback controls based on limited state measurements by building a nonlinear compensator using linear robust control theory and incorporating the nonlinear terms in the compensator (e.g., using the extended Kalman filter). Interpolatory model reduction methods are applied to the large discretized, linearized Navier-Stokes system and used for computing the control laws and compensators. Preliminary closed-loop simulations of a three-dimensional version of this problem will also be presented. Supported in part by the National Science Foundation.
The linear combination of vectors implies the existence of the cross and dot products
NASA Astrophysics Data System (ADS)
Pujol, Jose
2018-07-01
Given two vectors u and v, their cross product u × v is a vector perpendicular to u and v. The motivation for this property, however, is never addressed. Here we show that the existence of the cross and dot products and the perpendicularity property follow from the concept of linear combination, which does not involve products of vectors. For our proof we consider the plane generated by a linear combination of u and v. When looking for the coefficients in the linear combination required to reach a desired point on the plane, the solution involves the existence of a normal vector n = u × v. Our results have a bearing on the history of vector analysis, as a product similar to the cross product but without the perpendicularity requirement existed at the same time. These competing products originate in the work of two major nineteenth-century mathematicians, W. Hamilton and H. Grassmann. These historical aspects are discussed in some detail here. We also address certain aspects of the teaching of u × v to undergraduate students, which is known to carry some difficulties. This includes the algebraic and geometric definitions of u × v, the rule for the direction of u × v, and the pseudovectorial nature of u × v.
Chen, Benyong; Cheng, Liang; Yan, Liping; Zhang, Enzheng; Lou, Yingtian
2017-03-01
The laser beam drift seriously influences the accuracy of straightness or displacement measurement in laser interferometers, especially for long travel measurement. To solve this problem, a heterodyne straightness and displacement measuring interferometer with laser beam drift compensation is proposed. In this interferometer, the simultaneous measurement of straightness error and displacement is realized by using heterodyne interferometry, and the laser beam drift is determined to compensate the measurement results of straightness error and displacement in real time. The optical configuration of the interferometer is designed. The principle of the simultaneous measurement of straightness, displacement, and laser beam drift is depicted and analyzed in detail. And the compensation of the laser beam drift for the straightness error and displacement is presented. Several experiments were performed to verify the feasibility of the interferometer and the effectiveness of the laser beam drift compensation. The laser beam stability experiments show that the position stability of the laser beam spot can be improved by more than 50% after compensation. The measurement and compensation experiments of straightness error and displacement by testing a linear stage at different distances show that the straightness and displacement obtained from the interferometer are in agreement with those obtained from a compared interferometer and the measured stage. These results demonstrate that this interferometer not only eliminates the influence of laser beam drift on the measurement accuracy but also provides simultaneous measurement of straightness error and displacement, making it suitable for long-travel linear stage metrology.
An inherent curvature-compensated voltage reference using non-linearity of gate coupling coefficient
NASA Astrophysics Data System (ADS)
Hande, Vinayak; Shojaei Baghini, Maryam
2015-08-01
A novel current-mode voltage reference circuit which is capable of generating sub-1 V output voltage is presented. The proposed architecture exhibits an inherent curvature compensation ability. The curvature compensation is achieved by utilizing the non-linear behavior of the gate coupling coefficient to compensate the non-linear temperature dependence of the base-emitter voltage. We have also utilized the developments in CMOS process to reduce power and area consumption. The proposed voltage reference is analyzed theoretically and compared with other existing methods. The circuit is designed and simulated in 180 nm mixed-mode CMOS UMC technology, which gives a reference level of 246 mV. The minimum required supply voltage is 1 V with a maximum current drawn of 9.24 μA. A temperature coefficient of 9 ppm/°C is achieved over the -25 to 125 °C temperature range. The reference voltage varies by ±11 mV across process corners. The reference circuit shows a line sensitivity of 0.9 mV/V with an area consumption of 100 × 110 μm².
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mueller, Kerstin; Schwemmer, Chris; Hornegger, Joachim
2013-03-15
Purpose: For interventional cardiac procedures, anatomical and functional information about the cardiac chambers is of major interest. With the technology of angiographic C-arm systems it is possible to reconstruct intraprocedural three-dimensional (3D) images from 2D rotational angiographic projection data (C-arm CT). However, 3D reconstruction of a dynamic object is a fundamental problem in C-arm CT reconstruction. The 2D projections are acquired over a scan time of several seconds, thus the projection data show different states of the heart. A standard FDK reconstruction algorithm would use all acquired data for a filtered backprojection and result in a motion-blurred image. In this approach, a motion compensated reconstruction algorithm requiring knowledge of the 3D heart motion is used. The motion is estimated from a previously presented 3D dynamic surface model. This dynamic surface model results in a sparse motion vector field (MVF) defined at control points. In order to perform a motion compensated reconstruction, a dense motion vector field is required. The dense MVF is generated by interpolation of the sparse MVF. Therefore, the influence of different motion interpolation methods on the reconstructed image quality is evaluated. Methods: Four different interpolation methods, thin-plate splines (TPS), Shepard's method, a smoothed weighting function, and a simple averaging, were evaluated. The reconstruction quality was measured on phantom data, a porcine model as well as on in vivo clinical data sets. As a quality index, the 2D overlap of the forward projected motion compensated reconstructed ventricle and the segmented 2D ventricle blood pool was quantitatively measured with the Dice similarity coefficient and the mean deviation between extracted ventricle contours. For the phantom data set, the normalized root mean square error (nRMSE) and the universal quality index (UQI) were also evaluated in 3D image space. Results: The quantitative evaluation of all experiments showed that TPS interpolation provided the best results. The quantitative results in the phantom experiments showed comparable nRMSE of approximately 0.047 ± 0.004 for the TPS and Shepard's method. Only slightly inferior results for the smoothed weighting function and the linear approach were achieved. The UQI resulted in a value of approximately 99% for all four interpolation methods. On clinical human data sets, the best results were clearly obtained with the TPS interpolation. The mean contour deviation between the TPS reconstruction and the standard FDK reconstruction improved in the three human cases by 1.52, 1.34, and 1.55 mm. The Dice coefficient showed less sensitivity with respect to variations in the ventricle boundary. Conclusions: In this work, the influence of different motion interpolation methods on left ventricle motion compensated tomographic reconstructions was investigated. The best quantitative reconstruction results of a phantom, a porcine, and human clinical data sets were achieved with the TPS approach. In general, the framework of motion estimation using a surface model and motion interpolation to a dense MVF provides the ability for tomographic reconstruction using a motion compensation technique.
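For the interpolation step itself, SciPy's radial basis function interpolator supports a thin-plate-spline kernel, so a sparse-to-dense MVF interpolation can be sketched as follows. The control points and displacements here are synthetic placeholders; this is not the authors' pipeline.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(4)

# Sparse motion vector field: 3D displacements known at 100 surface control
# points (made-up values standing in for the dynamic surface model output).
control_pts = rng.uniform(0.0, 100.0, size=(100, 3))            # mm
sparse_mvf = np.column_stack([
    5.0 * np.sin(2 * np.pi * control_pts[:, 2] / 100.0),         # dx
    2.0 * np.cos(2 * np.pi * control_pts[:, 1] / 100.0),         # dy
    np.full(len(control_pts), 1.0),                              # dz
])

# Thin-plate-spline interpolation to a dense voxel grid (a coarse 20^3 grid
# keeps the example small); one interpolator handles all three components.
tps = RBFInterpolator(control_pts, sparse_mvf, kernel="thin_plate_spline")
g = np.linspace(0.0, 100.0, 20)
grid = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1).reshape(-1, 3)
dense_mvf = tps(grid).reshape(20, 20, 20, 3)

print(dense_mvf.shape)   # (20, 20, 20, 3) dense displacement field
```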
Threshold raw retrieved contrast in coronagraphs is limited by internal polarization
NASA Astrophysics Data System (ADS)
Breckinridge, James
The objective of this work is to provide the exoplanet program with an accurate model of the coronagraph complex point spread function, methods to correct chromatic aberration in the presence of polarization aberrations, device requirements to minimize and compensate for these aberrations at levels needed for exoplanet coronagraphy, and exoplanet retrieval algorithms in the presence of polarization aberrations. Currently, space based coronagraphs are designed and performance analyzed using scalar wave aberration theory. Breckinridge, Lam & Chipman (2015) PASP 127: 445-468 and Breckinridge & Oppenheimer (2004) ApJ 600: 1091-1098 showed that astronomical telescopes designed for exoplanet and precision astrometric science require polarization or vector-wave analysis. Internal instrument polarization limits both threshold raw contrast and measurements of the vector wave properties of the electromagnetic radiation from stars, exoplanets, gas and dust. The threshold raw contrast obtained using only scalar wave theory is much more optimistic than that obtained using the more hardware-realistic vector wave theory. Internal polarization reduces system contrast, increases scattered light, alters radiometric measurements, distorts diffraction-limited star images and reduces signal-to-noise ratio. For example, a vector-wave analysis shows that the WFIRST-CGI instrument will have a threshold raw contrast of 10⁻⁷, not the 10⁻⁸ forecasted using the scalar wave analysis given in the WFIRST-CGI 2015 report. The physical nature of the complex point spread function determines the exoplanet scientific yield of coronagraphs. We propose to use the Polaris-M polarization aberration ray-tracing software developed at the College of Optical Science of the University of Arizona to ray trace both a "typical" exoplanet coronagraph system as well as the WFIRST-CGI system. Threshold raw contrast and the field across the complex PSF will be calculated as a function of optical device vector E&M requirements on: 1. Lyot coronagraph mask and stop size, configuration, location and composition, 2. Uniformity of the complex reflectance of the highly reflecting metal mirrors with their dielectric overcoats, and 3. Opto-mechanical layout. Once these requirements are developed, polarization aberration mitigation studies can begin to identify a practical solution to compensate polarization errors, not unlike the way the more developed technology of A/O compensates for pointing and manufacturing errors. Several methods to compensate for chromatic aberration in coronagraphs further compound the complex PSF errors that require compensation to maximize the best retrieved raw contrast in the presence of exoplanets in the vicinity of stars. Internal instrument polarization introduces partial coherence into the wavefront to distort the speckle-pattern complex-field in the dark hole. An additional factor that determines retrieved raw contrast is our ability to effectively process the polarization-distorted field within the dark hole. This study is essential to the correct calculation of exoplanet coronagraph science yield, development of requirements on subsystem devices (mirrors, stops, masks, spectrometers, wavefront error mitigation optics and opto-mechanical layout) and the development of exoplanet retrieval algorithms.
Reaction wheel low-speed compensation using a dither signal
NASA Astrophysics Data System (ADS)
Stetson, John B., Jr.
1993-08-01
A method for improving low-speed reaction wheel performance on a three-axis controlled spacecraft is presented. The method combines a constant amplitude offset with an unbiased, oscillating dither to harmonically linearize rolling solid friction dynamics. The complete, nonlinear rolling solid friction dynamics using an analytic modification to the experimentally verified Dahl solid friction model were analyzed using the dual-input describing function method to assess the benefits of dither compensation. The modified analytic solid friction model was experimentally verified with a small dc servomotor actuated reaction wheel assembly. Using dither compensation, abrupt static friction disturbances are eliminated and near-linear behavior through zero rate can be achieved. Simulated vehicle response to a wheel rate reversal shows that when the dither and offset compensation is used, elastic modes are not significantly excited, and the uncompensated attitude error reduces by 34:1.
NASA Astrophysics Data System (ADS)
Wu, Peilin; Zhang, Qunying; Fei, Chunjiao; Fang, Guangyou
2017-04-01
Aeromagnetic gradients are typically measured by optically pumped magnetometers mounted on an aircraft. Any aircraft, particularly helicopters, produces significant levels of magnetic interference. Therefore, aeromagnetic compensation is essential, and least squares (LS) is the conventional method used for reducing interference levels. However, the LS approach to solving the aeromagnetic interference model has a few difficulties, one of which is in handling multicollinearity. Therefore, we propose an aeromagnetic gradient compensation method, specifically targeted for helicopter use but applicable on any airborne platform, which is based on the ɛ-support vector regression algorithm. The structural risk minimization criterion intrinsic to the method avoids multicollinearity altogether. Local aeromagnetic anomalies can be retained, and platform-generated fields are suppressed simultaneously by constructing an appropriate loss function and kernel function. The method was tested using an unmanned helicopter and obtained improvement ratios of 12.7 and 3.5 in the vertical and horizontal gradient data, respectively. Both of these values are probably better than those that would have been obtained from the conventional method applied to the same data, had it been possible to do so in a suitable comparative context. The validity of the proposed method is demonstrated by the experimental result.
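A rough sketch of the idea behind ε-support vector regression for this kind of compensation is given below. The data, feature terms, and parameter values are entirely synthetic placeholders, not the authors' interference model or flight data.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic "attitude" features (stand-ins for direction cosines and their
# rates) and a synthetic platform interference signal; both are invented.
n = 2000
X = rng.uniform(-1.0, 1.0, size=(n, 6))
interference = 40.0 * X[:, 0] + 25.0 * X[:, 1] * X[:, 2] - 10.0 * X[:, 4]
anomaly = 5.0 * np.sin(np.linspace(0, 20 * np.pi, n))   # "geology" to retain
measured = interference + anomaly + rng.normal(0, 0.5, n)

# epsilon-SVR with an RBF kernel fits the interference as a function of the
# attitude features; the anomaly is not a function of those features, so the
# regression cannot absorb it and it is retained after compensation.
model = SVR(kernel="rbf", C=100.0, epsilon=1.0)
model.fit(X, measured)

compensated = measured - model.predict(X)
improvement = np.std(measured - anomaly) / np.std(compensated - anomaly)
print(f"interference std reduced by a factor of {improvement:.1f}")
```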
Control optimization, stabilization and computer algorithms for aircraft applications
NASA Technical Reports Server (NTRS)
1975-01-01
Research related to reliable aircraft design is summarized. Topics discussed include systems reliability optimization, failure detection algorithms, analysis of nonlinear filters, design of compensators incorporating time delays, digital compensator design, estimation for systems with echoes, low-order compensator design, descent-phase controller for 4-D navigation, infinite dimensional mathematical programming problems and optimal control problems with constraints, robust compensator design, numerical methods for the Lyapunov equations, and perturbation methods in linear filtering and control.
NASA Technical Reports Server (NTRS)
Schulz, G.
1977-01-01
The theory of output vector feedback (a few measured quantities) is used to derive completely active oscillation isolation functions for helicopters. These feedback controller concepts are tested with various versions of the BO 105 helicopter and their performance is demonstrated. A compensation of the vibrational excitations from the rotor and harmonics of the number of blades is considered. There is also a fast and automatic trim function for maneuvers.
[Vestibular Compensation Studies] [Vestibular Compensation and Morphological Studies]
NASA Technical Reports Server (NTRS)
Perachio, Adrian A. (Principal Investigator)
1996-01-01
The following topics are reported: neurophysiological studies on MVN neurons during vestibular compensation; effects of spinal cord lesions on VNC neurons during compensation; a closed-loop vestibular compensation model for horizontal canal-related MVN neurons; spatiotemporal convergence in VNC neurons; contributions of irregularly firing vestibular afferents to linear and angular VORs; application to flight studies; metabolic measures in vestibular neurons; immediate early gene expression following vestibular stimulation; morphological studies on primary afferents, central vestibular pathways, vestibular efferent projection to the vestibular end organs, and three-dimensional morphometry and imaging.
Development of a NEW Vector Magnetograph at Marshall Space Flight Center
NASA Technical Reports Server (NTRS)
West, Edward; Hagyard, Mona; Gary, Allen; Smith, James; Adams, Mitzi; Rose, M. Franklin (Technical Monitor)
2001-01-01
This paper will describe the Experimental Vector Magnetograph that has been developed at the Marshall Space Flight Center (MSFC). This instrument was designed to improve linear polarization measurements by replacing electro-optic and rotating waveplate modulators with a rotating linear analyzer. Our paper will describe the motivation for developing this magnetograph, compare this instrument with traditional magnetograph designs, and present a comparison of the data acquired by this instrument and the original MSFC vector magnetograph.
Pulse Vector-Excitation Speech Encoder
NASA Technical Reports Server (NTRS)
Davidson, Grant; Gersho, Allen
1989-01-01
Proposed pulse vector-excitation speech encoder (PVXC) encodes analog speech signals into digital representation for transmission or storage at rates below 5 kilobits per second. Produces high quality of reconstructed speech, but with less computation than required by comparable speech-encoding systems. Has some characteristics of multipulse linear predictive coding (MPLPC) and of code-excited linear prediction (CELP). System uses mathematical model of vocal tract in conjunction with set of excitation vectors and perceptually-based error criterion to synthesize natural-sounding speech.
NASA Technical Reports Server (NTRS)
Luck, Rogelio; Ray, Asok
1990-01-01
The implementation and verification of the delay-compensation algorithm are addressed. The delay compensator has been experimentally verified at an IEEE 802.4 network testbed for velocity control of a DC servomotor. The performance of the delay-compensation algorithm was also examined by combined discrete-event and continuous-time simulation of the flight control system of an advanced aircraft that uses the SAE (Society of Automotive Engineers) linear token passing bus for data communications.
Advanced linear and nonlinear compensations for 16QAM SC-400G unrepeatered transmission system
NASA Astrophysics Data System (ADS)
Zhang, Junwen; Yu, Jianjun; Chien, Hung-Chang
2018-02-01
Digital signal processing (DSP) with both linear equalization and nonlinear compensation is studied in this paper for the single-carrier 400G system based on 65-GBaud 16-quadrature amplitude modulation (QAM) signals. The 16-QAM signals are generated and pre-processed with pre-equalization (Pre-EQ) and Look-up-Table (LUT) based pre-distortion (Pre-DT) at the transmitter (Tx) side. The implementation principles of training-based equalization and pre-distortion are presented in this paper together with experimental studies. At the receiver (Rx) side, fiber-nonlinearity compensation based on digital backward propagation (DBP) is also utilized to further improve the transmission performance. With joint LUT-based Pre-DT and DBP-based post-compensation to mitigate the impairments of the opto-electronic components and fiber nonlinearity, we demonstrate the unrepeatered transmission of 1.6 Tb/s based on 4-lane 400G single-carrier PDM-16QAM over 205-km SSMF without a distributed amplifier.
Ghost instabilities of cosmological models with vector fields nonminimally coupled to the curvature
DOE Office of Scientific and Technical Information (OSTI.GOV)
Himmetoglu, Burak; Peloso, Marco; Contaldi, Carlo R.
2009-12-15
We prove that many cosmological models characterized by vectors nonminimally coupled to the curvature (such as the Turner-Widrow mechanism for the production of magnetic fields during inflation, and models of vector inflation or vector curvaton) contain ghosts. The ghosts are associated with the longitudinal vector polarization present in these models and are found from studying the sign of the eigenvalues of the kinetic matrix for the physical perturbations. Ghosts introduce two main problems: (1) they make the theories ill defined at the quantum level in the high energy/subhorizon regime (and create serious problems for finding a well-behaved UV completion), and (2) they create an instability already at the linearized level. This happens because the eigenvalue corresponding to the ghost crosses zero during the cosmological evolution. At this point the linearized equations for the perturbations become singular (we show that this happens for all the models mentioned above). We explicitly solve the equations in the simplest cases of a vector without a vacuum expectation value in a Friedmann-Robertson-Walker geometry, and of a vector with a vacuum expectation value plus a cosmological constant, and we show that indeed the solutions of the linearized equations diverge when these equations become singular.
Fang, Jiancheng; Wang, Tao; Quan, Wei; Yuan, Heng; Zhang, Hong; Li, Yang; Zou, Sheng
2014-06-01
A novel method to compensate the residual magnetic field for an atomic magnetometer consisting of two perpendicular beams of polarizations was demonstrated in this paper. The method can realize magnetic compensation in the case where the pumping rate of the probe beam cannot be ignored. In the experiment, the probe beam is always linearly polarized, whereas the probe beam contains a residual circular component due to the imperfection of the polarizer, which leads to the pumping effect of the probe beam. A simulation of the probe beam's optical rotation and pumping rate was demonstrated. At the optimized points, the wavelength of the probe beam was optimized to achieve the largest optical rotation. Although there is a small circular component in the linearly polarized probe beam, the pumping rate of the probe beam was non-negligible at the optimized wavelength, which if ignored would lead to inaccuracies in the magnetic field compensation. Therefore, the dynamic equation of spin evolution was solved by considering the pumping effect of the probe beam. Based on the quasi-static solution, a novel magnetic compensation method was proposed, which contains two main steps: (1) the non-pumping compensation and (2) the sequence compensation with a very specific sequence. After these two main steps, a three-axis in situ magnetic compensation was achieved. The compensation method is suitable for designing a closed-loop spin-exchange relaxation-free magnetometer. By a combination of the magnetic compensation and the optimization, the magnetic field sensitivity was approximately 4 fT/Hz^(1/2), which was mainly dominated by the noise of the magnetic shield.
Design of a 6 TeV muon collider
Wang, M-H.; Nosochkov, Y.; Cai, Y.; ...
2016-09-09
Here, a preliminary design of a muon collider ring with the center of mass (CM) energy of 6 TeV is presented. The ring circumference is 6.3 km, and the β functions at the collision point are 1 cm in each plane. The ring linear optics, the non-linear chromaticity compensation in the Interaction Region (IR), and the additional non-linear orthogonal correcting knobs are described. Magnet specifications are based on the maximum pole-tip field of 20 T in dipoles and 15 T in quadrupoles. Careful compensation of the non-linear chromatic and amplitude dependent effects provides a sufficiently large dynamic aperture for the momentum range of up to ±0.5% without considering magnet errors.
High Resolution Digital Radar Imaging of Rotating Objects
1980-06-01
associated with it is called motion compensation. 1.2. Problem Description: Consider a rigid body as shown in figure 1.1 rotating with its axis normal to the ... vector of an arbitrary point B on the target referenced to the target reference point C as shown in Fig. 3.1.1. The entire rigid body is moving with ... relationships. Since x is a vector on a rigid body, its tangential velocity (ixx-) is the only velocity component it has. Hence, Ad _T X. Also from
NASA Astrophysics Data System (ADS)
Dar, Aasif Bashir; Jha, Rakesh Kumar
2017-03-01
Various dispersion compensation units are presented and evaluated in this paper. These dispersion compensation units include dispersion compensation fiber (DCF), DCF merged with fiber Bragg grating (FBG) (joint technique), and linear, square root, and cube root chirped tanh apodized FBG. For the performance evaluation, a 10 Gb/s NRZ transmission system over 100-km-long single-mode fiber is used. The three chirped FBGs are optimized individually to yield pulse width reduction percentages (PWRP) of 86.66, 79.96, and 62.42% for linear, square root, and cube root, respectively. The DCF and the joint technique both provide a remarkable PWRP of 94.45 and 96.96%, respectively. The performance of the optimized linear chirped tanh apodized FBG and the DCF is compared for a long-haul transmission system on the basis of the quality factor of the received signal. For both systems the maximum transmission distance is calculated such that the quality factor is ≥ 6 at the receiver, and the results show that the performance of the FBG is comparable to that of the DCF with the advantages of very low cost, small size, and reduced nonlinear effects.
Some Applications Of Semigroups And Computer Algebra In Discrete Structures
NASA Astrophysics Data System (ADS)
Bijev, G.
2009-11-01
An algebraic approach to the pseudoinverse generalization problem in Boolean vector spaces is used. A map (p) is defined, which is similar to an orthogonal projection in linear vector spaces. Some other important maps with properties similar to those of the generalized inverses (pseudoinverses) of linear transformations and of the matrices corresponding to them are also defined and investigated. Let Ax = b be an equation with matrix A and vectors x and b Boolean. Stochastic experiments for solving the equation, which involve the maps defined and use computer algebra methods, have been performed. As a result, the Hamming distance between vectors Ax = p(b) and b is equal or close to the least possible. We also share our experience in using computer algebra systems for teaching discrete mathematics and linear algebra, and for research. Some examples of computations with binary relations using Maple are given.
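A small illustration of the Boolean setting follows. It implements only the Boolean matrix-vector product, the Hamming distance, and a naive random search for an approximate solution of Ax = b; the map p and the pseudoinverse constructions of the abstract are not reproduced.

```python
import numpy as np

def bool_matvec(A, x):
    """Boolean matrix-vector product: (A x)_i = OR_j (A_ij AND x_j)."""
    return (A & x).any(axis=1).astype(int)

def hamming(u, v):
    return int(np.sum(u != v))

rng = np.random.default_rng(3)
m, n = 8, 6
A = rng.integers(0, 2, size=(m, n))
b = rng.integers(0, 2, size=m)

# Stochastic search for x minimizing the Hamming distance between A x and b
# (a crude stand-in for the computer-algebra experiments described above).
best_x, best_d = None, m + 1
for _ in range(2000):
    x = rng.integers(0, 2, size=n)
    d = hamming(bool_matvec(A, x), b)
    if d < best_d:
        best_x, best_d = x, d

print("best x:", best_x, "Hamming distance:", best_d)
```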
Investigation into Model-Based Fuzzy Logic Control
1993-12-01
...of the linearized plant as a function of r (Figure 3.1). Figure 3.2: Model of Compensator G(s) with r externally defined. ...and three zeros will be added to the compensator. The form of the compensator with disturbance rejection is given in Eq. (3.3). Notice that in order to achieve disturbance rejection yet
Calculation of biochemical net reactions and pathways by using matrix operations.
Alberty, R A
1996-01-01
Pathways for net biochemical reactions can be calculated by using a computer program that solves systems of linear equations. The coefficients in the linear equations are the stoichiometric numbers in the biochemical equations for the system. The solution of the system of linear equations is a vector of the stoichiometric numbers of the reactions in the pathway for the net reaction; this is referred to as the pathway vector. The pathway vector gives the number of times the various reactions have to occur to produce the desired net reaction. Net reactions may involve unknown numbers of ATP, ADP, and Pi molecules. The numbers of ATP, ADP, and Pi in a desired net reaction can be calculated in a two-step process. In the first step, the pathway is calculated by solving the system of linear equations for an abbreviated stoichiometric number matrix without ATP, ADP, Pi, NADred, and NADox. In the second step, the stoichiometric numbers in the desired net reaction, which includes ATP, ADP, Pi, NADred, and NADox, are obtained by multiplying the full stoichiometric number matrix by the calculated pathway vector. PMID:8804633
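A toy numerical illustration of the described matrix operations follows; the three-reaction stoichiometric matrix and the desired net reaction A → D are invented for the example and do not come from the paper. In the two-step procedure above, this solve would use the abbreviated matrix, and multiplying the full matrix by the resulting pathway vector would then recover the ATP, ADP, and Pi numbers.

```python
import numpy as np

# Columns = reactions, rows = species (a made-up example):
#   R1: A -> B,  R2: B -> C,  R3: C -> D
# Species order: A, B, C, D
S = np.array([
    [-1,  0,  0],
    [ 1, -1,  0],
    [ 0,  1, -1],
    [ 0,  0,  1],
], dtype=float)

# Desired net reaction: A -> D
net = np.array([-1.0, 0.0, 0.0, 1.0])

# Pathway vector: the stoichiometric numbers of each reaction in the pathway,
# obtained by solving S @ pathway = net in the least-squares sense.
pathway, *_ = np.linalg.lstsq(S, net, rcond=None)
print(np.round(pathway, 6))          # -> [1. 1. 1.]
print(np.allclose(S @ pathway, net)) # the net reaction is reproduced
```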
Automated parton-shower variations in PYTHIA 8
Mrenna, S.; Skands, P.
2016-10-03
In the era of precision physics measurements at the LHC, efficient and exhaustive estimations of theoretical uncertainties play an increasingly crucial role. In the context of Monte Carlo (MC) event generators, the estimation of such uncertainties traditionally requires independent MC runs for each variation, for a linear increase in total run time. In this work, we report on an automated evaluation of the dominant (renormalization-scale and nonsingular) perturbative uncertainties in the pythia 8 event generator, with only a modest computational overhead. Each generated event is accompanied by a vector of alternative weights (one for each uncertainty variation), with each set separately preserving the total cross section. Explicit scale-compensating terms can be included, reflecting known coefficients of higher-order splitting terms and reducing the effect of the variations. In conclusion, the formalism also allows for the enhancement of rare partonic splittings, such as g→bb¯ and q→qγ, to obtain weighted samples enriched in these splittings while preserving the correct physical Sudakov factors.
Mohammed, Nazmi A; Solaiman, Mohammad; Aly, Moustafa H
2014-10-10
In this work, various dispersion compensation methods are designed and evaluated to search for a cost-effective technique with remarkable dispersion compensation and a good pulse shape. The techniques consist of different chirp functions applied to a tanh fiber Bragg grating (FBG), a dispersion compensation fiber (DCF), and a DCF merged with an optimized linearly chirped tanh FBG (joint technique). The techniques are evaluated using a standard 10 Gb/s optical link over a 100 km long haul. The linear chirp function is the most appropriate choice of chirping function, with a pulse width reduction percentage (PWRP) of 75.15%, lower price, and poor pulse shape. The DCF yields an enhanced PWRP of 93.34% with a better pulse quality; however, it is the most costly of the evaluated techniques. Finally, the joint technique achieved the optimum PWRP (96.36%) among all the evaluated techniques and exhibited a remarkable pulse shape; it is less costly than the DCF, but more expensive than the chirped tanh FBG.
Motion prediction in MRI-guided radiotherapy based on interleaved orthogonal cine-MRI
NASA Astrophysics Data System (ADS)
Seregni, M.; Paganelli, C.; Lee, D.; Greer, P. B.; Baroni, G.; Keall, P. J.; Riboldi, M.
2016-01-01
In-room cine-MRI guidance can provide non-invasive target localization during radiotherapy treatment. However, in order to cope with finite imaging frequency and system latencies between target localization and dose delivery, tumour motion prediction is required. This work proposes a framework for motion prediction dedicated to cine-MRI guidance, aiming at quantifying the geometric uncertainties introduced by this process for both tumour tracking and beam gating. The tumour position, identified through scale invariant features detected in cine-MRI slices, is estimated at high-frequency (25 Hz) using three independent predictors, one for each anatomical coordinate. Linear extrapolation, auto-regressive and support vector machine algorithms are compared against systems that use no prediction or surrogate-based motion estimation. Geometric uncertainties are reported as a function of image acquisition period and system latency. Average results show that the tracking error RMS can be decreased down to a [0.2; 1.2] mm range, for acquisition periods between 250 and 750 ms and system latencies between 50 and 300 ms. Except for the linear extrapolator, tracking and gating prediction errors were, on average, lower than those measured for surrogate-based motion estimation. This finding suggests that cine-MRI guidance, combined with appropriate prediction algorithms, could relevantly decrease geometric uncertainties in motion compensated treatments.
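Of the three predictors compared, linear extrapolation is the simplest to sketch. The code below is an illustrative stand-alone example with a synthetic sinusoidal breathing trace and made-up imaging period and latency values; in the framework above, one such predictor would run per anatomical coordinate.

```python
import numpy as np

def linear_extrapolation(times, positions, t_pred):
    """Predict a 1D coordinate at t_pred from the last two localizations."""
    (t0, t1), (p0, p1) = times[-2:], positions[-2:]
    slope = (p1 - p0) / (t1 - t0)
    return p1 + slope * (t_pred - t1)

# Synthetic 1D tumour motion (sinusoidal breathing surrogate, invented numbers).
period, amplitude = 4.0, 10.0                       # s, mm
acq_period, latency = 0.25, 0.30                    # s (imaging period, system latency)
t_img = np.arange(0.0, 30.0, acq_period)            # cine-MRI acquisition times
pos = amplitude * np.sin(2 * np.pi * t_img / period)

errors = []
for k in range(2, len(t_img) - 2):
    t_target = t_img[k] + latency                   # where the beam must aim
    pred = linear_extrapolation(t_img[:k + 1], pos[:k + 1], t_target)
    truth = amplitude * np.sin(2 * np.pi * t_target / period)
    errors.append(pred - truth)

print(f"tracking error RMS: {np.sqrt(np.mean(np.square(errors))):.2f} mm")
```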
Vector Potential Generation for Numerical Relativity Simulations
NASA Astrophysics Data System (ADS)
Silberman, Zachary; Faber, Joshua; Adams, Thomas; Etienne, Zachariah; Ruchlin, Ian
2017-01-01
Many different numerical codes are employed in studies of highly relativistic magnetized accretion flows around black holes. Based on the formalisms each uses, some codes evolve the magnetic field vector B, while others evolve the magnetic vector potential A, the two being related by the curl: B=curl(A). Here, we discuss how to generate vector potentials corresponding to specified magnetic fields on staggered grids, a surprisingly difficult task on finite cubic domains. The code we have developed solves this problem in two ways: a brute-force method, whose scaling is nearly linear in the number of grid cells, and a direct linear algebra approach. We discuss the success both algorithms have in generating smooth vector potential configurations and how both may be extended to more complicated cases involving multiple mesh-refinement levels. NSF ACI-1550436
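The forward relation B = curl(A) is straightforward to check numerically; the harder inverse task of generating A for a given B on a staggered, finite domain is what the abstract addresses. The snippet below only sketches the forward finite-difference check on a simple cell-centred grid, with an analytic vector potential chosen so that the expected field is uniform.

```python
import numpy as np

# Small uniform grid (cell-centred for simplicity; an evolution code would
# use a staggered grid, as noted in the abstract).
n, L = 64, 1.0
x = np.linspace(-L, L, n)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
dx = x[1] - x[0]

# A vector potential whose analytic curl is the uniform field B = (0, 0, 1).
Ax, Ay, Az = -0.5 * Y, 0.5 * X, np.zeros_like(X)

def curl(Ax, Ay, Az, dx):
    """B = curl(A) via central finite differences."""
    dAz_dy = np.gradient(Az, dx, axis=1); dAy_dz = np.gradient(Ay, dx, axis=2)
    dAx_dz = np.gradient(Ax, dx, axis=2); dAz_dx = np.gradient(Az, dx, axis=0)
    dAy_dx = np.gradient(Ay, dx, axis=0); dAx_dy = np.gradient(Ax, dx, axis=1)
    return dAz_dy - dAy_dz, dAx_dz - dAz_dx, dAy_dx - dAx_dy

Bx, By, Bz = curl(Ax, Ay, Az, dx)
print(np.max(np.abs(Bx)), np.max(np.abs(By)), np.max(np.abs(Bz - 1.0)))  # all ~0
```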
NASA Astrophysics Data System (ADS)
Faisal, Mohammad; Bala, Animesh; Roy Chowdhury, Kanan; Mia, Md. Borhan
2018-07-01
A triangular lattice photonic crystal fibre is presented in this paper for residual dispersion compensation. The fibre exhibits a flattened negative dispersion of -992.01 ± 6.93 ps/(nm·km) over the S+C+L wavelength bands and -995.83 ± 0.42 ps/(nm·km) over the C-band. The birefringence is about 4.4 × 10⁻² at the excitation wavelength of 1550 nm, which is also very high. A full-vector finite element method (FEM) with a perfectly matched absorbing layer (PML) boundary condition is applied to numerically investigate the guiding properties of this PCF. The fibre operates in the fundamental mode only. All these properties endorse this fibre as a suitable candidate for compensating residual dispersion and for polarization maintaining applications.
Torque Compensator for Mirror Mountings
NASA Technical Reports Server (NTRS)
Howe, S. D.
1983-01-01
Device nulls flexural distributions of pivotal torques. Magnetic compensator for flexing pivot torque consists of opposing fixed and movable magnet bars. Magnetic torque varies nonlinearly as function of angle of tilt of movable bar. Positions of fixed magnets changed to improve magnetic torque linearity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, M. A.; Strelchenko, Alexei; Vaquero, Alejandro
Lattice quantum chromodynamics simulations in nuclear physics have benefited from a tremendous number of algorithmic advances such as multigrid and eigenvector deflation. These improve the time to solution but do not alleviate the intrinsic memory-bandwidth constraints of the matrix-vector operation dominating iterative solvers. Batching this operation for multiple vectors and exploiting cache and register blocking can yield a super-linear speed up. Block-Krylov solvers can naturally take advantage of such batched matrix-vector operations, further reducing the iterations to solution by sharing the Krylov space between solves. However, practical implementations typically suffer from the quadratic scaling in the number of vector-vector operations. Using the QUDA library, we present an implementation of a block-CG solver on NVIDIA GPUs which reduces the memory-bandwidth complexity of vector-vector operations from quadratic to linear. We present results for the HISQ discretization, showing a 5x speedup compared to highly-optimized independent Krylov solves on NVIDIA's SaturnV cluster.
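As a rough, hardware-agnostic illustration of the block-CG idea (dense NumPy only; this shares nothing with the QUDA GPU implementation and omits the deflation and stability measures a production solver needs):

```python
import numpy as np

def block_cg(A, B, tol=1e-10, max_iter=500):
    """Solve A X = B for several right-hand sides with a basic block-CG
    iteration; scalars of ordinary CG become small s-by-s matrices."""
    X = np.zeros_like(B)
    R = B - A @ X                      # block residual, one column per RHS
    P = R.copy()
    for _ in range(max_iter):
        AP = A @ P                     # one batched matrix-vector product per iteration
        alpha = np.linalg.solve(P.T @ AP, P.T @ R)
        X += P @ alpha
        R -= AP @ alpha
        if np.max(np.linalg.norm(R, axis=0)) < tol:
            break
        beta = -np.linalg.solve(P.T @ AP, AP.T @ R)
        P = R + P @ beta
    return X

# Small SPD test problem standing in for the lattice operator.
rng = np.random.default_rng(1)
M = rng.standard_normal((200, 200))
A = M @ M.T + 200 * np.eye(200)
B = rng.standard_normal((200, 8))      # 8 right-hand sides solved together
X = block_cg(A, B)
print(np.max(np.abs(A @ X - B)))       # ~1e-10
```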
Compound gravity receptor polarization vectors evidenced by linear vestibular evoked potentials
NASA Technical Reports Server (NTRS)
Jones, S. M.; Jones, T. A.; Bell, P. L.; Taylor, M. J.
2001-01-01
The utricle and saccule are gravity receptor organs of the vestibular system. These receptors rely on a high-density otoconial membrane to detect linear acceleration and the position of the cranium relative to Earth's gravitational vector. The linear vestibular evoked potential (VsEP) has been shown to be an effective non-invasive functional test specifically for otoconial gravity receptors (Jones et al., 1999). Moreover, there is some evidence that the VsEP can be used to independently test utricular and saccular function (Taylor et al., 1997; Jones et al., 1998). Here we characterize compound macular polarization vectors for the utricle and saccule in hatchling chickens. Pulsed linear acceleration stimuli were presented in two axes, the dorsoventral (DV, +/- Z axis) to isolate the saccule, and the interaural (IA, +/- Y axis) to isolate the utricle. Traditional signal averaging was used to resolve responses recorded from the surface of the skull. Latency and amplitude of eighth nerve components of the linear VsEP were measured. Gravity receptor responses exhibited clear preferences for one stimulus direction in each axis. With respect to each utricular macula, lateral translation in the IA axis produced maximum ipsilateral response amplitudes with substantially greater amplitude intensity (AI) slopes than medially directed movement. Downward caudal motions in the DV axis produced substantially larger response amplitudes and AI slopes. The results show that the macula lagena does not contribute to the VsEP compound polarization vectors of the sacculus and utricle. The findings suggest further that preferred compound vectors for the utricle depend on the pars externa (i.e. lateral hair cell field) whereas for the saccule they depend on pars interna (i.e. superior hair cell fields). These data provide evidence that maculae saccule and utricle can be selectively evaluated using the linear VsEP.
Interpreting linear support vector machine models with heat map molecule coloring
2011-01-01
Background Model-based virtual screening plays an important role in the early drug discovery stage. The outcomes of high-throughput screenings are a valuable source for machine learning algorithms to infer such models. Besides a strong performance, the interpretability of a machine learning model is a desired property to guide the optimization of a compound in later drug discovery stages. Linear support vector machines have been shown to have convincing performance on large-scale data sets. The goal of this study is to present a heat map molecule coloring technique to interpret linear support vector machine models. Based on the weights of a linear model, the visualization approach colors each atom and bond of a compound according to its importance for activity. Results We evaluated our approach on a toxicity data set, a chromosome aberration data set, and the maximum unbiased validation data sets. The experiments show that our method sensibly visualizes structure-property and structure-activity relationships of a linear support vector machine model. The coloring of ligands in the binding pocket of several crystal structures of a maximum unbiased validation data set target indicates that our approach assists in determining the correct ligand orientation in the binding pocket. Additionally, the heat map coloring enables the identification of substructures important for the binding of an inhibitor. Conclusions In combination with heat map coloring, linear support vector machine models can help to guide the modification of a compound in later stages of drug discovery. Particularly substructures identified as important by our method might be a starting point for optimization of a lead compound. The heat map coloring should be considered as complementary to structure based modeling approaches. As such, it helps to get a better understanding of the binding mode of an inhibitor. PMID:21439031
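A minimal sketch of the weight-based interpretation step is shown below using synthetic binary "substructure" features and scikit-learn's LinearSVC; mapping the per-feature contributions back onto atoms and bonds (the actual heat-map coloring) would additionally require a cheminformatics toolkit and real fingerprints.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Stand-in data: 500 "compounds" described by 64 binary substructure features.
# Real use would employ actual fingerprints (e.g. ECFP) and activity labels.
X = rng.integers(0, 2, size=(500, 64)).astype(float)
w_true = rng.normal(0, 1, 64)
y = (X @ w_true + rng.normal(0, 0.5, 500) > 0).astype(int)

model = LinearSVC(C=1.0, dual=False, max_iter=5000).fit(X, y)
w = model.coef_.ravel()

# Per-compound contribution of each present feature: weight * feature value.
# In the heat-map approach, these contributions are distributed over the atoms
# and bonds that set the corresponding fingerprint bit and rendered as colors.
compound = X[0]
contrib = w * compound
top = np.argsort(-np.abs(contrib))[:5]
for i in top:
    print(f"feature {i:2d}: contribution {contrib[i]:+.3f}")
```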
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fang, Jiancheng; Wang, Tao, E-mail: wangtaowt@aspe.buaa.edu.cn; Quan, Wei
2014-06-15
A novel method to compensate the residual magnetic field for an atomic magnetometer consisting of two perpendicular beams of polarizations was demonstrated in this paper. The method can realize magnetic compensation in the case where the pumping rate of the probe beam cannot be ignored. In the experiment, the probe beam is always linearly polarized, whereas the probe beam contains a residual circular component due to the imperfection of the polarizer, which leads to the pumping effect of the probe beam. A simulation of the probe beam's optical rotation and pumping rate was demonstrated. At the optimized points, the wavelength of the probe beam was optimized to achieve the largest optical rotation. Although there is a small circular component in the linearly polarized probe beam, the pumping rate of the probe beam was non-negligible at the optimized wavelength, which if ignored would lead to inaccuracies in the magnetic field compensation. Therefore, the dynamic equation of spin evolution was solved by considering the pumping effect of the probe beam. Based on the quasi-static solution, a novel magnetic compensation method was proposed, which contains two main steps: (1) the non-pumping compensation and (2) the sequence compensation with a very specific sequence. After these two main steps, a three-axis in situ magnetic compensation was achieved. The compensation method is suitable for designing a closed-loop spin-exchange relaxation-free magnetometer. By a combination of the magnetic compensation and the optimization, the magnetic field sensitivity was approximately 4 fT/Hz^(1/2), which was mainly dominated by the noise of the magnetic shield.
NASA Astrophysics Data System (ADS)
Arratia, Cristobal
2014-11-01
A simple construction will be shown, which reveals a general property satisfied by the evolution in time of a state vector composed of a superposition of orthogonal eigenmodes of a linear dynamical system. This property results from the conservation of the inner product between such state vectors evolving forward and backwards in time, and it can be evaluated simply from the state vector and its first and second time derivatives. This provides an efficient way to characterize, instantaneously along any specific phase-space trajectory of the linear system, the relevance of the non-normality of the linearized Navier-Stokes operator to the energy (or any other norm) gain or decay of small perturbations. Examples of this characterization applied to stationary or time-dependent base flows will be shown. CONICYT, Concurso de Apoyo al Retorno de Investigadores del Extranjero, folio 821320055.
NASA Astrophysics Data System (ADS)
Barnaś, Dawid; Bieniasz, Lesław K.
2017-07-01
We have recently developed a vectorized Thomas solver for quasi-block tridiagonal linear algebraic equation systems using Streaming SIMD Extensions (SSE) and Advanced Vector Extensions (AVX) in operations on dense blocks [D. Barnaś and L. K. Bieniasz, Int. J. Comput. Meth., accepted]. The acceleration caused by vectorization was observed for large block sizes, but was less satisfactory for small blocks. In this communication we report on another version of the solver, optimized for small blocks of size up to four rows and/or columns.
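For reference, a plain (scalar, non-vectorized) block Thomas sweep for a block tridiagonal system can be written as follows; this sketch does not reproduce the SSE/AVX kernels that are the subject of the paper, and the block storage convention is only illustrative:

# Block Thomas algorithm: A[i], B[i], C[i] are the dense k x k sub-, main- and
# super-diagonal blocks of block-row i (A[0] and C[-1] are unused), d[i] the
# right-hand-side block of length k.
import numpy as np

def block_thomas(A, B, C, d):
    n = len(B)
    Bp = [B[0].copy()]
    dp = [d[0].copy()]
    for i in range(1, n):                              # forward elimination
        W = np.linalg.solve(Bp[i - 1].T, A[i].T).T     # W = A[i] @ inv(Bp[i-1])
        Bp.append(B[i] - W @ C[i - 1])
        dp.append(d[i] - W @ dp[i - 1])
    x = [None] * n                                     # back substitution
    x[-1] = np.linalg.solve(Bp[-1], dp[-1])
    for i in range(n - 2, -1, -1):
        x[i] = np.linalg.solve(Bp[i], dp[i] - C[i] @ x[i + 1])
    return np.array(x)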
2012-03-09
equation is a product of a complex basis vector in Jackson and a linear combination of plane wave functions. We convert both the amplitudes and the ... wave function arguments from complex scalars to complex vectors. This conversion allows us to separate the electric field vector and the imaginary ... magnetic field vector, because exponentials of imaginary scalars convert vectors to imaginary vectors and vice versa, while exponentials of imaginary
Energy compensation after sprint- and high-intensity interval training.
Schubert, Matthew M; Palumbo, Elyse; Seay, Rebekah F; Spain, Katie K; Clarke, Holly E
2017-01-01
Many individuals lose less weight than expected in response to exercise interventions when considering the increased energy expenditure of exercise (ExEE). This is due to energy compensation in response to ExEE, which may include increases in energy intake (EI) and decreases in non-exercise physical activity (NEPA). We examined the degree of energy compensation in healthy young men and women in response to interval training. Data were examined from a prior study in which 24 participants (mean age, BMI, & VO2max = 28 yrs, 27.7 kg•m-2, and 32 mL∙kg-1∙min-1) completed either 4 weeks of sprint-interval training or high-intensity interval training. Energy compensation was calculated from changes in body composition (air displacement plethysmography) and exercise energy expenditure was calculated from mean heart rate based on the heart rate-VO2 relationship. Differences between high (≥ 100%) and low (< 100%) levels of energy compensation were assessed. Linear regressions were utilized to determine associations between energy compensation and ΔVO2max, ΔEI, ΔNEPA, and Δresting metabolic rate. Very large individual differences in energy compensation were noted. In comparison to individuals with low levels of compensation, individuals with high levels of energy compensation gained fat mass, lost fat-free mass, and had lower change scores for VO2max and NEPA. Linear regression results indicated that lower levels of energy compensation were associated with increases in ΔVO2max (p < 0.001) and ΔNEPA (p < 0.001). Considerable variation exists in response to short-term, low dose interval training. In agreement with prior work, increases in ΔVO2max and ΔNEPA were associated with lower energy compensation. Future studies should focus on identifying if a dose-response relationship for energy compensation exists in response to interval training, and what underlying mechanisms and participant traits contribute to the large variation between individuals.
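Energy compensation in studies of this kind is typically computed by comparing the measured exercise energy expenditure with the energy equivalent of the observed change in body composition. The sketch below is only one plausible formulation; the energy densities (roughly 9.5 kcal/g for fat mass and 1.0 kcal/g for fat-free mass) and the exact formula used by the authors are assumptions here:

def energy_compensation(delta_fm_kg, delta_ffm_kg, exee_kcal,
                        kcal_per_kg_fm=9500.0, kcal_per_kg_ffm=1020.0):
    # change in body energy stores estimated from fat mass and fat-free mass changes
    delta_body_energy = delta_fm_kg * kcal_per_kg_fm + delta_ffm_kg * kcal_per_kg_ffm
    # 0 % = stores fell by the full ExEE; 100 % = no net change despite ExEE
    return 100.0 * (exee_kcal + delta_body_energy) / exee_kcal

# example: 0.5 kg fat lost, 0.2 kg fat-free mass gained, 8000 kcal expended in exercise
print(energy_compensation(delta_fm_kg=-0.5, delta_ffm_kg=0.2, exee_kcal=8000.0))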
Temperature compensation via cooperative stability in protein degradation
NASA Astrophysics Data System (ADS)
Peng, Yuanyuan; Hasegawa, Yoshihiko; Noman, Nasimul; Iba, Hitoshi
2015-08-01
Temperature compensation is a notable property of circadian oscillators that indicates the insensitivity of the oscillator system's period to temperature changes; the underlying mechanism, however, is still unclear. We investigated the influence of protein dimerization and cooperative stability in protein degradation on the temperature compensation ability of two oscillators. Here, cooperative stability means that high-order oligomers are more stable than their monomeric counterparts. The period of an oscillator is affected by the parameters of the dynamic system, which in turn are influenced by temperature. We adopted the Repressilator and the Atkinson oscillator to analyze the temperature sensitivity of their periods. Phase sensitivity analysis was employed to evaluate the period variations of different models induced by perturbations to the parameters. Furthermore, we used experimental data provided by other studies to determine the reasonable range of parameter temperature sensitivity. We then applied the linear programming method to the oscillatory systems to analyze the effects of protein dimerization and cooperative stability on the temperature sensitivity of their periods, which reflects the ability of temperature compensation in circadian rhythms. Our study explains the temperature compensation mechanism for circadian clocks. Compared with the no-dimer mathematical model and linear model for protein degradation, our theoretical results show that the nonlinear protein degradation caused by cooperative stability is more beneficial for realizing temperature compensation of the circadian clock.
NASA Astrophysics Data System (ADS)
Glass, Alexis; Fukudome, Kimitoshi
2004-12-01
A sound recording of a plucked string instrument is encoded and resynthesized using two stages of prediction. In the first stage of prediction, a simple physical model of a plucked string is estimated and the instrument excitation is obtained. The second stage of prediction compensates for the simplicity of the model in the first stage by encoding either the instrument excitation or the model error using warped linear prediction. These two methods of compensation are compared with each other and with the case of single-stage warped linear prediction; adjustments are introduced, and their applications to instrument synthesis and MPEG4's audio compression within the structured audio format are discussed.
Cai, Jian; Yuan, Shenfang; Wang, Tongguang
2016-01-01
The results of Lamb wave identification for aerospace structures are easily affected by nonlinear dispersion characteristics. In this paper, dispersion compensation of Lamb waves is of particular concern. In contrast to similar research based on traditional signal-domain transform methods, this study is based on signal construction from the viewpoint of linearizing the nonlinear wavenumber. Two compensation methods, linearly-dispersive signal construction (LDSC) and non-dispersive signal construction (NDSC), are proposed. Furthermore, to improve the compensation effect, the influence of the signal construction process on other crucial signal properties, including the signal waveform and amplitude spectrum, is considered during the investigation. The linear-dispersion and non-dispersion effects are first analyzed. Then, after the basic signal construction principle is explored, the numerical realization of LDSC and NDSC is discussed, with particular attention to preserving the signal waveform and amplitude spectrum. Subsequently, combined with the delay-and-sum algorithm, LDSC or NDSC is employed for high-spatial-resolution damage imaging, so that the capacity of Lamb waves for imaging adjacent multiple damage sites and for quantitative imaging can be strengthened. To verify the proposed signal construction and damage imaging methods, experimental and numerical validation is finally carried out on aluminum plates. PMID:28772366
NASA Astrophysics Data System (ADS)
Han, Ke-Zhen; Feng, Jian; Cui, Xiaohong
2017-10-01
This paper considers the fault-tolerant optimised tracking control (FTOTC) problem for an unknown discrete-time linear system. A research scheme is proposed on the basis of data-based parity space identification, reinforcement learning and residual compensation techniques. The main characteristic of this scheme lies in the parity-space-identification-based simultaneous tracking control and residual compensation. The scheme consists of four main elements: a subspace-aided method is applied to design an observer-based residual generator; a reinforcement Q-learning approach is used to solve for the optimised tracking control policy; robust H∞ theory is relied upon to achieve noise attenuation; and fault estimation triggered by the residual generator is adopted to perform fault compensation. To clarify the design and implementation procedures, an integrated algorithm is further constructed to link these four functional units. Detailed analysis and proofs are subsequently given to establish the guaranteed FTOTC performance of the proposed scheme. Finally, a case simulation is provided to verify its effectiveness.
High-speed optical three-axis vector magnetometry based on nonlinear Hanle effect in rubidium vapor
NASA Astrophysics Data System (ADS)
Azizbekyan, Hrayr; Shmavonyan, Svetlana; Khanbekyan, Aleksandr; Movsisyan, Marina; Papoyan, Aram
2017-07-01
The magnetic-field-compensation optical vector magnetometer based on the nonlinear Hanle effect in alkali metal vapor, previously capable of two-axis measurement, has been further elaborated for three-axis performance, along with a significant reduction of measurement time. The upgrade was achieved by implementing a two-beam resonant excitation configuration and a fast maximum-searching algorithm. Results of proof-of-concept experiments, demonstrating 1 μT B-field resolution, are presented. The applied interest and capabilities of the proposed technique are analyzed.
A general theory of linear cosmological perturbations: scalar-tensor and vector-tensor theories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lagos, Macarena; Baker, Tessa; Ferreira, Pedro G.
We present a method for parametrizing linear cosmological perturbations of theories of gravity, around homogeneous and isotropic backgrounds. The method is sufficiently general and systematic that it can be applied to theories with any degrees of freedom (DoFs) and arbitrary gauge symmetries. In this paper, we focus on scalar-tensor and vector-tensor theories, invariant under linear coordinate transformations. In the case of scalar-tensor theories, we use our framework to recover the simple parametrizations of linearized Horndeski and 'Beyond Horndeski' theories, and also find higher-derivative corrections. In the case of vector-tensor theories, we first construct the most general quadratic action for perturbations that leads to second-order equations of motion, which propagates two scalar DoFs. Then we specialize to the case in which the vector field is time-like (à la Einstein-Aether gravity), where the theory only propagates one scalar DoF. As a result, we identify the complete forms of the quadratic actions for perturbations, and the number of free parameters that need to be defined, to cosmologically characterize these two broad classes of theories.
NASA Technical Reports Server (NTRS)
Samba, A. S.
1985-01-01
The problem of solving banded linear systems by direct (non-iterative) techniques on the Vector Processor System (VPS) 32 supercomputer is considered. Two efficient direct methods for solving banded linear systems on the VPS 32 are described. The vector cyclic reduction (VCR) algorithm is discussed in detail. The performance of the VCR on a three-parameter model problem is also illustrated. The VCR is an adaptation of the conventional point cyclic reduction algorithm. The second direct method is the 'Customized Reduction of Augmented Triangles' (CRAT) algorithm. CRAT has the dominant characteristics of an efficient VPS 32 algorithm. CRAT is tailored to the pipeline architecture of the VPS 32 and as a consequence the algorithm is implicitly vectorizable.
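For orientation only, the kind of problem both algorithms target, a banded system solved directly, can be reproduced with SciPy's LAPACK-backed banded solver; this is neither VCR nor CRAT, just a reference direct solve on a small tridiagonal example:

import numpy as np
from scipy.linalg import solve_banded

n = 6
ab = np.zeros((3, n))          # banded storage: rows are super-, main- and sub-diagonal
ab[0, 1:] = -1.0               # super-diagonal
ab[1, :] = 4.0                 # main diagonal
ab[2, :-1] = -1.0              # sub-diagonal
rhs = np.ones(n)
x = solve_banded((1, 1), ab, rhs)   # solve the tridiagonal system directly
print(x)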
Design of digital load torque observer in hybrid electric vehicle
NASA Astrophysics Data System (ADS)
Sun, Yukun; Zhang, Haoming; Wang, Yinghai
2008-12-01
In a hybrid electric vehicle, the engine begins to work only when the motor is at high speed, in order to decrease exhaust emissions. However, a permanent magnet motor is sensitive to its load, and adding the engine to the system always makes its speed drop sharply, which causes the engine to work at low efficiency again and produce much more pollution. A dynamic load torque model of the permanent magnet synchronous motor is established on the basis of the motor mechanical equation and permanent magnet synchronous motor vector control theory, and a fully digital load torque observer and compensation control system is built based on the TMS320F2407A. Experimental results prove that the load torque observer and compensation control system can detect and compensate for torque disturbances effectively, which mitigates the load torque disturbance and decreases the exhaust pollution of the hybrid electric vehicle.
Method and system for non-linear motion estimation
NASA Technical Reports Server (NTRS)
Lu, Ligang (Inventor)
2011-01-01
A method and system for extrapolating and interpolating a visual signal, including determining a first motion vector between a first pixel position in a first image and a second pixel position in a second image, determining a second motion vector between the second pixel position in the second image and a third pixel position in a third image, determining a third motion vector, using a non-linear model, from one of (i) the first pixel position in the first image and the second pixel position in the second image, and (ii) the second pixel position in the second image and the third pixel position in the third image, and determining a position of a fourth pixel in a fourth image based upon the third motion vector.
A new implementation of the CMRH method for solving dense linear systems
NASA Astrophysics Data System (ADS)
Heyouni, M.; Sadok, H.
2008-04-01
The CMRH method [H. Sadok, Methodes de projections pour les systemes lineaires et non lineaires, Habilitation thesis, University of Lille 1, Lille, France, 1994; H. Sadok, CMRH: A new method for solving nonsymmetric linear systems based on the Hessenberg reduction algorithm, Numer. Algorithms 20 (1999) 303-321] is an algorithm for solving nonsymmetric linear systems in which the Arnoldi component of GMRES is replaced by the Hessenberg process, which generates Krylov basis vectors that are orthogonal to standard unit basis vectors rather than mutually orthogonal. The iterate is formed from these vectors by solving a small least squares problem involving a Hessenberg matrix. Like GMRES, this method requires one matrix-vector product per iteration. However, it can be implemented to require half as much arithmetic work and less storage. Moreover, numerical experiments show that this method performs accurately and reduces the residual about as fast as GMRES. With this new implementation, we show that the CMRH method is the only long-recurrence method that does not require storing both the entire Krylov basis and the original matrix at the same time, as the GMRES algorithm does. A comparison with Gaussian elimination is provided.
NASA Technical Reports Server (NTRS)
Nguyen, D. T.; Al-Nasra, M.; Zhang, Y.; Baddourah, M. A.; Agarwal, T. K.; Storaasli, O. O.; Carmona, E. A.
1991-01-01
Several parallel-vector computational improvements to the unconstrained optimization procedure are described which speed up the structural analysis-synthesis process. A fast parallel-vector Choleski-based equation solver, pvsolve, is incorporated into the well-known SAP-4 general-purpose finite-element code. The new code, denoted PV-SAP, is tested for static structural analysis. Initial results on a four processor CRAY 2 show that using pvsolve reduces the equation solution time by a factor of 14-16 over the original SAP-4 code. In addition, parallel-vector procedures for the Golden Block Search technique and the BFGS method are developed and tested for nonlinear unconstrained optimization. A parallel version of an iterative solver and the pvsolve direct solver are incorporated into the BFGS method. Preliminary results on nonlinear unconstrained optimization test problems, using pvsolve in the analysis, show excellent parallel-vector performance indicating that these parallel-vector algorithms can be used in a new generation of finite-element based structural design/analysis-synthesis codes.
Guerrini, A M; Ascenzioni, F; Tribioli, C; Donini, P
1985-01-01
Linear plasmids were constructed by adding telomeres prepared from Tetrahymena pyriformis rDNA to a circular hybrid Escherichia coli-yeast vector and transforming Saccharomyces cerevisiae. The parental vector contained the entire 2 mu yeast circle and the LEU gene from S. cerevisiae. Three transformed clones were shown to contain linear plasmids which were characterized by restriction analysis and shown to be rearranged versions of the desired linear plasmids. The plasmids obtained were imperfect palindromes: part of the parental vector was present in duplicated form, part as unique sequences and part was absent. The sequences that had been lost included a large portion of the 2 mu circle. The telomeres were approximately 450 bp longer than those of T. pyriformis. DNA prepared from transformed S. cerevisiae clones was used to transform Schizosaccharomyces pombe. The transformed S. pombe clones contained linear plasmids identical in structure to their linear parents in S. cerevisiae. No structural re-arrangements or integration into S. pombe was observed. Little or no telomere growth had occurred after transfer from S. cerevisiae to S. pombe. A model is proposed to explain the genesis of the plasmids. PMID:3896773
Vector Adaptive/Predictive Encoding Of Speech
NASA Technical Reports Server (NTRS)
Chen, Juin-Hwey; Gersho, Allen
1989-01-01
Vector adaptive/predictive technique for digital encoding of speech signals yields decoded speech of very good quality after transmission at a coding rate of 9.6 kb/s and of reasonably good quality at 4.8 kb/s, and requires 3 to 4 million multiplications and additions per second. It combines advantages of adaptive/predictive coding and of code-excited linear prediction; the latter yields speech of high quality but requires 600 million multiplications and additions per second at an encoding rate of 4.8 kb/s. The vector adaptive/predictive coding technique thus bridges the gaps in performance and complexity between adaptive/predictive coding and code-excited linear prediction.
Computer Program For Linear Algebra
NASA Technical Reports Server (NTRS)
Krogh, F. T.; Hanson, R. J.
1987-01-01
Collection of routines provided for basic vector operations. The Basic Linear Algebra Subprogram (BLAS) library is a collection of FORTRAN-callable routines that employ standard techniques to perform the basic operations of numerical linear algebra.
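The operations the library standardizes can be illustrated through SciPy's wrappers around BLAS, shown here instead of the original FORTRAN calling sequence described above:

import numpy as np
from scipy.linalg.blas import daxpy, ddot

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])
y = daxpy(x, y, a=2.0)     # classic axpy: y := 2*x + y
print(y, ddot(x, y))       # updated vector and an inner product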
Linear Transformation Method for Multinuclide Decay Calculation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding Yuan
2010-12-29
A linear transformation method for generic multinuclide decay calculations is presented together with its properties and implications. The method takes advantage of the linear form of the decay solution N(t) = F(t)N_0, where N(t) is a column vector that represents the numbers of atoms of the radioactive nuclides in the decay chain, N_0 is the initial value vector of N(t), and F(t) is a lower triangular matrix whose time-dependent elements are independent of the initial values of the system.
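As a generic illustration of this linear form, and not the authors' specific construction of F(t), the transfer matrix for a short decay chain can be obtained as a matrix exponential of the decay-rate matrix; the chain and the decay constants below are arbitrary:

# 3-member chain 1 -> 2 -> 3 (stable): N(t) = F(t) N_0 with F(t) = expm(A t),
# where A is the lower-triangular decay-rate matrix.
import numpy as np
from scipy.linalg import expm

lam1, lam2 = 0.1, 0.05                  # decay constants (1/s), arbitrary values
A = np.array([[-lam1,  0.0,  0.0],
              [ lam1, -lam2, 0.0],
              [ 0.0,   lam2, 0.0]])
N0 = np.array([1e6, 0.0, 0.0])          # initial numbers of atoms
t = 30.0
F = expm(A * t)                         # lower-triangular transfer matrix F(t)
print(F @ N0)                           # numbers of atoms at time t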
LFSPMC: Linear feature selection program using the probability of misclassification
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr.; Marion, B. P.
1975-01-01
The computational procedure and associated computer program for a linear feature selection technique are presented. The technique assumes that: a finite number, m, of classes exists; each class is described by an n-dimensional multivariate normal density function of its measurement vectors; the mean vector and covariance matrix for each density function are known (or can be estimated); and the a priori probability for each class is known. The technique produces a single linear combination of the original measurements which minimizes the one-dimensional probability of misclassification defined by the transformed densities.
Modeling Interferometric Structures with Birefringent Elements: A Linear Vector-Space Formalism
2013-11-12
Frigo, Nicholas J.; Urick, Vincent J.; Bucholtz, Frank (Naval Research Laboratory, Code 5650)
NASA Technical Reports Server (NTRS)
Paunonen, Matti
1993-01-01
A method for compensating for the effect of the varying travel time of a transmitted laser pulse to a satellite is described. The 'observed minus predicted' range differences then appear to be linear, which makes data screening or use in range gating more effective.
Compensator improvement for multivariable control systems
NASA Technical Reports Server (NTRS)
Mitchell, J. R.; Mcdaniel, W. L., Jr.; Gresham, L. L.
1977-01-01
A theory and the associated numerical technique are developed for an iterative design improvement of the compensation for linear, time-invariant control systems with multiple inputs and multiple outputs. A strict constraint algorithm is used in obtaining a solution of the specified constraints of the control design. The result of the research effort is the multiple input, multiple output Compensator Improvement Program (CIP). The objective of the Compensator Improvement Program is to modify in an iterative manner the free parameters of the dynamic compensation matrix so that the system satisfies frequency domain specifications. In this exposition, the underlying principles of the multivariable CIP algorithm are presented and the practical utility of the program is illustrated with space vehicle related examples.
NASA Technical Reports Server (NTRS)
Tuey, R. C.
1972-01-01
Computer solutions of linear programming problems are outlined. Information covers vector spaces, convex sets, and matrix algebra elements for solving simultaneous linear equations. Dual problems, reduced cost analysis, ranges, and error analysis are illustrated.
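A small instance of the kind of problem the report treats, solved with SciPy's linear-programming routine; the routine and the toy data are illustrative and not part of the original program:

# maximise x0 + 2*x1 subject to x0 + x1 <= 4, x0 + 3*x1 <= 6, x >= 0
from scipy.optimize import linprog

c = [-1.0, -2.0]                     # linprog minimises, so negate the objective
A_ub = [[1.0, 1.0], [1.0, 3.0]]
b_ub = [4.0, 6.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)                # optimal vertex (3, 1) and objective value -5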
CSI, optimal control, and accelerometers: Trials and tribulations
NASA Technical Reports Server (NTRS)
Benjamin, Brian J.; Sesak, John R.
1994-01-01
New results concerning optimal design with accelerometers are presented. These results show that the designer must be concerned with the stability properties of two Linear Quadratic Gaussian (LQG) compensators, one of which does not explicitly appear in the closed-loop system dynamics. The new concepts of virtual and implemented compensators are introduced to cope with these subtleties. The virtual compensator appears in the closed-loop system dynamics and the implemented compensator appears in control electronics. The stability of one compensator does not guarantee the stability of the other. For strongly stable (robust) systems, both compensators should be stable. The presence of controlled and uncontrolled modes in the system results in two additional forms of the compensator with corresponding terms that are of like form, but opposite sign, making simultaneous stabilization of both the virtual and implemented compensator difficult. A new design algorithm termed sensor augmentation is developed that aids stabilization of these compensator forms by incorporating a static augmentation term associated with the uncontrolled modes in the design process.
NASA Astrophysics Data System (ADS)
Calderone, Luigi; Pinola, Licia; Varoli, Vincenzo
1992-04-01
The paper describes an analytical procedure to optimize the feed-forward compensation for any PWM dc/dc converter. The aim of achieving zero dc audio-susceptibility was found to be attainable for the buck, buck-boost, Cuk, and SEPIC cells; for the boost converter, however, only nonoptimal compensation is feasible. Rules for the design of PWM controllers and procedures for the evaluation of the hardware-introduced errors are discussed. A PWM controller implementing the optimal feed-forward compensation for buck-boost, Cuk, and SEPIC cells is described and fully characterized experimentally.
A computerized compensator design algorithm with launch vehicle applications
NASA Technical Reports Server (NTRS)
Mitchell, J. R.; Mcdaniel, W. L., Jr.
1976-01-01
This short paper presents a computerized algorithm for the design of compensators for large launch vehicles. The algorithm is applicable to the design of compensators for linear, time-invariant, control systems with a plant possessing a single control input and multioutputs. The achievement of frequency response specifications is cast into a strict constraint mathematical programming format. An improved solution algorithm for solving this type of problem is given, along with the mathematical necessities for application to systems of the above type. A computer program, compensator improvement program (CIP), has been developed and applied to a pragmatic space-industry-related example.
Quantum Linear System Algorithm for Dense Matrices.
Wossnig, Leonard; Zhao, Zhikuan; Prakash, Anupam
2018-02-02
Solving linear systems of equations is a frequently encountered problem in machine learning and optimization. Given a matrix A and a vector b, the task is to find the vector x such that Ax=b. We describe a quantum algorithm that achieves a sparsity-independent runtime scaling of O(κ^2 √n polylog(n)/ε) for an n×n dimensional A with bounded spectral norm, where κ denotes the condition number of A, and ε is the desired precision parameter. This amounts to a polynomial improvement over known quantum linear system algorithms when applied to dense matrices, and poses a new state of the art for solving dense linear systems on a quantum computer. Furthermore, an exponential improvement is achievable if the rank of A is polylogarithmic in the matrix dimension. Our algorithm is built upon a singular value estimation subroutine, which makes use of a memory architecture that allows for efficient preparation of quantum states that correspond to the rows of A and the vector of Euclidean norms of the rows of A.
Wang, Hsin-Wei; Lin, Ya-Chi; Pai, Tun-Wen; Chang, Hao-Teng
2011-01-01
Epitopes are antigenic determinants that are useful because they induce B-cell antibody production and stimulate T-cell activation. Bioinformatics can enable rapid, efficient prediction of potential epitopes. Here, we designed a novel B-cell linear epitope prediction system called LEPS, Linear Epitope Prediction by Propensities and Support Vector Machine, that combined physico-chemical propensity identification and support vector machine (SVM) classification. We tested the LEPS on four datasets: AntiJen, HIV, a newly generated PC, and AHP, a combination of these three datasets. Peptides with globally or locally high physicochemical propensities were first identified as primitive linear epitope (LE) candidates. Then, candidates were classified with the SVM based on the unique features of amino acid segments. This reduced the number of predicted epitopes and enhanced the positive prediction value (PPV). Compared to four other well-known LE prediction systems, the LEPS achieved the highest accuracy (72.52%), specificity (84.22%), PPV (32.07%), and Matthews' correlation coefficient (10.36%).
f(R) gravity on non-linear scales: the post-Friedmann expansion and the vector potential
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, D.B.; Bruni, M.; Koyama, K.
2015-07-01
Many modified gravity theories are under consideration in cosmology as the source of the accelerated expansion of the universe, and linear perturbation theory, valid on the largest scales, has been examined in many of these models. However, smaller non-linear scales offer a richer phenomenology with which to constrain modified gravity theories. Here, we consider the Hu-Sawicki form of f(R) gravity and apply the post-Friedmann approach to derive the leading order equations for non-linear scales, i.e. the equations valid in the Newtonian-like regime. We reproduce the standard equations for the scalar field, gravitational slip and the modified Poisson equation in a coherent framework. In addition, we derive the equation for the leading order correction to the Newtonian regime, the vector potential. We measure this vector potential from f(R) N-body simulations at redshift zero and one, for two values of the f_R0 parameter. We find that the vector potential at redshift zero in f(R) gravity can be close to 50% larger than in GR on small scales for |f_R0| = 1.289 × 10^−5, although this is less for larger scales, earlier times and smaller values of the f_R0 parameter. Similarly to GR, the small amplitude of this vector potential suggests that the Newtonian approximation is highly accurate for f(R) gravity, and also that the non-linear cosmological behaviour of f(R) gravity can be completely described by just the scalar potentials and the f(R) field.
1979-06-01
... also extended to the class of stabilizable systems, and the required compensator is shown to possess a separation property. Finally, the design methodology ... response will be stable. The implemented output feedback control law will stabilize the total closed-loop system.
Xing, Haifeng; Hou, Bo; Lin, Zhihui; Guo, Meifeng
2017-10-13
MEMS (Micro Electro Mechanical System) gyroscopes have been widely applied in various fields, but MEMS gyroscope random drift has nonlinear and non-stationary characteristics. Modeling and compensating the random drift has attracted much attention because it can improve the precision of inertial devices. This paper proposes using wavelet filtering to reduce noise in the original MEMS gyroscope data, reconstructing the random drift data with PSR (phase space reconstruction), and establishing a model of the reconstructed data with an LSSVM (least squares support vector machine) whose parameters are optimized using CPSO (chaotic particle swarm optimization). Comparing the modeling of MEMS gyroscope random drift with a BP-ANN (back propagation artificial neural network) and with the proposed method, the results showed that the latter had better prediction accuracy. After compensation of three groups of MEMS gyroscope random drift data, the standard deviations of the three groups of experimental data dropped from 0.00354°/s, 0.00412°/s, and 0.00328°/s to 0.00065°/s, 0.00072°/s and 0.00061°/s, respectively, which demonstrates that the proposed method can reduce the influence of MEMS gyroscope random drift and verifies the effectiveness of this method for modeling MEMS gyroscope random drift.
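A sketch of the modeling step only, under several substitutions: the phase space reconstruction is written as a plain time-delay embedding, scikit-learn's epsilon-SVR stands in for the LSSVM, and the wavelet filtering and CPSO parameter search are omitted; the embedding dimension, delay and SVR parameters are arbitrary:

import numpy as np
from sklearn.svm import SVR

def embed(series, dim=5, tau=2):
    # time-delay embedding: rows are delay vectors, targets are one-step-ahead values
    rows = [series[i - (dim - 1) * tau : i + 1 : tau]
            for i in range((dim - 1) * tau, len(series) - 1)]
    return np.array(rows), series[(dim - 1) * tau + 1:]

rng = np.random.default_rng(0)
drift = np.cumsum(0.001 * rng.standard_normal(2000))   # surrogate random-drift signal
X, y = embed(drift)
model = SVR(kernel="rbf", C=10.0, gamma=0.1).fit(X[:-200], y[:-200])
pred = model.predict(X[-200:])
print(np.std(y[-200:] - pred))                          # residual after compensation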
The morphing of geographical features by Fourier transformation.
Li, Jingzhong; Liu, Pengcheng; Yu, Wenhao; Cheng, Xiaoqiang
2018-01-01
This paper presents a morphing model for vector geographical data based on Fourier transformation. The model involves three main steps: conversion from vector data to a Fourier series, generation of an intermediate function by combining the two Fourier series corresponding to a large scale and a small scale, and reverse conversion from the combined function back to vector data. By mirror processing, the model can also be used for morphing of linear features. Experimental results show that this method is sensitive to scale variations and can be used for continuous scale transformation of vector map features. The efficiency of this model is linearly related to the number of points on the shape boundary and the truncation value n of the Fourier expansion. The effect of morphing by Fourier transformation is plausible and the efficiency of the algorithm is acceptable.
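A minimal sketch of the underlying idea for closed boundaries, assuming both shapes are sampled with the same number of points; the resampling, alignment and mirror processing of linear features described above are left out:

import numpy as np

def morph(boundary_a, boundary_b, alpha):
    # boundary_*: (n, 2) arrays of x, y points; alpha in [0, 1] blends the shapes
    za = boundary_a[:, 0] + 1j * boundary_a[:, 1]
    zb = boundary_b[:, 0] + 1j * boundary_b[:, 1]
    coeffs = (1.0 - alpha) * np.fft.fft(za) + alpha * np.fft.fft(zb)
    z = np.fft.ifft(coeffs)                  # intermediate boundary from blended coefficients
    return np.column_stack([z.real, z.imag])

t = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
circle = np.column_stack([np.cos(t), np.sin(t)])
square_ish = np.column_stack([np.sign(np.cos(t)), np.sign(np.sin(t))])
print(morph(circle, square_ish, alpha=0.5)[:3])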
The feasibility of using methylene blue sensitized polyvinylalcohol film as a linear polarizer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jyothilakshmi, K.; Anju, K. S.; Arathy, K.
2014-01-28
Linear light polarizing films selectively transmit radiation vibrating along one electromagnetic radiation vector and selectively absorb radiation vibrating along a second electromagnetic radiation vector. This behaviour arises from the anisotropy of the film. In the present study the polarization effects of methylene blue sensitized polyvinyl alcohol are investigated. The effects of dye concentration, heating and stretching of the film on the polarization are also evaluated.
Application of optimal control theory to the design of the NASA/JPL 70-meter antenna servos
NASA Technical Reports Server (NTRS)
Alvarez, L. S.; Nickerson, J.
1989-01-01
The application of Linear Quadratic Gaussian (LQG) techniques to the design of the 70-m axis servos is described. Linear quadratic optimal control and Kalman filter theory are reviewed, and model development and verification are discussed. Families of optimal controller and Kalman filter gain vectors were generated by varying weight parameters. Performance specifications were used to select final gain vectors.
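The core computation behind such gain families can be sketched with SciPy's algebraic Riccati solvers; the toy double-integrator model and the weights below are illustrative, not the 70-m antenna model:

import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])             # toy double integrator
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Q, R = np.diag([10.0, 1.0]), np.array([[1.0]])     # state / control weights
W, V = np.diag([0.1, 0.1]), np.array([[0.01]])     # process / measurement noise covariances

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)                    # optimal state-feedback (LQR) gain
S = solve_continuous_are(A.T, C.T, W, V)
L = S @ C.T @ np.linalg.inv(V)                     # steady-state Kalman filter gain
print(K, L, sep="\n")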
NASA Technical Reports Server (NTRS)
Kincaid, D. R.; Young, D. M.
1984-01-01
Adapting and designing mathematical software to achieve optimum performance on the CYBER 205 is discussed. Comments and observations are made in light of recent work done on modifying the ITPACK software package and on writing new software for vector supercomputers. The goal was to develop very efficient vector algorithms and software for solving large sparse linear systems using iterative methods.
R-parametrization and its role in classification of linear multivariable feedback systems
NASA Technical Reports Server (NTRS)
Chen, Robert T. N.
1988-01-01
A classification of all the compensators that stabilize a given general plant in a linear, time-invariant, multi-input, multi-output feedback system is developed. This classification, along with the associated necessary and sufficient conditions for stability of the feedback system, is achieved through the introduction of a new parameterization, referred to as R-Parameterization, which is a dual of the familiar Q-Parameterization. The classification is made according to the stability of the compensators and the plant by themselves, and the necessary and sufficient conditions are based on the stability of Q and R themselves.
Voltage regulation in linear induction accelerators
Parsons, William M.
1992-01-01
Improvement in voltage regulation in a Linear Induction Accelerator wherein a varistor, such as a metal oxide varistor, is placed in parallel with the beam accelerating cavity and the magnetic core. The non-linear properties of the varistor result in a more stable voltage across the beam accelerating cavity than with a conventional compensating resistance.
The primer vector in linear, relative-motion equations. [spacecraft trajectory optimization
NASA Technical Reports Server (NTRS)
1980-01-01
Primer vector theory is used in analyzing a set of linear, relative-motion equations - the Clohessy-Wiltshire equations - to determine the criteria and necessary conditions for an optimal, N-impulse trajectory. Since the state vector for these equations is defined in terms of a linear system of ordinary differential equations, all fundamental relations defining the solution of the state and costate equations, and the necessary conditions for optimality, can be expressed in terms of elementary functions. The analysis develops the analytical criteria for improving a solution by (1) moving any dependent or independent variable in the initial and/or final orbit, and (2) adding intermediate impulses. If these criteria are violated, the theory establishes a sufficient number of analytical equations. The subsequent satisfaction of these equations will result in the optimal position vectors and times of an N-impulse trajectory. The solution is examined for the specific boundary conditions of (1) fixed-end conditions, two-impulse, and time-open transfer; (2) an orbit-to-orbit transfer; and (3) a generalized rendezvous problem. A sequence of rendezvous problems is solved to illustrate the analysis and the computational procedure.
Unsymmetric Lanczos model reduction and linear state function observer for flexible structures
NASA Technical Reports Server (NTRS)
Su, Tzu-Jeng; Craig, Roy R., Jr.
1991-01-01
This report summarizes part of the research work accomplished during the second year of a two-year grant. The research, entitled 'Application of Lanczos Vectors to Control Design of Flexible Structures' concerns various ways to use Lanczos vectors and Krylov vectors to obtain reduced-order mathematical models for use in the dynamic response analyses and in control design studies. This report presents a one-sided, unsymmetric block Lanczos algorithm for model reduction of structural dynamics systems with unsymmetric damping matrix, and a control design procedure based on the theory of linear state function observers to design low-order controllers for flexible structures.
Tolmachov, Oleg E
2012-05-01
The cell-specific and long-term expression of therapeutic transgenes often requires a full array of native gene control elements including distal enhancers, regulatory introns and chromatin organisation sequences. The delivery of such extended gene expression modules to human cells can be accomplished with non-viral high-molecular-weight DNA vectors, in particular with several classes of linear DNA vectors. All high-molecular-weight DNA vectors are susceptible to damage by shear stress, and while for some of the vectors the harmful impact of shear stress can be minimised through the transformation of the vectors to compact topological configurations by supercoiling and/or knotting, linear DNA vectors with terminal loops or covalently attached terminal proteins cannot be self-compacted in this way. In this case, the only available self-compacting option is self-entangling, which can be defined as the folding of single DNA molecules into a configuration with mutual restriction of molecular motion by the individual segments of bent DNA. A negatively charged phosphate backbone makes DNA self-repulsive, so it is reasonable to assume that a certain number of 'sticky points' dispersed within DNA could facilitate the entangling by bringing DNA segments into proximity and by interfering with the DNA slipping away from the entanglement. I propose that the spontaneous entanglement of vector DNA can be enhanced by the interlacing of the DNA with sites capable of mutual transient attachment through the formation of non-B-DNA forms, such as interacting cruciform structures, inter-segment triplexes, slipped-strand DNA, left-handed duplexes (Z-forms) or G-quadruplexes. It is expected that the non-B-DNA based entanglement of the linear DNA vectors would consist of the initial transient and co-operative non-B-DNA mediated binding events followed by tight self-ensnarement of the vector DNA. Once in the nucleoplasm of the target human cells, the DNA can be disentangled by type II topoisomerases. The technology for such self-entanglement can be an avenue for the improvement of gene delivery with high-molecular-weight naked DNA using therapeutically important methods associated with considerable shear stress. Priority applications include in vivo muscle electroporation and sonoporation for Duchenne muscular dystrophy patients, aerosol inhalation to reach the target lung cells of cystic fibrosis patients and bio-ballistic delivery to skin melanomas with the vector DNA adsorbed on gold or tungsten projectiles. Copyright © 2012 Elsevier Ltd. All rights reserved.
Rottmann, Joerg; Berbeco, Ross
2014-12-01
Precise prediction of respiratory motion is a prerequisite for real-time motion compensation techniques such as beam, dynamic couch, or dynamic multileaf collimator tracking. Collection of tumor motion data to train the prediction model is required for most algorithms. To avoid exposure of patients to additional dose from imaging during this procedure, the feasibility of training a linear respiratory motion prediction model with an external surrogate signal is investigated and its performance benchmarked against training the model with tumor positions directly. The authors implement a lung tumor motion prediction algorithm based on linear ridge regression that is suitable to overcome system latencies up to about 300 ms. Its performance is investigated on a data set of 91 patient breathing trajectories recorded from fiducial marker tracking during radiotherapy delivery to the lung of ten patients. The expected 3D geometric error is quantified as a function of predictor lookahead time, signal sampling frequency and history vector length. Additionally, adaptive model retraining is evaluated, i.e., repeatedly updating the prediction model after initial training. Training length for this is gradually increased with incoming (internal) data availability. To assess practical feasibility model calculation times as well as various minimum data lengths for retraining are evaluated. Relative performance of model training with external surrogate motion data versus tumor motion data is evaluated. However, an internal-external motion correlation model is not utilized, i.e., prediction is solely driven by internal motion in both cases. Similar prediction performance was achieved for training the model with external surrogate data versus internal (tumor motion) data. Adaptive model retraining can substantially boost performance in the case of external surrogate training while it has little impact for training with internal motion data. A minimum adaptive retraining data length of 8 s and history vector length of 3 s achieve maximal performance. Sampling frequency appears to have little impact on performance confirming previously published work. By using the linear predictor, a relative geometric 3D error reduction of about 50% was achieved (using adaptive retraining, a history vector length of 3 s and with results averaged over all investigated lookahead times and signal sampling frequencies). The absolute mean error could be reduced from (2.0 ± 1.6) mm when using no prediction at all to (0.9 ± 0.8) mm and (1.0 ± 0.9) mm when using the predictor trained with internal tumor motion training data and external surrogate motion training data, respectively (for a typical lookahead time of 250 ms and sampling frequency of 15 Hz). A linear prediction model can reduce latency induced tracking errors by an average of about 50% in real-time image guided radiotherapy systems with system latencies of up to 300 ms. Training a linear model for lung tumor motion prediction with an external surrogate signal alone is feasible and results in similar performance as training with (internal) tumor motion. Particularly for scenarios where motion data are extracted from fluoroscopic imaging with ionizing radiation, this may alleviate the need for additional imaging dose during the collection of model training data.
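The prediction step itself reduces to a linear (ridge) regression from a history vector of recent positions to the position one lookahead interval ahead. The sketch below uses a synthetic breathing-like trace; the sampling rate, history length, lookahead and ridge penalty are illustrative, not the study's settings:

import numpy as np

fs, lookahead_s, hist_s = 15.0, 0.25, 3.0                 # Hz, seconds, seconds
la, h = int(round(lookahead_s * fs)), int(round(hist_s * fs))

rng = np.random.default_rng(0)
t = np.arange(0, 120, 1.0 / fs)
pos = 10.0 * np.sin(2 * np.pi * 0.25 * t) + 0.3 * rng.standard_normal(t.size)  # mm

X = np.array([pos[i - h:i] for i in range(h, pos.size - la)])   # history vectors
y = pos[h + la:]                                                # positions one lookahead ahead
lam = 1.0                                                       # ridge penalty
w = np.linalg.solve(X.T @ X + lam * np.eye(h), X.T @ y)         # closed-form ridge fit
pred = X @ w
print(np.mean(np.abs(y - pred)))                                # mean absolute prediction error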
Costa, Márcio Holsbach
2017-12-01
Feedback cancellation in a hearing aid is essential for achieving high maximum stable gain to compensate for the losses of people with severe to profound hearing impairment. The performance of adaptive feedback cancellers has been studied by assuming that the feedback path can be modeled as a linear system. However, limited dynamic range, low-cost loudspeakers, and nonlinear power amplifiers may distort the hearing aid output signal. In this way, linear-based predictions of the canceller performance may lead to significant deviations from its actual behavior. This work presents a theoretical performance analysis of a Least-Mean-Square-based shadow filter that is applied to set up the coefficients of a feedback canceller, which is subject to a static saturation-type nonlinearity at the output of the direct path. Deterministic recursive equations are derived to predict the mean square feedback error and the mean coefficient vector evolution between updates of the feedback canceller. These models are defined as functions of the canceller parameters and input signal statistics. Comparisons with Monte Carlo simulations show the provided models are highly accurate under the considered assumptions. The developed models allow inferences about the potential impact of an overdriven loudspeaker on the transient performance of the direct-method feedback canceller, serving as insightful tools for understanding the involved mechanisms. Copyright © 2017 Elsevier Ltd. All rights reserved.
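The analysed configuration can be sketched as an LMS-adapted shadow filter identifying a feedback path whose excitation is distorted by a static saturation at the direct-path output; the tanh saturation, step size and filter length below are illustrative choices, not the parameters of the paper:

import numpy as np

rng = np.random.default_rng(0)
f_true = np.array([0.0, 0.25, -0.1, 0.05])       # unknown acoustic feedback path
L, mu = 4, 0.01                                  # shadow-filter length and LMS step size
w = np.zeros(L)                                  # shadow-filter coefficients

x = rng.standard_normal(20000)                   # undistorted direct-path output signal
u = np.tanh(2.0 * x) / 2.0                       # loudspeaker output after saturation
for n in range(L, len(x)):
    x_vec = x[n - L + 1:n + 1][::-1]             # canceller regressor (undistorted signal)
    u_vec = u[n - L + 1:n + 1][::-1]
    e = f_true @ u_vec - w @ x_vec               # residual feedback after cancellation
    w += mu * e * x_vec                          # LMS update of the shadow filter
print(w)                                         # converges to a biased estimate of f_true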
Network compensation for missing sensors
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Mulligan, Jeffrey B.
1991-01-01
A network learning translation-invariance algorithm to compute interpolation functions is presented. This algorithm, with one fixed receptive field, can construct a linear transformation compensating for gain changes, sensor position jitter, and sensor loss when there are enough remaining sensors to adequately sample the input images. However, when the images are undersampled and complete compensation is not possible, the algorithm needs to be modified. For moderate sensor losses, the algorithm works if the transformation weight adjustment is restricted to the weights to output units affected by the loss.
Two-dimensional spatiotemporal coding of linear acceleration in vestibular nuclei neurons
NASA Technical Reports Server (NTRS)
Angelaki, D. E.; Bush, G. A.; Perachio, A. A.
1993-01-01
Response properties of vertical (VC) and horizontal (HC) canal/otolith-convergent vestibular nuclei neurons were studied in decerebrate rats during stimulation with sinusoidal linear accelerations (0.2-1.4 Hz) along different directions in the head horizontal plane. A novel characteristic of the majority of tested neurons was the nonzero response often elicited during stimulation along the "null" direction (i.e., the direction perpendicular to the maximum sensitivity vector, Smax). The tuning ratio (Smin gain/Smax gain), a measure of the two-dimensional spatial sensitivity, depended on stimulus frequency. For most vestibular nuclei neurons, the tuning ratio was small at the lowest stimulus frequencies and progressively increased with frequency. Specifically, HC neurons were characterized by a flat Smax gain and an approximately 10-fold increase of Smin gain per frequency decade. Thus, these neurons encode linear acceleration when stimulated along their maximum sensitivity direction, and the rate of change of linear acceleration (jerk) when stimulated along their minimum sensitivity direction. While the Smax vectors were distributed throughout the horizontal plane, the Smin vectors were concentrated mainly ipsilaterally with respect to head acceleration and clustered around the naso-occipital head axis. The properties of VC neurons were distinctly different from those of HC cells. The majority of VC cells showed decreasing Smax gains and small, relatively flat, Smin gains as a function of frequency. The Smax vectors were distributed ipsilaterally relative to the induced (apparent) head tilt. In type I anterior or posterior VC neurons, Smax vectors were clustered around the projection of the respective ipsilateral canal plane onto the horizontal head plane. These distinct spatial and temporal properties of HC and VC neurons during linear acceleration are compatible with the spatiotemporal organization of the horizontal and the vertical/torsional ocular responses, respectively, elicited in the rat during linear translation in the horizontal head plane. In addition, the data suggest a spatially and temporally specific and selective otolith/canal convergence. We propose that the central otolith system is organized in canal coordinates such that there is a close alignment between the plane of angular acceleration (canal) sensitivity and the plane of linear acceleration (otolith) sensitivity in otolith/canal-convergent vestibular nuclei neurons.
Evaluation of a new breast-shaped compensation filter for a newly built breast imaging system
NASA Astrophysics Data System (ADS)
Cai, Weixing; Ning, Ruola; Zhang, Yan; Conover, David
2007-03-01
A new breast-shaped compensation filter has been designed and fabricated for breast imaging using our newly built breast imaging (CBCTBI) system, which is able to scan an uncompressed breast with pendant geometry. The shape of this compensation filter is designed based on an average-sized breast phantom. Unlike conventional bow-tie compensation filters, its cross-sectional profile varies along the chest wall-to-nipple direction for better compensation for the shape of a breast. Breast phantoms of three different sizes are used to evaluate the performance of this compensation filter. The reconstruction image quality was studied and compared to that obtained without the compensation filter in place. The uniformity of linear attenuation coefficient and the uniformity of noise distribution are significantly improved, and the contrast-to-noise ratios (CNR) of small lesions near the chest wall are increased as well. Multi-normal image method is used in the reconstruction process to correct compensation flood field and to reduce ring artifacts.
Synthesis procedure for linear time-varying feedback systems with large parameter ignorance
NASA Technical Reports Server (NTRS)
Mcdonald, T. E., Jr.
1972-01-01
The development of synthesis procedures for linear time-varying feedback systems is considered. It is assumed that the plant can be described by linear differential equations with time-varying coefficients; however, ignorance is associated with the plant in that only the range of the time-variations are known instead of exact functional relationships. As a result of this plant ignorance the use of time-varying compensation is ineffective so that only time-invariant compensation is employed. In addition, there is a noise source at the plant output which feeds noise through the feedback elements to the plant input. Because of this noise source the gain of the feedback elements must be as small as possible. No attempt is made to develop a stability criterion for time-varying systems in this work.
NASA Technical Reports Server (NTRS)
Fichtl, G. H.; Holland, R. L.
1978-01-01
A stochastic model of spacecraft motion was developed based on the assumption that the net torque vector due to crew activity and rocket thruster firings is a statistically stationary Gaussian vector process. The process had zero ensemble mean value, and the components of the torque vector were mutually stochastically independent. The linearized rigid-body equations of motion were used to derive the autospectral density functions of the components of the spacecraft rotation vector. The cross-spectral density functions of the components of the rotation vector vanish for all frequencies so that the components of rotation were mutually stochastically independent. The autospectral and cross-spectral density functions of the induced gravity environment imparted to scientific apparatus rigidly attached to the spacecraft were calculated from the rotation rate spectral density functions via linearized inertial frame to body-fixed principal axis frame transformation formulae. The induced gravity process was a Gaussian one with zero mean value. Transformation formulae were used to rotate the principal axis body-fixed frame to which the rotation rate and induced gravity vector were referred to a body-fixed frame in which the components of the induced gravity vector were stochastically independent. Rice's theory of exceedances was used to calculate expected exceedance rates of the components of the rotation and induced gravity vector processes.
Fundamental Principles of Classical Mechanics: A Geometrical Perspective
NASA Astrophysics Data System (ADS)
Lam, Kai S.
2014-07-01
Classical mechanics is the quantitative study of the laws of motion for macroscopic physical systems with mass. The fundamental laws of this subject, known as Newton's Laws of Motion, are expressed in terms of second-order differential equations governing the time evolution of vectors in a so-called configuration space of a system (see Chapter 12). In an elementary setting, these are usually vectors in 3-dimensional Euclidean space, such as position vectors of point particles; but more generally they may be vectors in higher dimensional and more abstract spaces. A general knowledge of the mathematical properties of vectors, not only in their most intuitive incarnations as directed arrows in physical space but as elements of abstract linear vector spaces, and those of linear operators (transformations) on vector spaces as well, is then indispensable in laying the groundwork for both the physical and the more advanced mathematical - more precisely topological and geometrical - concepts that will prove to be vital in our subject. In this beginning chapter we will review these properties, and introduce the all-important related notions of dual spaces and tensor products of vector spaces. The notational convention for vectorial and tensorial indices used for the rest of this book (except when otherwise specified) will also be established...
An effective temperature compensation approach for ultrasonic hydrogen sensors
NASA Astrophysics Data System (ADS)
Tan, Xiaolong; Li, Min; Arsad, Norhana; Wen, Xiaoyan; Lu, Haifei
2018-03-01
Hydrogen is a promising clean energy resource with broad application prospects; however, leakage of hydrogen gas poses a serious safety issue, so measuring its concentration is of great significance. In a traditional approach to ultrasonic hydrogen sensing, a temperature drift of 0.1 °C results in a concentration error of about 250 ppm, which is intolerable for trace-level gas sensing. In order to eliminate the influence of temperature drift, we propose a feasible approach, termed the linear compensation algorithm, which utilizes the linear relationship between the pulse count and temperature to compensate for the pulse count error (ΔN) caused by temperature drift. Experimental results demonstrate that the proposed approach is capable of improving the measurement accuracy and can easily detect sub-100 ppm hydrogen concentrations under variable temperature conditions.
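As a rough illustration of the kind of linear compensation described above, the sketch below fits a pulse-count-versus-temperature line from hypothetical calibration data and subtracts the temperature-induced drift. The function names, reference temperature, and numerical values are assumptions for illustration, not values from the paper.

```python
import numpy as np

def fit_linear_compensation(temps_c, pulse_counts):
    """Fit N(T) = a*T + b from calibration data taken at a known
    (e.g. hydrogen-free) reference condition; returns slope a and intercept b."""
    a, b = np.polyfit(temps_c, pulse_counts, deg=1)
    return a, b

def compensate(pulse_count, temp_c, slope, ref_temp_c=25.0):
    """Remove the temperature-induced drift ΔN = a*(T - T_ref) so that the
    residual pulse count reflects the hydrogen concentration only."""
    return pulse_count - slope * (temp_c - ref_temp_c)

# Hypothetical calibration data (carrier gas only, varying temperature).
temps = np.array([20.0, 22.0, 24.0, 26.0, 28.0, 30.0])
counts = np.array([10012, 10031, 10049, 10070, 10088, 10107])
a, b = fit_linear_compensation(temps, counts)
corrected = compensate(10095, 28.5, a)   # drift-corrected pulse count
```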
Heterodyne interferometry method for calibration of a Soleil-Babinet compensator.
Zhang, Wenjing; Zhang, Zhiwei
2016-05-20
A method based on the common-path heterodyne interferometer system is proposed for the calibration of a Soleil-Babinet compensator. In this heterodyne interferometer system, which consists of two acousto-optic modulators, the compensator being calibrated is inserted into the signal path. By using the reference beam as the benchmark and a lock-in amplifier (SR844) as the phase retardation collector, retardations of 0 and λ (one wavelength) can be located accurately, and an arbitrary retardation between 0 and λ can also be measured accurately and continuously. By fitting a straight line to the experimental data, we obtained a linear correlation coefficient (R) of 0.995, which indicates that this system is capable of linear phase detection. The experimental results demonstrate determination accuracies of 0.212° and 0.26° and measurement precisions of 0.054° and 0.608° for retardations of 0 and λ, respectively.
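The straight-line fit and linear correlation coefficient mentioned above can be reproduced with a few lines of NumPy; the calibration numbers below are hypothetical placeholders, not the authors' data.

```python
import numpy as np

# Hypothetical calibration data: compensator micrometer setting (mm) versus
# phase retardation (degrees) read from the lock-in amplifier.
setting_mm = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
retard_deg = np.array([1.2, 73.1, 144.8, 217.0, 288.6, 360.4])

slope, intercept = np.polyfit(setting_mm, retard_deg, deg=1)
r = np.corrcoef(setting_mm, retard_deg)[0, 1]   # linear correlation coefficient R
print(f"retardation ≈ {slope:.2f}*x + {intercept:.2f},  R = {r:.4f}")
```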
Domain walls of linear polarization in isotropic Kerr media
NASA Astrophysics Data System (ADS)
Louis, Y.; Sheppard, A. P.; Haelterman, M.
1997-09-01
We present a new type of domain-wall vector solitary wave in isotropic self-defocusing Kerr media. These domain walls consist of localized structures separating uniform field domains of orthogonal linear polarizations. They result from the interplay between diffraction, self-phase modulation, and cross-phase modulation in cases where the nonlinear birefringence coefficient B = χ^(3)_xyyx / χ^(3)_xxxx is negative. Numerical simulations show that these new vector solitary waves are stable.
Control of Grid Connected Photovoltaic System Using Three-Level T-Type Inverter
NASA Astrophysics Data System (ADS)
Zorig, Abdelmalik; Belkeiri, Mohammed; Barkat, Said; Rabhi, Abdelhamid
2016-08-01
The three-level T-type inverter (3LT2I) topology has numerous advantages compared to the three-level neutral-point-clamped (NPC) inverter. The main benefits of the 3LT2I inverter are its efficiency, cost, switching losses, and the quality of its output voltage waveforms. In this paper, a photovoltaic distributed generation system based on a dual-stage topology of a DC-DC boost converter and a 3LT2I is introduced. To that end, a decoupling control strategy for the 3LT2I is proposed to control the current injected into the grid, the reactive power compensation, and the DC-link voltage. The resulting system is able to extract the maximum power from the photovoltaic generator, to achieve sinusoidal grid currents, and to ensure reactive power compensation. The voltage-balancing control of the two split DC capacitors of the 3LT2I is achieved using three-level space vector modulation with a balancing strategy based on the effective use of the redundant switching states of the inverter voltage vectors. The performance of the proposed system is investigated under different operating conditions.
Efficient low-bit-rate adaptive mesh-based motion compensation technique
NASA Astrophysics Data System (ADS)
Mahmoud, Hanan A.; Bayoumi, Magdy A.
2001-08-01
This paper proposes a two-stage global motion estimation method using a novel quadtree block-based motion estimation technique and an active mesh model. In the first stage, motion parameters are estimated by fitting block-based motion vectors computed using a new efficient quadtree technique that divides a frame into equilateral triangle blocks using the quadtree structure. Arbitrary partition shapes are achieved by allowing 4-to-1, 3-to-1, and 2-to-1 merging of sibling blocks having the same motion vector. In the second stage, the mesh is constructed using an adaptive triangulation procedure that places more triangles over areas with high motion content; these areas are estimated during the first stage. Finally, motion compensation is achieved by using a novel algorithm, carried out by both the encoder and the decoder, that determines the optimal triangulation of the resultant partitions, followed by affine mapping at the encoder. Computer simulation results show that the proposed method gives better performance than conventional ones in terms of the peak signal-to-noise ratio (PSNR) and the compression ratio (CR).
Betti numbers of graded modules and cohomology of vector bundles
NASA Astrophysics Data System (ADS)
Eisenbud, David; Schreyer, Frank-Olaf
2009-07-01
In the remarkable paper Graded Betti numbers of Cohen-Macaulay modules and the multiplicity conjecture, Mats Boij and Jonas Soederberg conjectured that the Betti table of a Cohen-Macaulay module over a polynomial ring is a positive linear combination of Betti tables of modules with pure resolutions. We prove a strengthened form of their conjectures. Applications include a proof of the Multiplicity Conjecture of Huneke and Srinivasan and a proof of the convexity of a fan naturally associated to the Young lattice. With the same tools we show that the cohomology table of any vector bundle on projective space is a positive rational linear combination of the cohomology tables of what we call supernatural vector bundles. Using this result we give new bounds on the slope of a vector bundle in terms of its cohomology.
The morphing of geographical features by Fourier transformation
Liu, Pengcheng; Yu, Wenhao; Cheng, Xiaoqiang
2018-01-01
This paper presents a morphing model of vector geographical data based on the Fourier transformation. The model involves three main steps: conversion from vector data to a Fourier series, generation of an intermediate function by combining the two Fourier series corresponding to a large scale and a small scale, and inverse conversion from the combined function back to vector data. By mirror processing, the model can also be used for morphing of linear features. Experimental results show that the method is sensitive to scale variations and can be used for continuous scale transformation of vector map features. The efficiency of the model is linearly related to the number of points on the shape boundary and the order n at which the Fourier expansion is truncated. The morphing effect obtained by the Fourier transformation is plausible and the efficiency of the algorithm is acceptable. PMID:29351344
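A minimal sketch of the general idea (boundary → Fourier coefficients → weighted combination → inverse transform) follows, assuming closed boundaries stored as (N, 2) vertex arrays. The resampling scheme, combination weights, and function names are illustrative assumptions, not necessarily the paper's exact model.

```python
import numpy as np

def to_fourier(boundary_xy, n_points=256):
    """Resample a closed boundary to n_points and take the FFT of x + i*y."""
    t = np.linspace(0.0, 1.0, len(boundary_xy), endpoint=False)
    ti = np.linspace(0.0, 1.0, n_points, endpoint=False)
    x = np.interp(ti, t, boundary_xy[:, 0], period=1.0)
    y = np.interp(ti, t, boundary_xy[:, 1], period=1.0)
    return np.fft.fft(x + 1j * y)

def morph(coeff_large, coeff_small, alpha):
    """Intermediate shape as a weighted combination of the two spectra;
    alpha = 0 returns the large-scale shape, alpha = 1 the small-scale one."""
    coeff = (1.0 - alpha) * coeff_large + alpha * coeff_small
    z = np.fft.ifft(coeff)
    return np.column_stack([z.real, z.imag])
```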
Fridman, Gene Y.; Davidovics, Natan S.; Dai, Chenkai; Migliaccio, Americo A.
2010-01-01
There is no effective treatment available for individuals unable to compensate for bilateral profound loss of vestibular sensation, which causes chronic disequilibrium and blurs vision by disrupting vestibulo-ocular reflexes that normally stabilize the eyes during head movement. Previous work suggests that a multichannel vestibular prosthesis can emulate normal semicircular canals by electrically stimulating vestibular nerve branches to encode head movements detected by mutually orthogonal gyroscopes affixed to the skull. Until now, that approach has been limited by current spread resulting in distortion of the vestibular nerve activation pattern and consequent inability to accurately encode head movements throughout the full 3-dimensional (3D) range normally transduced by the labyrinths. We report that the electrically evoked 3D angular vestibulo-ocular reflex exhibits vector superposition and linearity to a sufficient degree that a multichannel vestibular prosthesis incorporating a precompensatory 3D coordinate transformation to correct misalignment can accurately emulate semicircular canals for head rotations throughout the range of 3D axes normally transduced by a healthy labyrinth. PMID:20177732
Polarization locked vector solitons and axis instability in optical fiber.
Cundiff, Steven T.; Collings, Brandon C.; Bergman, Keren
2000-09-01
We experimentally observe polarization-locked vector solitons in optical fiber. Polarization-locked vector solitons use nonlinearity to preserve their polarization state despite the presence of birefringence. To achieve conditions where the delicate balance between nonlinearity and birefringence can survive, we studied the polarization evolution of the pulses circulating in a laser constructed entirely of optical fiber. We observe two distinct states with fixed polarization. The first state occurs for very small values of birefringence and is elliptically polarized. We measure the relative phase between orthogonal components along the two principal axes to be +/-pi/2. The relative amplitude varies linearly with the magnitude of the birefringence. This state is a polarization-locked vector soliton. The second, linearly polarized, state occurs for larger values of birefringence and is due to the fast axis instability. We provide complete characterization of these states, and present a physical explanation of both of these states and the stability of the polarization-locked vector solitons. (c) 2000 American Institute of Physics.
Polarization locked vector solitons and axis instability in optical fiber
NASA Astrophysics Data System (ADS)
Cundiff, Steven T.; Collings, Brandon C.; Bergman, Keren
2000-09-01
We experimentally observe polarization-locked vector solitons in optical fiber. Polarization-locked vector solitons use nonlinearity to preserve their polarization state despite the presence of birefringence. To achieve conditions where the delicate balance between nonlinearity and birefringence can survive, we studied the polarization evolution of the pulses circulating in a laser constructed entirely of optical fiber. We observe two distinct states with fixed polarization. The first state occurs for very small values of birefringence and is elliptically polarized. We measure the relative phase between orthogonal components along the two principal axes to be ±π/2. The relative amplitude varies linearly with the magnitude of the birefringence. This state is a polarization-locked vector soliton. The second, linearly polarized, state occurs for larger values of birefringence and is due to the fast axis instability. We provide complete characterization of these states, and present a physical explanation of both of these states and the stability of the polarization-locked vector solitons.
Voltage regulation in linear induction accelerators
Parsons, W.M.
1992-12-29
An improvement in voltage regulation in a linear induction accelerator is disclosed, wherein a varistor, such as a metal oxide varistor, is placed in parallel with the beam accelerating cavity and the magnetic core. The non-linear properties of the varistor result in a more stable voltage across the beam accelerating cavity than with a conventional compensating resistance. 4 figs.
Chebabhi, Ali; Fellah, Mohammed Karim; Kessal, Abdelhalim; Benkhoris, Mohamed F
2016-07-01
In this paper, a new balancing three-level three-dimensional space vector modulation (B3L-3DSVM) strategy is proposed that uses redundant voltage vectors to achieve precise, high-performance control of a three-phase three-level four-leg neutral-point-clamped (NPC) inverter-based shunt active power filter (SAPF). The SAPF eliminates source current harmonics, reduces the magnitude of the neutral wire current (eliminating the zero-sequence current produced by single-phase nonlinear loads), and compensates reactive power in three-phase four-wire electrical networks. The proposed strategy simultaneously generates the gate switching pulses, balances the DC bus capacitor voltages (keeping the voltages of the two DC bus capacitors equal), and reduces and fixes the switching frequency of the inverter switches. Nonlinear backstepping controllers (NBSC) are used to regulate the DC bus capacitor voltages and the SAPF injected currents, providing robustness, stabilizing the system, improving the response, and eliminating the overshoot and undershoot of a traditional proportional-integral (PI) controller. The conventional three-level three-dimensional space vector modulation (C3L-3DSVM) and the proposed B3L-3DSVM are calculated and compared in terms of the error between the two DC bus capacitor voltages, the SAPF output voltages, the THDv and THDi of the source currents, the magnitude of the source neutral wire current, and the reactive power compensation under unbalanced single-phase nonlinear loads. The success, robustness, and effectiveness of the proposed control strategies are demonstrated through simulation using SimPowerSystems and S-Functions in MATLAB/Simulink. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Computation of output feedback gains for linear stochastic systems using the Zangwill-Powell method
NASA Technical Reports Server (NTRS)
Kaufman, H.
1977-01-01
Because conventional optimal linear regulator theory results in a controller which requires the capability of measuring and/or estimating the entire state vector, it is of interest to consider procedures for computing controls which are restricted to be linear feedback functions of a lower dimensional output vector and which take into account the presence of measurement noise and process uncertainty. To this effect a stochastic linear model has been developed that accounts for process parameter and initial uncertainty, measurement noise, and a restricted number of measurable outputs. Optimization with respect to the corresponding output feedback gains was then performed for both finite and infinite time performance indices without gradient computation by using Zangwill's modification of a procedure originally proposed by Powell.
Contextual Multi-armed Bandits under Feature Uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yun, Seyoung; Nam, Jun Hyun; Mo, Sangwoo
We study contextual multi-armed bandit problems under linear realizability on rewards and uncertainty (or noise) on features. For the case of identical noise on features across actions, we propose an algorithm, coined NLinRel, having an O(T⁷/₈(log(dT)+K√d)) regret bound for T rounds, K actions, and d-dimensional feature vectors. Next, for the case of non-identical noise, we observe that popular linear hypotheses, including NLinRel, cannot achieve such sub-linear regret. Instead, under the assumption of Gaussian feature vectors, we prove that a greedy algorithm has an O(T²/₃ √(log d)) regret bound with respect to the optimal linear hypothesis. Utilizing our theoretical understanding of the Gaussian case, we also design a practical variant of NLinRel, coined Universal-NLinRel, for arbitrary feature distributions. It first runs NLinRel to find the 'true' coefficient vector using feature uncertainties and then adjusts it to minimize its regret using the statistical feature information. We justify the performance of Universal-NLinRel on both synthetic and real-world datasets.
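For orientation, the sketch below implements a generic greedy contextual bandit with a ridge-regression estimate of the reward coefficients operating on the observed (noisy) features. It is not the paper's NLinRel or Universal-NLinRel algorithm, only a baseline of the kind discussed; all names and parameters are illustrative.

```python
import numpy as np

class GreedyLinearBandit:
    """Greedy linear contextual bandit with a ridge-regression coefficient
    estimate (illustrative baseline, not the paper's NLinRel)."""

    def __init__(self, dim, reg=1.0):
        self.A = reg * np.eye(dim)   # regularized Gram matrix of played features
        self.b = np.zeros(dim)       # accumulated reward-weighted features

    def choose(self, noisy_features):
        # noisy_features: (K, d) observed, noise-corrupted feature vectors.
        theta = np.linalg.solve(self.A, self.b)
        return int(np.argmax(noisy_features @ theta))

    def update(self, feature, reward):
        # Update the ridge statistics with the played feature and its reward.
        self.A += np.outer(feature, feature)
        self.b += reward * feature
```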
Quantum Linear System Algorithm for Dense Matrices
NASA Astrophysics Data System (ADS)
Wossnig, Leonard; Zhao, Zhikuan; Prakash, Anupam
2018-02-01
Solving linear systems of equations is a frequently encountered problem in machine learning and optimization. Given a matrix A and a vector b, the task is to find the vector x such that Ax = b. We describe a quantum algorithm that achieves a sparsity-independent runtime scaling of O(κ² √n polylog(n)/ε) for an n × n dimensional A with bounded spectral norm, where κ denotes the condition number of A, and ε is the desired precision parameter. This amounts to a polynomial improvement over known quantum linear system algorithms when applied to dense matrices, and poses a new state of the art for solving dense linear systems on a quantum computer. Furthermore, an exponential improvement is achievable if the rank of A is polylogarithmic in the matrix dimension. Our algorithm is built upon a singular value estimation subroutine, which makes use of a memory architecture that allows for efficient preparation of quantum states that correspond to the rows of A and the vector of Euclidean norms of the rows of A.
Vector-beam solutions of Maxwell's wave equation.
Hall, D G
1996-01-01
The Hermite-Gauss and Laguerre-Gauss modes are well-known beam solutions of the scalar Helmholtz equation in the paraxial limit. As such, they describe linearly polarized fields or single Cartesian components of vector fields. The vector wave equation admits, in the paraxial limit, of a family of localized Bessel-Gauss beam solutions that can describe the entire transverse electric field. Two recently reported solutions are members of this family of vector Bessel-Gauss beam modes.
Embedding of multidimensional time-dependent observations.
Barnard, J P; Aldrich, C; Gerber, M
2001-10-01
A method is proposed to reconstruct dynamic attractors by embedding of multivariate observations of dynamic nonlinear processes. The Takens embedding theory is combined with independent component analysis to transform the embedding into a vector space of linearly independent vectors (phase variables). The method is successfully tested against prediction of the unembedded state vector in two case studies of simulated chaotic processes.
Embedding of multidimensional time-dependent observations
NASA Astrophysics Data System (ADS)
Barnard, Jakobus P.; Aldrich, Chris; Gerber, Marius
2001-10-01
A method is proposed to reconstruct dynamic attractors by embedding of multivariate observations of dynamic nonlinear processes. The Takens embedding theory is combined with independent component analysis to transform the embedding into a vector space of linearly independent vectors (phase variables). The method is successfully tested against prediction of the unembedded state vector in two case studies of simulated chaotic processes.
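A minimal sketch of the embedding pipeline follows, assuming a scalar observation for simplicity (the paper treats multivariate observations): a Takens delay embedding followed by independent component analysis to obtain linearly independent phase variables. The embedding dimension, delay, and signal are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA

def delay_embed(x, dim, tau):
    """Takens delay embedding of a scalar series x into dim-dimensional vectors."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Hypothetical observed series from a nonlinear process.
t = np.arange(0, 200, 0.05)
x = np.sin(t) + 0.4 * np.sin(2.3 * t + 1.0) + 0.05 * np.random.randn(len(t))

embedded = delay_embed(x, dim=5, tau=8)
ica = FastICA(n_components=5, random_state=0)
phase_vars = ica.fit_transform(embedded)   # linearly independent phase variables
```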
Linear time-invariant controller design for two-channel decentralized control systems
NASA Technical Reports Server (NTRS)
Desoer, Charles A.; Gundes, A. Nazli
1987-01-01
This paper analyzes a linear time-invariant two-channel decentralized control system with a 2 x 2 strictly proper plant. It presents an algorithm for the algebraic design of a class of decentralized compensators which stabilize the given plant.
Method and System for Temporal Filtering in Video Compression Systems
NASA Technical Reports Server (NTRS)
Lu, Ligang; He, Drake; Jagmohan, Ashish; Sheinin, Vadim
2011-01-01
Three related innovations combine improved non-linear motion estimation, video coding, and video compression. The first system comprises a method in which side information is generated using an adaptive, non-linear motion model. This method enables extrapolating and interpolating a visual signal, including determining the first motion vector between the first pixel position in a first image to a second pixel position in a second image; determining a second motion vector between the second pixel position in the second image and a third pixel position in a third image; determining a third motion vector between the first pixel position in the first image and the second pixel position in the second image, the second pixel position in the second image, and the third pixel position in the third image using a non-linear model; and determining a position of the fourth pixel in a fourth image based upon the third motion vector. For the video compression element, the video encoder has low computational complexity and high compression efficiency. The disclosed system comprises a video encoder and a decoder. The encoder converts the source frame into a space-frequency representation, estimates the conditional statistics of at least one vector of space-frequency coefficients with similar frequencies, and is conditioned on previously encoded data. It estimates an encoding rate based on the conditional statistics and applies a Slepian-Wolf code with the computed encoding rate. The method for decoding includes generating a side-information vector of frequency coefficients based on previously decoded source data and encoder statistics and previous reconstructions of the source frequency vector. It also performs Slepian-Wolf decoding of a source frequency vector based on the generated side-information and the Slepian-Wolf code bits. The video coding element includes receiving a first reference frame having a first pixel value at a first pixel position, a second reference frame having a second pixel value at a second pixel position, and a third reference frame having a third pixel value at a third pixel position. It determines a first motion vector between the first pixel position and the second pixel position, a second motion vector between the second pixel position and the third pixel position, and a fourth pixel value for a fourth frame based upon a linear or nonlinear combination of the first pixel value, the second pixel value, and the third pixel value. A stationary filtering process determines the estimated pixel values. The parameters of the filter may be predetermined constants.
On hidden symmetries of extremal Kerr-NUT-AdS-dS black holes
NASA Astrophysics Data System (ADS)
Rasmussen, Jørgen
2011-05-01
It is well known that the Kerr-NUT-AdS-dS black hole admits two linearly independent Killing vectors and possesses a hidden symmetry generated by a rank-2 Killing tensor. The near-horizon geometry of an extremal Kerr-NUT-AdS-dS black hole admits four linearly independent Killing vectors, and we show how the hidden symmetry of the black hole itself is carried over by means of a modified Killing-Yano potential which is given explicitly. We demonstrate that the corresponding Killing tensor of the near-horizon geometry is reducible as it can be expressed in terms of the Casimir operators formed by the four Killing vectors.
NASA Astrophysics Data System (ADS)
Imamura, Seigo; Ono, Kenji; Yokokawa, Mitsuo
2016-07-01
Ensemble computing, which is an instance of capacity computing, is an effective computing scenario for exascale parallel supercomputers. In ensemble computing, there are multiple linear systems associated with a common coefficient matrix. We improve the performance of iterative solvers for multiple right-hand-side vectors by solving them at the same time, that is, by operating on the matrix formed from the collection of vectors. We implemented several iterative methods and compared their performance. The maximum performance on the SPARC64 VIIIfx was 7.6 times higher than that of a naïve implementation. Finally, to deal with the different convergence behavior of the linear systems, we introduced a control method that eliminates the calculation of already converged vectors.
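A simplified sketch of the idea of solving several systems with a common coefficient matrix at once is given below: a conjugate-gradient iteration applied column-wise that freezes columns whose residuals have converged. It illustrates the convergence-control concept only; it is not the authors' implementation or their solver set, and a production code would also drop converged columns from the matrix products.

```python
import numpy as np

def block_cg(A, B, tol=1e-8, max_iter=500):
    """Conjugate gradient applied to all right-hand sides in B simultaneously.
    Columns whose residual norm drops below tol are frozen so no further
    updates are spent on already-converged vectors."""
    X = np.zeros_like(B)
    R = B - A @ X
    P = R.copy()
    rs_old = np.sum(R * R, axis=0)                 # per-column squared residuals
    active = np.sqrt(rs_old) > tol
    for _ in range(max_iter):
        if not active.any():
            break
        AP = A @ P
        denom = np.sum(P * AP, axis=0)
        alpha = np.where(active, rs_old / np.where(denom == 0, 1.0, denom), 0.0)
        X += alpha * P
        R -= alpha * AP
        rs_new = np.sum(R * R, axis=0)
        beta = np.where(active, rs_new / np.where(rs_old == 0, 1.0, rs_old), 0.0)
        P = R + beta * P
        rs_old = rs_new
        active = np.sqrt(rs_new) > tol
    return X

# Hypothetical test: one SPD matrix shared by eight right-hand sides.
rng = np.random.default_rng(0)
M = rng.normal(size=(100, 100))
A = M @ M.T + 100 * np.eye(100)
B = rng.normal(size=(100, 8))
X = block_cg(A, B)
```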
Algorithms for solving large sparse systems of simultaneous linear equations on vector processors
NASA Technical Reports Server (NTRS)
David, R. E.
1984-01-01
Very efficient algorithms for solving large sparse systems of simultaneous linear equations have been developed for serial processing computers. These involve a reordering of matrix rows and columns in order to obtain a near triangular pattern of nonzero elements. Then an LU factorization is developed to represent the matrix inverse in terms of a sequence of elementary Gaussian eliminations, or pivots. In this paper it is shown how these algorithms are adapted for efficient implementation on vector processors. Results obtained on the CYBER 200 Model 205 are presented for a series of large test problems which show the comparative advantages of the triangularization and vector processing algorithms.
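On a modern workstation the reorder-then-factor strategy described above is available through SciPy's SuperLU interface; the sketch below only illustrates that workflow (column reordering to limit fill-in, then LU factorization reused across solves) and is not the CYBER 200 vector-processing algorithms of the paper. The matrix and size are hypothetical.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Hypothetical large sparse system; COLAMD column reordering keeps the LU
# factors close to a low fill-in, near-triangular pattern.
n = 10_000
A = sp.diags([-1.0, 4.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

lu = splu(A, permc_spec="COLAMD")   # reordering + LU factorization
x = lu.solve(b)                     # repeated solves reuse the same factors
```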
NASA Technical Reports Server (NTRS)
Gibson, J. S.; Rosen, I. G.
1986-01-01
An abstract approximation theory and computational methods are developed for the determination of optimal linear-quadratic feedback control, observers and compensators for infinite dimensional discrete-time systems. Particular attention is paid to systems whose open-loop dynamics are described by semigroups of operators on Hilbert spaces. The approach taken is based on the finite dimensional approximation of the infinite dimensional operator Riccati equations which characterize the optimal feedback control and observer gains. Theoretical convergence results are presented and discussed. Numerical results for an example involving a heat equation with boundary control are presented and used to demonstrate the feasibility of the method.
NASA Technical Reports Server (NTRS)
Burns, John A.; Marrekchi, Hamadi
1993-01-01
The problem of using reduced order dynamic compensators to control a class of nonlinear parabolic distributed parameter systems was considered. Concentration was on a system with unbounded input and output operators governed by Burgers' equation. A linearized model was used to compute low-order-finite-dimensional control laws by minimizing certain energy functionals. Then these laws were applied to the nonlinear model. Standard approaches to this problem employ model/controller reduction techniques in conjunction with linear quadratic Gaussian (LQG) theory. The approach used is based on the finite dimensional Bernstein/Hyland optimal projection theory which yields a fixed-finite-order controller.
NASA Astrophysics Data System (ADS)
Liu, Yang; Song, Fazhi; Yang, Xiaofeng; Dong, Yue; Tan, Jiubin
2018-06-01
Due to their structural simplicity, linear motors are increasingly receiving attention for use in high velocity and high precision applications. The force ripple, as a space-periodic disturbance, however, deteriorates the achievable dynamic performance. Conventional force ripple measurement approaches are time-consuming and place high requirements on the experimental conditions. In this paper, a novel learning identification algorithm is proposed for intelligent force ripple measurement and compensation. Existing identification schemes use all the error signals to update the parameters of the force ripple. However, the error induced by noise is ineffective for force ripple identification and even deteriorates the identification process. In this paper, only the most pertinent information in the error signal is utilized for force ripple identification. Firstly, the effective error signals caused by the reference trajectory and the force ripple are extracted by projecting the overall error signals onto a subspace spanned by the physical model of the linear motor and the sinusoidal model of the force ripple. The time delay in the linear motor is compensated in the basis functions. Then, a data-driven approach is proposed to design the learning gain, balancing the trade-off between convergence speed and robustness against noise. Simulation and experimental results validate the proposed method and confirm its effectiveness and superiority.
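A minimal sketch of the projection step described above, assuming the force ripple is modeled as a few sinusoidal harmonics of motor position fitted by least squares; the magnet pitch, harmonic count, and function names are hypothetical, and the fitted ripple would then be fed forward with opposite sign for compensation.

```python
import numpy as np

def identify_force_ripple(position, error_force, n_harmonics=3, pitch=0.032):
    """Least-squares projection of the measured force error onto a sinusoidal
    ripple model sum_k [a_k*sin(2*pi*k*x/pitch) + b_k*cos(2*pi*k*x/pitch)].
    Illustrative sketch; pitch and harmonic count are assumed values."""
    cols = []
    for k in range(1, n_harmonics + 1):
        w = 2.0 * np.pi * k * position / pitch
        cols.append(np.sin(w))
        cols.append(np.cos(w))
    Phi = np.column_stack(cols)                       # basis functions of position
    coeffs, *_ = np.linalg.lstsq(Phi, error_force, rcond=None)
    return coeffs, Phi @ coeffs                       # ripple parameters, fitted ripple
```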
The Vertical Linear Fractional Initialization Problem
NASA Technical Reports Server (NTRS)
Lorenzo, Carl F.; Hartley, Tom T.
1999-01-01
This paper presents a solution to the initialization problem for a system of linear fractional-order differential equations. The scalar problem is considered first, and solutions are obtained both generally and for a specific initialization. Next the vector fractional order differential equation is considered. In this case, the solution is obtained in the form of matrix F-functions. Some control implications of the vector case are discussed. The suggested method of problem solution is shown via an example.
Approximation theory for LQG (Linear-Quadratic-Gaussian) optimal control of flexible structures
NASA Technical Reports Server (NTRS)
Gibson, J. S.; Adamian, A.
1988-01-01
An approximation theory is presented for the LQG (Linear-Quadratic-Gaussian) optimal control problem for flexible structures whose distributed models have bounded input and output operators. The main purpose of the theory is to guide the design of finite dimensional compensators that approximate closely the optimal compensator. The optimal LQG problem separates into an optimal linear-quadratic regulator problem and an optimal state estimation problem. The solution of the former problem lies in the solution to an infinite dimensional Riccati operator equation. The approximation scheme approximates the infinite dimensional LQG problem with a sequence of finite dimensional LQG problems defined for a sequence of finite dimensional, usually finite element or modal, approximations of the distributed model of the structure. Two Riccati matrix equations determine the solution to each approximating problem. The finite dimensional equations for numerical approximation are developed, including formulas for converting matrix control and estimator gains to their functional representation to allow comparison of gains based on different orders of approximation. Convergence of the approximating control and estimator gains and of the corresponding finite dimensional compensators is studied. Also, convergence and stability of the closed-loop systems produced with the finite dimensional compensators are discussed. The convergence theory is based on the convergence of the solutions of the finite dimensional Riccati equations to the solutions of the infinite dimensional Riccati equations. A numerical example with a flexible beam, a rotating rigid body, and a lumped mass is given.
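In each finite-dimensional approximating problem, the two Riccati matrix equations can be solved with standard tools. The sketch below computes LQG regulator and estimator gains for a hypothetical low-order structural model, as an illustration of the regulator/estimator separation described above, not of the paper's approximation scheme itself; all numerical values are assumed.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqg_gains(A, B, C, Q, R, W, V):
    """Finite-dimensional LQG design: one Riccati equation gives the regulator
    gain K, a second (dual) one gives the estimator (Kalman) gain L."""
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)          # control gain, u = -K x_hat
    S = solve_continuous_are(A.T, C.T, W, V)
    L = S @ C.T @ np.linalg.inv(V)           # estimator gain
    return K, L

# Hypothetical single lightly damped structural mode.
A = np.array([[0.0, 1.0], [-4.0, -0.02]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K, L = lqg_gains(A, B, C, Q=np.eye(2), R=np.eye(1), W=np.eye(2), V=0.01 * np.eye(1))
```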
AZTEC. Parallel Iterative method Software for Solving Linear Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hutchinson, S.; Shadid, J.; Tuminaro, R.
1995-07-01
AZTEC is an iterative solver library that greatly simplifies the parallelization process when solving linear systems of equations Ax = b, where A is a user-supplied n × n sparse matrix, b is a user-supplied vector of length n, and x is a vector of length n to be computed. AZTEC is intended as a software tool for users who want to avoid cumbersome parallel programming details but who have large sparse linear systems that require an efficiently utilized parallel processing system. A collection of data transformation tools is provided that allows for easy creation of distributed sparse unstructured matrices for parallel solution.
Finite-dimensional compensators for infinite-dimensional systems via Galerkin-type approximation
NASA Technical Reports Server (NTRS)
Ito, Kazufumi
1990-01-01
In this paper, existence and construction of stabilizing compensators for linear time-invariant systems defined on Hilbert spaces are discussed. An existence result is established using Galerkin-type approximations in which independent basis elements are used instead of the complete set of eigenvectors. A design procedure based on approximate solutions of the optimal regulator and optimal observer via Galerkin-type approximation is given, and the Schumacher approach is used to reduce the dimension of the compensators. A detailed discussion of parabolic and hereditary differential systems is included.
On classical mechanical systems with non-linear constraints
NASA Astrophysics Data System (ADS)
Terra, Gláucio; Kobayashi, Marcelo H.
2004-03-01
In the present work, we analyze classical mechanical systems with non-linear constraints in the velocities. We prove that the d'Alembert-Chetaev trajectories of a constrained mechanical system satisfy both Gauss' principle of least constraint and Hölder's principle. In the case of free mechanics, they also satisfy Hertz's principle of least curvature if the constraint manifold is a cone. We show that the Gibbs-Maggi-Appell (GMA) vector field (i.e. the second-order vector field which defines the d'Alembert-Chetaev trajectories) conserves energy for any potential energy if, and only if, the constraint is homogeneous (i.e. if the Liouville vector field is tangent to the constraint manifold). We introduce the Jacobi-Carathéodory metric tensor and prove Jacobi-Carathéodory's theorem assuming that the constraint manifold is a cone. Finally, we present a version of Liouville's theorem on the conservation of volume for the flow of the GMA vector field.
Woodward, Carol S.; Gardner, David J.; Evans, Katherine J.
2015-01-01
Efficient solutions of global climate models require effectively handling disparate length and time scales. Implicit solution approaches allow time integration of the physical system with a step size governed by accuracy of the processes of interest rather than by stability of the fastest time scales present. Implicit approaches, however, require the solution of nonlinear systems within each time step. Usually, Newton's method is applied to solve these systems. Each iteration of Newton's method, in turn, requires the solution of a linear model of the nonlinear system. This model employs the Jacobian of the problem-defining nonlinear residual, but this Jacobian can be costly to form. If a Krylov linear solver is used for the solution of the linear system, the action of the Jacobian matrix on a given vector is required. In the case of spectral element methods, the Jacobian is not calculated but only implemented through matrix-vector products. The matrix-vector multiply can also be approximated by a finite difference approximation, which may introduce inaccuracy in the overall nonlinear solver. In this paper, we review the advantages and disadvantages of finite difference approximations of these matrix-vector products for climate dynamics within the spectral element shallow water dynamical core of the Community Atmosphere Model.
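The finite difference approximation of the Jacobian-vector product discussed above has the generic form J(u)v ≈ (F(u + εv) − F(u))/ε. A minimal sketch follows, with a commonly used (but here assumed, not the paper's) heuristic for the perturbation size ε.

```python
import numpy as np

def jacobian_vector_product(residual, u, v, eps=None):
    """Matrix-free approximation of J(u) @ v for a nonlinear residual F,
    avoiding explicit formation of the Jacobian."""
    if eps is None:
        # Heuristic step size: too small amplifies round-off error, too large
        # adds truncation error (the trade-off discussed in the abstract).
        eps = (np.sqrt(np.finfo(float).eps) * (1.0 + np.linalg.norm(u))
               / max(np.linalg.norm(v), 1e-30))
    return (residual(u + eps * v) - residual(u)) / eps
```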
Held, Elizabeth; Cape, Joshua; Tintle, Nathan
2016-01-01
Machine learning methods continue to show promise in the analysis of data from genetic association studies because of the high number of variables relative to the number of observations. However, few best practices exist for the application of these methods. We extend a recently proposed supervised machine learning approach for predicting disease risk by genotypes to be able to incorporate gene expression data and rare variants. We then apply 2 different versions of the approach (radial and linear support vector machines) to simulated data from Genetic Analysis Workshop 19 and compare performance to logistic regression. Method performance was not radically different across the 3 methods, although the linear support vector machine tended to show small gains in predictive ability relative to a radial support vector machine and logistic regression. Importantly, as the number of genes in the models was increased, even when those genes contained causal rare variants, model predictive ability showed a statistically significant decrease in performance for both the radial support vector machine and logistic regression. The linear support vector machine showed more robust performance to the inclusion of additional genes. Further work is needed to evaluate machine learning approaches on larger samples and to evaluate the relative improvement in model prediction from the incorporation of gene expression data.
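A rough sketch of the three-way comparison on simulated data, using scikit-learn's linear SVM, radial (RBF) SVM, and logistic regression; the data generation and cross-validation setup are illustrative assumptions, not the Genetic Analysis Workshop 19 analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Hypothetical genotype/expression feature matrix X and binary disease status y.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 50))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=300) > 0).astype(int)

models = {
    "linear SVM": SVC(kernel="linear", C=1.0),
    "radial SVM": SVC(kernel="rbf", C=1.0, gamma="scale"),
    "logistic regression": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name:20s} mean CV AUC = {auc:.3f}")
```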
Sum, Chi Hong; Nafissi, Nafiseh; Slavcev, Roderick A.; Wettig, Shawn
2015-01-01
In combination with novel linear covalently closed (LCC) DNA minivectors, referred to as DNA ministrings, a gemini surfactant-based synthetic vector for gene delivery has been shown to exhibit enhanced delivery and bioavailability while offering a heightened safety profile. Due to topological differences from conventional circular covalently closed (CCC) plasmid DNA vectors, the linear topology of LCC DNA ministrings may present differences with regards to DNA interaction and the physicochemical properties influencing DNA-surfactant interactions in the formulation of lipoplexed particles. In this study, N,N-bis(dimethylhexadecyl)-α,ω-propanediammonium(16-3-16)gemini-based synthetic vectors, incorporating either CCC plasmid or LCC DNA ministrings, were characterized and compared with respect to particle size, zeta potential, DNA encapsulation, DNase sensitivity, and in vitro transgene delivery efficacy. Through comparative analysis, differences between CCC plasmid DNA and LCC DNA ministrings led to variations in the physical properties of the resulting lipoplexes after complexation with 16-3-16 gemini surfactants. Despite the size disparities between the plasmid DNA vectors (CCC) and DNA ministrings (LCC), differences in DNA topology resulted in the generation of lipoplexes of comparable particle sizes. The capacity for ministring (LCC) derived lipoplexes to undergo complete counterion release during lipoplex formation contributed to improved DNA encapsulation, protection from DNase degradation, and in vitro transgene delivery. PMID:26561857
Dikbas, Salih; Altunbasak, Yucel
2013-08-01
In this paper, a new low-complexity true-motion estimation (TME) algorithm is proposed for video processing applications, such as motion-compensated temporal frame interpolation (MCTFI) or motion-compensated frame rate up-conversion (MCFRUC). Regular motion estimation, which is often used in video coding, aims to find the motion vectors (MVs) that reduce the temporal redundancy, whereas TME aims to track the projected object motion as closely as possible. TME is obtained by imposing implicit and/or explicit smoothness constraints on the block-matching algorithm. To produce better quality interpolated frames, the dense motion field at the interpolation instant is obtained for both forward and backward MVs; then, bidirectional motion compensation is applied by carefully blending the forward- and backward-compensated predictions. Finally, the performance of the proposed algorithm for MCTFI is demonstrated against recently proposed methods and the smoothness-constrained optical flow employed by a professional video production suite. Experimental results show that the quality of the interpolated frames using the proposed method is better when compared with the MCFRUC techniques.
Motion compensated shape error concealment.
Schuster, Guido M; Katsaggelos, Aggelos K
2006-02-01
The introduction of Video Objects (VOs) is one of the innovations of MPEG-4. The alpha-plane of a VO defines its shape at a given instance in time and hence determines the boundary of its texture. In packet-based networks, shape, motion, and texture are subject to loss. While there has been considerable attention paid to the concealment of texture and motion errors, little has been done in the field of shape error concealment. In this paper we propose a post-processing shape error concealment technique that uses the motion compensated boundary information of the previously received alpha-plane. The proposed approach is based on matching received boundary segments in the current frame to the boundary in the previous frame. This matching is achieved by finding a maximally smooth motion vector field. After the current boundary segments are matched to the previous boundary, the missing boundary pieces are reconstructed by motion compensation. Experimental results demonstrating the performance of the proposed motion compensated shape error concealment method, and comparing it with the previously proposed weighted side matching method are presented.
Observation of Polarization-Locked Vector Solitons in an Optical Fiber
NASA Astrophysics Data System (ADS)
Cundiff, S. T.; Collings, B. C.; Akhmediev, N. N.; Soto-Crespo, J. M.; Bergman, K.; Knox, W. H.
1999-05-01
We observe polarization-locked vector solitons in a mode-locked fiber laser. Temporal vector solitons have components along both birefringent axes. Despite different phase velocities due to linear birefringence, the relative phase of the components is locked at +/-π/2. The value of +/-π/2 and component magnitudes agree with a simple analysis of the Kerr nonlinearity. These fragile phase-locked vector solitons have been the subject of much theoretical conjecture, but have previously eluded experimental observation.
Lu, Zhao; Sun, Jing; Butts, Kenneth
2014-05-01
Support vector regression for approximating nonlinear dynamic systems is more delicate than the approximation of indicator functions in support vector classification, particularly for systems that involve multitudes of time scales in their sampled data. The kernel used for support vector learning determines the class of functions from which a support vector machine can draw its solution, and the choice of kernel significantly influences the performance of a support vector machine. In this paper, to bridge the gap between wavelet multiresolution analysis and kernel learning, the closed-form orthogonal wavelet is exploited to construct new multiscale asymmetric orthogonal wavelet kernels for linear programming support vector learning. The closed-form multiscale orthogonal wavelet kernel provides a systematic framework to implement multiscale kernel learning via dyadic dilations and also enables us to represent complex nonlinear dynamics effectively. To demonstrate the superiority of the proposed multiscale wavelet kernel in identifying complex nonlinear dynamic systems, two case studies are presented that aim at building parallel models on benchmark datasets. The development of parallel models that address the long-term/mid-term prediction issue is more intricate and challenging than the identification of series-parallel models where only one-step ahead prediction is required. Simulation results illustrate the effectiveness of the proposed multiscale kernel learning.
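scikit-learn's SVR accepts a user-supplied kernel function, which makes it easy to experiment with multiscale wavelet-style kernels. The sketch below uses a Morlet-like product kernel summed over a few scales purely as an illustration; it is not the closed-form orthogonal wavelet kernel constructed in the paper, and the data, scales, and hyperparameters are assumptions.

```python
import numpy as np
from sklearn.svm import SVR

def morlet_multiscale_kernel(X, Y, scales=(1.0, 2.0, 4.0)):
    """Illustrative multiscale wavelet-style kernel: a sum over dyadic-like
    scales of a Morlet-type product kernel of the pairwise differences."""
    K = np.zeros((X.shape[0], Y.shape[0]))
    for a in scales:
        U = (X[:, None, :] - Y[None, :, :]) / a        # pairwise differences
        K += np.prod(np.cos(1.75 * U) * np.exp(-0.5 * U**2), axis=-1)
    return K

# Hypothetical identification data (features = lagged input/output samples).
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 4))
y = np.sin(3 * X[:, 0]) + 0.3 * X[:, 1] ** 2 + 0.05 * rng.normal(size=200)

model = SVR(kernel=morlet_multiscale_kernel, C=10.0, epsilon=0.01).fit(X, y)
y_hat = model.predict(X)
```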
Implementation and Assessment of Advanced Analog Vector-Matrix Processor
NASA Technical Reports Server (NTRS)
Gary, Charles K.; Bualat, Maria G.; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
This paper discusses the design and implementation of an analog optical vector-matrix coprocessor with a throughput of 128 Mops for a personal computer. Vector-matrix calculations are inherently parallel, providing a promising domain for the use of optical calculators. However, to date, digital optical systems have proven too cumbersome to replace electronics, and analog processors have not demonstrated sufficient accuracy in large scale systems. The goal of the work described in this paper is to demonstrate a viable optical coprocessor for linear operations. The analog optical processor presented has been integrated with a personal computer to provide full functionality and is the first demonstration of an optical linear algebra processor with a throughput greater than 100 Mops. The optical vector-matrix processor consists of a laser diode source, an acousto-optical modulator array to input the vector information, a liquid crystal spatial light modulator to input the matrix information, an avalanche photodiode array to read out the result vector of the vector-matrix multiplication, as well as transport optics and the electronics necessary to drive the optical modulators and interface to the computer. The intent of this research is to provide a low cost, highly energy efficient coprocessor for linear operations. Measurements of the analog accuracy of the processor performing 128 Mops are presented along with an assessment of the implications for future systems. A range of noise sources, including cross-talk, source amplitude fluctuations, shot noise at the detector, and non-linearities of the optoelectronic components are measured and compared to determine the most significant source of error. The possibilities for reducing these sources of error are discussed. Also, the total error is compared with that expected from a statistical analysis of the individual components and their relation to the vector-matrix operation. The sufficiency of the measured accuracy of the processor is compared with that required for a range of typical problems. Calculations resolving alloy concentrations from spectral plume data of rocket engines are implemented on the optical processor, demonstrating its sufficiency for this problem. We also show how this technology can be easily extended to a 100 x 100 10 MHz (200 Gops) processor.
Pauchard, Y; Smith, M; Mintchev, M
2004-01-01
Magnetic resonance imaging (MRI) suffers from geometric distortions arising from various sources. One such source is the non-linearities associated with the presence of metallic implants, which can profoundly distort the obtained images. These non-linearities result in pixel shifts and intensity changes in the vicinity of the implant, often precluding any meaningful assessment of the entire image. This paper presents a method for correcting these distortions based on non-rigid image registration techniques. Two images from a modelled three-dimensional (3D) grid phantom were subjected to point-based thin-plate spline registration. The reference image (without distortions) was obtained from a grid model including a spherical implant, and the corresponding test image containing the distortions was obtained using a previously reported technique for spatial modelling of magnetic susceptibility artifacts. After identifying the non-recoverable area in the distorted image, the calculated spline model was able to quantitatively account for the distortions, thus facilitating their compensation. Upon completion of the compensation procedure, the non-recoverable area was removed from the reference image and the latter was compared to the compensated image. A quantitative assessment of the goodness of the proposed compensation technique is presented.
Acceleration and torque feedback for robotic control - Experimental results
NASA Technical Reports Server (NTRS)
McInroy, John E.; Saridis, George N.
1990-01-01
Gross motion control of robotic manipulators typically requires significant on-line computations to compensate for nonlinear dynamics due to gravity, Coriolis, centripetal, and friction nonlinearities. One controller proposed by Luo and Saridis avoids these computations by feeding back joint acceleration and torque. This study implements the controller on a Puma 600 robotic manipulator. Joint acceleration measurement is obtained by measuring linear accelerations of each joint, and deriving a computationally efficient transformation from the linear measurements to the angular accelerations. Torque feedback is obtained by using the previous torque sent to the joints. The implementation has stability problems on the Puma 600 due to the extremely high gains inherent in the feedback structure. Since these high gains excite frequency modes in the Puma 600, the algorithm is modified to decrease the gain inherent in the feedback structure. The resulting compensator is stable and insensitive to high frequency unmodeled dynamics. Moreover, a second compensator is proposed which uses acceleration and torque feedback, but still allows nonlinear terms to be fed forward. Thus, by feeding the increment in the easily calculated gravity terms forward, improved responses are obtained. Both proposed compensators are implemented, and the real time results are compared to those obtained with the computed torque algorithm.
NASA Astrophysics Data System (ADS)
Tian, Lizhi; Xiong, Zhenhua; Wu, Jianhua; Ding, Han
2016-09-01
Motion control of the piezoactuator system over broadband frequencies is limited due to its inherent hysteresis and system dynamics. One of the suggested ways is to use a feedforward controller to linearize the input-output relationship of the piezoactuator system. Although there have been many feedforward approaches, it is still a challenge to develop a feedforward controller for the piezoactuator system at high frequency. Hence, this paper presents a comprehensive inversion approach in consideration of the coupling of hysteresis and dynamics. In this work, the influence of dynamics compensation on the input-output relationship of the piezoactuator system is investigated first. With system dynamics compensation, the input-output relationship of the piezoactuator system will be further represented as a rate-dependent nonlinearity due to the inevitable dynamics compensation error, especially at high frequency. Based on this result, a feedforward controller composed of a cascade of linear dynamics inversion and rate-dependent nonlinearity inversion is developed. Then, the system identification of the comprehensive inversion approach is proposed. Finally, experimental results show that the proposed approach can improve the performance on tracking of both periodic and non-periodic trajectories at medium and high frequency compared with the conventional feedforward approaches.
Inverting Monotonic Nonlinearities by Entropy Maximization
Solé-Casals, Jordi; López-de-Ipiña Pena, Karmele; Caiafa, Cesar F.
2016-01-01
This paper proposes a new method for blind inversion of a monotonic nonlinear map applied to a sum of random variables. Such mixtures of random variables are found in source separation and Wiener system inversion problems, for example. The importance of the proposed method lies in the fact that it makes it possible to decouple the estimation of the nonlinear part (nonlinear compensation) from the estimation of the linear one (source separation matrix or deconvolution filter), which can be solved by applying any convenient linear algorithm. Our new nonlinear compensation algorithm, the MaxEnt algorithm, generalizes the idea of Gaussianization of the observation by maximizing its entropy instead. We developed two versions of our algorithm based either on a polynomial or a neural network parameterization of the nonlinear function. We provide a sufficient condition on the nonlinear function and the probability distribution that guarantees that the MaxEnt method succeeds in compensating the distortion. Through an extensive set of simulations, MaxEnt is compared with existing algorithms for blind approximation of nonlinear maps. Experiments show that MaxEnt is able to successfully compensate monotonic distortions, outperforming other methods in terms of the obtained signal-to-noise ratio in many important cases, for example when the number of variables in a mixture is small. Besides its ability to compensate nonlinearities, MaxEnt is very robust, i.e. it shows small variability in the results. PMID:27780261
Inverting Monotonic Nonlinearities by Entropy Maximization.
Solé-Casals, Jordi; López-de-Ipiña Pena, Karmele; Caiafa, Cesar F
2016-01-01
This paper proposes a new method for blind inversion of a monotonic nonlinear map applied to a sum of random variables. Such mixtures of random variables are found in source separation and Wiener system inversion problems, for example. The importance of the proposed method lies in the fact that it makes it possible to decouple the estimation of the nonlinear part (nonlinear compensation) from the estimation of the linear one (source separation matrix or deconvolution filter), which can be solved by applying any convenient linear algorithm. Our new nonlinear compensation algorithm, the MaxEnt algorithm, generalizes the idea of Gaussianization of the observation by maximizing its entropy instead. We developed two versions of our algorithm based either on a polynomial or a neural network parameterization of the nonlinear function. We provide a sufficient condition on the nonlinear function and the probability distribution that guarantees that the MaxEnt method succeeds in compensating the distortion. Through an extensive set of simulations, MaxEnt is compared with existing algorithms for blind approximation of nonlinear maps. Experiments show that MaxEnt is able to successfully compensate monotonic distortions, outperforming other methods in terms of the obtained signal-to-noise ratio in many important cases, for example when the number of variables in a mixture is small. Besides its ability to compensate nonlinearities, MaxEnt is very robust, i.e. it shows small variability in the results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, C.T.
Linear and nonlinear photochemistries of 1,4-diazabicyclo[2.2.2]octane (DABCO) are investigated at room temperature by using ArF (193 nm) and KrF (248 nm) lasers. With an unfocused beam geometry, DABCO vapor displays a strong fluorescence when excited at 248 nm, but it shows no detectable emission with 193-nm excitation. The linear photochemistry quantum yield for DABCO is determined as φ_p(248 nm) ≈ 0.1 and φ_p(193 nm) ≈ 0.3. The main stable photochemical products are analyzed as C₂H₄ and C₂H₂ for 248- and 193-nm excitation, respectively. When focused beam excitation is used, both ArF and KrF lasers dissociate DABCO molecules and give three strong radical emissions: CN*(B̃²Σ → X̃²Σ⁺), CH*(Ã²Δ → X̃²Π), and C₂*(D̃³Πg → ã³Πu). The time behavior, the laser power dependence, and the sample pressure dependence of these emissive radicals are examined. The possible mechanisms for the Rydberg-state photochemistry of DABCO are discussed.
Compositional Verification with Abstraction, Learning, and SAT Solving
2015-05-01
arithmetic, and bit-vectors (currently, via bit-blasting). The front-end is based on an existing tool called UFO [8], which converts C programs to the Horn... It encodes safety of... The encoding in Horn-SMT only uses the theory of Linear Rational Arithmetic. All experiments were carried out on an Intel® Core™2 Quad
Lerman, Gilad M; Levy, Uriel
2007-08-01
We study the tight-focusing properties of spatially variant vector optical fields with elliptical symmetry of linear polarization. We find the eccentricity of the incident polarized light to be an important parameter that provides an additional degree of freedom for controlling the field properties at the focus, allowing the field distribution at the focus to be matched to the specific application. Applications of these space-variant polarized beams range from lithography and optical storage to particle beam trapping and material processing.
Modeling Dengue vector population using remotely sensed data and machine learning.
Scavuzzo, Juan M; Trucco, Francisco; Espinosa, Manuel; Tauro, Carolina B; Abril, Marcelo; Scavuzzo, Carlos M; Frery, Alejandro C
2018-05-16
Mosquitoes are vectors of many human diseases. In particular, Aedes ægypti (Linnaeus) is the main vector for the Chikungunya, Dengue, and Zika viruses in Latin America, and it represents a global threat. Public health policies that aim at combating this vector require dependable and timely information, which is usually expensive to obtain with field campaigns. For this reason, several efforts have been made to use remote sensing due to its reduced cost. The present work includes the temporal modeling of the oviposition activity (measured weekly on 50 ovitraps in a north Argentinean city) of Aedes ægypti (Linnaeus), based on time series of data extracted from operational earth observation satellite images. We use NDVI, NDWI, night and day LST, and TRMM-GPM rainfall from 2012 to 2016 as predictive variables. In contrast to previous works, which use linear models, we employ machine learning techniques using completely accessible open source toolkits. These models have the advantages of being non-parametric and capable of describing nonlinear relationships between variables. Specifically, in addition to two linear approaches, we assess a support vector machine, an artificial neural network, a K-nearest neighbors regressor, and a decision tree regressor. Considerations are made on parameter tuning and the validation and training approach. The results are compared to linear models used in previous works with similar data sets for generating temporal predictive models. These new tools perform better than the linear approaches; in particular, K-nearest neighbors regression (KNNR) performs best. These results provide better alternatives to be implemented operationally in the Argentine geospatial risk system that has been running since 2012. Copyright © 2018 Elsevier B.V. All rights reserved.
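A rough sketch of the kind of comparison reported above, pitting K-nearest neighbors regression against a linear model on simulated weekly predictors with a time-series cross-validation split; the data, feature names, and parameters are hypothetical, not the study's series.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import TimeSeriesSplit, cross_val_score
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical weekly series: remote-sensing predictors and ovitrap egg counts.
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 5))            # NDVI, NDWI, LST day, LST night, rainfall
y = 20 + 8 * X[:, 0] - 5 * X[:, 2] + 3 * np.tanh(X[:, 4]) + rng.normal(scale=2, size=200)

models = {
    "linear regression": LinearRegression(),
    "KNN regression": make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=5)),
}
cv = TimeSeriesSplit(n_splits=5)         # respect the temporal ordering of the series
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=cv, scoring="r2").mean()
    print(f"{name:18s} mean CV R^2 = {r2:.3f}")
```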
A linear-dendritic cationic vector for efficient DNA grasp and delivery.
Yang, Bin; Sun, Yun-xia; Yi, Wen-jie; Yang, Juan; Liu, Chen-wei; Cheng, Han; Feng, Jun; Zhang, Xian-zheng; Zhuo, Ren-xi
2012-07-01
This paper presents an attempt to design an efficient and biocompatible cationic gene vector via structural optimization that favors the efficient utilization of amine groups for DNA condensation. To this end, a linear-dendritic block copolymer of methoxyl-poly(ethylene glycol)-dendritic polyglycerol-graft-tris(2-aminoethyl)amine (mPEG-DPG-g-TAEA) was prepared with specially designed multiple functions including strong DNA affinity, endosomal buffering and expected serum-tolerance. Based on the transfection in serum-free and serum-conditioned media, the influences of the polymer structures including the degree of polymerization of DPG and TAEA substitution degree were explored. As compared to polyethylenimine (M(w)=5 kDa) (PEI5k) with similar molecular weight and higher amine density, mPEG-DPG-g-TAEA displayed comparably high DNA affinity due to the special linear-dendritic architecture. Consequently, at very low N/P ratio, mPEG-DPG-g-TAEA vectors could mediate efficient in vitro luciferase expression at levels that are comparable with or even superior to the commercially available Lipofectamine™ 2000, while being apparently higher than PEI5k. The designed vectors exhibit considerably higher cell biocompatibility and better resistance against bovine serum albumin adsorption than PEI5k. The stability of the complexes on coincubation with heparin was found to be largely dependent on the polymer structure. As concluded from the comparative transfection study in the absence/presence of chloroquine, it is likely that the polycation itself could produce endosomal buffering. This linear-dendritic vector shows promising potential for the application of gene delivery. Copyright © 2012 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
Klamt, Steffen; Regensburger, Georg; Gerstl, Matthias P; Jungreuthmayer, Christian; Schuster, Stefan; Mahadevan, Radhakrishnan; Zanghellini, Jürgen; Müller, Stefan
2017-04-01
Elementary flux modes (EFMs) emerged as a formal concept to describe metabolic pathways and have become an established tool for constraint-based modeling and metabolic network analysis. EFMs are characteristic (support-minimal) vectors of the flux cone that contains all feasible steady-state flux vectors of a given metabolic network. EFMs account for (homogeneous) linear constraints arising from reaction irreversibilities and the assumption of steady state; however, other (inhomogeneous) linear constraints, such as minimal and maximal reaction rates frequently used by other constraint-based techniques (such as flux balance analysis [FBA]), cannot be directly integrated. These additional constraints further restrict the space of feasible flux vectors and turn the flux cone into a general flux polyhedron in which the concept of EFMs is not directly applicable anymore. For this reason, there has been a conceptual gap between EFM-based (pathway) analysis methods and linear optimization (FBA) techniques, as they operate on different geometric objects. One approach to overcome these limitations was proposed ten years ago and is based on the concept of elementary flux vectors (EFVs). Only recently has the community started to recognize the potential of EFVs for metabolic network analysis. In fact, EFVs exactly represent the conceptual development required to generalize the idea of EFMs from flux cones to flux polyhedra. This work aims to present a concise theoretical and practical introduction to EFVs that is accessible to a broad audience. We highlight the close relationship between EFMs and EFVs and demonstrate that almost all applications of EFMs (in flux cones) are possible for EFVs (in flux polyhedra) as well. In fact, certain properties can only be studied with EFVs. Thus, we conclude that EFVs provide a powerful and unifying framework for constraint-based modeling of metabolic networks.
NASA Astrophysics Data System (ADS)
Ohtsu, Masayasu
1991-04-01
An application of moment tensor analysis to acoustic emission (AE) is studied to elucidate the crack types and orientations of AE sources. In the analysis, a simplified treatment is desirable, because hundreds of AE records are obtained from just one experiment and sophisticated treatment is realistically cumbersome. Consequently, a moment tensor inversion based on P-wave amplitude is employed to determine the six independent tensor components. Selecting only the P-wave portion of the full-space Green's function of a homogeneous and isotropic material, a computer code named SiGMA (simplified Green's functions for the moment tensor analysis) is developed for the AE inversion analysis. To classify crack type and to determine crack orientation from the moment tensor components, a unified decomposition of the eigenvalues into a double-couple (DC) part, a compensated linear vector dipole (CLVD) part, and an isotropic part is proposed. The aim of the decomposition is to determine the proportions of the shear contribution (DC) and the tensile contribution (CLVD + isotropic) of AE sources and to classify cracks according to the crack type of the dominant motion. Crack orientations determined from the eigenvectors are presented as crack-opening vectors for tensile cracks and fault motion vectors for shear cracks, instead of stereonets. The SiGMA inversion and the unified decomposition are applied to synthetic data and to AE waveforms detected during an in situ hydrofracturing test. To check the accuracy of the procedure, numerical experiments are performed on the synthetic waveforms, including cases with 10% random noise added. Results show reasonable agreement with the assumed crack configurations. Although the maximum error is approximately 10% with respect to the ratios, the differences in crack orientations are less than 7°. AE waveforms detected by eight accelerometers deployed during the hydrofracturing test are analyzed. The crack types and orientations determined are in reasonable agreement with the failure plane predicted from borehole TV observation. The results suggest that tensile cracks are generated first at weak seams and that shear cracks then follow on the opened joints.
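For readers who want to experiment with the DC/CLVD/isotropic split mentioned above, the sketch below implements one commonly used eigenvalue decomposition of a symmetric moment tensor; the percentage conventions are an assumption and may differ from those coded in SiGMA.

```python
# Hedged sketch: decompose a symmetric moment tensor into isotropic (ISO),
# double-couple (DC) and compensated linear vector dipole (CLVD) fractions.
# One common convention is used; SiGMA may define the percentages differently.
import numpy as np

def iso_dc_clvd(M):
    M = 0.5 * (M + M.T)                                  # enforce symmetry
    m_iso = np.trace(M) / 3.0                            # isotropic part
    evals = np.linalg.eigvalsh(M - m_iso * np.eye(3))    # deviatoric eigenvalues
    dev = evals[np.argsort(np.abs(evals))]               # sort by absolute value
    m_max = dev[-1]                                      # largest-magnitude deviatoric eigenvalue
    eps = -dev[0] / abs(m_max) if m_max != 0 else 0.0    # CLVD measure, |eps| <= 0.5
    p_iso = abs(m_iso) / (abs(m_iso) + abs(m_max)) if m_max != 0 else 1.0
    p_clvd = 2.0 * abs(eps) * (1.0 - p_iso)
    p_dc = 1.0 - p_iso - p_clvd
    return p_iso, p_dc, p_clvd

# Example: a pure double couple (strike-slip) should give roughly 100% DC
M = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
print(iso_dc_clvd(M))   # approximately (0.0, 1.0, 0.0)
```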
Dual linear structured support vector machine tracking method via scale correlation filter
NASA Astrophysics Data System (ADS)
Li, Weisheng; Chen, Yanquan; Xiao, Bin; Feng, Chen
2018-01-01
Adaptive tracking-by-detection methods based on the structured support vector machine (SVM) have performed well on recent visual tracking benchmarks. However, these methods did not adopt an effective strategy for object scale estimation, which limits the overall tracking performance. We present a tracking method based on a dual linear structured support vector machine (DLSSVM) with a discriminative scale correlation filter. The collaborative tracker, composed of a DLSSVM model and a scale correlation filter, obtains good results in tracking target position and scale estimation. The fast Fourier transform is applied for detection. Extensive experiments show that our tracking approach outperforms many popular top-ranking trackers. On a benchmark including 100 challenging video sequences, the average precision of the proposed method is 82.8%.
Steering of Frequency Standards by the Use of Linear Quadratic Gaussian Control Theory
NASA Technical Reports Server (NTRS)
Koppang, Paul; Leland, Robert
1996-01-01
Linear quadratic Gaussian control is a technique that uses Kalman filtering to estimate a state vector used for input into a control calculation. A control correction is calculated by minimizing a quadratic cost function that is dependent on both the state vector and the control amount. Different penalties, chosen by the designer, are assessed by the controller as the state vector and control amount vary from given optimal values. With this feature controllers can be designed to force the phase and frequency differences between two standards to zero either more or less aggressively depending on the application. Data will be used to show how using different parameters in the cost function analysis affects the steering and the stability of the frequency standards.
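A minimal sketch of the two ingredients described above, a steady-state Kalman estimate of a [phase, frequency] state and an LQR gain obtained from a quadratic cost, is given below; the clock model, noise levels, and weights are illustrative assumptions rather than values used with real frequency standards.

```python
# Hedged sketch of LQG-style steering: a discrete Kalman filter estimates the
# [phase, frequency] difference between two standards, and an LQR gain trades
# off state error against control effort. All numbers are illustrative assumptions.
import numpy as np
from scipy.linalg import solve_discrete_are

tau = 1.0                            # update interval (assumed)
A = np.array([[1.0, tau],            # phase accumulates the frequency offset
              [0.0, 1.0]])
B = np.array([[0.0],
              [1.0]])                # steering corrections act on the frequency state
H = np.array([[1.0, 0.0]])           # only the phase difference is measured

Qw = np.diag([1e-6, 1e-8])           # process noise covariance (clock instability, assumed)
Rv = np.array([[1e-4]])              # measurement noise covariance (assumed)

# Steady-state Kalman gain for the state estimator
P = solve_discrete_are(A.T, H.T, Qw, Rv)
L = P @ H.T @ np.linalg.inv(H @ P @ H.T + Rv)

# LQR gain: the Qc/Rc penalties set how aggressively phase and frequency
# differences are driven to zero versus how much control is applied.
Qc = np.diag([1.0, 10.0])
Rc = np.array([[100.0]])
S = solve_discrete_are(A, B, Qc, Rc)
K = np.linalg.inv(Rc + B.T @ S @ B) @ (B.T @ S @ A)

# One estimate-and-steer update from a phase-difference measurement z
x_hat = np.zeros((2, 1)); u = np.zeros((1, 1))
z = np.array([[2.5e-3]])
x_pred = A @ x_hat + B @ u
x_hat = x_pred + L @ (z - H @ x_pred)
u = -K @ x_hat                       # correction applied to the steered standard
print("state estimate:", x_hat.ravel(), "control:", u.ravel())
```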
NASA Astrophysics Data System (ADS)
Chui, Siu Lit; Lu, Ya Yan
2004-03-01
Wide-angle full-vector beam propagation methods (BPMs) for three-dimensional wave-guiding structures can be derived on the basis of rational approximants of a square root operator or its exponential (i.e., the one-way propagator). While the less accurate BPM based on the slowly varying envelope approximation can be efficiently solved by the alternating direction implicit (ADI) method, the wide-angle variants involve linear systems that are more difficult to handle. We present an efficient solver for these linear systems that is based on a Krylov subspace method with an ADI preconditioner. The resulting wide-angle full-vector BPM is used to simulate the propagation of wave fields in a Y branch and a taper.
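The sketch below shows the general pattern of solving such systems with a preconditioned Krylov method in SciPy; an incomplete-LU factorization stands in for the ADI preconditioner of the paper, so it illustrates the solver interface rather than the authors' actual operator.

```python
# Hedged sketch: Krylov solution of a sparse linear system with a preconditioner.
# ILU is used here only as a stand-in for the ADI preconditioner described above,
# and the matrix is a toy complex tridiagonal operator, not a BPM discretization.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
main = (1.96 + 0.02j)                         # illustrative complex diagonal entry
A = sp.diags([main * np.ones(n), -np.ones(n - 1), -np.ones(n - 1)],
             [0, -1, 1], format="csc")
b = np.ones(n, dtype=complex)

ilu = spla.spilu(A, drop_tol=1e-4)            # incomplete LU factorization
M = spla.LinearOperator(A.shape, matvec=ilu.solve, dtype=complex)

x, info = spla.gmres(A, b, M=M, restart=30)   # preconditioned GMRES
print("converged" if info == 0 else f"info={info}",
      "residual =", np.linalg.norm(b - A @ x))
```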
2012-01-01
Background While safer than their viral counterparts, conventional non-viral gene delivery DNA vectors offer a limited safety profile. They often result in the delivery of unwanted prokaryotic sequences, antibiotic resistance genes, and the bacterial origins of replication to the target, which may lead to the stimulation of unwanted immunological responses due to their chimeric DNA composition. Such vectors may also impart the potential for chromosomal integration, thus potentiating oncogenesis. We sought to engineer an in vivo system for the quick and simple production of safer DNA vector alternatives that were devoid of non-transgene bacterial sequences and would lethally disrupt the host chromosome in the event of an unwanted vector integration event. Results We constructed a parent eukaryotic expression vector possessing a specialized manufactured multi-target site called “Super Sequence”, and engineered E. coli cells (R-cell) that conditionally produce phage-derived recombinase Tel (PY54), TelN (N15), or Cre (P1). Passage of the parent plasmid vector through R-cells under optimized conditions, resulted in rapid, efficient, and one step in vivo generation of mini lcc—linear covalently closed (Tel/TelN-cell), or mini ccc—circular covalently closed (Cre-cell), DNA constructs, separated from the backbone plasmid DNA. Site-specific integration of lcc plasmids into the host chromosome resulted in chromosomal disruption and 10^5-fold lower viability than that seen with the ccc counterpart. Conclusion We offer a high efficiency mini DNA vector production system that confers simple, rapid and scalable in vivo production of mini lcc DNA vectors that possess all the benefits of “minicircle” DNA vectors and virtually eliminate the potential for undesirable vector integration events. PMID:23216697
Nafissi, Nafiseh; Slavcev, Roderick
2012-12-06
While safer than their viral counterparts, conventional non-viral gene delivery DNA vectors offer a limited safety profile. They often result in the delivery of unwanted prokaryotic sequences, antibiotic resistance genes, and the bacterial origins of replication to the target, which may lead to the stimulation of unwanted immunological responses due to their chimeric DNA composition. Such vectors may also impart the potential for chromosomal integration, thus potentiating oncogenesis. We sought to engineer an in vivo system for the quick and simple production of safer DNA vector alternatives that were devoid of non-transgene bacterial sequences and would lethally disrupt the host chromosome in the event of an unwanted vector integration event. We constructed a parent eukaryotic expression vector possessing a specialized manufactured multi-target site called "Super Sequence", and engineered E. coli cells (R-cell) that conditionally produce phage-derived recombinase Tel (PY54), TelN (N15), or Cre (P1). Passage of the parent plasmid vector through R-cells under optimized conditions, resulted in rapid, efficient, and one step in vivo generation of mini lcc--linear covalently closed (Tel/TelN-cell), or mini ccc--circular covalently closed (Cre-cell), DNA constructs, separated from the backbone plasmid DNA. Site-specific integration of lcc plasmids into the host chromosome resulted in chromosomal disruption and 10(5) fold lower viability than that seen with the ccc counterpart. We offer a high efficiency mini DNA vector production system that confers simple, rapid and scalable in vivo production of mini lcc DNA vectors that possess all the benefits of "minicircle" DNA vectors and virtually eliminate the potential for undesirable vector integration events.
Linear Test Bed. Volume 2: Test Bed No. 2. [linear aerospike test bed for thrust vector control
NASA Technical Reports Server (NTRS)
1974-01-01
Test bed No. 2 consists of 10 combustors welded in banks of 5 to 2 symmetrical tubular nozzle assemblies, an upper stationary thrust frame, a lower thrust frame which can be hinged, a power package, a triaxial combustion wave ignition system, a pneumatic control system, pneumatically actuated propellant valves, a purge and drain system, and an electrical control system. The power package consists of the Mark 29-F fuel turbopump, the Mark 29-0 oxidizer turbopump, a gas generator assembly, and propellant ducting. The system, designated as a linear aerospike system, was designed to demonstrate the feasibility of the concept and to explore technology related to thrust vector control, thrust vector optimization, improved sequencing and control, and advanced ignition systems. The propellants are liquid oxygen/liquid hydrogen. The system was designed to operate at 1200-psia chamber pressure at an engine mixture ratio of 5.5. With 10 combustors, the sea level thrust is 95,000 pounds.
NASA Astrophysics Data System (ADS)
Park, Kyoung-Duck; Raschke, Markus B.
2018-05-01
Controlling the propagation and polarization vectors in linear and nonlinear optical spectroscopy enables probing of the anisotropy of optical responses, providing structural-symmetry-selective contrast in optical imaging. Here we present a novel tilted antenna-tip approach to control the optical vector field by breaking the axial symmetry of the nano-probe in tip-enhanced near-field microscopy. This gives rise to a localized plasmonic antenna effect with significantly enhanced optical field vectors with control of both in-plane and out-of-plane components. We use the resulting vector-field specificity in the symmetry-selective nonlinear optical response of second-harmonic generation (SHG) for a generalized approach to optical nano-crystallography and -imaging. In tip-enhanced SHG imaging of monolayer MoS2 films and single-crystalline ferroelectric YMnO3, we reveal nano-crystallographic details of domain boundaries and domain topology with enhanced sensitivity and nanoscale spatial resolution. The approach is applicable to any anisotropic linear and nonlinear optical response, and provides for optical nano-crystallographic imaging of molecular or quantum materials.
An implementation of the QMR method based on coupled two-term recurrences
NASA Technical Reports Server (NTRS)
Freund, Roland W.; Nachtigal, Noeel M.
1992-01-01
The authors have proposed a new Krylov subspace iteration, the quasi-minimal residual algorithm (QMR), for solving non-Hermitian linear systems. In the original implementation of the QMR method, the Lanczos process with look-ahead is used to generate basis vectors for the underlying Krylov subspaces. In the Lanczos algorithm, these basis vectors are computed by means of three-term recurrences. It has been observed that, in finite precision arithmetic, vector iterations based on three-term recursions are usually less robust than mathematically equivalent coupled two-term vector recurrences. This paper presents a look-ahead algorithm that constructs the Lanczos basis vectors by means of coupled two-term recursions. Implementation details are given, and the look-ahead strategy is described. A new implementation of the QMR method, based on this coupled two-term algorithm, is described. A simplified version of the QMR algorithm without look-ahead is also presented, and the special case of QMR for complex symmetric linear systems is considered. Results of numerical experiments comparing the original and the new implementations of the QMR method are reported.
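SciPy ships a QMR implementation (without look-ahead) that can be used to reproduce the basic behavior on a small non-Hermitian test system; the sketch below is only a usage illustration and is not the coupled two-term or look-ahead algorithm described above.

```python
# Hedged sketch: solving a non-Hermitian sparse system with SciPy's QMR routine.
# This is only a usage illustration; it is not the look-ahead or coupled
# two-term implementation described in the abstract.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import qmr

n = 500
rng = np.random.default_rng(1)
# Non-symmetric tridiagonal test matrix (diagonally dominant for easy convergence)
A = sp.diags([4.0 * np.ones(n),
              -1.0 + 0.3 * rng.standard_normal(n - 1),
              -1.0 - 0.3 * rng.standard_normal(n - 1)],
             [0, -1, 1], format="csr")
b = rng.standard_normal(n)

x, info = qmr(A, b)
print("info =", info, "relative residual =",
      np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```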
Kim, Seung-Cheol; Dong, Xiao-Bin; Kwon, Min-Woo; Kim, Eun-Soo
2013-05-06
A novel approach for fast generation of video holograms of three-dimensional (3-D) moving objects using a motion compensation-based novel-look-up-table (MC-N-LUT) method is proposed. Motion compensation has been widely employed in compression of conventional 2-D video data because of its ability to exploit the high temporal correlation between successive video frames. Here, this concept of motion compensation is applied to the N-LUT for the first time, based on its inherent property of shift-invariance. That is, motion vectors of 3-D moving objects are extracted between two consecutive video frames, and with them the motions of the 3-D objects at each frame are compensated. Through this process, the 3-D object data to be calculated for the video holograms are massively reduced, which results in a dramatic increase of the computational speed of the proposed method. Experimental results with three kinds of 3-D video scenarios reveal that the average number of calculated object points and the average calculation time for one object point of the proposed method are reduced to 86.95% and 86.53%, and to 34.99% and 32.30%, respectively, compared to those of the conventional N-LUT and temporal redundancy-based N-LUT (TR-N-LUT) methods.
Transfer Alignment Error Compensator Design Based on Robust State Estimation
NASA Astrophysics Data System (ADS)
Lyou, Joon; Lim, You-Chol
This paper examines the transfer alignment problem of the StrapDown Inertial Navigation System (SDINS), which is subject to the ship's roll and pitch. Major error sources for velocity and attitude matching are the lever arm effect, measurement time delay and ship-body flexure. To reduce these alignment errors, an error compensation method based on state augmentation and robust state estimation is devised. A linearized error model for the velocity and attitude matching transfer alignment system is derived first by linearizing the nonlinear measurement equation with respect to its time delay and the dominant Y-axis flexure, and by augmenting the delay state and flexure state into the conventional linear state equations. Then an H∞ filter is introduced to account for modeling uncertainties of the time delay and the ship-body flexure. The simulation results show that this method considerably decreases azimuth alignment errors.
Modeling and control of flexible structures
NASA Technical Reports Server (NTRS)
Gibson, J. S.; Mingori, D. L.
1988-01-01
This monograph presents integrated modeling and controller design methods for flexible structures. The controllers, or compensators, developed are optimal in the linear-quadratic-Gaussian sense. The performance objectives, sensor and actuator locations and external disturbances influence both the construction of the model and the design of the finite dimensional compensator. The modeling and controller design procedures are carried out in parallel to ensure compatibility of these two aspects of the design problem. Model reduction techniques are introduced to keep both the model order and the controller order as small as possible. A linear distributed, or infinite dimensional, model is the theoretical basis for most of the text, but finite dimensional models arising from both lumped-mass and finite element approximations also play an important role. A central purpose of the approach here is to approximate an optimal infinite dimensional controller with an implementable finite dimensional compensator. Both convergence theory and numerical approximation methods are given. Simple examples are used to illustrate the theory.
Li, Zhe; Erkilinc, M Sezer; Galdino, Lidia; Shi, Kai; Thomsen, Benn C; Bayvel, Polina; Killey, Robert I
2016-12-12
Single-polarization direct-detection transceivers may offer advantages over digital coherent technology for some metro, back-haul, access and inter-data-center applications, since they provide low-cost, low-complexity solutions. However, a direct-detection receiver is a square-law device and therefore introduces nonlinearity upon photodetection, which results in signal distortion due to signal-signal beat interference (SSBI). Consequently, it is desirable to develop effective and low-cost SSBI compensation techniques to improve the performance of such transceivers. In this paper, we compare the performance of a number of recently proposed digital signal processing-based SSBI compensation schemes, including the use of single- and two-stage linearization filters, an iterative linearization filter and an SSBI estimation and cancellation technique. Their performance is assessed experimentally using a 7 × 25 Gb/s wavelength division multiplexed (WDM) single-sideband 16-QAM Nyquist-subcarrier modulation system operating at a net information spectral density of 2.3 (b/s)/Hz.
Temperature and neuronal circuit function: compensation, tuning and tolerance.
Robertson, R Meldrum; Money, Tomas G A
2012-08-01
Temperature has widespread and diverse effects on different subcellular components of neuronal circuits making it difficult to predict precisely the overall influence on output. Increases in temperature generally increase the output rate in either an exponential or a linear manner. Circuits with a slow output tend to respond exponentially with relatively high Q(10)s, whereas those with faster outputs tend to respond in a linear fashion with relatively low temperature coefficients. Different attributes of the circuit output can be compensated by virtue of opposing processes with similar temperature coefficients. At the extremes of the temperature range, differences in the temperature coefficients of circuit mechanisms cannot be compensated and the circuit fails, often with a reversible loss of ion homeostasis. Prior experience of temperature extremes activates conserved processes of phenotypic plasticity that tune neuronal circuits to be better able to withstand the effects of temperature and to recover more rapidly from failure. Copyright © 2012 Elsevier Ltd. All rights reserved.
On bipartite pure-state entanglement structure in terms of disentanglement
NASA Astrophysics Data System (ADS)
Herbut, Fedor
2006-12-01
Schrödinger's disentanglement [E. Schrödinger, Proc. Cambridge Philos. Soc. 31, 555 (1935)], i.e., remote state decomposition, as a physical way to study entanglement, is carried one step further with respect to previous work in investigating the qualitative side of entanglement in any bipartite state vector. Remote measurement (or, equivalently, remote orthogonal state decomposition) from previous work is generalized to remote linearly independent complete state decomposition both in the nonselective and the selective versions. The results are displayed in terms of commutative square diagrams, which show the power and beauty of the physical meaning of the (antiunitary) correlation operator inherent in the given bipartite state vector. This operator, together with the subsystem states (reduced density operators), constitutes the so-called correlated subsystem picture. It is the central part of the antilinear representation of a bipartite state vector, and it is a kind of core of its entanglement structure. The generalization of previously elaborated disentanglement expounded in this article is a synthesis of the antilinear representation of bipartite state vectors, which is reviewed, and the relevant results of [Cassinelli et al., J. Math. Anal. Appl. 210, 472 (1997)] in mathematical analysis, which are summed up. Linearly independent bases (finite or infinite) are shown to be almost as useful in some quantum mechanical studies as orthonormal ones. Finally, it is shown that linearly independent remote pure-state preparation carries the highest probability of occurrence. This singles out linearly independent remote influence from all possible ones.
Quantitative tissue polarimetry using polar decomposition of 3 x 3 Mueller matrix
NASA Astrophysics Data System (ADS)
Swami, M. K.; Manhas, S.; Buddhiwant, P.; Ghosh, N.; Uppal, A.; Gupta, P. K.
2007-05-01
Polarization properties of any optical system are completely described by a sixteen-element (4 x 4) matrix called the Mueller matrix, which transforms the Stokes vector describing the polarization properties of the incident light into the Stokes vector of the scattered light. Measurement of all the elements of the matrix requires a minimum of sixteen measurements involving both linearly and circularly polarized light. However, for many diagnostic applications, it would be useful if all the polarization parameters of the medium (depolarization (Δ), differential attenuation of two orthogonal polarizations, that is, diattenuation (d), and differential phase retardance of two orthogonal polarizations, i.e., retardance (δ)) could be quantified with linear polarization measurements alone. In this paper we show that for a turbid medium, like biological tissue, where the depolarization of linearly polarized light arises primarily from the randomization of the field vector's direction by multiple scattering, the polarization parameters of the medium can be obtained from the nine Mueller matrix elements involving linear polarization measurements only. Use of the approach for measurement of the polarization parameters (Δ, d and δ) of normal and malignant (squamous cell carcinoma) tissues resected from the human oral cavity is presented.
BLAS- BASIC LINEAR ALGEBRA SUBPROGRAMS
NASA Technical Reports Server (NTRS)
Krogh, F. T.
1994-01-01
The Basic Linear Algebra Subprogram (BLAS) library is a collection of FORTRAN callable routines for employing standard techniques in performing the basic operations of numerical linear algebra. The BLAS library was developed to provide a portable and efficient source of basic operations for designers of programs involving linear algebraic computations. The subprograms available in the library cover the operations of dot product, multiplication of a scalar and a vector, vector plus a scalar times a vector, Givens transformation, modified Givens transformation, copy, swap, Euclidean norm, sum of magnitudes, and location of the largest magnitude element. Since these subprograms are to be used in an ANSI FORTRAN context, the cases of single precision, double precision, and complex data are provided for. All of the subprograms have been thoroughly tested and produce consistent results even when transported from machine to machine. BLAS contains Assembler versions and FORTRAN test code for any of the following compilers: Lahey F77L, Microsoft FORTRAN, or IBM Professional FORTRAN. It requires the Microsoft Macro Assembler and a math co-processor. The PC implementation allows individual arrays of over 64K. The BLAS library was developed in 1979. The PC version was made available in 1986 and updated in 1988.
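For readers who want to exercise the listed operations without the original FORTRAN, the sketch below calls Level-1 BLAS routines through SciPy's thin wrappers; it illustrates the operation set (axpy, dot, norm, sum of magnitudes, index of largest magnitude), not the 1979 library itself.

```python
# Hedged sketch: the core Level-1 BLAS operations named above, called through
# SciPy's wrappers rather than the original FORTRAN library.
import numpy as np
from scipy.linalg.blas import daxpy, ddot, dnrm2, dasum, idamax

x = np.array([1.0, -2.0, 3.0])
y = np.array([4.0, 5.0, -6.0])

y_new = daxpy(x, y, a=2.0)   # y := 2*x + y (vector plus a scalar times a vector)
dot   = ddot(x, y)           # dot product
norm  = dnrm2(x)             # Euclidean norm
asum  = dasum(x)             # sum of magnitudes
imax  = idamax(x)            # index of largest-magnitude element (check 0- vs 1-based
                             # convention of the wrapper version in use)
print(y_new, dot, norm, asum, imax)
```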
AN ADA LINEAR ALGEBRA PACKAGE MODELED AFTER HAL/S
NASA Technical Reports Server (NTRS)
Klumpp, A. R.
1994-01-01
This package extends the Ada programming language to include linear algebra capabilities similar to those of the HAL/S programming language. The package is designed for avionics applications such as Space Station flight software. In addition to the HAL/S built-in functions, the package incorporates the quaternion functions used in the Shuttle and Galileo projects, and routines from LINPAK that solve systems of equations involving general square matrices. Language conventions in this package follow those of HAL/S to the maximum extent practical and minimize the effort required for writing new avionics software and translating existing software into Ada. Valid numeric types in this package include scalar, vector, matrix, and quaternion declarations. (Quaternions are four-component vectors used in representing motion between two coordinate frames). Single precision and double precision floating point arithmetic is available in addition to the standard double precision integer manipulation. Infix operators are used instead of function calls to define dot products, cross products, quaternion products, and mixed scalar-vector, scalar-matrix, and vector-matrix products. The package contains two generic programs: one for floating point, and one for integer. The actual component type is passed as a formal parameter to the generic linear algebra package. The procedures for solving systems of linear equations defined by general matrices include GEFA, GECO, GESL, and GIDI. The HAL/S functions include ABVAL, UNIT, TRACE, DET, INVERSE, TRANSPOSE, GET, PUT, FETCH, PLACE, and IDENTITY. This package is written in Ada (Version 1.2) for batch execution and is machine independent. The linear algebra software depends on nothing outside the Ada language except for a call to a square root function for floating point scalars (such as SQRT in the DEC VAX MATHLIB library). This program was developed in 1989, and is a copyrighted work with all copyright vested in NASA.
Saravanan, Vijayakumar; Gautham, Namasivayam
2015-10-01
Proteins embody epitopes that serve as their antigenic determinants. Epitopes occupy a central place in integrative biology, not to mention as targets for novel vaccine, pharmaceutical, and systems diagnostics development. The presence of T-cell and B-cell epitopes has been extensively studied due to their potential in synthetic vaccine design. However, reliable prediction of linear B-cell epitope remains a formidable challenge. Earlier studies have reported discrepancy in amino acid composition between the epitopes and non-epitopes. Hence, this study proposed and developed a novel amino acid composition-based feature descriptor, Dipeptide Deviation from Expected Mean (DDE), to distinguish the linear B-cell epitopes from non-epitopes effectively. In this study, for the first time, only exact linear B-cell epitopes and non-epitopes have been utilized for developing the prediction method, unlike the use of epitope-containing regions in earlier reports. To evaluate the performance of the DDE feature vector, models have been developed with two widely used machine-learning techniques Support Vector Machine and AdaBoost-Random Forest. Five-fold cross-validation performance of the proposed method with error-free dataset and dataset from other studies achieved an overall accuracy between nearly 61% and 73%, with balance between sensitivity and specificity metrics. Performance of the DDE feature vector was better (with accuracy difference of about 2% to 12%), in comparison to other amino acid-derived features on different datasets. This study reflects the efficiency of the DDE feature vector in enhancing the linear B-cell epitope prediction performance, compared to other feature representations. The proposed method is made as a stand-alone tool available freely for researchers, particularly for those interested in vaccine design and novel molecular target development for systems therapeutics and diagnostics: https://github.com/brsaran/LBEEP.
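To make the DDE idea concrete, the hedged sketch below computes a dipeptide-composition feature standardized by a codon-derived expected mean and variance and feeds it to an SVM; the exact normalization of the published descriptor is an assumption here and should be checked against the paper.

```python
# Hedged sketch of a DDE-style feature: dipeptide composition standardized by a
# codon-derived expected mean and variance, fed to an SVM classifier.
# The exact normalization of the published descriptor is assumed, not verified.
import itertools
import numpy as np
from sklearn.svm import SVC

AA = "ACDEFGHIKLMNPQRSTVWY"
CODONS = {"A": 4, "C": 2, "D": 2, "E": 2, "F": 2, "G": 4, "H": 2, "I": 3,
          "K": 2, "L": 6, "M": 1, "N": 2, "P": 4, "Q": 2, "R": 6, "S": 6,
          "T": 4, "V": 4, "W": 1, "Y": 2}          # 61 sense codons in total
PAIRS = ["".join(p) for p in itertools.product(AA, repeat=2)]

def dde(seq):
    n_pairs = len(seq) - 1
    counts = {p: 0 for p in PAIRS}
    for i in range(n_pairs):
        counts[seq[i:i + 2]] += 1
    feat = np.empty(len(PAIRS))
    for j, p in enumerate(PAIRS):
        dc = counts[p] / n_pairs                           # observed dipeptide composition
        tm = (CODONS[p[0]] / 61.0) * (CODONS[p[1]] / 61.0) # expected mean (assumed form)
        tv = tm * (1.0 - tm) / n_pairs                     # expected variance (assumed form)
        feat[j] = (dc - tm) / np.sqrt(tv)
    return feat

# Toy usage with placeholder peptides and labels (1 = epitope, 0 = non-epitope)
peptides = ["ACDEFGHIKL", "LMNPQRSTVW", "AAAACCCCDD", "KKKLLLMMMN"]
labels = [1, 0, 1, 0]
X = np.vstack([dde(s) for s in peptides])
clf = SVC(kernel="rbf", C=1.0).fit(X, labels)
print(clf.predict(X))
```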
NASA Technical Reports Server (NTRS)
Gibson, J. S.; Rosen, I. G.
1987-01-01
The approximation of optimal discrete-time linear quadratic Gaussian (LQG) compensators for distributed parameter control systems with boundary input and unbounded measurement is considered. The approach applies to a wide range of problems that can be formulated in a state space on which both the discrete-time input and output operators are continuous. Approximating compensators are obtained via application of the LQG theory and associated approximation results for infinite dimensional discrete-time control systems with bounded input and output. Numerical results for spline and modal based approximation schemes used to compute optimal compensators for a one dimensional heat equation with either Neumann or Dirichlet boundary control and pointwise measurement of temperature are presented and discussed.
Kim, Ji-Sik; Kim, Gi-Woo
2017-01-01
This paper provides a preliminary study on the hysteresis compensation of a piezoresistive silicon-based polymer composite, poly(dimethylsiloxane) dispersed with carbon nanotubes (CNTs), to demonstrate its feasibility as a conductive composite (i.e., a force-sensitive resistor) for force sensors. In this study, the potential use of the nanotube/polydimethylsiloxane (CNT/PDMS) as a force sensor is evaluated for the first time. The experimental results show that the electrical resistance of the CNT/PDMS composite changes in response to sinusoidal loading and static compressive load. The compensated output based on the Duhem hysteresis model shows a linear relationship. This simple hysteresis model can compensate for the nonlinear frequency-dependent hysteresis phenomenon when a dynamic sinusoidal force input is applied. PMID:28125046
Wang, Li; Wang, Xiaoyi; Jin, Xuebo; Xu, Jiping; Zhang, Huiyan; Yu, Jiabin; Sun, Qian; Gao, Chong; Wang, Lingbin
2017-03-01
Current methods describe the formation process of algae inaccurately and predict water blooms with low precision. In this paper, the chemical mechanism of algae growth is analyzed, and a correlation analysis of chlorophyll-a and algal density is conducted by chemical measurement. Taking into account the influence of multiple factors on algae growth and water blooms, a comprehensive prediction method combining multivariate time series and intelligent models is put forward. First, through the process of photosynthesis, the main factors that affect the reproduction of the algae are analyzed. A compensation prediction method for multivariate time series analysis, based on a neural network and a Support Vector Machine and combined with Kernel Principal Component Analysis for dimension reduction of the bloom influence factors, is put forward. Then, a Genetic Algorithm is applied to improve the generalization ability of the BP network and the Least Squares Support Vector Machine. Experimental results show that this method can better compensate the prediction model of multivariate time series analysis and is an effective way to improve the description accuracy of algae growth and the prediction precision of water blooms.
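A hedged sketch of the dimension-reduction-plus-regression core of such a pipeline is shown below, using kernel PCA followed by a support vector regressor; the genetic-algorithm tuning and the study's actual water-quality variables are not reproduced, and the data are synthetic placeholders.

```python
# Hedged sketch: Kernel PCA for dimension reduction of bloom influence factors,
# followed by a support-vector regressor for chlorophyll-a / algal density.
# Data are synthetic placeholders; the GA-based tuning from the paper is omitted.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 8))   # e.g. temperature, pH, DO, TN, TP, light, flow, turbidity (assumed)
y = 2.0 * X[:, 0] + X[:, 3] ** 2 + 0.5 * rng.normal(size=300)

model = make_pipeline(
    StandardScaler(),
    KernelPCA(n_components=4, kernel="rbf", gamma=0.1),   # nonlinear dimension reduction
    SVR(C=10.0, epsilon=0.1),                             # regression on the reduced features
)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("mean R^2:", scores.mean())
```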
NASA Technical Reports Server (NTRS)
Klumpp, A. R.; Lawson, C. L.
1988-01-01
Routines provided for common scalar, vector, matrix, and quaternion operations. Computer program extends Ada programming language to include linear-algebra capabilities similar to those of the HAL/S programming language. Designed for such avionics applications as software for Space Station.
Pixel-By-Pixel Estimation of Scene Motion in Video
NASA Astrophysics Data System (ADS)
Tashlinskii, A. G.; Smirnov, P. V.; Tsaryov, M. G.
2017-05-01
The paper considers the effectiveness of motion estimation in video using pixel-by-pixel recurrent algorithms. The algorithms use stochastic gradient descent to find the inter-frame shifts of all pixels of a frame. These vectors form the shift vector field. As the estimated parameters of the vectors, the paper studies their projections and their polar parameters. It considers two methods for estimating the shift vector field. The first method uses a stochastic gradient descent algorithm to sequentially process all nodes of the image row by row. It processes each row bidirectionally, i.e., from left to right and from right to left. Subsequent joint processing of the results allows compensating for the inertia of the recursive estimation. The second method uses the correlation between rows to increase processing efficiency. It processes rows one after the other, with a change in direction after each row, and uses the obtained values to form the resulting estimate. The paper studies two criteria for forming this estimate: minimum of the gradient estimate and maximum of the correlation coefficient. The paper gives examples of experimental results of pixel-by-pixel estimation for a video with a moving object and of estimating a moving object trajectory using the shift vector field.
D'Costa, Susan; Blouin, Veronique; Broucque, Frederic; Penaud-Budloo, Magalie; François, Achille; Perez, Irene C; Le Bec, Christine; Moullier, Philippe; Snyder, Richard O; Ayuso, Eduard
2016-01-01
Clinical trials using recombinant adeno-associated virus (rAAV) vectors have demonstrated efficacy and a good safety profile. Although the field is advancing quickly, vector analytics and harmonization of dosage units are still a limitation for commercialization. AAV reference standard materials (RSMs) can help ensure product safety by controlling the consistency of assays used to characterize rAAV stocks. The most widely utilized unit of vector dosing is based on the encapsidated vector genome. Quantitative polymerase chain reaction (qPCR) is now the most common method to titer vector genomes (vg); however, significant inter- and intralaboratory variations have been documented using this technique. Here, RSMs and rAAV stocks were titered on the basis of an inverted terminal repeats (ITRs) sequence-specific qPCR and we found an artificial increase in vg titers using a widely utilized approach. The PCR error was introduced by using single-cut linearized plasmid as the standard curve. This bias was eliminated using plasmid standards linearized just outside the ITR region on each end to facilitate the melting of the palindromic ITR sequences during PCR. This new "Free-ITR" qPCR delivers vg titers that are consistent with titers obtained with transgene-specific qPCR and could be used to normalize in-house product-specific AAV vector standards and controls to the rAAV RSMs. The free-ITR method, including well-characterized controls, will help to calibrate doses to compare preclinical and clinical data in the field.
Computation of output feedback gains for linear stochastic systems using the Zangwill-Powell Method
NASA Technical Reports Server (NTRS)
Kaufman, H.
1975-01-01
Because conventional optimal linear regulator theory results in a controller which requires the capability of measuring and/or estimating the entire state vector, it is of interest to consider procedures for computing controls which are restricted to be linear feedback functions of a lower dimensional output vector and which take into account the presence of measurement noise and process uncertainty. To this effect a stochastic linear model has been developed that accounts for process parameter and initial uncertainty, measurement noise, and a restricted number of measurable outputs. Optimization with respect to the corresponding output feedback gains was then performed for both finite and infinite time performance indices without gradient computation by using Zangwill's modification of a procedure originally proposed by Powell. Results using a seventh order process show the proposed procedures to be very effective.
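The sketch below illustrates the gradient-free idea: an output-feedback gain is found by minimizing a quadratic closed-loop cost with SciPy's Powell method, used here as a readily available relative of the Zangwill-modified procedure; the plant, weights, and initial-state covariance are small illustrative assumptions.

```python
# Hedged sketch: gradient-free search for an output-feedback gain u = -K*y that
# minimizes a quadratic cost. SciPy's Powell method stands in for the
# Zangwill-modified procedure; the plant and weights are illustrative assumptions.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov
from scipy.optimize import minimize

A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])          # only the first state is measurable
Q = np.eye(2)
R = np.array([[0.1]])
X0 = np.eye(2)                      # covariance of the uncertain initial state

def cost(k_flat):
    K = k_flat.reshape(1, 1)
    Acl = A - B @ K @ C
    if np.max(np.real(np.linalg.eigvals(Acl))) >= -1e-6:
        return 1e6                  # penalize destabilizing gains
    W = Q + C.T @ K.T @ R @ K @ C
    P = solve_continuous_lyapunov(Acl.T, -W)   # Acl' P + P Acl = -W
    return float(np.trace(P @ X0))  # expected quadratic cost over the initial uncertainty

res = minimize(cost, x0=np.array([1.0]), method="Powell")
print("output-feedback gain:", res.x, "cost:", res.fun)
```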
NASA Technical Reports Server (NTRS)
Luck, Rogelio; Ray, Asok
1990-01-01
A procedure for compensating for the effects of distributed network-induced delays in integrated communication and control systems (ICCS) is proposed. The problem of analyzing systems with time-varying and possibly stochastic delays could be circumvented by use of a deterministic observer which is designed to perform under certain restrictive but realistic assumptions. The proposed delay-compensation algorithm is based on a deterministic state estimator and a linear state-variable-feedback control law. The deterministic observer can be replaced by a stochastic observer without any structural modifications of the delay compensation algorithm. However, if a feedforward-feedback control law is chosen instead of the state-variable feedback control law, the observer must be modified as a conventional nondelayed system would be. Under these circumstances, the delay compensation algorithm would be accordingly changed. The separation principle of the classical Luenberger observer holds true for the proposed delay compensator. The algorithm is suitable for ICCS in advanced aircraft, spacecraft, manufacturing automation, and chemical process applications.
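A minimal sketch of the underlying structure, a deterministic observer whose estimate is propagated across a known number of delay steps before the state-feedback law is applied, is given below; it assumes a constant known delay and a discrete-time plant and is not the paper's full ICCS algorithm.

```python
# Hedged sketch: observer-based state feedback with compensation of a known
# network delay of d samples. Simplifying assumptions: constant delay,
# discrete-time plant, pre-designed gains. Not the paper's full ICCS algorithm.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
C = np.array([[1.0, 0.0]])
K = np.array([[3.0, 2.5]])           # state-feedback gain (assumed, pre-designed)
L = np.array([[0.5], [0.8]])         # observer correction gain (assumed, pre-designed)
d = 3                                # network-induced delay in samples (assumed known)

x = np.array([[1.0], [0.0]])         # true plant state
x_hat = np.zeros((2, 1))             # estimate of the delayed state x_{k-d}
u_hist = [np.zeros((1, 1))] * d      # controls applied during the delay interval
y_hist = [C @ x] * d                 # measurements still "in the network" (startup approximation)

for k in range(60):
    y_delayed = y_hist.pop(0)                    # measurement taken d steps ago arrives now
    x_hat = x_hat + L @ (y_delayed - C @ x_hat)  # correct the delayed-state estimate
    x_pred = x_hat
    for u_past in u_hist:                        # replay the controls sent during the delay
        x_pred = A @ x_pred + B @ u_past
    u = -K @ x_pred                              # feedback on the delay-compensated estimate
    y_hist.append(C @ x)                         # current measurement enters the network
    x_hat = A @ x_hat + B @ u_hist[0]            # advance the delayed estimate one step
    u_hist = u_hist[1:] + [u]                    # slide the control history
    x = A @ x + B @ u                            # plant advances one step
print("final state:", x.ravel())
```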
Ganesh, Sri; Brar, Sheetal; Pawar, Archana
2017-08-01
To study the safety, efficacy, and outcomes of manual cyclotorsion compensation in small incision lenticule extraction (SMILE) for myopic astigmatism. Eligible patients with myopia from -1.00 to -10.00 diopters (D) spherical equivalent with a minimum astigmatism of 0.75 D undergoing SMILE were included. Intraoperative cyclotorsion compensation was performed by gently rotating the cone and aligning the 0° to 180° limbal marks with the horizontal axis of the reticule of the right eye piece of the microscope of the femtosecond laser after activating the suction. In this study, 81 left eyes from 81 patients were analyzed for vector analysis of astigmatism. The mean cyclotorsion was 5.64° ± 2.55° (range: 2° to 12°). No significant differences were found for surgically induced astigmatism, difference vector, angle of error (AE), correction index, magnitude of error, index of success (IOS), and flattening index between 2 weeks and 3 months postoperatively (P > .05). The eyes were categorized into low (≤ 1.50 D, n = 37) and high (> 1.50 D, n = 44) cylinder groups. At 3 months, intergroup analysis showed a comparable correction index of 0.97 for the low and 0.93 for the high cylinder groups, suggesting a slight undercorrection of 3% and 7%, respectively (P = .14). However, the AE and IOS were significantly lower in the high compared to the low cylinder group (P = .032 and .024 for AE and IOS, respectively), suggesting better alignment of the treatment in the high cylinder group. However, the mean uncorrected distance visual acuity of both groups was comparable (P = .21), suggesting good visual outcomes in the low cylinder group despite a less favorable IOS. Manual compensation may be a safe, feasible, and effective approach to refine the results of astigmatism with SMILE, especially in higher degrees of cylinders. [J Refract Surg. 2017;33(8):506-512.]. Copyright 2017, SLACK Incorporated.
Vectorization of linear discrete filtering algorithms
NASA Technical Reports Server (NTRS)
Schiess, J. R.
1977-01-01
Linear filters, including the conventional Kalman filter and versions of square root filters devised by Potter and Carlson, are studied for potential application on streaming computers. The square root filters are known to maintain a positive definite covariance matrix in cases in which the Kalman filter diverges due to ill-conditioning of the matrix. Vectorization of the filters is discussed, and comparisons are made of the number of operations and storage locations required by each filter. The Carlson filter is shown to be the most efficient of the filters on the Control Data STAR-100 computer.
Rotman Lens Sidewall Design and Optimization with Hybrid Hardware/Software Based Programming
2015-01-09
conventional MoM and stored in memory. The components of Zfar are computed as needed through a fast matrix vector multiplication (MVM), which... V vector. Iterative methods, e.g. BiCGSTAB, are employed for solving the linear equation. The matrix-vector multiplications (MVMs), which dominate... most of the computation in the solving phase, consists of calculating near and far MVMs. The far MVM comprises aggregation, translation, and
Wang, Wei; Takeda, Mitsuo
2006-09-01
A new concept of vector and tensor densities is introduced into the general coherence theory of vector electromagnetic fields that is based on energy and energy-flow coherence tensors. Related coherence conservation laws are presented in the form of continuity equations that provide new insights into the propagation of second-order correlation tensors associated with stationary random classical electromagnetic fields.
The role of model dynamics in ensemble Kalman filter performance for chaotic systems
Ng, G.-H.C.; McLaughlin, D.; Entekhabi, D.; Ahanin, A.
2011-01-01
The ensemble Kalman filter (EnKF) is susceptible to losing track of observations, or 'diverging', when applied to large chaotic systems such as atmospheric and ocean models. Past studies have demonstrated the adverse impact of sampling error during the filter's update step. We examine how system dynamics affect EnKF performance, and whether the absence of certain dynamic features in the ensemble may lead to divergence. The EnKF is applied to a simple chaotic model, and ensembles are checked against singular vectors of the tangent linear model, corresponding to short-term growth, and Lyapunov vectors, corresponding to long-term growth. Results show that the ensemble strongly aligns itself with the subspace spanned by unstable Lyapunov vectors. Furthermore, the filter avoids divergence only if the full linearized long-term unstable subspace is spanned. However, short-term dynamics also become important as non-linearity in the system increases. Non-linear movement prevents errors in the long-term stable subspace from decaying indefinitely. If these errors then undergo linear intermittent growth, a small ensemble may fail to properly represent all important modes, causing filter divergence. A combination of long and short-term growth dynamics is thus critical to EnKF performance. These findings can help in developing practical robust filters based on model dynamics. © 2011 The Authors. Tellus A © 2011 John Wiley & Sons A/S.
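For reference, the stochastic (perturbed-observation) EnKF analysis step that the divergence discussion refers to can be written in a few lines; the sketch below is generic and not tied to the chaotic model used in the study.

```python
# Hedged sketch: one stochastic (perturbed-observation) EnKF analysis step.
# Generic code, not tied to the chaotic model used in the study.
import numpy as np

def enkf_update(X, y, H, R, rng):
    """X: (n_state, n_ens) forecast ensemble; y: (n_obs,) observation;
    H: (n_obs, n_state) observation operator; R: (n_obs, n_obs) obs covariance."""
    n_state, n_ens = X.shape
    x_mean = X.mean(axis=1, keepdims=True)
    A = X - x_mean                                   # ensemble anomalies
    P = A @ A.T / (n_ens - 1)                        # sample covariance (rank-limited for small ensembles)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain from sampled statistics
    Y_pert = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
    return X + K @ (Y_pert - H @ X)                  # update each member with perturbed observations

# Toy usage: 10 state variables, 20 ensemble members, 3 observed components
rng = np.random.default_rng(3)
X = rng.normal(size=(10, 20))
H = np.zeros((3, 10)); H[0, 0] = H[1, 4] = H[2, 9] = 1.0
R = 0.1 * np.eye(3)
y = rng.normal(size=3)
Xa = enkf_update(X, y, H, R, rng)
print(Xa.shape)
```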
Is There Really a Spin Crisis?
NASA Astrophysics Data System (ADS)
Qing, Di; Chen, XiangSong; Su, WeiNing; Wang, Fan
1999-10-01
The matrix element of quark axial vector current is shown to be different from the nonrelativistic quark spin sum for a nucleon at rest. The nucleon spin content discovered in polarized deep inelastic scattering is shown to be accommodated in a constituent quark model with 15% sea quark component mixing. The relativistic correction and sea quark pair excitation inherently related to quark axial vector current reduce the nucleon axial charge and this reduction is compensated by the relativistic quark orbital angular momentum exactly and in turn keeps the nucleon spin 1/2 untouched. Nucleon tensor charge has similar but smaller relativistic and sea quark pair excitation reduction. The project supported in part by the NSF (19675018), SED and SSTD of China
NASA Technical Reports Server (NTRS)
Bommier, V.
1986-01-01
The Hanle effect is the modification of the linear polarization parameters of a spectral line due to the effect of the magnetic field. It has been successfully applied to the magnetic field vector diagnostic in solar prominences. The magnetic field vector is determined by comparing the measured polarization to the polarization computed, taking into account all the polarizing and depolarizing processes in line formation and the depolarizing effect of the magnetic field. The method was applied to simultaneous polarization measurements in the Helium D3 line and in the hydrogen beta line in 14 prominences. Four polarization parameters are measured, which lead to the determination of the three coordinates of the magnetic field vector and the electron density, owing to the sensitivity of the hydrogen beta line to the non-negligible effect of depolarizing collisions with electrons and protons of the medium. A mean value of 1.3 × 10^10 cm^-3 is derived in 14 prominences.
Identification and compensation of friction for a novel two-axis differential micro-feed system
NASA Astrophysics Data System (ADS)
Du, Fuxin; Zhang, Mingyang; Wang, Zhaoguo; Yu, Chen; Feng, Xianying; Li, Peigang
2018-06-01
Non-linear friction in a conventional drive feed system (CDFS) feeding at low speed is one of the main factors that lead to the complexity of the feed drive. The CDFS will inevitably enter or approach a non-linear creeping work area at extremely low speed. A novel two-axis differential micro-feed system (TDMS) is developed in this paper to overcome the accuracy limitation of CDFS. A dynamic model of TDMS is first established. Then, a novel all-component friction parameter identification method (ACFPIM) using a genetic algorithm (GA) to identify the friction parameters of a TDMS is introduced. The friction parameters of the ball screw and linear motion guides are identified independently using the method, assuring the accurate modelling of friction force at all components. A proportional-derivative feed drive position controller with an observer-based friction compensator is implemented to achieve an accurate trajectory tracking performance. Finally, comparative experiments demonstrate the effectiveness of the TDMS in inhibiting the disadvantageous influence of non-linear friction and the validity of the proposed identification method for TDMS.
1998-09-01
potential of the surface wave electromagnetic field; ea is the unit of the polarization vectors: ex = ela. = e2x= (qx/\q\)\/L\q\/(ei + e0), ely... polarization basis of the incident wave: E_n^0 = e_n exp(ikr), (1) where e_n is the cyclic unit vector, n = ±1, k is the wave vector. The equation describing... rectangular grid. From the direction determined by wave vector k0, the plane electromagnetic wave of linear polarization is incident onto the array. It
Vector optical activity in the Weyl semimetal TaAs
Norman, M. R.
2015-12-15
Here, it is shown that the Weyl semimetal TaAs can have a significant polar vector contribution to its optical activity. This is quantified by ab initio calculations of the resonant x-ray diffraction at the Ta L1 edge. For the Bragg vector (400), this polar vector contribution to the circular intensity differential between left and right polarized x-rays is predicted to be comparable to that arising from linear dichroism. Implications this result has in regards to optical effects predicted for topological Weyl semimetals are discussed.
Design and application of quadrature compensation patterns in bulk silicon micro-gyroscopes.
Ni, Yunfang; Li, Hongsheng; Huang, Libin
2014-10-29
This paper focuses on the detailed design issues of a particular quadrature reduction method named system stiffness matrix diagonalization, whose key technology is the design and application of quadrature compensation patterns. For bulk silicon micro-gyroscopes, a complete design and application case was presented. The compensation principle was described first. In the mechanical design, four types of basic structure units were presented to obtain the basic compensation function. A novel layout design was proposed to eliminate the additional disturbing static forces and torques. Parameter optimization was carried out to maximize the available compensation capability in a limited layout area. Two types of voltage loading methods were presented. Their influences on the sense mode dynamics were analyzed. The proposed design was applied on a dual-mass silicon micro-gyroscope developed in our laboratory. The design provided a theoretical compensation capability for a quadrature equivalent angular rate of up to 412 °/s. In experiments, an actual quadrature equivalent angular rate of 357 °/s was compensated successfully. The actual compensation voltages were a little larger than the theoretical ones. The correctness of the design and the theoretical analyses was verified. The patterns can be commonly used in planar linear vibratory silicon micro-gyroscopes for quadrature compensation purposes.
Blending Velocities In Task Space In Computing Robot Motions
NASA Technical Reports Server (NTRS)
Volpe, Richard A.
1995-01-01
Blending of linear and angular velocities between sequential specified points in task space constitutes theoretical basis of improved method of computing trajectories followed by robotic manipulators. In method, generalized velocity-vector-blending technique provides relatively simple, common conceptual framework for blending linear, angular, and other parametric velocities. Velocity vectors originate from straight-line segments connecting specified task-space points, called "via frames" and represent specified robot poses. Linear-velocity-blending functions chosen from among first-order, third-order-polynomial, and cycloidal options. Angular velocities blended by use of first-order approximation of previous orientation-matrix-blending formulation. Angular-velocity approximation yields small residual error, quantified and corrected. Method offers both relative simplicity and speed needed for generation of robot-manipulator trajectories in real time.
Attitude estimation from magnetometer and earth-albedo-corrected coarse sun sensor measurements
NASA Astrophysics Data System (ADS)
Appel, Pontus
2005-01-01
For full 3-axes attitude determination the magnetic field vector and the Sun vector can be used. A Coarse Sun Sensor consisting of six solar cells placed on each of the six outer surfaces of the satellite is used for Sun vector determination. This robust and low cost setup is sensitive to surrounding light sources as it sees the whole sky. To compensate for the largest error source, the Earth, an albedo model is developed. The total albedo light vector has contributions from the Earth surface which is illuminated by the Sun and visible from the satellite. Depending on the reflectivity of the Earth surface, the satellite's position and the Sun's position the albedo light changes. This cannot be calculated analytically and hence a numerical model is developed. For on-board computer use the Earth albedo model consisting of data tables is transferred into polynomial functions in order to save memory space. For an absolute worst case the attitude determination error can be held below 2°. In a nominal case it is better than 1°.
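As an illustration of how the magnetic field vector and the (albedo-corrected) Sun vector fix the full attitude, the sketch below uses the classic deterministic two-vector (TRIAD) construction; TRIAD is not named in the abstract, so its use here is an assumption for illustration.

```python
# Hedged sketch: deterministic two-vector attitude determination (TRIAD).
# TRIAD is not named in the abstract; it is used here only to show how a
# magnetometer vector and an albedo-corrected Sun vector fix the attitude.
import numpy as np

def triad(b1, b2, r1, r2):
    """Rotation matrix mapping the reference frame to the body frame.
    b1, b2: measurements in the body frame (b1 = more trusted sensor);
    r1, r2: the same directions modeled in the reference frame."""
    def frame(v1, v2):
        t1 = v1 / np.linalg.norm(v1)
        t2 = np.cross(v1, v2); t2 /= np.linalg.norm(t2)
        return np.column_stack([t1, t2, np.cross(t1, t2)])
    return frame(b1, b2) @ frame(r1, r2).T

# Toy usage with made-up vectors (normally from the magnetometer, the coarse
# Sun sensor after albedo correction, and on-board field/Sun reference models)
r_mag = np.array([0.3, 0.5, 0.81]); r_sun = np.array([1.0, 0.0, 0.0])
true_R = np.array([[0.0, 1.0, 0.0], [-1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
b_mag = true_R @ r_mag; b_sun = true_R @ r_sun
print(np.allclose(triad(b_mag, b_sun, r_mag, r_sun), true_R))
```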
NASA Astrophysics Data System (ADS)
Tamilarasan, Ilavarasan; Saminathan, Brindha; Murugappan, Meenakshi
2016-04-01
The past decade has seen the phenomenal usage of orthogonal frequency division multiplexing (OFDM) in the wired as well as wireless communication domains, and it is also proposed in the literature as a future proof technique for the implementation of flexible resource allocation in cognitive optical networks. Fiber impairment assessment and adaptive compensation becomes critical in such implementations. A comprehensive analytical model for impairments in OFDM-based fiber links is developed. The proposed model includes the combined impact of laser phase fluctuations, fiber dispersion, self phase modulation, cross phase modulation, four-wave mixing, the nonlinear phase noise due to the interaction of amplified spontaneous emission with fiber nonlinearities, and the photodetector noises. The bit error rate expression for the proposed model is derived based on error vector magnitude estimation. The performance analysis of the proposed model is presented and compared for dispersion compensated and uncompensated backbone/backhaul links. The results suggest that OFDM would perform better for uncompensated links than the compensated links due to the negligible FWM effects and there is a need for flexible compensation. The proposed model can be employed in cognitive optical networks for accurate assessment of fiber-related impairments.
NASA Technical Reports Server (NTRS)
Guo, Li-Wen; Cardullo, Frank M.; Telban, Robert J.; Houck, Jacob A.; Kelly, Lon C.
2003-01-01
A study was conducted employing the Visual Motion Simulator (VMS) at the NASA Langley Research Center, Hampton, Virginia. This study compared two motion cueing algorithms, the NASA adaptive algorithm and a new optimal control based algorithm. Also, the study included the effects of transport delays and the compensation thereof. The delay compensation algorithm employed is one developed by Richard McFarland at NASA Ames Research Center. This paper reports on the analysis of the experimental data collected from preliminary simulation tests. This series of tests was conducted to evaluate the protocols and the methodology of data analysis in preparation for more comprehensive tests which will be conducted during the spring of 2003. Therefore only three pilots were used. Nevertheless, some useful results were obtained. The experimental conditions involved three maneuvers: a straight-in approach with a rotating wind vector, an offset approach with turbulence and gust, and a takeoff with and without an engine failure shortly after liftoff. For each of the maneuvers the two motion conditions were combined with four delay conditions (0, 50, 100 and 200 ms), with and without compensation.
SEMICONDUCTOR TECHNOLOGY: An efficient dose-compensation method for proximity effect correction
NASA Astrophysics Data System (ADS)
Ying, Wang; Weihua, Han; Xiang, Yang; Renping, Zhang; Yang, Zhang; Fuhua, Yang
2010-08-01
A novel simple dose-compensation method is developed for proximity effect correction in electron-beam lithography. The sizes of exposed patterns depend on dose factors while other exposure parameters (including accelerating voltage, resist thickness, exposure step size, substrate material, and so on) remain constant. This method is based on two reasonable assumptions in the evaluation of the compensated dose factor: one is that the relation between dose factors and circle diameters is linear in the range under consideration; the other is that the compensated dose factor is only affected by the nearest neighbors, for simplicity. Four-layer-hexagon photonic crystal structures were fabricated as test patterns to demonstrate this method. Compared to the uncorrected structures, the homogeneity of the corrected hole size in the photonic crystal structures was clearly improved.
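The two assumptions in the abstract (a locally linear dose-diameter relation and a nearest-neighbor-only proximity correction) suggest a very small calibration-and-compensation routine. The sketch below is hypothetical: the calibration values, the fractional neighbor contribution, and the function names are made up for illustration.

```python
import numpy as np

# Calibration: measured hole diameters (nm) at two dose factors, assuming the
# linear dose-diameter relation described in the abstract (values illustrative).
dose_cal = np.array([1.0, 1.4])
diam_cal = np.array([118.0, 134.0])
slope, intercept = np.polyfit(dose_cal, diam_cal, 1)

def compensated_dose(target_diam, n_nearest, proximity_frac=0.05):
    """Dose factor giving target_diam, reduced by an assumed fractional
    contribution from each of n_nearest neighboring exposures."""
    nominal = (target_diam - intercept) / slope
    return nominal * (1.0 - proximity_frac * n_nearest)

# Holes at the edge of a hexagonal lattice have fewer neighbors than interior ones,
# so they receive a smaller correction.
print(compensated_dose(120.0, n_nearest=6), compensated_dose(120.0, n_nearest=3))
```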
Strain-rate/temperature behavior of high density polyethylene in compression
NASA Technical Reports Server (NTRS)
Clements, L. L.; Sherby, O. D.
1978-01-01
The compressive strain rate/temperature behavior of highly linear, high density polyethylene was analyzed in terms of the predictive relations developed for metals and other crystalline materials. For strains of 5 percent and above, the relationship between applied strain rate, $\dot{\varepsilon}$, and resulting flow stress, $\sigma$, was found to be $\dot{\varepsilon}\,\exp(Q_f/RT) = k'(\sigma/\sigma_c)^n$; the left-hand side is the activation-energy-compensated strain rate, where $Q_f$ is the activation energy for flow, $R$ is the gas constant, and $T$ is temperature; $k'$ is a constant, $n$ is a temperature-independent stress exponent, and $\sigma/\sigma_c$ is the structure-compensated stress. A master curve resulted from a logarithmic plot of activation-energy-compensated strain rate versus structure-compensated stress.
NASA Astrophysics Data System (ADS)
Le Gonidec, Y.; Sarout, J.; Wassermann, J.; Nussbaum, C.
2014-07-01
We report in this paper an original analysis of microseismic events (MSEs) induced by an excavation operation in the clay environment of the Mont Terri underground rock laboratory. In order to identify the MSEs with confidence, we develop a restrictive but efficient multistep method for filtering the recorded events. We deduce the spatial distribution and processes associated with the excavation-induced damage from the spatial location and focal mechanisms of the MSEs. We observe an asymmetric geometry of the excavation damaged zone around the excavated gallery, without notable microseismic activity in the sandy facies sidewall, in contrast with the shaly facies sidewall, where a first burst of events is recorded, followed by two smaller bursts. One of these locates ahead of the excavation front and is associated with a dominant double-couple component, suggesting bedding-plane reworking, that is, a shear fracture mode; the MSEs of the other cluster inside the shaly sidewall of the gallery, with a dominant compensated linear vector dipole component, suggesting extensive cracking. We identify and discuss four major factors that seem to control the MSE source mechanisms: lithology, geometry of the geological features, gallery orientation and direction of the main compressive stress.
Image processing methods to compensate for IFOV errors in microgrid imaging polarimeters
NASA Astrophysics Data System (ADS)
Ratliff, Bradley M.; Boger, James K.; Fetrow, Matthew P.; Tyo, J. Scott; Black, Wiley T.
2006-05-01
Long-wave infrared imaging Stokes vector polarimeters are used in many remote sensing applications. Imaging polarimeters require that several measurements be made under optically different conditions in order to estimate the polarization signature at a given scene point. This multiple-measurement requirement introduces error in the signature estimates, and the errors differ depending upon the type of measurement scheme used. Here, we investigate a LWIR linear microgrid polarimeter. This type of instrument consists of a mosaic of micropolarizers at different orientations that are masked directly onto a focal plane array sensor. In this scheme, each polarization measurement is acquired spatially and hence each is made at a different point in the scene. This is a significant source of error, as it violates the requirement that each polarization measurement have the same instantaneous field-of-view (IFOV). In this paper, we first study the amount of error introduced by the IFOV handicap in microgrid instruments. We then proceed to investigate means for mitigating the effects of these errors to improve the quality of polarimetric imagery. In particular, we examine different interpolation schemes and gauge their performance. These studies are completed through the use of both real instrumental and modeled data.
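One common way to reduce the IFOV mismatch described above is to split the mosaic into its four orientation channels and interpolate each back to full resolution before forming Stokes estimates. The sketch below assumes a particular 2x2 superpixel layout and uses plain bilinear upsampling; it is an interpolation scheme of the kind the paper compares, not its specific algorithm.

```python
import numpy as np
from scipy.ndimage import zoom

def demosaic_microgrid(frame):
    """Split a 2x2 micropolarizer mosaic (0/45/90/135 deg, layout assumed)
    into four channels and bilinearly interpolate each back to full
    resolution, so that all four measurements refer to (approximately)
    the same IFOV."""
    layout = {(0, 0): 0, (0, 1): 45, (1, 0): 135, (1, 1): 90}  # assumed layout
    channels = {}
    for (r, c), angle in layout.items():
        sub = frame[r::2, c::2].astype(float)
        channels[angle] = zoom(sub, 2, order=1)  # bilinear upsampling
    return channels

frame = np.random.rand(256, 256)   # stand-in for a raw microgrid image
chans = demosaic_microgrid(frame)
# Linear Stokes estimates from the interpolated channels
S0 = 0.5 * (chans[0] + chans[45] + chans[90] + chans[135])
S1 = chans[0] - chans[90]
S2 = chans[45] - chans[135]
```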
NASA Technical Reports Server (NTRS)
Roithmayr, Carlos M.; Karlgaard, Christopher D.; Kumar, Renjith R.; Seywald, Hans; Bose, David M.
2003-01-01
Several laws are designed for simultaneous control of the orientation of an Earth-pointing spacecraft, the energy stored by counter-rotating flywheels, and the angular momentum of the flywheels and control moment gyroscopes used together as an integrated set of actuators for attitude control. General, nonlinear equations of motion are presented in vector-dyadic form, and used to obtain approximate expressions which are then linearized in preparation for design of control laws that include feedback of flywheel kinetic energy error as a means of compensating for damping exerted by rotor bearings. Two flywheel steering laws are developed such that torque commanded by an attitude control law is achieved while energy is stored or discharged at the required rate. Using the International Space Station as an example, numerical simulations are performed to demonstrate control about a torque equilibrium attitude, and illustrate the benefits of kinetic energy error feedback. Control laws for attitude hold are also developed, and used to show the amount of propellant that can be saved when flywheels assist the CMGs. Nonlinear control laws for large-angle slew maneuvers perform well, but excessive momentum is required to reorient a vehicle like the International Space Station.
NASA Astrophysics Data System (ADS)
Guo, Liwen
The desire to create more complex visual scenes in modern flight simulators outpaces recent increases in processor speed. As a result, the simulation transport delay remains a problem. Because of the limitations shown in the three prominent existing delay compensators---the lead/lag filter, the McFarland compensator and the Sobiski/Cardullo predictor---new approaches to compensating for the transport delay in a flight simulator have been developed. The first novel compensator is the adaptive predictor, which makes use of the Kalman filter algorithm in a unique manner so that the predictor can accurately provide the desired amount of prediction, significantly reducing the large spikes caused by the McFarland predictor. Among several simplified online adaptive predictors, it is shown mathematically why the stochastic approximation algorithm achieves the best compensation results. A second novel approach employed a reference aircraft dynamics model to implement a state space predictor on a flight simulator. The practical implementation formed the filter state vector from the operator's control input and the aircraft states. The relationship between the reference model and the compensator performance was investigated in great detail, and the best performing reference model was selected for implementation in the final tests. Piloted simulation tests were conducted for assessing the effectiveness of the two novel compensators in comparison to the McFarland predictor and no compensation. Thirteen pilots with heterogeneous flight experience executed straight-in and offset approaches, at various delay configurations, on a flight simulator where different predictors were applied to compensate for transport delay. Four metrics---the glide slope and touchdown errors, power spectral density of the pilot control inputs, NASA Task Load Index, and Cooper-Harper rating on the handling qualities---were employed for the analyses. The overall analyses show that while the adaptive predictor results in slightly poorer compensation for short added delay (up to 48 ms) and better compensation for long added delay (up to 192 ms) than the McFarland compensator, the state space predictor is moderately superior to the McFarland compensator for short delay and significantly superior for long delay. The state space predictor also achieves better compensation than the adaptive predictor. The results of the evaluation of the effectiveness of these predictors in the piloted tests agree with those in the theoretical offline tests conducted with the recorded simulation aircraft states.
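The state-space-predictor idea of propagating a linear model ahead by the known transport delay can be sketched as below. The toy model, the sample rate, and the assumption of a held input are illustrative only; the thesis's reference-model selection and filtering are not reproduced.

```python
import numpy as np

def predict_ahead(A, B, x, u, d):
    """d-step-ahead prediction of a discrete-time linear model
    x[k+1] = A x[k] + B u[k], holding the current input constant.
    This is only a sketch of the state-space-predictor idea."""
    x_pred = x.copy()
    for _ in range(d):
        x_pred = A @ x_pred + B @ u
    return x_pred

# Toy second-order model sampled at 60 Hz with ~100 ms (6 frames) of delay.
dt = 1.0 / 60.0
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
x = np.array([0.1, 0.0])   # current state estimate
u = np.array([0.5])        # current operator input
print(predict_ahead(A, B, x, u, d=6))
```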
Supermodes in Coupled Multi-Core Waveguide Structures
2016-04-01
and therefore can be treated as linear polarization (LP) modes. In essence, the LP modes are scalar approximations of the vector mode fields and contain...field, including the discovery of optical discrete solitons , Bragg and vector solitons in fibers, nonlinear surface waves, and the discovery of self...increased for an isolated core, it can guide high-order modes. For optical fibers with low re- fractive index contrast, the vector modes are weakly guided
Qian, Jianjun; Yang, Jian; Xu, Yong
2013-09-01
This paper presents a robust but simple image feature extraction method, called image decomposition based on local structure (IDLS). It is assumed that in the local window of an image, the macro-pixel (patch) of the central pixel, and those of its neighbors, are locally linear. IDLS captures the local structural information by describing the relationship between the central macro-pixel and its neighbors. This relationship is represented with the linear representation coefficients determined using ridge regression. One image is actually decomposed into a series of sub-images (also called structure images) according to a local structure feature vector. All the structure images, after being down-sampled for dimensionality reduction, are concatenated into one super-vector. Fisher linear discriminant analysis is then used to provide a low-dimensional, compact, and discriminative representation for each super-vector. The proposed method is applied to face recognition and examined using our real-world face image database, NUST-RWFR, and five popular, publicly available, benchmark face image databases (AR, Extended Yale B, PIE, FERET, and LFW). Experimental results show the performance advantages of IDLS over state-of-the-art algorithms.
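A minimal sketch of the local ridge-regression step is given below: each central macro-pixel is regressed on its eight neighboring macro-pixels, and the resulting coefficient maps play the role of the structure images. Patch size, neighborhood, and regularization weight are assumptions for illustration.

```python
import numpy as np

def idls_coefficients(image, patch=3, lam=1.0):
    """For each interior pixel, represent its (patch x patch) macro-pixel as a
    ridge-regression combination of the 8 neighboring macro-pixels.  The 8
    coefficients per pixel form the local-structure feature vector; stacking
    one coefficient per pixel gives a 'structure image'."""
    h, w = image.shape
    r = patch // 2
    coef_maps = np.zeros((8, h, w))
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    for y in range(r + 1, h - r - 1):
        for x in range(r + 1, w - r - 1):
            center = image[y - r:y + r + 1, x - r:x + r + 1].ravel()
            D = np.stack([image[y + dy - r:y + dy + r + 1,
                                x + dx - r:x + dx + r + 1].ravel()
                          for dy, dx in offsets], axis=1)
            # ridge regression: (D^T D + lam I) a = D^T center
            a = np.linalg.solve(D.T @ D + lam * np.eye(8), D.T @ center)
            coef_maps[:, y, x] = a
    return coef_maps  # 8 structure images, to be down-sampled and concatenated

coef_maps = idls_coefficients(np.random.rand(32, 32))
```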
A comparison of two multi-variable integrator windup protection schemes
NASA Technical Reports Server (NTRS)
Mattern, Duane
1993-01-01
Two methods are examined for limit and integrator wind-up protection for multi-input, multi-output linear controllers subject to actuator constraints. The methods begin with an existing linear controller that satisfies the specifications for the nominal, small perturbation, linear model of the plant. The controllers are formulated to include an additional contribution to the state derivative calculations. The first method to be examined is the multi-variable version of the single-input, single-output, high gain, Conventional Anti-Windup (CAW) scheme. Except for the actuator limits, the CAW scheme is linear. The second scheme to be examined, denoted the Modified Anti-Windup (MAW) scheme, uses a scalar to modify the magnitude of the controller output vector while maintaining the vector direction. The calculation of the scalar modifier is a nonlinear function of the controller outputs and the actuator limits. In both cases the constrained actuator is tracked. These two integrator windup protection methods are demonstrated on a turbofan engine control system with five measurements, four control variables, and four actuators. The closed-loop responses of the two schemes are compared and contrasted during limit operation. The issue of maintaining the direction of the controller output vector using the Modified Anti-Windup scheme is discussed and the advantages and disadvantages of both of the IWP methods are presented.
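The direction-preserving scaling at the heart of the MAW scheme reduces to choosing one scalar so that the most saturated channel just reaches its limit. A minimal sketch, assuming actuator limits that bracket zero:

```python
import numpy as np

def maw_scale(u_cmd, u_min, u_max):
    """Modified Anti-Windup idea: scale the whole controller output vector by a
    single scalar so the most-saturated channel just hits its limit, preserving
    the vector's direction.  A sketch only; tracking of the constrained
    actuator inside the controller states is not shown."""
    scale = 1.0
    for ui, lo, hi in zip(u_cmd, u_min, u_max):
        if ui > hi:
            scale = min(scale, hi / ui)
        elif ui < lo:
            scale = min(scale, lo / ui)
    return scale * u_cmd

u = np.array([2.0, -0.5, 4.0])
print(maw_scale(u, u_min=np.array([-1.0, -1.0, -1.0]), u_max=np.array([1.0, 1.0, 1.0])))
# -> [0.5, -0.125, 1.0]: the largest channel sits at its limit, direction unchanged
```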
Reiner, Anton; Del Mar, Nobel; Zagvazdin, Yuri; Li, Chunyan; Fitzgerald, Malinda E C
2011-09-14
Choroidal vessels compensate for changes in systemic blood pressure (BP) so that choroidal blood flow (ChBF) remains stable over a BP range of approximately 40 mm Hg above and below basal. Because of the presumed importance of ChBF regulation for maintenance of retinal health, we investigated whether ChBF compensation for BP fluctuation in pigeons fails with age. Transcleral laser Doppler flowmetry was used to measure ChBF during spontaneous BP fluctuation in anesthetized pigeons ranging in age from 0.5 to 17 years (pigeons can live approximately 20 years in captivity). ChBF in <8-year-old pigeons remained near 100% of basal ChBF at BPs ranging 40 mm Hg above and below basal BP (95 mm Hg). Baroregulation failed below approximately 50 mm Hg BP. In ≥8-year-old pigeons, ChBF compensation was absent at >90 mm Hg BP, with ChBF linearly following BP. Over the 60 to 90 mm Hg range, ChBF in ≥8-year-old pigeons was maintained at 60-70% of young basal ChBF. Below approximately 55 mm Hg, ChBF again followed BP linearly. Age-related ChBF baroregulatory impairment occurs in pigeons, with ChBF varying linearly with above-basal BP and with baroregulation failing to adequately maintain ChBF during below-basal BP. Defective autonomic sympathetic and parasympathetic neurogenic control, or defective myogenic control, may cause these baroregulatory defects. In either case, overperfusion during high BP may cause oxidative injury to the outer retina, whereas underperfusion during low BP may result in deficient nutrient supply and waste removal, with both abnormalities contributing to age-related retinal pathology and vision loss.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, X; Sisniega, A; Zbijewski, W
Purpose: Visualization and quantification of coronary artery calcification and atherosclerotic plaque benefit from coronary artery motion (CAM) artifact elimination. This work applies a rigid linear motion model to a Volume of Interest (VoI) for motion estimation and compensation of image degradation in Coronary Computed Tomography Angiography (CCTA). Methods: In both simulation and testbench experiments, translational CAM was generated by displacement of the imaging object (i.e. simulated coronary artery and explanted human heart) by ∼8 mm, approximating the motion of a main coronary branch. Rotation was assumed to be negligible. A motion-degraded region containing a calcification was selected as the VoI. Local residual motion was assumed to be rigid and linear over the acquisition window, simulating motion observed during diastasis. The (negative) magnitude of the image gradient of the reconstructed VoI was chosen as the motion estimation objective and was minimized with the Covariance Matrix Adaptation Evolution Strategy (CMAES). Results: Reconstruction incorporating the estimated CAM yielded significant recovery of fine calcification structures as well as reduced motion artifacts within the selected local region. The compensated reconstruction was further evaluated using two image similarity metrics, the structural similarity index (SSIM) and Root Mean Square Error (RMSE). At the calcification site, the compensated data achieved a 3% increase in SSIM and a 91.2% decrease in RMSE in comparison with the uncompensated reconstruction. Conclusion: Results demonstrate the feasibility of our image-based motion estimation method exploiting a local rigid linear model for CAM compensation. The method shows promising preliminary results for the application of such estimation in CCTA. Further work will involve motion estimation of complex, motion-corrupted patient data acquired from a clinical CT scanner.
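The optimization setup (negative image-gradient magnitude minimized with CMA-ES over a rigid linear motion) can be sketched on a 2-D toy problem as below. The "reconstruction" here is simply an average of back-shifted views, standing in for the CT reconstruction, and the third-party cma package is assumed for the optimizer.

```python
import numpy as np
from scipy.ndimage import shift, sobel
import cma  # third-party package: pip install cma

truth = np.zeros((64, 64))
truth[28:36, 20:22] = 1.0
truth[30:32, 40:46] = 1.0                  # toy calcifications
frames = 8
v_true = np.array([3.0, -2.0])             # assumed rigid linear motion (pixels) over acquisition

# "Views": the object drifts linearly during acquisition.
views = [shift(truth, t * v_true / (frames - 1), order=1) for t in range(frames)]

def recon(v):
    """Toy motion-compensated reconstruction: back-shift each view by the
    candidate linear motion before averaging (stand-in for CT reconstruction)."""
    return np.mean([shift(views[t], -t * np.asarray(v) / (frames - 1), order=1)
                    for t in range(frames)], axis=0)

def objective(v):
    img = recon(v)
    return -np.hypot(sobel(img, 0), sobel(img, 1)).sum()   # negative gradient magnitude

es = cma.CMAEvolutionStrategy([0.0, 0.0], 2.0)
while not es.stop():
    sols = es.ask()
    es.tell(sols, [objective(v) for v in sols])
print("estimated motion:", es.result.xbest, "true:", v_true)
```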
Amplitude- and rise-time-compensated filters
Nowlin, Charles H.
1984-01-01
An amplitude-compensated rise-time-compensated filter for a pulse time-of-occurrence (TOOC) measurement system is disclosed. The filter converts an input pulse, having the characteristics of random amplitudes and random, non-zero rise times, to a bipolar output pulse wherein the output pulse has a zero-crossing time that is independent of the rise time and amplitude of the input pulse. The filter differentiates the input pulse, along the linear leading edge of the input pulse, and subtracts therefrom a pulse fractionally proportional to the input pulse. The filter of the present invention can use discrete circuit components and avoids the use of delay lines.
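The shaping described above (differentiate the pulse, then subtract a fraction of the pulse) is easy to verify numerically: for a linear leading edge the zero crossing lands at t = 1/k regardless of amplitude or rise time. The pulse model and value of k below are illustrative, with k chosen so the crossing falls on the leading edge.

```python
import numpy as np

def artc_filter(pulse, dt, k):
    """Amplitude- and rise-time-compensated shaping as described in the
    abstract: differentiate the pulse and subtract a fraction k of it.
    For a linear leading edge x(t) = (A/tr)*t the output is
    (A/tr)*(1 - k*t), which crosses zero at t = 1/k, independent of the
    amplitude A and rise time tr."""
    return np.gradient(pulse, dt) - k * pulse

dt, k = 1e-9, 1e8                       # 1 ns sampling; crossing expected at 1/k = 10 ns
t = np.arange(0, 200e-9, dt)
for A, tr in [(1.0, 20e-9), (3.0, 80e-9)]:      # different amplitudes and rise times
    x = np.where(t < tr, A * t / tr, A)         # linear leading edge, then flat top
    y = artc_filter(x, dt, k)
    idx = np.where(np.diff(np.sign(y)) < 0)[0][0]
    print(f"A={A}, tr={tr*1e9:.0f} ns -> zero crossing near {t[idx]*1e9:.1f} ns")
```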
Error compensation of IQ modulator using two-dimensional DFT
NASA Astrophysics Data System (ADS)
Ohshima, Takashi; Maesaka, Hirokazu; Matsubara, Shinichi; Otake, Yuji
2016-06-01
It is important to precisely set and keep the phase and amplitude of an rf signal in the accelerating cavity of modern accelerators, such as an X-ray Free Electron Laser (XFEL) linac. In these accelerators an acceleration rf signal is generated or detected by an In-phase and Quadrature (IQ) modulator, or a demodulator. If there are any deviations of the phase and the amplitude from the ideal values, crosstalk between the phase and the amplitude of the output signal of the IQ modulator or the demodulator arises. This causes instability of the feedback controls that simultaneously stabilize both the rf phase and the amplitude. To compensate for such deviations, we developed a novel compensation method using a two-dimensional Discrete Fourier Transform (DFT). Because the observed deviations of the phase and amplitude of an IQ modulator involve sinusoidal and polynomial behaviors in the phase angle and the amplitude of the rf vector, respectively, the DFT calculation with these basis functions makes a good approximation with a small number of compensation coefficients. Also, we can suppress high-frequency noise components arising when we measure the deviation data. These characteristics have advantages compared to a Look Up Table (LUT) compensation method. The LUT method usually demands many compensation elements, such as about 300, that are not easy to treat. We applied the DFT compensation method to the output rf signal of a C-band IQ modulator at SACLA, which is an XFEL facility in Japan. The amplitude deviation of the IQ modulator after the DFT compensation was reduced from 15.0% to less than 0.2% at the peak for an amplitude control range from 0.1 V to 0.9 V (1.0 V full scale) and for a phase control range from 0 to 360 degrees. The number of compensation coefficients is 60, which is smaller than that of the LUT method, and is easy to treat and maintain.
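The idea of fitting the measured deviation surface with a small number of 2-D DFT coefficients (and thereby suppressing measurement noise) can be sketched as below. The grid, the synthetic deviation, and the number of retained harmonics are assumptions; the paper's exact basis functions and its 60 coefficients are not reproduced.

```python
import numpy as np

# Hypothetical measured amplitude deviation (%) on a grid of IQ set-points:
# rows = phase settings (0..360 deg), cols = amplitude settings (0.1..0.9 V).
phase = np.linspace(0, 2 * np.pi, 36, endpoint=False)
amp = np.linspace(0.1, 0.9, 17)
P, A = np.meshgrid(phase, amp, indexing="ij")
rng = np.random.default_rng(1)
deviation = (3.0 * np.cos(P) * (1 - A) + 1.5 * np.sin(2 * P) * A**2
             + 0.3 * rng.standard_normal(P.shape))   # smooth surface + measurement noise

# Keep only a small block of low-order 2-D DFT coefficients.
F = np.fft.fft2(deviation)
keep = np.zeros_like(F)
kp, ka = 3, 5                                 # assumed harmonics kept per axis
keep[:kp, :ka] = F[:kp, :ka]
keep[-kp + 1:, :ka] = F[-kp + 1:, :ka]
keep[:kp, -ka + 1:] = F[:kp, -ka + 1:]
keep[-kp + 1:, -ka + 1:] = F[-kp + 1:, -ka + 1:]
smooth = np.fft.ifft2(keep).real              # smooth compensation table, noise suppressed

print("residual after subtracting the fitted table: "
      f"{np.abs(deviation - smooth).max():.2f} (was {np.abs(deviation).max():.2f})")
```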
USDA-ARS?s Scientific Manuscript database
This study evaluated linear spectral unmixing (LSU), mixture tuned matched filtering (MTMF) and support vector machine (SVM) techniques for detecting and mapping giant reed (Arundo donax L.), an invasive weed that presents a severe threat to agroecosystems and riparian areas throughout the southern ...
Demonstrating the Direction of Angular Velocity in Circular Motion
ERIC Educational Resources Information Center
Demircioglu, Salih; Yurumezoglu, Kemal; Isik, Hakan
2015-01-01
Rotational motion is ubiquitous in nature, from astronomical systems to household devices in everyday life to elementary models of atoms. Unlike the tangential velocity vector that represents the instantaneous linear velocity (magnitude and direction), an angular velocity vector is conceptually more challenging for students to grasp. In physics…
NASA Technical Reports Server (NTRS)
Folta, David C.; Carpenter, J. Russell
1999-01-01
A decentralized control is investigated for applicability to the autonomous formation flying control algorithm developed by GSFC for the New Millennium Program Earth Observing-1 (EO-1) mission. This decentralized framework has the following characteristics: The approach is non-hierarchical, and coordination by a central supervisor is not required; Detected failures degrade the system performance gracefully; Each node in the decentralized network processes only its own measurement data, in parallel with the other nodes; Although the total computational burden over the entire network is greater than it would be for a single, centralized controller, fewer computations are required locally at each node; Requirements for data transmission between nodes are limited to only the dimension of the control vector, at the cost of maintaining a local additional data vector. The data vector compresses all past measurement history from all the nodes into a single vector of the dimension of the state; and The approach is optimal with respect to standard cost functions. The current approach is valid for linear time-invariant systems only. Similar to the GSFC formation flying algorithm, the extension to linear LQG time-varying systems requires that each node propagate its filter covariance forward (navigation) and controller Riccati matrix backward (guidance) at each time step. Extension of the GSFC algorithm to non-linear systems can also be accomplished via linearization about a reference trajectory in the standard fashion, or linearization about the current state estimate as with the extended Kalman filter. To investigate the feasibility of the decentralized integration with the GSFC algorithm, an existing centralized LQG design for a single spacecraft orbit control problem is adapted to the decentralized framework while using the GSFC algorithm's state transition matrices and framework. The existing GSFC design uses reference trajectories for each spacecraft in the formation and, by appropriate choice of coordinates and simplified measurement modeling, is formulated as a linear time-invariant system. Results for improvements to the GSFC algorithm and a multiple satellite formation will be addressed. The goal of this investigation is to progressively relax the assumptions that result in linear time-invariance, ultimately to the point of linearization of the non-linear dynamics about the current state estimate as in the extended Kalman filter. An assessment will then be made about the feasibility of the decentralized approach to the realistic formation flying application of the EO-1/Landsat 7 formation flying experiment.
L 1-2 minimization for exact and stable seismic attenuation compensation
NASA Astrophysics Data System (ADS)
Wang, Yufeng; Ma, Xiong; Zhou, Hui; Chen, Yangkang
2018-06-01
Frequency-dependent amplitude absorption and phase velocity dispersion are typically linked by the causality-imposed Kramers-Kronig relations, which inevitably degrade the quality of seismic data. Seismic attenuation compensation is an important processing approach for enhancing signal resolution and fidelity, which can be performed on either pre-stack or post-stack data so as to mitigate amplitude absorption and phase dispersion effects resulting from intrinsic anelasticity of subsurface media. Inversion-based compensation with L1 norm constraint, enlightened by the sparsity of the reflectivity series, enjoys better stability over traditional inverse Q filtering. However, constrained L1 minimization serving as the convex relaxation of the literal L0 sparsity count may not give the sparsest solution when the kernel matrix is severely ill conditioned. Recently, non-convex metric for compressed sensing has attracted considerable research interest. In this paper, we propose a nearly unbiased approximation of the vector sparsity, denoted as L1-2 minimization, for exact and stable seismic attenuation compensation. Non-convex penalty function of L1-2 norm can be decomposed into two convex subproblems via difference of convex algorithm, each subproblem can be solved efficiently by alternating direction method of multipliers. The superior performance of the proposed compensation scheme based on L1-2 metric over conventional L1 penalty is further demonstrated by both synthetic and field examples.
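For orientation, the L1-2 penalized problem and its difference-of-convex linearization take the following generic form, in which A is the forward (attenuation) operator, b the observed trace, x the reflectivity estimate and λ the regularization weight; the notation is generic rather than the paper's, and each convex subproblem is the one the authors solve with ADMM.

```latex
\min_{x}\ \tfrac{1}{2}\|Ax-b\|_2^2+\lambda\left(\|x\|_1-\|x\|_2\right),
\qquad
x^{k+1}=\arg\min_{x}\ \tfrac{1}{2}\|Ax-b\|_2^2+\lambda\|x\|_1
-\lambda\left\langle x,\ \frac{x^{k}}{\|x^{k}\|_2}\right\rangle .
```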
Mafusire, Cosmas; Krüger, Tjaart P J
2018-06-01
The concept of orthonormal vector circle polynomials is revisited by deriving a set from the Cartesian gradient of Zernike polynomials in a unit circle using a matrix-based approach. The heart of this model is a closed-form matrix equation of the gradient of Zernike circle polynomials expressed as a linear combination of lower-order Zernike circle polynomials related through a gradient matrix. This is a sparse matrix whose elements are two-dimensional standard basis transverse Euclidean vectors. Using the outer product form of the Cholesky decomposition, the gradient matrix is used to calculate a new matrix, which we used to express the Cartesian gradient of the Zernike circle polynomials as a linear combination of orthonormal vector circle polynomials. Since this new matrix is singular, the orthonormal vector polynomials are recovered by reducing the matrix to its row echelon form using the Gauss-Jordan elimination method. We extend the model to derive orthonormal vector general polynomials, which are orthonormal in a general pupil by performing a similarity transformation on the gradient matrix to give its equivalent in the general pupil. The outer form of the Gram-Schmidt procedure and the Gauss-Jordan elimination method are then applied to the general pupil to generate the orthonormal vector general polynomials from the gradient of the orthonormal Zernike-based polynomials. The performance of the model is demonstrated with a simulated wavefront in a square pupil inscribed in a unit circle.
NASA Astrophysics Data System (ADS)
Chen, Jiangwei; Dai, Yuyao; Yan, Lin; Zhao, Huimin
2018-04-01
In this paper, we demonstrate theoretically that a steady bound electromagnetic eigenstate can arise in an infinite homogeneous isotropic linear metamaterial with a zero real part of the impedance and a nonzero imaginary part of the wave vector; this is partly because, here, the nonzero imaginary part of the wave vector is not associated with energy loss or gain. By altering the value of the real part of the impedance of the metamaterial, the bound electromagnetic eigenstate may become a progressive wave. Our work may be useful for further understanding the energy conversion and conservation properties of electromagnetic waves in dispersive and absorptive media, and it provides a feasible route to stop, store and release electromagnetic waves (light) conveniently by using a metamaterial with a near-zero real part of the impedance.
NASA Astrophysics Data System (ADS)
Luo, Ya-Zhong; Zhang, Jin; Li, Hai-yang; Tang, Guo-Jin
2010-08-01
In this paper, a new optimization approach combining primer vector theory and evolutionary algorithms for fuel-optimal non-linear impulsive rendezvous is proposed. The optimization approach is designed to seek the optimal number of impulses as well as the optimal impulse vectors. In this approach, adding a midcourse impulse is determined by an interactive method, i.e. observing the primer-magnitude time history. An improved version of simulated annealing is employed to optimize the rendezvous trajectory with a fixed number of impulses. This interactive approach is evaluated on three test cases: coplanar circle-to-circle rendezvous, same-circle rendezvous and non-coplanar rendezvous. The results show that the interactive approach is effective and efficient in fuel-optimal non-linear rendezvous design. It can guarantee solutions which satisfy Lawden's necessary optimality conditions.
NASA Technical Reports Server (NTRS)
Jezewski, D.
1980-01-01
Primer vector theory is used in analyzing a set of linear relative-motion equations - the Clohessy-Wiltshire (C/W) equations - to determine the criteria and necessary conditions for an optimal N-impulse trajectory. The analysis develops the analytical criteria for improving a solution by: (1) moving any dependent or independent variable in the initial and/or final orbit, and (2) adding intermediate impulses. If these criteria are violated, the theory establishes a sufficient number of analytical equations. The subsequent satisfaction of these equations will result in the optimal position vectors and times of an N-impulse trajectory. The solution is examined for the specific boundary conditions of: (1) fixed-end conditions, two impulses, and time-open transfer; (2) an orbit-to-orbit transfer; and (3) a generalized rendezvous problem.
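For reference, the homogeneous Clohessy-Wiltshire equations analyzed here can be written as follows, with x radial, y along-track, z cross-track and n the mean motion of the reference orbit; sign conventions vary with the choice of axes.

```latex
\ddot{x}-2n\dot{y}-3n^{2}x=0,\qquad
\ddot{y}+2n\dot{x}=0,\qquad
\ddot{z}+n^{2}z=0 .
```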
RF pulse shape control in the compact linear collider test facility
NASA Astrophysics Data System (ADS)
Kononenko, Oleksiy; Corsini, Roberto
2018-07-01
The Compact Linear Collider (CLIC) is a study for an electron-positron machine aiming at accelerating and colliding particles at the next energy frontier. The CLIC concept is based on the novel two-beam acceleration scheme, where a high-current low-energy drive beam generates RF power in a series of power extraction and transfer structures that accelerate the low-current main beam. To compensate for the transient beam-loading and meet the energy spread specification requirements for the main linac, the RF pulse shape must be carefully optimized. This was recently modelled by varying the drive beam phase switch times in the sub-harmonic buncher so that, when combined, the drive beam modulation translates into the required voltage modulation of the accelerating pulse. In this paper, the control over the RF pulse shape with the phase switches, which is crucial for the success of the developed compensation model, is studied. The results of the experimental verification of this control method are presented and a good agreement with the numerical predictions is demonstrated. Implications for the CLIC beam-loading compensation model are also discussed.
Malekiha, Mahdi; Tselniker, Igor; Plant, David V
2016-02-22
In this work, we propose and experimentally demonstrate a novel low-complexity technique for fiber nonlinearity compensation. We achieved a transmission distance of 2818 km for a 32-GBaud dual-polarization 16QAM signal. For efficient implementation, and to facilitate integration with conventional digital signal processing (DSP) approaches, we independently compensate fiber nonlinearities after linear impairment equalization. Therefore this algorithm can be easily implemented in currently deployed transmission systems after using linear DSP. The proposed equalizer operates at one sample per symbol and requires only one computation step. The structure of the algorithm is based on a first-order perturbation model with quantized perturbation coefficients. Also, it does not require any prior calculation or detailed knowledge of the transmission system. We identified common symmetries between perturbation coefficients to avoid duplicate and unnecessary operations. In addition, we use only a few adaptive filter coefficients by grouping multiple nonlinear terms and dedicating only one adaptive nonlinear filter coefficient to each group. Finally, the complexity of the proposed algorithm is lower than previously studied nonlinear equalizers by more than one order of magnitude.
NASA Technical Reports Server (NTRS)
Coon, Craig R.; Cardullo, Frank M.; Zaychik, Kirill B.
2014-01-01
The ability to develop highly advanced simulators is a critical need that has the ability to significantly impact the aerospace industry. The aerospace industry is advancing at an ever increasing pace and flight simulators must match this development with ever increasing urgency. In order to address both current problems and potential advancements with flight simulator techniques, several aspects of current control law technology of the National Aeronautics and Space Administration (NASA) Langley Research Center's Cockpit Motion Facility (CMF) motion base simulator were examined. Preliminary investigation of linear models based upon hardware data were examined to ensure that the most accurate models are used. This research identified both system improvements in the bandwidth and more reliable linear models. Advancements in the compensator design were developed and verified through multiple techniques. The position error rate feedback, the acceleration feedback and the force feedback were all analyzed in the heave direction using the nonlinear model of the hardware. Improvements were made using the position error rate feedback technique. The acceleration feedback compensator also provided noteworthy improvement, while attempts at implementing a force feedback compensator proved unsuccessful.
Jafari, Ramin; Chhabra, Shalini; Prince, Martin R; Wang, Yi; Spincemaille, Pascal
2018-04-01
The purpose of this work is to propose an efficient algorithm to perform dual-input compartment modeling for generating perfusion maps in the liver. We implemented whole field-of-view linear least squares (LLS) to fit a delay-compensated dual-input single-compartment model to very high temporal resolution (four frames per second) contrast-enhanced 3D liver data, to calculate kinetic parameter maps. Using simulated data and experimental data in healthy subjects and patients, whole-field LLS was compared with the conventional voxel-wise nonlinear least-squares (NLLS) approach in terms of accuracy, performance, and computation time. Simulations showed good agreement between LLS and NLLS for a range of kinetic parameters. The whole-field LLS method allowed generating liver perfusion maps approximately 160-fold faster than voxel-wise NLLS, while obtaining similar perfusion parameters. Delay-compensated dual-input liver perfusion analysis using whole-field LLS allows generating perfusion maps with a considerable speedup compared with conventional voxel-wise NLLS fitting. Magn Reson Med 79:2415-2421, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Linear phase conjugation for atmospheric aberration compensation
NASA Astrophysics Data System (ADS)
Grasso, Robert J.; Stappaerts, Eddy A.
1998-01-01
Atmospherically induced aberrations can seriously degrade laser performance, greatly affecting the beam that finally reaches the target. Lasers propagated over any distance in the atmosphere suffer from a significant decrease in fluence at the target due to these aberrations, especially for propagation over long distances. This is due primarily to fluctuations in the atmosphere over the propagation path and to platform motion relative to the intended aimpoint. Also, delivery of high fluence to the target typically requires low beam divergence; thus, atmospheric turbulence, platform motion, or both result in a lack of fine aimpoint control to keep the beam directed at the target. To improve both the beam quality and the amount of laser energy delivered to the target, Northrop Grumman has developed the Active Tracking System (ATS), a novel linear phase conjugation aberration compensation technique. Utilizing a silicon spatial light modulator (SLM) as a dynamic wavefront reversing element, ATS undoes aberrations induced by the atmosphere, platform motion or both. ATS continually tracks the target as well as compensates for atmospheric and platform-motion-induced aberrations. This results in a high-fidelity, near-diffraction-limited beam delivered to the target.
NASA Astrophysics Data System (ADS)
Shastri, Niket; Pathak, Kamlesh
2018-05-01
The water vapor content in the atmosphere plays a very important role in climate. In this paper, the application of GPS signals in meteorology is discussed; this is a useful technique for estimating the precipitable water vapor of the atmosphere. Various algorithms, namely artificial neural network, support vector machine and multiple linear regression, are used to predict precipitable water vapor. Comparative studies in terms of root mean square error and mean absolute error are also carried out for all the algorithms.
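A comparison of this kind maps naturally onto standard regression tooling. The sketch below uses synthetic stand-in predictors and scikit-learn models (linear regression, SVR, and a small MLP) to report RMSE and MAE; the feature set, model settings, and data are assumptions, not those of the study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error

# Synthetic stand-in for GPS-derived predictors (e.g. zenith wet delay, surface
# temperature, pressure) and precipitable water vapor targets.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 20 + 5 * X[:, 0] - 2 * X[:, 1] + 0.5 * X[:, 0] * X[:, 2] + rng.normal(scale=1.0, size=500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "multiple linear regression": LinearRegression(),
    "support vector machine": SVR(kernel="rbf", C=10.0),
    "artificial neural network": MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    mae = mean_absolute_error(y_te, pred)
    print(f"{name}: RMSE={rmse:.2f} mm, MAE={mae:.2f} mm")
```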
Einstein-aether theory with a Maxwell field: General formalism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balakin, Alexander B., E-mail: Alexander.Balakin@kpfu.ru; Lemos, José P.S., E-mail: joselemos@ist.utl.pt
We extend the Einstein-aether theory to include the Maxwell field in a nontrivial manner by taking into account its interaction with the time-like unit vector field characterizing the aether. We also include a generic matter term. We present a model with a Lagrangian that includes cross-terms linear and quadratic in the Maxwell tensor, linear and quadratic in the covariant derivative of the aether velocity four-vector, linear in its second covariant derivative and in the Riemann tensor. We decompose these terms with respect to the irreducible parts of the covariant derivative of the aether velocity, namely, the acceleration four-vector, the shear and vorticity tensors, and the expansion scalar. Furthermore, we discuss the influence of an aether non-uniform motion on the polarization and magnetization of the matter in such an aether environment, as well as on its dielectric and magnetic properties. The total self-consistent system of equations for the electromagnetic and the gravitational fields, and the dynamic equations for the unit vector aether field are obtained. Possible applications of this system are discussed. Based on the principles of effective field theories, we display in an appendix all the terms up to fourth order in derivative operators that can be considered in a Lagrangian that includes the metric, the electromagnetic and the aether fields.
Hyperbolic-symmetry vector fields.
Gao, Xu-Zhen; Pan, Yue; Cai, Meng-Qiang; Li, Yongnan; Tu, Chenghou; Wang, Hui-Tian
2015-12-14
We present and construct a new kind of orthogonal coordinate system, hyperbolic coordinate system. We present and design a new kind of local linearly polarized vector fields, which is defined as the hyperbolic-symmetry vector fields because the points with the same polarization form a series of hyperbolae. We experimentally demonstrate the generation of such a kind of hyperbolic-symmetry vector optical fields. In particular, we also study the modified hyperbolic-symmetry vector optical fields with the twofold and fourfold symmetric states of polarization when introducing the mirror symmetry. The tight focusing behaviors of these vector fields are also investigated. In addition, we also fabricate micro-structures on the K9 glass surfaces by several tightly focused (modified) hyperbolic-symmetry vector fields patterns, which demonstrate that the simulated tightly focused fields are in good agreement with the fabricated micro-structures.
Lin, Jhih-Hong; Chiang, Mao-Hsiung
2016-08-25
Magnetic shape memory (MSM) alloys are a new class of smart materials with extraordinary strains up to 12% and frequencies in the range of 1 to 2 kHz. The MSM actuator is a potential device which can achieve high-performance electromagnetic actuation by using the properties of MSM alloys. However, its pronounced non-linear hysteresis behavior is a significant barrier to controlling the MSM actuator. In this paper, the Preisach model was used, by capturing experiments with different input signals and output responses, to model the hysteresis of the MSM actuator, and the inverse Preisach model, as a feedforward control, provided compensational signals to the MSM actuator to linearize the hysteresis non-linearity. The control strategy for path tracking combined the hysteresis compensator and the modified fuzzy sliding mode control (MFSMC), which served as a path controller. Based on the experimental results, it was verified that a tracking error in the order of micrometers was achieved.
Flexible Modes Control Using Sliding Mode Observers: Application to Ares I
NASA Technical Reports Server (NTRS)
Shtessel, Yuri B.; Hall, Charles E.; Baev, Simon; Orr, Jeb S.
2010-01-01
The launch vehicle dynamics affected by bending and sloshing modes are considered. Attitude measurement data that are corrupted by flexible modes could yield instability of the vehicle dynamics. Flexible body and sloshing modes are reconstructed by sliding mode observers. The resultant estimates are used to remove the undesirable dynamics from the measurements, and the direct effects of sloshing and bending modes on the launch vehicle are compensated by means of a controller that is designed without taking the bending and sloshing modes into account. A linearized mathematical model of Ares I launch vehicle was derived based on FRACTAL, a linear model developed by NASA/MSFC. The compensated vehicle dynamics with a simple PID controller were studied for the launch vehicle model that included two bending modes, two slosh modes and actuator dynamics. A simulation study demonstrated stable and accurate performance of the flight control system with the augmented simple PID controller without the use of traditional linear bending filters.
Ultra-Low-Dropout Linear Regulator
NASA Technical Reports Server (NTRS)
Thornton, Trevor; Lepkowski, William; Wilk, Seth
2011-01-01
A radiation-tolerant, ultra-low-dropout linear regulator can operate between -150 and 150 C. Prototype components were demonstrated to be performing well after a total ionizing dose of 1 Mrad (Si). Unlike existing components, the linear regulator developed during this activity is unconditionally stable over all operating regimes without the need for an external compensation capacitor. The absence of an external capacitor reduces overall system mass/volume, increases reliability, and lowers cost. Linear regulators generate a precisely controlled voltage for electronic circuits regardless of fluctuations in the load current that the circuit draws from the regulator.
Error Model and Compensation of Bell-Shaped Vibratory Gyro
Su, Zhong; Liu, Ning; Li, Qing
2015-01-01
A bell-shaped vibratory angular velocity gyro (BVG), inspired by the Chinese traditional bell, is a type of axisymmetric shell resonator gyroscope. This paper focuses on development of an error model and compensation of the BVG. A dynamic equation is firstly established, based on a study of the BVG working mechanism. This equation is then used to evaluate the relationship between the angular rate output signal and the bell-shaped resonator character, analyze the influence of the main error sources and set up an error model for the BVG. The error sources are classified from the error propagation characteristics, and the compensation method is presented based on the error model. Finally, using the error model and compensation method, the BVG is calibrated experimentally, including rough compensation, temperature and bias compensation, scale factor compensation and noise filtering. The experimentally obtained bias instability is improved from 20.5°/h to 4.7°/h, the random walk from 2.8°/√h to 0.7°/√h, and the nonlinearity from 0.2% to 0.03%. Based on the error compensation, it is shown that there is a good linear relationship between the sensing signal and the angular velocity, suggesting that the BVG is a good candidate for the field of low and medium rotational speed measurement. PMID:26393593
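The scale-factor and bias portion of such a calibration can be sketched as a rate-table linear fit; the numbers below are invented, and the BVG's temperature compensation and noise filtering are not shown.

```python
import numpy as np

# Illustrative rate-table calibration of a vibratory gyro channel: fit scale
# factor and bias from the sensing signal at known angular rates, then report
# the residual nonlinearity (values are made up, not the BVG's data).
rate = np.array([-100.0, -50.0, -20.0, 0.0, 20.0, 50.0, 100.0])          # deg/s
signal = np.array([-2.013, -1.004, -0.401, 0.006, 0.409, 1.013, 2.019])  # V

scale, bias = np.polyfit(rate, signal, 1)          # signal ~ scale*rate + bias
rate_est = (signal - bias) / scale                 # compensated output
nonlinearity = np.max(np.abs(rate_est - rate)) / (rate.max() - rate.min())
print(f"scale factor = {scale*1e3:.3f} mV/(deg/s), bias = {bias*1e3:.2f} mV, "
      f"nonlinearity = {100*nonlinearity:.3f} % FS")
```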
Reconstructing matter profiles of spherically compensated cosmic regions in ΛCDM cosmology
NASA Astrophysics Data System (ADS)
de Fromont, Paul; Alimi, Jean-Michel
2018-02-01
The absence of a physically motivated model for large-scale profiles of cosmic voids limits our ability to extract valuable cosmological information from their study. In this paper, we address this problem by introducing the spherically compensated cosmic regions, named CoSpheres. Such cosmic regions are identified around local extrema in the density field and admit a unique compensation radius R1 where the internal spherical mass is exactly compensated. Their origin is studied by extending the standard peak model and implementing the compensation condition. Since the compensation radius evolves as the Universe itself, R1(t) ∝ a(t), CoSpheres behave as bubble Universes with fixed comoving volume. Using the spherical collapse model, we reconstruct their profiles with a very high accuracy until z = 0 in N-body simulations. CoSpheres are symmetrically defined and reconstructed for both central maximum (seeding haloes and galaxies) and minimum (identified with cosmic voids). We show that the full non-linear dynamics can be solved analytically around this particular compensation radius, providing useful predictions for cosmology. This formalism highlights original correlations between local extremum and their large-scale cosmic environment. The statistical properties of these spherically compensated cosmic regions and the possibilities to constrain efficiently both cosmology and gravity will be investigated in companion papers.
USSR and Eastern Europe Scientific Abstracts- Physics - Number 45
1978-10-02
compound, a function of the angle between the electrical vector of the light wave and the optical c-axis of the crystal. Heterodiodes have first ... of naturally radioactive U, Th and K in a 1-liter sample. USSR A VECTOR MESON IN A QUANTUM ELECTROMAGNETIC FIELD Moscow TEORETICHESKAYA I ... arbitrary spin in a classical plane electromagnetic field are used to find the exact wave function of a vector meson in the quantum field of a linearly
Vector optical fields with polarization distributions similar to electric and magnetic field lines.
Pan, Yue; Li, Si-Min; Mao, Lei; Kong, Ling-Jun; Li, Yongnan; Tu, Chenghou; Wang, Pei; Wang, Hui-Tian
2013-07-01
We present, design and generate a new kind of vector optical field with linear polarization distributions modeled on electric and magnetic field lines. The geometric configurations of "electric charges" and "magnetic charges" can engineer the spatial structure and symmetry of the polarizations of the vector optical field, providing additional degrees of freedom that assist in controlling the field symmetry at the focus and allowing the field distribution at the focus to be engineered for specific applications.
A Two-Layer Least Squares Support Vector Machine Approach to Credit Risk Assessment
NASA Astrophysics Data System (ADS)
Liu, Jingli; Li, Jianping; Xu, Weixuan; Shi, Yong
Least squares support vector machine (LS-SVM) is a revised version of the support vector machine (SVM) and has been proved to be a useful tool for pattern recognition. LS-SVM has excellent generalization performance and low computational cost. In this paper, we propose a new method called the two-layer least squares support vector machine, which combines kernel principal component analysis (KPCA) and the linear programming form of the least squares support vector machine. With this method, sparseness and robustness are obtained when handling high-dimensional and large-scale databases. A U.S. commercial credit card database is used to test the efficiency of our method and the results prove satisfactory.
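A minimal two-layer sketch on synthetic data is given below, with scikit-learn's KernelPCA as the nonlinear feature layer and a ridge classifier standing in for the LS-SVM / linear-programming layer; the dataset, kernel, and hyperparameters are assumptions for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Two-layer idea on synthetic "credit" data: a kernel PCA layer for nonlinear
# feature extraction followed by a linear least-squares classifier.
X, y = make_classification(n_samples=1000, n_features=20, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = make_pipeline(
    StandardScaler(),
    KernelPCA(n_components=15, kernel="rbf", gamma=0.05),
    RidgeClassifier(alpha=1.0),
)
print("holdout accuracy:", model.fit(X_tr, y_tr).score(X_te, y_te))
```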
NASA Astrophysics Data System (ADS)
Zhong, Xuemin; Liu, Hongqi; Mao, Xinyong; Li, Bin; He, Songping; Peng, Fangyu
2018-05-01
Large multi-axis propeller-measuring machines have two types of geometric error, position-independent geometric errors (PIGEs) and position-dependent geometric errors (PDGEs), which both have significant effects on the volumetric error of the measuring tool relative to the worktable. This paper focuses on modeling, identifying and compensating for the volumetric error of the measuring machine. A volumetric error model in the base coordinate system is established based on screw theory considering all the geometric errors. In order to fully identify all the geometric error parameters, a new method for systematic measurement and identification is proposed. All the PIGEs of adjacent axes and the six PDGEs of the linear axes are identified with a laser tracker using the proposed model. Finally, a volumetric error compensation strategy is presented and an inverse kinematic solution for compensation is proposed. The final measuring and compensation experiments have further verified the efficiency and effectiveness of the measuring and identification method, indicating that the method can be used in volumetric error compensation for large machine tools.
NASA Astrophysics Data System (ADS)
Hsieh, Cheng-Ta; Huang, Kae-Horng; Lee, Chang-Hsing; Han, Chin-Chuan; Fan, Kuo-Chin
2017-12-01
Robust face recognition under illumination variations is an important and challenging task in a face recognition system, particularly for face recognition in the wild. In this paper, a face image preprocessing approach, called spatial adaptive shadow compensation (SASC), is proposed to eliminate shadows in the face image due to different lighting directions. First, spatial adaptive histogram equalization (SAHE), which uses a face intensity prior model, is proposed to enhance the contrast of each local face region without generating visible noise in smooth face areas. Adaptive shadow compensation (ASC), which performs shadow compensation in each local image block, is then used to produce a well-compensated face image appropriate for face feature extraction and recognition. Finally, null-space linear discriminant analysis (NLDA) is employed to extract discriminant features from SASC-compensated images. Experiments performed on the Yale B, Yale B extended, and CMU PIE face databases have shown that the proposed SASC always yields the best face recognition accuracy. That is, SASC is more robust for face recognition under illumination variations than other shadow compensation approaches.
Tunable compensation of GVD-induced FM-AM conversion in the front end of high-power lasers.
Li, Rao; Fan, Wei; Jiang, Youen; Qiao, Zhi; Zhang, Peng; Lin, Zunqi
2017-02-01
Group velocity dispersion (GVD) is one of the main factors leading to frequency modulation (FM) to amplitude modulation (AM) conversion in the front end of high-power lasers. In order to compensate for the FM-AM conversion, the influence of GVD, which is mainly induced by the phase filter effect, is theoretically investigated. Based on the theoretical analysis, a high-precision, high-stability, tunable GVD compensator using gratings is designed and experimentally demonstrated. The results indicate that the compensator can be implemented in high-power laser facilities to compensate for the GVD of fibers with lengths between 200 and 500 m when the bandwidth of a phase-modulated laser is 0.34 nm or 0.58 nm and the central wavelength is in the range of 1052.3217-1053.6008 nm. Due to the linear relationship between the dispersion and the spacing distance of the gratings, the compensator can easily achieve closed-loop feedback control. The proposed GVD compensator promises significant applications in large laser facilities, especially in the future polarizing fiber front end of high-power lasers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Durrer, Ruth; Tansella, Vittorio, E-mail: ruth.durrer@unige.ch, E-mail: vittorio.tansella@unige.ch
We derive the contribution to relativistic galaxy number count fluctuations from vector and tensor perturbations within linear perturbation theory. Our result is consistent with the relativistic corrections to number counts due to scalar perturbations, where the Bardeen potentials are replaced with line-of-sight projections of vector and tensor quantities. Since vector and tensor perturbations do not lead to density fluctuations, the standard density term in the number counts is absent. We apply our results to vector perturbations which are induced from scalar perturbations at second order and give numerical estimates of their contributions to the power spectrum of relativistic galaxy number counts.
Manifolds for pose tracking from monocular video
NASA Astrophysics Data System (ADS)
Basu, Saurav; Poulin, Joshua; Acton, Scott T.
2015-03-01
We formulate a simple human-pose tracking theory from monocular video based on the fundamental relationship between changes in pose and image motion vectors. We investigate the natural embedding of the low-dimensional body pose space into a high-dimensional space of body configurations that behaves locally in a linear manner. The embedded manifold facilitates the decomposition of the image motion vectors into basis motion vector fields of the tangent space to the manifold. This approach benefits from the style invariance of image motion flow vectors, and experiments to validate the fundamental theory show reasonable accuracy (within 4.9 deg of the ground truth).
A hypothetical learning trajectory for conceptualizing matrices as linear transformations
NASA Astrophysics Data System (ADS)
Andrews-Larson, Christine; Wawro, Megan; Zandieh, Michelle
2017-08-01
In this paper, we present a hypothetical learning trajectory (HLT) aimed at supporting students in developing flexible ways of reasoning about matrices as linear transformations in the context of introductory linear algebra. In our HLT, we highlight the integral role of the instructor in this development. Our HLT is based on the 'Italicizing N' task sequence, in which students work to generate, compose, and invert matrices that correspond to geometric transformations specified within the problem context. In particular, we describe the ways in which the students develop local transformation views of matrix multiplication (focused on individual mappings of input vectors to output vectors) and extend these local views to more global views in which matrices are conceptualized in terms of how they transform a space in a coordinated way.
Evaluation of a Nonlinear Finite Element Program - ABAQUS.
1983-03-15
anisotropic properties. * MATEXP - Linearly elastic thermal expansions with isotropic, orthotropic and anisotropic properties. * MATELG - Linearly ... elastic materials for general sections (options available for beam and shell elements). * MATEXG - Linearly elastic thermal expansions for general ... decomposition of a matrix. * Q-R algorithm. * Vector normalization, etc. Obviously, by consolidating all the utility subroutines in a library, ABAQUS has
A problem in non-linear Diophantine approximation
NASA Astrophysics Data System (ADS)
Harrap, Stephen; Hussain, Mumtaz; Kristensen, Simon
2018-05-01
In this paper we obtain the Lebesgue and Hausdorff measure results for the set of vectors satisfying infinitely many fully non-linear Diophantine inequalities. The set is associated with a class of linear inhomogeneous partial differential equations whose solubility depends on a certain Diophantine condition. The failure of the Diophantine condition guarantees the existence of a smooth solution.
Iterative color-multiplexed, electro-optical processor.
Psaltis, D; Casasent, D; Carlotto, M
1979-11-01
A noncoherent optical vector-matrix multiplier using a linear LED source array and a linear P-I-N photodiode detector array has been combined with a 1-D adder in a feedback loop. The resultant iterative optical processor and its use in solving simultaneous linear equations are described. Operation on complex data is provided by a novel color-multiplexing system.
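A minimal numerical sketch of the fixed-point iteration that such an iterative feedback processor effectively implements when solving simultaneous linear equations; the matrix, right-hand side, and relaxation factor below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Richardson-type fixed-point iteration, the kind of update an iterative
# feedback processor realizes: x_{k+1} = x_k + mu * (b - A @ x_k),
# which converges to the solution of A x = b for a suitable relaxation factor mu.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

x = np.zeros_like(b)
mu = 0.2                      # relaxation factor (illustrative, problem-dependent)
for _ in range(200):          # each pass corresponds to one trip around the feedback loop
    x = x + mu * (b - A @ x)

print(x, np.linalg.solve(A, b))   # iterate vs. direct solution
```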
Amiralizadeh, Siamak; Nguyen, An T; Rusch, Leslie A
2013-08-26
We investigate the performance of digital filter back-propagation (DFBP) using coarse parameter estimation for mitigating SOA nonlinearity in coherent communication systems. We introduce a simple, low-overhead method of parameter estimation for DFBP based on the error vector magnitude (EVM) as a figure of merit. The bit error rate (BER) penalty achieved with this method is negligible compared with DFBP using fine parameter estimation. We examine different bias currents for two commercial SOAs used as booster amplifiers in our experiments to find optimum operating points and experimentally validate our method. The coarse-parameter DFBP efficiently compensates SOA-induced nonlinearity for both SOA types in 80 km propagation of a 16-QAM signal at 22 Gbaud.
Quasi-eccentricity error modeling and compensation in vision metrology
NASA Astrophysics Data System (ADS)
Shen, Yijun; Zhang, Xu; Cheng, Wei; Zhu, Limin
2018-04-01
Circular targets are commonly used in vision applications for their detection accuracy and robustness. The eccentricity error of the circular target caused by perspective projection is one of the main sources of measurement error and needs to be compensated in high-accuracy measurement. In this study, the impact of lens distortion on the eccentricity error is comprehensively investigated. The traditional eccentricity error becomes a quasi-eccentricity error in the non-linear camera model. The quasi-eccentricity error model is established by comparing the quasi-center of the distorted ellipse with the true projection of the object circle center. Then, an eccentricity error compensation framework is proposed which compensates the error by iteratively refining the image point to the true projection of the circle center. Both simulation and real experiments confirm the effectiveness of the proposed method in several vision applications.
NASA Technical Reports Server (NTRS)
Title, A. M.
1978-01-01
Filter includes partial polarizer between birefringent elements. Plastic film on partial polarizer compensates for any polarization rotation by partial polarizer. Two quarter-wave plates change incident, linearly polarized light into elliptically polarized light.
Linear nozzle with tailored gas plumes
Leon, David D.; Kozarek, Robert L.; Mansour, Adel; Chigier, Norman
2001-01-01
There is claimed a method for depositing fluid material from a linear nozzle in a substantially uniform manner across and along a surface. The method includes directing gaseous medium through said nozzle to provide a gaseous stream at the nozzle exit that entrains fluid material supplied to the nozzle, said gaseous stream being provided with a velocity profile across the nozzle width that compensates for the gaseous medium's tendency to assume an axisymmetric configuration after leaving the nozzle and before reaching the surface. There is also claimed a nozzle divided into respective side-by-side zones, or preferably chambers, through which a gaseous stream can be delivered in various velocity profiles across the width of said nozzle to compensate for the tendency of this gaseous medium to assume an axisymmetric configuration.
Linear nozzle with tailored gas plumes and method
Leon, David D.; Kozarek, Robert L.; Mansour, Adel; Chigier, Norman
1999-01-01
There is claimed a method for depositing fluid material from a linear nozzle in a substantially uniform manner across and along a surface. The method includes directing gaseous medium through said nozzle to provide a gaseous stream at the nozzle exit that entrains fluid material supplied to the nozzle, said gaseous stream being provided with a velocity profile across the nozzle width that compensates for the gaseous medium's tendency to assume an axisymmetric configuration after leaving the nozzle and before reaching the surface. There is also claimed a nozzle divided into respective side-by-side zones, or preferably chambers, through which a gaseous stream can be delivered in various velocity profiles across the width of said nozzle to compensate for the tendency of this gaseous medium to assume an axisymmetric configuration.
NASA Astrophysics Data System (ADS)
Zhou, Yanru; Zhao, Yuxiang; Tian, Hui; Zhang, Dengwei; Huang, Tengchao; Miao, Lijun; Shu, Xiaowu; Che, Shuangliang; Liu, Cheng
2016-12-01
In an axial magnetic field (AMF), which is perpendicular to the plane of the fiber coil, a polarization-maintaining fiber optic gyro (PM-FOG) exhibits an axial magnetic error. This error is linearly related to the intensity of the AMF, the radius of the fiber coil, and the light wavelength, and is also influenced by the distribution of fiber twist. Once a PM-FOG is completely manufactured, the error depends only linearly on the AMF. A real-time compensation model is established to eliminate the error, and the experimental results show that the axial magnetic error of the PM-FOG is decreased from 5.83 to 0.09 deg/h in a 12 G AMF, an 18 dB suppression.
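A minimal calibration sketch of such a linear error model, assuming illustrative sensitivities and hypothetical calibration data rather than the measured PM-FOG response: the slope and offset of the error versus AMF intensity are fitted once and then subtracted from the gyro output in real time.

```python
import numpy as np

# Hypothetical calibration data: AMF intensity (G) vs. measured drift error (deg/h).
B_axial = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 12.0])
gyro_err = np.array([0.0, 0.97, 1.95, 2.9, 3.88, 5.83])

k, b0 = np.polyfit(B_axial, gyro_err, 1)   # linear error model: err ≈ k*B + b0

def compensate(gyro_rate, B_measured):
    """Subtract the modeled axial magnetic error from the raw gyro output."""
    return gyro_rate - (k * B_measured + b0)

# Raw output containing the 12 G error on top of an assumed 10 deg/h true rate:
print(compensate(10.0 + 5.83, 12.0))   # ~10.0 after compensation
```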
Linear nozzle with tailored gas plumes
Kozarek, Robert L.; Straub, William D.; Fischer, Joern E.; Leon, David D.
2003-01-01
There is claimed a method for depositing fluid material from a linear nozzle in a substantially uniform manner across and along a surface. The method includes directing gaseous medium through said nozzle to provide a gaseous stream at the nozzle exit that entrains fluid material supplied to the nozzle, said gaseous stream being provided with a velocity profile across the nozzle width that compensates for the gaseous medium's tendency to assume an axisymmetric configuration after leaving the nozzle and before reaching the surface. There is also claimed a nozzle divided into respective side-by-side zones, or preferably chambers, through which a gaseous stream can be delivered in various velocity profiles across the width of said nozzle to compensate for the tendency of this gaseous medium to assume an axisymmetric configuration.
Motion vector field phase-to-amplitude resampling for 4D motion-compensated cone-beam CT
NASA Astrophysics Data System (ADS)
Sauppe, Sebastian; Kuhm, Julian; Brehm, Marcus; Paysan, Pascal; Seghers, Dieter; Kachelrieß, Marc
2018-02-01
We propose a phase-to-amplitude resampling (PTAR) method to reduce motion blurring in motion-compensated (MoCo) 4D cone-beam CT (CBCT) image reconstruction, without increasing the computational complexity of the motion vector field (MVF) estimation approach. PTAR is able to improve the image quality in reconstructed 4D volumes, including both regular and irregular respiration patterns. The PTAR approach starts with a robust phase-gating procedure for the initial MVF estimation and then switches to a phase-adapted amplitude gating method. The switch implies an MVF-resampling, which makes them amplitude-specific. PTAR ensures that the MVFs, which have been estimated on phase-gated reconstructions, are still valid for all amplitude-gated reconstructions. To validate the method, we use an artificially deformed clinical CT scan with a realistic breathing pattern and several patient data sets acquired with a TrueBeamTM integrated imaging system (Varian Medical Systems, Palo Alto, CA, USA). Motion blurring, which still occurs around the area of the diaphragm or at small vessels above the diaphragm in artifact-specific cyclic motion compensation (acMoCo) images based on phase-gating, is significantly reduced by PTAR. Also, small lung structures appear sharper in the images. This is demonstrated both for simulated and real patient data. A quantification of the sharpness of the diaphragm confirms these findings. PTAR improves the image quality of 4D MoCo reconstructions compared to conventional phase-gated MoCo images, in particular for irregular breathing patterns. Thus, PTAR increases the robustness of MoCo reconstructions for CBCT. Because PTAR does not require any additional steps for the MVF estimation, it is computationally efficient. Our method is not restricted to CBCT but could rather be applied to other image modalities.
Vector processing efficiency of plasma MHD codes by use of the FACOM 230-75 APU
NASA Astrophysics Data System (ADS)
Matsuura, T.; Tanaka, Y.; Naraoka, K.; Takizuka, T.; Tsunematsu, T.; Tokuda, S.; Azumi, M.; Kurita, G.; Takeda, T.
1982-06-01
In the framework of pipelined vector architecture, the efficiency of vector processing is assessed with respect to plasma MHD codes in nuclear fusion research. By using a vector processor, the FACOM 230-75 APU, the limit of the enhancement factor due to parallelism of current vector machines is examined for three numerical codes based on a fluid model. Reasonable speed-up factors of approximately 6, 6, and 4 times faster than the highly optimized scalar versions are obtained for ERATO (linear stability code), AEOLUS-R1 (nonlinear stability code) and APOLLO (1-1/2D transport code), respectively. Problems of the pipelined vector processors are discussed from the viewpoint of restructuring, optimization and choice of algorithms. In conclusion, the important concept of "concurrency within pipelined parallelism" is emphasized.
Polarization ellipse and Stokes parameters in geometric algebra.
Santos, Adler G; Sugon, Quirino M; McNamara, Daniel J
2012-01-01
In this paper, we use geometric algebra to describe the polarization ellipse and Stokes parameters. We show that a solution to Maxwell's equation is a product of a complex basis vector in Jackson and a linear combination of plane wave functions. We convert both the amplitudes and the wave function arguments from complex scalars to complex vectors. This conversion allows us to separate the electric field vector and the imaginary magnetic field vector, because exponentials of imaginary scalars convert vectors to imaginary vectors and vice versa, while exponentials of imaginary vectors only rotate the vector or imaginary vector they are multiplied to. We convert this expression for polarized light into two other representations: the Cartesian representation and the rotated ellipse representation. We compute the conversion relations among the representation parameters and their corresponding Stokes parameters. And finally, we propose a set of geometric relations between the electric and magnetic fields that satisfy an equation similar to the Poincaré sphere equation.
Multirate parallel distributed compensation of a cluster in wireless sensor and actor networks
NASA Astrophysics Data System (ADS)
Yang, Chun-xi; Huang, Ling-yun; Zhang, Hao; Hua, Wang
2016-01-01
The stabilisation problem for one of the clusters with bounded multiple random time delays and packet dropouts in wireless sensor and actor networks is investigated in this paper. A new multirate switching model is constructed to describe the features of this single-input multiple-output linear system. Because controller design under the multiple constraints of the multirate switching model is difficult, the model is converted to a Takagi-Sugeno fuzzy model. By designing a multirate parallel distributed compensation, a sufficient condition is established to ensure that the closed-loop fuzzy control system is globally exponentially stable. The multirate parallel distributed compensation gains can be obtained by solving an auxiliary convex optimisation problem. Finally, two numerical examples show that, compared with solving for a switching controller, the multirate parallel distributed compensation can be obtained easily. Furthermore, it has stronger robust stability than an arbitrary switching controller and a single-rate parallel distributed compensation under the same conditions.
NASA Astrophysics Data System (ADS)
Tamboli, Prakash Kumar; Duttagupta, Siddhartha P.; Roy, Kallol
2015-08-01
The paper deals with dynamic compensation of delayed Self-Powered Flux Detectors (SPFDs) using a discrete-time H∞ filtering method for improving the response of SPFDs with significant delayed components, such as Platinum and Vanadium SPFDs. We also present a comparative study between Linear Matrix Inequality (LMI) based H∞ filtering and Algebraic Riccati Equation (ARE) based Kalman filtering methods with respect to their delay compensation capabilities. Finally, an improved recursive H∞ filter based on the adaptive fading memory technique is proposed, which provides improved performance over existing methods. The existing delay compensation algorithms do not account for the rate of change in the signal when determining the filter gain and therefore add significant noise during the delay compensation process. The proposed adaptive fading memory H∞ filter minimizes the overall noise very effectively while keeping the response time at minimum values. The recursive algorithm is easy to implement in real time as compared to the LMI (or ARE) based solutions.
Fast and slowly evolving vector solitons in mode-locked fibre lasers.
Sergeyev, Sergey V
2014-10-28
We report on a new vector model of an erbium-doped fibre laser mode locked with carbon nanotubes. This model goes beyond the limitations of the previously used models based on either coupled nonlinear Schrödinger or Ginzburg-Landau equations. Unlike the previous models, it accounts for the vector nature of the interaction between an optical field and an erbium-doped active medium, slow relaxation dynamics of erbium ions, linear birefringence in a fibre, linear and circular birefringence of a laser cavity caused by in-cavity polarization controller and light-induced anisotropy caused by elliptically polarized pump field. Interplay of aforementioned factors changes coherent coupling of two polarization modes at a long time scale and so results in a new family of vector solitons (VSs) with fast and slowly evolving states of polarization. The observed VSs can be of interest in secure communications, trapping and manipulation of atoms and nanoparticles, control of magnetization in data storage devices and many other areas. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhou, Si-Da; Ma, Yuan-Chen; Liu, Li; Kang, Jie; Ma, Zhi-Sai; Yu, Lei
2018-01-01
Identification of time-varying modal parameters contributes to the structural health monitoring, fault detection, vibration control, etc. of operational time-varying structural systems. However, it is a challenging task because no more information is available for the identification of time-varying systems than for time-invariant systems. This paper presents a vector time-dependent autoregressive model and least squares support vector machine based modal parameter estimator for linear time-varying structural systems in the case of output-only measurements. To reduce the computational cost, a Wendland's compactly supported radial basis function is used to achieve the sparsity of the Gram matrix. A Gamma-test-based non-parametric approach of selecting the regularization factor is adapted for the proposed estimator to replace the time-consuming n-fold cross validation. A series of numerical examples have illustrated the advantages of the proposed modal parameter estimator with respect to the suppression of overestimation and the handling of short data records. A laboratory experiment has further validated the proposed estimator.
Helicons in uniform fields. I. Wave diagnostics with hodograms
NASA Astrophysics Data System (ADS)
Urrutia, J. M.; Stenzel, R. L.
2018-03-01
The wave equation for whistler waves is well known and has been solved in Cartesian and cylindrical coordinates, yielding plane waves and cylindrical waves. In space plasmas, waves are usually assumed to be plane waves; in small laboratory plasmas, they are often assumed to be cylindrical "helicon" eigenmodes. Experimental observations fall in between both models. Real waves are usually bounded and may rotate like helicons. Such helicons are studied experimentally in a large laboratory plasma which is essentially a uniform, unbounded plasma. The waves are excited by loop antennas whose properties determine the field rotation and transverse dimensions. Both m = 0 and m = 1 helicon modes are produced and analyzed by measuring the wave magnetic field in three dimensional space and time. From Ampère's law and Ohm's law, the current density and electric field vectors are obtained. Hodograms for these vectors are produced. The sign ambiguity of the hodogram normal with respect to the direction of wave propagation is demonstrated. In general, electric and magnetic hodograms differ but both together yield the wave vector direction unambiguously. Vector fields of the hodogram normal yield the phase flow including phase rotation for helicons. Some helicons can have locally a linear polarization which is identified by the hodogram ellipticity. Alternatively the amplitude oscillation in time yields a measure for the wave polarization. It is shown that wave interference produces linear polarization. These observations emphasize that single point hodogram measurements are inadequate to determine the wave topology unless assuming plane waves. Observations of linear polarization indicate wave packets but not plane waves. A simple qualitative diagnostics for the wave polarization is the measurement of the magnetic field magnitude in time. Circular polarization has a constant amplitude; linear polarization results in amplitude modulations.
Hashimoto, Ken; Zúniga, Concepción; Romero, Eduardo; Morales, Zoraida; Maguire, James H
2015-01-01
Central American countries face a major challenge in the control of Triatoma dimidiata, a widespread vector of Chagas disease that cannot be eliminated. The key to maintaining the risk of transmission of Trypanosoma cruzi at lowest levels is to sustain surveillance throughout endemic areas. Guatemala, El Salvador, and Honduras integrated community-based vector surveillance into local health systems. Community participation was effective in detection of the vector, but some health services had difficulty sustaining their response to reports of vectors from the population. To date, no research has investigated how best to maintain and reinforce health service responsiveness, especially in resource-limited settings. We reviewed surveillance and response records of 12 health centers in Guatemala, El Salvador, and Honduras from 2008 to 2012 and analyzed the data in relation to the volume of reports of vector infestation, local geography, demography, human resources, managerial approach, and results of interviews with health workers. Health service responsiveness was defined as the percentage of households that reported vector infestation for which the local health service provided indoor residual spraying of insecticide or educational advice. Eight potential determinants of responsiveness were evaluated by linear and mixed-effects multi-linear regression. Health service responsiveness (overall 77.4%) was significantly associated with quarterly monitoring by departmental health offices. Other potential determinants of responsiveness were not found to be significant, partly because of short- and long-term strategies, such as temporary adjustments in manpower and redistribution of tasks among local participants in the effort. Consistent monitoring within the local health system contributes to sustainability of health service responsiveness in community-based vector surveillance of Chagas disease. Even with limited resources, countries can improve health service responsiveness with thoughtful strategies and management practices in the local health systems.
Optimal Cloning of PCR Fragments by Homologous Recombination in Escherichia coli
Jacobus, Ana Paula; Gross, Jeferson
2015-01-01
PCR fragments and linear vectors containing overlapping ends are easily assembled into a propagative plasmid by homologous recombination in Escherichia coli. Although this gap-repair cloning approach is straightforward, its existence is virtually unknown to most molecular biologists. To popularize this method, we tested critical parameters influencing the efficiency of PCR fragments cloning into PCR-amplified vectors by homologous recombination in the widely used E. coli strain DH5α. We found that the number of positive colonies after transformation increases with the length of overlap between the PCR fragment and linear vector. For most practical purposes, a 20 bp identity already ensures high-cloning yields. With an insert to vector ratio of 2:1, higher colony forming numbers are obtained when the amount of vector is in the range of 100 to 250 ng. An undesirable cloning background of empty vectors can be minimized during vector PCR amplification by applying a reduced amount of plasmid template or by using primers in which the 5′ termini are separated by a large gap. DpnI digestion of the plasmid template after PCR is also effective to decrease the background of negative colonies. We tested these optimized cloning parameters during the assembly of five independent DNA constructs and obtained 94% positive clones out of 100 colonies probed. We further demonstrated the efficient and simultaneous cloning of two PCR fragments into a vector. These results support the idea that homologous recombination in E. coli might be one of the most effective methods for cloning one or two PCR fragments. For its simplicity and high efficiency, we believe that recombinational cloning in E. coli has a great potential to become a routine procedure in most molecular biology-oriented laboratories. PMID:25774528
Families of vector-like deformations of relativistic quantum phase spaces, twists and symmetries
NASA Astrophysics Data System (ADS)
Meljanac, Daniel; Meljanac, Stjepan; Pikutić, Danijel
2017-12-01
Families of vector-like deformed relativistic quantum phase spaces and corresponding realizations are analyzed. A method for a general construction of the star product is presented. The corresponding twist, expressed in terms of phase space coordinates, in the Hopf algebroid sense is presented. General linear realizations are considered and corresponding twists, in terms of momenta and Poincaré-Weyl generators or gl(n) generators are constructed and R-matrix is discussed. A classification of linear realizations leading to vector-like deformed phase spaces is given. There are three types of spaces: (i) commutative spaces, (ii) κ -Minkowski spaces and (iii) κ -Snyder spaces. The corresponding star products are (i) associative and commutative (but non-local), (ii) associative and non-commutative and (iii) non-associative and non-commutative, respectively. Twisted symmetry algebras are considered. Transposed twists and left-right dual algebras are presented. Finally, some physical applications are discussed.
Pseudo-Linear Attitude Determination of Spinning Spacecraft
NASA Technical Reports Server (NTRS)
Bar-Itzhack, Itzhack Y.; Harman, Richard R.
2004-01-01
This paper presents the overall mathematical model and results from pseudo-linear recursive estimators of attitude and rate for a spinning spacecraft. The measurements considered are vector measurements obtained by sun sensors, fixed-head star trackers, horizon sensors, and three-axis magnetometers. Two filters are proposed for estimating the attitude as well as the angular rate vector. One filter, called the q-Filter, yields the attitude estimate as a quaternion estimate, and the other filter, called the D-Filter, yields the estimated direction cosine matrix. Because the spacecraft is gyro-less, Euler's equation of angular motion of rigid bodies is used to enable the estimation of the angular velocity. A simpler Markov model is suggested as a replacement for Euler's equation in the case where the vector measurements are obtained at high rates relative to the spacecraft angular rate. The performance of the two filters is examined using simulated data.
Evaluation of linear induction motor characteristics : the Yamamura model
DOT National Transportation Integrated Search
1975-04-30
The Yamamura theory of the double-sided linear induction motor (LIM) excited by a constant current source is discussed in some detail. The report begins with a derivation of thrust and airgap power using the method of vector potentials and theorem of...
Recursive inversion of externally defined linear systems
NASA Technical Reports Server (NTRS)
Bach, Ralph E., Jr.; Baram, Yoram
1988-01-01
The approximate inversion of an internally unknown linear system, given by its impulse response sequence, by an inverse system having a finite impulse response, is considered. The recursive least squares procedure is shown to have an exact initialization, based on the triangular Toeplitz structure of the matrix involved. The proposed approach also suggests solutions to the problems of system identification and compensation.
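A minimal sketch, assuming an illustrative minimum-phase impulse response, of the underlying least-squares problem: fitting a finite-impulse-response inverse so that its convolution with the given impulse response approximates a unit impulse. The Toeplitz structure of the convolution matrix is what the exact recursive initialization exploits; here the system is simply solved in batch form.

```python
import numpy as np
from scipy.linalg import toeplitz

h = np.array([1.0, 0.6, 0.2])           # assumed impulse response of the unknown system
L = 16                                   # length of the FIR inverse
N = len(h) + L - 1                       # length of the full convolution h * g

# Toeplitz convolution matrix H such that H @ g = h * g
col = np.r_[h, np.zeros(N - len(h))]
row = np.r_[h[0], np.zeros(L - 1)]
H = toeplitz(col, row)

d = np.zeros(N)
d[0] = 1.0                               # desired overall response: a unit impulse
g, *_ = np.linalg.lstsq(H, d, rcond=None)

print(np.round(np.convolve(h, g)[:6], 3))   # close to [1, 0, 0, ...]
```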
Mathematical Methods for Optical Physics and Engineering
NASA Astrophysics Data System (ADS)
Gbur, Gregory J.
2011-01-01
1. Vector algebra; 2. Vector calculus; 3. Vector calculus in curvilinear coordinate systems; 4. Matrices and linear algebra; 5. Advanced matrix techniques and tensors; 6. Distributions; 7. Infinite series; 8. Fourier series; 9. Complex analysis; 10. Advanced complex analysis; 11. Fourier transforms; 12. Other integral transforms; 13. Discrete transforms; 14. Ordinary differential equations; 15. Partial differential equations; 16. Bessel functions; 17. Legendre functions and spherical harmonics; 18. Orthogonal functions; 19. Green's functions; 20. The calculus of variations; 21. Asymptotic techniques; Appendices; References; Index.
2012-05-10
this angle depends linearly on time, α = 2πft, where f is the frequency of the rotating magnetic field. We assume that the magnetization vector M is ... vector B (Figure 1). In order to derive an equation governing the nanorod rotation, it is convenient to count its revolutions with respect to the fixed ... vector directed perpendicularly to the plane of the nanorod rotation [27, 28]. Substituting the definition of angle φ(t) through the angles α(t) and θ(t) ...
Zhou, Dezhong; Cutlar, Lara; Gao, Yongsheng; Wang, Wei; O’Keeffe-Ahern, Jonathan; McMahon, Sean; Duarte, Blanca; Larcher, Fernando; Rodriguez, Brian J.; Greiser, Udo; Wang, Wenxin
2016-01-01
Nonviral gene therapy holds great promise but has not delivered treatments for clinical application to date. Lack of safe and efficient gene delivery vectors is the major hurdle. Among nonviral gene delivery vectors, poly(β-amino ester)s are one of the most versatile candidates because of their wide monomer availability, high polymer flexibility, and superior gene transfection performance both in vitro and in vivo. However, to date, all research has been focused on vectors with a linear structure. A well-accepted view is that dendritic or branched polymers have greater potential as gene delivery vectors because of their three-dimensional structure and multiple terminal groups. Nevertheless, to date, the synthesis of dendritic or branched polymers has been proven to be a well-known challenge. We report the design and synthesis of highly branched poly(β-amino ester)s (HPAEs) via a one-pot “A2 + B3 + C2”–type Michael addition approach and evaluate their potential as gene delivery vectors. We find that the branched structure can significantly enhance the transfection efficiency of poly(β-amino ester)s: Up to an 8521-fold enhancement in transfection efficiency was observed across 12 cell types ranging from cell lines, primary cells, to stem cells, over their corresponding linear poly(β-amino ester)s (LPAEs) and the commercial transfection reagents polyethyleneimine, SuperFect, and Lipofectamine 2000. Moreover, we further demonstrate that HPAEs can correct genetic defects in vivo using a recessive dystrophic epidermolysis bullosa graft mouse model. Our findings prove that the A2 + B3 + C2 approach is highly generalizable and flexible for the design and synthesis of HPAEs, which cannot be achieved by the conventional polymerization approach; HPAEs are more efficient vectors in gene transfection than the corresponding LPAEs. This provides valuable insight into the development and applications of nonviral gene delivery and demonstrates great prospect for their translation to a clinical environment. PMID:27386572
Current harmonics elimination control method for six-phase PM synchronous motor drives.
Yuan, Lei; Chen, Ming-liang; Shen, Jian-qing; Xiao, Fei
2015-11-01
To reduce the undesired 5th and 7th stator harmonic currents in the six-phase permanent magnet synchronous motor (PMSM), an improved vector control algorithm is proposed based on the vector space decomposition (VSD) transformation method, which controls the fundamental and harmonic subspaces separately. To improve the traditional VSD technique, a novel synchronous rotating coordinate transformation matrix is presented in this paper, so that a traditional PI controller in the d-q subspace alone achieves zero steady-state error; the controller parameter design method is given by employing the internal model principle. Moreover, a current PI controller in parallel with a resonant controller is employed in the x-y subspace to compensate the specific 5th and 7th harmonic components. In addition, a new six-phase SVPWM algorithm based on VSD transformation theory is also proposed. Simulation and experimental results verify the effectiveness of the current decoupling vector controller. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Autonomous Reconfigurable Control Allocation (ARCA) for Reusable Launch Vehicles
NASA Technical Reports Server (NTRS)
Hodel, A. S.; Callahan, Ronnie; Jackson, Scott (Technical Monitor)
2002-01-01
The role of control allocation (CA) in modern aerospace vehicles is to compute a command vector δ_c ∈ ℝ^(n_a) corresponding to commanded or desired body-frame torques (moments) τ_c = [L M N]^T for the vehicle, compensating for and/or responding to inaccuracies in off-line nominal control allocation calculations, actuator failures and/or degradations (reduced effectiveness), or actuator limitations (rate/position saturation). The command vector δ_c may govern the behavior of, e.g., aerosurfaces, reaction thrusters, engine gimbals and/or thrust vectoring. Typically, the individual moments generated in response to each of the n_a commands do not lie strictly along the roll, pitch, or yaw axes, and so a common practice is to group or gang actuators so that a one-to-one mapping from torque commands τ_c to actuator commands δ_c may be achieved in an off-line computed CA function.
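A hedged sketch of the basic allocation step described above: a commanded body torque is mapped to actuator commands through a hypothetical effectiveness matrix whose columns are the torques produced by unit deflection of each effector; a pseudo-inverse gives the minimum-norm allocation and position limits are enforced by clipping. Ganging and off-line tables are not modeled here.

```python
import numpy as np

# Hypothetical effectiveness matrix B: column j is the [L, M, N] torque produced
# by unit deflection of effector j (4 notional effectors).
B = np.array([[ 1.0, -1.0,  0.2,  0.0],    # roll
              [ 0.3,  0.3,  1.0,  0.0],    # pitch
              [ 0.1, -0.1,  0.0,  1.0]])   # yaw

tau_c = np.array([0.5, -0.2, 0.1])         # commanded body-frame torques [L, M, N]

delta_c = np.linalg.pinv(B) @ tau_c        # minimum-norm allocation
delta_c = np.clip(delta_c, -0.5, 0.5)      # assumed actuator position limits

print(delta_c, B @ delta_c)                # commands and the torque they achieve
```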
NASA Technical Reports Server (NTRS)
Johnson, P. R.; Bardusch, R. E.
1974-01-01
A hydraulic control loading system for aircraft simulation was analyzed to find the causes of undesirable low-frequency oscillations and loading effects in the output. The hypothesis of mechanical compliance in the control linkage was substantiated by comparing the behavior of a mathematical model of the system with previously obtained experimental data. A compensation scheme based on minimizing the integral of the squared difference between desired and actual output was shown to be effective in reducing the undesirable output effects. The structure of the proposed compensation was computed by use of a dynamic programming algorithm and a linear state-space model of the fixed elements in the system.
Motion compensation and noise tolerance in phase-shifting digital in-line holography.
Stenner, Michael D; Neifeld, Mark A
2006-05-15
We present a technique for phase-shifting digital in-line holography which compensates for lateral object motion. By collecting two frames of interference between object and reference fields with identical reference phase, one can estimate the lateral motion that occurred between frames using the cross-correlation. We also describe a very general linear framework for phase-shifting holographic reconstruction which minimizes additive white Gaussian noise (AWGN) for an arbitrary set of reference field amplitudes and phases. We analyze the technique's sensitivity to noise (AWGN, quantization, and shot), errors in the reference fields, errors in motion estimation, resolution, and depth of field. We also present experimental motion-compensated images achieving the expected resolution.
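A sketch of the lateral-motion estimate via cross-correlation, assuming the two recorded frames can be treated as real-valued arrays; the FFT-based correlation and the toy data below are illustrative stand-ins, not the paper's holographic reconstruction chain.

```python
import numpy as np

def estimate_shift(frame_a, frame_b):
    """Integer-pixel displacement of frame_b relative to frame_a via circular cross-correlation."""
    corr = np.fft.ifft2(np.conj(np.fft.fft2(frame_a)) * np.fft.fft2(frame_b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the frame size into negative values
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
a = rng.standard_normal((64, 64))
b = np.roll(a, shift=(3, -5), axis=(0, 1))   # second frame moved by (3, -5) pixels
print(estimate_shift(a, b))                  # -> (3, -5)
```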
Temperature compensated liquid level sensor using FBGs and a Bourdon tube
NASA Astrophysics Data System (ADS)
Sengupta, D.; Shankar, M. Sai; Rao, P. Vengal; Reddy, P. Saidi; Sai Prasad, R. L. N.; Kishore, P.; Srimannarayana, K.
2011-12-01
A temperature-compensated liquid level sensor using FBGs and a Bourdon tube that works on hydrostatic pressure is presented. An FBG (FBG1) is fixed between the free end and a fixed end of the Bourdon tube. When hydrostatic pressure is applied to the Bourdon tube, FBG1 experiences an axial strain due to the movement of the free end. Experimental results show a good linearity of the Bragg wavelength shift with the applied pressure. The performance of this arrangement is tested up to a 21-metre water column pressure. Another FBG (FBG2) is included for temperature compensation. The design of the sensor head is simple and easily mountable external to any tank for liquid level measurements.
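A minimal sketch of the temperature-compensation arithmetic with assumed, illustrative sensitivities: FBG1 on the Bourdon tube responds to both hydrostatic pressure and temperature, FBG2 to temperature only, so subtracting the two Bragg-wavelength shifts isolates the pressure (level) term.

```python
K_P = 2.0e-3   # assumed pressure sensitivity of FBG1, nm per metre of water column
K_T = 1.0e-2   # assumed temperature sensitivity of both FBGs, nm per degC

def liquid_level(d_lambda1_nm, d_lambda2_nm):
    """Level in metres from the Bragg-wavelength shifts of FBG1 (strained) and FBG2 (reference)."""
    return (d_lambda1_nm - d_lambda2_nm) / K_P

# 21 m of water combined with a 5 degC temperature rise:
d1 = K_P * 21.0 + K_T * 5.0   # FBG1 sees pressure and temperature
d2 = K_T * 5.0                # FBG2 sees temperature only
print(liquid_level(d1, d2))   # -> 21.0, the temperature effect cancels
```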
Spiking Neural P Systems With Rules on Synapses Working in Maximum Spiking Strategy.
Tao Song; Linqiang Pan
2015-06-01
Spiking neural P systems (called SN P systems for short) are a class of parallel and distributed neural-like computation models inspired by the way neurons process information and communicate with each other by means of impulses or spikes. In this work, we introduce a new variant of SN P systems, called SN P systems with rules on synapses working in maximum spiking strategy, and investigate the computation power of the systems as both number and vector generators. Specifically, we prove that i) if no limit is imposed on the number of spikes in any neuron during any computation, such systems can generate the sets of Turing-computable natural numbers and the sets of vectors of positive integers computed by k-output register machines; ii) if an upper bound is imposed on the number of spikes in each neuron during any computation, such systems can characterize semi-linear sets of natural numbers as number generating devices; as vector generating devices, such systems can only characterize the family of sets of vectors computed by sequential monotonic counter machines, which is strictly included in the family of semi-linear sets of vectors. This gives a positive answer to the problem formulated in Song et al., Theor. Comput. Sci., vol. 529, pp. 82-95, 2014.
On differential operators generating iterative systems of linear ODEs of maximal symmetry algebra
NASA Astrophysics Data System (ADS)
Ndogmo, J. C.
2017-06-01
Although every iterative scalar linear ordinary differential equation is of maximal symmetry algebra, the situation is different and far more complex for systems of linear ordinary differential equations, and an iterative system of linear equations need not be of maximal symmetry algebra. We illustrate these facts by examples and derive families of vector differential operators whose iterations are all linear systems of equations of maximal symmetry algebra. Some consequences of these results are also discussed.
NASA Astrophysics Data System (ADS)
Tiwari, Vivek; Peters, William K.; Jonas, David M.
2017-10-01
Non-adiabatic vibrational-electronic resonance in the excited electronic states of natural photosynthetic antennas drastically alters the adiabatic framework, in which electronic energy transfer has been conventionally studied, and suggests the possibility of exploiting non-adiabatic dynamics for directed energy transfer. Here, a generalized dimer model incorporates asymmetries between pigments, coupling to the environment, and the doubly excited state relevant for nonlinear spectroscopy. For this generalized dimer model, the vibrational tuning vector that drives energy transfer is derived and connected to decoherence between singly excited states. A correlation vector is connected to decoherence between the ground state and the doubly excited state. Optical decoherence between the ground and singly excited states involves linear combinations of the correlation and tuning vectors. Excitonic coupling modifies the tuning vector. The correlation and tuning vectors are not always orthogonal, and both can be asymmetric under pigment exchange, which affects energy transfer. For equal pigment vibrational frequencies, the nonadiabatic tuning vector becomes an anti-correlated delocalized linear combination of intramolecular vibrations of the two pigments, and the nonadiabatic energy transfer dynamics become separable. With exchange symmetry, the correlation and tuning vectors become delocalized intramolecular vibrations that are symmetric and antisymmetric under pigment exchange. Diabatic criteria for vibrational-excitonic resonance demonstrate that anti-correlated vibrations increase the range and speed of vibronically resonant energy transfer (the Golden Rule rate is a factor of 2 faster). A partial trace analysis shows that vibronic decoherence for a vibrational-excitonic resonance between two excitons is slower than their purely excitonic decoherence.
2013-05-01
... 95.2 dBc/Hz, (c) -94.2 dBc/Hz. [Fig. 4 caption: Mechanically compensated AlN resonators. A thin oxide layer is used to completely cancel the linear ...] ... pumped is represented by a non-linear capacitor. This capacitor will first be implemented via a varactor and then substituted by a purely mechanical ... demonstrate the advantages of a parametric oscillator: (i) we will first use an external electronic varactor to prove that a parametric oscillator ...
High quality 4D cone-beam CT reconstruction using motion-compensated total variation regularization
NASA Astrophysics Data System (ADS)
Zhang, Hua; Ma, Jianhua; Bian, Zhaoying; Zeng, Dong; Feng, Qianjin; Chen, Wufan
2017-04-01
Four-dimensional cone-beam computed tomography (4D-CBCT) has great potential clinical value because of its ability to describe tumor and organ motion. But the challenge in 4D-CBCT reconstruction is the limited number of projections at each phase, which results in reconstructions full of noise and streak artifacts with the conventional analytical algorithms. To address this problem, in this paper, we propose a motion-compensated total variation regularization approach which fully exploits the temporal coherence of the spatial structures among the 4D-CBCT phases. In this work, we additionally conduct motion estimation/motion compensation (ME/MC) on the 4D-CBCT volume by using inter-phase deformation vector fields (DVFs). The motion-compensated 4D-CBCT volume is then viewed as a pseudo-static sequence, on which the regularization function is imposed. The regularization used in this work is 3D spatial total variation minimization combined with 1D temporal total variation minimization. We subsequently construct a cost function for a reconstruction pass, and minimize this cost function using a variable splitting algorithm. Simulation and real patient data were used to evaluate the proposed algorithm. Results show that the introduction of additional temporal correlation along the phase direction can improve the 4D-CBCT image quality.
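A sketch of the combined regularizer under stated assumptions: anisotropic 3D spatial total variation within each motion-compensated phase plus 1D total variation along the phase axis, evaluated on a hypothetical 4D array; the data-fidelity term and the variable-splitting solver are not reproduced.

```python
import numpy as np

def spatial_temporal_tv(vol4d, lam_s=1.0, lam_t=0.5):
    """Anisotropic 3D spatial TV per phase plus 1D TV along the phase axis.

    vol4d has shape (phases, z, y, x); lam_s and lam_t weight the two terms.
    """
    dz = np.abs(np.diff(vol4d, axis=1)).sum()
    dy = np.abs(np.diff(vol4d, axis=2)).sum()
    dx = np.abs(np.diff(vol4d, axis=3)).sum()
    dt = np.abs(np.diff(vol4d, axis=0)).sum()   # coherence across motion-compensated phases
    return lam_s * (dz + dy + dx) + lam_t * dt

vol = np.random.default_rng(1).random((10, 8, 16, 16))   # 10 phases, toy volume
print(spatial_temporal_tv(vol))
```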
Electromagnetic Monitoring and Control of a Plurality of Nanosatellites
NASA Technical Reports Server (NTRS)
Soloway, Donald I. (Inventor)
2017-01-01
A method for monitoring position of and controlling a second nanosatellite (NS) relative to a position of a first NS. Each of the first and second NSs has a rectangular or cubical configuration of independently activatable, current-carrying solenoids, each solenoid having an independent magnetic dipole moment vector, μ1 and μ2. A vector force F and a vector torque are expressed as linear or bilinear combinations of the first set and second set of magnetic moments, and a distance vector extending between the first and second NSs is estimated. Control equations are applied to estimate the vectors μ1 and μ2 required to move the NSs toward a desired NS configuration. This extends to control of N nanosatellites.
Detection of ferromagnetic target based on mobile magnetic gradient tensor system
NASA Astrophysics Data System (ADS)
Gang, Y. I. N.; Yingtang, Zhang; Zhining, Li; Hongbo, Fan; Guoquan, Ren
2016-03-01
Attitude change of a mobile magnetic gradient tensor system critically affects the precision of gradient measurements, thereby increasing ambiguity in target detection. This paper presents a rotational-invariant-based method for locating and identifying ferromagnetic targets. First, the unit magnetic moment vector was derived from the geometrical invariant that the intermediate eigenvector of the magnetic gradient tensor is perpendicular to both the magnetic moment vector and the source-sensor displacement vector. Second, the unit source-sensor displacement vector was derived from the property that the angle between the magnetic moment vector and the source-sensor displacement vector is a rotational invariant. By introducing a displacement vector between two measurement points, the magnetic moment vector and the source-sensor displacement vector were theoretically derived. To handle the measurement noise present in realistic detection applications, linear equations were formulated using invariants corresponding to several distinct measurement points, and least-squares solutions of the magnetic moment vector and the source-sensor displacement vector were obtained. Results of simulation and a principle verification experiment showed the correctness of the analytical method, along with the practicability of the least-squares method.
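A generic sketch of the final noise-tolerant estimation step: once the invariants at several distinct measurement points are arranged into an overdetermined linear system, the unknowns are recovered by least squares. The matrix and right-hand side below are random stand-ins, not the actual tensor-invariant equations.

```python
import numpy as np

rng = np.random.default_rng(2)
p_true = np.array([1.0, -0.5, 2.0, 10.0, -3.0, 4.0])   # stacked moment/displacement components (hypothetical)
A = rng.standard_normal((30, 6))                        # rows: linear equations from distinct measurement points
y = A @ p_true + 0.01 * rng.standard_normal(30)         # noisy measurements

p_hat, *_ = np.linalg.lstsq(A, y, rcond=None)           # noise-tolerant least-squares estimate
print(np.round(p_hat, 3))
```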
NASA Technical Reports Server (NTRS)
Sankaran, V.
1974-01-01
An iterative procedure for determining the constant gain matrix that will stabilize a linear constant multivariable system using output feedback is described. The use of this procedure avoids the transformation of variables which is required in other procedures. For the case in which the product of the output and input vector dimensions is greater than the number of states of the plant, a general solution is given. For the case in which the number of states exceeds the product of the input and output vector dimensions, a least-squares solution, which may not be stable in all cases, is presented. The results are illustrated with examples.
DOA Finding with Support Vector Regression Based Forward-Backward Linear Prediction.
Pan, Jingjing; Wang, Yide; Le Bastard, Cédric; Wang, Tianzhen
2017-05-27
Direction-of-arrival (DOA) estimation has drawn considerable attention in array signal processing, particularly with coherent signals and a limited number of snapshots. Forward-backward linear prediction (FBLP) is able to directly deal with coherent signals. Support vector regression (SVR) is robust with small samples. This paper proposes the combination of the advantages of FBLP and SVR in the estimation of DOAs of coherent incoming signals with low snapshots. The performance of the proposed method is validated with numerical simulations in coherent scenarios, in terms of different angle separations, numbers of snapshots, and signal-to-noise ratios (SNRs). Simulation results show the effectiveness of the proposed method.
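A plain FBLP sketch for a single noise-free snapshot of a half-wavelength-spaced uniform linear array, with illustrative array size, order, and angle; the paper's contribution, replacing the least-squares solve with support vector regression, is not reproduced here.

```python
import numpy as np

M, L = 10, 4                          # sensors, prediction order (illustrative)
theta_true = np.deg2rad(20.0)
m = np.arange(M)
x = np.exp(1j * np.pi * m * np.sin(theta_true))   # noise-free single-source snapshot

# forward rows: x[n] predicted from the L previous samples
Af = np.array([x[n - L:n][::-1] for n in range(L, M)])
bf = x[L:M]
# backward rows: conjugated samples predicted from the L following samples
Ab = np.array([np.conj(x[n + 1:n + L + 1]) for n in range(0, M - L)])
bb = np.conj(x[:M - L])

A = np.vstack([Af, Ab])
b = np.concatenate([bf, bb])
a, *_ = np.linalg.lstsq(A, b, rcond=None)   # FBLP coefficients (the paper uses SVR here instead)

roots = np.roots(np.r_[1.0, -a])            # zeros of 1 - a1 z^-1 - ... - aL z^-L
doa = np.rad2deg(np.arcsin(np.angle(roots) / np.pi))
print(np.round(doa, 2))                     # one root sits near 20 degrees
```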
Applications of Support Vector Machines In Chemo And Bioinformatics
NASA Astrophysics Data System (ADS)
Jayaraman, V. K.; Sundararajan, V.
2010-10-01
Conventional linear and nonlinear tools for classification, regression and data-driven modeling are being replaced on a rapid scale by newer techniques and tools based on artificial intelligence and machine learning. While the linear techniques are not applicable to inherently nonlinear problems, the newer methods serve as attractive alternatives for solving real-life problems. Support Vector Machine (SVM) classifiers are a set of universal feed-forward network based classification algorithms that have been formulated from statistical learning theory and the structural risk minimization principle. SVM regression closely follows the classification methodology. In this work, recent applications of SVM in chemo- and bioinformatics are described with suitable illustrative examples.
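A minimal illustration of the SVM classification workflow on a toy two-class problem, using scikit-learn as an assumed, widely available implementation; real chemo/bioinformatics descriptors would replace the synthetic feature vectors.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 1.0, (50, 5)),    # class 0 feature vectors (synthetic)
               rng.normal(1.5, 1.0, (50, 5))])   # class 1 feature vectors (synthetic)
y = np.r_[np.zeros(50), np.ones(50)]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)   # nonlinear RBF-kernel SVM
print("test accuracy:", clf.score(X_te, y_te))
```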
Linear time relational prototype based learning.
Gisbrecht, Andrej; Mokbel, Bassam; Schleif, Frank-Michael; Zhu, Xibin; Hammer, Barbara
2012-10-01
Prototype based learning offers an intuitive interface to inspect large quantities of electronic data in supervised or unsupervised settings. Recently, many techniques have been extended to data described by general dissimilarities rather than Euclidean vectors, so-called relational data settings. Unlike their Euclidean counterparts, these techniques have quadratic time complexity due to the underlying quadratic dissimilarity matrix. Thus, they are already infeasible for medium-sized data sets. The contribution of this article is twofold: on the one hand we propose a novel supervised prototype based classification technique for dissimilarity data based on popular learning vector quantization (LVQ); on the other hand we transfer a linear-time approximation technique, the Nyström approximation, to this algorithm and to an unsupervised counterpart, the relational generative topographic mapping (GTM). This way, linear time and space methods result. We evaluate the techniques on three examples from the biomedical domain.
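A sketch of the Nyström idea under simplifying assumptions: an N x N kernel matrix is approximated from m << N landmark columns, which is what yields linear rather than quadratic cost in N. A Gaussian kernel on synthetic vectors stands in for the relational dissimilarity data, and the full matrix is materialized here only to check the error, which a linear-time method would of course avoid.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((1000, 3))                # stand-in data points
m = 50                                            # number of landmark points, m << N
idx = rng.choice(len(X), size=m, replace=False)

def gauss_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

C = gauss_kernel(X, X[idx])                       # N x m: similarities to the landmarks only
W = C[idx]                                        # m x m block among the landmarks
K_approx = C @ np.linalg.pinv(W) @ C.T            # Nystroem approximation (formed here only for checking)

K_exact = gauss_kernel(X[:100], X[:100])          # exact kernel on a small sub-block
print(np.abs(K_approx[:100, :100] - K_exact).max())
```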
Reduced state feedback gain computation. [optimization and control theory for aircraft control
NASA Technical Reports Server (NTRS)
Kaufman, H.
1976-01-01
Because application of conventional optimal linear regulator theory to flight controller design requires the capability of measuring and/or estimating the entire state vector, it is of interest to consider procedures for computing controls which are restricted to be linear feedback functions of a lower-dimensional output vector and which take into account the presence of measurement noise and process uncertainty. Therefore, a stochastic linear model is presented which accounts for aircraft parameter and initial uncertainty, measurement noise, turbulence, pilot command, and a restricted number of measurable outputs. Optimization with respect to the corresponding output feedback gains was performed for both finite and infinite time performance indices without gradient computation by using Zangwill's modification of a procedure originally proposed by Powell. Results using a seventh-order process show the proposed procedures to be very effective.
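A gradient-free sketch of the same idea on a stand-in second-order plant: a restricted output-feedback gain is found by minimizing a simulated finite-time quadratic cost with SciPy's Powell method (used here in place of Zangwill's modification), so no gradient computation is required.

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in second-order plant: u = -K y with y = C x, only the first state measured.
A = np.array([[0.0, 1.0], [-1.0, -0.2]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
x0 = np.array([1.0, 0.0])
dt, steps = 0.05, 400

def cost(k_flat):
    """Finite-time quadratic cost of the closed loop, evaluated by explicit Euler simulation."""
    K = k_flat.reshape(B.shape[1], C.shape[0])
    x, J = x0.copy(), 0.0
    for _ in range(steps):
        u = -K @ (C @ x)
        J += dt * (x @ x + 0.1 * float(u @ u))
        x = x + dt * (A @ x + B @ u)
    return J

res = minimize(cost, x0=np.zeros(1), method="Powell")   # derivative-free search over the gain
print("output-feedback gain:", res.x, "cost:", res.fun)
```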
Vector Sum Excited Linear Prediction (VSELP) speech coding at 4.8 kbps
NASA Technical Reports Server (NTRS)
Gerson, Ira A.; Jasiuk, Mark A.
1990-01-01
Code Excited Linear Prediction (CELP) speech coders exhibit good performance at data rates as low as 4800 bps. The major drawback of CELP-type coders is their large computational requirements. The Vector Sum Excited Linear Prediction (VSELP) speech coder utilizes a codebook with a structure which allows for a very efficient search procedure. Other advantages of the VSELP codebook structure are discussed, and a detailed description of a 4.8 kbps VSELP coder is given. This coder is an improved version of the VSELP algorithm which finished first in the NSA's evaluation of 4.8 kbps speech coders. The coder uses a subsample-resolution single-tap long-term predictor, a single VSELP excitation codebook, a novel gain quantizer which is robust to channel errors, and a new adaptive pre/postfilter arrangement.
Vectorial mask optimization methods for robust optical lithography
NASA Astrophysics Data System (ADS)
Ma, Xu; Li, Yanqiu; Guo, Xuejia; Dong, Lisong; Arce, Gonzalo R.
2012-10-01
Continuous shrinkage of critical dimension in an integrated circuit impels the development of resolution enhancement techniques for low k1 lithography. Recently, several pixelated optical proximity correction (OPC) and phase-shifting mask (PSM) approaches were developed under scalar imaging models to account for the process variations. However, the lithography systems with larger-NA (NA>0.6) are predominant for current technology nodes, rendering the scalar models inadequate to describe the vector nature of the electromagnetic field that propagates through the optical lithography system. In addition, OPC and PSM algorithms based on scalar models can compensate for wavefront aberrations, but are incapable of mitigating polarization aberrations in practical lithography systems, which can only be dealt with under the vector model. To this end, we focus on developing robust pixelated gradient-based OPC and PSM optimization algorithms aimed at canceling defocus, dose variation, wavefront and polarization aberrations under a vector model. First, an integrative and analytic vector imaging model is applied to formulate the optimization problem, where the effects of process variations are explicitly incorporated in the optimization framework. A steepest descent algorithm is then used to iteratively optimize the mask patterns. Simulations show that the proposed algorithms can effectively improve the process windows of the optical lithography systems.
Velez, Mariel M.; Wernet, Mathias F.; Clark, Damon A.
2014-01-01
Understanding the mechanisms that link sensory stimuli to animal behavior is a central challenge in neuroscience. The quantitative description of behavioral responses to defined stimuli has led to a rich understanding of different behavioral strategies in many species. One important navigational cue perceived by many vertebrates and insects is the e-vector orientation of linearly polarized light. Drosophila manifests an innate orientation response to this cue (‘polarotaxis’), aligning its body axis with the e-vector field. We have established a population-based behavioral paradigm for the genetic dissection of neural circuits guiding polarotaxis to both celestial as well as reflected polarized stimuli. However, the behavioral mechanisms by which flies align with a linearly polarized stimulus remain unknown. Here, we present a detailed quantitative description of Drosophila polarotaxis, systematically measuring behavioral parameters that are modulated by the stimulus. We show that angular acceleration is modulated during alignment, and this single parameter may be sufficient for alignment. Furthermore, using monocular deprivation, we show that each eye is necessary for modulating turns in the ipsilateral direction. This analysis lays the foundation for understanding how neural circuits guide these important visual behaviors. PMID:24810784
Speech coding at low to medium bit rates
NASA Astrophysics Data System (ADS)
Leblanc, Wilfred Paul
1992-09-01
Improved search techniques coupled with improved codebook design methodologies are proposed to improve the performance of conventional code-excited linear predictive coders for speech. Improved methods for quantizing the short term filter are developed by employing a tree search algorithm and joint codebook design to multistage vector quantization. Joint codebook design procedures are developed to design locally optimal multistage codebooks. Weighting during centroid computation is introduced to improve the outlier performance of the multistage vector quantizer. Multistage vector quantization is shown to be both robust against input characteristics and in the presence of channel errors. Spectral distortions of about 1 dB are obtained at rates of 22-28 bits/frame. Structured codebook design procedures for excitation in code-excited linear predictive coders are compared to general codebook design procedures. Little is lost using significant structure in the excitation codebooks while greatly reducing the search complexity. Sparse multistage configurations are proposed for reducing computational complexity and memory size. Improved search procedures are applied to code-excited linear prediction which attempt joint optimization of the short term filter, the adaptive codebook, and the excitation. Improvements in signal to noise ratio of 1-2 dB are realized in practice.
NASA Astrophysics Data System (ADS)
Hosseini-Golgoo, S. M.; Bozorgi, H.; Saberkari, A.
2015-06-01
Performances of three neural networks, consisting of a multi-layer perceptron, a radial basis function, and a neuro-fuzzy network with local linear model tree training algorithm, in modeling and extracting discriminative features from the response patterns of a temperature-modulated resistive gas sensor are quantitatively compared. For response pattern recording, a voltage staircase containing five steps each with a 20 s plateau is applied to the micro-heater of the sensor, when 12 different target gases, each at 11 concentration levels, are present. In each test, the hidden layer neuron weights are taken as the discriminatory feature vector of the target gas. These vectors are then mapped to a 3D feature space using linear discriminant analysis. The discriminative information content of the feature vectors are determined by the calculation of the Fisher’s discriminant ratio, affording quantitative comparison among the success rates achieved by the different neural network structures. The results demonstrate a superior discrimination ratio for features extracted from local linear neuro-fuzzy and radial-basis-function networks with recognition rates of 96.27% and 90.74%, respectively.
A high-accuracy optical linear algebra processor for finite element applications
NASA Technical Reports Server (NTRS)
Casasent, D.; Taylor, B. K.
1984-01-01
Optical linear processors are computationally efficient computers for solving matrix-matrix and matrix-vector oriented problems. Optical system errors limit their dynamic range to 30-40 dB, which limits their accuracy to 9-12 bits. Large problems, such as the finite element problem in structural mechanics (with tens or hundreds of thousands of variables), which can exploit the speed of optical processors, require the 32-bit accuracy obtainable from digital machines. To obtain this required 32-bit accuracy with an optical processor, the data can be digitally encoded, thereby reducing the dynamic range requirements of the optical system (i.e., decreasing the effect of optical errors on the data) while providing increased accuracy. This report describes a new digitally encoded optical linear algebra processor architecture for solving finite element and banded matrix-vector problems. A linear static plate bending case study is described which quantifies the processor requirements. Multiplication by digital convolution is explained, and the digitally encoded optical processor architecture is advanced.
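A small sketch of multiplication by digital convolution, the encoding idea mentioned above: numbers written as digit vectors are multiplied by convolving the digit vectors and then propagating carries; on such a processor the convolution would be performed by the analog hardware, with carry resolution done digitally afterwards. The base and test values are illustrative.

```python
import numpy as np

def digits(n, base=10):
    """Digit vector of n, least-significant digit first."""
    out = []
    while n:
        n, r = divmod(n, base)
        out.append(r)
    return out or [0]

def conv_multiply(a, b, base=10):
    # Digit-wise products via convolution (the part an analog processor would compute) ...
    raw = np.convolve(digits(a, base), digits(b, base)).tolist()
    # ... followed by digital carry propagation to recover the product.
    out, carry = [], 0
    for d in raw:
        carry, digit = divmod(int(d) + carry, base)
        out.append(digit)
    while carry:
        carry, digit = divmod(carry, base)
        out.append(digit)
    return sum(d * base ** i for i, d in enumerate(out))

print(conv_multiply(1234, 5678), 1234 * 5678)   # both print 7006652
```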
NASA Technical Reports Server (NTRS)
Armstrong, Jeffrey B.; Simon, Donald L.
2012-01-01
Self-tuning aircraft engine models can be applied for control and health management applications. The self-tuning feature of these models minimizes the mismatch between any given engine and the underlying engineering model describing an engine family. This paper provides details of the construction of a self-tuning engine model centered on a piecewise linear Kalman filter design. Starting from a nonlinear transient aerothermal model, a piecewise linear representation is first extracted. The linearization procedure creates a database of trim vectors and state-space matrices that are subsequently scheduled for interpolation based on engine operating point. A series of steady-state Kalman gains can next be constructed from a reduced-order form of the piecewise linear model. Reduction of the piecewise linear model to an observable dimension with respect to available sensed engine measurements can be achieved using either a subset or an optimal linear combination of "health" parameters, which describe engine performance. The resulting piecewise linear Kalman filter is then implemented for faster-than-real-time processing of sensed engine measurements, generating outputs appropriate for trending engine performance, estimating both measured and unmeasured parameters for control purposes, and performing on-board gas-path fault diagnostics. Computational efficiency is achieved by designing multidimensional interpolation algorithms that exploit the shared scheduling of multiple trim vectors and system matrices. An example application illustrates the accuracy of a self-tuning piecewise linear Kalman filter model when applied to a nonlinear turbofan engine simulation. Additional discussions focus on the issue of transient response accuracy and the advantages of a piecewise linear Kalman filter in the context of validation and verification. The techniques described provide a framework for constructing efficient self-tuning aircraft engine models from complex nonlinear simulations.
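A minimal sketch of the scheduling-and-interpolation idea (interpolate trim vectors, system matrices, and steady-state Kalman gains over an operating-point parameter, then apply the gain to sensed measurements) is given below. The single scheduling parameter, the model dimensions, and all numerical values are invented for illustration; real engine models are far larger and scheduled over several variables.

```python
import numpy as np

def interp_entry(table, points, q):
    """Linearly interpolate a scheduled quantity (trim vector, matrix, or gain) at operating point q."""
    w = np.interp(q, points, np.arange(len(points)))
    i = int(np.floor(w)); f = w - i; j = min(i + 1, len(points) - 1)
    return (1 - f) * table[i] + f * table[j]

# Hypothetical scheduling grid (e.g. normalized corrected fan speed) and per-point models/gains.
points = np.array([0.0, 0.5, 1.0])
A_tab = [np.array([[0.95, 0.01], [0.0, 0.90]]) + 0.02 * k for k in range(3)]
C_tab = [np.array([[1.0, 0.5]]) for _ in range(3)]
K_tab = [np.array([[0.4], [0.2]]) * (1 + 0.1 * k) for k in range(3)]
x_trim = [np.array([0.0, 0.0]) + 0.1 * k for k in range(3)]
y_trim = [np.array([0.0]) + 0.05 * k for k in range(3)]

def step(x_hat, y_meas, q):
    """One steady-state Kalman update of the deviation state at operating point q."""
    A, C, K = (interp_entry(t, points, q) for t in (A_tab, C_tab, K_tab))
    xt, yt = interp_entry(x_trim, points, q), interp_entry(y_trim, points, q)
    innov = (y_meas - yt) - C @ (x_hat - xt)          # measurement residual
    return A @ (x_hat - xt) + K @ innov + xt          # propagate and correct, back in absolute terms

x_hat = np.zeros(2)
for y in (np.array([0.12]), np.array([0.10]), np.array([0.08])):
    x_hat = step(x_hat, y, q=0.7)
print(x_hat)
```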
Segmentation of discrete vector fields.
Li, Hongyu; Chen, Wenbin; Shen, I-Fan
2006-01-01
In this paper, we propose an approach for 2D discrete vector field segmentation based on the Green function and normalized cut. The method is inspired by the discrete Hodge decomposition, whereby a discrete vector field can be broken down into three simpler components: curl-free, divergence-free, and harmonic. We show that the Green Function Method (GFM) can be used to approximate the curl-free and divergence-free components to achieve vector field segmentation. The final segmentation curves, which represent the boundaries of the influence regions of singularities, are obtained from the optimal vector field segmentations. These curves are composed of piecewise smooth contours or streamlines. Our method is applicable to both linear and nonlinear discrete vector fields. Experiments show that the segmentations obtained using our approach essentially agree with human perceptual judgement.
System design of the annular suspension and pointing system /ASPS/
NASA Technical Reports Server (NTRS)
Cunningham, D. C.; Gismondi, T. P.; Wilson, G. W.
1978-01-01
This paper presents the control system design for the Annular Suspension and Pointing System. Actuator sizing and configuration of the system are explained, and the control laws developed for linearizing and compensating the magnetic bearings, roll induction motor and gimbal torquers are given. Decoupling, feedforward, and error compensation for the vernier and gimbal controllers are developed. The algorithm for computing the strapdown attitude reference is derived, and the allowable sampling rates, time delays and quantization of control signals are specified.
Power and spectrally efficient M-ARY QAM schemes for future mobile satellite communications
NASA Technical Reports Server (NTRS)
Sreenath, K.; Feher, K.
1990-01-01
An effective method to compensate the nonlinear phase distortion caused by the mobile amplifier is proposed. As a first step towards the future use of spectrally efficient modulation schemes for mobile satellite applications, we have investigated the effects of nonlinearities and of the phase compensation method on 16-QAM. The new method provides about 2 dB savings in power for 16-QAM operation with cost-effective amplifiers near saturation, thereby making spectrally efficient linear modulation schemes promising for future mobile satellite applications.
Error compensation for thermally induced errors on a machine tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krulewich, D.A.
1996-11-08
Heat flow from internal and external sources and the environment creates machine deformations, resulting in positioning errors between the tool and workpiece. There is no industrially accepted method for thermal error compensation. A simple model has been selected that linearly relates discrete temperature measurements to the deflection. The main difficulty is determining how many temperature sensors are required and where to locate them. This research develops a method to determine the number and location of temperature measurements.
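The linear temperature-to-deflection model itself amounts to an ordinary least-squares fit of the measured error on the temperature readings. The sketch below uses synthetic sensor data and invented coefficients; it illustrates only the model form, not the sensor-selection method developed in this work.

```python
import numpy as np

# Hypothetical data: several discrete temperature sensors, n observations of tool-point deflection.
rng = np.random.default_rng(2)
n_obs, n_sensors = 200, 6
T = 20.0 + 0.05 * rng.standard_normal((n_obs, n_sensors)).cumsum(axis=0)   # slow thermal drift, degC
true_coeffs = np.array([1.5, -0.8, 0.3, 0.0, 0.0, 0.6])                    # um per degC (assumed)
deflection = (T - 20.0) @ true_coeffs + 0.5 * rng.standard_normal(n_obs)   # measured error, um

# Fit the linear model (with an offset term) by least squares and check the residual.
A = np.column_stack([T - 20.0, np.ones(n_obs)])
coeffs, *_ = np.linalg.lstsq(A, deflection, rcond=None)
residual = deflection - A @ coeffs
print("RMS positioning error left after compensation [um]:", np.sqrt(np.mean(residual ** 2)))
```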
Optical/Infrared Signatures for Space-Based Remote Sensing
2007-11-01
…Vanderbilt et al., 1985a, 1985b. First, linear polarization was introduced, followed by progress toward a full vector theory of polarization. … radiance profiles taken 30 s apart in a view direction orthogonal to the velocity vector, showing considerable structure due to radiance layers. … Figure 3: The northern polar region and locations of the MSX.
Parallel-vector solution of large-scale structural analysis problems on supercomputers
NASA Technical Reports Server (NTRS)
Storaasli, Olaf O.; Nguyen, Duc T.; Agarwal, Tarun K.
1989-01-01
A direct linear equation solution method based on the Choleski factorization procedure is presented which exploits both parallel and vector features of supercomputers. The new equation solver is described, and its performance is evaluated by solving structural analysis problems on three high-performance computers. The method has been implemented using Force, a generic parallel FORTRAN language.
ERIC Educational Resources Information Center
Farag, Mark
2007-01-01
Hill ciphers are linear codes that use as input a "plaintext" vector p→ of size n, which is encrypted with an invertible n × n matrix E to produce a "ciphertext" vector c→ = E · p→. Informally, a near-field is a triple ⟨N; +, *⟩ that…
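A minimal working example of Hill encryption (block size 2, arithmetic mod 26) is sketched below; the key matrix and message are illustrative choices, not taken from the article.

```python
import numpy as np

def hill_encrypt(plaintext, E, m=26):
    """Encrypt fixed-size letter blocks with an invertible key matrix E, working mod m."""
    p = np.array([ord(c) - ord('A') for c in plaintext]).reshape(-1, E.shape[0])
    c = (p @ E.T) % m                                 # each row is one block: c = E . p
    return ''.join(chr(int(v) + ord('A')) for v in c.ravel())

E = np.array([[3, 3],
              [2, 5]])                                # det = 9, invertible mod 26
print(hill_encrypt("HELP", E))
```

Decryption applies the inverse of E mod 26 in the same way, which is exactly where the algebraic structure discussed in the article comes into play.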
An Elementary Treatment of General Inner Products
ERIC Educational Resources Information Center
Graver, Jack E.
2011-01-01
A typical first course on linear algebra is usually restricted to vector spaces over the real numbers and the usual positive-definite inner product. Hence, the proof that dim(S) + dim(S⊥) = dim(V) is not presented in a way that is generalizable to non-positive-definite inner products or to vector spaces over other fields. In this…
Definition of Contravariant Velocity Components
NASA Technical Reports Server (NTRS)
Hung, Ching-Mao; Kwak, Dochan (Technical Monitor)
2002-01-01
This is an old issue in computational fluid dynamics (CFD): what is the so-called contravariant velocity, or contravariant velocity component? In this article, we review the basics of tensor analysis and give the contravariant velocity component a rigorous explanation. For a given coordinate system, there exist two uniquely determined base vector systems: the covariant and the contravariant base vector systems. The two base vector systems are reciprocal. The so-called contravariant velocity component is really the contravariant component of a velocity vector for a time-independent coordinate system, or the contravariant component of the relative velocity between the fluid and the coordinates for a time-dependent coordinate system. The contravariant velocity components are not physical quantities of the velocity vector; their magnitudes, dimensions, and associated directions are controlled by their corresponding covariant base vectors. Several 2-D (two-dimensional) linear examples and the 2-D mass-conservation equation are used to illustrate the details of expressing a vector with respect to the covariant and contravariant base vector systems, respectively.
Warps, grids and curvature in triple vector bundles
NASA Astrophysics Data System (ADS)
Flari, Magdalini K.; Mackenzie, Kirill
2018-06-01
A triple vector bundle is a cube of vector bundle structures which commute in the (strict) categorical sense. A grid in a triple vector bundle is a collection of sections of each bundle structure with certain linearity properties. A grid provides two routes around each face of the triple vector bundle, and six routes from the base manifold to the total manifold; the warps measure the lack of commutativity of these routes. In this paper we first prove that the sum of the warps in a triple vector bundle is zero. The proof we give is intrinsic and, we believe, clearer than the proof using decompositions given earlier by one of us. We apply this result to the triple tangent bundle T^3M of a manifold and deduce (as earlier) the Jacobi identity. We further apply the result to the triple vector bundle T^2A for a vector bundle A using a connection in A to define a grid in T^2A . In this case the curvature emerges from the warp theorem.
Temperature Effects and Compensation-Control Methods
Xia, Dunzhu; Chen, Shuling; Wang, Shourong; Li, Hongsheng
2009-01-01
In the analysis of the effects of temperature on the performance of microgyroscopes, it is found that the resonant frequency of the microgyroscope decreases linearly as the temperature increases, and the quality factor changes drastically at low temperatures. Moreover, the zero bias changes greatly with temperature variations. To reduce the temperature effects on the microgyroscope, temperature compensation-control methods are proposed. In the first place, a BP (Back Propagation) neural network and polynomial fitting are utilized for building the temperature model of the microgyroscope. Considering the simplicity and real-time requirements, piecewise polynomial fitting is applied in the temperature compensation system. Then, an integral-separated PID (Proportion Integration Differentiation) control algorithm is adopted in the temperature control system, which can stabilize the temperature inside the microgyroscope in pursuit of its optimal performance. Experimental results reveal that the combination of microgyroscope temperature compensation and control methods is both realizable and effective in a miniaturized microgyroscope prototype. PMID:22408509
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kozlovski, V. V.; Lebedev, A. A.; Bogdanova, E. V.
The model of conductivity compensation in SiC under irradiation with high-energy electrons is presented. The following processes are considered to cause a decrease in the free carrier concentration: (i) formation of deep traps by intrinsic point defects, Frenkel pairs produced by irradiation; (ii) 'deactivation' of the dopant via formation of neutral complexes including a dopant atom and a radiation-induced point defect; and (iii) formation of deep compensating traps via generation of charged complexes constituted by a dopant atom and a radiation-induced point defect. To determine the compensation mechanism, dose dependences of the deep compensation of moderately doped SiC (CVD) under electron irradiation have been experimentally studied. It is demonstrated that, in contrast to n-FZ-Si, moderately doped SiC (CVD) exhibits linear dependences (with a strongly nonlinear dependence observed for Si). Therefore, the conductivity compensation in silicon carbide under electron irradiation occurs due to deep traps formed by primary radiation defects (vacancies and interstitial atoms) in the silicon and carbon sublattices. It is known that the compensation in silicon is due to the formation of secondary radiation defects that include a dopant atom. It is shown that, in contrast to n-SiC (CVD), primary defects in only the carbon sublattice of moderately doped p-SiC (CVD) cannot account for the compensation process. In p-SiC, either primary defects in the silicon sublattice or defects in both sublattices are responsible for the conductivity compensation.
A comparative study of linear and nonlinear anomaly detectors for hyperspectral imagery
NASA Astrophysics Data System (ADS)
Goldberg, Hirsh; Nasrabadi, Nasser M.
2007-04-01
In this paper we implement various linear and nonlinear subspace-based anomaly detectors for hyperspectral imagery. First, a dual window technique is used to separate the local area around each pixel into two regions - an inner-window region (IWR) and an outer-window region (OWR). Pixel spectra from each region are projected onto a subspace which is defined by projection bases that can be generated in several ways. Here we use three common pattern classification techniques (Principal Component Analysis (PCA), Fisher Linear Discriminant (FLD) Analysis, and the Eigenspace Separation Transform (EST)) to generate projection vectors. In addition to these three algorithms, the well-known Reed-Xiaoli (RX) anomaly detector is also implemented. Each of the four linear methods is then implicitly defined in a high- (possibly infinite-) dimensional feature space by using a nonlinear mapping associated with a kernel function. Using a common machine-learning technique known as the kernel trick, all dot products in the feature space are replaced with a Mercer kernel function defined in terms of the original input data space. To determine how anomalous a given pixel is, we then project the current test pixel spectrum and the spectral mean vector of the OWR onto the linear and nonlinear projection vectors in order to exploit the statistical differences between the IWR and OWR pixels. Anomalies are detected if the separation between the projections of the current test pixel spectrum and the OWR mean spectrum is greater than a certain threshold. Comparisons are made using receiver operating characteristic (ROC) curves.
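A stripped-down, purely linear version of this dual-window idea (PCA projection bases from the outer window, separation of the projected test and background means as the anomaly score) is sketched below for a single local window with synthetic spectra; the window sizes, band count, and injected anomaly are assumptions for illustration, and no kernelization is included.

```python
import numpy as np

def pca_projection_score(iwr, owr, n_proj=3):
    """Separation of the inner-window mean spectrum from the outer-window mean
    after projecting both onto leading PCA directions of the outer-window pixels."""
    mu = owr.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(owr, rowvar=False))
    P = evecs[:, np.argsort(evals)[::-1][:n_proj]]                 # projection bases (PCA variant)
    return float(np.linalg.norm(P.T @ (iwr.mean(axis=0) - mu)))    # compared against a threshold

# Toy example: 50-band spectra; the anomalous inner window has a spectral offset.
rng = np.random.default_rng(3)
background = rng.standard_normal((200, 50))
owr = background[:150]
iwr_normal = background[150:160]
iwr_anomalous = iwr_normal + 2.0
print(pca_projection_score(iwr_normal, owr), pca_projection_score(iwr_anomalous, owr))
```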
Multi-color incomplete Cholesky conjugate gradient methods for vector computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poole, E.L.
1986-01-01
This research is concerned with the solution on vector computers of linear systems of equations Ax = b, where A is a large, sparse, symmetric positive definite matrix with non-zero elements lying only along a few diagonals of the matrix. The system is solved using the incomplete Cholesky conjugate gradient method (ICCG). Multi-color orderings of the unknowns in the linear system are used to obtain p-color matrices for which a no-fill block ICCG method is implemented on the CYBER 205 with O(N/p) length vector operations in both the decomposition of A and, more importantly, in the forward and back solves necessary at each iteration of the method (N is the number of unknowns and p is a small constant). A p-colored matrix is a matrix that can be partitioned into a p x p block matrix where the diagonal blocks are diagonal matrices. The matrix is stored by diagonals and matrix multiplication by diagonals is used to carry out the decomposition of A and the forward and back solves. Additionally, if the vectors across adjacent blocks line up, then some of the overhead associated with vector startups can be eliminated in the matrix-vector multiplication necessary at each conjugate gradient iteration. Necessary and sufficient conditions are given to determine which multi-color orderings of the unknowns correspond to p-color matrices, and a process is indicated for choosing multi-color orderings.
NASA Astrophysics Data System (ADS)
Bu, Xiangwei; Wu, Xiaoyan; He, Guangjun; Huang, Jiaqi
2016-03-01
This paper investigates the design of a novel adaptive neural controller for the longitudinal dynamics of a flexible air-breathing hypersonic vehicle with control input constraints. To reduce the complexity of controller design, the vehicle dynamics is decomposed into the velocity subsystem and the altitude subsystem. For each subsystem, only one neural network is utilized to approximate the lumped unknown function. By employing a minimal-learning parameter method to estimate the norm of the ideal weight vectors rather than their elements, only two adaptive parameters are required for neural approximation. Thus, the computational burden is lower than that of neural back-stepping schemes. In particular, to deal with the control input constraints, additional systems are exploited to compensate the actuators. Lyapunov synthesis proves that all the closed-loop signals involved are uniformly ultimately bounded. Finally, simulation results show that the adopted compensation scheme can handle the actuator constraints effectively, and moreover velocity and altitude can stably track their reference trajectories even when the physical limitations on control inputs are in effect.
Liang, Yunlei; Du, Zhijiang; Sun, Lining
2017-01-01
The tendon driven mechanism using a cable and pulley to transmit power is adopted by many surgical robots. However, backlash hysteresis objectively exists in cable-pulley mechanisms, and this nonlinear problem is a great challenge for precise position control during surgical procedures. Previous studies mainly focused on the transmission characteristics of the cable-driven system and constructed transmission models under particular assumptions to solve nonlinear problems. However, these approaches are limited because the modeling process is complex and the transmission models lack general applicability. This paper presents a novel position compensation control scheme to reduce the impact of backlash hysteresis on the positioning accuracy of surgical robots' end-effectors. A position compensation scheme using a support vector machine based on feedforward control is presented to reduce the position tracking error. To validate the proposed approach, experimental validations are conducted on our cable-pulley system and comparative experiments are carried out. The results show that the proposed scheme remarkably reduces the positioning error. PMID:28974011
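The core of such a scheme (learn the position- and direction-dependent transmission error with a support vector regressor, then add its prediction to the command as a feedforward term) can be sketched as follows; the backlash model, input features, and SVR hyperparameters are all invented for illustration and are not the paper's experimental setup.

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical cable-pulley data: the output lags the command by a direction-dependent offset.
rng = np.random.default_rng(4)
cmd = np.sin(np.linspace(0, 4 * np.pi, 400))                 # commanded joint position
direction = np.sign(np.gradient(cmd))                        # crude motion-direction feature
backlash = 0.05 * direction                                  # hysteresis-like transmission error
measured = cmd - backlash + 0.005 * rng.standard_normal(cmd.size)

X = np.column_stack([cmd, direction])
err = cmd - measured                                         # observed tracking error
model = SVR(kernel='rbf', C=10.0, epsilon=0.001).fit(X, err)

compensated_cmd = cmd + model.predict(X)                     # feedforward correction
output_after = compensated_cmd - backlash                    # what the noise-free plant would deliver
print("RMS tracking error before:", np.sqrt(np.mean(err ** 2)),
      "after:", np.sqrt(np.mean((output_after - cmd) ** 2)))
```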
Manga Vectorization and Manipulation with Procedural Simple Screentone.
Yao, Chih-Yuan; Hung, Shih-Hsuan; Li, Guo-Wei; Chen, I-Yu; Adhitya, Reza; Lai, Yu-Chi
2017-02-01
Manga are a popular artistic form around the world, and artists use simple line drawing and screentone to create all kinds of interesting productions. Vectorization is helpful to digitally reproduce these elements for proper content and intention delivery on electronic devices. Therefore, this study aims at transforming scanned Manga to a vector representation for interactive manipulation and real-time rendering at arbitrary resolution. Our system first decomposes the patch into rough Manga elements, including possible borders and shading regions, using adaptive binarization and a screentone detector. We classify detected screentone into simple and complex patterns: our system extracts simple screentone properties for refining screentone borders, estimating lighting, compensating for missing strokes inside screentone regions, and later rendering them resolution-independently with our procedural shaders. Our system treats the others as complex screentone areas and vectorizes them with our proposed line tracer, which aims at locating the boundaries of all shading regions and polishing all shading borders with the curve-based Gaussian refiner. A user can lay down simple scribbles to intuitively cluster Manga elements into semantic components, and our system vectorizes these components into shading meshes along with embedded Bézier curves as a unified foundation for consistent manipulation, including pattern manipulation, deformation, and lighting addition. Our system renders the shading regions in real time and independently of resolution with our procedural shaders and draws borders with the curve-based shader. For Manga manipulation, the proposed vector representation can be not only magnified without artifacts but also deformed easily to generate interesting results.
Coherent detection and digital signal processing for fiber optic communications
NASA Astrophysics Data System (ADS)
Ip, Ezra
The drive towards higher spectral efficiency in optical fiber systems has generated renewed interest in coherent detection. We review different detection methods, including noncoherent, differentially coherent, and coherent detection, as well as hybrid detection methods. We compare the modulation methods that are enabled and their respective performances in a linear regime. An important system parameter is the number of degrees of freedom (DOF) utilized in transmission. Polarization-multiplexed quadrature-amplitude modulation maximizes spectral efficiency and power efficiency as it uses all four available DOF contained in the two field quadratures in the two polarizations. Dual-polarization homodyne or heterodyne downconversion are linear processes that can fully recover the received signal field in these four DOF. When downconverted signals are sampled at the Nyquist rate, compensation of transmission impairments can be performed using digital signal processing (DSP). Software based receivers benefit from the robustness of DSP, flexibility in design, and ease of adaptation to time-varying channels. Linear impairments, including chromatic dispersion (CD) and polarization-mode dispersion (PMD), can be compensated quasi-exactly using finite impulse response filters. In practical systems, sampling the received signal at 3/2 times the symbol rate is sufficient to enable an arbitrary amount of CD and PMD to be compensated for a sufficiently long equalizer whose tap length scales linearly with transmission distance. Depending on the transmitted constellation and the target bit error rate, the analog-to-digital converter (ADC) should have around 5 to 6 bits of resolution. Digital coherent receivers are naturally suited for the implementation of feedforward carrier recovery, which is more tolerant of laser linewidth than phase-locked loops and does not suffer from feedback delay constraints. Differential bit encoding can be used to prevent catastrophic receiver failure due to cycle slips. In systems where nonlinear effects are concentrated mostly at fiber locations with small accumulated dispersion, nonlinear phase de-rotation is a low-complexity algorithm that can partially mitigate nonlinear effects. For systems with arbitrary dispersion maps, however, backpropagation is the only universal technique that can jointly compensate dispersion and fiber nonlinearity. Backpropagation requires solving the nonlinear Schrödinger equation at the receiver, and has high computational cost. Backpropagation is most effective when dispersion compensation fibers are removed, and when signal processing is performed at three times oversampling. Backpropagation can improve system performance and increase transmission distance. With anticipated advances in analog-to-digital converters and integrated circuit technology, DSP-based coherent receivers at bit rates up to 100 Gb/s should become practical in the near future.
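As one concrete example of the quasi-exact linear compensation mentioned above, chromatic dispersion can be undone with a frequency-domain all-pass equalizer. The sketch below applies and then removes dispersion on a toy single-polarization signal; the β2 value, fiber length, sampling rate, and signal construction are illustrative assumptions.

```python
import numpy as np

def cd_compensate(samples, fs, beta2, length):
    """Frequency-domain all-pass filter that removes the quadratic phase
    exp(1j*0.5*beta2*length*w**2) accumulated over the fiber."""
    w = 2 * np.pi * np.fft.fftfreq(samples.size, d=1 / fs)
    H = np.exp(-1j * 0.5 * beta2 * length * w ** 2)
    return np.fft.ifft(np.fft.fft(samples) * H)

rng = np.random.default_rng(5)
fs = 56e9                                                  # 2 samples/symbol at 28 Gbaud
sig = np.exp(1j * 0.5 * np.pi * rng.integers(0, 4, 4096)).repeat(2)   # QPSK-like waveform
beta2, L = -21.7e-27, 500e3                                # s^2/m (typical SMF) over 500 km
w = 2 * np.pi * np.fft.fftfreq(sig.size, d=1 / fs)
dispersed = np.fft.ifft(np.fft.fft(sig) * np.exp(1j * 0.5 * beta2 * L * w ** 2))
print(float(np.max(np.abs(cd_compensate(dispersed, fs, beta2, L) - sig))))   # ~numerical precision
```

In practice the same operation is implemented as a long FIR filter (or overlap-and-save FFT blocks) whose length grows linearly with transmission distance, as noted above.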
Compensation effect during the pyrolysis of tyres and bamboo.
Mui, Edward L K; Cheung, W H; Lee, Vinci K C; McKay, Gordon
2010-05-01
Pyrolysis parameters (e.g. the pre-exponential factor A and the activation energy E) of two waste materials, namely tyre rubber and bamboo scaffolding, based on the Arrhenius equation were obtained from weight loss data via thermogravimetry at different heating rates. The compensation effect, i.e. a linear relationship between ln A and the activation energy, was observed for these materials. This can be attributed to the variety of active sites over the reactant surface in the course of decomposition. The calculated data from several revised, first-order models were compared with similar models in the literature. It has been shown that both the literature data and our calculated data exhibit high linearity in terms of ln A and E, revealing that the latter agree well with other researchers' work. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Qin, Xi; Cao, Jihong; Chen, Yong; Zhang, Feng; Jian, Shuisheng
2007-08-01
An analytical expression was proposed to analyze the influence of group-delay ripple (GDR) on timing jitter induced by self-phase modulation (SPM) and intra-channel cross-phase modulation (IXPM) in pseudo-linear transmission systems when dispersion was compensated by chirped fiber Bragg grating (CFBG). Effects of ripple amplitude, period, and phase on timing jitter were discussed by theoretical and numerical analysis in detail. The results show that the influence of GDR on timing jitter changes linearly with the amplitude of GDR and whether it decreases or increases the timing jitter relies on the ripple period and ripple phase. Timing jitter induced by SPM and IXPM could be suppressed totally by adjusting the relative phase between the center frequency of the pulse and the ripples.
A dual estimate method for aeromagnetic compensation
NASA Astrophysics Data System (ADS)
Ma, Ming; Zhou, Zhijian; Cheng, Defu
2017-11-01
Scalar aeromagnetic surveys have played a vital role in prospecting. However, before analysis of the surveys’ aeromagnetic data is possible, the aircraft’s magnetic interference should be removed. The extensively adopted linear model for aeromagnetic compensation is computationally efficient but faces an underfitting problem. On the other hand, the neural model proposed by Williams is more powerful at fitting but always suffers from an overfitting problem. This paper starts off with an analysis of these two models and then proposes a dual estimate method to combine them together to improve accuracy. This method is based on an unscented Kalman filter, but a gradient descent method is implemented over the iteration so that the parameters of the linear model are adjustable during flight. The noise caused by the neural model’s overfitting problem is suppressed by introducing an observation noise.
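A hedged sketch of the linear-model half of this approach is shown below: the aircraft interference is regressed on a small set of attitude-dependent terms (direction cosines plus simplified induced and eddy-current surrogates) by least squares. The regressor set and all numbers are assumptions for illustration, not the paper's exact model or its dual-estimate filter.

```python
import numpy as np

# Hypothetical flight segment: direction cosines of the geomagnetic field in the aircraft frame.
rng = np.random.default_rng(6)
n = 2000
u = rng.standard_normal((n, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)

# Simplified regressors: permanent terms, a few induced cross terms, and a crude eddy surrogate.
A = np.column_stack([u, u[:, [0]] * u, np.gradient(u, axis=0)])
true_c = 5.0 * rng.standard_normal(A.shape[1])
interference = A @ true_c + 0.1 * rng.standard_normal(n)        # synthetic aircraft field, nT

c_hat, *_ = np.linalg.lstsq(A, interference, rcond=None)        # estimate compensation coefficients
residual = interference - A @ c_hat
print("interference std before/after compensation:", interference.std(), residual.std())
```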
Driever, Steven M; Baker, Neil R
2011-05-01
Electron flux from water via photosystem II (PSII) and PSI to oxygen (water-water cycle) may provide a mechanism for dissipation of excess excitation energy in leaves when CO(2) assimilation is restricted. Mass spectrometry was used to measure O(2) uptake and evolution together with CO(2) uptake in leaves of French bean and maize at CO(2) concentrations saturating for photosynthesis and the CO(2) compensation point. In French bean at high CO(2) and low O(2) concentrations no significant water-water cycle activity was observed. At the CO(2) compensation point and 3% O(2) a low rate of water-water cycle activity was observed, which accounted for 30% of the linear electron flux from water. In maize leaves negligible water-water cycle activity was detected at the compensation point. During induction of photosynthesis in maize linear electron flux was considerably greater than CO(2) assimilation, but no significant water-water cycle activity was detected. Miscanthus × giganteus grown at chilling temperature also exhibited rates of linear electron transport considerably in excess of CO(2) assimilation; however, no significant water-water cycle activity was detected. Clearly the water-water cycle can operate in leaves under some conditions, but it does not act as a major sink for excess excitation energy when CO(2) assimilation is restricted. © 2011 Blackwell Publishing Ltd.
Recursive inversion of externally defined linear systems by FIR filters
NASA Technical Reports Server (NTRS)
Bach, Ralph E., Jr.; Baram, Yoram
1989-01-01
The approximate inversion of an internally unknown linear system, given by its impulse response sequence, by an inverse system having a finite impulse response, is considered. The recursive least-squares procedure is shown to have an exact initialization, based on the triangular Toeplitz structure of the matrix involved. The proposed approach also suggests solutions to the problem of system identification and compensation.
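A plain batch least-squares version of the FIR inverse, ignoring the exactly initialized recursive procedure the paper develops but showing the problem being solved, might look like the following; the impulse response, tap count, and delay are illustrative.

```python
import numpy as np
from scipy.linalg import toeplitz

def fir_inverse(h, n_taps, delay=0):
    """Least-squares FIR filter g such that (h * g)[k] approximates delta[k - delay]."""
    n = len(h) + n_taps - 1
    H = toeplitz(np.r_[h, np.zeros(n_taps - 1)],        # convolution matrix, Toeplitz structure
                 np.r_[h[0], np.zeros(n_taps - 1)])
    d = np.zeros(n); d[delay] = 1.0                      # desired combined response
    g, *_ = np.linalg.lstsq(H, d, rcond=None)
    return g

h = np.array([1.0, 0.5, 0.2])          # externally measured impulse response (minimum phase here)
g = fir_inverse(h, n_taps=16)
print(np.round(np.convolve(h, g), 3))  # close to a unit impulse
```

For a non-minimum-phase response, choosing a nonzero delay generally improves the attainable approximation.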
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kozlovski, V. V.; Lebedev, A. A., E-mail: shura.lebe@mail.ioffe.ru; Bogdanova, E. V.
The compensation of moderately doped p-4H-SiC samples grown by the chemical vapor deposition (CVD) method under irradiation with 0.9-MeV electrons and 15-MeV protons is studied. The experimentally measured carrier removal rates are 1.2–1.6 cm⁻¹ for electrons and 240–260 cm⁻¹ for protons. The dependence of the concentration of uncompensated acceptors and donors, measured in the study, demonstrates a linear decrease with increasing irradiation dose up to the point of complete compensation. This behavior shows that the compensation of the samples is due to the transition of carriers to deep centers formed by primary radiation-induced defects. It is demonstrated that, in contrast to n-SiC (CVD), primary defects in the carbon sublattice alone cannot account for the compensation process in moderately doped p-SiC (CVD). In p-SiC, either primary defects in the silicon sublattice or defects in both sublattices are responsible for conductivity compensation. Also, photoluminescence spectra are examined in relation to the irradiation dose.
NASA Technical Reports Server (NTRS)
Birkhimer, Craig; Newman, Wyatt; Choi, Benjamin; Lawrence, Charles
1994-01-01
Increasing research is being done into industrial uses for the microgravity environment aboard orbiting space vehicles. However, there is some concern over the effects of reaction forces produced by moving objects, especially motors, robotic actuators, and astronauts. Reaction forces produced by the movement of these objects may manifest themselves as undesirable accelerations in the space vehicle making the vehicle unusable for microgravity applications. It is desirable to provide compensation for such forces using active means. This paper presents the design and experimental evaluation of the NASA three degree of freedom reaction compensation platform, a system designed to be a testbed for the feasibility of active attenuation of reaction forces caused by moving objects in a microgravity environment. Unique 'linear motors,' which convert electrical current directly into rectilinear force, are used in the platform design. The linear motors induce accelerations of the displacer inertias. These accelerations create reaction forces that may be controlled to counteract disturbance forces introduced to the platform. The stated project goal is to reduce reaction forces by 90 percent, or -20 dB. Description of the system hardware, characterization of the actuators and the composite system, and design of the software safety system and control software are included.
Temperature Compensation Fiber Bragg Grating Pressure Sensor Based on Plane Diaphragm
NASA Astrophysics Data System (ADS)
Liang, Minfu; Fang, Xinqiu; Ning, Yaosheng
2018-06-01
Pressure sensors are essential equipment in the field of pressure measurement. In this work, we propose a temperature-compensated fiber Bragg grating (FBG) pressure sensor based on a plane diaphragm. The plane diaphragm and the pressure-sensitive FBG (PS FBG) are used as the pressure-sensing components, and the temperature-compensation FBG (TC FBG) is used to mitigate temperature cross-sensitivity. A mechanical deformation model of the diaphragm and a simulation analysis of its deformation characteristics are presented. The measurement principle and a theoretical analysis of the mathematical relationship between the FBG central wavelength shift and the pressure are introduced. The sensitivity and measurement range can be adjusted by using different diaphragm materials and sizes to accommodate different measurement environments. Performance experiments were carried out, and the results indicate that the pressure sensitivity of the sensor is 35.7 pm/MPa over a range from 0 MPa to 50 MPa, with good linearity (linear-fit correlation coefficient of 99.95%). In addition, the sensor has the advantages of low frequency chirp and high stability, and it can be used to measure pressure in mining engineering, civil engineering, or other complex environments.
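The compensation arithmetic reduces to subtracting the thermally induced part of the pressure-FBG wavelength shift, estimated from the temperature-compensation FBG, before scaling by the pressure sensitivity. In the sketch below only the 35.7 pm/MPa figure comes from the abstract; the thermal sensitivities are assumed values.

```python
# Dual-FBG compensation arithmetic (sensitivities below are assumptions, except S_P).
S_P = 35.7e-3        # nm/MPa, pressure sensitivity of the PS FBG (from the abstract)
S_T_PS = 10.0e-3     # nm/degC, assumed thermal sensitivity of the PS FBG
S_T_TC = 10.0e-3     # nm/degC, assumed thermal sensitivity of the strain-isolated TC FBG

def pressure_from_shifts(d_lambda_ps_nm, d_lambda_tc_nm):
    """Remove the thermal part of the PS-FBG shift using the TC FBG, then convert to MPa."""
    thermal_part = (S_T_PS / S_T_TC) * d_lambda_tc_nm
    return (d_lambda_ps_nm - thermal_part) / S_P

# Example: 0.9 nm total shift on the PS FBG while the TC FBG reports a 0.2 nm thermal shift.
print(pressure_from_shifts(0.9, 0.2))   # about 19.6 MPa
```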
NASA Astrophysics Data System (ADS)
Chikvashvili, Ioseb
2011-10-01
The proposed concept uses two ion beams directed coaxially in the same direction but with different velocities (the center-of-mass collision energy should be sufficient for fusion); a relativistic electron beam directed oppositely provides only partial compensation of the positive space charge and allows the combined beam to pinch; and a longitudinal electric field compensates for the alignment of the velocities of the reacting particles and for the energy losses of electrons via Bremsstrahlung. On the basis of this concept, different types of reactor designs can be realized: linear and cyclic. In its simplest embodiment, the cyclic reactor may include a betatron-type device (a circular store of externally injected particles, i.e., an induction accelerator), a pulsed high-current relativistic electron injector, a pulsed high-current slower-ion injector, a pulsed high-current faster-ion injector, and a reaction-product extractor. Using present-day technologies and materials (or a reasonable extrapolation of them), induction linear injectors (ions and electrons) can reach currents of thousands of amperes with repetition rates up to 10 Hz, and the same holds for high-current betatrons (FFAG, Stellatron, etc.). It is therefore possible to build a fusion reactor using the proposed method today.
NASA Astrophysics Data System (ADS)
Chinowsky, Timothy M.; Yee, Sinclair S.
2002-02-01
Surface plasmon resonance (SPR) affinity sensing, the problem of bulk refractive index (RI) interference in SPR sensing, and a sensor developed to overcome this problem are briefly reviewed. The sensor uses a design based on Texas Instruments' Spreeta SPR sensor to simultaneously measure both bulk and surface RI. The bulk RI measurement is then used to compensate the surface measurement and remove the effects of bulk RI interference. To achieve accurate compensation, robust data analysis and calibration techniques are necessary. Simple linear data analysis techniques derived from measurements of the sensor response were found to provide a versatile, low noise method for extracting measurements of bulk and surface refractive index from the raw sensor data. Automatic calibration using RI gradients was used to correct the linear estimates, enabling the sensor to produce accurate data even when the sensor has a complicated nonlinear response which varies with time. The calibration procedure is described, and the factors influencing calibration accuracy are discussed. Data analysis and calibration principles are illustrated with an experiment in which sucrose and detergent solutions are used to produce changes in bulk and surface RI, respectively.
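A toy version of the linear compensation step (estimate the surface channel's bulk cross-sensitivity from a binding-free calibration segment, then subtract the scaled bulk signal) is sketched below with synthetic data; the two-channel model and all numbers are assumptions, not the Spreeta calibration procedure described in the paper.

```python
import numpy as np

# Synthetic run: bulk RI drifts (sucrose-like), surface RI steps up when binding starts.
rng = np.random.default_rng(7)
n = 500
bulk = 1e-4 * np.cumsum(rng.standard_normal(n))
surface = np.concatenate([np.zeros(200), 5e-5 * np.ones(300)])

raw_surface = surface + 0.8 * bulk + 1e-6 * rng.standard_normal(n)   # surface channel sees bulk too
raw_bulk = bulk + 1e-6 * rng.standard_normal(n)

# Calibrate the cross-sensitivity on the first 200 samples (no surface binding), then compensate.
k = np.linalg.lstsq(raw_bulk[:200, None], raw_surface[:200], rcond=None)[0][0]
compensated = raw_surface - k * raw_bulk
print("estimated cross-sensitivity:", k)
```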
NASA Astrophysics Data System (ADS)
Sauppe, Sebastian; Hahn, Andreas; Brehm, Marcus; Paysan, Pascal; Seghers, Dieter; Kachelrieß, Marc
2016-03-01
We propose an adapted method of our previously published five-dimensional (5D) motion compensation (MoCo) algorithm [1], developed for micro-CT imaging of small animals, to provide for the first time motion-artifact-free 5D cone-beam CT (CBCT) images from a conventional flat-detector-based CBCT scan of clinical patients. The image quality of retrospectively respiratory- and cardiac-gated volumes from flat-detector CBCT scans is degraded by severe sparse-projection artifacts. These artifacts further complicate motion estimation, which is required for MoCo image reconstruction. To obtain high-quality 5D CBCT images at the same x-ray dose and the same number of projections as today's 3D CBCT, we developed a double MoCo approach based on motion vector fields (MVFs) for respiratory and cardiac motion. In a first step, our previously published four-dimensional (4D) artifact-specific cyclic motion-compensation (acMoCo) approach is applied to compensate for the respiratory patient motion. With this information, a cyclic phase-gated deformable heart registration algorithm is applied to the respiratory motion-compensated 4D CBCT data, resulting in cardiac MVFs. We apply these MVFs to double-gated images, thereby obtaining respiratory and cardiac motion-compensated 5D CBCT images. Our 5D MoCo approach was applied to patient data acquired with the TrueBeam 4D CBCT system (Varian Medical Systems). The double MoCo approach turned out to be very efficient and removed nearly all streak artifacts because it makes use of 100% of the projection data for each reconstructed frame. The 5D MoCo patient data show fine details and no motion blurring, even in regions close to the heart where motion is fastest.
The compensation of quadrupole errors and space charge effects by using trim quadrupoles
NASA Astrophysics Data System (ADS)
An, YuWen; Wang, Sheng
2011-12-01
The China Spallation Neutron Source (CSNS) accelerators consist of an H⁻ linac and a proton Rapid Cycling Synchrotron (RCS). The RCS is designed to accumulate and accelerate the proton beam from 80 MeV to 1.6 GeV with a repetition rate of 25 Hz. The main dipole and quadrupole magnets will operate in AC mode. Due to the adoption of resonant power supplies, saturation errors of the magnetic field cannot be compensated by the power supplies. These saturation errors will disturb the linear optics parameters, such as the tunes, beta functions and dispersion function. The strong space charge effects will cause emittance growth. The compensation of these effects by using trim quadrupoles is studied, and the corresponding results are presented.
Compliant tactile sensor that delivers a force vector
NASA Technical Reports Server (NTRS)
Torres-Jara, Eduardo (Inventor)
2010-01-01
Tactile Sensor. The sensor includes a compliant convex surface disposed above a sensor array, the sensor array adapted to respond to deformation of the convex surface to generate a signal related to an applied force vector. The applied force vector has three components to establish the direction and magnitude of an applied force. The compliant convex surface defines a dome with a hollow interior and has a linear relation between displacement and load including a magnet disposed substantially at the center of the dome above a sensor array that responds to magnetic field intensity.
Comparison of Linear and Nonlinear Processing with Acoustic Vector Sensors
2008-09-01
can write the general form of the time-invariant vector sensor plane-wave response as v_m = V_m e^(i k · r_m) (2.21), where v_xm = V_xm e^(i k · r_m), v_ym = V_ym e^(i k · r_m), and v_zm = V_zm e^(i k · r_m). Using the vector geometry defined, the response of each component is defined by V_xm = V_m cos θ, V_ym = V_m sin θ … velocity values relative to the other by the acoustic impedance, ρc, according to Equation (2.19), e.g., v_pm = V_pm e^(i k · r_m), V_pm = P_m / (ρc).
A Wavelet Support Vector Machine Combination Model for Singapore Tourist Arrival to Malaysia
NASA Astrophysics Data System (ADS)
Rafidah, A.; Shabri, Ani; Nurulhuda, A.; Suhaila, Y.
2017-08-01
In this study, a wavelet support vector machine (WSVM) model is proposed and applied to the prediction of the monthly Singapore tourist arrival time series. The WSVM model is a combination of wavelet analysis and the support vector machine (SVM). The study has two parts: in the first part we compare kernel functions, and in the second part we compare the developed model with the single SVM model. The results show that the linear kernel function performs better than the RBF kernel, and that the WSVM outperforms the single SVM model in forecasting monthly Singapore tourist arrivals to Malaysia.
H2, fixed architecture, control design for large scale systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Mercadal, Mathieu
1990-01-01
The H2, fixed architecture, control problem is a classic linear quadratic Gaussian (LQG) problem whose solution is constrained to be a linear time invariant compensator with a decentralized processing structure. The compensator can be made of p independent subcontrollers, each of which has a fixed order and connects selected sensors to selected actuators. The H2, fixed architecture, control problem allows the design of simplified feedback systems needed to control large scale systems. Its solution becomes more complicated, however, as more constraints are introduced. This work derives the necessary conditions for optimality for the problem and studies their properties. It is found that the filter and control problems couple when the architecture constraints are introduced, and that the different subcontrollers must be coordinated in order to achieve global system performance. The problem requires the simultaneous solution of highly coupled matrix equations. The use of homotopy is investigated as a numerical tool, and its convergence properties studied. It is found that the general constrained problem may have multiple stabilizing solutions, and that these solutions may be local minima or saddle points for the quadratic cost. The nature of the solution is not invariant when the parameters of the system are changed. Bifurcations occur, and a solution may continuously transform into a nonstabilizing compensator. Using a modified homotopy procedure, fixed architecture compensators are derived for models of large flexible structures to help understand the properties of the constrained solutions and compare them to the corresponding unconstrained ones.
Classification of subsurface objects using singular values derived from signal frames
Chambers, David H; Paglieroni, David W
2014-05-06
The classification system represents a detected object with a feature vector derived from the return signals acquired by an array of N transceivers operating in multistatic mode. The classification system generates the feature vector by transforming the real-valued return signals into complex-valued spectra, using, for example, a Fast Fourier Transform. The classification system then generates a feature vector of singular values for each user-designated spectral sub-band by applying a singular value decomposition (SVD) to the N×N square complex-valued matrix formed from sub-band samples associated with all possible transmitter-receiver pairs. The resulting feature vector of singular values may be transformed into a feature vector of singular value likelihoods and then subjected to a multi-category linear or neural network classifier for object classification.
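A compact sketch of the feature construction (FFT of each transmitter-receiver return, then singular values of an N×N matrix per spectral sub-band) is shown below. Averaging the sub-band bins into a single matrix per band is a simplification assumed here, as are the array size and band edges.

```python
import numpy as np

def subband_svd_features(returns, bands):
    """Feature vector of singular values per sub-band.

    returns : (N, N, T) real array of return signals for every transmitter-receiver pair.
    bands   : list of (start_bin, stop_bin) FFT index ranges (the user-designated sub-bands).
    """
    spectra = np.fft.rfft(returns, axis=-1)               # complex-valued spectra per pair
    feats = []
    for lo, hi in bands:
        M = spectra[:, :, lo:hi].mean(axis=-1)            # N x N complex matrix for this sub-band
        feats.append(np.linalg.svd(M, compute_uv=False))  # its singular values
    return np.concatenate(feats)                          # fed to a linear or neural-net classifier

rng = np.random.default_rng(8)
N, T = 4, 256                                             # 4 transceivers, 256 time samples
returns = rng.standard_normal((N, N, T))
print(subband_svd_features(returns, bands=[(5, 20), (20, 40)]).shape)
```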
Stable solutions of inflation driven by vector fields
NASA Astrophysics Data System (ADS)
Emami, Razieh; Mukohyama, Shinji; Namba, Ryo; Zhang, Ying-li
2017-03-01
Many models of inflation driven by vector fields alone have been known to be plagued by pathological behaviors, namely ghost and/or gradient instabilities. In this work, we seek a new class of vector-driven inflationary models that evade all of the mentioned instabilities. We build our analysis on the Generalized Proca Theory with an extension to three vector fields to realize isotropic expansion. We obtain the conditions required for quasi de-Sitter solutions to be an attractor analogous to the standard slow-roll one and those for their stability at the level of linearized perturbations. Identifying the remedy to the existing unstable models, we provide a simple example and explicitly show its stability. This significantly broadens our knowledge on vector inflationary scenarios, reviving potential phenomenological interests for this class of models.
The history of polarisation measurements: their role in studies of magnetic fields
NASA Astrophysics Data System (ADS)
Wielebinski, R.
2015-03-01
Radio astronomy gave us new methods to study magnetic fields. Synchrotron radiation, the main source of cosmic radio waves, is highly linearly polarised with the 'E' vector normal to the magnetic field. The Faraday effect rotates the 'E' vector in thermal regions by an amount determined by the magnetic field along the line of sight. Also, the radio Zeeman effect has been observed.
The Geometry of Enhancement in Multiple Regression
ERIC Educational Resources Information Center
Waller, Niels G.
2011-01-01
In linear multiple regression, "enhancement" is said to occur when R² = b′r > r′r, where b is a p × 1 vector of standardized regression coefficients and r is a p × 1 vector of correlations between a criterion y and a set of standardized regressors, x. When p = 1 then b ≅ r and…
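A quick numerical check of this condition, using a suppressor-like pair of correlated regressors chosen so that enhancement occurs (the data and correlation structure are invented), can be written as:

```python
import numpy as np

rng = np.random.default_rng(9)
n = 10000
x1 = rng.standard_normal(n)
x2 = 0.9 * x1 + np.sqrt(1 - 0.9**2) * rng.standard_normal(n)   # highly correlated regressors
y = x1 - x2 + 0.1 * rng.standard_normal(n)                      # criterion in a suppressor setup

X = np.column_stack([x1, x2])
X = (X - X.mean(0)) / X.std(0)                # standardize regressors and criterion
y = (y - y.mean()) / y.std()
r = X.T @ y / n                               # correlations of the criterion with each regressor
b = np.linalg.solve(X.T @ X / n, r)           # standardized regression coefficients
print("R^2 = b'r =", b @ r, "   r'r =", r @ r)   # enhancement: b'r exceeds r'r
```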
Vector meson photoproduction with a linearly polarized beam
NASA Astrophysics Data System (ADS)
Mathieu, V.; Nys, J.; Fernández-Ramírez, C.; Jackura, A.; Pilloni, A.; Sherrill, N.; Szczepaniak, A. P.; Fox, G.; Joint Physics Analysis Center
2018-05-01
We propose a model based on Regge theory to describe photoproduction of light vector mesons. We fit the SLAC data and make predictions for the energy and momentum-transfer dependence of the spin-density matrix elements in photoproduction of ω, ρ0 and ϕ mesons at Eγ ≈ 8.5 GeV, which are soon to be measured at Jefferson Lab.
NASA Astrophysics Data System (ADS)
Ono, Hiroshi; Kuzuwata, Mitsuru; Sasaki, Tomoyuki; Noda, Kohei; Kawatsuki, Nobuhiro
2014-03-01
Blazed vector gratings possessing antisymmetric distributions of birefringence were fabricated by exposing photosensitive polymer liquid crystals to line-focused linearly polarized ultraviolet light. The polarization states of the diffracted beams can be controlled over a wide range by designing the blazed structures, and the diffraction properties were well explained by Jones calculus.
Cosmology for quadratic gravity in generalized Weyl geometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiménez, Jose Beltrán; Heisenberg, Lavinia; Koivisto, Tomi S.
A class of vector-tensor theories arises naturally in the framework of quadratic gravity in spacetimes with linear vector distortion. Requiring the absence of ghosts for the vector field imposes an interesting condition on the allowed connections with vector distortion: the resulting one-parameter family of connections generalises the usual Weyl geometry with polar torsion. The cosmology of this class of theories is studied, focusing on isotropic solutions wherein the vector field is dominated by the temporal component. De Sitter attractors are found and inhomogeneous perturbations around such backgrounds are analysed. In particular, further constraints on the models are imposed by excluding pathologies in the scalar, vector and tensor fluctuations. Various exact background solutions are presented, describing a constant and an evolving dark energy, a bounce and a self-tuning de Sitter phase. However, the latter two scenarios are not viable under a closer scrutiny.
Gu, Bing; Xu, Danfeng; Rui, Guanghao; Lian, Meng; Cui, Yiping; Zhan, Qiwen
2015-09-20
Generation of vectorial optical fields with arbitrary polarization distribution is of great interest in areas where exotic optical fields are desired. In this work, we experimentally demonstrate the versatile generation of linearly polarized vector fields, elliptically polarized vector fields, and circularly polarized vortex beams through introducing attenuators in a common-path interferometer. By means of Richards-Wolf vectorial diffraction method, the characteristics of the highly focused elliptically polarized vector fields are studied. The optical force and torque on a dielectric Rayleigh particle produced by these tightly focused vector fields are calculated and exploited for the stable trapping of dielectric Rayleigh particles. It is shown that the additional degree of freedom provided by the elliptically polarized vector field allows one to control the spatial structure of polarization, to engineer the focusing field, and to tailor the optical force and torque on a dielectric Rayleigh particle.
Computational Investigation of Fluidic Counterflow Thrust Vectoring
NASA Technical Reports Server (NTRS)
Hunter, Craig A.; Deere, Karen A.
1999-01-01
A computational study of fluidic counterflow thrust vectoring has been conducted. Two-dimensional numerical simulations were run using the computational fluid dynamics code PAB3D with two-equation turbulence closure and linear Reynolds stress modeling. For validation, computational results were compared to experimental data obtained at the NASA Langley Jet Exit Test Facility. In general, computational results were in good agreement with experimental performance data, indicating that efficient thrust vectoring can be obtained with low secondary flow requirements (less than 1% of the primary flow). An examination of the computational flowfield has revealed new details about the generation of a countercurrent shear layer, its relation to secondary suction, and its role in thrust vectoring. In addition to providing new information about the physics of counterflow thrust vectoring, this work appears to be the first documented attempt to simulate the counterflow thrust vectoring problem using computational fluid dynamics.
Analysis of a Linear System for Variable-Thrust Control in the Terminal Phase of Rendezvous
NASA Technical Reports Server (NTRS)
Hord, Richard A.; Durling, Barbara J.
1961-01-01
A linear system for applying thrust to a ferry vehicle in the terminal phase of rendezvous with a satellite is analyzed. This system requires that the ferry thrust vector per unit mass be variable and equal to a suitable linear combination of the measured position and velocity vectors of the ferry relative to the satellite. The variations of the ferry position, speed, acceleration, and mass ratio are examined for several combinations of the initial conditions and two basic control parameters analogous to the undamped natural frequency and the fraction of critical damping. Upon making a desirable selection of one control parameter and requiring minimum fuel expenditure for given terminal-phase initial conditions, a simplified analysis in one dimension practically fixes the choice of the remaining control parameter. The system can be implemented by an automatic controller or by a pilot.
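The control law can be sketched as a double-integrator simulation in which the commanded thrust acceleration is a linear combination of relative position and velocity, parameterized (as in the report's terminology) by an undamped natural frequency and a damping fraction. The numerical values below are assumed for illustration, and orbital-dynamics terms are omitted from this simplified sketch.

```python
import numpy as np

wn, zeta = 0.05, 0.9                   # illustrative control parameters, not the report's values
dt, steps = 0.5, 2000                  # time step [s] and number of steps
x = np.array([5000.0, 0.0, 0.0])       # relative position, m
v = np.array([-20.0, 5.0, 0.0])        # relative velocity, m/s

for _ in range(steps):
    a = -(wn**2 * x + 2 * zeta * wn * v)   # thrust acceleration per unit mass (linear law)
    v = v + a * dt
    x = x + v * dt

print(np.linalg.norm(x), np.linalg.norm(v))   # both decay toward zero as the ferry closes in
```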
Parks, David R; Roederer, Mario; Moore, Wayne A
2006-06-01
In immunofluorescence measurements and most other flow cytometry applications, fluorescence signals of interest can range down to essentially zero. After fluorescence compensation, some cell populations will have low means and include events with negative data values. Logarithmic presentation has been very useful in providing informative displays of wide-ranging flow cytometry data, but it fails to adequately display cell populations with low means and high variances and, in particular, offers no way to include negative data values. This has led to a great deal of difficulty in interpreting and understanding flow cytometry data, has often resulted in incorrect delineation of cell populations, and has led many people to question the correctness of compensation computations that were, in fact, correct. We identified a set of criteria for creating data visualization methods that accommodate the scaling difficulties presented by flow cytometry data. On the basis of these, we developed a new data visualization method that provides important advantages over linear or logarithmic scaling for display of flow cytometry data, a scaling we refer to as "Logicle" scaling. Logicle functions represent a particular generalization of the hyperbolic sine function with one more adjustable parameter than linear or logarithmic functions. Finally, we developed methods for objectively and automatically selecting an appropriate value for this parameter. The Logicle display method provides more complete, appropriate, and readily interpretable representations of data that includes populations with low-to-zero means, including distributions resulting from fluorescence compensation procedures, than can be produced using either logarithmic or linear displays. The method includes a specific algorithm for evaluating actual data distributions and deriving parameters of the Logicle scaling function appropriate for optimal display of that data. It is critical to note that Logicle visualization does not change the data values or the descriptive statistics computed from them. Copyright 2006 International Society for Analytical Cytology.
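The kind of scale involved can be illustrated with a simple inverse-hyperbolic-sine mapping, which is approximately linear near zero (so negative, post-compensation values display sensibly) and approximately logarithmic for large values. This is only a stand-in for illustration, not the Logicle function itself, and the cofactor is an arbitrary choice.

```python
import numpy as np

def arcsinh_scale(x, cofactor=150.0):
    """Display transform: ~linear for |x| << cofactor, ~logarithmic for |x| >> cofactor."""
    return np.arcsinh(np.asarray(x, dtype=float) / cofactor)

events = [-500.0, -50.0, 0.0, 50.0, 500.0, 5e3, 5e4, 2e5]     # post-compensation intensities
print(np.round(arcsinh_scale(events), 2))                      # negatives map symmetrically about 0
```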
A prototype automatic phase compensation module
NASA Technical Reports Server (NTRS)
Terry, John D.
1992-01-01
The growing demands for high-gain and accurate satellite communication systems will necessitate the utilization of large reflector systems. One area of concern in reflector-based satellite communication is large-scale surface deformation due to thermal effects. These distortions, when present, can degrade the performance of the reflector system appreciably. This performance degradation is manifested as a decrease in peak gain, an increase in sidelobe level, and pointing errors. It is essential to compensate for these distortion effects and to maintain the required system performance in the operating space environment. For this reason the development of a technique to offset the degradation effects is highly desirable. Currently, most research is directed at developing better materials for the reflector; these materials have a lower coefficient of linear expansion, thereby reducing the surface errors. Alternatively, one can minimize the distortion effects of these large-scale errors by adaptive phased-array compensation. Adaptive phased-array techniques have been studied extensively at NASA and elsewhere. Presented in this paper is a prototype automatic phase compensation module, designed and built at NASA Lewis Research Center, which is the first stage of development for an adaptive array compensation module.
Demonstrating the Direction of Angular Velocity in Circular Motion
NASA Astrophysics Data System (ADS)
Demircioglu, Salih; Yurumezoglu, Kemal; Isik, Hakan
2015-09-01
Rotational motion is ubiquitous in nature, from astronomical systems to household devices in everyday life to elementary models of atoms. Unlike the tangential velocity vector that represents the instantaneous linear velocity (magnitude and direction), an angular velocity vector is conceptually more challenging for students to grasp. In physics classrooms, the direction of an angular velocity vector is taught by the right-hand rule, a mnemonic tool intended to aid memory. A setup constructed for instructional purposes may provide students with a more easily understood and concrete method to observe the direction of the angular velocity. This article attempts to demonstrate the angular velocity vector using the observable motion of a screw mounted to a remotely operated toy car.
A Demons algorithm for image registration with locally adaptive regularization.
Cahill, Nathan D; Noble, J Alison; Hawkes, David J
2009-01-01
Thirion's Demons is a popular algorithm for nonrigid image registration because of its linear computational complexity and ease of implementation. It approximately solves the diffusion registration problem by successively estimating force vectors that drive the deformation toward alignment and smoothing the force vectors by Gaussian convolution. In this article, we show how the Demons algorithm can be generalized to allow image-driven locally adaptive regularization in a manner that preserves both the linear complexity and ease of implementation of the original Demons algorithm. We show that the proposed algorithm exhibits lower target registration error and requires less computational effort than the original Demons algorithm on the registration of serial chest CT scans of patients with lung nodules.
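For readers unfamiliar with the Demons scheme, the following is a minimal 2D sketch of the classic additive update (force estimation followed by Gaussian smoothing of the displacement field). It uses a single global smoothing width, whereas the paper's contribution is to make that regularization locally adaptive; step size and iteration count are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def demons_2d(fixed, moving, iters=50, sigma=2.0, step=1.0):
    """Minimal additive Demons registration sketch (fixed Gaussian smoothing).

    The paper's locally adaptive variant would vary `sigma` spatially based
    on image content; here a single global value is used for brevity.
    """
    fixed = fixed.astype(float)
    moving = moving.astype(float)
    ny, nx = fixed.shape
    gy, gx = np.gradient(fixed)
    yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
    uy = np.zeros_like(fixed)
    ux = np.zeros_like(fixed)
    for _ in range(iters):
        warped = map_coordinates(moving, [yy + uy, xx + ux],
                                 order=1, mode='nearest')
        diff = warped - fixed
        denom = gx**2 + gy**2 + diff**2
        denom[denom == 0] = 1.0
        # Demons force: drives the warped image toward the fixed image.
        fy = -diff * gy / denom
        fx = -diff * gx / denom
        uy = gaussian_filter(uy + step * fy, sigma)  # regularize by smoothing
        ux = gaussian_filter(ux + step * fx, sigma)
    return uy, ux
```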
NASA Astrophysics Data System (ADS)
Ndaw, Joseph D.; Faye, Andre; Maïga, Amadou S.
2017-05-01
Artificial neural network (ANN)-based models are efficient approaches to source localisation. However, very large training sets are needed to precisely estimate the two-dimensional direction of arrival (2D-DOA) with ANN models. In this paper we present a fast artificial neural network approach for 2D-DOA estimation with reduced training set sizes. We exploit the symmetry properties of Uniform Circular Arrays (UCA) to build two different datasets for elevation and azimuth angles. Learning Vector Quantisation (LVQ) neural networks are then sequentially trained on each dataset to separately estimate elevation and azimuth angles. A multilevel training process is applied to further reduce the training set sizes.
Linearly polarized vector modes: enabling MIMO-free mode-division multiplexing.
Wang, Lixian; Nejad, Reza Mirzaei; Corsi, Alessandro; Lin, Jiachuan; Messaddeq, Younès; Rusch, Leslie; LaRochelle, Sophie
2017-05-15
We experimentally investigate mode-division multiplexing in an elliptical ring core fiber (ERCF) that supports linearly polarized vector modes (LPV). Characterization shows that the ERCF exhibits good polarization-maintaining properties over eight LPV modes, with an effective index difference larger than 1 × 10⁻⁴. The ERCF further displays stable mode power and polarization extinction ratio when subjected to external perturbations. Crosstalk between the LPV modes, after propagating through 0.9 km of ERCF, is below -14 dB. By using six LPV modes as independent data channels, we achieved the transmission of 32 Gbaud QPSK over 0.9 km of ERCF without any multiple-input multiple-output (MIMO) or polarization-division multiplexing (PDM) signal processing.
Coding tools investigation for next generation video coding based on HEVC
NASA Astrophysics Data System (ADS)
Chen, Jianle; Chen, Ying; Karczewicz, Marta; Li, Xiang; Liu, Hongbin; Zhang, Li; Zhao, Xin
2015-09-01
The new state-of-the-art video coding standard, H.265/HEVC, was finalized in 2013 and achieves roughly a 50% bit rate saving compared with its predecessor, H.264/MPEG-4 AVC. This paper provides evidence that there is still potential for further coding efficiency improvements. A brief overview of HEVC is first given in the paper. Then, our improvements to each main module of HEVC are presented. For instance, the recursive quadtree block structure is extended to support larger coding units and transform units. The motion information prediction scheme is improved by advanced temporal motion vector prediction, which inherits the motion information of each small block within a large block from a temporal reference picture. Cross-component prediction with a linear prediction model improves intra prediction, and overlapped block motion compensation improves the efficiency of inter prediction. Furthermore, coding of both intra and inter prediction residuals is improved by an adaptive multiple transform technique. Finally, in addition to the deblocking filter and SAO, an adaptive loop filter is applied to further enhance the reconstructed picture quality. This paper describes the above-mentioned techniques in detail and evaluates their coding performance benefits based on the common test conditions used during HEVC development. The simulation results show that significant performance improvement over the HEVC standard can be achieved, especially for high-resolution video material.
Intelligent complementary sliding-mode control for LUSMS-based X-Y-theta motion control stage.
Lin, Faa-Jeng; Chen, Syuan-Yi; Shyu, Kuo-Kai; Liu, Yen-Hung
2010-07-01
An intelligent complementary sliding-mode control (ICSMC) system using a recurrent wavelet-based Elman neural network (RWENN) estimator is proposed in this study to control the mover position of a linear ultrasonic motors (LUSMs)-based X-Y-theta motion control stage for the tracking of various contours. By the addition of a complementary generalized error transformation, the complementary sliding-mode control (CSMC) can efficiently reduce the guaranteed ultimate bound of the tracking error by half compared with sliding-mode control (SMC) while using the saturation function. To estimate a lumped uncertainty on-line and replace the hitting control of the CSMC directly, the RWENN estimator is adopted in the proposed ICSMC system. In the RWENN, each hidden neuron employs a different wavelet function as an activation function to improve both the convergence precision and the convergence time compared with the conventional Elman neural network (ENN). The estimation laws of the RWENN are derived using the Lyapunov stability theorem to train the network parameters on-line. A robust compensator is also proposed to confront the uncertainties, including the approximation error, optimal parameter vectors, and higher-order terms in the Taylor series. Finally, experimental results for the tracking of various contours show that the tracking performance of the ICSMC system is significantly improved compared with the SMC and CSMC systems.
NASA Astrophysics Data System (ADS)
Madlazim; Prastowo, T.; Supardiyono; Hardy, T.
2018-03-01
Monitoring of volcanoes has been an important issue for many purposes, particularly hazard mitigation. With regard to this, the aims of the present work are to estimate and analyse source parameters of a volcanic earthquake driven by recent magmatic events of Mount Agung in Bali island that occurred on September 28, 2017. The broadband seismogram data consisting of 3 local component waveforms were recorded by the IA network of 5 seismic stations: SRBI, DNP, BYJI, JAGI, and TWSI (managed by BMKG). These land-based observatories covered a full 4-quadrant region surrounding the epicenter. The methods used in the present study were seismic moment-tensor inversions, where the data were all analyzed to extract the parameters, namely moment magnitude, type of a volcanic earthquake indicated by percentages of seismic components: compensated linear vector dipole (CLVD), isotropic (ISO), double-couple (DC), and source depth. The results are given in the forms of a variance reduction of 65%, a magnitude of Mw 3.6, a CLVD of 40%, an ISO of 33%, a DC of 27%, and a centroid depth of 9.7 km. These suggest that the unusual earthquake was dominated by a vertical CLVD component, implying the dominance of uplift motion of magmatic fluid flow inside the volcano.
NASA Astrophysics Data System (ADS)
Haellstig, Emil J.; Martin, Torleif; Stigwall, Johan; Sjoqvist, Lars; Lindgren, Mikael
2004-02-01
A commercial linear one-dimensional, 1x4096 pixels, zero-twist nematic liquid crystal spatial light modulator (SLM), giving more than 2π phase modulation at λ = 850 nm, was evaluated for beam steering applications. The large ratio (7:1) between the liquid crystal layer thickness and pixel width gives rise to voltage leakage and fringing fields between pixels. Due to the fringing fields the ideal calculated phase patterns cannot be perfectly realized by the device. Losses in high frequency components in the phase patterns were found to limit the maximum deflection angle. The inhomogeneous optical anisotropy of the SLM was determined by modelling of the liquid crystal director distribution within the electrode-pixel structure. The effects of the fringing fields on the amplitude and phase modulation were studied by full vector finite-difference time-domain simulations. It was found that the fringing fields also resulted in coupling into an unwanted polarization mode. Measurements of how this mode coupling affects the beam steering quality were carried out and the results compared with calculated results. A method to compensate for the fringing field effects is discussed and it is shown how the usable steering range of the SLM can be extended to +/- 2 degrees.
Tkalcic, Hrvoje; Dreger, Douglas S.; Foulger, Gillian R.; Julian, Bruce R.
2009-01-01
A volcanic earthquake with Mw 5.6 occurred beneath the Bárdarbunga caldera in Iceland on 29 September 1996. This earthquake is one of a decade-long sequence of events at Bárdarbunga with non-double-couple mechanisms in the Global Centroid Moment Tensor catalog. Fortunately, it was recorded well by the regional-scale Iceland Hotspot Project seismic experiment. We investigated the event with a complete moment tensor inversion method using regional long-period seismic waveforms and a composite structural model. The moment tensor inversion using data from stations of the Iceland Hotspot Project yields a non-double-couple solution with a 67% vertically oriented compensated linear vector dipole component, a 32% double-couple component, and a statistically insignificant (2%) volumetric (isotropic) contraction. This indicates the absence of a net volumetric component, which is puzzling in the case of a large volcanic earthquake that apparently is not explained by shear slip on a planar fault. A possible volcanic mechanism that can produce an earthquake without a volumetric component involves two offset sources with similar but opposite volume changes. We show that although such a model cannot be ruled out, the circumstances under which it could happen are rare.
TREMOR: A wireless MEMS accelerograph for dense arrays
Evans, J.R.; Hamstra, R.H.; Kundig, C.; Camina, P.; Rogers, J.A.
2005-01-01
The ability of a strong-motion network to resolve wavefields can be described on three axes: frequency, amplitude, and space. While the need for spatial resolution is apparent, for practical reasons that axis is often neglected. TREMOR is a MEMS-based accelerograph using wireless Internet to minimize lifecycle cost. TREMOR instruments can economically augment traditional ones, residing between them to improve spatial resolution. The TREMOR instrument described here has a dynamic range of 96 dB between ±2 g, or 102 dB between ±4 g. It is linear to ~1% of full scale (FS), with a response function effectively shaped electronically. We developed an economical, very low noise, accurate (~1% FS) temperature compensation method. Displacement is easily recovered to 10-cm accuracy at full bandwidth, and better with care. We deployed prototype instruments in Oakland, California, beginning in 1998, with 13 now at mean spacing of ~3 km - one of the most densely instrumented urban centers in the United States. This array is among the quickest in returning (PGA, PGV, Sa) vectors to ShakeMap, ~75 to 100 s. Some 13 events have been recorded. A ShakeMap and an example of spatial variability are shown. Extensive tests of the prototypes for a commercial instrument are described here and in a companion paper. © 2005, Earthquake Engineering Research Institute.
Accelerating molecular property calculations with nonorthonormal Krylov space methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Furche, Filipp; Krull, Brandon T.; Nguyen, Brian D.
Here, we formulate Krylov space methods for large eigenvalue problems and linear equation systems that take advantage of decreasing residual norms to reduce the cost of matrix-vector multiplication. The residuals are used as subspace basis without prior orthonormalization, which leads to generalized eigenvalue problems or linear equation systems on the Krylov space. These nonorthonormal Krylov space (nKs) algorithms are favorable for large matrices with irregular sparsity patterns whose elements are computed on the fly, because fewer operations are necessary as the residual norm decreases as compared to the conventional method, while errors in the desired eigenpairs and solution vectors remain small. We consider real symmetric and symplectic eigenvalue problems as well as linear equation systems and Sylvester equations as they appear in configuration interaction and response theory. The nKs method can be implemented in existing electronic structure codes with minor modifications and yields speed-ups of 1.2-1.8 in typical time-dependent Hartree-Fock and density functional applications without accuracy loss. The algorithm can compute entire linear subspaces simultaneously which benefits electronic spectra and force constant calculations requiring many eigenpairs or solution vectors. The nKs approach is related to difference density methods in electronic ground state calculations, and particularly efficient for integral direct computations of exchange-type contractions. By combination with resolution-of-the-identity methods for Coulomb contractions, three- to fivefold speed-ups of hybrid time-dependent density functional excited state and response calculations are achieved.
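As a rough illustration of the idea of using unorthonormalized residuals as a subspace basis, the toy solver below applies a Galerkin projection to a small symmetric positive definite linear system. It is only a sketch under those assumptions, not the authors' production algorithm.

```python
import numpy as np

def residual_subspace_solve(A, b, tol=1e-8, max_dim=30):
    """Toy projection solver using unorthonormalized residuals as a basis.

    Illustrative only: the subspace V collects raw residual vectors, and the
    projected (generally non-orthonormal) system (V^T A V) y = V^T b is
    solved by least squares each iteration.
    """
    n = len(b)
    x = np.zeros(n)
    V = []
    for _ in range(max_dim):
        r = b - A @ x
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        V.append(r)                      # no orthonormalization step
        Vm = np.column_stack(V)
        y, *_ = np.linalg.lstsq(Vm.T @ A @ Vm, Vm.T @ b, rcond=None)
        x = Vm @ y
    return x

# Small SPD test problem.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)
b = rng.standard_normal(50)
x = residual_subspace_solve(A, b)
print(np.linalg.norm(A @ x - b))
```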
NASA Astrophysics Data System (ADS)
Jiang, Weiping; Ma, Jun; Li, Zhao; Zhou, Xiaohui; Zhou, Boye
2018-05-01
The analysis of the correlations between the noise in different components of GPS stations has positive significance for obtaining more accurate uncertainties of station velocities. Previous research into noise in GPS position time series focused mainly on single-component evaluation, which affects the acquisition of precise station positions, the velocity field, and its uncertainty. In this study, before and after removing the common-mode error (CME), we performed one-dimensional linear regression analysis of the noise amplitude vectors in different components of 126 GPS stations in Southern California, using a combination of white noise, flicker noise, and random walk noise. The results show that, on the one hand, there are above-moderate degrees of correlation between the white noise amplitude vectors in all components of the stations before and after removal of the CME, while the correlations between flicker noise amplitude vectors in horizontal and vertical components are enhanced from uncorrelated to moderately correlated by removing the CME. On the other hand, the significance tests show that all of the obtained linear regression equations, which represent a unique function of the noise amplitude in any two components, are of practical value after removing the CME. According to the noise amplitude estimates in two components and the linear regression equations, more accurate noise amplitudes can be acquired in the two components.
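A minimal sketch of the kind of one-dimensional regression analysis described, using hypothetical noise amplitude values for two components; the numbers are invented for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical white-noise amplitude estimates (mm) for the East and Up
# components of a set of GPS stations, after CME removal.
east_amp = np.array([1.1, 0.9, 1.4, 1.2, 1.0, 1.6, 1.3, 0.8])
up_amp   = np.array([3.4, 2.9, 4.1, 3.7, 3.1, 4.6, 3.9, 2.6])

res = stats.linregress(east_amp, up_amp)
print(f"slope={res.slope:.2f}, intercept={res.intercept:.2f}, r={res.rvalue:.2f}")
# With the fitted equation, an Up-component amplitude can be predicted from
# an East-component estimate: up ≈ slope * east + intercept.
```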
PCA-LBG-based algorithms for VQ codebook generation
NASA Astrophysics Data System (ADS)
Tsai, Jinn-Tsong; Yang, Po-Yuan
2015-04-01
Vector quantisation (VQ) codebooks are generated by combining principal component analysis (PCA) algorithms with Linde-Buzo-Gray (LBG) algorithms. All training vectors are grouped according to the projected values of the principal components. The PCA-LBG-based algorithms include (1) PCA-LBG-Median, which selects the median vector of each group, (2) PCA-LBG-Centroid, which adopts the centroid vector of each group, and (3) PCA-LBG-Random, which randomly selects a vector from each group. The LBG algorithm then refines the codebook starting from the initial codevectors supplied by the PCA-based grouping. The PCA performs an orthogonal transformation to convert a set of potentially correlated variables into a set of variables that are not linearly correlated. Because the orthogonal transformation efficiently distinguishes test image vectors, the proposed PCA-LBG-based algorithms are expected to outperform conventional algorithms in designing VQ codebooks. The experimental results confirm that the proposed PCA-LBG-based algorithms indeed obtain better results compared to existing methods reported in the literature.
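A compact sketch of the PCA-LBG-Centroid variant described above: training vectors are grouped by their first-principal-component projections, the group centroids seed the codebook, and standard LBG (Lloyd) iterations refine it. The equal-population binning, group count, and iteration count are assumptions made for brevity.

```python
import numpy as np

def pca_lbg_centroid(train, codebook_size=8, lbg_iters=20):
    """Sketch of PCA-LBG-Centroid codebook generation (illustrative only)."""
    X = train - train.mean(axis=0)
    # First principal component via SVD.
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    proj = X @ vt[0]
    # Group vectors into equal-population bins along the projection.
    order = np.argsort(proj)
    groups = np.array_split(order, codebook_size)
    codebook = np.array([train[g].mean(axis=0) for g in groups])
    # LBG refinement: assign to nearest codevector, then recompute centroids.
    for _ in range(lbg_iters):
        d = np.linalg.norm(train[:, None, :] - codebook[None, :, :], axis=2)
        label = d.argmin(axis=1)
        for k in range(codebook_size):
            if np.any(label == k):
                codebook[k] = train[label == k].mean(axis=0)
    return codebook

rng = np.random.default_rng(1)
train = rng.standard_normal((500, 16))  # e.g. flattened 4x4 image blocks
cb = pca_lbg_centroid(train)
print(cb.shape)  # (8, 16)
```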
Load cell having strain gauges of arbitrary location
Spletzer, Barry [Albuquerque, NM]
2007-03-13
A load cell utilizes a plurality of strain gauges mounted upon the load cell body such that there are six independent load-strain relations. Load is determined by applying the inverse of a load-strain sensitivity matrix to a measured strain vector. The sensitivity matrix is determined by performing a multivariate regression technique on a set of known loads correlated to the resulting strains. Temperature compensation is achieved by configuring the strain gauges as co-located orthogonal pairs.
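The calibrate-then-invert idea in this abstract can be sketched as follows; the synthetic sensitivity matrix, load values, and noise level are purely illustrative.

```python
import numpy as np

# Calibration: apply a set of known 6-component loads (Fx, Fy, Fz, Mx, My, Mz)
# and record the corresponding strain vectors from the six gauge channels.
rng = np.random.default_rng(2)
true_S = rng.standard_normal((6, 6))          # unknown: strain = S @ load
loads = rng.uniform(-100, 100, size=(40, 6))  # known calibration loads
strains = loads @ true_S.T + 0.01 * rng.standard_normal((40, 6))  # measured

# Multivariate least-squares regression recovers the sensitivity matrix.
B, *_ = np.linalg.lstsq(loads, strains, rcond=None)  # strains ≈ loads @ B
S_est = B.T

# In service, the load is recovered by applying the inverse sensitivity matrix
# to a measured strain vector.
measured_strain = true_S @ np.array([10.0, -5.0, 20.0, 1.0, 0.0, -2.0])
recovered_load = np.linalg.solve(S_est, measured_strain)
print(np.round(recovered_load, 2))
```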
Disequilibrium After Traumatic Brain Injury: Vestibular Mechanisms
2011-09-01
Excerpt: the report addresses otolith signal processing, including the integration of head acceleration and the disambiguation of linear acceleration signals related to tilt, as well as the vestibular reflexes that compensate for linear movements of the head and body during standing and walking.
Robust gaze-steering of an active vision system against errors in the estimated parameters
NASA Astrophysics Data System (ADS)
Han, Youngmo
2015-01-01
Gaze-steering is often used to broaden the viewing range of an active vision system. Gaze-steering procedures are usually based on estimated parameters such as image position, image velocity, depth and camera calibration parameters. However, there may be uncertainties in these estimated parameters because of measurement noise and estimation errors. In this case, robust gaze-steering cannot be guaranteed. To compensate for such problems, this paper proposes a gaze-steering method based on a linear matrix inequality (LMI). In this method, we first propose a proportional derivative (PD) control scheme on the unit sphere that does not use depth parameters. This proposed PD control scheme can avoid uncertainties in the estimated depth and camera calibration parameters, as well as inconveniences in their estimation process, including the use of auxiliary feature points and highly non-linear computation. Furthermore, the control gain of the proposed PD control scheme on the unit sphere is designed using LMI such that the designed control is robust in the presence of uncertainties in the other estimated parameters, such as image position and velocity. Simulation results demonstrate that the proposed method provides a better compensation for uncertainties in the estimated parameters than the contemporary linear method and steers the gaze of the camera more steadily over time than the contemporary non-linear method.
Effect of stride length on overarm throwing delivery: A linear momentum response.
Ramsey, Dan K; Crotin, Ryan L; White, Scott
2014-12-01
Changing stride length during the overhand throwing delivery is thought to alter total body and throwing arm linear momentums, thereby altering the proportion of throwing arm momentum relative to the total body. Using a randomized cross-over design, nineteen pitchers (15 collegiate and 4 high school) were assigned to pitch two simulated 80-pitch games at ±25% of their desired stride length. An 8-camera motion capture system (240 Hz) integrated with two force plates (960 Hz) and a radar gun tracked each throw. Segmental linear momentums in each plane of motion were summed, yielding throwing arm and total body momentums, from which compensation ratios (the relative contribution between the two) were derived. Pairwise comparisons at hallmark events and phases identified significantly different linear momentum profiles, in particular for anteriorly directed total body and throwing arm momentum and for momentum compensation ratios (P⩽.05), as a result of manipulating stride length. Pitchers with shorter strides generated lower forward (anterior) momentum before stride foot contact, whereas greater upward and lateral momentum (toward third base) were evident during the acceleration phase. The evidence suggests insufficient total body momentum in the intended throwing direction may potentially influence performance (velocity and accuracy) and perhaps precipitate throwing arm injuries. Copyright © 2014 Elsevier B.V. All rights reserved.
Plasmonic micropolarizers for full Stokes vector imaging
NASA Astrophysics Data System (ADS)
Peltzer, J. J.; Bachman, K. A.; Rose, J. W.; Flammer, P. D.; Furtak, T. E.; Collins, R. T.; Hollingsworth, R. E.
2012-06-01
Polarimetric imaging using micropolarizers integrated on focal plane arrays has previously been limited to the linear components of the Stokes vector because of the lack of an effective structure with selectivity to circular polarization. We discuss a plasmonic micropolarizing filter that can be tuned for linear or circular polarization as well as wavelength selectivity from blue to infrared (IR) through simple changes in its horizontal geometry. The filter consists of a patterned metal film with an aperture in a central cavity that is surrounded by gratings that couple to incoming light. The aperture and gratings are covered with a transparent dielectric layer to form a surface plasmon slab waveguide. A metal cap covers the aperture and forms a metal-insulator-metal (MIM) waveguide. Structures with linear apertures and gratings provide sensitivity to linear polarization, while structures with circular apertures and spiral gratings give circular polarization selectivity. Plasmonic TM modes are transmitted down the MIM waveguide while the TE modes are cut off due to the sub-wavelength dielectric thickness, providing the potential for extremely high extinction ratios. Experimental results are presented for micropolarizers fabricated on glass or directly into the Ohmic contact metallization of silicon photodiodes. Extinction ratios for linear polarization larger than 3000 have been measured.
NASA Astrophysics Data System (ADS)
Mercier, Sylvain; Gratton, Serge; Tardieu, Nicolas; Vasseur, Xavier
2017-12-01
Many applications in structural mechanics require the numerical solution of sequences of linear systems typically issued from a finite element discretization of the governing equations on fine meshes. The method of Lagrange multipliers is often used to take into account mechanical constraints. The resulting matrices then exhibit a saddle point structure and the iterative solution of such preconditioned linear systems is considered as challenging. A popular strategy is then to combine preconditioning and deflation to yield an efficient method. We propose an alternative that is applicable to the general case and not only to matrices with a saddle point structure. In this approach, we consider to update an existing algebraic or application-based preconditioner, using specific available information exploiting the knowledge of an approximate invariant subspace or of matrix-vector products. The resulting preconditioner has the form of a limited memory quasi-Newton matrix and requires a small number of linearly independent vectors. Numerical experiments performed on three large-scale applications in elasticity highlight the relevance of the new approach. We show that the proposed method outperforms the deflation method when considering sequences of linear systems with varying matrices.
Process fault detection and nonlinear time series analysis for anomaly detection in safeguards
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burr, T.L.; Mullen, M.F.; Wangen, L.E.
In this paper we discuss two advanced techniques, process fault detection and nonlinear time series analysis, and apply them to the analysis of vector-valued and single-valued time-series data. We investigate model-based process fault detection methods for analyzing simulated, multivariate, time-series data from a three-tank system. The model predictions are compared with simulated measurements of the same variables to form residual vectors that are tested for the presence of faults (possible diversions in safeguards terminology). We evaluate two methods, testing all individual residuals with a univariate z-score and testing all variables simultaneously with the Mahalanobis distance, for their ability to detect loss of material from two different leak scenarios from the three-tank system: a leak without and with replacement of the lost volume. Nonlinear time-series analysis tools were compared with the linear methods popularized by Box and Jenkins. We compare prediction results using three nonlinear and two linear modeling methods on each of six simulated time series: two nonlinear and four linear. The nonlinear methods performed better at predicting the nonlinear time series and did as well as the linear methods at predicting the linear values.
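The two residual tests named in the abstract, a univariate z-score per variable and a multivariate Mahalanobis-distance test, can be sketched as below; the covariance, thresholds, and simulated leak are illustrative assumptions.

```python
import numpy as np
from scipy.stats import chi2

def test_residuals(residuals, cov, alpha=0.01):
    """Flag possible faults in a vector-valued residual sequence.

    Two simple tests: a per-variable z-score, and a chi-squared threshold on
    the squared Mahalanobis distance of each residual vector. Thresholds are
    illustrative.
    """
    std = np.sqrt(np.diag(cov))
    z = residuals / std                                   # per-variable z-scores
    cov_inv = np.linalg.inv(cov)
    d2 = np.einsum('ij,jk,ik->i', residuals, cov_inv, residuals)
    z_alarm = np.any(np.abs(z) > 3.0, axis=1)
    m_alarm = d2 > chi2.ppf(1 - alpha, df=residuals.shape[1])
    return z_alarm, m_alarm

rng = np.random.default_rng(3)
cov = np.diag([0.5, 0.8, 0.3])
resid = rng.multivariate_normal(np.zeros(3), cov, size=100)
resid[60:] += np.array([0.0, 0.0, 1.5])   # simulated slow leak in tank 3
z_alarm, m_alarm = test_residuals(resid, cov)
print(z_alarm.sum(), m_alarm.sum())
```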
NASA Technical Reports Server (NTRS)
Greene, William H.
1989-01-01
A study has been performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semianalytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models.
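A minimal sketch of the first technique, the overall finite difference method, where the entire (here toy) transient analysis is rerun for perturbed designs; the response function, design variables, and step size are placeholders.

```python
import numpy as np

def overall_fd_sensitivity(response, design, dv_index, rel_step=1e-3):
    """Overall (central) finite-difference sensitivity of a transient response.

    `response(design)` is assumed to rerun the transient analysis and return
    a response history (e.g. a displacement or stress time series); the whole
    analysis is repeated for perturbed designs. Names and step size are
    illustrative.
    """
    d = np.asarray(design, dtype=float)
    h = rel_step * max(abs(d[dv_index]), 1.0)
    dp, dm = d.copy(), d.copy()
    dp[dv_index] += h
    dm[dv_index] -= h
    return (response(dp) - response(dm)) / (2.0 * h)

# Toy "analysis": displacement history of a 1-DOF oscillator vs stiffness k.
def response(design):
    k, m = design
    t = np.linspace(0.0, 5.0, 500)
    return np.cos(np.sqrt(k / m) * t)   # stand-in for a transient solution

sens = overall_fd_sensitivity(response, design=[100.0, 2.0], dv_index=0)
print(sens.shape, sens[:3])
```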
SEMIPARAMETRIC QUANTILE REGRESSION WITH HIGH-DIMENSIONAL COVARIATES
Zhu, Liping; Huang, Mian; Li, Runze
2012-01-01
This paper is concerned with quantile regression for a semiparametric regression model, in which both the conditional mean and conditional variance function of the response given the covariates admit a single-index structure. This semiparametric regression model enables us to reduce the dimension of the covariates and simultaneously retains the flexibility of nonparametric regression. Under mild conditions, we show that the simple linear quantile regression offers a consistent estimate of the index parameter vector. This is a surprising and interesting result because the single-index model is possibly misspecified under the linear quantile regression. With a root-n consistent estimate of the index vector, one may employ a local polynomial regression technique to estimate the conditional quantile function. This procedure is computationally efficient, which is very appealing in high-dimensional data analysis. We show that the resulting estimator of the quantile function performs asymptotically as efficiently as if the true value of the index vector were known. The methodologies are demonstrated through comprehensive simulation studies and an application to a real dataset. PMID:24501536
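A small sketch of the estimation idea using statsmodels' linear quantile regression on synthetic single-index data; the data-generating model and sample size are assumptions, and only the direction of the index vector is compared.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic single-index data: y depends on x only through t = x @ beta.
rng = np.random.default_rng(4)
n, p = 500, 5
beta = np.array([1.0, 0.5, 0.0, -0.5, 0.25])
X = rng.standard_normal((n, p))
t = X @ beta
y = np.sin(t) + t + (0.5 + 0.2 * np.abs(t)) * rng.standard_normal(n)

# Simple linear median (tau = 0.5) quantile regression; per the abstract, the
# fitted coefficient vector estimates the index direction up to scale even
# though the linear model is misspecified.
res = sm.QuantReg(y, sm.add_constant(X)).fit(q=0.5)
b_hat = res.params[1:]
print(np.round(b_hat / np.linalg.norm(b_hat), 2))
print(np.round(beta / np.linalg.norm(beta), 2))
```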
NASA Technical Reports Server (NTRS)
Benson, A. J.; Barnes, G. R.
1973-01-01
Human subjects were exposed to a linear acceleration vector that rotated in the transverse plane of the skull without angular counterrotation. Lateral eye movements showed a sinusoidal change in slow phase velocity and an asymmetry or bias in the same direction as vector rotation. A model is developed that attributes the oculomotor response to otolithic mechanisms. It is suggested that the bias component is the manifestation of torsion of the statoconial plaque relative to the base of the utricular macula and that the sinusoidal component represents the translational oscillation of the statoconia. The model subsumes a hypothetical neural mechanism which allows x- and y-axis accelerations to be resolved. Derivation of equations of motion for the statoconial plaque in torsion and translation, which take into account forces acting in shear and normal to the macula, yield estimates of bias and sinusoidal components that are in qualitative agreement with the diverse experimental findings.
NASA Technical Reports Server (NTRS)
Gettman, Chang-Ching LO
1993-01-01
This thesis develops and demonstrates an approach to nonlinear control system design using linearization by state feedback. The design provides improved transient response behavior allowing faster maneuvering of payloads by the SRMS. Modeling uncertainty is accounted for by using a second feedback loop designed around the feedback linearized dynamics. A classical feedback loop is developed to provide the easy implementation required for the relatively small on board computers. Feedback linearization also allows the use of higher bandwidth model based compensation in the outer loop, since it helps maintain stability in the presence of the nonlinearities typically neglected in model based designs.
Application of Design Methodologies for Feedback Compensation Associated with Linear Systems
NASA Technical Reports Server (NTRS)
Smith, Monty J.
1996-01-01
The work that follows is concerned with the application of design methodologies for feedback compensation associated with linear systems. In general, the intent is to provide a well behaved closed loop system in terms of stability and robustness (internal signals remain bounded with a certain amount of uncertainty) and simultaneously achieve an acceptable level of performance. The approach here has been to convert the closed loop system and control synthesis problem into the interpolation setting. The interpolation formulation then serves as our mathematical representation of the design process. Lifting techniques have been used to solve the corresponding interpolation and control synthesis problems. Several applications using this multiobjective design methodology have been included to show the effectiveness of these techniques. In particular, the mixed H2/H-infinity performance criterion and its associated algorithm have been used on several examples, including an F-18 HARV (High Angle of Attack Research Vehicle), for sensitivity performance.
Todorović, Dejan
2008-01-01
Every image of a scene produced in accord with the rules of linear perspective has an associated projection centre. Only if observed from that position does the image provide the stimulus which is equivalent to the one provided by the original scene. According to the perspective-transformation hypothesis, observing the image from other vantage points should result in specific transformations of the structure of the conveyed scene, whereas according to the vantage-point compensation hypothesis it should have little effect. Geometrical analyses illustrating the transformation theory are presented. An experiment is reported to confront the two theories. The results provide little support for the compensation theory and are generally in accord with the transformation theory, but also show systematic deviations from it, possibly due to cue conflict and asymmetry of visual angles.
NASA Astrophysics Data System (ADS)
Tian, Wugang; Hu, Jiafei; Pan, Mengchun; Chen, Dixiang; Zhao, Jianqiang
2013-03-01
1/f noise is one of the main noise sources of magnetoresistive (MR) sensors, which can cause intrinsic detection limit at low frequency. To suppress this noise, the solution of flux concentration and vertical motion modulation (VMM) has been proposed. Magnetic hysteresis in MR sensors is another problem, which degrades their response linearity and detection ability. To reduce this impact, the method of pulse magnetization and magnetic compensation field with integrated planar coils has been introduced. A flux concentration and VMM based magnetoresistive prototype sensor with integrated planar coils was fabricated using microelectromechanical-system technology. The response linearity of the prototype sensors is improved from 0.8% to 0.12%. The noise level is reduced near to the thermal noise level, and the low-frequency detection ability of the prototype sensor is enhanced with a factor of more than 80.
Li, YuHui; Jin, FeiTeng
2017-01-01
The inversion design approach is a very useful tool for the complex multiple-input-multiple-output nonlinear systems to implement the decoupling control goal, such as the airplane model and spacecraft model. In this work, the flight control law is proposed using the neural-based inversion design method associated with the nonlinear compensation for a general longitudinal model of the airplane. First, the nonlinear mathematic model is converted to the equivalent linear model based on the feedback linearization theory. Then, the flight control law integrated with this inversion model is developed to stabilize the nonlinear system and relieve the coupling effect. Afterwards, the inversion control combined with the neural network and nonlinear portion is presented to improve the transient performance and attenuate the uncertain effects on both external disturbances and model errors. Finally, the simulation results demonstrate the effectiveness of this controller. PMID:29410680
Single link flexible beam testbed project. Thesis
NASA Technical Reports Server (NTRS)
Hughes, Declan
1992-01-01
This thesis describes the single link flexible beam testbed at the CLaMS laboratory in terms of its hardware, software, and linear model, and presents two controllers: each includes a hub-angle proportional-derivative (PD) feedback compensator, and one is augmented by a second static-gain full-state feedback loop, based upon a synthesized strictly positive real (SPR) output, that increases specific flexible mode pole damping ratios with respect to the PD-only case and hence reduces unwanted residual oscillation effects. Restricting the full state feedback gains so as to produce an SPR open loop transfer function ensures that the associated compensator has an infinite gain margin and a phase margin of at least (-90, 90) degrees. Both experimental and simulation data are evaluated in order to compare the performance of different observers when applied to the real testbed and to the linear model when uncompensated flexible modes are included.
Chen, Wen-Yuan; Wang, Mei; Fu, Zhou-Xing
2014-06-16
Most railway accidents happen at railway crossings. Therefore, how to detect humans or objects present in the risk area of a railway crossing and thus prevent accidents are important tasks. In this paper, three strategies are used to detect the risk area of a railway crossing: (1) we use a terrain drop compensation (TDC) technique to solve the problem of the concavity of railway crossings; (2) we use a linear regression technique to predict the position and length of an object from image processing; (3) we have developed a novel strategy called calculating local maximum Y-coordinate object points (CLMYOP) to obtain the ground points of the object. In addition, image preprocessing is also applied to filter out the noise and successfully improve the object detection. From the experimental results, it is demonstrated that our scheme is an effective and corrective method for the detection of railway crossing risk areas.
Microcomputer-based system for registration of oxygen tension in peripheral muscle.
Odman, S; Bratt, H; Erlandsson, I; Sjögren, L
1986-01-01
For registration of oxygen tension fields in peripheral muscle, a microcomputer-based system was designed around the M6800 microprocessor. The system was designed to record the signals from a multiwire oxygen electrode (MDO), a multiwire electrode for measuring oxygen on the surface of an organ. The system contained a patient safety isolation unit built on optocouplers, and the upper frequency limit was 0.64 Hz. Collected data were corrected for drift and temperature changes during the measurement by using pre- and post-calibrations and a linear compensation technique. The measured drift of the electrodes was shown to be linear, and thus the drift could be compensated for. The system was tested in an experiment on a pig. To study the distribution of oxygen statistically, the mean, standard deviation, skewness, and kurtosis were calculated. To detect changes or differences between histograms, a Kolmogorov-Smirnov test was used.
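The linear drift compensation described can be sketched as a time interpolation between pre- and post-calibration readings; the calibration values and signal below are invented for illustration.

```python
import numpy as np

def correct_linear_drift(signal, times, pre_cal, post_cal, cal_true):
    """Correct a slowly drifting electrode signal using pre/post calibrations.

    `pre_cal` and `post_cal` are the readings obtained for a known calibration
    value `cal_true` before and after the measurement; the drift between them
    is assumed linear in time and removed by interpolation. All names and
    numbers are illustrative.
    """
    t0, t1 = times[0], times[-1]
    frac = (times - t0) / (t1 - t0)
    offset = pre_cal + frac * (post_cal - pre_cal) - cal_true
    return signal - offset

times = np.linspace(0.0, 3600.0, 7)                              # seconds
signal = np.array([40.0, 41.0, 39.5, 42.0, 41.5, 43.0, 44.0])    # drifting readings
corrected = correct_linear_drift(signal, times, pre_cal=40.0, post_cal=43.0,
                                 cal_true=40.0)
print(np.round(corrected, 2))
```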
Ting, Lai-Lei; Chuang, Ho-Chiao; Liao, Ai-Ho; Kuo, Chia-Chun; Yu, Hsiao-Wei; Zhou, Yi-Liang; Tien, Der-Chi; Jeng, Shiu-Chen; Chiou, Jeng-Fong
2018-05-01
This study proposed a respiratory motion compensation system (RMCS) combined with an ultrasound image tracking algorithm (UITA) to compensate for respiration-induced tumor motion during radiotherapy, and to address the problem of inaccurate radiation dose delivery caused by respiratory movement. This study used an ultrasound imaging system to monitor respiratory movements, combined with the proposed UITA and RMCS for tracking and compensation of the respiratory motion. Respiratory motion compensation was performed using prerecorded human respiratory motion signals and also sinusoidal signals. A linear accelerator was used to deliver radiation doses to GAFchromic EBT3 dosimetry film, and the conformity index (CI), root-mean-square error, compensation rate (CR), and planning target volume (PTV) were used to evaluate the tracking and compensation performance of the proposed system. Human respiratory pattern signals were captured using the UITA and compensated by the RMCS, which yielded CR values of 34-78%. In addition, the maximum coronal area of the PTV ranged from 85.53 mm² to 351.11 mm² (uncompensated), which was reduced to 17.72-66.17 mm² after compensation, an area reduction ratio of up to 90%. In real-time monitoring of the respiration compensation state, the CI values for the 85% and 90% isodose areas increased to 0.7 and 0.68, respectively. The proposed UITA and RMCS can reduce the movement of the tracked target relative to the LINAC in radiation therapy, thereby reducing the required size of the PTV margin and increasing the effect of the radiation dose received by the treatment target. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Feature-space-based FMRI analysis using the optimal linear transformation.
Sun, Fengrong; Morris, Drew; Lee, Wayne; Taylor, Margot J; Mills, Travis; Babyn, Paul S
2010-09-01
The optimal linear transformation (OLT), an image analysis technique of feature space, was first presented in the field of MRI. This paper proposes a method of extending OLT from MRI to functional MRI (fMRI) to improve the activation-detection performance over conventional approaches of fMRI analysis. In this method, first, ideal hemodynamic response time series for different stimuli were generated by convolving the theoretical hemodynamic response model with the stimulus timing. Second, constructing hypothetical signature vectors for different activity patterns of interest by virtue of the ideal hemodynamic responses, OLT was used to extract features of fMRI data. The resultant feature space had particular geometric clustering properties. It was then classified into different groups, each pertaining to an activity pattern of interest; the applied signature vector for each group was obtained by averaging. Third, using the applied signature vectors, OLT was applied again to generate fMRI composite images with high SNRs for the desired activity patterns. Simulations and a blocked fMRI experiment were employed for the method to be verified and compared with the general linear model (GLM)-based analysis. The simulation studies and the experimental results indicated the superiority of the proposed method over the GLM-based analysis in detecting brain activities.
Simulations of linear and Hamming codes using SageMath
NASA Astrophysics Data System (ADS)
Timur, Tahta D.; Adzkiya, Dieky; Soleha
2018-03-01
Digital data transmission over a noisy channel can distort the message being transmitted. The goal of coding theory is to ensure data integrity, that is, to find out if and where this noise has distorted the message and what the original message was. Data transmission consists of three stages: encoding, transmission, and decoding. Linear and Hamming codes are the codes that we discuss in this work, where the encoding algorithms are the parity check and generator matrix, and the decoding algorithms are nearest neighbor and syndrome decoding. We aim to show that we can simulate these processes using SageMath software, which has a built-in class for coding theory in general and linear codes in particular. First we consider the message as a binary vector of size k. This message will then be encoded to a vector of size n using the given algorithms. A noisy channel with a particular error probability will then be created, over which the transmission takes place. The last task is decoding, which corrects and reverts the received message back to the original message whenever possible, that is, if the number of errors that occurred is smaller than or equal to the correcting radius of the code. In this paper we use two types of data for the simulations, namely vector and text data.
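The paper performs these simulations with SageMath's built-in coding classes; the following is an equivalent plain NumPy sketch of the same encode/transmit/decode pipeline for the Hamming(7,4) code, with syndrome decoding of a single bit error.

```python
import numpy as np

# Hamming(7,4): systematic generator G and parity-check H over GF(2).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(msg):
    # Generator-matrix encoding: 4 message bits -> 7-bit codeword (mod 2).
    return (msg @ G) % 2

def decode(received):
    # Syndrome decoding: a nonzero syndrome equals the H column of the
    # flipped bit, so locate that column and correct the bit.
    s = (H @ received) % 2
    if s.any():
        err_pos = int(np.argmax(np.all(H.T == s, axis=1)))
        received = received.copy()
        received[err_pos] ^= 1
    return received[:4]     # systematic code: first 4 bits are the message

msg = np.array([1, 0, 1, 1])
codeword = encode(msg)
noisy = codeword.copy()
noisy[2] ^= 1               # flip one bit during "transmission"
print(decode(noisy), msg)   # the single error is corrected
```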
Theoretical proposal for determining angular momentum compensation in ferrimagnets
NASA Astrophysics Data System (ADS)
Zhu, Zhifeng; Fong, Xuanyao; Liang, Gengchiau
2018-05-01
This work demonstrates that the magnetization and angular momentum compensation temperatures (TMC and TAMC) in ferrimagnets can be unambiguously determined by performing two sets of temperature-dependent current switching, with the symmetry reversals at TMC and TAMC, respectively. A theoretical model based on the modified Landau-Lifshitz-Bloch equation is developed to systematically study the spin torque effect under different temperatures, and numerical simulations are performed to corroborate our proposal. Furthermore, we demonstrate that the recently reported linear relation between TAMC and TMC can be explained using the Curie-Weiss theory.
The mathematical origins of the kinetic compensation effect: 2. The effect of systematic errors.
Barrie, Patrick J
2012-01-07
The kinetic compensation effect states that there is a linear relationship between Arrhenius parameters ln A and E for a family of related processes. It is a widely observed phenomenon in many areas of science, notably heterogeneous catalysis. This paper explores mathematical, rather than physicochemical, explanations for the compensation effect in certain situations. Three different topics are covered theoretically and illustrated by examples. Firstly, the effect of systematic errors in experimental kinetic data is explored, and it is shown that these create apparent compensation effects. Secondly, analysis of kinetic data when the Arrhenius parameters depend on another parameter is examined. In the case of temperature programmed desorption (TPD) experiments when the activation energy depends on surface coverage, it is shown that a common analysis method induces a systematic error, causing an apparent compensation effect. Thirdly, the effect of analysing the temperature dependence of an overall rate of reaction, rather than a rate constant, is investigated. It is shown that this can create an apparent compensation effect, but only under some conditions. This result is illustrated by a case study for a unimolecular reaction on a catalyst surface. Overall, the work highlights the fact that, whenever a kinetic compensation effect is observed experimentally, the possibility of it having a mathematical origin should be carefully considered before any physicochemical conclusions are drawn.
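The first topic, an apparent compensation effect created by systematic errors, can be illustrated with a short simulation: every run shares the same true Arrhenius parameters, but each has a small systematic temperature offset, and the fitted ln A and E then correlate almost perfectly. The offsets and temperature range are arbitrary choices.

```python
import numpy as np

R = 8.314          # J mol^-1 K^-1
E_true = 80_000.0  # J mol^-1
lnA_true = 25.0

rng = np.random.default_rng(5)
T = np.linspace(500.0, 600.0, 8)        # nominal temperatures, K

lnA_fit, E_fit = [], []
for _ in range(30):
    dT = rng.uniform(-5.0, 5.0)                      # systematic temperature error per run
    lnk = lnA_true - E_true / (R * (T + dT))         # rates at the actual temperatures
    slope, intercept = np.polyfit(1.0 / T, lnk, 1)   # analysed against nominal T
    E_fit.append(-slope * R)
    lnA_fit.append(intercept)

# The fitted parameters are strongly correlated: an apparent compensation effect.
print(np.round(np.corrcoef(lnA_fit, E_fit)[0, 1], 3))
```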
Hashimoto, Ken; Zúniga, Concepción; Romero, Eduardo; Morales, Zoraida; Maguire, James H.
2015-01-01
Background Central American countries face a major challenge in the control of Triatoma dimidiata, a widespread vector of Chagas disease that cannot be eliminated. The key to maintaining the risk of transmission of Trypanosoma cruzi at lowest levels is to sustain surveillance throughout endemic areas. Guatemala, El Salvador, and Honduras integrated community-based vector surveillance into local health systems. Community participation was effective in detection of the vector, but some health services had difficulty sustaining their response to reports of vectors from the population. To date, no research has investigated how best to maintain and reinforce health service responsiveness, especially in resource-limited settings. Methodology/Principal Findings We reviewed surveillance and response records of 12 health centers in Guatemala, El Salvador, and Honduras from 2008 to 2012 and analyzed the data in relation to the volume of reports of vector infestation, local geography, demography, human resources, managerial approach, and results of interviews with health workers. Health service responsiveness was defined as the percentage of households that reported vector infestation for which the local health service provided indoor residual spraying of insecticide or educational advice. Eight potential determinants of responsiveness were evaluated by linear and mixed-effects multi-linear regression. Health service responsiveness (overall 77.4%) was significantly associated with quarterly monitoring by departmental health offices. Other potential determinants of responsiveness were not found to be significant, partly because of short- and long-term strategies, such as temporary adjustments in manpower and redistribution of tasks among local participants in the effort. Conclusions/Significance Consistent monitoring within the local health system contributes to sustainability of health service responsiveness in community-based vector surveillance of Chagas disease. Even with limited resources, countries can improve health service responsiveness with thoughtful strategies and management practices in the local health systems. PMID:26252767
System balance analysis for vector computers
NASA Technical Reports Server (NTRS)
Knight, J. C.; Poole, W. G., Jr.; Voight, R. G.
1975-01-01
The availability of vector processors capable of sustaining computing rates of 10^8 arithmetic results per second raised the question of whether peripheral storage devices representing current technology can keep such processors supplied with data. By examining the solution of a large banded linear system on these computers, it was found that even under ideal conditions, the processors will frequently be waiting for problem data.
Three Interpretations of the Matrix Equation Ax = b
ERIC Educational Resources Information Center
Larson, Christine; Zandieh, Michelle
2013-01-01
Many of the central ideas in an introductory undergraduate linear algebra course are closely tied to a set of interpretations of the matrix equation Ax = b (A is a matrix, x and b are vectors): linear combination interpretations, systems interpretations, and transformation interpretations. We consider graphic and symbolic representations for each,…
String theory origin of constrained multiplets
NASA Astrophysics Data System (ADS)
Kallosh, Renata; Vercnocke, Bert; Wrase, Timm
2016-09-01
We study the non-linearly realized spontaneously broken supersymmetry of the (anti-)D3-brane action in type IIB string theory. The worldvolume fields are one vector $A_\mu$, three complex scalars $\phi^i$ and four 4d fermions $\lambda^0$, $\lambda^i$. These transform, in addition to the more familiar $\mathcal{N}=4$ linear supersymmetry, also under 16 spontaneously broken, non-linearly realized supersymmetries. We argue that the worldvolume fields can be packaged into the following constrained 4d non-linear $\mathcal{N}=1$ multiplets: four chiral multiplets $S$, $Y^i$ that satisfy $S^2 = S Y^i = 0$ and contain the worldvolume fermions $\lambda^0$ and $\lambda^i$; and four chiral multiplets $W_\alpha$, $H^i$ that satisfy $S W_\alpha = S \bar{D}_{\dot{\alpha}} \bar{H}^{\bar{\imath}} = 0$ and contain the vector $A_\mu$ and the scalars $\phi^i$. We also discuss how placing an anti-D3-brane on top of intersecting O7-planes can lead to an orthogonal multiplet $\Phi$ that satisfies $S(\Phi - \bar{\Phi}) = 0$, which is particularly interesting for inflationary cosmology.
Comparison of l₁-Norm SVR and Sparse Coding Algorithms for Linear Regression.
Zhang, Qingtian; Hu, Xiaolin; Zhang, Bo
2015-08-01
Support vector regression (SVR) is a popular function estimation technique based on Vapnik's concept of support vector machine. Among many variants, the l1-norm SVR is known to be good at selecting useful features when the features are redundant. Sparse coding (SC) is a technique widely used in many areas and a number of efficient algorithms are available. Both l1-norm SVR and SC can be used for linear regression. In this brief, the close connection between the l1-norm SVR and SC is revealed and some typical algorithms are compared for linear regression. The results show that the SC algorithms outperform the Newton linear programming algorithm, an efficient l1-norm SVR algorithm, in efficiency. The algorithms are then used to design the radial basis function (RBF) neural networks. Experiments on some benchmark data sets demonstrate the high efficiency of the SC algorithms. In particular, one of the SC algorithms, the orthogonal matching pursuit is two orders of magnitude faster than a well-known RBF network designing algorithm, the orthogonal least squares algorithm.
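A rough sketch of the kind of comparison described, using scikit-learn: an l1-penalized linear fit (a stand-in for the l1-norm SVR objective, which scikit-learn does not provide directly) versus orthogonal matching pursuit as a sparse-coding style solver, on synthetic data with redundant features.

```python
import numpy as np
from sklearn.linear_model import Lasso, OrthogonalMatchingPursuit

rng = np.random.default_rng(6)
n, p, k = 200, 50, 5                     # samples, redundant features, true support
X = rng.standard_normal((n, p))
w_true = np.zeros(p)
w_true[rng.choice(p, k, replace=False)] = rng.uniform(1.0, 3.0, k)
y = X @ w_true + 0.1 * rng.standard_normal(n)

# l1-regularized linear regression (stand-in for the l1-norm SVR objective).
lasso = Lasso(alpha=0.05).fit(X, y)
# A sparse-coding style greedy solver.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k).fit(X, y)

print("lasso support:", np.flatnonzero(np.abs(lasso.coef_) > 1e-3))
print("omp support:  ", np.flatnonzero(omp.coef_))
print("true support: ", np.flatnonzero(w_true))
```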
Re-engineering adenovirus vector systems to enable high-throughput analyses of gene function.
Stanton, Richard J; McSharry, Brian P; Armstrong, Melanie; Tomasec, Peter; Wilkinson, Gavin W G
2008-12-01
With the enhanced capacity of bioinformatics to interrogate extensive banks of sequence data, more efficient technologies are needed to test gene function predictions. Replication-deficient recombinant adenovirus (Ad) vectors are widely used in expression analysis since they provide for extremely efficient expression of transgenes in a wide range of cell types. To facilitate rapid, high-throughput generation of recombinant viruses, we have re-engineered an adenovirus vector (designated AdZ) to allow single-step, directional gene insertion using recombineering technology. Recombineering allows for direct insertion into the Ad vector of PCR products, synthesized sequences, or oligonucleotides encoding shRNAs without requirement for a transfer vector. Vectors were optimized for high-throughput applications by making them "self-excising" through incorporating the I-SceI homing endonuclease into the vector, removing the need to linearize vectors prior to transfection into packaging cells. AdZ vectors allow genes to be expressed in their native form or with strep, V5, or GFP tags. Insertion of tetracycline operators downstream of the human cytomegalovirus major immediate early (HCMV MIE) promoter permits silencing of transgenes in helper cells expressing the tet repressor, thus making the vector compatible with the cloning of toxic gene products. The AdZ vector system is robust, straightforward, and suited to both sporadic and high-throughput applications.
Wang, Shun-Yuan; Tseng, Chwan-Lu; Lin, Shou-Chuang; Chiu, Chun-Jung; Chou, Jen-Hsiang
2015-01-01
This paper presents the implementation of an adaptive supervisory sliding fuzzy cerebellar model articulation controller (FCMAC) in the speed sensorless vector control of an induction motor (IM) drive system. The proposed adaptive supervisory sliding FCMAC comprised a supervisory controller, integral sliding surface, and an adaptive FCMAC. The integral sliding surface was employed to eliminate steady-state errors and enhance the responsiveness of the system. The adaptive FCMAC incorporated an FCMAC with a compensating controller to perform a desired control action. The proposed controller was derived using the Lyapunov approach, which guarantees learning-error convergence. The implementation of three intelligent control schemes—the adaptive supervisory sliding FCMAC, adaptive sliding FCMAC, and adaptive sliding CMAC—were experimentally investigated under various conditions in a realistic sensorless vector-controlled IM drive system. The root mean square error (RMSE) was used as a performance index to evaluate the experimental results of each control scheme. The analysis results indicated that the proposed adaptive supervisory sliding FCMAC substantially improved the system performance compared with the other control schemes. PMID:25815450
Design of a dual linear polarization antenna using split ring resonators at X-band
NASA Astrophysics Data System (ADS)
Ahmed, Sadiq; Chandra, Madhukar
2017-11-01
Dual linear polarization microstrip antenna configurations are very suitable for high-performance satellite, wireless communication and radar applications. This paper presents a new method to improve the co-cross polarization discrimination (XPD) for dual linear polarized microstrip antennas at 10 GHz. For this, three different configurations of a dual linear polarization antenna utilizing metamaterial unit cells are shown. In the first layout, the microstrip patch antenna is loaded with two pairs of spiral ring resonators; in the second model, a split ring resonator is placed between two microstrip feed lines; and in the third design, complementary split ring resonators are etched in the ground plane. This work has two primary goals: the first is the addition of metamaterial unit cells to the antenna structure, which permits compensation of the asymmetric current distribution on the microstrip antenna and thus yields a symmetrical current distribution on it. This compensation leads to an important enhancement in the XPD in comparison to a conventional dual linear polarized microstrip patch antenna. The simulations reveal XPD improvements of 7.9, 8.8, and 4 dB in the E and H planes for the three designs, respectively, as compared to the conventional dual linear polarized patch antenna. The second objective of this paper is to present the characteristics and performances of the designs of the spiral ring resonator (S-RR), split ring resonator (SRR), and complementary split ring resonator (CSRR) metamaterial unit cells. The simulations are evaluated using the commercial full-wave simulator, Ansoft High-Frequency Structure Simulator (HFSS).
Wave Telescope Technique for MMS Magnetometer
NASA Technical Reports Server (NTRS)
Narita, Y.; Plaschke, F.; Nakamura, R.; Baumjohann, W.; Magnes, W.; Fischer, D.; Voros, Z.; Torbert, R. B.; Russell, C. T.; Strangeway, R. J.;
2016-01-01
Multipoint measurements are a powerful method in studying wavefields in space plasmas. The wave telescope technique is tested against magnetic field fluctuations in the terrestrial magnetosheath measured by the four Magnetospheric Multiscale (MMS) spacecraft on a spatial scale of about 20 km. The dispersion relation diagram and the wave vector distribution are determined for the first time in the ion-kinetic range. Moreover, the dispersion relation diagram is determined in a proxy plasma rest frame by regarding the low-frequency dispersion relation as a Doppler relation and compensating for the apparent phase velocity. Fluctuations are highly compressible, and the wave vectors have an angle of about 60° from the mean magnetic field. We interpret that the measured fluctuations represent a kinetic-drift mirror mode in the magnetosheath which is dispersive and in a turbulent state accompanied by sideband formation.
Cho, HyunGi; Yeon, Suyong; Choi, Hyunga; Doh, Nakju
2018-01-01
In a group of general geometric primitives, plane-based features are widely used for indoor localization because of their robustness against noise. However, a lack of linearly independent planes can make the estimation ill-posed; this in turn causes a degenerate state in which not all states can be estimated. To solve this problem, this paper first proposed a degeneracy detection method, and then a compensation method that corrects orientations by projecting information from an inertial measurement unit (IMU). Experiments were conducted using an IMU-Kinect v2 integrated sensor system prone to falling into degenerate cases owing to its narrow field-of-view. Results showed that the proposed framework could enhance map accuracy by successful detection and compensation of degenerated orientations. PMID:29565287
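The degeneracy described above can be flagged by checking whether the observed plane normals span all three translational directions; the sketch below is an assumption-laden stand-in, not the authors' detector, and uses the smallest singular value of the stacked normal matrix as the test statistic.

import numpy as np

# Sketch of a rank-based degeneracy test (not the authors' implementation):
# the translation is fully constrained only if the observed plane normals span R^3.
def translation_degenerate(normals, tol=1e-3):
    N = np.asarray(normals, dtype=float)      # one (approximately unit) normal per plane
    if N.shape[0] < 3:
        return True
    s = np.linalg.svd(N, compute_uv=False)    # singular values of the stacked normals
    return s[-1] < tol                        # near-zero: some direction is unobservable

# Two walls plus the floor: well conditioned.
print(translation_degenerate([[1, 0, 0], [0.99, 0.14, 0], [0, 0, 1]]))   # False
# Only vertical walls in view: translation along the vertical axis is unobservable.
print(translation_degenerate([[1, 0, 0], [0, 1, 0], [0.7, 0.7, 0]]))     # True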
Douglas, David R; Tennant, Christopher
2015-11-10
A modulated-bending recirculating system that avoids CSR-driven breakdown in emittance compensation by redistributing the bending along the beamline. The modulated-bending recirculating system includes a) larger bending angles in the initial FODO cells, thereby enhancing the impact of CSR early in the beam line while the bunch is long, and b) decreased bending angles in the final FODO cells, reducing the effect of CSR while the bunch is short. The invention describes a method for controlling the effects of CSR during recirculation and bunch compression including a) correcting chromatic aberrations, b) correcting lattice and CSR-induced curvature in the longitudinal phase space by compensating T566, and c) using lattice perturbations to compensate obvious linear correlations x-dp/p and x'-dp/p.
Linear Reconstruction of Non-Stationary Image Ensembles Incorporating Blur and Noise Models
1998-03-01
AFIT dissertation AFIT/DS/ENG/98-06 by Capt. Stephen D. Ford. Only a garbled report-documentation-page excerpt of the abstract survives; it contrasts an approach whose deblurring decreases as noise increases [41] with the vector Wiener filter, which incorporates some a priori information.
Novel method of finding extreme edges in a convex set of N-dimension vectors
NASA Astrophysics Data System (ADS)
Hu, Chia-Lun J.
2001-11-01
As we published in the last few years, for a binary neural network pattern recognition system to learn a given mapping {Um mapped to Vm, m = 1 to M}, where Um is an N-dimensional analog (pattern) vector and Vm is a P-bit binary (classification) vector, the if-and-only-if (IFF) condition that this network can learn this mapping is that each i-set in {Ymi, m = 1 to M} (where Ymi ≡ VmiUm and Vmi = +1 or -1 is the i-th bit of Vm; i = 1 to P, so there are P such sets) is POSITIVELY LINEARLY INDEPENDENT, or PLI. We have shown that this PLI condition is MORE GENERAL than the convexity condition applied to a set of N-vectors. In the design of old learning machines, we know that if a set of N-dimensional analog vectors forms a convex set, and if the machine can learn the boundary vectors (or extreme edges) of this set, then it can definitely learn the inside vectors contained in this POLYHEDRON CONE. This paper reports a new method and new algorithm to find the boundary vectors of a convex set of N-dimensional analog vectors.
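For comparison, a textbook way to extract the extreme edges (boundary vectors) of a finite set of N-dimensional vectors is a linear-programming feasibility test: a point is extreme if and only if it cannot be written as a convex combination of the remaining points. The sketch below implements that classical test with SciPy; it is not the PLI-based algorithm reported in the paper.

import numpy as np
from scipy.optimize import linprog

def extreme_points(U):
    # Classical LP test (not the paper's PLI-based algorithm): U[m] is an extreme
    # point of the convex hull iff it is not a convex combination of the others.
    U = np.asarray(U, dtype=float)
    M = U.shape[0]
    extremes = []
    for m in range(M):
        others = np.delete(U, m, axis=0)
        A_eq = np.vstack([others.T, np.ones((1, M - 1))])   # combination must hit U[m]; weights sum to 1
        b_eq = np.append(U[m], 1.0)
        res = linprog(c=np.zeros(M - 1), A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, None)] * (M - 1), method="highs")
        if not res.success:            # infeasible: U[m] is a boundary (extreme) vector
            extremes.append(m)
    return extremes

pts = [[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5], [0.2, 0.3]]
print(extreme_points(pts))             # [0, 1, 2, 3]; the two interior points are rejected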
Noid, W. G.; Liu, Pu; Wang, Yanting; Chu, Jhih-Wei; Ayton, Gary S.; Izvekov, Sergei; Andersen, Hans C.; Voth, Gregory A.
2008-01-01
The multiscale coarse-graining (MS-CG) method [S. Izvekov and G. A. Voth, J. Phys. Chem. B 109, 2469 (2005); J. Chem. Phys. 123, 134105 (2005)] employs a variational principle to determine an interaction potential for a CG model from simulations of an atomically detailed model of the same system. The companion paper proved that, if no restrictions regarding the form of the CG interaction potential are introduced and if the equilibrium distribution of the atomistic model has been adequately sampled, then the MS-CG variational principle determines the exact many-body potential of mean force (PMF) governing the equilibrium distribution of CG sites generated by the atomistic model. In practice, though, CG force fields are not completely flexible, but only include particular types of interactions between CG sites, e.g., nonbonded forces between pairs of sites. If the CG force field depends linearly on the force field parameters, then the vector valued functions that relate the CG forces to these parameters determine a set of basis vectors that span a vector subspace of CG force fields. The companion paper introduced a distance metric for the vector space of CG force fields and proved that the MS-CG variational principle determines the CG force field that is within that vector subspace and that is closest to the force field determined by the many-body PMF. The present paper applies the MS-CG variational principle for parametrizing molecular CG force fields and derives a linear least squares problem for the parameter set determining the optimal approximation to this many-body PMF. Linear systems of equations for these CG force field parameters are derived and analyzed in terms of equilibrium structural correlation functions. Numerical calculations for a one-site CG model of methanol and a molecular CG model of the EMIM+∕NO3− ionic liquid are provided to illustrate the method. PMID:18601325
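The linear least-squares structure described above can be illustrated schematically: if the CG force is modeled as a linear combination of fixed basis functions of the pair distance, the parameters follow from an ordinary least-squares fit of the basis design matrix to reference forces. The snippet below uses a hypothetical Gaussian basis and synthetic data; it is a sketch of the algebra, not the MS-CG code.

import numpy as np

# Schematic of the linear least-squares step (not the MS-CG code): model the CG pair
# force as f(r) = sum_d phi_d * B_d(r) with a fixed basis B_d, build the design
# matrix, and solve min || F_ref - G @ phi ||^2. Data and basis are synthetic.
rng = np.random.default_rng(0)

def basis(r, centers, width=0.15):
    return np.exp(-((r[:, None] - centers[None, :]) / width) ** 2)   # Gaussian radial basis

n_samples, n_basis = 2000, 8
centers = np.linspace(0.3, 1.5, n_basis)
r = rng.uniform(0.3, 1.5, n_samples)                  # sampled pair distances
G = basis(r, centers)                                 # design matrix (samples x parameters)
phi_true = rng.normal(size=n_basis)                   # "exact" parameters for the demo
F_ref = G @ phi_true + 0.05 * rng.normal(size=n_samples)   # noisy reference forces

phi_fit, *_ = np.linalg.lstsq(G, F_ref, rcond=None)
print(np.round(phi_fit - phi_true, 2))                # recovered parameters track phi_true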
Integrated flight/propulsion control system design based on a centralized approach
NASA Technical Reports Server (NTRS)
Garg, Sanjay; Mattern, Duane L.; Bullard, Randy E.
1989-01-01
An integrated flight/propulsion control system design is presented for the piloted longitudinal landing task with a modern, statically unstable, fighter aircraft. A centralized compensator based on the Linear Quadratic Gaussian/Loop Transfer Recovery methodology is first obtained to satisfy the feedback loop performance and robustness specifications. This high-order centralized compensator is then partitioned into airframe and engine sub-controllers based on modal controllability/observability for the compensator modes. The order of the sub-controllers is then reduced using internally-balanced realization techniques and the sub-controllers are simplified by neglecting the insignificant feedbacks. These sub-controllers have the advantage that they can be implemented as separate controllers on the airframe and the engine while still retaining the important performance and stability characteristics of the full-order centralized compensator. Command prefilters are then designed for the closed-loop system with the simplified sub-controllers to obtain the desired system response to airframe and engine command inputs, and the overall system performance evaluation results are presented.
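The internally-balanced order-reduction step mentioned above can be sketched with standard square-root balanced truncation (Gramians, Cholesky factors, and an SVD); the routine below is a generic implementation applied to a random stable system, not the flight/propulsion controller itself.

import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    # Generic square-root balanced truncation of a stable LTI system (A, B, C);
    # a stand-in for the internally-balanced reduction used in the paper.
    Wc = solve_continuous_lyapunov(A, -B @ B.T)    # controllability Gramian
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)  # observability Gramian
    Lc = cholesky(Wc, lower=True)
    Lo = cholesky(Wo, lower=True)
    U, s, Vt = svd(Lo.T @ Lc)
    S = np.diag(1.0 / np.sqrt(s))
    T = Lc @ Vt.T @ S                              # balancing transformation
    Ti = S @ U.T @ Lo.T                            # its inverse
    Ab, Bb, Cb = Ti @ A @ T, Ti @ B, C @ T
    return Ab[:r, :r], Bb[:r, :], Cb[:, :r], s     # keep the r dominant Hankel singular values

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6)) - 6 * np.eye(6)        # hypothetical stable 6th-order model
B = rng.normal(size=(6, 1))
C = rng.normal(size=(1, 6))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=2)
print(np.round(hsv, 4))                            # small trailing values justify the truncation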
Fiber-optical sensor with intensity compensation model in college teaching of physics experiment
NASA Astrophysics Data System (ADS)
Su, Liping; Zhang, Yang; Li, Kun; Zhang, Yu
2017-08-01
Optical fiber sensor technology is one of the main components of modern information technology and holds a very important position in modern science and technology. Fiber optic sensor experiments can improve students' enthusiasm and broaden their horizons in college physics. In this paper, the main structure and working principle of a fiber-optical sensor with an intensity compensation model are introduced. The sensor is then applied to measure micro-displacement in the Young's modulus measurement experiment and the metal linear expansion coefficient measurement experiment in college physics. Results indicate that the micro-displacement measurement accuracy of the fiber-optical sensor with the intensity compensation model is higher than that of the traditional methods. Meanwhile, this measurement method helps students understand optical fibers, sensors, and the nature of micro-displacement measurement, and strengthens the relationship and compatibility between the experiments, providing a new idea for the reform of experimental teaching.
Laser diode bars based on strain-compensated AlGaPAs/GaAs heterostructures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marmalyuk, Aleksandr A; Ladugin, M A; Yarotskaya, I V
2012-01-31
Traditional (in the AlGaAs/GaAs system) and phosphorus-compensated (in the AlGaAs/AlGaPAs/GaAs system) laser heterostructures emitting at a wavelength of 850 nm are grown by MOVPE and studied. Laser diode bars are fabricated and their output characteristics are studied. The method used to grow heterolayers allowed us to control (minimise) mechanical stresses in the AlGaPAs/GaAs laser heterostructure, which made it possible to keep its curvature at the level of the initial curvature of the substrate. It is shown that the use of a compensated AlGaPAs/GaAs heterostructure improves the linear distribution of emitting elements in the near field of laser diode arrays and allows the power-current characteristic to retain its slope at high pump currents owing to a uniform contact of all emitting elements with the heat sink. The radius of curvature of the grown compensated heterostructures turns out to be smaller than that of traditional heterostructures.
NASA Astrophysics Data System (ADS)
Dorodnitsyn, Vladimir A.; Kozlov, Roman; Meleshko, Sergey V.; Winternitz, Pavel
2018-05-01
A recent article was devoted to an analysis of the symmetry properties of a class of first-order delay ordinary differential systems (DODSs). Here we concentrate on linear DODSs, which have infinite-dimensional Lie point symmetry groups due to the linear superposition principle. Their symmetry algebra always contains a two-dimensional subalgebra realized by linearly connected vector fields. We identify all classes of linear first-order DODSs that have additional symmetries, not due to linearity alone, and we present representatives of each class. These additional symmetries are then used to construct exact analytical particular solutions using symmetry reduction.
Resultant as the determinant of a Koszul complex
NASA Astrophysics Data System (ADS)
Anokhina, A. S.; Morozov, A. Yu.; Shakirov, Sh. R.
2009-09-01
The determinant is a very important characteristic of a linear map between vector spaces. Two generalizations of linear maps are intensively used in modern theory: linear complexes (nilpotent chains of linear maps) and nonlinear maps. The determinant of a complex and the resultant are then the corresponding generalizations of the determinant of a linear map. It turns out that these two quantities are related: the resultant of a nonlinear map is the determinant of the corresponding Koszul complex. We give an elementary introduction into these notions and relations, which will definitely play a role in the future development of theoretical physics.
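In the simplest univariate case the statement "the resultant is a determinant" can be checked directly: the resultant of two polynomials equals the determinant of their Sylvester matrix. The SymPy sketch below verifies this on a small example; the Koszul-complex construction for systems of polynomials is the generalization discussed in the paper.

import sympy as sp

x = sp.symbols('x')
f = x**3 + 2*x + 1
g = 2*x**2 - x + 3

def sylvester(f, g, x):
    # Sylvester matrix of f and g; its determinant is the classical resultant.
    pf = sp.Poly(f, x).all_coeffs()
    pg = sp.Poly(g, x).all_coeffs()
    n, m = len(pf) - 1, len(pg) - 1
    M = sp.zeros(n + m, n + m)
    for i in range(m):                    # m shifted copies of the coefficients of f
        for j, c in enumerate(pf):
            M[i, i + j] = c
    for i in range(n):                    # n shifted copies of the coefficients of g
        for j, c in enumerate(pg):
            M[m + i, i + j] = c
    return M

print(sp.resultant(f, g, x), sylvester(f, g, x).det())   # the two values agree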
Cardano, Filippo; Karimi, Ebrahim; Slussarenko, Sergei; Marrucci, Lorenzo; de Lisio, Corrado; Santamato, Enrico
2012-04-01
We describe the polarization topology of the vector beams emerging from a patterned birefringent liquid crystal plate with a topological charge q at its center (q-plate). The polarization topological structures for different q-plates and different input polarization states have been studied experimentally by measuring the Stokes parameters point-by-point in the beam transverse plane. Furthermore, we used a tuned q=1/2-plate to generate cylindrical vector beams with radial or azimuthal polarizations, with the possibility of switching dynamically between these two cases by simply changing the linear polarization of the input beam.
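The point-by-point Stokes reconstruction mentioned above reduces to simple arithmetic on six analyzer-projected intensity images; the sketch below uses synthetic arrays in place of measured data and the standard definitions of S0 through S3.

import numpy as np

# Pixel-by-pixel Stokes parameters from six analyzer-projected intensities
# (horizontal, vertical, +45, -45, right- and left-circular); synthetic data here.
rng = np.random.default_rng(2)
I_H, I_V, I_D, I_A, I_R, I_L = rng.uniform(0.0, 1.0, size=(6, 64, 64))

S0 = I_H + I_V
S1 = I_H - I_V
S2 = I_D - I_A
S3 = I_R - I_L

azimuth = 0.5 * np.arctan2(S2, S1)                      # local polarization-ellipse orientation
ellipticity = 0.5 * np.arcsin(np.clip(S3 / np.maximum(S0, 1e-12), -1.0, 1.0))
print(azimuth.shape, ellipticity.shape)                 # one value per pixel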
Primer Vector Optimization: Survey of Theory, New Analysis and Applications
NASA Technical Reports Server (NTRS)
Guzman, J. J.; Mailhe, L. M.; Schiff, C.; Hughes, S. P.; Folta, D. C.
2002-01-01
In this paper, a summary of primer vector theory is presented. The applicability of primer vector theory is examined in an effort to understand when and why the theory can fail. For example, since the Calculus of Variations is based on "small" variations, singularities in the linearized (variational) equations of motion along the arcs must be taken into account. These singularities are a recurring problem in analyses that employ small variations. Two examples, the initialization of an orbit and a line of apsides rotation, are presented. Recommendations, future work, and the possible addition of other optimization techniques are also discussed.
Design of vaccination and fumigation on Host-Vector Model by input-output linearization method
NASA Astrophysics Data System (ADS)
Nugraha, Edwin Setiawan; Naiborhu, Janson; Nuraini, Nuning
2017-03-01
Here, we analyze the host-vector model and propose a design of vaccination and fumigation to control the infectious population by using feedback control, specifically the input-output linearization method. The host population is divided into three compartments: susceptible, infectious, and recovered. The vector population is divided into two compartments: susceptible and infectious. In this system, vaccination and fumigation are treated as inputs and the infectious population as the output. The objective of the design is to stabilize the system so that the output asymptotically tends to zero. We also present examples to illustrate the design model.
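The model and control objective can be illustrated numerically. The sketch below simulates a normalized host-vector system with constant vaccination and fumigation rates chosen so that the controlled reproduction number falls below one; the constant inputs and all parameter values are illustrative stand-ins for the input-output linearization law derived in the paper.

import numpy as np
from scipy.integrate import solve_ivp

# Normalized host-vector model (SIR hosts, SI vectors). Constant vaccination u1 and
# fumigation u2 stand in for the paper's input-output linearization feedback; all
# parameter values are illustrative.
b_h, b_v = 0.5, 0.3                  # transmission rates: vector-to-host, host-to-vector
gamma, mu_h, mu_v = 0.1, 0.01, 0.05  # recovery rate and host/vector renewal rates

def rhs(t, y, u1, u2):
    Sh, Ih, Rh, Sv, Iv = y
    dSh = mu_h - b_h * Sh * Iv - (mu_h + u1) * Sh
    dIh = b_h * Sh * Iv - (gamma + mu_h) * Ih
    dRh = gamma * Ih + u1 * Sh - mu_h * Rh
    dSv = mu_v - b_v * Sv * Ih - (mu_v + u2) * Sv
    dIv = b_v * Sv * Ih - (mu_v + u2) * Iv
    return [dSh, dIh, dRh, dSv, dIv]

y0 = [0.95, 0.05, 0.0, 0.9, 0.1]
for u1, u2 in [(0.0, 0.0), (0.05, 0.2)]:
    # threshold quantity at the disease-free equilibrium (>1 endemic, <1 eradication)
    Rc = b_h * b_v * (mu_h / (mu_h + u1)) * (mu_v / (mu_v + u2)) / ((gamma + mu_h) * (mu_v + u2))
    sol = solve_ivp(rhs, (0, 400), y0, args=(u1, u2), rtol=1e-8, atol=1e-10)
    print(f"u1={u1}, u2={u2}: Rc={Rc:.2f}, infectious hosts at t=400: {sol.y[1, -1]:.2e}")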
Design and simulation of MEMS vector hydrophone with reduced cross section based meander beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, Manoj; Dutta, S.; Pal, Ramjay
MEMS-based vector hydrophones are among the key devices in underwater communications. In this paper, we present a bio-inspired MEMS vector hydrophone. The hydrophone structure consists of a proof mass suspended by four meander-type beams with reduced cross-section. Modal patterns of the structure were studied. The first three modal frequencies of the hydrophone structure were found to be 420 Hz, 420 Hz and 1646 Hz, respectively. The deflection and stress of the hydrophone are found to have linear behavior in the 1 µPa – 1 Pa pressure range.
Generation of cylindrically polarized vector vortex beams with digital micromirror device
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gong, Lei; Liu, Weiwei; Wang, Meng
We propose a novel technique to directly transform a linearly polarized Gaussian beam into vector-vortex beams with various spatial patterns. Full high-quality control of amplitude and phase is implemented via Digital Micro-mirror Device (DMD) binary holography for generating Laguerre-Gaussian, Bessel-Gaussian, and helical Mathieu–Gaussian modes, while a radial polarization converter (S-waveplate) is employed to effectively convert the optical vortices into cylindrically polarized vortex beams. Additionally, the generated vector-vortex beams maintain their polarization symmetry after arbitrary polarization manipulation. Due to the high frame rates of the DMD, rapid switching among a series of vector modes carrying different orbital angular momenta paves the way for optical microscopy, trapping, and communication.
Managing focal fields of vector beams with multiple polarization singularities.
Han, Lei; Liu, Sheng; Li, Peng; Zhang, Yi; Cheng, Huachao; Gan, Xuetao; Zhao, Jianlin
2016-11-10
We explore the tight focusing behavior of vector beams with multiple polarization singularities, and analyze the influences of the number, position, and topological charge of the singularities on the focal fields. It is found that the ellipticity of the local polarization states at the focal plane could be determined by the spatial distribution of the polarization singularities of the vector beam. When the spatial location and topological charge of singularities have even-fold rotation symmetry, the transverse fields at the focal plane are locally linearly polarized. Otherwise, the polarization state becomes a locally hybrid one. By appropriately arranging the distribution of the polarization singularities in the vector beam, the polarization distributions of the focal fields could be altered while the intensity remains unchanged.
Thrust vectoring for lateral-directional stability
NASA Technical Reports Server (NTRS)
Peron, Lee R.; Carpenter, Thomas
1992-01-01
The advantages and disadvantages of using thrust vectoring for lateral-directional control and the effects of reducing the tail size of a single-engine aircraft were investigated. The aerodynamic characteristics of the F-16 aircraft were generated by using the Aerodynamic Preliminary Analysis System II panel code. The resulting lateral-directional linear perturbation analysis of a modified F-16 aircraft with various tail sizes and yaw vectoring was performed at several speeds and altitudes to determine the stability and control trends for the aircraft compared to these trends for a baseline aircraft. A study of the paddle-type turning vane thrust vectoring control system as used on the National Aeronautics and Space Administration F/A-18 High Alpha Research Vehicle is also presented.
Generation of vector beams using a double-wedge depolarizer: Non-quantum entanglement
NASA Astrophysics Data System (ADS)
Samlan, C. T.; Viswanathan, Nirmal K.
2016-07-01
Propagation of a horizontally polarized Gaussian beam through a double-wedge depolarizer generates vector beams with a spatially varying state of polarization. Jones calculus is used to show that such beams are maximally nonseparable on the basis of even (Gaussian)-odd (Hermite-Gaussian) mode parity and horizontal-vertical polarization state. The maximum nonseparability in the two degrees of freedom of the vector beam at the double-wedge depolarizer output is verified experimentally using a modified Sagnac interferometer and linear-analyser-projected interferograms, measuring a concurrence of 0.94±0.002 and a violation of the Clauser-Horne-Shimony-Holt form of a Bell-like inequality of 2.704±0.024. The investigation is carried out in the context of the use of vector beams for metrological applications.
Image and Video Compression with VLSI Neural Networks
NASA Technical Reports Server (NTRS)
Fang, W.; Sheu, B.
1993-01-01
An advanced motion-compensated predictive video compression system based on artificial neural networks has been developed to effectively eliminate the temporal and spatial redundancy of video image sequences and thus reduce the bandwidth and storage required for the transmission and recording of the video signal. The VLSI neuroprocessor for high-speed high-ratio image compression based upon a self-organization network and the conventional algorithm for vector quantization are compared. The proposed method is quite efficient and can achieve near-optimal results.
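The codebook-based compression idea can be sketched with an ordinary k-means vector quantizer over 4x4 image blocks; k-means here stands in for the self-organizing network and the VLSI implementation described above, and the image is synthetic.

import numpy as np
from sklearn.cluster import KMeans

# Vector-quantization sketch: learn a 256-entry codebook over 4x4 blocks with k-means
# (a stand-in for the self-organizing network in the paper) and encode each block as
# an 8-bit index, i.e. 0.5 bit/pixel before codebook overhead. The image is synthetic.
rng = np.random.default_rng(3)
image = rng.integers(0, 256, size=(128, 128)).astype(np.float64)

blocks = image.reshape(32, 4, 32, 4).transpose(0, 2, 1, 3).reshape(-1, 16)
codebook = KMeans(n_clusters=256, n_init=4, random_state=0).fit(blocks)
indices = codebook.predict(blocks)                        # one byte per 16-pixel block
decoded = codebook.cluster_centers_[indices].reshape(32, 32, 4, 4)
decoded = decoded.transpose(0, 2, 1, 3).reshape(128, 128)
print(f"reconstruction MSE: {np.mean((image - decoded) ** 2):.1f}")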
Tunable overlapping long-period fiber grating and its bending vector sensing application
NASA Astrophysics Data System (ADS)
Hu, Wei; Zhang, Weigang; Chen, Lei; Wang, Song; Zhang, Yunshan; Zhang, Yanxin; Kong, Lingxin; Yu, Lin; Yan, Tieyi; Li, Yanping
2018-03-01
A novel overlapping long-period fiber grating (OLPFG) is proposed and experimentally demonstrated in this paper. The OLPFG is composed of two partially overlapping long-period fiber gratings (LPFGs). Based on coupled-mode theory and the transfer matrix method, it is found that the phase-shift LPFG and LPFG interference are two special cases of the proposed OLPFG. Moreover, the confirmation experiments verified that the proposed OLPFG has a high bending sensitivity in opposite directions, and the temperature crosstalk can be compensated spontaneously.
Between-object and within-object saccade programming in a visual search task.
Vergilino-Perez, Dorine; Findlay, John M
2006-07-01
The role of the perceptual organization of the visual display in eye movement control was examined in two experiments using a task where a two-saccade sequence was directed toward either a single elongated object or three separate shorter objects. In the first experiment, we examined the consequences for the second saccade of a small displacement of the whole display during the first saccade. We found that between-object saccades compensated for the displacement to aim for a target position on the new object whereas within-object saccades did not show compensation but were coded as a fixed motor vector applied irrespective of where the preceding saccade landed. In the second experiment, we extended the paradigm to examine saccades performed in different directions. The results suggest that the within-object and between-object saccade distinction is an essential feature of saccadic planning.
LBP and SIFT based facial expression recognition
NASA Astrophysics Data System (ADS)
Sumer, Omer; Gunes, Ece O.
2015-02-01
This study compares the performance of local binary patterns (LBP) and the scale invariant feature transform (SIFT) with support vector machines (SVM) in automatic classification of discrete facial expressions. Facial expression recognition is a multiclass classification problem, and seven classes (happiness, anger, sadness, disgust, surprise, fear, and contempt) are classified. Using SIFT feature vectors and a linear SVM, 93.1% mean accuracy is acquired on the CK+ database. On the other hand, the performance of the LBP-based classifier with a linear SVM is reported on SFEW using the strictly person independent (SPI) protocol. Seven-class mean accuracy on SFEW is 59.76%. Experiments on both databases showed that LBP features can be used in a fairly descriptive way if a good localization of facial points and partitioning strategy are followed.
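A minimal version of the LBP-plus-linear-SVM pipeline is sketched below with scikit-image and scikit-learn; because CK+ and SFEW are licensed datasets, random images and labels are used, so the cross-validated accuracy should sit near chance (about 1/7).

import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

# LBP histograms over a grid of cells, classified with a linear SVM. CK+ and SFEW are
# licensed datasets, so random images/labels are used; accuracy should sit near 1/7.
rng = np.random.default_rng(4)
n_per_class, n_classes, size = 30, 7, 64
images = rng.integers(0, 256, size=(n_per_class * n_classes, size, size))
labels = np.repeat(np.arange(n_classes), n_per_class)

def lbp_histogram(img, P=8, R=1, grid=4):
    lbp = local_binary_pattern(img, P, R, method="uniform")   # uniform codes in [0, P+1]
    h, w = img.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            cell = lbp[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid]
            hist, _ = np.histogram(cell, bins=P + 2, range=(0, P + 2), density=True)
            feats.append(hist)
    return np.concatenate(feats)

X = np.array([lbp_histogram(img) for img in images])
clf = LinearSVC(C=1.0, max_iter=5000)
print(cross_val_score(clf, X, labels, cv=5).mean())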
Invariants of polarization transformations.
Sadjadi, Firooz A
2007-05-20
The use of polarization-sensitive sensors is being explored in a variety of applications. Polarization diversity has been shown to improve the performance of the automatic target detection and recognition in a significant way. However, it also brings out the problems associated with processing and storing more data and the problem of polarization distortion during transmission. We present a technique for extracting attributes that are invariant under polarization transformations. The polarimetric signatures are represented in terms of the components of the Stokes vectors. Invariant algebra is then used to extract a set of signature-related attributes that are invariant under linear transformation of the Stokes vectors. Experimental results using polarimetric infrared signatures of a number of manmade and natural objects undergoing systematic linear transformations support the invariancy of these attributes.
Flyby Error Analysis Based on Contour Plots for the Cassini Tour
NASA Technical Reports Server (NTRS)
Stumpf, P. W.; Gist, E. M.; Goodson, T. D.; Hahn, Y.; Wagner, S. V.; Williams, P. N.
2008-01-01
The maneuver cancellation analysis consists of cost contour plots employed by the Cassini maneuver team. The plots are two-dimensional linear representations of a larger six-dimensional solution to a multi-maneuver, multi-encounter mission at Saturn. By using contours plotted over the B·R and B·T components (the dot products of the B vector with the R and T axes), it is possible to view the delta-V effects for various encounter positions in the B-plane. The plot is used in operations to help determine if the Approach Maneuver (ensuing encounter minus three days) and/or the Cleanup Maneuver (ensuing encounter plus three days) can be cancelled, and it also serves as a linear check of an integrated solution.
Financial Distress Prediction using Linear Discriminant Analysis and Support Vector Machine
NASA Astrophysics Data System (ADS)
Santoso, Noviyanti; Wibowo, Wahyu
2018-03-01
Financial distress is the early stage before bankruptcy. Bankruptcies caused by financial distress can be seen from a company's financial statements. The ability to predict financial distress has become an important research topic because it can provide early warning for the company. In addition, predicting financial distress is also beneficial for investors and creditors. This research builds a prediction model of financial distress for industrial companies in Indonesia by comparing the performance of Linear Discriminant Analysis (LDA) and the Support Vector Machine (SVM) combined with a variable selection technique. The result is that the prediction model based on hybrid Stepwise-SVM achieves a better balance among fitting ability, generalization ability and model stability than the other models.
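The comparison can be prototyped with scikit-learn as sketched below; the financial-ratio data are simulated with make_classification, and a sequential feature selector stands in for the stepwise variable selection, so the printed accuracies illustrate the workflow rather than the paper's results.

from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# LDA vs. SVM vs. SVM with sequential (stepwise-style) feature selection on simulated
# financial-ratio data; the real study uses Indonesian industrial companies.
X, y = make_classification(n_samples=400, n_features=20, n_informative=6,
                           weights=[0.8, 0.2], random_state=0)

models = {
    "LDA": make_pipeline(StandardScaler(), LinearDiscriminantAnalysis()),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
    "Stepwise-SVM": make_pipeline(
        StandardScaler(),
        SequentialFeatureSelector(SVC(kernel="linear"), n_features_to_select=6),
        SVC(kernel="rbf", C=1.0)),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name:>12}: {acc:.3f}")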
Code Samples Used for Complexity and Control
NASA Astrophysics Data System (ADS)
Ivancevic, Vladimir G.; Reid, Darryn J.
2015-11-01
The following sections are included: * MathematicaⓇ Code * Generic Chaotic Simulator * Vector Differential Operators * NLS Explorer * 2C++ Code * C++ Lambda Functions for Real Calculus * Accelerometer Data Processor * Simple Predictor-Corrector Integrator * Solving the BVP with the Shooting Method * Linear Hyperbolic PDE Solver * Linear Elliptic PDE Solver * Method of Lines for a Set of the NLS Equations * C# Code * Iterative Equation Solver * Simulated Annealing: A Function Minimum * Simple Nonlinear Dynamics * Nonlinear Pendulum Simulator * Lagrangian Dynamics Simulator * Complex-Valued Crowd Attractor Dynamics * Freeform Fortran Code * Lorenz Attractor Simulator * Complex Lorenz Attractor * Simple SGE Soliton * Complex Signal Presentation * Gaussian Wave Packet * Hermitian Matrices * Euclidean L2-Norm * Vector/Matrix Operations * Plain C-Code: Levenberg-Marquardt Optimizer * Free Basic Code: 2D Crowd Dynamics with 3000 Agents
Coherent detection in optical fiber systems.
Ip, Ezra; Lau, Alan Pak Tao; Barros, Daniel J F; Kahn, Joseph M
2008-01-21
The drive for higher performance in optical fiber systems has renewed interest in coherent detection. We review detection methods, including noncoherent, differentially coherent, and coherent detection, as well as a hybrid method. We compare modulation methods encoding information in various degrees of freedom (DOF). Polarization-multiplexed quadrature-amplitude modulation maximizes spectral efficiency and power efficiency, by utilizing all four available DOF, the two field quadratures in the two polarizations. Dual-polarization homodyne or heterodyne downconversion is a linear process that can fully recover the received signal field in these four DOF. When downconverted signals are sampled at the Nyquist rate, compensation of transmission impairments can be performed using digital signal processing (DSP). Linear impairments, including chromatic dispersion and polarization-mode dispersion, can be compensated quasi-exactly using finite impulse response filters. Some nonlinear impairments, such as intra-channel four-wave mixing and nonlinear phase noise, can be compensated partially. Carrier phase recovery can be performed using feedforward methods, even when phase-locked loops may fail due to delay constraints. DSP-based compensation enables a receiver to adapt to time-varying impairments, and facilitates use of advanced forward-error-correction codes. We discuss both single- and multi-carrier system implementations. For a given modulation format, using coherent detection, they offer fundamentally the same spectral efficiency and power efficiency, but may differ in practice, because of different impairments and implementation details. With anticipated advances in analog-to-digital converters and integrated circuit technology, DSP-based coherent receivers at bit rates up to 100 Gbit/s should become practical within the next few years.
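The quasi-exact equalization of chromatic dispersion mentioned above can be sketched in the frequency domain: the fiber is modeled as the all-pass response H(w) = exp(-1j*beta2*w^2*L/2) and the equalizer applies its conjugate. The link parameters, the absence of pulse shaping, and the single-polarization QPSK signal below are simplifying assumptions.

import numpy as np

# Frequency-domain chromatic-dispersion (CD) compensation, single polarization.
# The fiber is the all-pass filter H(w) = exp(-1j*beta2/2*w^2*L); the equalizer
# applies the conjugate response. Link numbers and the lack of pulse shaping are
# simplifying assumptions.
c = 299792458.0
D, lam, L = 17e-6, 1550e-9, 100e3            # 17 ps/nm/km (in s/m^2), 1550 nm, 100 km
beta2 = -D * lam**2 / (2 * np.pi * c)        # about -21.7 ps^2/km, here in SI units
Fs = 56e9                                    # 2 samples/symbol at 28 GBd

rng = np.random.default_rng(5)
sym = (rng.choice([-1, 1], 4096) + 1j * rng.choice([-1, 1], 4096)) / np.sqrt(2)   # QPSK
tx = np.repeat(sym, 2)                       # crude 2x oversampling

w = 2 * np.pi * np.fft.fftfreq(tx.size, d=1 / Fs)
H = np.exp(-1j * beta2 / 2 * w**2 * L)       # fiber CD response
rx = np.fft.ifft(np.fft.fft(tx) * H)         # dispersed signal
eq = np.fft.ifft(np.fft.fft(rx) * np.conj(H))   # CD equalizer output

print(f"max error before EQ: {np.max(np.abs(rx - tx)):.3f}, after EQ: {np.max(np.abs(eq - tx)):.2e}")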
NASA Astrophysics Data System (ADS)
Huebner, Claudia S.
2016-10-01
As a consequence of fluctuations in the index of refraction of the air, atmospheric turbulence causes scintillation, spatial and temporal blurring as well as global and local image motion creating geometric distortions. To mitigate these effects many different methods have been proposed. Global as well as local motion compensation in some form or other constitutes an integral part of many software-based approaches. For the estimation of motion vectors between consecutive frames simple methods like block matching are preferable to more complex algorithms like optical flow, at least when challenged with near real-time requirements. However, the processing power of commercially available computers continues to increase rapidly and the more powerful optical flow methods have the potential to outperform standard block matching methods. Therefore, in this paper three standard optical flow algorithms, namely Horn-Schunck (HS), Lucas-Kanade (LK) and Farnebäck (FB), are tested for their suitability to be employed for local motion compensation as part of a turbulence mitigation system. Their qualitative performance is evaluated and compared with that of three standard block matching methods, namely Exhaustive Search (ES), Adaptive Rood Pattern Search (ARPS) and Correlation based Search (CS).
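A minimal example of Farneback-based local motion compensation with OpenCV is sketched below; the two frames are synthetic (a smoothed noise texture and a shifted copy), so the printed residuals only indicate that warping with the estimated flow brings the frames closer together.

import cv2
import numpy as np

# Farneback optical flow + backward warping as a local motion-compensation step.
# Frames are synthetic: a smoothed noise texture and a copy shifted by (2, 3) pixels.
rng = np.random.default_rng(6)
base = rng.uniform(0, 255, (240, 320)).astype(np.float32)
prev = cv2.normalize(cv2.GaussianBlur(base, (0, 0), 3), None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
curr = np.roll(prev, shift=(2, 3), axis=(0, 1))

flow = cv2.calcOpticalFlowFarneback(prev, curr, None, pyr_scale=0.5, levels=3,
                                    winsize=15, iterations=3, poly_n=5,
                                    poly_sigma=1.2, flags=0)

h, w = prev.shape
gx, gy = np.meshgrid(np.arange(w), np.arange(h))
map_x = (gx + flow[..., 0]).astype(np.float32)   # sample the current frame where each
map_y = (gy + flow[..., 1]).astype(np.float32)   # previous-frame pixel is predicted to be
compensated = cv2.remap(curr, map_x, map_y, interpolation=cv2.INTER_LINEAR)

print("mean |diff| without compensation:", np.abs(curr.astype(int) - prev.astype(int)).mean())
print("mean |diff| with compensation:   ", np.abs(compensated.astype(int) - prev.astype(int)).mean())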
NASA Astrophysics Data System (ADS)
Gavazzi, Bruno; Alkhatib-Alkontar, Rozan; Munschy, Marc; Colin, Frédéric; Duvette, Catherine
2016-04-01
Fluxgate three-component magnetometers allow vector measurements of the magnetic field. Moreover, among magnetometers measuring the intensity of the magnetic field, they have the lightest weight and the lowest power consumption. Vector measurements make them the only kind of magnetometer allowing compensation of magnetic perturbations due to the equipment carried with the magnetometer. Fluxgate three-component magnetometers are common in space magnetometry and in aero-geophysics but are never used in archaeology because of the difficulty of calibrating them. This problem is overcome by the use of a simple calibration and compensation procedure in the field, developed initially for space research (after calibration and compensation, the rms noise is less than 1 nT). It is therefore possible to build a multi-sensor (up to 8) and georeferenced device for investigations at different scales down to the centimetre: because the locus of magnetic measurements is less than a cubic centimetre, magnetic profiling or mapping can be performed a few centimetres outside magnetized bodies. Such equipment is used in a context of heavy sediment coverage and uneven topography on the 1st millennium BC site of Qasr ʿAllam in the western desert of Egypt. Magnetic measurements with a line spacing of 0.5 m allow a magnetic grid to be computed. Interpretation using potential field operators such as double reduction to the pole and fractional vertical derivatives reveals a widespread irrigation system and a vast cultic facility. In some areas, magnetic profiling with a 0.1 m line spacing and at 0.1 m above the ground is performed. The results of these interpretations provide sufficient evidence for the local authorities to enlarge the protection of the site against the threatening progression of agricultural fields.
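The calibration-and-compensation idea can be illustrated with the textbook scalar calibration of a three-component magnetometer: raw readings taken under rotations in a constant field lie on an ellipsoid, which a linear least-squares fit recovers. The sketch below uses synthetic data and is not the procedure used by the authors.

import numpy as np
from scipy.linalg import sqrtm

# Textbook scalar calibration sketch: raw readings m = M h + b (soft-iron matrix M,
# hard-iron offset b) lie on an ellipsoid when |h| = B0 is constant; fit the ellipsoid
# by linear least squares and invert it. Synthetic data, illustrative values only.
rng = np.random.default_rng(7)
B0 = 48000.0                                    # nT, assumed known local field intensity
M_true = np.array([[1.10, 0.03, 0.00],
                   [0.02, 0.95, 0.04],
                   [0.01, 0.00, 1.05]])
b_true = np.array([300.0, -150.0, 80.0])

h = rng.normal(size=(500, 3))
h = B0 * h / np.linalg.norm(h, axis=1, keepdims=True)          # random attitudes
m = h @ M_true.T + b_true + rng.normal(scale=5.0, size=h.shape)

mn = m / B0                                     # work in units of B0 for conditioning
x, y, z = mn.T
D = np.column_stack([x*x, y*y, z*z, 2*x*y, 2*x*z, 2*y*z, x, y, z, np.ones_like(x)])
_, _, Vt = np.linalg.svd(D, full_matrices=False)
p = Vt[-1]                                      # quadric coefficients (null-space direction)
Q = np.array([[p[0], p[3], p[4]],
              [p[3], p[1], p[5]],
              [p[4], p[5], p[2]]])
u, k = p[6:9], p[9]

bn = -0.5 * np.linalg.solve(Q, u)               # ellipsoid centre in units of B0
A = Q / (bn @ Q @ bn - k)                       # A = M^{-T} M^{-1} (up to an orthogonal factor)
W = np.real(sqrtm(A))                           # compensation matrix
h_cal = (mn - bn) @ W.T * B0                    # calibrated field, back in nT

print("hard-iron estimate error [nT]:", np.round(bn * B0 - b_true, 1))
print("std of |B| after calibration [nT]:", round(float(np.std(np.linalg.norm(h_cal, axis=1))), 1))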
Etiology of work-related electrical injuries: a narrative analysis of workers' compensation claims.
Lombardi, David A; Matz, Simon; Brennan, Melanye J; Smith, Gordon S; Courtney, Theodore K
2009-10-01
The purpose of this study was to provide new insight into the etiology of primarily nonfatal, work-related electrical injuries. We developed a multistage, case-selection algorithm to identify electrical-related injuries from workers' compensation claims and a customized coding taxonomy to identify pre-injury circumstances. Workers' compensation claims routinely collected over a 1-year period from a large U.S. insurance provider were used to identify electrical-related injuries using an algorithm that evaluated: coded injury cause information, nature of injury, "accident" description, and injury description narratives. Concurrently, a customized coding taxonomy for these narratives was developed to abstract the activity, source, initiating process, mechanism, vector, and voltage. Among the 586,567 reported claims during 2002, electrical-related injuries accounted for 1283 (0.22%) of nonfatal claims and 15 fatalities (1.2% of electrical). Most (72.3%) were male, average age of 36, working in services (33.4%), manufacturing (24.7%), retail trade (17.3%), and construction (7.2%). Body part(s) injured most often were the hands, fingers, or wrist (34.9%); multiple body parts/systems (25.0%); lower/upper arm; elbow; shoulder, and upper extremities (19.2%). The leading activities were conducting manual tasks (55.1%); working with machinery, appliances, or equipment; working with electrical wire; and operating powered or nonpowered hand tools. Primary injury sources were appliances and office equipment (24.4%); wires, cables/cords (18.0%); machines and other equipment (11.8%); fixtures, bulbs, and switches (10.4%); and lightning (4.3%). No vector was identified in 85% of cases, and the work process was initiated by others in less than 1% of cases. Injury narratives provide valuable information to overcome some of the limitations of precoded data, more specifically for identifying additional injury cases and in supplementing traditional epidemiologic data for further understanding the etiology of work-related electrical injuries that may lead to further prevention opportunities.
NASA Technical Reports Server (NTRS)
Guo, Liwen; Cardullo, Frank M.; Kelly, Lon C.
2007-01-01
The desire to create more complex visual scenes in modern flight simulators outpaces recent increases in processor speed. As a result, simulation transport delay remains a problem. New approaches for compensating the transport delay in a flight simulator have been developed and are presented in this report. The lead/lag filter, the McFarland compensator and the Sobiski/Cardullo state space filter are three prominent compensators. The lead/lag filter provides some phase lead, while introducing significant gain distortion in the same frequency interval. The McFarland predictor can compensate for much longer delay and cause smaller gain error in low frequencies than the lead/lag filter, but the gain distortion beyond the design frequency interval is still significant, and it also causes large spikes in prediction. Though, theoretically, the Sobiski/Cardullo predictor, a state space filter, can compensate the longest delay with the least gain distortion among the three, it has remained in laboratory use due to several limitations. The first novel compensator is an adaptive predictor that makes use of the Kalman filter algorithm in a unique manner. In this manner the predictor can accurately provide the desired amount of prediction, while significantly reducing the large spikes caused by the McFarland predictor. Among several simplified online adaptive predictors, this report illustrates mathematically why the stochastic approximation algorithm achieves the best compensation results. A second novel approach employed a reference aircraft dynamics model to implement a state space predictor on a flight simulator. The practical implementation formed the filter state vector from the operator's control input and the aircraft states. The relationship between the reference model and the compensator performance was investigated in great detail, and the best performing reference model was selected for implementation in the final tests. Theoretical analyses of data from offline simulations with time delay compensation show that both novel predictors effectively suppress the large spikes caused by the McFarland compensator. The phase errors of the three predictors are not significant. The adaptive predictor yields greater gain errors than the McFarland predictor for short delays (96 and 138 ms), but shows smaller errors for long delays (186 and 282 ms). The advantage of the adaptive predictor becomes more obvious for a longer time delay. Conversely, the state space predictor results in substantially smaller gain error than the other two predictors for all four delay cases.
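The stochastic-approximation flavor of the adaptive predictor can be conveyed with a plain normalized-LMS k-step-ahead predictor, sketched below on a synthetic control-input-like signal; the sample rate, horizon, and filter length are illustrative, and the sketch does not reproduce the Kalman-based or state-space compensators of the report.

import numpy as np

# Normalized-LMS k-step-ahead predictor, a simple stochastic-approximation stand-in
# for the adaptive compensators discussed above. Signal, rates, and gains are made up.
rng = np.random.default_rng(8)
dt, k, N, mu = 1 / 60, 6, 8, 0.05          # 60 Hz frames, 100 ms horizon (6 frames), 8 taps
t = np.arange(0.0, 60.0, dt)
s = np.sin(2*np.pi*0.4*t) + 0.3*np.sin(2*np.pi*1.1*t) + 0.01*rng.normal(size=t.size)

w = np.zeros(N)
pred = np.zeros_like(s)
for n in range(N + k - 1, s.size):
    x_old = s[n - k - N + 1:n - k + 1][::-1]        # N samples ending k frames ago
    e = s[n] - w @ x_old                            # error of the k-step-ahead prediction
    w += mu * e * x_old / (x_old @ x_old + 1e-6)    # stochastic-approximation (NLMS) update
    x_now = s[n - N + 1:n + 1][::-1]
    pred[n] = w @ x_now                             # available now, estimates s[n + k]

n0 = 3000                                           # skip the adaptation transient
err = pred[n0:s.size - k] - s[n0 + k:]
print(f"RMS k-step prediction error after convergence: {np.sqrt(np.mean(err**2)):.3f}")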