Advanced analysis technique for the evaluation of linear alternators and linear motors
NASA Technical Reports Server (NTRS)
Holliday, Jeffrey C.
1995-01-01
A method for the mathematical analysis of linear alternator and linear motor devices and designs is described, and an example of its use is included. The technique seeks to surpass other methods of analysis by including more rigorous treatment of phenomena normally omitted or coarsely approximated such as eddy braking, non-linear material properties, and power losses generated within structures surrounding the device. The technique is broadly applicable to linear alternators and linear motors involving iron yoke structures and moving permanent magnets. The technique involves the application of Amperian current equivalents to the modeling of the moving permanent magnet components within a finite element formulation. The resulting steady state and transient mode field solutions can simultaneously account for the moving and static field sources within and around the device.
Li, Xi; Ke, Chongwei
2015-05-01
Esophagojejunal anastomosis techniques for digestive tract reconstruction in laparoscopic total gastrectomy fall into two categories: circular stapler anastomosis techniques and linear stapler anastomosis techniques. Circular stapler techniques include the manual anastomosis method, the purse-string instrument method, Hiki's improved special-anvil anastomosis technique, the transorally inserted anvil (OrVil(TM)), and the reverse puncture device technique. Linear stapler techniques include the side-to-side anastomosis technique and the Overlap side-to-side anastomosis technique. Esophagojejunal anastomosis offers a wide selection of technologies, each with its own strengths and corresponding limitations. This article reviews research progress in esophagojejunal anastomosis for laparoscopic total gastrectomy from two perspectives: the development of anastomosis technology and the selection among techniques.
ERIC Educational Resources Information Center
Nakhanu, Shikuku Beatrice; Musasia, Amadalo Maurice
2015-01-01
The topic Linear Programming is included in the compulsory Kenyan secondary school mathematics curriculum at form four. The topic provides skills for determining best outcomes in a given mathematical model involving some linear relationship. This technique has found application in business, economics as well as various engineering fields. Yet many…
A technique using a nonlinear helicopter model for determining trims and derivatives
NASA Technical Reports Server (NTRS)
Ostroff, A. J.; Downing, D. R.; Rood, W. J.
1976-01-01
A technique is described for determining the trims and quasi-static derivatives of a flight vehicle for use in a linear perturbation model; both the coupled and uncoupled forms of the linear perturbation model are included. Since this technique requires a nonlinear vehicle model, detailed equations with constants and nonlinear functions for the CH-47B tandem rotor helicopter are presented. Tables of trims and derivatives are included for airspeeds between -40 and 160 knots and rates of descent between + or - 10.16 m/sec (+ or - 2000 ft/min). As a verification, the calculated and referenced values of comparable trims, derivatives, and linear model poles are shown to have acceptable agreement.
NASA Technical Reports Server (NTRS)
Dieudonne, J. E.
1978-01-01
A numerical technique was developed which generates linear perturbation models from nonlinear aircraft vehicle simulations. The technique is very general and can be applied to simulations of any system that is described by nonlinear differential equations. The computer program used to generate these models is discussed, with emphasis placed on generation of the Jacobian matrices, calculation of the coefficients needed for solving the perturbation model, and generation of the solution of the linear differential equations. An example application of the technique to a nonlinear model of the NASA terminal configured vehicle is included.
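The core of such a technique is numerical evaluation of the Jacobian matrices of the nonlinear state equations about an operating point. A minimal sketch in Python, using a pendulum as a stand-in plant (the NASA program and the terminal configured vehicle model are not reproduced here):

```python
import numpy as np

def numerical_jacobian(f, x0, eps=1e-6):
    """Central-difference Jacobian of f about x0 (generic sketch; the
    actual NASA program is not reproduced here)."""
    x0 = np.asarray(x0, dtype=float)
    fx = np.asarray(f(x0))
    J = np.zeros((fx.size, x0.size))
    for i in range(x0.size):
        step = np.zeros_like(x0)
        step[i] = eps
        J[:, i] = (np.asarray(f(x0 + step)) - np.asarray(f(x0 - step))) / (2 * eps)
    return J

# Nonlinear pendulum, state x = [theta, theta_dot]
f = lambda x: np.array([x[1], -9.81 * np.sin(x[0])])
A = numerical_jacobian(f, [0.0, 0.0])  # linear perturbation model about the trim
```

The same routine applies to any system described by nonlinear differential equations; only the function `f` changes.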
Numerical solution methods for viscoelastic orthotropic materials
NASA Technical Reports Server (NTRS)
Gramoll, K. C.; Dillard, D. A.; Brinson, H. F.
1988-01-01
Numerical solution methods for viscoelastic orthotropic materials, specifically fiber reinforced composite materials, are examined. The methods include classical lamination theory using time increments, direct solution of the Volterra integral, Zienkiewicz's linear Prony series method, and a new method called the Nonlinear Differential Equation Method (NDEM) which uses a nonlinear Prony series. The criteria used for comparison of the various methods include the stability of the solution technique, time step size stability, computer solution time length, and computer memory storage. The Volterra integral allowed the implementation of higher order solution techniques but had difficulties solving singular and weakly singular compliance functions. The Zienkiewicz solution technique, which requires the viscoelastic response to be modeled by a Prony series, works well for linear viscoelastic isotropic materials and small time steps. The new method, NDEM, uses a modified Prony series which allows nonlinear stress effects to be included and can be used with orthotropic nonlinear viscoelastic materials. The NDEM technique is shown to be accurate and stable for both linear and nonlinear conditions with minimal computer time.
Shokouhi, Parisa; Rivière, Jacques; Lake, Colton R; Le Bas, Pierre-Yves; Ulrich, T J
2017-11-01
The use of nonlinear acoustic techniques in solids consists in measuring wave distortion arising from compliant features such as cracks, soft intergrain bonds and dislocations. As such, they provide very powerful nondestructive tools to monitor the onset of damage within materials. In particular, a recent technique called dynamic acousto-elasticity testing (DAET) gives unprecedented details on the nonlinear elastic response of materials (classical and non-classical nonlinear features including hysteresis, transient elastic softening and slow relaxation). Here, we provide a comprehensive set of linear and nonlinear acoustic responses on two prismatic concrete specimens; one intact and one pre-compressed to about 70% of its ultimate strength. The two linear techniques used are Ultrasonic Pulse Velocity (UPV) and Resonance Ultrasound Spectroscopy (RUS), while the nonlinear ones include DAET (fast and slow dynamics) as well as Nonlinear Resonance Ultrasound Spectroscopy (NRUS). In addition, the DAET results correspond to a configuration where the (incoherent) coda portion of the ultrasonic record is used to probe the samples, as opposed to a (coherent) first arrival wave in standard DAET tests. We find that the two visually identical specimens are indistinguishable based on parameters measured by linear techniques (UPV and RUS). On the contrary, the extracted nonlinear parameters from NRUS and DAET are consistent and orders of magnitude greater for the damaged specimen than those for the intact one. This compiled set of linear and nonlinear ultrasonic testing data including the most advanced technique (DAET) provides a benchmark comparison for their use in the field of material characterization.
SUBOPT: A CAD program for suboptimal linear regulators
NASA Technical Reports Server (NTRS)
Fleming, P. J.
1985-01-01
An interactive software package which provides design solutions for both standard linear quadratic regulator (LQR) and suboptimal linear regulator problems is described. Intended for time-invariant continuous systems, the package is easily modified to include sampled-data systems. LQR designs are obtained by established techniques while the large class of suboptimal problems containing controller and/or performance index options is solved using a robust gradient minimization technique. Numerical examples demonstrate features of the package and recent developments are described.
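The standard LQR half of such a package reduces to solving an algebraic Riccati equation. A minimal sketch using SciPy on a toy double-integrator plant (an illustrative system, not one of SUBOPT's cases):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy time-invariant continuous plant: a double integrator
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)            # state penalty
R = np.array([[1.0]])    # control penalty

# Standard LQR design: solve the algebraic Riccati equation, then u = -K x
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)
```

For this plant the optimal gain has the known closed form K = [1, sqrt(3)], and the closed loop A - BK is stable, which makes the sketch easy to check.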
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phinney, N.
The SLAC Linear Collider (SLC) is the first example of an entirely new type of lepton collider. Many years of effort were required to develop the understanding and techniques needed to approach design luminosity. This paper discusses some of the key issues and problems encountered in producing a working linear collider. These include the polarized source, techniques for emittance preservation, extensive feedback systems, and refinements in beam optimization in the final focus. The SLC experience has been invaluable for testing concepts and developing designs for a future linear collider.
Advanced statistics: linear regression, part I: simple linear regression.
Marill, Keith A
2004-01-01
Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.
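The method of least squares mentioned above has a simple closed form for a single predictor. A sketch in Python on a small hypothetical dataset (the values are made up, not taken from the article's clinical examples):

```python
import numpy as np

# Small hypothetical dataset: single predictor x, single outcome y
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.9])

# Method of least squares, in closed form:
#   slope b1 = S_xy / S_xx, intercept b0 = ybar - b1*xbar
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
```

The fitted line here is y ≈ 0.10 + 1.98 x; the same two formulas underlie every simple linear regression, regardless of the software used.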
A method for nonlinear exponential regression analysis
NASA Technical Reports Server (NTRS)
Junkin, B. G.
1971-01-01
A computer-oriented technique is presented for performing a nonlinear exponential regression analysis on decay-type experimental data. The technique involves the least squares procedure wherein the nonlinear problem is linearized by expansion in a Taylor series. A linear curve fitting procedure for determining the initial nominal estimates for the unknown exponential model parameters is included as an integral part of the technique. A correction matrix is derived and applied to the nominal estimate to produce an improved set of model parameters. The solution cycle is repeated until some predetermined criterion is satisfied.
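The iteration described is, in modern terms, a Gauss-Newton loop. A hedged sketch assuming the decay model y ≈ a·exp(-k·t) and synthetic data (model form and values are illustrative, not taken from the report): each pass linearizes the model in a Taylor series about the current estimate and solves a linear least-squares problem for the correction.

```python
import numpy as np

def gauss_newton_exp(t, y, a0, k0, iters=30):
    """Iteratively fit y ~ a*exp(-k*t): linearize about the current
    estimate (first-order Taylor expansion) and least-squares solve
    for the parameter correction, as in the described procedure."""
    a, k = a0, k0
    for _ in range(iters):
        model = a * np.exp(-k * t)
        J = np.column_stack([np.exp(-k * t),   # d(model)/da
                             -t * model])      # d(model)/dk
        delta, *_ = np.linalg.lstsq(J, y - model, rcond=None)
        a, k = a + delta[0], k + delta[1]
    return a, k

# Synthetic decay data with known parameters (a=3, k=0.5)
t = np.linspace(0.0, 4.0, 25)
y = 3.0 * np.exp(-0.5 * t)
a, k = gauss_newton_exp(t, y, a0=2.0, k0=0.3)
```

In practice the loop would terminate on a convergence criterion rather than a fixed iteration count, mirroring the report's "predetermined criterion".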
Linear and nonlinear regression techniques for simultaneous and proportional myoelectric control.
Hahne, J M; Biessmann, F; Jiang, N; Rehbaum, H; Farina, D; Meinecke, F C; Muller, K-R; Parra, L C
2014-03-01
In recent years the number of active controllable joints in electrically powered hand-prostheses has increased significantly. However, the control strategies for these devices in current clinical use are inadequate as they require separate and sequential control of each degree-of-freedom (DoF). In this study we systematically compare linear and nonlinear regression techniques for an independent, simultaneous and proportional myoelectric control of wrist movements with two DoF. These techniques include linear regression, mixture of linear experts (ME), multilayer-perceptron, and kernel ridge regression (KRR). They are investigated offline with electro-myographic signals acquired from ten able-bodied subjects and one person with congenital upper limb deficiency. The control accuracy is reported as a function of the number of electrodes and the amount and diversity of training data providing guidance for the requirements in clinical practice. The results showed that KRR, a nonparametric statistical learning method, outperformed the other methods. However, simple transformations in the feature space could linearize the problem, so that linear models could achieve similar performance as KRR at much lower computational costs. In particular, ME, a physiologically inspired extension of linear regression, represents a promising candidate for the next generation of prosthetic devices.
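To make the comparison concrete, here is a minimal NumPy sketch of kernel ridge regression with an RBF kernel on a synthetic one-dimensional problem; the study's EMG features, electrode counts, and hyperparameters are not reproduced, and the data are made up to show why a nonlinear regressor can beat a plain linear fit:

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    """Gaussian (RBF) kernel matrix between row-sample arrays A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_fit_predict(X, y, Xq, gamma=1.0, lam=1e-3):
    """Kernel ridge regression: solve (K + lam*I) alpha = y, predict at Xq."""
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return rbf_kernel(Xq, X, gamma) @ alpha

# Synthetic nonlinear target: KRR captures it, a plain linear fit cannot
X = np.linspace(-3.0, 3.0, 40)[:, None]
y = np.sin(X[:, 0])
err_krr = np.max(np.abs(krr_fit_predict(X, y, X) - y))
err_lin = np.max(np.abs(np.polyval(np.polyfit(X[:, 0], y, 1), X[:, 0]) - y))
```

The abstract's point about feature transformations also shows up here: mapping x to nonlinear features before a linear fit would close much of the gap at far lower cost than the kernel solve.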
Brunton, Steven L; Brunton, Bingni W; Proctor, Joshua L; Kutz, J Nathan
2016-01-01
In this work, we explore finite-dimensional linear representations of nonlinear dynamical systems by restricting the Koopman operator to an invariant subspace spanned by specially chosen observable functions. The Koopman operator is an infinite-dimensional linear operator that evolves functions of the state of a dynamical system. Dominant terms in the Koopman expansion are typically computed using dynamic mode decomposition (DMD). DMD uses linear measurements of the state variables, and it has recently been shown that this may be too restrictive for nonlinear systems. Choosing the right nonlinear observable functions to form an invariant subspace where it is possible to obtain linear reduced-order models, especially those that are useful for control, is an open challenge. Here, we investigate the choice of observable functions for Koopman analysis that enable the use of optimal linear control techniques on nonlinear problems. First, to include a cost on the state of the system, as in linear quadratic regulator (LQR) control, it is helpful to include these states in the observable subspace, as in DMD. However, we find that this is only possible when there is a single isolated fixed point, as systems with multiple fixed points or more complicated attractors are not globally topologically conjugate to a finite-dimensional linear system, and cannot be represented by a finite-dimensional linear Koopman subspace that includes the state. We then present a data-driven strategy to identify relevant observable functions for Koopman analysis by leveraging a new algorithm to determine relevant terms in a dynamical system by ℓ1-regularized regression of the data in a nonlinear function space; we also show how this algorithm is related to DMD. Finally, we demonstrate the usefulness of nonlinear observable subspaces in the design of Koopman operator optimal control laws for fully nonlinear systems using techniques from linear optimal control.
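Since DMD is the workhorse mentioned for computing dominant Koopman terms, a compact exact-DMD sketch may help; it is run on a known 2 x 2 linear map (a toy example, not one of the paper's systems), where DMD should recover the map's eigenvalues exactly:

```python
import numpy as np

def dmd(X, Xp, r):
    """Exact DMD of rank r: best-fit linear operator A with Xp ~ A X."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
    Atilde = U.conj().T @ Xp @ Vh.conj().T @ np.diag(1.0 / s)  # projected operator
    eigvals, W = np.linalg.eig(Atilde)
    modes = Xp @ Vh.conj().T @ np.diag(1.0 / s) @ W            # DMD modes
    return eigvals, modes

# Snapshots of a known linear map with eigenvalues 0.9 and 0.8
A_true = np.array([[0.9, 0.2],
                   [0.0, 0.8]])
x = np.array([1.0, 1.0])
snaps = [x]
for _ in range(10):
    x = A_true @ x
    snaps.append(x)
S = np.array(snaps).T                          # states as columns
eigvals, modes = dmd(S[:, :-1], S[:, 1:], r=2)
```

For a genuinely nonlinear system, the paper's point is that the snapshot rows would be augmented with well-chosen nonlinear observables of the state before applying the same decomposition.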
A Comparison of Multivariable Control Design Techniques for a Turbofan Engine Control
NASA Technical Reports Server (NTRS)
Garg, Sanjay; Watts, Stephen R.
1995-01-01
This paper compares two previously published design procedures for two different multivariable control design techniques for application to a linear engine model of a jet engine. The two multivariable control design techniques compared were the Linear Quadratic Gaussian with Loop Transfer Recovery (LQG/LTR) and the H-Infinity synthesis. The two control design techniques were used with specific previously published design procedures to synthesize controls which would provide equivalent closed loop frequency response for the primary control loops while assuring adequate loop decoupling. The resulting controllers were then reduced in order to minimize the programming and data storage requirements for a typical implementation. The reduced order linear controllers designed by each method were combined with the linear model of an advanced turbofan engine and the system performance was evaluated for the continuous linear system. Included in the performance analysis are the resulting frequency and transient responses as well as actuator usage and rate capability for each design method. The controls were also analyzed for robustness with respect to structured uncertainties in the unmodeled system dynamics. The two controls were then compared for performance capability and hardware implementation issues.
A Whirlwind Tour of Computational Geometry.
ERIC Educational Resources Information Center
Graham, Ron; Yao, Frances
1990-01-01
Described is computational geometry, which uses concepts and results from classical geometry, topology, and combinatorics, as well as standard algorithmic techniques such as sorting and searching, graph manipulations, and linear programming. Also included are special techniques and paradigms. (KR)
Calon, Tim G A; van Hoof, Marc; van den Berge, Herbert; de Bruijn, Arthur J G; van Tongeren, Joost; Hof, Janny R; Brunings, Jan Wouter; Jonhede, Sofia; Anteunis, Lucien J C; Janssen, Miranda; Joore, Manuela A; Holmberg, Marcus; Johansson, Martin L; Stokroos, Robert J
2016-11-09
Over the last years, less invasive surgical techniques with soft tissue preservation for bone conduction hearing implants (BCHI) have been introduced such as the linear incision technique combined with a punch. Results using this technique seem favorable in terms of rate of peri-abutment dermatitis (PAD), esthetics, and preservation of skin sensibility. Recently, a new standardized surgical technique for BCHI placement, the Minimally Invasive Ponto Surgery (MIPS) technique has been developed by Oticon Medical AB (Askim, Sweden). This technique aims to standardize surgery by using a novel surgical instrumentation kit and minimize soft tissue trauma. A multicenter randomized controlled trial is designed to compare the MIPS technique to the linear incision technique with soft tissue preservation. The primary investigation center is Maastricht University Medical Center. Sixty-two participants will be included with a 2-year follow-up period. Parameters are introduced to quantify factors such as loss of skin sensibility, dehiscence of the skin next to the abutment, skin overgrowth, and cosmetic results. A new type of sampling method is incorporated to aid in the estimation of complications. To gain further understanding of PAD, swabs and skin biopsies are collected during follow-up visits for evaluation of the bacterial profile and inflammatory cytokine expression. The primary objective of the study is to compare the incidence of PAD during the first 3 months after BCHI placement. Secondary objectives include the assessment of parameters related to surgery, wound healing, pain, loss of sensibility of the skin around the implant, implant extrusion rate, implant stability measurements, dehiscence of the skin next to the abutment, and esthetic appeal. Tertiary objectives include assessment of other factors related to PAD and a health economic evaluation. 
This is the first trial to compare the recently developed MIPS technique to the linear incision technique with soft tissue preservation for BCHI surgery. Newly introduced parameters and sampling method will aid in the prediction of results and complications after BCHI placement. Registered at the CCMO register in the Netherlands on 24 November 2014: NL50072.068.14 . Retrospectively registered on 21 April 2015 at ClinicalTrials.gov: NCT02438618 . This trial is sponsored by Oticon Medical AB.
The double stapling technique for low anterior resection. Results, modifications, and observations.
Griffen, F D; Knight, C D; Whitaker, J M; Knight, C D
1990-01-01
Since the introduction of the end-to-end anastomosis (EEA) stapler for rectal reconstruction, we have used a modification of the conventional technique in which the lower rectal segment is closed with the linear stapler (TA-55) and the anastomosis is performed using the EEA instrument across the linear staple line (double stapling technique). Our experience with this procedure includes stapled colorectal anastomoses in 75 patients and is the basis for the report. This review presents the details and advantages of the technique and the results. Complications include two patients with anastomotic leak (2.7%), and two with stenosis that required treatment (2.7%). Protective colostomy was not done in this series. There were no deaths. Our experience and that of others suggests that this modification of the EEA technique can allow a lower anastomosis in some patients, and that it can be done with greater safety and facility. PMID:2357137
NASA Technical Reports Server (NTRS)
Macdonald, H.; Waite, W.; Elachi, C.; Babcock, R.; Konig, R.; Gattis, J.; Borengasser, M.; Tolman, D.
1980-01-01
Imaging radar was evaluated as an adjunct to conventional petroleum exploration techniques, especially linear mapping. Linear features were mapped from several remote sensor data sources including stereo photography, enhanced LANDSAT imagery, SLAR radar imagery, enhanced SAR radar imagery, and SAR radar/LANDSAT combinations. Linear feature maps were compared with surface joint data, subsurface and geophysical data, and gas production in the Arkansas part of the Arkoma basin. The best LANDSAT enhanced product for linear detection was found to be a winter scene, band 7, uniform distribution stretch. Of the individual SAR data products, the VH (cross polarized) SAR radar mosaic provides for detection of most linears; however, none of the SAR enhancements is significantly better than the others. Radar/LANDSAT merges may provide better linear detection than a single sensor mapping mode, but because of operator variability, the results are inconclusive. Radar/LANDSAT combinations appear promising as an optimum linear mapping technique, if the advantages and disadvantages of each remote sensor are considered.
GVE-Based Dynamics and Control for Formation Flying Spacecraft
NASA Technical Reports Server (NTRS)
Breger, Louis; How, Jonathan P.
2004-01-01
Formation flying is an enabling technology for many future space missions. This paper presents extensions to the equations of relative motion expressed in Keplerian orbital elements, including new initialization techniques for general formation configurations. A new linear time-varying form of the equations of relative motion is developed from Gauss Variational Equations and used in a model predictive controller. The linearizing assumptions for these equations are shown to be consistent with typical formation flying scenarios. Several linear, convex initialization techniques are presented, as well as a general, decentralized method for coordinating a tetrahedral formation using differential orbital elements. Control methods are validated using a commercial numerical propagator.
Analysis technique for controlling system wavefront error with active/adaptive optics
NASA Astrophysics Data System (ADS)
Genberg, Victor L.; Michels, Gregory J.
2017-08-01
The ultimate goal of an active mirror system is to control system level wavefront error (WFE). In the past, the use of this technique was limited by the difficulty of obtaining a linear optics model. In this paper, an automated method for controlling system level WFE using a linear optics model is presented. An error estimate is included in the analysis output for both surface error disturbance fitting and actuator influence function fitting. To control adaptive optics, the technique has been extended to write system WFE in state space matrix form. The technique is demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.
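Given a linear optics model, choosing actuator commands to cancel measured wavefront error reduces to a linear least-squares problem. The sketch below uses a random influence matrix purely as a placeholder for the disturbance and actuator influence functions a tool such as SigFit would produce:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder linear optics model: column j of A is the wavefront-error
# response to a unit command on actuator j (random stand-in values).
A = rng.normal(size=(100, 5))    # 100 WFE sample points, 5 actuators
w = rng.normal(size=100)         # measured wavefront error

# Least-squares actuator commands that best cancel w: minimize ||w + A x||
x_cmd, *_ = np.linalg.lstsq(A, -w, rcond=None)
residual = w + A @ x_cmd         # corrected (residual) wavefront error
```

The residual here is exactly the fitting-error estimate the abstract refers to: it is the part of the disturbance that lies outside the span of the actuator influence functions.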
Correlation and simple linear regression.
Eberly, Lynn E
2007-01-01
This chapter highlights important steps in using correlation and simple linear regression to address scientific questions about the association of two continuous variables with each other. These steps include estimation and inference, assessing model fit, the connection between regression and ANOVA, and study design. Examples in microbiology are used throughout. This chapter provides a framework that is helpful in understanding more complex statistical techniques, such as multiple linear regression, linear mixed effects models, logistic regression, and proportional hazards regression.
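A small sketch of the connection between correlation and simple linear regression highlighted above: for a single predictor, the squared Pearson correlation equals the regression R-squared (the data values below are made up for illustration):

```python
import numpy as np

# Hypothetical paired measurements of two continuous variables
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.0, 2.9, 4.2, 4.8, 6.1, 7.0])

r = np.corrcoef(x, y)[0, 1]                      # Pearson correlation

# Simple linear regression by least squares
b1 = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
b0 = y.mean() - b1 * x.mean()
ss_res = np.sum((y - (b0 + b1 * x)) ** 2)
R2 = 1.0 - ss_res / np.sum((y - y.mean()) ** 2)  # fraction of variance explained
```

This identity (r squared equals R-squared) holds only for simple linear regression with an intercept; with multiple predictors, R-squared generalizes while the pairwise correlation does not.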
NASA Astrophysics Data System (ADS)
Kaplan, Melike; Hosseini, Kamyar; Samadani, Farzan; Raza, Nauman
2018-07-01
A wide range of problems in different fields of the applied sciences, especially non-linear optics, is described by non-linear Schrödinger's equations (NLSEs). In the present paper, a specific type of NLSE known as the cubic-quintic non-linear Schrödinger's equation including an anti-cubic term has been studied. The generalized Kudryashov method along with a symbolic computation package has been employed to carry out this objective. As a consequence, a series of optical soliton solutions have formally been retrieved. It is corroborated that the generalized form of the Kudryashov method is a direct, effectual, and reliable technique to deal with various types of non-linear Schrödinger's equations.
NASA Astrophysics Data System (ADS)
Kuznetsov, N.; Maz'ya, V.; Vainberg, B.
2002-08-01
This book gives a self-contained and up-to-date account of mathematical results in the linear theory of water waves. The study of waves has many applications, including the prediction of behavior of floating bodies (ships, submarines, tension-leg platforms etc.), the calculation of wave-making resistance in naval architecture, and the description of wave patterns over bottom topography in geophysical hydrodynamics. The first section deals with time-harmonic waves. Three linear boundary value problems serve as the approximate mathematical models for these types of water waves. The next section uses a plethora of mathematical techniques in the investigation of these three problems. The techniques used in the book include integral equations based on Green's functions, various inequalities between the kinetic and potential energy and integral identities which are indispensable for proving the uniqueness theorems. The so-called inverse procedure is applied to constructing examples of non-uniqueness, usually referred to as 'trapped modes.'
Linear Programming and Its Application to Pattern Recognition Problems
NASA Technical Reports Server (NTRS)
Omalley, M. J.
1973-01-01
Linear programming and linear-programming-like techniques as applied to pattern recognition problems are discussed. Three relatively recent research articles on such applications are summarized. The main results of each paper are described, indicating the theoretical tools needed to obtain them. A synopsis of the author's comments is presented with regard to the applicability or non-applicability of his methods to particular problems, including computational results wherever given.
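One classic way linear programming enters pattern recognition is as a feasibility problem for a separating hyperplane. The sketch below is a generic formulation of that idea (not necessarily the formulation used in the summarized articles), with made-up, linearly separable point sets:

```python
import numpy as np
from scipy.optimize import linprog

# Toy linearly separable classes (hypothetical points)
pos = np.array([[2.0, 2.0], [3.0, 3.0], [2.0, 3.0]])
neg = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])

# Feasibility LP over z = [w1, w2, b]:
#   w.x + b >=  1 for every positive point
#   w.x + b <= -1 for every negative point
# Both are written as A_ub @ z <= b_ub for linprog.
A_ub = np.vstack([np.hstack([-pos, -np.ones((len(pos), 1))]),
                  np.hstack([neg, np.ones((len(neg), 1))])])
b_ub = -np.ones(len(pos) + len(neg))
res = linprog(c=[0.0, 0.0, 0.0], A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * 3)
w_vec, bias = res.x[:2], res.x[2]
```

With a zero objective, any feasible point is optimal; adding an objective (for example, minimizing total constraint violation via slack variables) handles the non-separable case.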
NASA Technical Reports Server (NTRS)
Clark, William S.; Hall, Kenneth C.
1994-01-01
A linearized Euler solver for calculating unsteady flows in turbomachinery blade rows due to both incident gusts and blade motion is presented. The model accounts for blade loading, blade geometry, shock motion, and wake motion. Assuming that the unsteadiness in the flow is small relative to the nonlinear mean solution, the unsteady Euler equations can be linearized about the mean flow. This yields a set of linear variable coefficient equations that describe the small amplitude harmonic motion of the fluid. These linear equations are then discretized on a computational grid and solved using standard numerical techniques. For transonic flows, however, one must use a linear discretization which is a conservative linearization of the non-linear discretized Euler equations to ensure that shock impulse loads are accurately captured. Other important features of this analysis include a continuously deforming grid which eliminates extrapolation errors and hence, increases accuracy, and a new numerically exact, nonreflecting far-field boundary condition treatment based on an eigenanalysis of the discretized equations. Computational results are presented which demonstrate the computational accuracy and efficiency of the method and demonstrate the effectiveness of the deforming grid, far-field nonreflecting boundary conditions, and shock capturing techniques. A comparison of the present unsteady flow predictions to other numerical, semi-analytical, and experimental methods shows excellent agreement. In addition, the linearized Euler method presented requires one or two orders-of-magnitude less computational time than traditional time marching techniques making the present method a viable design tool for aeroelastic analyses.
Ohashi, Manabu; Hiki, Naoki; Ida, Satoshi; Kumagai, Koshi; Nunobe, Souya; Sano, Takeshi
2018-05-21
Delta-shaped anastomosis is usually applied for an intracorporeal gastrogastrostomy in totally laparoscopic pylorus-preserving gastrectomy (TLPPG). However, the remnant stomach is slightly twisted around the anastomosis because it connects in side-to-side fashion. To realize an intracorporeal end-to-end gastrogastrostomy using an endoscopic linear stapler, we invented a novel method including a unique anastomotic technique. In this new approach, we first made small gastrotomies at the greater and lesser curvatures of the transected antrum and then pierced it using an endoscopic linear stapler. After the pierced antrum and the proximal remnant stomach were mechanically connected, the gastrotomies and stapling lines were transected using an endoscopic linear stapler, creating an intracorporeal end-to-end gastrogastrostomy. We have named this technique the "piercing method" because piercing the stomach is essential to its implementation. Between October 2015 and June 2017, 26 patients who had clinically early gastric cancer at the middle third of the stomach without clinical evidence of lymph node metastasis underwent TLPPG involving the novel method. The 26 patients successfully underwent an intracorporeal mechanical end-to-end gastrogastrostomy by the piercing method. The median operation time of the 26 patients was 272 min (range 209-357 min). With the exception of one case of gastric stasis, no problems associated with the piercing method were encountered during and after surgery. The piercing method can safely create an intracorporeal mechanical end-to-end gastrogastrostomy in TLPPG. Piercing the stomach using an endoscopic linear stapler is a new technique for gastrointestinal anastomosis. This method should be considered if the surgical aim is creation of an intracorporeal end-to-end gastrogastrostomy in TLPPG.
Conjoint Analysis: A Study of the Effects of Using Person Variables.
ERIC Educational Resources Information Center
Fraas, John W.; Newman, Isadore
Three statistical techniques--conjoint analysis, a multiple linear regression model, and a multiple linear regression model with a surrogate person variable--were used to estimate the relative importance of five university attributes for students in the process of selecting a college. The five attributes include: availability and variety of…
Analysis of periodically excited non-linear systems by a parametric continuation technique
NASA Astrophysics Data System (ADS)
Padmanabhan, C.; Singh, R.
1995-07-01
The dynamic behavior and frequency response of harmonically excited piecewise linear and/or non-linear systems have been the subject of several recent investigations. Most of the prior studies employed harmonic balance or Galerkin schemes, piecewise linear techniques, analog simulation and/or direct numerical integration (digital simulation). Such techniques are somewhat limited in their ability to predict all of the dynamic characteristics, including bifurcations leading to the occurrence of unstable, subharmonic, quasi-periodic and/or chaotic solutions. To overcome this problem, a parametric continuation scheme, based on the shooting method, is applied specifically to a periodically excited piecewise linear/non-linear system, in order to improve understanding as well as to obtain the complete dynamic response. Parameter regions exhibiting bifurcations to harmonic, subharmonic or quasi-periodic solutions are obtained quite efficiently and systematically. Unlike other techniques, the proposed scheme can follow period-doubling bifurcations, and with some modifications obtain stable quasi-periodic solutions and their bifurcations. This knowledge is essential in establishing conditions for the occurrence of chaotic oscillations in any non-linear system. The method is first validated through the Duffing oscillator example, the solutions to which are also obtained by conventional one-term harmonic balance and perturbation methods. The second example deals with a clearance non-linearity problem for both harmonic and periodic excitations. Predictions from the proposed scheme match well with available analog simulation data as well as with multi-term harmonic balance results. Potential savings in computational time over direct numerical integration are demonstrated for some of the example cases. Also, this work has filled in some of the solution regimes for an impact pair, which were missed previously in the literature.
Finally, one main limitation associated with the proposed procedure is discussed.
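The shooting approach at the core of the continuation scheme can be illustrated on the Duffing example mentioned above. The sketch below (parameter values are illustrative assumptions, not taken from the paper) finds a periodic solution of the forced Duffing oscillator by solving for a fixed point of the period map:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

# Forced Duffing oscillator: x'' + d*x' + a*x + b*x^3 = g*cos(w*t)
d, a, b, g, w = 0.2, 1.0, 1.0, 0.2, 1.2
T = 2 * np.pi / w  # forcing period

def rhs(t, s):
    x, v = s
    return [v, -d * v - a * x - b * x**3 + g * np.cos(w * t)]

def period_map(s0):
    """Integrate one forcing period; fixed points of this map are
    the periodic solutions sought by the shooting method."""
    sol = solve_ivp(rhs, (0.0, T), s0, rtol=1e-10, atol=1e-12)
    return sol.y[:, -1]

# Crude initial guess: let transients decay onto the attractor first.
s = np.zeros(2)
for _ in range(50):
    s = period_map(s)

# Shooting: solve period_map(s0) - s0 = 0 for the periodic orbit.
s_star = fsolve(lambda s0: period_map(s0) - s0, s)
residual = np.linalg.norm(period_map(s_star) - s_star)
print("periodic initial condition:", s_star, "residual:", residual)
```

In a continuation setting, this fixed-point solve would be repeated while a parameter (e.g. the forcing frequency) is stepped, reusing each solution as the next initial guess.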
Exploring the CAESAR database using dimensionality reduction techniques
NASA Astrophysics Data System (ADS)
Mendoza-Schrock, Olga; Raymer, Michael L.
2012-06-01
The Civilian American and European Surface Anthropometry Resource (CAESAR) database containing over 40 anthropometric measurements on over 4000 humans has been extensively explored for pattern recognition and classification purposes using the raw, original data [1-4]. However, some of the anthropometric variables would be impossible to collect in an uncontrolled environment. Here, we explore the use of dimensionality reduction methods in concert with a variety of classification algorithms for gender classification using only those variables that are readily observable in an uncontrolled environment. Several dimensionality reduction techniques are employed to learn the underlying structure of the data. These techniques include linear projections such as the classical Principal Components Analysis (PCA) and non-linear (manifold learning) techniques, such as Diffusion Maps and the Isomap technique. This paper briefly describes all three techniques, and compares three different classifiers, Naïve Bayes, Adaboost, and Support Vector Machines (SVM), for gender classification in conjunction with each of these three dimensionality reduction approaches.
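As a rough illustration of the pipeline described above, the sketch below applies PCA (via SVD) followed by a minimal nearest-centroid classifier to synthetic two-class data; the data and the classifier are stand-ins for illustration, not the CAESAR variables or the classifiers compared in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for observable anthropometric variables:
# two classes separated along a few of 10 measurements.
n = 200
X0 = rng.normal(0.0, 1.0, (n, 10))
X1 = rng.normal(0.0, 1.0, (n, 10)) + np.r_[2.0, 2.0, np.zeros(8)]
X = np.vstack([X0, X1])
y = np.r_[np.zeros(n), np.ones(n)]

# PCA via SVD of the mean-centered data: keep the top 2 components.
mu = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
Z = (X - mu) @ Vt[:2].T          # projection onto the top-2 PCs

# Minimal classifier on the reduced space: nearest class centroid.
c0, c1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
pred = (np.linalg.norm(Z - c1, axis=1)
        < np.linalg.norm(Z - c0, axis=1)).astype(float)
accuracy = (pred == y).mean()
print("training accuracy on 2 PCs:", accuracy)
```

Manifold methods such as Isomap or Diffusion Maps would replace the linear projection step while the downstream classifier stays the same.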
Free-piston engine linear generator for hybrid vehicles modeling study
NASA Astrophysics Data System (ADS)
Callahan, T. J.; Ingram, S. K.
1995-05-01
Development of a free piston engine linear generator was investigated for use as an auxiliary power unit for a hybrid electric vehicle. The main focus of the program was to develop an efficient linear generator concept to convert the piston motion directly into electrical power. Computer modeling techniques were used to evaluate five different designs for linear generators. These designs included permanent magnet generators, reluctance generators, linear DC generators, and two and three-coil induction generators. The efficiency of the linear generator was highly dependent on the design concept. The two-coil induction generator was determined to be the best design, with an efficiency of approximately 90 percent.
Brunton, Steven L.; Brunton, Bingni W.; Proctor, Joshua L.; Kutz, J. Nathan
2016-01-01
In this work, we explore finite-dimensional linear representations of nonlinear dynamical systems by restricting the Koopman operator to an invariant subspace spanned by specially chosen observable functions. The Koopman operator is an infinite-dimensional linear operator that evolves functions of the state of a dynamical system. Dominant terms in the Koopman expansion are typically computed using dynamic mode decomposition (DMD). DMD uses linear measurements of the state variables, and it has recently been shown that this may be too restrictive for nonlinear systems. Choosing the right nonlinear observable functions to form an invariant subspace where it is possible to obtain linear reduced-order models, especially those that are useful for control, is an open challenge. Here, we investigate the choice of observable functions for Koopman analysis that enable the use of optimal linear control techniques on nonlinear problems. First, to include a cost on the state of the system, as in linear quadratic regulator (LQR) control, it is helpful to include these states in the observable subspace, as in DMD. However, we find that this is only possible when there is a single isolated fixed point, as systems with multiple fixed points or more complicated attractors are not globally topologically conjugate to a finite-dimensional linear system, and cannot be represented by a finite-dimensional linear Koopman subspace that includes the state. We then present a data-driven strategy to identify relevant observable functions for Koopman analysis by leveraging a new algorithm to determine relevant terms in a dynamical system by ℓ1-regularized regression of the data in a nonlinear function space; we also show how this algorithm is related to DMD. Finally, we demonstrate the usefulness of nonlinear observable subspaces in the design of Koopman operator optimal control laws for fully nonlinear systems using techniques from linear optimal control. PMID:26919740
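The DMD computation referred to above can be sketched in a few lines. Here the snapshot data come from a known linear system so the recovered eigenvalues can be checked; the system matrix is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hidden linear dynamics x_{k+1} = A x_k; DMD should recover eig(A)
# from the snapshot data alone.
A = np.array([[0.9, 0.2], [-0.1, 0.8]])
X = np.empty((2, 51))
X[:, 0] = rng.normal(size=2)
for k in range(50):
    X[:, k + 1] = A @ X[:, k]

X1, X2 = X[:, :-1], X[:, 1:]          # snapshot pairs
U, s, Vt = np.linalg.svd(X1, full_matrices=False)
r = 2                                  # truncation rank
# Projected operator whose eigenvalues approximate the Koopman/DMD spectrum.
Atilde = U[:, :r].T @ X2 @ Vt[:r].T @ np.diag(1.0 / s[:r])
dmd_eigs = np.sort_complex(np.linalg.eigvals(Atilde))
true_eigs = np.sort_complex(np.linalg.eigvals(A))
print("DMD eigenvalues:", dmd_eigs, "true:", true_eigs)
```

For a nonlinear system, the columns of `X` would instead contain nonlinear observable functions of the state, which is exactly the choice the paper investigates.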
NASA Technical Reports Server (NTRS)
Magnus, Alfred E.; Epton, Michael A.
1981-01-01
An outline of the derivation of the differential equation governing linear subsonic and supersonic potential flow is given. The use of Green's Theorem to obtain an integral equation over the boundary surface is discussed. The engineering techniques incorporated in the PAN AIR (Panel Aerodynamics) program (a discretization method which solves the integral equation for arbitrary first order boundary conditions) are then discussed in detail. Items discussed include the construction of the compressibility transformations, splining techniques, imposition of the boundary conditions, influence coefficient computation (including the concept of the finite part of an integral), computation of pressure coefficients, and computation of forces and moments.
The Linear Programming to evaluate the performance of Oral Health in Primary Care.
Colussi, Claudia Flemming; Calvo, Maria Cristina Marino; Freitas, Sergio Fernando Torres de
2013-01-01
To show the use of Linear Programming to evaluate the performance of Oral Health in Primary Care. This study used data from the 19 municipalities of the state of Santa Catarina that participated in the 2009 state evaluation and have more than 50,000 inhabitants. A total of 40 indicators were evaluated, calculated using Microsoft Excel 2007, and converted to the interval [0, 1] in ascending order (one indicating the best situation and zero the worst). Applying the Linear Programming technique, municipalities were assessed and compared according to a performance curve named the "estimated quality frontier". Municipalities on the frontier were classified as excellent. Indicators were aggregated into synthetic indicators. The majority of municipalities not on the quality frontier (values different from 1.0) had values lower than 0.5, indicating poor performance. The model applied to the municipalities of Santa Catarina assessed municipal management and local priorities rather than goals imposed by pre-defined parameters. In the final analysis three municipalities were included in the "perceived quality frontier". The Linear Programming technique made it possible to identify gaps that must be addressed by city managers to enhance the actions taken. It also made it possible to observe each municipality's performance and compare results among similar municipalities.
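A minimal sketch of a frontier-style linear program is given below, assuming a DEA-type (Data Envelopment Analysis) input-oriented efficiency model, since the abstract does not specify the exact formulation; the municipality data are invented toy numbers:

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: 4 municipalities, 1 input (spending) and 1 output
# (synthetic oral-health indicator); values are illustrative only.
inputs = np.array([[10.0], [12.0], [8.0], [15.0]])     # shape (n, m)
outputs = np.array([[100.0], [90.0], [80.0], [95.0]])  # shape (n, p)
n = len(inputs)

def efficiency(k):
    """Input-oriented CCR efficiency of unit k: minimize theta such that
    a nonnegative combination of peers uses <= theta * inputs_k while
    producing >= outputs_k.  Units scoring 1.0 lie on the frontier."""
    # decision variables: z = [theta, lambda_1 .. lambda_n]
    c = np.r_[1.0, np.zeros(n)]
    A_ub = np.block([
        [-inputs[k:k + 1].T, inputs.T],                    # peer inputs <= theta*x_k
        [np.zeros((outputs.shape[1], 1)), -outputs.T],     # peer outputs >= y_k
    ])
    b_ub = np.r_[np.zeros(inputs.shape[1]), -outputs[k]]
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[0]

scores = np.array([efficiency(k) for k in range(n)])
print("efficiency scores:", scores)
```

With these toy numbers, the two municipalities with the best output-per-input ratio score 1.0 (the frontier) and the others score proportionally below it.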
NASA Astrophysics Data System (ADS)
Lovejoy, McKenna R.; Wickert, Mark A.
2017-05-01
A known problem with infrared imaging devices is their non-uniformity. This non-uniformity is the result of dark current, amplifier mismatch as well as the individual photo response of the detectors. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration techniques use linear, or piecewise linear models to approximate the non-uniform gain and offset characteristics as well as the nonlinear response. Piecewise linear models perform better than the one- and two-point models, but in many cases require storing an unmanageable number of correction coefficients. Most nonlinear NUC algorithms use a second order polynomial to improve performance and allow for a minimal number of stored coefficients. However, advances in technology now make higher order polynomial NUC algorithms feasible. This study comprehensively tests higher order polynomial NUC algorithms targeted at short wave infrared (SWIR) imagers. Using data collected from actual SWIR cameras, the nonlinear techniques and corresponding performance metrics are compared with current linear methods including the standard one- and two-point algorithms. Machine learning, including principal component analysis, is explored for identifying and replacing bad pixels. The data sets are analyzed and the impact of hardware implementation is discussed. Average floating point results show 30% less non-uniformity, in post-corrected data, when using a third order polynomial correction algorithm rather than a second order algorithm. To maximize overall performance, a trade-off analysis on polynomial order and coefficient precision is performed. Comprehensive testing, across multiple data sets, provides next generation model validation and performance benchmarks for higher order polynomial NUC methods.
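The higher-order polynomial correction idea can be sketched for a single simulated pixel; the response curve, noise level, and irradiance grid below are illustrative assumptions, not measured SWIR camera data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate one pixel whose response to uniform irradiance levels is
# mildly nonlinear; flat-field calibration fits a polynomial per pixel.
irradiance = np.linspace(0.1, 1.0, 12)                 # reference levels
raw = 0.8 * irradiance + 0.15 * irradiance**2 + 0.02   # pixel's actual output
raw_noisy = raw + rng.normal(0.0, 1e-4, raw.shape)

# Third-order polynomial correction: map raw counts back to the
# reference response.  Only 4 coefficients per pixel need to be stored.
coeffs = np.polyfit(raw_noisy, irradiance, deg=3)
corrected = np.polyval(coeffs, raw_noisy)

residual_nonuniformity = np.max(np.abs(corrected - irradiance))
print("max residual after 3rd-order NUC:", residual_nonuniformity)
```

A two-point (linear) correction would replace the cubic fit with `deg=1` and leave the quadratic term of the response uncorrected, which is the gap the higher-order methods close.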
The Computer in Educational Decision Making. An Introduction and Guide for School Administrators.
ERIC Educational Resources Information Center
Sanders, Susan; And Others
This text provides educational administrators with a working knowledge of the problem-solving techniques of PERT (planning, evaluation, and review technique), Linear Programming, Queueing Theory, and Simulation. The text includes an introduction to decision-making and operations research, four chapters consisting of in-depth explanations of each…
Making an Old Measurement Experiment Modern and Exciting!
ERIC Educational Resources Information Center
Schulze, Paul D.
1996-01-01
Presents a new approach for the determination of the temperature coefficient of resistance of a resistor and a thermistor. Advantages include teaching students how to linearize data in order to utilize least-squares techniques, continuously taking data over desired temperature range, using up-to-date data-acquisition techniques, teaching the use…
Analytical aids in land management planning
David R. Betters
1978-01-01
Quantitative techniques may be applied to aid in completing various phases of land management planning. Analytical procedures which have been used include a procedure for public involvement, PUBLIC; a matrix information generator, MAGE5; an allocation procedure, linear programming (LP); and an input-output economic analysis (EA). These techniques have proven useful in...
A General Linear Model Approach to Adjusting the Cumulative GPA.
ERIC Educational Resources Information Center
Young, John W.
A general linear model (GLM), using least-squares techniques, was used to develop a criterion measure to replace freshman year grade point average (GPA) in college admission predictive validity studies. Problems with the use of GPA include those associated with the combination of grades from different courses and disciplines into a single measure,…
SIMD Optimization of Linear Expressions for Programmable Graphics Hardware
Bajaj, Chandrajit; Ihm, Insung; Min, Jungki; Oh, Jinsang
2009-01-01
The increased programmability of graphics hardware allows efficient graphical processing unit (GPU) implementations of a wide range of general computations on commodity PCs. An important factor in such implementations is how to fully exploit the SIMD computing capacities offered by modern graphics processors. Linear expressions in the form of ȳ = Ax̄ + b̄, where A is a matrix, and x̄, ȳ and b̄ are vectors, constitute one of the most basic operations in many scientific computations. In this paper, we propose a SIMD code optimization technique that enables efficient shader codes to be generated for evaluating linear expressions. It is shown that performance can be improved considerably by efficiently packing arithmetic operations into four-wide SIMD instructions through reordering of the operations in linear expressions. We demonstrate that the presented technique can be used effectively for programming both vertex and pixel shaders for a variety of mathematical applications, including integrating differential equations and solving a sparse linear system of equations using iterative methods. PMID:19946569
NASA Astrophysics Data System (ADS)
Manzanares, Carlos; Diaz, Marlon; Barton, Ann; Nyaupane, Parashu R.
2017-06-01
The thermal lens technique is applied to vibrational overtone spectroscopy of solutions of naphthalene in n-hexane. The pump and probe thermal lens technique is found to be very sensitive for detecting samples at low concentrations (ppm) in transparent solvents. In this experiment, two different probe lasers were used: one at 488 nm and another at 568 nm. The C-H fifth vibrational overtone spectrum of benzene is detected at room temperature for different concentrations. A plot of normalized integrated intensity as a function of concentration of naphthalene in solution reveals a non-linear behavior at low concentrations when using the 488 nm probe and a linear behavior over the entire range of concentrations when using the 568 nm probe. The non-linearity cannot be explained assuming solvent enhancement at low concentrations. A two color absorption model that includes the simultaneous absorption of the pump and probe lasers could explain the enhanced magnitude and the non-linear behavior of the thermal lens signal. Other possible mechanisms will also be discussed.
The successes and future prospects of the linear antisense RNA amplification methodology.
Li, Jifen; Eberwine, James
2018-05-01
It has been over a quarter of a century since the introduction of the linear RNA amplification methodology known as antisense RNA (aRNA) amplification. Whereas most molecular biology techniques are rapidly replaced owing to the fast-moving nature of development in the field, the aRNA procedure has become a base that can be built upon through varied uses of the technology. The technique was originally developed to assess RNA populations from small amounts of starting material, including single cells, but over time its use has evolved to include the detection of various cellular entities such as proteins, RNA-binding-protein-associated cargoes, and genomic DNA. In this Perspective we detail the linear aRNA amplification procedure and its use in assessing various components of a cell's chemical phenotype. This procedure is particularly useful in efforts to multiplex the simultaneous detection of various cellular processes. These efforts are necessary to identify the quantitative chemical phenotype of cells that underlies cellular function.
NASA Astrophysics Data System (ADS)
Hosseini, K.; Ayati, Z.; Ansari, R.
2018-04-01
One specific class of non-linear evolution equations, known as the Tzitzéica-type equations, has received great attention from a group of researchers involved in non-linear science. In this article, new exact solutions of the Tzitzéica-type equations arising in non-linear optics, including the Tzitzéica, Dodd-Bullough-Mikhailov and Tzitzéica-Dodd-Bullough equations, are obtained using the expa function method. The integration technique actually suggests a useful and reliable method to extract new exact solutions of a wide range of non-linear evolution equations.
Advanced statistics: linear regression, part II: multiple linear regression.
Marill, Keith A
2004-01-01
The applications of simple linear regression in medical research are limited, because in most situations, there are multiple relevant predictor variables. Univariate statistical techniques such as simple linear regression use a single predictor variable, and they often may be mathematically correct but clinically misleading. Multiple linear regression is a mathematical technique used to model the relationship between multiple independent predictor variables and a single dependent outcome variable. It is used in medical research to model observational data, as well as in diagnostic and therapeutic studies in which the outcome is dependent on more than one factor. Although the technique generally is limited to data that can be expressed with a linear function, it benefits from a well-developed mathematical framework that yields unique solutions and exact confidence intervals for regression coefficients. Building on Part I of this series, this article acquaints the reader with some of the important concepts in multiple regression analysis. These include multicollinearity, interaction effects, and an expansion of the discussion of inference testing, leverage, and variable transformations to multivariate models. Examples from the first article in this series are expanded on using a primarily graphic, rather than mathematical, approach. The importance of the relationships among the predictor variables and the dependence of the multivariate model coefficients on the choice of these variables are stressed. Finally, concepts in regression model building are discussed.
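A minimal multiple linear regression sketch, assuming simulated data with two predictors, shows the coefficient estimates and classical confidence intervals discussed above:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated observational data: outcome depends on two predictors.
n = 500
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 2.0 + 1.5 * x1 - 0.7 * x2 + rng.normal(0.0, 0.5, n)

# Design matrix with an intercept column; solve by least squares.
X = np.column_stack([np.ones(n), x1, x2])
beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
print("intercept, b1, b2:", beta)

# Approximate 95% CIs from the classical covariance of the estimator.
resid = y - X @ beta
s2 = resid @ resid / (n - X.shape[1])
se = np.sqrt(np.diag(s2 * np.linalg.inv(X.T @ X)))
print("approx 95% CI half-widths:", 1.96 * se)
```

Multicollinearity, one of the concepts the article discusses, would show up here as a near-singular `X.T @ X` and inflated standard errors for the affected coefficients.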
NASA Technical Reports Server (NTRS)
Campbell, Joel F.; Lin, Bing; Nehrir, Amin R.; Harrison, F. Wallace; Obland, Michael D.; Ismail, Syed
2014-01-01
Global atmospheric carbon dioxide (CO2) measurements through the Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) Decadal Survey recommended space mission are critical for improving our understanding of CO2 sources and sinks. IM-CW (Intensity Modulated Continuous Wave) lidar techniques are investigated as a means of facilitating CO2 measurements from space to meet the ASCENDS science requirements. In previous laboratory and flight experiments we have successfully used linear swept frequency modulation to discriminate surface lidar returns from intermediate aerosol and cloud contamination. Furthermore, high accuracy and precision ranging to the surface as well as to the top of intermediate clouds, which is a requirement for the inversion of the CO2 column-mixing ratio from the instrument optical depth measurements, has been demonstrated with the linear swept frequency modulation technique. We are concurrently investigating advanced techniques to help improve the auto-correlation properties of the transmitted waveform implemented through physical hardware to make cloud rejection more robust in special restricted scenarios. Several different carrier based modulation techniques are compared including orthogonal linear swept, orthogonal non-linear swept, and Binary Phase Shift Keying (BPSK). Techniques are investigated that reduce or eliminate sidelobes. These techniques have excellent auto-correlation properties while possessing a finite bandwidth (by way of a new cyclic digital filter), which will reduce bias error in the presence of multiple scatterers. Our analyses show that the studied modulation techniques can increase the accuracy of CO2 column measurements from space. A comparison of various properties such as signal to noise ratio (SNR) and time-bandwidth product are discussed.
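The matched-filter ranging that underlies the linear swept modulation can be sketched as follows; the sample rate, sweep parameters, delay, and noise level are illustrative assumptions, not ASCENDS instrument values:

```python
import numpy as np

fs = 10_000.0                       # sample rate, Hz (illustrative)
t = np.arange(0, 0.1, 1 / fs)      # 0.1 s sweep, 1000 samples
f0, f1 = 100.0, 2000.0             # linear swept frequency range

# Linear FM (chirp) intensity-modulation waveform.
phase = 2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / t[-1] * t**2)
tx = np.cos(phase)

# Simulated surface return: circularly delayed, noisy copy.
delay = 137                         # samples of round-trip delay
rx = np.roll(tx, delay) + np.random.default_rng(4).normal(0, 0.5, tx.size)

# Matched filter via FFT cross-correlation; the peak lag gives the range.
corr = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(tx)))
est = int(np.argmax(np.abs(corr)))
print("estimated delay:", est, "samples")
```

An intermediate cloud would add a second, earlier correlation peak; the waveform design work described above aims to keep the sidelobes of each peak from biasing the other.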
Discovering Authorities and Hubs in Different Topological Web Graph Structures.
ERIC Educational Resources Information Center
Meghabghab, George
2002-01-01
Discussion of citation analysis on the Web considers Web hyperlinks as a source to analyze citations. Topics include basic graph theory applied to Web pages, including matrices, linear algebra, and Web topology; and hubs and authorities, including a search technique called HITS (Hyperlink Induced Topic Search). (Author/LRW)
Quasi-linear theory via the cumulant expansion approach
NASA Technical Reports Server (NTRS)
Jones, F. C.; Birmingham, T. J.
1974-01-01
The cumulant expansion technique of Kubo was used to derive an integro-differential equation for f, the average one particle distribution function for particles being accelerated by electric and magnetic fluctuations of a general nature. For a very restricted class of fluctuations, the f equation degenerates exactly to a differential equation of Fokker-Planck type. Quasi-linear theory, including the adiabatic assumption, is an exact theory for this limited class of fluctuations. For more physically realistic fluctuations, however, quasi-linear theory is at best approximate.
Using nonlinear quantile regression to estimate the self-thinning boundary curve
Quang V. Cao; Thomas J. Dean
2015-01-01
The relationship between tree size (quadratic mean diameter) and tree density (number of trees per unit area) has been a topic of research and discussion for many decades. Starting with Reineke in 1933, the maximum size-density relationship, on a log-log scale, has been assumed to be linear. Several techniques, including linear quantile regression, have been employed...
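Linear quantile regression, one of the techniques mentioned above, can be posed directly as a linear program; the sketch below fits an upper (tau = 0.95) boundary line to synthetic log size-density data (the coefficients and sample are invented for illustration, not Reineke's values):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(5)

# Synthetic size-density data on a log-log scale (illustrative numbers).
n = 120
log_density = rng.uniform(5.0, 9.0, n)
log_diameter = 8.0 - 0.6 * log_density + rng.normal(0.0, 0.3, n)

def quantile_fit(x, y, tau):
    """Linear quantile regression via its LP form: write the residual
    as u - v with u, v >= 0 and minimize tau*sum(u) + (1-tau)*sum(v)."""
    m = len(y)
    X = np.column_stack([np.ones(m), x])
    c = np.r_[np.zeros(2), tau * np.ones(m), (1 - tau) * np.ones(m)]
    A_eq = np.hstack([X, np.eye(m), -np.eye(m)])   # X @ beta + u - v = y
    bounds = [(None, None)] * 2 + [(0, None)] * (2 * m)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds)
    return res.x[:2]

b0, b1 = quantile_fit(log_density, log_diameter, tau=0.95)
frontier = b0 + b1 * log_density
frac_below = np.mean(log_diameter <= frontier)
print("boundary intercept/slope:", b0, b1, "fraction below:", frac_below)
```

By construction, roughly 95% of the points fall on or below the fitted line, which is why quantile regression is a natural tool for estimating a self-thinning boundary rather than a mean trend.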
ERIC Educational Resources Information Center
Ishitani, Terry T.
2010-01-01
This study applied hierarchical linear modeling to investigate the effect of congruence on intrinsic and extrinsic aspects of job satisfaction. Particular focus was given to differences in job satisfaction by gender and by Holland's first-letter codes. The study sample included nationally represented 1462 female and 1280 male college graduates who…
Calibration Experiments for a Computer Vision Oyster Volume Estimation System
ERIC Educational Resources Information Center
Chang, G. Andy; Kerns, G. Jay; Lee, D. J.; Stanek, Gary L.
2009-01-01
Calibration is a technique that is commonly used in science and engineering research that requires calibrating measurement tools for obtaining more accurate measurements. It is an important technique in various industries. In many situations, calibration is an application of linear regression, and is a good topic to be included when explaining and…
Understanding a Normal Distribution of Data (Part 2).
Maltenfort, Mitchell
2016-02-01
Completing the discussion of data normality, advanced techniques for analysis of non-normal data are discussed including data transformation, Generalized Linear Modeling, and bootstrapping. Relative strengths and weaknesses of each technique are helpful in choosing a strategy, but help from a statistician is usually necessary to analyze non-normal data using these methods.
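One of the listed techniques, bootstrapping, can be sketched in a few lines; the skewed sample below is simulated for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)

# Skewed (non-normal) sample, e.g. something like length-of-stay data.
data = rng.lognormal(mean=1.0, sigma=0.8, size=200)

# Percentile bootstrap: resample with replacement, recompute the median,
# and take percentiles of the resampled statistics as the interval.
boot_medians = np.array([
    np.median(rng.choice(data, size=data.size, replace=True))
    for _ in range(5000)
])
lo, hi = np.percentile(boot_medians, [2.5, 97.5])
print(f"median {np.median(data):.2f}, 95% bootstrap CI [{lo:.2f}, {hi:.2f}]")
```

The appeal for non-normal data is that no distributional form is assumed: the same recipe works for the median, a trimmed mean, or any other statistic.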
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hager, Robert, E-mail: rhager@pppl.gov; Yoon, E.S., E-mail: yoone@rpi.edu; Ku, S., E-mail: sku@pppl.gov
2016-06-15
Fusion edge plasmas can be far from thermal equilibrium and require the use of a non-linear collision operator for accurate numerical simulations. In this article, the non-linear single-species Fokker–Planck–Landau collision operator developed by Yoon and Chang (2014) [9] is generalized to include multiple particle species. The finite volume discretization used in this work naturally yields exact conservation of mass, momentum, and energy. The implementation of this new non-linear Fokker–Planck–Landau operator in the gyrokinetic particle-in-cell codes XGC1 and XGCa is described and results of a verification study are discussed. Finally, the numerical techniques that make our non-linear collision operator viable on high-performance computing systems are described, including specialized load balancing algorithms and nested OpenMP parallelization. The collision operator's good weak and strong scaling behavior is shown.
Bounding solutions of geometrically nonlinear viscoelastic problems
NASA Technical Reports Server (NTRS)
Stubstad, J. M.; Simitses, G. J.
1985-01-01
Integral transform techniques, such as the Laplace transform, provide simple and direct methods for solving viscoelastic problems formulated within a context of linear material response and using linear measures for deformation. Application of the transform operator reduces the governing linear integro-differential equations to a set of algebraic relations between the transforms of the unknown functions, the viscoelastic operators, and the initial and boundary conditions. Inversion either directly or through the use of the appropriate convolution theorem, provides the time domain response once the unknown functions have been expressed in terms of sums, products or ratios of known transforms. When exact inversion is not possible approximate techniques may provide accurate results. The overall problem becomes substantially more complex when nonlinear effects must be included. Situations where a linear material constitutive law can still be productively employed but where the magnitude of the resulting time dependent deformations warrants the use of a nonlinear kinematic analysis are considered. The governing equations will be nonlinear integro-differential equations for this class of problems. Thus traditional as well as approximate techniques, such as cited above, cannot be employed since the transform of a nonlinear function is not explicitly expressible.
PREFACE: The 6th International Symposium on Measurement Techniques for Multiphase Flows
NASA Astrophysics Data System (ADS)
Okamoto, Koji; Murai, Yuichi
2009-02-01
Research on multi-phase flows is very important for industrial applications, including power stations, vehicles, engines, food processing, and so on. From the environmental viewpoint, multi-phase flows also need to be investigated to help address global warming. Multi-phase flows are inherently non-linear because they involve multiple phases, and the interaction between the phases plays a very interesting role in the flows. This non-linear interaction makes multi-phase flow phenomena very difficult to understand. The International Symposium on Measurement Techniques for Multi-phase Flows (ISMTMF) is a unique symposium whose aim is the exchange of state-of-the-art knowledge on measurement techniques for non-linear multi-phase flows. Measurement technique is the key technology for understanding non-linear phenomena. The ISMTMF began in 1995 in Nanjing, China, and the symposium has been held continuously every two or three years. ISMTMF-2008, the 6th symposium in the series, was held in Okinawa, Japan, on 15-17 December 2008. Okinawa has a long history as the Ryukyus Kingdom, and China and Japan have had cultural and economic exchanges through Okinawa for more than 1000 years. Please enjoy Okinawa and experience its history to enhance our international communication. The symposium was attended by 124 participants; the program included 107 contributions, with 5 plenary lectures, 2 keynote lectures, and 100 regular oral paper presentations. The topics include, besides ordinary measurement techniques for multiphase flows, acoustic and electric sensors, bubbles and microbubbles, computed tomography, gas-liquid interfaces, laser imaging and PIV, oil/coal/drop and spray, solids and powders, and spectral and multi-physics methods. This volume includes the papers presented at ISMTMF-2008. In addition to this volume, ten selected papers will be published in a special issue of Measurement Science and Technology.
We would like to express special thanks to all the participants and the contributors to the symposium, and also to the supporting organizations; The Japanese Society for Multiphase Flow, The Chinese Society for Measurement, National Natural Science Foundation of China, The Chinese Academy of Science, and University of the Ryukyus, Okinawa, Japan. Koji Okamoto Chair of 6th ISMTMF and proceedings editor The University of Tokyo, Japan Yuichi Murai Proceedings co-editor Hokkaido University, Japan
Chandrasekhar equations for infinite dimensional systems
NASA Technical Reports Server (NTRS)
Ito, K.; Powers, R. K.
1985-01-01
Chandrasekhar equations are derived for linear time-invariant systems defined on Hilbert spaces using a functional analytic technique. An important consequence is that the solution to the evolutional Riccati equation is strongly differentiable in time, so that a strong solution of the Riccati differential equation can be defined. A detailed discussion of the linear quadratic optimal control problem for hereditary differential systems is also included.
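For reference, the finite-dimensional construction that the abstract generalizes can be stated as follows. This is a standard textbook form, not the paper's operator-valued Hilbert-space statement, and the signs depend on the forward/backward time convention:

```latex
% LQR Riccati differential equation with terminal condition P(T) = 0:
\dot{P}(t) = -A^{\top}P(t) - P(t)A + P(t)BR^{-1}B^{\top}P(t) - Q
% Chandrasekhar factorization: with Q = C^{\top}C and gain K(t) = R^{-1}B^{\top}P(t),
\dot{K}(t) = -R^{-1}B^{\top}L^{\top}(t)L(t), \qquad K(T) = 0,
\dot{L}(t) = -L(t)\left(A - BK(t)\right), \qquad L(T) = C.
```

The factorization replaces the $n \times n$ Riccati unknown $P$ with the typically much smaller matrices $K$ and $L$, which is what makes the Chandrasekhar form attractive for the infinite-dimensional setting.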
Unsteady transonic flows - Introduction, current trends, applications
NASA Technical Reports Server (NTRS)
Yates, E. C., Jr.
1985-01-01
The computational treatment of unsteady transonic flows is discussed, reviewing the historical development and current techniques. The fundamental physical principles are outlined; the governing equations are introduced; three-dimensional linearized and two-dimensional linear-perturbation theories in frequency domain are described in detail; and consideration is given to frequency-domain FEMs and time-domain finite-difference and integral-equation methods. Extensive graphs and diagrams are included.
Linear signatures in nonlinear gyrokinetics: interpreting turbulence with pseudospectra
Hatch, D. R.; Jenko, F.; Navarro, A. Banon; ...
2016-07-26
A notable feature of plasma turbulence is its propensity to retain features of the underlying linear eigenmodes in a strongly turbulent state, a property that can be exploited to predict various aspects of the turbulence using only linear information. In this context, this work examines gradient-driven gyrokinetic plasma turbulence through three lenses: linear eigenvalue spectra, pseudospectra, and singular value decomposition (SVD). We study a reduced gyrokinetic model whose linear eigenvalue spectra include ion temperature gradient driven modes, stable drift waves, and kinetic modes representing Landau damping. The goal is to characterize in which ways, if any, these familiar ingredients are manifest in the nonlinear turbulent state. This pursuit is aided by the use of pseudospectra, which provide a more nuanced view of the linear operator by characterizing its response to perturbations. We introduce a new technique whereby the nonlinearly evolved phase space structures extracted with SVD are linked to the linear operator using concepts motivated by pseudospectra. Using this technique, we identify nonlinear structures that have connections to not only the most unstable eigenmode but also subdominant modes that are nonlinearly excited. The general picture that emerges is a system in which signatures of the linear physics persist in the turbulence, albeit in ways that cannot be fully explained by the linear eigenvalue approach; a non-modal treatment is necessary to understand key features of the turbulence.
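The SVD step of such an analysis can be sketched generically. The following is an illustrative toy (synthetic space-time data, not the gyrokinetic fields of the paper): arrange a fluctuating field as a space-by-time matrix and let the leading singular vectors pick out the coherent structures.

```python
import numpy as np

# Toy space-time field: two coherent "modes" plus noise, arranged as a
# matrix with rows = spatial points and columns = time samples.
rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 64)
t = np.linspace(0, 10, 200)
data = (np.outer(np.sin(x), np.cos(2 * np.pi * t))
        + 0.3 * np.outer(np.sin(3 * x), np.sin(4 * np.pi * t))
        + 0.05 * rng.standard_normal((64, 200)))

# SVD separates the field into ranked spatial structures (columns of U)
# with time traces (rows of Vt), weighted by the singular values s.
U, s, Vt = np.linalg.svd(data, full_matrices=False)
energy = (s[:2] ** 2).sum() / (s ** 2).sum()
print(f"energy in first 2 SVD modes: {energy:.3f}")
```

With two planted structures, the two leading singular modes capture nearly all of the fluctuation energy; in the paper's setting the extracted structures are then compared against the linear operator's eigenmodes and pseudomodes.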
ASD FieldSpec Calibration Setup and Techniques
NASA Technical Reports Server (NTRS)
Olive, Dan
2001-01-01
This paper describes the Analytical Spectral Devices (ASD) FieldSpec Calibration Setup and Techniques. The topics include: 1) ASD FieldSpec FR Spectroradiometer; 2) Components of Calibration; 3) Equipment List; 4) Spectral Setup; 5) Spectral Calibration; 6) Radiometric and Linearity Setup; 7) Radiometric Setup; 8) Datasets Required; 9) Data Files; and 10) Field of View Measurement. This paper is in viewgraph form.
Working papers: applicability of Box Jenkins techniques to gasoline consumption forecasting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
Reliable consumption forecasts are needed; however, traditional linear time-series techniques do not adequately account for an environment so subject to change. This report evaluates the use of Box Jenkins techniques for gasoline consumption forecasting. Box Jenkins methods were applied to data obtained from the Colorado Petroleum Association and the Colorado Highway Users Fund to ''predict'' 1978 and 1979 consumption. The results show the Box Jenkins techniques to be quite effective. Forecasts for 1980-81 are included, along with suggestions for continued use of the technique to monitor consumption.
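The estimation step of a Box Jenkins analysis can be sketched as follows. This is a minimal illustration on synthetic data (not the Colorado consumption series): fit an AR(2) model by solving the Yule-Walker equations, then form a one-step-ahead forecast.

```python
import numpy as np

def yule_walker(x, p):
    """Estimate AR(p) coefficients from the sample autocovariances."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, r[1:p + 1])

rng = np.random.default_rng(1)
x = np.zeros(5000)
e = rng.standard_normal(5000)
for i in range(2, 5000):       # simulate x_t = 0.6 x_{t-1} - 0.2 x_{t-2} + e_t
    x[i] = 0.6 * x[i - 1] - 0.2 * x[i - 2] + e[i]

phi = yule_walker(x, 2)
forecast = phi @ x[-1:-3:-1]   # one-step-ahead forecast from the last two values
print("estimated AR coefficients:", np.round(phi, 2))
```

A full Box Jenkins cycle adds model identification (ACF/PACF inspection), possible differencing and moving-average terms, and diagnostic checking of the residuals.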
NASA Technical Reports Server (NTRS)
Bekey, G. A.
1971-01-01
Studies are summarized on the application of advanced analytical and computational methods to the development of mathematical models of human controllers in multiaxis manual control systems. Specific accomplishments include the following: (1) The development of analytical and computer methods for the measurement of random parameters in linear models of human operators. (2) Discrete models of human operator behavior in a multiple display situation were developed. (3) Sensitivity techniques were developed which make possible the identification of unknown sampling intervals in linear systems. (4) The adaptive behavior of human operators following particular classes of vehicle failures was studied and a model structure proposed.
Long-term Evaluation of a Modified Double Staple Technique for Low Anterior Resection.
Illuminati, G; Carboni, F; Ceccanei, G; Pacilè, M A; Pizzardi, G; Palumbo, P; Vietri, F
2014-01-01
When performing low anterior resection for rectal cancer with the double staple technique, closing the rectum with a linear stapler in the abdomen can be challenging, especially when dealing with a narrow pelvis. For such instances we proposed to modify this technique by pulling the rectal stump through the anus, doing an extra-anal resection of the tumor and linear suture of the rectal stump, before performing a standard, stapled colorectal anastomosis. The purpose of this study was to assess the adequacy of this modification of the double staple technique. Retrospective review of 108 patients undergoing a stapled, low colorectal or coloanal anastomosis, after eversion, extra-anal resection of the tumor and linear closure of the rectal stump for colorectal cancer, from January 1990 to December 2012. Operative mortality was 0.9%. Fourteen patients (13%) presented early, surgery-related complications consisting of 7 anastomotic leaks, 5 wound infections, 1 ureteral lesion, and 1 peristomal abscess. Late complications related to surgery included 5 incisional hernias (4.6%), 4 anastomotic strictures (3.7%), 4 neurogenic bladders (3.7%) and 2 fecal incontinences (1.8%). The incidence of local disease recurrence was 10%. Surgical and oncological results validate the proposed modification of the double staple technique, when facing difficulties in suturing the rectum from the abdomen. Copyright© Acta Chirurgica Belgica.
Multiple regression technique for Pth degree polynomials with and without linear cross products
NASA Technical Reports Server (NTRS)
Davis, J. W.
1973-01-01
A multiple regression technique was developed by which the nonlinear behavior of specified independent variables can be related to a given dependent variable. The polynomial expression can be of Pth degree and can incorporate N independent variables. Two cases are treated so that mathematical models can be studied both with and without linear cross products. The resulting surface fits can be used to summarize trends for a given phenomenon and provide a mathematical relationship for subsequent analysis. To implement this technique, separate computer programs were developed for the case without linear cross products and for the case incorporating such cross products; these programs evaluate the various constants in the model regression equation. In addition, the significance of the estimated regression equation is considered, and the standard deviation, the F statistic, the maximum absolute percent error, and the average of the absolute values of the percent error are evaluated. The computer programs and their manner of utilization are described. Sample problems are included to illustrate the use and capability of the technique, showing the output formats and typical plots that compare computer results to each set of input data.
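The two cases can be sketched with ordinary least squares. This illustration (synthetic data and coefficients of my choosing, not the report's programs) fits a 2nd-degree surface in two variables with and without the linear cross-product term x1*x2:

```python
import numpy as np

rng = np.random.default_rng(2)
x1 = rng.uniform(-1, 1, 200)
x2 = rng.uniform(-1, 1, 200)
# True surface contains a cross-product term
y = 1.0 + 2.0 * x1 - 1.5 * x2 + 0.5 * x1**2 + 3.0 * x1 * x2

# Design matrices for the two cases
A_no_cross = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2])
A_cross = np.column_stack([A_no_cross, x1 * x2])

c_no, *_ = np.linalg.lstsq(A_no_cross, y, rcond=None)
c_yes, *_ = np.linalg.lstsq(A_cross, y, rcond=None)

for name, A, c in [("no cross products", A_no_cross, c_no),
                   ("with cross products", A_cross, c_yes)]:
    resid = y - A @ c
    print(f"{name}: max |error| = {np.max(np.abs(resid)):.3f}")
```

When the underlying phenomenon couples the variables, the cross-product model reproduces the surface essentially exactly, while the pure-polynomial model leaves a large residual; this is the distinction the two programs in the report address.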
Loop shaping design for tracking performance in machine axes.
Schinstock, Dale E; Wei, Zhouhong; Yang, Tao
2006-01-01
A modern interpretation of classical loop shaping control design methods is presented in the context of tracking control for linear motor stages. Target applications include noncontacting machines such as laser cutters and markers, water jet cutters, and adhesive applicators. The methods are directly applicable to the common PID controller and are pertinent to many electromechanical servo actuators other than linear motors. In addition to explicit design techniques, a PID tuning algorithm stressing the importance of tracking is described. While the theory behind these techniques is not new, the analysis of their application to modern systems is unique in the research literature. The techniques and results should be important to control practitioners optimizing PID controller designs for tracking and in comparing results from classical designs to modern techniques. The methods stress high-gain controller design and interpret what this means for PID. Nothing in the methods presented precludes the addition of feedforward control methods for added improvements in tracking. Laboratory results from a linear motor stage demonstrate that with large open-loop gain very good tracking performance can be achieved. The resultant tracking errors compare very favorably to results from similar motions on similar systems that utilize much more complicated controllers.
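The central point, that large open-loop gain shrinks tracking error, can be illustrated with a deliberately simple sketch (proportional control of a first-order plant, not the paper's loop-shaping procedure or a full PID):

```python
def track_step(kp, t_end=10.0, dt=1e-3):
    """Simulate dx/dt = -x + u with u = kp*(r - x); return final tracking error."""
    x, r = 0.0, 1.0
    for _ in range(int(t_end / dt)):
        u = kp * (r - x)          # proportional control action
        x += dt * (-x + u)        # explicit Euler step of the plant
    return abs(r - x)             # steady-state error is 1/(1 + kp)

for kp in (1.0, 10.0, 100.0):
    print(f"kp={kp:6.1f}  tracking error={track_step(kp):.4f}")
```

The error falls as 1/(1 + kp), which is the elementary version of the paper's argument for shaping the loop to have high gain over the frequency band of the commanded motion.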
Soft tissue strain measurement using an optical method
NASA Astrophysics Data System (ADS)
Toh, Siew Lok; Tay, Cho Jui; Goh, Cho Hong James
2008-11-01
Digital image correlation (DIC) is a non-contact optical technique that allows full-field estimation of strains on a surface under an applied deformation. In this project, an optimized DIC technique is applied that can achieve both efficiency and accuracy in the measurement of two-dimensional deformation fields in soft tissue. The technique relies on matching the random patterns recorded in images to directly obtain surface displacements and, from the displacement gradients, to determine the strain field. Digital image correlation is a well-developed technique with numerous and varied engineering applications, including applications in soft and hard tissue biomechanics. Chicken drumstick ligaments were harvested and used in the experiments. The surface of each ligament was speckled with black paint to allow correlation to be performed. Results show that the stress-strain curve exhibits bi-linear behavior, i.e. a "toe region" and a "linear elastic region". The Young's modulus obtained for the toe region is about 92 MPa and the modulus for the linear elastic region is about 230 MPa. The results are within the values for mammalian anterior cruciate ligaments of 150-300 MPa.
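Extracting the two moduli from a bi-linear stress-strain curve reduces to two linear fits. A sketch on synthetic data (the knee location and noise-free curve are assumptions for illustration, not the chicken-ligament measurements):

```python
import numpy as np

# Synthetic bi-linear stress-strain curve with the paper's reported moduli
strain = np.linspace(0, 0.06, 61)
E_toe, E_lin, knee = 92e6, 230e6, 0.02          # Pa, Pa, strain at region change
stress = np.where(strain <= knee,
                  E_toe * strain,
                  E_toe * knee + E_lin * (strain - knee))

# Fit each region separately; the slope of each line is the modulus
toe = strain <= knee
E1 = np.polyfit(strain[toe], stress[toe], 1)[0]
E2 = np.polyfit(strain[~toe], stress[~toe], 1)[0]
print(f"toe modulus ~ {E1/1e6:.0f} MPa, elastic modulus ~ {E2/1e6:.0f} MPa")
```

With measured (noisy) DIC strain fields the same two-line fit applies, with the knee located by eye or by minimizing the combined residual over candidate break points.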
Issues in the digital implementation of control compensators. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Moroney, P.
1979-01-01
Techniques developed for the finite-precision implementation of digital filters were used, adapted, and extended for digital feedback compensators, with particular emphasis on steady state, linear-quadratic-Gaussian compensators. Topics covered include: (1) the linear-quadratic-Gaussian problem; (2) compensator structures; (3) architectural issues: serialism, parallelism, and pipelining; (4) finite wordlength effects: quantization noise, quantizing the coefficients, and limit cycles; and (5) the optimization of structures.
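One of the finite-wordlength effects listed, coefficient quantization, is easy to demonstrate. A minimal sketch (generic fixed-point rounding, not the thesis's compensator structures): quantizing a recursive filter's coefficient moves its pole, changing the behavior of the implemented compensator.

```python
def quantize(value, bits):
    """Round to the nearest fixed-point level for values in [-1, 1)."""
    step = 2.0 ** (1 - bits)
    return round(value / step) * step

a = 0.97                                   # intended pole of y[n] = a*y[n-1] + x[n]
for bits in (4, 8, 16):
    aq = quantize(a, bits)
    print(f"{bits:2d} bits: pole = {aq:.6f}, error = {abs(aq - a):.2e}")
```

At 4 bits the pole rounds all the way to 1.0, turning the intended lag filter into an integrator, which is exactly why compensator structure (direct form versus cascade or state-space forms) matters at short wordlengths.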
Development of a Linear Stirling Model with Varying Heat Inputs
NASA Technical Reports Server (NTRS)
Regan, Timothy F.; Lewandowski, Edward J.
2007-01-01
The linear model of the Stirling system developed by NASA Glenn Research Center (GRC) has been extended to include a user-specified heat input. Previously developed linear models were limited to the Stirling convertor and electrical load. They represented the thermodynamic cycle with pressure factors that remained constant. The numerical values of the pressure factors were generated by linearizing GRC's non-linear System Dynamic Model (SDM) of the convertor at a chosen operating point. The pressure factors were fixed for that operating point, thus, the model lost accuracy if a transition to a different operating point were simulated. Although the previous linear model was used in developing controllers that manipulated current, voltage, and piston position, it could not be used in the development of control algorithms that regulated hot-end temperature. This basic model was extended to include the thermal dynamics associated with a hot-end temperature that varies over time in response to external changes as well as to changes in the Stirling cycle. The linear model described herein includes not only dynamics of the piston, displacer, gas, and electrical circuit, but also the transient effects of the heater head thermal inertia. The linear version algebraically couples two separate linear dynamic models, one model of the Stirling convertor and one model of the thermal system, through the pressure factors. The thermal system model includes heat flow of heat transfer fluid, insulation loss, and temperature drops from the heat source to the Stirling convertor expansion space. The linear model was compared to a nonlinear model, and performance was very similar. The resulting linear model can be implemented in a variety of computing environments, and is suitable for analysis with classical and state space controls analysis techniques.
Chaos, Fractals, and Polynomials.
ERIC Educational Resources Information Center
Tylee, J. Louis; Tylee, Thomas B.
1996-01-01
Discusses chaos theory; linear algebraic equations and the numerical solution of polynomials, including the use of the Newton-Raphson technique to find polynomial roots; fractals; search region and coordinate systems; convergence; and generating color fractals on a computer. (LRW)
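The Newton-Raphson technique mentioned in the article iterates z <- z - p(z)/p'(z); iterating from a grid of complex starting points and coloring by the root reached is also how the associated Newton fractals are generated. A minimal sketch:

```python
def newton_root(coeffs, z, steps=50):
    """Find a root of the polynomial with coefficients (highest degree first),
    starting from the complex initial guess z."""
    for _ in range(steps):
        # Horner evaluation of p(z) and p'(z) in one pass
        p, dp = 0j, 0j
        for c in coeffs:
            dp = dp * z + p
            p = p * z + c
        if abs(dp) < 1e-30:       # avoid division by a vanishing derivative
            break
        z = z - p / dp
    return z

# z^3 - 1 = 0 has roots at the three cube roots of unity
root = newton_root([1, 0, 0, -1], 0.5 + 0.5j)
print(root)
```

The basins of attraction of the three roots meet along a fractal boundary, which is the bridge between the article's polynomial and fractal threads.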
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spotz, William F.
PyTrilinos is a set of Python interfaces to compiled Trilinos packages. This collection supports serial and parallel dense linear algebra, serial and parallel sparse linear algebra, direct and iterative linear solution techniques, algebraic and multilevel preconditioners, nonlinear solvers and continuation algorithms, eigensolvers and partitioning algorithms. Also included are a variety of related utility functions and classes, including distributed I/O, coloring algorithms and matrix generation. PyTrilinos vector objects are compatible with the popular NumPy Python package. As a Python front end to compiled libraries, PyTrilinos takes advantage of the flexibility and ease of use of Python, and the efficiency of the underlying C++, C and Fortran numerical kernels. This paper covers recent, previously unpublished advances in the PyTrilinos package.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Callister, Stephen J.; Barry, Richard C.; Adkins, Joshua N.
2006-02-01
Central tendency, linear regression, locally weighted regression, and quantile techniques were investigated for normalization of peptide abundance measurements obtained from high-throughput liquid chromatography-Fourier transform ion cyclotron resonance mass spectrometry (LC-FTICR MS). Arbitrary abundances of peptides were obtained from three sample sets, including a standard protein sample, two Deinococcus radiodurans samples taken from different growth phases, and two mouse striatum samples from control and methamphetamine-stressed mice (strain C57BL/6). The selected normalization techniques were evaluated in both the absence and presence of biological variability by estimating extraneous variability prior to and following normalization. Prior to normalization, replicate runs from each sample set were observed to be statistically different, while following normalization replicate runs were no longer statistically different. Although all techniques reduced systematic bias, assigned ranks among the techniques revealed significant trends. For most LC-FTICR MS analyses, linear regression normalization ranked either first or second among the four techniques, suggesting that this technique was more generally suitable for reducing systematic biases.
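The linear regression normalization that ranked best can be sketched generically. This illustration uses synthetic log-abundances with an invented bias (not the study's LC-FTICR MS data): regress a replicate run against a reference run, then invert the fitted line to remove the systematic bias.

```python
import numpy as np

rng = np.random.default_rng(3)
reference = rng.normal(20, 2, 500)                            # log2 peptide abundances
replicate = 1.1 * reference + 0.8 + rng.normal(0, 0.1, 500)   # run with systematic bias

# Fit replicate = slope*reference + intercept, then map back onto the reference scale
slope, intercept = np.polyfit(reference, replicate, 1)
normalized = (replicate - intercept) / slope

bias_before = np.mean(replicate - reference)
bias_after = np.mean(normalized - reference)
print(f"mean bias before: {bias_before:.3f}, after: {bias_after:.4f}")
```

Central tendency normalization would remove only the constant offset (the mean shift), while the regression form also corrects the multiplicative component, which is one plausible reason it ranked higher in the study.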
Bonilla, Alfonso; Magri, Carlos; Juan, Eulalia
To compare the punch technique and linear incision with soft tissue reduction for the placement of auditory osseointegrated implants (AOI) and analyze results of osseointegration obtained with the punch technique as measured with the Implant Stability Quotient (ISQ). Case review of 34 patients who received auditory osseointegrated implants between January 2010 and July 2015 and were divided into two groups according to the surgical technique: 18 with the punch technique (PT) and 16 with the linear incision technique (LI). Minimum follow-up was four months (mean: 24 months; range 4-64 months). Included in the analysis were patient profiles and records of the demographic data, surgical indications, surgical technique, implant placement, surgical time, intraoperative complications, as well as postsurgical complications (Holgers classification) and implant stability quotients (ISQ). Use of larger abutments was significantly greater in the PT group (PT, 10mm; LI, 6mm, p<0.001). The PT technique resulted in a shorter procedure than the LI (PT, 20min; LI, 45min, p<0.001). Holgers classification scores identified significantly fewer skin complications one week after surgery for the PT group; however, only small differences were seen between the two groups at the one- and three-month control visits. As shown for our cohort, the punch technique for surgical placement of AOI is faster and presents fewer immediate postoperative complications when compared to the linear incision technique. The clinical application of the ISQ is a useful, easy method to demonstrate the status of osseointegration and, thus, the stability of the device. Copyright © 2017 Elsevier España, S.L.U. and Sociedad Española de Otorrinolaringología y Cirugía de Cabeza y Cuello. All rights reserved.
NASA Technical Reports Server (NTRS)
Carrere, Veronique
1990-01-01
Various image processing techniques developed for enhancement and extraction of linear features, of interest to the structural geologist, from digital remote sensing, geologic, and gravity data, are presented. These techniques include: (1) automatic detection of linear features and construction of rose diagrams from Landsat MSS data; (2) enhancement of principal structural directions using selective filters on Landsat MSS, Spacelab panchromatic, and HCMM NIR data; (3) directional filtering of Spacelab panchromatic data using Fast Fourier Transform; (4) detection of linear/elongated zones of high thermal gradient from thermal infrared data; and (5) extraction of strong gravimetric gradients from digitized Bouguer anomaly maps. Processing results can be compared to each other through the use of a geocoded database to evaluate the structural importance of each lineament according to its depth: superficial structures in the sedimentary cover, or deeper ones affecting the basement. These image processing techniques were successfully applied to achieve a better understanding of the transition between Provence and the Pyrenees structural blocks, in southeastern France, for an improved structural interpretation of the Mediterranean region.
Fiber Bragg grating sensor interrogators on chip: challenges and opportunities
NASA Astrophysics Data System (ADS)
Marin, Yisbel; Nannipieri, Tiziano; Oton, Claudio J.; Di Pasquale, Fabrizio
2017-04-01
In this paper we present an overview of the current efforts towards integration of Fiber Bragg Grating (FBG) sensor interrogators. Different photonic integration platforms will be discussed, including monolithic planar lightwave circuit technology, silicon on insulator (SOI), indium phosphide (InP) and gallium arsenide (GaAs) material platforms. Also various possible techniques for wavelength metering and methods for FBG multiplexing will be discussed and compared in terms of resolution, dynamic performance, multiplexing capabilities and reliability. The use of linear filters, array waveguide gratings (AWG) as multiple linear filters and AWG based centroid signal processing techniques will be addressed as well as interrogation techniques based on tunable micro-ring resonators and Mach-Zehnder interferometers (MZI) for phase sensitive detection. The paper will also discuss the challenges and perspectives of photonic integration to address the increasing requirements of several industrial applications.
Advanced Millimeter-Wave Security Portal Imaging Techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheen, David M.; Bernacki, Bruce E.; McMakin, Douglas L.
2012-04-01
Millimeter-wave imaging is rapidly gaining acceptance for passenger screening at airports and other secured facilities. This paper details a number of techniques developed over the last several years including novel image reconstruction and display techniques, polarimetric imaging techniques, array switching schemes, as well as high frequency high bandwidth techniques. Implementation of some of these methods will increase the cost and complexity of the mm-wave security portal imaging systems. RF photonic methods may provide new solutions to the design and development of the sequentially switched linear mm-wave arrays that are the key element in the mm-wave portal imaging systems.
Application of Design Methodologies for Feedback Compensation Associated with Linear Systems
NASA Technical Reports Server (NTRS)
Smith, Monty J.
1996-01-01
The work that follows is concerned with the application of design methodologies for feedback compensation associated with linear systems. In general, the intent is to provide a well behaved closed loop system in terms of stability and robustness (internal signals remain bounded with a certain amount of uncertainty) and simultaneously achieve an acceptable level of performance. The approach here has been to convert the closed loop system and control synthesis problem into the interpolation setting. The interpolation formulation then serves as our mathematical representation of the design process. Lifting techniques have been used to solve the corresponding interpolation and control synthesis problems. Several applications using this multiobjective design methodology have been included to show the effectiveness of these techniques. In particular, the mixed H2/H-infinity performance criteria and algorithm have been used on several examples, including an F-18 HARV (High Angle of Attack Research Vehicle) for sensitivity performance.
Regularization with numerical extrapolation for finite and UV-divergent multi-loop integrals
NASA Astrophysics Data System (ADS)
de Doncker, E.; Yuasa, F.; Kato, K.; Ishikawa, T.; Kapenga, J.; Olagbemi, O.
2018-03-01
We give numerical integration results for Feynman loop diagrams such as those covered by Laporta (2000) and by Baikov and Chetyrkin (2010), and which may give rise to loop integrals with UV singularities. We explore automatic adaptive integration using multivariate techniques from the PARINT package for multivariate integration, as well as iterated integration with programs from the QUADPACK package, and a trapezoidal method based on a double exponential transformation. PARINT is layered over MPI (Message Passing Interface), and incorporates advanced parallel/distributed techniques including load balancing among processes that may be distributed over a cluster or a network/grid of nodes. Results are included for 2-loop vertex and box diagrams and for sets of 2-, 3- and 4-loop self-energy diagrams with or without UV terms. Numerical regularization of integrals with singular terms is achieved by linear and non-linear extrapolation methods.
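The linear extrapolation step can be sketched independently of PARINT/QUADPACK. This illustration (generic Richardson-style extrapolation on an invented sequence, not the paper's loop integrals) treats the computed values as S(eps) = L + a1*eps + a2*eps^2 + ... and solves a small linear system for the limit L as the regulator eps goes to zero:

```python
import numpy as np

def richardson(eps, vals):
    """Extrapolate vals[i] = L + sum_k a_k * eps[i]**k to eps = 0."""
    n = len(eps)
    # Vandermonde system in powers of eps; column 0 multiplies the limit L
    A = np.vander(np.asarray(eps, float), n, increasing=True)
    coef = np.linalg.solve(A, vals)
    return coef[0]

eps = [0.5, 0.25, 0.125, 0.0625]        # geometric sequence of regulator values
true_L = 2.0
vals = [true_L + 3 * e + 1.7 * e**2 - 0.4 * e**3 for e in eps]
print(f"extrapolated limit: {richardson(eps, vals):.6f}")
```

For UV-divergent integrals the expansion also contains singular terms in the regulator, and the nonlinear variants mentioned in the abstract (e.g. epsilon-algorithm schemes) handle sequences whose asymptotic form is not a plain polynomial.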
Development of a Linear Stirling System Model with Varying Heat Inputs
NASA Technical Reports Server (NTRS)
Regan, Timothy F.; Lewandowski, Edward J.
2007-01-01
The linear model of the Stirling system developed by NASA Glenn Research Center (GRC) has been extended to include a user-specified heat input. Previously developed linear models were limited to the Stirling convertor and electrical load. They represented the thermodynamic cycle with pressure factors that remained constant. The numerical values of the pressure factors were generated by linearizing GRC's nonlinear System Dynamic Model (SDM) of the convertor at a chosen operating point. The pressure factors were fixed for that operating point, thus, the model lost accuracy if a transition to a different operating point were simulated. Although the previous linear model was used in developing controllers that manipulated current, voltage, and piston position, it could not be used in the development of control algorithms that regulated hot-end temperature. This basic model was extended to include the thermal dynamics associated with a hot-end temperature that varies over time in response to external changes as well as to changes in the Stirling cycle. The linear model described herein includes not only dynamics of the piston, displacer, gas, and electrical circuit, but also the transient effects of the heater head thermal inertia. The linear version algebraically couples two separate linear dynamic models, one model of the Stirling convertor and one model of the thermal system, through the pressure factors. The thermal system model includes heat flow of heat transfer fluid, insulation loss, and temperature drops from the heat source to the Stirling convertor expansion space. The linear model was compared to a nonlinear model, and performance was very similar. The resulting linear model can be implemented in a variety of computing environments, and is suitable for analysis with classical and state space controls analysis techniques.
Testing and Qualifying Linear Integrated Circuits for Radiation Degradation in Space
NASA Technical Reports Server (NTRS)
Johnston, Allan H.; Rax, Bernard G.
2006-01-01
This paper discusses mechanisms and circuit-related factors that affect the degradation of linear integrated circuits from radiation in space. For some circuits there is sufficient degradation to affect performance at total dose levels below 4 krad(Si) because the circuit design techniques require higher gain for the pnp transistors that are the most sensitive to radiation. Qualification methods are recommended that include displacement damage as well as ionization damage.
When is quasi-linear theory exact. [particle acceleration
NASA Technical Reports Server (NTRS)
Jones, F. C.; Birmingham, T. J.
1975-01-01
We use the cumulant expansion technique of Kubo (1962, 1963) to derive an integrodifferential equation for the average one-particle distribution function for particles being accelerated by electric and magnetic fluctuations of a general nature. For a very restricted class of fluctuations, the equation for this function degenerates exactly to a differential equation of Fokker-Planck type. Quasi-linear theory, including the adiabatic assumption, is an exact theory only for this limited class of fluctuations.
Design Techniques for Uniform-DFT, Linear Phase Filter Banks
NASA Technical Reports Server (NTRS)
Sun, Honglin; DeLeon, Phillip
1999-01-01
Uniform-DFT filter banks are an important class of filter banks and their theory is well known. One notable characteristic is their very efficient implementation when using polyphase filters and the FFT. Separately, linear phase filter banks, i.e. filter banks in which the analysis filters have linear phase, are also an important class of filter banks and desired in many applications. Unfortunately, it has been proved that one cannot design critically-sampled, uniform-DFT, linear phase filter banks and achieve perfect reconstruction. In this paper, we present a least-squares solution to this problem and in addition prove that oversampled, uniform-DFT, linear phase filter banks (which are also useful in many applications) can be constructed for perfect reconstruction. Design examples are included to illustrate the methods.
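The polyphase/FFT efficiency mentioned above rests on a standard identity, sketched here as a generic textbook construction (an arbitrary prototype, not the paper's least-squares design): the M modulated analysis filters, decimated by M, equal M polyphase filters followed by an inverse DFT across the branches.

```python
import numpy as np

M = 4
rng = np.random.default_rng(4)
h = rng.standard_normal(16)        # prototype filter (arbitrary for this check)
x = rng.standard_normal(64)        # input signal

# Direct computation: filter with each modulated filter h[n]*exp(j*2*pi*k*n/M),
# then decimate by M
n = np.arange(len(h))
direct = np.array([np.convolve(x, h * np.exp(2j * np.pi * k * n / M))[::M]
                   for k in range(M)])

# Polyphase computation: M short filters plus one inverse DFT per output sample
L = direct.shape[1]
v = np.zeros((M, L), dtype=complex)
for p in range(M):
    hp = h[p::M]                                   # p-th polyphase component
    up = np.array([x[m * M - p] if 0 <= m * M - p < len(x) else 0.0
                   for m in range(L)])             # decimated, delayed input
    v[p, :] = np.convolve(up, hp)[:L]
poly = M * np.fft.ifft(v, axis=0)                  # DFT across the branches

print("max |direct - polyphase| =", np.max(np.abs(direct - poly)))
```

The two computations agree to machine precision, while the polyphase form replaces M length-16 filters running at the full rate with M length-4 filters at the decimated rate plus an FFT.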
NASA Astrophysics Data System (ADS)
Tombak, Ali
The recent advancement in wireless communications demands an ever increasing improvement in the system performance and functionality with a reduced size and cost. This thesis demonstrates novel RF and microwave components based on ferroelectric and solid-state based tunable capacitor (varactor) technologies for the design of low-cost, small-size and multi-functional wireless communication systems. These include tunable lumped element VHF filters based on ferroelectric varactors, a beam-steering technique which, unlike conventional systems, does not require separate power divider and phase shifters, and a predistortion linearization technique that uses a varactor based tunable R-L-C resonator. Among various ferroelectric materials, Barium Strontium Titanate (BST) is actively being studied for the fabrication of high performance varactors at RF and microwave frequencies. BST based tunable capacitors are presented with typical tunabilities of 4.2:1 with the application of 5 to 10 V DC bias voltages and typical loss tangents in the range of 0.003--0.009 at VHF frequencies. Tunable lumped element lowpass and bandpass VHF filters based on BST varactors are also demonstrated with tunabilities of 40% and 57%, respectively. A new beam-steering technique is developed based on the extended resonance power dividing technique. Phased arrays based on this technique do not require separate power divider and phase shifters. Instead, the power division and phase shifting circuits are combined into a single circuit, which utilizes tunable capacitors. This results in a substantial reduction in the circuit complexity and cost. Phased arrays based on this technique can be employed in mobile multimedia services and automotive collision avoidance radars. A 2-GHz 4-antenna and a 10-GHz 8-antenna extended resonance phased arrays are demonstrated with scan ranges of 20 degrees and 18 degrees, respectively. 
A new predistortion linearization technique for the linearization of RF/microwave power amplifiers is also presented. This technique utilizes a varactor based tunable R-L-C resonator in shunt configuration. Due to the small number of circuit elements required, linearizers based on this technique offer low-cost and simple circuitry, hence can be utilized in handheld and cellular applications. A 1.8 GHz power amplifier with 9 dB gain is linearized using this technique. The linearizer improves the output 1-dB compression point of the power amplifier from 21 to 22.8 dBm. Adjacent channel power ratio (ACPR) is improved approximately 11 dB at an output RF power level of 17.5 dBm. The thesis is concluded by summarizing the main achievements and discussing the future work directions.
Linear Covariance Analysis For Proximity Operations Around Asteroid 2008 EV5
NASA Technical Reports Server (NTRS)
Wright, Cinnamon A.; Bhatt, Sagar; Woffinden, David; Strube, Matthew; D'Souza, Chris
2015-01-01
The NASA initiative to collect an asteroid, the Asteroid Redirect Robotic Mission (ARRM), is currently investigating the option of retrieving a boulder from an asteroid, demonstrating planetary defense with an enhanced gravity tractor technique, and returning it to a lunar orbit. Techniques for accomplishing this are being investigated by the Satellite Servicing Capabilities Office (SSCO) at NASA GSFC in collaboration with JPL, NASA JSC, LaRC, and Draper Laboratory, Inc. Two critical phases of the mission are the descent to the boulder and the Enhanced Gravity Tractor demonstration. A linear covariance analysis is done for these phases to assess the feasibility of these concepts with the proposed design of the sensor and actuator suite of the Asteroid Redirect Vehicle (ARV). The sensor suite for this analysis includes a wide field of view camera, LiDAR, and an IMU. The proposed asteroid of interest is currently the C-type asteroid 2008 EV5, a carbonaceous chondrite that is of high interest to the scientific community. This paper presents an overview of the linear covariance analysis techniques and simulation tool, provides sensor and actuator models, and addresses the feasibility of descending to the surface of the asteroid within allocated requirements as well as the possibility of maintaining a halo orbit to demonstrate the Enhanced Gravity Tractor technique.
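The core of a linear covariance analysis is propagating the state covariance through linearized dynamics and measurement updates, without simulating individual trajectories. A minimal generic sketch, whose dynamics and noise values are illustrative assumptions rather than the ARV models:

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # position/velocity transition matrix
Q = np.diag([1e-4, 1e-6])               # process noise covariance
H = np.array([[1.0, 0.0]])              # position measurement (camera/LiDAR-like)
R = np.array([[0.01]])                  # measurement noise variance

P = np.diag([1.0, 0.1])                 # initial state uncertainty
for _ in range(20):
    P = F @ P @ F.T + Q                 # propagate: P' = F P F^T + Q
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    P = (np.eye(2) - K @ H) @ P         # measurement update

print("steady-state position sigma:", float(np.sqrt(P[0, 0])))
```

Requirements verification then amounts to checking that the 3-sigma envelopes from P stay inside the allocated dispersions for each mission phase.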
Integration of remote sensing and surface geophysics in the detection of faults
NASA Technical Reports Server (NTRS)
Jackson, P. L.; Shuchman, R. A.; Wagner, H.; Ruskey, F.
1977-01-01
Remote sensing was included in a comprehensive investigation of the use of geophysical techniques to aid in underground mine placement. The primary objective was to detect faults and slumping, features which, due to structural weakness and excess water, cause construction difficulties and safety hazards in mine construction. Preliminary geologic reconnaissance was performed on a potential site for an underground oil shale mine in the Piceance Creek Basin of Colorado. LANDSAT data, black and white aerial photography and 3 cm radar imagery were obtained. LANDSAT data were primarily used in optical imagery and digital tape forms, both of which were analyzed and enhanced by computer techniques. The aerial photography and radar data offered supplemental information. Surface linears in the test area were located and mapped principally from LANDSAT data. A specific, relatively wide, linear pointed directly toward the test site, but did not extend into it. Density slicing, ratioing, and edge enhancement of the LANDSAT data all indicated the existence of this linear. Radar imagery marginally confirmed the linear, while aerial photography did not confirm it.
A comparison of linear and nonlinear statistical techniques in performance attribution.
Chan, N H; Genovese, C R
2001-01-01
Performance attribution is usually conducted under the linear framework of multifactor models. Although commonly used by practitioners in finance, linear multifactor models are known to be less than satisfactory in many situations. After a brief survey of nonlinear methods, nonlinear statistical techniques are applied to performance attribution of a portfolio constructed from a fixed universe of stocks using factors derived from some commonly used cross-sectional linear multifactor models. By rebalancing this portfolio monthly, the cumulative returns for procedures based on a standard linear multifactor model and three nonlinear techniques (model selection, additive models, and neural networks) are calculated and compared. It is found that the first two nonlinear techniques, especially in combination, outperform the standard linear model. The results in the neural-network case are inconclusive because of the great variety of possible models. Although these methods are more complicated and may require some tuning, toolboxes are developed and suggestions on calibration are proposed. This paper demonstrates the usefulness of modern nonlinear statistical techniques in performance attribution.
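As a rough illustration of why a nonlinear fit can outperform a linear factor model, the sketch below compares ordinary least squares against a crude additive-style smoother (binned means) on synthetic data; both the data and the smoother are stand-ins chosen for clarity, not the paper's models:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic cross-section: returns depend nonlinearly on a single factor exposure.
x = rng.uniform(-2, 2, size=500)
y = np.sin(1.5 * x) + 0.1 * rng.normal(size=x.size)

# Linear factor model: ordinary least squares on x.
slope, intercept = np.polyfit(x, y, 1)
mse_linear = np.mean((y - (slope * x + intercept)) ** 2)

# Simple additive/nonparametric alternative: binned means (a crude smoother).
bins = np.linspace(-2, 2, 21)
idx = np.clip(np.digitize(x, bins) - 1, 0, len(bins) - 2)
bin_means = np.array([y[idx == k].mean() for k in range(len(bins) - 1)])
mse_additive = np.mean((y - bin_means[idx]) ** 2)
# The nonlinear fit attains a clearly lower in-sample error.
```

A real additive model would use penalized smoothers and cross-validation rather than raw binned means, but the qualitative gap between the two fits is the point the paper measures with cumulative portfolio returns.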
NASA Technical Reports Server (NTRS)
Simons, Rainee N.
2002-01-01
The paper presents a novel on-wafer, antenna far field pattern measurement technique for microelectromechanical systems (MEMS) based reconfigurable patch antennas. The measurement technique significantly reduces the time and the cost associated with the characterization of printed antennas, fabricated on a semiconductor wafer or dielectric substrate. To measure the radiation patterns, the RF probe station is modified to accommodate an open-ended rectangular waveguide as the rotating linearly polarized sampling antenna. The open-ended waveguide is attached through a coaxial rotary joint to a Plexiglas(Trademark) arm and is driven along an arc by a stepper motor. Thus, the spinning open-ended waveguide can sample the relative field intensity of the patch as a function of the angle from bore sight. The experimental results include the measured linearly polarized and circularly polarized radiation patterns for MEMS-based frequency reconfigurable rectangular and polarization reconfigurable nearly square patch antennas, respectively.
NASA Technical Reports Server (NTRS)
Ferencz, Donald C.; Viterna, Larry A.
1991-01-01
ALPS is a computer program which can be used to solve general linear program (optimization) problems. ALPS was designed for those who have minimal linear programming (LP) knowledge and features a menu-driven scheme to guide the user through the process of creating and solving LP formulations. Once created, the problems can be edited and stored in standard DOS ASCII files to provide portability to various word processors or even other linear programming packages. Unlike many math-oriented LP solvers, ALPS contains an LP parser that reads through the LP formulation and reports several types of errors to the user. ALPS provides a large amount of solution data which is often useful in problem solving. In addition to pure linear programs, ALPS can solve integer, mixed integer, and binary type problems. Pure linear programs are solved with the revised simplex method. Integer or mixed integer programs are solved initially with the revised simplex method and then completed using the branch-and-bound technique. Binary programs are solved with the method of implicit enumeration. This manual describes how to use ALPS to create, edit, and solve linear programming problems. Instructions for installing ALPS on a PC compatible computer are included in the appendices along with a general introduction to linear programming. A programmers guide is also included for assistance in modifying and maintaining the program.
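The method of implicit enumeration mentioned for binary programs can be sketched as a depth-first search with bound pruning. This is an illustrative toy, not ALPS's actual implementation, and the problem data below are made up:

```python
def implicit_enumeration(c, a, b):
    """Maximize c.x subject to a.x <= b with x binary, pruning partial
    assignments whose optimistic bound cannot beat the incumbent."""
    n = len(c)
    best_val, best_x = float("-inf"), None

    def bound(i, val):
        # Optimistic bound: assume every remaining positive coefficient is taken.
        return val + sum(cj for cj in c[i:] if cj > 0)

    def search(i, val, used, x):
        nonlocal best_val, best_x
        if used > b:
            return                      # infeasible partial assignment: prune
        if bound(i, val) <= best_val:
            return                      # bound cannot improve incumbent: prune
        if i == n:
            best_val, best_x = val, x[:]
            return
        for bit in (1, 0):              # try x_i = 1 first
            x.append(bit)
            search(i + 1, val + bit * c[i], used + bit * a[i], x)
            x.pop()

    search(0, 0.0, 0.0, [])
    return best_val, best_x

# Small knapsack-style binary program (illustrative data):
# maximize 5*x1 + 4*x2 + 3*x3  subject to  2*x1 + 3*x2 + x3 <= 4.
val, x = implicit_enumeration(c=[5, 4, 3], a=[2, 3, 1], b=4)
# Optimum is x = (1, 0, 1) with value 8.
```

The pruning tests are what make the enumeration "implicit": most of the 2^n assignments are never visited.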
Electro-Optic Beam Steering Using Non-Linear Organic Materials
1993-08-01
York (SUNY), Buffalo, for potential application to the Hughes electro-optic beam deflector device. Evaluations include electro-optic coefficient...response time, transmission, and resistivity. Electro-optic coefficient measurements were made at 633 nm using a simple reflection technique. The
Human Language Technology: Opportunities and Challenges
2005-01-01
because of the connections to and reliance on signal processing. Audio diarization critically includes indexing of speakers [12], since speaker ...to reduce inter-speaker variability in training. Standard techniques include vocal-tract length normalization, adaptation of acoustic models using...maximum likelihood linear regression (MLLR), and speaker-adaptive training based on MLLR. The acoustic models are mixtures of Gaussians, typically with
Advanced Millimeter-Wave Imaging Enhances Security Screening
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheen, David M.; Bernacki, Bruce E.; McMakin, Douglas L.
2012-01-12
Millimeter-wave imaging is rapidly gaining acceptance for passenger screening at airports and other secured facilities. This paper details a number of techniques developed over the last several years including novel image reconstruction and display techniques, polarimetric imaging techniques, array switching schemes, as well as high frequency high bandwidth techniques. Implementation of some of these methods will increase the cost and complexity of the mm-wave security portal imaging systems. RF photonic methods may provide new solutions to the design and development of the sequentially switched linear mm-wave arrays that are the key element in the mm-wave portal imaging systems.
Using dynamic mode decomposition for real-time background/foreground separation in video
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kutz, Jose Nathan; Grosek, Jacob; Brunton, Steven
The technique of dynamic mode decomposition (DMD) is disclosed herein for the purpose of robustly separating video frames into background (low-rank) and foreground (sparse) components in real-time. Foreground/background separation is achieved at the computational cost of just one singular value decomposition (SVD) and one linear equation solve, thus producing results orders of magnitude faster than robust principal component analysis (RPCA). Additional techniques, including techniques for analyzing the video for multi-resolution time-scale components, and techniques for reusing computations to allow processing of streaming video in real time, are also described herein.
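A minimal sketch of the DMD background/foreground split on a synthetic one-dimensional "video" is shown below; the rank truncation and the synthetic data are assumptions for illustration, not the disclosed implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "video": a static background plus a bright dot moving one pixel per frame.
n_pix, n_frames = 64, 20
background = rng.uniform(0.2, 0.8, size=n_pix)
X = np.tile(background[:, None], (1, n_frames))
for t in range(n_frames):
    X[t % n_pix, t] += 5.0                      # sparse moving foreground

# Exact DMD on the snapshot pairs X1 -> X2.
X1, X2 = X[:, :-1], X[:, 1:]
U, s, Vh = np.linalg.svd(X1, full_matrices=False)
r = 5                                            # truncation rank (assumed; tune per video)
U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
eigvals, W = np.linalg.eig(A_tilde)
Phi = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W    # DMD modes

# The background is the mode whose eigenvalue is closest to 1 (zero temporal frequency).
bg_idx = np.argmin(np.abs(eigvals - 1.0))
b = np.linalg.lstsq(Phi, X[:, 0], rcond=None)[0]
bg_estimate = np.abs(Phi[:, bg_idx] * b[bg_idx])
foreground = X - bg_estimate[:, None]            # sparse residual (thresholded in practice)
```

The cost is dominated by the single SVD, which is the source of the speedup over RPCA claimed in the abstract.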
Evaluation of ERTS imagery for spectral geological mapping in diverse terranes of New York State
NASA Technical Reports Server (NTRS)
Isachsen, Y. W.; Fakundiny, R. H.; Forster, S. W.
1974-01-01
Linear anomalies dominate the new geological information derived from ERTS-1 imagery, total lengths now exceeding 6000 km. Experimentation with a variety of viewing techniques suggests that conventional photogeologic analysis of band 7 results in the location of more than 97 percent of all linears found. The maxima on rose diagrams for ERTS-1 anomalies correspond well with those for mapped faults and topographic lineaments, despite a difference in relative magnitudes of maxima thought due to solar illumination direction. A multiscale analysis of linears showed that single topographic linears at 1:2,500,000 became segmented at 1:1,000,000, aligned zones of shorter parallel, en echelon, or conjugate linears at 1:500,000, and still shorter linears lacking obvious alignment at 1:250,000. Visible glacial features include individual drumlins, best seen in winter imagery, drumlinoids, eskers, ice-marginal drainage channels, glacial lake shorelines and sand plains, and end moraines.
Sixth SIAM conference on applied linear algebra: Final program and abstracts. Final technical report
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1997-12-31
Linear algebra plays a central role in mathematics and applications. The analysis and solution of problems from an amazingly wide variety of disciplines depend on the theory and computational techniques of linear algebra. In turn, the diversity of disciplines depending on linear algebra also serves to focus and shape its development. Some problems have special properties (numerical, structural) that can be exploited. Some are simply so large that conventional approaches are impractical. New computer architectures motivate new algorithms, and fresh ways to look at old ones. The pervasive nature of linear algebra in analyzing and solving problems means that people from a wide spectrum--universities, industrial and government laboratories, financial institutions, and many others--share an interest in current developments in linear algebra. This conference aims to bring them together for their mutual benefit. Abstracts of papers presented are included.
NASA Astrophysics Data System (ADS)
Parvathi, S. P.; Ramanan, R. V.
2018-06-01
An iterative analytical trajectory design technique that includes perturbations in the departure phase of interplanetary orbiter missions is proposed. The perturbations such as the non-spherical gravity of Earth and the third-body perturbations due to the Sun and Moon are included in the analytical design process. In the design process, first the design is obtained using the iterative patched conic technique without including the perturbations, and then it is modified to include the perturbations. The modification is based on (i) backward analytical propagation of the state vector obtained from the iterative patched conic technique at the sphere of influence by including the perturbations, and (ii) quantification of deviations in the orbital elements at the periapsis of the departure hyperbolic orbit. The orbital elements at the sphere of influence are changed to nullify the deviations at the periapsis. The analytical backward propagation is carried out using the linear approximation technique. The new analytical design technique, named the biased iterative patched conic technique, does not depend upon numerical integration, and all computations are carried out using closed-form expressions. The improved design is very close to the numerical design. The design analysis using the proposed technique provides a realistic insight into the mission aspects. Also, the proposed design is an excellent initial guess for numerical refinement and helps arrive at the four distinct design options for a given opportunity.
Soft-decision decoding techniques for linear block codes and their error performance analysis
NASA Technical Reports Server (NTRS)
Lin, Shu
1996-01-01
The first paper presents a new minimum-weight trellis-based soft-decision iterative decoding algorithm for binary linear block codes. The second paper derives an upper bound on the probability of block error for multilevel concatenated codes (MLCC). The bound evaluates the difference in performance for different decompositions of some codes. The third paper investigates the bit error probability for maximum likelihood decoding of binary linear codes. The fourth and final paper included in this report concerns the construction of multilevel concatenated block modulation codes using a multilevel concatenation scheme for the frequency non-selective Rayleigh fading channel.
Neighboring extremal optimal control design including model mismatch errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, T.J.; Hull, D.G.
1994-11-01
The mismatch control technique that is used to simplify model equations of motion in order to determine analytic optimal control laws is extended using neighboring extremal theory. The first variation optimal control equations are linearized about the extremal path to account for perturbations in the initial state and the final constraint manifold. A numerical example demonstrates that the tuning procedure inherent in the mismatch control method increases the performance of the controls to the level of a numerically-determined piecewise-linear controller.
Comparison of some optimal control methods for the design of turbine blades
NASA Technical Reports Server (NTRS)
Desilva, B. M. E.; Grant, G. N. C.
1977-01-01
This paper attempts a comparative study of some numerical methods for the optimal control design of turbine blades whose vibration characteristics are approximated by Timoshenko beam idealizations with shear and incorporating simple boundary conditions. The blade was synthesized using the following methods: (1) conjugate gradient minimization of the system Hamiltonian in function space incorporating penalty function transformations, (2) projection operator methods in a function space which includes the frequencies of vibration and the control function, (3) epsilon-technique penalty function transformation resulting in a highly nonlinear programming problem, (4) finite difference discretization of the state equations again resulting in a nonlinear program, (5) second variation methods with complex state differential equations to include damping effects resulting in systems of inhomogeneous matrix Riccati equations some of which are stiff, (6) quasi-linear methods based on iterative linearization of the state and adjoint equations. The paper includes a discussion of some substantial computational difficulties encountered in the implementation of these techniques together with a resume of work presently in progress using a differential dynamic programming approach.
A Review on Inertia and Linear Friction Welding of Ni-Based Superalloys
NASA Astrophysics Data System (ADS)
Chamanfar, Ahmad; Jahazi, Mohammad; Cormier, Jonathan
2015-04-01
Inertia and linear friction welding are being increasingly used for near-net-shape manufacturing of high-value materials in aerospace and power generation gas turbines because they provide a better-quality joint and offer many advantages over conventional fusion welding and mechanical joining techniques. In this paper, the published works to date on inertia and linear friction welding of Ni-based superalloys are reviewed with the objective of clarifying discrepancies and uncertainties reported in the literature regarding issues related to these two friction welding processes as well as the microstructure, texture, and mechanical properties of the Ni-based superalloy weldments. Initially, the chemical composition and microstructure of Ni-based superalloys that contribute to the quality of the joint are reviewed briefly. Then, problems related to fusion welding of these alloys are addressed with due consideration of inertia and linear friction welding as alternative techniques. The fundamentals of inertia and linear friction welding processes are analyzed next with emphasis on the bonding mechanisms and the evolution of temperature and strain rate across the weld interface. Microstructural features, texture development, residual stresses, and mechanical properties of similar and dissimilar polycrystalline and single crystal Ni-based superalloy weldments are discussed next. Then, the application of inertia and linear friction welding for joining Ni-based superalloys and the related advantages over fusion welding, mechanical joining, and machining are explained briefly. Finally, present scientific and technological challenges facing inertia and linear friction welding of Ni-based superalloys, including those related to modeling of these processes, are addressed.
Robust Nonlinear Feedback Control of Aircraft Propulsion Systems
NASA Technical Reports Server (NTRS)
Garrard, William L.; Balas, Gary J.; Litt, Jonathan (Technical Monitor)
2001-01-01
This is the final report on the research performed under NASA Glenn grant NASA/NAG-3-1975 concerning feedback control of the Pratt & Whitney (PW) STF 952, a twin spool, mixed flow, afterburning turbofan engine. The research focused on the design of linear and gain-scheduled, multivariable inner-loop controllers for the PW turbofan engine using H-infinity and linear parameter-varying (LPV) control techniques. The nonlinear turbofan engine simulation was provided by PW within the NASA Rocket Engine Transient Simulator (ROCETS) simulation software environment. ROCETS was used to generate linearized models of the turbofan engine for control design and analysis, as well as the simulation environment to evaluate the performance and robustness of the controllers. Comparisons are made between the H-infinity and LPV controllers and the baseline multivariable controller developed by Pratt & Whitney engineers that is included in the ROCETS simulation. Simulation results indicate that the H-infinity and LPV techniques effectively achieve desired response characteristics with minimal cross coupling between commanded values and are very robust to unmodeled dynamics and sensor noise.
NASA Technical Reports Server (NTRS)
Adams, W. M., Jr.; Tiffany, S. H.
1983-01-01
A control law is developed to suppress symmetric flutter for a mathematical model of an aeroelastic research vehicle. An implementable control law is attained by including modified LQG (linear quadratic Gaussian) design techniques, controller order reduction, and gain scheduling. An alternate (complementary) design approach is illustrated for one flight condition wherein nongradient-based constrained optimization techniques are applied to maximize controller robustness.
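The LQR half of an LQG design can be sketched as a fixed-point iteration of the discrete Riccati equation; the double-integrator plant below is an illustrative stand-in for the aeroelastic model, not the research vehicle's dynamics:

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain via fixed-point iteration of the Riccati equation:
    K = (R + B'PB)^-1 B'PA,  P <- Q + A'P(A - BK)."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K, P

# Double integrator sampled at dt (illustrative values only).
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
Q = np.eye(2)            # state weighting
R = np.array([[1.0]])    # control weighting

K, P = dlqr(A, B, Q, R)
closed_loop_radius = max(abs(np.linalg.eigvals(A - B @ K)))
# The closed loop is stable: spectral radius strictly less than 1.
```

In a full LQG design this state-feedback gain is paired with a Kalman estimator, and controller order reduction and gain scheduling (as in the abstract) are applied on top.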
Aircraft model prototypes which have specified handling-quality time histories
NASA Technical Reports Server (NTRS)
Johnson, S. H.
1976-01-01
Several techniques for obtaining linear constant-coefficient airplane models from specified handling-quality time histories are discussed. One technique, the pseudodata method, solves the basic problem, yields specified eigenvalues, and accommodates state-variable transfer-function zero suppression. The method is fully illustrated for a fourth-order stability-axis small-motion model with three lateral handling-quality time histories specified. The FORTRAN program which obtains and verifies the model is included and fully documented.
Metaheuristic optimisation methods for approximate solving of singular boundary value problems
NASA Astrophysics Data System (ADS)
Sadollah, Ali; Yadav, Neha; Gao, Kaizhou; Su, Rong
2017-07-01
This paper presents a novel approximation technique based on metaheuristics and weighted residual function (WRF) for tackling singular boundary value problems (BVPs) arising in engineering and science. With the aid of certain fundamental concepts of mathematics, Fourier series expansion, and metaheuristic optimisation algorithms, singular BVPs can be approximated as an optimisation problem with boundary conditions as constraints. The target is to minimise the WRF (i.e. error function) constructed in approximation of BVPs. The scheme involves generational distance metric for quality evaluation of the approximate solutions against exact solutions (i.e. error evaluator metric). Four test problems including two linear and two non-linear singular BVPs are considered in this paper to check the efficiency and accuracy of the proposed algorithm. The optimisation task is performed using three different optimisers including the particle swarm optimisation, the water cycle algorithm, and the harmony search algorithm. Optimisation results obtained show that the suggested technique can be successfully applied for approximate solving of singular BVPs.
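The weighted-residual idea can be illustrated on a toy BVP, with a plain random search standing in for PSO, the water cycle algorithm, or harmony search; the trial function and problem below are assumptions chosen so the exact answer is known:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy BVP: y'' = -1 on [0, 1] with y(0) = y(1) = 0; exact solution y = x(1 - x)/2.
xs = np.linspace(0.0, 1.0, 101)

def wrf(a):
    # Trial function y = a*x*(1-x) satisfies the boundary conditions exactly,
    # so the WRF is just the mean-squared interior residual of y'' + 1.
    residual = np.full_like(xs, -2.0 * a + 1.0)   # y'' = -2a for this trial function
    return float(np.mean(residual ** 2))

# Plain random search as a minimal stand-in for the paper's metaheuristics.
best_a, best_f = 0.0, wrf(0.0)
step = 1.0
for _ in range(2000):
    cand = best_a + step * rng.normal()
    f = wrf(cand)
    if f < best_f:
        best_a, best_f = cand, f
    step *= 0.999                                  # slowly shrink the search radius
# best_a approaches 0.5, matching the exact solution x(1 - x)/2.
```

For a genuinely singular BVP the trial function would be a truncated Fourier series and the boundary conditions handled as constraints, as described in the abstract, but the structure (minimize the WRF over expansion coefficients) is the same.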
Linear chirp phase perturbing approach for finding binary phased codes
NASA Astrophysics Data System (ADS)
Li, Bing C.
2017-05-01
Binary phased codes have many applications in communication and radar systems. These applications require binary phased codes to have low sidelobes in order to reduce interference and false detection. Barker codes are the ones that satisfy these requirements, and they have the lowest maximum sidelobes. However, Barker codes have very limited code lengths (equal to or less than 13), while many applications, including low probability of intercept radar and spread spectrum communication, require much greater code lengths. The conventional techniques for finding binary phased codes in the literature include exhaustive search, neural networks, and evolutionary methods, and they all require very expensive computation for large code lengths. Therefore these techniques are limited to finding binary phased codes with small code lengths (less than 100). In this paper, by analyzing Barker code, linear chirp, and P3 phases, we propose a new approach to find binary codes. Experiments show that the proposed method is able to find long low-sidelobe binary phased codes (code length >500) with reasonable computational cost.
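The Barker-code property cited above (length 13, peak sidelobe magnitude 1) is easy to verify numerically from the aperiodic autocorrelation:

```python
import numpy as np

# Barker-13, the longest known Barker code.
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])

# Aperiodic autocorrelation: zero lag sits at index len-1 of the 'full' output.
acf = np.correlate(barker13, barker13, mode="full")
peak = acf[len(barker13) - 1]                 # zero-lag value equals the code length, 13
sidelobes = np.delete(acf, len(barker13) - 1)
max_sidelobe = np.max(np.abs(sidelobes))      # 1 for any Barker code
```

The same autocorrelation computation is the inner loop of any search for longer low-sidelobe binary codes, which is why the cost grows quickly with code length.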
Inverse dynamics of a 3 degree of freedom spatial flexible manipulator
NASA Technical Reports Server (NTRS)
Bayo, Eduardo; Serna, M.
1989-01-01
A technique is presented for solving the inverse dynamics and kinematics of a 3-degree-of-freedom spatial flexible manipulator. The proposed method finds the joint torques necessary to produce a specified end effector motion. Since the inverse dynamic problem in elastic manipulators is closely coupled to the inverse kinematic problem, the solution of the first also renders the displacements and rotations at any point of the manipulator, including the joints. Furthermore, the formulation is complete in the sense that it includes all the nonlinear terms due to the large rotation of the links. The Timoshenko beam theory is used to model the elastic characteristics, and the resulting equations of motion are discretized using the finite element method. An iterative solution scheme is proposed that relies on local linearization of the problem. The solution of each linearization is carried out in the frequency domain. The performance and capabilities of this technique are tested through simulation analysis. Results show the potential use of this method for the smooth motion control of space telerobots.
Optical systolic solutions of linear algebraic equations
NASA Technical Reports Server (NTRS)
Neuman, C. P.; Casasent, D.
1984-01-01
The philosophy and data encoding possible in the systolic array optical processor (SAOP) are reviewed. The multitude of linear algebraic operations achievable on this architecture is examined. These operations include such linear algebraic algorithms as: matrix decomposition, direct and indirect solutions, implicit and explicit methods for partial differential equations, eigenvalue and eigenvector calculations, and singular value decomposition. This architecture can be utilized to realize general techniques for solving matrix linear and nonlinear algebraic equations, least mean square error solutions, FIR filters, and nested-loop algorithms for control engineering applications. The data flow and pipelining of operations, design of parallel algorithms and flexible architectures, application of these architectures to computationally intensive physical problems, error source modeling of optical processors, and matching of the computational needs of practical engineering problems to the capabilities of optical processors are emphasized.
Influence of a Levelness Defect in a Thrust Bearing on the Dynamic Behaviour of AN Elastic Shaft
NASA Astrophysics Data System (ADS)
BERGER, S.; BONNEAU, O.; FRÊNE, J.
2002-01-01
This paper examines the non-linear dynamic behaviour of a flexible shaft. The shaft is mounted on two journal bearings and the axial load is supported by a defective hydrodynamic thrust bearing at one end. The defect is a levelness defect of the rotor. The thrust bearing behaviour must be considered to be non-linear because of the effects of the defect. The shaft is modelled with typical beam finite elements including effects such as the gyroscopic effects. A modal technique is used to reduce the number of degrees of freedom. Results show that the thrust bearing defects introduce supplementary critical speeds. The linear approach is unable to show the supplementary critical speeds which are obtained only by using non-linear analysis.
Advances in the microrheology of complex fluids
NASA Astrophysics Data System (ADS)
Waigh, Thomas Andrew
2016-07-01
New developments in the microrheology of complex fluids are considered. Firstly the requirements for a simple modern particle tracking microrheology experiment are introduced, the error analysis methods associated with it and the mathematical techniques required to calculate the linear viscoelasticity. Progress in microrheology instrumentation is then described with respect to detectors, light sources, colloidal probes, magnetic tweezers, optical tweezers, diffusing wave spectroscopy, optical coherence tomography, fluorescence correlation spectroscopy, elastic- and quasi-elastic scattering techniques, 3D tracking, single molecule methods, modern microscopy methods and microfluidics. New theoretical techniques are also reviewed such as Bayesian analysis, oversampling, inversion techniques, alternative statistical tools for tracks (angular correlations, first passage probabilities, the kurtosis, motor protein step segmentation etc), issues in micro/macro rheological agreement and two particle methodologies. Applications where microrheology has begun to make some impact are also considered including semi-flexible polymers, gels, microorganism biofilms, intracellular methods, high frequency viscoelasticity, comb polymers, active motile fluids, blood clots, colloids, granular materials, polymers, liquid crystals and foods. Two large emergent areas of microrheology, non-linear microrheology and surface microrheology are also discussed.
NASA Technical Reports Server (NTRS)
Saleeb, A. F.; Chang, T. Y. P.; Wilt, T.; Iskovitz, I.
1989-01-01
The research work performed during the past year on finite element implementation and computational techniques pertaining to high temperature composites is outlined. In the present research, two main issues are addressed: efficient geometric modeling of composite structures and expedient numerical integration techniques dealing with constitutive rate equations. In the first issue, mixed finite elements for modeling laminated plates and shells were examined in terms of numerical accuracy, locking property and computational efficiency. Element applications include (currently available) linearly elastic analysis and future extension to material nonlinearity for damage predictions and large deformations. On the material level, various integration methods to integrate nonlinear constitutive rate equations for finite element implementation were studied. These include explicit, implicit and automatic subincrementing schemes. In all cases, examples are included to illustrate the numerical characteristics of various methods that were considered.
Spin formalism and applications to new physics searches
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haber, H.E.
1994-12-01
An introduction to spin techniques in particle physics is given. Among the topics covered are: helicity formalism and its applications to the decay and scattering of spin-1/2 and spin-1 particles, techniques for evaluating helicity amplitudes (including projection operator methods and the spinor helicity method), and density matrix techniques. The utility of polarization and spin correlations for untangling new physics beyond the Standard Model at future colliders such as the LHC and a high-energy e+e- linear collider is then considered. A number of detailed examples are explored, including the search for low-energy supersymmetry, a non-minimal Higgs boson sector, and new gauge bosons beyond the W± and Z.
NASA Astrophysics Data System (ADS)
Stroe, Gabriela; Andrei, Irina-Carmen; Frunzulica, Florin
2017-01-01
The objectives of this paper are the study and the implementation of both aerodynamic and propulsion models, as linear interpolations using look-up tables in a database. The aerodynamic and propulsion dependencies on the state and control variables have been described by analytic polynomial models. Some simplifying hypotheses were made in the development of the nonlinear aircraft simulations. The choice of a certain technique to use depends on the desired accuracy of the solution and the computational effort to be expended. Each nonlinear simulation includes the full nonlinear dynamics of the bare airframe, with a scaled direct connection from pilot inputs to control surface deflections to provide adequate pilot control. The engine power dynamic response was modeled with an additional state equation: the first-order lag in the actual power level response to the commanded power level was computed as a function of throttle position. The number of control inputs and engine power states varied depending on the number of control surfaces and aircraft engines. The set of coupled, nonlinear, first-order ordinary differential equations that comprise the simulation model can be represented by a vector differential equation. A linear time-invariant (LTI) system representing aircraft dynamics for small perturbations about a reference trim condition is given by the state and output equations presented. The gradients are obtained numerically by perturbing each state and control input independently and recording the changes in the trimmed state and output equations. This is done using the numerical technique of central finite differences, including the perturbations of the state and control variables. For a reference trim condition of straight and level flight, linearization results in two decoupled sets of linear, constant-coefficient differential equations for longitudinal and lateral/directional motion.
The linearization is valid for small perturbations about the reference trim condition. Experimental aerodynamic and thrust data are used to model the applied aerodynamic and propulsion forces and moments for arbitrary states and controls. There is no closed form solution to such problems, so the equations must be solved using numerical integration. Techniques for solving this initial value problem for ordinary differential equations are employed to obtain approximate solutions at discrete points along the aircraft state trajectory.
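The central-finite-difference linearization described above can be sketched generically; the damped pendulum below is an assumed stand-in for the aircraft dynamics, used only because its Jacobian is known in closed form:

```python
import numpy as np

def linearize(f, x0, u0, eps=1e-6):
    """Numerical Jacobians A = df/dx and B = df/du at a trim point (x0, u0),
    computed by central finite differences."""
    n, m = len(x0), len(u0)
    k = len(f(x0, u0))
    A = np.zeros((k, n))
    B = np.zeros((k, m))
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

# Toy nonlinear plant: a damped pendulum with torque input.
def pendulum(x, u):
    theta, omega = x
    return np.array([omega, -9.81 * np.sin(theta) - 0.1 * omega + u[0]])

A, B = linearize(pendulum, x0=np.array([0.0, 0.0]), u0=np.array([0.0]))
# At the downward trim point: A = [[0, 1], [-9.81, -0.1]], B = [[0], [1]]
```

For the aircraft case, `f` would be the full nonlinear state equations and the trim point a straight-and-level flight condition, yielding the decoupled longitudinal and lateral/directional LTI models described in the abstract.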
Overcoming learning barriers through knowledge management.
Dror, Itiel E; Makany, Tamas; Kemp, Jonathan
2011-02-01
The ability to learn depends strongly on how knowledge is managed. Specifically, different techniques for note-taking utilize different cognitive processes and strategies. In this paper, we compared dyslexic and control participants when using linear and non-linear note-taking. All our participants were professionals working in the banking and financial sector. We examined comprehension, accuracy, mental imagery & complexity, metacognition, and memory. We found that participants with dyslexia, when using a non-linear note-taking technique, outperformed the control group using linear note-taking and matched the performance of the control group using non-linear note-taking. These findings emphasize how different knowledge management techniques can avoid some of the barriers to learners. Copyright © 2010 John Wiley & Sons, Ltd.
NASA standard: Trend analysis techniques
NASA Technical Reports Server (NTRS)
1990-01-01
Descriptive and analytical techniques for NASA trend analysis applications are presented in this standard. Trend analysis is applicable in all organizational elements of NASA connected with, or supporting, developmental/operational programs. This document should be consulted for any data analysis activity requiring the identification or interpretation of trends. Trend analysis is neither a precise term nor a circumscribed methodology: it generally connotes quantitative analysis of time-series data. For NASA activities, the appropriate and applicable techniques include descriptive and graphical statistics, and the fitting or modeling of data by linear, quadratic, and exponential models. Usually, but not always, the data is time-series in nature. Concepts such as autocorrelation and techniques such as Box-Jenkins time-series analysis would only rarely apply and are not included in this document. The basic ideas needed for qualitative and quantitative assessment of trends along with relevant examples are presented.
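The model-fitting step the standard describes, comparing linear, quadratic, and exponential fits to time-series data, can be sketched as follows. The selection-by-residual rule and the synthetic data are illustrative choices, not prescribed by the standard:

```python
import numpy as np

def fit_trend(t, y):
    """Fit linear, quadratic, and exponential trend models to time-series
    data and report the residual sum of squares (RSS) of each."""
    models = {
        'linear': np.polyfit(t, y, 1),      # y = a*t + b
        'quadratic': np.polyfit(t, y, 2),   # y = a*t^2 + b*t + c
    }
    # Exponential y = A*exp(k*t), fit as log-linear (requires y > 0)
    k, logA = np.polyfit(t, np.log(y), 1)
    rss = {
        'linear': np.sum((np.polyval(models['linear'], t) - y) ** 2),
        'quadratic': np.sum((np.polyval(models['quadratic'], t) - y) ** 2),
        'exponential': np.sum((np.exp(logA) * np.exp(k * t) - y) ** 2),
    }
    best = min(rss, key=rss.get)
    return best, rss

# Synthetic exponentially growing trend for illustration
t = np.arange(10.0)
y = 3.0 * np.exp(0.4 * t)
best, rss = fit_trend(t, y)
```

For genuinely exponential data the log-linear fit recovers the trend almost exactly, while the polynomial models leave visible residuals, so the RSS comparison selects the exponential model.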
A New Ghost Cell/Level Set Method for Moving Boundary Problems: Application to Tumor Growth
Macklin, Paul
2011-01-01
In this paper, we present a ghost cell/level set method for the evolution of interfaces whose normal velocity depends upon the solutions of linear and nonlinear quasi-steady reaction-diffusion equations with curvature-dependent boundary conditions. Our technique includes a ghost cell method that accurately discretizes normal derivative jump boundary conditions without smearing jumps in the tangential derivative; a new iterative method for solving linear and nonlinear quasi-steady reaction-diffusion equations; an adaptive discretization to compute the curvature and normal vectors; and a new discrete approximation to the Heaviside function. We present numerical examples that demonstrate better than 1.5-order convergence for problems where traditional ghost cell methods either fail to converge or attain at best sub-linear accuracy. We apply our techniques to a model of tumor growth in complex, heterogeneous tissues that consists of a nonlinear nutrient equation and a pressure equation with geometry-dependent jump boundary conditions. We simulate the growth of glioblastoma (an aggressive brain tumor) into a large, 1 cm square of brain tissue that includes heterogeneous nutrient delivery and varied biomechanical characteristics (white matter, gray matter, cerebrospinal fluid, and bone), and we observe growth morphologies that are highly dependent upon the variations of the tissue characteristics—an effect observed in real tumor growth. PMID:21331304
NASA Technical Reports Server (NTRS)
Subrahmanyam, K. B.; Kaza, K. R. V.; Brown, G. V.; Lawrence, C.
1986-01-01
The coupled bending-bending-torsional equations of dynamic motion of rotating, linearly pretwisted blades are derived including large precone, second degree geometric nonlinearities and Coriolis effects. The equations are solved by the Galerkin method and a linear perturbation technique. Accuracy of the present method is verified by comparisons of predicted frequencies and steady state deflections with those from MSC/NASTRAN and from experiments. Parametric results are generated to establish where inclusion of only the second degree geometric nonlinearities is adequate. The nonlinear terms causing torsional divergence in thin blades are identified. The effects of Coriolis terms and several other structurally nonlinear terms are studied, and their relative importance is examined.
NASA Technical Reports Server (NTRS)
Subrahmanyam, K. B.; Kaza, K. R. V.; Brown, G. V.; Lawrence, C.
1987-01-01
The coupled bending-bending-torsional equations of dynamic motion of rotating, linearly pretwisted blades are derived including large precone, second degree geometric nonlinearities and Coriolis effects. The equations are solved by the Galerkin method and a linear perturbation technique. Accuracy of the present method is verified by comparisons of predicted frequencies and steady state deflections with those from MSC/NASTRAN and from experiments. Parametric results are generated to establish where inclusion of only the second degree geometric nonlinearities is adequate. The nonlinear terms causing torsional divergence in thin blades are identified. The effects of Coriolis terms and several other structurally nonlinear terms are studied, and their relative importance is examined.
Linear Self-Referencing Techniques for Short-Optical-Pulse Characterization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dorrer, C.; Kang, I.
2008-04-04
Linear self-referencing techniques for the characterization of the electric field of short optical pulses are presented. The theoretical and practical advantages of these techniques are developed. Experimental implementations are described, and their performance is compared to the performance of their nonlinear counterparts. Linear techniques demonstrate unprecedented sensitivity and are a perfect fit in many domains where the precise, accurate measurement of the electric field of an optical pulse is required.
Exact and heuristic algorithms for Space Information Flow.
Uwitonze, Alfred; Huang, Jiaqing; Ye, Yuanqing; Cheng, Wenqing; Li, Zongpeng
2018-01-01
Space Information Flow (SIF) is a promising new research area that studies network coding in geometric space, such as Euclidean space. The design of algorithms that compute the optimal SIF solutions remains one of the key open problems in SIF. This work proposes the first exact SIF algorithm and a heuristic SIF algorithm that compute min-cost multicast network coding for N (N ≥ 3) given terminal nodes in 2-D Euclidean space. Furthermore, we find that the Butterfly network in Euclidean space is the second example, besides the Pentagram network, where SIF is strictly better than the Euclidean Steiner minimal tree. The exact algorithm design is based on two key techniques: Delaunay triangulation and linear programming. The Delaunay triangulation technique helps to find practically good candidate relay nodes, after which a min-cost multicast linear programming model is solved over the terminal nodes and the candidate relay nodes, to compute the optimal multicast network topology, including the optimal relay nodes selected by linear programming from all the candidate relay nodes and the flow rates on the connection links. The heuristic algorithm design is also based on the Delaunay triangulation and linear programming techniques. The exact algorithm achieves the optimal SIF solution with exponential computational complexity, while the heuristic algorithm achieves a sub-optimal SIF solution with polynomial computational complexity. We prove the correctness of the exact SIF algorithm. The simulation results show the effectiveness of the heuristic SIF algorithm.
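As a rough illustration of the candidate-generation idea, the sketch below takes the centroids of the Delaunay triangles spanned by the terminal nodes as candidate relay positions. This is a simplified stand-in for the paper's construction (which then solves a min-cost multicast LP over terminals plus candidates); the terminal coordinates are made up:

```python
import numpy as np
from scipy.spatial import Delaunay

def candidate_relays(terminals):
    """Candidate relay nodes as centroids of the Delaunay triangles over
    the terminal set (a simplified stand-in for the candidate-generation
    step; the paper's actual construction may differ)."""
    tri = Delaunay(terminals)
    # terminals[tri.simplices] has shape (n_triangles, 3, 2);
    # averaging over the vertex axis gives one centroid per triangle.
    return terminals[tri.simplices].mean(axis=1)

# Four hypothetical terminal nodes, one inside the hull of the other three
pts = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0], [2.0, 1.0]])
relays = candidate_relays(pts)
```

With one terminal strictly inside the triangle formed by the other three, the Delaunay triangulation yields three triangles and hence three candidate relay positions, all lying inside the convex hull of the terminals.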
NASA Astrophysics Data System (ADS)
Frouin, Jerome; Matikas, Theodore E.; Na, Jeong K.; Sathish, Shamachary
1999-02-01
An in-situ technique to measure sound velocity, ultrasonic attenuation, and acoustic nonlinearity has been developed for the characterization and early detection of fatigue damage in aerospace materials. A previous experiment using the f-2f technique on Ti-6Al-4V dog-bone specimens fatigued to different stages has shown that the material nonlinearity exhibits larger changes than the other ultrasonic parameters. Real-time monitoring of the nonlinearity may become a tool for characterizing early fatigue damage in the material. For this purpose we have developed computer software and a measurement technique, including hardware, for the automation of the measurement. A new transducer holder and special grips were designed. The automation has allowed us to test the long-term stability of the electronics over a period of time and thus verify the linearity of the system. For the first time, a real-time experiment has been performed on a dog-bone specimen from zero fatigue all the way to final fracture.
NASA Technical Reports Server (NTRS)
Schutz, Bob E.; Baker, Gregory A.
1997-01-01
The recovery of a high resolution geopotential from satellite gradiometer observations motivates the examination of high performance computational techniques. The primary subject matter addresses specifically the use of satellite gradiometer and GPS observations to form and invert the normal matrix associated with a large degree and order geopotential solution. Memory resident and out-of-core parallel linear algebra techniques along with data parallel batch algorithms form the foundation of the least squares application structure. A secondary topic includes the adoption of object oriented programming techniques to enhance modularity and reusability of code. Applications implementing the parallel and object oriented methods successfully calculate the degree variance for a degree and order 110 geopotential solution on 32 processors of the Cray T3E. The memory resident gradiometer application exhibits an overall application performance of 5.4 Gflops, and the out-of-core linear solver exhibits an overall performance of 2.4 Gflops. The combination solution derived from a sun synchronous gradiometer orbit produces average geoid height variances of 17 millimeters.
Taxi-Out Time Prediction for Departures at Charlotte Airport Using Machine Learning Techniques
NASA Technical Reports Server (NTRS)
Lee, Hanbong; Malik, Waqar; Jung, Yoon C.
2016-01-01
Predicting the taxi-out times of departures accurately is important for improving airport efficiency and takeoff time predictability. In this paper, we attempt to apply machine learning techniques to actual traffic data at Charlotte Douglas International Airport for taxi-out time prediction. To find the key factors affecting aircraft taxi times, surface surveillance data is first analyzed. From this data analysis, several variables, including terminal concourse, spot, runway, departure fix and weight class, are selected for taxi time prediction. Then, various machine learning methods such as linear regression, support vector machines, k-nearest neighbors, random forest, and neural networks model are applied to actual flight data. Different traffic flow and weather conditions at Charlotte airport are also taken into account for more accurate prediction. The taxi-out time prediction results show that linear regression and random forest techniques can provide the most accurate prediction in terms of root-mean-square errors. We also discuss the operational complexity and uncertainties that make it difficult to predict the taxi times accurately.
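A minimal sketch of the model comparison described above, using synthetic stand-in features (queue-like quantities drawn at random) rather than the Charlotte surface surveillance data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-ins for the paper's predictors (e.g. terminal concourse,
# departure fix, weight class); the real features come from surveillance data.
n = 500
X = rng.uniform(0, 1, size=(n, 3))
taxi_out = 8 + 12 * X[:, 0] + 5 * X[:, 1] + 2 * X[:, 2] + rng.normal(0, 1, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, taxi_out, random_state=0)
rmse = {}
for name, model in [('linreg', LinearRegression()),
                    ('rf', RandomForestRegressor(n_estimators=100,
                                                 random_state=0))]:
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    # Root-mean-square error on the held-out set
    rmse[name] = float(np.sqrt(np.mean((pred - y_te) ** 2)))
```

On this deliberately linear synthetic target, linear regression recovers the generating model almost exactly (RMSE near the noise floor of 1 minute), which mirrors the paper's finding that simple regression can be competitive with ensemble methods on this task.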
Li, Zhe; Erkilinc, M Sezer; Galdino, Lidia; Shi, Kai; Thomsen, Benn C; Bayvel, Polina; Killey, Robert I
2016-12-12
Single-polarization direct-detection transceivers may offer advantages over digital coherent technology for some metro, back-haul, access, and inter-data-center applications, since they provide low-cost, low-complexity solutions. However, a direct-detection receiver, being a square-law device, introduces nonlinearity upon photodetection, which results in signal distortion due to signal-signal beat interference (SSBI). Consequently, it is desirable to develop effective and low-cost SSBI compensation techniques to improve the performance of such transceivers. In this paper, we compare the performance of a number of recently proposed digital signal processing-based SSBI compensation schemes, including the use of single- and two-stage linearization filters, an iterative linearization filter, and an SSBI estimation and cancellation technique. Their performance is assessed experimentally using a 7 × 25 Gb/s wavelength division multiplexed (WDM) single-sideband 16-QAM Nyquist-subcarrier modulation system operating at a net information spectral density of 2.3 (b/s)/Hz.
NASA Astrophysics Data System (ADS)
Baker, Gregory Allen
The recovery of a high resolution geopotential from satellite gradiometer observations motivates the examination of high performance computational techniques. The primary subject matter addresses specifically the use of satellite gradiometer and GPS observations to form and invert the normal matrix associated with a large degree and order geopotential solution. Memory resident and out-of-core parallel linear algebra techniques along with data parallel batch algorithms form the foundation of the least squares application structure. A secondary topic includes the adoption of object oriented programming techniques to enhance modularity and reusability of code. Applications implementing the parallel and object oriented methods successfully calculate the degree variance for a degree and order 110 geopotential solution on 32 processors of the Cray T3E. The memory resident gradiometer application exhibits an overall application performance of 5.4 Gflops, and the out-of-core linear solver exhibits an overall performance of 2.4 Gflops. The combination solution derived from a sun synchronous gradiometer orbit produce average geoid height variances of 17 millimeters.
Linear programming computational experience with onyx
DOE Office of Scientific and Technical Information (OSTI.GOV)
Atrek, E.
1994-12-31
ONYX is a linear programming software package based on an efficient variation of the gradient projection method. When fully configured, it is intended for application to industrial size problems. While the computational experience is limited at the time of this abstract, the technique is found to be robust and competitive with existing methodology in terms of both accuracy and speed. An overview of the approach is presented together with a description of program capabilities, followed by a discussion of up-to-date computational experience with the program. Conclusions include advantages of the approach and envisioned future developments.
Amplitude effects on the dynamic performance of hydrostatic gas thrust bearings
NASA Technical Reports Server (NTRS)
Stiffler, A. K.; Tapia, R. R.
1979-01-01
A strip gas film bearing with inherently compensated inlets is analyzed to determine the effect of disturbance amplitude on its dynamic performance. The governing Reynolds equation is solved using finite-difference techniques. The time-dependent load capacity is represented by a Fourier series up to and including the third harmonic. For the range of amplitudes investigated, the linear stiffness was independent of the amplitude, and the linear damping was inversely proportional to (1 - epsilon-squared) to the 1.5 power, where epsilon is the amplitude relative to the film thickness.
Progress in multidisciplinary design optimization at NASA Langley
NASA Technical Reports Server (NTRS)
Padula, Sharon L.
1993-01-01
Multidisciplinary Design Optimization refers to some combination of disciplinary analyses, sensitivity analysis, and optimization techniques used to design complex engineering systems. The ultimate objective of this research at NASA Langley Research Center is to help the US industry reduce the costs associated with development, manufacturing, and maintenance of aerospace vehicles while improving system performance. This report reviews progress towards this objective and highlights topics for future research. Aerospace design problems selected from the author's research illustrate strengths and weaknesses in existing multidisciplinary optimization techniques. The techniques discussed include multiobjective optimization, global sensitivity equations and sequential linear programming.
The Stonehenge technique. A method for aligning coherent bremsstrahlung radiators
NASA Astrophysics Data System (ADS)
Livingston, Ken
2009-05-01
This paper describes a technique for the alignment of crystal radiators used to produce high energy, linearly polarized photons via coherent bremsstrahlung scattering at electron beam facilities. In these experiments the crystal is mounted on a goniometer which is used to adjust its orientation relative to the electron beam. The angles and equations which relate the crystal lattice, goniometer and electron beam direction are presented here, and the method of alignment is illustrated with data taken at MAMI (the Mainz microtron). A practical guide to setting up a coherent bremsstrahlung facility and installing new crystals using this technique is also included.
On Convergence Acceleration Techniques for Unstructured Meshes
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.
1998-01-01
A discussion of convergence acceleration techniques as they relate to computational fluid dynamics problems on unstructured meshes is given. Rather than providing a detailed description of particular methods, the various different building blocks of current solution techniques are discussed and examples of solution strategies using one or several of these ideas are given. Issues relating to unstructured grid CFD problems are given additional consideration, including suitability of algorithms to current hardware trends, memory and cpu tradeoffs, treatment of non-linearities, and the development of efficient strategies for handling anisotropy-induced stiffness. The outlook for future potential improvements is also discussed.
Monte Carlo Simulation of Nonlinear Radiation Induced Plasmas. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Wang, B. S.
1972-01-01
A Monte Carlo simulation model for radiation induced plasmas with nonlinear properties due to recombination was developed, employing a piecewise linearized predict-correct iterative technique. Several important variance reduction techniques were developed and incorporated into the model, including an antithetic variates technique. This approach is especially efficient for plasma systems with inhomogeneous media, multiple dimensions, and irregular boundaries. The Monte Carlo code developed has been applied to the determination of the electron energy distribution function and related parameters for a noble gas plasma created by alpha-particle irradiation. The characteristics of the radiation induced plasma involved are given.
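The antithetic-variates idea mentioned above can be illustrated on a toy integral. This sketch is generic Monte Carlo, not the thesis's plasma code: pairing each uniform draw u with 1 - u induces negative correlation between the paired samples and shrinks the estimator's variance for monotone integrands:

```python
import math
import random
import statistics

def mc_plain(f, n, rng):
    """Plain Monte Carlo estimate of the integral of f over [0, 1]."""
    return sum(f(rng.random()) for _ in range(n)) / n

def mc_antithetic(f, n, rng):
    """Antithetic-variates estimate: average f(u) with f(1 - u)."""
    total = 0.0
    pairs = n // 2
    for _ in range(pairs):
        u = rng.random()
        total += f(u) + f(1.0 - u)
    return total / (2 * pairs)

f = math.exp  # integral of e^x over [0, 1] is e - 1

# Repeat each estimator over many independent seeds to compare variances
plain = [mc_plain(f, 1000, random.Random(i)) for i in range(100)]
anti = [mc_antithetic(f, 1000, random.Random(i)) for i in range(100)]
var_plain = statistics.pvariance(plain)
var_anti = statistics.pvariance(anti)
```

For e^x the variance reduction is dramatic (well over an order of magnitude), at no extra cost in function evaluations.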
Neelly, Kurt R; Terry, Joseph G; Morris, Martin J
2010-01-01
A relatively new and scarcely researched technique to increase strength is the use of supplemental heavy chain resistance (SHCR) in conjunction with plate weights to provide variable resistance to free weight exercises. The purpose of this case study was to determine the actual resistance being provided by a double-looped versus a linear hung SHCR to the back squat exercise. The linear technique simply hangs the chain directly from the bar, whereas the double-looped technique uses a smaller chain to adjust the height of the looped chain. In both techniques, as the squat descends, chain weight is unloaded onto the floor, and as the squat ascends, chain weight is progressively loaded back as resistance. One experienced and trained male weight lifter (age = 33 yr; height = 1.83 m; weight = 111.4 kg) served as the subject. Plate weight was set at 84.1 kg, approximately 50% of the subject's 1 repetition maximum. The SHCR was affixed to load cells, sampling at a frequency of 500 Hz, which were affixed to the Olympic bar. Data were collected as the subject completed the back squat under the following conditions: double-looped 1 chain (9.6 kg), double-looped 2 chains (19.2 kg), linear 1 chain, and linear 2 chains. The double-looped SHCR resulted in a 78-89% unloading of the chain weight at the bottom of the squat, whereas the linear hanging SHCR resulted in only a 36-42% unloading. The double-looped technique provided nearly 2 times the variable resistance at the top of the squat compared with the linear hanging technique, showing that attention must be given to the technique used to hang SHCR.
Post-processing through linear regression
NASA Astrophysics Data System (ADS)
van Schaeybroeck, B.; Vannitsem, S.
2011-03-01
Various post-processing techniques are compared for both deterministic and ensemble forecasts, all based on linear regression between forecast data and observations. In order to evaluate the quality of the regression methods, three criteria are proposed, related to the effective correction of forecast error, the optimal variability of the corrected forecast and multicollinearity. The regression schemes under consideration include the ordinary least-square (OLS) method, a new time-dependent Tikhonov regularization (TDTR) method, the total least-square method, a new geometric-mean regression (GM), a recently introduced error-in-variables (EVMOS) method and, finally, a "best member" OLS method. The advantages and drawbacks of each method are clarified. These techniques are applied in the context of the Lorenz (1963) system, whose model version is affected by both initial condition and model errors. For short forecast lead times, the number and choice of predictors plays an important role. Contrary to the other techniques, GM degrades when the number of predictors increases. At intermediate lead times, linear regression is unable to provide corrections to the forecast and can sometimes degrade the performance (GM and the best member OLS with noise). At long lead times the regression schemes (EVMOS, TDTR) which yield the correct variability and the largest correlation between ensemble error and spread should be preferred.
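A minimal sketch of the simplest scheme in the comparison, OLS post-processing of a biased forecast. The synthetic "truth" and forecast below are illustrative, not the Lorenz experiments:

```python
import numpy as np

def ols_postprocess(forecasts, observations):
    """Fit obs ≈ a*fcst + b by ordinary least squares; return a correction
    function mapping raw forecasts to calibrated ones."""
    a, b = np.polyfit(forecasts, observations, 1)
    return lambda x: a * x + b

rng = np.random.default_rng(1)
truth = rng.normal(20.0, 5.0, 300)
# A forecast with a systematic bias and a scale error
fcst = 0.8 * truth + 3.0 + rng.normal(0.0, 1.0, 300)

correct = ols_postprocess(fcst, truth)
raw_err = float(np.sqrt(np.mean((fcst - truth) ** 2)))
cor_err = float(np.sqrt(np.mean((correct(fcst) - truth) ** 2)))
```

The regression removes both the additive bias and the scale error, so the corrected RMSE falls below the raw forecast's RMSE; the paper's more elaborate schemes (TDTR, EVMOS) address shortcomings of this baseline, such as under-dispersion of the corrected forecast.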
Derivative information recovery by a selective integration technique
NASA Technical Reports Server (NTRS)
Johnson, M. A.
1974-01-01
A nonlinear stationary homogeneous digital filter DIRSIT (derivative information recovery by a selective integration technique) is investigated. The spectrum of a quasi-linear discrete describing function (DDF) to DIRSIT is obtained by a digital measuring scheme. A finite impulse response (FIR) approximation to the quasi-linearization is then obtained. Finally, DIRSIT is compared with its quasi-linear approximation and with a standard digital differentiating technique. Results indicate the effects of DIRSIT on a wide variety of practical signals.
Linearization of digital derived rate algorithm for use in linear stability analysis
NASA Technical Reports Server (NTRS)
Graham, R. E.; Porada, T. W.
1985-01-01
The digital derived rate (DDR) algorithm is used to calculate the rate of rotation of the Centaur upper-stage rocket. The DDR is a highly nonlinear algorithm, and classical linear stability analysis of the spacecraft cannot be performed without linearization. The performance of this rate algorithm is characterized by gain and phase curves that drop off at the same frequency, a characteristic desirable for many applications. A linearization technique for the DDR algorithm is investigated and described. Examples of the results of the linearization technique are illustrated, and the effects of linearization are discussed. A linear digital filter may be used as a substitute when performing classical linear stability analyses, while the DDR itself may be used in time response analysis.
Biostatistics Series Module 10: Brief Overview of Multivariate Methods.
Hazra, Avijit; Gogtay, Nithya
2017-01-01
Multivariate analysis refers to statistical techniques that simultaneously look at three or more variables in relation to the subjects under investigation, with the aim of identifying or clarifying the relationships between them. These techniques have been broadly classified as dependence techniques, which explore the relationship between one or more dependent variables and their independent predictors, and interdependence techniques, which make no such distinction but treat all variables equally in a search for underlying relationships. Multiple linear regression models a situation where a single numerical dependent variable is to be predicted from multiple numerical independent variables. Logistic regression is used when the outcome variable is dichotomous in nature. The log-linear technique models count-type data and can be used to analyze cross-tabulations in which more than two variables are included. Analysis of covariance is an extension of analysis of variance (ANOVA), in which an additional independent variable of interest, the covariate, is brought into the analysis. It tries to examine whether a difference persists after "controlling" for the effect of the covariate that can impact the numerical dependent variable of interest. Multivariate analysis of variance (MANOVA) is a multivariate extension of ANOVA used when multiple numerical dependent variables have to be incorporated in the analysis. Interdependence techniques are more commonly applied to psychometrics, social sciences and market research. Exploratory factor analysis and principal component analysis are related techniques that seek to extract, from a larger number of metric variables, a smaller number of composite factors or components that are linearly related to the original variables. Cluster analysis aims to identify, in a large number of cases, relatively homogeneous groups called clusters, without prior information about the groups.
The calculation intensive nature of multivariate analysis has so far precluded most researchers from using these techniques routinely. The situation is now changing with wider availability, and increasing sophistication of statistical software and researchers should no longer shy away from exploring the applications of multivariate methods to real-life data sets.
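Multiple linear regression, the first dependence technique discussed above, reduces to solving a least-squares problem over a design matrix with an intercept column. A minimal sketch on synthetic data (the coefficients 1, 2, -3 are arbitrary illustration values):

```python
import numpy as np

def multiple_linear_regression(X, y):
    """Ordinary least-squares fit of y = b0 + b1*x1 + ... + bk*xk.

    Uses lstsq rather than explicitly inverting X'X, for numerical
    stability. Returns [b0, b1, ..., bk].
    """
    Xd = np.column_stack([np.ones(len(X)), X])  # design matrix with intercept
    coef, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    return coef

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 2))
y = 1.0 + 2.0 * X[:, 0] - 3.0 * X[:, 1] + rng.normal(0, 0.1, 200)
b = multiple_linear_regression(X, y)
```

With 200 observations and small noise, the estimated coefficients land very close to the generating values, which is the textbook behavior statistical packages automate.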
Transverse mucoperiosteal flap inset by rotation for cleft palate repair: technique and outcomes.
Black, Jonathan S; Gampper, Thomas J
2014-01-01
Cleft palate is a relatively common deformity with various techniques described for its repair. Most techniques address the hard palate portion of the cleft with bilateral mucoperiosteal flaps transposed to the midline. This results in superimposed, linear closure layers directly over the cleft and may predispose the repair to oronasal fistula formation. This report details an alternative technique of flap rotation with an outcome analysis. A retrospective chart analysis was performed of all patients having undergone primary palatoplasty for cleft palate. Demographics and cleft Veau type were recorded. Postoperative speech outcomes were assessed by standardized speech evaluation performed by 2 speech language pathologists. The presence and location of oronasal fistulae was assessed and recorded by the surgeon and speech language pathologists in follow-up evaluations. The study revealed an overall incidence of velopharyngeal insufficiency of 5.7% using this surgical technique. It also revealed a fistula rate of 8.6%. Secondary surgery has been successful in those patients in which it was indicated. Eleven (31%) patients were diagnosed with Robin sequence. This technique demonstrates excellent early outcomes in a difficult subset of cleft patients including a high proportion of those with Pierre Robin sequence. The technique addresses the inherent disadvantages to a linear closure over the bony cleft. The variability in its design provides the surgeon another option for correction of this deformity.
Balabin, Roman M; Smirnov, Sergey V
2011-04-29
During the past several years, near-infrared (near-IR/NIR) spectroscopy has increasingly been adopted as an analytical tool in various fields, from the petroleum to the biomedical sector. The NIR spectrum (above 4000 cm(-1)) of a sample is typically measured by modern instruments at a few hundred wavelengths. Recently, considerable effort has been directed towards developing procedures to identify variables (wavelengths) that contribute useful information. Variable selection (VS) or feature selection, also called frequency selection or wavelength selection, is a critical step in data analysis for vibrational spectroscopy (infrared, Raman, or NIRS). In this paper, we compare the performance of 16 different feature selection methods for the prediction of properties of biodiesel fuel, including density, viscosity, methanol content, and water concentration. The feature selection algorithms tested include stepwise multiple linear regression (MLR-step), interval partial least squares regression (iPLS), backward iPLS (BiPLS), forward iPLS (FiPLS), moving window partial least squares regression (MWPLS), (modified) changeable size moving window partial least squares (CSMWPLS/MCSMWPLSR), searching combination moving window partial least squares (SCMWPLS), successive projections algorithm (SPA), uninformative variable elimination (UVE, including UVE-SPA), simulated annealing (SA), back-propagation artificial neural networks (BP-ANN), Kohonen artificial neural network (K-ANN), and genetic algorithms (GAs, including GA-iPLS). Two linear techniques for calibration model building, namely multiple linear regression (MLR) and partial least squares regression/projection to latent structures (PLS/PLSR), are used for the evaluation of biofuel properties. A comparison with a non-linear calibration model, artificial neural networks (ANN-MLP), is also provided. Discussion of gasoline, ethanol-gasoline (bioethanol), and diesel fuel data is presented.
The results of other spectroscopic techniques application, such as Raman, ultraviolet-visible (UV-vis), or nuclear magnetic resonance (NMR) spectroscopies, can be greatly improved by an appropriate feature selection choice. Copyright © 2011 Elsevier B.V. All rights reserved.
Using crosscorrelation techniques to determine the impulse response of linear systems
NASA Technical Reports Server (NTRS)
Dallabetta, Michael J.; Li, Harry W.; Demuth, Howard B.
1993-01-01
A crosscorrelation method of measuring the impulse response of linear systems is presented. The technique, implementation, and limitations of this method are discussed. A simple system is designed and built using discrete components and the impulse response of a linear circuit is measured. Theoretical and software simulation results are presented.
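The identity this method relies on — for a white-noise input, the input-output crosscorrelation satisfies Rxy(k) = σ²·h(k) — can be checked numerically. The short FIR impulse response below is an illustrative stand-in for the paper's hardware system:

```python
import numpy as np

rng = np.random.default_rng(3)
# "Unknown" system: a short FIR impulse response (illustrative values)
h = np.array([1.0, 0.5, 0.25, 0.125])

x = rng.normal(0.0, 1.0, 200_000)   # white-noise probe input
y = np.convolve(x, h)[:len(x)]      # causal system output

# Cross-correlate input with output: Rxy[k] ≈ sigma_x^2 * h[k]
lags = len(h)
Rxy = np.array([np.mean(x[:len(x) - k] * y[k:]) for k in range(lags)])
h_est = Rxy / np.var(x)
```

With 200,000 samples the estimated taps match the true response to a few thousandths, illustrating why a long pseudo-random probe sequence lets the impulse response be measured without applying an actual impulse.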
Li, Zukui; Ding, Ran; Floudas, Christodoulos A.
2011-01-01
Robust counterpart optimization techniques for linear optimization and mixed integer linear optimization problems are studied in this paper. Different uncertainty sets, including those studied in literature (i.e., interval set; combined interval and ellipsoidal set; combined interval and polyhedral set) and new ones (i.e., adjustable box; pure ellipsoidal; pure polyhedral; combined interval, ellipsoidal, and polyhedral set) are studied in this work and their geometric relationship is discussed. For uncertainty in the left hand side, right hand side, and objective function of the optimization problems, robust counterpart optimization formulations induced by those different uncertainty sets are derived. Numerical studies are performed to compare the solutions of the robust counterpart optimization models and applications in refinery production planning and batch process scheduling problem are presented. PMID:21935263
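For the simplest case in the paper's taxonomy, interval (box) uncertainty in the left-hand-side coefficients, the robust counterpart with x ≥ 0 simply tightens each uncertain coefficient to its worst case. A toy sketch with hypothetical numbers, not one of the paper's refinery or scheduling instances:

```python
from scipy.optimize import linprog

# Nominal problem: maximize x1 + x2  subject to  2*x1 + 3*x2 <= 12,  x >= 0.
# linprog minimizes, so the objective is negated.
nominal = linprog(c=[-1, -1], A_ub=[[2, 3]], b_ub=[12],
                  bounds=[(0, None)] * 2)

# Interval uncertainty: each constraint coefficient may deviate by +/- 0.5.
# For x >= 0 the worst case is the upper end of each interval, so the
# robust counterpart becomes  2.5*x1 + 3.5*x2 <= 12.
robust = linprog(c=[-1, -1], A_ub=[[2.5, 3.5]], b_ub=[12],
                 bounds=[(0, None)] * 2)

nominal_obj = -nominal.fun   # optimal value of the nominal problem
robust_obj = -robust.fun     # (smaller) optimal value of the robust problem
```

The robust optimum is necessarily no better than the nominal one; this gap is the "price of robustness" that the different uncertainty sets in the paper trade off in different ways.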
The Performance of A Sampled Data Delay Lock Loop Implemented with a Kalman Loop Filter.
1980-01-01
…technique for analysis is computer simulation. Other techniques include state variable techniques and z-transform methods. Since the Kalman filter is linear… The filtered estimate is formed from a sum of two components: the first component is the previous filtered estimate advanced one step forward by the state transition matrix. (Figure 2: Block diagram of the sampled data delay lock loop (SDDLL). Figure 3: Sampled error voltage (Es) as a function of …)
Broadband linearisation of high-efficiency power amplifiers
NASA Technical Reports Server (NTRS)
Kenington, Peter B.; Parsons, Kieran J.; Bennett, David W.
1993-01-01
A feedforward-based amplifier linearization technique is presented which is capable of yielding significant improvements in both linearity and power efficiency over conventional amplifier classes (e.g. class-A or class-AB). Theoretical and practical results are presented showing that class-C stages may be used for both the main and error amplifiers, yielding practical efficiencies well in excess of 30 percent, with theoretical efficiencies of much greater than 40 percent being possible. The levels of linearity which may be achieved meet the requirements of most satellite systems; however, if greater linearity is required, the technique may be combined with conventional pre-distortion techniques.
Advanced statistical methods for improved data analysis of NASA astrophysics missions
NASA Technical Reports Server (NTRS)
Feigelson, Eric D.
1992-01-01
The investigators under this grant studied ways to improve the statistical analysis of astronomical data. They looked at existing techniques, the development of new techniques, and the production and distribution of specialized software to the astronomical community. Abstracts of nine papers that were produced are included, as well as brief descriptions of four software packages. The articles that are abstracted discuss analytical and Monte Carlo comparisons of six different linear least squares fits, a (second) paper on linear regression in astronomy, two reviews of public domain software for the astronomer, subsample and half-sample methods for estimating sampling distributions, a nonparametric estimation of survival functions under dependent competing risks, censoring in astronomical data due to nondetections, an astronomy survival analysis computer package called ASURV, and improving the statistical methodology of astronomical data analysis.
NASA standard: Trend analysis techniques
NASA Technical Reports Server (NTRS)
1988-01-01
This Standard presents descriptive and analytical techniques for NASA trend analysis applications. Trend analysis is applicable in all organizational elements of NASA connected with, or supporting, developmental/operational programs. Use of this Standard is not mandatory; however, it should be consulted for any data analysis activity requiring the identification or interpretation of trends. Trend Analysis is neither a precise term nor a circumscribed methodology, but rather connotes, generally, quantitative analysis of time-series data. For NASA activities, the appropriate and applicable techniques include descriptive and graphical statistics, and the fitting or modeling of data by linear, quadratic, and exponential models. Usually, but not always, the data is time-series in nature. Concepts such as autocorrelation and techniques such as Box-Jenkins time-series analysis would only rarely apply and are not included in this Standard. The document presents the basic ideas needed for qualitative and quantitative assessment of trends, together with relevant examples. A list of references provides additional sources of information.
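The model classes the Standard names (linear, quadratic, exponential) can all be fit by ordinary least squares, the exponential case after a log transform. A brief sketch on synthetic time-series data (all values assumed for illustration, not from the Standard):

```python
import numpy as np

# Synthetic quarterly trend data: exponential growth plus noise
t = np.arange(12, dtype=float)
y = 5.0 * np.exp(0.18 * t) + np.random.default_rng(0).normal(0.0, 0.3, 12)

lin = np.polyfit(t, y, 1)            # linear trend model
quad = np.polyfit(t, y, 2)           # quadratic trend model
expo = np.polyfit(t, np.log(y), 1)   # exponential: log(y) = log(a) + b*t

# residual sum of squares for each fitted model
def rss(pred):
    return float(np.sum((y - pred) ** 2))

print("linear RSS     :", rss(np.polyval(lin, t)))
print("quadratic RSS  :", rss(np.polyval(quad, t)))
print("exponential RSS:", rss(np.exp(np.polyval(expo, t))))
```

Comparing residual sums of squares across the three candidate models is the kind of descriptive assessment the Standard calls for before reading a trend into the data.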
Real, J; Cleries, R; Forné, C; Roso-Llorach, A; Martínez-Sánchez, J M
In medicine and biomedical research, statistical techniques such as logistic, linear, Cox and Poisson regression are widely known. The main objective is to describe the evolution of multivariate techniques used in observational studies indexed in PubMed (1970-2013), and to check the requirements of the STROBE guidelines in the author guidelines of Spanish journals indexed in PubMed. A targeted PubMed search was performed to identify papers that used logistic, linear, Cox and Poisson models. Furthermore, a review was also made of the author guidelines of journals published in Spain and indexed in PubMed and Web of Science. Only 6.1% of the indexed manuscripts included a term related to multivariate analysis, increasing from 0.14% in 1980 to 12.3% in 2013. In 2013, 6.7%, 2.5%, 3.5%, and 0.31% of the manuscripts contained terms related to logistic, linear, Cox and Poisson regression, respectively. On the other hand, 12.8% of journals' author guidelines explicitly recommend following the STROBE guidelines, and 35.9% recommend the CONSORT guidelines. A low percentage of Spanish scientific journals indexed in PubMed include the STROBE statement requirement in their author guidelines. Multivariate regression models such as logistic, linear, Cox and Poisson regression are increasingly used in published observational studies, both internationally and in journals published in Spain. Copyright © 2015 Sociedad Española de Médicos de Atención Primaria (SEMERGEN). Publicado por Elsevier España, S.L.U. All rights reserved.
Li, Zhaoying; Zhou, Wenjie; Liu, Hao
2016-09-01
This paper addresses the nonlinear robust tracking controller design problem for hypersonic vehicles. This problem is challenging due to strong coupling between the aerodynamics and the propulsion system, and the uncertainties involved in the vehicle dynamics including parametric uncertainties, unmodeled model uncertainties, and external disturbances. By utilizing the feedback linearization technique, a linear tracking error system is established with prescribed references. For the linear model, a robust controller is proposed based on the signal compensation theory to guarantee that the tracking error dynamics is robustly stable. Numerical simulation results are given to show the advantages of the proposed nonlinear robust control method, compared to the robust loop-shaping control approach. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Hwang, Sunghwan
1997-08-01
One of the most prominent features of helicopter rotor dynamics in forward flight is the periodic coefficients in the equations of motion introduced by the rotor rotation. The frequency response characteristics of such a linear time periodic system exhibits sideband behavior, which is not the case for linear time invariant systems. Therefore, a frequency domain identification methodology for linear systems with time periodic coefficients was developed, because the linear time invariant theory cannot account for sideband behavior. The modulated complex Fourier series was introduced to eliminate the smearing effect of Fourier series expansions of exponentially modulated periodic signals. A system identification theory was then developed using modulated complex Fourier series expansion. Correlation and spectral density functions were derived using the modulated complex Fourier series expansion for linear time periodic systems. Expressions of the identified harmonic transfer function were then formulated using the spectral density functions both with and without additive noise processes at input and/or output. A procedure was developed to identify parameters of a model to match the frequency response characteristics between measured and estimated harmonic transfer functions by minimizing an objective function defined in terms of the trace of the squared frequency response error matrix. Feasibility was demonstrated by the identification of the harmonic transfer function and parameters for helicopter rigid blade flapping dynamics in forward flight. This technique is envisioned to satisfy the needs of system identification in the rotating frame, especially in the context of individual blade control. The technique was applied to the coupled flap-lag-inflow dynamics of a rigid blade excited by an active pitch link. The linear time periodic technique results were compared with the linear time invariant technique results. 
Also, the effects of noise processes and of the initial parameter guess on the identification procedure were investigated. To study the effect of elastic modes, a rigid blade with a trailing edge flap excited by a smart actuator was selected, and system parameters were successfully identified, although at some expense of computational storage and time. In conclusion, the linear time periodic technique substantially improved the accuracy of the identified parameters compared to the linear time invariant technique, and it was robust to noise and to the initial parameter guess. However, an elastic mode with a frequency high relative to the system pumping frequency tends to increase the computer storage requirement and computing time.
Autonomous Non-Linear Classification of LPI Radar Signal Modulations
2007-09-01
Four detection techniques are considered in this work, including the Wigner-Ville distribution (WVD), the Choi-Williams distribution (CWD), and a Quadrature Mirror... Classification of polyphase modulations is accomplished using the images from the Wigner-Ville distribution and the Choi-Williams distribution. For the WVD images, radon...
Uranium Detection - Technique Validation Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Colletti, Lisa Michelle; Garduno, Katherine; Lujan, Elmer J.
As a LANL activity for DOE/NNSA in support of SHINE Medical Technologies™ 'Accelerator Technology' we have been investigating the application of UV-vis spectroscopy for uranium analysis in solution. While the technique has been developed specifically for sulfate solutions, the proposed SHINE target solutions, it can be adapted to a range of different solution matrices. The FY15 work scope incorporated technical development that would improve accuracy, specificity, linearity & range, precision & ruggedness, and comparative analysis. Significant progress was achieved throughout FY15 addressing these technical challenges, as is summarized in this report. In addition, comparative analysis of unknown samples using the Davies-Gray titration technique highlighted the importance of controlling temperature during analysis (impacting both technique accuracy and linearity/range). To fully understand the impact of temperature, additional experimentation and data analyses were performed during FY16. The results from this FY15/FY16 work were presented in a detailed presentation, LA-UR-16-21310, and an update of this presentation is included with this short report summarizing the key findings. The technique is based on analysis of the most intense U(VI) absorbance band in the visible region of the uranium spectrum in 1 M H2SO4, at λmax = 419.5 nm.
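The quantification step behind such a UV-vis method is a Beer-Lambert linear calibration: absorbance at the analytical wavelength versus standard concentration, inverted for unknowns. A sketch with invented standards (the concentrations and absorbances below are assumed, not from the report):

```python
import numpy as np

# Beer-Lambert calibration sketch: absorbance at 419.5 nm should be linear
# in uranium concentration over the working range (all values assumed).
conc = np.array([0.0, 10.0, 20.0, 40.0, 80.0])           # standards, g U / L
absorb = np.array([0.001, 0.083, 0.165, 0.331, 0.662])   # measured absorbance

slope, intercept = np.polyfit(conc, absorb, 1)           # linear calibration

# Quantify an unknown sample from its measured absorbance
A_unknown = 0.248
est = (A_unknown - intercept) / slope
print("estimated concentration (g U / L):", est)
```

The report's linearity/range and temperature-control findings amount to verifying that this slope stays constant over the concentration range and measurement conditions of interest.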
Solid oxide fuel cell simulation and design optimization with numerical adjoint techniques
NASA Astrophysics Data System (ADS)
Elliott, Louie C.
This dissertation reports on the application of numerical optimization techniques as applied to fuel cell simulation and design. Due to the "multi-physics" inherent in a fuel cell, which results in a highly coupled and non-linear behavior, an experimental program to analyze and improve the performance of fuel cells is extremely difficult. This program applies new optimization techniques with computational methods from the field of aerospace engineering to the fuel cell design problem. After an overview of fuel cell history, importance, and classification, a mathematical model of solid oxide fuel cells (SOFC) is presented. The governing equations are discretized and solved with computational fluid dynamics (CFD) techniques including unstructured meshes, non-linear solution methods, numerical derivatives with complex variables, and sensitivity analysis with adjoint methods. Following the validation of the fuel cell model in 2-D and 3-D, the results of the sensitivity analysis are presented. The sensitivity derivative for a cost function with respect to a design variable is found with three increasingly sophisticated techniques: finite difference, direct differentiation, and adjoint. A design cycle is performed using a simple optimization method to improve the value of the implemented cost function. The results from this program could improve fuel cell performance and lessen the world's dependence on fossil fuels.
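Of the three sensitivity routes mentioned, the numerical derivative with complex variables (the complex-step method) is the easiest to demonstrate: unlike finite differences it has no subtractive cancellation, so the step size can be made arbitrarily small. A sketch on an illustrative scalar cost function (the Squire-Trapp test function, not the fuel-cell model):

```python
import numpy as np

def J(p):
    # illustrative scalar cost function (assumed), not the SOFC cost function
    return np.exp(p) / np.sqrt(np.sin(p) ** 3 + np.cos(p) ** 3)

p0 = 1.5

# central finite difference: accuracy limited by subtractive cancellation
h = 1e-6
fd = (J(p0 + h) - J(p0 - h)) / (2 * h)

# complex step: dJ/dp = Im(J(p + i*h)) / h, no cancellation, h can be tiny
h = 1e-200
cs = J(p0 + 1j * h).imag / h

print("finite difference:", fd)
print("complex step     :", cs)
```

The adjoint method goes further: it delivers the gradient with respect to all design variables at roughly the cost of one extra solve, which is why it is the end point of the dissertation's progression.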
Space Shuttle propulsion parameter estimation using optimal estimation techniques, volume 1
NASA Technical Reports Server (NTRS)
1983-01-01
The mathematical developments and their computer program implementation for the Space Shuttle propulsion parameter estimation project are summarized. The estimation approach chosen is extended Kalman filtering with a modified Bryson-Frazier smoother. Its use here is motivated by the objective of obtaining better estimates than those available from filtering and to eliminate the lag associated with filtering. The estimation technique uses as the dynamical process the six-degree-of-freedom equations of motion, resulting in twelve state vector elements. In addition to these are mass and solid propellant burn depth as "system" state elements. The "parameter" state elements can include aerodynamic coefficient, inertia, center-of-gravity, atmospheric wind, and other deviations from referenced values. Propulsion parameter state elements have been included not as the options just discussed but as the main parameter states to be estimated. The mathematical developments were completed for all these parameters. Since the system dynamics and measurement processes are non-linear functions of the states, the mathematical developments are taken up almost entirely by the linearization of these equations as required by the estimation algorithms.
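The extended Kalman filter at the heart of the approach linearizes the nonlinear measurement (or dynamics) about the current estimate at each step. A deliberately minimal scalar sketch (the Shuttle estimator carries twelve-plus states and a smoother; all values here are assumed):

```python
import numpy as np

# Minimal scalar EKF: constant hidden state x, nonlinear measurement z = x^2 + v.
rng = np.random.default_rng(1)
x_true = 3.0
Q, R = 1e-6, 0.1          # process / measurement noise variances (assumed)
x_hat, P = 1.0, 10.0      # initial guess and its covariance

for _ in range(200):
    z = x_true ** 2 + rng.normal(0.0, np.sqrt(R))   # noisy nonlinear measurement
    P += Q                                          # predict (identity dynamics)
    H = 2.0 * x_hat                                 # Jacobian of h(x) = x^2 at the estimate
    K = P * H / (H * P * H + R)                     # Kalman gain
    x_hat += K * (z - x_hat ** 2)                   # update with nonlinear innovation
    P *= 1.0 - K * H                                # covariance update

print("EKF estimate:", x_hat)  # converges near 3.0
```

The relinearization of H about each new estimate is exactly the step that, scaled up to the full state vector, consumes most of the mathematical development described above; the Bryson-Frazier smoother then runs backward over the filtered record to remove the filter's lag.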
NASA Astrophysics Data System (ADS)
Yan, Yong; Cui, Xiwang; Guo, Miao; Han, Xiaojuan
2016-11-01
Seal capacity is of great importance for the safety operation of pressurized vessels. It is crucial to locate the leak hole timely and accurately for reasons of safety and maintenance. This paper presents the principle and application of a linear acoustic emission sensor array and a near-field beamforming technique to identify the location of a continuous CO2 leak from an isotropic flat-surface structure on a pressurized vessel in the carbon capture and storage system. Acoustic signals generated by the leak hole are collected using a linear high-frequency sensor array. Time-frequency analysis and a narrow-band filtering technique are deployed to extract effective information about the leak. The impacts of various factors on the performance of the localization technique are simulated, compared and discussed, including the number of sensors, distance between the leak hole and sensor array and spacing between adjacent sensors. Experiments were carried out on a laboratory-scale test rig to assess the effectiveness and operability of the proposed method. The results obtained suggest that the proposed method is capable of providing accurate and reliable localization of a continuous CO2 leak.
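Near-field delay-and-sum beamforming, the localization principle used here, steers the array to each candidate point by removing that point's propagation delays and keeping the position whose summed output is most coherent. A small 2-D sketch with assumed geometry, wave speed, and source waveform (the actual system uses high-frequency acoustic emission sensors on a vessel surface):

```python
import numpy as np

c = 343.0                                   # assumed wave speed (m/s)
fs = 50_000.0                               # assumed sample rate (Hz)
sensors = np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.0], [0.3, 0.0]])
leak = np.array([0.17, 0.25])               # true source position (m), to recover

t = np.arange(0.0, 0.01, 1.0 / fs)
pulse = lambda tt: np.exp(-(((tt - 0.002) / 1e-4) ** 2))   # broadband burst

# synthesize what each sensor records, delayed by its distance to the leak
signals = [pulse(t - np.linalg.norm(leak - s) / c) for s in sensors]

# scan candidate positions: undo each candidate's delays and keep the point
# whose aligned (delay-and-sum) output carries the most power
best, best_power = None, -1.0
for x in np.linspace(0.0, 0.3, 31):
    for y in np.linspace(0.05, 0.45, 41):
        cand = np.array([x, y])
        summed = sum(np.interp(t + np.linalg.norm(cand - s) / c, t, sig)
                     for s, sig in zip(sensors, signals))
        power = float(np.sum(summed ** 2))
        if power > best_power:
            best, best_power = cand, power

print("estimated source position:", best)   # close to (0.17, 0.25)
```

In the paper's setting the leak is a continuous noise source rather than a clean burst, which is why narrow-band filtering and time-frequency analysis precede the beamforming step.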
A study of data analysis techniques for the multi-needle Langmuir probe
NASA Astrophysics Data System (ADS)
Hoang, H.; Røed, K.; Bekkeng, T. A.; Moen, J. I.; Spicher, A.; Clausen, L. B. N.; Miloch, W. J.; Trondsen, E.; Pedersen, A.
2018-06-01
In this paper we evaluate two data analysis techniques for the multi-needle Langmuir probe (m-NLP). The instrument uses several cylindrical Langmuir probes, which are positively biased with respect to the plasma potential in order to operate in the electron saturation region. Since the currents collected by these probes can be sampled at kilohertz rates, the instrument is capable of resolving the ionospheric plasma structure down to the meter scale. The two data analysis techniques, a linear fit and a non-linear least squares fit, are discussed in detail using data from the Investigation of Cusp Irregularities 2 sounding rocket. It is shown that each technique has pros and cons with respect to the m-NLP implementation. Even though the linear fitting technique compares favorably with measurements from incoherent scatter radar and in situ instruments, m-NLPs can be made longer and can be cleaned during operation to improve instrument performance. The non-linear least squares fitting technique would be more reliable provided that a higher number of probes are deployed.
Liu, Chia-Chuan; Shih, Chih-Shiun; Pennarun, Nicolas; Cheng, Chih-Tao
2016-01-01
The feasibility and radicality of lymph node dissection for lung cancer surgery by a single-port technique have frequently been challenged. We performed a retrospective cohort study to investigate this issue. Two chest surgeons initiated multiple-port thoracoscopic surgery in a 180-bed cancer centre in 2005 and shifted gradually to a single-port technique after 2010. Data, including demographic and clinical information, from 389 patients receiving multiport thoracoscopic lobectomy or segmentectomy and 149 consecutive patients undergoing either single-port lobectomy or segmentectomy for primary non-small-cell lung cancer were retrieved and entered for statistical analysis by multivariable linear regression models and Box-Cox transformed multivariable analysis. The mean number of total dissected lymph nodes in the lobectomy group was 28.5 ± 11.7 for the single-port group versus 25.2 ± 11.3 for the multiport group; the mean number in the segmentectomy group was 19.5 ± 10.8 for the single-port group versus 17.9 ± 10.3 for the multiport group. In both the linear multivariable analysis and the Box-Cox transformed multivariable analysis, the single-port approach remained associated with a higher total number of dissected lymph nodes. The total number of dissected lymph nodes for primary lung cancer surgery by single-port video-assisted thoracoscopic surgery (VATS) was higher than by multiport VATS in univariable, multivariable linear regression and Box-Cox transformed multivariable analyses. This study confirmed that highly effective lymph node dissection could be achieved through single-port VATS in our setting. © The Author 2015. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
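The Box-Cox step in such an analysis transforms a skewed count outcome toward normality before the linear model is fit. A sketch on synthetic node counts (the distributions, group sizes, and covariate structure are assumed, not the study's data), using `scipy.stats.boxcox`:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
single_port = rng.gamma(shape=8.0, scale=3.5, size=150)   # synthetic node counts
multi_port = rng.gamma(shape=8.0, scale=3.1, size=380)

counts = np.concatenate([single_port, multi_port])
group = np.r_[np.ones(150), np.zeros(380)]                # 1 = single-port

transformed, lam = stats.boxcox(counts)                   # ML-optimal lambda

X = np.c_[np.ones_like(group), group]                     # intercept + group
beta, *_ = np.linalg.lstsq(X, transformed, rcond=None)
print("Box-Cox lambda:", lam)
print("group effect on the transformed scale:", beta[1])
```

Because the transform is monotone, a positive group coefficient on the transformed scale still reads as "single-port associated with higher counts", which mirrors the direction of the study's finding.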
A linear model fails to predict orientation selectivity of cells in the cat visual cortex.
Volgushev, M; Vidyasagar, T R; Pei, X
1996-01-01
1. Postsynaptic potentials (PSPs) evoked by visual stimulation in simple cells in the cat visual cortex were recorded using in vivo whole-cell technique. Responses to small spots of light presented at different positions over the receptive field and responses to elongated bars of different orientations centred on the receptive field were recorded. 2. To test whether a linear model can account for orientation selectivity of cortical neurones, responses to elongated bars were compared with responses predicted by a linear model from the receptive field map obtained from flashing spots. 3. The linear model faithfully predicted the preferred orientation, but not the degree of orientation selectivity or the sharpness of orientation tuning. The ratio of optimal to non-optimal responses was always underestimated by the model. 4. Thus non-linear mechanisms, which can include suppression of non-optimal responses and/or amplification of optimal responses, are involved in the generation of orientation selectivity in the primary visual cortex. PMID:8930828
Dierker, Lisa; Rose, Jennifer; Tan, Xianming; Li, Runze
2010-12-01
This paper describes and compares a selection of available modeling techniques for identifying homogeneous population subgroups in the interest of informing targeted substance use intervention. We present a nontechnical review of the common and unique features of three methods: (a) trajectory analysis, (b) functional hierarchical linear modeling (FHLM), and (c) decision tree methods. Differences among the techniques are described, including required data features, strengths and limitations in terms of the flexibility with which outcomes and predictors can be modeled, and the potential of each technique for helping to inform the selection of targets and timing of substance intervention programs.
NASA Technical Reports Server (NTRS)
Mcknight, R. L.
1985-01-01
A series of interdisciplinary modeling and analysis techniques that were specialized to address three specific hot section components is presented. These techniques will incorporate data as well as theoretical methods from many diverse areas including cycle and performance analysis, heat transfer analysis, linear and nonlinear stress analysis, and mission analysis. Building on the proven techniques already available in these fields, the new methods developed will be integrated into computer codes to provide an accurate and unified approach to analyzing combustor burner liners, hollow air cooled turbine blades, and air cooled turbine vanes. For these components, the methods developed will predict temperature, deformation, stress and strain histories throughout a complete flight mission.
Three-dimensional radar imaging techniques and systems for near-field applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheen, David M.; Hall, Thomas E.; McMakin, Douglas L.
2016-05-12
The Pacific Northwest National Laboratory has developed three-dimensional holographic (synthetic aperture) radar imaging techniques and systems for a wide variety of near-field applications. These applications include radar cross-section (RCS) imaging, personnel screening, standoff concealed weapon detection, concealed threat detection, through-barrier imaging, ground penetrating radar (GPR), and non-destructive evaluation (NDE). Sequentially-switched linear arrays are used for many of these systems to enable high-speed data acquisition and 3-D imaging. In this paper, the techniques and systems will be described along with imaging results that demonstrate the utility of near-field 3-D radar imaging for these compelling applications.
LFSPMC: Linear feature selection program using the probability of misclassification
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr.; Marion, B. P.
1975-01-01
The computational procedure and associated computer program for a linear feature selection technique are presented. The technique assumes that: a finite number, m, of classes exists; each class is described by an n-dimensional multivariate normal density function of its measurement vectors; the mean vector and covariance matrix for each density function are known (or can be estimated); and the a priori probability for each class is known. The technique produces a single linear combination of the original measurements which minimizes the one-dimensional probability of misclassification defined by the transformed densities.
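In the special case of two classes with a shared covariance matrix, the optimal single linear combination has a closed form, Fisher's discriminant direction, and the one-dimensional misclassification probability follows from the normal CDF; LFSPMC treats the general m-class, unequal-covariance case numerically. A sketch with assumed class parameters:

```python
import numpy as np
from math import erf, sqrt

mu0, mu1 = np.array([0.0, 0.0]), np.array([2.0, 1.0])   # class means (assumed)
S = np.array([[1.0, 0.3], [0.3, 0.5]])                  # shared covariance (assumed)

w = np.linalg.solve(S, mu1 - mu0)   # Fisher direction: the optimal projection

# After projecting onto w, both classes are 1-D normals separated by the
# Mahalanobis distance Delta, and the equal-priors error rate is Phi(-Delta/2).
delta = sqrt((mu1 - mu0) @ w)
p_err = 0.5 * (1.0 - erf(delta / (2.0 * sqrt(2.0))))
print("projection direction:", w)
print("1-D misclassification probability:", p_err)
```

When the class covariances differ, no such closed form exists and the transformed densities must be evaluated numerically, which is the situation the program is built for.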
Robust lane detection and tracking using multiple visual cues under stochastic lane shape conditions
NASA Astrophysics Data System (ADS)
Huang, Zhi; Fan, Baozheng; Song, Xiaolin
2018-03-01
As one of the essential components of environment perception for an intelligent vehicle, lane detection is confronted with challenges including robustness against complicated disturbances and illumination, as well as adaptability to stochastic lane shapes. To overcome these issues, we propose a robust lane detection method that applies a classification-generation-growth-based (CGG) operator to the detected lines, whereby the linear lane markings are identified by synergizing multiple visual cues with a priori knowledge and spatial-temporal information. According to the quality of the linear lane fitting, the linear and linear-parabolic models are dynamically switched to describe the actual lane. A Kalman filter with adaptive noise covariance and region of interest (ROI) tracking are applied to improve robustness and efficiency. Experiments were conducted on images covering various challenging scenarios, and the results demonstrate the effectiveness of the presented method under complicated disturbances, illumination variation, and stochastic lane shapes.
Arbitrary-Order Conservative and Consistent Remapping and a Theory of Linear Maps: Part II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ullrich, Paul A.; Devendran, Dharshi; Johansen, Hans
2016-04-01
The focus of this series of articles is on the generation of accurate, conservative, consistent, and (optionally) monotone linear offline maps. This paper is the second in the series. It extends the first part by describing four examples of 2D linear maps that can be constructed in accordance with the theory of the earlier work. The focus is again on spherical geometry, although these techniques can be readily extended to arbitrary manifolds. The four maps include conservative, consistent, and (optionally) monotone linear maps (i) between two finite-volume meshes, (ii) from finite-volume to finite-element meshes using a projection-type approach, (iii) from finite-volume to finite-element meshes using volumetric integration, and (iv) between two finite-element meshes. Arbitrary order of accuracy is supported for each of the described nonmonotone maps.
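The two defining properties of such maps are easy to state in matrix terms: consistency means each row of the map sums to one (constant fields are preserved), and conservation means the area-weighted column sums reproduce the source areas (integrals are preserved). A 1-D first-order finite-volume sketch (the paper works on spherical meshes and supports higher orders):

```python
import numpy as np

# First-order conservative remap in 1-D: map cell averages from a 2-cell
# source mesh to a 3-cell target mesh via exact overlap integration.
src_edges = np.array([0.0, 0.5, 1.0])
tgt_edges = np.array([0.0, 1 / 3, 2 / 3, 1.0])

ns, nt = len(src_edges) - 1, len(tgt_edges) - 1
M = np.zeros((nt, ns))
for i in range(nt):
    for j in range(ns):
        lo = max(tgt_edges[i], src_edges[j])
        hi = min(tgt_edges[i + 1], src_edges[j + 1])
        M[i, j] = max(hi - lo, 0.0) / (tgt_edges[i + 1] - tgt_edges[i])

src_areas = np.diff(src_edges)
tgt_areas = np.diff(tgt_edges)

# Consistency: constant fields are preserved (rows sum to 1).
print("row sums:", M.sum(axis=1))
# Conservation: the total integral is preserved for any source field.
u = np.array([2.0, 5.0])
print("source integral:", src_areas @ u, " target integral:", tgt_areas @ (M @ u))
```

Monotonicity adds the further requirement that all entries of M be nonnegative (satisfied by this first-order map); the paper's higher-order maps trade that property for accuracy unless monotone limiting is applied.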
NASA Astrophysics Data System (ADS)
Quan, Naicheng; Zhang, Chunmin; Mu, Tingkui; Li, Qiwei
2018-05-01
The principle and experimental demonstration of a method based on the channeled polarimetric technique (CPT) to measure spectrally resolved linear Stokes parameters (SRLS) is presented. By replacing the front retarder of the CPT with an achromatic quarter-wave plate, the linear SRLS can be measured simultaneously. The method also retains the static and compact advantages of the CPT. Moreover, compared with the CPT, it can reduce the RMS error by nearly a factor of 2-5 for the individual linear Stokes parameters.
New Optical Transforms For Statistical Image Recognition
NASA Astrophysics Data System (ADS)
Lee, Sing H.
1983-12-01
In optical implementation of statistical image recognition, new optical transforms on large images for real-time recognition are of special interest. Several important linear transformations frequently used in statistical pattern recognition have now been optically implemented, including the Karhunen-Loeve transform (KLT), the Fukunaga-Koontz transform (FKT) and the least-squares linear mapping technique (LSLMT). The KLT performs principal components analysis on one class of patterns for feature extraction. The FKT performs feature extraction for separating two classes of patterns. The LSLMT separates multiple classes of patterns by maximizing the interclass differences and minimizing the intraclass variations.
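Of the three transforms, the KLT is the most familiar numerically: it diagonalizes the class covariance, so the projected features are decorrelated with variances equal to the eigenvalues. A digital sketch with synthetic patterns (the paper implements the transform optically):

```python
import numpy as np

# KLT / principal components analysis on one class of synthetic 2-D patterns
rng = np.random.default_rng(7)
patterns = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0], [1.0, 0.5]])

centered = patterns - patterns.mean(axis=0)
cov = centered.T @ centered / (len(patterns) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order

# KLT features: coordinates along the eigenvectors, largest variance first
features = centered @ eigvecs[:, ::-1]
print("feature variances:", features.var(axis=0, ddof=1))  # sorted eigenvalues
```

The FKT plays the same game simultaneously for two class covariances (shared eigenvectors, complementary eigenvalues), which is what makes it a two-class feature extractor rather than a single-class one.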
Aerodynamic parameter estimation via Fourier modulating function techniques
NASA Technical Reports Server (NTRS)
Pearson, A. E.
1995-01-01
Parameter estimation algorithms are developed in the frequency domain for systems modeled by input/output ordinary differential equations. The approach is based on Shinbrot's method of moment functionals utilizing Fourier-based modulating functions. Assuming white measurement noise for linear multivariable system models, an adaptive weighted least squares algorithm is developed which approximates a maximum likelihood estimate and cannot be biased by unknown initial or boundary conditions in the data owing to a special property attending Shinbrot-type modulating functions. Application is made to perturbation equation modeling of the longitudinal and lateral dynamics of a high performance aircraft using flight-test data. Comparative studies are included which demonstrate potential advantages of the algorithm relative to some well-established techniques for parameter identification. Deterministic least squares extensions of the approach are made to the frequency transfer function identification problem for linear systems and to the parameter identification problem for a class of nonlinear, time-varying differential system models.
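The key trick of Shinbrot-type modulating functions is that multiplying the model by a function vanishing at both ends of the record and integrating by parts eliminates output derivatives and initial conditions. A sketch for an assumed first-order model ydot + a*y = b*u (not the aircraft equations, and with plain rather than adaptive least squares):

```python
import numpy as np

a_true, b_true = 2.0, 3.0
T = 10.0
t = np.linspace(0.0, T, 4001)
dt = t[1] - t[0]
u = np.sin(1.3 * t)                       # assumed input signal

# simulate the "measured" output by small-step Euler (y(0) is never used below)
y = np.empty_like(t)
y[0] = 0.7
for k in range(len(t) - 1):
    y[k + 1] = y[k] + dt * (-a_true * y[k] + b_true * u[k])

def integ(f):                             # trapezoidal rule on the uniform grid
    return dt * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])

# Multiply the model by phi_n(t) = sin(n*pi*t/T) and integrate by parts:
# since phi_n(0) = phi_n(T) = 0,  -int(dphi*y) = -a*int(phi*y) + b*int(phi*u),
# giving one linear equation in (a, b) per modulating function, derivative-free.
rows, rhs = [], []
for n in range(1, 6):
    phi = np.sin(n * np.pi * t / T)
    dphi = (n * np.pi / T) * np.cos(n * np.pi * t / T)
    rows.append([integ(phi * y), -integ(phi * u)])
    rhs.append(integ(dphi * y))

a_est, b_est = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0]
print("estimated (a, b):", a_est, b_est)  # close to (2, 3)
```

The vanishing boundary values are what make the estimates immune to unknown initial conditions, the special property the abstract refers to.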
System Identification Applied to Dynamic CFD Simulation and Wind Tunnel Data
NASA Technical Reports Server (NTRS)
Murphy, Patrick C.; Klein, Vladislav; Frink, Neal T.; Vicroy, Dan D.
2011-01-01
Demanding aerodynamic modeling requirements for military and civilian aircraft have provided impetus for researchers to improve computational and experimental techniques. Model validation is a key component of these research endeavors, so this study is an initial effort to extend conventional time history comparisons by comparing model parameter estimates and their standard errors using system identification methods. An aerodynamic model of an aircraft performing one-degree-of-freedom roll oscillatory motion about its body axes is developed. The model includes linear aerodynamics and deficiency function parameters characterizing an unsteady effect. For estimation of unknown parameters, two techniques, harmonic analysis and two-step linear regression, were applied to roll-oscillatory wind tunnel data and to computational fluid dynamics (CFD) simulated data. The model used for this study is a highly swept wing unmanned aerial combat vehicle. Differences in response prediction, parameter estimates, and standard errors are compared and discussed.
Modeling and Control of a Fixed Wing Tilt-Rotor Tri-Copter
NASA Astrophysics Data System (ADS)
Summers, Alexander
This thesis considers modeling and control of a fixed-wing tilt-rotor tri-copter, with the conceptual design emphasizing payload transport. Aerodynamic panel code and CAD design provide the base aerodynamic, geometric, mass, and inertia properties. A set of non-linear dynamics is created considering gravity, aerodynamics in vertical takeoff and landing (VTOL) and forward flight, and propulsion, applied to a three degree of freedom system. A transition strategy that removes trajectory planning by means of scheduled inputs is proposed. Three discrete controllers, utilizing separate control techniques, are applied to ensure stability in the aerodynamic regions of VTOL, transition, and forward flight. The controller techniques include linear quadratic regulation, full state integral action, gain scheduling, and proportional integral derivative (PID) flight control. Simulation of the model control system for flight from forward to backward transition is completed with mass and center of gravity variation.
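Of the listed techniques, linear quadratic regulation is the most self-contained to sketch: solve the continuous algebraic Riccati equation for the state-feedback gain. A double-integrator stand-in for one hover axis (the weights are assumed; the thesis's models carry more states):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator: position, velocity
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])                 # state weights (assumed)
R = np.array([[1.0]])                    # control effort weight (assumed)

P = solve_continuous_are(A, B, Q, R)     # algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)          # optimal state feedback u = -K x

poles = np.linalg.eigvals(A - B @ K)
print("LQR gain K:", K)
print("closed-loop poles:", poles)       # all strictly in the left half-plane
```

Gain scheduling then amounts to recomputing or interpolating such gains across the VTOL, transition, and forward-flight linearizations.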
Biomolecular Imaging with Coherent Nonlinear Vibrational Microscopy
Chung, Chao-Yu; Boik, John; Potma, Eric O.
2014-01-01
Optical imaging with spectroscopic vibrational contrast is a label-free solution for visualizing, identifying, and quantifying a wide range of biomolecular compounds in biological materials. Both linear and nonlinear vibrational microscopy techniques derive their imaging contrast from infrared active or Raman allowed molecular transitions, which provide a rich palette for interrogating chemical and structural details of the sample. Yet nonlinear optical methods, which include both second-order sum-frequency generation (SFG) and third-order coherent Raman scattering (CRS) techniques, offer several improved imaging capabilities over their linear precursors. Nonlinear vibrational microscopy features unprecedented vibrational imaging speeds, provides strategies for higher spatial resolution, and gives access to additional molecular parameters. These advances have turned vibrational microscopy into a premier tool for chemically dissecting live cells and tissues. This review discusses the molecular contrast of SFG and CRS microscopy and highlights several of the advanced imaging capabilities that have impacted biological and biomedical research. PMID:23245525
Evaluation of ERTS-1 imagery for spectral geological mapping in diverse terranes of New York State
NASA Technical Reports Server (NTRS)
Isachsen, Y. W. (Principal Investigator); Fakundiny, R. H.; Forster, S. W.
1973-01-01
The author has identified the following significant results. Linear anomalies dominate the new geological information derived from ERTS-1 imagery, total lengths now exceeding 6000 km. Experimentation with a variety of viewing techniques suggests that conventional photogeologic analysis of band 7 results in the location of more than 97 percent of all linears found. Bedrock lithologic types are distinguishable only where they are topographically expressed or govern land use signatures. The maxima on rose diagrams for ERTS-1 anomalies correspond well with those for mapped faults and topographic lineaments. A multiscale analysis of linears showed that single topographic linears at 1:2,500,000 became dashed linears at 1:1,000,000, and aligned zones of shorter parallel, en echelon, or conjugate linears at 1:500,000. Most circular features found were explained away by U-2 airphoto analysis, but several remain as anomalies. Visible glacial features include individual drumlins, best seen in winter imagery, drumlinoids, eskers, ice-marginal drainage channels, glacial lake shorelines and sand plains, and end moraines.
Object matching using a locally affine invariant and linear programming techniques.
Li, Hongsheng; Huang, Xiaolei; He, Lei
2013-02-01
In this paper, we introduce a new matching method based on a novel locally affine-invariant geometric constraint and linear programming techniques. To model and solve the matching problem in a linear programming formulation, all geometric constraints should be able to be exactly or approximately reformulated into a linear form. This is a major difficulty for this kind of matching algorithm. We propose a novel locally affine-invariant constraint which can be exactly linearized and requires far fewer auxiliary variables than other linear programming-based methods do. The key idea behind it is that each point in the template point set can be exactly represented by an affine combination of its neighboring points, whose weights can be solved easily by least squares. Errors of reconstructing each matched point using such weights are used to penalize the disagreement of geometric relationships between the template points and the matched points. The resulting overall objective function can be solved efficiently by linear programming techniques. Our experimental results on both rigid and nonrigid object matching show the effectiveness of the proposed algorithm.
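The constraint at the heart of the method can be sketched in a few lines: represent a template point as an affine combination of its neighbours by solving a small least-squares system with the weights constrained to sum to one; the reconstruction is exact, and the same weights survive any affine transform of the points (the coordinates below are assumed):

```python
import numpy as np

p = np.array([1.0, 2.0])                              # template point (assumed)
N = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 4.0]])    # its neighbours (assumed)

# Solve N^T w = p together with the affine constraint sum(w) = 1.
A = np.vstack([N.T, np.ones(3)])
b = np.append(p, 1.0)
w, *_ = np.linalg.lstsq(A, b, rcond=None)
print("weights:", w, " sum:", w.sum())

# Affine invariance: transform all points by x -> M x + t_off and the same
# weights still reconstruct the transformed point exactly.
M_aff, t_off = np.array([[2.0, 1.0], [0.0, 3.0]]), np.array([5.0, -1.0])
err = (N @ M_aff.T + t_off).T @ w - (M_aff @ p + t_off)
print("reconstruction error after affine map:", np.linalg.norm(err))
```

Because affine maps preserve affine combinations, the reconstruction error stays at zero under any such transform; the matching objective penalizes candidate correspondences by how far they deviate from this reconstruction, and that penalty linearizes exactly.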
Construction accident narrative classification: An evaluation of text mining techniques.
Goh, Yang Miang; Ubeynarayana, C U
2017-11-01
Learning from past accidents is fundamental to accident prevention. Thus, accident and near miss reporting are encouraged by organizations and regulators. However, for organizations managing large safety databases, the time taken to accurately classify accident and near miss narratives will be very significant. This study aims to evaluate the utility of various text mining classification techniques in classifying 1000 publicly available construction accident narratives obtained from the US OSHA website. The study evaluated six machine learning algorithms, including support vector machine (SVM), linear regression (LR), random forest (RF), k-nearest neighbor (KNN), decision tree (DT) and Naive Bayes (NB), and found that SVM produced the best performance in classifying the test set of 251 cases. Further experimentation with tokenization of the processed text and non-linear SVM were also conducted. In addition, a grid search was conducted on the hyperparameters of the SVM models. It was found that the best performing classifiers were linear SVM with unigram tokenization and radial basis function (RBF) SVM with unigram tokenization. In view of its relative simplicity, the linear SVM is recommended. Across the 11 labels of accident causes or types, the precision of the linear SVM ranged from 0.5 to 1, recall ranged from 0.36 to 0.9 and F1 score was between 0.45 and 0.92. The reasons for misclassification were discussed and suggestions on ways to improve the performance were provided. Copyright © 2017 Elsevier Ltd. All rights reserved.
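The recommended configuration, a linear SVM over unigram token counts, can be sketched with scikit-learn; the toy narratives and labels below are invented stand-ins for the OSHA data, not the authors' pipeline:

```python
# Minimal sketch (not the authors' exact pipeline): a linear SVM over
# unigram counts, the configuration the study recommends. The toy
# narratives below are invented, not OSHA records.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["worker fell from scaffold", "crane load struck worker",
         "fall from ladder on site", "worker struck by falling object"]
labels = ["fall", "struck", "fall", "struck"]

# ngram_range=(1, 1) gives unigram tokenization.
clf = make_pipeline(CountVectorizer(ngram_range=(1, 1)), LinearSVC())
clf.fit(texts, labels)
pred = clf.predict(["worker fell from roof"])[0]
```

In practice the grid search the study describes would tune the SVM regularization parameter C over such a pipeline.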
Predicting recycling behaviour: Comparison of a linear regression model and a fuzzy logic model.
Vesely, Stepan; Klöckner, Christian A; Dohnal, Mirko
2016-03-01
In this paper we demonstrate that fuzzy logic can provide a better tool for predicting recycling behaviour than the customarily used linear regression. To show this, we take a set of empirical data on recycling behaviour (N=664), which we randomly divide into two halves. The first half is used to estimate a linear regression model of recycling behaviour, and to develop a fuzzy logic model of recycling behaviour. As the first comparison, the fit of both models to the data included in estimation of the models (N=332) is evaluated. As the second comparison, predictive accuracy of both models for "new" cases (hold-out data not included in building the models, N=332) is assessed. In both cases, the fuzzy logic model significantly outperforms the regression model in terms of fit. To conclude, when accurate predictions of recycling and possibly other environmental behaviours are needed, fuzzy logic modelling seems to be a promising technique. Copyright © 2015 Elsevier Ltd. All rights reserved.
Opto-electronic characterization of third-generation solar cells.
Neukom, Martin; Züfle, Simon; Jenatsch, Sandra; Ruhstaller, Beat
2018-01-01
We present an overview of opto-electronic characterization techniques for solar cells including light-induced charge extraction by linearly increasing voltage, impedance spectroscopy, transient photovoltage, charge extraction and more. Guidelines for the interpretation of experimental results are derived based on charge drift-diffusion simulations of solar cells with common performance limitations. It is investigated how nonidealities like charge injection barriers, traps and low mobilities among others manifest themselves in each of the studied cell characterization techniques. Moreover, comprehensive parameter extraction for an organic bulk-heterojunction solar cell comprising PCDTBT:PC70BM is demonstrated. The simulations reproduce measured results of 9 different experimental techniques. Parameter correlation is minimized due to the combination of various techniques. Thereby a route to comprehensive and accurate parameter extraction is identified.
Mathematical Optimization Techniques
NASA Technical Reports Server (NTRS)
Bellman, R. (Editor)
1963-01-01
The papers collected in this volume were presented at the Symposium on Mathematical Optimization Techniques held in the Santa Monica Civic Auditorium, Santa Monica, California, on October 18-20, 1960. The objective of the symposium was to bring together, for the purpose of mutual education, mathematicians, scientists, and engineers interested in modern optimization techniques. Some 250 persons attended. The techniques discussed included recent developments in linear, integer, convex, and dynamic programming as well as the variational processes surrounding optimal guidance, flight trajectories, statistical decisions, structural configurations, and adaptive control systems. The symposium was sponsored jointly by the University of California, with assistance from the National Science Foundation, the Office of Naval Research, the National Aeronautics and Space Administration, and The RAND Corporation, through Air Force Project RAND.
Solution Methods for 3D Tomographic Inversion Using A Highly Non-Linear Ray Tracer
NASA Astrophysics Data System (ADS)
Hipp, J. R.; Ballard, S.; Young, C. J.; Chang, M.
2008-12-01
To develop 3D velocity models to improve nuclear explosion monitoring capability, we have developed a 3D tomographic modeling system that traces rays using an implementation of the Um and Thurber ray pseudo-bending approach, with full enforcement of Snell's Law in 3D at the major discontinuities. Due to the highly non-linear nature of the ray tracer, however, we are forced to substantially damp the inversion in order to converge on a reasonable model. Unfortunately, the amount of damping is not known a priori and can significantly increase the number of calls to the computationally expensive ray tracer and the least squares matrix solver. If the damping term is too small, the solution step size either produces an unrealistic model velocity change or places the solution in or near a local minimum from which extrication is nearly impossible. If the damping term is too large, convergence can be very slow or premature convergence can occur. Standard approaches involve running inversions with a suite of damping parameters to find the best model. A better solution methodology is to take advantage of existing non-linear solution techniques such as Levenberg-Marquardt (LM) or quasi-Newton iterative solvers. In particular, the LM algorithm was specifically designed to find the minimum of a multivariate function that is expressed as the sum of squares of non-linear real-valued functions. It has become a standard technique for solving non-linear least-squares problems, and is widely adopted in a broad spectrum of disciplines, including the geosciences. At each iteration, the LM approach dynamically varies the level of damping to optimize convergence. When the current estimate of the solution is far from the ultimate solution, LM behaves as a steepest descent method, but transitions to Gauss-Newton behavior, with near quadratic convergence, as the estimate approaches the final solution.
We show typical linear solution techniques and how they can lead to local minima if the damping is set too low. We also describe the LM technique and show how it automatically determines the appropriate damping factor as it iteratively converges on the best solution. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
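The damping adaptation described above can be illustrated with a minimal LM iteration on a toy exponential fit; the update factors (0.3 and 2.0) and the test problem are ours, not the tomography code's:

```python
import numpy as np

# Illustrative LM iteration showing adaptive damping: lam shrinks
# after an accepted step (toward Gauss-Newton) and grows after a
# rejected one (toward steepest descent).
def levenberg_marquardt(residual, jacobian, x0, iters=100):
    x, lam = np.asarray(x0, dtype=float), 1e-3
    for _ in range(iters):
        r, J = residual(x), jacobian(x)
        g, H = J.T @ r, J.T @ J
        step = np.linalg.solve(H + lam * np.diag(np.diag(H)), -g)
        if np.sum(residual(x + step) ** 2) < np.sum(r ** 2):
            x, lam = x + step, lam * 0.3    # accept: reduce damping
        else:
            lam *= 2.0                      # reject: increase damping
    return x

# Fit y = a * exp(b * t) to noise-free data generated with a=2, b=-1.
t = np.linspace(0.0, 2.0, 20)
y = 2.0 * np.exp(-1.0 * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(p[1] * t),
                                 p[0] * t * np.exp(p[1] * t)])
p_fit = levenberg_marquardt(res, jac, [1.0, 0.0])
```

The single damping factor here plays the role the abstract describes: no a priori damping choice, and no suite of separate inversion runs.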
Rotor vibration caused by external excitation and rub
NASA Technical Reports Server (NTRS)
Matsushita, O.; Takagi, M.; Kikuchi, K.; Kaga, M.
1982-01-01
For turbomachinery with low natural frequencies, rotor vibrations caused by external forces other than unbalance, such as foundation motion, seismic waves, and rub, have recently required consideration. Such forced vibration is investigated analytically and experimentally in the present paper. Vibrations in a rotor-bearing system under harmonic excitation are analyzed by the modal technique in the case of a linear system including the gyroscopic effect. For a nonlinear system, a new and powerful quasi-modal technique is developed and applied to the vibration caused by rub.
Graphing techniques for materials laboratory using Excel
NASA Technical Reports Server (NTRS)
Kundu, Nikhil K.
1994-01-01
Engineering technology curricula stress hands-on training and laboratory practice in most technical courses. Laboratory reports should include analytical as well as graphical evaluation of experimental data. Experience shows that many students have neither the mathematical background nor the expertise for graphing. This paper briefly describes the procedure and data obtained from a number of experiments, such as spring rate, stress concentration, endurance limit, and column buckling, for a variety of materials. Then, with a brief introduction to Microsoft Excel, the author explains the techniques used for linear regression and logarithmic graphing.
NASA Astrophysics Data System (ADS)
Rodrigues, Gonçalo C.; Duflou, Joost R.
2018-02-01
This paper offers an in-depth look into beam shaping and polarization control as two of the most promising techniques for improving industrial laser cutting of metal sheets. An assessment model is developed for the study of such effects. It is built upon several modifications to models available in the literature in order to evaluate the potential of a wide range of considered concepts. This includes different kinds of beam shaping (achieved by extra-cavity optical elements or asymmetric diode stacking) and polarization control techniques (linear, cross, radial, azimuthal). A fully mathematical description and solution procedure are provided. Three case studies for direct diode lasers follow, containing both experimental data and parametric studies. In the first case study, linear polarization is analyzed for any given angle between the cutting direction and the electrical field. In the second case several polarization strategies are compared for similar cut conditions, evaluating, for example, the minimum number of spatial divisions of a segmented polarized laser beam needed to achieve a target performance. A novel strategy, based on a 12-division linear-to-radial polarization converter with an axis misalignment and capable of improving cutting efficiency by more than 60%, is proposed. The last case study offers different insights into beam shaping techniques, with an example of a beam shape optimization path yielding a 30% improvement in cutting efficiency. The proposed techniques are not limited to this type of laser source, nor is the model dedicated to these specific case studies. Limitations of the model and opportunities are further discussed.
NASA Astrophysics Data System (ADS)
Cornelius, Reinold R.; Voight, Barry
1995-03-01
The Materials Failure Forecasting Method for volcanic eruptions (FFM) analyses the rate of precursory phenomena. Time of eruption onset is derived from the time of "failure" implied by an accelerating rate of deformation. The approach attempts to fit data, Ω, to the differential relationship Ω̈ = AΩ̇^α, where the dot superscript represents the time derivative, and the data Ω may be any of several parameters describing the accelerating deformation or energy release of the volcanic system. Rate coefficients, A and α, may be derived from appropriate data sets to provide an estimate of time to "failure". As the method is still an experimental technique, it should be used with appropriate judgment during times of volcanic crisis. Limitations of the approach are identified and discussed. Several kinds of eruption precursory phenomena, all simulating accelerating creep during the mechanical deformation of the system, can be used with FFM. Among these are tilt data, slope-distance measurements, crater fault movements and seismicity. The use of seismic coda, seismic amplitude-derived energy release and time-integrated amplitudes or coda lengths are examined. Usage of cumulative coda length directly has some practical advantages over more rigorously derived parameters, and RSAM and SSAM technologies appear to be well suited to real-time applications. One graphical and four numerical techniques of applying FFM are discussed. The graphical technique is based on an inverse representation of rate versus time. For α = 2, the inverse rate plot is linear; it is concave upward for α < 2 and concave downward for α > 2. The eruption time is found by simple extrapolation of the data set toward the time axis. Three numerical techniques are based on linear least-squares fits to linearized data sets. The "linearized least-squares technique" is most robust and is expected to be the most practical numerical technique.
This technique is based on an iterative linearization of the given rate-time series. The hindsight technique is biased toward predicting too early an eruption time in foresight applications. The "log rate versus log acceleration technique", utilizing a logarithmic representation of the fundamental differential equation, is disadvantaged by large data scatter after interpolation of accelerations. One further numerical technique, a nonlinear least-squares fit to rate data, requires special and more complex software. PC-oriented computer codes were developed for data manipulation, application of the three linearizing numerical methods, and curve fitting. Separate software is required for graphing purposes. All three linearizing techniques provide an eruption window based on a data envelope around the linear least-squares fit, at a specified level of confidence, and an estimated rate at time of failure.
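The graphical inverse-rate technique for α = 2 amounts to a linear extrapolation to the time axis, sketched here on synthetic data with a known failure time:

```python
import numpy as np

# Sketch of the graphical inverse-rate technique for alpha = 2:
# synthetic precursor rate obeying  rate(t) = 1 / (A * (tf - t)),
# so the inverse rate  A*(tf - t)  is linear in t and its
# extrapolated zero crossing recovers the failure (eruption) time tf.
A_coef, tf = 0.5, 10.0
t = np.linspace(0.0, 8.0, 40)
rate = 1.0 / (A_coef * (tf - t))

slope, intercept = np.polyfit(t, 1.0 / rate, 1)
t_forecast = -intercept / slope   # where the fitted line meets the time axis
```

Real precursor data are noisy, which is why the abstract recommends the confidence envelope (eruption window) around the fit rather than a single extrapolated time.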
ERIC Educational Resources Information Center
Stage, Frances K.
The nature and use of LISREL (LInear Structural RELationships) analysis are considered, including an examination of college students' commitment to a university. LISREL is a fairly new causal analysis technique that has broad application in the social sciences and that employs structural equation estimation. The application examined in this paper…
Croft, Stephen; Burr, Thomas Lee; Favalli, Andrea; ...
2015-12-10
We report that the declared linear density of 238U and 235U in fresh low enriched uranium light water reactor fuel assemblies can be verified for nuclear safeguards purposes using a neutron coincidence counter collar in passive and active mode, respectively. The active mode calibration of the Uranium Neutron Collar – Light water reactor fuel (UNCL) instrument is normally performed using a non-linear fitting technique. The fitting technique relates the measured neutron coincidence rate (the predictor) to the linear density of 235U (the response) in order to estimate model parameters of the nonlinear Padé equation, which traditionally is used to model the calibration data. Alternatively, following a simple data transformation, the fitting can also be performed using standard linear fitting methods. This paper compares performance of the nonlinear technique to the linear technique, using a range of possible error variance magnitudes in the measured neutron coincidence rate. We develop the required formalism and then apply the traditional (nonlinear) and alternative (linear) approaches to the same experimental and corresponding simulated representative datasets. Lastly, we find that, in this context, because of the magnitude of the errors in the predictor, it is preferable not to transform to a linear model, and it is preferable not to adjust for the errors in the predictor when inferring the model parameters.
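The abstract does not give the Padé form, so as a hedged sketch assume a simple rational calibration R(m) = a·m/(1 + b·m); the transform m/R = 1/a + (b/a)·m then turns the fit into a straight line, which is the kind of linearization the paper compares against:

```python
import numpy as np

# Hedged sketch: assume a rational (Pade-like) calibration
#   R(m) = a*m / (1 + b*m)
# relating 235U linear density m to coincidence rate R. The transform
#   m/R = 1/a + (b/a)*m
# makes it linear in m, so standard linear fitting applies.
a_true, b_true = 2.0, 0.1
m = np.linspace(1.0, 10.0, 15)
R = a_true * m / (1.0 + b_true * m)        # noise-free synthetic data

slope, intercept = np.polyfit(m, m / R, 1)
a_est = 1.0 / intercept
b_est = slope * a_est
```

With noise-free data the transform is harmless; the paper's point is that with realistic errors in the measured rate, the transformed (linear) fit is not preferable.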
NASA Technical Reports Server (NTRS)
Kibler, K. S.; Mcdaniel, G. A.
1981-01-01
A digital local linearization technique was used to solve a system of stiff differential equations which simulate a magnetic bearing assembly. The results prove the technique to be accurate, stable, and efficient when compared to a general purpose variable order Adams method with a stiff option.
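One step of a local linearization scheme can be sketched as follows; this is an assumed exponential-integrator form for illustration, not necessarily the authors' exact scheme:

```python
import numpy as np
from scipy.linalg import expm

# Hedged sketch of one local-linearization step: the ODE is
# linearized at the current state via its Jacobian and the linear
# system is advanced exactly with a matrix exponential, so stiff
# modes impose no stability restriction on the step size h.
def local_linearization_step(f, jac, x, h):
    J = jac(x)
    # x(t+h) ~= x + J^{-1} (expm(J*h) - I) f(x)
    return x + np.linalg.solve(J, (expm(J * h) - np.eye(len(x))) @ f(x))

# Stiff linear test system: eigenvalues -1000 and -1.
A = np.array([[-1000.0, 0.0], [0.0, -1.0]])
f = lambda x: A @ x
jac = lambda x: A

x = np.array([1.0, 1.0])
h = 0.1                 # explicit Euler would need h < 0.002 here
for _ in range(10):
    x = local_linearization_step(f, jac, x, h)
# For a linear system the scheme is exact at any step size.
```

This stability at large steps is what makes such techniques competitive with stiff multistep methods like the variable-order Adams solver mentioned above.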
DOE Office of Scientific and Technical Information (OSTI.GOV)
Granville, DA; Sawakuchi, GO
2014-08-15
In this work, we demonstrate inconsistencies in commonly used Monte Carlo methods of scoring linear energy transfer (LET) in proton therapy beams. In particle therapy beams, the LET is an important parameter because the relative biological effectiveness (RBE) depends on it. LET is often determined using Monte Carlo techniques. We used a realistic Monte Carlo model of a proton therapy nozzle to score proton LET in spread-out Bragg peak (SOBP) depth-dose distributions. We used three different scoring and calculation techniques to determine average LET at varying depths within a 140 MeV beam with a 4 cm SOBP and a 250 MeV beam with a 10 cm SOBP. These techniques included fluence-weighted (Φ-LET) and dose-weighted average (D-LET) LET calculations from: 1) scored energy spectra converted to LET spectra through a lookup table, 2) directly scored LET spectra and 3) accumulated LET scored ‘on-the-fly’ during simulations. All protons (primary and secondary) were included in the scoring. Φ-LET was found to be less sensitive to changes in scoring technique than D-LET. In addition, the spectral scoring methods were sensitive to low-energy (high-LET) cutoff values in the averaging. Using cutoff parameters chosen carefully for consistency between techniques, we found variations in Φ-LET values of up to 1.6% and variations in D-LET values of up to 11.2% for the same irradiation conditions, depending on the method used to score LET. Variations were largest near the end of the SOBP, where the LET and energy spectra are broader.
Comparison of lossless compression techniques for prepress color images
NASA Astrophysics Data System (ADS)
Van Assche, Steven; Denecker, Koen N.; Philips, Wilfried R.; Lemahieu, Ignace L.
1998-12-01
In the pre-press industry color images have both a high spatial and a high color resolution. Such images require a considerable amount of storage space and impose long transmission times. Data compression is desired to reduce these storage and transmission problems. Because of the high quality requirements in the pre-press industry only lossless compression is acceptable. Most existing lossless compression schemes operate on gray-scale images. In this case the color components of color images must be compressed independently. However, higher compression ratios can be achieved by exploiting inter-color redundancies. In this paper we present a comparison of three state-of-the-art lossless compression techniques which exploit such color redundancies: IEP (Inter- color Error Prediction) and a KLT-based technique, which are both linear color decorrelation techniques, and Interframe CALIC, which uses a non-linear approach to color decorrelation. It is shown that these techniques are able to exploit color redundancies and that color decorrelation can be done effectively and efficiently. The linear color decorrelators provide a considerable coding gain (about 2 bpp) on some typical prepress images. The non-linear interframe CALIC predictor does not yield better results, but the full interframe CALIC technique does.
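A KLT-style linear colour decorrelation, of the kind compared above, can be sketched by diagonalizing the sample RGB covariance; the correlated channels below are synthetic, not prepress image data:

```python
import numpy as np

# Sketch of KLT-style linear colour decorrelation: diagonalize the
# sample covariance of strongly correlated (synthetic) RGB channels,
# so the transformed channels can be coded independently.
rng = np.random.default_rng(3)
base = rng.normal(size=5000)
rgb = np.column_stack([base + 0.05 * rng.normal(size=5000)
                       for _ in range(3)])         # highly correlated

cov = np.cov(rgb, rowvar=False)
_, eigvecs = np.linalg.eigh(cov)                   # KLT basis
decorrelated = (rgb - rgb.mean(axis=0)) @ eigvecs
new_cov = np.cov(decorrelated, rowvar=False)       # diagonal (numerically)
```

After the transform most of the energy sits in one channel, which is the coding gain the linear decorrelators above exploit.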
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stiebel-Kalish, Hadas, E-mail: kalishhadas@gmail.com; Sackler School of Medicine, Tel Aviv University, Tel Aviv; Reich, Ehud
Purpose: Meningiomas threatening the anterior visual pathways (AVPs) and not amenable for surgery are currently treated with multisession stereotactic radiotherapy. Stereotactic radiotherapy is available with a number of devices. The most ubiquitous include the gamma knife, CyberKnife, tomotherapy, and isocentric linear accelerator systems. The purpose of our study was to describe a case series of AVP meningiomas treated with linear accelerator fractionated stereotactic radiotherapy (FSRT) using the multiple, noncoplanar, dynamic conformal rotation paradigm and to compare the success and complication rates with those reported for other techniques. Patients and Methods: We included all patients with AVP meningiomas followed up atmore » our neuro-ophthalmology unit for a minimum of 12 months after FSRT. We compared the details of the neuro-ophthalmologic examinations and tumor size before and after FSRT and at the end of follow-up. Results: Of 87 patients with AVP meningiomas, 17 had been referred for FSRT. Of the 17 patients, 16 completed >12 months of follow-up (mean 39). Of the 16 patients, 11 had undergone surgery before FSRT and 5 had undergone FSRT as first-line management. Tumor control was achieved in 14 of the 16 patients, with three meningiomas shrinking in size after RT. Two meningiomas progressed, one in an area that was outside the radiation field. The visual function had improved in 6 or stabilized in 8 of the 16 patients (88%) and worsened in 2 (12%). Conclusions: Linear accelerator fractionated RT using the multiple noncoplanar dynamic rotation conformal paradigm can be offered to patients with meningiomas that threaten the anterior visual pathways as an adjunct to surgery or as first-line treatment, with results comparable to those reported for other stereotactic RT techniques.« less
An adaptive technique to maximize lossless image data compression of satellite images
NASA Technical Reports Server (NTRS)
Stewart, Robert J.; Lure, Y. M. Fleming; Liou, C. S. Joe
1994-01-01
Data compression will play an increasingly important role in the storage and transmission of image data within NASA science programs as the Earth Observing System comes into operation. It is important that the science data be preserved at the fidelity the instrument and the satellite communication systems were designed to produce. Lossless compression must therefore be applied, at least, to archive the processed instrument data. In this paper, we present an analysis of the performance of lossless compression techniques and develop an adaptive approach which applies image remapping, feature-based image segmentation to determine regions of similar entropy, and high-order arithmetic coding to obtain significant improvements over the use of conventional compression techniques alone. Image remapping is used to transform the original image into a lower entropy state. Several techniques were tested on satellite images, including differential pulse code modulation, bi-linear interpolation, and block-based linear predictive coding. The results of these experiments are discussed, and trade-offs between computation requirements and entropy reductions are used to identify the optimum approach for a variety of satellite images. Further entropy reduction can be achieved by segmenting the image based on local entropy properties and then applying a coding technique which maximizes compression for the region. Experimental results are presented showing the effect of different coding techniques for regions of different entropy. A rule base is developed through which the technique giving the best compression is selected. The paper concludes that maximum compression can be achieved cost effectively and at acceptable performance rates with a combination of techniques which are selected based on image contextual information.
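The entropy-reduction idea behind remapping can be illustrated with first-order Shannon entropy before and after a simple differential (DPCM-like) remapping of a synthetic smooth signal:

```python
import numpy as np

# Sketch of the entropy criterion behind remapping: first-order
# Shannon entropy (bits/sample) of a smooth synthetic signal drops
# sharply after a simple differential (DPCM-like) remapping.
def entropy_bits(values):
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(2)
row = np.cumsum(rng.integers(-2, 3, size=10000))   # slowly wandering signal
diff = np.diff(row)                                # differential remapping

h_raw, h_diff = entropy_bits(row), entropy_bits(diff)
# diff takes only 5 symbol values, so h_diff <= log2(5) < h_raw
```

An entropy coder such as the arithmetic coder mentioned above can then approach h_diff bits per sample instead of h_raw, which is the gain the remapping buys.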
NASA Technical Reports Server (NTRS)
Young, Katherine C.; Sobieszczanski-Sobieski, Jaroslaw
1988-01-01
This project has two objectives. The first is to determine whether linear programming techniques can improve performance when handling design optimization problems with a large number of design variables and constraints relative to the feasible directions algorithm. The second purpose is to determine whether using the Kreisselmeier-Steinhauser (KS) function to replace the constraints with one constraint will reduce the cost of total optimization. Comparisons are made using solutions obtained with linear and non-linear methods. The results indicate that there is no cost saving using the linear method or in using the KS function to replace constraints.
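The KS aggregation itself has a standard closed form, sketched here with illustrative constraint values; ρ controls how tightly KS envelopes the maximum constraint:

```python
import numpy as np

# Sketch of the Kreisselmeier-Steinhauser (KS) aggregation mentioned
# above: constraints g_i(x) <= 0 are folded into one smooth function
#   KS = gmax + ln(sum exp(rho*(g_i - gmax))) / rho,
# a conservative, differentiable envelope of max(g).
def ks_function(g, rho=50.0):
    g = np.asarray(g, dtype=float)
    gmax = g.max()                       # shift for numerical stability
    return gmax + np.log(np.sum(np.exp(rho * (g - gmax)))) / rho

g = [-0.2, -0.05, -1.0]                  # three satisfied constraints
ks = ks_function(g)
# max(g) <= KS <= max(g) + ln(n)/rho, so one smooth constraint replaces many
```

Replacing many constraints with this single smooth one is exactly the cost-reduction question the project's second objective examines.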
Assessing FRET using Spectral Techniques
Leavesley, Silas J.; Britain, Andrea L.; Cichon, Lauren K.; Nikolaev, Viacheslav O.; Rich, Thomas C.
2015-01-01
Förster resonance energy transfer (FRET) techniques have proven invaluable for probing the complex nature of protein–protein interactions, protein folding, and intracellular signaling events. These techniques have traditionally been implemented with the use of one or more fluorescence band-pass filters, either as fluorescence microscopy filter cubes, or as dichroic mirrors and band-pass filters in flow cytometry. In addition, new approaches for measuring FRET, such as fluorescence lifetime and acceptor photobleaching, have been developed. Hyperspectral techniques for imaging and flow cytometry have also shown to be promising for performing FRET measurements. In this study, we have compared traditional (filter-based) FRET approaches to three spectral-based approaches: the ratio of acceptor-to-donor peak emission, linear spectral unmixing, and linear spectral unmixing with a correction for direct acceptor excitation. All methods are estimates of FRET efficiency, except for one-filter set and three-filter set FRET indices, which are included for consistency with prior literature. In the first part of this study, spectrofluorimetric data were collected from a CFP–Epac–YFP FRET probe that has been used for intracellular cAMP measurements. All comparisons were performed using the same spectrofluorimetric datasets as input data, to provide a relevant comparison. Linear spectral unmixing resulted in measurements with the lowest coefficient of variation (0.10) as well as accurate fits using the Hill equation. FRET efficiency methods produced coefficients of variation of less than 0.20, while FRET indices produced coefficients of variation greater than 8.00. These results demonstrate that spectral FRET measurements provide improved response over standard, filter-based measurements. Using spectral approaches, single-cell measurements were conducted through hyperspectral confocal microscopy, linear unmixing, and cell segmentation with quantitative image analysis. 
Results from these studies confirmed that spectral imaging is effective for measuring subcellular, time-dependent FRET dynamics and that additional fluorescent signals can be readily separated from FRET signals, enabling multilabel studies of molecular interactions. PMID:23929684
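Linear spectral unmixing, the best-performing approach above, reduces to a least-squares solve against reference spectra; the Gaussian spectra below are synthetic stand-ins for measured donor/acceptor references, not CFP/YFP calibration data:

```python
import numpy as np

# Sketch of linear spectral unmixing: model the measured emission as a
# weighted sum of donor and acceptor reference spectra and recover the
# weights by least squares.
wl = np.linspace(450.0, 600.0, 151)
donor = np.exp(-((wl - 480.0) / 20.0) ** 2)      # donor reference spectrum
acceptor = np.exp(-((wl - 530.0) / 25.0) ** 2)   # acceptor reference spectrum
S = np.column_stack([donor, acceptor])

true_w = np.array([0.3, 0.7])                    # donor/acceptor abundances
measured = S @ true_w                            # noise-free mixed spectrum
w, *_ = np.linalg.lstsq(S, measured, rcond=None) # unmixed weights
```

The correction for direct acceptor excitation mentioned above would subtract the directly excited acceptor contribution before forming the FRET estimate from these weights.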
NASA Astrophysics Data System (ADS)
Peirce, Anthony P.; Rabitz, Herschel
1988-08-01
The boundary element (BE) technique is used to analyze the effect of defects on one-dimensional chemically active surfaces. The standard BE algorithm for diffusion is modified to include the effects of bulk desorption by making use of an asymptotic expansion technique to evaluate influences near boundaries and defect sites. An explicit time evolution scheme is proposed to treat the non-linear equations associated with defect sites. The proposed BE algorithm is shown to provide an efficient and convergent algorithm for modelling localized non-linear behavior. Since it exploits the actual Green's function of the linear diffusion-desorption process that takes place on the surface, the BE algorithm is extremely stable. The BE algorithm is applied to a number of interesting physical problems in which non-linear reactions occur at localized defects. The Lotka-Volterra system is considered in which the source, sink and predator-prey interaction terms are distributed at different defect sites in the domain and in which the defects are coupled by diffusion. This example provides a stringent test of the stability of the numerical algorithm. Marginal stability oscillations are analyzed for the Prigogine-Lefever reaction that occurs on a lattice of defects. Dissipative effects are observed for large perturbations to the marginal stability state, and rapid spatial reorganization of uniformly distributed initial perturbations is seen to take place. In another series of examples the effect of defect locations on the balance between desorptive processes on chemically active surfaces is considered. The effect of dynamic pulsing at various time-scales is considered for a one species reactive trapping model. Similar competitive behavior between neighboring defects previously observed for static adsorption levels is shown to persist for dynamic loading of the surface. 
The analysis of a more complex three species reaction process also provides evidence of competitive behavior between neighboring defect sites. The proposed BE algorithm is shown to provide a useful technique for analyzing the effect of defect sites on chemically active surfaces.
NASA Astrophysics Data System (ADS)
Mahaboob, B.; Venkateswarlu, B.; Sankar, J. Ravi; Balasiddamuni, P.
2017-11-01
This paper uses matrix calculus techniques to obtain the Nonlinear Least Squares Estimator (NLSE), the Maximum Likelihood Estimator (MLE), and a linear pseudo-model for a nonlinear regression model. David Pollard and Peter Radchenko [1] described analytic techniques to compute the NLSE; the present paper introduces an alternative method to compute the NLSE using principles of multivariate calculus. This study is concerned with new optimization techniques used to compute the MLE and NLSE. Anh [2] derived the NLSE and MLE of a heteroscedastic regression model. Lemcoff [3] discussed a procedure for obtaining a linear pseudo-model for a nonlinear regression model. In this article a new technique is developed to obtain the linear pseudo-model for a nonlinear regression model using multivariate calculus, and the linear pseudo-model of Edmond Malinvaud [4] is explained in a very different way. David Pollard et al. used empirical process techniques to study the asymptotics of the least-squares estimator (LSE) for fitting nonlinear regression functions in 2006, and Jae Myung [13] provided a good conceptual introduction to maximum likelihood estimation in his "Tutorial on maximum likelihood estimation".
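For a concrete (hedged) illustration of an NLSE, here is a nonlinear least-squares fit of a simple exponential regression model; the model and parameter values are ours, not the paper's:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hedged illustration of a nonlinear least squares estimator (NLSE):
# fit the nonlinear regression model  y = b0 * exp(b1 * x) + noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 1.5 * np.exp(0.8 * x) + rng.normal(0.0, 0.01, x.size)

model = lambda x, b0, b1: b0 * np.exp(b1 * x)
(b0, b1), cov = curve_fit(model, x, y, p0=(1.0, 0.5))
```

Under Gaussian noise with constant variance, this NLSE coincides with the MLE, which is why the two estimators are treated together above.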
Linear time relational prototype based learning.
Gisbrecht, Andrej; Mokbel, Bassam; Schleif, Frank-Michael; Zhu, Xibin; Hammer, Barbara
2012-10-01
Prototype based learning offers an intuitive interface for inspecting large quantities of electronic data in supervised or unsupervised settings. Recently, many techniques have been extended to data described by general dissimilarities rather than Euclidean vectors, so-called relational data settings. Unlike their Euclidean counterparts, these techniques have quadratic time complexity due to the underlying quadratic dissimilarity matrix, making them infeasible even for medium-sized data sets. The contribution of this article is twofold: on the one hand, we propose a novel supervised prototype based classification technique for dissimilarity data based on popular learning vector quantization (LVQ); on the other hand, we transfer a linear time approximation technique, the Nyström approximation, to this algorithm and to an unsupervised counterpart, the relational generative topographic mapping (GTM). In this way, linear time and space methods result. We evaluate the techniques on three examples from the biomedical domain.
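As a concrete illustration of the Nyström idea the abstract refers to, the sketch below approximates an n×n similarity matrix from m = 2 landmark rows as C·W⁻¹·Cᵀ, so the full quadratic matrix is never stored. The toy rank-2 Gram matrix and landmark choice are illustrative assumptions, not from the paper:

```python
# Toy Gram matrix K = X X^T (rank 2), so Nystrom with 2 landmarks is exact.
X = [[1, 0], [0, 1], [1, 1], [2, 1]]
K = [[sum(a*b for a, b in zip(xi, xj)) for xj in X] for xi in X]

def nystrom(K, idx):
    """Nystrom approximation K ~ C W^{-1} C^T using two landmark indices."""
    n = len(K)
    C = [[K[i][j] for j in idx] for i in range(n)]   # n x 2 landmark columns
    W = [[K[i][j] for j in idx] for i in idx]        # 2 x 2 landmark block
    det = W[0][0]*W[1][1] - W[0][1]*W[1][0]
    Winv = [[ W[1][1]/det, -W[0][1]/det],
            [-W[1][0]/det,  W[0][0]/det]]
    def entry(i, j):
        # (C Winv C^T)[i][j], computed on demand: O(n*m) storage, not O(n^2)
        return sum(C[i][a]*Winv[a][b]*C[j][b] for a in range(2) for b in range(2))
    return entry
```

Because the toy matrix is exactly rank 2 and the landmark block is invertible, the approximation reproduces every entry; for real dissimilarity data the reconstruction is only approximate.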
NASA Astrophysics Data System (ADS)
Vasant, P.; Ganesan, T.; Elamvazuthi, I.
2012-11-01
Fairly reasonable results were obtained in the past for non-linear engineering problems using optimization techniques such as neural networks, genetic algorithms, and fuzzy logic applied independently. Increasingly, hybrid techniques are being used to solve non-linear problems and obtain better output. This paper discusses the use of a hybrid neuro-genetic technique to optimize the mapping of geological structure, known as seismic survey. It involves the minimization of an objective function subject to geophysical and operational constraints. In this work, the optimization was initially performed using genetic programming, followed by a hybrid neuro-genetic programming approach. Comparative studies and analysis were then carried out on the optimized results. The results indicate that the hybrid neuro-genetic technique produced better results than the stand-alone genetic programming method.
Mathematical Techniques for Nonlinear System Theory.
1981-09-01
This report deals with research results obtained in the following areas: (1) Finite-dimensional linear system theory by algebraic methods--linear...; (2) Infinite-dimensional linear systems--realization theory of infinite-dimensional linear systems; (3) Nonlinear system theory--basic properties of
Schneiderman, Eva; Colón, Ellen L; White, Donald J; Schemehorn, Bruce; Ganovsky, Tara; Haider, Amir; Garcia-Godoy, Franklin; Morrow, Brian R; Srimaneepong, Viritpon; Chumprasert, Sujin
2017-09-01
We have previously reported on progress toward the refinement of profilometry-based abrasivity testing of dentifrices using a V8 brushing machine and tactile or optical measurement of dentin wear. The general application of this technique may be advanced by demonstration of successful inter-laboratory confirmation of the method. The objective of this study was to explore the capability of different laboratories in the assessment of dentifrice abrasivity using a profilometry-based evaluation technique developed in our Mason laboratories. In addition, we wanted to assess the interchangeability of human and bovine specimens. Participating laboratories were instructed in methods associated with Radioactive Dentin Abrasivity-Profilometry Equivalent (RDA-PE) evaluation, including site visits to discuss critical elements of specimen preparation, masking, profilometry scanning, and procedures. Laboratories were likewise instructed on the requirement for demonstration of proportional linearity as a key condition for validation of the technique. Laboratories were provided with four test dentifrices, blinded for testing, with a broad range of abrasivity. In each laboratory, a calibration curve was developed for varying V8 brushing strokes (0, 4,000, and 10,000 strokes) with the ISO abrasive standard. Proportional linearity was determined as the ratio of standard abrasion mean depths created with 4,000 and 10,000 strokes (a 2.5-fold difference). Criteria for successful calibration within the method (established in our Mason laboratory) were set at proportional linearity = 2.5 ± 0.3. RDA-PE was compared to Radiotracer RDA for the four test dentifrices, with the latter obtained by averaging results from three independent Radiotracer RDA sites. Individual laboratories and their results were compared by 1) proportional linearity and 2) acquired RDA-PE values for the test pastes. Five sites participated in the study. One site did not pass the proportional linearity objectives.
Data for this site are not reported at the request of the researchers. Three of the remaining four sites reported herein tested human dentin and all three met proportional linearity objectives for human dentin. Three of four sites participated in testing bovine dentin and all three met the proportional linearity objectives for bovine dentin. RDA-PE values for test dentifrices were similar between sites. All four sites that met proportional linearity requirement successfully identified the dentifrice formulated above the industry standard 250 RDA (as RDA-PE). The profilometry method showed at least as good reproducibility and differentiation as Radiotracer assessments. It was demonstrated that human and bovine specimens could be used interchangeably. The standardized RDA-PE method was reproduced in multiple laboratories in this inter-laboratory study. Evidence supports that this method is a suitable technique for ISO method 11609 Annex B.
NASA Astrophysics Data System (ADS)
Dar, Aasif Bashir; Jha, Rakesh Kumar
2017-03-01
Various dispersion compensation units are presented and evaluated in this paper. These dispersion compensation units include dispersion compensation fiber (DCF), DCF merged with fiber Bragg grating (FBG) (the joint technique), and linear, square root, and cube root chirped tanh apodized FBG. For the performance evaluation, a 10 Gb/s NRZ transmission system over a 100-km-long single-mode fiber is used. The three chirped FBGs are optimized individually to yield pulse width reduction percentages (PWRP) of 86.66%, 79.96%, and 62.42% for linear, square root, and cube root chirp, respectively. The DCF and the joint technique provide remarkable PWRPs of 94.45% and 96.96%, respectively. The performance of the optimized linear chirped tanh apodized FBG and the DCF is compared for a long-haul transmission system on the basis of the quality factor of the received signal. For both systems, the maximum transmission distance is calculated such that the quality factor at the receiver is ≥ 6, and the results show that the performance of the FBG is comparable to that of the DCF, with the advantages of very low cost, small size, and reduced nonlinear effects.
An SVM-based solution for fault detection in wind turbines.
Santos, Pedro; Villa, Luisa F; Reñones, Aníbal; Bustillo, Andres; Maudes, Jesús
2015-03-09
Research into fault diagnosis in machines with a wide range of variable loads and speeds, such as wind turbines, is of great industrial interest. Analysis of the power signals emitted by wind turbines alone is insufficient for the diagnosis of mechanical faults in their mechanical transmission chain; a successful diagnosis requires the inclusion of accelerometers to evaluate vibrations. This work presents a multi-sensory system for fault diagnosis in wind turbines, combined with a data-mining solution for the classification of the operational state of the turbine. The selected sensors are accelerometers, whose vibration signals are processed using angular resampling techniques, together with electrical, torque, and speed measurements. Support vector machines (SVMs) are selected for the classification task, including two traditional and two promising new kernels. This multi-sensory system has been validated on a test-bed that simulates the real conditions of wind turbines with two fault typologies: misalignment and imbalance. Comparison of SVM performance with the results of artificial neural networks (ANNs) shows that the linear kernel SVM outperforms the other kernels and the ANNs in terms of accuracy, training, and tuning times. The suitability and superior performance of the linear SVM is also experimentally analyzed, leading to the conclusion that this data acquisition technique generates linearly separable datasets.
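A linear-kernel SVM of the kind found superior here can be trained with a simple Pegasos-style sub-gradient solver on the primal hinge-loss objective. The toy two-class data, regularization constant, and epoch count below are illustrative assumptions, not the study's setup:

```python
# Toy linearly separable two-class data; the trailing 1.0 feature acts as a bias term.
data = [([ 2.0,  2.0, 1.0], +1), ([ 3.0,  1.0, 1.0], +1), ([ 2.0,  3.0, 1.0], +1),
        ([-2.0, -1.0, 1.0], -1), ([-1.0, -2.0, 1.0], -1), ([-3.0, -2.0, 1.0], -1)]

def train_linear_svm(data, lam=0.01, epochs=200):
    """Pegasos-style sub-gradient descent on the L2-regularized hinge loss."""
    w = [0.0, 0.0, 0.0]
    t = 0
    for _ in range(epochs):
        for x, y in data:            # deterministic cyclic pass for reproducibility
            t += 1
            eta = 1.0 / (lam * t)    # decaying step size
            margin = y * sum(wi*xi for wi, xi in zip(w, x))
            shrink = 1.0 - eta*lam   # weight decay from the regularizer
            if margin < 1.0:         # hinge loss active: also step toward y*x
                w = [shrink*wi + eta*y*xi for wi, xi in zip(w, x)]
            else:
                w = [shrink*wi for wi in w]
    return w

w = train_linear_svm(data)
predict = lambda x: 1 if sum(wi*xi for wi, xi in zip(w, x)) >= 0 else -1
```

On linearly separable data such as this (and, per the abstract, the turbine datasets), the learned hyperplane classifies every training point correctly.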
Handling nonnormality and variance heterogeneity for quantitative sublethal toxicity tests.
Ritz, Christian; Van der Vliet, Leana
2009-09-01
The advantages of using regression-based techniques to derive endpoints from environmental toxicity data are clear, and slowly, this superior analytical technique is gaining acceptance. As use of regression-based analysis becomes more widespread, some of the associated nuances and potential problems come into sharper focus. Looking at data sets that cover a broad spectrum of standard test species, we noticed that some model fits to data failed to meet two key assumptions, variance homogeneity and normality, that are necessary for correct statistical analysis via regression-based techniques. Failure to meet these assumptions often is caused by reduced variance at the concentrations showing severe adverse effects. Although commonly used with linear regression analysis, transformation of the response variable alone is not appropriate when fitting data using nonlinear regression techniques. Through analysis of sample data sets, including Lemna minor, Eisenia andrei (terrestrial earthworm), and algae, we show that both the so-called Box-Cox transformation and use of the Poisson distribution can help to correct variance heterogeneity and nonnormality and so allow nonlinear regression analysis to be implemented. Both the Box-Cox transformation and the Poisson distribution can be readily implemented into existing protocols for statistical analysis. By correcting for nonnormality and variance heterogeneity, these two statistical tools can be used to encourage the transition to regression-based analysis and the deprecation of less-desirable and less-flexible analytical techniques, such as linear interpolation.
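The Box-Cox transformation invoked here is a one-line power transform. The sketch below shows its standard form; the function name and sample values are illustrative, not from the paper:

```python
import math

def box_cox(y, lam):
    """Box-Cox transform for y > 0: (y**lam - 1)/lam if lam != 0, else log(y)."""
    if lam == 0:
        return math.log(y)
    return (y**lam - 1.0) / lam

# A multiplicatively spread (right-skewed) sample becomes evenly spaced at lam = 0,
# which is the variance-stabilizing effect used before nonlinear regression:
skewed = [1.0, 10.0, 100.0]
stabilized = [box_cox(v, 0) for v in skewed]
```

In practice the exponent λ is chosen by maximum likelihood over the residuals rather than fixed in advance.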
Sparse 4D TomoSAR imaging in the presence of non-linear deformation
NASA Astrophysics Data System (ADS)
Khwaja, Ahmed Shaharyar; Çetin, Müjdat
2018-04-01
In this paper, we present a sparse four-dimensional tomographic synthetic aperture radar (4D TomoSAR) imaging scheme that can estimate elevation and linear as well as non-linear seasonal deformation rates of scatterers using the interferometric phase. Unlike existing sparse processing techniques that use fixed dictionaries based on a linear deformation model, we use a variable dictionary for the non-linear deformation in the form of seasonal sinusoidal deformation, in addition to the fixed dictionary for the linear deformation. We estimate the amplitude of the sinusoidal deformation using an optimization method and create the variable dictionary using the estimated amplitude. We show preliminary results using simulated data that demonstrate the soundness of our proposed technique for sparse 4D TomoSAR imaging in the presence of non-linear deformation.
Theoretical and software considerations for nonlinear dynamic analysis
NASA Technical Reports Server (NTRS)
Schmidt, R. J.; Dodds, R. H., Jr.
1983-01-01
In the finite element method for structural analysis, it is generally necessary to discretize the structural model into a very large number of elements to accurately evaluate displacements, strains, and stresses. As the complexity of the model increases, the number of degrees of freedom can easily exceed the capacity of present-day software systems. Improvements to structural analysis software, including more efficient use of existing hardware and improved structural modeling techniques, are discussed. One modeling technique that is used successfully in static linear and nonlinear analysis is multilevel substructuring. This research extends the use of multilevel substructure modeling to dynamic analysis and defines the requirements for a general purpose software system capable of efficient nonlinear dynamic analysis. The multilevel substructuring technique is presented, the analytical formulations and computational procedures for dynamic analysis and nonlinear mechanics are reviewed, and an approach to the design and implementation of a general purpose structural software system is presented.
The dynamics and control of large flexible space structures, 6
NASA Technical Reports Server (NTRS)
Bainum, P. M.
1983-01-01
The controls analysis based on a truncated finite element model of the 122-m Hoop/Column Antenna System focuses on an analysis of controllability as well as the synthesis of control laws. Graph theoretic techniques are employed to consider controllability for different combinations of numbers and locations of actuators. Control law synthesis is based on an application of linear regulator theory as well as pole placement techniques. Placement of an actuator on the hoop can result in a noticeable improvement in the transient characteristics. The problem of orientation and shape control of an orbiting flexible beam, previously examined, is now extended to include the influence of solar radiation environmental forces. For extremely flexible thin structures, modification of control laws may be required, and techniques for accomplishing this are explained. Effects of environmental torques are also included in previously developed models of orbiting flexible thin platforms.
Schneiderman, Eva; Colón, Ellen; White, Donald J; St John, Samuel
2015-01-01
The purpose of this study was to compare the abrasivity of commercial dentifrices by two techniques: the conventional gold standard radiotracer-based Radioactive Dentin Abrasivity (RDA) method, and a newly validated technique based on V8 brushing that included a profilometry-based evaluation of dentin wear. This profilometry-based method is referred to as RDA-Profilometry Equivalent, or RDA-PE. A total of 36 dentifrices were sourced from four global dentifrice markets (Asia Pacific [including China], Europe, Latin America, and North America) and tested blindly using both the standard radiotracer (RDA) method and the new profilometry method (RDA-PE), taking care to follow specific details related to specimen preparation and treatment. The commercial dentifrices tested exhibited a wide range of abrasivity, with virtually all falling well under the industry-accepted upper limit of 250, that is, 2.5 times the level of abrasion measured using the ISO 11609 reference abrasive, calcium pyrophosphate, as the control. RDA and RDA-PE comparisons were linear across the entire range of abrasivity (r² = 0.7102) and both measures exhibited similar reproducibility with replicate assessments. RDA-PE assessments were not just linearly correlated, but were also proportional to conventional RDA measures. The linearity and proportionality of the results of the current study support that both methods (RDA or RDA-PE) provide similar results and justify a rationale for making the upper abrasivity limit of 250 apply to both RDA and RDA-PE.
Improving medium-range ensemble streamflow forecasts through statistical post-processing
NASA Astrophysics Data System (ADS)
Mendoza, Pablo; Wood, Andy; Clark, Elizabeth; Nijssen, Bart; Clark, Martyn; Ramos, Maria-Helena; Nowak, Kenneth; Arnold, Jeffrey
2017-04-01
Probabilistic hydrologic forecasts are a powerful source of information for decision-making in water resources operations. A common approach is the hydrologic model-based generation of streamflow forecast ensembles, which can be implemented to account for different sources of uncertainties - e.g., from initial hydrologic conditions (IHCs), weather forecasts, and hydrologic model structure and parameters. In practice, hydrologic ensemble forecasts typically have biases and spread errors stemming from errors in the aforementioned elements, resulting in a degradation of probabilistic properties. In this work, we compare several statistical post-processing techniques applied to medium-range ensemble streamflow forecasts obtained with the System for Hydromet Applications, Research and Prediction (SHARP). SHARP is a fully automated prediction system for the assessment and demonstration of short-term to seasonal streamflow forecasting applications, developed by the National Center for Atmospheric Research, University of Washington, U.S. Army Corps of Engineers, and U.S. Bureau of Reclamation. The suite of post-processing techniques includes linear blending, quantile mapping, extended logistic regression, quantile regression, ensemble analogs, and the generalized linear model post-processor (GLMPP). We assess and compare these techniques using multi-year hindcasts in several river basins in the western US. This presentation discusses preliminary findings about the effectiveness of the techniques for improving probabilistic skill, reliability, discrimination, sharpness and resolution.
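One of the simpler post-processors in the suite above, quantile mapping, replaces a raw forecast value by the observed-climatology value at the same empirical rank, removing systematic bias. The sketch below is a minimal rank-based version with illustrative climatology samples; it is not SHARP's implementation:

```python
def quantile_map(x, fcst_clim, obs_clim):
    """Map a raw forecast value onto the observed climatology at the same rank."""
    f, o = sorted(fcst_clim), sorted(obs_clim)
    p = sum(1 for v in f if v <= x) / len(f)   # empirical non-exceedance probability
    k = min(int(p * len(o)), len(o) - 1)       # matching quantile of the observations
    return o[k]

# Forecast climatology biased ~10 units high relative to observations:
fcst = [10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
obs  = [ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9]
```

Operational implementations typically interpolate between quantiles and fit the mapping per lead time and season, but the bias-removal idea is the same.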
NASA Technical Reports Server (NTRS)
Sheen, Jyh-Jong; Bishop, Robert H.
1992-01-01
The feedback linearization technique is applied to the problem of spacecraft attitude control and momentum management with control moment gyros (CMGs). The feedback linearization consists of a coordinate transformation, which transforms the system to a companion form, and a nonlinear feedback control law to cancel the nonlinear dynamics resulting in a linear equivalent model. Pole placement techniques are then used to place the closed-loop poles. The coordinate transformation proposed here evolves from three output functions of relative degree four, three, and two, respectively. The nonlinear feedback control law is presented. Stability in a neighborhood of a controllable torque equilibrium attitude (TEA) is guaranteed and this fact is demonstrated by the simulation results. An investigation of the nonlinear control law shows that singularities exist in the state space outside the neighborhood of the controllable TEA. The nonlinear control law is simplified by a standard linearization technique and it is shown that the linearized nonlinear controller provides a natural way to select control gains for the multiple-input, multiple-output system. Simulation results using the linearized nonlinear controller show good performance relative to the nonlinear controller in the neighborhood of the TEA.
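The cancellation at the heart of feedback linearization can be shown on a scalar toy plant: the control input cancels the plant nonlinearity exactly, leaving linear error dynamics for pole placement. The plant ẋ = −x³ + u, gain, and step size below are illustrative and far simpler than the CMG dynamics of the paper:

```python
def simulate(x0, x_ref, k=2.0, dt=0.01, steps=1000):
    """Euler simulation of xdot = -x**3 + u under feedback linearization."""
    x = x0
    for _ in range(steps):
        v = -k * (x - x_ref)   # linear outer-loop law (pole placed at -k)
        u = x**3 + v           # cancel the plant nonlinearity
        x += dt * (-x**3 + u)  # true plant integrated with the combined input
    return x
```

With the cancellation in place, the closed loop behaves as ẋ = −k(x − x_ref), so the state converges exponentially to the reference.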
A FORTRAN program for the analysis of linear continuous and sample-data systems
NASA Technical Reports Server (NTRS)
Edwards, J. W.
1976-01-01
A FORTRAN digital computer program which performs the general analysis of linearized control systems is described. State variable techniques are used to analyze continuous, discrete, and sampled-data systems. Analysis options include the calculation of system eigenvalues, transfer functions, root loci, root contours, frequency responses, power spectra, and transient responses for open- and closed-loop systems. A flexible data input format allows the user to define systems in a variety of representations. Data may be entered by inputting explicit data matrices or matrices constructed in user-written subroutines, by specifying transfer function block diagrams, or by using a combination of these methods.
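The first analysis option listed, system eigenvalue computation for a state-variable model, can be sketched for the 2×2 case directly from the trace and determinant. The companion-form matrix below is an illustrative example, not taken from the program:

```python
import math

def eig2(A):
    """Eigenvalues of a 2x2 state matrix via trace/determinant (real-eigenvalue case)."""
    tr  = A[0][0] + A[1][1]
    det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    d = math.sqrt(tr*tr - 4.0*det)   # assumes a non-negative discriminant
    return sorted([(tr - d)/2.0, (tr + d)/2.0])

# xdot = A x for a damped second-order system in companion form:
A = [[0.0, 1.0], [-2.0, -3.0]]
```

Both eigenvalues lie in the left half-plane, so the transient response the program would compute for this system decays to zero.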
Linear laser diode arrays for improvement in optical disk recording for space stations
NASA Technical Reports Server (NTRS)
Alphonse, G. A.; Carlin, D. B.; Connolly, J. C.
1990-01-01
The design and fabrication of individually addressable laser diode arrays for high performance magneto-optic recording systems are presented. Ten diode arrays with 30 mW CW light output, linear light-versus-current characteristics, and single longitudinal mode spectra were fabricated using channel substrate planar (CSP) structures. Preliminary results on the inverse CSP structure, whose fabrication is less critically dependent on device parameters than the CSP, are also presented. The impact of system parameters and requirements, in particular the effect of feedback on laser design, is assessed, and techniques to reduce feedback or minimize its effect on system performance, including mode-stabilized structures, are evaluated.
Passive quantum error correction of linear optics networks through error averaging
NASA Astrophysics Data System (ADS)
Marshman, Ryan J.; Lund, Austin P.; Rohde, Peter P.; Ralph, Timothy C.
2018-02-01
We propose and investigate a method of error detection and noise correction for bosonic linear networks using a method of unitary averaging. The proposed error averaging does not rely on ancillary photons or control and feedforward correction circuits, remaining entirely passive in its operation. We construct a general mathematical framework for this technique and then give a series of proof of principle examples including numerical analysis. Two methods for the construction of averaging are then compared to determine the most effective manner of implementation and probe the related error thresholds. Finally we discuss some of the potential uses of this scheme.
Opto-electronic characterization of third-generation solar cells
Jenatsch, Sandra
2018-01-01
We present an overview of opto-electronic characterization techniques for solar cells, including light-induced charge extraction by linearly increasing voltage, impedance spectroscopy, transient photovoltage, charge extraction, and more. Guidelines for the interpretation of experimental results are derived based on charge drift-diffusion simulations of solar cells with common performance limitations. It is investigated how nonidealities such as charge injection barriers, traps, and low mobilities, among others, manifest themselves in each of the studied cell characterization techniques. Moreover, comprehensive parameter extraction for an organic bulk-heterojunction solar cell comprising PCDTBT:PC70BM is demonstrated. The simulations reproduce measured results of nine different experimental techniques. Parameter correlation is minimized due to the combination of various techniques. Thereby a route to comprehensive and accurate parameter extraction is identified. PMID:29707069
Hierarchy of simulation models for a turbofan gas engine
NASA Technical Reports Server (NTRS)
Longenbaker, W. E.; Leake, R. J.
1977-01-01
Steady-state and transient performance of an F-100-like turbofan gas engine are modeled by a computer program, DYNGEN, developed by NASA. The model employs block data maps and includes about 25 states. Low-order nonlinear analytical and linear techniques are described in terms of their application to the model. Experimental comparisons illustrating the accuracy of each model are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Scott A; Catalfamo, Simone; Brake, Matthew R. W.
2017-01-01
In the study of the dynamics of nonlinear systems, experimental measurements often convolute the response of the nonlinearity of interest and the effects of the experimental setup. To reduce the influence of the experimental setup on the deduction of the parameters of the nonlinearity, the response of a mechanical joint is investigated under various experimental setups. These experiments first focus on quantifying how support structures and measurement techniques affect the natural frequency and damping of a linear system. The results indicate that support structures created from bungees have negligible influence on the system in terms of frequency and damping ratio variations. The study then focuses on the effects of the excitation technique on the response of a linear system. The findings suggest that thinner stingers should not be used because, under the high force requirements, the stinger bending modes are excited, adding unwanted torsional coupling. The optimal configuration for testing the linear system is then applied to a nonlinear system in order to assess the robustness of the test configuration. Finally, recommendations are made for conducting experiments on nonlinear systems using conventional/linear testing techniques.
HYDRORECESSION: A toolbox for streamflow recession analysis
NASA Astrophysics Data System (ADS)
Arciniega, S.
2015-12-01
Streamflow recession curves are hydrological signatures that allow studying the relationship between groundwater storage and baseflow and/or low flows at the catchment scale. Recent studies have shown that streamflow recession analysis can be quite sensitive to the combination of different models, extraction techniques, and parameter estimation methods. In order to better characterize streamflow recession curves, new methodologies combining multiple approaches have been recommended. The HYDRORECESSION toolbox, presented here, is a Matlab graphical user interface developed to analyse streamflow recession time series, with tools for parameterizing linear and nonlinear storage-outflow relationships through four of the most useful recession models (Maillet, Boussinesq, Coutagne, and Wittenberg). The toolbox includes four parameter-fitting techniques (linear regression, lower envelope, data binning, and mean squared error) and three different methods to extract hydrograph recession segments (Vogel, Brutsaert, and Aksoy). In addition, the toolbox has a module that separates the baseflow component from the observed hydrograph using the inverse reservoir algorithm. Potential applications of HYDRORECESSION include model parameter analysis, hydrological regionalization and classification, baseflow index estimates, catchment-scale recharge and low-flow modelling, among others. HYDRORECESSION is freely available for non-commercial and academic purposes.
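The simplest of the four recession models, Maillet's exponential storage-outflow law Q(t) = Q₀·e^(−t/k), can be fitted with the toolbox's linear-regression option by regressing ln Q on t. The sketch below (Python rather than Matlab, with synthetic data and an assumed storage constant) illustrates the idea:

```python
import math

def fit_maillet(t, q):
    """Fit Q(t) = Q0 * exp(-t/k) by linear regression of ln Q on t; return k."""
    y = [math.log(v) for v in q]
    n = len(t)
    tm, ym = sum(t)/n, sum(y)/n
    slope = (sum((ti-tm)*(yi-ym) for ti, yi in zip(t, y))
             / sum((ti-tm)**2 for ti in t))
    return -1.0/slope   # slope of ln Q vs t is -1/k

# Synthetic recession limb with storage constant k = 10 days:
t = list(range(20))
q = [5.0*math.exp(-ti/10.0) for ti in t]
```

On noisy field data the extraction method and fitting technique matter, which is exactly the sensitivity the toolbox is designed to explore.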
NASA Technical Reports Server (NTRS)
Epton, Michael A.; Magnus, Alfred E.
1990-01-01
An outline of the derivation of the differential equation governing linear subsonic and supersonic potential flow is given. The use of Green's Theorem to obtain an integral equation over the boundary surface is discussed. The engineering techniques incorporated in the Panel Aerodynamics (PAN AIR) program (a discretization method which solves the integral equation for arbitrary first order boundary conditions) are then discussed in detail. Items discussed include the construction of the compressibility transformation, splining techniques, imposition of the boundary conditions, influence coefficient computation (including the concept of the finite part of an integral), computation of pressure coefficients, and computation of forces and moments. Principal revisions to version 3.0 are the following: (1) appendices H and K more fully describe the Aerodynamic Influence Coefficient (AIC) construction; (2) appendix L now provides a complete description of the AIC solution process; (3) appendix P is new and discusses the theory for the new FDP module (which calculates streamlines and offbody points); and (4) numerous small corrections and revisions reflecting the MAG module rewrite.
Linear and nonlinear stability of the Blasius boundary layer
NASA Technical Reports Server (NTRS)
Bertolotti, F. P.; Herbert, TH.; Spalart, P. R.
1992-01-01
Two new techniques for the study of the linear and nonlinear instability in growing boundary layers are presented. The first technique employs partial differential equations of parabolic type exploiting the slow change of the mean flow, disturbance velocity profiles, wavelengths, and growth rates in the streamwise direction. The second technique solves the Navier-Stokes equation for spatially evolving disturbances using buffer zones adjacent to the inflow and outflow boundaries. Results of both techniques are in excellent agreement. The linear and nonlinear development of Tollmien-Schlichting (TS) waves in the Blasius boundary layer is investigated with both techniques and with a local procedure based on a system of ordinary differential equations. The results are compared with previous work and the effects of non-parallelism and nonlinearity are clarified. The effect of nonparallelism is confirmed to be weak and, consequently, not responsible for the discrepancies between measurements and theoretical results for parallel flow.
Ranking Forestry Investments With Parametric Linear Programming
Paul A. Murphy
1976-01-01
Parametric linear programming is introduced as a technique for ranking forestry investments under multiple constraints; it combines the advantages of simple ranking and linear programming as capital budgeting tools.
Application of a sensitivity analysis technique to high-order digital flight control systems
NASA Technical Reports Server (NTRS)
Paduano, James D.; Downing, David R.
1987-01-01
A sensitivity analysis technique for multiloop flight control systems is studied. This technique uses the scaled singular values of the return difference matrix as a measure of the relative stability of a control system. It then uses the gradients of these singular values with respect to system and controller parameters to judge sensitivity. The sensitivity analysis technique is first reviewed; then it is extended to include digital systems, through the derivation of singular-value gradient equations. Gradients with respect to parameters which do not appear explicitly as control-system matrix elements are also derived, so that high-order systems can be studied. A complete review of the integrated technique is given by way of a simple example: the inverted pendulum problem. The technique is then demonstrated on the X-29 control laws. Results show linear models of real systems can be analyzed by this sensitivity technique, if it is applied with care. A computer program called SVA was written to accomplish the singular-value sensitivity analysis techniques. Thus computational methods and considerations form an integral part of many of the discussions. A user's guide to the program is included. The SVA is a fully public domain program, running on the NASA/Dryden Elxsi computer.
Niroomandi, S; Alfaro, I; Cueto, E; Chinesta, F
2012-01-01
Model reduction techniques have been shown to constitute a valuable tool for real-time simulation in surgical environments and other fields. However, some limitations imposed by real-time constraints have not yet been overcome. One such limitation is the severe time constraint (a required resolution frequency of 500 Hz) that precludes the use of Newton-like schemes for solving the non-linear models usually employed for modeling biological tissues. In this work we present a technique able to deal with geometrically non-linear models, based on the use of model reduction techniques together with an efficient non-linear solver. Examples of the performance of the technique are given. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Shafie, Suhaidi; Kawahito, Shoji; Halin, Izhal Abdul; Hasan, Wan Zuha Wan
2009-01-01
The partial charge transfer technique can expand the dynamic range of a CMOS image sensor by synthesizing two types of signal, namely the long and short accumulation time signals. However, the short accumulation time signal obtained from the partial transfer operation suffers from non-linearity with respect to the incident light. In this paper, an analysis of the non-linearity in the partial charge transfer technique is carried out, and the relationship between dynamic range and non-linearity is studied. The results show that the non-linearity is caused by two factors: the current diffusion, which has an exponential relation with the potential barrier, and the initial condition of the photodiodes, which shows that the error in the high illumination region increases as the ratio of the long to the short accumulation time rises. Moreover, increasing the saturation level of the photodiodes also increases the error in the high illumination region.
Analysis of Learning Curve Fitting Techniques.
1987-09-01
1986. 15. Neter, John and others. Applied Linear Regression Models. Homewood IL: Irwin, 19-33. 16. SAS User’s Guide: Basics, Version 5 Edition. SAS... Linear Regression Techniques (15:23-52). Random errors are assumed to be normally distributed when using ordinary least-squares, according to Johnston...lot estimated by the improvement curve formula. For a more detailed explanation of the ordinary least-squares technique, see Neter, et al., Applied
NASA Technical Reports Server (NTRS)
Omura, J. K.; Simon, M. K.
1982-01-01
A theory is presented for deducing and predicting the performance of transmitter/receivers for bandwidth-efficient modulations suitable for use on the linear satellite channel. The underlying principle is the development of receiver structures based on the maximum-likelihood decision rule. The application of performance prediction tools, e.g., channel cutoff rate and bit error probability transfer function bounds, to these modulation/demodulation techniques is also presented.
Computer Program For Linear Algebra
NASA Technical Reports Server (NTRS)
Krogh, F. T.; Hanson, R. J.
1987-01-01
A collection of routines is provided for basic vector operations. The Basic Linear Algebra Subprogram (BLAS) library is a collection of FORTRAN-callable routines that employ standard techniques to perform the basic operations of numerical linear algebra.
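As a rough illustration of the level-1 BLAS semantics the library standardizes (the real library is FORTRAN-callable; these pure-NumPy stand-ins are only for exposition):

```python
import numpy as np

def axpy(a, x, y):
    """BLAS level-1 AXPY semantics: y <- a*x + y (returned, not in place)."""
    return a * x + y

def dot(x, y):
    """BLAS level-1 DOT semantics: inner product of two vectors."""
    return float(np.sum(x * y))

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])
print(axpy(2.0, x, y))   # [ 6.  9. 12.]
print(dot(x, y))         # 32.0
```

In practice one calls the tuned BLAS implementations through NumPy/SciPy rather than reimplementing them; the sketch only shows the operations the interface defines.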
Application of Conjugate Gradient methods to tidal simulation
Barragy, E.; Carey, G.F.; Walters, R.A.
1993-01-01
A harmonic decomposition technique is applied to the shallow water equations to yield a complex, nonsymmetric, nonlinear, Helmholtz type problem for the sea surface and an accompanying complex, nonlinear diagonal problem for the velocities. The equation for the sea surface is linearized using successive approximation and then discretized with linear, triangular finite elements. The study focuses on applying iterative methods to solve the resulting complex linear systems. The comparative evaluation includes both standard iterative methods for the real subsystems and complex versions of the well known Bi-Conjugate Gradient and Bi-Conjugate Gradient Squared methods. Several Incomplete LU type preconditioners are discussed, and the effects of node ordering, rejection strategy, domain geometry and Coriolis parameter (affecting asymmetry) are investigated. Implementation details for the complex case are discussed. Performance studies are presented and comparisons made with a frontal solver.
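A minimal sketch of this kind of solver setup, assuming SciPy and a toy complex, non-symmetric, diagonally dominant tridiagonal stand-in for the discretized Helmholtz-type sea-surface operator (the actual finite element matrices and preconditioners are not reproduced here):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import bicgstab

# Toy complex, non-symmetric tridiagonal system standing in for the
# discretized Helmholtz-type problem (all parameters illustrative).
n, k2, gamma = 50, 1.0, 0.5
main = (4.0 - k2 + 1j * gamma) * np.ones(n)           # diagonally dominant
A = diags([-np.ones(n - 1), main, -1.02 * np.ones(n - 1)],
          [-1, 0, 1], format="csr")                   # asymmetry via -1.02
b = np.ones(n, dtype=complex)

x, info = bicgstab(A, b)      # complex Bi-CGSTAB, no preconditioner
print(info)                   # 0 means the iteration converged
print(np.linalg.norm(A @ x - b) < 1e-3)
```

Working directly in complex arithmetic, as the paper discusses, avoids doubling the system into real subsystems; an ILU preconditioner would be passed via the solver's `M` argument.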
Active distribution network planning considering linearized system loss
NASA Astrophysics Data System (ADS)
Li, Xiao; Wang, Mingqiang; Xu, Hao
2018-02-01
In this paper, various distribution network planning techniques with distributed generators (DGs) are reviewed, and a new distribution network planning method is proposed. It assumes that the locations of the DGs and the topology of the network are fixed. The proposed model optimizes the DG capacities and the optimal distribution line capacity simultaneously through a cost/benefit analysis, where the benefit is quantified by the reduction of the expected interruption cost. In addition, the network loss is analyzed explicitly. For simplicity, the network loss is approximated as a quadratic function of the difference of voltage phase angles and then piecewise linearized. A piecewise linearization technique with different segment lengths is proposed. To validate its effectiveness and superiority, the proposed distribution network planning model with this linearization technique is tested on the IEEE 33-bus distribution network system.
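The segment-length idea can be sketched as follows, assuming a simple quadratic loss f(x) = x^2 and hypothetical breakpoints chosen shorter near zero, where the curvature error of a secant approximation matters most:

```python
import numpy as np

def pwl_approx(breaks):
    """Build a piecewise-linear (secant) approximation of f(x) = x**2
    on the given breakpoints; returns an evaluator."""
    f = breaks ** 2
    slopes = np.diff(f) / np.diff(breaks)
    def evaluate(x):
        i = np.clip(np.searchsorted(breaks, x) - 1, 0, len(slopes) - 1)
        return f[i] + slopes[i] * (x - breaks[i])
    return evaluate

# Non-uniform segments: short near zero, longer toward the domain edge.
breaks = np.array([0.0, 0.1, 0.25, 0.5, 1.0])
approx = pwl_approx(breaks)
x = np.linspace(0.0, 1.0, 101)
err = np.max(np.abs(approx(x) - x ** 2))
print(err)   # 0.0625: the secant error of the widest segment, (b-a)**2 / 4
```

The secant error over a segment [a, b] of x^2 peaks at the midpoint with value (b-a)^2/4, which is why shortening segments where accuracy matters (and lengthening them elsewhere) reduces the number of binary variables in the planning model for a given error.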
Chiropractic biophysics technique: a linear algebra approach to posture in chiropractic.
Harrison, D D; Janik, T J; Harrison, G R; Troyanovich, S; Harrison, D E; Harrison, S O
1996-10-01
This paper discusses linear algebra as applied to human posture in chiropractic, specifically chiropractic biophysics technique (CBP). Rotations, reflections and translations are geometric functions studied in vector spaces in linear algebra. These mathematical functions are termed rigid-body transformations and have been applied to segmental spinal movement in the literature. A review of the literature indicates that these linear algebra concepts have been used to describe vertebral motion. Here, however, these rigid-body transformations are presented as applying to the global postural movements of the head, thoracic cage and pelvis. The unique inverse functions of rotations, reflections and translations provide a theoretical basis for making postural corrections in neutral static resting posture. Chiropractic biophysics technique (CBP) uses these concepts in examination procedures, manual spinal manipulation, instrument-assisted spinal manipulation, postural exercises, extension traction and clinical outcome measures.
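A minimal 2D sketch of rigid-body transformations and the inverse functions the paper invokes (purely illustrative; the clinical quantities are defined differently):

```python
import numpy as np

def rotation(theta):
    """2D rotation matrix (e.g. a rotation in a sagittal plane)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def transform(points, R, t):
    """Rigid-body transformation: rotate each point, then translate."""
    return points @ R.T + t

def inverse_transform(points, R, t):
    """The inverse ('corrective') map: undo the translation, then apply
    the inverse rotation (the transpose, since R is orthogonal)."""
    return (points - t) @ R

pts = np.array([[1.0, 0.0], [0.0, 1.0]])
R, t = rotation(np.pi / 6), np.array([2.0, -1.0])
moved = transform(pts, R, t)
restored = inverse_transform(moved, R, t)
print(np.allclose(restored, pts))   # True: the inverse exactly undoes the move
```

The existence of these exact inverses is the "theoretical basis for making postural corrections" the abstract refers to.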
NASA Technical Reports Server (NTRS)
Isachsen, Y. W. (Principal Investigator)
1973-01-01
The author has identified the following significant results. Linear anomalies dominate the new geological information derived from ERTS-1 imagery, with total lengths now exceeding 6000 km. Experimentation with a variety of viewing techniques suggests that conventional photogeologic analysis of band 7 results in the location of more than 97 percent of all linears found. Bedrock lithologic types are distinguishable only where they are topographically expressed or govern land use signatures. The maxima on rose diagrams for ERTS-1 anomalies correspond well with those for mapped faults and topographic lineaments, despite a difference in relative magnitudes of maxima thought due to solar illumination direction. A multiscale analysis of linears showed that single topographic linears at 1:2,500,000 became dashed jugate linears at 1:500,000, and shorter linears lacking any conspicuous zonal alignment at 1:250,000. Most circular features found were explained away by U-2 airphoto analysis, but several remain as anomalies. Visible glacial features include individual drumlins, best seen in winter imagery, drumlinoids, eskers, ice-marginal drainage channels, glacial lake shorelines and sand plains, and end moraines.
Chaos as an intermittently forced linear system.
Brunton, Steven L; Brunton, Bingni W; Proctor, Joshua L; Kaiser, Eurika; Kutz, J Nathan
2017-05-30
Understanding the interplay of order and disorder in chaos is a central challenge in modern quantitative science. Approximate linear representations of nonlinear dynamics have long been sought, driving considerable interest in Koopman theory. We present a universal, data-driven decomposition of chaos as an intermittently forced linear system. This work combines delay embedding and Koopman theory to decompose chaotic dynamics into a linear model in the leading delay coordinates with forcing by low-energy delay coordinates; this is called the Hankel alternative view of Koopman (HAVOK) analysis. This analysis is applied to the Lorenz system and real-world examples including Earth's magnetic field reversal and measles outbreaks. In each case, forcing statistics are non-Gaussian, with long tails corresponding to rare intermittent forcing that precedes switching and bursting phenomena. The forcing activity demarcates coherent phase space regions where the dynamics are approximately linear from those that are strongly nonlinear. The huge amount of data generated in fields like neuroscience or finance calls for effective strategies that mine data to reveal underlying dynamics. Here Brunton et al. develop a data-driven technique to analyze chaotic systems and predict their dynamics in terms of a forced linear model.
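A compact sketch of the HAVOK pipeline on the Lorenz system, assuming NumPy and illustrative choices for the embedding window and truncation rank (the paper's actual parameters may differ):

```python
import numpy as np

def lorenz_x(n, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz system with RK4; return the x-coordinate series."""
    def f(s):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    s, out = np.array([1.0, 1.0, 1.0]), np.empty(n)
    for i in range(n):
        k1 = f(s)
        k2 = f(s + dt / 2 * k1)
        k3 = f(s + dt / 2 * k2)
        k4 = f(s + dt * k3)
        s = s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        out[i] = s[0]
    return out

# Delay-embed the scalar series into a Hankel matrix, take the SVD, and fit
# a linear model in the leading delay coordinates; the last retained
# coordinate plays the role of the intermittent forcing.
dt = 0.01
x = lorenz_x(5000, dt)
q = 100                                   # embedding window (assumed)
H = np.lib.stride_tricks.sliding_window_view(x, q).T    # Hankel matrix
U, S, Vt = np.linalg.svd(H, full_matrices=False)
r = 15                                    # truncation rank (assumed)
V = Vt[:r].T                              # delay coordinates v_1..v_r over time
dV = np.gradient(V[:, :r - 1], dt, axis=0)
AB, *_ = np.linalg.lstsq(V, dV, rcond=None)  # d v_k/dt ~ V @ AB[:, k]
print(AB.shape)   # (15, 14): linear dynamics in v_1..v_14, forced by v_15
```

Large values of the last delay coordinate then flag the intermittent forcing events (lobe switching in the Lorenz case) that the abstract describes.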
High Precision Linear And Circular Polarimetry. Sources With Stable Stokes Q,U & V In The Ghz Regime
NASA Astrophysics Data System (ADS)
Myserlis, Ioannis; Angelakis, E.; Zensus, J. A.
2017-10-01
We present a novel data analysis pipeline for the reconstruction of the linear and circular polarization parameters of radio sources. It includes several correction steps to minimize the effect of instrumental polarization, allowing the detection of linear and circular polarization degrees as low as 0.3 %. The instrumental linear polarization is corrected across the whole telescope beam and significant Stokes Q and U can be recovered even when the recorded signals are severely corrupted. The instrumental circular polarization is corrected with two independent techniques which yield consistent Stokes V results. The accuracy we reach is of the order of 0.1-0.2 % for the polarization degree and 1° for the angle. We used it to recover the polarization of around 150 active galactic nuclei that were monitored monthly between 2010.6 and 2016.3 with the Effelsberg 100-m telescope. We identified sources with stable polarization parameters that can be used as polarization standards. Five sources have stable linear polarization; three are linearly unpolarized; eight have stable polarization angle; and 11 sources have stable circular polarization, four of which with non-zero Stokes V.
A high-fidelity method to analyze perturbation evolution in turbulent flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Unnikrishnan, S., E-mail: sasidharannair.1@osu.edu; Gaitonde, Datta V., E-mail: gaitonde.3@osu.edu
2016-04-01
Small perturbation propagation in fluid flows is usually examined by linearizing the governing equations about a steady basic state. It is often useful, however, to study perturbation evolution in the unsteady evolving turbulent environment. Such analyses can elucidate the role of perturbations in the generation of coherent structures or the production of noise from jet turbulence. The appropriate equations are still the linearized Navier–Stokes equations, except that the linearization must be performed about the instantaneous evolving turbulent state, which forms the coefficients of the linearized equations. This is a far more difficult problem since in addition to the turbulent state, its rate of change and the perturbation field are all required at each instant. In this paper, we develop and use a novel technique for this problem by using a pair (denoted “baseline” and “twin”) of simultaneous synchronized Large-Eddy Simulations (LES). At each time-step, small disturbances whose propagation characteristics are to be studied, are introduced into the twin through a forcing term. At subsequent time steps, the difference between the two simulations is shown to be equivalent to solving the forced Navier–Stokes equations, linearized about the instantaneous turbulent state. The technique does not put constraints on the forcing, which could be arbitrary, e.g., white noise or other stochastic variants. We consider, however, “native” forcing having properties of disturbances that exist naturally in the turbulent environment. The method then isolates the effect of turbulence in a particular region on the rest of the field, which is useful in the study of noise source localization. The synchronized technique is relatively simple to implement into existing codes.
In addition to minimizing the storage and retrieval of large time-varying datasets, it avoids the need to explicitly linearize the governing equations, which can be a very complicated task for viscous terms or turbulence closures. The method is illustrated by application to a well-validated Mach 1.3 jet. Specifically, the effects of turbulence on the jet lipline and core collapse regions on the near-acoustic field are isolated. The properties of the method, including linearity and effect of initial transients, are discussed. The results provide insight into how turbulence from different parts of the jet contribute to the observed dominance of low and high frequency content at shallow and sideline angles, respectively.
Protein fold recognition using geometric kernel data fusion.
Zakeri, Pooya; Jeuris, Ben; Vandebril, Raf; Moreau, Yves
2014-07-01
Various approaches based on features extracted from protein sequences and often machine learning methods have been used in the prediction of protein folds. Finding an efficient technique for integrating these different protein features has received increasing attention. In particular, kernel methods are an interesting class of techniques for integrating heterogeneous data. Various methods have been proposed to fuse multiple kernels. Most techniques for multiple kernel learning focus on learning a convex linear combination of base kernels. In addition to the limitation of linear combinations, working with such approaches could cause a loss of potentially useful information. We design several techniques to combine kernel matrices by taking more involved, geometry inspired means of these matrices instead of convex linear combinations. We consider various sequence-based protein features including information extracted directly from position-specific scoring matrices and local sequence alignment. We evaluate our methods for classification on the SCOP PDB-40D benchmark dataset for protein fold recognition. The best overall accuracy on the protein fold recognition test set obtained by our methods is ∼ 86.7%. This is an improvement over the results of the best existing approach. Moreover, our computational model has been developed by incorporating the functional domain composition of proteins through a hybridization model. It is observed that by using our proposed hybridization model, the protein fold recognition accuracy is further improved to 89.30%. Furthermore, we investigate the performance of our approach on the protein remote homology detection problem by fusing multiple string kernels. The MATLAB code used for our proposed geometric kernel fusion frameworks is publicly available at http://people.cs.kuleuven.be/∼raf.vandebril/homepage/software/geomean.php?menu=5/. © The Author 2014. Published by Oxford University Press.
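One of the geometry-inspired means the abstract contrasts with convex linear combinations is the matrix geometric mean; a minimal SciPy sketch on toy 2x2 "kernels" (not protein data):

```python
import numpy as np
from scipy.linalg import sqrtm, inv

def geometric_mean(A, B):
    """Matrix geometric mean A # B = A^(1/2) (A^(-1/2) B A^(-1/2))^(1/2) A^(1/2),
    a geometry-inspired alternative to a convex linear combination of kernels."""
    As = sqrtm(A)
    Asi = inv(As)
    return As @ sqrtm(Asi @ B @ Asi) @ As

# Two toy symmetric positive definite 'kernel' matrices.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[1.0, 0.2], [0.2, 3.0]])
G = geometric_mean(A, B)
print(np.allclose(G, G.T, atol=1e-8))   # True: the mean is again symmetric
```

For commuting matrices the definition reduces to the scalar intuition: the geometric mean of diag(4, 9) and the identity is diag(2, 3).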
Linear Programming for Vocational Education Planning. Interim Report.
ERIC Educational Resources Information Center
Young, Robert C.; And Others
The purpose of the paper is to define for potential users of vocational education management information systems a quantitative analysis technique and its utilization to facilitate more effective planning of vocational education programs. Defining linear programming (LP) as a management technique used to solve complex resource allocation problems…
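A minimal LP sketch in the spirit described, with entirely hypothetical planning numbers (placement rates, per-enrollee costs, budget and instructor-hour limits), using SciPy's `linprog`:

```python
from scipy.optimize import linprog

# Hypothetical problem: choose enrollments x1, x2 in two vocational programs
# to maximize expected placements, subject to budget and instructor hours.
# linprog minimizes, so the objective is negated.
c = [-0.9, -0.5]                      # placement rates per enrollee (assumed)
A_ub = [[500.0, 300.0],               # cost per enrollee ($)
        [2.0, 1.0]]                   # instructor hours per enrollee
b_ub = [60000.0, 220.0]               # budget ($) and available hours
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)       # optimal enrollments: [60, 100] for these numbers
print(-res.fun)    # expected placements at the optimum: 104
```

Both constraints bind at this optimum, which is exactly the kind of resource-allocation trade-off the paper proposes LP for.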
NASA Astrophysics Data System (ADS)
Avitabile, Peter; O'Callahan, John
2009-01-01
Generally, response analysis of systems containing discrete nonlinear connection elements, such as typical mounting connections, requires that the physical finite element system matrices be used in a direct integration algorithm to compute the nonlinear response solution. Due to the large size of these physical matrices, forced nonlinear response analysis requires significant computational resources. Usually, the individual components of the system are analyzed and tested as separate components, and their individual behavior may be essentially linear compared to the total assembled system. However, joining these linear subsystems with highly nonlinear connection elements causes the entire system to become nonlinear. It would be advantageous if these linear modal subsystems could be utilized in the forced nonlinear response analysis, since much effort has usually been expended in fine-tuning and adjusting the analytical models to reflect the tested subsystem configuration. Several more efficient techniques have been developed to address this class of problem. Three of these techniques, the equivalent reduced model technique (ERMT), the modal modification response technique (MMRT), and the component element method (CEM), are presented in this paper and compared to traditional methods.
Detection of Genetically Modified Sugarcane by Using Terahertz Spectroscopy and Chemometrics
NASA Astrophysics Data System (ADS)
Liu, J.; Xie, H.; Zha, B.; Ding, W.; Luo, J.; Hu, C.
2018-03-01
A methodology is proposed to identify genetically modified sugarcane from non-genetically modified sugarcane by using terahertz spectroscopy and chemometrics techniques, including linear discriminant analysis (LDA), support vector machine-discriminant analysis (SVM-DA), and partial least squares-discriminant analysis (PLS-DA). The classification rates of the above-mentioned methods are compared, and different types of preprocessing are considered. According to the experimental results, the best option is PLS-DA, with an identification rate of 98%. The results indicate that THz spectroscopy and chemometrics techniques are a powerful tool to identify genetically modified and non-genetically modified sugarcane.
New Methodologies for Generation of Multigroup Cross Sections for Shielding Applications
NASA Astrophysics Data System (ADS)
Arzu Alpan, F.; Haghighat, Alireza
2003-06-01
Coupled neutron and gamma multigroup (broad-group) libraries used for Light Water Reactor shielding and dosimetry commonly include 47-neutron and 20-gamma groups. These libraries are derived from the 199-neutron, 42-gamma fine-group VITAMIN-B6 library. In this paper, we introduce modifications to the generation procedure of the broad-group libraries. Among these modifications, we show that the fine-group structure and collapsing technique have the largest impact. We demonstrate that a more refined fine-group library and the bi-linear adjoint weighting collapsing technique can improve the accuracy of transport calculation results.
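For context, plain flux-weighted collapsing (the baseline that bi-linear adjoint weighting refines by weighting with the adjoint flux as well as the forward flux) can be sketched as follows, with invented group data:

```python
import numpy as np

def collapse(sigma_fine, flux_fine, coarse_map):
    """Flux-weighted collapse of fine-group cross sections to broad groups:
    sigma_G = sum_g(sigma_g * phi_g) / sum_g(phi_g) over fine groups g in G."""
    sigma_fine = np.asarray(sigma_fine)
    flux_fine = np.asarray(flux_fine)
    out = []
    for groups in coarse_map:
        idx = np.array(groups)
        out.append(np.sum(sigma_fine[idx] * flux_fine[idx])
                   / np.sum(flux_fine[idx]))
    return np.array(out)

# Four fine groups collapsed into two broad groups (illustrative values).
sigma = [10.0, 8.0, 2.0, 1.0]     # fine-group cross sections (barns)
flux = [1.0, 3.0, 2.0, 2.0]       # weighting spectrum
broad = collapse(sigma, flux, [[0, 1], [2, 3]])
print(broad)   # [8.5 1.5]
```

The paper's point is that both the choice of the fine-group structure and the weighting used in this collapse step dominate the accuracy of the resulting broad-group transport calculations.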
Pauchard, Y; Smith, M; Mintchev, M
2004-01-01
Magnetic resonance imaging (MRI) suffers from geometric distortions arising from various sources. One such source is the non-linearities associated with the presence of metallic implants, which can profoundly distort the obtained images. These non-linearities result in pixel shifts and intensity changes in the vicinity of the implant, often precluding any meaningful assessment of the entire image. This paper presents a method for correcting these distortions based on non-rigid image registration techniques. Two images from a modelled three-dimensional (3D) grid phantom were subjected to point-based thin-plate spline registration. The reference image (without distortions) was obtained from a grid model including a spherical implant, and the corresponding test image containing the distortions was obtained using a previously reported technique for spatial modelling of magnetic susceptibility artifacts. After identifying the non-recoverable area in the distorted image, the calculated spline model was able to quantitatively account for the distortions, thus facilitating their compensation. Upon the completion of the compensation procedure, the non-recoverable area was removed from the reference image and the latter was compared to the compensated image. Quantitative assessment of the goodness of the proposed compensation technique is presented.
Comparison of System Identification Techniques for the Hydraulic Manipulator Test Bed (HMTB)
NASA Technical Reports Server (NTRS)
Morris, A. Terry
1996-01-01
In this thesis, linear, dynamic, multivariable state-space models for three joints of the ground-based Hydraulic Manipulator Test Bed (HMTB) are identified. HMTB, housed at the NASA Langley Research Center, is a ground-based version of the Dexterous Orbital Servicing System (DOSS), a representative space station manipulator. The dynamic models of the HMTB manipulator are first estimated by applying nonparametric identification methods to determine each joint's response characteristics using various input excitations. These excitations include sums of sinusoids, pseudorandom binary sequences (PRBS), bipolar ramping pulses, and chirp input signals. Next, two different parametric system identification techniques are applied to identify the best dynamical description of the joints. The manipulator is localized about a representative space station orbital replacement unit (ORU) task, allowing the use of linear system identification methods. Comparisons, observations, and results of both parametric system identification techniques are discussed. The thesis concludes by proposing a model reference control system to aid in astronaut ground tests. This approach would allow the identified models to mimic on-orbit dynamic characteristics of the actual flight manipulator, thus providing astronauts with realistic on-orbit responses to perform space station tasks in a ground-based environment.
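One of the excitations mentioned, the pseudorandom binary sequence, is conventionally generated with a linear feedback shift register; a sketch assuming the standard PRBS7 feedback taps (x^7 + x^6 + 1):

```python
def prbs(n_bits, taps=(7, 6), length=None):
    """Maximal-length PRBS from an n-bit Fibonacci LFSR (taps assume n=7);
    returns a +/-1 excitation sequence of the requested length."""
    state = (1 << n_bits) - 1            # any non-zero seed works
    seq = []
    for _ in range(length or (1 << n_bits) - 1):
        bit = ((state >> (taps[0] - 1)) ^ (state >> (taps[1] - 1))) & 1
        state = ((state << 1) | bit) & ((1 << n_bits) - 1)
        seq.append(1 if bit else -1)
    return seq

s = prbs(7)
print(len(s))        # 127 samples per period for a 7-bit register
print(abs(sum(s)))   # 1: a maximal-length sequence is nearly zero-mean
```

The flat, broadband spectrum and near-zero mean of such sequences are what make them convenient persistent excitations for the identification experiments described.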
Comparison of heaving buoy and oscillating flap wave energy converters
NASA Astrophysics Data System (ADS)
Abu Bakar, Mohd Aftar; Green, David A.; Metcalfe, Andrew V.; Najafian, G.
2013-04-01
Waves offer an attractive source of renewable energy, with relatively low environmental impact, for communities reasonably close to the sea. Two types of simple wave energy converter (WEC), the heaving buoy WEC and the oscillating flap WEC, are studied. Both WECs are considered simple energy converters because they can be modelled, to a first approximation, as single-degree-of-freedom linear dynamic systems. In this study, we estimate the response of both WECs to typical wave inputs (wave height for the buoy and the corresponding wave surge for the flap) using spectral methods. A nonlinear model of the oscillating flap WEC that includes the drag force, modelled by the Morison equation, is also considered. The response to a surge input is estimated by discrete time simulation (DTS), using central difference approximations to derivatives. This is compared with the response of the linear model obtained by DTS and also validated using the spectral method. Bendat's nonlinear system identification (BNLSI) technique was used to analyze the nonlinear dynamic system, since spectral analysis is only suitable for linear dynamic systems. The effects of including the nonlinear term are quantified.
Signal Prediction With Input Identification
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Chen, Ya-Chin
1999-01-01
A novel coding technique is presented for signal prediction with applications including speech coding, system identification, and estimation of input excitation. The approach is based on the blind equalization method for speech signal processing in conjunction with the geometric subspace projection theory to formulate the basic prediction equation. The speech-coding problem is often divided into two parts, a linear prediction model and excitation input. The parameter coefficients of the linear predictor and the input excitation are solved simultaneously and recursively by a conventional recursive least-squares algorithm. The excitation input is computed by coding all possible outcomes into a binary codebook. The coefficients of the linear predictor and excitation, and the index of the codebook can then be used to represent the signal. In addition, a variable-frame concept is proposed to block the same excitation signal in sequence in order to reduce the storage size and increase the transmission rate. The results of this work can be easily extended to the problem of disturbance identification. The basic principles are outlined in this report and differences from other existing methods are discussed. Simulations are included to demonstrate the proposed method.
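The "solved simultaneously and recursively by a conventional recursive least-squares algorithm" step can be illustrated with a generic RLS linear predictor on a toy signal (a sketch of the standard algorithm, not the paper's codebook scheme):

```python
import numpy as np

def rls_predict(signal, order=2, lam=0.99, delta=100.0):
    """Recursive least-squares fit of a linear predictor
    x[n] ~ sum_k a_k * x[n-k]; returns coefficients and prediction errors."""
    a = np.zeros(order)
    P = delta * np.eye(order)            # inverse-correlation estimate
    errors = []
    for n in range(order, len(signal)):
        phi = signal[n - order:n][::-1]  # past samples, most recent first
        e = signal[n] - phi @ a          # a-priori prediction error
        k = P @ phi / (lam + phi @ P @ phi)
        a = a + k * e                    # gain-weighted coefficient update
        P = (P - np.outer(k, phi @ P)) / lam
        errors.append(e)
    return a, np.array(errors)

# A noiseless second-order recursion is recovered almost exactly.
x = np.zeros(400)
x[1] = 1.0
for n in range(2, 400):
    x[n] = 1.755 * x[n - 1] - 1.0 * x[n - 2]
a, err = rls_predict(x)
print(np.round(a, 3))   # close to the true coefficients [1.755, -1.0]
```

In the coding scheme described above, the residual `e` would then be quantized against the excitation codebook while the coefficients continue to adapt.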
Simulations of Coherent Synchrotron Radiation Effects in Electron Machines
NASA Astrophysics Data System (ADS)
Migliorati, M.; Schiavi, A.; Dattoli, G.
2007-09-01
Coherent synchrotron radiation (CSR) generated by high-intensity electron beams can be a source of undesirable effects limiting the performance of storage rings. The complexity of the physical mechanisms underlying the interplay between the electron beam and the CSR demands reliable simulation codes. In the past, codes based on Lie algebraic techniques have been very efficient for treating transport problems in accelerators. The extension of these methods to the non-linear case is ideally suited to treat the wakefield-beam interaction. In this paper we report on the development of a numerical code, based on the solution of the Vlasov equation, which includes the non-linear contribution due to wakefields. The proposed solution method exploits an algebraic technique that uses exponential operators. We show that, in the case of CSR wakefields, the integration procedure is capable of reproducing the onset of an instability which leads to microbunching of the beam, thus increasing the CSR at short wavelengths. In addition, considerations on the threshold of the instability for Gaussian bunches are also reported.
High fidelity, radiation tolerant analog-to-digital converters
NASA Technical Reports Server (NTRS)
Wang, Charles Chang-I (Inventor); Linscott, Ivan Richard (Inventor); Inan, Umran S. (Inventor)
2012-01-01
Techniques for an analog-to-digital converter (ADC) using a pipeline architecture include a linearization technique for a spurious-free dynamic range (SFDR) over 80 decibels. In some embodiments, sampling rates exceed a megahertz. According to a second approach, a switched-capacitor circuit is configured for correct operation in a high-radiation environment. In one embodiment, the combination yields a high-fidelity ADC (>88 decibel SFDR) while sampling at 5 megahertz and consuming less than 60 milliwatts. Furthermore, even though it is manufactured in a commercial 0.25-micrometer CMOS technology (1 micrometer = 10^-6 meters), it maintains this performance in harsh radiation environments. Specifically, the stated performance is sustained through a highest tested 2 megarad(Si) total dose, and the ADC displays no latchup up to a highest tested linear energy transfer of 63 million electron-volt square centimeters per milligram at elevated temperature (131 degrees C) and supply voltage (2.7 volts, versus 2.5 volts nominal).
Physical aging effects on the compressive linear viscoelastic creep of IM7/K3B composite
NASA Technical Reports Server (NTRS)
Veazie, David R.; Gates, Thomas S.
1995-01-01
An experimental study was undertaken to establish the viscoelastic behavior of IM7/K3B composite in compression at elevated temperature. Creep compliance, strain recovery and the effects of physical aging on the time-dependent response were measured for uniaxial loading at several isothermal conditions below the glass transition temperature (Tg). The IM7/K3B composite is a graphite-reinforced thermoplastic polyimide with a Tg of approximately 240 C. In a composite, the two matrix-dominated compliance terms associated with time-dependent behavior occur in the transverse and shear directions. Linear viscoelasticity was used to characterize the creep/recovery behavior, and superposition techniques were used to establish the physical-aging-related material constants. Creep strain was converted to compliance and measured as a function of test time and aging time. Results included creep compliance master curves, physical aging shift factors and shift rates. A description of the unique experimental techniques required for compressive testing is also given.
NASA Technical Reports Server (NTRS)
Tag, I. A.; Lumsdaine, E.
1978-01-01
The general non-linear three-dimensional equation for acoustic potential is derived by using a perturbation technique. The linearized axisymmetric equation is then solved by using a finite element algorithm based on the Galerkin formulation for a harmonic time dependence. The solution is carried out in complex number notation for the acoustic velocity potential. Linear, isoparametric, quadrilateral elements with non-uniform distribution across the duct section are implemented. The resultant global matrix is stored in banded form and solved by using a modified Gauss elimination technique. Sound pressure levels and acoustic velocities are calculated from post element solutions. Different duct geometries are analyzed and compared with experimental results.
1987-03-31
processors. The symmetry-breaking algorithms give efficient ways to convert probabilistic algorithms to deterministic algorithms. Some of the... techniques have been applied to construct several efficient linear-processor algorithms for graph problems, including an O(lg* n)-time algorithm for (Δ + 1... On n-node graphs, the algorithm works in O(log² n) time using only n processors, in contrast to the previous best algorithm, which used about n³ processors.
Polynomial elimination theory and non-linear stability analysis for the Euler equations
NASA Technical Reports Server (NTRS)
Kennon, S. R.; Dulikravich, G. S.; Jespersen, D. C.
1986-01-01
Numerical methods are presented that exploit the polynomial properties of discretizations of the Euler equations. It is noted that most finite difference or finite volume discretizations of the steady-state Euler equations produce a polynomial system of equations to be solved. These equations are solved using classical polynomial elimination theory, with some innovative modifications. This paper also presents some preliminary results of a new non-linear stability analysis technique. This technique is applicable to determining the stability of polynomial iterative schemes. Results are presented for applying the elimination technique to a one-dimensional test case. For this test case, the exact solution is computed in three iterations. The non-linear stability analysis is applied to determine the optimal time step for solving Burgers' equation using the MacCormack scheme. The estimated optimal time step is very close to the time step that arises from a linear stability analysis.
Evaluation of non-intrusive flow measurement techniques for a re-entry flight experiment
NASA Technical Reports Server (NTRS)
Miles, R. B.; Santavicca, D. A.; Zimmermann, M.
1983-01-01
This study evaluates various non-intrusive techniques for the measurement of the flow field on the windward side of the Space Shuttle orbiter or a similar reentry vehicle. Included are linear (Rayleigh, Raman, Mie, Laser Doppler Velocimetry, Resonant Doppler Velocimetry) and nonlinear (Coherent Anti-Stokes Raman, Laser-Induced Fluorescence) light scattering, electron-beam fluorescence, thermal emission, and mass spectroscopy. Flow-field properties were taken from a nonequilibrium flow model by Shinn, Moss, and Simmonds at the NASA Langley Research Center. Conclusions are, when possible, based on quantitative scaling of known laboratory results to the conditions projected. Detailed discussion with researchers in the field contributed further to these conclusions and provided valuable insights regarding the experimental feasibility of each of the techniques.
Monitoring temperatures in coal conversion and combustion processes via ultrasound
NASA Astrophysics Data System (ADS)
Gopalsami, N.; Raptis, A. C.; Mulcahey, T. P.
1980-02-01
The state of the art of instrumentation for monitoring temperatures in coal conversion and combustion systems is examined. The instrumentation types studied include thermocouples, radiation pyrometers, and acoustical thermometers. The capabilities and limitations of each type are reviewed. A feasibility study of ultrasonic thermometry is described. A mathematical model of a pulse-echo ultrasonic temperature measurement system is developed using linear system theory. The mathematical model lends itself to the adaptation of generalized correlation techniques for the estimation of propagation delays. Computer simulations are made to test the efficacy of the signal processing techniques for noise-free as well as noisy signals. The theoretical study indicates that acoustic techniques for measuring temperature in reactors and combustors are feasible.
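The correlation-based delay estimation at the heart of pulse-echo thermometry can be sketched as below. The lag that maximizes the cross-correlation of the transmitted pulse and the received echo gives the propagation delay, from which temperature follows via the sound-speed/temperature relation; all signal parameters here are synthetic assumptions:

```python
import numpy as np

# Propagation-delay estimation by cross-correlation (synthetic signals).
fs = 1.0e6                                   # 1 MHz sampling (assumed)
t = np.arange(0, 200e-6, 1 / fs)
pulse = (np.exp(-((t - 20e-6) ** 2) / (2 * (3e-6) ** 2))
         * np.sin(2 * np.pi * 1e5 * t))     # Gaussian-windowed tone burst

true_delay = 57e-6                           # round-trip time to be recovered
shift = int(round(true_delay * fs))
echo = np.roll(pulse, shift) * 0.4           # attenuated, delayed echo
echo += 0.01 * np.random.default_rng(0).normal(size=echo.size)

# full cross-correlation; the argmax lag estimates the delay
xc = np.correlate(echo, pulse, mode="full")
lag = np.argmax(xc) - (len(pulse) - 1)
est_delay = lag / fs
```

The estimated delay matches the true round-trip time to within a sample even in the presence of noise, which is the robustness property the abstract attributes to generalized correlation techniques.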
A methodology for design of a linear referencing system for surface transportation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vonderohe, A.; Hepworth, T.
1997-06-01
The transportation community has recently placed significant emphasis on development of data models, procedural standards, and policies for management of linearly-referenced data. There is an Intelligent Transportation Systems initiative underway to create a spatial datum for location referencing in one, two, and three dimensions. Most recently, a call was made for development of a unified linear reference system to support public, private, and military surface transportation needs. A methodology for design of the linear referencing system was developed from geodetic engineering principles and techniques used for designing geodetic control networks. The method is founded upon the law of propagation of random error and the statistical analysis of systems of redundant measurements, used to produce best estimates for unknown parameters. A complete mathematical development is provided. Example adjustments of linear distance measurement systems are included. The classical orders of design are discussed with regard to the linear referencing system. A simple design example is provided. A linear referencing system designed and analyzed with this method will not only be assured of meeting the accuracy requirements of users, it will have the potential for supporting delivery of error estimates along with the results of spatial analytical queries. Modeling considerations, alternative measurement methods, implementation strategies, maintenance issues, and further research needs are discussed. Recommendations are made for further advancement of the unified linear referencing system concept.
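The kind of adjustment the abstract describes can be sketched with a minimal example: redundant distance observations along a route are adjusted by least squares, and the law of propagation of random error yields standard deviations for the estimated positions. All observations and the a priori sigma below are hypothetical:

```python
import numpy as np

# Least-squares adjustment of redundant linear distance measurements.
# Unknowns: positions p1, p2, p3 of three reference posts (origin fixed at 0).
# Six observed distances (0->1, 1->2, 2->3, 0->2, 1->3, 0->3) give redundancy.
A = np.array([[ 1,  0, 0],    # p1 - 0
              [-1,  1, 0],    # p2 - p1
              [ 0, -1, 1],    # p3 - p2
              [ 0,  1, 0],    # p2 - 0
              [-1,  0, 1],    # p3 - p1
              [ 0,  0, 1]],   # p3 - 0
             dtype=float)
l = np.array([100.02, 99.97, 100.03, 200.01, 199.98, 300.00])  # metres
sigma = 0.02                                  # a priori std. dev. per observation

# normal equations: best estimates and covariance of the unknowns
N = A.T @ A
p_hat = np.linalg.solve(N, A.T @ l)
cov = sigma**2 * np.linalg.inv(N)             # propagation of random error
std_p = np.sqrt(np.diag(cov))                 # std. dev. of adjusted positions
```

Because of the redundancy, each adjusted position is more precise than any single observation (std_p < sigma), which is exactly the benefit the design methodology quantifies.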
Damage assessment in reinforced concrete using nonlinear vibration techniques
NASA Astrophysics Data System (ADS)
Van Den Abeele, K.; De Visscher, J.
2000-07-01
Reinforced concrete (RC) structures are subject to microcrack initiation and propagation at load levels far below the actual failure load. In this paper, nonlinear vibration techniques are applied to investigate stages of progressive damage in RC beams induced by static loading tests. At different levels of damage, a modal analysis is carried out, assuming the structure to behave linearly. At the same time, measurements of resonant frequencies and damping ratios as a function of vibration amplitude are performed using a frequency-domain technique as well as a time-domain technique. We compare the results of the linear and nonlinear techniques and evaluate them against the visual damage assessment.
Hamrin Senorski, Eric; Sundemo, David; Murawski, Christopher D; Alentorn-Geli, Eduard; Musahl, Volker; Fu, Freddie; Desai, Neel; Stålman, Anders; Samuelsson, Kristian
2017-12-01
The purpose of this study was to investigate how different techniques of single-bundle anterior cruciate ligament (ACL) reconstruction affect subjective knee function via the Knee injury and Osteoarthritis Outcome Score (KOOS) evaluation 2 years after surgery. It was hypothesized that the surgical techniques of single-bundle ACL reconstruction would result in equivalent results with respect to subjective knee function 2 years after surgery. This cohort study was based on data from the Swedish National Knee Ligament Register during the 10-year period of 1 January 2005 through 31 December 2014. Patients who underwent primary single-bundle ACL reconstruction with hamstring tendon autograft were included. Details on surgical technique were collected using a web-based questionnaire comprising essential AARSC items, including utilization of accessory medial portal drilling, anatomic tunnel placement, and visualization of insertion sites and landmarks. A repeated measures ANOVA and an additional linear mixed model analysis were used to investigate the effect of surgical technique on the KOOS 4 from the pre-operative period to 2-year follow-up. A total of 13,636 patients who had undergone single-bundle ACL reconstruction comprised the study group for this analysis. A repeated measures ANOVA determined that mean subjective knee function differed between the pre-operative time period and at 2-year follow-up (p < 0.001). No differences were found with respect to the interaction between KOOS 4 and surgical technique or gender. Additionally, the linear mixed model adjusted for age at reconstruction, gender, and concomitant injuries showed no difference between surgical techniques in KOOS 4 improvement from baseline to 2-year follow-up. However, KOOS 4 improved significantly in patients for all surgical techniques of single-bundle ACL reconstruction (p < 0.001); the largest improvement was seen between the pre-operative time period and at 1-year follow-up.
Surgical techniques of primary single-bundle ACL reconstruction did not demonstrate differences in the improvement in baseline subjective knee function as measured with the KOOS 4 during the first 2 years after surgery. However, subjective knee function improved from pre-operative baseline to 2-year follow-up independently of surgical technique.
NASA Astrophysics Data System (ADS)
Sabatini, Roberto; Richardson, Mark
2013-03-01
Novel techniques for laser beam atmospheric extinction measurements, suitable for several air and space platform applications, are presented in this paper. Extinction measurements are essential to support the engineering development and the operational employment of a variety of aerospace electro-optical sensor systems, allowing calculation of the range performance attainable with such systems in current and likely future applications. Such applications include ranging, weaponry, Earth remote sensing and possible planetary exploration missions performed by satellites and unmanned flight vehicles. Unlike traditional LIDAR methods, the proposed techniques are based on measurements of the laser energy (intensity and spatial distribution) incident on target surfaces of known geometric and reflective characteristics, by means of infrared detectors and/or infrared cameras calibrated for radiance. Various laser sources can be employed with wavelengths from the visible to the far infrared portions of the spectrum, allowing for data correlation and extended sensitivity. Errors affecting measurements performed using the proposed methods are discussed in the paper and algorithms are proposed that allow a direct determination of the atmospheric transmittance and spatial characteristics of the laser spot. These algorithms take into account a variety of linear and non-linear propagation effects. Finally, results are presented relative to some experimental activities performed to validate the proposed techniques. Particularly, data are presented relative to both ground and flight trials performed with laser systems operating in the near infrared (NIR) at λ = 1064 nm and λ = 1550 nm. This includes ground tests performed with 10 Hz and 20 kHz PRF NIR laser systems in a large variety of atmospheric conditions, and flight trials performed with a 10 Hz airborne NIR laser system installed on a TORNADO aircraft, flying up to altitudes of 22,000 ft.
Novel atmospheric extinction measurement techniques for aerospace laser system applications
NASA Astrophysics Data System (ADS)
Sabatini, Roberto; Richardson, Mark
2013-01-01
Novel techniques for laser beam atmospheric extinction measurements, suitable for manned and unmanned aerospace vehicle applications, are presented in this paper. Extinction measurements are essential to support the engineering development and the operational employment of a variety of aerospace electro-optical sensor systems, allowing calculation of the range performance attainable with such systems in current and likely future applications. Such applications include ranging, weaponry, Earth remote sensing and possible planetary exploration missions performed by satellites and unmanned flight vehicles. Unlike traditional LIDAR methods, the proposed techniques are based on measurements of the laser energy (intensity and spatial distribution) incident on target surfaces of known geometric and reflective characteristics, by means of infrared detectors and/or infrared cameras calibrated for radiance. Various laser sources can be employed with wavelengths from the visible to the far infrared portions of the spectrum, allowing for data correlation and extended sensitivity. Errors affecting measurements performed using the proposed methods are discussed in the paper and algorithms are proposed that allow a direct determination of the atmospheric transmittance and spatial characteristics of the laser spot. These algorithms take into account a variety of linear and non-linear propagation effects. Finally, results are presented relative to some experimental activities performed to validate the proposed techniques. Particularly, data are presented relative to both ground and flight trials performed with laser systems operating in the near infrared (NIR) at λ = 1064 nm and λ = 1550 nm. This includes ground tests performed with 10 Hz and 20 kHz PRF NIR laser systems in a large variety of atmospheric conditions, and flight trials performed with a 10 Hz airborne NIR laser system installed on a TORNADO aircraft, flying up to altitudes of 22,000 ft.
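In its simplest form, the measurement concept reduces to a Beer-Lambert inversion: the transmittance follows from the ratio of the energy measured on the calibrated target to the transmitted energy, and the mean extinction coefficient from the known path length. The numbers below are illustrative, not trial data:

```python
import math

# One-way transmittance and mean extinction coefficient from target-incident
# laser energy (Beer-Lambert; all values are illustrative assumptions).
E_tx = 10.0e-3          # transmitted pulse energy, J
E_target = 6.7e-3       # energy measured incident on the target surface, J
range_km = 2.0          # one-way path length, km

tau = E_target / E_tx                       # one-way atmospheric transmittance
gamma = -math.log(tau) / range_km           # mean extinction coefficient, km^-1
```

In the actual technique the incident energy is integrated over the imaged laser spot and corrected for the target's known reflective characteristics before forming this ratio.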
On statistical inference in time series analysis of the evolution of road safety.
Commandeur, Jacques J F; Bijleveld, Frits D; Bergel-Hayat, Ruth; Antoniou, Constantinos; Yannis, George; Papadimitriou, Eleonora
2013-11-01
Data collected for building a road safety observatory usually include observations made sequentially through time. Examples of such data, called time series data, include annual (or monthly) number of road traffic accidents, traffic fatalities or vehicle kilometers driven in a country, as well as the corresponding values of safety performance indicators (e.g., data on speeding, seat belt use, alcohol use, etc.). Some commonly used statistical techniques imply assumptions that are often violated by the special properties of time series data, namely serial dependency among disturbances associated with the observations. The first objective of this paper is to demonstrate the impact of such violations on the applicability of standard methods of statistical inference, which leads to under- or overestimation of the standard error and consequently may produce erroneous inferences. Moreover, having established the adverse consequences of ignoring serial dependency issues, the paper aims to describe rigorous statistical techniques used to overcome them. In particular, appropriate time series analysis techniques of varying complexity are employed to describe the development over time, relating the accident-occurrences to explanatory factors such as exposure measures or safety performance indicators, and forecasting the development into the near future. Traditional regression models (whether they are linear, generalized linear or nonlinear) are shown not to naturally capture the inherent dependencies in time series data. Dedicated time series analysis techniques, such as the ARMA-type and DRAG approaches are discussed next, followed by structural time series models, which are a subclass of state space methods. The paper concludes with general recommendations and practice guidelines for the use of time series models in road safety research. Copyright © 2012 Elsevier Ltd. All rights reserved.
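The paper's first point, that serial dependency invalidates i.i.d. standard errors, can be demonstrated with a small simulation. With AR(1) disturbances of coefficient rho, the naive standard error of a sample mean understates the true one roughly by the factor sqrt((1+rho)/(1-rho)); the parameters below are illustrative:

```python
import numpy as np

# Standard-error inflation of a sample mean under AR(1) serial dependence.
rng = np.random.default_rng(42)
rho, n, n_rep = 0.7, 400, 1000

means = []
for _ in range(n_rep):
    e = np.empty(n)
    e[0] = rng.normal()
    for t in range(1, n):                        # AR(1) disturbances,
        e[t] = rho * e[t - 1] + np.sqrt(1 - rho**2) * rng.normal()  # unit variance
    means.append(e.mean())

true_se = np.std(means)          # empirical standard error of the mean
naive_se = 1.0 / np.sqrt(n)      # i.i.d. formula (unit marginal variance)
inflation = true_se / naive_se   # theory: sqrt((1 + rho) / (1 - rho)) ~ 2.38
```

With rho = 0.7 the i.i.d. formula understates the uncertainty by a factor of roughly 2.4, which is exactly the kind of erroneous inference the paper warns against.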
Computation of the three-dimensional medial surface dynamics of the vocal folds.
Döllinger, Michael; Berry, David A
2006-01-01
To increase our understanding of pathological and healthy voice production, quantitative measurement of the medial surface dynamics of the vocal folds is significant, albeit rarely performed because of the inaccessibility of the vocal folds. Using an excised hemilarynx methodology, a new calibration technique, herein referred to as the linear approximate (LA) method, was introduced to compute the three-dimensional coordinates of fleshpoints along the entire medial surface of the vocal fold. The results were compared with results from the direct linear transform. An associated error estimation was presented, demonstrating the improved accuracy of the new method. A test on real data was reported, including computation of quantitative measurements of vocal fold dynamics.
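The baseline the paper compares against, the direct linear transform, amounts to linear triangulation: each calibrated view contributes two linear constraints on the homogeneous 3-D coordinates of a fleshpoint, solved via SVD. The camera matrices and test point below are synthetic, not the paper's calibration:

```python
import numpy as np

# DLT-style linear triangulation of a 3-D point from two calibrated views.
def triangulate(P1, P2, x1, x2):
    """Each view contributes two rows of the homogeneous system A X = 0."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                     # null-space vector = homogeneous solution
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# two synthetic camera projection matrices (assumed calibration)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
R = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [-1.0, 0.0, 0.0]])
t = np.array([[-2.0], [0.0], [2.0]])
P2 = np.hstack([R, t])

X_true = np.array([0.5, -0.3, 4.0])            # known 3-D fleshpoint
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

With noise-free image coordinates the triangulation recovers the 3-D point exactly; the paper's error analysis concerns how calibration error propagates into this reconstruction.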
The use and misuse of statistical analyses. [in geophysics and space physics
NASA Technical Reports Server (NTRS)
Reiff, P. H.
1983-01-01
The statistical techniques most often used in space physics include Fourier analysis, linear correlation, auto- and cross-correlation, power spectral density, and superposed epoch analysis. Tests are presented which can evaluate the significance of the results obtained through each of these. Data presented without some form of error analysis are frequently useless, since they offer no way of assessing whether a bump on a spectrum or on a superposed epoch analysis is real or merely a statistical fluctuation. Among many of the published linear correlations, for instance, the uncertainty in the intercept and slope is not given, so that the significance of the fitted parameters cannot be assessed.
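The error analysis the abstract advocates for linear correlations can be sketched as follows: report not just the fitted slope and intercept but their standard errors, so the significance of the fit can be assessed. The data here are synthetic:

```python
import numpy as np

# Least-squares line fit with standard errors of slope and intercept.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 30)
y = 2.0 + 0.8 * x + rng.normal(0.0, 0.5, x.size)   # true slope 0.8

n = x.size
sxx = np.sum((x - x.mean()) ** 2)
slope = np.sum((x - x.mean()) * (y - y.mean())) / sxx
intercept = y.mean() - slope * x.mean()

resid = y - (intercept + slope * x)
s2 = np.sum(resid**2) / (n - 2)                    # residual variance
se_slope = np.sqrt(s2 / sxx)
se_intercept = np.sqrt(s2 * (1.0 / n + x.mean() ** 2 / sxx))

t_slope = slope / se_slope     # large |t| => slope significantly nonzero
```

A correlation reported as slope ± se_slope lets the reader judge whether the fitted trend is real or a statistical fluctuation, which is precisely the information the abstract notes is often missing.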
NASA Technical Reports Server (NTRS)
Whitlock, C. H.; Kuo, C. Y.
1979-01-01
The objective of this paper is to define the optical physics and/or environmental conditions under which linear multiple regression should be applicable. An investigation of the signal-response equations is conducted and the concept is tested by application to actual remote sensing data from a laboratory experiment performed under controlled conditions. Investigation of the signal-response equations shows that the exact solution for a number of optical physics conditions is of the same form as a linearized multiple-regression equation, even if nonlinear contributions from surface reflections, atmospheric constituents, or other water pollutants are included. Limitations on achieving this type of solution are defined.
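The linearized multiple-regression form the paper refers to can be sketched with synthetic data: a constituent concentration is expressed as a linear combination of radiances in several bands and recovered by least squares. The band responses below are synthetic stand-ins for laboratory upwelled-radiance data:

```python
import numpy as np

# Linearized multiple regression: concentration ~ b0 + sum_i b_i * L_i.
rng = np.random.default_rng(7)
n_samples, n_bands = 40, 4
conc = rng.uniform(0.0, 50.0, n_samples)            # pollutant concentration

# synthetic band radiances: linear in concentration plus sensor noise
true_w = np.array([0.02, -0.01, 0.03, 0.005])
radiance = conc[:, None] * true_w + rng.normal(0.0, 0.05, (n_samples, n_bands))

X = np.hstack([np.ones((n_samples, 1)), radiance])  # design matrix with intercept
coef, *_ = np.linalg.lstsq(X, conc, rcond=None)
pred = X @ coef
r2 = 1.0 - np.sum((conc - pred) ** 2) / np.sum((conc - conc.mean()) ** 2)
```

When the signal-response relation is linear in the parameters, as the paper's analysis establishes for the stated conditions, this regression recovers the concentration with high fidelity.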
Precision measurements of linear scattering density using muon tomography
NASA Astrophysics Data System (ADS)
Åström, E.; Bonomi, G.; Calliari, I.; Calvini, P.; Checchia, P.; Donzella, A.; Faraci, E.; Forsberg, F.; Gonella, F.; Hu, X.; Klinger, J.; Sundqvist Ökvist, L.; Pagano, D.; Rigoni, A.; Ramous, E.; Urbani, M.; Vanini, S.; Zenoni, A.; Zumerle, G.
2016-07-01
We demonstrate that muon tomography can be used to precisely measure the properties of various materials. The materials which have been considered have been extracted from an experimental blast furnace, including carbon (coke) and iron oxides, for which measurements of the linear scattering density relative to the mass density have been performed with an absolute precision of 10%. We report the procedures that are used in order to obtain such precision, and a discussion is presented to address the expected performance of the technique when applied to heavier materials. The results we obtain do not depend on the specific type of material considered and therefore they can be extended to any application.
NASA Astrophysics Data System (ADS)
Genberg, Victor L.; Michels, Gregory J.
2017-08-01
The ultimate design goal of an optical system subjected to dynamic loads is to minimize system level wavefront error (WFE). In random response analysis, system WFE is difficult to predict from finite element results due to the loss of phase information. In the past, the use of system WFE was limited by the difficulty of obtaining a linear optics model. In this paper, an automated method for determining system level WFE using a linear optics model is presented. An error estimate is included in the analysis output based on fitting errors of mode shapes. The technique is demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.
A parametric LQ approach to multiobjective control system design
NASA Technical Reports Server (NTRS)
Kyr, Douglas E.; Buchner, Marc
1988-01-01
The synthesis of a constant parameter output feedback control law of constrained structure is set in a multiple objective linear quadratic regulator (MOLQR) framework. The use of intuitive objective functions, such as model-following ability and closed-loop trajectory sensitivity, allows multiple-objective decision-making techniques, such as the surrogate worth tradeoff method, to be applied. For the continuous-time deterministic problem with an infinite time horizon, dynamic compensators as well as static output feedback controllers can be synthesized using a descent Anderson-Moore algorithm modified to impose linear equality constraints on the feedback gains by moving in feasible directions. Results of three different examples are presented, including a unique reformulation of the sensitivity reduction problem.
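For context, the unconstrained LQR building block underlying such frameworks can be computed by iterating the Riccati difference equation to convergence. This is a minimal sketch, not the paper's constrained Anderson-Moore descent; the system and weights are illustrative:

```python
import numpy as np

# Discrete-time LQR gain via Riccati value iteration (double integrator).
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])       # double-integrator dynamics
B = np.array([[0.5 * dt**2], [dt]])
Q = np.eye(2)                                # state weight
R = np.array([[0.1]])                        # control weight

P = Q.copy()
for _ in range(500):                         # iterate to the fixed point
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

# closed-loop spectral radius of (A - B K) must be below 1 for stability
spec_rad = max(abs(np.linalg.eigvals(A - B @ K)))
```

The MOLQR framework extends this single-objective computation by trading off several such quadratic costs while constraining the structure of the feedback gain.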
A review on prognostic techniques for non-stationary and non-linear rotating systems
NASA Astrophysics Data System (ADS)
Kan, Man Shan; Tan, Andy C. C.; Mathew, Joseph
2015-10-01
The field of prognostics has attracted significant interest from the research community in recent times. Prognostics enables the prediction of failures in machines resulting in benefits to plant operators such as shorter downtimes, higher operation reliability, reduced operations and maintenance cost, and more effective maintenance and logistics planning. Prognostic systems have been successfully deployed for the monitoring of relatively simple rotating machines. However, machines and associated systems today are increasingly complex. As such, there is an urgent need to develop prognostic techniques for such complex systems operating in the real world. This review paper focuses on prognostic techniques that can be applied to rotating machinery operating under non-linear and non-stationary conditions. The general concept of these techniques, the pros and cons of applying these methods, as well as their applications in the research field are discussed. Finally, the opportunities and challenges in implementing prognostic systems and developing effective techniques for monitoring machines operating under non-stationary and non-linear conditions are also discussed.
Jamieson, Andrew R; Giger, Maryellen L; Drukker, Karen; Li, Hui; Yuan, Yading; Bhooshan, Neha
2010-01-01
In this preliminary study, recently developed unsupervised nonlinear dimension reduction (DR) and data representation techniques were applied to computer-extracted breast lesion feature spaces across three separate imaging modalities: Ultrasound (U.S.) with 1126 cases, dynamic contrast enhanced magnetic resonance imaging with 356 cases, and full-field digital mammography with 245 cases. Two methods for nonlinear DR were explored: Laplacian eigenmaps [M. Belkin and P. Niyogi, "Laplacian eigenmaps for dimensionality reduction and data representation," Neural Comput. 15, 1373-1396 (2003)] and t-distributed stochastic neighbor embedding (t-SNE) [L. van der Maaten and G. Hinton, "Visualizing data using t-SNE," J. Mach. Learn. Res. 9, 2579-2605 (2008)]. These methods attempt to map originally high dimensional feature spaces to more human interpretable lower dimensional spaces while preserving both local and global information. The properties of these methods as applied to breast computer-aided diagnosis (CADx) were evaluated in the context of malignancy classification performance as well as in the visual inspection of the sparseness within the two-dimensional and three-dimensional mappings. Classification performance was estimated by using the reduced dimension mapped feature output as input into both linear and nonlinear classifiers: Markov chain Monte Carlo based Bayesian artificial neural network (MCMC-BANN) and linear discriminant analysis. The new techniques were compared to previously developed breast CADx methodologies, including automatic relevance determination and linear stepwise (LSW) feature selection, as well as a linear DR method based on principal component analysis. Using ROC analysis and 0.632+ bootstrap validation, 95% empirical confidence intervals were computed for each classifier's AUC performance. In the large U.S. data set, sample high-performance results include AUC0.632+ = 0.88 with 95% empirical bootstrap interval [0.787;0.895] for 13 ARD-selected features and AUC0.632+ = 0.87 with interval [0.817;0.906] for four LSW-selected features, compared to a 4D t-SNE mapping (from the original 81D feature space) giving AUC0.632+ = 0.90 with interval [0.847;0.919], all using the MCMC-BANN. Preliminary results appear to indicate capability for the new methods to match or exceed classification performance of current advanced breast lesion CADx algorithms. While not appropriate as a complete replacement of feature selection in CADx problems, DR techniques offer a complementary approach, which can aid elucidation of additional properties associated with the data. Specifically, the new techniques were shown to possess the added benefit of delivering sparse lower dimensional representations for visual interpretation, revealing intricate data structure of the feature space.
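The Laplacian eigenmaps method cited above can be sketched from scratch: build a heat-kernel affinity over the samples, form the graph Laplacian, and embed using the eigenvectors of the generalized eigenproblem. The data below are random stand-ins for computer-extracted lesion features, not the study's feature spaces:

```python
import numpy as np

# From-scratch Laplacian eigenmaps (Belkin & Niyogi) on a toy feature space.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (30, 10)),   # two loose clusters as
               rng.normal(2.0, 0.3, (30, 10))])  # stand-in "lesion features"

# heat-kernel affinity matrix and graph Laplacian
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / 4.0)
np.fill_diagonal(W, 0.0)
d = W.sum(axis=1)
L = np.diag(d) - W

# generalized eigenproblem L v = lam D v via the symmetric normalization;
# the first (constant) eigenvector is trivial and is skipped
Dinv_sqrt = np.diag(1.0 / np.sqrt(d))
vals, vecs = np.linalg.eigh(Dinv_sqrt @ L @ Dinv_sqrt)
Y = (Dinv_sqrt @ vecs)[:, 1:3]                   # 2-D embedding
```

The first embedding coordinate separates the two clusters, illustrating how the low-dimensional map preserves the local neighborhood structure of the original feature space.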
NASA Astrophysics Data System (ADS)
Dattoli, G.; Migliorati, M.; Schiavi, A.
2007-05-01
The coherent synchrotron radiation (CSR) is one of the main problems limiting the performance of high-intensity electron accelerators. The complexity of the physical mechanisms underlying the onset of instabilities due to CSR demands accurate descriptions, capable of including the large number of features of an actual accelerating device. A code devoted to the analysis of these types of problems should be fast and reliable, conditions that are rarely achieved at the same time. In the past, codes based on Lie algebraic techniques have been very efficient in treating transport problems in accelerators. The extension of these methods to the non-linear case is ideally suited to treating CSR instability problems. We report on the development of a numerical code, based on the solution of the Vlasov equation, with the inclusion of non-linear contributions due to wake-field effects. The proposed solution method exploits an algebraic technique that uses exponential operators. We show that the integration procedure is capable of reproducing the onset of instability and the effects associated with bunching mechanisms leading to the growth of the instability itself. In addition, considerations on the threshold of the instability are also developed.
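The exponential-operator idea can be illustrated on a toy system: the flow exp(dt(A + B)) is approximated by the symmetric product exp(dt/2 A) exp(dt B) exp(dt/2 A), where each factor is an exactly solvable sub-flow. This is a generic Strang-splitting sketch on a harmonic oscillator, not the paper's Vlasov solver:

```python
import math

# Strang (symmetric Lie) splitting: exp(dt(A+B)) ~ exp(dt/2 A) exp(dt B) exp(dt/2 A)
# applied to q' = p, p' = -q, whose sub-flows ("drift" and "kick") are exact.
def strang_step(q, p, dt):
    q += 0.5 * dt * p      # half drift: exact flow of q' = p
    p -= dt * q            # full kick:  exact flow of p' = -q
    q += 0.5 * dt * p      # half drift
    return q, p

q, p = 1.0, 0.0
n_steps = 1000
dt = 2.0 * math.pi / n_steps   # integrate exactly one oscillation period
for _ in range(n_steps):
    q, p = strang_step(q, p, dt)

err = abs(q - 1.0) + abs(p)    # symmetric splitting is second order: O(dt**2)
```

After a full period the state returns to its initial value up to the second-order splitting error, which is the accuracy/speed trade-off that makes exponential-operator schemes attractive for long-time tracking.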
Preparation and Elastic Moduli of Germanate Glass Containing Lead and Bismuth
Sidek, Hj A. A.; Bahari, Hamid R.; Halimah, Mohamed K.; Yunus, Wan M. M.
2012-01-01
This paper reports the preparation, by a rapid melt-quenching technique, of a new family of bismuth-lead germanate glass (BPG) systems of the form (GeO2)60–(PbO)40−x–(½Bi2O3)x, where x = 0 to 40 mol%. Their densities as a function of Bi2O3 concentration were determined using Archimedes’ method with acetone as the flotation medium. The current experimental data are compared with those of bismuth lead borate (B2O3)20–(PbO)80−x–(Bi2O3)x. The elastic properties of BPG were studied using the ultrasonic pulse-echo technique, where both longitudinal and transverse sound wave velocities were measured in each glass sample at a frequency of 15 MHz at room temperature. The experimental data show that the physical parameters of BPG, including density and molar volume, and both longitudinal and transverse velocities, increase linearly with increasing Bi2O3 content in the germanate glass network. The longitudinal, shear and Young’s moduli also increase linearly with addition of Bi2O3, but the bulk modulus does not. The Poisson’s ratio and fractal dimensionality are also found to vary linearly with the Bi2O3 concentration. PMID:22606000
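The elastic moduli discussed above follow from the measured density and the two ultrasonic velocities via standard isotropic-elasticity relations. The sample values below are illustrative, not data from the paper:

```python
# Elastic moduli from density and ultrasonic pulse-echo velocities
# (isotropic relations; sample values are illustrative).
rho = 5.6e3          # density, kg/m^3
v_l = 3.4e3          # longitudinal velocity, m/s
v_t = 1.9e3          # transverse (shear) velocity, m/s

L = rho * v_l**2                     # longitudinal modulus, Pa
G = rho * v_t**2                     # shear modulus, Pa
K = L - (4.0 / 3.0) * G              # bulk modulus, Pa
E = 9.0 * K * G / (3.0 * K + G)      # Young's modulus, Pa
nu = (v_l**2 - 2 * v_t**2) / (2 * (v_l**2 - v_t**2))   # Poisson's ratio
```

Because each modulus scales with rho times a squared velocity, a linear rise of density and velocities with Bi2O3 content produces the observed linear trends in the moduli.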
Volumetric Verification of Multiaxis Machine Tool Using Laser Tracker
Aguilar, Juan José
2014-01-01
This paper aims to present a method of volumetric verification in machine tools with linear and rotary axes using a laser tracker. Beyond a method for a particular machine, it presents a methodology that can be used in any machine type. Along this paper, the schema and kinematic model of a machine with three axes of movement, two linear and one rotational axes, including the measurement system and the nominal rotation matrix of the rotational axis, are presented. Using this, the machine tool volumetric error is obtained and nonlinear optimization techniques are employed to improve the accuracy of the machine tool. The verification provides a mathematical, not physical, compensation, in less time than other methods of verification, by means of the indirect measurement of geometric errors of the machine from the linear and rotary axes. This paper presents an extensive study of the appropriateness and drawbacks of the regression function employed depending on the types of movement of the axes of any machine. In the same way, strengths and weaknesses of measurement methods and optimization techniques depending on the space available to place the measurement system are presented. These studies provide the most appropriate strategies to verify each machine tool taking into consideration its configuration and its available work space. PMID:25202744
An SVM-Based Solution for Fault Detection in Wind Turbines
Santos, Pedro; Villa, Luisa F.; Reñones, Aníbal; Bustillo, Andres; Maudes, Jesús
2015-01-01
Research into fault diagnosis in machines with a wide range of variable loads and speeds, such as wind turbines, is of great industrial interest. Analysis of the power signals emitted by wind turbines is, on its own, insufficient for the diagnosis of mechanical faults in their mechanical transmission chain. A successful diagnosis requires the inclusion of accelerometers to evaluate vibrations. This work presents a multi-sensory system for fault diagnosis in wind turbines, combined with a data-mining solution for the classification of the operational state of the turbine. The selected sensors are accelerometers, whose vibration signals are processed using angular resampling techniques, together with electrical, torque and speed measurements. Support vector machines (SVMs) are selected for the classification task, including two traditional and two promising new kernels. This multi-sensory system has been validated on a test-bed that simulates the real conditions of wind turbines with two fault typologies: misalignment and imbalance. Comparison of SVM performance with the results of artificial neural networks (ANNs) shows that linear kernel SVM outperforms other kernels and ANNs in terms of accuracy, training and tuning times. The suitability and superior performance of linear SVM is also experimentally analyzed, to conclude that this data acquisition technique generates linearly separable datasets. PMID:25760051
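The linear-SVM classification at the core of the study can be sketched with a minimal hinge-loss subgradient trainer on a toy two-class, linearly separable data set. This is a stand-in for the turbine-state classification, not the paper's tuned library SVM; the clusters and hyperparameters are illustrative:

```python
import numpy as np

# Minimal linear SVM via hinge-loss subgradient descent (toy separable data).
rng = np.random.default_rng(3)
X = np.vstack([rng.normal([-2.0, -2.0], 0.5, (50, 2)),     # "healthy" state
               rng.normal([2.0, 2.0], 0.5, (50, 2))])      # "faulty" state
y = np.hstack([-np.ones(50), np.ones(50)])

w, b = np.zeros(2), 0.0
lam, lr = 0.01, 0.1                          # regularization and step size
for _ in range(300):
    margins = y * (X @ w + b)
    viol = margins < 1.0                     # points violating the margin
    if viol.any():
        w -= lr * (lam * w - (y[viol][:, None] * X[viol]).mean(axis=0))
        b += lr * y[viol].mean()
    else:
        w -= lr * lam * w                    # only the regularizer remains

accuracy = np.mean(np.sign(X @ w + b) == y)
```

On linearly separable data such as the angular-resampled turbine features described above, this linear decision boundary attains perfect training accuracy, consistent with the paper's conclusion.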
Advanced intensity-modulation continuous-wave lidar techniques for ASCENDS CO2 column measurements
NASA Astrophysics Data System (ADS)
Campbell, Joel F.; Lin, Bing; Nehrir, Amin R.; Harrison, F. W.; Obland, Michael D.; Meadows, Byron
2015-10-01
Global atmospheric carbon dioxide (CO2) measurements for the NASA Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) space mission are critical for improving our understanding of global CO2 sources and sinks. Advanced Intensity-Modulated Continuous-Wave (IM-CW) lidar techniques are investigated as a means of facilitating CO2 measurements from space to meet the ASCENDS measurement requirements. In recent numerical, laboratory and flight experiments we have successfully used the Binary Phase Shift Keying (BPSK) modulation technique to uniquely discriminate surface lidar returns from intermediate aerosol and cloud contamination. We demonstrate the utility of BPSK to eliminate sidelobes in the range profile as a means of making Integrated Path Differential Absorption (IPDA) column CO2 measurements in the presence of optically thin clouds, thereby eliminating the need to correct for sidelobe bias errors caused by the clouds. Furthermore, high accuracy and precision ranging to the surface as well as to the top of intermediate cloud layers, which is a requirement for the inversion of column CO2 number density measurements to column CO2 mixing ratios, has been demonstrated using new hyperfine interpolation techniques that take advantage of the periodicity of the modulation waveforms. This approach works well for both BPSK and linear swept-frequency modulation techniques. The BPSK technique under investigation has excellent auto-correlation properties while possessing a finite bandwidth. A comparison of BPSK and linear swept-frequency is also discussed in this paper. These results are extended to include Richardson-Lucy deconvolution techniques to extend the resolution of the lidar beyond the limit implied by the bandwidth of the modulation, which is shown to be useful for making tree canopy measurements.
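The auto-correlation property that suppresses range sidelobes can be illustrated with the classic length-13 Barker code, a compact stand-in for the long PN-type BPSK codes used in this work: its aperiodic autocorrelation has a peak of 13 while every sidelobe has magnitude at most 1.

```python
import numpy as np

# Aperiodic autocorrelation of a length-13 Barker-coded BPSK sequence:
# peak-to-sidelobe ratio of 13:1 illustrates sidelobe suppression in ranging.
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

acf = np.correlate(barker13, barker13, mode="full")
peak = acf.max()                                     # zero-lag peak = 13
sidelobes = np.abs(np.delete(acf, len(barker13) - 1)).max()   # max sidelobe = 1
```

Longer pseudo-noise BPSK codes push the relative sidelobe level far lower still, which is what allows surface returns to be cleanly separated from thin-cloud returns in the range profile.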
Advanced IMCW Lidar Techniques for ASCENDS CO2 Column Measurements
NASA Astrophysics Data System (ADS)
Campbell, Joel; Lin, Bing; Nehrir, Amin; Harrison, Fenton; Obland, Michael
2015-04-01
Global atmospheric carbon dioxide (CO2) measurements for the NASA Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) space mission are critical for improving our understanding of global CO2 sources and sinks. Advanced Intensity-Modulated Continuous-Wave (IM-CW) lidar techniques are investigated as a means of facilitating CO2 measurements from space to meet the ASCENDS measurement requirements. In recent numerical, laboratory and flight experiments we have successfully used the Binary Phase Shift Keying (BPSK) modulation technique to uniquely discriminate surface lidar returns from intermediate aerosol and cloud contamination. We demonstrate the utility of BPSK to eliminate sidelobes in the range profile as a means of making Integrated Path Differential Absorption (IPDA) column CO2 measurements in the presence of optically thin clouds, thereby eliminating the need to correct for sidelobe bias errors caused by the clouds. Furthermore, high accuracy and precision ranging to the surface as well as to the top of intermediate cloud layers, which is a requirement for the inversion of column CO2 number density measurements to column CO2 mixing ratios, has been demonstrated using new hyperfine interpolation techniques that take advantage of the periodicity of the modulation waveforms. This approach works well for both BPSK and linear swept-frequency modulation techniques. The BPSK technique under investigation has excellent auto-correlation properties while possessing a finite bandwidth. A comparison of BPSK and linear swept-frequency is also discussed in this paper. These results are extended to include Richardson-Lucy deconvolution techniques to extend the resolution of the lidar beyond that implied by the limit of the bandwidth of the modulation.
Advanced Intensity-Modulation Continuous-Wave Lidar Techniques for ASCENDS CO2 Column Measurements
NASA Technical Reports Server (NTRS)
Campbell, Joel F.; Lin, Bing; Nehrir, Amin R.; Harrison, F. Wallace; Obland, Michael D.; Meadows, Byron
2015-01-01
Global atmospheric carbon dioxide (CO2) measurements for the NASA Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) space mission are critical for improving our understanding of global CO2 sources and sinks. Advanced Intensity-Modulated Continuous-Wave (IM-CW) lidar techniques are investigated as a means of facilitating CO2 measurements from space to meet the ASCENDS measurement requirements. In recent numerical, laboratory and flight experiments we have successfully used the Binary Phase Shift Keying (BPSK) modulation technique to uniquely discriminate surface lidar returns from intermediate aerosol and cloud contamination. We demonstrate the utility of BPSK to eliminate sidelobes in the range profile as a means of making Integrated Path Differential Absorption (IPDA) column CO2 measurements in the presence of optically thin clouds, thereby eliminating the need to correct for sidelobe bias errors caused by the clouds. Furthermore, high accuracy and precision ranging to the surface as well as to the top of intermediate cloud layers, which is a requirement for the inversion of column CO2 number density measurements to column CO2 mixing ratios, has been demonstrated using new hyperfine interpolation techniques that take advantage of the periodicity of the modulation waveforms. This approach works well for both BPSK and linear swept-frequency modulation techniques. The BPSK technique under investigation has excellent auto-correlation properties while possessing a finite bandwidth. A comparison of BPSK and linear swept-frequency is also discussed in this paper. These results are extended to include Richardson-Lucy deconvolution techniques to extend the resolution of the lidar beyond that implied by the limit of the bandwidth of the modulation, where it is shown to be useful for making tree canopy measurements.
Linear increases in carbon nanotube density through multiple transfer technique.
Shulaker, Max M; Wei, Hai; Patil, Nishant; Provine, J; Chen, Hong-Yu; Wong, H-S P; Mitra, Subhasish
2011-05-11
We present a technique to increase carbon nanotube (CNT) density beyond the as-grown CNT density. We perform multiple transfers, whereby we transfer CNTs from several growth wafers onto the same target surface, thereby linearly increasing CNT density on the target substrate. This process, called transfer of nanotubes through multiple sacrificial layers, is highly scalable, and we demonstrate linear CNT density scaling up to 5 transfers. We also demonstrate that this linear CNT density increase results in an ideal linear increase in drain-source currents of carbon nanotube field effect transistors (CNFETs). Experimental results demonstrate that CNT density can be improved from 2 to 8 CNTs/μm, accompanied by an increase in drain-source CNFET current from 4.3 to 17.4 μA/μm.
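As a quick consistency check on the linearity claim, the endpoint numbers quoted in the abstract imply a nearly constant per-CNT current (the density and current values are taken from the abstract; the per-CNT calculation is our illustration):

```python
# Reported endpoints from the abstract: CNT density (CNTs/um) and CNFET
# drain-source current (uA/um) before and after multiple transfers
density = [2.0, 8.0]
current = [4.3, 17.4]

# If current scales linearly with CNT density, the per-CNT current
# (uA per CNT) should be roughly constant across transfers
per_cnt = [i / d for d, i in zip(density, current)]
print(per_cnt)  # close to 2.2 uA per CNT in both cases
```

The ratio changes by only about 1% between the two endpoints, consistent with the ideal linear scaling the authors report.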
NASA Astrophysics Data System (ADS)
Sapia, Mark Angelo
2000-11-01
Three-dimensional microscope images typically suffer from reduced resolution due to the effects of convolution, optical aberrations and out-of-focus blurring. Two-dimensional ultrasound images are also degraded by convolutional blurring and various sources of noise. Speckle noise is a major problem in ultrasound images. In microscopy and ultrasound, various methods of digital filtering have been used to improve image quality. Several methods of deconvolution filtering have been used to improve resolution by reversing the convolutional effects, many of which are based on regularization techniques and non-linear constraints. The technique discussed here is a unique linear filter for deconvolving 3D fluorescence microscopy or 2D ultrasound images. The process is to solve for the filter completely in the spatial domain using an adaptive algorithm to converge to an optimum solution for de-blurring and resolution improvement. There are two key advantages of using an adaptive solution: (1) it efficiently solves for the filter coefficients by taking into account all sources of noise and degraded resolution at the same time, and (2) it achieves near-perfect convergence to the ideal linear deconvolution filter. This linear adaptive technique has other advantages such as avoiding artifacts of frequency-domain transformations and concurrent adaptation to suppress noise. Ultimately, this approach results in better signal-to-noise characteristics with virtually no edge-ringing. Many researchers have not adopted linear techniques because of poor convergence, noise instability and negative-valued data in the results. The methods presented here overcome many of these well-documented disadvantages and provide results that clearly out-perform other linear methods and may also out-perform regularization and constrained algorithms. In particular, the adaptive solution is most responsible for overcoming the poor performance associated with linear techniques.
This linear adaptive approach to deconvolution is demonstrated with results of restoring blurred phantoms for both microscopy and ultrasound and restoring 3D microscope images of biological cells and 2D ultrasound images of human subjects (courtesy of General Electric and Diasonics, Inc.).
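The flavour of adaptive linear deconvolution can be shown with a minimal one-dimensional sketch, assuming an LMS-style adaptive FIR filter and a known synthetic blur (the dissertation works on 2-D/3-D images; the kernel, filter length, and step size here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(4000)            # "true" sharp signal
h = np.array([1.0, 0.5])                 # hypothetical blur kernel
y = np.convolve(x, h)[: len(x)]          # blurred observation

# Adaptive FIR deconvolution filter trained sample-by-sample with LMS,
# entirely in the spatial (here, sample) domain
L, mu = 8, 0.01
w = np.zeros(L)
for n in range(L, len(x)):
    window = y[n - L + 1 : n + 1][::-1]  # y[n], y[n-1], ..., y[n-L+1]
    e = x[n] - w @ window                # error vs. the desired sharp sample
    w += mu * e * window                 # LMS coefficient update

xhat = np.convolve(y, w)[: len(x)]       # apply the converged filter
mse_blur = np.mean((y[2000:] - x[2000:]) ** 2)
mse_decon = np.mean((xhat[2000:] - x[2000:]) ** 2)
print(mse_blur, mse_decon)
```

The adapted filter converges toward the (truncated) inverse of the blur kernel, so the deconvolved error is far below the raw blurring error, without any frequency-domain transform.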
Enhanced algorithms for stochastic programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishna, Alamuru S.
1993-09-01
In this dissertation, we present some of the recent advances made in solving two-stage stochastic linear programming problems of large size and complexity. Decomposition and sampling are two fundamental components of techniques to solve stochastic optimization problems. We describe improvements to the current techniques in both these areas. We studied different ways of using importance sampling techniques in the context of stochastic programming, by varying the choice of approximation functions used in this method. We have concluded that approximating the recourse function by a computationally inexpensive piecewise-linear function is highly efficient. This reduced the problem from finding the mean of a computationally expensive function to finding that of a computationally inexpensive one. We then implemented various variance reduction techniques to estimate the mean of a piecewise-linear function. This method achieved similar variance reductions in orders of magnitude less time than applying variance-reduction techniques directly to the given problem. In solving a stochastic linear program, the expected value problem is usually solved first, both to provide a starting point for the stochastic solution and to speed up the algorithm by making use of the information obtained from the expected value solution. We have devised a new decomposition scheme to improve the convergence of this algorithm.
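The core trick, replacing the mean of an expensive function by the mean of a cheap piecewise-linear surrogate plus a small correction, can be sketched in one dimension (the `recourse` function and all parameters here are hypothetical stand-ins, not the dissertation's model):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stand-in for an expensive recourse function (smooth + kink)
def recourse(x):
    return np.maximum(0.0, x - 1.0) ** 1.5 + 0.1 * np.sin(x)

# Cheap piecewise-linear approximation built from a few evaluations
knots = np.linspace(-3.0, 5.0, 9)
def surrogate(x):
    return np.interp(x, knots, recourse(knots))

# The surrogate's mean is cheap, so estimate it from many samples ...
x_cheap = rng.standard_normal(200000) + 1.0
mu_g = surrogate(x_cheap).mean()

# ... and correct it with a small sample of the expensive function
x_exp = rng.standard_normal(500) + 1.0
resid = recourse(x_exp) - surrogate(x_exp)
estimate = mu_g + resid.mean()

# Variance reduction: the residual varies far less than the raw function
print(estimate, recourse(x_exp).var(), resid.var())
```

Because the residual has much smaller variance than the raw function, far fewer expensive evaluations are needed for the same estimation accuracy, which is the efficiency gain the abstract describes.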
Semilinear programming: applications and implementation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohan, S.
Semilinear programming is a method of solving optimization problems with linear constraints where the non-negativity restrictions on the variables are dropped and the objective function coefficients can take on different values depending on whether the variable is positive or negative. The simplex method for linear programming is modified in this thesis to solve general semilinear and piecewise linear programs efficiently without having to transform them into equivalent standard linear programs. Several models in widely different areas of optimization such as production smoothing, facility location, goal programming and L1 estimation are presented first to demonstrate the compact formulation that arises when such problems are formulated as semilinear programs. A code SLP is constructed using the semilinear programming techniques. Problems in aggregate planning and L1 estimation are solved using SLP and equivalent linear programs using a linear programming simplex code. Comparisons of CPU times and numbers of iterations indicate SLP to be far superior. The semilinear programming techniques are extended to piecewise linear programming in the implementation of the code PLP. Piecewise linear models in aggregate planning are solved using PLP, and equivalent standard linear programs using a simple upper-bounded linear programming code SUBLP.
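To make the setting concrete, a semilinear objective term and the standard-LP variable splitting that such codes avoid can be sketched as follows (schematic notation of ours, not taken from the thesis):

```latex
% Sign-dependent objective coefficient on variable x_j (semilinear term)
f_j(x_j) =
  \begin{cases}
    c_j^{+}\, x_j, & x_j \ge 0,\\
    c_j^{-}\, x_j, & x_j < 0.
  \end{cases}
% Equivalent standard-LP reformulation, which doubles the variables:
% x_j = x_j^{+} - x_j^{-}, \qquad x_j^{+},\, x_j^{-} \ge 0.
```

Handling the sign-dependent coefficients directly inside the simplex pivoting rules avoids this doubling of the variable count, which is the source of the reported speedups.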
Evolution from MEMS-based Linear Drives to Bio-based Nano Drives
NASA Astrophysics Data System (ADS)
Fujita, Hiroyuki
The successful extension of semiconductor technology to fabricate mechanical parts of the sizes from 10 to 100 micrometers opened wide ranges of possibilities for micromechanical devices and systems. The fabrication technique is called micromachining. Micromachining processes are based on silicon integrated circuits (IC) technology and used to build three-dimensional structures and movable parts by the combination of lithography, etching, film deposition, and wafer bonding. Microactuators are the key devices allowing MEMS to perform physical functions. Some of them are driven by electric, magnetic, and fluidic forces. Some others utilize actuator materials including piezoelectric (PZT, ZnO, quartz) and magnetostrictive materials (TbFe), shape memory alloy (TiNi) and bio molecular motors. This paper deals with the development of MEMS based microactuators, especially linear drives, following my own research experience. They include an electrostatic actuator, a superconductive levitated actuator, arrayed actuators, and a bio-motor-driven actuator.
Liquid electrolyte informatics using an exhaustive search with linear regression.
Sodeyama, Keitaro; Igarashi, Yasuhiko; Nakayama, Tomofumi; Tateyama, Yoshitaka; Okada, Masato
2018-06-14
Exploring new liquid electrolyte materials is a fundamental target for developing new high-performance lithium-ion batteries. In contrast to solid materials, disordered liquid solution properties have been less studied by data-driven information techniques. Here, we examined the estimation accuracy and efficiency of three information techniques, multiple linear regression (MLR), least absolute shrinkage and selection operator (LASSO), and exhaustive search with linear regression (ES-LiR), by using coordination energy and melting point as test liquid properties. We then confirmed that ES-LiR gives the most accurate estimation among the techniques. We also found that ES-LiR can provide the relationship between the "prediction accuracy" and "calculation cost" of the properties via a weight diagram of descriptors. This technique makes it possible to choose the balance of "accuracy" and "cost" when searching a huge number of new materials.
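A miniature version of ES-LiR can be sketched as exhaustively fitting ordinary least squares on every descriptor subset and keeping the subset with the lowest validation error (synthetic data with four hypothetical descriptors; the real method uses many descriptors and cross-validation):

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)

# Synthetic data: 4 candidate descriptors, target depends on the first two
X = rng.standard_normal((300, 4))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * rng.standard_normal(300)

Xtr, Xva, ytr, yva = X[:200], X[200:], y[:200], y[200:]

# Exhaustive search: fit OLS on every non-empty descriptor subset and
# keep the one with the lowest validation error
best_subset, best_err = None, np.inf
for r in range(1, 5):
    for subset in itertools.combinations(range(4), r):
        cols = list(subset)
        coef, *_ = np.linalg.lstsq(Xtr[:, cols], ytr, rcond=None)
        err = np.mean((yva - Xva[:, cols] @ coef) ** 2)
        if err < best_err:
            best_subset, best_err = subset, err
print(best_subset, best_err)
```

Tabulating the error of every subset, rather than only the winner, is what yields the weight diagram relating prediction accuracy to the number (and hence cost) of descriptors used.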
Preface: Introductory Remarks: Linear Scaling Methods
NASA Astrophysics Data System (ADS)
Bowler, D. R.; Fattebert, J.-L.; Gillan, M. J.; Haynes, P. D.; Skylaris, C.-K.
2008-07-01
It has been just over twenty years since the publication of the seminal paper on molecular dynamics with ab initio methods by Car and Parrinello [1], and the contribution of density functional theory (DFT) and the related techniques to physics, chemistry, materials science, earth science and biochemistry has been huge. Nevertheless, significant improvements are still being made to the performance of these standard techniques; recent work suggests that speed improvements of one or even two orders of magnitude are possible [2]. One of the areas where major progress has long been expected is in O(N), or linear scaling, DFT, in which the computer effort is proportional to the number of atoms. Linear scaling DFT methods have been in development for over ten years [3] but we are now in an exciting period where more and more research groups are working on these methods. Naturally there is a strong and continuing effort to improve the efficiency of the methods and to make them more robust. But there is also a growing ambition to apply them to challenging real-life problems. This special issue contains papers submitted following the CECAM Workshop 'Linear-scaling ab initio calculations: applications and future directions', held in Lyon from 3-6 September 2007. A noteworthy feature of the workshop is that it included a significant number of presentations involving real applications of O(N) methods, as well as work to extend O(N) methods into areas of greater accuracy (correlated wavefunction methods, quantum Monte Carlo, TDDFT) and large scale computer architectures. As well as explicitly linear scaling methods, the conference included presentations on techniques designed to accelerate and improve the efficiency of standard (that is non-linear-scaling) methods; this highlights the important question of crossover—that is, at what size of system does it become more efficient to use a linear-scaling method? 
As well as fundamental algorithmic questions, this brings up implementation questions relating to parallelization (particularly with multi-core processors starting to dominate the market) and inherent scaling and basis sets (in both normal and linear scaling codes). For now, the answer seems to lie between 100-1,000 atoms, though this depends on the type of simulation used among other factors. Basis sets are still a problematic question in the area of electronic structure calculations. The linear scaling community has largely split into two camps: those using relatively small basis sets based on local atomic-like functions (where systematic convergence to the full basis set limit is hard to achieve); and those that use necessarily larger basis sets which allow convergence systematically and therefore are the localised equivalent of plane waves. Related to basis sets is the study of Wannier functions, on which some linear scaling methods are based and which give a good point of contact with traditional techniques; they are particularly interesting for modelling unoccupied states with linear scaling methods. There are, of course, as many approaches to linear scaling solution for the density matrix as there are groups in the area, though there are various broad areas: McWeeny-based methods, fragment-based methods, recursion methods, and combinations of these. While many ideas have been in development for several years, there are still improvements emerging, as shown by the rich variety of the talks below. Applications using O(N) DFT methods are now starting to emerge, though they are still clearly not trivial. Once systems to be simulated cross the 10,000 atom barrier, only linear scaling methods can be applied, even with the most efficient standard techniques. One of the most challenging problems remaining, now that ab initio methods can be applied to large systems, is the long timescale problem. 
Although much of the work presented was concerned with improving the performance of the codes, and applying them to scientifically important problems, there was another important theme: extending functionality. The search for greater accuracy has given an implementation of a density functional designed to model van der Waals interactions accurately as well as local correlation, TDDFT and QMC and GW methods which, while not explicitly O(N), take advantage of localisation. All speakers at the workshop were invited to contribute to this issue, but not all were able to do this. Hence it is useful to give a complete list of the talks presented, with the names of the sessions; however, many talks fell within more than one area. This is an exciting time for linear scaling methods, which are already starting to contribute significantly to important scientific problems. Applications to nanostructures and biomolecules A DFT study on the structural stability of Ge 3D nanostructures on Si(001) using CONQUEST Tsuyoshi Miyazaki, D R Bowler, M J Gillan, T Otsuka and T Ohno Large scale electronic structure calculation theory and several applications Takeo Fujiwara and Takeo Hoshi ONETEP: Linear-scaling DFT with plane waves Chris-Kriton Skylaris, Peter D Haynes, Arash A Mostofi, Mike C Payne Maximally-localised Wannier functions as building blocks for large-scale electronic structure calculations Arash A Mostofi and Nicola Marzari A linear scaling three dimensional fragment method for ab initio calculations Lin-Wang Wang, Zhengji Zhao, Juan Meza Peta-scalable reactive molecular dynamics simulation of mechanochemical processes Aiichiro Nakano, Rajiv K.
Kalia, Ken-ichi Nomura, Fuyuki Shimojo and Priya Vashishta Recent developments and applications of the real-space multigrid (RMG) method Jerzy Bernholc, M Hodak, W Lu, and F Ribeiro Energy minimisation functionals and algorithms CONQUEST: A linear scaling DFT Code David R Bowler, Tsuyoshi Miyazaki, Antonio Torralba, Veronika Brazdova, Milica Todorovic, Takao Otsuka and Mike Gillan Kernel optimisation and the physical significance of optimised local orbitals in the ONETEP code Peter Haynes, Chris-Kriton Skylaris, Arash Mostofi and Mike Payne A miscellaneous overview of SIESTA algorithms Jose M Soler Wavelets as a basis set for electronic structure calculations and electrostatic problems Stefan Goedecker Wavelets as a basis set for linear scaling electronic structure calculations Mark Rayson O(N) Krylov subspace method for large-scale ab initio electronic structure calculations Taisuke Ozaki Linear scaling calculations with the divide-and-conquer approach and with non-orthogonal localized orbitals Weitao Yang Toward efficient wavefunction based linear scaling energy minimization Valery Weber Accurate O(N) first-principles DFT calculations using finite differences and confined orbitals Jean-Luc Fattebert Linear-scaling methods in dynamics simulations or beyond DFT and ground state properties An O(N) time-domain algorithm for TDDFT Guan Hua Chen Local correlation theory and electronic delocalization Joseph Subotnik Ab initio molecular dynamics with linear scaling: foundations and applications Eiji Tsuchida Towards a linear scaling Car-Parrinello-like approach to Born-Oppenheimer molecular dynamics Thomas Kühne, Michele Ceriotti, Matthias Krack and Michele Parrinello Partial linear scaling for quantum Monte Carlo calculations on condensed matter Mike Gillan Exact embedding of local defects in crystals using maximally localized Wannier functions Eric Cancès Faster GW calculations in larger model structures using ultralocalized nonorthogonal Wannier functions Paolo Umari
Other approaches for linear-scaling, including methods formetals Partition-of-unity finite element method for large, accurate electronic-structure calculations of metals John E Pask and Natarajan Sukumar Semiclassical approach to density functional theory Kieron Burke Ab initio transport calculations in defected carbon nanotubes using O(N) techniques Blanca Biel, F J Garcia-Vidal, A Rubio and F Flores Large-scale calculations with the tight-binding (screened) KKR method Rudolf Zeller Acknowledgments We gratefully acknowledge funding for the workshop from the UK CCP9 network, CECAM and the ESF through the PsiK network. DRB, PDH and CKS are funded by the Royal Society. References [1] Car R and Parrinello M 1985 Phys. Rev. Lett. 55 2471 [2] Kühne T D, Krack M, Mohamed F R and Parrinello M 2007 Phys. Rev. Lett. 98 066401 [3] Goedecker S 1999 Rev. Mod. Phys. 71 1085
USDA-ARS?s Scientific Manuscript database
The mixed linear model (MLM) is currently among the most advanced and flexible statistical modeling techniques and its use in tackling problems in plant pathology has begun surfacing in the literature. The longitudinal MLM is a multivariate extension that handles repeatedly measured data, such as r...
Development of a CFD Code for Analysis of Fluid Dynamic Forces in Seals
NASA Technical Reports Server (NTRS)
Athavale, Mahesh M.; Przekwas, Andrzej J.; Singhal, Ashok K.
1991-01-01
The aim is to develop a 3-D computational fluid dynamics (CFD) code for the analysis of fluid flow in cylindrical seals and evaluation of the dynamic forces on the seals. This code is expected to serve as a scientific tool for detailed flow analysis as well as a check for the accuracy of the 2D industrial codes. The features necessary in the CFD code are outlined. The initial focus was to develop or modify and implement new techniques and physical models. These include collocated grid formulation, rotating coordinate frames and moving grid formulation. Other advanced numerical techniques include higher order spatial and temporal differencing and an efficient linear equation solver. These techniques were implemented in a 2D flow solver for initial testing. Several benchmark test cases were computed using the 2D code, and the results of these were compared to analytical solutions or experimental data to check the accuracy. Tests presented here include planar wedge flow, flow due to an enclosed rotor, and flow in a 2D seal with a whirling rotor. Comparisons between numerical and experimental results for an annular seal and a 7-cavity labyrinth seal are also included.
NASA Astrophysics Data System (ADS)
Shibata, Hisaichi; Takaki, Ryoji
2017-11-01
A novel method to compute current-voltage characteristics (CVCs) of direct current positive corona discharges is formulated based on a perturbation technique. We use linearized fluid equations coupled with the linearized Poisson's equation. The Townsend relation is assumed in order to predict CVCs away from the linearization point. We choose coaxial cylinders as a test problem, and we have successfully predicted the parameters which determine CVCs for arbitrary inner and outer radii. It is also confirmed that the proposed method essentially does not induce numerical instabilities.
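For context, the Townsend relation referred to here is commonly written in the following form (schematic; the symbols are ours, not necessarily the paper's):

```latex
% Townsend's corona current-voltage relation: k is a constant depending
% on the electrode geometry and ion mobility, V_0 is the onset voltage
I = k \, V \, \left( V - V_0 \right)
```

The quadratic dependence on voltage above onset is what lets a linearized (perturbative) solution near one operating point be extrapolated across the CVC.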
Hsiung, Chang; Pederson, Christopher G.; Zou, Peng; Smith, Valton; von Gunten, Marc; O’Brien, Nada A.
2016-01-01
Near-infrared spectroscopy as a rapid and non-destructive analytical technique offers great advantages for pharmaceutical raw material identification (RMID) to fulfill the quality and safety requirements in pharmaceutical industry. In this study, we demonstrated the use of portable miniature near-infrared (MicroNIR) spectrometers for NIR-based pharmaceutical RMID and solved two challenges in this area, model transferability and large-scale classification, with the aid of support vector machine (SVM) modeling. We used a set of 19 pharmaceutical compounds including various active pharmaceutical ingredients (APIs) and excipients and six MicroNIR spectrometers to test model transferability. For the test of large-scale classification, we used another set of 253 pharmaceutical compounds comprised of both chemically and physically different APIs and excipients. We compared SVM with conventional chemometric modeling techniques, including soft independent modeling of class analogy, partial least squares discriminant analysis, linear discriminant analysis, and quadratic discriminant analysis. Support vector machine modeling using a linear kernel, especially when combined with a hierarchical scheme, exhibited excellent performance in both model transferability and large-scale classification. Hence, ultra-compact, portable and robust MicroNIR spectrometers coupled with SVM modeling can make on-site and in situ pharmaceutical RMID for large-volume applications highly achievable. PMID:27029624
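The best-performing configuration, a linear-kernel SVM on spectrum-like inputs, can be sketched as follows (the "spectra" are synthetic stand-ins, and scikit-learn's `SVC` is our choice of implementation; the paper does not specify one):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)

# Hypothetical stand-in for NIR spectra: 3 "compounds", each a distinct
# smooth baseline plus small noise (125 wavelength channels)
wl = np.linspace(0.0, 1.0, 125)
classes = [np.sin(3 * wl), np.cos(2 * wl), wl ** 2]
X = np.vstack([c + 0.01 * rng.standard_normal((30, wl.size)) for c in classes])
y = np.repeat([0, 1, 2], 30)

# Linear-kernel SVM, as in the study's best-performing configuration
clf = SVC(kernel="linear").fit(X[::2], y[::2])   # train on every other spectrum
acc = clf.score(X[1::2], y[1::2])                # score on the held-out rest
print(acc)
```

In the hierarchical scheme the abstract mentions, a classifier like this would first be trained on coarse groups of materials and then on the members within each group, which keeps every individual decision well separated even at large library sizes.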
Evolution of treatment of fistula in ano.
Blumetti, J; Abcarian, A; Quinteros, F; Chaudhry, V; Prasad, L; Abcarian, H
2012-05-01
Fistula-in-ano is a common medical problem affecting thousands of patients annually. In the past, the options for treatment of fistula-in-ano were limited to fistulotomy and/or seton placement. Current treatment options also include muscle-sparing techniques such as a dermal island flap, endorectal advancement flap, fibrin sealant injection, anal fistula plug, and most recently ligation of the intersphincteric fistula tract (LIFT procedure). This study seeks to evaluate types and time trends for treatment of fistula-in-ano. A retrospective review from 1975 to 2009 was performed. Data were collected and sorted into 5-year increments for type and time trends of treatment. Fistulotomy and partial fistulotomy were grouped as cutting procedures. Seton placement, fibrin sealant, dermal flap, endorectal flap, and fistula plug were grouped as noncutting procedures. Statistical analysis was performed for each time period to determine trends. With institutional review board approval, the records of 2,267 fistula operations available for analysis were included. Most of the patients were men (74% vs. 26%). Cutting procedures comprised 66.6% (n = 1510) of all procedures. Noncutting procedures were utilized in 33.4% (n = 757), including seton placement alone 370 (16.3%), fibrin sealant 168 (7.4%), dermal or endorectal flap 147 (6.5%), and fistula plug 72 (3.2%). The distribution of operations grouped in 5-year intervals is as follows: 1975-1979, 78 cutting and one noncutting; 1980-1984, 170 cutting and 10 noncutting; 1985-1989, 54 cutting and five noncutting; 1990-1994, 37 cutting and six noncutting; 1995-1999, 367 cutting and 167 noncutting; 2000-2004, 514 cutting and 283 noncutting; 2005-2009, 290 cutting and 285 noncutting. The percentage of cutting and noncutting procedures significantly differed over time, with cutting procedures decreasing and noncutting procedures increasing proportionally (χ(2) linear-by-linear association, p < 0.05).
Fistula-in-ano remains a common complex disease process. Its treatment has evolved to include a variety of noncutting techniques in addition to traditional fistulotomy. With the advent of more sphincter-sparing techniques, the number of patients undergoing fistulotomy should continue to decrease over time. Surgeons should become familiar with various surgical techniques so the treatment can be tailored to the patient.
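The reported linear-by-linear trend can be checked arithmetically from the per-period counts given above (counts are taken from the abstract; the full chi-squared statistic is not recomputed here):

```python
# Counts of cutting vs. noncutting fistula operations per 5-year period,
# from the abstract (1975-1979 through 2005-2009)
cutting    = [78, 170, 54, 37, 367, 514, 290]
noncutting = [ 1,  10,  5,  6, 167, 283, 285]

# Share of noncutting (sphincter-sparing) procedures in each period
share = [n / (c + n) for c, n in zip(cutting, noncutting)]
print([round(s, 3) for s in share])

# The trend behind the chi-squared linear-by-linear test: the
# noncutting share rises in every successive period
rising = all(a < b for a, b in zip(share, share[1:]))
print(rising)
```

The noncutting share grows monotonically from roughly 1% in 1975-1979 to nearly 50% in 2005-2009, which is the proportional shift the test found significant.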
NASA Astrophysics Data System (ADS)
Morén, B.; Larsson, T.; Carlsson Tedgren, Å.
2018-03-01
High dose-rate brachytherapy is a method for cancer treatment where the radiation source is placed within the body, inside or close to a tumour. For dose planning, mathematical optimization techniques are being used in practice and the most common approach is to use a linear model which penalizes deviations from specified dose limits for the tumour and for nearby organs. This linear penalty model is easy to solve, but its weakness lies in the poor correlation of its objective value and the dose-volume objectives that are used clinically to evaluate dose distributions. Furthermore, the model contains parameters that have no clear clinical interpretation. Another approach for dose planning is to solve mixed-integer optimization models with explicit dose-volume constraints which include parameters that directly correspond to dose-volume objectives, and which are therefore tangible. The two mentioned models take the overall goals for dose planning into account in fundamentally different ways. We show that there is, however, a mathematical relationship between them by deriving a linear penalty model from a dose-volume model. This relationship has not been established before and improves the understanding of the linear penalty model. In particular, the parameters of the linear penalty model can be interpreted as dual variables in the dose-volume model.
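A schematic form of the linear penalty model discussed here, with one target structure T and one organ at risk O, can be written as follows (our notation; the paper's exact model may differ):

```latex
% Linear penalty model: penalize dose d_i below the tumour limit L and
% above the organ-at-risk limit U, with structure weights w_T and w_O
\min \; w_T \sum_{i \in T} \max\{0,\, L - d_i\}
    + w_O \sum_{i \in O} \max\{0,\, d_i - U\}
```

Each max-term can be replaced by an auxiliary variable with two linear inequalities, which is why this model is solvable as a plain linear program, whereas dose-volume constraints (counting how many points violate a limit) require binary variables.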
Reduced basis technique for evaluating the sensitivity coefficients of the nonlinear tire response
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Tanner, John A.; Peters, Jeanne M.
1992-01-01
An efficient reduced-basis technique is proposed for calculating the sensitivity of nonlinear tire response to variations in the design variables. The tire is modeled using a 2-D, moderate rotation, laminated anisotropic shell theory, including the effects of variation in material and geometric parameters. The vector of structural response and its first-order and second-order sensitivity coefficients are each expressed as a linear combination of a small number of basis vectors. The effectiveness of the basis vectors used in approximating the sensitivity coefficients is demonstrated by a numerical example involving the Space Shuttle nose-gear tire, which is subjected to uniform inflation pressure.
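The central approximation can be sketched as follows (generic notation chosen for illustration, not the paper's symbols):

```latex
% Reduced-basis expansion: the response vector u and its sensitivities
% with respect to a design variable \lambda are each sought in the span
% of a small number m of basis vectors \gamma_i
u \approx \sum_{i=1}^{m} q_i\, \gamma_i, \qquad
\frac{\partial u}{\partial \lambda} \approx \sum_{i=1}^{m}
  \frac{\partial q_i}{\partial \lambda}\, \gamma_i
```

Because m is much smaller than the number of shell degrees of freedom, the sensitivity systems are solved in the reduced coordinates q rather than in the full finite element space, which is the source of the efficiency gain.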
Analysis and synthesis of distributed-lumped-active networks by digital computer
NASA Technical Reports Server (NTRS)
1973-01-01
The use of digital computational techniques in the analysis and synthesis of DLA (distributed lumped active) networks is considered. This class of networks consists of three distinct types of elements, namely, distributed elements (modeled by partial differential equations), lumped elements (modeled by algebraic relations and ordinary differential equations), and active elements (modeled by algebraic relations). Such a characterization is applicable to a broad class of circuits, especially including those usually referred to as linear integrated circuits, since the fabrication techniques for such circuits readily produce elements which may be modeled as distributed, as well as the more conventional lumped and active ones.
A simple white noise analysis of neuronal light responses.
Chichilnisky, E J
2001-05-01
A white noise technique is presented for estimating the response properties of spiking visual system neurons. The technique is simple, robust, efficient and well suited to simultaneous recordings from multiple neurons. It provides a complete and easily interpretable model of light responses even for neurons that display a common form of response nonlinearity that precludes classical linear systems analysis. A theoretical justification of the technique is presented that relies only on elementary linear algebra and statistics. Implementation is described with examples. The technique and the underlying model of neural responses are validated using recordings from retinal ganglion cells, and in principle are applicable to other neurons. Advantages and disadvantages of the technique relative to classical approaches are discussed.
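The technique's core computation, the spike-triggered average of a white-noise stimulus, can be sketched on a simulated linear-nonlinear-Poisson model neuron (the filter shape and all parameters below are illustrative choices of ours):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical biphasic temporal receptive field, 20 time bins
t = np.arange(20)
k = np.exp(-t / 4.0) * np.sin(t / 2.0)
k /= np.linalg.norm(k)

# White-noise stimulus driving a linear-nonlinear-Poisson model neuron
stim = rng.standard_normal(50000)
drive = np.convolve(stim, k)[: len(stim)]       # linear stage
rate = 0.5 * np.maximum(0.0, drive)             # rectifying nonlinearity
spikes = rng.poisson(rate)                      # Poisson spike counts

# Spike-triggered average: mean stimulus history preceding each spike
n0 = len(k) - 1
nsp = spikes[n0:].sum()
sta = np.array([
    spikes[n0:] @ stim[n0 - tau : len(stim) - tau] / nsp
    for tau in range(len(k))
])

# The STA recovers the filter up to a scale factor, even though the
# rectifying nonlinearity rules out classical linear systems analysis
cos = sta @ k / (np.linalg.norm(sta) * np.linalg.norm(k))
print(round(float(cos), 3))
```

Despite the response nonlinearity, the average stimulus preceding spikes is proportional to the underlying linear filter for Gaussian white-noise input, which is why the estimate needs only elementary linear algebra and statistics.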
NASA Astrophysics Data System (ADS)
Lucifredi, A.; Mazzieri, C.; Rossi, M.
2000-05-01
Since the operational conditions of a hydroelectric unit can vary within a wide range, the monitoring system must be able to distinguish between variations of the monitored variable caused by changes in operating conditions and those due to the onset and progression of faults and misoperation. The paper aims to identify the best technique to adopt for the monitoring system. Three different methods have been implemented and compared. Two of them use statistical techniques: the first, linear multiple regression, expresses the monitored variable as a linear function of the process parameters (independent variables), while the second, the dynamic kriging technique, is a modified multiple linear regression that represents the monitored variable as a linear combination of the process variables chosen so as to minimize the variance of the estimation error. The third is based on neural networks. Tests have shown that the monitoring system based on the kriging technique avoids several problems common to the other two models: the need for a large amount of tuning data (both for training the neural network and for defining the optimum plane for the multiple regression), not only at system start-up but also after a routine maintenance operation that replaces machinery components with a direct impact on the observed variable; and the need for different models to describe satisfactorily the different operating ranges of the plant. The kriging-based monitoring system overcomes these difficulties: it does not require a large amount of tuning data and is immediately operational (given two points, a third can be estimated at once), and the model follows the system without adapting itself to it.
The experimental results indicate that a model based on a neural network or on linear multiple regression is not optimal, and that a different approach is needed: one that reduces the effort of the learning phase by using, when available, all the information stored during the initial phase of the plant to build the reference baseline, processing the raw data where necessary. A mixed approach combining the kriging statistical technique with neural network techniques could optimise the result.
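The first of the compared approaches, a linear multiple regression baseline with residual-based alarming, can be sketched as follows. All plant parameters, coefficients, and the 3-sigma alarm threshold below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical healthy-plant data: the monitored variable (e.g. a bearing
# temperature) modeled as a linear function of two process parameters.
n = 500
X = np.column_stack([
    rng.uniform(50, 100, n),   # active power (illustrative units)
    rng.uniform(10, 30, n),    # cooling-water temperature
])
beta_true = np.array([0.3, 1.2])
y = 20.0 + X @ beta_true + rng.normal(0, 0.5, n)

# Fit the baseline by ordinary least squares on healthy data.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
sigma = (y - A @ coef).std()

# Monitoring: flag an observation whose residual exceeds 3*sigma.
def is_anomalous(x_new, y_new):
    y_hat = coef[0] + coef[1:] @ x_new
    return abs(y_new - y_hat) > 3 * sigma

healthy = 20.0 + 80 * 0.3 + 20 * 1.2
print(is_anomalous(np.array([80.0, 20.0]), healthy))        # in-model reading
print(is_anomalous(np.array([80.0, 20.0]), healthy + 5.0))  # offset fault
```

The kriging variant described in the abstract replaces the ordinary least-squares fit with a minimum-variance linear predictor, but the residual-based alarm logic is the same.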
Localization of Non-Linearly Modeled Autonomous Mobile Robots Using Out-of-Sequence Measurements
Besada-Portas, Eva; Lopez-Orozco, Jose A.; Lanillos, Pablo; de la Cruz, Jesus M.
2012-01-01
This paper presents a state of the art of estimation algorithms dealing with Out-of-Sequence (OOS) measurements for non-linearly modeled systems. The state of the art includes a critical analysis of the algorithm properties, taking into account the applicability of these techniques to autonomous mobile robot navigation based on the fusion of measurements, delayed and OOS, provided by multiple sensors. It also presents a representative example of the use of one of the most computationally efficient approaches in the localization module of the control software of a real robot (which has non-linear dynamics, and linear and non-linear sensors) and compares its performance against other approaches. The simulated results obtained with the selected OOS algorithm show the computational requirements that each sensor of the robot imposes on it. The real experiments show how the inclusion of the selected OOS algorithm in the control software lets the robot navigate successfully in spite of receiving many OOS measurements. Finally, the comparison highlights that the selected OOS algorithm is not only among the best performing ones in the comparison, but also has the lowest computational and memory cost. PMID:22736962
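The simplest (and most memory-hungry) baseline that the surveyed OOS algorithms improve upon is buffering and reprocessing: when a delayed measurement arrives, reinsert it in time order and re-run the filter. A hedged sketch with a 1-D constant-velocity Kalman filter and invented measurement values (not the paper's robot model) follows.

```python
import numpy as np

# Constant-velocity Kalman filter (1-D position + velocity), unit time steps.
dt, q, r = 1.0, 0.01, 0.25
F = np.array([[1.0, dt], [0.0, 1.0]])
Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
H = np.array([[1.0, 0.0]])

def kf_run(x0, P0, measurements):
    """Run the filter over time-ordered (t, z) pairs, one predict per step."""
    x, P = x0.copy(), P0.copy()
    for _, z in measurements:
        x, P = F @ x, F @ P @ F.T + Q          # predict
        S = H @ P @ H.T + r                    # innovation covariance
        K = P @ H.T / S                        # Kalman gain
        x = x + (K * (z - H @ x)).ravel()      # update
        P = (np.eye(2) - K @ H) @ P
    return x, P

# Position measurements; the one taken at t=3 arrives last (out of sequence).
arrival_order = [(1, 1.1), (2, 2.0), (4, 3.9), (5, 5.2), (3, 3.1)]
x0, P0 = np.zeros(2), np.eye(2)
buffer = []
for t, z in arrival_order:
    buffer.append((t, z))
    buffer.sort()                     # reinsert the OOS measurement in order
    x, P = kf_run(x0, P0, buffer)     # naive fix: reprocess the whole buffer

print(f"position: {x[0]:.2f}, velocity: {x[1]:.2f}")
```

The algorithms analyzed in the paper avoid this full reprocessing (and its growing memory cost) while handling the non-linear dynamics the sketch omits.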
NASA Astrophysics Data System (ADS)
Moeferdt, Matthias; Kiel, Thomas; Sproll, Tobias; Intravaia, Francesco; Busch, Kurt
2018-02-01
A combined analytical and numerical study of the modes in two distinct plasmonic nanowire systems is presented. The computations are based on a discontinuous Galerkin time-domain approach, and a fully nonlinear and nonlocal hydrodynamic Drude model for the metal is utilized. In the linear regime, these computations demonstrate the strong influence of nonlocality on the field distributions as well as on the scattering and absorption spectra. Based on these results, second-harmonic-generation efficiencies are computed over a frequency range that covers all relevant modes of the linear spectra. In order to interpret the physical mechanisms that lead to corresponding field distributions, the associated linear quasielectrostatic problem is solved analytically via conformal transformation techniques. This provides an intuitive classification of the linear excitations of the systems that is then applied to the full Maxwell case. Based on this classification, group theory facilitates the determination of the selection rules for the efficient excitation of modes in both the linear and nonlinear regimes. This leads to significantly enhanced second-harmonic generation via judiciously exploiting the system symmetries. These results regarding the mode structure and second-harmonic generation are of direct relevance to other nanoantenna systems.
Seitz, Kelsey E; Smith, Cynthia R; Marks, Stanley L; Venn-Watson, Stephanie K; Ivančić, Marina
2016-12-01
The objective of this study was to establish a comprehensive technique for ultrasound examination of the dolphin hepatobiliary system and to apply this technique to 30 dolphins to determine what, if any, sonographic changes are associated with blood-based indicators of metabolic syndrome (insulin greater than 14 μIU/ml or glucose greater than 112 mg/dl) and iron overload (transferrin saturation greater than 65%). A prospective study of individuals in a cross-sectional population with and without elevated postprandial insulin levels was performed. Twenty-nine bottlenose dolphins (Tursiops truncatus) in a managed collection were included in the final data analysis. An in-water ultrasound technique was developed that included detailed analysis of the liver and pancreas. Dolphins with hyperinsulinemia had larger livers than dolphins with nonelevated insulin concentrations. Using stepwise multivariate regression including blood-based indicators of metabolic syndrome in dolphins, glucose was the best predictor of, and had a positive linear association with, liver size (P = 0.007, R² = 0.24). Bottlenose dolphins are susceptible to metabolic syndrome and associated complications that affect the liver, including fatty liver disease and iron overload. This study established a technique for rapid, diagnostic, and noninvasive ultrasonographic evaluation of the dolphin liver. In addition, it identified ultrasound-detectable hepatic changes associated primarily with elevated glucose concentration in dolphins. Future investigations will strive to detail the pathophysiological mechanisms for these changes.
Spectral Target Detection using Schroedinger Eigenmaps
NASA Astrophysics Data System (ADS)
Dorado-Munoz, Leidy P.
Applications of optical remote sensing include environmental monitoring, military monitoring, meteorology, mapping, and surveillance. Many of these tasks involve the detection of specific objects or materials, usually few or small, which are surrounded by other materials that clutter the scene and hide the relevant information. This target detection process has lately been boosted by the use of hyperspectral imagery (HSI), since its high spectral dimension provides the detailed spectral information that is desirable in data exploitation. Typical spectral target detectors rely on statistical or geometric models to characterize the spectral variability of the data. However, in many cases these parametric models do not fit HSI data well, which degrades detection performance. On the other hand, non-linear transformation methods, mainly based on manifold learning algorithms, have shown potential for HSI transformation, dimensionality reduction and classification. In target detection, non-linear transformation algorithms are used as preprocessing techniques that transform the data to a more suitable lower dimensional space, where the statistical or geometric detectors are then applied. One of these non-linear manifold methods is the Schroedinger Eigenmaps (SE) algorithm, which was introduced as a technique for semi-supervised classification. The core tool of the SE algorithm is the Schroedinger operator, which includes a potential term that encodes prior information about the materials present in a scene and enables the embedding to be steered in convenient directions in order to cluster similar pixels together. A novel target detection methodology based on the SE algorithm is proposed in this thesis. The proposed methodology includes not only the transformation of the data to a lower dimensional space but also the definition of a detector that capitalizes on the theory behind SE.
The fact that target pixels and similar pixels are clustered in a predictable region of the low-dimensional representation is used to define a decision rule that distinguishes target pixels from the rest of the pixels in a given image. In addition, a knowledge propagation scheme is used to combine spectral and spatial information as a means to propagate the "potential constraints" to nearby points. The propagation scheme is introduced to reinforce weak connections and improve the separability between most of the target pixels and the background. Experiments using different HSI data sets are carried out to test the proposed methodology. The assessment is performed from a quantitative and qualitative point of view, comparing the SE-based methodology against two other detection methodologies that use linear/non-linear algorithms as transformations and the well-known Adaptive Coherence/Cosine Estimator (ACE) detector. Overall results show that the SE-based detector outperforms the other two detection methodologies, which indicates the usefulness of the SE transformation in spectral target detection problems.
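The Schroedinger operator at the core of SE can be sketched on synthetic data. The following hedged toy example (all spectra, the graph construction, and the potential weighting are invented assumptions, not the thesis' actual pipeline) shows how adding a diagonal potential to the graph Laplacian steers the embedding so that energy concentrates on pixels similar to a labeled target.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "pixels": 40 background spectra and 10 target-like spectra, 5 bands.
X = np.vstack([rng.normal(0.0, 0.3, (40, 5)),
               rng.normal(2.0, 0.3, (10, 5))])

# Gaussian affinity graph and its Laplacian L = D - W.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / np.median(d2))
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W

# Schroedinger-style potential: a diagonal term that grows with spectral
# distance from one labeled target pixel (index 45), steering the embedding.
alpha = 10.0
S = L + alpha * np.diag(d2[45])

# Embed with the eigenvectors of the smallest eigenvalues of S.
vals, vecs = np.linalg.eigh(S)
embedding = vecs[:, :2]

# The embedding energy concentrates on pixels similar to the labeled target,
# which is what makes a simple decision rule in the embedded space possible.
energy = np.linalg.norm(embedding, axis=1)
print("mean energy, targets:   ", round(energy[40:].mean(), 4))
print("mean energy, background:", round(energy[:40].mean(), 4))
```

A detector in the spirit of the thesis would then threshold a statistic computed in this embedded space rather than in the original spectral space.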
Time and Memory Efficient Online Piecewise Linear Approximation of Sensor Signals.
Grützmacher, Florian; Beichler, Benjamin; Hein, Albert; Kirste, Thomas; Haubelt, Christian
2018-05-23
Piecewise linear approximation of sensor signals is a well-known technique in the fields of Data Mining and Activity Recognition. In this context, several algorithms have been developed, some of them with the purpose to be performed on resource constrained microcontroller architectures of wireless sensor nodes. While microcontrollers are usually constrained in computational power and memory resources, all state-of-the-art piecewise linear approximation techniques either need to buffer sensor data or have an execution time depending on the segment’s length. In the paper at hand, we propose a novel piecewise linear approximation algorithm, with a constant computational complexity as well as a constant memory complexity. Our proposed algorithm’s worst-case execution time is one to three orders of magnitude smaller and its average execution time is three to seventy times smaller compared to the state-of-the-art Piecewise Linear Approximation (PLA) algorithms in our experiments. In our evaluations, we show that our algorithm is time and memory efficient without sacrificing the approximation quality compared to other state-of-the-art piecewise linear approximation techniques, while providing a maximum error guarantee per segment, a small parameter space of only one parameter, and a maximum latency of one sample period plus its worst-case execution time.
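The flavor of an O(1)-per-sample PLA can be illustrated with a classic swing-filter-style algorithm. To be clear, this is not the paper's proposed algorithm; it is a hedged sketch of a well-known connected-PLA scheme that likewise keeps only a slope interval per segment, with a per-segment maximum-error guarantee.

```python
import math

def pla_stream(samples, eps):
    """Swing-filter-style online PLA: O(1) time/memory per sample,
    connected segments, max error <= eps at every sample."""
    segments = []                       # (t_start, y_start, slope, t_end)
    it = iter(samples)
    t0, y0 = next(it)
    lo, hi = float("-inf"), float("inf")
    t_prev = t0
    for t, y in it:
        s_lo = (y - eps - y0) / (t - t0)
        s_hi = (y + eps - y0) / (t - t0)
        if s_lo > hi or s_hi < lo:      # no slope satisfies all samples
            slope = (lo + hi) / 2
            segments.append((t0, y0, slope, t_prev))
            t0, y0 = t_prev, y0 + slope * (t_prev - t0)   # connect segments
            lo = (y - eps - y0) / (t - t0)
            hi = (y + eps - y0) / (t - t0)
        else:
            lo, hi = max(lo, s_lo), min(hi, s_hi)
        t_prev = t
    segments.append((t0, y0, (lo + hi) / 2, t_prev))
    return segments

samples = [(t, math.sin(t / 20.0)) for t in range(200)]
segs = pla_stream(samples, eps=0.05)

def approx(segs, t):
    for t0, y0, slope, t_end in segs:
        if t <= t_end:
            return y0 + slope * (t - t0)

max_err = max(abs(approx(segs, t) - y) for t, y in samples)
print(f"{len(samples)} samples -> {len(segs)} segments, max error {max_err:.3f}")
```

Each incoming sample only tightens (or breaks) a feasible slope interval, so neither the buffer nor the per-sample work grows with segment length, which is the property the paper's algorithm also targets.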
Efficient Craig Interpolation for Linear Diophantine (Dis)Equations and Linear Modular Equations
2008-02-01
Craig interpolants have enabled the development of powerful hardware and software model checking techniques. Efficient algorithms are known for computing ... interpolants in rational and real linear arithmetic. We focus on subsets of integer linear arithmetic. Our main results are polynomial time algorithms ... congruences), and linear diophantine disequations. We show the utility of the proposed interpolation algorithms for discovering modular/divisibility predicates.
A Technique of Treating Negative Weights in WENO Schemes
NASA Technical Reports Server (NTRS)
Shi, Jing; Hu, Changqing; Shu, Chi-Wang
2000-01-01
High order accurate weighted essentially non-oscillatory (WENO) schemes have recently been developed for finite difference and finite volume methods on both structured and unstructured meshes. A key idea in WENO schemes is a linear combination of lower order fluxes or reconstructions to obtain a high order approximation. The combination coefficients, also called linear weights, are determined by the local geometry of the mesh and the order of accuracy, and may become negative. WENO procedures cannot be applied directly to obtain a stable scheme if negative linear weights are present. Previous strategies for handling this difficulty either regroup stencils or reduce the order of accuracy to get rid of the negative linear weights. In this paper we present a simple and effective technique for handling negative linear weights without a need to get rid of them.
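The splitting idea can be sketched numerically: decompose the signed weights into two nonnegative, normalized groups whose weighted difference reproduces the original combination exactly, so a standard WENO procedure can be applied to each group. The weights and stencil values below are invented for illustration; θ = 3 follows the common choice for this kind of splitting.

```python
import numpy as np

def split_weights(gamma, theta=3.0):
    """Split possibly-negative linear weights (summing to 1) into two
    nonnegative normalized groups plus their scale factors."""
    g_plus = 0.5 * (gamma + theta * np.abs(gamma))
    g_minus = g_plus - gamma                    # nonnegative by construction
    sigma_p, sigma_m = g_plus.sum(), g_minus.sum()
    return g_plus / sigma_p, sigma_p, g_minus / sigma_m, sigma_m

gamma = np.array([0.7, -0.3, 0.6])      # hypothetical weights, one negative
p = np.array([1.0, 2.0, 3.0])           # hypothetical stencil reconstructions

wp, sp, wm, sm = split_weights(gamma)
direct = gamma @ p
split = sp * (wp @ p) - sm * (wm @ p)
print(f"direct {direct:.6f} vs split {split:.6f} (sigma+ - sigma- = {sp - sm:.1f})")
```

Because both groups are nonnegative and sum to one, the usual nonlinear WENO weighting can be applied within each group before taking the scaled difference.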
LINEAR AND NONLINEAR CORRECTIONS IN THE RHIC INTERACTION REGIONS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
PILAT,F.; CAMERON,P.; PTITSYN,V.
2002-06-02
A method has been developed to measure operationally the linear and non-linear effects of the interaction region triplets, which gives access to the multipole content through the action kick, by applying closed orbit bumps and analysing tune and orbit shifts. This technique has been extensively tested and used during RHIC operations in 2001. Measurements were taken at 3 different interaction regions and for different focusing at the interaction point. Non-linear effects up to the dodecapole have been measured, as well as the effects of linear, sextupolar and octupolar corrections. An analysis package for the data processing has been developed that, through a precise fit of the experimental tune shift data (measured by a phase lock loop technique to better than 10^-5 resolution), determines the multipole content of an IR triplet.
Pseudo-random number generator for the Sigma 5 computer
NASA Technical Reports Server (NTRS)
Carroll, S. N.
1983-01-01
A technique is presented for developing a pseudo-random number generator based on the linear congruential form. The two numbers used for the generator are a prime number and a corresponding primitive root, where the prime is the largest prime number that can be accurately represented on a particular computer. The primitive root is selected by applying Marsaglia's lattice test. The technique presented was applied to write a random number program for the Sigma 5 computer. The new program, named S:RANDOM1, is judged to be superior to the older program named S:RANDOM. For applications requiring several independent random number generators, a table is included showing several acceptable primitive roots. The technique and programs described can be applied to any computer having word length different from that of the Sigma 5.
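The generator's structure is easy to sketch. The constants below are illustrative, not the Sigma 5 values from the report: we use the classical pair p = 2³¹ − 1 (a Mersenne prime) with primitive root a = 7⁵ = 16807, and verify primitivity by the standard number-theoretic test (the report instead selects the root with Marsaglia's lattice test).

```python
# Lehmer / multiplicative congruential generator: x_{n+1} = (a * x_n) mod p.
P = 2**31 - 1
A = 7**5          # 16807, a primitive root modulo P

def lehmer(seed):
    x = seed
    while True:
        x = (A * x) % P
        yield x / P               # uniform deviate in (0, 1)

# Primitive-root check: a is a primitive root mod p iff a^((p-1)/q) != 1
# (mod p) for every prime q dividing p - 1.
# Here p - 1 = 2 * 3^2 * 7 * 11 * 31 * 151 * 331.
assert all(pow(A, (P - 1) // q, P) != 1 for q in (2, 3, 7, 11, 31, 151, 331))

gen = lehmer(seed=1)
print([round(next(gen), 6) for _ in range(3)])
```

With a prime modulus and a primitive-root multiplier, the sequence has full period p − 1 before repeating, which is the property that makes the choice of multiplier critical.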
[Technological innovations in radiation oncology require specific quality controls].
Lenaerts, E; Mathot, M
2014-01-01
During the last decade, the field of radiotherapy has benefited from major technological innovations, continuously improving treatment efficacy and the comfort and safety of patients. This mainly concerns imaging techniques, such as 4D CT scans that record the respiratory phases; on-board imaging on linear accelerators, which ensures accurate positioning of the patient for treatment; and irradiation techniques that very significantly reduce the duration of treatment sessions without compromising the quality of the treatment plan, including IMRT (Intensity Modulated Radiation Therapy) and VMAT (Volumetric Modulated Arc Therapy). In this context of rapid technological change, it is the responsibility of medical physicists to monitor regularly and precisely the correct functioning of new techniques to ensure patient safety. This requires the use of specific quality control equipment best suited to these new techniques. We briefly describe the Delta4 measurement system used to control the individualized treatment plan for each patient treated with the VMAT technology.
Brensinger, Karen; Rollman, Christopher; Copper, Christine; Genzman, Ashton; Rine, Jacqueline; Lurie, Ira; Moini, Mehdi
2016-01-01
To address the need for forensic analysis of high explosives, a novel capillary electrophoresis mass spectrometry (CE-MS) technique has been developed for high resolution, high sensitivity, and accurate mass detection of these compounds. The technique uses perfluorooctanoic acid (PFOA) both as a micellar electrokinetic chromatography (MEKC) reagent for separation of neutral explosives and as the complexation reagent for mass spectrometric detection of PFOA-explosive complexes in the negative ion mode. High explosives that formed complexes with PFOA included RDX, HMX, tetryl, and PETN. Some nitroaromatics were detected as molecular ions. Detection limits in the high parts-per-billion range and linear calibration responses over two orders of magnitude were obtained. As a proof of concept, the technique was applied to the quantitative analysis of high explosives in sand samples.
String Stability of a Linear Formation Flight Control System
NASA Technical Reports Server (NTRS)
Allen, Michael J.; Ryan, Jack; Hanson, Curtis E.; Parle, James F.
2002-01-01
String stability analysis of an autonomous formation flight system was performed using linear and nonlinear simulations. String stability is a measure of how position errors propagate from one vehicle to another in a cascaded system. In the formation flight system considered here, each i-th aircraft uses information from itself and the preceding (i-1)-th aircraft to track a commanded relative position. A possible solution for meeting performance requirements with such a system is to allow string instability. This paper explores two results of string instability and outlines analysis techniques for string unstable systems. The three analysis techniques presented here are: linear, nonlinear formation performance, and ride quality. The linear technique was developed from a worst-case scenario and could be applied to the design of a string unstable controller. The nonlinear formation performance and ride quality analysis techniques both use nonlinear formation simulation. Three of the four formation-controller gain-sets analyzed in this paper were limited more by ride quality than by performance. Formations of up to seven aircraft in a cascaded formation could be used in the presence of light gusts with this string unstable system.
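The error-propagation idea can be illustrated with a toy cascade (this is an invented second-order error model, not the paper's aircraft dynamics or controller): when each follower's error filter has peak gain below one, disturbances decay down the string; above one, they amplify.

```python
import numpy as np

def propagate(n_aircraft, steps, gain, dt=0.1):
    """Toy cascade: each follower filters its predecessor's position error
    through a damped 2nd-order system (wn = 1, zeta = 0.7, DC gain `gain`)."""
    errs = np.zeros((n_aircraft, steps))
    errs[0] = np.sin(0.5 * np.arange(steps) * dt)     # leader disturbance
    for i in range(1, n_aircraft):
        y = np.zeros(steps)
        yd = 0.0
        for k in range(1, steps):
            ydd = gain * errs[i - 1, k] - 1.4 * yd - y[k - 1]
            yd += ydd * dt
            y[k] = y[k - 1] + yd * dt
        errs[i] = y
    return np.abs(errs).max(axis=1)     # peak |error| per aircraft

stable = propagate(5, 2000, gain=0.9)
unstable = propagate(5, 2000, gain=1.3)
print("peak errors, gain 0.9:", np.round(stable, 2))
print("peak errors, gain 1.3:", np.round(unstable, 2))
```

The growth in the second case is gradual for short strings, which is why a string unstable design can still be acceptable for small formations, as the abstract's seven-aircraft result suggests.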
Estimation of hysteretic damping of structures by stochastic subspace identification
NASA Astrophysics Data System (ADS)
Bajrić, Anela; Høgsberg, Jan
2018-05-01
Output-only system identification techniques can estimate the modal parameters of structures represented by linear time-invariant systems. However, the extension of these techniques to structures exhibiting non-linear behavior has not received much attention. This paper presents an output-only system identification method suitable for the random response of dynamic systems with hysteretic damping. The method applies the concept of Stochastic Subspace Identification (SSI) to estimate the model parameters of a dynamic system with hysteretic damping. The restoring force is represented by the Bouc-Wen model, for which an equivalent linear relaxation model is derived. Hysteretic properties can be encountered in engineering structures exposed to severe cyclic environmental loads, as well as in vibration mitigation devices such as Magneto-Rheological (MR) dampers. The identification technique incorporates the equivalent linear damper model in the estimation procedure. Synthetic data, representing the random vibrations of systems with hysteresis, are used to validate the system parameters estimated by the presented identification method at low and high levels of excitation amplitude.
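The Bouc-Wen restoring force referenced above can be simulated directly. The parameter values below are illustrative, not those used in the paper; the sketch shows the defining property of hysteresis, namely that energy is dissipated over each closed displacement cycle.

```python
import numpy as np

def bouc_wen_force(x, dt, k=1.0, alpha=0.3, A=1.0, beta=0.5, gamma=0.5, n=1.0):
    """Restoring force r = alpha*k*x + (1-alpha)*k*z with Bouc-Wen state z:
    z' = A*x' - beta*|x'|*|z|^(n-1)*z - gamma*x'*|z|^n (explicit Euler)."""
    z = np.zeros_like(x)
    for i in range(1, len(x)):
        xd = (x[i] - x[i - 1]) / dt
        zi = z[i - 1]
        zd = (A * xd - beta * abs(xd) * abs(zi) ** (n - 1) * zi
              - gamma * xd * abs(zi) ** n)
        z[i] = zi + zd * dt
    return alpha * k * x + (1 - alpha) * k * z

t = np.linspace(0.0, 4 * np.pi, 4000)
x = np.sin(t)                       # imposed cyclic displacement
r = bouc_wen_force(x, t[1] - t[0])

# Loop integral of r dx over the second (steady-state) cycle: the elastic
# part integrates to ~0, so a positive value is the hysteretic dissipation.
seg = slice(2000, 4000)
dissipated = np.sum(0.5 * (r[seg][1:] + r[seg][:-1]) * np.diff(x[seg]))
print(f"energy dissipated per cycle: {dissipated:.3f}")
```

It is this per-cycle dissipation that the paper's equivalent linear relaxation model must reproduce for the SSI estimation to be meaningful.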
Mixed H∞ and passive control for linear switched systems via hybrid control approach
NASA Astrophysics Data System (ADS)
Zheng, Qunxian; Ling, Youzhu; Wei, Lisheng; Zhang, Hongbin
2018-03-01
This paper investigates the mixed H∞ and passive control problem for linear switched systems based on a hybrid control strategy. To solve this problem, first, a new performance index is proposed. This performance index can be viewed as a mixed weighted H∞ and passivity performance. Then, hybrid controllers are used to stabilise the switched systems. The hybrid controllers consist of dynamic output-feedback controllers for every subsystem and state updating controllers at the switching instants. The design of the state updating controllers depends not only on the pre-switching and post-switching subsystems, but also on the measurable output signal. The hybrid controllers proposed in this paper include some existing ones as special cases. Combining the multiple Lyapunov functions approach with the average dwell time technique, new sufficient conditions are obtained. Under the new conditions, the closed-loop linear switched systems are globally uniformly asymptotically stable with a mixed H∞ and passivity performance index. Moreover, the desired hybrid controllers can be constructed by solving a set of linear matrix inequalities. Finally, a numerical example and a practical example are given.
Linear signal noise summer accurately determines and controls S/N ratio
NASA Technical Reports Server (NTRS)
Sundry, J. L.
1966-01-01
Linear signal noise summer precisely controls the relative power levels of signal and noise, and mixes them linearly in accurately known ratios. The S/N ratio accuracy and stability are greatly improved by this technique and are attained simultaneously.
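The underlying arithmetic of mixing at a known S/N ratio is simple, and a hedged digital analogue of the idea (the hardware summer works on analog power levels; the signal, noise, and target ratio below are invented) can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(3)

def mix_at_snr(signal, noise, snr_db):
    """Scale the noise channel so signal + noise has the requested S/N ratio."""
    p_sig = np.mean(signal ** 2)
    p_noise = np.mean(noise ** 2)
    target_noise_power = p_sig / 10 ** (snr_db / 10)
    scaled_noise = noise * np.sqrt(target_noise_power / p_noise)
    return signal + scaled_noise, scaled_noise

t = np.arange(10_000)
signal = np.sin(2 * np.pi * t / 100)
noise = rng.standard_normal(t.size)
mixed, scaled_noise = mix_at_snr(signal, noise, snr_db=10.0)

achieved = 10 * np.log10(np.mean(signal ** 2) / np.mean(scaled_noise ** 2))
print(f"achieved S/N: {achieved:.2f} dB")
```

Because the noise is rescaled from its measured power, the achieved ratio matches the request exactly by construction, mirroring the accuracy and stability claims of the hardware technique.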
Pease, J M; Morselli, M F
1987-01-01
This paper deals with a computer program adapted to a statistical method for analyzing an unlimited quantity of binary recorded data of an independent circular variable (e.g. wind direction) and a linear variable (e.g. maple sap flow volume). Circular variables cannot be statistically analyzed with linear methods unless they have been transformed. The program calculates a critical quantity, the acrophase angle (PHI, phi_0). The technique is adapted from original mathematics [1] and is written in Fortran 77 for easier conversion between computer networks. Correlation analysis can be performed following the program, or regression, which, because of the circular nature of the independent variable, becomes periodic regression. The technique was tested on a file of approximately 4050 data pairs.
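The standard transformation for a circular predictor is to expand it in sine and cosine terms, after which ordinary linear least squares applies and the acrophase falls out of the fitted coefficients. A hedged sketch in Python rather than the paper's Fortran 77, with invented data:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic data: a linear response (e.g. sap flow) peaking at direction 130 deg.
theta = rng.uniform(0, 2 * np.pi, 300)              # circular predictor
true_phase = np.deg2rad(130.0)
y = 5.0 + 2.0 * np.cos(theta - true_phase) + rng.normal(0, 0.3, 300)

# y = m + a*cos(theta) + b*sin(theta) is linear in (m, a, b).
A = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
m, a, b = np.linalg.lstsq(A, y, rcond=None)[0]

amplitude = np.hypot(a, b)
acrophase = np.degrees(np.arctan2(b, a)) % 360      # direction of the peak
print(f"amplitude ~ {amplitude:.2f}, acrophase ~ {acrophase:.1f} deg")
```

Since 2 cos(θ − φ) = 2 cos φ · cos θ + 2 sin φ · sin θ, the acrophase is recovered as atan2(b, a), which is the critical quantity the program computes.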
NASA Technical Reports Server (NTRS)
Cheyney, H., III; Arking, A.
1976-01-01
The equations of radiative transfer in anisotropically scattering media are reformulated as linear operator equations in a single independent variable. The resulting equations are suitable for solution by a variety of standard mathematical techniques. The operators appearing in the resulting equations are in general nonsymmetric; however, it is shown that every bounded linear operator equation can be embedded in a symmetric linear operator equation and a variational solution can be obtained in a straightforward way. For purposes of demonstration, a Rayleigh-Ritz variational method is applied to three problems involving simple phase functions. It is to be noted that the variational technique demonstrated is of general applicability and permits simple solutions for a wide range of otherwise difficult mathematical problems in physics.
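A hedged finite-dimensional analogue illustrates both steps: the symmetric embedding of a nonsymmetric operator equation and a Rayleigh-Ritz solution over a small trial space. The operator, trial basis, and dimensions below are invented for the demo, not taken from the radiative transfer problem.

```python
import numpy as np

rng = np.random.default_rng(5)

# Nonsymmetric operator equation A x = b (finite-dimensional stand-in).
n = 50
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
t = np.linspace(0, np.pi, n)
x_true = np.sin(t)
b = A @ x_true

# Symmetric embedding: (A^T A) x = A^T b, whose solution minimizes the
# quadratic functional J(x) = x^T (A^T A) x / 2 - x^T (A^T b).
S, c = A.T @ A, A.T @ b

# Rayleigh-Ritz: restrict J to the span of a few smooth trial vectors.
k = 8
Phi = np.column_stack([np.sin((j + 1) * t) for j in range(k)])
q = np.linalg.solve(Phi.T @ S @ Phi, Phi.T @ c)     # stationarity of J
x_ritz = Phi @ q

err = np.linalg.norm(x_ritz - x_true) / np.linalg.norm(x_true)
print(f"relative error of the {k}-term variational solution: {err:.2e}")
```

Here the exact solution happens to lie in the trial space, so the variational solution recovers it to machine precision; in general the Ritz answer is the best approximation, in the energy norm of the symmetric operator, within the chosen subspace.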
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spears, Robert Edward; Coleman, Justin Leigh
2015-08-01
Seismic analysis of nuclear structures is routinely performed using guidance provided in “Seismic Analysis of Safety-Related Nuclear Structures and Commentary (ASCE 4, 1998).” This document, which is currently under revision, provides detailed guidance on linear seismic soil-structure-interaction (SSI) analysis of nuclear structures. To accommodate the linear analysis, soil material properties are typically developed as shear modulus and damping ratio versus cyclic shear strain amplitude. A new Appendix in ASCE 4-2014 (draft) is being added to provide guidance for nonlinear time domain SSI analysis. To accommodate the nonlinear analysis, a more appropriate form of the soil material properties includes shear stress and energy absorbed per cycle versus shear strain. Ideally, nonlinear soil model material properties would be established with soil testing appropriate for the nonlinear constitutive model being used. However, much of the soil testing done for SSI analysis is performed for use with linear analysis techniques. Consequently, a method is described in this paper that uses soil test data intended for linear analysis to develop nonlinear soil material properties. To produce nonlinear material properties that are equivalent to the linear material properties, the linear and nonlinear model hysteresis loops are considered. For equivalent material properties, the shear stress at peak shear strain and the energy absorbed per cycle should match when comparing the linear and nonlinear model hysteresis loops. Consequently, nonlinear material properties are selected based on these criteria.
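The two matching quantities can be computed directly from linear test data. Using the standard relations τ_peak = G·γ, W = G·γ²/2, and ΔW = 4π·ξ·W (from the definition of damping ratio ξ = ΔW/(4πW)), a hedged sketch with illustrative modulus-reduction and damping curves (not values from the paper) is:

```python
import numpy as np

def targets_from_linear_data(gamma, G, xi):
    """Convert linear soil data (secant shear modulus G and damping ratio xi
    at cyclic shear strain gamma) into the two quantities a nonlinear model
    must reproduce: peak shear stress and energy absorbed per cycle."""
    tau_peak = G * gamma                                # stress at peak strain
    strain_energy = 0.5 * G * gamma ** 2                # W, peak stored energy
    energy_per_cycle = 4.0 * np.pi * xi * strain_energy  # dW = 4*pi*xi*W
    return tau_peak, energy_per_cycle

# Illustrative degradation curves over four strain amplitudes.
gamma = np.array([1e-5, 1e-4, 1e-3, 1e-2])      # cyclic shear strain
G = 60e6 * np.array([1.0, 0.9, 0.5, 0.15])      # Pa, degrading secant modulus
xi = np.array([0.01, 0.03, 0.10, 0.20])         # damping ratio

tau, dW = targets_from_linear_data(gamma, G, xi)
for g_, t_, w_ in zip(gamma, tau, dW):
    print(f"strain {g_:.0e}: peak stress {t_ / 1e3:8.2f} kPa, "
          f"energy/cycle {w_:.3e} J/m^3")
```

A nonlinear constitutive model whose hysteresis loop matches both columns at each strain amplitude is then, in the sense of the paper's criteria, equivalent to the linear data.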
Image analysis software for following progression of peripheral neuropathy
NASA Astrophysics Data System (ADS)
Epplin-Zapf, Thomas; Miller, Clayton; Larkin, Sean; Hermesmeyer, Eduardo; Macy, Jenny; Pellegrini, Marco; Luccarelli, Saverio; Staurenghi, Giovanni; Holmes, Timothy
2009-02-01
A relationship has been reported by several research groups [1 - 4] between the density and shapes of nerve fibers in the cornea and the existence and severity of peripheral neuropathy. Peripheral neuropathy is a complication of several prevalent diseases or conditions, including diabetes, HIV, prolonged alcohol overconsumption and aging. A common clinical technique for confirming the condition is intramuscular electromyography (EMG), which is invasive, so a noninvasive technique like the one proposed here carries important potential advantages for the physician and patient. A software program that automatically detects the nerve fibers, counts them and measures their shapes is being developed and tested. Tests were carried out with a database of subjects with levels of severity of diabetic neuropathy as determined by EMG testing. Results from this testing, which include a linear regression analysis, are shown.
Realizable optimal control for a remotely piloted research vehicle. [stability augmentation
NASA Technical Reports Server (NTRS)
Dunn, H. J.
1980-01-01
The design of a control system using linear-quadratic regulator (LQR) control law theory for time-invariant systems, in conjunction with an incremental gradient procedure, is presented. The incremental gradient technique reduces the full-state feedback controller design generated by the LQR algorithm to a realizable design. With a realizable controller, the feedback gains are based only on the available system outputs instead of on the full state. The design is for a remotely piloted research vehicle (RPRV) stability augmentation system. The design includes methods for accounting for noisy measurements, discrete controls with zero-order-hold outputs, and computational delay errors. Results from simulation studies of the response of the RPRV to a step in the elevator, together with frequency analysis techniques, are included to illustrate these effects and their influence on the controller design.
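The starting point of such a design, the full-state LQR gains, can be sketched by iterating the discrete-time Riccati difference equation. The model below is an invented two-state example, not the RPRV dynamics from the report; the incremental-gradient step that restricts the gains to measured outputs is omitted.

```python
import numpy as np

# Illustrative discrete-time model (NOT the RPRV dynamics).
Ad = np.array([[1.0, 0.1], [0.0, 0.98]])
Bd = np.array([[0.0], [0.1]])
Q = np.diag([1.0, 0.1])        # state weighting
R = np.array([[0.5]])          # control weighting

# Iterate the discrete-time Riccati difference equation to convergence:
#   K = (R + B'PB)^-1 B'PA,   P <- Q + A'P(A - BK)
P = Q.copy()
for _ in range(500):
    K = np.linalg.solve(R + Bd.T @ P @ Bd, Bd.T @ P @ Ad)
    P = Q + Ad.T @ P @ (Ad - Bd @ K)

eigs = np.linalg.eigvals(Ad - Bd @ K)
print("full-state gains:", np.round(K, 3))
print("closed-loop |eigenvalues|:", np.round(np.abs(eigs), 3))
```

The report's incremental gradient procedure would then search, starting from this full-state design, for output-feedback gains that preserve acceptable closed-loop behavior.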
NASA Astrophysics Data System (ADS)
Gupta, A.; Singh, P. J.; Gaikwad, D. Y.; Udupa, D. V.; Topkar, A.; Sahoo, N. K.
2018-02-01
An experimental setup is developed for the trace level detection of heavy water (HDO) using the off-axis integrated cavity output spectroscopy technique. The absorption spectrum of water samples is recorded in the spectral range of 7190.7 cm⁻¹ to 7191.5 cm⁻¹ with a diode laser as the light source. From the recorded water vapor absorption spectrum, the heavy water concentration is determined from the HDO and water lines. The effect of cavity gain nonlinearity with per-pass absorption is studied. The signal processing and data fitting procedure is devised to obtain linear calibration curves by including nonlinear cavity gain effects in the calculation. Initial calibration of mirror reflectivity is performed by measurements on a natural water sample. The signal processing and data fitting method has been validated by measurement of the HDO concentration in water samples over a wide range from 20 ppm to 2280 ppm, showing a linear calibration curve. The average measurement time is about 30 s. The experimental technique presented in this paper could be applied to the development of a portable instrument for fast measurement of water isotopic composition in heavy water plants and for the detection of heavy water leaks in pressurized heavy water reactors.
Abdul-Nasir, Aimi Salihah; Mashor, Mohd Yusoff; Mohamed, Zeehaida
2012-01-01
Malaria is one of the most serious global health problems, causing widespread suffering and death in various parts of the world. With the large number of cases diagnosed each year, early detection and accurate diagnosis, which facilitates prompt treatment, is an essential requirement to control malaria. For centuries now, manual microscopic examination of blood slides has remained the gold standard for malaria diagnosis. However, the low contrast of malaria images and variable smear quality are factors that may influence the accuracy of interpretation by microbiologists. In order to reduce this problem, this paper investigates the performance of the proposed contrast enhancement techniques, namely modified global and modified linear contrast stretching, as well as the conventional global and linear contrast stretching, applied to malaria images of the P. vivax species. The results show that the proposed modified global and modified linear contrast stretching techniques successfully increase the contrast of the parasites and the infected red blood cells compared with the conventional global and linear contrast stretching. Hence, the resultant images would be useful to microbiologists for identification of the various stages and species of malaria.
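The conventional linear contrast stretching that the paper's modified techniques build on can be sketched as follows. The percentile clipping, 8-bit range, and synthetic low-contrast image are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def linear_stretch(img, p_low=1.0, p_high=99.0):
    """Map the input intensity range [lo, hi] onto the full 8-bit range;
    percentiles make the stretch robust to outlier pixels."""
    lo, hi = np.percentile(img, [p_low, p_high])
    out = (img.astype(float) - lo) / (hi - lo) * 255.0
    return np.clip(out, 0, 255).astype(np.uint8)

# Synthetic low-contrast "blood smear" occupying only intensities 90..160.
rng = np.random.default_rng(6)
img = rng.integers(90, 161, size=(64, 64)).astype(np.uint8)

stretched = linear_stretch(img)
print("input range: ", img.min(), img.max())
print("output range:", stretched.min(), stretched.max())
```

The modified variants in the paper change how the stretching range is chosen so that parasite and infected-cell intensities, rather than the whole histogram, drive the mapping.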
Overview of the CHarring Ablator Response (CHAR) Code
NASA Technical Reports Server (NTRS)
Amar, Adam J.; Oliver, A. Brandon; Kirk, Benjamin S.; Salazar, Giovanni; Droba, Justin
2016-01-01
An overview of the capabilities of the CHarring Ablator Response (CHAR) code is presented. CHAR is a one-, two-, and three-dimensional unstructured continuous Galerkin finite-element heat conduction and ablation solver with both direct and inverse modes. Additionally, CHAR includes a coupled linear thermoelastic solver for determination of internal stresses induced from the temperature field and surface loading. Background on the development process, governing equations, material models, discretization techniques, and numerical methods is provided. Special focus is put on the available boundary conditions including thermochemical ablation and contact interfaces, and example simulations are included. Finally, a discussion of ongoing development efforts is presented.
Graph-cut based discrete-valued image reconstruction.
Tuysuzoglu, Ahmet; Karl, W Clem; Stojanovic, Ivana; Castañòn, David; Ünlü, M Selim
2015-05-01
Efficient graph-cut methods have been used with great success for labeling and denoising problems occurring in computer vision. Unfortunately, the presence of linear image mappings has prevented the use of these techniques in most discrete-amplitude image reconstruction problems. In this paper, we develop a graph-cut based framework for the direct solution of discrete-amplitude linear image reconstruction problems cast as regularized energy function minimizations. We first analyze the structure of discrete linear inverse problem cost functions to show that the obstacle to the application of graph-cut methods to their solution is the variable mixing caused by the presence of the linear sensing operator. We then propose a surrogate energy functional that overcomes the challenges imposed by the sensing operator yet can be utilized efficiently in existing graph-cut frameworks. We use this surrogate energy functional to devise a monotonic iterative algorithm for the solution of discrete-valued inverse problems. We first provide experiments using local convolutional operators and show the robustness of the proposed technique to noise and its stability to changes in the regularization parameter. Then we focus on nonlocal, tomographic examples where we consider limited-angle data problems. We compare our technique with state-of-the-art discrete and continuous image reconstruction techniques. Experiments show that the proposed method outperforms state-of-the-art techniques in challenging scenarios involving discrete-valued unknowns.
Calibration of the optical torque wrench.
Pedaci, Francesco; Huang, Zhuangxiong; van Oene, Maarten; Dekker, Nynke H
2012-02-13
The optical torque wrench is a laser trapping technique that expands the capability of standard optical tweezers to torque manipulation and measurement, using the linear polarization of the laser to orient tailored microscopic birefringent particles. The ability to measure torques of the order of kBT (∼4 pN nm) is especially important in the study of biophysical systems at the molecular and cellular level. Quantitative torque measurements rely on an accurate calibration of the instrument. Here we describe and implement a set of calibration approaches for the optical torque wrench, including methods that have direct analogs in linear optical tweezers as well as others developed specifically for the angular variables. We compare the different methods, analyze their differences, and make recommendations regarding their implementation.
NASA Astrophysics Data System (ADS)
Gorbunov, Michael E.; Cardellach, Estel; Lauritsen, Kent B.
2018-03-01
Linear and non-linear representations of wave fields constitute the basis of modern algorithms for the analysis of radio occultation (RO) data. Linear representations are implemented by Fourier Integral Operators, which allow for high-resolution retrieval of bending angles. Non-linear representations include the Wigner Distribution Function (WDF), which equals the pseudo-density of energy in ray space. Representations allow for filtering wave fields by suppressing some areas of ray space and mapping the field back from the transformed space to the initial one. We apply this technique to the retrieval of reflected rays from RO observations. The use of reflected rays may increase the accuracy of the retrieval of atmospheric refractivity. Reflected rays can be identified by visual inspection of WDF or spectrogram plots. Numerous examples from COSMIC data indicate that reflections are mostly observed over oceans or snow, in particular over Antarctica. We introduce a reflection index that characterizes the relative intensity of the reflected ray with respect to the direct ray. The index allows for the automatic identification of events with reflections. We use the radio-holographic estimate of the errors of the retrieved bending angle profiles of reflected rays. A comparison of indices evaluated for a large base of events, including visual identification of reflections, indicates good agreement with our definition of the reflection index.
Right-Sizing Statistical Models for Longitudinal Data
Wood, Phillip K.; Steinley, Douglas; Jackson, Kristina M.
2015-01-01
Arguments are proposed that researchers using longitudinal data should consider more and less complex statistical model alternatives to their initially chosen techniques in an effort to “right-size” the model to the data at hand. Such model comparisons may alert researchers who use poorly fitting overly parsimonious models to more complex better fitting alternatives, and, alternatively, may identify more parsimonious alternatives to overly complex (and perhaps empirically under-identified and/or less powerful) statistical models. A general framework is proposed for considering (often nested) relationships between a variety of psychometric and growth curve models. A three-step approach is proposed in which models are evaluated based on the number and patterning of variance components prior to selection of better-fitting growth models that explain both mean and variation/covariation patterns. The orthogonal, free-curve slope-intercept (FCSI) growth model is considered as a general model which includes, as special cases, many models including the Factor Mean model (FM, McArdle & Epstein, 1987), McDonald's (1967) linearly constrained factor model, Hierarchical Linear Models (HLM), Repeated Measures MANOVA, and the Linear Slope Intercept (LinearSI) Growth Model. The FCSI model, in turn, is nested within the Tuckerized factor model. The approach is illustrated by comparing alternative models in a longitudinal study of children's vocabulary and by comparison of several candidate parametric growth and chronometric models in a Monte Carlo study. PMID:26237507
NASA Astrophysics Data System (ADS)
Gardner, Robin P.; Xu, Libai
2009-10-01
The Center for Engineering Applications of Radioisotopes (CEAR) has been working for over a decade on the Monte Carlo library least-squares (MCLLS) approach for treating non-linear radiation analyzer problems including: (1) prompt gamma-ray neutron activation analysis (PGNAA) for bulk analysis, (2) energy-dispersive X-ray fluorescence (EDXRF) analyzers, and (3) carbon/oxygen tool analysis in oil well logging. This approach essentially consists of using Monte Carlo simulation to generate the libraries of all the elements to be analyzed plus any other required background libraries. These libraries are then used in the linear library least-squares (LLS) approach with unknown sample spectra to analyze for all elements in the sample. Iterations of this are used until the LLS values agree with the composition used to generate the libraries. The current status of the methods (and topics) necessary to implement the MCLLS approach is reported. This includes: (1) the Monte Carlo codes such as CEARXRF, CEARCPG, and CEARCO for forward generation of the necessary elemental library spectra for the LLS calculation for X-ray fluorescence, neutron capture prompt gamma-ray analyzers, and carbon/oxygen tools; (2) the correction of spectral pulse pile-up (PPU) distortion by Monte Carlo simulation with the code CEARIPPU; (3) generation of detector response functions (DRF) for detectors with linear and non-linear responses for Monte Carlo simulation of pulse-height spectra; and (4) the use of the differential operator (DO) technique to make the necessary iterations for non-linear responses practical. In addition to commonly analyzed single spectra, coincidence spectra or even two-dimensional (2-D) coincidence spectra can also be used in the MCLLS approach and may provide more accurate results.
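The linear library least-squares (LLS) step at the core of the MCLLS loop is an ordinary linear least-squares fit of a measured spectrum to a set of elemental library spectra. A minimal sketch, with synthetic Gaussian peaks standing in for the Monte Carlo-generated libraries:

```python
import numpy as np

rng = np.random.default_rng(1)
n_channels = 256
x = np.arange(n_channels)

# Hypothetical elemental library spectra: Gaussian peaks stand in for the
# Monte Carlo-generated libraries used in MCLLS
centers = [60, 120, 200]
libs = np.stack([np.exp(-0.5 * ((x - c) / 8.0) ** 2) for c in centers], axis=1)

# Unknown sample spectrum: a linear mix of the libraries plus noise
true_amounts = np.array([2.0, 0.5, 1.2])
spectrum = libs @ true_amounts + rng.normal(0.0, 0.01, n_channels)

# The linear library least-squares step recovers the elemental amounts
amounts, *_ = np.linalg.lstsq(libs, spectrum, rcond=None)
```

In the full MCLLS approach this fit would be repeated, regenerating the libraries by Monte Carlo at the newly estimated composition, until the two agree.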
Trautwein, C.M.; Rowan, L.C.
1987-01-01
Linear structural features and hydrothermally altered rocks that were interpreted from Landsat data have been used by the U.S. Geological Survey (USGS) in regional mineral resource appraisals for more than a decade. In the past, linear features and alterations have been incorporated into models for assessing mineral resources potential by manually overlaying these and other data sets. Recently, USGS research into computer-based geographic information systems (GIS) for mineral resources assessment programs has produced several new techniques for data analysis, quantification, and integration to meet assessment objectives.
Composite Materials Characterization and Development at AFWAL
NASA Technical Reports Server (NTRS)
Browning, C. E.
1984-01-01
The development of test methodology for characterizing matrix dominated failure modes is discussed emphasizing issues of matrix cracking, delamination under static loading, and the relationship of composite properties to matrix properties. Both strength characterization and classical techniques of linear elastic fracture mechanics were examined. Materials development studies are also discussed. Major areas of interest include acetylene-terminated and bismaleimide resins for 350 to 450 deg use, thermoplastics development, and failure resistant composite concepts.
Apparatus and method for epileptic seizure detection using non-linear techniques
Hively, Lee M.; Clapp, Ned E.; Daw, C. Stuart; Lawkins, William F.
1998-01-01
Methods and apparatus for automatically detecting epileptic seizures by monitoring and analyzing brain wave (EEG or MEG) signals. Steps include: acquiring the brain wave data from the patient; digitizing the data; obtaining nonlinear measures of the data via chaotic time series analysis; obtaining time serial trends in the nonlinear measures; determining that one or more trends in the nonlinear measures indicate a seizure, and providing notification of seizure occurrence.
A Technique of Fuzzy C-Mean in Multiple Linear Regression Model toward Paddy Yield
NASA Astrophysics Data System (ADS)
Syazwan Wahab, Nur; Saifullah Rusiman, Mohd; Mohamad, Mahathir; Amira Azmi, Nur; Che Him, Norziha; Ghazali Kamardan, M.; Ali, Maselan
2018-04-01
In this paper, we propose a hybrid model that combines a multiple linear regression model with the fuzzy c-means method. The research involved relationships among 20 variates of the topsoil, analyzed prior to planting paddy at standard fertilizer rates. The data came from multi-location rice trials carried out by MARDI at major paddy granaries in Peninsular Malaysia during the period from 2009 to 2012. Missing observations were estimated using mean estimation techniques. The data were analyzed using a multiple linear regression model alone and a combination of multiple linear regression with the fuzzy c-means method. Analyses of normality and multicollinearity indicate that the data are normally scattered without multicollinearity among the independent variables. Fuzzy c-means analysis clusters the paddy yield into two clusters before the multiple linear regression model is applied. The comparison between the two methods indicates that the hybrid of multiple linear regression and fuzzy c-means outperforms multiple linear regression alone, with a lower mean square error.
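The fuzzy c-means step can be sketched directly in NumPy. The two-group data below are synthetic, and the standard fuzziness exponent m = 2 is an assumption rather than a value from the paper:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100):
    """Plain-NumPy sketch of fuzzy c-means clustering."""
    # Greedy farthest-point initialization keeps the starting centers apart
    centers = [X[0]]
    for _ in range(c - 1):
        dist = np.min([np.linalg.norm(X - v, axis=1) for v in centers], axis=0)
        centers.append(X[np.argmax(dist)])
    centers = np.array(centers)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1))
        U = inv / inv.sum(axis=1, keepdims=True)   # memberships sum to 1
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
    return U, centers

# Synthetic stand-in for the clustered yield data: two separated groups
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)),
               rng.normal(3.0, 0.1, (20, 2))])
U, centers = fuzzy_c_means(X)
labels = U.argmax(axis=1)   # hard assignment used to split the regression data
```

A separate linear regression would then be fitted within each of the two clusters, as in the hybrid model above.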
New Techniques for Exciting Linearly Tapered Slot Antennas with Coplanar Waveguide
NASA Technical Reports Server (NTRS)
Simons, R. N.; Lee, R. Q.; Perl, T. D.
1992-01-01
Two new techniques for exciting a linearly tapered slot antenna (LTSA) with coplanar waveguide (CPW) are introduced. In the first approach, an air bridge is used to couple power from a CPW to an LTSA. In the second approach, power is electromagnetically coupled from a finite CPW (FCPW) to an LTSA. Measured results at 18 GHz show excellent return loss and radiation patterns.
Zeynoddin, Mohammad; Bonakdari, Hossein; Azari, Arash; Ebtehaj, Isa; Gharabaghi, Bahram; Riahi Madavar, Hossein
2018-09-15
A novel hybrid approach is presented that can more accurately predict monthly rainfall in a tropical climate by integrating a linear stochastic model with a powerful non-linear extreme learning machine method. This new hybrid method was evaluated under four general scenarios. In the first scenario, the modeling process is initiated without preprocessing the input data, as a base case, while in the other three scenarios, one-step and two-step procedures are utilized to make the model predictions more precise. These scenarios are based on combinations of stationarization techniques (i.e., differencing, seasonal and non-seasonal standardization, and spectral analysis) and normality transforms (i.e., Box-Cox, John and Draper, Yeo and Johnson, Johnson, Box-Cox-Mod, log, log standard, and Manly). In scenario 2, a one-step scenario, the stationarization methods are employed as preprocessing approaches. In scenarios 3 and 4, different combinations of normality transforms and stationarization methods are considered as preprocessing techniques. In total, 61 sub-scenarios are evaluated, resulting in 11,013 models (10,785 linear, 4 nonlinear, and 224 hybrid). The uncertainty of the linear, nonlinear, and hybrid models is examined by the Monte Carlo technique. The best preprocessing technique is the application of the Johnson normality transform followed by seasonal standardization (R2 = 0.99; RMSE = 0.6; MAE = 0.38; RMSRE = 0.1; MARE = 0.06; UI = 0.03; UII = 0.05). The results of the uncertainty analysis indicated good performance of the proposed technique (d-factor = 0.27; 95PPU = 83.57). Moreover, the results of the proposed methodology were compared with an evolutionary hybrid of an adaptive neuro-fuzzy inference system with the firefly algorithm (ANFIS-FFA), demonstrating that the new hybrid method outperformed ANFIS-FFA. Copyright © 2018 Elsevier Ltd. All rights reserved.
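Two of the preprocessing steps named above, a Box-Cox normality transform followed by seasonal (calendar-month) standardization, can be sketched on synthetic monthly rainfall. The gamma-distributed series below is an illustrative stand-in for the study's data:

```python
import numpy as np
from scipy import stats

# Synthetic skewed monthly rainfall: 20 years with an annual cycle
rng = np.random.default_rng(2)
months = np.arange(240)
seasonal = 50.0 + 30.0 * np.sin(2.0 * np.pi * months / 12.0)
rain = rng.gamma(2.0, seasonal / 2.0)        # strictly positive, right-skewed

# Box-Cox normality transform (defined only for positive data)
transformed, lam = stats.boxcox(rain)

# Seasonal standardization: remove each calendar month's mean and
# divide by that month's standard deviation
x = transformed.reshape(-1, 12)              # rows = years, columns = months
standardized = (x - x.mean(axis=0)) / x.std(axis=0)
```

The standardized residual series, stripped of skewness and the annual cycle, is what the linear stochastic model would then be fitted to.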
Yan, Guanyong; Wang, Xiangzhao; Li, Sikun; Yang, Jishuo; Xu, Dongbo; Erdmann, Andreas
2014-03-10
We propose an in situ aberration measurement technique based on an analytical linear model of through-focus aerial images. The aberrations are retrieved from aerial images of six isolated space patterns, which have the same width but different orientations. The imaging formulas of the space patterns are investigated and simplified, and then an analytical linear relationship between the aerial image intensity distributions and the Zernike coefficients is established. The linear relationship is composed of linear fitting matrices and rotation matrices, which can be calculated numerically in advance and utilized to retrieve Zernike coefficients. Numerical simulations using the lithography simulators PROLITH and Dr.LiTHO demonstrate that the proposed method can measure wavefront aberrations up to Z(37). Experiments on a real lithography tool confirm that our method can monitor lens aberration offset with an accuracy of 0.7 nm.
Calculation of cogging force in a novel slotted linear tubular brushless permanent magnet motor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Z.Q.; Hor, P.J.; Howe, D.
1997-09-01
There is an increasing requirement for controlled linear motion over short and long strokes, for example in the factory automation and packaging industries. Linear brushless PM motors could offer significant advantages over conventional actuation technologies, such as motor-driven cams and linkages and pneumatic rams, in terms of efficiency, operating bandwidth, speed and thrust control, stroke, and positional accuracy, and indeed over other linear motor technologies, such as induction motors. Here, a combined finite element/analytical technique for the prediction of cogging force in a novel topology of slotted linear brushless permanent magnet motor has been developed and validated. The various force components that influence cogging are pre-calculated by finite element analysis of some basic magnetic structures, facilitating the analytical synthesis of the resultant cogging force. The technique can be used to aid design for the minimization of cogging.
Nonlinear compensation techniques for magnetic suspension systems. Ph.D. Thesis - MIT
NASA Technical Reports Server (NTRS)
Trumper, David L.
1991-01-01
In aerospace applications, magnetic suspension systems may be required to operate over large variations in air gap. Thus the nonlinearities inherent in most types of suspensions have a significant effect. Specifically, large variations in operating point may make it difficult to design a linear controller which gives satisfactory stability and performance over a large range of operating points. One way to address this problem is through the use of nonlinear compensation techniques such as feedback linearization. Nonlinear compensators have received limited attention in the magnetic suspension literature. In recent years, progress has been made in the theory of nonlinear control systems, and in the sub-area of feedback linearization. The idea of feedback linearization is demonstrated using a second-order suspension system. In the context of this second-order suspension, sampling-rate issues in the implementation of feedback linearization are examined through simulation.
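For a single-axis suspension with the commonly assumed force law F = c(i/x)^2 (an illustrative plant, not necessarily the thesis's model), feedback linearization amounts to inverting that law so the closed loop obeys chosen linear dynamics exactly:

```python
import numpy as np

# Assumed plant: m*x'' = m*g - c*(i/x)**2, a mass suspended below an electromagnet
m, g, c = 1.0, 9.81, 1.0
kp, kd = 100.0, 20.0      # target linear dynamics, critically damped

def control(x, v, x_ref):
    """Feedback linearization: choose coil current i so the closed loop
    obeys x'' = -kp*(x - x_ref) - kd*v exactly (valid while g - a_des > 0)."""
    a_des = -kp * (x - x_ref) - kd * v
    return x * np.sqrt(m * (g - a_des) / c)

# Euler simulation of the regulated air gap
x, v, x_ref, dt = 0.05, 0.0, 0.04, 1e-3
for _ in range(5000):
    i = control(x, v, x_ref)
    a = g - c * (i / x) ** 2 / m      # plant acceleration equals a_des
    x, v = x + dt * v, v + dt * a
```

Because the current law cancels the 1/x^2 nonlinearity exactly, the same gains work at every operating point, which is the appeal of the technique over gain-scheduled linear controllers.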
Modelling and control of a microgrid including photovoltaic and wind generation
NASA Astrophysics Data System (ADS)
Hussain, Mohammed Touseef
The extensive increase in distributed generation (DG) penetration and the existence of multiple DG units at the distribution level have introduced the notion of the micro-grid. This thesis develops detailed non-linear and small-signal dynamic models of a microgrid that includes PV, wind, and conventional small-scale generation along with their power electronic interfaces and filters. The models developed evaluate the generation mix from the various DGs needed for satisfactory steady-state operation of the microgrid. To understand the interaction of the DGs with the microgrid, two simpler configurations were considered initially. The first consists of a microalternator, PV, and its power electronics; the second consists of a microalternator and a wind system, each connected to the power system grid. Nonlinear and linear state-space models of each microgrid are developed. Small-signal analysis showed that large participation of PV/wind can drive the microgrid to the brink of the unstable region without adequate control. Non-linear simulations are carried out to verify the results obtained through small-signal analysis. The role of the extent of generation mix of a composite microgrid consisting of wind, PV, and conventional generation was investigated next. The findings from the smaller systems were verified through nonlinear and small-signal modeling. A central supervisory capacitor energy storage controller interfaced through a STATCOM was proposed to monitor and enhance microgrid operation. The potential of various control inputs to provide additional damping to the system was evaluated through decomposition techniques. The signals identified as having damping content were employed to design the supervisory control system. The controller gains were tuned through an optimal pole placement technique. Simulation studies demonstrate that the STATCOM voltage phase angle and the PV inverter phase angle were the best inputs for enhanced stability boundaries.
Spatial effect of new municipal solid waste landfill siting using different guidelines.
Ahmad, Siti Zubaidah; Ahamad, Mohd Sanusi S; Yusoff, Mohd Suffian
2014-01-01
Proper implementation of landfill siting with the right regulations and constraints can prevent undesirable long-term effects. Different countries have their own guidelines on criteria for new landfill sites. In this article, we perform a comparative study of municipal solid waste landfill siting criteria stated in the policies and guidelines of eight different constitutional bodies from Malaysia, Australia, India, the U.S.A., Europe, China and the Middle East, and the World Bank. Subsequently, a geographic information system (GIS) multi-criteria evaluation model was applied to determine suitable new landfill sites for the different criterion parameters, using a constraint mapping technique and weighted linear combination. The Macro Modeler provided in the GIS-IDRISI Andes software helped in building and executing the multi-step models. In addition, the analytic hierarchy process technique was used to determine the criterion weights of the decision-maker's preferences as part of the weighted linear combination procedure. The differences in the spatial results for suitable sites signify that dissimilarities in guideline specifications and requirements will affect the decision-making process.
Multigrid direct numerical simulation of the whole process of flow transition in 3-D boundary layers
NASA Technical Reports Server (NTRS)
Liu, Chaoqun; Liu, Zhining
1993-01-01
A new technique was developed in this study that provides a successful numerical simulation of the whole process of flow transition in 3-D boundary layers, including linear growth, secondary instability, breakdown, and transition, at relatively low CPU cost. Most other spatial numerical simulations require high CPU cost and blow up at the stage of flow breakdown. A fourth-order finite difference scheme on stretched and staggered grids, a fully implicit time-marching technique, a semi-coarsening multigrid based on the so-called approximate line-box relaxation, and a buffer domain for the outflow boundary conditions were all used for high-order accuracy, good stability, and fast convergence. A new fine-coarse-fine grid mapping technique was developed to keep the code running after the laminar flow breaks down. The computational results are in good agreement with linear stability theory, secondary instability theory, and some experiments. The cost for a typical case with a 162 x 34 x 34 grid is around 2 CRAY-YMP CPU hours for 10 T-S periods.
An uncertainty model of acoustic metamaterials with random parameters
NASA Astrophysics Data System (ADS)
He, Z. C.; Hu, J. Y.; Li, Eric
2018-01-01
Acoustic metamaterials (AMs) are man-made composite materials. However, random uncertainties are unavoidable in the application of AMs due to manufacturing and material errors, which lead to variance in the physical responses of AMs. In this paper, an uncertainty model based on the change-of-variable perturbation stochastic finite element method (CVPS-FEM) is formulated to predict the probability density functions of the physical responses of AMs with random parameters. Three types of physical responses, including the band structure, mode shapes, and frequency response function of AMs, are studied in the uncertainty model, which is of great interest in the design of AMs. In this computation, the physical responses of stochastic AMs are expressed as linear functions of the pre-defined random parameters by using a first-order Taylor series expansion and perturbation technique. Then, based on the linear relationships between parameters and responses, the probability density functions of the responses can be calculated by the change-of-variable technique. Three numerical examples are employed to demonstrate the effectiveness of the CVPS-FEM for stochastic AMs, and the results are successfully validated by the Monte Carlo method.
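The change-of-variable step is easiest to see in one dimension: if a first-order perturbation expansion gives a response Y = a + bX of a random parameter X, then f_Y(y) = f_X((y - a)/b)/|b|. A sketch with assumed illustrative constants, cross-checked by Monte Carlo as in the paper:

```python
import numpy as np
from scipy import stats

# First-order (perturbation) model: response = a + b * parameter,
# with illustrative constants, not values from the paper
a, b = 2.0, 0.5
mu, sigma = 1.0, 0.2          # random input parameter ~ N(mu, sigma^2)

def response_pdf(y):
    """Change-of-variable formula: f_Y(y) = f_X((y - a) / b) / |b|."""
    return stats.norm.pdf((y - a) / b, loc=mu, scale=sigma) / abs(b)

# Monte Carlo cross-check, mirroring the validation used in the paper
rng = np.random.default_rng(3)
samples = a + b * rng.normal(mu, sigma, 200_000)

y = np.linspace(2.0, 3.0, 2001)                # mean 2.5 +/- 5 std. deviations
area = response_pdf(y).sum() * (y[1] - y[0])   # pdf should integrate to ~1
```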
Aerodynamics of a linear oscillating cascade
NASA Technical Reports Server (NTRS)
Buffum, Daniel H.; Fleeter, Sanford
1990-01-01
The steady and unsteady aerodynamics of a linear oscillating cascade are investigated using experimental and computational methods. Experiments are performed to quantify the torsion-mode oscillating cascade aerodynamics of the NASA Lewis Transonic Oscillating Cascade for subsonic inlet flowfields using two methods: simultaneous oscillation of all the cascaded airfoils at various values of interblade phase angle, and the unsteady aerodynamic influence coefficient technique. Analysis of these data and correlation with classical linearized unsteady aerodynamic analysis predictions indicate that the wind tunnel walls enclosing the cascade have, in some cases, a detrimental effect on the cascade unsteady aerodynamics. An Euler code for oscillating cascade aerodynamics is modified to incorporate improved upstream and downstream boundary conditions as well as the unsteady aerodynamic influence coefficient technique. The new boundary conditions are shown to improve the unsteady aerodynamic predictions of the code, and the computational unsteady aerodynamic influence coefficient technique is shown to be a viable alternative for calculation of oscillating cascade aerodynamics.
NASA Astrophysics Data System (ADS)
Farag, Mohammed; Fleckenstein, Matthias; Habibi, Saeid
2017-02-01
Model-order reduction and minimization of the CPU run-time while maintaining model accuracy are critical requirements for real-time implementation of lithium-ion electrochemical battery models. In this paper, an isothermal, continuous, piecewise-linear, electrode-average model is developed by using an optimal knot placement technique. The proposed model reduces the univariate nonlinear function describing the dependence of the electrode's open circuit potential on the state of charge to continuous piecewise-linear regions. The parameterization experiments were chosen to provide a trade-off between extensive experimental characterization techniques and purely identifying all parameters using optimization techniques. The model is then parameterized in each continuous piecewise-linear region. Applying the proposed technique cuts the CPU run-time by around 20% compared to the reduced-order, electrode-average model. Finally, model validation against real-time driving profiles (FTP-72, WLTP) demonstrates the ability of the model to predict the cell voltage accurately, with less than 2% error.
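The core idea, replacing a nonlinear open-circuit-potential curve with a continuous piecewise-linear one, can be sketched with a hypothetical OCV function and uniform knots (the paper optimizes the knot placement; uniform spacing is only a stand-in):

```python
import numpy as np

# Hypothetical open-circuit-voltage curve vs. state of charge (SOC);
# the tanh term mimics the steep low-SOC region of a real cell
def ocv(soc):
    return 3.0 + 0.7 * soc + 0.15 * np.tanh(8.0 * (soc - 0.2))

soc = np.linspace(0.0, 1.0, 1001)
knots = np.linspace(0.0, 1.0, 17)         # uniform knots; the paper optimizes placement
pwl = np.interp(soc, knots, ocv(knots))   # continuous piecewise-linear model

max_err = np.max(np.abs(pwl - ocv(soc)))  # worst-case voltage error, volts
```

Optimal knot placement would concentrate knots in the high-curvature region around SOC = 0.2, achieving the same error with fewer segments and hence a cheaper run-time model.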
End State: The Fallacy of Modern Military Planning
2017-04-06
operational planning for non-linear, complex scenarios requires application of non-linear, advanced planning techniques such as design methodology ... cannot be approached in a linear, mechanistic manner by a universal planning methodology. Theater/global campaign plans and theater strategies offer no ... strategic environments, and instead prescribes a universal linear methodology that pays no mind to strategic complexity. This universal application
From linear mechanics to nonlinear mechanics
NASA Technical Reports Server (NTRS)
Loeb, Julian
1955-01-01
Consideration is given to the techniques used in telecommunication where a nonlinear system (the modulator) results in a linear transposition of a signal. It is then shown that a similar method permits linearization of electromechanical devices or nonlinear mechanical devices. A sweep function plays the same role as the carrier wave in radio-electricity. The linearizations of certain nonlinear functionals are presented.
A review on creatinine measurement techniques.
Mohabbati-Kalejahi, Elham; Azimirad, Vahid; Bahrami, Manouchehr; Ganbari, Ahmad
2012-08-15
This paper reviews recent global trends in creatinine measurement techniques. Creatinine biosensors involve complex relationships between the biology of the blood sample and the micro-mechatronics to which it is subjected. Comparison between new and old methods shows that newer techniques (e.g., molecularly imprinted polymer (MIP)-based methods) outperform older ones (e.g., ELISA) in terms of stability and linear range. All methods, and their details for serum, plasma, urine, and blood samples, are surveyed. They are categorized into five main approaches: optical, electrochemical, impedimetric, ion-selective field-effect transistor (ISFET)-based, and chromatographic. The response time, detection limit, linear range, and selectivity of the reported sensors are discussed. The potentiometric measurement technique has the lowest response time, 4-10 s, and the lowest detection limit, 0.28 nmol L(-1), belongs to the chromatographic technique. Comparison among the various measurement techniques indicates that the best selectivity belongs to the MIP-based and chromatographic techniques. Copyright © 2012 Elsevier B.V. All rights reserved.
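Figures of merit such as the detection limit can be derived from a sensor's calibration line; a common sketch uses the 3-sigma convention LOD = 3 * sd_blank / slope. The calibration data and blank noise below are hypothetical:

```python
import numpy as np

# Hypothetical calibration data: sensor signal vs. creatinine concentration
conc = np.array([0.0, 10.0, 20.0, 40.0, 80.0, 160.0])     # micromol/L
signal = np.array([0.02, 0.55, 1.04, 2.08, 4.11, 8.15])   # arbitrary units

slope, intercept = np.polyfit(conc, signal, 1)   # linear-range calibration fit

sd_blank = 0.015                # assumed std. dev. of repeated blank readings
lod = 3.0 * sd_blank / slope    # 3-sigma detection-limit convention
```

The linear range is then the concentration span over which the straight-line fit holds, and sensitivity is simply the fitted slope.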
Menu-Driven Solver Of Linear-Programming Problems
NASA Technical Reports Server (NTRS)
Viterna, L. A.; Ferencz, D.
1992-01-01
This program assists inexperienced users in formulating linear-programming problems. A Linear Program Solver (ALPS) is a full-featured LP analysis program. It solves plain linear-programming problems as well as more complicated mixed-integer and pure-integer programs, and it also contains an efficient technique for the solution of purely binary linear-programming problems. Written entirely in IBM's APL2/PC software, Version 1.01. The packed program contains licensed material, property of IBM (copyright 1988, all rights reserved).
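A plain LP of the kind such a solver handles can be sketched with scipy.optimize.linprog (a generic LP routine, not ALPS itself):

```python
from scipy.optimize import linprog

# Maximize x + 2y subject to x + y <= 4, x <= 2, x >= 0, y >= 0.
# linprog minimizes, so the objective is negated.
res = linprog(c=[-1.0, -2.0],
              A_ub=[[1.0, 1.0], [1.0, 0.0]],
              b_ub=[4.0, 2.0],
              bounds=[(0, None), (0, None)],
              method="highs")
# Optimum is x = 0, y = 4 with objective value 8 (res.fun = -8)
```

Mixed-integer and binary variants, which ALPS also handles, additionally constrain some or all variables to integer values.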
A Flexible CUDA LU-based Solver for Small, Batched Linear Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tumeo, Antonino; Gawande, Nitin A.; Villa, Oreste
This chapter presents the implementation of a batched CUDA solver based on LU factorization for small linear systems. This solver may be used in applications such as reactive flow transport models, which apply the Newton-Raphson technique to linearize and iteratively solve the sets of non-linear equations that represent the reactions for tens of thousands to millions of physical locations. The implementation exploits somewhat counterintuitive GPGPU programming techniques: it assigns the solution of a matrix (representing a system) to a single CUDA thread, does not exploit shared memory, and employs dynamic memory allocation on the GPUs. These techniques enable our implementation to simultaneously solve sets of systems with over 100 equations and to employ LU decomposition with complete pivoting, providing the higher numerical accuracy required by certain applications. Other currently available batched linear solvers are limited in size and support only partial pivoting, although they may be faster in certain conditions. We discuss the code of our implementation and present a comparison with the other implementations, discussing the various tradeoffs in terms of performance and flexibility. This work will enable developers who need batched linear solvers to choose whichever implementation is most appropriate to the features and requirements of their applications, and even to implement dynamic switching approaches that choose the best implementation depending on the input data.
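Setting CUDA specifics aside, the batched pattern, one independent LU factor/solve per small system, can be sketched on the CPU with SciPy (which uses partial rather than complete pivoting):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(4)
batch, n = 1000, 8            # many small independent systems

# Shifted random matrices are comfortably well-conditioned
A = rng.normal(size=(batch, n, n)) + n * np.eye(n)
b = rng.normal(size=(batch, n))

# One LU factorization and solve per system: the role played by a
# single CUDA thread in the batched GPU solver described above
x = np.stack([lu_solve(lu_factor(A[k]), b[k]) for k in range(batch)])

residual = np.max(np.abs(np.einsum('kij,kj->ki', A, x) - b))
```

The GPU version parallelizes exactly this loop, trading per-system sophistication for throughput across the batch.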
Cole-Cole, linear and multivariate modeling of capacitance data for on-line monitoring of biomass.
Dabros, Michal; Dennewald, Danielle; Currie, David J; Lee, Mark H; Todd, Robert W; Marison, Ian W; von Stockar, Urs
2009-02-01
This work evaluates three techniques of calibrating capacitance (dielectric) spectrometers used for on-line monitoring of biomass: modeling of cell properties using the theoretical Cole-Cole equation, linear regression of dual-frequency capacitance measurements on biomass concentration, and multivariate (PLS) modeling of scanning dielectric spectra. The performance and robustness of each technique are assessed during a sequence of validation batches in two experimental settings of differing signal noise. In noisier conditions, the Cole-Cole model had significantly higher biomass concentration prediction errors than the linear and multivariate models. The PLS model was the most robust in handling signal noise. In less noisy conditions, the three models performed similarly. Estimates of the mean cell size were additionally made using the Cole-Cole and PLS models, the latter technique giving more satisfactory results.
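The dual-frequency linear regression technique amounts to an ordinary least-squares fit of biomass concentration against a capacitance signal. A sketch with entirely made-up calibration numbers (not data from the study):

```python
def fit_linear(x, y):
    """Ordinary least squares fit y ~ a*x + b (closed form)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    b = my - a * mx
    return a, b

# Hypothetical calibration data: dual-frequency capacitance difference
# (pF/cm) vs. offline biomass concentration (g/L).
dcap = [0.5, 1.1, 2.0, 3.1, 4.2]
biomass = [1.0, 2.1, 4.0, 6.2, 8.3]
a, b = fit_linear(dcap, biomass)
predicted = a * 2.5 + b  # on-line biomass estimate from a new reading
```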
NASA Technical Reports Server (NTRS)
Melcher, Kevin J.
1997-01-01
The NASA Lewis Research Center is developing analytical methods and software tools to create a bridge between the controls and computational fluid dynamics (CFD) disciplines. Traditionally, control design engineers have used coarse nonlinear simulations to generate information for the design of new propulsion system controls. However, such traditional methods are not adequate for modeling the propulsion systems of complex, high-speed vehicles like the High Speed Civil Transport. To properly model the relevant flow physics of high-speed propulsion systems, one must use simulations based on CFD methods. Such CFD simulations have become useful tools for engineers that are designing propulsion system components. The analysis techniques and software being developed as part of this effort are an attempt to evolve CFD into a useful tool for control design as well. One major aspect of this research is the generation of linear models from steady-state CFD results. CFD simulations, often used during the design of high-speed inlets, yield high resolution operating point data. Under a NASA grant, the University of Akron has developed analytical techniques and software tools that use these data to generate linear models for control design. The resulting linear models have the same number of states as the original CFD simulation, so they are still very large and computationally cumbersome. Model reduction techniques have been successfully applied to reduce these large linear models by several orders of magnitude without significantly changing the dynamic response. The result is an accurate, easy to use, low-order linear model that takes less time to generate than those generated by traditional means. The development of methods for generating low-order linear models from steady-state CFD is most complete at the one-dimensional level, where software is available to generate models with different kinds of input and output variables. 
One-dimensional methods have been extended somewhat so that linear models can also be generated from two- and three-dimensional steady-state results. Standard techniques are adequate for reducing the order of one-dimensional CFD-based linear models. However, reduction of linear models based on two- and three-dimensional CFD results is complicated by very sparse, ill-conditioned matrices. Some novel approaches are being investigated to solve this problem.
Intensity Modulation Techniques for Continuous-Wave Lidar for Column CO2 Measurements
NASA Astrophysics Data System (ADS)
Campbell, J. F.; Lin, B.; Obland, M. D.; Kooi, S. A.; Fan, T. F.; Meadows, B.; Browell, E. V.; Erxleben, W. H.; McGregor, D.; Dobler, J. T.; Pal, S.; O'Dell, C.
2017-12-01
Global and regional atmospheric carbon dioxide (CO2) measurements for the NASA Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) space mission and the Atmospheric Carbon and Transport (ACT) - America project are critical for improving our understanding of global CO2 sources and sinks. Advanced Intensity-Modulated Continuous-Wave (IM-CW) lidar techniques are investigated as a means of facilitating CO2 measurements from space and airborne platforms to meet the ASCENDS and ACT-America science measurement requirements. In recent numerical, laboratory, and flight experiments we have successfully used Binary Phase Shift Keying (BPSK) and linear swept-frequency modulations to uniquely discriminate surface lidar returns from intermediate aerosol and cloud returns. We demonstrate the utility of BPSK to eliminate sidelobes in the range profile as a means of making Integrated Path Differential Absorption (IPDA) column CO2 measurements in the presence of optically thin clouds, thereby eliminating bias errors caused by the clouds. Furthermore, high accuracy and precision ranging to the surface, as well as to the top of intermediate cloud layers, which is a requirement for the inversion of column CO2 number density measurements to column CO2 mixing ratios, has been demonstrated using new hyperfine interpolation techniques that take advantage of the periodicity of the modulation waveforms. This approach works well for both BPSK and linear swept-frequency modulation techniques and provides very high (sub-meter) range resolution. We compare BPSK to linear swept frequency and introduce a new technique that eliminates sidelobes from linear swept frequency in high-SNR situations, with results that rival BPSK. We also investigate the effects of non-linear modulators, which can in some circumstances degrade the orthogonality of the waveforms, and show how to avoid this.
These techniques are used in a new data processing architecture written in the C language to support the ASCENDS CarbonHawk Experiment Simulator (ACES) and ACT-America programs.
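The sidelobe behavior that motivates BPSK can be seen with a short binary phase code. The sketch below uses a Barker-13 sequence as a stand-in for the much longer PN codes a lidar would transmit; its aperiodic autocorrelation has a peak of 13 and sidelobes of magnitude at most 1, which is why a surface return stands out cleanly from nearby aerosol and cloud returns:

```python
# Barker-13 code: a binary phase sequence whose aperiodic
# autocorrelation has peak 13 and sidelobe magnitudes <= 1.
barker13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]

def autocorr(seq, lag):
    """Aperiodic autocorrelation of a +/-1 sequence at a given lag."""
    return sum(a * b for a, b in zip(seq, seq[lag:]))

peak = autocorr(barker13, 0)          # the strong "surface" return
sidelobes = [abs(autocorr(barker13, k)) for k in range(1, len(barker13))]
```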
NASA Astrophysics Data System (ADS)
Bordovsky, Michal; Catrysse, Peter; Dods, Steven; Freitas, Marcio; Klein, Jackson; Kotacka, Libor; Tzolov, Velko; Uzunov, Ivan M.; Zhang, Jiazong
2004-05-01
We present the state of the art for commercial design and simulation software in the 'front end' of photonic circuit design. One recent advance is to extend the flexibility of the software by using more than one numerical technique on the same optical circuit. There are a number of popular and proven techniques for analysis of photonic devices. Examples of these techniques include the Beam Propagation Method (BPM), the Coupled Mode Theory (CMT), and the Finite Difference Time Domain (FDTD) method. For larger photonic circuits, it may not be practical to analyze the whole circuit by any one of these methods alone, but often some smaller part of the circuit lends itself to at least one of these standard techniques. Later the whole problem can be analyzed on a unified platform. This kind of approach can enable analysis for cases that would otherwise be cumbersome, or even impossible. We demonstrate solutions for more complex structures ranging from the sub-component layout, through the entire device characterization, to the mask layout and its editing. We also present recent advances in the above well established techniques. This includes the analysis of nano-particles, metals, and non-linear materials by FDTD, photonic crystal design and analysis, and improved models for high concentration Er/Yb co-doped glass waveguide amplifiers.
Effect of Processing Conditions on the Anelastic Behavior of Plasma Sprayed Thermal Barrier Coatings
NASA Astrophysics Data System (ADS)
Viswanathan, Vaishak
2011-12-01
Plasma sprayed ceramic materials contain an assortment of microstructural defects, including pores, cracks, and interfaces arising from the droplet-based assemblage of the spray deposition technique. The defective architecture of the deposits introduces a novel "anelastic" response in the coatings, consisting of a non-linear and hysteretic stress-strain relationship under mechanical loading. It has been established that this anelasticity can be attributed to the relative movement of the embedded defects under varying stresses. While the non-linear response of the coatings arises from the opening/closure of defects, hysteresis is produced by frictional sliding among defect surfaces. Recent studies have indicated that the anelastic behavior of coatings can be a unique descriptor of their mechanical behavior and related to the defect configuration. In this dissertation, a multi-variable study employing systematic processing strategies was conducted to augment the understanding of various aspects of the reported anelastic behavior. A bi-layer curvature measurement technique was adapted to measure the anelastic properties of plasma sprayed ceramics. The quantification of anelastic parameters was done using a non-linear model proposed by Nakamura et al. An error analysis was conducted on the technique to determine the available margins for both experimental and computational errors. The error analysis was extended to evaluate its sensitivity to different coating microstructures. For this purpose, three coatings with significantly different microstructures were fabricated via tuning of process parameters. The three coatings were then subjected to systematically different strain ranges in order to understand the origin and evolution of anelasticity in different microstructures.
The last segment of this thesis attempts to capture the intricacies on the processing front and tries to evaluate and establish a correlation between them and the anelastic parameters.
Machine learning techniques for energy optimization in mobile embedded systems
NASA Astrophysics Data System (ADS)
Donohoo, Brad Kyoshi
Mobile smartphones and other portable battery operated embedded systems (PDAs, tablets) are pervasive computing devices that have emerged in recent years as essential instruments for communication, business, and social interactions. While performance, capabilities, and design are all important considerations when purchasing a mobile device, a long battery lifetime is one of the most desirable attributes. Battery technology and capacity has improved over the years, but it still cannot keep pace with the power consumption demands of today's mobile devices. This key limiter has led to a strong research emphasis on extending battery lifetime by minimizing energy consumption, primarily using software optimizations. This thesis presents two strategies that attempt to optimize mobile device energy consumption with negligible impact on user perception and quality of service (QoS). The first strategy proposes an application and user interaction aware middleware framework that takes advantage of user idle time between interaction events of the foreground application to optimize CPU and screen backlight energy consumption. The framework dynamically classifies mobile device applications based on their received interaction patterns, then invokes a number of different power management algorithms to adjust processor frequency and screen backlight levels accordingly. The second strategy proposes the usage of machine learning techniques to learn a user's mobile device usage pattern pertaining to spatiotemporal and device contexts, and then predict energy-optimal data and location interface configurations. By learning where and when a mobile device user uses certain power-hungry interfaces (3G, WiFi, and GPS), the techniques, which include variants of linear discriminant analysis, linear logistic regression, non-linear logistic regression, and k-nearest neighbor, are able to dynamically turn off unnecessary interfaces at runtime in order to save energy.
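Of the learning techniques listed, k-nearest neighbor is the simplest to sketch. The snippet below is a hypothetical illustration (the features, labels, and usage log are invented, not taken from the thesis) of predicting from spatiotemporal context whether a power-hungry interface should stay on:

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote of its k nearest training points.

    `train` is a list of ((features...), label) pairs; plain squared
    Euclidean distance is used here for simplicity.
    """
    dist = lambda pair: sum((a - b) ** 2 for a, b in zip(pair[0], query))
    nearest = sorted(train, key=dist)[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical usage log: (hour of day, location id) -> was WiFi used?
log = [((9, 1), "on"), ((10, 1), "on"), ((11, 1), "on"),
       ((20, 2), "off"), ((21, 2), "off"), ((22, 2), "off")]
decision = knn_predict(log, (10, 1))   # a mid-morning query at location 1
```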
Banach, Marzena; Wasilewska, Agnieszka; Dlugosz, Rafal; Pauk, Jolanta
2018-05-18
Due to the problem of aging societies, there is a need for smart buildings to monitor and support people with various disabilities, including rheumatoid arthritis. The aim of this paper is to elaborate novel techniques for wireless motion capture systems for the monitoring and rehabilitation of disabled people, for application in smart buildings. The proposed techniques are based on cross-verification of distance measurements between markers and transponders in an environment with highly variable parameters. To verify them, algorithms that enable comprehensive investigation of a system with different numbers of transponders and varying ambient parameters (temperature and noise) were developed. In the estimation of the real positions of markers, various linear and nonlinear filters were used. Several thousand tests were carried out for various system parameters and different marker locations. The results show that localization error may be reduced by as much as 90%. It was observed that repetition of measurement reduces localization error by as much as one order of magnitude. The proposed system, based on wireless techniques, offers high commercial potential. However, it requires extensive cooperation between teams, including hardware and software design, system modelling, and architectural design.
NASA Astrophysics Data System (ADS)
Dhingra, Shonali; Sandler, Roman; Rios, Rodrigo; Vuong, Cliff; Mehta, Mayank
All animals naturally perceive the abstract concept of space-time. A brain region called the hippocampus is known to be important in creating these perceptions, but the underlying mechanisms are unknown. In our lab we employ several experimental and computational techniques from physics to tackle this fundamental puzzle. Experimentally, we use ideas from nanoscience and materials science to develop techniques to measure the activity of hippocampal neurons in freely behaving animals. Computationally, we develop models to study neuronal activity patterns, which are point processes that are highly stochastic and multidimensional. We then apply these techniques to collect and analyze neuronal signals from rodents while they explore space in the real world or in virtual reality with various stimuli. Our findings show that under these conditions neuronal activity depends on various parameters, including sensory cues, both visual and auditory, and behavioral cues, including linear and angular position and velocity. Further, neuronal networks create internally generated rhythms, which influence the perception of space and time. In totality, these results further our understanding of how the brain develops a cognitive map of our surrounding space and keeps track of time.
Fixed gain and adaptive techniques for rotorcraft vibration control
NASA Technical Reports Server (NTRS)
Roy, R. H.; Saberi, H. A.; Walker, R. A.
1985-01-01
The results of an analysis effort performed to demonstrate the feasibility of employing approximate dynamical models and frequency-shaped cost functional control law design techniques for helicopter vibration suppression are presented. Both fixed gain and adaptive control designs based on linear second order dynamical models were implemented in a detailed Rotor Systems Research Aircraft (RSRA) simulation to validate these active vibration suppression control laws. Approximate models of fuselage flexibility were included in the RSRA simulation in order to more accurately characterize the structural dynamics. The results for both the fixed gain and adaptive approaches are promising and provide a foundation for pursuing further validation in more extensive simulation studies and in wind tunnel and/or flight tests.
Integer Linear Programming in Computational Biology
NASA Astrophysics Data System (ADS)
Althaus, Ernst; Klau, Gunnar W.; Kohlbacher, Oliver; Lenhof, Hans-Peter; Reinert, Knut
Computational molecular biology (bioinformatics) is a young research field that is rich in NP-hard optimization problems. The problem instances encountered are often huge and comprise thousands of variables. Since their introduction into the field of bioinformatics in 1997, integer linear programming (ILP) techniques have been successfully applied to many optimization problems. These approaches have added much momentum to development and progress in related areas. In particular, ILP-based approaches have become a standard optimization technique in bioinformatics. In this review, we present applications of ILP-based techniques developed by members and former members of Kurt Mehlhorn’s group. These techniques were introduced to bioinformatics in a series of papers and popularized by demonstration of their effectiveness and potential.
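The flavor of an ILP model can be shown on a toy instance. The sketch below solves a tiny 0/1 program by enumeration; real bioinformatics instances with thousands of variables require branch-and-cut solvers, and the weights and conflict pairs here are invented:

```python
from itertools import product

def solve_binary_ilp(weights, conflicts):
    """Exhaustively solve a toy 0/1 ILP:
    maximize sum(w[i] * x[i]) subject to x[i] + x[j] <= 1 for each
    conflicting pair (i, j), with each x[i] binary.
    """
    n = len(weights)
    best, best_x = 0, None
    for x in product((0, 1), repeat=n):
        if all(x[i] + x[j] <= 1 for i, j in conflicts):
            value = sum(w * xi for w, xi in zip(weights, x))
            if best_x is None or value > best:
                best, best_x = value, x
    return best, best_x

# Toy instance: pick mutually compatible sequence features
# maximizing total score; features 0 and 1 conflict.
score, choice = solve_binary_ilp([5, 4, 3], conflicts=[(0, 1)])
```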
A computational algorithm for spacecraft control and momentum management
NASA Technical Reports Server (NTRS)
Dzielski, John; Bergmann, Edward; Paradiso, Joseph
1990-01-01
Developments in the area of nonlinear control theory have shown how coordinate changes in the state and input spaces of a dynamical system can be used to transform certain nonlinear differential equations into equivalent linear equations. These techniques are applied to the control of a spacecraft equipped with momentum exchange devices. An optimal control problem is formulated that incorporates a nonlinear spacecraft model. An algorithm is developed for solving the optimization problem using feedback linearization to transform to an equivalent problem involving a linear dynamical constraint and a functional approximation technique to solve for the linear dynamics in terms of the control. The original problem is transformed into an unconstrained nonlinear quadratic program that yields an approximate solution to the original problem. Two examples are presented to illustrate the results.
ERIC Educational Resources Information Center
Schmitt, M. A.; And Others
1994-01-01
Compares traditional manure application planning techniques calculated to meet agronomic nutrient needs on a field-by-field basis with plans developed using computer-assisted linear programming optimization methods. Linear programming provided the most economical and environmentally sound manure application strategy. (Contains 15 references.) (MDH)
A Unique Technique to get Kaprekar Iteration in Linear Programming Problem
NASA Astrophysics Data System (ADS)
Sumathi, P.; Preethy, V.
2018-04-01
This paper explores a curious number popularly known as the Kaprekar constant, along with Kaprekar numbers. A large number of courses and different classroom capacities, with differences in study periods, make the assignment between classrooms and courses complicated. An approach for obtaining the minimum and maximum number of iterations needed to reach the Kaprekar constant for four-digit numbers is developed through linear programming techniques.
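The Kaprekar iteration itself is simple to state: sort a four-digit number's digits into descending and ascending order and subtract, repeating until 6174 appears. A straightforward sketch (independent of the paper's linear-programming formulation):

```python
def kaprekar_steps(n):
    """Number of Kaprekar iterations needed to reach 6174 from a
    four-digit number whose digits are not all equal."""
    steps = 0
    while n != 6174:
        digits = f"{n:04d}"                    # keep leading zeros
        hi = int("".join(sorted(digits, reverse=True)))
        lo = int("".join(sorted(digits)))
        n = hi - lo
        steps += 1
    return steps

steps = kaprekar_steps(3524)   # 3524 -> 3087 -> 8352 -> 6174
```

Every eligible four-digit number reaches 6174 in at most seven iterations, which is the maximum the paper's formulation seeks.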
Sequential design of discrete linear quadratic regulators via optimal root-locus techniques
NASA Technical Reports Server (NTRS)
Shieh, Leang S.; Yates, Robert E.; Ganesan, Sekar
1989-01-01
A sequential method employing classical root-locus techniques has been developed in order to determine the quadratic weighting matrices and discrete linear quadratic regulators of multivariable control systems. At each recursive step, an intermediate unity rank state-weighting matrix that contains some invariant eigenvectors of that open-loop matrix is assigned, and an intermediate characteristic equation of the closed-loop system containing the invariant eigenvalues is created.
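For intuition, the discrete LQR gain that such weighting matrices ultimately produce can be computed by iterating the Riccati recursion. A scalar-system sketch (not the paper's sequential root-locus procedure, just the standard recursion it builds on):

```python
def discrete_lqr_gain(a, b, q, r, iters=200):
    """Scalar discrete-time LQR via fixed-point Riccati iteration.

    Iterates P <- q + a*P*a - (a*P*b)**2 / (r + b*P*b) and returns the
    steady-state feedback gain k for the control law u = -k*x.
    """
    p = q
    for _ in range(iters):
        p = q + a * p * a - (a * p * b) ** 2 / (r + b * p * b)
    return (b * p * a) / (r + b * p * b)

# For a = b = q = r = 1 the Riccati fixed point is the golden ratio,
# giving k = 1/phi ~ 0.618 and a stable closed-loop pole a - b*k.
k = discrete_lqr_gain(a=1.0, b=1.0, q=1.0, r=1.0)
pole = 1.0 - k
```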
Linear Estimation of Particle Bulk Parameters from Multi-Wavelength Lidar Measurements
NASA Technical Reports Server (NTRS)
Veselovskii, Igor; Dubovik, Oleg; Kolgotin, A.; Korenskiy, M.; Whiteman, D. N.; Allakhverdiev, K.; Huseyinoglu, F.
2012-01-01
An algorithm for linear estimation of aerosol bulk properties such as particle volume, effective radius, and complex refractive index from multiwavelength lidar measurements is presented. The approach uses the fact that the total aerosol concentration can be well approximated as a linear combination of aerosol characteristics measured by multiwavelength lidar. Therefore, the aerosol concentration can be estimated from lidar measurements without the need to derive the size distribution, which entails more sophisticated procedures. The definition of the coefficients required for the linear estimates is based on an expansion of the particle size distribution in terms of the measurement kernels. Once the coefficients are established, the approach permits fast retrieval of aerosol bulk properties when compared with the full regularization technique. In addition, the straightforward estimation of bulk properties stabilizes the inversion, making it more resistant to noise in the optical data. Numerical tests demonstrate that for data sets containing three aerosol backscattering and two extinction coefficients (the so-called 3 + 2 configuration) the uncertainties in the retrieval of particle volume and surface area are below 45% when input data random uncertainties are below 20%. Moreover, using linear estimates allows reliable retrievals even when the number of input data is reduced. To evaluate the approach, the results obtained using this technique are compared with those based on the previously developed full inversion scheme that relies on the regularization procedure. Both techniques were applied to data measured by multiwavelength lidar at NASA/GSFC. The results obtained with both methods using the same observations are in good agreement. At the same time, the high speed of retrieval using linear estimates makes the method preferable for generating aerosol information from extended lidar observations.
To demonstrate the efficiency of the method, an extended time series of observations acquired in Turkey in May 2010 was processed using the linear estimates technique, permitting, for what we believe to be the first time, the retrieval of temporal-height distributions of particle parameters.
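At retrieval time, the linear-estimates idea reduces to a dot product between precomputed coefficients and the measured optical data, which is what makes it so much faster than full regularization. A sketch with purely illustrative numbers (the real coefficients come from the kernel expansion described in the abstract):

```python
def linear_estimate(coeffs, optical_data):
    """Bulk aerosol parameter as a fixed linear combination of the
    measured backscatter and extinction coefficients."""
    return sum(c * d for c, d in zip(coeffs, optical_data))

# "3 + 2" input: three backscatter (sr^-1 m^-1) and two extinction
# (m^-1) coefficients; all numbers below are hypothetical.
measurements = [2.1e-6, 1.5e-6, 0.9e-6, 6.0e-5, 4.2e-5]
volume_coeffs = [1.2e5, -3.4e4, 8.9e4, 2.0e3, 1.1e3]
volume = linear_estimate(volume_coeffs, measurements)
```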
Myoglobin structure and function: A multiweek biochemistry laboratory project.
Silverstein, Todd P; Kirk, Sarah R; Meyer, Scott C; Holman, Karen L McFarlane
2015-01-01
We have developed a multiweek laboratory project in which students isolate myoglobin and characterize its structure, function, and redox state. The important laboratory techniques covered in this project include size-exclusion chromatography, electrophoresis, spectrophotometric titration, and FTIR spectroscopy. Regarding protein structure, students work with computer modeling and visualization of myoglobin and its homologues, after which they spectroscopically characterize its thermal denaturation. Students also study protein function (ligand binding equilibrium) and are instructed on topics in data analysis (calibration curves, nonlinear vs. linear regression). This upper division biochemistry laboratory project is a challenging and rewarding one that not only exposes students to a wide variety of important biochemical laboratory techniques but also ties those techniques together to work with a single readily available and easily characterized protein, myoglobin. © 2015 International Union of Biochemistry and Molecular Biology.
A tire contact solution technique
NASA Technical Reports Server (NTRS)
Tielking, J. T.
1983-01-01
An efficient method for calculating the contact boundary and interfacial pressure distribution was developed. This solution technique utilizes the discrete Fourier transform to establish an influence coefficient matrix for the portion of the pressurized tire surface that may be in the contact region. This matrix is used in a linear algebra algorithm to determine the contact boundary and the array of forces within the boundary that are necessary to hold the tire in equilibrium against a specified contact surface. The algorithm also determines the normal and tangential displacements of those points on the tire surface that are included in the influence coefficient matrix. Displacements within and outside the contact region are calculated. The solution technique is implemented with a finite-element tire model that is based on orthotropic, nonlinear shell of revolution elements which can respond to nonaxisymmetric loads. A sample contact solution is presented.
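The contact step, finding nonnegative forces that reproduce a specified surface displacement, can be sketched with a projected Gauss-Seidel iteration on a small influence-coefficient matrix. This illustrates the general idea only, not the paper's algorithm, and the matrix values are invented:

```python
def contact_forces(C, gap, iters=500):
    """Projected Gauss-Seidel sweep for a frictionless contact patch.

    C is an influence-coefficient matrix (surface displacement per unit
    contact force); we seek forces f >= 0 whose displacements match the
    prescribed `gap`, clamping any force that would pull on the surface.
    """
    n = len(gap)
    f = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            resid = gap[i] - sum(C[i][j] * f[j] for j in range(n) if j != i)
            f[i] = max(0.0, resid / C[i][i])
    return f

# Toy 3-point patch with a symmetric, diagonally dominant matrix.
C = [[1.0, 0.2, 0.1],
     [0.2, 1.0, 0.2],
     [0.1, 0.2, 1.0]]
f = contact_forces(C, gap=[0.5, 0.8, 0.5])
```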
Jamieson, Andrew R.; Giger, Maryellen L.; Drukker, Karen; Li, Hui; Yuan, Yading; Bhooshan, Neha
2010-01-01
Purpose: In this preliminary study, recently developed unsupervised nonlinear dimension reduction (DR) and data representation techniques were applied to computer-extracted breast lesion feature spaces across three separate imaging modalities: Ultrasound (U.S.) with 1126 cases, dynamic contrast enhanced magnetic resonance imaging with 356 cases, and full-field digital mammography with 245 cases. Two methods for nonlinear DR were explored: Laplacian eigenmaps [M. Belkin and P. Niyogi, “Laplacian eigenmaps for dimensionality reduction and data representation,” Neural Comput. 15, 1373–1396 (2003)] and t-distributed stochastic neighbor embedding (t-SNE) [L. van der Maaten and G. Hinton, “Visualizing data using t-SNE,” J. Mach. Learn. Res. 9, 2579–2605 (2008)]. Methods: These methods attempt to map originally high dimensional feature spaces to more human-interpretable lower dimensional spaces while preserving both local and global information. The properties of these methods as applied to breast computer-aided diagnosis (CADx) were evaluated in the context of malignancy classification performance as well as in the visual inspection of the sparseness within the two-dimensional and three-dimensional mappings. Classification performance was estimated by using the reduced dimension mapped feature output as input into both linear and nonlinear classifiers: Markov chain Monte Carlo based Bayesian artificial neural network (MCMC-BANN) and linear discriminant analysis. The new techniques were compared to previously developed breast CADx methodologies, including automatic relevance determination and linear stepwise (LSW) feature selection, as well as a linear DR method based on principal component analysis. Using ROC analysis and 0.632+bootstrap validation, 95% empirical confidence intervals were computed for each classifier’s AUC performance. Results: In the large U.S. 
data set, sample high performance results include AUC0.632+=0.88 with 95% empirical bootstrap interval [0.787;0.895] for 13 ARD selected features and AUC0.632+=0.87 with interval [0.817;0.906] for four LSW selected features, compared to 4D t-SNE mapping (from the original 81D feature space) giving AUC0.632+=0.90 with interval [0.847;0.919], all using the MCMC-BANN. Conclusions: Preliminary results appear to indicate the capability of the new methods to match or exceed the classification performance of current advanced breast lesion CADx algorithms. While not appropriate as a complete replacement for feature selection in CADx problems, DR techniques offer a complementary approach that can aid elucidation of additional properties associated with the data. Specifically, the new techniques were shown to possess the added benefit of delivering sparse lower dimensional representations for visual interpretation, revealing the intricate data structure of the feature space. PMID:20175497
Simultaneously driven linear and nonlinear spatial encoding fields in MRI.
Gallichan, Daniel; Cocosco, Chris A; Dewdney, Andrew; Schultz, Gerrit; Welz, Anna; Hennig, Jürgen; Zaitsev, Maxim
2011-03-01
Spatial encoding in MRI is conventionally achieved by the application of switchable linear encoding fields. The general concept of the recently introduced PatLoc (Parallel Imaging Technique using Localized Gradients) encoding is to use nonlinear fields to achieve spatial encoding. Relaxing the requirement that the encoding fields must be linear may lead to improved gradient performance or reduced peripheral nerve stimulation. In this work, a custom-built insert coil capable of generating two independent quadratic encoding fields was driven with high-performance amplifiers within a clinical MR system. In combination with the three linear encoding fields, the combined hardware is capable of independently manipulating five spatial encoding fields. With the linear z-gradient used for slice-selection, there remain four separate channels to encode a 2D-image. To compare trajectories of such multidimensional encoding, the concept of a local k-space is developed. Through simulations, reconstructions using six gradient-encoding strategies were compared, including Cartesian encoding separately or simultaneously on both PatLoc and linear gradients as well as two versions of a radial-based in/out trajectory. Corresponding experiments confirmed that such multidimensional encoding is practically achievable and demonstrated that the new radial-based trajectory offers the PatLoc property of variable spatial resolution while maintaining finite resolution across the entire field-of-view. Copyright © 2010 Wiley-Liss, Inc.
Strong variable linear polarization in the cool active star II Peg
NASA Astrophysics Data System (ADS)
Rosén, Lisa; Kochukhov, Oleg; Wade, Gregg A.
2014-08-01
Magnetic fields of cool active stars are currently studied polarimetrically using only circular polarization observations. This provides limited information about the magnetic field geometry since circular polarization is only sensitive to the line-of-sight component of the magnetic field. Reconstructions of the magnetic field topology will therefore not be completely trustworthy when only circular polarization is used. On the other hand, linear polarization is sensitive to the transverse component of the magnetic field. By including linear polarization in the reconstruction the quality of the reconstructed magnetic map is dramatically improved. For that reason, we wanted to identify cool stars for which linear polarization could be detected at a level sufficient for magnetic imaging. Four active RS CVn binaries, II Peg, HR 1099, IM Peg, and σ Gem were observed with the ESPaDOnS spectropolarimeter at the Canada-France-Hawaii Telescope. Mean polarization profiles in all four Stokes parameters were derived using the multi-line technique of least-squares deconvolution (LSD). Not only was linear polarization successfully detected in all four stars in at least one observation, but also, II Peg showed an extraordinarily strong linear polarization signature throughout all observations. This qualifies II Peg as the first promising target for magnetic Doppler imaging in all four Stokes parameters and, at the same time, suggests that other such targets can possibly be identified.
Coronal Axis Measurement of the Optic Nerve Sheath Diameter Using a Linear Transducer.
Amini, Richard; Stolz, Lori A; Patanwala, Asad E; Adhikari, Srikar
2015-09-01
The true optic nerve sheath diameter cutoff value for detecting elevated intracranial pressure is variable. The variability may stem from the technique used to acquire sonographic measurements of the optic nerve sheath diameter as well as sonographic artifacts inherent to the technique. The purpose of this study was to compare the traditional visual axis technique to an infraorbital coronal axis technique for assessing the optic nerve sheath diameter using a high-frequency linear array transducer. We conducted a cross-sectional study at an academic medical center. Timed optic nerve sheath diameter measurements were obtained on both eyes of healthy adult volunteers with a 10-5-MHz broadband linear array transducer using both traditional visual axis and coronal axis techniques. Optic nerve sheath diameter measurements were obtained by 2 sonologists who graded the difficulty of each technique and were blinded to each other's measurements for each participant. A total of 42 volunteers were enrolled, yielding 84 optic nerve sheath diameter measurements. There were no significant differences in the measurements between the techniques on either eye (P = .23 [right]; P = .99 [left]). Additionally, there was no difference in the degree of difficulty obtaining the measurements between the techniques (P = .16). There was a statistically significant difference in the time required to obtain the measurements between the traditional and coronal techniques (P < .05). Infraorbital coronal axis measurements are similar to measurements obtained in the traditional visual axis. The infraorbital coronal axis technique is slightly faster to perform and is not technically challenging. © 2015 by the American Institute of Ultrasound in Medicine.
Active galactic nuclei as cosmological probes.
NASA Astrophysics Data System (ADS)
Lusso, Elisabeta; Risaliti, Guido
2018-01-01
I will present the latest results on our analysis of the non-linear X-ray to UV relation in a sample of optically selected quasars from the Sloan Digital Sky Survey, cross-matched with the most recent XMM-Newton and Chandra catalogues. I will show that this correlation is not only very tight, but can potentially be even tighter by including a further dependence on the emission-line full width at half maximum. This result implies that the non-linear X-ray to optical-ultraviolet luminosity relation is the manifestation of a ubiquitous physical mechanism, whose details are still unknown, that regulates the energy transfer from the accretion disc to the X-ray emitting corona in quasars. I will discuss the perspectives for AGN in the context of observational cosmology. I will introduce a novel technique to test the cosmological model using quasars as “standard candles” by employing the non-linear X-ray to UV relation as an absolute distance indicator.
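The "standard candle" idea in the last sentence can be made concrete: if log Lx = γ log Luv + β with γ ≠ 1, the luminosity distance drops out of the observed fluxes alone. A minimal sketch with illustrative (not fitted) values of γ and β:

```python
import numpy as np

# Quasars as standard candles via the non-linear X-ray to UV relation
#   log Lx = gamma * log Luv + beta,  gamma ~ 0.6  (values here illustrative).
# With Lx = 4*pi*DL^2*Fx and Luv = 4*pi*DL^2*Fuv, the luminosity distance DL
# can be solved from the observed fluxes because gamma != 1.

gamma, beta = 0.6, 8.0
log4pi = np.log10(4.0 * np.pi)

def log_DL_from_fluxes(log_Fx, log_Fuv):
    """Luminosity distance (log10) from observed X-ray and UV fluxes."""
    return (gamma * log_Fuv - log_Fx + beta - (1.0 - gamma) * log4pi) \
           / (2.0 * (1.0 - gamma))

# Round-trip check with a synthetic quasar at log10 DL = 27 (cgs, assumed).
log_DL_true, log_Luv = 27.0, 46.0
log_Lx  = gamma * log_Luv + beta
log_Fuv = log_Luv - log4pi - 2 * log_DL_true
log_Fx  = log_Lx  - log4pi - 2 * log_DL_true
log_DL_rec = log_DL_from_fluxes(log_Fx, log_Fuv)   # recovers log_DL_true
```

The inversion works only because the relation is non-linear in luminosity (γ ≠ 1); a linear relation would cancel the distance term entirely.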
Schubert, Christopher P J; Müller, Carsten; Bogner, Andreas; Giesselmann, Frank; Lemieux, Robert P
2017-05-14
Structural variants of the 'de Vries-like' mesogen 5-[4-(12,12,14,14,16,16-hexamethyl-12,14,16-trisilaheptadecyloxy)phenyl]-2-hexyloxypyrimidine (QL16-6), including two isomers with branched iso-tricarbosilane end-groups, were synthesized and their mesomorphic and 'de Vries-like' properties were characterized by polarized optical microscopy, differential scanning calorimetry, small angle and 2D X-ray scattering techniques. A comparative analysis of isomers with linear and branched tricarbosilane end-groups shows that they exhibit comparable mesomorphic and 'de Vries-like' properties. Furthermore, the difference in effective molecular length L eff between the linear and branched isomers in the SmA and SmC phases (ca. 4-5 Å), which was derived from 2D X-ray scattering experiments, suggests that the linear tricarbosilane end-group is hemispherical in shape on the time-average, as predicted by a DFT conformational analysis at the B3LYP/6-31G* level.
Design and characterization of a linear Hencken-type burner
NASA Astrophysics Data System (ADS)
Campbell, M. F.; Bohlin, G. A.; Schrader, P. E.; Bambha, R. P.; Kliewer, C. J.; Johansson, K. O.; Michelsen, H. A.
2016-11-01
We have designed and constructed a Hencken-type burner that produces a 38-mm-long linear laminar partially premixed co-flow diffusion flame. This burner was designed to produce a linear flame for studies of soot chemistry, combining the benefit of the conventional Hencken burner's laminar flames with the advantage of the slot burner's geometry for optical measurements requiring a long interaction distance. It is suitable for measurements using optical imaging diagnostics, line-of-sight optical techniques, or off-axis optical-scattering methods requiring either a long or short path length through the flame. This paper presents details of the design and operation of this new burner. We also provide characterization information for flames produced by this burner, including relative flow-field velocities obtained using hot-wire anemometry, temperatures along the centerline extracted using direct one-dimensional coherent Raman imaging, soot volume fractions along the centerline obtained using laser-induced incandescence and laser extinction, and transmission electron microscopy images of soot thermophoretically sampled from the flame.
Gender classification of running subjects using full-body kinematics
NASA Astrophysics Data System (ADS)
Williams, Christina M.; Flora, Jeffrey B.; Iftekharuddin, Khan M.
2016-05-01
This paper proposes a novel automated gender classification of subjects engaged in running. The machine learning techniques include preprocessing with principal component analysis followed by classification with linear discriminant analysis, nonlinear support vector machines, and decision stumps with AdaBoost. The dataset consists of 49 subjects (25 males, 24 females, 2 trials each), all equipped with approximately 80 retroreflective markers. The trials are reflective of the subject's entire body moving unrestrained through a capture volume at a self-selected running speed, thus producing highly realistic data. The classification accuracy using leave-one-out cross validation for the 49 subjects improves from 66.33% using linear discriminant analysis to 86.74% using the nonlinear support vector machine. Results are further improved to 87.76% by implementing a nonlinear decision-stump AdaBoost classifier. The experimental findings suggest that linear classification approaches are inadequate for classifying gender in a large dataset with subjects running in a moderately uninhibited environment.
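The linear stage of such a pipeline (PCA for dimensionality reduction, then two-class LDA) can be sketched with NumPy alone; the synthetic features below merely stand in for the marker kinematics, and all sizes and separations are illustrative:

```python
import numpy as np

# Sketch of the linear stage of the pipeline: PCA for dimensionality
# reduction, then two-class linear discriminant analysis (LDA).
# Synthetic data, not the motion-capture dataset.

rng = np.random.default_rng(5)
n, d = 98, 40
X = rng.standard_normal((n, d))
y = np.arange(n) % 2                     # two balanced classes
X[y == 1, :4] += 2.5                     # class separation in a few features

# PCA: project the centered data onto its top principal components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:10].T                       # keep 10 components (assumed)

# LDA: w = Sw^-1 (mu1 - mu0); threshold at the midpoint of projected means.
mu0, mu1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
Sw = np.cov(Z[y == 0].T) + np.cov(Z[y == 1].T)
w = np.linalg.solve(Sw, mu1 - mu0)
scores = Z @ w
thresh = (scores[y == 0].mean() + scores[y == 1].mean()) / 2.0
accuracy = ((scores > thresh).astype(int) == y).mean()
```

The nonlinear classifiers in the paper replace only the last step; the PCA projection is shared across all of them.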
Input-output characterization of an ultrasonic testing system by digital signal analysis
NASA Technical Reports Server (NTRS)
Karaguelle, H.; Lee, S. S.; Williams, J., Jr.
1984-01-01
The input/output characteristics of an ultrasonic testing system used for stress wave factor measurements were studied. The fundamentals of digital signal processing are summarized. The inputs and outputs are digitized and processed in a microcomputer using digital signal processing techniques. The entire ultrasonic test system, including transducers and all electronic components, is modeled as a discrete-time linear shift-invariant system. Then the impulse response and frequency response of the continuous time ultrasonic test system are estimated by interpolating the defining points in the unit sample response and frequency response of the discrete time system. It is found that the ultrasonic test system behaves as a linear phase bandpass filter. Good results were obtained for rectangular pulse inputs of various amplitudes and durations and for tone burst inputs whose center frequencies are within the passband of the test system and for single cycle inputs of various amplitudes. The input/output limits on the linearity of the system are determined.
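Modeling the whole chain as a discrete-time linear shift-invariant system means a single input/output pair determines the frequency response by spectral division. A toy sketch with an assumed filter standing in for the transducer chain:

```python
import numpy as np

# The test chain is modeled as a discrete-time linear shift-invariant system,
# y[n] = (h * x)[n], so one input/output pair determines the frequency
# response by regularized spectral division H = Y X* / (|X|^2 + eps).
# h_true is an assumed toy unit-sample response, not the real system's.

rng = np.random.default_rng(1)
h_true = np.array([0.0, 0.5, 1.0, 0.5, -0.2, -0.1])
x = rng.standard_normal(64)            # broadband test input
y = np.convolve(x, h_true)             # noise-free measured output, length 69

N = 128                                # zero-padding >= len(x) + len(h) - 1
X, Y = np.fft.rfft(x, N), np.fft.rfft(y, N)
H = Y * np.conj(X) / (np.abs(X) ** 2 + 1e-12)   # regularized deconvolution
h_est = np.fft.irfft(H, N)[:len(h_true)]        # recovered impulse response
```

The small regularizer guards against near-zero input spectrum bins; with measurement noise it would be raised to the noise floor, Wiener-style.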
NASA Technical Reports Server (NTRS)
Rudolph, T. H.; Perala, R. A.
1983-01-01
The objective of the work reported here is to develop a methodology by which electromagnetic measurements of inflight lightning strike data can be understood and extended to other aircraft. A linear and time invariant approach based on a combination of Fourier transform and three dimensional finite difference techniques is demonstrated. This approach can obtain the lightning channel current in the absence of the aircraft for given channel characteristic impedance and resistive loading. The model is applied to several measurements from the NASA F106B lightning research program. A non-linear three dimensional finite difference code has also been developed to study the response of the F106B to a lightning leader attachment. This model includes three species air chemistry and fluid continuity equations and can incorporate an experimentally based streamer formulation. Calculated responses are presented for various attachment locations and leader parameters. The results are compared qualitatively with measured inflight data.
NASA Technical Reports Server (NTRS)
Wu, Andy
1995-01-01
Allan Deviation computations of linear frequency synthesizer systems have been reported previously using real-time simulations. Even though simulation takes less time than actual measurement, it is still very time consuming to compute the Allan Deviation for long sample times with the desired confidence level. Also, noises such as flicker phase noise and flicker frequency noise cannot be simulated precisely. The use of frequency domain techniques can overcome these drawbacks. In this paper the system error model of a fictitious linear frequency synthesizer is developed and its performance using a Cesium (Cs) atomic frequency standard (AFS) as a reference is evaluated using frequency domain techniques. For a linear timing system, the power spectral density at the system output can be computed from known system transfer functions and known power spectral densities of the input noise sources. The resulting power spectral density can then be used to compute the Allan Variance at the system output. Sensitivities of the Allan Variance at the system output to each of its independent input noises are obtained, and they are valuable for design trade-offs and trouble-shooting.
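The PSD-to-Allan-variance step rests on the standard transfer-function integral, sigma_y^2(tau) = 2 ∫ S_y(f) sin^4(pi f tau)/(pi f tau)^2 df. A sketch that checks it against the closed form for white frequency noise (h0 and tau are illustrative values):

```python
import numpy as np

# Allan variance from a fractional-frequency power spectral density S_y(f):
#   sigma_y^2(tau) = 2 * integral_0^inf S_y(f) sin^4(pi f tau)/(pi f tau)^2 df
# Sanity check with white frequency noise S_y(f) = h0, whose closed form is
# sigma_y^2(tau) = h0 / (2 tau).

def allan_var_from_psd(S_y, tau, f_max=2000.0, n=500_000):
    f = np.linspace(1e-6, f_max, n)          # avoid f = 0 exactly
    x = np.pi * f * tau
    integrand = S_y(f) * np.sin(x) ** 4 / x ** 2
    return 2.0 * np.sum(integrand) * (f[1] - f[0])

h0, tau = 2e-22, 1.0
sigma2 = allan_var_from_psd(lambda f: h0 * np.ones_like(f), tau)
# sigma2 should be close to h0 / (2 * tau)
```

For a full synthesizer model, S_y(f) would be the sum of each input noise PSD shaped by the squared magnitude of its transfer function to the output.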
NASA Technical Reports Server (NTRS)
Gao, Bo-Cai; Goetz, Alexander F. H.
1992-01-01
Over the last decade, technological advances in airborne imaging spectrometers, having spectral resolution comparable with laboratory spectrometers, have made it possible to estimate biochemical constituents of vegetation canopies. Wessman estimated lignin concentration from data acquired with NASA's Airborne Imaging Spectrometer (AIS) over Blackhawk Island in Wisconsin. A stepwise linear regression technique was used to determine the single spectral channel or channels in the AIS data that best correlated with measured lignin contents using chemical methods. The regression technique does not take advantage of the spectral shape of the lignin reflectance feature as a diagnostic tool nor the increased discrimination among other leaf components with overlapping spectral features. A nonlinear least squares spectral matching technique was recently reported for deriving both the equivalent water thicknesses of surface vegetation and the amounts of water vapor in the atmosphere from contiguous spectra measured with the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). The same technique was applied to a laboratory reflectance spectrum of fresh, green leaves. The result demonstrates that the fresh leaf spectrum in the 1.0-2.5 microns region consists of spectral components of dry leaves and the spectral component of liquid water. A linear least squares spectral matching technique for retrieving equivalent water thickness and biochemical components of green vegetation is described.
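The linear least-squares spectral matching described in the last sentence amounts to solving for mixing coefficients against a design matrix of component spectra. A sketch with synthetic end members standing in for the dry-leaf and liquid-water spectra:

```python
import numpy as np

# Linear least-squares spectral matching: model a measured reflectance
# spectrum as a linear combination of component spectra and retrieve the
# mixing coefficients with one least-squares fit.  The two "end members"
# below are synthetic stand-ins, not laboratory spectra.

rng = np.random.default_rng(2)
wl = np.linspace(1.0, 2.5, 300)                 # wavelength grid, microns
dry   = 0.4 + 0.2 * np.sin(3 * wl)              # assumed dry-leaf spectrum
water = np.exp(-((wl - 1.9) / 0.2) ** 2)        # assumed water absorption shape

coeffs_true = np.array([0.7, 0.3])
measured = (coeffs_true[0] * dry + coeffs_true[1] * water
            + 0.002 * rng.standard_normal(wl.size))

A = np.column_stack([dry, water])               # design matrix of components
coeffs, *_ = np.linalg.lstsq(A, measured, rcond=None)
```

Unlike stepwise channel selection, the fit uses the full spectral shape of each component, which is exactly the advantage argued for above.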
Gaubas, E; Ceponis, T; Kusakovskij, J
2011-08-01
A technique for the combined measurement of barrier capacitance and spreading resistance profiles using a linearly increasing voltage pulse is presented. The technique is based on the measurement and analysis of current transients, due to the barrier and diffusion capacitance, and the spreading resistance, between a needle probe and sample. To control the impact of deep traps in the barrier capacitance, a steady state bias illumination with infrared light was employed. Measurements of the spreading resistance and barrier capacitance profiles using a stepwise positioned probe on cross sectioned silicon pin diodes and pnp structures are presented.
NASA Astrophysics Data System (ADS)
Boucher, Thomas F.; Ozanne, Marie V.; Carmosino, Marco L.; Dyar, M. Darby; Mahadevan, Sridhar; Breves, Elly A.; Lepore, Kate H.; Clegg, Samuel M.
2015-05-01
The ChemCam instrument on the Mars Curiosity rover is generating thousands of LIBS spectra and bringing interest in this technique to public attention. The key to interpreting Mars or any other type of LIBS data is a set of calibrations that relate laboratory standards to unknowns examined in other settings and enable predictions of chemical composition. Here, LIBS spectral data are analyzed using linear regression methods including partial least squares (PLS-1 and PLS-2), principal component regression (PCR), least absolute shrinkage and selection operator (lasso), elastic net, and linear support vector regression (SVR-Lin). These were compared against results from nonlinear regression methods including kernel principal component regression (K-PCR), polynomial kernel support vector regression (SVR-Py) and k-nearest neighbor (kNN) regression to discern the most effective models for interpreting chemical abundances from LIBS spectra of geological samples. The results were evaluated for 100 samples analyzed with 50 laser pulses at each of five locations averaged together. Wilcoxon signed-rank tests were employed to evaluate the statistical significance of differences among the nine models using their predicted residual sum of squares (PRESS) to make comparisons. For MgO, SiO2, Fe2O3, CaO, and MnO, the sparse models outperform all the others except for linear SVR, while for Na2O, K2O, TiO2, and P2O5, the sparse methods produce inferior results, likely because their emission lines in this energy range have lower transition probabilities. The strong performance of the sparse methods in this study suggests that use of dimensionality-reduction techniques as a preprocessing step may improve the performance of the linear models. Nonlinear methods tend to overfit the data and predict less accurately, while the linear methods proved to be more generalizable with better predictive performance.
These results are attributed to the high dimensionality of the data (6144 channels) relative to the small number of samples studied. The best-performing models were SVR-Lin for SiO2, MgO, Fe2O3, and Na2O, lasso for Al2O3, elastic net for MnO, and PLS-1 for CaO, TiO2, and K2O. Although these differences in model performance between methods were identified, most of the models produce comparable results when p ≤ 0.05 and all techniques except kNN produced statistically-indistinguishable results. It is likely that a combination of models could be used together to yield a lower total error of prediction, depending on the requirements of the user.
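For the linear models, the PRESS statistic used in the comparison above need not be computed by refitting: with hat matrix H = X(XᵀX)⁻¹Xᵀ, the leave-one-out residual is eᵢ/(1 − Hᵢᵢ). A sketch on toy data (not LIBS spectra) that verifies the identity against brute-force refitting:

```python
import numpy as np

# PRESS (predicted residual sum of squares) for ordinary linear least squares
# has a closed form: with hat matrix H = X (X^T X)^-1 X^T, the leave-one-out
# residual of sample i is e_i / (1 - H_ii), so no refitting is needed.

rng = np.random.default_rng(3)
X = np.column_stack([np.ones(50), rng.standard_normal((50, 3))])
y = X @ np.array([1.0, 2.0, -1.0, 0.5]) + 0.1 * rng.standard_normal(50)

H = X @ np.linalg.solve(X.T @ X, X.T)
e = y - H @ y                                   # ordinary residuals
press = np.sum((e / (1 - np.diag(H))) ** 2)     # closed-form PRESS

# Brute-force check: refit with each sample left out.
press_loo = 0.0
for i in range(50):
    m = np.ones(50, bool); m[i] = False
    beta = np.linalg.lstsq(X[m], y[m], rcond=None)[0]
    press_loo += (y[i] - X[i] @ beta) ** 2
```

The shortcut applies to PLS, lasso, and kernel models only approximately, which is why cross-validated refits are typically used there.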
Apparatus and method for epileptic seizure detection using non-linear techniques
Hively, L.M.; Clapp, N.E.; Daw, C.S.; Lawkins, W.F.
1998-04-28
Methods and apparatus are disclosed for automatically detecting epileptic seizures by monitoring and analyzing brain wave (EEG or MEG) signals. Steps include: acquiring the brain wave data from the patient; digitizing the data; obtaining nonlinear measures of the data via chaotic time series analysis; obtaining time serial trends in the nonlinear measures; determining that one or more trends in the nonlinear measures indicate a seizure, and providing notification of seizure occurrence. 76 figs.
Glaser, I
1982-04-01
By combining a lenslet array with masks it is possible to obtain a noncoherent optical processor capable of computing in parallel generalized 2-D discrete linear transformations. We present here an analysis of such lenslet array processors (LAP). The effect of several errors, including optical aberrations, diffraction, vignetting, and geometrical and mask errors, are calculated, and guidelines to optical design of LAP are derived. Using these results, both ultimate and practical performances of LAP are compared with those of competing techniques.
Delamination growth in composite materials
NASA Technical Reports Server (NTRS)
Gillespie, J. W., Jr.; Carlsson, L. A.; Pipes, R. B.; Rothschilds, R.; Trethewey, B.; Smiley, A.
1986-01-01
The Double Cantilever Beam (DCB) and the End Notched Flexure (ENF) specimens are employed to characterize Mode I and Mode II interlaminar fracture resistance of graphite/epoxy (CYCOM 982) and graphite/PEEK (APC2) composites. Sizing of test specimen geometries to achieve crack growth in the linear elastic regime is presented. Data reduction schemes based upon beam theory are derived for the ENF specimen and include the effects of shear deformation and friction between crack surfaces on compliance, C, and strain energy release rate, G sub II. Finite element (FE) analyses of the ENF geometry including the contact problem with friction are presented to assess the accuracy of beam theory expressions for C and G sub II. Virtual crack closure techniques verify that the ENF specimen is a pure Mode II test. Beam theory expressions are shown to be conservative by 20 to 40 percent for typical unidirectional test specimen geometries. A FE parametric study investigating the influence of delamination length and depth, span, thickness and material properties on G sub II is presented. Mode I and II interlaminar fracture test results are presented. Important experimental parameters are isolated, such as precracking techniques, rate effects, and nonlinear load-deflection response. It is found that subcritical crack growth and inelastic materials behavior, responsible for the observed nonlinearities, are highly rate-dependent phenomena with high rates generally leading to linear elastic response.
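For reference, the classical uncorrected beam-theory expressions for the ENF specimen, to which the analysis above adds shear-deformation and friction terms, can be sketched as follows (a is the crack length, 2L the span, b the width, 2h the thickness, E_1 the axial flexural modulus, P the applied load; these are the standard Russell-Street results, quoted here as an assumption rather than from the paper):

```latex
C = \frac{2L^{3} + 3a^{3}}{8 E_{1} b h^{3}},
\qquad
G_{II} = \frac{9 P^{2} a^{2}}{16 E_{1} b^{2} h^{3}}
       = \frac{9 a^{2} P^{2} C}{2 b \left( 2L^{3} + 3a^{3} \right)}
```

The 20 to 40 percent conservatism reported above is measured relative to expressions of this form.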
Quantitative ultrasonic evaluation of concrete structures using one-sided access
NASA Astrophysics Data System (ADS)
Khazanovich, Lev; Hoegh, Kyle
2016-02-01
Nondestructive diagnostics of concrete structures is an important and challenging problem. A recent introduction of array ultrasonic dry point contact transducer systems offers opportunities for quantitative assessment of the subsurface condition of concrete structures, including detection of defects and inclusions. The methods described in this paper are developed for signal interpretation of shear wave impulse response time histories from multiple fixed-distance transducer pairs in a self-contained ultrasonic linear array. This included generalizing Kirchhoff migration-based synthetic aperture focusing technique (SAFT) reconstruction methods to handle the spatially diverse transducer pair locations, creating expanded virtual arrays with associated reconstruction methods, and creating automated reconstruction interpretation methods for reinforcement detection and stochastic flaw detection. The reconstruction techniques developed in this study were validated using the results of laboratory and field forensic studies. Applicability of the developed methods for solving practical engineering problems was demonstrated.
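The SAFT reconstruction at the core of the method is, in its simplest form, delay-and-sum: each image pixel accumulates every transmit-receive A-scan at the sample matching the two-leg travel time. A toy sketch with one synthetic point reflector (the geometry and wave speed are illustrative, not the paper's array):

```python
import numpy as np

# Delay-and-sum SAFT sketch: synthesize A-scans for a single point reflector,
# then focus every transmit/receive pair onto an image grid.  Speed, array
# geometry, and pulse shape are all assumed toy values.

c, fs = 2500.0, 1e6                          # shear speed (m/s), sample rate (Hz)
xt = np.linspace(0.0, 0.09, 10)              # 10 transducer positions on surface (m)
scat = (0.045, 0.04)                         # point reflector at x=45 mm, z=40 mm

n_t = 400
t = np.arange(n_t) / fs
ascans = np.zeros((len(xt), len(xt), n_t))   # one A-scan per tx/rx pair
for i, xi in enumerate(xt):
    for j, xj in enumerate(xt):
        d = np.hypot(xi - scat[0], scat[1]) + np.hypot(xj - scat[0], scat[1])
        ascans[i, j] = np.exp(-((t - d / c) * fs / 2) ** 2)   # Gaussian echo

# Delay-and-sum reconstruction on a coarse pixel grid.
xs = np.linspace(0.0, 0.09, 46); zs = np.linspace(0.01, 0.08, 36)
img = np.zeros((len(zs), len(xs)))
for i, xi in enumerate(xt):
    for j, xj in enumerate(xt):
        for k, z in enumerate(zs):
            d = np.hypot(xi - xs, z) + np.hypot(xj - xs, z)   # two-leg path, all xs
            idx = np.clip(np.round(d / c * fs).astype(int), 0, n_t - 1)
            img[k] += ascans[i, j, idx]

peak = np.unravel_index(np.argmax(img), img.shape)   # brightest pixel
```

The reconstructed maximum lands at the reflector because only there do all transducer pairs add coherently; the paper's Kirchhoff-migration formulation generalizes this with amplitude and obliquity weighting.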
Application of Nearly Linear Solvers to Electric Power System Computation
NASA Astrophysics Data System (ADS)
Grant, Lisa L.
To meet the future needs of the electric power system, improvements need to be made in the areas of power system algorithms, simulation, and modeling, specifically to achieve a time frame that is useful to industry. If power system time-domain simulations could run in real-time, then system operators would have situational awareness to implement online control and avoid cascading failures, significantly improving power system reliability. Several power system applications rely on the solution of a very large linear system. As the demands on power systems continue to grow, there is a greater computational complexity involved in solving these large linear systems within reasonable time. This project expands on the current work in fast linear solvers, developed for solving symmetric and diagonally dominant linear systems, in order to produce power system specific methods that can be solved in nearly-linear run times. The work explores a new theoretical method that is based on ideas in graph theory and combinatorics. The technique builds a chain of progressively smaller approximate systems with preconditioners based on the system's low stretch spanning tree. The method is compared to traditional linear solvers and shown to reduce the time and iterations required for an accurate solution, especially as the system size increases. A simulation validation is performed, comparing the solution capabilities of the chain method to LU factorization, which is the standard linear solver for power flow. The chain method was successfully demonstrated to produce accurate solutions for power flow simulation on a number of IEEE test cases, and a discussion on how to further improve the method's speed and accuracy is included.
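A low-stretch spanning-tree preconditioner is beyond a short sketch, but the outer iteration such a chain accelerates is standard preconditioned conjugate gradient on a symmetric diagonally dominant system; here with a simple Jacobi (diagonal) preconditioner for illustration:

```python
import numpy as np

# Preconditioned conjugate gradient (PCG) on a symmetric diagonally dominant
# system.  The chain method would supply a spanning-tree-based preconditioner;
# here a plain Jacobi preconditioner stands in for it on a toy system.

def pcg(A, b, M_inv, tol=1e-10, max_iter=500):
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r                 # diagonal preconditioner, applied elementwise
    p = z.copy()
    rz = r @ z
    for it in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, it + 1
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# SDD test matrix: 1-D path-graph Laplacian plus a small diagonal shift.
n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1) + 0.01 * np.eye(n)
b = np.ones(n)
x, iters = pcg(A, b, M_inv=1.0 / np.diag(A))
```

The nearly-linear-time claim comes entirely from the quality of the preconditioner; the iteration itself is unchanged.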
NASA Astrophysics Data System (ADS)
Bertolesi, Elisa; Milani, Gabriele; Poggi, Carlo
2016-12-01
Two FE modeling techniques are presented and critically discussed for the non-linear analysis of tuff masonry panels reinforced with FRCM and subjected to standard diagonal compression tests. The specimens, tested at the University of Naples (Italy), are unreinforced and FRCM-retrofitted walls. The extensive characterization of the constituent materials allowed the adoption of very sophisticated numerical modeling techniques. In particular, the results obtained by means of a micro-modeling strategy and a homogenization approach are compared. The first modeling technique is a three-dimensional heterogeneous micro-model in which the constituent materials (bricks, joints, reinforcing mortar and reinforcing grid) are modeled separately. The second approach is based on a two-step homogenization procedure, previously developed by the authors, where the elementary cell is discretized by means of three-noded plane stress elements and non-linear interfaces. The non-linear structural analyses are performed replacing the homogenized orthotropic continuum with a rigid element and non-linear spring assemblage (RBSM). All the simulations presented here are performed using the commercial software Abaqus. Pros and cons of the two approaches are discussed with reference to their reliability in reproducing global force-displacement curves and crack patterns, as well as to the rather different computational effort required by the two strategies.
Reference governors for controlled belt restraint systems
NASA Astrophysics Data System (ADS)
van der Laan, E. P.; Heemels, W. P. M. H.; Luijten, H.; Veldpaus, F. E.; Steinbuch, M.
2010-07-01
Today's restraint systems typically include a number of airbags, and a three-point seat belt with load limiter and pretensioner. For the class of real-time controlled restraint systems, the restraint actuator settings are continuously manipulated during the crash. This paper presents a novel control strategy for these systems. The control strategy developed here is based on a combination of model predictive control and reference management, in which a non-linear device - a reference governor (RG) - is added to a primal closed-loop controlled system. This RG determines an optimal setpoint in terms of injury reduction and constraint satisfaction by solving a constrained optimisation problem. Prediction of the vehicle motion, required to predict future constraint violation, is included in the design and is based on past crash data, using linear regression techniques. Simulation results with MADYMO models show that, with ideal sensors and actuators, a significant reduction (45%) of the peak chest acceleration can be achieved, without prior knowledge of the crash. Furthermore, it is shown that the algorithms are sufficiently fast to be implemented online.
NASA Technical Reports Server (NTRS)
Nakazawa, Shohei
1991-01-01
Formulations and algorithms implemented in the MHOST finite element program are discussed. The code uses a novel concept of the mixed iterative solution technique for efficient 3-D computations of turbine engine hot section components. The general framework of the variational formulation and the solution algorithms, derived from the mixed three-field Hu-Washizu principle, is discussed. This formulation enables the use of nodal interpolation for coordinates, displacements, strains, and stresses. The algorithmic description of the mixed iterative method includes variations for the quasi-static, transient dynamic, and buckling analyses. The global-local analysis procedure referred to as subelement refinement is developed in the framework of the mixed iterative solution and is presented in detail. The numerically integrated isoparametric elements implemented in this framework are discussed. Methods to filter certain parts of the strain and to project element-discontinuous quantities to the nodes are developed for a family of linear elements. Integration algorithms are described for the linear and nonlinear equations included in the MHOST program.
NASA Technical Reports Server (NTRS)
Cai, Zhiqiang; Manteuffel, Thomas A.; McCormick, Stephen F.
1996-01-01
In this paper, we study the least-squares method for the generalized Stokes equations (including linear elasticity) based on the velocity-vorticity-pressure formulation in d = 2 or 3 dimensions. The least-squares functional is defined in terms of the sum of the L(exp 2)- and H(exp -1)-norms of the residual equations, which is weighted appropriately by the Reynolds number. Our approach for establishing ellipticity of the functional does not use ADN theory, but is founded more on basic principles. We also analyze the case where the H(exp -1)-norm in the functional is replaced by a discrete functional to make the computation feasible. We show that the resulting algebraic equations can be uniformly preconditioned by well-known techniques.
NASA Technical Reports Server (NTRS)
Cooke, C. H.
1975-01-01
STICAP (Stiff Circuit Analysis Program) is a FORTRAN 4 computer program written for the CDC-6400-6600 computer series and SCOPE 3.0 operating system. It provides the circuit analyst a tool for automatically computing the transient responses and frequency responses of large linear time invariant networks, both stiff and nonstiff (algorithms and numerical integration techniques are described). The circuit description and user's program input language is engineer-oriented, making simple the task of using the program. Engineering theories underlying STICAP are examined. A user's manual is included which explains user interaction with the program and gives results of typical circuit design applications. Also, the program structure from a systems programmer's viewpoint is depicted and flow charts and other software documentation are given.
On detection of median filtering in digital images
NASA Astrophysics Data System (ADS)
Kirchner, Matthias; Fridrich, Jessica
2010-01-01
In digital image forensics, it is generally accepted that intentional manipulations of the image content are most critical, and hence numerous forensic methods focus on the detection of such 'malicious' post-processing. However, it is also beneficial to know as much as possible about the general processing history of an image, including content-preserving operations, since they can affect the reliability of forensic methods in various ways. In this paper, we present a simple yet effective technique to detect median filtering in digital images, a widely used denoising and smoothing operator. As a great variety of forensic methods relies on some kind of linearity assumption, detection of non-linear median filtering is of particular interest. The effectiveness of our method is backed with experimental evidence on a large image database.
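One concrete cue behind such detectors is "streaking": median filtering produces an excess of exactly-zero differences between adjacent pixels. A simplified sketch of that statistic on synthetic noise (a simplification for illustration, not the paper's detector):

```python
import numpy as np

# Streaking cue for median-filter detection: median-filtered images show an
# excess of exactly-zero first differences between horizontally adjacent
# pixels, because neighboring filter windows share most of their samples.

def zero_diff_fraction(img):
    d = np.diff(img.astype(int), axis=1)
    return np.mean(d == 0)

def median3x3(img):
    # 3x3 median filter via stacked shifted copies (interior pixels only).
    h, w = img.shape
    stack = [img[i:h - 2 + i, j:w - 2 + j] for i in range(3) for j in range(3)]
    return np.median(np.stack(stack), axis=0)

rng = np.random.default_rng(4)
original = rng.integers(0, 256, (64, 64))
filtered = median3x3(original)

f0 = zero_diff_fraction(original)   # low: adjacent noise pixels rarely tie
f1 = zero_diff_fraction(filtered)   # much higher after median filtering
```

Real detectors refine this with conditional difference histograms and machine-learned thresholds, but the underlying non-linearity signature is the same.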
NASA Technical Reports Server (NTRS)
Kankam, M. David; Rauch, Jeffrey S.; Santiago, Walter
1992-01-01
This paper discusses the effects of variations in system parameters on the dynamic behavior of the Free-Piston Stirling Engine/Linear Alternator (FPSE/LA)-load system. The mathematical formulations incorporate both the mechanical and thermodynamic properties of the FPSE, as well as the electrical equations of the connected load. A state-space technique in the frequency domain is applied to the resulting system of equations to facilitate the evaluation of parametric impacts on the system dynamic stability. Also included is a discussion on the system transient stability as affected by sudden changes in some key operating conditions. Some representative results are correlated with experimental data to verify the model and analytic formulation accuracies. Guidelines are given for ranges of the system parameters which will ensure an overall stable operation.
NASA Technical Reports Server (NTRS)
Cardamone, P.; Lechi, G. M.; Cavallin, A.; Marino, C. M.; Zanferrari, A.
1977-01-01
The results obtained in the study of linears derived from the analysis of LANDSAT 2 images recorded over Friuli during 1975 are described. Particular attention is devoted to the comparison of several passes in different bands, scales and photographic supports. Moreover reference is made to aerial photographic interpretation in selected sites and to the information obtained by laser techniques.
A linear shift-invariant image preprocessing technique for multispectral scanner systems
NASA Technical Reports Server (NTRS)
Mcgillem, C. D.; Riemer, T. E.
1973-01-01
A linear shift-invariant image preprocessing technique is examined which requires no specific knowledge of any parameter of the original image and which is sufficiently general to allow the effective radius of the composite imaging system to be arbitrarily shaped and reduced, subject primarily to the noise power constraint. In addition, the size of the point-spread function of the preprocessing filter can be arbitrarily controlled, thus minimizing truncation errors.
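A common frequency-domain realization of such a constrained preprocessing filter (a sketch, not the paper's exact derivation) is the constrained inverse R(f) = H*(f)/(|H(f)|² + γ), where γ enforces the noise-power constraint; the sharpening shows up as a narrower composite point-spread function:

```python
import numpy as np

# Sketch of a constrained inverse preprocessing filter in the frequency
# domain: R(f) = H*(f) / (|H(f)|^2 + gamma).  H is an assumed Gaussian
# scanner transfer function and gamma stands in for the noise-power
# constraint; both are illustrative values.

n = 256
f = np.fft.rfftfreq(n)
H = np.exp(-(f / 0.05) ** 2)                 # assumed system transfer function
gamma = 1e-3                                 # noise-power constraint parameter
R = np.conj(H) / (np.abs(H) ** 2 + gamma)    # preprocessing filter
H_comp = H * R                               # composite system response

# The composite point-spread function is narrower than the original one.
psf0 = np.abs(np.fft.irfft(H, n))
psf1 = np.abs(np.fft.irfft(H_comp, n))
fwhm0 = int((psf0 > 0.5 * psf0.max()).sum())  # width of original PSF (samples)
fwhm1 = int((psf1 > 0.5 * psf1.max()).sum())  # width of composite PSF (samples)
```

Raising γ relaxes the sharpening but suppresses noise amplification, which is the resolution-versus-noise trade-off the constraint formalizes.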
The use of Galerkin finite-element methods to solve mass-transport equations
Grove, David B.
1977-01-01
The partial differential equation that describes the transport and reaction of chemical solutes in porous media was solved using the Galerkin finite-element technique. These finite elements were superimposed over finite-difference cells used to solve the flow equation. Both convection and flux due to hydraulic dispersion were considered. Linear and Hermite cubic approximations (basis functions) provided satisfactory results; however, the linear functions were computationally more efficient for two-dimensional problems. Successive over-relaxation (SOR) and iteration techniques using Tchebyschef polynomials were used to solve the sparse matrices generated using the linear and Hermite cubic functions, respectively. Comparisons of the finite-element methods to the finite-difference methods, and to analytical results, indicated that a high degree of accuracy may be obtained using the method outlined. The technique was applied to a field problem involving an aquifer contaminated with chloride, tritium, and strontium-90. (Woodard-USGS)
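The SOR iteration mentioned above can be sketched on a small diagonally dominant system (a toy matrix, not an actual transport discretization); setting ω = 1 reduces it to Gauss-Seidel:

```python
import numpy as np

# Successive over-relaxation (SOR): each sweep updates x in place with a
# relaxation factor omega (0 < omega < 2 for SPD systems); omega = 1 is
# Gauss-Seidel.  Toy tridiagonal SPD system for illustration.

def sor(A, b, omega=1.5, tol=1e-10, max_sweeps=1000):
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_sweeps):
        for i in range(n):
            sigma = A[i] @ x - A[i, i] * x[i]   # off-diagonal contribution
            x[i] += omega * ((b[i] - sigma) / A[i, i] - x[i])
        if np.linalg.norm(A @ x - b) < tol:
            break
    return x

n = 20
A = 4 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # tridiagonal, SPD
b = np.ones(n)
x = sor(A, b)
```

For the banded matrices produced by linear basis functions, each sweep touches only a few entries per row, which is what made SOR attractive for this application.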
A Highly Linear and Wide Input Range Four-Quadrant CMOS Analog Multiplier Using Active Feedback
NASA Astrophysics Data System (ADS)
Huang, Zhangcai; Jiang, Minglu; Inoue, Yasuaki
Analog multipliers are among the most important building blocks in analog signal processing circuits. High linearity and a wide input range are usually required of four-quadrant analog multipliers in most applications. Therefore, a highly linear, wide-input-range four-quadrant CMOS analog multiplier using active feedback is proposed in this paper. Firstly, a novel configuration of a four-quadrant multiplier cell is presented. Compared with the conventional structure, its input dynamic range and linearity are improved significantly by the addition of two resistors. Then, based on the proposed multiplier cell configuration, a four-quadrant CMOS analog multiplier with an active feedback technique is implemented using two operational amplifiers. Owing to both the proposed multiplier cell and the active feedback technique, the proposed multiplier achieves a much wider input range with higher linearity than conventional structures. The proposed multiplier was fabricated in a 0.6 µm CMOS process. Experimental results show that the input range of the proposed multiplier can be up to 5.6 Vpp with 0.159% linearity error on VX and 4.8 Vpp with 0.51% linearity error on VY for ±2.5 V power supply voltages, respectively.
Giantsoudi, Drosoula; Seco, Joao; Eaton, Bree R; Simeone, F Joseph; Kooy, Hanne; Yock, Torunn I; Tarbell, Nancy J; DeLaney, Thomas F; Adams, Judith; Paganetti, Harald; MacDonald, Shannon M
2017-05-01
At present, proton craniospinal irradiation (CSI) for growing children is delivered to the whole vertebral body (WVB) to avoid asymmetric growth. We aimed to demonstrate the feasibility and potential clinical benefit of delivering vertebral body sparing (VBS) versus WVB CSI with passively scattered (PS) and intensity modulated proton therapy (IMPT) in growing children treated for medulloblastoma. Five plans were generated for medulloblastoma patients who had been previously treated with CSI PS proton radiation therapy: (1) single posteroanterior (PA) PS field covering the WVB (PS-PA-WVB); (2) single PA PS field that included only the thecal sac in the target volume (PS-PA-VBS); (3) single PA IMPT field covering the WVB (IMPT-PA-WVB); (4) single PA IMPT field, target volume including the thecal sac only (IMPT-PA-VBS); and (5) 2 posterior-oblique (-35°, +35°) IMPT fields, with the target volume including the thecal sac only (IMPT2F-VBS). For all cases, 23.4 Gy (relative biologic effectiveness [RBE]) was prescribed to 95% of the spinal canal. The dose, linear energy transfer, and variable-RBE-weighted dose distributions were calculated for all plans using the TOPAS (TOol for PArticle Simulation) version 2 Monte Carlo system. IMPT VBS techniques efficiently spared the anterior vertebral bodies (AVBs), even when accounting for the potentially higher variable RBE predicted by linear energy transfer distributions. Assuming an RBE of 1.1, the V10 Gy(RBE) decreased from 100% for the WVB techniques to 59.5%-76.8% for the cervical, 29.9%-34.6% for the thoracic, and 20.6%-25.1% for the lumbar AVBs, and the V20 Gy(RBE) decreased from 99.0% to 17.8%-20.0% for the cervical, 7.2%-7.6% for the thoracic, and 4.0%-4.6% for the lumbar AVBs when IMPT VBS techniques were applied. The corresponding percentages for the PS VBS technique were higher.
Advanced proton techniques can sufficiently reduce the dose to the vertebral body and allow for vertebral column growth for children with central nervous system tumors requiring CSI. This was true even when considering variable RBE values. A clinical trial is planned for VBS to the thoracic and lumbosacral spine in growing children. Copyright © 2017 Elsevier Inc. All rights reserved.
Application of Local Linear Embedding to Nonlinear Exploratory Latent Structure Analysis
ERIC Educational Resources Information Center
Wang, Haonan; Iyer, Hari
2007-01-01
In this paper we discuss the use of a recent dimension reduction technique called Locally Linear Embedding, introduced by Roweis and Saul, for performing an exploratory latent structure analysis. The coordinate variables from the locally linear embedding describing the manifold on which the data reside serve as the latent variable scores. We…
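A minimal sketch of extracting latent variable scores with Locally Linear Embedding, using scikit-learn's implementation of the Roweis-Saul algorithm on a synthetic manifold. The dataset and parameters are illustrative stand-ins, not drawn from the paper:

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

# Synthetic 3D data lying on a 2D manifold (swiss roll)
X, t = make_swiss_roll(n_samples=500, random_state=0)

# Embed into 2 dimensions; the embedding coordinates serve as
# the latent variable scores described in the abstract
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2, random_state=0)
scores = lle.fit_transform(X)
```

The choice of `n_neighbors` controls the locality of the linear reconstructions and typically needs tuning per dataset.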
Fleetwood, V A; Gross, K N; Alex, G C; Cortina, C S; Smolevitz, J B; Sarvepalli, S; Bakhsh, S R; Poirier, J; Myers, J A; Singer, M A; Orkin, B A
2017-03-01
Anastomotic leak (AL) increases costs and cancer recurrence. Studies show decreased AL with side-to-side stapled anastomosis (SSA), but none identify risk factors within SSAs. We hypothesized that stapler characteristics and closure technique of the common enterotomy affect AL rates. Retrospective review of bowel SSAs was performed. Data included stapler brand, staple line oversewing, and closure method (handsewn, HC; linear stapler [Barcelona technique], BT; transverse stapler, TX). Primary endpoint was AL. Statistical analysis included Fisher's test and logistic regression. 463 patients were identified, 58.5% BT, 21.2% HC, and 20.3% TX. Covidien staplers comprised 74.9%, Ethicon 18.1%. There were no differences between stapler types (Covidien 5.8%, Ethicon 6.0%). However, AL rates varied by common side closure (BT 3.7% vs. TX 10.6%, p = 0.017), remaining significant on multivariate analysis. Closure method of the common side impacts AL rates. Barcelona technique has fewer leaks than transverse stapled closure. Further prospective evaluation is recommended. Copyright © 2017. Published by Elsevier Inc.
Kipp, K.L.
1987-01-01
The Heat- and Solute-Transport Program (HST3D) simulates groundwater flow and associated heat and solute transport in three dimensions. The three governing equations are coupled through the interstitial pore velocity, the dependence of the fluid density on pressure, temperature, and the solute-mass fraction, and the dependence of the fluid viscosity on temperature and solute-mass fraction. The solute-transport equation is for only a single solute species with possible linear equilibrium sorption and linear decay. Finite-difference techniques are used to discretize the governing equations using a point-distributed grid. The flow-, heat-, and solute-transport equations are solved, in turn, after a partial Gauss-reduction scheme is used to modify them. The modified equations are more tightly coupled and have better stability for the numerical solutions. The basic source-sink term represents wells. A complex well flow model may be used to simulate specified flow rate and pressure conditions at the land surface or within the aquifer, with or without pressure and flow rate constraints. Boundary condition types offered include specified value, specified flux, leakage, heat conduction, an approximate free surface, and two types of aquifer influence functions. All boundary conditions can be functions of time. Two techniques are available for solution of the finite-difference matrix equations. One technique is a direct-elimination solver, using equations reordered by alternating diagonal planes. The other technique is an iterative solver, using two-line successive over-relaxation. A restart option is available for storing intermediate results and restarting the simulation at an intermediate time with modified boundary conditions. This feature also can be used as protection against computer system failure. Data input and output may be in metric (SI) units or inch-pound units.
Output may include tables of dependent variables and parameters, zoned-contour maps, and plots of the dependent variables versus time. (Lantz-PTT)
Statistics based sampling for controller and estimator design
NASA Astrophysics Data System (ADS)
Tenne, Dirk
The purpose of this research is the development of statistical design tools for robust feed-forward/feedback controllers and nonlinear estimators. This dissertation is threefold, addressing nonlinear estimation, target tracking, and robust control. To develop statistically robust controllers and nonlinear estimation algorithms, research has been performed to extend existing techniques, which propagate the statistics of the state, to achieve higher order accuracy. The so-called unscented transformation has been extended to capture higher order moments. Furthermore, higher order moment update algorithms based on a truncated power series have been developed. The proposed techniques are tested on various benchmark examples. Furthermore, the unscented transformation has been utilized to develop a three-dimensional geometrically constrained target tracker. The proposed planar circular prediction algorithm has been developed in a local coordinate framework, which is amenable to extension of the tracking algorithm to three-dimensional space. This tracker combines the predictions of a circular prediction algorithm and a constant velocity filter by utilizing Covariance Intersection. This combined prediction can be updated with the subsequent measurement using a linear estimator. The proposed technique is illustrated on a 3D benchmark trajectory, which includes coordinated turns and straight-line maneuvers. The third part of this dissertation addresses the design of controllers that include knowledge of parametric uncertainties and their distributions. The parameter distributions are approximated by a finite set of points calculated by the unscented transformation. This set of points is used to design robust controllers that minimize a statistical performance measure of the plant over the domain of uncertainty, consisting of a combination of the mean and variance. The proposed technique is illustrated on three benchmark problems.
The first relates to the design of prefilters for linear and nonlinear spring-mass-dashpot systems, the second applies a feedback controller to a hovering helicopter, and the last applies the statistically robust controller design to a concurrent feed-forward/feedback controller structure for a high-speed, low-tension tape drive.
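The standard (second-order) unscented transformation underlying the work above can be sketched as follows. This is the textbook sigma-point construction, not the dissertation's higher-order extension; the scaling parameter kappa and the test function are illustrative:

```python
import numpy as np

def unscented_transform(f, mean, cov, kappa=1.0):
    """Propagate a Gaussian (mean, cov) through nonlinear f via sigma points."""
    n = len(mean)
    # Columns of the Cholesky factor of (n + kappa) * cov are the offsets
    S = np.linalg.cholesky((n + kappa) * cov)
    sigma = [mean] + [mean + S[:, i] for i in range(n)] \
                   + [mean - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    y = np.array([f(s) for s in sigma])
    mean_y = w @ y
    cov_y = sum(wi * np.outer(yi - mean_y, yi - mean_y)
                for wi, yi in zip(w, y))
    return mean_y, cov_y

# Sanity check on a linear map, where the transform is exact
A = np.array([[1.0, 2.0], [0.0, 1.0]])
m = np.array([1.0, -1.0])
P = np.array([[2.0, 0.3], [0.3, 1.0]])
my, Py = unscented_transform(lambda x: A @ x, m, P)
```

For a linear map the recovered statistics match A m and A P Aᵀ exactly; for nonlinear maps the transform is accurate to second order in the Taylor expansion.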
NASA Astrophysics Data System (ADS)
Chavarette, Fábio Roberto; Balthazar, José Manoel; Felix, Jorge L. P.; Rafikov, Marat
2009-05-01
This paper analyzes the non-linear dynamics, with chaotic behavior, of a particular micro-electro-mechanical system (MEMS). We used an optimal linear control technique to reduce the irregular (chaotic) oscillatory movement of the non-linear system to a periodic orbit. We use the mathematical model of the MEMS proposed by Luo and Wang.
Malfroy Camine, V; Rüdiger, H A; Pioletti, D P; Terrier, A
2016-12-08
A good primary stability of cementless femoral stems is essential for the long-term success of total hip arthroplasty. Experimental measurement of implant micromotion with linear variable differential transformers (LVDTs) is commonly used to assess implant primary stability in pre-clinical testing, but these measurements are often limited to a few distinct points at the interface. New techniques based on micro-computed tomography (micro-CT) have recently been introduced, such as digital volume correlation (DVC) or marker-based approaches. DVC is, however, limited to measurement around non-metallic implants due to metal-induced imaging artifacts, and marker-based techniques are confined to a small portion of the implant. In this paper, we present a technique based on micro-CT imaging and radiopaque markers to provide the first full-field micromotion measurement at the entire bone-implant interface of a cementless femoral stem implanted in a cadaveric femur. Micromotion was measured during compression and torsion. Over 300 simultaneous measurement points were obtained. Micromotion amplitude ranged from 0 to 24 µm in compression and from 0 to 49 µm in torsion. Peak micromotion was distal in compression and proximal in torsion. The technique bias was 5.1 µm and its repeatability standard deviation was 4 µm. The method was thus highly reliable and compared well with results obtained with LVDTs reported in the literature. These results indicate that this micro-CT based technique is well suited to observing local variations in primary stability around metallic implants. Possible applications include pre-clinical testing of implants and validation of patient-specific models for pre-operative planning. Copyright © 2016 Elsevier Ltd. All rights reserved.
Alikhasi, Marzieh; Siadat, Hakimeh; Kharazifard, Mohammad Javad
2015-01-01
Objectives: The purpose of this study was to compare the accuracy of implant position transfer and surface detail reproduction using two impression techniques and materials. Materials and Methods: A metal model with two implants and three grooves of 0.25, 0.50 and 0.75 mm in depth on the flat superior surface of a die was fabricated. Ten regular-body polyether (PE) and 10 regular-body polyvinyl siloxane (PVS) impressions with square and conical transfer copings using open tray and closed tray techniques were made for each group. Impressions were poured with type IV stone, and linear and angular displacements of the replica heads were evaluated using a coordinate measuring machine (CMM). Also, accurate reproduction of the grooves was evaluated by a video measuring machine (VMM). These measurements were compared with the measurements calculated on the reference model that served as control, and the data were analyzed with two-way ANOVA and t-test at P= 0.05. Results: There was less linear displacement for PVS and less angular displacement for PE in closed-tray technique, and less linear displacement for PE in open tray technique (P<0.001). Also, the open tray technique showed less angular displacement with the use of PVS impression material. Detail reproduction accuracy was the same in all the groups (P>0.05). Conclusion: The open tray technique was more accurate using PE, and also both closed tray and open tray techniques had acceptable results with the use of PVS. The choice of impression material and technique made no significant difference in surface detail reproduction. PMID:27252761
A minimal approach to the scattering of physical massless bosons
NASA Astrophysics Data System (ADS)
Boels, Rutger H.; Luo, Hui
2018-05-01
Tree and loop level scattering amplitudes which involve physical massless bosons are derived directly from physical constraints such as locality, symmetry and unitarity, bypassing path integral constructions. Amplitudes can be projected onto a minimal basis of kinematic factors through linear algebra, by employing four dimensional spinor helicity methods or at its most general using projection techniques. The linear algebra analysis is closely related to amplitude relations, especially the Bern-Carrasco-Johansson relations for gluon amplitudes and the Kawai-Lewellen-Tye relations between gluons and graviton amplitudes. Projection techniques are known to reduce the computation of loop amplitudes with spinning particles to scalar integrals. Unitarity, locality and integration-by-parts identities can then be used to fix complete tree and loop amplitudes efficiently. The loop amplitudes follow algorithmically from the trees. A number of proof-of-concept examples are presented. These include the planar four point two-loop amplitude in pure Yang-Mills theory as well as a range of one loop amplitudes with internal and external scalars, gluons and gravitons. Several interesting features of the results are highlighted, such as the vanishing of certain basis coefficients for gluon and graviton amplitudes. Effective field theories are naturally and efficiently included into the framework. Dimensional regularisation is employed throughout; different regularisation schemes are worked out explicitly. The presented methods appear most powerful in non-supersymmetric theories in cases with relatively few legs, but with potentially many loops. For instance, in the introduced approach iterated unitarity cuts of four point amplitudes for non-supersymmetric gauge and gravity theories can be computed by matrix multiplication, generalising the so-called rung-rule of maximally supersymmetric theories. 
The philosophy of the approach to kinematics also leads to a technique to control colour quantum numbers of scattering amplitudes with matter, especially efficient in the adjoint and fundamental representations.
Non-linear Analysis of Scalp EEG by Using Bispectra: The Effect of the Reference Choice
Chella, Federico; D'Andrea, Antea; Basti, Alessio; Pizzella, Vittorio; Marzetti, Laura
2017-01-01
Bispectral analysis is a signal processing technique that makes it possible to capture the non-linear and non-Gaussian properties of EEG signals. It has found various applications in EEG research and clinical practice, including the assessment of anesthetic depth, the identification of epileptic seizures, and, more recently, the evaluation of non-linear cross-frequency brain functional connectivity. However, the validity and reliability of the indices drawn from bispectral analysis of EEG signals are potentially biased by the use of a non-neutral EEG reference. The present study aims at investigating the effects of the reference choice on the analysis of the non-linear features of EEG signals through bicoherence, as well as on the estimation of cross-frequency EEG connectivity through two different non-linear measures, i.e., the cross-bicoherence and the antisymmetric cross-bicoherence. To this end, four commonly used reference schemes were considered: the vertex electrode (Cz), the digitally linked mastoids, the average reference, and the Reference Electrode Standardization Technique (REST). The reference effects were assessed both in simulations and in a real EEG experiment. The simulations allowed us to investigate: (i) the effects of electrode density on the performance of the above references in the estimation of bispectral measures; and (ii) the effects of head model accuracy on the performance of the REST. For real data, the EEG signals recorded from 10 subjects during eyes-open resting state were examined, and the distortions induced by the reference choice in the patterns of alpha-beta bicoherence, cross-bicoherence, and antisymmetric cross-bicoherence were assessed. The results showed significant differences in the findings depending on the chosen reference, with the REST performing better than all the other references in approximating the ideal neutral reference.
In conclusion, this study highlights the importance of considering the effects of the reference choice in the interpretation and comparison of the results of bispectral analysis of scalp EEG. PMID:28559790
Porter, Marianne E; Ewoldt, Randy H; Long, John H
2016-09-15
During swimming in dogfish sharks, Squalus acanthias, both the intervertebral joints and the vertebral centra undergo significant strain. To investigate this system, unique among vertebrates, we cyclically bent isolated segments of 10 vertebrae and nine joints. For the first time in the biomechanics of fish vertebral columns, we simultaneously characterized non-linear elasticity and viscosity throughout the bending oscillation, extending recently proposed techniques for large-amplitude oscillatory shear (LAOS) characterization to large-amplitude oscillatory bending (LAOB). The vertebral column segments behave as non-linear viscoelastic springs. Elastic properties dominate for all frequencies and curvatures tested, increasing as either variable increases. Non-linearities within a bending cycle are most in evidence at the highest frequency, 2.0 Hz, and curvature, 5 m⁻¹. Viscous bending properties are greatest at low frequencies and high curvatures, with non-linear effects occurring at all frequencies and curvatures. The range of mechanical behaviors includes that of springs and brakes, with smooth transitions between them that allow for continuously variable power transmission by the vertebral column to assist in the mechanics of undulatory propulsion. © 2016. Published by The Company of Biologists Ltd.
NASA Astrophysics Data System (ADS)
Kiyono, Ken; Tsujimoto, Yutaka
2016-07-01
We develop a general framework to study the time and frequency domain characteristics of detrending-operation-based scaling analysis methods, such as detrended fluctuation analysis (DFA) and detrending moving average (DMA) analysis. In this framework, using either the time or frequency domain approach, the frequency responses of detrending operations are calculated analytically. Although the frequency domain approach based on conventional linear analysis techniques is only applicable to linear detrending operations, the time domain approach presented here is applicable to both linear and nonlinear detrending operations. Furthermore, using the relationship between the time and frequency domain representations of the frequency responses, the frequency domain characteristics of nonlinear detrending operations can be obtained. Based on the calculated frequency responses, it is possible to establish a direct connection between the root-mean-square deviation of the detrending-operation-based scaling analysis and the power spectrum for linear stochastic processes. Here, by applying our methods to DFA and DMA, including higher-order cases, exact frequency responses are calculated. In addition, we analytically investigate the cutoff frequencies of DFA and DMA detrending operations and show that these frequencies are not optimally adjusted to coincide with the corresponding time scale.
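A minimal sketch of the DFA procedure analyzed above, with order-1 (linear) detrending. The scales, signal, and acceptance band for the scaling exponent are illustrative choices, not taken from the paper:

```python
import numpy as np

def dfa(x, scales):
    """Detrended fluctuation analysis of x with linear (order-1) detrending."""
    y = np.cumsum(x - np.mean(x))            # integrated profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        ms = []
        for i in range(n_seg):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)     # the linear detrending operation
            ms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(ms)))       # RMS fluctuation at scale s
    return np.array(F)

rng = np.random.default_rng(0)
white = rng.standard_normal(4096)
scales = np.array([16, 32, 64, 128, 256])
F = dfa(white, scales)
# Slope of log F vs log s estimates the scaling exponent (about 0.5 for white noise)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
```

The frequency-response view in the abstract amounts to characterizing how this detrending step filters each Fourier component of the profile y.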
Advanced millimeter-wave security portal imaging techniques
NASA Astrophysics Data System (ADS)
Sheen, David M.; Bernacki, Bruce E.; McMakin, Douglas L.
2012-03-01
Millimeter-wave (mm-wave) imaging is rapidly gaining acceptance as a security tool to augment conventional metal detectors and baggage x-ray systems for passenger screening at airports and other secured facilities. This acceptance indicates that the technology has matured; however, many potential improvements can yet be realized. The authors have developed a number of techniques over the last several years including novel image reconstruction and display techniques, polarimetric imaging techniques, array switching schemes, and high-frequency high-bandwidth techniques. All of these may improve the performance of new systems; however, some of these techniques will increase the cost and complexity of the mm-wave security portal imaging systems. Reducing this cost may require the development of novel array designs. In particular, RF photonic methods may provide new solutions to the design and development of the sequentially switched linear mm-wave arrays that are the key element in the mm-wave portal imaging systems. High-frequency, high-bandwidth designs are difficult to achieve with conventional mm-wave electronic devices, and RF photonic devices may be a practical alternative. In this paper, the mm-wave imaging techniques developed at PNNL are reviewed and the potential for implementing RF photonic mm-wave array designs is explored.
NASA Astrophysics Data System (ADS)
Cloninger, Alexander; Czaja, Wojciech; Doster, Timothy
2017-07-01
As the popularity of non-linear manifold learning techniques such as kernel PCA and Laplacian Eigenmaps grows, vast improvements have been seen in many areas of data processing, including heterogeneous data fusion and integration. One problem with the non-linear techniques, however, is the lack of an easily calculable pre-image. The existence of such a pre-image would allow visualization of the fused data not only in the embedded space, but also in the original data space. The ability to make such comparisons can be crucial for data analysts and other subject matter experts who are the end users of novel mathematical algorithms. In this paper, we propose a pre-image algorithm for Laplacian Eigenmaps. Our method offers major improvements over existing techniques, allowing us to address the problem of noisy inputs and the issue of how to calculate the pre-image of a point outside the convex hull of training samples, both of which have been overlooked in previous studies in this field. We conclude by showing that our pre-image algorithm, combined with feature space rotations, allows us to recover occluded pixels of an imaging modality based on knowledge of that image as measured by heterogeneous modalities. We demonstrate this data recovery on heterogeneous hyperspectral (HS) cameras, as well as by recovering LIDAR measurements from HS data.
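To make the pre-image problem concrete, here is a deliberately naive baseline, not the authors' algorithm: embed data with Laplacian Eigenmaps (scikit-learn's SpectralEmbedding) and map an embedded point back to data space as a distance-weighted average of its nearest training points. The dataset and parameters are invented for illustration:

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import SpectralEmbedding

# Synthetic 3D data on a 2D manifold, embedded via Laplacian Eigenmaps
X, _ = make_swiss_roll(n_samples=400, random_state=0)
emb = SpectralEmbedding(n_components=2, n_neighbors=10,
                        random_state=0).fit_transform(X)

def naive_preimage(z, emb, X, k=5):
    """Approximate the pre-image of embedded point z as a distance-weighted
    average of the k nearest training points in the embedding space."""
    d = np.linalg.norm(emb - z, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-12)           # inverse-distance weights
    return (w[:, None] * X[idx]).sum(axis=0) / w.sum()

# The pre-image of a training point's embedding recovers roughly that point
x_rec = naive_preimage(emb[0], emb, X, k=5)
```

This baseline fails exactly where the abstract says better methods are needed: for noisy inputs and points outside the convex hull of the training embedding.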
Ahmadi, Mehdi; Shahlaei, Mohsen
2015-01-01
P2X7 antagonist activity for a set of 49 molecules of the P2X7 receptor antagonists, derivatives of purine, was modeled with the aid of chemometric and artificial intelligence techniques. The activity of these compounds was estimated by means of a combination of principal component analysis (PCA), as a well-known data reduction method, a genetic algorithm (GA), as a variable selection technique, and an artificial neural network (ANN), as a non-linear modeling method. First, a linear regression combined with PCA (principal component regression) was applied to model the structure-activity relationships, and afterwards a combination of the PCA and ANN algorithms was employed to accurately predict the biological activity of the P2X7 antagonists. PCA preserves as much as possible of the information contained in the original data set. The seven PCs most important to the studied activity were selected as the inputs of the ANN by an efficient variable selection method, the GA. The best computational neural network model was a fully connected, feed-forward model with 7-7-1 architecture. The developed ANN model was fully evaluated by different validation techniques, including internal and external validation, and the chemical applicability domain. All validations showed that the constructed quantitative structure-activity relationship model is robust and satisfactory.
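The PCA-then-ANN pipeline described above can be sketched with scikit-learn. The descriptors and activities here are synthetic stand-ins (the real study used 49 purine derivatives with measured activities), and the GA-based PC selection is replaced by simply taking the first 7 components:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.standard_normal((49, 30))   # 49 molecules x 30 descriptors (synthetic)
y = X[:, 0] ** 2 + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(49)

# PCA reduces the descriptors to 7 inputs; a 7-7-1 feed-forward network
# (7 inputs, 7 hidden units, 1 output) models the non-linear relationship
model = make_pipeline(
    PCA(n_components=7),
    MLPRegressor(hidden_layer_sizes=(7,), max_iter=5000, random_state=0),
)
model.fit(X, y)
pred = model.predict(X)
```

In the actual workflow, the 7 PCs would be chosen by the GA for relevance to activity, and the fitted model validated internally, externally, and against the applicability domain.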
Improved importance sampling technique for efficient simulation of digital communication systems
NASA Technical Reports Server (NTRS)
Lu, Dingqing; Yao, Kung
1988-01-01
A new, improved importance sampling (IIS) approach to simulation is considered. Some basic concepts of IS are introduced, and detailed derivations of simulation estimation variances for Monte Carlo (MC) and IS simulations are given. The general results obtained from these derivations are applied to the specific previously known conventional importance sampling (CIS) technique and the new IIS technique. The derivation for a linear system with no signal and no memory is considered in some detail. For the CIS technique, the optimum input scaling parameter is found, while for the IIS technique, the optimum translation parameter is found. The results are generalized to a linear system with memory and signals. Specific numerical and simulation results are given which show the advantages of CIS over MC and of IIS over CIS for simulations of digital communication systems.
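The contrast between plain MC and translation-based IS can be sketched on the classic toy problem of estimating a Gaussian tail probability, as arises in bit-error-rate simulation. The threshold and sample count are illustrative; this is the generic mean-translation estimator, not the paper's specific IIS derivation:

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(1)
t = 4.0                                   # decision threshold for N ~ N(0, 1)
n = 100_000
p_true = 0.5 * erfc(t / sqrt(2.0))        # exact tail probability, ~3.17e-5

# Plain Monte Carlo: needs on the order of 1/p samples to even see the event
x = rng.standard_normal(n)
p_mc = np.mean(x > t)

# Translation-based IS: sample from N(t, 1), shifted onto the rare region,
# and reweight by the likelihood ratio phi(z) / phi(z - t) = exp(-t*z + t^2/2)
z = rng.standard_normal(n) + t
w = np.exp(-t * z + 0.5 * t * t)
p_is = np.mean((z > t) * w)
```

With the same sample budget, the IS estimate lands within a few percent of the true tail probability, while the MC estimate is dominated by the handful of threshold crossings it happens to observe.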
Equivalent reduced model technique development for nonlinear system dynamic response
NASA Astrophysics Data System (ADS)
Thibault, Louis; Avitabile, Peter; Foley, Jason; Wolfson, Janet
2013-04-01
The dynamic response of structural systems commonly involves nonlinear effects. Often, structural systems are made up of several components whose individual behavior is essentially linear compared to the total assembled system. However, the assembly of linear components using highly nonlinear connection elements or contact regions causes the entire system to become nonlinear. Conventional transient nonlinear integration of the equations of motion can be extremely computationally intensive, especially when the finite element models describing the components are very large and detailed. In this work, the equivalent reduced model technique (ERMT) is developed to address complicated nonlinear contact problems. ERMT utilizes a highly accurate model reduction scheme, the system equivalent reduction expansion process (SEREP). Extremely reduced order models that provide the dynamic characteristics of linear components, which are interconnected with highly nonlinear connection elements, are formulated with SEREP for dynamic response evaluation using direct integration techniques. The full-space solution is compared to the response obtained using drastically reduced models to make evident the usefulness of the technique for a variety of analytical cases.
Assessing Spontaneous Combustion Instability with Nonlinear Time Series Analysis
NASA Technical Reports Server (NTRS)
Eberhart, C. J.; Casiano, M. J.
2015-01-01
Considerable interest lies in the ability to characterize the onset of spontaneous instabilities within liquid propellant rocket engine (LPRE) combustion devices. Linear techniques, such as fast Fourier transforms, various correlation parameters, and critical damping parameters, have been used at great length for over fifty years. Recently, nonlinear time series methods have been applied to deduce information pertaining to instability incipiency hidden in seemingly stochastic combustion noise. A technique commonly used in the biological sciences, known as multifractal detrended fluctuation analysis, has been extended to the combustion dynamics field, and is introduced here as a data analysis approach complementary to linear ones. Building on this, a modified technique is leveraged to extract artifacts of impending combustion instability that present themselves prior to growth to limit cycle amplitudes. Analysis is demonstrated on data from J-2X gas generator testing during which a distinct spontaneous instability was observed. Comparisons are made to previous work wherein the data were characterized using linear approaches. Verification of the technique is performed by examining idealized signals and comparing two separate, independently developed tools.
Dinç, Erdal; Ozdemir, Abdil
2005-01-01
A multivariate chromatographic calibration technique was developed for the quantitative analysis of binary mixtures of enalapril maleate (EA) and hydrochlorothiazide (HCT) in tablets in the presence of losartan potassium (LST). The mathematical algorithm of the multivariate chromatographic calibration technique is based on linear regression equations constructed from the relationship between concentration and peak area at a five-wavelength set. The algorithm of this calibration model, having simple mathematical content, is briefly described. This approach is a powerful mathematical tool for optimum chromatographic multivariate calibration and for eliminating fluctuations arising from instrumental and experimental conditions. The multivariate chromatographic calibration reduces the multivariate linear regression functions to a univariate data set. The model was validated by analyzing various synthetic binary mixtures and by using the standard addition technique. The developed calibration technique was applied to the analysis of real pharmaceutical tablets containing EA and HCT. The obtained results were compared with those obtained by a classical HPLC method, and the proposed multivariate chromatographic calibration was observed to give better results.
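The core idea of regressing concentrations on peak areas over a multi-wavelength set can be sketched as a least-squares problem. The sensitivity matrix and concentrations below are invented for illustration; they are not the paper's calibration data:

```python
import numpy as np

# Calibration model: peak area at each of 5 wavelengths is linear in the
# concentrations of the two analytes (columns: EA, HCT; values illustrative)
K = np.array([[1.2, 0.3],
              [0.9, 0.7],
              [0.4, 1.1],
              [0.2, 1.5],
              [1.0, 0.5]])

c_true = np.array([8.0, 12.5])     # "unknown" mixture concentrations
areas = K @ c_true                 # peak areas measured for the mixture

# Estimate the two concentrations by least squares over the 5-wavelength set
c_est, *_ = np.linalg.lstsq(K, areas, rcond=None)
```

Using more wavelengths than analytes overdetermines the system, which is what lets the calibration average out instrumental and experimental fluctuations.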
Santos, Frédéric; Guyomarc'h, Pierre; Bruzek, Jaroslav
2014-12-01
The accuracy of identification tools in forensic anthropology relies primarily upon the variation inherent in the data upon which they are built. Sex determination methods based on craniometrics are widely used and known to be sensitive to several factors (e.g. sample distribution, population, age, secular trends, measurement technique). The goal of this study is to discuss the potential variations linked to the statistical treatment of the data. Traditional craniometrics of four samples extracted from documented osteological collections (from Portugal, France, the U.S.A., and Thailand) were used to test three different classification methods: linear discriminant analysis (LDA), logistic regression (LR), and support vector machines (SVM). The Portuguese sample was set as a training model to which the other samples were applied in order to assess the validity and reliability of the different models. The tests were performed using different parameters: some included selection of the best predictors; some included a strict decision threshold (sex assessed only if the related posterior probability was high, introducing the notion of an indeterminate result); and some used an unbalanced sex ratio. Results indicated that LR tends to perform slightly better than the other techniques and offers a better selection of predictors. Also, the use of a decision threshold (i.e. p > 0.95) is essential to ensure an acceptable reliability of sex determination methods based on craniometrics. Although the Portuguese, French, and American samples share a similar sexual dimorphism, application of Western models to the Thai sample (which displayed a lower degree of dimorphism) was unsuccessful. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
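The logistic-regression-with-threshold idea (a sex call accepted only above posterior p = 0.95, otherwise indeterminate) can be sketched on synthetic two-group data. The data, the 0/1 coding of the sexes, and the training details below are illustrative assumptions, not the study's actual samples or software.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic two-group "craniometric" measurements (invented for illustration):
# group 0 coded as female, group 1 as male
X = np.vstack([rng.normal(0.0, 1.0, (200, 2)), rng.normal(2.0, 1.0, (200, 2))])
y = np.repeat([0, 1], 200)

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Plain gradient descent on the mean logistic log-loss."""
    Xb = np.hstack([np.ones((len(X), 1)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def classify(x, w, threshold=0.95):
    """Call a sex only when the posterior clears the decision threshold."""
    p = 1.0 / (1.0 + np.exp(-(w[0] + x @ w[1:])))
    if p >= threshold:
        return "male"
    if p <= 1 - threshold:
        return "female"
    return "indeterminate"          # posterior too weak for a reliable call

w = fit_logistic(X, y)
```

Samples near the decision boundary come back indeterminate rather than being forced into a class, which is what drives the reliability gain reported above.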
Participation of Employees and Students of the Faculty of Geodesy and Cartography in Polar Research
NASA Astrophysics Data System (ADS)
Pasik, Mariusz; Adamek, Artur; Rajner, Marcin; Kurczyński, Zdzisław; Pachuta, Andrzej; Woźniak, Marek; Bylina, Paweł; Próchniewicz, Dominik
2016-06-01
This year the Faculty of Geodesy and Cartography, Warsaw University of Technology, celebrates its 95th jubilee, which provides an opportunity to present the Faculty's rich traditions in polar research. For almost 60 years, employees and students of the Faculty have taken part in research expeditions to the polar regions. The article presents various studies typical of geodesy and cartography, as well as a miscellany of possible measurement applications and geodetic techniques used to support interdisciplinary research. The wide range of geodetic techniques used in polar studies includes classic angular and linear surveys, photogrammetric techniques, gravimetric measurements, GNSS satellite techniques and satellite imaging. These measurements were applied in glaciological, geological, geodynamic, botanical and cartographic studies. Often they were used in activities aimed at ensuring the continuous functioning of Polish research stations in both hemispheres. This study is a short overview of the thematic scope and selected results of research conducted by our employees and students.
Applicability of active infrared thermography for screening of human breast: a numerical study
NASA Astrophysics Data System (ADS)
Dua, Geetika; Mulaveesala, Ravibabu
2018-03-01
Active infrared thermography is a fast, painless, noncontact, and noninvasive imaging method, complementary to mammography, ultrasound, and magnetic resonance imaging methods for early diagnosis of breast cancer. This technique plays an important role in early detection of breast cancer in women of all ages, including pregnant or nursing women, with different breast sizes, irrespective of fatty or dense breast tissue. The proposed complementary technique makes use of the infrared emission emanating from the breast. Radiation emanating from the surface of the breast under test is detected with an infrared camera to map the thermal gradients over it, in order to reveal hidden tumors inside. One reliable active infrared thermographic technique, linear frequency modulated thermal wave imaging, is adopted to detect tumors present inside the breast. Further, phase and amplitude images are constructed using frequency- and time-domain data analysis schemes. The obtained results show the potential of the proposed technique for early diagnosis of breast cancer in fatty as well as dense breasts.
PAPR reduction in FBMC using an ACE-based linear programming optimization
NASA Astrophysics Data System (ADS)
van der Neut, Nuan; Maharaj, Bodhaswar TJ; de Lange, Frederick; González, Gustavo J.; Gregorio, Fernando; Cousseau, Juan
2014-12-01
This paper presents four novel techniques for peak-to-average power ratio (PAPR) reduction in filter bank multicarrier (FBMC) modulation systems. The approach extends current PAPR reduction active constellation extension (ACE) methods, as used in orthogonal frequency division multiplexing (OFDM), to an FBMC implementation as the main contribution. The four techniques introduced can be split into two groups: linear programming (LP) optimization ACE-based techniques and smart gradient-project (SGP) ACE techniques. The LP-based techniques compensate for the symbol overlaps by utilizing a frame-based approach and provide a theoretical upper bound on achievable performance for the overlapping ACE techniques. The overlapping ACE techniques, on the other hand, can handle symbol-by-symbol processing. Furthermore, as a result of FBMC properties, the proposed techniques do not require side information transmission. The PAPR performance of the techniques is shown to match, or in some cases improve on, current PAPR techniques for FBMC. Initial analysis of the computational complexity of the SGP techniques indicates that the complexity issues with PAPR reduction in FBMC implementations can be addressed. The out-of-band interference introduced by the techniques is investigated, and it is shown that the interference can be compensated for whilst still maintaining decent PAPR performance. Additional results are provided by means of a study of the PAPR reduction of the proposed techniques at a fixed clipping probability. The bit error rate (BER) degradation is investigated to ensure that the trade-off in terms of BER degradation is not too severe. As illustrated by exhaustive simulations, the SGP ACE-based techniques proposed are ideal candidates for practical implementation in systems employing the low-complexity polyphase implementation of FBMC modulators. The methods are shown to offer significant PAPR reduction and to increase the feasibility of FBMC as a replacement modulation system for OFDM.
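The PAPR quantity that all of these techniques attack can be computed directly. The sketch below measures it for one synthetic 64-subcarrier symbol; an OFDM-like IFFT construction is used for simplicity, since reproducing an FBMC filter bank is beyond a few lines, and the ACE optimization itself is not implemented here.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a sampled signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(1)
qpsk = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], 64)  # 64 subcarriers
symbol = np.fft.ifft(qpsk)        # one multicarrier time-domain symbol

single = papr_db(qpsk)            # constant-modulus symbols: 0 dB
multi = papr_db(symbol)           # multicarrier sum: several dB higher,
                                  # the peak regrowth problem ACE attacks
```

ACE then moves outer constellation points outward (never inward) so that peaks cancel without increasing BER, which is the constraint the LP and SGP formulations above encode.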
SOCR Analyses - an Instructional Java Web-based Statistical Analysis Toolkit.
Chu, Annie; Cui, Jenny; Dinov, Ivo D
2009-03-01
The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include models commonly used in undergraduate statistics courses, such as linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons: the t-test in the parametric category, and the Wilcoxon rank sum test, Kruskal-Wallis test, and Friedman's test in the non-parametric category. SOCR Analyses also includes several hypothesis test models, such as contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), in the hope of contributing to the efforts of the statistical computing community. The code includes functionality for each specific analysis model, as well as general utilities that can be applied in various statistical computing tasks. For example, concrete methods with an API (Application Programming Interface) have been implemented for statistical summaries, least-squares solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is ongoing and more functions and tools are being added, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for the most up-to-date information and newly added models.
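The simplest of the models listed, Simple Linear Regression, reduces to two closed-form estimates. A minimal sketch of that computation (not the SOCR Java code itself):

```python
import numpy as np

def simple_linear_regression(x, y):
    """Least-squares fit y ~ slope*x + intercept, the model behind a
    Simple Linear Regression analysis."""
    slope = np.cov(x, y, bias=True)[0, 1] / np.var(x)   # Sxy / Sxx
    intercept = np.mean(y) - slope * np.mean(x)
    return slope, intercept

x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0                       # exact line, so the fit is exact
slope, intercept = simple_linear_regression(x, y)
```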
Flynn, Priscilla; Acharya, Amit; Schwei, Kelsey; VanWormer, Jeffrey; Skrzypcak, Kaitlyn
2016-06-01
The primary aim of this study was to assess communication techniques used with low oral health literacy patients by dental hygienists in rural Wisconsin dental clinics. A secondary aim was to determine the utility of the survey instrument used in this study. A mixed methods study consisting of a cross-sectional survey, immediately followed by focus groups, was conducted among dental hygienists in the Marshfield Clinic (Wisconsin) service area. The survey quantified the routine use of 18 communication techniques previously shown to be effective with low oral health literacy patients. Linear regression was used to analyze the association between routine use of each communication technique and several indicator variables, including geographic practice region, oral health literacy familiarity, communication skills training and demographic indicators. Qualitative analyses included code mapping to the 18 communication techniques identified in the survey and generating new codes based on discussion content. On average, the 38 study participants routinely used 6.3 communication techniques. Dental hygienists who used an oral health literacy assessment tool reported using significantly more communication techniques than those who did not. Focus group results differed from survey responses, as few dental hygienists stated familiarity with the term "oral health literacy." Motivational interviewing techniques and use of an integrated electronic medical-dental record were additional communication techniques identified as useful with low oral health literacy patients. Dental hygienists in this study routinely used approximately one-third of the communication techniques recommended for low oral health literacy patients, supporting the need for training on this topic. Based on focus group results, the survey used in this study warrants modification and psychometric testing prior to further use.
Copyright © 2016 The American Dental Hygienists’ Association.
NASA Astrophysics Data System (ADS)
Song, Xizi; Xu, Yanbin; Dong, Feng
2017-04-01
Electrical resistance tomography (ERT) is a promising measurement technique with important industrial and clinical applications. However, with limited effective measurements, it suffers from poor spatial resolution due to the ill-posedness of the inverse problem. Recently, there has been increasing research interest in hybrid imaging techniques that utilize couplings of physical modalities, because these techniques obtain much more effective measurement information and promise high resolution. Ultrasound modulated electrical impedance tomography (UMEIT) is one of the newly developed hybrid imaging techniques, combining the electric and acoustic modalities. A linearized image reconstruction method based on power density is proposed for UMEIT. The interior data, the power density distribution, are adopted to reconstruct the conductivity distribution with the proposed image reconstruction method. At the same time, by relating the power density change to the change in conductivity, the Jacobian matrix is employed to turn the nonlinear problem into a linear one. The analytic formulation of this Jacobian matrix is derived and its effectiveness verified. In addition, different excitation patterns are tested and analyzed, and opposite excitation provides the best performance with the proposed method. Multiple power density distributions are also combined to implement image reconstruction. Finally, image reconstruction is implemented with the linear back-projection (LBP) algorithm. Compared with ERT, with the proposed image reconstruction method, UMEIT can produce reconstructed images with higher quality and better quantitative evaluation results.
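The role of a Jacobian in linearizing a reconstruction step can be illustrated with a generic finite-difference sketch. The two-parameter forward map below is a toy stand-in for the power-density map, not the UMEIT formulation itself, and the Jacobian is approximated numerically rather than from the analytic formula the paper derives.

```python
import numpy as np

def jacobian_fd(forward, sigma, eps=1e-6):
    """Finite-difference Jacobian of a nonlinear forward map d = forward(sigma)."""
    d0 = forward(sigma)
    J = np.zeros((d0.size, sigma.size))
    for j in range(sigma.size):
        s = sigma.copy()
        s[j] += eps
        J[:, j] = (forward(s) - d0) / eps
    return J

# Toy nonlinear "measurement" map standing in for the power-density map
forward = lambda s: np.array([s[0] * s[1], s[0] ** 2 + s[1]])

sigma0 = np.array([1.0, 1.0])             # initial conductivity guess
d_meas = forward(np.array([1.2, 0.9]))    # synthetic measured data
J = jacobian_fd(forward, sigma0)
# One linearized update: solve J * delta = (data misfit) in least squares
step, *_ = np.linalg.lstsq(J, d_meas - forward(sigma0), rcond=None)
sigma1 = sigma0 + step
```

The single linear solve replaces the nonlinear inversion locally, which is exactly the role the Jacobian plays in the linearized reconstruction.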
On the effects of viscosity on the stability of a trailing-line vortex
NASA Technical Reports Server (NTRS)
Duck, Peter W.; Khorrami, Mehdi R.
1991-01-01
The linear stability of the Batchelor (1964) vortex is investigated. Particular emphasis is placed on modes found recently in a numerical study by Khorrami (1991). These modes have a number of features very distinct from those found previously for this vortex, including exhibiting small growth rates at large Reynolds numbers and susceptibility to destabilization by viscosity. These modes are described using asymptotic techniques, producing results which compare favorably with fully numerical results at large Reynolds numbers.
Simultaneous Luminescence Pressure and Temperature Mapping
NASA Technical Reports Server (NTRS)
Buck, Gregory M. (Inventor)
1998-01-01
A simultaneous luminescence pressure and temperature mapping system is developed, including improved dye application techniques, for surface temperature and pressure measurements from 5 torr to 1000 torr, with a possible upgrade to a range from 0.5 torr to several atmospheres given improved camera resolution. Adsorbed perylene dye on slip-cast silica is pressure (oxygen) sensitive and reusable to relatively high temperatures (approximately 150 C). Adsorbed luminescence has an approximately linear color shift with temperature, which can be used for independent temperature mapping and brightness pressure calibration with temperature.
Application of differential transformation method for solving dengue transmission mathematical model
NASA Astrophysics Data System (ADS)
Ndii, Meksianis Z.; Anggriani, Nursanti; Supriatna, Asep K.
2018-03-01
The differential transformation method (DTM) is a semi-analytical numerical technique which depends on Taylor series and has applications in many areas, including biomathematics. The aim of this paper is to employ the DTM to solve a system of non-linear differential equations for a dengue transmission mathematical model. Analytical and numerical solutions are determined and the results are compared to those of the Runge-Kutta method. We found good agreement between the DTM and the Runge-Kutta method.
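The DTM recurrence for a single non-linear equation shows the idea. The sketch below applies it to the logistic equation y' = y(1 - y), chosen because its exact solution is known, rather than to the dengue model itself; the quadratic term becomes a convolution of the Taylor coefficients.

```python
import numpy as np

def dtm_logistic(y0, order):
    """Differential transform of y' = y(1 - y): the Taylor coefficients Y(k)
    obey (k + 1) * Y(k+1) = Y(k) - sum_m Y(m) * Y(k - m)."""
    Y = np.zeros(order + 1)
    Y[0] = y0
    for k in range(order):
        conv = sum(Y[m] * Y[k - m] for m in range(k + 1))  # transform of y*y
        Y[k + 1] = (Y[k] - conv) / (k + 1)
    return Y

Y = dtm_logistic(0.5, 15)
t = 0.5
approx = sum(Y[k] * t ** k for k in range(len(Y)))  # partial Taylor sum
exact = 1.0 / (1.0 + np.exp(-t))                    # closed-form solution
```

Fifteen transform coefficients already reproduce the exact solution to high accuracy at t = 0.5; for the full dengue system the same recurrence is applied component-wise.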
A multilevel control approach for a modular structured space platform
NASA Technical Reports Server (NTRS)
Chichester, F. D.; Borelli, M. T.
1981-01-01
A three-axis mathematical representation of a modularly assembled space platform consisting of interconnected discrete masses, including a deployable truss module, was derived for digital computer simulation. The platform attitude control system was developed to provide multilevel control utilizing the Gauss-Seidel second-level formulation along with an extended form of linear quadratic regulator techniques. The objectives of the multilevel control are to decouple the space platform's spatial axes and to accommodate the modification of the platform's configuration for each of the decoupled axes.
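The linear quadratic regulator ingredient can be sketched with a standard backward Riccati iteration. The discrete double-integrator plant below is illustrative only and unrelated to the platform model or the Gauss-Seidel second-level formulation.

```python
import numpy as np

def dlqr_gain(A, B, Q, R, iters=500):
    """Discrete-time LQR feedback gain K via backward Riccati iteration,
    so that u = -K x minimizes sum(x'Qx + u'Ru)."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)          # Riccati recursion
    return K

# Double integrator discretized with dt = 0.1 (illustrative plant)
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
K = dlqr_gain(A, B, np.eye(2), np.array([[1.0]]))
```

The resulting closed loop A - BK is guaranteed stable, which is the property the multilevel scheme relies on per decoupled axis.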
Simultaneous Luminescence Pressure and Temperature Mapping System
NASA Technical Reports Server (NTRS)
Buck, Gregory M. (Inventor)
1995-01-01
A simultaneous luminescence pressure and temperature mapping system is developed, including improved dye application techniques, for surface temperature and pressure measurements from 5 torr to 1000 torr, with a possible upgrade to a range from 0.5 torr to several atmospheres given improved camera resolution. Adsorbed perylene dye on slip-cast silica is pressure (oxygen) sensitive and reusable to relatively high temperatures (approximately 150 C). Adsorbed luminescence has an approximately linear color shift with temperature, which can be used for independent temperature mapping and brightness pressure calibration with temperature.
NASA Technical Reports Server (NTRS)
Willsky, A. S.
1976-01-01
A number of current research directions in the fields of digital signal processing and modern control and estimation theory were studied. Topics such as stability theory, linear prediction and parameter identification, system analysis and implementation, two-dimensional filtering, decentralized control and estimation, image processing, and nonlinear system theory were examined in order to uncover some of the basic similarities and differences in the goals, techniques, and philosophy of the two disciplines. An extensive bibliography is included.
Configurations and calibration methods for passive sampling techniques.
Ouyang, Gangfeng; Pawliszyn, Janusz
2007-10-19
Passive sampling technology has developed very quickly in the past 15 years, and is widely used for the monitoring of pollutants in different environments. The design and quantification of passive sampling devices require an appropriate calibration method. Current calibration methods that exist for passive sampling, including equilibrium extraction, linear uptake, and kinetic calibration, are presented in this review. A number of state-of-the-art passive sampling devices that can be used for aqueous and air monitoring are introduced according to their calibration methods.
New Approach To Hour-By-Hour Weather Forecast
NASA Astrophysics Data System (ADS)
Liao, Q. Q.; Wang, B.
2017-12-01
Fine hourly forecasts for single-station weather prediction are required in many applications in human production and daily life. Most previous MOS (Model Output Statistics) approaches used a linear regression model, which struggles with the nonlinear nature of weather prediction, and forecast accuracy has not been sufficient at high temporal resolution. This study predicts future meteorological elements, including temperature, precipitation, relative humidity and wind speed, in a local region over a relatively short period of time at the hourly level. By means of hour-by-hour NWP (Numerical Weather Prediction) meteorological fields from Forcastio (https://darksky.net/dev/docs/forecast) and real-time instrumental observations from 29 stations in Yunnan and 3 stations in Tianjin, China, from June to October 2016, predictions are made hour by hour, 24 hours ahead. This study presents an ensemble approach that combines the information of the instrumental observations themselves and the NWP. An autoregressive moving-average (ARMA) model is used to predict future values of the observation time series. The newest NWP products are put into the equations derived from the multiple linear regression MOS technique. The residual series of the MOS outputs is handled with an autoregressive (AR) model for the linear property present in the time series. Due to the complexity of the non-linear properties of atmospheric flow, a support vector machine (SVM) is also introduced. Basic data quality control and cross validation make it possible to optimize the model function parameters and to perform 24-hour-ahead residual reduction with the AR/SVM model. Results show that the AR model technique is better than the corresponding multi-variant MOS regression method, especially in the first 4 hours when the predictor is temperature. The MOS-AR combined model, which is comparable to the MOS-SVM model, outperforms MOS. The root mean square error and correlation coefficient for 2 m temperature reach 1.6 degrees Celsius and 0.91, respectively. The forecast accuracy (24-hour forecast deviation no more than 2 degrees Celsius) is 78.75% for the MOS-AR model and 81.23% for the AR model.
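The MOS-plus-AR residual-correction chain can be sketched on synthetic data. The "NWP" series, the AR(1) error process, and all coefficients below are invented for illustration; the study's actual predictors and ARMA/SVM machinery are richer than this.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic "NWP" temperature forecast and observations whose error is
# serially correlated (AR(1)), mimicking persistent model bias
nwp = 20 + 5 * np.sin(np.linspace(0, 6, 300))
err = np.zeros(300)
for t in range(1, 300):
    err[t] = 0.8 * err[t - 1] + rng.normal(0, 0.5)
obs = 1.1 * nwp + 0.7 + err

# Step 1: MOS -- linear regression of observations on the NWP output
a, b = np.polyfit(nwp, obs, 1)
mos = a * nwp + b

# Step 2: fit AR(1) to the MOS residuals, then a one-step-ahead correction
r = obs - mos
phi = np.sum(r[1:] * r[:-1]) / np.sum(r[:-1] ** 2)   # lag-1 AR coefficient
corrected = mos[1:] + phi * r[:-1]
```

Because the residuals are serially correlated, propagating the last known residual forward removes a large share of the error, which is why the AR correction helps most in the first few lead hours.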
Conical Lens for 5-Inch/54 Gun Launched Missile
1981-06-01
Propagation, Interference and Diffraction of Light, 2nd ed. (revised), p. 121-124, Pergamon Press, 1964. 10. Anton, Howard, Elementary Linear Algebra, p. 1-21...equations is nonlinear in x, but is linear in the coefficients. Therefore, the techniques of linear algebra can be used on equation (F-13). The method...This thesis assumes the air to be homogeneous, isotropic, linear, time-independent (HILT) and free of shock waves in order to investigate the
ERIC Educational Resources Information Center
Preacher, Kristopher J.; Curran, Patrick J.; Bauer, Daniel J.
2006-01-01
Simple slopes, regions of significance, and confidence bands are commonly used to evaluate interactions in multiple linear regression (MLR) models, and the use of these techniques has recently been extended to multilevel or hierarchical linear modeling (HLM) and latent curve analysis (LCA). However, conducting these tests and plotting the…
Murnick, Daniel E; Dogru, Ozgur; Ilkmen, Erhan
2008-07-01
We show a new ultrasensitive laser-based analytical technique, intracavity optogalvanic spectroscopy, allowing extremely high sensitivity for detection of (14)C-labeled carbon dioxide. Capable of replacing large accelerator mass spectrometers, the technique quantifies attomoles of (14)C in submicrogram samples. Based on the specificity of narrow laser resonances coupled with the sensitivity provided by standing waves in an optical cavity and detection via impedance variations, limits of detection near 10(-15) (14)C/(12)C ratios are obtained. Using a 15-W (14)CO2 laser, a linear calibration with samples from 10(-15) to >1.5 x 10(-12) in (14)C/(12)C ratios, as determined by accelerator mass spectrometry, is demonstrated. Possible applications include microdosing studies in drug development, individualized subtherapeutic tests of drug metabolism, carbon dating and real time monitoring of atmospheric radiocarbon. The method can also be applied to detection of other trace entities.
Transcranial electric and magnetic stimulation: technique and paradigms.
Paulus, Walter; Peterchev, Angel V; Ridding, Michael
2013-01-01
Transcranial electrical and magnetic stimulation techniques encompass a broad physical variety of stimuli, ranging from static magnetic fields or direct current stimulation to pulsed magnetic or alternating current stimulation with an almost infinite number of possible stimulus parameters. These techniques are continuously refined by new device developments, including coil or electrode design and flexible control of the stimulus waveforms. They allow us to influence brain function acutely and/or by inducing transient plastic after-effects in a range from minutes to days. Manipulation of stimulus parameters such as pulse shape, intensity, duration, and frequency, and location, size, and orientation of the electrodes or coils enables control of the immediate effects and after-effects. Physiological aspects such as stimulation at rest or during attention or activation may alter effects dramatically, as does neuropharmacological drug co-application. Non-linear relationships between stimulus parameters and physiological effects have to be taken into account. © 2013 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Wong, R. C.; Owen, H. A., Jr.; Wilson, T. G.; Rodriguez, G. E.
1980-01-01
Small-signal modeling techniques are used in a system stability analysis of a breadboard version of a complete functional electrical power system. The system consists of a regulated switching dc-to-dc converter, a solar-cell-array simulator, a solar-array EMI filter, battery chargers and linear shunt regulators. Loss mechanisms in the converter power stage, including switching-time effects in the semiconductor elements, are incorporated into the modeling procedure to provide an accurate representation of the system without requiring frequency-domain measurements to determine the damping factor. The small-signal system model is validated by the use of special measurement techniques which are adapted to the poor signal-to-noise ratio encountered in switching-mode systems. The complete electrical power system with the solar-array EMI filter is shown to be stable over the intended range of operation.
Yager-Elorriaga, D. A.; Steiner, A. M.; Patel, S. G.; ...
2015-11-19
In this study, we describe a technique for fabricating ultrathin foils in cylindrical geometry for liner-plasma implosion experiments using sub-MA currents. Liners are formed by wrapping a 400 nm, rectangular strip of aluminum foil around a dumbbell-shaped support structure with a non-conducting center rod, so that the liner dimensions are 1 cm in height, 6.55 mm in diameter, and 400 nm in thickness. The liner-plasmas are imploded by discharging ~600 kA with ~200 ns rise time using a 1 MA linear transformer driver, and the resulting implosions are imaged four times per shot using laser-shadowgraphy at 532 nm. As a result, this technique enables the study of plasma implosion physics, including the magneto Rayleigh-Taylor, sausage, and kink instabilities on initially solid, imploding metallic liners with university-scale pulsed power machines.
NASA Astrophysics Data System (ADS)
Yager-Elorriaga, D. A.; Steiner, A. M.; Patel, S. G.; Jordan, N. M.; Lau, Y. Y.; Gilgenbach, R. M.
2015-11-01
In this work, we describe a technique for fabricating ultrathin foils in cylindrical geometry for liner-plasma implosion experiments using sub-MA currents. Liners are formed by wrapping a 400 nm, rectangular strip of aluminum foil around a dumbbell-shaped support structure with a non-conducting center rod, so that the liner dimensions are 1 cm in height, 6.55 mm in diameter, and 400 nm in thickness. The liner-plasmas are imploded by discharging ˜600 kA with ˜200 ns rise time using a 1 MA linear transformer driver, and the resulting implosions are imaged four times per shot using laser-shadowgraphy at 532 nm. This technique enables the study of plasma implosion physics, including the magneto Rayleigh-Taylor, sausage, and kink instabilities on initially solid, imploding metallic liners with university-scale pulsed power machines.
Advanced Multispectral Scanner (AMS) study. [aircraft remote sensing
NASA Technical Reports Server (NTRS)
1978-01-01
The status of aircraft multispectral scanner technology was assessed in order to develop preliminary design specifications for an advanced instrument to be used for remote sensing data collection by aircraft in the 1980 time frame. The system designed provides a no-moving-parts multispectral scanning capability through the exploitation of linear array charge coupled device technology and advanced electronic signal processing techniques. Major advantages include: 10:1 V/H rate capability; 120 deg FOV at V/H = 0.25 rad/sec; 1 to 2 mrad resolution; high sensitivity; large dynamic range capability; geometric fidelity; roll compensation; modularity; long life; and 24-channel data acquisition capability. The field flattening techniques of the optical design allow a wide field of view to be achieved at fast f-numbers for both the long and short wavelength regions. The digital signal averaging technique permits maximization of signal-to-noise performance over the entire V/H rate range.
Discriminant forest classification method and system
Chen, Barry Y.; Hanley, William G.; Lemmond, Tracy D.; Hiller, Lawrence J.; Knapp, David A.; Mugge, Marshall J.
2012-11-06
A hybrid machine learning methodology and system for classification that combines classical random forest (RF) methodology with discriminant analysis (DA) techniques to provide enhanced classification capability. A DA technique which uses feature measurements of an object to predict its class membership, such as linear discriminant analysis (LDA) or Andersen-Bahadur linear discriminant technique (AB), is used to split the data at each node in each of its classification trees to train and grow the trees and the forest. When training is finished, a set of n DA-based decision trees of a discriminant forest is produced for use in predicting the classification of new samples of unknown class.
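The node-splitting ingredient, a linear discriminant direction used in place of a single-feature threshold, can be sketched as follows. This is Fisher LDA on synthetic two-class data, illustrating only the split rule, not the patented forest-growing system.

```python
import numpy as np

def lda_direction(X0, X1):
    """Fisher linear discriminant direction w = Sw^{-1} (mu1 - mu0): the
    projection a discriminant-forest node thresholds instead of one feature."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
    return np.linalg.solve(Sw, mu1 - mu0)

rng = np.random.default_rng(3)
X0 = rng.normal(0.0, 1.0, (100, 2))      # class 0 samples (synthetic)
X1 = rng.normal(2.0, 1.0, (100, 2))      # class 1 samples (synthetic)
w = lda_direction(X0, X1)
# Split rule at this node: route a sample by comparing x @ w to a threshold,
# here placed at the projected midpoint between the class means
thresh = (X0.mean(axis=0) + X1.mean(axis=0)) / 2 @ w
```

Because the split uses a full linear combination of features, each tree node can separate obliquely oriented classes that axis-aligned random-forest splits approximate only with many levels.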
A modal parameter extraction procedure applicable to linear time-invariant dynamic systems
NASA Technical Reports Server (NTRS)
Kurdila, A. J.; Craig, R. R., Jr.
1985-01-01
Modal analysis has emerged as a valuable tool in many phases of the engineering design process. Complex vibration and acoustic problems in new designs can often be remedied through use of the method. Moreover, the technique has been used to enhance the conceptual understanding of structures by serving to verify analytical models. A new modal parameter estimation procedure is presented. The technique is applicable to linear, time-invariant systems and accommodates multiple input excitations. In order to provide a background for the derivation of the method, some modal parameter extraction procedures currently in use are described. Key features implemented in the new technique are elaborated upon.
NASA Technical Reports Server (NTRS)
Antar, B. N.
1976-01-01
A numerical technique is presented for locating the eigenvalues of two point linear differential eigenvalue problems. The technique is designed to search for complex eigenvalues belonging to complex operators. With this method, any domain of the complex eigenvalue plane could be scanned and the eigenvalues within it, if any, located. For an application of the method, the eigenvalues of the Orr-Sommerfeld equation of the plane Poiseuille flow are determined within a specified portion of the c-plane. The eigenvalues for alpha = 1 and R = 10,000 are tabulated and compared for accuracy with existing solutions.
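The complex-plane eigenvalue search can be illustrated on a two-point problem with a known spectrum: y'' + λy = 0 with y(0) = y(1) = 0, whose eigenvalues are (kπ)². The sketch below uses shooting plus a secant iteration in complex λ; it conveys the idea of scanning a region of the complex plane, not the actual Orr-Sommerfeld solver.

```python
import numpy as np

def miss(lam, n=200):
    """Shooting residual y(1; lam) for y'' + lam*y = 0, y(0) = 0, y'(0) = 1.
    Eigenvalues are the complex roots of this function (here (k*pi)**2)."""
    h = 1.0 / n
    y, v = 0.0 + 0j, 1.0 + 0j
    for _ in range(n):
        # one RK4 step for the first-order system (y, v)' = (v, -lam*y)
        k1y, k1v = v, -lam * y
        k2y, k2v = v + h / 2 * k1v, -lam * (y + h / 2 * k1y)
        k3y, k3v = v + h / 2 * k2v, -lam * (y + h / 2 * k2y)
        k4y, k4v = v + h * k3v, -lam * (y + h * k3y)
        y += h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return y

# Secant iteration in the complex lambda-plane from a scanned starting guess;
# it converges to the first eigenvalue pi**2 with vanishing imaginary part
l0, l1 = 9.0 + 0.5j, 10.0 + 0.5j
for _ in range(40):
    f0, f1 = miss(l0), miss(l1)
    if f1 == f0:
        break
    l0, l1 = l1, l1 - f1 * (l1 - l0) / (f1 - f0)
```

Scanning a grid of such starting guesses over a rectangle of the complex plane locates every eigenvalue inside it, which is the strategy applied to the Orr-Sommerfeld c-plane in the abstract.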
Flexible multibody simulation of automotive systems with non-modal model reduction techniques
NASA Astrophysics Data System (ADS)
Shiiba, Taichi; Fehr, Jörg; Eberhard, Peter
2012-12-01
The stiffness of the body structure of an automobile has a strong relationship with its noise, vibration, and harshness (NVH) characteristics. In this paper, the effect of the stiffness of the body structure upon ride quality is discussed with flexible multibody dynamics. In flexible multibody simulation, the local elastic deformation of the vehicle has traditionally been described with modal shape functions. Recently, linear model reduction techniques from system dynamics and mathematics have come into focus as a way to find more sophisticated elastic shape functions. In this work, the NVH-relevant states of a racing kart are simulated, with the elastic shape functions calculated by modern model reduction techniques such as moment matching by projection on Krylov subspaces, singular value decomposition-based reduction techniques, and combinations of those. The whole elastic multibody vehicle model, consisting of tyres, steering, axle, etc., is considered, and an excitation with vibration characteristics in a wide frequency range is evaluated. The accuracy and the calculation performance of these modern model reduction techniques are investigated, including a comparison with the modal reduction approach.
Katz, Josh M; Winter, Carl K; Buttrey, Samuel E; Fadel, James G
2012-03-01
Western and guideline based diets were compared to determine if dietary improvements resulting from following dietary guidelines reduce acrylamide intake. Acrylamide forms in heat treated foods and is a human neurotoxin and animal carcinogen. Acrylamide intake from the Western diet was estimated with probabilistic techniques using teenage (13-19 years) National Health and Nutrition Examination Survey (NHANES) food consumption estimates combined with FDA data on the levels of acrylamide in a large number of foods. Guideline based diets were derived from NHANES data using linear programming techniques to comport to recommendations from the Dietary Guidelines for Americans, 2005. Whereas the guideline based diets were more properly balanced and rich in consumption of fruits, vegetables, and other dietary components than the Western diets, acrylamide intake (mean±SE) was significantly greater (P<0.001) from consumption of the guideline based diets (0.508±0.003 μg/kg/day) than from consumption of the Western diets (0.441±0.003 μg/kg/day). Guideline based diets contained less acrylamide contributed by French fries and potato chips than Western diets. Overall acrylamide intake, however, was higher in guideline based diets as a result of more frequent breakfast cereal intake. This is believed to be the first example of a risk assessment that combines probabilistic techniques with linear programming and results demonstrate that linear programming techniques can be used to model specific diets for the assessment of toxicological and nutritional dietary components. Copyright © 2011 Elsevier Ltd. All rights reserved.
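The diet-modelling step can be sketched as a small linear program. All foods, nutrient values, and constraint levels below are invented placeholders, not the NHANES/FDA data used in the study:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical per-serving data for four foods (all numbers invented):
#                     cereal fries  fruit  milk
acrylamide = np.array([0.30, 0.50,  0.00,  0.00])   # ug acrylamide per serving
fiber      = np.array([3.0,  2.0,   4.0,   0.0])    # g fiber per serving
calcium    = np.array([100., 10.,   20.,   300.])   # mg calcium per serving
energy     = np.array([150., 250.,  80.,   120.])   # kcal per serving

# Minimise acrylamide intake subject to guideline-style constraints:
# fiber >= 25 g, calcium >= 1000 mg, energy <= 2000 kcal, 0-10 servings each.
res = linprog(
    c=acrylamide,
    A_ub=np.vstack([-fiber, -calcium, energy]),   # ">=" rows negated for A_ub x <= b_ub
    b_ub=np.array([-25.0, -1000.0, 2000.0]),
    bounds=[(0, 10)] * 4,
)
servings = res.x  # optimal diet here: fruit covers fiber, milk covers calcium
```

The study's finding that guideline-compliant diets can still raise acrylamide intake corresponds to adding more nutrient constraints, which forces acrylamide-bearing foods such as cereals into the solution.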
NASA Technical Reports Server (NTRS)
Bizzell, R. M.; Feiveson, A. H.; Hall, F. G.; Bauer, M. E.; Davis, B. J.; Malila, W. A.; Rice, D. P.
1975-01-01
The CITARS was an experiment designed to quantitatively evaluate crop identification performance for corn and soybeans in various environments using a well-defined set of automatic data processing (ADP) techniques. Each technique was applied to data acquired to recognize and estimate proportions of corn and soybeans. The CITARS documentation summarizes, interprets, and discusses the crop identification performances obtained using (1) different ADP procedures; (2) a linear versus a quadratic classifier; (3) prior probability information derived from historic data; (4) local versus nonlocal recognition training statistics and the associated use of preprocessing; (5) multitemporal data; (6) classification bias and mixed pixels in proportion estimation; and (7) data with different site characteristics, including crop, soil, atmospheric effects, and stages of crop maturity.
Non-intrusive flow measurements on a reentry vehicle
NASA Technical Reports Server (NTRS)
Miles, R. B.; Satavicca, D. A.; Zimmermann, G. M.
1983-01-01
This study evaluates the utility of various non-intrusive techniques for the measurement of the flow field on the windward side of the Space Shuttle or a similar re-entry vehicle. Included are linear (Rayleigh, Raman, Mie, Laser Doppler Velocimetry, Resonant Doppler Velocimetry) and nonlinear (Coherent Anti-Stokes Raman, Laser Induced Fluorescence) light scattering, electron beam fluorescence, thermal emission and mass spectroscopy. Flow field properties are taken from a nonequilibrium flow model by Shinn, Moss and Simmonds at NASA Langley. Conclusions are, when possible, based on quantitative scaling of known laboratory results to the conditions projected. Detailed discussion with researchers in the field contributed further to these conclusions and provided valuable insights regarding the experimental feasibility of each of the techniques.
Fiyadh, Seef Saadi; AlSaadi, Mohammed Abdulhakim; AlOmar, Mohamed Khalid; Fayaed, Sabah Saadi; Hama, Ako R; Bee, Sharifah; El-Shafie, Ahmed
2017-11-01
The main challenge in simulating lead removal is the non-linearity of the relationships between the process parameters. The conventional modelling technique usually deals with this problem by a linear method. The substitute modelling technique is an artificial neural network (ANN) system, selected to reflect the non-linearity in the interaction among the variables in the function. Herein, synthesized deep eutectic solvents were used as a functionalization agent with carbon nanotubes as adsorbents of Pb²⁺. Different parameters were used in the adsorption study, including pH (2.7 to 7), adsorbent dosage (5 to 20 mg), contact time (3 to 900 min) and Pb²⁺ initial concentration (3 to 60 mg/l). The number of experimental trials to feed and train the system was 158 runs carried out at laboratory scale. Two ANN types were designed in this work, the feed-forward back-propagation and the layer recurrent; both methods are compared based on their predictive proficiency in terms of the mean square error (MSE), root mean square error, relative root mean square error, mean absolute percentage error and determination coefficient (R²) based on the testing dataset. The ANN model of lead removal was subjected to accuracy determination and the results showed an R² of 0.9956 with an MSE of 1.66 × 10⁻⁴. The maximum relative error is 14.93% for the feed-forward back-propagation neural network model.
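The error measures used to compare the two network types can be computed directly; the sample values below are invented for illustration:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MSE, RMSE, relative RMSE, MAPE and determination coefficient R^2,
    the measures used to rank the competing models on a test set."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    rrmse = rmse / np.mean(y_true)
    mape = 100.0 * np.mean(np.abs(err / y_true))
    r2 = 1.0 - np.sum(err ** 2) / np.sum((y_true - np.mean(y_true)) ** 2)
    return {"MSE": mse, "RMSE": rmse, "rRMSE": rrmse, "MAPE": mape, "R2": r2}

# Invented removal-efficiency values (%) for a quick check
m = regression_metrics([95.0, 80.0, 60.0, 99.0], [94.0, 82.0, 58.0, 99.0])
```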
NASA Astrophysics Data System (ADS)
Marazzi, Marco; Gattuso, Hugo; Monari, Antonio; Assfeld, Xavier
2018-04-01
Bio-macromolecules such as DNA, lipid membranes and (poly)peptides are essential compounds at the core of biological systems. The development of techniques and methodologies for their characterization is therefore necessary and of utmost interest, even though difficulties can be experienced due to their intrinsically complex nature. Among these methods, spectroscopies relying on optical properties are especially important for determining their macromolecular structures and behaviors, as well as the possible interactions and reactivity with external dyes – often drugs or pollutants – that can (photo)sensitize the bio-macromolecule, leading to eventual chemical modifications, and thus damage. In this review, we will focus on the theoretical simulation of electronic spectroscopies of bio-macromolecules, considering their secondary structure and including their interaction with different kinds of (photo)sensitizers. Namely, absorption, emission and electronic circular dichroism (CD) spectra are calculated and compared with the available experimental data. Non-linear properties will also be taken into account via two-photon absorption, a highly promising technique (i) to enhance absorption in the red and infra-red windows and (ii) to enhance spatial resolution. Methodologically, the implications of using implicit and explicit solvent, coupled to quantum and thermal samplings of the phase space, will be addressed. In particular, hybrid quantum mechanics/molecular mechanics (QM/MM) methods are explored for a comparison with solely QM methods, in order to address the necessity of an accurate description of environmental effects on the spectroscopic properties of biological systems.
Algebraic multigrid preconditioners for two-phase flow in porous media with phase transitions
NASA Astrophysics Data System (ADS)
Bui, Quan M.; Wang, Lu; Osei-Kuffuor, Daniel
2018-04-01
Multiphase flow is a critical process in a wide range of applications, including oil and gas recovery, carbon sequestration, and contaminant remediation. Numerical simulation of multiphase flow requires solving a large, sparse linear system resulting from the discretization of the partial differential equations modeling the flow. In the case of multiphase multicomponent flow with miscible effects, this is a very challenging task. The problem becomes even more difficult if phase transitions are taken into account. A new approach to handle phase transitions is to formulate the system as a nonlinear complementarity problem (NCP). Unlike in the primary variable switching technique, the set of primary variables in this approach is fixed even when there is a phase transition. Not only does this improve the robustness of the nonlinear solver, it also opens up the possibility of using multigrid methods to solve the resulting linear system. The disadvantage of the complementarity approach, however, is that when a phase disappears, the linear system has the structure of a saddle point problem and becomes indefinite, and current algebraic multigrid (AMG) algorithms cannot be applied directly. In this study, we explore the effectiveness of a new multilevel strategy, based on the multigrid reduction technique, to deal with problems of this type. We demonstrate the effectiveness of the method through numerical results for the case of two-phase, two-component flow with phase appearance/disappearance. We also show that the strategy is efficient and scales optimally with problem size.
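The complementarity formulation can be illustrated on a scalar toy problem. The Fischer-Burmeister reformulation with a semismooth Newton step is one standard way to solve NCPs; the abstract does not specify which NCP function the authors use, so treat this as an assumption:

```python
import numpy as np

def fischer_burmeister(a, b):
    # phi(a, b) = 0  <=>  a >= 0, b >= 0 and a*b = 0
    return a + b - np.sqrt(a * a + b * b)

def solve_ncp(F, dF, x0, tol=1e-12, maxit=50):
    """Semismooth Newton for the scalar NCP  x >= 0, F(x) >= 0, x*F(x) = 0,
    posed as the root of phi(x, F(x))."""
    x = x0
    for _ in range(maxit):
        r = fischer_burmeister(x, F(x))
        if abs(r) < tol:
            break
        nrm = np.hypot(x, F(x))  # > 0 away from the kink at (0, 0)
        dphi = (1.0 - x / nrm) + (1.0 - F(x) / nrm) * dF(x)
        x -= r / dphi
    return x

# Toy "phase present" case: F(x) = x - 2 has the interior solution x = 2
x_present = solve_ncp(lambda x: x - 2.0, lambda x: 1.0, x0=5.0)
# Toy "phase vanished" case: F(x) = x + 1 forces the boundary solution x = 0
x_vanished = solve_ncp(lambda x: x + 1.0, lambda x: 1.0, x0=1.0)
```

The appeal noted in the abstract is that the same unknowns and the same residual function cover both cases, with no variable switching when a phase appears or disappears.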
A morphological perceptron with gradient-based learning for Brazilian stock market forecasting.
Araújo, Ricardo de A
2012-04-01
Several linear and non-linear techniques have been proposed to solve the stock market forecasting problem. However, a limitation arises in all these techniques, known as the random walk dilemma (RWD). In this scenario, forecasts generated by arbitrary models have a characteristic one-step-ahead delay with respect to the time series values, so that there is a time phase distortion in the reconstruction of stock market phenomena. In this paper, we propose a suitable model inspired by concepts from mathematical morphology (MM) and lattice theory (LT). This model is generically called the increasing morphological perceptron (IMP). Also, we present a gradient steepest descent method to design the proposed IMP, based on ideas from the back-propagation (BP) algorithm and using a systematic approach to overcome the problem of non-differentiability of morphological operations. The learning process includes a procedure to overcome the RWD: an automatic correction step geared toward eliminating the time phase distortions that occur in stock market phenomena. Furthermore, an experimental analysis is conducted with the IMP using four complex non-linear time series forecasting problems from the Brazilian stock market. Additionally, two natural-phenomena time series are used to assess the forecasting performance of the proposed IMP on non-financial data. Finally, the obtained results are discussed and compared with those of models recently proposed in the literature. Copyright © 2011 Elsevier Ltd. All rights reserved.
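A minimal sketch of the morphological (max-plus/min-plus) nodes such perceptrons build on; the convex-combination form and all parameter values here are illustrative, not the paper's exact IMP architecture:

```python
import numpy as np

def dilation(x, w):
    """Morphological dilation node: max-plus 'inner product' of input and weights."""
    return np.max(x + w)

def erosion(x, m):
    """Morphological erosion node: min-plus counterpart."""
    return np.min(x + m)

def morph_perceptron(x, w, m, lam):
    """Convex mix of a dilation and an erosion node (a common morphological
    perceptron form; lam in [0, 1] balances the two)."""
    return lam * dilation(x, w) + (1.0 - lam) * erosion(x, m)

x = np.array([1.0, 3.0, 2.0])
y = morph_perceptron(x,
                     w=np.array([0.0, -1.0, 0.5]),
                     m=np.array([0.0, 0.0, 0.0]),
                     lam=0.7)
```

The max/min operations are non-differentiable at ties, which is exactly the obstacle the paper's gradient-based design method has to work around.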
Giacomino, Agnese; Abollino, Ornella; Malandrino, Mery; Mentasti, Edoardo
2011-03-04
Single and sequential extraction procedures are used for studying element mobility and availability in solid matrices, like soils, sediments, sludge, and airborne particulate matter. In the first part of this review we reported an overview of these procedures and described the applications of chemometric uni- and bivariate techniques and of multivariate pattern recognition techniques based on variable reduction to the experimental results obtained. The second part of the review deals with the use of chemometrics not only for the visualization and interpretation of data, but also for the investigation of the effects of experimental conditions on the response, the optimization of their values and the calculation of element fractionation. We will describe the principles of the multivariate chemometric techniques considered, the aims for which they were applied and the key findings obtained. The following topics will be critically addressed: pattern recognition by cluster analysis (CA), linear discriminant analysis (LDA) and other less common techniques; modelling by multiple linear regression (MLR); investigation of spatial distribution of variables by geostatistics; calculation of fractionation patterns by a mixture resolution method (Chemometric Identification of Substrates and Element Distributions, CISED); optimization and characterization of extraction procedures by experimental design; other multivariate techniques less commonly applied. Copyright © 2010 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Rosen, David L.; Pendleton, J. David
1995-09-01
Light scattered from optically active spheres was theoretically analyzed for biodetection. The circularly polarized signal of near-forward scattering from circularly dichroic spheres was calculated. Both remote and point biodetection were considered. The analysis included the effect of a circular aperture and beam block at the detector. If the incident light is linearly polarized, a false signal would limit the sensitivity of the biodetector. If the incident light is randomly polarized, shot noise would limit the sensitivity. Suggested improvements to current techniques include a beam block, precise angular measurements, randomly polarized light, index-matching fluid, and larger apertures for large particles.
Overview of the CHarring Ablator Response (CHAR) Code
NASA Technical Reports Server (NTRS)
Amar, Adam J.; Oliver, A. Brandon; Kirk, Benjamin S.; Salazar, Giovanni; Droba, Justin
2016-01-01
An overview of the capabilities of the CHarring Ablator Response (CHAR) code is presented. CHAR is a one-, two-, and three-dimensional unstructured continuous Galerkin finite-element heat conduction and ablation solver with both direct and inverse modes. Additionally, CHAR includes a coupled linear thermoelastic solver for determination of internal stresses induced from the temperature field and surface loading. Background on the development process, governing equations, material models, discretization techniques, and numerical methods is provided. Special focus is put on the available boundary conditions including thermochemical ablation, surface-to-surface radiation exchange, and flowfield coupling. Finally, a discussion of ongoing development efforts is presented.
Control Law Design in a Computational Aeroelasticity Environment
NASA Technical Reports Server (NTRS)
Newsom, Jerry R.; Robertshaw, Harry H.; Kapania, Rakesh K.
2003-01-01
A methodology for designing active control laws in a computational aeroelasticity environment is given. The methodology involves employing a systems identification technique to develop an explicit state-space model for control law design from the output of a computational aeroelasticity code. The particular computational aeroelasticity code employed in this paper solves the transonic small disturbance aerodynamic equation using a time-accurate, finite-difference scheme. Linear structural dynamics equations are integrated simultaneously with the computational fluid dynamics equations to determine the time responses of the structure. These structural responses are employed as the input to a modern systems identification technique that determines the Markov parameters of an "equivalent linear system". The Eigensystem Realization Algorithm is then employed to develop an explicit state-space model of the equivalent linear system. The Linear Quadratic Gaussian control law design technique is employed to design a control law. The computational aeroelasticity code is modified to accept control laws and perform closed-loop simulations. Flutter control of a rectangular wing model is chosen to demonstrate the methodology. Various cases are used to illustrate the usefulness of the methodology as the nonlinearity of the aeroelastic system is increased through increased angle-of-attack changes.
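A minimal sketch of the Eigensystem Realization Algorithm for scalar Markov parameters; the toy 2-state system below is invented for the check, whereas the paper applies ERA to Markov parameters identified from aeroelastic responses:

```python
import numpy as np

def era(markov, r, p, q):
    """Eigensystem Realization Algorithm sketch for scalar Markov parameters
    markov[k] = C A^k B; returns a balanced order-r realization (A, B, C)."""
    H0 = np.array([[markov[i + j] for j in range(q)] for i in range(p)])
    H1 = np.array([[markov[i + j + 1] for j in range(q)] for i in range(p)])
    U, s, Vt = np.linalg.svd(H0)
    Ur, Vr = U[:, :r], Vt[:r, :].T
    Sr = np.diag(np.sqrt(s[:r]))
    Si = np.diag(1.0 / np.sqrt(s[:r]))
    A = Si @ Ur.T @ H1 @ Vr @ Si
    B = (Sr @ Vr.T)[:, :1]          # first column of the controllability factor
    C = (Ur @ Sr)[:1, :]            # first row of the observability factor
    return A, B, C

# Recover a known 2-state discrete system from its Markov parameters
A0 = np.array([[0.9, 0.2], [0.0, 0.5]])
B0 = np.array([[1.0], [1.0]])
C0 = np.array([[1.0, 0.0]])
h = [(C0 @ np.linalg.matrix_power(A0, k) @ B0).item() for k in range(12)]
A, B, C = era(h, r=2, p=5, q=5)
h_hat = [(C @ np.linalg.matrix_power(A, k) @ B).item() for k in range(12)]
```

For noise-free data of exact order r, the realized model reproduces the Markov parameters exactly, which is what makes ERA a natural bridge from time responses to a state-space model for LQG design.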
Non-linear eigensolver-based alternative to traditional SCF methods
NASA Astrophysics Data System (ADS)
Gavin, Brendan; Polizzi, Eric
2013-03-01
The self-consistent iterative procedure in Density Functional Theory calculations is revisited using a new, highly efficient and robust algorithm for solving the non-linear eigenvector problem (i.e., H(X)X = EX) of the Kohn-Sham equations. This new scheme is derived from a generalization of the FEAST eigenvalue algorithm, and provides a fundamental and practical numerical solution for addressing the non-linearity of the Hamiltonian with the occupied eigenvectors. In contrast to SCF techniques, the traditional outer iterations are replaced by subspace iterations that are intrinsic to the FEAST algorithm, while the non-linearity is handled at the level of a projected reduced system which is orders of magnitude smaller than the original one. Using a series of numerical examples, it will be shown that our approach can outperform the traditional SCF mixing techniques such as Pulay-DIIS by providing a high convergence rate and by converging to the correct solution regardless of the choice of the initial guess. We also discuss a practical implementation of the technique that can be achieved effectively using the FEAST solver package. This research is supported by NSF under Grant #ECCS-0846457 and Intel Corporation.
Linearized gravity in terms of differential forms
NASA Astrophysics Data System (ADS)
Baykal, Ahmet; Dereli, Tekin
2017-01-01
A technique to linearize gravitational field equations is developed in which the perturbation metric coefficients are treated as second rank, symmetric, 1-form fields belonging to the Minkowski background spacetime by using the exterior algebra of differential forms.
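For orientation, the standard linearization that the 1-form reformulation encodes can be sketched in conventional index notation; this is textbook background, not the paper's exterior-algebra derivation:

```latex
g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}, \qquad |h_{\mu\nu}| \ll 1,
\qquad \bar h_{\mu\nu} \equiv h_{\mu\nu} - \tfrac{1}{2}\,\eta_{\mu\nu}\,h,
\qquad h \equiv \eta^{\alpha\beta} h_{\alpha\beta};
\qquad \partial^{\mu}\bar h_{\mu\nu} = 0
\;\Rightarrow\; \Box\,\bar h_{\mu\nu} = -16\pi G\,T_{\mu\nu}.
```

In the paper's formulation, the components h_{\mu\nu} are repackaged as a set of symmetric, second-rank 1-form fields on the Minkowski background, so the field equations can be manipulated with the exterior algebra of differential forms.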
Response statistics of rotating shaft with non-linear elastic restoring forces by path integration
NASA Astrophysics Data System (ADS)
Gaidai, Oleg; Naess, Arvid; Dimentberg, Michael
2017-07-01
Extreme statistics of random vibrations is studied for a Jeffcott rotor under uniaxial white noise excitation. The restoring force is modelled as elastic and non-linear; a comparison with a linearized restoring force shows the effect of the force non-linearity on the response statistics. While analytical solutions and stability conditions are available for the linear model, this is generally not the case for the non-linear system, except in some special cases. The statistics of the non-linear case are studied by applying the path integration (PI) method, which is based on the Markov property of the coupled dynamic system. The Jeffcott rotor response statistics can be obtained by solving the Fokker-Planck (FP) equation of the 4D dynamic system. An efficient implementation of the PI algorithm is applied: the fast Fourier transform (FFT) is used to simulate the additive noise of the dynamic system, which significantly reduces computational time compared to classical PI. Excitation is modelled as Gaussian white noise; however, white noise with any distribution can be implemented with the same PI technique, and multidirectional Markov noise can be modelled with PI in the same way as unidirectional noise. PI is accelerated by using a Monte Carlo (MC) estimate of the joint probability density function (PDF) as initial input. The symmetry of the dynamic system was utilized to afford higher mesh resolution. Both internal (rotating) and external damping are included in the mechanical model of the rotor. The main advantage of using PI rather than MC is that PI offers high accuracy in the probability distribution tail. The latter is of critical importance for, e.g., extreme value statistics, system reliability, and first-passage probability.
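The Monte Carlo baseline that PI is compared against (and seeded from) can be sketched with Euler-Maruyama time stepping of a Duffing-type oscillator under white noise. All parameters below are illustrative; the paper's rotor model has four states and rotating damping:

```python
import numpy as np

def simulate(eps, c=0.5, k=1.0, sigma=1.0, dt=0.01, n=200_000, seed=2):
    """Euler-Maruyama sample path of  x'' + c x' + k x + eps x^3 = sigma W(t);
    returns the displacement record after discarding the transient.
    For eps = 0 the stationary variance is sigma^2 / (2 c k) analytically."""
    rng = np.random.default_rng(seed)
    dW = rng.standard_normal(n) * np.sqrt(dt)   # Brownian increments
    x = v = 0.0
    xs = np.empty(n)
    for i in range(n):
        x, v = x + v * dt, v + (-c * v - k * x - eps * x ** 3) * dt + sigma * dW[i]
        xs[i] = x
    return xs[n // 10:]                         # drop the start-up transient

var_lin = np.var(simulate(eps=0.0))   # analytic value is 1.0 for these parameters
var_nl = np.var(simulate(eps=1.0))    # hardening spring reduces the variance
```

MC estimates like this converge slowly in the distribution tail, which is precisely the regime where the paper argues PI is superior.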
Lin, Chao; Shen, Xueju; Wang, Zhisong; Zhao, Cheng
2014-06-20
We demonstrate a novel optical asymmetric cryptosystem based on the principle of elliptical polarized light linear truncation and a numerical reconstruction technique. The device of an array of linear polarizers is introduced to achieve linear truncation on the spatially resolved elliptical polarization distribution during image encryption. This encoding process can be characterized as confusion-based optical cryptography that involves no Fourier lens and diffusion operation. Based on the Jones matrix formalism, the intensity transmittance for this truncation is deduced to perform elliptical polarized light reconstruction based on two intensity measurements. Use of a quick response code makes the proposed cryptosystem practical, with versatile key sensitivity and fault tolerance. Both simulation and preliminary experimental results that support theoretical analysis are presented. An analysis of the resistance of the proposed method on a known public key attack is also provided.
NASA Technical Reports Server (NTRS)
Carlson, F. M.; Chin, L.-Y.; Fripp, A. L.; Crouch, R. K.
1982-01-01
The effect of solid-liquid interface shape on lateral solute segregation during steady-state unidirectional solidification of a binary mixture is calculated under the assumption of no convection in the liquid. A finite element technique is employed to compute the concentration field in the liquid and the lateral segregation in the solid with a curved boundary between the liquid and solid phases. The computational model is constructed assuming knowledge of the solid-liquid interface shape; no attempt is made to relate this shape to the thermal field. The influence of interface curvature on the lateral compositional variation is investigated over a range of system parameters including diffusivity, growth speed, distribution coefficient, and geometric factors of the system. In the limiting case of a slightly nonplanar interface, numerical results from the finite element technique are in good agreement with the analytical solutions of Coriell and Sekerka obtained by using linear theory. For the general case of highly non-planar interface shapes, the linear theory fails and the concentration field in the liquid as well as the lateral solute segregation in the solid can be calculated by using the finite element method.
NASA Astrophysics Data System (ADS)
Abdelkhalek, M. M.
2009-05-01
Numerical results are presented for heat and mass transfer effects on the hydromagnetic flow over a moving permeable vertical surface. An analysis is performed to study the momentum, heat and mass transfer characteristics of MHD natural convection flow over a moving permeable surface. The surface is maintained at linear temperature and concentration variations. The non-linear coupled boundary layer equations were transformed and the resulting ordinary differential equations were solved by a perturbation technique [Aziz A, Na TY. Perturbation methods in heat transfer. Berlin: Springer-Verlag; 1984. p. 1-184; Kennet Cramer R, Shih-I Pai. Magneto fluid dynamics for engineers and applied physicists 1973;166-7]. The solution is found to depend on several governing parameters, including the magnetic field strength parameter, Prandtl number, Schmidt number, buoyancy ratio and suction/blowing parameter. A parametric study of all the governing parameters is carried out, and representative results are illustrated to reveal the typical tendency of the solutions. Numerical results for the dimensionless velocity profiles, the temperature profiles, the concentration profiles, the local friction coefficient and the local Nusselt number are presented for various combinations of parameters.
NASA Technical Reports Server (NTRS)
Patera, Anthony T.; Paraschivoiu, Marius
1998-01-01
We present a finite element technique for the efficient generation of lower and upper bounds to outputs which are linear functionals of the solutions to the incompressible Stokes equations in two space dimensions; the finite element discretization is effected by Crouzeix-Raviart elements, the discontinuous pressure approximation of which is central to our approach. The bounds are based upon the construction of an augmented Lagrangian: the objective is a quadratic "energy" reformulation of the desired output; the constraints are the finite element equilibrium equations (including the incompressibility constraint), and the intersubdomain continuity conditions on velocity. Appeal to the dual max-min problem for appropriately chosen candidate Lagrange multipliers then yields inexpensive bounds for the output associated with a fine-mesh discretization; the Lagrange multipliers are generated by exploiting an associated coarse-mesh approximation. In addition to the requisite coarse-mesh calculations, the bound technique requires solution only of local subdomain Stokes problems on the fine-mesh. The method is illustrated for the Stokes equations, in which the outputs of interest are the flowrate past, and the lift force on, a body immersed in a channel.
Ackerman, L K; Noonan, G O; Begley, T H
2009-12-01
The ambient ionization technique direct analysis in real time (DART) was characterized and evaluated for the screening of food packaging for the presence of packaging additives using a benchtop mass spectrometer (MS). Approximate optimum conditions were determined for 13 common food-packaging additives, including plasticizers, anti-oxidants, colorants, grease-proofers, and ultraviolet light stabilizers. Method sensitivity and linearity were evaluated using solutions and characterized polymer samples. Additionally, the response of a model additive (di-ethyl-hexyl-phthalate) was examined across a range of sample positions, DART, and MS conditions (temperature, voltage and helium flow). Under optimal conditions, the protonated molecule, (M+H)+, was the major ion for most additives. Additive responses were highly sensitive to sample and DART source orientation, as well as to DART flow rates, temperatures, and MS inlet voltages, respectively. DART-MS response was neither consistently linear nor quantitative in this setting, and sensitivity varied by additive. All additives studied were rapidly identified in multiple food-packaging materials by DART-MS/MS, suggesting this technique can be used to screen food packaging rapidly. However, method sensitivity and quantitation require further study and improvement.
Postprocessing techniques for 3D non-linear structures
NASA Technical Reports Server (NTRS)
Gallagher, Richard S.
1987-01-01
This review covers how graphics postprocessing techniques are currently used to examine the results of 3-D nonlinear analyses, presents some new techniques that take advantage of recent technology, and discusses how these results relate to both the finite element model and its geometric parent.
Kaya, Ugur; Çolak, Abdurrahim; Becit, Necip; Ceviz, Munacettin; Kocak, Hikmet
2018-01-01
The aim of this study was to evaluate early clinical outcomes and echocardiographic measurements of the left ventricle in patients who underwent left ventricular aneurysm repair using two different techniques combined with myocardial revascularization. Eighty-nine patients (74 males, 15 females; mean age 58±8.4 years; range: 41 to 80 years) underwent post-infarction left ventricular aneurysm repair and myocardial revascularization performed between 1996 and 2016. Ventricular reconstruction was performed using endoventricular circular patch plasty (Dor procedure) (n=48; group A) or the linear repair technique (n=41; group B). Multi-vessel disease was identified in 55 (61.7%) patients and isolated left anterior descending (LAD) disease in 34 (38.2%). Five (5.6%) patients underwent aneurysmectomy alone, while the remaining 84 (94.3%) patients had aneurysmectomy with bypass. The mean number of grafts per patient was 2.1±1.2 with the Dor procedure and 2.9±1.3 with the linear repair technique. In-hospital mortality was 4.1% in group A and 7.3% in group B (P>0.05). The results of our study demonstrate that post-infarction left ventricular aneurysm repair can be performed with both techniques with acceptable surgical risk and satisfactory hemodynamic improvement.
Spelleken, E; Crowe, S B; Sutherland, B; Challens, C; Kairn, T
2018-03-01
Gafchromic EBT3 film is widely used for patient specific quality assurance of complex treatment plans. Film dosimetry techniques commonly involve the use of transmission scanning to produce TIFF files, which are analysed using a non-linear calibration relationship between the dose and red channel net optical density (netOD). Numerous film calibration techniques featured in the literature have not been independently verified or evaluated. A range of previously published film dosimetry techniques were re-evaluated, to identify whether these methods produce better results than the commonly-used non-linear, netOD method. EBT3 film was irradiated at calibration doses between 0 and 4000 cGy and 25 pieces of film were irradiated at 200 cGy to evaluate uniformity. The film was scanned using two different scanners: The Epson Perfection V800 and the Epson Expression 10000XL. Calibration curves, uncertainty in the fit of the curve, overall uncertainty and uniformity were calculated following the methods described by the different calibration techniques. It was found that protocols based on a conventional film dosimetry technique produced results that were accurate and uniform to within 1%, while some of the unconventional techniques produced much higher uncertainties (> 25% for some techniques). Some of the uncommon methods produced reliable results when irradiated to the standard treatment doses (< 400 cGy), however none could be recommended as an efficient or accurate replacement for a common film analysis technique which uses transmission scanning, red colour channel analysis, netOD and a non-linear calibration curve for measuring doses up to 4000 cGy when using EBT3 film.
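The conventional analysis chain the abstract refers to, red-channel net optical density with a non-linear calibration, can be sketched as follows. The functional form D = a·netOD + b·netOD^n is one commonly used choice, with the exponent fixed at 2.5 here as an illustrative assumption, and all numbers are invented rather than measured:

```python
import numpy as np

def net_od(pv_exposed, pv_unexposed):
    """Net optical density from red-channel transmission pixel values."""
    return np.log10(pv_unexposed / pv_exposed)

def fit_calibration(netod, dose, n=2.5):
    """Fit D = a*netOD + b*netOD**n with n fixed, which is linear least
    squares in the coefficients (a, b)."""
    X = np.column_stack([netod, netod ** n])
    coef, *_ = np.linalg.lstsq(X, dose, rcond=None)
    return coef  # (a, b)

# Synthetic calibration: pretend the true curve is D = 900*netOD + 4000*netOD^2.5
netod = np.linspace(0.05, 0.6, 12)
dose = 900.0 * netod + 4000.0 * netod ** 2.5    # cGy, invented
a, b = fit_calibration(netod, dose)
```

In practice the calibration points would come from films irradiated to known doses, and the overall uncertainty budget would include scanner non-uniformity and fit uncertainty, as the abstract describes.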
Electro-Optical Sensing Apparatus and Method for Characterizing Free-Space Electromagnetic Radiation
Zhang, Xi-Cheng; Libelo, Louis Francis; Wu, Qi
1999-09-14
Apparatus and methods for characterizing free-space electromagnetic energy, and in particular, apparatus/method suitable for real-time two-dimensional far-infrared imaging applications are presented. The sensing technique is based on a non-linear coupling between a low-frequency electric field and a laser beam in an electro-optic crystal. In addition to a practical counter-propagating sensing technique, a co-linear approach is described which provides longer radiated field--optical beam interaction length, thereby making imaging applications practical.
Balance Contrast Enhancement using piecewise linear stretching
NASA Astrophysics Data System (ADS)
Rahavan, R. V.; Govil, R. C.
1993-04-01
Balance Contrast Enhancement is one of the techniques employed to produce color composites with increased color contrast. It equalizes the three images used for color composition in range and mean. This results in a color composite with large variation in hue. Here, it is shown that piecewise linear stretching can be used for performing the Balance Contrast Enhancement. In comparison with the Balance Contrast Enhancement Technique using parabolic segment as transfer function (BCETP), the method presented here is algorithmically simple, constraint-free and produces comparable results.
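A minimal sketch of a piecewise linear stretch with three breakpoints: mapping each band's (min, mean, max) to a common (low, mid, high) equalizes the range exactly and balances the mean approximately (the exact breakpoint choice in the paper may differ):

```python
import numpy as np

def piecewise_linear_stretch(band, out_lo=0.0, out_hi=255.0):
    """Two-segment linear transfer function sending the band's min, mean and
    max to out_lo, the midpoint, and out_hi respectively, so every stretched
    band shares the same output range and roughly the same mean."""
    lo, mu, hi = band.min(), band.mean(), band.max()
    mid = 0.5 * (out_lo + out_hi)
    return np.interp(band, [lo, mu, hi], [out_lo, mid, out_hi])

rng = np.random.default_rng(3)
band = rng.uniform(40.0, 90.0, size=(64, 64)) ** 1.3   # skewed toy band
stretched = piecewise_linear_stretch(band)
```

Applying the same function to each of the three bands before color composition yields the balanced composite; unlike the parabolic transfer function of BCETP, each segment here is a plain linear map, which is what makes the method algorithmically simple.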
Clark, Catharine H; Aird, Edwin G A; Bolton, Steve; Miles, Elizabeth A; Nisbet, Andrew; Snaith, Julia A D; Thomas, Russell A S; Venables, Karen; Thwaites, David I
2015-01-01
Dosimetry audit plays an important role in the development and safety of radiotherapy. National and large scale audits are able to set, maintain and improve standards, as well as having the potential to identify issues which may cause harm to patients. They can support implementation of complex techniques and can facilitate awareness and understanding of any issues which may exist by benchmarking centres with similar equipment. This review examines the development of dosimetry audit in the UK over the past 30 years, including the involvement of the UK in international audits. A summary of audit results is given, with an overview of methodologies employed and lessons learnt. Recent and forthcoming more complex audits are considered, with a focus on future needs including the arrival of proton therapy in the UK and other advanced techniques such as four-dimensional radiotherapy delivery and verification, stereotactic radiotherapy and MR linear accelerators. The work of the main quality assurance and auditing bodies is discussed, including how they are working together to streamline audit and to ensure that all radiotherapy centres are involved. Undertaking regular external audit motivates centres to modernize and develop techniques and provides assurance, not only that radiotherapy is planned and delivered accurately but also that the patient dose delivered is as prescribed.
PMID: 26329469
Using field-particle correlations to study auroral electron acceleration in the LAPD
NASA Astrophysics Data System (ADS)
Schroeder, J. W. R.; Howes, G. G.; Skiff, F.; Kletzing, C. A.; Carter, T. A.; Vincena, S.; Dorfman, S.
2017-10-01
Resonant nonlinear Alfvén wave-particle interactions are believed to contribute to the acceleration of auroral electrons. Experiments in the Large Plasma Device (LAPD) at UCLA have been performed with the goal of providing the first direct measurement of this nonlinear process. Recent progress includes a measurement of linear fluctuations of the electron distribution function associated with the production of inertial Alfvén waves in the LAPD. These linear measurements have been analyzed using the field-particle correlation technique to study the nonlinear transfer of energy between the Alfvén wave electric fields and the electron distribution function. Results of this analysis indicate that collisions alter the resonant signature of the field-particle correlation, and implications for resonant Alfvénic electron acceleration in the LAPD are considered. This work was supported by NSF, DOE, and NASA.
Detection of liquid hazardous molecules using linearly focused Raman spectroscopy
NASA Astrophysics Data System (ADS)
Cho, Soo Gyeong; Chung, Jin Hyuk
2013-05-01
In security screening, analyzing hazardous materials in sealed bottles is an important problem. In particular, prompt, nondestructive checking of sealed liquid bottles within a very short time at the checkpoints of crowded malls, stadiums, or airports is essential for preventing terrorist attacks using liquid explosives. Aiming to design and fabricate a detector for liquid explosives, we have used linearly focused Raman spectroscopy to analyze liquid materials in transparent or semi-transparent bottles without opening their caps. Continuous-wave lasers at 532 nm wavelength with 58 mW/130 mW beam power were used for the Raman spectroscopy. Various hazardous materials, including flammable liquids and explosive materials, have been successfully distinguished and identified within a couple of seconds. We believe that our technique will be a suitable method for fast screening of liquid materials in sealed bottles.
Determining association constants from titration experiments in supramolecular chemistry.
Thordarson, Pall
2011-03-01
The most common approach for quantifying interactions in supramolecular chemistry is a titration of the guest into a solution of the host, noting the changes in some physical property through NMR, UV-Vis, fluorescence or other techniques. Despite the apparent simplicity of this approach, several issues need to be carefully addressed to ensure that the final results are reliable. These include the use of non-linear rather than linear regression methods, careful choice of the stoichiometric binding model, the choice of method (e.g., NMR vs. UV-Vis) and host concentration, the application of advanced data analysis methods such as global analysis, and finally the estimation of uncertainties and confidence intervals for the results obtained. This tutorial review gives a systematic overview of all these issues, highlighting some of the key messages with simulated data analysis examples.
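As an illustration of the non-linear regression the review recommends, the following numpy sketch fits a hypothetical 1:1 host-guest NMR titration. All concentrations and constants are invented, and the grid search over the association constant Ka (with the shift amplitude solved by linear least squares at each candidate) is one simple way to do non-linear fitting, not the paper's specific procedure:

```python
import numpy as np

def complex_conc(h0, g0, ka):
    """Equilibrium concentration of the 1:1 host-guest complex HG,
    from the quadratic mass-balance solution."""
    s = h0 + g0 + 1.0 / ka
    return 0.5 * (s - np.sqrt(s * s - 4.0 * h0 * g0))

h0 = 1e-3                       # fixed host concentration (M), invented
g0 = np.linspace(0, 5e-3, 20)   # titrated guest concentration (M)
ka_true, dmax_true = 2000.0, 1.5
shift = dmax_true * complex_conc(h0, g0, ka_true) / h0   # observed shift (ppm)

# Non-linear fit: search Ka on a log grid; at each candidate, the shift
# amplitude delta_max follows by ordinary linear least squares.
best = (None, None, np.inf)
for ka in np.geomspace(10, 1e6, 400):
    x = complex_conc(h0, g0, ka) / h0      # bound fraction of host
    dmax = float(x @ shift / (x @ x))
    resid = float(np.sum((shift - dmax * x) ** 2))
    if resid < best[2]:
        best = (ka, dmax, resid)
ka_fit, dmax_fit, _ = best
```

Fitting the full binding isotherm this way avoids the linearizing approximations (e.g., Benesi-Hildebrand-style plots) that the review cautions against.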
Assessing non-uniqueness: An algebraic approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vasco, Don W.
Geophysical inverse problems are endowed with a rich mathematical structure. When discretized, most differential and integral equations of interest are algebraic (polynomial) in form. Techniques from algebraic geometry and computational algebra provide a means to address questions of existence and uniqueness for both linear and non-linear inverse problems. In a sense, the methods extend ideas which have proven fruitful in treating linear inverse problems.
Integration of system identification and finite element modelling of nonlinear vibrating structures
NASA Astrophysics Data System (ADS)
Cooper, Samson B.; DiMaio, Dario; Ewins, David J.
2018-03-01
The Finite Element Method (FEM), Experimental Modal Analysis (EMA) and other linear analysis techniques have been established as reliable tools for the dynamic analysis of engineering structures. They are often used to provide solutions for small and large structures and a variety of other cases in structural dynamics, even those exhibiting a certain degree of nonlinearity. Unfortunately, when the nonlinear effects are substantial or the accuracy of the predicted response is of vital importance, a linear finite element model will generally prove to be unsatisfactory. As a result, the validated linear FE model requires further enhancement so that it can represent and predict the nonlinear behaviour exhibited by the structure. In this paper, a pragmatic approach to integrating test-based system identification and FE modelling of a nonlinear structure is presented. This integration is based on three different phases: the first phase involves the derivation of an Underlying Linear Model (ULM) of the structure, the second phase includes experiment-based nonlinear identification using measured time series and the third phase covers augmenting the linear FE model and experimental validation of the nonlinear FE model. The proposed case study is demonstrated on a twin cantilever beam assembly coupled with a flexible arch-shaped beam. In this case, polynomial-type nonlinearities are identified and validated with force-controlled stepped-sine test data at several excitation levels.
Temporal Gain Correction for X-Ray Calorimeter Spectrometers
NASA Technical Reports Server (NTRS)
Porter, F. S.; Chiao, M. P.; Eckart, M. E.; Fujimoto, R.; Ishisaki, Y.; Kelley, R. L.; Kilbourne, C. A.; Leutenegger, M. A.; McCammon, D.; Mitsuda, K.
2016-01-01
Calorimetric X-ray detectors are very sensitive to their environment. The boundary conditions, including heat sink temperature, the local radiation temperature, bias, and the temperature of the readout electronics, can have a profound effect on the gain. Any variation in the boundary conditions can cause temporal variations in the gain of the detector and compromise both the energy scale and the resolving power of the spectrometer. Most production X-ray calorimeter spectrometers, both on the ground and in space, have some means of tracking the gain as a function of time, often using a calibration spectral line. For small gain changes, a linear stretch correction is often sufficient. However, the detectors are intrinsically non-linear, and often the event analysis, i.e., shaping, optimal filters, etc., adds additional non-linearity. Thus, for large gain variations or when the best possible precision is required, a linear stretch correction is not sufficient. Here, we discuss a new correction technique based on non-linear interpolation of the energy-scale functions. Using Astro-H SXS calibration data, we demonstrate that the correction can recover the X-ray energy to better than 1 part in 10⁴ over the entire spectral band to above 12 keV, even for large-scale gain variations. This method will be used to correct any temporal drift of the on-orbit per-pixel gain using on-board calibration sources for the SXS instrument on the Astro-H observatory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Audren, Benjamin; Lesgourgues, Julien; Bird, Simeon
2013-01-01
We present forecasts for the accuracy of determining the parameters of a minimal cosmological model and the total neutrino mass based on combined mock data for a future Euclid-like galaxy survey and Planck. We consider two different galaxy surveys: a spectroscopic redshift survey and a cosmic shear survey. We make use of the Monte Carlo Markov Chain (MCMC) technique and assume two sets of theoretical errors. The first error is meant to account for uncertainties in the modelling of the effect of neutrinos on the non-linear galaxy power spectrum, and we assume this error to be fully correlated in Fourier space. The second error is meant to parametrize the overall residual uncertainties in modelling the non-linear galaxy power spectrum at small scales, and is conservatively assumed to be uncorrelated and to increase with the ratio of a given scale to the scale of non-linearity. It hence increases with wavenumber and decreases with redshift. With these two assumptions for the errors, and assuming further conservatively that the uncorrelated error rises above 2% at k = 0.4 h/Mpc and z = 0.5, we find that a future Euclid-like cosmic shear/galaxy survey achieves a 1-σ error on M_ν close to 32 meV/25 meV, sufficient for detecting the total neutrino mass with good significance. If the residual uncorrelated error indeed rises rapidly towards smaller scales in the non-linear regime as we have assumed here, then the data on non-linear scales do not increase the sensitivity to the total neutrino mass. Assuming instead a ten times smaller theoretical error with the same scale dependence, the error on the total neutrino mass decreases moderately from σ(M_ν) = 18 meV to 14 meV when mildly non-linear scales with 0.1 h/Mpc < k < 0.6 h/Mpc are included in the analysis of the galaxy survey data.
Lapsiwala, Samir B; Anderson, Paul A; Oza, Ashish; Resnick, Daniel K
2006-03-01
We performed a biomechanical comparison of several C1 to C2 fixation techniques including crossed laminar (intralaminar) screw fixation, anterior C1 to C2 transarticular screw fixation, C1 to C2 pedicle screw fixation, and posterior C1 to C2 transarticular screw fixation. Eight cadaveric cervical spines were tested intact and after dens fracture. Four different C1 to C2 screw fixation techniques were tested. Posterior transarticular and pedicle screw constructs were tested twice, once with supplemental sublaminar cables and once without cables. The specimens were tested in three modes of loading: flexion-extension, lateral bending, and axial rotation. All tests were performed in load and torque control. Pure bending moments of 2 Nm were applied in flexion-extension and lateral bending, whereas a 1 Nm moment was applied in axial rotation. Linear displacements were recorded from extensometers rigidly affixed to the C1 and C2 vertebrae. Linear displacements were reduced to angular displacements using trigonometry. Adding cable fixation results in a stiffer construct for posterior transarticular screws. The addition of cables did not affect the stiffness of C1 to C2 pedicle screw constructs. There were no significant differences in stiffness between anterior and posterior transarticular screw techniques, unless cable fixation was added to the posterior construct. All three posterior screw constructs with supplemental cable fixation provide equal stiffness with regard to flexion-extension and axial rotation. C1 lateral mass-C2 intralaminar screw fixation restored resistance to lateral bending, but not to the same degree as the other screw fixation techniques. All four screw fixation techniques limit motion at the C1 to C2 articulation. The addition of cable fixation improves resistance to flexion and extension for posterior transarticular screw fixation.
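The reduction of linear extensometer readings to angular displacements by trigonometry can be sketched as follows. The readings and the 20 mm sensor separation are hypothetical, not the paper's data; the small relative displacement of two points a known distance apart gives the rotation angle:

```python
import math

def angular_displacement(d1, d2, separation):
    """Relative rotation (radians) between two vertebrae inferred from
    two linear displacement readings d1, d2 (mm) measured at points a
    known distance apart (mm) on the construct."""
    return math.atan2(d1 - d2, separation)

theta = angular_displacement(0.8, 0.2, 20.0)  # hypothetical mm readings
theta_deg = math.degrees(theta)
```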
Evaluating forest management policies by parametric linear programing
Daniel I. Navon; Richard J. McConnen
1967-01-01
Parametric linear programming, an analytical and simulation technique, explores alternative conditions and devises an optimal management plan for each condition. Its application in solving policy-decision problems in the management of forest lands is illustrated with an example.
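A minimal sketch of parametric linear programming, assuming an invented two-activity forest-management LP in which the budget limit is the swept policy parameter. The tiny vertex-enumeration solver below stands in for a production LP code and is only adequate for two variables:

```python
import numpy as np
from itertools import combinations

def solve_lp_2d(c, A, b):
    """Maximize c·x subject to A x <= b, x >= 0, by enumerating the
    vertices of the feasible polygon (fine for tiny illustrative LPs)."""
    A_all = np.vstack([A, -np.eye(2)])       # append x >= 0 constraints
    b_all = np.concatenate([b, np.zeros(2)])
    best_val, best_x = -np.inf, None
    for i, j in combinations(range(len(b_all)), 2):
        M = A_all[[i, j]]
        if abs(np.linalg.det(M)) < 1e-12:
            continue                          # parallel constraints
        x = np.linalg.solve(M, b_all[[i, j]])
        if np.all(A_all @ x <= b_all + 1e-9):  # vertex is feasible
            v = float(c @ x)
            if v > best_val:
                best_val, best_x = v, x
    return best_val, best_x

c = np.array([50.0, 30.0])      # revenue per unit area (timber, recreation)
A = np.array([[1.0, 1.0],       # land:   x1 + x2 <= 100
              [4.0, 1.0]])      # budget: 4*x1 + x2 <= b (policy parameter)

# Sweep the budget parameter and record the optimal plan for each value.
plans = {b: solve_lp_2d(c, A, np.array([100.0, b]))
         for b in (120.0, 200.0, 400.0)}
```

Re-solving over a range of parameter values traces how the optimal plan and its revenue respond to the policy condition, which is the essence of the parametric approach.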
Applied Time Domain Stability Margin Assessment for Nonlinear Time-Varying Systems
NASA Technical Reports Server (NTRS)
Kiefer, J. M.; Johnson, M. D.; Wall, J. H.; Dominguez, A.
2016-01-01
The baseline stability margins for NASA's Space Launch System (SLS) launch vehicle were generated via the classical approach of linearizing the system equations of motion and determining the gain and phase margins from the resulting frequency domain model. To improve the fidelity of the classical methods, the linear frequency domain approach can be extended by replacing static, memoryless nonlinearities with describing functions. This technique, however, does not address the time-varying nature of the dynamics of a launch vehicle in flight. An alternative technique for the evaluation of the stability of the nonlinear launch vehicle dynamics along its trajectory is to incrementally adjust the gain and/or time delay in the time domain simulation until the system exhibits unstable behavior. This technique has the added benefit of providing a direct comparison between the time domain and frequency domain tools in support of simulation validation. This technique was implemented by using the Stability Aerospace Vehicle Analysis Tool (SAVANT) computer simulation to evaluate the stability of the SLS system with the Adaptive Augmenting Control (AAC) active and inactive along its ascent trajectory. The gains for which the vehicle maintains apparent time-domain stability define the gain margins, and the time delay similarly defines the phase margin. This method of extracting the control stability margins from the time-domain simulation is relatively straightforward and the resultant margins can be compared to the linearized system results. The sections herein describe the techniques employed to extract the time-domain margins, compare the results between these nonlinear and the linear methods, and provide explanations for observed discrepancies. The SLS ascent trajectory was simulated with SAVANT and the classical linear stability margins were evaluated at one-second intervals.
The linear analysis was performed with the AAC algorithm disabled to attain baseline stability margins. At each time point, the system was linearized about the current operating point using Simulink's built-in solver. Each linearized system in time was evaluated for its rigid-body gain margin (high frequency gain margin), rigid-body phase margin, and aero gain margin (low frequency gain margin) for each control axis. Using the stability margins derived from the baseline linearization approach, the time domain derived stability margins were determined by executing time domain simulations in which axis-specific incremental gain and phase adjustments were made to the nominal system about the expected neutral stability point at specific flight times. The baseline stability margin time histories were used to shift the system gain to various values around the zero margin point such that a precise amount of expected gain margin was maintained throughout flight. When assessing the gain margins, the gain was applied starting at the time point under consideration, thereafter following the variation in the margin found in the linear analysis. When assessing the rigid-body phase margin, a constant time delay was applied to the system starting at the time point under consideration. If the baseline stability margins were correctly determined via the linear analysis, the time domain simulation results should contain unstable behavior at certain gain and phase values. Examples will be shown from repeated simulations with variable added gain and phase lag. Faithfulness of margins calculated from the linear analysis to the nonlinear system will be demonstrated.
Implementation of Nonlinear Control Laws for an Optical Delay Line
NASA Technical Reports Server (NTRS)
Hench, John J.; Lurie, Boris; Grogan, Robert; Johnson, Richard
2000-01-01
This paper discusses the implementation of a globally stable nonlinear controller algorithm for the Real-Time Interferometer Control System Testbed (RICST) brassboard optical delay line (ODL) developed for the Interferometry Technology Program at the Jet Propulsion Laboratory. The control methodology essentially employs loop shaping to implement linear control laws, while utilizing nonlinear elements as a means of ameliorating the effects of actuator saturation in its coarse, main, and vernier stages. The linear controllers were implemented as high-order digital filters and were designed using Bode integral techniques to determine the loop shape. The nonlinear techniques encompass the areas of exact linearization, anti-windup control, nonlinear rate limiting and modal control. Details of the design procedure are given as well as data from the actual mechanism.
Williams, Calum; Rughoobur, Girish; Flewitt, Andrew J; Wilkinson, Timothy D
2016-11-10
A single-step fabrication method is presented for ultra-thin, linearly variable optical bandpass filters (LVBFs) based on a metal-insulator-metal arrangement using modified evaporation deposition techniques. This alternate process methodology offers reduced complexity and cost in comparison to conventional techniques for fabricating LVBFs. We are able to achieve linear variation of insulator thickness across a sample, by adjusting the geometrical parameters of a typical physical vapor deposition process. We demonstrate LVBFs with spectral selectivity from 400 to 850 nm based on Ag (25 nm) and MgF2 (75-250 nm). Maximum spectral transmittance is measured at ∼70% with a Q-factor of ∼20.
NASA Astrophysics Data System (ADS)
Zhu, Ran; Hui, Ming; Shen, Dongya; Zhang, Xiupu
2017-02-01
In this paper, a dual-wavelength linearization (DWL) technique is studied to suppress odd- and even-order nonlinearities simultaneously in a Mach-Zehnder modulator (MZM) modulated radio-over-fiber (RoF) transmission system. A theoretical model is given to analyze the DWL employed for the MZM. In a single-tone test, the simultaneous suppression of the second-order harmonic distortion (HD2) and third-order harmonic distortion (HD3) is experimentally verified at different bias voltages of the MZM. The measured spurious-free dynamic ranges (SFDRs) with respect to the HD2 and HD3 are improved simultaneously compared to using a single laser. The output P1dB is also improved by the DWL technique. Moreover, a WiFi signal is transmitted in the RoF system to test the linearization for a broadband signal. The result shows that more than 1 dB improvement of the error vector magnitude (EVM) is obtained by the DWL technique.
NASA Technical Reports Server (NTRS)
Pfeil, W. H.; De Los Reyes, G.; Bobula, G. A.
1985-01-01
A power turbine governor was designed for a recent-technology turboshaft engine coupled to a modern, articulated rotor system using Linear Quadratic Regulator (LQR) and Kalman Filter (KF) techniques. A linear, state-space model of the engine and rotor system was derived for six engine power settings from flight idle to maximum continuous. An integrator was appended to the fuel flow input to reduce the steady-state governor error to zero. Feedback gains were calculated for the system states at each power setting using the LQR technique. The main rotor tip speed state is not measurable, so a Kalman Filter of the rotor was used to estimate this state. The crossover of the system was increased to 10 rad/s, compared to 2 rad/s for a current governor. Initial computer simulations with a nonlinear engine model indicate a significant decrease in power turbine speed variation with the LQR governor compared to a conventional governor.
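The LQR-with-appended-integrator design can be sketched on a hypothetical two-state model. The matrices and weights below are illustrative stand-ins, not the paper's turboshaft model, and the Hamiltonian-eigenvector routine is a numpy-only substitute for a Riccati solver:

```python
import numpy as np

def lqr(A, B, Q, R):
    """Continuous-time LQR gain via the stable eigenvectors of the
    Hamiltonian matrix (a numpy-only stand-in for a Riccati solver)."""
    n = A.shape[0]
    Rinv = np.linalg.inv(R)
    H = np.block([[A, -B @ Rinv @ B.T],
                  [-Q, -A.T]])
    w, V = np.linalg.eig(H)
    stable = V[:, w.real < 0]                 # n stable eigenvectors
    P = (stable[n:] @ np.linalg.inv(stable[:n])).real
    return Rinv @ B.T @ P

# Hypothetical 2-state model: x = [power-turbine speed error,
# rotor tip-speed error], input u = fuel flow.
A = np.array([[-2.0, 1.0],
              [0.5, -1.0]])
B = np.array([[1.0],
              [0.0]])

# Append an integrator on the speed error so that the steady-state
# governor error is driven to zero, as in the paper.
Aaug = np.zeros((3, 3))
Aaug[:2, :2] = A
Aaug[2, 0] = 1.0                  # d/dt x_i = speed error
Baug = np.vstack([B, [[0.0]]])

K = lqr(Aaug, Baug, np.diag([10.0, 1.0, 5.0]), np.array([[1.0]]))
cl_eigs = np.linalg.eigvals(Aaug - Baug @ K)   # closed-loop spectrum
```

In the paper the unmeasurable rotor tip-speed state would be replaced by a Kalman filter estimate; here full state feedback is assumed for brevity.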
Use of reconstructed 3D VMEC equilibria to match effects of toroidally rotating discharges in DIII-D
Wingen, Andreas; Wilcox, Robert S.; Cianciosa, Mark R.; ...
2016-10-13
Here, a technique for tokamak equilibrium reconstructions is used for multiple DIII-D discharges, including L-mode and H-mode cases when weakly 3D fields (δB/B ∼ 10⁻³) are applied. The technique couples diagnostics to the non-linear, ideal MHD equilibrium solver VMEC, using the V3FIT code, to find the most likely 3D equilibrium based on a suite of measurements. It is demonstrated that V3FIT can be used to find non-linear 3D equilibria that are consistent with experimental measurements of the plasma response to very weak 3D perturbations, as well as with 2D profile measurements. Observations at DIII-D show that plasma rotation larger than 20 krad s⁻¹ changes the relative phase between the applied 3D fields and the measured plasma response. Discharges with low averaged rotation (10 krad s⁻¹) and peaked rotation profiles (40 krad s⁻¹) are reconstructed. Similarities and differences to forward-modeled VMEC equilibria, which do not include rotational effects, are shown. Toroidal phase shifts of up to 30° are found between the measured and forward-modeled plasma responses at the highest values of rotation. The plasma response phases of reconstructed equilibria, on the other hand, match the measured ones. This is the first time V3FIT has been used to reconstruct weakly 3D tokamak equilibria.
Linear Approximation to Optimal Control Allocation for Rocket Nozzles with Elliptical Constraints
NASA Technical Reports Server (NTRS)
Orr, Jeb S.; Wall, John W.
2011-01-01
In this paper we present a straightforward technique for assessing and realizing the maximum control moment effectiveness for a launch vehicle with multiple constrained rocket nozzles, where elliptical deflection limits in gimbal axes are expressed as an ensemble of independent quadratic constraints. A direct method of determining an approximating ellipsoid that inscribes the set of attainable angular accelerations is derived. In the case of a parameterized linear generalized inverse, the geometry of the attainable set is computationally expensive to obtain but can be approximated to a high degree of accuracy with the proposed method. A linear inverse can then be optimized to maximize the volume of the true attainable set by maximizing the volume of the approximating ellipsoid. The use of a linear inverse does not preclude the use of linear methods for stability analysis and control design, preferred in practice for assessing the stability characteristics of the inertial and servoelastic coupling appearing in large boosters. The present techniques are demonstrated via application to the control allocation scheme for a concept heavy-lift launch vehicle.
Trescott, Peter C.; Pinder, George Francis; Larson, S.P.
1976-01-01
The model will simulate ground-water flow in an artesian aquifer, a water-table aquifer, or a combined artesian and water-table aquifer. The aquifer may be heterogeneous and anisotropic and have irregular boundaries. The source term in the flow equation may include well discharge, constant recharge, leakage from confining beds in which the effects of storage are considered, and evapotranspiration as a linear function of depth to water. The theoretical development includes presentation of the appropriate flow equations and derivation of the finite-difference approximations (written for a variable grid). The documentation emphasizes the numerical techniques that can be used for solving the simultaneous equations and describes the results of numerical experiments using these techniques. Of the three numerical techniques available in the model, the strongly implicit procedure, in general, requires less computer time and has fewer numerical difficulties than do the iterative alternating direction implicit procedure and line successive overrelaxation (which includes a two-dimensional correction procedure to accelerate convergence). The documentation includes a flow chart, program listing, an example simulation, and sections on designing an aquifer model and requirements for data input. It illustrates how model results can be presented on the line printer and pen plotters with a program that utilizes the graphical display software available from the Geological Survey Computer Center Division. In addition the model includes options for reading input data from a disk and writing intermediate results on a disk.
Cruz, Antonio M; Barr, Cameron; Puñales-Pozo, Elsa
2008-01-01
This research's main goals were to build a predictor for a turnaround time (TAT) indicator, to estimate its values, and to use a numerical clustering technique to find possible causes of undesirable TAT values. The following stages were used: domain understanding, data characterisation and sample reduction, and insight characterisation. A multiple linear regression predictor of the TAT indicator and clustering techniques were used to improve corrective maintenance task efficiency in a clinical engineering department (CED). Multiple linear regression was used to build a predictive model of TAT values. The variables contributing to the model were clinical engineering department response time (CE(rt), 0.415 positive coefficient), stock service response time (Stock(rt), 0.734 positive coefficient), priority level (0.21 positive coefficient) and service time (0.06 positive coefficient). The regression process showed heavy reliance on Stock(rt), CE(rt) and priority, in that order. Clustering techniques revealed the main causes of high TAT values. This examination has provided a means for analysing current technical service quality and effectiveness. In doing so, it has demonstrated a process for identifying areas and methods of improvement and a model against which to analyse these methods' effectiveness.
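The multiple linear regression step can be sketched with simulated data built from the coefficients quoted in the abstract; the data themselves are synthetic, and the ordinary least-squares fit below is a generic stand-in for the paper's regression procedure:

```python
import numpy as np

# Simulate maintenance records whose TAT follows the abstract's model:
# TAT = 0.415*CE_rt + 0.734*Stock_rt + 0.21*priority + 0.06*service + noise.
rng = np.random.default_rng(1)
n = 200
ce_rt = rng.uniform(0, 10, n)                 # CED response time
stock_rt = rng.uniform(0, 10, n)              # stock service response time
priority = rng.integers(1, 4, n).astype(float)
service = rng.uniform(0, 20, n)               # service time
tat = (0.415 * ce_rt + 0.734 * stock_rt + 0.21 * priority
       + 0.06 * service + rng.normal(0, 0.1, n))

# Fit the multiple linear regression by ordinary least squares.
X = np.column_stack([ce_rt, stock_rt, priority, service, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, tat, rcond=None)
```

With clean data the fitted coefficients recover the generating ones, reproducing the abstract's observation that Stock(rt) and CE(rt) dominate the prediction.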
Analysis of Vlbi, Slr and GPS Site Position Time Series
NASA Astrophysics Data System (ADS)
Angermann, D.; Krügel, M.; Meisel, B.; Müller, H.; Tesmer, V.
Conventionally, the IERS terrestrial reference frame (ITRF) is realized by the adoption of a set of epoch coordinates and linear velocities for a set of global tracking stations. Due to the remarkable progress of the space geodetic observation techniques (e.g. VLBI, SLR, GPS), the accuracy and consistency of the ITRF have increased continuously. The accuracy achieved today is mainly limited by technique-related systematic errors, which are often poorly characterized or quantified. Therefore it is essential to analyze the individual techniques' solutions with respect to systematic differences, models, parameters, datum definition, etc. The main subject of this presentation is the analysis of GPS, SLR and VLBI time series of site positions. The investigations are based on SLR and VLBI solutions computed at DGFI with the software systems DOGS (SLR) and OCCAM (VLBI). The GPS time series are based on weekly IGS station coordinate solutions. We analyze the time series with respect to the issues mentioned above. In particular, we characterize the noise in the time series, identify periodic signals, and investigate non-linear effects that complicate the assignment of linear velocities for global tracking sites. One important aspect is the comparison of results obtained by different techniques at colocation sites.
Active learning for semi-supervised clustering based on locally linear propagation reconstruction.
Chang, Chin-Chun; Lin, Po-Yi
2015-03-01
The success of semi-supervised clustering relies on the effectiveness of side information. To get effective side information, a new active learner that learns pairwise constraints, known as must-link and cannot-link constraints, is proposed in this paper. Three novel techniques are developed for learning effective pairwise constraints. The first technique is used to identify samples less important to cluster structures. This technique makes use of a kernel version of locally linear embedding for manifold learning. Samples neither important to locally linear propagation reconstructions of other samples nor on flat patches in the learned manifold are regarded as unimportant samples. The second is a novel criterion for query selection. This criterion considers not only the importance of a sample to expanding the space coverage of the learned samples but also the expected number of queries needed to learn the sample. To facilitate semi-supervised clustering, the third technique yields inferred must-links for passing information about flat patches in the learned manifold to semi-supervised clustering algorithms. Experimental results have shown that the learned pairwise constraints can capture the underlying cluster structures and have proved the feasibility of the proposed approach.
Development of Super-Ensemble techniques for ocean analyses: the Mediterranean Sea case
NASA Astrophysics Data System (ADS)
Pistoia, Jenny; Pinardi, Nadia; Oddo, Paolo; Collins, Matthew; Korres, Gerasimos; Drillet, Yann
2017-04-01
Short-term ocean analyses for sea surface temperature (SST) in the Mediterranean Sea can be improved by a statistical post-processing technique called the super-ensemble. This technique consists of a multi-linear regression algorithm applied to a Multi-Physics Multi-Model Super-Ensemble (MMSE) dataset, a collection of different operational forecasting analyses together with ad-hoc simulations produced by modifying selected numerical model parameterizations. A new linear regression algorithm based on Empirical Orthogonal Function filtering techniques is able to prevent overfitting problems, although the best performance is achieved when correlation is added to the super-ensemble structure using a simple spatial filter applied after the linear regression. Our outcomes show that super-ensemble performance depends on the selection of an unbiased operator and the length of the learning period, but the quality of the generating MMSE dataset has the largest impact on the Root Mean Square Error (RMSE) of the MMSE analysis, evaluated with respect to observed satellite SST. The lowest RMSE estimates result from the following choices: a 15-day training period, an overconfident MMSE dataset (a subset with the higher-quality ensemble members), and the least-squares algorithm filtered a posteriori.
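A minimal super-ensemble sketch, assuming synthetic SST fields and invented member biases: regression weights that combine the ensemble members are learned over a 15-day training period and then applied to a new verification day (no EOF filtering or spatial smoothing here, just the core multi-linear regression):

```python
import numpy as np

rng = np.random.default_rng(2)
n_train, n_grid = 15, 500            # 15-day training period, grid points
truth = rng.normal(20.0, 2.0, (n_train, n_grid))        # "observed" SST

# Three synthetic ensemble members, each biased and noisy.
biases = (0.3, -0.5, 1.0)
members = np.stack([truth + rng.normal(b, 0.5, (n_train, n_grid))
                    for b in biases], axis=-1)

# Multi-linear regression of the truth on the members (intercept included
# so systematic member biases are removed).
X = np.column_stack([members.reshape(-1, 3), np.ones(n_train * n_grid)])
w, *_ = np.linalg.lstsq(X, truth.ravel(), rcond=None)

# Apply the learned weights to a new verification day.
t_new = rng.normal(20.0, 2.0, n_grid)
m_new = np.stack([t_new + rng.normal(b, 0.5, n_grid) for b in biases],
                 axis=-1)
sse = m_new @ w[:3] + w[3]
rmse_sse = float(np.sqrt(np.mean((sse - t_new) ** 2)))
rmse_best = min(float(np.sqrt(np.mean((m_new[:, k] - t_new) ** 2)))
                for k in range(3))
```

Because the regression both removes member biases and averages their independent errors, the super-ensemble RMSE beats that of the best single member, which is the behavior the abstract exploits.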
General technique for discrete retardation-modulation polarimetry
NASA Technical Reports Server (NTRS)
Saxena, Indu
1993-01-01
The general theory of a new technique in time-resolved ellipsometry, together with rigorous solutions for the Stokes parameters of light, is outlined. In this technique the phase of the linear retarder is stepped over three discrete values over the time interval for which the Stokes vector is determined. The technique has an advantage over synchronous detection techniques in that it can be implemented as a digitizable system.
Aircraft flight test trajectory control
NASA Technical Reports Server (NTRS)
Menon, P. K. A.; Walker, R. A.
1988-01-01
Two design techniques for linear flight test trajectory controllers (FTTCs) are described: eigenstructure assignment and the minimum error excitation technique. The two techniques are used to design FTTCs for an F-15 aircraft model for eight different maneuvers at thirty different flight conditions. An evaluation of the FTTCs is presented.
A New Pattern of Getting Nasty Number in Graphical Method
NASA Astrophysics Data System (ADS)
Sumathi, P.; Indhumathi, N.
2018-04-01
This paper proposes a new technique for obtaining nasty numbers using the graphical method in linear programming problems, and the technique is demonstrated on various linear programming problems. Some characterisations of nasty numbers are also discussed.
NASA Technical Reports Server (NTRS)
Stern, Boris E.; Svensson, Roland; Begelman, Mitchell C.; Sikora, Marek
1995-01-01
High-energy radiation processes in compact cosmic objects are often expected to have a strongly non-linear behavior. Such behavior is shown, for example, by electron-positron pair cascades and the time evolution of relativistic proton distributions in dense radiation fields. Three independent techniques have been developed to simulate these non-linear problems: the kinetic equation approach; the phase-space density (PSD) Monte Carlo method; and the large-particle (LP) Monte Carlo method. In this paper, we present the latest version of the LP method and compare it with the other methods. The efficiency of the method in treating geometrically complex problems is illustrated by showing results of simulations of 1D, 2D and 3D systems. The method is shown to be powerful enough to treat non-spherical geometries, including such effects as bulk motion of the background plasma, reflection of radiation from cold matter, and anisotropic distributions of radiating particles. It can therefore be applied to simulate high-energy processes in such astrophysical systems as accretion discs with coronae, relativistic jets, pulsar magnetospheres and gamma-ray bursts.
NASA Technical Reports Server (NTRS)
Gibson, J. S.; Rosen, I. G.
1985-01-01
In the optimal linear quadratic regulator problem for finite dimensional systems, the method known as an alpha-shift can be used to produce a closed-loop system whose spectrum lies to the left of some specified vertical line; that is, a closed-loop system with a prescribed degree of stability. This paper treats the extension of the alpha-shift to hereditary systems. In infinite dimensions, the shift can be accomplished by adding alpha times the identity to the open-loop semigroup generator and then solving an optimal regulator problem. However, this approach does not work with a new approximation scheme for hereditary control problems recently developed by Kappel and Salamon. Since this scheme is among the best to date for the numerical solution of the linear regulator problem for hereditary systems, an alternative method for shifting the closed-loop spectrum is needed. An alpha-shift technique that can be used with the Kappel-Salamon approximation scheme is developed. Both the continuous-time and discrete-time problems are considered. A numerical example which demonstrates the feasibility of the method is included.
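For the finite-dimensional case described in the abstract's opening sentence, the alpha-shift amounts to solving a standard Riccati equation for the shifted pair (A + αI, B). A minimal sketch under that reading (function and variable names are illustrative, and the hereditary-system machinery of the paper is not attempted here):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def alpha_shift_lqr(A, B, Q, R, alpha):
    """Alpha-shift for the finite-dimensional LQR problem (sketch).

    Solving the Riccati equation for the shifted pair (A + alpha*I, B)
    yields a gain K such that the closed-loop spectrum of A - B @ K
    lies strictly to the left of Re(s) = -alpha.
    """
    n = A.shape[0]
    P = solve_continuous_are(A + alpha * np.eye(n), B, Q, R)
    K = np.linalg.solve(R, B.T @ P)     # K = R^{-1} B^T P
    return K

# Double-integrator example (illustrative)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = alpha_shift_lqr(A, B, np.eye(2), np.eye(1), alpha=1.0)
closed_loop_eigs = np.linalg.eigvals(A - B @ K)
```

The gain stabilizes the shifted system, so every eigenvalue of the original closed loop sits left of the vertical line Re(s) = −α.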
NASA Technical Reports Server (NTRS)
Gibson, J. S.; Rosen, I. G.
1987-01-01
In the optimal linear quadratic regulator problem for finite dimensional systems, the method known as an alpha-shift can be used to produce a closed-loop system whose spectrum lies to the left of some specified vertical line; that is, a closed-loop system with a prescribed degree of stability. This paper treats the extension of the alpha-shift to hereditary systems. In infinite dimensions, the shift can be accomplished by adding alpha times the identity to the open-loop semigroup generator and then solving an optimal regulator problem. However, this approach does not work with a new approximation scheme for hereditary control problems recently developed by Kappel and Salamon. Since this scheme is among the best to date for the numerical solution of the linear regulator problem for hereditary systems, an alternative method for shifting the closed-loop spectrum is needed. An alpha-shift technique that can be used with the Kappel-Salamon approximation scheme is developed. Both the continuous-time and discrete-time problems are considered. A numerical example which demonstrates the feasibility of the method is included.
Pratapa, Phanisri P.; Suryanarayana, Phanish; Pask, John E.
2015-12-01
We employ Anderson extrapolation to accelerate the classical Jacobi iterative method for large, sparse linear systems. Specifically, we utilize extrapolation at periodic intervals within the Jacobi iteration to develop the Alternating Anderson–Jacobi (AAJ) method. We verify the accuracy and efficacy of AAJ in a range of test cases, including nonsymmetric systems of equations. We demonstrate that AAJ possesses a favorable scaling with system size that is accompanied by a small prefactor, even in the absence of a preconditioner. In particular, we show that AAJ is able to accelerate the classical Jacobi iteration by over four orders of magnitude, with speed-ups that increase as the system gets larger. Moreover, we find that AAJ significantly outperforms the Generalized Minimal Residual (GMRES) method in the range of problems considered here, with the relative performance again improving with size of the system. As a result, the proposed method represents a simple yet efficient technique that is particularly attractive for large-scale parallel solutions of linear systems of equations.
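The alternation described above (plain weighted-Jacobi sweeps, with an Anderson extrapolation over the most recent residuals every p-th iteration) can be sketched in a few lines. This is an illustrative implementation with hypothetical parameter choices (history depth m, period p, damping ω), not the authors' code:

```python
import numpy as np

def alternating_anderson_jacobi(A, b, m=3, p=4, omega=2/3, tol=1e-8, maxit=500):
    """Alternating Anderson-Jacobi (AAJ) sketch for a dense test matrix."""
    x = np.zeros_like(b, dtype=float)
    Dinv = 1.0 / np.diag(A)
    X, F = [], []                         # recent iterates and residuals
    for k in range(maxit):
        r = b - A @ x
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        f = Dinv * r                      # Jacobi-preconditioned residual
        X.append(x.copy()); F.append(f.copy())
        if len(X) > m + 1:
            X.pop(0); F.pop(0)
        if (k + 1) % p == 0 and len(F) > 1:
            # Anderson step: least-squares mix of the stored residuals
            dF = np.column_stack([Fj - F[-1] for Fj in F[:-1]])
            dX = np.column_stack([Xj - X[-1] for Xj in X[:-1]])
            gamma, *_ = np.linalg.lstsq(dF, F[-1], rcond=None)
            x = x + f - (dX + dF) @ gamma
        else:
            x = x + omega * f             # weighted Jacobi sweep
    return x
```

On a diagonally dominant system (where plain Jacobi already converges), the periodic extrapolation simply reduces the iteration count; the sparse, parallel setting the abstract targets would replace the dense operations accordingly.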
[A novel quantitative approach to study dynamic anaerobic process at micro scale].
Zhang, Zhong-Liang; Wu, Jing; Jiang, Jian-Kai; Jiang, Jie; Li, Huai-Zhi
2012-11-01
Anaerobic digestion is attracting more and more interest because of its advantages, such as low cost and the recovery of clean energy. In order to overcome the drawbacks of existing methods for studying the dynamic anaerobic process, a novel microscale quantitative approach at the granule level was developed, combining a microdevice with quantitative image analysis techniques. This experiment displayed the process and characteristics of gas production at static state for the first time, and the results indicated that the method has satisfactory repeatability. The gas production process at static state could be divided into three stages: a rapid linear increasing stage, a decelerated increasing stage, and a slow linear increasing stage. The rapid linear increasing stage was long and the biogas rate was high under a high initial organic loading rate. The results showed that it was feasible to carry out the anaerobic process in the microdevice; furthermore, this novel method was reliable and could clearly display the dynamic process of the anaerobic reaction at the micro scale. The results are helpful for understanding the anaerobic process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pratapa, Phanisri P.; Suryanarayana, Phanish; Pask, John E.
We employ Anderson extrapolation to accelerate the classical Jacobi iterative method for large, sparse linear systems. Specifically, we utilize extrapolation at periodic intervals within the Jacobi iteration to develop the Alternating Anderson–Jacobi (AAJ) method. We verify the accuracy and efficacy of AAJ in a range of test cases, including nonsymmetric systems of equations. We demonstrate that AAJ possesses a favorable scaling with system size that is accompanied by a small prefactor, even in the absence of a preconditioner. In particular, we show that AAJ is able to accelerate the classical Jacobi iteration by over four orders of magnitude, with speed-ups that increase as the system gets larger. Moreover, we find that AAJ significantly outperforms the Generalized Minimal Residual (GMRES) method in the range of problems considered here, with the relative performance again improving with size of the system. As a result, the proposed method represents a simple yet efficient technique that is particularly attractive for large-scale parallel solutions of linear systems of equations.
A preliminary study of the thermal measurement with nMAG gel dosimeter by MRI
NASA Astrophysics Data System (ADS)
Chuang, Chun-Chao; Shao, Chia-Ho; Shih, Cheng-Ting; Yeh, Yu-Chen; Lu, Cheng-Chang; Chuang, Keh-Shih; Wu, Jay
2014-11-01
The methacrylic acid (nMAG) gel dosimeter is an effective tool for 3-dimensional quality assurance of radiation therapy. In addition to radiation-induced polymerization effects, the nMAG gel also responds to temperature variation. In this study, we proposed a new method to evaluate the thermal response in thermal therapy using nMAG gel and magnetic resonance imaging (MRI) scans. Several properties of nMAG have been investigated, including the R2 relaxation rate, temperature sensitivity, and temperature linearity of the thermal dose response. nMAG was heated by the double-boiling method in the range of 37-45 °C. MRI scans were performed with the head coil receiver. The temperature to R2 response curve was analyzed and simple linear regression was performed, with an R-square value of 0.9835. The measured data showed a good inverse linear relationship between R2 and temperature. We conclude that the nMAG polymer gel dosimeter shows great potential as a technique to evaluate the temperature rise during thermal surgery.
A study of fluid-structure problems
NASA Astrophysics Data System (ADS)
Lam, Dennis Kang-Por
The stability of structures with and without fluid load is investigated. A method is developed for determining the fluid load in terms of added structural mass. Finite element methods are employed to study the buckling of a cylindrical shell under axial compression and of liquid storage tanks under hydrodynamic load. Both linear and nonlinear analyses are performed. Diamond modes are found to be the possible postbuckling shapes of the cylindrical shell. Local buckling modes, including elephant-foot and diamond buckles, are found for the liquid storage tank models. Comparison between the linear and nonlinear results indicates a substantial difference in buckling mode shapes, though the buckling loads are close to each other. The method for determining the hydrodynamic mass is applied to the impeller stage of a centrifugal pump. The method is based on a linear perturbation technique which assumes that the disturbance in the flow boundaries and velocities caused by the motion of the structure is small. A potential method is used to estimate the velocity flow field. The hydrodynamic mass is then obtained by calculating the total force which results from the pressure induced by a perturbation of the structure.
Multidimensional custom-made non-linear microscope: from ex-vivo to in-vivo imaging
NASA Astrophysics Data System (ADS)
Cicchi, R.; Sacconi, L.; Jasaitis, A.; O'Connor, R. P.; Massi, D.; Sestini, S.; de Giorgi, V.; Lotti, T.; Pavone, F. S.
2008-09-01
We have built a custom-made multidimensional non-linear microscope equipped with a combination of several non-linear laser imaging techniques involving fluorescence lifetime, multispectral two-photon and second-harmonic generation imaging. The optical system was mounted on a vertical honeycomb breadboard in an upright configuration, using two galvo-mirrors relayed by two spherical mirrors as scanners. A double detection system working in non-descanning mode has allowed both photon counting and a proportional regime. This experimental setup, offering high spatial (micrometric) and temporal (sub-nanosecond) resolution, has been used to image both ex-vivo and in-vivo biological samples, including cells, tissues, and living animals. Multidimensional imaging was used to spectroscopically characterize human skin lesions, such as malignant melanoma and naevi. Moreover, two-color detection of two-photon excited fluorescence was applied to in-vivo imaging of the intact neocortex of living mice, as well as to induce neuronal microlesions by femtosecond laser burning. The presented applications demonstrate the capability of the instrument to be used in a wide range of biological and biomedical studies.
Characterization of linear viscoelastic anti-vibration rubber mounts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lodhia, B.B.; Esat, I.I.
1996-11-01
The aim of this paper is to identify the dynamic characteristics that are evident in linear viscoelastic rubber mountings. The characteristics under consideration included the static and dynamic stiffnesses with the variation of amplitude and frequency of the sinusoidal excitation. Test samples of various rubber mixes were tested and compared to reflect the magnitude of dependency on composition. In the light of the results, the validity and effectiveness of a mathematical model was investigated and a suitable technique, based on the Tschoegl and Emri algorithm, was utilized to fit the model to the experimental data. The chosen model was an extension of the basic Maxwell model, based on linear spring and dashpot elements in series and parallel, called the Wiechert model. It was found that the extent to which filler and vulcanisate were present in the rubber sample had a great effect on the static stiffness characteristics, and on the storage and loss moduli. The Tschoegl and Emri algorithm was successfully utilized in modelling the frequency response of the samples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdel-Rehim, A M; Stathopoulos, Andreas; Orginos, Kostas
2014-08-01
The technique that was used to build the EigCG algorithm for sparse symmetric linear systems is extended to the nonsymmetric case using the BiCG algorithm. We show that, similarly to the symmetric case, we can build an algorithm that is capable of computing a few smallest magnitude eigenvalues and their corresponding left and right eigenvectors of a nonsymmetric matrix using only a small window of the BiCG residuals while simultaneously solving a linear system with that matrix. For a system with multiple right-hand sides, we give an algorithm that computes incrementally more eigenvalues while solving the first few systems and then uses the computed eigenvectors to deflate BiCGStab for the remaining systems. Our experiments on various test problems, including Lattice QCD, show the remarkable ability of EigBiCG to compute spectral approximations with accuracy comparable to that of the unrestarted, nonsymmetric Lanczos. Furthermore, our incremental EigBiCG followed by appropriately restarted and deflated BiCGStab provides a competitive method for systems with multiple right-hand sides.
A new look at the robust control of discrete-time Markov jump linear systems
NASA Astrophysics Data System (ADS)
Todorov, M. G.; Fragoso, M. D.
2016-03-01
In this paper, we make a foray into the role played by a set of four operators on the study of robust H2 and mixed H2/H∞ control problems for discrete-time Markov jump linear systems. These operators appear in the study of mean square stability for this class of systems. By means of new linear matrix inequality (LMI) characterisations of controllers, which include slack variables that, to some extent, separate the robustness and performance objectives, we introduce four alternative approaches to the design of controllers which are robustly stabilising and at the same time provide a guaranteed level of H2 performance. Since each operator provides a different degree of conservatism, the results are unified in the form of an iterative LMI technique for designing robust H2 controllers, whose convergence is attained in a finite number of steps. The method yields a new way of computing mixed H2/H∞ controllers, whose conservatism decreases with iteration. Two numerical examples illustrate the applicability of the proposed results for the control of a small unmanned aerial vehicle, and for an underactuated robotic arm.
NASA Astrophysics Data System (ADS)
Wu, Cheng; Zhen Yu, Jian
2018-03-01
Linear regression techniques are widely used in atmospheric science, but they are often improperly applied due to lack of consideration or inappropriate handling of measurement uncertainty. In this work, numerical experiments are performed to evaluate the performance of five linear regression techniques, significantly extending previous works by Chu and Saylor. The five techniques are ordinary least squares (OLS), Deming regression (DR), orthogonal distance regression (ODR), weighted ODR (WODR), and York regression (YR). We first introduce a new data generation scheme that employs the Mersenne twister (MT) pseudorandom number generator. The numerical simulations are also improved by (a) refining the parameterization of nonlinear measurement uncertainties, (b) inclusion of a linear measurement uncertainty, and (c) inclusion of WODR for comparison. Results show that DR, WODR and YR produce an accurate slope, but the intercept by WODR and YR is overestimated and the degree of bias is more pronounced with a low R2 XY dataset. The importance of a proper weighting parameter λ in DR is investigated by sensitivity tests, and it is found that an improper λ in DR can lead to a bias in both the slope and intercept estimation. Because the λ calculation depends on the actual form of the measurement error, it is essential to determine the exact form of measurement error in the XY data during the measurement stage. If the a priori error in one of the variables is unknown, or the measurement error described cannot be trusted, DR, WODR and YR can provide the least biases in slope and intercept among all tested regression techniques. For these reasons, DR, WODR and YR are recommended for atmospheric studies when both X and Y data have measurement errors. An Igor Pro-based program (Scatter Plot) was developed to facilitate the implementation of error-in-variables regressions.
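Of the techniques compared, Deming regression has a simple closed form, which makes it easy to see how it avoids the slope attenuation OLS suffers when X carries error. A minimal sketch (not the authors' implementation; `lam` here denotes the ratio of the y-error variance to the x-error variance, which may differ from the paper's λ convention):

```python
import numpy as np

def deming_regression(x, y, lam=1.0):
    """Deming regression (sketch): fit y = a + b*x when both x and y
    carry measurement error.  lam is the ratio of the y-error variance
    to the x-error variance; lam = 1 gives orthogonal regression."""
    mx, my = x.mean(), y.mean()
    sxx = np.mean((x - mx) ** 2)
    syy = np.mean((y - my) ** 2)
    sxy = np.mean((x - mx) * (y - my))
    d = syy - lam * sxx
    b = (d + np.sqrt(d * d + 4.0 * lam * sxy * sxy)) / (2.0 * sxy)
    a = my - b * mx
    return a, b
```

With equal errors in X and Y, OLS would attenuate the slope by the factor var(x_true)/(var(x_true)+var(error)), while the Deming estimate remains consistent.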
Gene doctoring: a method for recombineering in laboratory and pathogenic Escherichia coli strains.
Lee, David J; Bingle, Lewis E H; Heurlier, Karin; Pallen, Mark J; Penn, Charles W; Busby, Stephen J W; Hobman, Jon L
2009-12-09
Homologous recombination mediated by the lambda-Red genes is a common method for making chromosomal modifications in Escherichia coli. Several protocols have been developed that differ in the mechanisms by which DNA, carrying regions homologous to the chromosome, is delivered into the cell. A common technique is to electroporate linear DNA fragments into cells. Alternatively, DNA fragments are generated in vivo by digestion of a donor plasmid with a nuclease that does not cleave the host genome. In both cases the lambda-Red gene products recombine homologous regions carried on the linear DNA fragments with the chromosome. We have successfully used both techniques to generate chromosomal mutations in E. coli K-12 strains. However, we have had limited success with these lambda-Red based recombination techniques in pathogenic E. coli strains, which has led us to develop an enhanced protocol for recombineering in such strains. Our goal was to develop a high-throughput recombineering system, primarily for the coupling of genes to epitope tags, which could also be used for deletion of genes in both pathogenic and K-12 E. coli strains. To that end we have designed a series of donor plasmids for use with the lambda-Red recombination system, which when cleaved in vivo by the I-SceI meganuclease generate a discrete linear DNA fragment, allowing for C-terminal tagging of chromosomal genes with a 6xHis, 3xFLAG, 4xProteinA or GFP tag or for the deletion of chromosomal regions. We have enhanced existing protocols and technologies by inclusion of a cassette conferring kanamycin resistance and, crucially, by including the sacB gene on the donor plasmid, so that all but true recombinants are counter-selected on kanamycin and sucrose containing media, thus eliminating the need for extensive screening.
This method has the added advantage of limiting the exposure of cells to the potential damaging effects of the lambda-Red system, which can lead to unwanted secondary alterations to the chromosome. We have developed a counter-selective recombineering technique for epitope tagging or for deleting genes in E. coli. We have demonstrated the versatility of the technique by modifying the chromosome of the enterohaemorrhagic O157:H7 (EHEC), uropathogenic CFT073 (UPEC), enteroaggregative O42 (EAEC) and enterotoxigenic H10407 (ETEC) E. coli strains as well as in K-12 laboratory strains.
Sim, Kok Swee; NorHisham, Syafiq
2016-11-01
A technique based on a linear Least Squares Regression (LSR) model is applied to estimate the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images. In order to test the accuracy of this technique for SNR estimation, a number of SEM images are initially corrupted with white noise. The autocorrelation functions (ACF) of the original and the corrupted SEM images are formed to serve as the reference point to estimate the SNR value of the corrupted image. The LSR technique is then compared with three existing techniques: nearest neighbourhood, first-order interpolation, and the combination of both nearest neighbourhood and first-order interpolation. The actual and the estimated SNR values of all these techniques are then calculated for comparison purposes. It is shown that the LSR technique attains the highest accuracy of the four techniques, as the absolute difference between the actual and the estimated SNR value is relatively small. SCANNING 38:771-782, 2016. © 2016 Wiley Periodicals, Inc.
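The underlying idea shared by these ACF-based estimators can be sketched as follows. This is a simplified illustration, not the authors' LSR model: white noise contributes to the autocorrelation function only at zero lag, so the noise-free ACF(0) is estimated by fitting a least-squares line to the neighbouring lags and extrapolating it back to lag 0; the gap between the measured and extrapolated zero-lag values is the noise power.

```python
import numpy as np

def estimate_snr_acf(image, fit_lags=5):
    """Estimate image SNR (returned in dB here) from its row-wise ACF.

    White noise adds a spike to the ACF only at zero lag, so the
    noise-free ACF(0) is estimated by a least-squares line fitted to
    lags 1..fit_lags and extrapolated back to lag 0.
    """
    rows = image - image.mean()
    n = rows.shape[1]
    lags = np.arange(fit_lags + 1)
    acf = np.array([np.mean(rows[:, :n - k] * rows[:, k:]) for k in lags])
    slope, intercept = np.polyfit(lags[1:], acf[1:], 1)
    signal_var = intercept              # extrapolated noise-free ACF(0)
    noise_var = acf[0] - signal_var     # the zero-lag noise spike
    return 10.0 * np.log10(signal_var / noise_var)
```

The linear extrapolation is only adequate when the ACF is close to linear over the fitted lags, which is the limitation the interpolation variants compared in the paper address in different ways.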
Mohammed, Nazmi A; Solaiman, Mohammad; Aly, Moustafa H
2014-10-10
In this work, various dispersion compensation methods are designed and evaluated to search for a cost-effective technique with remarkable dispersion compensation and a good pulse shape. The techniques consist of different chirp functions applied to a tanh fiber Bragg grating (FBG), a dispersion compensation fiber (DCF), and a DCF merged with an optimized linearly chirped tanh FBG (joint technique). The techniques are evaluated using a standard 10 Gb/s optical link over a 100 km long haul. The linear chirp function is the most appropriate choice of chirping function, with a pulse width reduction percentage (PWRP) of 75.15%, lower price, and poor pulse shape. The DCF yields an enhanced PWRP of 93.34% with a better pulse quality; however, it is the most costly of the evaluated techniques. Finally, the joint technique achieved the optimum PWRP (96.36%) among all the evaluated techniques and exhibited a remarkable pulse shape; it is less costly than the DCF, but more expensive than the chirped tanh FBG.
Modulation Transfer Function (MTF) measurement techniques for lenses and linear detector arrays
NASA Technical Reports Server (NTRS)
Schnabel, J. J., Jr.; Kaishoven, J. E., Jr.; Tom, D.
1984-01-01
The application is the determination of the Modulation Transfer Function (MTF) for linear detector arrays. The system setup requires knowledge of the MTF of the imaging lens; the procedure for this measurement with standard optical lab equipment is described. Given this information, various possible approaches to MTF measurement for linear arrays are described. The knife-edge method is then described in detail.
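The knife-edge method can be sketched numerically: differentiate the measured edge spread function (ESF) to obtain the line spread function (LSF), then normalize the magnitude of its Fourier transform. This is a minimal illustration; refinements such as edge oversampling, windowing, and division by the lens MTF (as the setup above requires) are omitted.

```python
import numpy as np

def mtf_from_edge(esf, sample_pitch=1.0):
    """Knife-edge MTF (sketch): LSF = d(ESF)/dx, MTF = |FFT(LSF)|,
    normalized at zero frequency.  sample_pitch is the pixel spacing."""
    lsf = np.gradient(esf)                      # line spread function
    mtf = np.abs(np.fft.rfft(lsf))
    mtf = mtf / mtf[0]                          # normalize at f = 0
    freqs = np.fft.rfftfreq(len(lsf), d=sample_pitch)
    return freqs, mtf
```

For a Gaussian-blurred edge the recovered MTF is itself Gaussian in frequency, decreasing monotonically from 1 at zero frequency.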
NASA Technical Reports Server (NTRS)
Gasiewski, Albin J.
1992-01-01
This technique for electronically rotating the polarization basis of an orthogonal-linear polarization radiometer is based on the measurement of the first three feedhorn Stokes parameters, along with the subsequent transformation of this measured Stokes vector into a rotated coordinate frame. The technique requires an accurate measurement of the cross-correlation between the two orthogonal feedhorn modes, for which an innovative polarized calibration load was developed. The experimental portion of this investigation consisted of a proof of concept demonstration of the technique of electronic polarization basis rotation (EPBR) using a ground based 90-GHz dual orthogonal-linear polarization radiometer. Practical calibration algorithms for ground-, aircraft-, and space-based instruments were identified and tested. The theoretical effort consisted of radiative transfer modeling using the planar-stratified numerical model described in Gasiewski and Staelin (1990).
Techniques for measurement of thoracoabdominal asynchrony
NASA Technical Reports Server (NTRS)
Prisk, G. Kim; Hammer, J.; Newth, Christopher J L.
2002-01-01
Respiratory motion measured by respiratory inductance plethysmography often deviates from the sinusoidal pattern assumed in the traditional Lissajous figure (loop) analysis used to determine thoraco-abdominal asynchrony, or phase angle phi. We investigated six different time-domain methods of measuring phi, using simulated data with sinusoidal and triangular waveforms, phase shifts of 0-135 degrees, and 10% noise. The techniques were then used on data from 11 lightly anesthetized rhesus monkeys (Macaca mulatta; 7.6 +/- 0.8 kg; 5.7 +/- 0.5 years old), instrumented with a respiratory inductive plethysmograph and subjected to increasing levels of inspiratory resistive loading ranging from 5 to 1,000 cmH2O.L(-1).sec(-1). The best results were obtained from cross-correlation and maximum linear correlation, with errors less than approximately 5 degrees from the actual phase angle in the simulated data. The worst performance was produced by the loop analysis, which in some cases was in error by more than 30 degrees. Compared to correlation, the other analysis techniques performed at an intermediate level. Maximum linear correlation and cross-correlation produced similar results on the data collected from monkeys (SD of the difference, 4.1 degrees), but all other techniques had a high SD of the difference compared to the correlation techniques. We conclude that phase angles are best measured using cross-correlation or maximum linear correlation, techniques that are independent of waveform shape and robust in the presence of noise. Copyright 2002 Wiley-Liss, Inc.
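The cross-correlation estimate of the phase angle can be sketched as follows (a minimal illustration on synthetic signals, not the authors' implementation): the lag of the cross-correlation peak between the two band signals is converted to degrees of one breathing cycle, with no assumption about waveform shape.

```python
import numpy as np

def phase_angle_xcorr(ribcage, abdomen, samples_per_cycle):
    """Thoraco-abdominal phase angle via cross-correlation (sketch).

    The lag maximizing the cross-correlation between the two signals is
    converted to degrees of one breathing cycle; positive values mean
    the abdomen signal leads the ribcage signal.
    """
    rc = ribcage - ribcage.mean()
    ab = abdomen - abdomen.mean()
    corr = np.correlate(rc, ab, mode="full")
    lag = np.argmax(corr) - (len(ab) - 1)
    return 360.0 * lag / samples_per_cycle
```

Because the estimate uses the full correlation rather than a fitted ellipse, it degrades gracefully for triangular or otherwise non-sinusoidal breathing waveforms.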
USDA-ARS?s Scientific Manuscript database
Parametric non-linear regression (PNR) techniques commonly are used to develop weed seedling emergence models. Such techniques, however, require statistical assumptions that are difficult to meet. To examine and overcome these limitations, we compared PNR with a nonparametric estimation technique. F...
Reduced order modeling of fluid/structure interaction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barone, Matthew Franklin; Kalashnikova, Irina; Segalman, Daniel Joseph
2009-11-01
This report describes work performed from October 2007 through September 2009 under the Sandia Laboratory Directed Research and Development project titled 'Reduced Order Modeling of Fluid/Structure Interaction.' This project addresses fundamental aspects of techniques for construction of predictive Reduced Order Models (ROMs). A ROM is defined as a model, derived from a sequence of high-fidelity simulations, that preserves the essential physics and predictive capability of the original simulations but at a much lower computational cost. Techniques are developed for construction of provably stable linear Galerkin projection ROMs for compressible fluid flow, including a method for enforcing boundary conditions that preserves numerical stability. A convergence proof and error estimates are given for this class of ROM, and the method is demonstrated on a series of model problems. A reduced order method, based on the method of quadratic components, for solving the von Karman nonlinear plate equations is developed and tested. This method is applied to the problem of nonlinear limit cycle oscillations encountered when the plate interacts with an adjacent supersonic flow. A stability-preserving method for coupling the linear fluid ROM with the structural dynamics model for the elastic plate is constructed and tested. Methods for constructing efficient ROMs for nonlinear fluid equations are developed and tested on a one-dimensional convection-diffusion-reaction equation. These methods are combined with a symmetrization approach to construct a ROM technique for application to the compressible Navier-Stokes equations.
Koeda, Keisuke; Chiba, Takehiro; Noda, Hironobu; Nishinari, Yutaka; Segawa, Takenori; Akiyama, Yuji; Iwaya, Takeshi; Nishizuka, Satoshi; Nitta, Hiroyuki; Otsuka, Koki; Sasaki, Akira
2016-05-01
Laparoscopy-assisted pylorus-preserving gastrectomy has been increasingly reported as a treatment for early gastric cancer located in the middle third of the stomach because of its low invasiveness and preservation of pyloric function. Advantages of a totally laparoscopic approach to distal gastrectomy, including small wound size, minimal invasiveness, and safe anastomosis, have been recently reported. Here, we introduce a new procedure for intracorporeal gastro-gastrostomy combined with totally laparoscopic pylorus-preserving gastrectomy (TLPPG). The stomach is transected after sufficient lymphadenectomy with preservation of infrapyloric vessels and vagal nerves. The proximal stomach is first transected near the Demel line, and the distal side is transected 4 to 5 cm from the pyloric ring. To create end-to-end gastro-gastrostomy, the posterior wall of the anastomosis is stapled with a linear stapler and the anterior wall is made by manual suturing intracorporeally. We retrospectively assessed the postoperative surgical outcomes via medical records. The primary endpoint in the present study is safety. Sixteen patients underwent TLPPG with intracorporeal reconstruction. All procedures were successfully performed without any intraoperative complications. The mean operative time was 275 min, with mean blood loss of 21 g. With the exception of one patient who had gastric stasis, 15 patients were discharged uneventfully between postoperative days 8 and 11. Our novel hybrid technique for totally intracorporeal end-to-end anastomosis was performed safely without mini-laparotomy. This technique requires prospective validation.
Using EIGER for Antenna Design and Analysis
NASA Technical Reports Server (NTRS)
Champagne, Nathan J.; Khayat, Michael; Kennedy, Timothy F.; Fink, Patrick W.
2007-01-01
EIGER (Electromagnetic Interactions GenERalized) is a frequency-domain electromagnetics software package that is built upon a flexible framework, designed using object-oriented techniques. The analysis methods used include moment method solutions of integral equations, finite element solutions of partial differential equations, and combinations thereof. The framework design permits new analysis techniques (boundary conditions, Green's functions, etc.) to be added to the software suite with reasonable effort. The code has been designed to execute (in serial or parallel) on a wide variety of platforms, from Intel-based PCs to Unix-based workstations. Recently, new potential integration schemes that avoid singularity extraction techniques have been added for integral equation analysis. These new integration schemes are required to facilitate the use of higher-order elements and basis functions. Higher-order elements are better able to model geometrical curvature using fewer elements than linear elements. Higher-order basis functions are beneficial for simulating structures with rapidly varying fields or currents. Results presented here will demonstrate current and future capabilities of EIGER with respect to analysis of installed antenna system performance in support of NASA's mission of exploration. Examples include antenna coupling within an enclosed environment and antenna analysis on electrically large manned space vehicles.