Sample records for non-linear programming techniques

  1. Solving deterministic non-linear programming problem using Hopfield artificial neural network and genetic programming techniques

    NASA Astrophysics Data System (ADS)

    Vasant, P.; Ganesan, T.; Elamvazuthi, I.

    2012-11-01

    Fairly reasonable results were obtained in the past for non-linear engineering problems using optimization techniques such as neural networks, genetic algorithms, and fuzzy logic independently. Increasingly, hybrid techniques are being used to solve non-linear problems to obtain better output. This paper discusses the use of a neuro-genetic hybrid technique to optimize geological structure mapping, known as seismic survey. It involves the minimization of an objective function subject to geophysical and operational constraints. In this work, the optimization was initially performed using genetic programming, followed by a hybrid neuro-genetic programming approach. Comparative studies and analysis were then carried out on the optimized results. The results indicate that the hybrid neuro-genetic technique produced better results than the stand-alone genetic programming method.
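    The two-stage pattern described above (evolutionary global search followed by a local refinement stage) can be sketched generically. The following is a minimal illustration under stated assumptions, not the authors' code: SciPy's differential evolution stands in for the genetic stage, and a gradient-based local polish stands in for the Hopfield-network stage, applied to a hypothetical smooth objective.

        import numpy as np
        from scipy.optimize import differential_evolution, minimize

        # Hypothetical non-linear objective standing in for the seismic-survey cost.
        def objective(x):
            return (x[0] - 1.0) ** 2 + 100.0 * (x[1] - x[0] ** 2) ** 2

        bounds = [(-2.0, 2.0), (-1.0, 3.0)]

        # Stage 1: evolutionary global search (plays the role of the genetic stage).
        ga = differential_evolution(objective, bounds, seed=0, maxiter=200)

        # Stage 2: local refinement seeded by the evolutionary solution (plays the
        # role of the neural stage in a neuro-genetic hybrid).
        refined = minimize(objective, ga.x, method="L-BFGS-B", bounds=bounds)

        print("evolutionary:", ga.x, ga.fun)
        print("hybrid:      ", refined.x, refined.fun)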

  2. Semilinear programming: applications and implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohan, S.

    Semilinear programming is a method of solving optimization problems with linear constraints where the non-negativity restrictions on the variables are dropped and the objective function coefficients can take on different values depending on whether the variable is positive or negative. The simplex method for linear programming is modified in this thesis to solve general semilinear and piecewise linear programs efficiently without having to transform them into equivalent standard linear programs. Several models in widely different areas of optimization, such as production smoothing, facility location, goal programming and L1 estimation, are presented first to demonstrate the compact formulation that arises when such problems are formulated as semilinear programs. A code SLP is constructed using the semilinear programming techniques. Problems in aggregate planning and L1 estimation are solved using SLP and as equivalent linear programs using a linear programming simplex code. Comparisons of CPU times and numbers of iterations indicate SLP to be far superior. The semilinear programming techniques are extended to piecewise linear programming in the implementation of the code PLP. Piecewise linear models in aggregate planning are solved using PLP and as equivalent standard linear programs using the simple upper-bounded linear programming code SUBLP.
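    For reference, the sign-dependent costs described above can be handled by a standard LP solver through the positive/negative-part transformation that the thesis avoids; the split is exact whenever the positive and negative unit costs sum to a non-negative value. A minimal sketch with made-up data:

        import numpy as np
        from scipy.optimize import linprog

        # Illustrative data: each variable costs c_pos per unit when positive and
        # c_neg per unit of magnitude when negative, subject to x1 + x2 = 4.
        c_pos = np.array([3.0, 2.0])
        c_neg = np.array([1.0, 5.0])   # c_pos + c_neg >= 0, so the split is exact
        A = np.array([[1.0, 1.0]])
        b = np.array([4.0])

        # Split x = xp - xn with xp, xn >= 0; objective becomes c_pos.xp + c_neg.xn.
        c = np.concatenate([c_pos, c_neg])
        A_eq = np.hstack([A, -A])
        res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * 4)
        x = res.x[:2] - res.x[2:]
        print("x =", x, "cost =", res.fun)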

  3. A non-linear programming approach to the computer-aided design of regulators using a linear-quadratic formulation

    NASA Technical Reports Server (NTRS)

    Fleming, P.

    1985-01-01

    A design technique is proposed for linear regulators in which a feedback controller of fixed structure is chosen to minimize an integral quadratic objective function subject to the satisfaction of integral quadratic constraint functions. Application of a non-linear programming algorithm to this mathematically tractable formulation results in an efficient and useful computer-aided design tool. Particular attention is paid to computational efficiency and various recommendations are made. Two design examples illustrate the flexibility of the approach and highlight the special insight afforded to the designer.
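    The formulation maps directly onto a generic NLP solver: one quadratic objective minimized over the fixed-structure controller gains, subject to quadratic inequality constraints. A hedged sketch with invented stand-in functions (J and g are not from the paper):

        import numpy as np
        from scipy.optimize import minimize

        # Hypothetical stand-ins: J(k) is the integral quadratic objective evaluated
        # for feedback gains k, and g(k) an integral quadratic constraint measure.
        Q = np.array([[2.0, 0.3], [0.3, 1.0]])

        def J(k):            # objective: quadratic in the fixed-structure gains
            return k @ Q @ k - 4.0 * k[0]

        def g(k):            # inequality constraint: quadratic measure <= 5
            return 5.0 - (k[0] ** 2 + 2.0 * k[1] ** 2)

        res = minimize(J, x0=np.zeros(2), method="SLSQP",
                       constraints=[{"type": "ineq", "fun": g}])
        print("gains:", res.x, "cost:", res.fun)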

  4. Linear Programming and Its Application to Pattern Recognition Problems

    NASA Technical Reports Server (NTRS)

    Omalley, M. J.

    1973-01-01

    Linear programming and linear-programming-like techniques as applied to pattern recognition problems are discussed. Three relatively recent research articles on such applications are summarized. The main results of each paper are described, indicating the theoretical tools needed to obtain them. A synopsis of each author's comments is presented with regard to the applicability or non-applicability of his methods to particular problems, including computational results wherever given.

  5. A study of the use of linear programming techniques to improve the performance in design optimization problems

    NASA Technical Reports Server (NTRS)

    Young, Katherine C.; Sobieszczanski-Sobieski, Jaroslaw

    1988-01-01

    This project has two objectives. The first is to determine whether linear programming techniques can improve performance, relative to the feasible directions algorithm, when handling design optimization problems with a large number of design variables and constraints. The second is to determine whether using the Kreisselmeier-Steinhauser (KS) function to replace the constraints with a single constraint will reduce the cost of the total optimization. Comparisons are made using solutions obtained with linear and non-linear methods. The results indicate that there is no cost saving in using the linear method or in using the KS function to replace constraints.
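    The KS function mentioned here aggregates many constraints g_i(x) <= 0 into a single smooth, conservative envelope; a standard-form sketch (not the report's code):

        import numpy as np

        def ks(g, rho=50.0):
            """Kreisselmeier-Steinhauser envelope of constraint values g_i <= 0.

            Returns a smooth scalar >= max(g); the overshoot is bounded by
            ln(len(g)) / rho, so larger rho tracks the max more tightly.
            """
            g = np.asarray(g, dtype=float)
            gmax = g.max()                  # shift for numerical stability
            return gmax + np.log(np.sum(np.exp(rho * (g - gmax)))) / rho

        g = [-0.3, -0.05, -1.2]             # three satisfied constraints
        print(ks(g), max(g))                # KS value sits slightly above the max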

  6. Solving Fuzzy Optimization Problem Using Hybrid LS-SA Method

    NASA Astrophysics Data System (ADS)

    Vasant, Pandian

    2011-06-01

    Fuzzy optimization has been one of the most prominent topics within the broad area of computational intelligence. It is especially relevant in the field of fuzzy non-linear programming, and its applications and practical realizations can be seen in many real-world problems. In this paper a large-scale non-linear fuzzy programming problem is solved by a hybrid optimization technique combining Line Search (LS), Simulated Annealing (SA) and Pattern Search (PS). An industrial production planning problem with a cubic objective function, 8 decision variables and 29 constraints has been solved successfully using the LS-SA-PS hybrid optimization technique. The computational results for the objective function with respect to the vagueness factor and level of satisfaction are provided in the form of 2D and 3D plots. The outcome is very promising and strongly suggests that the hybrid LS-SA-PS algorithm is efficient and productive in solving large-scale non-linear fuzzy programming problems.
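    SciPy does not ship this exact LS-SA-PS combination, but the annealing-plus-direct-search pattern can be sketched as below, with dual annealing for the SA stage and a Powell direct search standing in for pattern search; the objective is illustrative only.

        import numpy as np
        from scipy.optimize import dual_annealing, minimize

        def cost(x):               # illustrative smooth non-linear objective
            return np.sum((x - 0.7) ** 2) + 0.1 * np.sum(np.sin(5.0 * x))

        bounds = [(-2.0, 2.0)] * 4

        sa = dual_annealing(cost, bounds, seed=1)        # simulated-annealing stage
        ps = minimize(cost, sa.x, method="Powell")       # direct-search polish
        print("SA:", sa.fun, " polished:", ps.fun)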

  7. Utilizing the Zero-One Linear Programming Constraints to Draw Multiple Sets of Matched Samples from a Non-Treatment Population as Control Groups for the Quasi-Experimental Design

    ERIC Educational Resources Information Center

    Li, Yuan H.; Yang, Yu N.; Tompkins, Leroy J.; Modarresi, Shahpar

    2005-01-01

    The statistical technique, "Zero-One Linear Programming," that has successfully been used to create multiple tests with similar characteristics (e.g., item difficulties, test information and test specifications) in the area of educational measurement, was deemed to be a suitable method for creating multiple sets of matched samples to be…
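    A zero-one program for matched sampling can be reconstructed, hedged, with SciPy's MILP interface: binary assignment variables pick one control per treated unit while minimizing total covariate distance. Data and dimensions below are invented for illustration.

        import numpy as np
        from scipy.optimize import milp, LinearConstraint, Bounds

        rng = np.random.default_rng(0)
        treated = rng.normal(0.0, 1.0, (3, 2))    # 3 treated units, 2 covariates
        controls = rng.normal(0.2, 1.0, (8, 2))   # 8 candidate controls

        # Cost c[i, j]: covariate distance between treated i and control j.
        cost = np.linalg.norm(treated[:, None, :] - controls[None, :, :], axis=2).ravel()
        nT, nC = treated.shape[0], controls.shape[0]

        A_rows = []
        # Each treated unit is matched to exactly one control.
        for i in range(nT):
            row = np.zeros(nT * nC); row[i * nC:(i + 1) * nC] = 1.0; A_rows.append(row)
        # Each control is used at most once.
        for j in range(nC):
            row = np.zeros(nT * nC); row[j::nC] = 1.0; A_rows.append(row)

        constraints = LinearConstraint(np.array(A_rows),
                                       lb=[1.0] * nT + [0.0] * nC,
                                       ub=[1.0] * nT + [1.0] * nC)
        res = milp(c=cost, constraints=constraints,
                   integrality=np.ones(nT * nC), bounds=Bounds(0, 1))
        print("assignment:\n", res.x.reshape(nT, nC).round())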

  8. Solid oxide fuel cell simulation and design optimization with numerical adjoint techniques

    NASA Astrophysics Data System (ADS)

    Elliott, Louie C.

    This dissertation reports on the application of numerical optimization techniques as applied to fuel cell simulation and design. Due to the "multi-physics" inherent in a fuel cell, which results in a highly coupled and non-linear behavior, an experimental program to analyze and improve the performance of fuel cells is extremely difficult. This program applies new optimization techniques with computational methods from the field of aerospace engineering to the fuel cell design problem. After an overview of fuel cell history, importance, and classification, a mathematical model of solid oxide fuel cells (SOFC) is presented. The governing equations are discretized and solved with computational fluid dynamics (CFD) techniques including unstructured meshes, non-linear solution methods, numerical derivatives with complex variables, and sensitivity analysis with adjoint methods. Following the validation of the fuel cell model in 2-D and 3-D, the results of the sensitivity analysis are presented. The sensitivity derivative for a cost function with respect to a design variable is found with three increasingly sophisticated techniques: finite difference, direct differentiation, and adjoint. A design cycle is performed using a simple optimization method to improve the value of the implemented cost function. The results from this program could improve fuel cell performance and lessen the world's dependence on fossil fuels.
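    The "numerical derivatives with complex variables" referred to above is the complex-step method: evaluating f(x + ih) for a tiny h gives Im f(x + ih)/h as a derivative estimate free of subtractive cancellation. A generic sketch, not the dissertation's code:

        import numpy as np

        def complex_step_derivative(f, x, h=1e-30):
            """df/dx via the complex step: Im[f(x + ih)] / h, accurate to O(h^2)."""
            return np.imag(f(x + 1j * h)) / h

        # Classic test function; the estimate matches the analytic derivative
        # to machine precision even with h = 1e-30.
        f = lambda v: np.exp(v) / np.sqrt(np.sin(v) ** 3 + np.cos(v) ** 3)
        print(complex_step_derivative(f, 1.5))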

  9. Hybrid Genetic Algorithms and Line Search Method for Industrial Production Planning with Non-Linear Fitness Function

    NASA Astrophysics Data System (ADS)

    Vasant, Pandian; Barsoum, Nader

    2008-10-01

    Many engineering, science, information technology and management optimization problems can be considered as non-linear programming real-world problems where all or some of the parameters and variables involved are uncertain in nature. These can only be quantified using intelligent computational techniques such as evolutionary computation and fuzzy logic. The main objective of this research paper is to solve a non-linear fuzzy optimization problem in which the technological coefficients in the constraints are fuzzy numbers, represented by logistic membership functions, using a hybrid evolutionary optimization approach. To explore the applicability of the present study, a numerical example is considered to determine the production plan for the decision variables and the profit of the company.

  10. Spectroscopic investigations using density functional theory on 2-methoxy-4(phenyliminomethyl)phenol: A non-linear optical material

    NASA Astrophysics Data System (ADS)

    Hijas, K. M.; Madan Kumar, S.; Byrappa, K.; Geethakrishnan, T.; Jeyaram, S.; Nagalakshmi, R.

    2018-03-01

    Single crystals of 2-methoxy-4(phenyliminomethyl)phenol were grown from ethanol by the slow evaporation solution growth technique. A single-crystal X-ray diffraction experiment reveals crystallization in the orthorhombic system with the non-centrosymmetric space group C2221. Geometrical optimization by the density functional theory method was carried out using the Gaussian program and compared with experimental results. Detailed experimental and theoretical vibrational analyses were carried out and the results were correlated, showing close agreement. Thermal analyses show the material is thermally stable, with a melting point of 159 °C. Natural bond orbital analysis was carried out to explain charge transfer interactions through hydrogen bonding. The relatively small HOMO-LUMO band gap favors the non-linear optical activity of the molecule. Natural population analysis and molecular electrostatic potential calculations visualize the charge distribution in the isolated molecule. The calculated first-order molecular hyperpolarizability and a preliminary second harmonic generation test carried out using the Kurtz-Perry technique establish the 2-methoxy-4(phenyliminomethyl)phenol crystal as a good non-linear optical material. Z-scan measurements indicate reverse saturable absorption.

  11. Finite-horizon differential games for missile-target interception system using adaptive dynamic programming with input constraints

    NASA Astrophysics Data System (ADS)

    Sun, Jingliang; Liu, Chunsheng

    2018-01-01

    In this paper, the problem of intercepting a manoeuvring target within a fixed final time is posed in a non-linear constrained zero-sum differential game framework. The Nash equilibrium solution is found by solving the finite-horizon constrained differential game problem via adaptive dynamic programming technique. Besides, a suitable non-quadratic functional is utilised to encode the control constraints into a differential game problem. The single critic network with constant weights and time-varying activation functions is constructed to approximate the solution of associated time-varying Hamilton-Jacobi-Isaacs equation online. To properly satisfy the terminal constraint, an additional error term is incorporated in a novel weight-updating law such that the terminal constraint error is also minimised over time. By utilising Lyapunov's direct method, the closed-loop differential game system and the estimation weight error of the critic network are proved to be uniformly ultimately bounded. Finally, the effectiveness of the proposed method is demonstrated by using a simple non-linear system and a non-linear missile-target interception system, assuming first-order dynamics for the interceptor and target.

  12. Comparisons of linear and nonlinear pyramid schemes for signal and image processing

    NASA Astrophysics Data System (ADS)

    Morales, Aldo W.; Ko, Sung-Jea

    1997-04-01

    Linear filter banks are used extensively in image and video applications. New research results in wavelet applications for compression and de-noising are constantly appearing in the technical literature. On the other hand, non-linear filter banks are also used regularly in image pyramid algorithms. There are inherent advantages in using non-linear filters instead of linear filters when non-Gaussian processes are present in images. However, a consistent way of comparing performance criteria between these two schemes has not been fully developed yet. In this paper a recently discovered tool, sample selection probabilities, is used to compare the behavior of linear and non-linear filters. The conversion from the weights of order statistics (OS) filters to the coefficients of the impulse response is obtained through these probabilities. However, the reverse problem, the conversion from the coefficients of the impulse response to the weights of OS filters, is not yet fully understood. One of the reasons for this difficulty is the highly non-linear nature of the partitions and generating function used. In the present paper the problem is posed as an integer linear program subject to constraints obtained directly from the coefficients of the impulse response. Although the technique presented is not completely refined, it certainly appears to be promising. Some results are shown.

  13. Travel Demand Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Southworth, Frank; Garrow, Dr. Laurie

    This chapter describes the principal types of both passenger and freight demand models in use today, providing a brief history of model development supported by references to a number of popular texts on the subject, and directing the reader to papers covering some of the more recent technical developments in the area. Over the past half century a variety of methods have been used to estimate and forecast travel demands, drawing concepts from economic/utility maximization theory, transportation system optimization and spatial interaction theory, using and often combining solution techniques as varied as Box-Jenkins methods, non-linear multivariate regression, non-linear mathematical programming, and agent-based microsimulation.

  14. Menu-Driven Solver Of Linear-Programming Problems

    NASA Technical Reports Server (NTRS)

    Viterna, L. A.; Ferencz, D.

    1992-01-01

    Program assists inexperienced user in formulating linear-programming problems. A Linear Program Solver (ALPS) computer program is full-featured LP analysis program. Solves plain linear-programming problems as well as more-complicated mixed-integer and pure-integer programs. Also contains efficient technique for solution of purely binary linear-programming problems. Written entirely in IBM's APL2/PC software, Version 1.01. Packed program contains licensed material, property of IBM (copyright 1988, all rights reserved).

  15. Ranking Forestry Investments With Parametric Linear Programming

    Treesearch

    Paul A. Murphy

    1976-01-01

    Parametric linear programming is introduced as a technique for ranking forestry investments under multiple constraints; it combines the advantages of simple ranking and linear programming as capital budgeting tools.

  16. Overcoming learning barriers through knowledge management.

    PubMed

    Dror, Itiel E; Makany, Tamas; Kemp, Jonathan

    2011-02-01

    The ability to learn depends greatly on how knowledge is managed. Specifically, different techniques for note-taking utilize different cognitive processes and strategies. In this paper, we compared dyslexic and control participants using linear and non-linear note-taking. All our participants were professionals working in the banking and financial sector. We examined comprehension, accuracy, mental imagery & complexity, metacognition, and memory. We found that participants with dyslexia, when using a non-linear note-taking technique, outperformed the control group using linear note-taking and matched the performance of the control group using non-linear note-taking. These findings emphasize how different knowledge management techniques can remove some of the barriers faced by learners. Copyright © 2010 John Wiley & Sons, Ltd.

  17. Object matching using a locally affine invariant and linear programming techniques.

    PubMed

    Li, Hongsheng; Huang, Xiaolei; He, Lei

    2013-02-01

    In this paper, we introduce a new matching method based on a novel locally affine-invariant geometric constraint and linear programming techniques. To model and solve the matching problem in a linear programming formulation, all geometric constraints should be able to be exactly or approximately reformulated into a linear form. This is a major difficulty for this kind of matching algorithm. We propose a novel locally affine-invariant constraint which can be exactly linearized and requires far fewer auxiliary variables than other linear programming-based methods do. The key idea behind it is that each point in the template point set can be exactly represented by an affine combination of its neighboring points, whose weights can be solved easily by least squares. The errors of reconstructing each matched point using such weights are used to penalize the disagreement of geometric relationships between the template points and the matched points. The resulting overall objective function can be solved efficiently by linear programming techniques. Our experimental results on both rigid and nonrigid object matching show the effectiveness of the proposed algorithm.
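    The core construction, writing each template point as an affine combination of its neighbours with least-squares weights, can be sketched as follows; enforcing the sum-to-one condition via a heavily weighted extra row is one simple device, not necessarily the authors'.

        import numpy as np

        def affine_weights(p, neighbors):
            """Least-squares weights w with sum(w) = 1 and neighbors.T @ w ~ p.

            Stacking the affinity constraint as a heavily weighted extra row is a
            simple way to impose sum-to-one inside an ordinary lstsq solve.
            """
            A = np.vstack([neighbors.T, 1e6 * np.ones(len(neighbors))])
            b = np.append(p, 1e6)
            w, *_ = np.linalg.lstsq(A, b, rcond=None)
            return w

        p = np.array([0.5, 0.5])
        nbrs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
        w = affine_weights(p, nbrs)
        print(w, nbrs.T @ w)        # reconstructs p with weights summing to 1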

  18. Prediction of Scour below Flip Bucket using Soft Computing Techniques

    NASA Astrophysics Data System (ADS)

    Azamathulla, H. Md.; Ab Ghani, Aminuddin; Azazi Zakaria, Nor

    2010-05-01

    The accurate prediction of the depth of scour around hydraulic structures (trajectory spillways) has been based on experimental studies, and the equations developed are mainly empirical in nature. This paper evaluates the performance of the soft computing (intelligence) techniques, the Adaptive Neuro-Fuzzy Inference System (ANFIS) and the Genetic Expression Programming (GEP) approach, in predicting scour below a flip bucket spillway. The results are very promising, which supports the use of these intelligent techniques in the prediction of highly non-linear scour parameters.

  19. Non-destructive imaging of spinor Bose-Einstein condensates

    NASA Astrophysics Data System (ADS)

    Samson, E.; Vinit, Anshuman; Raman, Chandra

    2013-05-01

    We present a non-destructive differential imaging technique that enables the observation of the spatial distribution of the magnetization in a spinor Bose-Einstein condensate (BEC) through a Faraday rotation protocol. In our procedure, we utilize a linearly polarized, far-detuned laser beam as our imaging probe, and upon interaction with the condensate, the beam's polarization direction undergoes Faraday rotation. A differential measurement of the orthogonal polarization components of the rotated beam provides a spatial map of the net magnetization density within the BEC. The non-destructive aspect of this method allows for continuous imaging of the condensate. This imaging technique will prove useful in experimental BEC studies, such as spatially resolved magnetometry using ultracold atoms, and non-destructive imaging of non-equilibrium behavior of antiferromagnetic spinor condensates. This work was supported by the DARPA QuASAR program through a grant from ARO.

  20. Linear Programming for Vocational Education Planning. Interim Report.

    ERIC Educational Resources Information Center

    Young, Robert C.; And Others

    The purpose of the paper is to define for potential users of vocational education management information systems a quantitative analysis technique and its utilization to facilitate more effective planning of vocational education programs. Defining linear programming (LP) as a management technique used to solve complex resource allocation problems…

  1. Non-Linearity in Wide Dynamic Range CMOS Image Sensors Utilizing a Partial Charge Transfer Technique.

    PubMed

    Shafie, Suhaidi; Kawahito, Shoji; Halin, Izhal Abdul; Hasan, Wan Zuha Wan

    2009-01-01

    The partial charge transfer technique can expand the dynamic range of a CMOS image sensor by synthesizing two types of signal, namely the long and short accumulation time signals. However, the short accumulation time signal obtained from the partial transfer operation suffers from non-linearity with respect to the incident light. In this paper, an analysis of the non-linearity in the partial charge transfer technique is carried out, and the relationship between dynamic range and the non-linearity is studied. The results show that the non-linearity is caused by two factors: the current diffusion, which has an exponential relation with the potential barrier, and the initial condition of the photodiodes, which shows that the error in the high illumination region increases as the ratio of the long to the short accumulation time rises. Moreover, an increase in the saturation level of the photodiodes also increases the error in the high illumination region.

  2. A review on prognostic techniques for non-stationary and non-linear rotating systems

    NASA Astrophysics Data System (ADS)

    Kan, Man Shan; Tan, Andy C. C.; Mathew, Joseph

    2015-10-01

    The field of prognostics has attracted significant interest from the research community in recent times. Prognostics enables the prediction of failures in machines resulting in benefits to plant operators such as shorter downtimes, higher operation reliability, reduced operations and maintenance cost, and more effective maintenance and logistics planning. Prognostic systems have been successfully deployed for the monitoring of relatively simple rotating machines. However, machines and associated systems today are increasingly complex. As such, there is an urgent need to develop prognostic techniques for such complex systems operating in the real world. This review paper focuses on prognostic techniques that can be applied to rotating machinery operating under non-linear and non-stationary conditions. The general concept of these techniques, the pros and cons of applying these methods, as well as their applications in the research field are discussed. Finally, the opportunities and challenges in implementing prognostic systems and developing effective techniques for monitoring machines operating under non-stationary and non-linear conditions are also discussed.

  3. Sparse 4D TomoSAR imaging in the presence of non-linear deformation

    NASA Astrophysics Data System (ADS)

    Khwaja, Ahmed Shaharyar; Çetin, Müjdat

    2018-04-01

    In this paper, we present a sparse four-dimensional tomographic synthetic aperture radar (4D TomoSAR) imaging scheme that can estimate elevation and linear as well as non-linear seasonal deformation rates of scatterers using the interferometric phase. Unlike existing sparse processing techniques that use fixed dictionaries based on a linear deformation model, we use a variable dictionary for the non-linear deformation in the form of seasonal sinusoidal deformation, in addition to the fixed dictionary for the linear deformation. We estimate the amplitude of the sinusoidal deformation using an optimization method and create the variable dictionary using the estimated amplitude. We show preliminary results using simulated data that demonstrate the soundness of our proposed technique for sparse 4D TomoSAR imaging in the presence of non-linear deformation.

  4. Problem Based Learning Technique and Its Effect on Acquisition of Linear Programming Skills by Secondary School Students in Kenya

    ERIC Educational Resources Information Center

    Nakhanu, Shikuku Beatrice; Musasia, Amadalo Maurice

    2015-01-01

    The topic Linear Programming is included in the compulsory Kenyan secondary school mathematics curriculum at form four. The topic provides skills for determining best outcomes in a given mathematical model involving some linear relationship. This technique has found application in business, economics as well as various engineering fields. Yet many…

  5. Description of a computer program and numerical techniques for developing linear perturbation models from nonlinear systems simulations

    NASA Technical Reports Server (NTRS)

    Dieudonne, J. E.

    1978-01-01

    A numerical technique was developed which generates linear perturbation models from nonlinear aircraft vehicle simulations. The technique is very general and can be applied to simulations of any system that is described by nonlinear differential equations. The computer program used to generate these models is discussed, with emphasis placed on generation of the Jacobian matrices, calculation of the coefficients needed for solving the perturbation model, and generation of the solution of the linear differential equations. An example application of the technique to a nonlinear model of the NASA terminal configured vehicle is included.
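    Numerically generating a linear perturbation model amounts to forming Jacobian matrices of the nonlinear state equations about a reference point; a small central-difference sketch (Python, not the original program) using a toy pendulum model:

        import numpy as np

        def jacobian(f, x0, eps=1e-6):
            """Central-difference Jacobian of f at x0, built column by column."""
            n = len(x0)
            J = np.zeros((len(f(x0)), n))
            for j in range(n):
                dx = np.zeros(n); dx[j] = eps
                J[:, j] = (f(x0 + dx) - f(x0 - dx)) / (2.0 * eps)
            return J

        # Toy nonlinear state equations xdot = f(x); the perturbation model about
        # the trim point x0 is then xdot ~ A (x - x0).
        f = lambda x: np.array([x[1], -np.sin(x[0]) - 0.1 * x[1]])
        x0 = np.zeros(2)
        A = jacobian(f, x0)
        print(A)        # pendulum linearization: [[0, 1], [-1, -0.1]]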

  6. Probabilistic dual heuristic programming-based adaptive critic

    NASA Astrophysics Data System (ADS)

    Herzallah, Randa

    2010-02-01

    Adaptive critic (AC) methods have common roots as generalisations of dynamic programming for neural reinforcement learning approaches. Since they approximate the dynamic programming solutions, they are potentially suitable for learning in noisy, non-linear and non-stationary environments. In this study, a novel probabilistic dual heuristic programming (DHP)-based AC controller is proposed. Distinct to current approaches, the proposed probabilistic (DHP) AC method takes uncertainties of forward model and inverse controller into consideration. Therefore, it is suitable for deterministic and stochastic control problems characterised by functional uncertainty. Theoretical development of the proposed method is validated by analytically evaluating the correct value of the cost function which satisfies the Bellman equation in a linear quadratic control problem. The target value of the probabilistic critic network is then calculated and shown to be equal to the analytically derived correct value. Full derivation of the Riccati solution for this non-standard stochastic linear quadratic control problem is also provided. Moreover, the performance of the proposed probabilistic controller is demonstrated on linear and non-linear control examples.

  7. Determination of Nonlinear Stiffness Coefficients for Finite Element Models with Application to the Random Vibration Problem

    NASA Technical Reports Server (NTRS)

    Muravyov, Alexander A.

    1999-01-01

    In this paper, a method for obtaining nonlinear stiffness coefficients in modal coordinates for geometrically nonlinear finite-element models is developed. The method requires application of a finite-element program with a geometrically non-linear static capability. The MSC/NASTRAN code is employed for this purpose. The equations of motion of a MDOF system are formulated in modal coordinates. A set of linear eigenvectors is used to approximate the solution of the nonlinear problem. The random vibration problem of the MDOF nonlinear system is then considered. The solutions obtained by application of two different versions of a stochastic linearization technique are compared with linear and exact (analytical) solutions in terms of root-mean-square (RMS) displacements and strains for a beam structure.

  8. Applications of Goal Programming to Education.

    ERIC Educational Resources Information Center

    Van Dusseldorp, Ralph A.; And Others

    This paper discusses goal programming, a computer-based operations research technique that is basically a modification and extension of linear programming. The authors first discuss the similarities and differences between goal programming and linear programming, then describe the limitations of goal programming and its possible applications for…

  9. A Comparison of Traditional Worksheet and Linear Programming Methods for Teaching Manure Application Planning.

    ERIC Educational Resources Information Center

    Schmitt, M. A.; And Others

    1994-01-01

    Compares traditional manure application planning techniques calculated to meet agronomic nutrient needs on a field-by-field basis with plans developed using computer-assisted linear programming optimization methods. Linear programming provided the most economical and environmentally sound manure application strategy. (Contains 15 references.) (MDH)

  10. LFSPMC: Linear feature selection program using the probability of misclassification

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.; Marion, B. P.

    1975-01-01

    The computational procedure and associated computer program for a linear feature selection technique are presented. The technique assumes that: a finite number, m, of classes exists; each class is described by an n-dimensional multivariate normal density function of its measurement vectors; the mean vector and covariance matrix for each density function are known (or can be estimated); and the a priori probability for each class is known. The technique produces a single linear combination of the original measurements which minimizes the one-dimensional probability of misclassification defined by the transformed densities.

  11. A Flexible CUDA LU-based Solver for Small, Batched Linear Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tumeo, Antonino; Gawande, Nitin A.; Villa, Oreste

    This chapter presents the implementation of a batched CUDA solver based on LU factorization for small linear systems. This solver may be used in applications such as reactive flow transport models, which apply the Newton-Raphson technique to linearize and iteratively solve the sets of non-linear equations that represent the reactions for tens of thousands to millions of physical locations. The implementation exploits somewhat counterintuitive GPGPU programming techniques: it assigns the solution of a matrix (representing a system) to a single CUDA thread, does not exploit shared memory, and employs dynamic memory allocation on the GPUs. These techniques enable our implementation to simultaneously solve sets of systems with over 100 equations and to employ LU decomposition with complete pivoting, providing the higher numerical accuracy required by certain applications. Other currently available solutions for batched linear solvers are limited by size and only support partial pivoting, although they may be faster in certain conditions. We discuss the code of our implementation and present a comparison with the other implementations, discussing the various tradeoffs in terms of performance and flexibility. This work will enable developers that need batched linear solvers to choose whichever implementation is more appropriate to the features and requirements of their applications, and even to implement dynamic switching approaches that choose the best implementation depending on the input data.
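    For illustration, here is LU factorization with complete (row and column) pivoting, the numerically stronger variant the chapter's solver employs, written in plain NumPy; in the batched GPU setting one such factorization would be assigned to each CUDA thread.

        import numpy as np

        def lu_complete_pivoting(A):
            """Gaussian elimination with complete pivoting: P @ A @ Q = L @ U."""
            A = A.astype(float).copy()
            n = A.shape[0]
            P, Q = np.eye(n), np.eye(n)
            for k in range(n - 1):
                # Pick the largest remaining entry as the pivot.
                i, j = divmod(np.argmax(np.abs(A[k:, k:])), n - k)
                i, j = i + k, j + k
                A[[k, i], :] = A[[i, k], :]; P[[k, i], :] = P[[i, k], :]
                A[:, [k, j]] = A[:, [j, k]]; Q[:, [k, j]] = Q[:, [j, k]]
                A[k + 1:, k] /= A[k, k]
                A[k + 1:, k + 1:] -= np.outer(A[k + 1:, k], A[k, k + 1:])
            L = np.tril(A, -1) + np.eye(n)
            U = np.triu(A)
            return P, Q, L, U

        M = np.array([[1e-8, 2.0], [3.0, 4.0]])
        P, Q, L, U = lu_complete_pivoting(M)
        print(np.allclose(P @ M @ Q, L @ U))   # True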

  12. Accounting for large deformations in real-time simulations of soft tissues based on reduced-order models.

    PubMed

    Niroomandi, S; Alfaro, I; Cueto, E; Chinesta, F

    2012-01-01

    Model reduction techniques have been shown to constitute a valuable tool for real-time simulation in surgical environments and other fields. However, some limitations imposed by real-time constraints have not yet been overcome. One such limitation is the severe time constraint (a resolution frequency of 500 Hz is required) that precludes the use of Newton-like schemes for solving non-linear models such as the ones usually employed for modeling biological tissues. In this work we present a technique able to deal with geometrically non-linear models, based on the use of model reduction techniques together with an efficient non-linear solver. Examples of the performance of the technique are given. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  13. Resource allocation in shared spectrum access communications for operators with diverse service requirements

    NASA Astrophysics Data System (ADS)

    Kibria, Mirza Golam; Villardi, Gabriel Porto; Ishizu, Kentaro; Kojima, Fumihide; Yano, Hiroyuki

    2016-12-01

    In this paper, we study inter-operator spectrum sharing and intra-operator resource allocation in shared spectrum access communication systems and propose efficient dynamic solutions to address both inter-operator and intra-operator resource allocation optimization problems. For inter-operator spectrum sharing, we present two competent approaches, namely subcarrier gain-based sharing and fragmentation-based sharing, which carry out fair and flexible allocation of the available shareable spectrum among the operators subject to certain well-defined sharing rules, traffic demands, and channel propagation characteristics. The subcarrier gain-based spectrum sharing scheme has been found to be more efficient in terms of achieved throughput. However, the fragmentation-based sharing is more attractive in terms of computational complexity. For intra-operator resource allocation, we consider a resource allocation problem with users' dissimilar service requirements, where the operator simultaneously supports users with delay-constrained and non-delay-constrained service requirements. This optimization problem is a mixed-integer non-linear programming problem and non-convex, which is computationally very expensive, and the complexity grows exponentially with the number of integer variables. We propose a less complex and efficient suboptimal solution based on exact linearization, linear approximation, and convexification techniques for the non-linear and/or non-convex objective functions and constraints. An extensive simulation performance analysis has been carried out that validates the efficiency of the proposed solution.

  14. Improved Evolutionary Programming with Various Crossover Techniques for Optimal Power Flow Problem

    NASA Astrophysics Data System (ADS)

    Tangpatiphan, Kritsana; Yokoyama, Akihiko

    This paper presents an Improved Evolutionary Programming (IEP) method for solving the Optimal Power Flow (OPF) problem, which is considered a non-linear, non-smooth, and multimodal optimization problem in power system operation. The total generator fuel cost is regarded as the objective function to be minimized. The proposed method is an Evolutionary Programming (EP)-based algorithm making use of various crossover techniques normally applied in Real Coded Genetic Algorithms (RCGA). The effectiveness of the proposed approach is investigated on the IEEE 30-bus system with three different types of fuel cost functions, namely the quadratic cost curve, the piecewise quadratic cost curve, and the quadratic cost curve superimposed by a sine component. These three cost curves represent the generator fuel cost functions of a simplified model and of more accurate models of a combined-cycle generating unit and a thermal unit with the valve-point loading effect, respectively. The OPF solutions by the proposed method and Pure Evolutionary Programming (PEP) are observed and compared. The simulation results indicate that IEP requires less computing time than PEP, with better solutions in some cases. Moreover, the influences of important IEP parameters on the OPF solution are described in detail.

  15. Analysis of periodically excited non-linear systems by a parametric continuation technique

    NASA Astrophysics Data System (ADS)

    Padmanabhan, C.; Singh, R.

    1995-07-01

    The dynamic behavior and frequency response of harmonically excited piecewise linear and/or non-linear systems have been the subject of several recent investigations. Most of the prior studies employed harmonic balance or Galerkin schemes, piecewise linear techniques, analog simulation and/or direct numerical integration (digital simulation). Such techniques are somewhat limited in their ability to predict all of the dynamic characteristics, including bifurcations leading to the occurrence of unstable, subharmonic, quasi-periodic and/or chaotic solutions. To overcome this problem, a parametric continuation scheme, based on the shooting method, is applied specifically to a periodically excited piecewise linear/non-linear system, in order to improve understanding as well as to obtain the complete dynamic response. Parameter regions exhibiting bifurcations to harmonic, subharmonic or quasi-periodic solutions are obtained quite efficiently and systematically. Unlike other techniques, the proposed scheme can follow period-doubling bifurcations and, with some modifications, obtain stable quasi-periodic solutions and their bifurcations. This knowledge is essential in establishing conditions for the occurrence of chaotic oscillations in any non-linear system. The method is first validated through the Duffing oscillator example, the solutions to which are also obtained by conventional one-term harmonic balance and perturbation methods. The second example deals with a clearance non-linearity problem for both harmonic and periodic excitations. Predictions from the proposed scheme match well with available analog simulation data as well as with multi-term harmonic balance results. Potential savings in computational time over direct numerical integration are demonstrated for some of the example cases. Also, this work has filled in some of the solution regimes for an impact pair which were missed previously in the literature. Finally, one main limitation associated with the proposed procedure is discussed.

  16. Determination of Tafel Constants in Nonlinear Polarization Curves.

    DTIC Science & Technology

    1987-12-01

    The presence of non-linear polarization curves resulted in difficulty in determining the Tafel constants from such plots. A FORTRAN-based program involving numerical differentiation techniques was developed to address this problem. (Master of Science in Mechanical Engineering thesis, Naval Postgraduate School, December 1987.)

  17. PREFACE: The 6th International Symposium on Measurement Techniques for Multiphase Flows

    NASA Astrophysics Data System (ADS)

    Okamoto, Koji; Murai, Yuichi

    2009-02-01

    Research on multi-phase flows is very important for industrial applications, including power stations, vehicles, engines, food processing, and so on. Also, from the environmental viewpoint, multi-phase flows need to be investigated to overcome global warming. Multi-phase flows are inherently non-linear because they involve multiple phases. The interaction between the phases plays a very interesting role in the flows, and this non-linear interaction makes multi-phase flows very difficult phenomena to understand. The International Symposium on Measurement Techniques for Multi-phase Flows (ISMTMF) is a unique symposium. The target of the symposium is to exchange state-of-the-art knowledge on measurement techniques for non-linear multi-phase flows. Measurement technique is the key technology for understanding non-linear phenomena. The ISMTMF began in 1995 in Nanjing, China. The symposium has been held continuously every two or three years. ISMTMF-2008, the 6th symposium in the series, was held in Okinawa, Japan on 15-17 December 2008. Okinawa has a long history as the Ryukyus Kingdom. China and Japan have had cultural and economic exchanges through Okinawa for more than 1000 years. Please enjoy Okinawa and experience its history to enhance our international communication. The present symposium was attended by 124 participants; the program included 107 contributions, with 5 plenary lectures, 2 keynote lectures, and 100 regular oral paper presentations. The topics include, besides the ordinary measurement techniques for multiphase flows, acoustic and electric sensors, bubbles and microbubbles, computed tomography, gas-liquid interface, laser-imaging and PIV, oil/coal/drop and spray, solid and powder, spectral and multi-physics. This volume includes the papers presented at ISMTMF-2008. In addition to this volume, ten selected papers will be published in a special issue of Measurement Science and Technology. We would like to express special thanks to all the participants and the contributors to the symposium, and also to the supporting organizations: The Japanese Society for Multiphase Flow, The Chinese Society for Measurement, National Natural Science Foundation of China, The Chinese Academy of Science, and University of the Ryukyus, Okinawa, Japan. Koji Okamoto Chair of 6th ISMTMF and proceedings editor The University of Tokyo, Japan Yuichi Murai Proceedings co-editor Hokkaido University, Japan

  18. An efficient method for generalized linear multiplicative programming problem with multiplicative constraints.

    PubMed

    Zhao, Yingfeng; Liu, Sanyang

    2016-01-01

    We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation programming problem, which is equivalent to a linear program, is proposed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving some linear relaxation programming problems. Global convergence has been proved, and results on some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.

  19. Polynomial elimination theory and non-linear stability analysis for the Euler equations

    NASA Technical Reports Server (NTRS)

    Kennon, S. R.; Dulikravich, G. S.; Jespersen, D. C.

    1986-01-01

    Numerical methods are presented that exploit the polynomial properties of discretizations of the Euler equations. It is noted that most finite difference or finite volume discretizations of the steady-state Euler equations produce a polynomial system of equations to be solved. These equations are solved using classical polynomial elimination theory, with some innovative modifications. This paper also presents some preliminary results of a new non-linear stability analysis technique. This technique is applicable to determining the stability of polynomial iterative schemes. Results are presented for applying the elimination technique to a one-dimensional test case. For this test case, the exact solution is computed in three iterations. The non-linear stability analysis is applied to determine the optimal time step for solving Burgers' equation using the MacCormack scheme. The estimated optimal time step is very close to the time step that arises from a linear stability analysis.

  20. Enhanced algorithms for stochastic programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishna, Alamuru S.

    1993-09-01

    In this dissertation, we present some of the recent advances made in solving two-stage stochastic linear programming problems of large size and complexity. Decomposition and sampling are two fundamental components of techniques to solve stochastic optimization problems. We describe improvements to the current techniques in both these areas. We studied different ways of using importance sampling techniques in the context of stochastic programming by varying the choice of approximation functions used in this method. We have concluded that approximating the recourse function by a computationally inexpensive piecewise-linear function is highly efficient. This reduces the problem from finding the mean of a computationally expensive function to finding that of a computationally inexpensive one. We then implemented various variance reduction techniques to estimate the mean of a piecewise-linear function. This method achieved similar variance reductions in orders of magnitude less time than applying variance-reduction techniques directly to the given problem. In solving a stochastic linear program, the expected value problem is usually solved before a stochastic solution, both as a starting point and to speed up the algorithm by making use of the information obtained from the solution of the expected value problem. We have devised a new decomposition scheme to improve the convergence of this algorithm.
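    The variance-reduction step, estimating the mean of a cheap piecewise-linear surrogate of the recourse function, can be illustrated with antithetic variates, one standard such technique; the surrogate and distribution below are invented.

        import numpy as np

        # Cheap piecewise-linear surrogate of a recourse function Q(xi).
        def surrogate(xi):
            return np.maximum(1.5 * xi - 2.0, np.maximum(-0.5 * xi + 1.0, 0.2 * xi))

        rng = np.random.default_rng(42)
        n = 50_000

        # Plain Monte Carlo estimate of E[Q(xi)] with xi ~ N(0, 1).
        xi = rng.standard_normal(n)
        plain = surrogate(xi).mean()

        # Antithetic variates: pair each draw with its negation and average.
        half = rng.standard_normal(n // 2)
        anti = 0.5 * (surrogate(half) + surrogate(-half))
        print("plain:", plain, "antithetic:", anti.mean(),
              "variance ratio:", surrogate(xi).var() / (2 * anti.var()))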

  1. A FORTRAN technique for correlating a circular environmental variable with a linear physiological variable in the sugar maple.

    PubMed

    Pease, J M; Morselli, M F

    1987-01-01

    This paper deals with a computer program adapted to a statistical method for analyzing an unlimited quantity of binary-recorded data of an independent circular variable (e.g. wind direction) and a linear variable (e.g. maple sap flow volume). Circular variables cannot be statistically analyzed with linear methods unless they have been transformed. The program calculates a critical quantity, the acrophase angle (PHI, phi_0). The technique is adapted from original mathematics [1] and is written in Fortran 77 for easier conversion between computer networks. Correlation analysis can be performed following the program, or regression, which, because of the circular nature of the independent variable, becomes periodic regression. The technique was tested on a file of approximately 4050 data pairs.
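    Periodic regression replaces the circular predictor with its cosine and sine, and the acrophase angle falls out of the fitted coefficients; a small synthetic-data sketch in Python rather than Fortran 77:

        import numpy as np

        rng = np.random.default_rng(3)
        theta = rng.uniform(0.0, 2.0 * np.pi, 200)   # wind direction (radians)
        flow = 5.0 + 2.0 * np.cos(theta - 1.0) + rng.normal(0.0, 0.3, 200)

        # Periodic regression: flow ~ a + b*cos(theta) + c*sin(theta).
        X = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
        a, b, c = np.linalg.lstsq(X, flow, rcond=None)[0]

        amplitude = np.hypot(b, c)
        acrophase = np.arctan2(c, b)     # angle of peak response (phi_0 ~ 1.0 here)
        print(f"mesor={a:.2f} amplitude={amplitude:.2f} acrophase={acrophase:.2f} rad")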

  2. Making a Difference in Science Education: The Impact of Undergraduate Research Programs

    PubMed Central

    Eagan, M. Kevin; Hurtado, Sylvia; Chang, Mitchell J.; Garcia, Gina A.; Herrera, Felisha A.; Garibay, Juan C.

    2014-01-01

    To increase the numbers of underrepresented racial minority students in science, technology, engineering, and mathematics (STEM), federal and private agencies have allocated significant funding to undergraduate research programs, which have been shown to strengthen students' intentions of enrolling in graduate or professional school. Analyzing a longitudinal sample of 4,152 aspiring STEM majors who completed the 2004 Freshman Survey and 2008 College Senior Survey, this study utilizes multinomial hierarchical generalized linear modeling (HGLM) and propensity score matching techniques to examine how participation in undergraduate research affects STEM students' intentions to enroll in STEM and non-STEM graduate and professional programs. Findings indicate that participation in an undergraduate research program significantly improved students' probability of indicating plans to enroll in a STEM graduate program. PMID:25190821

  3. Non-linear homogenized and heterogeneous FE models for FRCM reinforced masonry walls in diagonal compression

    NASA Astrophysics Data System (ADS)

    Bertolesi, Elisa; Milani, Gabriele; Poggi, Carlo

    2016-12-01

    Two FE modeling techniques are presented and critically discussed for the non-linear analysis of tuff masonry panels reinforced with FRCM and subjected to standard diagonal compression tests. The specimens, tested at the University of Naples (Italy), are unreinforced and FRCM-retrofitted walls. The extensive characterization of the constituent materials allowed very sophisticated numerical modeling techniques to be adopted here. In particular, the results obtained by means of a micro-modeling strategy and a homogenization approach are compared. The first modeling technique is a three-dimensional heterogeneous micro-model in which the constituent materials (bricks, joints, reinforcing mortar and reinforcing grid) are modeled separately. The second approach is based on a two-step homogenization procedure, previously developed by the authors, where the elementary cell is discretized by means of three-noded plane stress elements and non-linear interfaces. The non-linear structural analyses are performed replacing the homogenized orthotropic continuum with a rigid element and non-linear spring assemblage (RBSM). All the simulations presented here are performed using the commercial software Abaqus. Pros and cons of the two approaches are discussed with reference to their reliability in reproducing global force-displacement curves and crack patterns, as well as to the rather different computational effort required by the two strategies.

  4. Timber management planning with timber ram and goal programming

    Treesearch

    Richard C. Field

    1978-01-01

    By using goal programming to enhance the linear programming of Timber RAM, multiple decision criteria were incorporated in the timber management planning of a National Forest in the southeastern United States. Combining linear and goal programming capitalizes on the advantages of the two techniques and produces operationally feasible solutions. This enhancement may...

  5. New exact solutions of the Tzitzéica-type equations in non-linear optics using the expa function method

    NASA Astrophysics Data System (ADS)

    Hosseini, K.; Ayati, Z.; Ansari, R.

    2018-04-01

    One specific class of non-linear evolution equations, known as the Tzitzéica-type equations, has received great attention from a group of researchers involved in non-linear science. In this article, new exact solutions of the Tzitzéica-type equations arising in non-linear optics, including the Tzitzéica, Dodd-Bullough-Mikhailov and Tzitzéica-Dodd-Bullough equations, are obtained using the expa function method. The integration technique actually suggests a useful and reliable method to extract new exact solutions of a wide range of non-linear evolution equations.

  6. Regularization with numerical extrapolation for finite and UV-divergent multi-loop integrals

    NASA Astrophysics Data System (ADS)

    de Doncker, E.; Yuasa, F.; Kato, K.; Ishikawa, T.; Kapenga, J.; Olagbemi, O.

    2018-03-01

    We give numerical integration results for Feynman loop diagrams such as those covered by Laporta (2000) and by Baikov and Chetyrkin (2010), and which may give rise to loop integrals with UV singularities. We explore automatic adaptive integration using multivariate techniques from the PARINT package for multivariate integration, as well as iterated integration with programs from the QUADPACK package, and a trapezoidal method based on a double exponential transformation. PARINT is layered over MPI (Message Passing Interface), and incorporates advanced parallel/distributed techniques including load balancing among processes that may be distributed over a cluster or a network/grid of nodes. Results are included for 2-loop vertex and box diagrams and for sets of 2-, 3- and 4-loop self-energy diagrams with or without UV terms. Numerical regularization of integrals with singular terms is achieved by linear and non-linear extrapolation methods.
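    Linear extrapolation of the kind used here can be illustrated by plain Richardson extrapolation in a discretization (or cutoff) parameter; a generic sketch, not the PARINT/QUADPACK code:

        import numpy as np

        def richardson(seq):
            """Richardson extrapolation of a sequence f(h), f(h/2), f(h/4), ...

            Assumes the error expands in integer powers of h; each sweep of the
            tableau cancels the leading term. Returns the tableau's last column.
            """
            T = [list(seq)]
            for k in range(1, len(seq)):
                prev = T[-1]
                T.append([(2 ** k * prev[i + 1] - prev[i]) / (2 ** k - 1)
                          for i in range(len(prev) - 1)])
            return [row[-1] for row in T]

        # Toy "regularized integral": I(h) = I0 + a*h + b*h^2 with I0 = pi.
        I = lambda h: np.pi + 0.8 * h + 0.3 * h * h
        seq = [I(0.5 / 2 ** m) for m in range(4)]
        print(richardson(seq)[-1], np.pi)   # extrapolated value converges to pi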

  7. End State: The Fallacy of Modern Military Planning

    DTIC Science & Technology

    2017-04-06

    operational planning for non-linear, complex scenarios requires application of non-linear, advanced planning techniques such as design methodology ...cannot be approached in a linear, mechanistic manner by a universal planning methodology. Theater/global campaign plans and theater strategies offer no...strategic environments, and instead prescribes a universal linear methodology that pays no mind to strategic complexity. This universal application

  8. An intuitionistic fuzzy multi-objective non-linear programming model for sustainable irrigation water allocation under the combination of dry and wet conditions

    NASA Astrophysics Data System (ADS)

    Li, Mo; Fu, Qiang; Singh, Vijay P.; Ma, Mingwei; Liu, Xiao

    2017-12-01

    Water scarcity causes conflicts among natural resources, society and economy and reinforces the need for optimal allocation of irrigation water resources in a sustainable way. Uncertainties caused by natural conditions and human activities make optimal allocation more complex. An intuitionistic fuzzy multi-objective non-linear programming (IFMONLP) model for irrigation water allocation under the combination of dry and wet conditions is developed to help decision makers mitigate water scarcity. The model is capable of quantitatively solving multiple problems including crop yield increase, blue water saving, and water supply cost reduction to obtain a balanced water allocation scheme using a multi-objective non-linear programming technique. Moreover, it can deal with uncertainty as well as hesitation based on the introduction of intuitionistic fuzzy numbers. Consideration of the combination of dry and wet conditions for water availability and precipitation makes it possible to gain insights into the various irrigation water allocations, and joint probabilities based on copula functions provide decision makers an average standard for irrigation. A case study on optimally allocating both surface water and groundwater to different growth periods of rice in different subareas in Heping irrigation area, Qing'an County, northeast China shows the potential and applicability of the developed model. Results show that the crop yield increase target especially in tillering and elongation stages is a prevailing concern when more water is available, and trading schemes can mitigate water supply cost and save water with an increased grain output. Results also reveal that the water allocation schemes are sensitive to the variation of water availability and precipitation with uncertain characteristics. The IFMONLP model is applicable for most irrigation areas with limited water supplies to determine irrigation water strategies under a fuzzy environment.

  9. Designing overall stoichiometric conversions and intervening metabolic reactions

    DOE PAGES

    Chowdhury, Anupam; Maranas, Costas D.

    2015-11-04

    Existing computational tools for de novo metabolic pathway assembly, either based on mixed integer linear programming techniques or graph-search applications, generally only find linear pathways connecting the source to the target metabolite. The overall stoichiometry of conversion along with alternate co-reactant (or co-product) combinations is not part of the pathway design. Therefore, global carbon and energy efficiency is in essence fixed with no opportunities to identify more efficient routes for recycling carbon flux closer to the thermodynamic limit. Here, we introduce a two-stage computational procedure that both identifies the optimum overall stoichiometry (i.e., optStoic) and selects for (non-)native reactions (i.e., minRxn/minFlux) that maximize carbon, energy or price efficiency while satisfying thermodynamic feasibility requirements. Implementation for recent pathway design studies identified non-intuitive designs with improved efficiencies. Specifically, multiple alternatives for non-oxidative glycolysis are generated and non-intuitive ways of co-utilizing carbon dioxide with methanol are revealed for the production of C2+ metabolites with higher carbon efficiency.

  10. A Unique Technique to get Kaprekar Iteration in Linear Programming Problem

    NASA Astrophysics Data System (ADS)

    Sumathi, P.; Preethy, V.

    2018-04-01

    This paper explores the curious number popularly known as the Kaprekar constant, and Kaprekar numbers. A large number of courses and different classroom capacities, with differences in study periods, make the assignment between classrooms and courses complicated. The minimum and maximum values of the number of iterations needed to reach the Kaprekar constant for four-digit numbers are obtained through linear programming techniques.
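    Kaprekar's routine itself is short: sort the four digits in descending and ascending order, subtract, and repeat; every four-digit number with at least two distinct digits reaches 6174. A sketch of counting the iterations (plain enumeration, not the paper's linear programming formulation):

        def kaprekar_iterations(n, target=6174):
            """Number of Kaprekar steps from a 4-digit n (not all digits equal)."""
            count = 0
            while n != target:
                digits = f"{n:04d}"                 # keep leading zeros
                hi = int("".join(sorted(digits, reverse=True)))
                lo = int("".join(sorted(digits)))
                n = hi - lo
                count += 1
            return count

        print(kaprekar_iterations(3524))   # 3 steps: 3087 -> 8352 -> 6174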

  11. Evaluating forest management policies by parametric linear programing

    Treesearch

    Daniel I. Navon; Richard J. McConnen

    1967-01-01

    An analytical and simulation technique, parametric linear programming explores alternative conditions and devises an optimal management plan for each condition. Its application in solving policy-decision problems in the management of forest lands is illustrated with an example.

  12. Linear and Non-Linear Thermal Lens Signal of the Fifth C-H Vibrational Overtone of Naphthalene in Liquid Solutions of Hexane

    NASA Astrophysics Data System (ADS)

    Manzanares, Carlos; Diaz, Marlon; Barton, Ann; Nyaupane, Parashu R.

    2017-06-01

    The thermal lens technique is applied to vibrational overtone spectroscopy of solutions of naphthalene in n-hexane. The pump and probe thermal lens technique is found to be very sensitive for detecting samples of low concentration (ppm) in transparent solvents. In this experiment two different probe lasers were used: one at 488 nm and another at 568 nm. The C-H fifth vibrational overtone spectrum of naphthalene is detected at room temperature for different concentrations. A plot of normalized integrated intensity as a function of concentration of naphthalene in solution reveals a non-linear behavior at low concentrations when using the 488 nm probe and a linear behavior over the entire range of concentrations when using the 568 nm probe. The non-linearity cannot be explained assuming solvent enhancement at low concentrations. A two-color absorption model that includes the simultaneous absorption of the pump and probe lasers could explain the enhanced magnitude and the non-linear behavior of the thermal lens signal. Other possible mechanisms are also discussed.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xiaohu; Shi, Di; Wang, Zhiwei

    Shunt FACTS devices, such as the Static Var Compensator (SVC), are capable of providing local reactive power compensation. They are widely used in the network to reduce real power loss and improve the voltage profile. This paper proposes a planning model based on mixed integer conic programming (MICP) to optimally allocate SVCs in the transmission network considering load uncertainty. The load uncertainties are represented by a number of scenarios. Reformulation and linearization techniques are utilized to transform the original non-convex model into a convex second order cone programming (SOCP) model. Numerical case studies based on the IEEE 30-bus system demonstrate the effectiveness of the proposed planning model.
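
    The second-order cone form that such reformulations reduce to can be sketched in CVXPY; the data below are toy values, not the SVC planning model:

      # Minimal SOCP: minimize c @ x + t  s.t.  ||A x - b||_2 <= t, x >= 0.
      import cvxpy as cp
      import numpy as np

      np.random.seed(0)
      A, b = np.random.randn(5, 3), np.random.randn(5)
      c = np.array([1.0, 2.0, 0.5])

      x, t = cp.Variable(3), cp.Variable()
      constraints = [cp.SOC(t, A @ x - b),    # second-order cone constraint
                     x >= 0]
      prob = cp.Problem(cp.Minimize(c @ x + t), constraints)
      prob.solve()
      print(prob.status, x.value)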

  14. Computer Program For Linear Algebra

    NASA Technical Reports Server (NTRS)

    Krogh, F. T.; Hanson, R. J.

    1987-01-01

    A collection of routines is provided for basic vector operations. The Basic Linear Algebra Subprogram (BLAS) library is a collection of FORTRAN-callable routines that employ standard techniques to perform the basic operations of numerical linear algebra.
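
    These routines are also reachable from Python through SciPy's low-level BLAS wrappers; a small sketch of the classic axpy and gemm operations on toy data:

      import numpy as np
      from scipy.linalg.blas import daxpy, dgemm

      x = np.array([1.0, 2.0, 3.0])
      y = np.array([4.0, 5.0, 6.0])

      z = daxpy(x, y, a=2.0)                  # z = 2.0*x + y (double-precision axpy)
      C = dgemm(alpha=1.0, a=np.eye(3),
                b=np.diag([1.0, 2.0, 3.0]))   # C = A @ B (general matrix multiply)
      print(z, C.diagonal())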

  15. An efficient finite element technique for sound propagation in axisymmetric hard wall ducts carrying high subsonic Mach number flows

    NASA Technical Reports Server (NTRS)

    Tag, I. A.; Lumsdaine, E.

    1978-01-01

    The general non-linear three-dimensional equation for the acoustic potential is derived by using a perturbation technique. The linearized axisymmetric equation is then solved by using a finite element algorithm based on the Galerkin formulation for a harmonic time dependence. The solution is carried out in complex number notation for the acoustic velocity potential. Linear, isoparametric, quadrilateral elements with a non-uniform distribution across the duct section are implemented. The resultant global matrix is stored in banded form and solved by using a modified Gauss elimination technique. Sound pressure levels and acoustic velocities are calculated from the element solutions in post-processing. Different duct geometries are analyzed and compared with experimental results.

  16. Construction of pore network models for Berea and Fontainebleau sandstones using non-linear programing and optimization techniques

    NASA Astrophysics Data System (ADS)

    Sharqawy, Mostafa H.

    2016-12-01

    Pore network models (PNM) of Berea and Fontainebleau sandstones were constructed using nonlinear programming (NLP) and optimization methods. The constructed PNMs are considered a digital representation of the rock samples, based on matching the macroscopic properties of the porous media, and were used to conduct fluid transport simulations including single- and two-phase flow. The PNMs consisted of cubic networks with randomly distributed pore and throat sizes and various connectivity levels. The networks were optimized such that the upper and lower bounds of the pore sizes are determined using the capillary tube bundle model and the Nelder-Mead method instead of guessing them, which reduces the optimization computational time significantly. An open-source PNM framework was employed to conduct transport and percolation simulations such as invasion percolation and Darcian flow. The PNM was subsequently used to compute the macroscopic properties: porosity, absolute permeability, specific surface area, breakthrough capillary pressure, and the primary drainage curve. The pore networks were optimized so that the simulation results for the macroscopic properties are in excellent agreement with the experimental measurements. This study demonstrates that non-linear programming and optimization methods provide a promising approach to pore network modeling when computed tomography imaging may not be readily available.
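
    A minimal sketch of the bound-fitting idea, assuming a toy capillary-bundle forward model and hypothetical targets (not the author's code):

      # Use Nelder-Mead to choose tube-radius bounds so a capillary-bundle
      # model matches a target porosity and permeability.
      import numpy as np
      from scipy.optimize import minimize

      phi_target, k_target = 0.20, 1.0e-12     # porosity [-], permeability [m^2]

      def bundle_model(params):
          r_min, r_max = np.abs(params)        # tube-radius bounds [m]
          radii = np.linspace(r_min, r_max, 50)
          phi = np.pi * np.sum(radii**2) * 4e3 # toy porosity from tube areas
          k = phi * np.mean(radii**2) / 8.0    # Hagen-Poiseuille-type estimate
          return phi, k

      def mismatch(params):
          phi, k = bundle_model(params)
          return (((phi - phi_target) / phi_target)**2
                  + ((k - k_target) / k_target)**2)

      res = minimize(mismatch, x0=[1e-6, 5e-5], method="Nelder-Mead")
      print(res.x, res.fun)                    # fitted bounds for the PNM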

  17. Advanced analysis technique for the evaluation of linear alternators and linear motors

    NASA Technical Reports Server (NTRS)

    Holliday, Jeffrey C.

    1995-01-01

    A method for the mathematical analysis of linear alternator and linear motor devices and designs is described, and an example of its use is included. The technique seeks to surpass other methods of analysis by including more rigorous treatment of phenomena normally omitted or coarsely approximated such as eddy braking, non-linear material properties, and power losses generated within structures surrounding the device. The technique is broadly applicable to linear alternators and linear motors involving iron yoke structures and moving permanent magnets. The technique involves the application of Amperian current equivalents to the modeling of the moving permanent magnet components within a finite element formulation. The resulting steady state and transient mode field solutions can simultaneously account for the moving and static field sources within and around the device.

  18. A study of data analysis techniques for the multi-needle Langmuir probe

    NASA Astrophysics Data System (ADS)

    Hoang, H.; Røed, K.; Bekkeng, T. A.; Moen, J. I.; Spicher, A.; Clausen, L. B. N.; Miloch, W. J.; Trondsen, E.; Pedersen, A.

    2018-06-01

    In this paper we evaluate two data analysis techniques for the multi-needle Langmuir probe (m-NLP). The instrument uses several cylindrical Langmuir probes, which are positively biased with respect to the plasma potential in order to operate in the electron saturation region. Since the currents collected by these probes can be sampled at kilohertz rates, the instrument is capable of resolving the ionospheric plasma structure down to the meter scale. The two data analysis techniques, a linear fit and a non-linear least squares fit, are discussed in detail using data from the Investigation of Cusp Irregularities 2 sounding rocket. It is shown that each technique has pros and cons with respect to the m-NLP implementation. Even though the linear fitting technique compares well with measurements from incoherent scatter radar and other in situ instruments, instrument performance can be improved further: the m-NLP probes can be made longer and can be cleaned during operation. The non-linear least squares fitting technique would be more reliable provided that a higher number of probes is deployed.
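
    A schematic comparison of the two fits on synthetic probe data, assuming an OML-type response I = I0*sqrt(1 + V/Te) so that I^2 is linear in the bias V (illustrative only, not the instrument team's processing chain):

      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(1)
      V = np.array([2.0, 3.0, 4.0, 5.5])       # probe bias voltages [V]
      I = 1e-6 * np.sqrt(1 + V / 0.15) * (1 + 0.02 * rng.standard_normal(V.size))

      # Linear technique: fit I^2 vs V; the slope carries the density information.
      slope, intercept = np.polyfit(V, I**2, 1)

      # Non-linear technique: least-squares fit of the full expression.
      model = lambda V, I0, Te: I0 * np.sqrt(1 + V / Te)
      (I0, Te), _ = curve_fit(model, V, I, p0=(1e-6, 0.1))
      print(slope, I0, Te)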

  19. Steady-state global optimization of metabolic non-linear dynamic models through recasting into power-law canonical models

    PubMed Central

    2011-01-01

    Background Design of newly engineered microbial strains for biotechnological purposes would greatly benefit from the development of realistic mathematical models for the processes to be optimized. Such models can then be analyzed and, with the development and application of appropriate optimization techniques, one could identify the modifications that need to be made to the organism in order to achieve the desired biotechnological goal. As appropriate models to perform such an analysis are necessarily non-linear and typically non-convex, finding their global optimum is a challenging task. Canonical modeling techniques, such as Generalized Mass Action (GMA) models based on the power-law formalism, offer a possible solution to this problem because they have a mathematical structure that enables the development of specific algorithms for global optimization. Results Based on the GMA canonical representation, we have developed in previous works a highly efficient optimization algorithm and a set of related strategies for understanding the evolution of adaptive responses in cellular metabolism. Here, we explore the possibility of recasting kinetic non-linear models into an equivalent GMA model, so that global optimization on the recast GMA model can be performed. With this technique, optimization is greatly facilitated and the results are transposable to the original non-linear problem. This procedure is straightforward for a particular class of non-linear models known as Saturable and Cooperative (SC) models that extend the power-law formalism to deal with saturation and cooperativity. Conclusions Our results show that recasting non-linear kinetic models into GMA models is indeed an appropriate strategy that helps overcome some of the numerical difficulties that arise during the global optimization task. PMID:21867520

  20. A New Pattern of Getting Nasty Number in Graphical Method

    NASA Astrophysics Data System (ADS)

    Sumathi, P.; Indhumathi, N.

    2018-04-01

    This paper proposes a new technique for obtaining nasty numbers using the graphical method in linear programming, and the technique is demonstrated for various linear programming problems. Some characterisations of nasty numbers are also discussed.

  1. The Programming Language Python In Earth System Simulations

    NASA Astrophysics Data System (ADS)

    Gross, L.; Imranullah, A.; Mora, P.; Saez, E.; Smillie, J.; Wang, C.

    2004-12-01

    Mathematical models in the earth sciences are based on the solution of systems of coupled, non-linear, time-dependent partial differential equations (PDEs). The spatial and time scales vary from planetary scale and millions of years for convection problems to 100 km and 10 years for fault system simulations. Various techniques are in use to deal with the time dependency (e.g. Crank-Nicolson), with the non-linearity (e.g. Newton-Raphson) and with weakly coupled equations (e.g. non-linear Gauss-Seidel). Besides these high-level solution algorithms, discretization methods (e.g. the finite element method (FEM), the boundary element method (BEM)) are used to deal with spatial derivatives. Typically, large-scale, three-dimensional meshes are required to resolve geometrical complexity (e.g. in the case of fault systems) or features in the solution (e.g. in mantle convection simulations). The modelling environment escript allows the rapid implementation of new physics as required for the development of simulation codes in earth sciences. Its main objective is to provide a programming language in which the user can define new models and rapidly develop high-level solution algorithms. The current implementation is linked with the finite element package finley as a PDE solver. However, the design is open, and other discretization technologies such as finite differences and boundary element methods could be included. escript is implemented as an extension of the interactive programming environment python (see www.python.org). Key concepts introduced are Data objects, which hold values on nodes or elements of the finite element mesh, and linearPDE objects, which define linear partial differential equations to be solved by the underlying discretization technology. In this paper we show the basic concepts of escript and how escript is used to implement a simulation code for interacting fault systems. We show some results of large-scale, parallel simulations on an SGI Altix system. Acknowledgements: Project work is supported by the Australian Commonwealth Government through the Australian Computational Earth Systems Simulator Major National Research Facility, the Queensland State Government Smart State Research Facility Fund, The University of Queensland and SGI.

  2. A comparison of Heuristic method and Llewellyn’s rules for identification of redundant constraints

    NASA Astrophysics Data System (ADS)

    Estiningsih, Y.; Farikhin; Tjahjana, R. H.

    2018-03-01

    Modelling and solving practical optimization problems are important techniques in linear programming. Redundant constraints are considered for their effects on general linear programming problems. Identifying and removing redundant constraints avoids the calculations associated with them when solving a linear programming problem. Many methods have been proposed for identifying redundant constraints. This paper presents a comparison of a heuristic method and Llewellyn's rules for the identification of redundant constraints.
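
    A standard LP-based redundancy test (distinct from both methods compared in the paper) asks whether a constraint can ever be violated by points satisfying the others; a sketch with scipy.optimize.linprog and toy data:

      # Constraint i of A x <= b, x >= 0 is redundant if maximizing A[i] @ x
      # over the remaining constraints cannot exceed b[i].
      import numpy as np
      from scipy.optimize import linprog

      A = np.array([[1.0, 1.0],
                    [1.0, 0.0],
                    [2.0, 2.0]])
      b = np.array([4.0, 3.0, 9.0])

      def is_redundant(i):
          mask = np.arange(len(b)) != i
          res = linprog(-A[i],                 # linprog minimizes, so negate
                        A_ub=A[mask], b_ub=b[mask],
                        bounds=[(0, None)] * 2, method="highs")
          return res.status == 0 and -res.fun <= b[i] + 1e-9

      print([is_redundant(i) for i in range(len(b))])   # -> [False, False, True]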

  3. Manual on the Flight of Flexible Aircraft in Turbulence (Manuel sur le Vol des Avions Non-rigides en Milieu Turbulent)

    DTIC Science & Technology

    1991-05-01

    y = f(dx/dt) = -f(-dx/dt) ==> static non-linearity; y = f(x, sign(dx/dt)) = -f(-x, sign...) ==> hysteresis-type non-linearity.

  4. A reducing of a chaotic movement to a periodic orbit, of a micro-electro-mechanical system, by using an optimal linear control design

    NASA Astrophysics Data System (ADS)

    Chavarette, Fábio Roberto; Balthazar, José Manoel; Felix, Jorge L. P.; Rafikov, Marat

    2009-05-01

    This paper analyzes the non-linear dynamics, with chaotic behavior, of a particular micro-electro-mechanical system. We use a technique of optimal linear control to reduce the irregular (chaotic) oscillatory movement of the non-linear system to a periodic orbit. We use the mathematical model of a micro-electro-mechanical system (MEMS) proposed by Luo and Wang.

  5. Optical soliton solutions of the cubic-quintic non-linear Schrödinger's equation including an anti-cubic term

    NASA Astrophysics Data System (ADS)

    Kaplan, Melike; Hosseini, Kamyar; Samadani, Farzan; Raza, Nauman

    2018-07-01

    A wide range of problems in different fields of the applied sciences, especially non-linear optics, is described by non-linear Schrödinger's equations (NLSEs). In the present paper, a specific type of NLSE known as the cubic-quintic non-linear Schrödinger's equation including an anti-cubic term has been studied. The generalized Kudryashov method, along with a symbolic computation package, has been employed to carry out this objective. As a consequence, a series of optical soliton solutions have formally been retrieved. It is corroborated that the generalized form of the Kudryashov method is a direct, effectual, and reliable technique for dealing with various types of non-linear Schrödinger's equations.
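
    For reference, one common normalization of this model writes the cubic-quintic NLSE with an anti-cubic term as (the paper's exact coefficients may differ):

      i\,q_t + a\,q_{xx} + \left( b_1\,|q|^{-4} + b_2\,|q|^{2} + b_3\,|q|^{4} \right) q = 0,

    where the b_1 term is the anti-cubic non-linearity and the b_2 and b_3 terms are the cubic and quintic ones.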

  6. BIODEGRADATION PROBABILITY PROGRAM (BIODEG)

    EPA Science Inventory

    The Biodegradation Probability Program (BIODEG) calculates the probability that a chemical under aerobic conditions with mixed cultures of microorganisms will biodegrade rapidly or slowly. It uses fragment constants developed using multiple linear and non-linear regressions and d...

  7. Gain optimization with non-linear controls

    NASA Technical Reports Server (NTRS)

    Slater, G. L.; Kandadai, R. D.

    1984-01-01

    An algorithm has been developed for the analysis and design of controls for non-linear systems. The technical approach is to use statistical linearization to model the non-linear dynamics of a system by a quasi-Gaussian model. A covariance analysis is performed to determine the behavior of the dynamical system and a quadratic cost function. Expressions for the cost function and its derivatives are determined so that numerical optimization techniques can be applied to determine optimal feedback laws. The primary application of this paper centers on the design of controls for nominally linear systems where the controls are saturated or limited by fixed constraints. The analysis is general, however, and numerical computation requires only that the specific non-linearity be considered in the analysis.
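
    The statistical-linearization step can be made concrete for the saturation case the paper emphasizes: for a unit-slope saturation with limit a and a zero-mean Gaussian input of standard deviation sigma, the equivalent (quasi-linear) gain is the standard result erf(a/(sqrt(2)*sigma)), sketched below:

      # Equivalent gain K = E[x*sat(x)] / E[x^2] for a saturating element
      # driven by zero-mean Gaussian noise (statistical linearization).
      import math

      def equivalent_gain(a: float, sigma: float) -> float:
          return math.erf(a / (math.sqrt(2.0) * sigma))

      # The effective loop gain drops as the control saturates more often:
      for sigma in (0.1, 0.5, 1.0, 2.0):
          print(sigma, equivalent_gain(a=1.0, sigma=sigma))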

  8. Intensity Modulation Techniques for Continuous-Wave Lidar for Column CO2 Measurements

    NASA Astrophysics Data System (ADS)

    Campbell, J. F.; Lin, B.; Obland, M. D.; Kooi, S. A.; Fan, T. F.; Meadows, B.; Browell, E. V.; Erxleben, W. H.; McGregor, D.; Dobler, J. T.; Pal, S.; O'Dell, C.

    2017-12-01

    Global and regional atmospheric carbon dioxide (CO2) measurements for the NASA Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) space mission and the Atmospheric Carbon and Transport (ACT) - America project are critical for improving our understanding of global CO2 sources and sinks. Advanced Intensity-Modulated Continuous-Wave (IM-CW) lidar techniques are investigated as a means of facilitating CO2 measurements from space and airborne platforms to meet the ASCENDS and ACT-America science measurement requirements. In recent numerical, laboratory and flight experiments we have successfully used the Binary Phase Shift Keying (BPSK) and Linear Swept Frequency modulations to uniquely discriminate surface lidar returns from intermediate aerosol and cloud returns. We demonstrate the utility of BPSK to eliminate sidelobes in the range profile as a means of making Integrated Path Differential Absorption (IPDA) column CO2 measurements in the presence of optically thin clouds, thereby eliminating bias errors caused by the clouds. Furthermore, high accuracy and precision ranging to the surface as well as to the top of intermediate cloud layers, which is a requirement for the inversion of column CO2 number density measurements to column CO2 mixing ratios, has been demonstrated using new hyperfine interpolation techniques that take advantage of the periodicity of the modulation waveforms. This approach works well for both BPSK and linear swept-frequency modulation techniques and provides very high (sub-meter level) range resolution. We compare BPSK to linear swept frequency and introduce a new technique to eliminate sidelobes from linear swept frequency in situations where the SNR is high, with results that rival BPSK. We also investigate the effects of non-linear modulators, which can in some circumstances degrade the orthogonality of the waveforms, and show how to avoid this. These techniques are used in a new data processing architecture written in the C language to support the ASCENDS CarbonHawk Experiment Simulator (ACES) and ACT-America programs.

  9. Non-linear eigensolver-based alternative to traditional SCF methods

    NASA Astrophysics Data System (ADS)

    Gavin, Brendan; Polizzi, Eric

    2013-03-01

    The self-consistent iterative procedure in Density Functional Theory calculations is revisited using a new, highly efficient and robust algorithm for solving the non-linear eigenvector problem (i.e., H(X)X = EX) of the Kohn-Sham equations. This new scheme is derived from a generalization of the FEAST eigenvalue algorithm, and provides a fundamental and practical numerical solution for addressing the non-linearity of the Hamiltonian with the occupied eigenvectors. In contrast to SCF techniques, the traditional outer iterations are replaced by subspace iterations that are intrinsic to the FEAST algorithm, while the non-linearity is handled at the level of a projected reduced system which is orders of magnitude smaller than the original one. Using a series of numerical examples, it will be shown that our approach can outperform the traditional SCF mixing techniques such as Pulay-DIIS by providing a high convergence rate and by converging to the correct solution regardless of the choice of the initial guess. We also discuss a practical implementation of the technique that can be achieved effectively using the FEAST solver package. This research is supported by NSF under Grant #ECCS-0846457 and Intel Corporation.
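
    A toy fixed-point illustration of a non-linear eigenvector problem H(X)X = EX with a dense eigensolver (the model Hamiltonian and coupling are hypothetical; FEAST instead iterates on a small projected subspace):

      import numpy as np

      n, n_occ, g = 20, 3, 0.5
      H0 = np.diag(np.arange(n, dtype=float))
      H0 += 0.1 * (np.eye(n, k=1) + np.eye(n, k=-1))   # nearest-neighbor coupling

      X = np.linalg.eigh(H0)[1][:, :n_occ]       # initial occupied subspace
      for it in range(100):
          density = np.sum(X**2, axis=1)         # density from occupied states
          E, V = np.linalg.eigh(H0 + g * np.diag(density))
          X_new = V[:, :n_occ]
          if np.linalg.norm(np.abs(X_new.T @ X) - np.eye(n_occ)) < 1e-10:
              break                              # occupied subspace converged
          X = X_new
      print(it, E[:n_occ])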

  10. Synthesis of concentric circular antenna arrays using dragonfly algorithm

    NASA Astrophysics Data System (ADS)

    Babayigit, B.

    2018-05-01

    Due to the strong non-linear relationship between the array factor and the array elements, the concentric circular antenna array (CCAA) synthesis problem is challenging. Nature-inspired optimisation techniques have been playing an important role in solving array synthesis problems. The dragonfly algorithm (DA) is a novel nature-inspired optimisation technique based on the static and dynamic swarming behaviours of dragonflies in nature. This paper presents the design of CCAAs with low sidelobes using DA. The effectiveness of the proposed DA is investigated for two cases (with and without a centre element) of two three-ring CCAA designs (with 4-, 6-, 8-element or 8-, 10-, 12-element rings). The radiation pattern of each design case is obtained by finding optimal excitation weights of the array elements using DA. Simulation results show that the proposed algorithm outperforms other state-of-the-art techniques (symbiotic organisms search, biogeography-based optimisation, sequential quadratic programming, opposition-based gravitational search algorithm, cat swarm optimisation, firefly algorithm, evolutionary programming) for all design cases. DA can be a promising technique for electromagnetic problems.

  11. Optimization of Dynamic Aperture of PEP-X Baseline Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Min-Huey; /SLAC; Cai, Yunhai

    2010-08-23

    SLAC is developing a long-range plan to transfer the evolving scientific programs at SSRL from the SPEAR3 light source to a much higher performing photon source. One possibility is a storage ring design that would be housed in the 2.2-km PEP-II tunnel. The design goal of the PEP-X storage ring is to approach an optimal light source design with horizontal emittance less than 100 pm and vertical emittance of 8 pm to reach the diffraction limit of 1-Å x-rays. The low emittance design requires a lattice with strong focusing, leading to high natural chromaticity and therefore to strong sextupoles. The latter cause a reduction of the dynamic aperture. The dynamic aperture requirement for horizontal injection at the injection point is about 10 mm. In order to achieve the desired dynamic aperture, the transverse non-linearity of PEP-X is studied. The program LEGO is used to simulate the particle motion, and the frequency map technique is used to analyze the non-linear behavior. The effect of the non-linearity is minimized within the given constraints of limited space. The details and results of the dynamic aperture optimization are discussed in this paper.

  12. Trends in non-stationary signal processing techniques applied to vibration analysis of wind turbine drive train - A contemporary survey

    NASA Astrophysics Data System (ADS)

    Uma Maheswari, R.; Umamaheswari, R.

    2017-02-01

    Condition Monitoring System (CMS) substantiates potential economic benefits and enables prognostic maintenance in wind turbine-generator failure prevention. Vibration monitoring and analysis is a powerful tool in drive train CMS, which enables the early detection of impending failure/damage. In variable speed drives such as wind turbine-generator drive trains, the vibration signal acquired is non-stationary and non-linear. Traditional stationary signal processing techniques are inefficient for diagnosing machine faults under time-varying conditions. Current research in drive-train CMS focuses on developing and improving non-linear, non-stationary feature extraction and fault classification algorithms to improve fault detection/prediction sensitivity and selectivity, thereby reducing misdetection and false alarm rates. In the literature, stationary signal processing algorithms employed in vibration analysis have been reviewed at great length. In this paper, an attempt is made to review recent research advances in non-linear, non-stationary signal processing algorithms particularly suited to variable speed wind turbines.

  13. Assessing non-uniqueness: An algebraic approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vasco, Don W.

    Geophysical inverse problems are endowed with a rich mathematical structure. When discretized, most differential and integral equations of interest are algebraic (polynomial) in form. Techniques from algebraic geometry and computational algebra provide a means to address questions of existence and uniqueness for both linear and non-linear inverse problems. In a sense, the methods extend ideas which have proven fruitful in treating linear inverse problems.

  14. Testing of next-generation nonlinear calibration based non-uniformity correction techniques using SWIR devices

    NASA Astrophysics Data System (ADS)

    Lovejoy, McKenna R.; Wickert, Mark A.

    2017-05-01

    A known problem with infrared imaging devices is their non-uniformity. This non-uniformity is the result of dark current and amplifier mismatch as well as the individual photo response of the detectors. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration techniques use linear or piecewise linear models to approximate the non-uniform gain and offset characteristics as well as the nonlinear response. Piecewise linear models perform better than the one- and two-point models, but in many cases require storing an unmanageable number of correction coefficients. Most nonlinear NUC algorithms use a second order polynomial to improve performance and allow for a minimal number of stored coefficients. However, advances in technology now make higher order polynomial NUC algorithms feasible. This study comprehensively tests higher order polynomial NUC algorithms targeted at short wave infrared (SWIR) imagers. Using data collected from actual SWIR cameras, the nonlinear techniques and corresponding performance metrics are compared with current linear methods including the standard one- and two-point algorithms. Machine learning, including principal component analysis, is explored for identifying and replacing bad pixels. The data sets are analyzed and the impact of hardware implementation is discussed. Average floating point results show 30% less non-uniformity, in post-corrected data, when using a third order polynomial correction algorithm rather than a second order algorithm. To maximize overall performance, a trade-off analysis of polynomial order and coefficient precision is performed. Comprehensive testing, across multiple data sets, provides next generation model validation and performance benchmarks for higher order polynomial NUC methods.
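
    A per-pixel polynomial NUC in miniature, with synthetic flat-field frames (real pipelines add bad-pixel replacement and fixed-point coefficient quantization):

      import numpy as np

      rng = np.random.default_rng(0)
      levels = np.array([0.1, 0.3, 0.5, 0.7, 0.9])   # calibration radiances
      h, w = 4, 4                                    # tiny focal plane
      gain = 1 + 0.1 * rng.standard_normal((h, w))
      raw = (gain[None] * levels[:, None, None]**1.05
             + 0.02 * rng.standard_normal((5, h, w)))

      # Fit radiance as a 3rd-order polynomial of raw counts, pixel by pixel.
      coeffs = np.array([[np.polyfit(raw[:, i, j], levels, deg=3)
                          for j in range(w)] for i in range(h)])
      corrected = np.array([[np.polyval(coeffs[i, j], raw[2, i, j])
                             for j in range(w)] for i in range(h)])
      print(np.abs(corrected - levels[2]).max())     # residual non-uniformity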

  15. An Improved Search Approach for Solving Non-Convex Mixed-Integer Non Linear Programming Problems

    NASA Astrophysics Data System (ADS)

    Sitopu, Joni Wilson; Mawengkang, Herman; Syafitri Lubis, Riri

    2018-01-01

    The nonlinear mathematical programming problem addressed in this paper has a structure characterized by a subset of variables restricted to assume discrete values, which are linear and separable from the continuous variables. The strategy of releasing nonbasic variables from their bounds, combined with the “active constraint” method, has been developed. This strategy is used to force the appropriate non-integer basic variables to move to their neighbourhood integer points. Successful implementation of these algorithms was achieved on various test problems.

  16. A predictive modeling approach to increasing the economic effectiveness of disease management programs.

    PubMed

    Bayerstadler, Andreas; Benstetter, Franz; Heumann, Christian; Winter, Fabian

    2014-09-01

    Predictive Modeling (PM) techniques are gaining importance in the worldwide health insurance business. Modern PM methods are used for customer relationship management, risk evaluation or medical management. This article illustrates a PM approach that enables the economic potential of (cost-) effective disease management programs (DMPs) to be fully exploited by optimized candidate selection as an example of successful data-driven business management. The approach is based on a Generalized Linear Model (GLM) that is easy to apply for health insurance companies. By means of a small portfolio from an emerging country, we show that our GLM approach is stable compared to more sophisticated regression techniques in spite of the difficult data environment. Additionally, we demonstrate for this example of a setting that our model can compete with the expensive solutions offered by professional PM vendors and outperforms non-predictive standard approaches for DMP selection commonly used in the market.

  17. Comparison of lossless compression techniques for prepress color images

    NASA Astrophysics Data System (ADS)

    Van Assche, Steven; Denecker, Koen N.; Philips, Wilfried R.; Lemahieu, Ignace L.

    1998-12-01

    In the pre-press industry color images have both a high spatial and a high color resolution. Such images require a considerable amount of storage space and impose long transmission times. Data compression is desired to reduce these storage and transmission problems. Because of the high quality requirements in the pre-press industry only lossless compression is acceptable. Most existing lossless compression schemes operate on gray-scale images. In this case the color components of color images must be compressed independently. However, higher compression ratios can be achieved by exploiting inter-color redundancies. In this paper we present a comparison of three state-of-the-art lossless compression techniques which exploit such color redundancies: IEP (Inter- color Error Prediction) and a KLT-based technique, which are both linear color decorrelation techniques, and Interframe CALIC, which uses a non-linear approach to color decorrelation. It is shown that these techniques are able to exploit color redundancies and that color decorrelation can be done effectively and efficiently. The linear color decorrelators provide a considerable coding gain (about 2 bpp) on some typical prepress images. The non-linear interframe CALIC predictor does not yield better results, but the full interframe CALIC technique does.

  18. Estimating technical efficiency in the hospital sector with panel data: a comparison of parametric and non-parametric techniques.

    PubMed

    Siciliani, Luigi

    2006-01-01

    Policy makers are increasingly interested in developing performance indicators that measure hospital efficiency. These indicators may give the purchasers of health services an additional regulatory tool to contain health expenditure. Using panel data, this study compares different parametric (econometric) and non-parametric (linear programming) techniques for the measurement of a hospital's technical efficiency. This comparison was made using a sample of 17 Italian hospitals in the years 1996-9. Highest correlations are found in the efficiency scores between the non-parametric data envelopment analysis under the constant returns to scale assumption (DEA-CRS) and several parametric models. Correlation reduces markedly when using more flexible non-parametric specifications such as data envelopment analysis under the variable returns to scale assumption (DEA-VRS) and the free disposal hull (FDH) model. Correlation also generally reduces when moving from one output to two-output specifications. This analysis suggests that there is scope for developing performance indicators at hospital level using panel data, but it is important that extensive sensitivity analysis is carried out if purchasers wish to make use of these indicators in practice.

  19. ALPS: A Linear Program Solver

    NASA Technical Reports Server (NTRS)

    Ferencz, Donald C.; Viterna, Larry A.

    1991-01-01

    ALPS is a computer program which can be used to solve general linear program (optimization) problems. ALPS was designed for those who have minimal linear programming (LP) knowledge and features a menu-driven scheme to guide the user through the process of creating and solving LP formulations. Once created, the problems can be edited and stored in standard DOS ASCII files to provide portability to various word processors or even other linear programming packages. Unlike many math-oriented LP solvers, ALPS contains an LP parser that reads through the LP formulation and reports several types of errors to the user. ALPS provides a large amount of solution data which is often useful in problem solving. In addition to pure linear programs, ALPS can solve for integer, mixed integer, and binary type problems. Pure linear programs are solved with the revised simplex method. Integer or mixed integer programs are solved initially with the revised simplex, and then completed using the branch-and-bound technique. Binary programs are solved with the method of implicit enumeration. This manual describes how to use ALPS to create, edit, and solve linear programming problems. Instructions for installing ALPS on a PC compatible computer are included in the appendices along with a general introduction to linear programming. A programmers guide is also included for assistance in modifying and maintaining the program.

  20. Evolution of diffraction and self-diffraction phenomena in thin films of Gelite Bloom/Hibiscus Sabdariffa

    NASA Astrophysics Data System (ADS)

    Cano-Lara, Miroslava; Severiano-Carrillo, Israel; Trejo-Durán, Mónica; Alvarado-Méndez, Edgar

    2017-09-01

    In this work, we present a study of the non-linear optical response in thin films prepared with Gelite Bloom and extract of Hibiscus Sabdariffa. Non-linear refraction and absorption effects were studied experimentally (Z-scan technique) and numerically, by considering the transmittance as the non-linear absorption and refraction contribution. We observe large phase shifts in the far field, and diffraction due to self-phase modulation of the sample. Diffraction and self-diffraction effects were observed as a function of time. The aim of studying non-linear optical properties in thin films is to eliminate the thermal vortex effects that occur in liquids. This is desirable in applications such as non-linear phase contrast, optical limiting, optical switches, etc. Finally, we find good agreement between experimental and theoretical results.

  1. A program for calculating photonic band structures, Green's functions and transmission/reflection coefficients using a non-orthogonal FDTD method

    NASA Astrophysics Data System (ADS)

    Ward, A. J.; Pendry, J. B.

    2000-06-01

    In this paper we present an updated version of our ONYX program for calculating photonic band structures using a non-orthogonal finite difference time domain method. This new version employs the same transparent formalism as the first version with the same capabilities for calculating photonic band structures or causal Green's functions but also includes extra subroutines for the calculation of transmission and reflection coefficients. Both the electric and magnetic fields are placed onto a discrete lattice by approximating the spatial and temporal derivatives with finite differences. This results in discrete versions of Maxwell's equations which can be used to integrate the fields forwards in time. The time required for a calculation using this method scales linearly with the number of real space points used in the discretization, so the technique is ideally suited to handling systems with large and complicated unit cells.
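
    The leapfrog structure of such a scheme can be seen in a minimal 1-D orthogonal-grid FDTD loop (vacuum, normalized units; ONYX generalizes this to non-orthogonal coordinates and periodic photonic unit cells):

      import numpy as np

      n, steps, c = 200, 500, 0.5        # cells, time steps, Courant number
      Ey = np.zeros(n)
      Hz = np.zeros(n - 1)

      for t in range(steps):
          Hz += c * np.diff(Ey)                      # H update from curl E
          Ey[1:-1] += c * np.diff(Hz)                # E update from curl H
          Ey[n // 2] += np.exp(-((t - 30) / 10)**2)  # soft Gaussian source
      print(float(np.abs(Ey).max()))

    The cost per step is a few vector operations over the grid, which is why the run time scales linearly with the number of real-space points.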

  2. Integer Linear Programming in Computational Biology

    NASA Astrophysics Data System (ADS)

    Althaus, Ernst; Klau, Gunnar W.; Kohlbacher, Oliver; Lenhof, Hans-Peter; Reinert, Knut

    Computational molecular biology (bioinformatics) is a young research field that is rich in NP-hard optimization problems. The problem instances encountered are often huge and comprise thousands of variables. Since their introduction into the field of bioinformatics in 1997, integer linear programming (ILP) techniques have been successfully applied to many optimization problems. These approaches have added much momentum to development and progress in related areas. In particular, ILP-based approaches have become a standard optimization technique in bioinformatics. In this review, we present applications of ILP-based techniques developed by members and former members of Kurt Mehlhorn’s group. These techniques were introduced to bioinformatics in a series of papers and popularized by demonstration of their effectiveness and potential.

  3. Comparison of acrylamide intake from Western and guideline based diets using probabilistic techniques and linear programming.

    PubMed

    Katz, Josh M; Winter, Carl K; Buttrey, Samuel E; Fadel, James G

    2012-03-01

    Western and guideline based diets were compared to determine if dietary improvements resulting from following dietary guidelines reduce acrylamide intake. Acrylamide forms in heat treated foods and is a human neurotoxin and animal carcinogen. Acrylamide intake from the Western diet was estimated with probabilistic techniques using teenage (13-19 years) National Health and Nutrition Examination Survey (NHANES) food consumption estimates combined with FDA data on the levels of acrylamide in a large number of foods. Guideline based diets were derived from NHANES data using linear programming techniques to comport to recommendations from the Dietary Guidelines for Americans, 2005. Whereas the guideline based diets were more properly balanced and rich in consumption of fruits, vegetables, and other dietary components than the Western diets, acrylamide intake (mean±SE) was significantly greater (P<0.001) from consumption of the guideline based diets (0.508±0.003 μg/kg/day) than from consumption of the Western diets (0.441±0.003 μg/kg/day). Guideline based diets contained less acrylamide contributed by French fries and potato chips than Western diets. Overall acrylamide intake, however, was higher in guideline based diets as a result of more frequent breakfast cereal intake. This is believed to be the first example of a risk assessment that combines probabilistic techniques with linear programming and results demonstrate that linear programming techniques can be used to model specific diets for the assessment of toxicological and nutritional dietary components. Copyright © 2011 Elsevier Ltd. All rights reserved.
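
    The linear programming side in miniature: choosing servings to meet nutrient targets while minimizing acrylamide intake, with toy nutrient numbers standing in for the NHANES/FDA tables:

      import numpy as np
      from scipy.optimize import linprog

      foods = ["cereal", "fries", "fruit", "vegetables"]
      acrylamide = np.array([0.20, 0.35, 0.00, 0.01])  # ug/serving (toy)
      fiber = np.array([3.0, 2.0, 2.5, 4.0])           # g/serving (toy)
      kcal = np.array([110, 220, 60, 40])              # kcal/serving (toy)

      # Minimize acrylamide subject to fiber >= 25 g and kcal <= 2000.
      res = linprog(c=acrylamide,
                    A_ub=np.vstack([-fiber, kcal]),
                    b_ub=np.array([-25.0, 2000.0]),
                    bounds=[(0, 10)] * 4, method="highs")
      print(dict(zip(foods, np.round(res.x, 2))))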

  4. LINEAR AND NONLINEAR CORRECTIONS IN THE RHIC INTERACTION REGIONS.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    PILAT,F.; CAMERON,P.; PTITSYN,V.

    2002-06-02

    A method has been developed to measure operationally the linear and non-linear effects of the interaction region triplets; it gives access to the multipole content through the action kick by applying closed orbit bumps and analysing tune and orbit shifts. This technique has been extensively tested and used during the RHIC operations in 2001. Measurements were taken at 3 different interaction regions and for different focusing at the interaction point. Non-linear effects up to the dodecapole have been measured, as well as the effects of linear, sextupolar and octupolar corrections. An analysis package for the data processing has been developed that, through a precise fit of the experimental tune shift data (measured by a phase lock loop technique to better than 10⁻⁵ resolution), determines the multipole content of an IR triplet.

  5. Improvements in aircraft extraction programs

    NASA Technical Reports Server (NTRS)

    Balakrishnan, A. V.; Maine, R. E.

    1976-01-01

    Flight data from an F-8 Corsair and a Cessna 172 were analyzed to demonstrate specific improvements in the LRC parameter extraction computer program. The Cramer-Rao bounds were shown to provide a satisfactory relative measure of the goodness of parameter estimates. They were not used as an absolute measure due to an inherent uncertainty within a multiplicative factor, traced in turn to the uncertainty in the noise bandwidth in the statistical theory of parameter estimation. The measure was also derived on an entirely non-statistical basis, thereby also yielding an interpretation of the significance of off-diagonal terms in the dispersion matrix. The distinction between linear and non-linear coefficients was shown to be important in its implications for a recommended order of parameter iteration. Techniques for improving convergence in general were developed and tested on flight data. In particular, an easily implemented modification incorporating a gradient search was shown to improve initial estimates and thus remove a common cause of lack of convergence.

  6. Signal Detection Techniques for Diagnostic Monitoring of Space Shuttle Main Engine Turbomachinery

    NASA Technical Reports Server (NTRS)

    Coffin, Thomas; Jong, Jen-Yi

    1986-01-01

    An investigation to develop, implement, and evaluate signal analysis techniques for the detection and classification of incipient mechanical failures in turbomachinery is reviewed. A brief description of the Space Shuttle Main Engine (SSME) test/measurement program is presented. Signal analysis techniques available to describe dynamic measurement characteristics are reviewed. Time domain and spectral methods are described, and statistical classification in terms of moments is discussed. Several of these waveform analysis techniques have been implemented on a computer and applied to dynamic signals. A laboratory evaluation of the methods with respect to signal detection capability is described. A unique coherence function (the hyper-coherence) was developed through the course of this investigation, which appears promising as a diagnostic tool. This technique and several other non-linear methods of signal analysis are presented and illustrated by application. Software for application of these techniques has been installed on the signal processing system at the NASA/MSFC Systems Dynamics Laboratory.
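
    Ordinary spectral coherence, of which the paper's hyper-coherence is a non-linear generalization, can be computed with scipy.signal.coherence; the two channels below are toy stand-ins for turbopump accelerometer signals:

      import numpy as np
      from scipy.signal import coherence

      fs = 2048.0
      t = np.arange(0, 8.0, 1 / fs)
      rng = np.random.default_rng(2)
      common = np.sin(2 * np.pi * 120 * t)        # shared rotor tone
      x = common + 0.5 * rng.standard_normal(t.size)
      y = 0.8 * common + 0.5 * rng.standard_normal(t.size)

      f, Cxy = coherence(x, y, fs=fs, nperseg=1024)
      print(f[np.argmax(Cxy)])                    # peaks near 120 Hz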

  7. On the Feasibility of a Generalized Linear Program

    DTIC Science & Technology

    1989-03-01

    The feasibility of a generalized linear program is established by applying the same algorithm to a "phase-one" problem, without requiring that the initial basic feasible solution to the latter be non-degenerate.

  8. PubMed

    Trinker, Horst

    2011-10-28

    We study the distribution of triples of codewords of codes and ordered codes. Schrijver [A. Schrijver, New code upper bounds from the Terwilliger algebra and semidefinite programming, IEEE Trans. Inform. Theory 51 (8) (2005) 2859-2866] used the triple distribution of a code to establish a bound on the number of codewords based on semidefinite programming. In the first part of this work, we generalize this approach for ordered codes. In the second part, we consider linear codes and linear ordered codes and present a MacWilliams-type identity for the triple distribution of their dual code. Based on the non-negativity of this linear transform, we establish a linear programming bound and conclude with a table of parameters for which this bound yields better results than the standard linear programming bound.

  9. SUBOPT: A CAD program for suboptimal linear regulators

    NASA Technical Reports Server (NTRS)

    Fleming, P. J.

    1985-01-01

    An interactive software package which provides design solutions for both standard linear quadratic regulator (LQR) and suboptimal linear regulator problems is described. Intended for time-invariant continuous systems, the package is easily modified to include sampled-data systems. LQR designs are obtained by established techniques while the large class of suboptimal problems containing controller and/or performance index options is solved using a robust gradient minimization technique. Numerical examples demonstrate features of the package and recent developments are described.
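
    The standard LQR computation that such packages automate can be sketched with SciPy's Riccati solver (toy double-integrator plant; SUBOPT's fixed-structure suboptimal options are not shown):

      import numpy as np
      from scipy.linalg import solve_continuous_are

      A = np.array([[0.0, 1.0],
                    [0.0, 0.0]])      # double integrator
      B = np.array([[0.0],
                    [1.0]])
      Q = np.diag([10.0, 1.0])        # state weighting
      R = np.array([[1.0]])           # control weighting

      P = solve_continuous_are(A, B, Q, R)
      K = np.linalg.solve(R, B.T @ P) # optimal gain, u = -K x
      print(K, np.linalg.eigvals(A - B @ K))   # stable closed-loop poles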

  10. Crystal growth, piezoelectric, non-linear optical and mechanical properties of lithium hydrogen oxalate monohydrate single crystal

    NASA Astrophysics Data System (ADS)

    Chandran, Senthilkumar; Paulraj, Rajesh; Ramasamy, P.

    2017-05-01

    Semi-organic lithium hydrogen oxalate monohydrate non-linear optical single crystals have been grown by the slow evaporation solution growth technique at 35 °C. Single crystal X-ray diffraction study showed that the grown crystal belongs to the triclinic system with space group P1. The mechanical strength decreases with increasing load. The piezoelectric coefficient is found to be 1.41 pC/N. The nonlinear optical property was measured using the Kurtz-Perry powder technique, and the SHG efficiency was almost equal to that of KDP.

  11. A fully non-linear multi-species Fokker–Planck–Landau collision operator for simulation of fusion plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hager, Robert, E-mail: rhager@pppl.gov; Yoon, E.S., E-mail: yoone@rpi.edu; Ku, S., E-mail: sku@pppl.gov

    2016-06-15

    Fusion edge plasmas can be far from thermal equilibrium and require the use of a non-linear collision operator for accurate numerical simulations. In this article, the non-linear single-species Fokker–Planck–Landau collision operator developed by Yoon and Chang (2014) [9] is generalized to include multiple particle species. The finite volume discretization used in this work naturally yields exact conservation of mass, momentum, and energy. The implementation of this new non-linear Fokker–Planck–Landau operator in the gyrokinetic particle-in-cell codes XGC1 and XGCa is described and results of a verification study are discussed. Finally, the numerical techniques that make our non-linear collision operator viable on high-performance computing systems are described, including specialized load balancing algorithms and nested OpenMP parallelization. The collision operator's good weak and strong scaling behavior is shown.

  12. A fully non-linear multi-species Fokker–Planck–Landau collision operator for simulation of fusion plasma

    DOE PAGES

    Hager, Robert; Yoon, E. S.; Ku, S.; ...

    2016-04-04

    Fusion edge plasmas can be far from thermal equilibrium and require the use of a non-linear collision operator for accurate numerical simulations. The non-linear single-species Fokker–Planck–Landau collision operator developed by Yoon and Chang (2014) [9] is generalized to include multiple particle species. Moreover, the finite volume discretization used in this work naturally yields exact conservation of mass, momentum, and energy. The implementation of this new non-linear Fokker–Planck–Landau operator in the gyrokinetic particle-in-cell codes XGC1 and XGCa is described and results of a verification study are discussed. Finally, the numerical techniques that make our non-linear collision operator viable on high-performance computing systems are described, including specialized load balancing algorithms and nested OpenMP parallelization. The collision operator's good weak and strong scaling behavior is shown.

  13. Non-Linear Dynamics and Emergence in Laboratory Fusion Plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hnat, B.

    2011-09-22

    Turbulent behaviour of a laboratory fusion plasma system is modelled using the extended Hasegawa-Wakatani equations. The model is solved numerically using finite difference techniques. We discuss non-linear effects in such a system in the presence of micro-instabilities, specifically a drift wave instability. We explore particle dynamics in different ranges of parameters and show that the transport changes from diffusive to non-diffusive when large directional flows develop.

  14. IESIP - AN IMPROVED EXPLORATORY SEARCH TECHNIQUE FOR PURE INTEGER LINEAR PROGRAMMING PROBLEMS

    NASA Technical Reports Server (NTRS)

    Fogle, F. R.

    1994-01-01

    IESIP, an Improved Exploratory Search Technique for Pure Integer Linear Programming Problems, addresses the problem of optimizing an objective function of one or more variables subject to a set of confining functions or constraints by a method called discrete optimization or integer programming. Integer programming is based on a specific form of the general linear programming problem in which all variables in the objective function and all variables in the constraints are integers. While more difficult, integer programming is required for accuracy when modeling systems with small numbers of components such as the distribution of goods, machine scheduling, and production scheduling. IESIP establishes a new methodology for solving pure integer programming problems by utilizing a modified version of the univariate exploratory move developed by Robert Hooke and T.A. Jeeves. IESIP also takes some of its technique from the greedy procedure and the idea of unit neighborhoods. A rounding scheme uses the continuous solution found by traditional methods (simplex or other suitable technique) and creates a feasible integer starting point. The Hooke and Jeeves exploratory search is modified to accommodate integers and constraints and is then employed to determine an optimal integer solution from the feasible starting solution. The user-friendly IESIP allows for rapid solution of problems up to 10 variables in size (limited by DOS allocation). Sample problems compare IESIP solutions with the traditional branch-and-bound approach. IESIP is written in Borland's TURBO Pascal for IBM PC series computers and compatibles running DOS. Source code and an executable are provided. The main memory requirement for execution is 25K. This program is available on a 5.25 inch 360K MS DOS format diskette. IESIP was developed in 1990. IBM is a trademark of International Business Machines. TURBO Pascal is registered by Borland International.
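
    A sketch of an integer-restricted exploratory move in the Hooke and Jeeves spirit, probing +/-1 along each coordinate from a feasible integer starting point (toy objective and constraint, not the IESIP code):

      import numpy as np

      target = np.array([3.6, 2.2, 4.9])     # continuous optimum (toy)

      def feasible(x):
          return x.sum() <= 12 and np.all(x >= 0)

      def objective(x):                      # separable quadratic test problem
          return float(np.sum((x - target)**2))

      x = np.zeros(3, dtype=int)             # feasible integer starting point
      improved = True
      while improved:                        # integer exploratory search
          improved = False
          for i in range(x.size):
              for step in (+1, -1):
                  trial = x.copy()
                  trial[i] += step
                  if feasible(trial) and objective(trial) < objective(x):
                      x, improved = trial, True
      print(x, objective(x))                 # -> [4 2 5]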

  15. Computation of non-monotonic Lyapunov functions for continuous-time systems

    NASA Astrophysics Data System (ADS)

    Li, Huijuan; Liu, AnPing

    2017-09-01

    In this paper, we propose two methods to compute non-monotonic Lyapunov functions for continuous-time systems which are asymptotically stable. The first method is to solve a linear optimization problem on a compact and bounded set. The proposed linear programming based algorithm delivers a CPA1

  16. Finite element modelling of non-linear magnetic circuits using Cosmic NASTRAN

    NASA Technical Reports Server (NTRS)

    Sheerer, T. J.

    1986-01-01

    The general purpose Finite Element Program COSMIC NASTRAN currently has the ability to model magnetic circuits with constant permeabilities. An approach was developed which, through small modifications to the program, allows modelling of non-linear magnetic devices including soft magnetic materials, permanent magnets and coils. Use of the NASTRAN code resulted in output which can be used for subsequent mechanical analysis using a variation of the same computer model. Test problems were found to produce theoretically verifiable results.

  17. Non-linear Growth Models in Mplus and SAS

    PubMed Central

    Grimm, Kevin J.; Ram, Nilam

    2013-01-01

    Non-linear growth curves or growth curves that follow a specified non-linear function in time enable researchers to model complex developmental patterns with parameters that are easily interpretable. In this paper we describe how a variety of sigmoid curves can be fit using the Mplus structural modeling program and the non-linear mixed-effects modeling procedure NLMIXED in SAS. Using longitudinal achievement data collected as part of a study examining the effects of preschool instruction on academic gain we illustrate the procedures for fitting growth models of logistic, Gompertz, and Richards functions. Brief notes regarding the practical benefits, limitations, and choices faced in the fitting and estimation of such models are included. PMID:23882134
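
    A Python analogue of fitting one of these sigmoid forms (the Mplus/SAS models in the paper add random effects; this sketch fits fixed effects only to synthetic scores):

      import numpy as np
      from scipy.optimize import curve_fit

      def gompertz(t, asymptote, displacement, rate):
          return asymptote * np.exp(-displacement * np.exp(-rate * t))

      rng = np.random.default_rng(5)
      t = np.linspace(0, 8, 30)                   # measurement occasions
      y = gompertz(t, 100, 3.0, 0.6) + rng.normal(0, 2, t.size)

      params, cov = curve_fit(gompertz, t, y, p0=(90, 2.0, 0.5))
      print(params)                               # asymptote, displacement, rate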

  18. Nonlinear method for including the mass uncertainty of standards and the system measurement errors in the fitting of calibration curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pickles, W.L.; McClure, J.W.; Howell, R.H.

    1978-01-01

    A sophisticated non-linear multiparameter fitting program has been used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2% accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the chi-squared matrix or from error relaxation techniques. It has been shown that non-dispersive x-ray fluorescence analysis of 0.1 to 1 mg freeze-dried UNO₃ can have an accuracy of 0.2% in 1000 sec.
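
    Errors-in-variables fitting in the same spirit is available in scipy.odr, which treats the standard masses as measured quantities with known uncertainty rather than exact abscissae (synthetic data and response model, not the VA02A-based program):

      import numpy as np
      from scipy.odr import ODR, Model, RealData

      def response(beta, m):                 # saturating detector response
          return beta[0] * m / (1.0 + beta[1] * m)

      rng = np.random.default_rng(3)
      mass = np.linspace(0.1, 1.0, 8)        # mg; 0.2% accurate standards
      counts = response([5000.0, 0.3], mass) * (1 + 0.005 * rng.standard_normal(8))

      data = RealData(mass, counts, sx=0.002 * mass, sy=0.005 * counts)
      fit = ODR(data, Model(response), beta0=[4000.0, 0.1]).run()
      print(fit.beta, fit.sd_beta)           # parameters and error estimates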

  19. Beyond endoscopic assessment in inflammatory bowel disease: real-time histology of disease activity by non-linear multimodal imaging

    NASA Astrophysics Data System (ADS)

    Chernavskaia, Olga; Heuke, Sandro; Vieth, Michael; Friedrich, Oliver; Schürmann, Sebastian; Atreya, Raja; Stallmach, Andreas; Neurath, Markus F.; Waldner, Maximilian; Petersen, Iver; Schmitt, Michael; Bocklitz, Thomas; Popp, Jürgen

    2016-07-01

    Assessing disease activity is a prerequisite for adequate treatment of inflammatory bowel diseases (IBD) such as Crohn's disease and ulcerative colitis. In addition to endoscopic mucosal healing, histologic remission poses a promising end-point of IBD therapy. However, evaluating histological remission harbors the risk of complications due to the acquisition of biopsies and results in a delay of diagnosis because of tissue processing procedures. In this regard, non-linear multimodal imaging techniques might serve as an unparalleled technique that allows real-time evaluation of microscopic IBD activity in the endoscopy unit. In this study, tissue sections were investigated using the non-linear multimodal microscopy combination of coherent anti-Stokes Raman scattering (CARS), two-photon excited autofluorescence (TPEF) and second-harmonic generation (SHG). After the measurement, a gold-standard assessment of histological indexes was carried out based on a conventional H&E stain. Subsequently, various geometry- and intensity-related features were extracted from the multimodal images. An optimized feature set was utilized to predict histological index levels with a linear classifier. Based on the automated prediction, the time to diagnosis is decreased. Therefore, non-linear multimodal imaging may provide a real-time diagnosis of IBD activity suited to assist clinical decision making within the endoscopy unit.
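
    A sketch of the final classification step, predicting index levels from image features with a linear classifier (synthetic features; logistic regression stands in for whatever linear classifier the study used):

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(6)
      X = rng.standard_normal((120, 10))          # geometry/intensity features
      w = rng.standard_normal(10)
      y = (X @ w + 0.5 * rng.standard_normal(120) > 0).astype(int)  # index level

      clf = LogisticRegression(max_iter=1000).fit(X[:80], y[:80])
      print(clf.score(X[80:], y[80:]))            # held-out accuracy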

  20. A new analysis of the Fornberg-Whitham equation pertaining to a fractional derivative with Mittag-Leffler-type kernel

    NASA Astrophysics Data System (ADS)

    Kumar, Devendra; Singh, Jagdev; Baleanu, Dumitru

    2018-02-01

    The mathematical model of breaking of non-linear dispersive water waves with memory effect is very important in mathematical physics. In the present article, we examine a novel fractional extension of the non-linear Fornberg-Whitham equation occurring in wave breaking. We consider the most recent theory of differentiation involving the non-singular kernel based on the extended Mittag-Leffler-type function to modify the Fornberg-Whitham equation. We examine the existence of the solution of the non-linear Fornberg-Whitham equation of fractional order. Further, we show the uniqueness of the solution. We obtain the numerical solution of the new arbitrary order model of the non-linear Fornberg-Whitham equation with the aid of the Laplace decomposition technique. The numerical outcomes are displayed in the form of graphs and tables. The results indicate that the Laplace decomposition algorithm is a very user-friendly and reliable scheme for handling such type of non-linear problems of fractional order.
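
    For reference, the classical (integer-order) Fornberg-Whitham equation, whose fractional extension the paper studies, is commonly written as

      u_t - u_{xxt} + u_x = u\,u_{xxx} - u\,u_x + 3\,u_x u_{xx},

    with the fractional model replacing the time derivative by a derivative built on the Mittag-Leffler-type kernel.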

  1. Modelling formulations using gene expression programming--a comparative analysis with artificial neural networks.

    PubMed

    Colbourn, E A; Roskilly, S J; Rowe, R C; York, P

    2011-10-09

    This study has investigated the utility and potential advantages of gene expression programming (GEP)--a new development in evolutionary computing for modelling data and automatically generating equations that describe the cause-and-effect relationships in a system--for four types of pharmaceutical formulation, and compared the models with those generated by neural networks, a technique now widely used in formulation development. Both methods were capable of discovering subtle and non-linear relationships within the data, with no requirement for the user to specify the functional forms to be used. Although the neural networks rapidly developed models with higher ANOVA R² values, these were black boxes that provided little insight into the key relationships. GEP, although significantly slower at developing models, generated relatively simple equations describing the relationships that could be interpreted directly. The results indicate that GEP can be considered an effective and efficient modelling technique for formulation data. Copyright © 2011 Elsevier B.V. All rights reserved.

  2. Solving Fractional Programming Problems based on Swarm Intelligence

    NASA Astrophysics Data System (ADS)

    Raouf, Osama Abdel; Hezam, Ibrahim M.

    2014-04-01

    This paper presents a new approach to solving Fractional Programming Problems (FPPs) based on two different Swarm Intelligence (SI) algorithms: Particle Swarm Optimization and the Firefly Algorithm. The two algorithms are tested using several FPP benchmark examples and two selected industrial applications. The tests aim to demonstrate the capability of the SI algorithms to solve any type of FPP. The solution results employing the SI algorithms are compared with a number of exact and metaheuristic solution methods used for handling FPPs. Swarm Intelligence can be regarded as an effective technique for solving linear or non-linear, non-differentiable fractional objective functions, and problems with an optimal solution at a finite point and an unbounded constraint set can be solved using the proposed approach. Numerical examples are given to show the feasibility, effectiveness, and robustness of the proposed algorithm. The results obtained using the two SI algorithms revealed the superiority of the proposed technique in computational time, and notably better accuracy was observed in the solution results of the industrial application problems.
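
    To illustrate the particle swarm side of the approach, here is a minimal PSO in Python minimizing a toy fractional objective over a box; the objective, bounds, and PSO constants are invented for illustration, and the paper's benchmarks and constraint handling are not reproduced:

      import numpy as np
      rng = np.random.default_rng(0)

      def f(x):  # toy fractional objective: ratio with positive denominator on the box
          return (x[0]**2 - x[1] + 3.0) / (x[0] + x[1] + 5.0)

      n, dim, lo, hi = 30, 2, 0.0, 4.0
      x = rng.uniform(lo, hi, (n, dim))
      v = np.zeros((n, dim))
      pbest, pval = x.copy(), np.array([f(p) for p in x])
      g = pbest[pval.argmin()].copy()
      w, c1, c2 = 0.7, 1.5, 1.5                     # inertia and acceleration constants
      for _ in range(200):
          r1, r2 = rng.random((n, dim)), rng.random((n, dim))
          v = w*v + c1*r1*(pbest - x) + c2*r2*(g - x)
          x = np.clip(x + v, lo, hi)                # keep particles inside the box
          val = np.array([f(p) for p in x])
          better = val < pval
          pbest[better], pval[better] = x[better], val[better]
          g = pbest[pval.argmin()].copy()
      print(g, f(g))                                # best point and objective found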

  3. Output Tracking for Systems with Non-Hyperbolic and Near Non-Hyperbolic Internal Dynamics: Helicopter Hover Control

    NASA Technical Reports Server (NTRS)

    Devasia, Santosh

    1996-01-01

    A technique to achieve output tracking for nonminimum phase linear systems with non-hyperbolic and near non-hyperbolic internal dynamics is presented. This approach integrates stable inversion techniques, which achieve exact tracking, with approximation techniques, which modify the internal dynamics to achieve desirable performance. Such modification of the internal dynamics is used (1) to remove non-hyperbolicity, which is an obstruction to applying stable inversion techniques, and (2) to reduce the large pre-actuation time needed to apply stable inversion in near non-hyperbolic cases. The method is applied to an example helicopter hover control problem with near non-hyperbolic internal dynamics to illustrate the trade-off between exact tracking and reduction of pre-actuation time.

  4. Quantum state engineering of light with continuous-wave optical parametric oscillators.

    PubMed

    Morin, Olivier; Liu, Jianli; Huang, Kun; Barbosa, Felippe; Fabre, Claude; Laurat, Julien

    2014-05-30

    Engineering non-classical states of the electromagnetic field is a central quest for quantum optics [1,2]. Beyond their fundamental significance, such states are the resources for implementing various protocols, ranging from enhanced metrology to quantum communication and computing. A variety of devices can be used to generate non-classical states, such as single emitters, light-matter interfaces or non-linear systems [3]. We focus here on the use of a continuous-wave optical parametric oscillator [3,4]. This system is based on a non-linear χ(2) crystal inserted inside an optical cavity, and it is now well known as a very efficient source of non-classical light, such as single-mode or two-mode squeezed vacuum, depending on the crystal phase matching. Squeezed vacuum is a Gaussian state, as its quadrature distributions follow Gaussian statistics. However, it has been shown that a number of protocols require non-Gaussian states [5]. Generating such states directly is a difficult task and would require strong χ(3) non-linearities. Another procedure, probabilistic but heralded, consists in using a measurement-induced non-linearity via a conditional preparation technique operated on Gaussian states. Here, we detail this generation protocol for two non-Gaussian states, the single-photon state and a superposition of coherent states, using two differently phase-matched parametric oscillators as primary resources. This technique enables a high fidelity with the targeted state and generation of the state in a well-controlled spatiotemporal mode.

  5. Novel hybrid linear stochastic with non-linear extreme learning machine methods for forecasting monthly rainfall in a tropical climate.

    PubMed

    Zeynoddin, Mohammad; Bonakdari, Hossein; Azari, Arash; Ebtehaj, Isa; Gharabaghi, Bahram; Riahi Madavar, Hossein

    2018-09-15

    A novel hybrid approach is presented that can more accurately predict monthly rainfall in a tropical climate by integrating a linear stochastic model with a powerful non-linear extreme learning machine method. This new hybrid method was evaluated by considering four general scenarios. In the first scenario, the modeling process is initiated without preprocessing the input data, as a base case, while in the other three scenarios one-step and two-step procedures are utilized to make the model predictions more precise. The scenarios are based on combinations of stationarization techniques (i.e., differencing, seasonal and non-seasonal standardization, and spectral analysis) and normality transforms (i.e., Box-Cox, John and Draper, Yeo and Johnson, Johnson, Box-Cox-Mod, log, log standard, and Manly). In scenario 2, a one-step scenario, the stationarization methods are employed as preprocessing approaches. In scenarios 3 and 4, different combinations of normality transforms and stationarization methods are considered as preprocessing techniques. In total, 61 sub-scenarios are evaluated, resulting in 11,013 models (10,785 linear, 4 non-linear, and 224 hybrid models). The uncertainty of the linear, non-linear and hybrid models is examined by the Monte Carlo technique. The best preprocessing technique is the utilization of the Johnson normality transform followed by seasonal standardization (R² = 0.99; RMSE = 0.6; MAE = 0.38; RMSRE = 0.1; MARE = 0.06; UI = 0.03; UII = 0.05). The results of the uncertainty analysis indicated the good performance of the proposed technique (d-factor = 0.27; 95PPU = 83.57). Moreover, the results of the proposed methodology were compared with an evolutionary hybrid of the adaptive neuro-fuzzy inference system with the firefly algorithm (ANFIS-FFA), demonstrating that the new hybrid methods outperformed the ANFIS-FFA method. Copyright © 2018 Elsevier Ltd. All rights reserved.

  6. Automated design and optimization of flexible booster autopilots via linear programming, volume 1

    NASA Technical Reports Server (NTRS)

    Hauser, F. D.

    1972-01-01

    A nonlinear programming technique was developed for the automated design and optimization of autopilots for large flexible launch vehicles. This technique, which resulted in the COEBRA program, uses the iterative application of linear programming. The method deals directly with the three main requirements of booster autopilot design: (1) good response to guidance commands; (2) response to external disturbances (e.g. wind) that minimizes structural bending moment loads and trajectory dispersions; and (3) stability with specified tolerances on the vehicle and flight control system parameters. The method is applicable to very high order systems (30th order and greater per flight condition). Examples are provided that demonstrate the successful application of the algorithm to the design of autopilots for both single and multiple flight conditions.

  7. Monotonic non-linear transformations as a tool to investigate age-related effects on brain white matter integrity: A Box-Cox investigation.

    PubMed

    Morozova, Maria; Koschutnig, Karl; Klein, Elise; Wood, Guilherme

    2016-01-15

    Non-linear effects of age on white matter integrity are ubiquitous in the brain and indicate that these effects are more pronounced in certain brain regions at specific ages. Box-Cox analysis is a technique to increase the log-likelihood of linear relationships between variables by means of monotonic non-linear transformations. Here we employ Box-Cox transformations to flexibly and parsimoniously determine the degree of non-linearity of age-related effects on white matter integrity by means of model comparisons using a voxel-wise approach. Analysis of white matter integrity in a sample of adults between 20 and 89 years of age (n = 88) revealed that considerable portions of the white matter in the corpus callosum, cerebellum, pallidum, brainstem, superior occipito-frontal fascicle and optic radiation show non-linear effects of age. Global analyses revealed an increase in the average non-linearity from fractional anisotropy to radial diffusivity, axial diffusivity, and mean diffusivity. These results suggest that Box-Cox transformations are a useful and flexible tool to investigate more complex non-linear effects of age on white matter integrity, and they extend the functionality of the Box-Cox analysis in neuroimaging. Copyright © 2015 Elsevier Inc. All rights reserved.
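
    A sketch of the underlying model-comparison idea, with synthetic data (ages, FA values, and the quadratic ground truth are invented; this is not the voxel-wise pipeline of the study): transform the predictor over a grid of Box-Cox exponents and keep the exponent that maximizes the profile log-likelihood of the linear fit.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      age = rng.uniform(20, 89, 88)                               # sample size as in the study
      fa = 0.55 - 3e-5*(age - 40)**2 + rng.normal(0, 0.01, 88)    # synthetic FA values

      def loglik(lmbda):
          x = stats.boxcox(age, lmbda=lmbda)                      # monotonic transform of age
          slope, icept, r, p, se = stats.linregress(x, fa)
          s2 = (fa - (icept + slope*x)).var()                     # residual variance
          return -0.5 * len(fa) * np.log(s2)                      # log-likelihood up to constants

      lams = np.linspace(-3, 3, 61)
      print("best exponent:", max(lams, key=loglik))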

  8. Linear and nonlinear interpretation of the direct strike lightning response of the NASA F106B thunderstorm research aircraft

    NASA Technical Reports Server (NTRS)

    Rudolph, T. H.; Perala, R. A.

    1983-01-01

    The objective of the work reported here is to develop a methodology by which electromagnetic measurements of in-flight lightning strike data can be understood and extended to other aircraft. A linear and time-invariant approach based on a combination of Fourier transform and three-dimensional finite difference techniques is demonstrated. This approach can recover the lightning channel current in the absence of the aircraft for a given channel characteristic impedance and resistive loading. The model is applied to several measurements from the NASA F106B lightning research program. A non-linear three-dimensional finite difference code has also been developed to study the response of the F106B to a lightning leader attachment. This model includes three-species air chemistry and fluid continuity equations and can incorporate an experimentally based streamer formulation. Calculated responses are presented for various attachment locations and leader parameters. The results are compared qualitatively with measured in-flight data.

  9. A Study on Linear Programming Applications for the Optimization of School Lunch Menus. Summation Report.

    ERIC Educational Resources Information Center

    Findorff, Irene K.

    This document summarizes the results of a project at Tulane University that was designed to adapt, test, and evaluate a computerized information and menu planning system utilizing linear programming techniques for use in school lunch food service operations. The objectives of the menu planning were to formulate menu items into a palatable,…

  10. Analysis on nonlinear optical properties of Cd (Zn) Se quantum dots synthesized using three different stabilizing agents

    NASA Astrophysics Data System (ADS)

    J, Joy Sebastian Prakash; G, Vinitha; Ramachandran, Murugesan; Rajamanickam, Karunanithi

    2017-10-01

    Three different stabilizing agents, namely L-cysteine, thioglycolic acid and cysteamine hydrochloride, were used to synthesize Cd(Zn)Se quantum dots (QDs). The QDs were characterized using UV-vis spectroscopy, x-ray diffraction (XRD) and transmission electron microscopy (TEM). The non-linear optical properties (non-linear absorption and non-linear refraction) of the synthesized Cd(Zn)Se quantum dots were studied with the z-scan technique, using a diode-pumped continuous-wave laser system at a wavelength of 532 nm. The organically stabilized quantum dots showed optical properties similar to those of the inorganic materials reported elsewhere.

  11. Analysis of the faster-than-Nyquist optimal linear multicarrier system

    NASA Astrophysics Data System (ADS)

    Marquet, Alexandre; Siclet, Cyrille; Roque, Damien

    2017-02-01

    Faster-than-Nyquist signalization enables a better spectral efficiency at the expense of an increased computational complexity. Regarding multicarrier communications, previous work mainly relied on the study of non-linear systems exploiting coding and/or equalization techniques, with no particular optimization of the linear part of the system. In this article, we analyze the performance of the optimal linear multicarrier system when used together with non-linear receiving structures (iterative decoding and decision feedback equalization), or in a standalone fashion. We also investigate the limits of the normality assumption of the interference, used for implementing such non-linear systems. The use of this optimal linear system leads to a closed-form expression of the bit-error probability that can be used to predict the performance and help the design of coded systems. Our work also highlights the great performance/complexity trade-off offered by decision feedback equalization in a faster-than-Nyquist context.

  12. Estimating monotonic rates from biological data using local linear regression.

    PubMed

    Olito, Colin; White, Craig R; Marshall, Dustin J; Barneche, Diego R

    2017-03-01

    Accessing many fundamental questions in biology begins with empirical estimation of simple monotonic rates of underlying biological processes. Across a variety of disciplines, ranging from physiology to biogeochemistry, these rates are routinely estimated from non-linear and noisy time series data using linear regression and ad hoc manual truncation of non-linearities. Here, we introduce the R package LoLinR, a flexible toolkit to implement local linear regression techniques to objectively and reproducibly estimate monotonic biological rates from non-linear time series data, and demonstrate possible applications using metabolic rate data. LoLinR provides methods to easily and reliably estimate monotonic rates from time series data in a way that is statistically robust, facilitates reproducible research and is applicable to a wide variety of research disciplines in the biological sciences. © 2017. Published by The Company of Biologists Ltd.

  13. A linear circuit analysis program with stiff systems capability

    NASA Technical Reports Server (NTRS)

    Cook, C. H.; Bavuso, S. J.

    1973-01-01

    Several existing network analysis programs have been modified and combined to employ a variable topological approach to circuit translation. Efficient numerical integration techniques are used for transient analysis.

  14. Space Shuttle propulsion parameter estimation using optimal estimation techniques, volume 1

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The mathematical developments and their computer program implementation for the Space Shuttle propulsion parameter estimation project are summarized. The estimation approach chosen is extended Kalman filtering with a modified Bryson-Frazier smoother. Its use here is motivated by the objective of obtaining better estimates than those available from filtering alone and of eliminating the lag associated with filtering. The estimation technique uses as the dynamical process the six-degree-of-freedom equations of motion, resulting in twelve state vector elements; in addition to these, mass and solid propellant burn depth serve as the "system" state elements. The "parameter" state elements can include deviations of aerodynamic coefficients, inertia, center of gravity, atmospheric wind, etc. from reference values. Propulsion parameter state elements have been included not merely as the options just discussed but as the main parameter states to be estimated. The mathematical developments were completed for all these parameters. Since the system dynamics and measurement processes are non-linear functions of the states, the mathematical developments are taken up almost entirely by the linearization of these equations, as required by the estimation algorithms.

  15. Understanding a Normal Distribution of Data (Part 2).

    PubMed

    Maltenfort, Mitchell

    2016-02-01

    Completing the discussion of data normality, advanced techniques for analysis of non-normal data are discussed including data transformation, Generalized Linear Modeling, and bootstrapping. Relative strengths and weaknesses of each technique are helpful in choosing a strategy, but help from a statistician is usually necessary to analyze non-normal data using these methods.

  16. Non-Uniformity Correction Using Nonlinear Characteristic Performance Curves for Calibration

    NASA Astrophysics Data System (ADS)

    Lovejoy, McKenna Roberts

    Infrared imaging is an expansive field with many applications. Advances in infrared technology have led to a greater demand from both commercial and military sectors. However, a known problem with infrared imaging is its non-uniformity, which stems from the fact that each pixel in an infrared focal plane array has its own photoresponse. Many factors such as exposure time, temperature, and amplifier choice affect how the pixels respond to incoming illumination and thus impact image uniformity. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration-based techniques commonly use a linear model to approximate the non-linear response, which often leaves unacceptable levels of residual non-uniformity, and calibration often has to be repeated during use to continually correct the image. In this dissertation, alternatives to linear NUC algorithms are investigated. The goal is to determine and compare non-linear non-uniformity correction algorithms, ideally providing better NUC performance, less residual non-uniformity, and a reduced need for recalibration. New approaches to non-linear NUC such as higher-order polynomials and exponentials are considered; more specifically, a new gain equalization algorithm has been developed. The various non-linear non-uniformity correction algorithms are compared with common linear non-uniformity correction algorithms, with performance evaluated on the basis of RMS errors, residual non-uniformity, and the impact of quantization on correction. Performance is improved by identifying and replacing bad pixels prior to correction; two bad pixel identification and replacement techniques are investigated and compared. Performance is presented in the form of simulation results as well as before-and-after images taken with short-wave infrared cameras. Initial results show, using a third-order polynomial with 16-bit precision, significant improvement over the one- and two-point correction algorithms. All algorithms have been implemented in software with satisfactory results, and the third-order gain equalization non-uniformity correction algorithm has been implemented in hardware.
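
    The per-pixel polynomial calibration idea can be sketched as follows (synthetic responses and invented parameters; this is not the dissertation's gain equalization algorithm): fit, for each pixel, a third-order polynomial mapping measured response back to the known calibration illumination, then apply it frame-wide.

      import numpy as np
      rng = np.random.default_rng(2)

      L = np.linspace(0.1, 1.0, 6)                    # known calibration illumination levels
      gain  = rng.normal(1.0, 0.05, (64, 64))         # per-pixel photoresponse parameters
      curve = rng.normal(0.15, 0.03, (64, 64))
      bias  = rng.normal(0.0, 0.02, (64, 64))
      resp = gain[..., None]*L + curve[..., None]*L**2 + bias[..., None]

      coefs = np.array([np.polyfit(r, L, 3) for r in resp.reshape(-1, L.size)])
      coefs = coefs.reshape(64, 64, 4)                # cubic response-to-illumination map

      def correct(frame):
          c, x = coefs, frame                         # Horner evaluation per pixel
          return ((c[..., 0]*x + c[..., 1])*x + c[..., 2])*x + c[..., 3]

      scene = gain*0.7 + curve*0.49 + bias            # uniform scene at L = 0.7
      print(np.std(scene), np.std(correct(scene)))    # residual non-uniformity drops sharply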

  17. GLOBAL SOLUTIONS TO FOLDED CONCAVE PENALIZED NONCONVEX LEARNING

    PubMed Central

    Liu, Hongcheng; Yao, Tao; Li, Runze

    2015-01-01

    This paper is concerned with solving nonconvex learning problems with a folded concave penalty. Although their global solutions entail desirable statistical properties, optimization techniques that guarantee global optimality in a general setting have been lacking. In this paper, we show that a class of nonconvex learning problems are equivalent to general quadratic programs. This equivalence facilitates the development of mixed integer linear programming reformulations, which admit finite algorithms that find a provably global optimal solution. We refer to this reformulation-based technique as mixed integer programming-based global optimization (MIPGO). To our knowledge, this is the first global optimization scheme with a theoretical guarantee for folded concave penalized nonconvex learning with the SCAD penalty (Fan and Li, 2001) and the MCP penalty (Zhang, 2010). Numerical results indicate a significant outperformance of MIPGO over the state-of-the-art solution scheme, local linear approximation, and other alternative solution techniques in the literature in terms of solution quality. PMID:27141126

  18. The use of linear programming techniques to design optimal digital filters for pulse shaping and channel equalization

    NASA Technical Reports Server (NTRS)

    Houts, R. C.; Burlage, D. W.

    1972-01-01

    A time domain technique is developed to design finite-duration impulse response digital filters using linear programming. Two related applications of this technique in data transmission systems are considered. The first is the design of pulse shaping digital filters to generate or detect signaling waveforms transmitted over bandlimited channels that are assumed to have ideal low pass or bandpass characteristics. The second is the design of digital filters to be used as preset equalizers in cascade with channels that have known impulse response characteristics. Example designs are presented which illustrate that excellent waveforms can be generated with frequency-sampling filters and the ease with which digital transversal filters can be designed for preset equalization.
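
    The linear programming idea can be sketched for a minimax linear-phase lowpass design: sample the cosine-basis amplitude response on a frequency grid and minimize the peak deviation from the ideal response, which is a linear program. The band edges, order, and grid density below are invented, and this generic formulation is not the paper's pulse-shaping or equalizer designs:

      import numpy as np
      from scipy.optimize import linprog

      M = 10                                          # half-order; filter length 2*M + 1
      wp, ws = 0.3*np.pi, 0.45*np.pi                  # assumed band edges
      w = np.concatenate([np.linspace(0, wp, 60), np.linspace(ws, np.pi, 60)])
      D = np.concatenate([np.ones(60), np.zeros(60)]) # ideal lowpass response on the grid

      C = np.cos(np.outer(w, np.arange(M + 1)))       # amplitude A(w) = C @ a
      ones = np.ones((len(w), 1))
      A_ub = np.vstack([np.hstack([ C, -ones]),       #  (C a - D) <= t
                        np.hstack([-C, -ones])])      # -(C a - D) <= t
      b_ub = np.concatenate([D, -D])
      c = np.zeros(M + 2); c[-1] = 1.0                # minimize the peak error t

      res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)]*(M + 2))
      a = res.x[:M + 1]
      h = np.concatenate([a[M:0:-1]/2, [a[0]], a[1:]/2])   # symmetric impulse response
      print("peak ripple:", res.x[-1])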

  19. Exact and heuristic algorithms for Space Information Flow.

    PubMed

    Uwitonze, Alfred; Huang, Jiaqing; Ye, Yuanqing; Cheng, Wenqing; Li, Zongpeng

    2018-01-01

    Space Information Flow (SIF) is a promising new research area that studies network coding in geometric space, such as Euclidean space. The design of algorithms that compute optimal SIF solutions remains one of the key open problems in SIF. This work proposes the first exact SIF algorithm and a heuristic SIF algorithm that compute min-cost multicast network coding for N (N ≥ 3) given terminal nodes in 2-D Euclidean space. Furthermore, we find that the Butterfly network in Euclidean space is the second example, besides the Pentagram network, where SIF is strictly better than the Euclidean Steiner minimal tree. The exact algorithm design is based on two key techniques: Delaunay triangulation and linear programming. Delaunay triangulation helps to find practically good candidate relay nodes, after which a min-cost multicast linear programming model is solved over the terminal nodes and the candidate relay nodes to compute the optimal multicast network topology, including the optimal relay nodes selected by linear programming from all the candidates and the flow rates on the connection links. The heuristic algorithm design is also based on the Delaunay triangulation and linear programming techniques. The exact algorithm achieves the optimal SIF solution with exponential computational complexity, while the heuristic algorithm achieves a sub-optimal SIF solution with polynomial computational complexity. We prove the correctness of the exact SIF algorithm, and simulation results show the effectiveness of the heuristic SIF algorithm.

  20. Inverse constraints for emission fluxes of atmospheric tracers estimated from concentration measurements and Lagrangian transport

    NASA Astrophysics Data System (ADS)

    Pisso, Ignacio; Patra, Prabir; Breivik, Knut

    2015-04-01

    Lagrangian transport models based on time series of Eulerian fields provide a computationally affordable way of achieving very high resolution for limited areas and time periods. This makes them especially suitable for the analysis of point-wise measurements of atmospheric tracers. We present an application illustrated with examples of greenhouse gases from anthropogenic emissions in urban areas, biogenic emissions in Japan, and pollutants in the Arctic. We assess the algorithmic complexity of the numerical implementation as well as the use of non-procedural techniques such as object-oriented programming. We discuss aspects related to the quantification of uncertainty from prior information in the presence of model error and a limited number of observations. The case of non-linear constraints is explored using direct numerical optimisation methods.

  1. An Interactive Method to Solve Infeasibility in Linear Programming Test Assembling Models

    ERIC Educational Resources Information Center

    Huitzing, Hiddo A.

    2004-01-01

    In optimal assembly of tests from item banks, linear programming (LP) models have proved to be very useful. Assembly by hand has become nearly impossible, but these LP techniques are able to find the best solutions, given the demands and needs of the test to be assembled and the specifics of the item bank from which it is assembled. However,…

  2. Linear programming computational experience with onyx

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Atrek, E.

    1994-12-31

    ONYX is a linear programming software package based on an efficient variation of the gradient projection method. When fully configured, it is intended for application to industrial size problems. While the computational experience is limited at the time of this abstract, the technique is found to be robust and competitive with existing methodology in terms of both accuracy and speed. An overview of the approach is presented together with a description of program capabilities, followed by a discussion of up-to-date computational experience with the program. Conclusions include advantages of the approach and envisioned future developments.

  3. Postprocessing techniques for 3D non-linear structures

    NASA Technical Reports Server (NTRS)

    Gallagher, Richard S.

    1987-01-01

    This paper reviews how graphics postprocessing techniques are currently used to examine the results of 3-D non-linear analyses, presents some new techniques that take advantage of recent technology, and discusses how these results relate to both the finite element model and its geometric parent.

  4. Runtime Analysis of Linear Temporal Logic Specifications

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Havelund, Klaus

    2001-01-01

    This report presents an approach to checking a running program against its Linear Temporal Logic (LTL) specifications. LTL is a widely used logic for expressing properties of programs viewed as sets of executions. Our approach consists of translating LTL formulae to finite-state automata, which are used as observers of the program behavior. The translation algorithm we propose modifies standard LTL-to-Büchi-automata conversion techniques to generate automata that check finite program traces. The algorithm has been implemented in a tool, which has been integrated with the generic JPaX framework for runtime analysis of Java programs.
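
    For a single property and a finite trace, the observer idea reduces to running a small state machine over the events. A hand-built sketch in Python for the property G(request -> F ack), "every request is eventually acknowledged", checked at the end of a finite trace (this illustrates the monitoring idea, not the report's translation algorithm or the JPaX tool):

      def monitor(trace):
          waiting = False                  # automaton state: any unacknowledged request?
          for event in trace:
              if event == "request":
                  waiting = True
              elif event == "ack":
                  waiting = False
          return not waiting               # accept iff nothing is left pending

      print(monitor(["request", "work", "ack", "request", "ack"]))  # True
      print(monitor(["request", "work"]))                           # False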

  5. PAPR reduction in FBMC using an ACE-based linear programming optimization

    NASA Astrophysics Data System (ADS)

    van der Neut, Nuan; Maharaj, Bodhaswar TJ; de Lange, Frederick; González, Gustavo J.; Gregorio, Fernando; Cousseau, Juan

    2014-12-01

    This paper presents four novel techniques for peak-to-average power ratio (PAPR) reduction in filter bank multicarrier (FBMC) modulation systems. The main contribution is to extend current active constellation extension (ACE) PAPR reduction methods, as used in orthogonal frequency division multiplexing (OFDM), to an FBMC implementation. The four techniques fall into two groups: linear programming optimization ACE-based techniques and smart gradient-project (SGP) ACE techniques. The linear programming (LP)-based techniques compensate for the symbol overlaps by utilizing a frame-based approach and provide a theoretical upper bound on achievable performance for the overlapping ACE techniques, while the overlapping ACE techniques can handle symbol-by-symbol processing. Furthermore, as a result of FBMC properties, the proposed techniques do not require side information transmission. The PAPR performance of the techniques is shown to match, or in some cases improve on, current PAPR techniques for FBMC. Initial analysis of the computational complexity of the SGP techniques indicates that the complexity issues with PAPR reduction in FBMC implementations can be addressed. The out-of-band interference introduced by the techniques is investigated, and it is shown that the interference can be compensated for whilst still maintaining decent PAPR performance. Additional results are provided by means of a study of the PAPR reduction of the proposed techniques at a fixed clipping probability, and the bit error rate (BER) degradation is investigated to ensure that the trade-off in terms of BER degradation is not too severe. As illustrated by exhaustive simulations, the SGP ACE-based techniques proposed are ideal candidates for practical implementation in systems employing the low-complexity polyphase implementation of FBMC modulators. The methods are shown to offer significant PAPR reduction and increase the feasibility of FBMC as a replacement modulation system for OFDM.

  6. Accuracy and efficiency of published film dosimetry techniques using a flat-bed scanner and EBT3 film.

    PubMed

    Spelleken, E; Crowe, S B; Sutherland, B; Challens, C; Kairn, T

    2018-03-01

    Gafchromic EBT3 film is widely used for patient-specific quality assurance of complex treatment plans. Film dosimetry techniques commonly involve the use of transmission scanning to produce TIFF files, which are analysed using a non-linear calibration relationship between the dose and the red-channel net optical density (netOD). Numerous film calibration techniques featured in the literature have not been independently verified or evaluated. A range of previously published film dosimetry techniques were re-evaluated to identify whether these methods produce better results than the commonly used non-linear netOD method. EBT3 film was irradiated at calibration doses between 0 and 4000 cGy, and 25 pieces of film were irradiated at 200 cGy to evaluate uniformity. The film was scanned using two different scanners: the Epson Perfection V800 and the Epson Expression 10000XL. Calibration curves, uncertainty in the fit of the curve, overall uncertainty and uniformity were calculated following the methods described by the different calibration techniques. It was found that protocols based on a conventional film dosimetry technique produced results that were accurate and uniform to within 1%, while some of the unconventional techniques produced much higher uncertainties (> 25% for some techniques). Some of the uncommon methods produced reliable results at standard treatment doses (< 400 cGy); however, none could be recommended as an efficient or accurate replacement for the common film analysis technique that uses transmission scanning, red colour channel analysis, netOD and a non-linear calibration curve for measuring doses up to 4000 cGy with EBT3 film.
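
    The common pipeline referred to above can be sketched in a few lines: compute the red-channel net optical density from pixel values before and after exposure, then fit a non-linear calibration. The pixel values, doses, and the functional form dose = a*netOD + b*netOD^n below are illustrative assumptions, not the protocols evaluated in the study:

      import numpy as np
      from scipy.optimize import curve_fit

      pv_before = np.array([42000., 41950., 41900., 41880., 41850.])   # unexposed scans
      pv_after  = np.array([41000., 36500., 31000., 26500., 23500.])   # exposed scans
      dose      = np.array([50., 200., 500., 1000., 2000.])            # delivered dose (cGy)

      netOD = np.log10(pv_before / pv_after)

      def cal(od, a, b, n):                       # a commonly assumed calibration form
          return a*od + b*od**n

      popt, _ = curve_fit(cal, netOD, dose, p0=[1000., 3000., 2.5], maxfev=10000)
      unknown = np.log10(42000. / 29000.)
      print("dose estimate (cGy):", cal(unknown, *popt))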

  7. Expanding the occupational health methodology: A concatenated artificial neural network approach to model the burnout process in Chinese nurses.

    PubMed

    Ladstätter, Felix; Garrosa, Eva; Moreno-Jiménez, Bernardo; Ponsoda, Vicente; Reales Aviles, José Manuel; Dai, Junming

    2016-01-01

    Artificial neural networks are sophisticated modelling and prediction tools capable of extracting complex, non-linear relationships between predictor (input) and predicted (output) variables. This study explores this capacity by modelling non-linearities in the hardiness-modulated burnout process with a neural network. Specifically, two multi-layer feed-forward artificial neural networks are concatenated in an attempt to model the composite non-linear burnout process. Sensitivity analysis, a Monte Carlo-based global simulation technique, is then utilised to examine the first-order effects of the predictor variables on the burnout sub-dimensions and consequences. Results show that (1) this concatenated artificial neural network approach is feasible for modelling the burnout process, (2) sensitivity analysis is a fruitful method for studying the relative importance of predictor variables, and (3) the relationships among variables involved in the development of burnout and its consequences are non-linear to different degrees. Many relationships among variables (e.g., stressors and strains) are not linear, yet researchers use linear methods such as Pearson correlation or linear regression to analyse these relationships. Artificial neural network analysis is an innovative method for analysing non-linear relationships and, in combination with sensitivity analysis, is superior to linear methods.

  8. Interactive Graphics Analysis for Aircraft Design

    NASA Technical Reports Server (NTRS)

    Townsend, J. C.

    1983-01-01

    The computer program WDES/WDEM is a preliminary aerodynamic design tool for one or two interacting, subsonic lifting surfaces. The subcritical wing design code employs a higher-order far-field drag minimization technique and uses linearized aerodynamic theory. The program is written in FORTRAN IV.

  9. Prediction of atmospheric degradation data for POPs by gene expression programming.

    PubMed

    Luan, F; Si, H Z; Liu, H T; Wen, Y Y; Zhang, X Y

    2008-01-01

    Quantitative structure-activity relationship models for the prediction of the mean and the maximum atmospheric degradation half-life values of persistent organic pollutants were developed based on the linear heuristic method (HM) and non-linear gene expression programming (GEP). Molecular descriptors, calculated from the structures alone, were used to represent the characteristics of the compounds. HM was used both to pre-select the whole descriptor sets and to build the linear model. GEP yielded satisfactory prediction results: the square of the correlation coefficient, r², was 0.80 and 0.81 for the mean and maximum half-life values of the test set, and the root mean square errors were 0.448 and 0.426, respectively. The results of this work indicate that GEP is a very promising tool for non-linear approximations.

  10. Linear combination reading program for capture gamma rays

    USGS Publications Warehouse

    Tanner, Allan B.

    1971-01-01

    This program computes a weighting function, Qj, which gives a scalar output value of unity when applied to the spectrum of a desired element and a minimum value (considering statistics) when applied to spectra of materials not containing the desired element. Intermediate values are obtained for materials containing the desired element, in proportion to the amount of the element they contain. The program is written in the BASIC language in a format specific to the Hewlett-Packard 2000A Time-Sharing System, and is an adaptation of an earlier program for linear combination reading for X-ray fluorescence analysis (Tanner and Brinkerhoff, 1971). Following the program is a sample run from a study of the application of the linear combination technique to capture-gamma-ray analysis for calcium (report in preparation).

  11. Detection and description of non-linear interdependence in normal multichannel human EEG data.

    PubMed

    Breakspear, M; Terry, J R

    2002-05-01

    This study examines human scalp electroencephalographic (EEG) data for evidence of non-linear interdependence between posterior channels, and studies the spectral and phase properties of those epochs of EEG exhibiting non-linear interdependence. Scalp EEG data were collected from 40 healthy subjects. A technique for the detection of non-linear interdependence was applied to 2.048 s segments of posterior bipolar electrode data. Amplitude-adjusted phase-randomized surrogate data were used to statistically determine which EEG epochs exhibited non-linear interdependence. Statistically significant evidence of non-linear interactions was evident in 2.9% (eyes open) to 4.8% (eyes closed) of the epochs. In the eyes-open recordings, these epochs exhibited a peak in the spectral and cross-spectral density functions at about 10 Hz. Two types of EEG epochs are evident in the eyes-closed recordings: one type exhibits a peak in the spectral density and cross-spectrum at 8 Hz, while the other has increased spectral and cross-spectral power across faster frequencies. Epochs identified as exhibiting non-linear interdependence display a tendency towards phase interdependencies across and between a broad range of frequencies. Non-linear interdependence is detectable in a small number of multichannel EEG epochs and makes a contribution to the alpha rhythm. Non-linear interdependence produces spatially distributed activity that exhibits phase synchronization between oscillations present at different frequencies. The possible physiological significance of these findings is discussed with reference to the dynamical properties of neural systems and the role of synchronous activity in the neocortex.
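
    The surrogate-data step can be sketched as follows: an amplitude-adjusted, phase-randomized surrogate preserves the amplitude distribution and (approximately) the power spectrum of an epoch while destroying any non-linear structure, so a non-linear statistic computed on the original epoch can be compared against its distribution over many surrogates. A Python sketch with a synthetic signal (the detection statistic itself and the study's parameters are not reproduced):

      import numpy as np
      rng = np.random.default_rng(3)

      def aaft_surrogate(x):
          n = len(x)
          ranks = np.argsort(np.argsort(x))
          gauss = np.sort(rng.normal(size=n))[ranks]      # Gaussianize by rank
          spec = np.fft.rfft(gauss)
          phases = rng.uniform(0, 2*np.pi, len(spec))     # randomize Fourier phases
          phases[0] = 0.0
          surr = np.fft.irfft(np.abs(spec) * np.exp(1j*phases), n)
          return np.sort(x)[np.argsort(np.argsort(surr))] # restore amplitude distribution

      epoch = rng.normal(size=512).cumsum()               # stand-in for a 2.048 s EEG epoch
      surrogates = [aaft_surrogate(epoch) for _ in range(39)]
      # compare a non-linear statistic on `epoch` against its surrogate distribution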

  12. A Universal Tare Load Prediction Algorithm for Strain-Gage Balance Calibration Data Analysis

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2011-01-01

    An algorithm is discussed that may be used to estimate tare loads of wind tunnel strain-gage balance calibration data. The algorithm was originally developed by R. Galway of IAR/NRC Canada and has been described in the literature for the iterative analysis technique. Basic ideas of Galway's algorithm, however, are universally applicable and work for both the iterative and the non-iterative analysis technique. A recent modification of Galway's algorithm is presented that improves the convergence behavior of the tare load prediction process if it is used in combination with the non-iterative analysis technique. The modified algorithm allows an analyst to use an alternate method for the calculation of intermediate non-linear tare load estimates whenever Galway's original approach does not lead to a convergence of the tare load iterations. It is also shown in detail how Galway's algorithm may be applied to the non-iterative analysis technique. Hand load data from the calibration of a six-component force balance is used to illustrate the application of the original and modified tare load prediction method. During the analysis of the data both the iterative and the non-iterative analysis technique were applied. Overall, predicted tare loads for combinations of the two tare load prediction methods and the two balance data analysis techniques showed excellent agreement as long as the tare load iterations converged. The modified algorithm, however, appears to have an advantage over the original algorithm when absolute voltage measurements of gage outputs are processed using the non-iterative analysis technique. In these situations only the modified algorithm converged because it uses an exact solution of the intermediate non-linear tare load estimate for the tare load iteration.

  13. Evaluation and comparison of the ability of online available prediction programs to predict true linear B-cell epitopes.

    PubMed

    Costa, Juan G; Faccendini, Pablo L; Sferco, Silvano J; Lagier, Claudia M; Marcipar, Iván S

    2013-06-01

    This work deals with the use of predictors to identify useful B-cell linear epitopes to develop immunoassays. Experimental techniques to meet this goal are quite expensive and time consuming. Therefore, we tested 5 free, online prediction methods (AAPPred, ABCpred, BcePred, BepiPred and Antigenic) widely used for predicting linear epitopes, using the primary structure of the protein as the only input. We chose a set of 65 experimentally well documented epitopes obtained by the most reliable experimental techniques as our true positive set. To compare the quality of the predictor methods we used their positive predictive value (PPV), i.e. the proportion of the predicted epitopes that are true, experimentally confirmed epitopes, in relation to all the epitopes predicted. We conclude that AAPPred and ABCpred yield the best results as compared with the other programs and with a random prediction procedure. Our results also indicate that considering the consensual epitopes predicted by several programs does not improve the PPV.

  14. The Linear Programming to evaluate the performance of Oral Health in Primary Care.

    PubMed

    Colussi, Claudia Flemming; Calvo, Maria Cristina Marino; Freitas, Sergio Fernando Torres de

    2013-01-01

    To show the use of linear programming to evaluate the performance of oral health in primary care, this study used data from 19 municipalities of the state of Santa Catarina that participated in the state evaluation in 2009 and have more than 50,000 inhabitants. A total of 40 indicators were evaluated, calculated using Microsoft Excel 2007 and converted to the interval [0, 1] in ascending order (one indicating the best situation and zero the worst). Applying the linear programming technique, municipalities were assessed and compared according to a performance curve named the "estimated quality frontier"; municipalities on the frontier were classified as excellent. Indicators were aggregated into synthetic indicators. The majority of municipalities not on the quality frontier (values below 1.0) had values below 0.5, indicating poor performance. The model applied to the municipalities of Santa Catarina assessed municipal management and local priorities rather than goals imposed by pre-defined parameters. In the final analysis, three municipalities were included in the "perceived quality frontier". The linear programming technique made it possible to identify gaps that must be addressed by city managers to enhance the actions taken, and also to observe each municipality's performance and compare results among similar municipalities.

  15. Chemical Equation Balancing.

    ERIC Educational Resources Information Center

    Blakley, G. R.

    1982-01-01

    Reviews mathematical techniques for solving systems of homogeneous linear equations and demonstrates that the algebraic method of balancing chemical equations is a matter of solving a system of homogeneous linear equations. FORTRAN programs applying this matrix method to chemical equation balancing are available from the author. (JN)
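
    The matrix method amounts to writing one homogeneous equation per element and taking a null-space vector as the stoichiometric coefficients. A small sketch for propane combustion, C3H8 + O2 -> CO2 + H2O, using sympy rather than the author's FORTRAN:

      from sympy import Matrix

      # Columns: C3H8, O2, CO2, H2O; rows balance C, H, O (products negated).
      A = Matrix([[3, 0, -1,  0],     # carbon
                  [8, 0,  0, -2],     # hydrogen
                  [0, 2, -2, -1]])    # oxygen
      null = A.nullspace()[0]
      coeffs = null / min(abs(c) for c in null if c != 0)
      print(coeffs.T)                 # [1, 5, 3, 4]: C3H8 + 5 O2 -> 3 CO2 + 4 H2O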

  16. Optimization model of vaccination strategy for dengue transmission

    NASA Astrophysics Data System (ADS)

    Widayani, H.; Kallista, M.; Nuraini, N.; Sari, M. Y.

    2014-02-01

    Dengue fever is an emerging tropical and subtropical disease caused by dengue virus infection. Vaccination should be carried out to prevent an epidemic in a population. The host-vector model is modified to consider a vaccination factor that prevents the occurrence of a dengue epidemic in a population. An optimal vaccination strategy using a non-linear objective function is proposed. Genetic algorithm programming techniques are combined with the fourth-order Runge-Kutta method to construct the optimal vaccination. In this paper, the appropriate vaccination strategy, found by using the optimal minimum-cost function that can reduce the size of the epidemic, is analyzed. A numerical simulation for some specific cases of the vaccination strategy is shown.
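
    A sketch of the numerical core in Python: a classical fourth-order Runge-Kutta step applied to a toy vaccinated host-vector system. All rates and the model form are invented for illustration; in the paper's setting, a genetic algorithm would wrap such an integrator, scoring candidate vaccination rates against the cost function.

      import numpy as np

      def rk4_step(f, t, y, h):
          """One classical fourth-order Runge-Kutta step."""
          k1 = f(t, y)
          k2 = f(t + h/2, y + h*k1/2)
          k3 = f(t + h/2, y + h*k2/2)
          k4 = f(t + h,   y + h*k3)
          return y + h*(k1 + 2*k2 + 2*k3 + k4)/6

      def host_vector(t, y, v=0.4, bh=0.3, bv=0.25, mu=0.1, gamma=0.14):
          """Toy SIR host / SI vector model with vaccination rate v (all rates assumed)."""
          sh, ih, rh, sv, iv = y
          return np.array([-bh*sh*iv - v*sh,
                            bh*sh*iv - gamma*ih,
                            gamma*ih + v*sh,
                            mu - bv*sv*ih - mu*sv,
                            bv*sv*ih - mu*iv])

      y = np.array([0.99, 0.01, 0.0, 0.95, 0.05])
      for step in range(2000):
          y = rk4_step(host_vector, step*0.05, y, 0.05)
      print("final infected host fraction:", y[1])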

  17. Non-linear analysis of wave progagation using transform methods and plates and shells using integral equations

    NASA Astrophysics Data System (ADS)

    Pipkins, Daniel Scott

    Two diverse topics of relevance in modern computational mechanics are treated. The first involves the modeling of linear and non-linear wave propagation in flexible lattice structures. The technique used combines the Laplace transform with the Finite Element Method (FEM): the governing differential equations and boundary conditions are transformed into the transform domain, where the FEM formulation is carried out. For linear problems, the transformed differential equations can be solved exactly, hence the method is exact; as a result, each member of the lattice structure is modeled using only one element. In the non-linear problem, the method is no longer exact. The approximation introduced is a spatial discretization of the transformed non-linear terms, which are represented in the transform domain by making use of the complex convolution theorem. A weak formulation of the resulting transformed non-linear equations yields a set of element-level matrix equations. The trial and test functions used in the weak formulation correspond to the exact solution of the linear part of the transformed governing differential equation. Numerical results are presented for both linear and non-linear systems. The linear systems modeled are longitudinal and torsional rods and Bernoulli-Euler and Timoshenko beams; for non-linear systems, a viscoelastic rod and a von Karman-type beam are modeled. The second topic is the analysis of plates and shallow shells undergoing finite deflections by the Field/Boundary Element Method. Numerical results are presented for two plate problems. The first is the bifurcation problem associated with a square plate having free boundaries, loaded by four self-equilibrating corner forces; the results are compared to two existing numerical solutions of the problem, which differ substantially.

  18. A Whirlwind Tour of Computational Geometry.

    ERIC Educational Resources Information Center

    Graham, Ron; Yao, Frances

    1990-01-01

    Describes computational geometry, which uses concepts and results from classical geometry, topology, and combinatorics, as well as standard algorithmic techniques such as sorting and searching, graph manipulations, and linear programming. Also included are special techniques and paradigms. (KR)

  19. Non-Lethal Weapons Program

    Science.gov Websites

    Marines of the 26th Marine Expeditionary Unit (MEU) practice non-lethal control techniques as part of training under the U.S. Department of Defense Non-Lethal Weapons Program (JNLWP).

  20. Solution Methods for 3D Tomographic Inversion Using A Highly Non-Linear Ray Tracer

    NASA Astrophysics Data System (ADS)

    Hipp, J. R.; Ballard, S.; Young, C. J.; Chang, M.

    2008-12-01

    To develop 3D velocity models to improve nuclear explosion monitoring capability, we have developed a 3D tomographic modeling system that traces rays using an implementation of the Um and Thurber pseudo-bending approach, with full enforcement of Snell's law in 3D at the major discontinuities. Due to the highly non-linear nature of the ray tracer, however, we are forced to substantially damp the inversion in order to converge on a reasonable model. Unfortunately, the amount of damping is not known a priori and can significantly increase the number of calls to the computationally expensive ray tracer and the least-squares matrix solver. If the damping term is too small, the solution step size either produces an unrealistic model velocity change or places the solution in or near a local minimum from which extrication is nearly impossible; if the damping term is too large, convergence can be very slow or premature convergence can occur. Standard approaches involve running inversions with a suite of damping parameters to find the best model. A better solution methodology is to take advantage of existing non-linear solution techniques such as Levenberg-Marquardt (LM) or quasi-Newton iterative solvers. In particular, the LM algorithm was specifically designed to find the minimum of a multivariate function that is expressed as the sum of squares of non-linear real-valued functions. It has become a standard technique for solving non-linear least-squares problems and is widely adopted in a broad spectrum of disciplines, including the geosciences. At each iteration, the LM approach dynamically varies the level of damping to optimize convergence. When the current estimate of the solution is far from the ultimate solution, LM behaves as a steepest descent method, but it transitions to Gauss-Newton behavior, with near quadratic convergence, as the estimate approaches the final solution. We show typical linear solution techniques and how they can lead to local minima if the damping is set too low. We also describe the LM technique and show how it automatically determines the appropriate damping factor as it iteratively converges on the best solution. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
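
    For reference, one common textbook form of the LM step for a residual vector r with Jacobian J solves the damped normal equations (in LaTeX notation)

      \left( J^{\top} J + \lambda \operatorname{diag}(J^{\top} J) \right) \delta m = J^{\top} r,

    where the damping λ is increased when a trial step fails to reduce the misfit (pushing the update toward steepest descent) and decreased when it succeeds (approaching Gauss-Newton behavior); this is a generic sketch, not the specific implementation described above.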

  1. Mixed linear-non-linear inversion of crustal deformation data: Bayesian inference of model, weighting and regularization parameters

    NASA Astrophysics Data System (ADS)

    Fukuda, Jun'ichi; Johnson, Kaj M.

    2010-06-01

    We present a unified theoretical framework and solution method for probabilistic, Bayesian inversions of crustal deformation data. The inversions involve multiple data sets with unknown relative weights, model parameters that are related linearly or non-linearly through theoretic models to observations, prior information on model parameters and regularization priors to stabilize underdetermined problems. To efficiently handle non-linear inversions in which some of the model parameters are linearly related to the observations, this method combines both analytical least-squares solutions and a Monte Carlo sampling technique. In this method, model parameters that are linearly and non-linearly related to observations, relative weights of multiple data sets and relative weights of prior information and regularization priors are determined in a unified Bayesian framework. In this paper, we define the mixed linear-non-linear inverse problem, outline the theoretical basis for the method, provide a step-by-step algorithm for the inversion, validate the inversion method using synthetic data and apply the method to two real data sets. We apply the method to inversions of multiple geodetic data sets with unknown relative data weights for interseismic fault slip and locking depth. We also apply the method to the problem of estimating the spatial distribution of coseismic slip on faults with unknown fault geometry, relative data weights and smoothing regularization weight.

  2. DYGABCD: A program for calculating linear A, B, C, and D matrices from a nonlinear dynamic engine simulation

    NASA Technical Reports Server (NTRS)

    Geyser, L. C.

    1978-01-01

    A digital computer program, DYGABCD, was developed that generates linearized dynamic models of simulated turbofan and turbojet engines. DYGABCD is based on an earlier computer program, DYNGEN, which is capable of calculating simulated non-linear steady-state and transient performance of one- and two-spool turbojet engines or two- and three-spool turbofan engines. Most control design techniques require linear system descriptions, and for multiple-input/multiple-output systems such as turbine engines, state-space matrix descriptions of the system are often desirable. DYGABCD computes the state-space matrices commonly referred to as the A, B, C, and D matrices required for a linear system description. The report discusses the analytical approach and provides a user's manual, FORTRAN listings, and a sample case.
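
    For reference, the A, B, C, and D matrices form the standard linear state-space description (in LaTeX notation)

      \dot{x} = A x + B u, \qquad y = C x + D u,

    obtained by linearizing a non-linear simulation \dot{x} = f(x, u), y = g(x, u) about an operating point (x_0, u_0), with A_{ij} = \partial f_i / \partial x_j, B_{ij} = \partial f_i / \partial u_j, C_{ij} = \partial g_i / \partial x_j and D_{ij} = \partial g_i / \partial u_j evaluated at (x_0, u_0); this is a sketch of the usual definition, not DYGABCD's particular perturbation scheme.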

  3. Commande de vol non lineaire d'un drone a voilure fixe par la methode du backstepping (Non-Linear Flight Control of a Fixed-Wing UAV Using the Backstepping Method)

    NASA Astrophysics Data System (ADS)

    Finoki, Edouard

    This thesis describes the design of a non-linear controller for a UAV using the backstepping method. The aircraft is a fixed-wing UAV, the NexSTAR ARF from Hobbico. The aim is to find the expressions for the aileron, elevator, and rudder deflections needed to command the flight path angle, the heading angle and the sideslip angle. Controlling the flight path angle allows steady, climbing or descending flight; controlling the heading angle allows the heading to be chosen; and annulling the sideslip angle allows an efficient flight. A good control technique has to ensure the stability of the system and provide optimal performance. Backstepping interlaces the choice of a Lyapunov function with the design of feedback control. This control technique works with the true non-linear model without any approximation. The procedure is to transform intermediate state variables into virtual inputs that control other state variables. Advantages of this technique are its recursivity, its minimal control effort, and its cascaded structure, which allows a high-order system to be divided into several simpler, lower-order systems. To design this non-linear controller, a non-linear model of the UAV was used. The equations of motion are very accurate, and the aerodynamic coefficients result from interpolations over several essential flight variables. The controller has been implemented in Matlab/Simulink and FlightGear.

  4. Remote detection of electronic devices

    DOEpatents

    Judd, Stephen L [Los Alamos, NM; Fortgang, Clifford M [Los Alamos, NM; Guenther, David C [Los Alamos, NM

    2012-09-25

    An apparatus and method for detecting solid-state electronic devices are described. Non-linear junction detection techniques are combined with spread-spectrum encoding and cross correlation to increase the range and sensitivity of the non-linear junction detection and to permit the determination of the distances of the detected electronics. Nonlinear elements are detected by transmitting a signal at a chosen frequency and detecting higher harmonic signals that are returned from responding devices.
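
    The ranging idea can be sketched numerically: transmit a long pseudo-noise code, cross-correlate the received harmonic return with the known code, and let the processing gain of the long code lift a weak echo above the noise; the correlation peak then gives the round-trip delay. The code length, delay, and noise levels below are invented:

      import numpy as np
      rng = np.random.default_rng(4)

      chips = rng.choice([-1.0, 1.0], 1023)           # pseudo-noise spreading code
      delay = 137                                     # unknown round-trip delay (samples)
      echo = np.zeros(2048)
      echo[delay:delay + 1023] += 0.05 * chips        # weak return from a junction
      echo += rng.normal(0, 0.2, 2048)                # noise well above the echo level

      corr = np.correlate(echo, chips, mode="valid")  # processing gain ~ code length
      print("estimated delay:", corr.argmax())        # about 137 -> distance via c*delay/2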

  5. Optimal non-linear health insurance.

    PubMed

    Blomqvist, A

    1997-06-01

    Most theoretical and empirical work on efficient health insurance has been based on models with linear insurance schedules (a constant co-insurance parameter). In this paper, dynamic optimization techniques are used to analyse the properties of optimal non-linear insurance schedules in a model similar to one originally considered by Spence and Zeckhauser (American Economic Review, 1971, 61, 380-387) and reminiscent of those that have been used in the literature on optimal income taxation. The results of a preliminary numerical example suggest that the welfare losses from the implicit subsidy to employer-financed health insurance under US tax law may be a good deal smaller than previously estimated using linear models.

  6. Comparative data mining analysis for information retrieval of MODIS images: monitoring lake turbidity changes at Lake Okeechobee, Florida

    NASA Astrophysics Data System (ADS)

    Chang, Ni-Bin; Daranpob, Ammarin; Yang, Y. Jeffrey; Jin, Kang-Ren

    2009-09-01

    In the remote sensing field, a frequently recurring question is: which computational intelligence or data mining algorithms are most suitable for the retrieval of essential information, given that most natural systems exhibit very high non-linearity? Potential candidates include empirical regression, neural network models, support vector machines, genetic algorithms/genetic programming, analytical equations, etc. This paper compares three types of data mining techniques, including multiple non-linear regression, artificial neural networks, and genetic programming, for estimating multi-temporal turbidity changes following hurricane events at Lake Okeechobee, Florida. This retrospective analysis aims to identify how the major hurricanes of 2003-2004 impacted water quality management. Moderate Resolution Imaging Spectroradiometer (MODIS) Terra 8-day composite imagery was used to retrieve the spatial patterns of turbidity distributions for comparison against the visual patterns discernible in the in-situ observations. By evaluating four statistical parameters, the genetic programming model was selected as the most suitable data mining tool for classification, in which the MODIS band 1 image and wind speed were recognized as the major determinants by the model. The multi-temporal turbidity maps generated before and after the major hurricane events in 2003-2004 showed that turbidity levels were substantially higher after hurricane episodes. The spatial patterns of turbidity confirm that sediment-laden water travels to the shore, where it reduces the intensity of the light necessary for submerged plants to photosynthesize; this reduction results in a substantial loss of biomass during the post-hurricane period.

  7. Caracteristicas de la Instruccion Programada como Tecnica de Ensenanza (Characteristics of Programed Instruction as a Teaching Technique).

    ERIC Educational Resources Information Center

    Dorrego, Maria Elena

    This discussion of programed instruction begins with the fundamental psychological aspects and learning theories behind this teaching method. Negative and positive reinforcement, conditioning, and their relationship to programed instruction are considered. Different types of programs, both linear and branching, are discussed; criticism of the…

  8. Final Report---Optimization Under Nonconvexity and Uncertainty: Algorithms and Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeff Linderoth

    2011-11-06

    The goal of this work was to develop new algorithmic techniques for solving large-scale numerical optimization problems, focusing on problem classes that have proven to be among the most challenging for practitioners: those involving uncertainty and those involving nonconvexity. This research advanced the state of the art in solving mixed integer linear programs containing symmetry, mixed integer nonlinear programs, and stochastic optimization problems. The focus of the continuation work was on Mixed Integer Nonlinear Programs (MINLPs) and Mixed Integer Linear Programs (MILPs), especially those containing a great deal of symmetry.

  9. Interactive Classroom Graphics--Simulating Non-Linear Arrhenius Plots.

    ERIC Educational Resources Information Center

    Ben-Zion, M.; Hoz, S.

    1980-01-01

    Describes two simulation programs using an interactive graphic display terminal that were developed for a course in physical organic chemistry. Demonstrates the energetic conditions that give rise to deviations from linearity in the Arrhenius equation. (CS)
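
    The non-linearity these programs demonstrate can be reproduced numerically. The sketch below (illustrative, not the original interactive graphics programs) compares a single-pathway Arrhenius rate with the sum of two parallel pathways of different activation energies; the linear fit of ln k versus 1/T degrades in the second case.

      import numpy as np

      # An Arrhenius plot ln k vs 1/T is linear for a single pathway, but
      # curves when two parallel pathways with different activation
      # energies both contribute. Parameters are illustrative.

      R = 8.314                          # J/(mol K)
      T = np.linspace(250.0, 400.0, 30)

      def k_arr(A, Ea):                  # single-pathway Arrhenius rate constant
          return A * np.exp(-Ea / (R * T))

      k_single = k_arr(1e12, 80e3)
      k_parallel = k_arr(1e12, 80e3) + k_arr(1e7, 40e3)   # two competing pathways

      for label, k in [("single", k_single), ("parallel", k_parallel)]:
          # linear fit of ln k vs 1/T; curvature shows up in the residuals
          slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
          resid = np.log(k) - (slope / T + intercept)
          print(f"{label:8s} apparent Ea = {-slope * R / 1e3:6.1f} kJ/mol, "
                f"max |residual| = {np.abs(resid).max():.3f}")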

  10. Influence of a Levelness Defect in a Thrust Bearing on the Dynamic Behaviour of AN Elastic Shaft

    NASA Astrophysics Data System (ADS)

    BERGER, S.; BONNEAU, O.; FRÊNE, J.

    2002-01-01

    This paper examines the non-linear dynamic behaviour of a flexible shaft. The shaft is mounted on two journal bearings and the axial load is supported by a defective hydrodynamic thrust bearing at one end. The defect is a levelness defect of the rotor. The thrust bearing behaviour must be considered to be non-linear because of the effects of the defect. The shaft is modelled with typical beam finite elements, including gyroscopic effects. A modal technique is used to reduce the number of degrees of freedom. Results show that the thrust bearing defects introduce supplementary critical speeds. The linear approach is unable to reveal these supplementary critical speeds, which are obtained only by non-linear analysis.

  11. Alternative approaches for studying humanitarian interventions: propensity score methods to evaluate reintegration packages impact on depression, PTSD, and function impairment among child soldiers in Nepal.

    PubMed

    Kohrt, B A; Burkey, M; Stuart, E A; Koirala, S

    2015-01-01

    Ethical, logistical, and funding constraints preclude conducting randomized controlled trials (RCTs) in some humanitarian crises. A lack of RCTs and other intervention research has contributed to a limited evidence base for mental health and psychosocial support (MHPS) programs after disasters, war, and disease outbreaks. Propensity score methods (PSMs) are an alternative analysis technique with potential application for evaluating MHPS programs in humanitarian emergencies. PSMs were used to evaluate impacts of education reintegration packages (ERPs) and other (vocational or economic) reintegration packages (ORPs) v. no reintegration programs on the mental health of child soldiers. Propensity scores were used to determine weighting of child soldiers in each of the three treatment arms. Multiple linear regression was used to estimate adjusted changes in symptom score severity on culturally validated measures of depression, post-traumatic stress disorder (PTSD), and functional impairment from baseline to 1-year follow-up. Among 258 Nepali child soldiers participating in reintegration programs, 54.7% completed ERP and 22.9% completed ORP. There was a non-significant reduction in depression by 0.59 (95% CI -1.97 to 0.70) for ERP and by 0.60 (95% CI -2.16 to 0.96) for ORP compared with no treatment. There were non-significant increases in PTSD (1.15, 95% CI -1.55 to 3.86) and functional impairment (0.91, 95% CI -0.31 to 2.14) associated with ERP, and similar findings for ORP (PTSD: 0.66, 95% CI -2.24 to 3.57; functional impairment: 1.05, 95% CI -0.71 to 2.80). In a humanitarian crisis in which a non-randomized intervention assignment protocol was employed, the statistical technique of PSMs addressed differences in covariate distribution between child soldiers who received different reintegration packages. Our analysis did not demonstrate significant changes in psychosocial outcomes for ERPs and ORPs. We suggest the use of PSMs for evaluating non-randomized interventions in humanitarian crises when randomized designs are not feasible.
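
    For readers unfamiliar with PSMs, a minimal two-arm inverse-probability-weighting sketch on synthetic data follows, assuming scikit-learn is available; the covariates, assignment model, and effect sizes are invented, and the study's three-arm weighting and validated symptom scales are not reproduced.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # Two-arm propensity-score-weighting sketch on synthetic data.
      rng = np.random.default_rng(1)
      n = 500
      age = rng.normal(14, 2, n)                    # hypothetical covariate
      exposure = rng.normal(0, 1, n)                # hypothetical covariate
      X = np.column_stack([age, exposure])

      # Treatment assignment depends on covariates (non-randomized).
      p_true = 1 / (1 + np.exp(-(0.3 * exposure - 0.1 * (age - 14))))
      treated = rng.random(n) < p_true

      # Outcome: symptom change with no true treatment effect here.
      y = 0.5 * exposure + rng.normal(0, 1, n)

      ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]
      w = np.where(treated, 1 / ps, 1 / (1 - ps))   # inverse-probability weights

      # Weighted difference in means estimates the average treatment effect.
      ate = (np.average(y[treated], weights=w[treated])
             - np.average(y[~treated], weights=w[~treated]))
      print(f"weighted effect estimate: {ate:+.3f} (true effect: 0)")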

  12. Soft tissue modelling through autowaves for surgery simulation.

    PubMed

    Zhong, Yongmin; Shirinzadeh, Bijan; Alici, Gursel; Smith, Julian

    2006-09-01

    Modelling of soft tissue deformation is of great importance to virtual reality based surgery simulation. This paper presents a new methodology for simulation of soft tissue deformation by drawing an analogy between autowaves and soft tissue deformation. The potential energy stored in a soft tissue as a result of a deformation caused by an external force is propagated among mass points of the soft tissue by non-linear autowaves. The novelty of the methodology is that (i) autowave techniques are established to describe the potential energy distribution of a deformation for extrapolating internal forces, and (ii) non-linear materials are modelled with non-linear autowaves rather than through geometric non-linearity. Integration with a haptic device has been achieved to simulate soft tissue deformation with force feedback. The proposed methodology not only deals with large-range deformations, but also accommodates isotropic, anisotropic and inhomogeneous materials by simply changing diffusion coefficients.

  13. Optimal GENCO bidding strategy

    NASA Astrophysics Data System (ADS)

    Gao, Feng

    Electricity industries worldwide are undergoing a period of profound upheaval. The conventional vertically integrated mechanism is being replaced by a competitive market environment. Generation companies have incentives to apply novel technologies to lower production costs, for example Combined Cycle units. Economic dispatch with Combined Cycle units becomes a non-convex optimization problem, which is difficult if not impossible to solve by conventional methods. Several techniques are proposed here: Mixed Integer Linear Programming, a hybrid method, as well as Evolutionary Algorithms. Evolutionary Algorithms share a common mechanism, stochastic searching per generation. The stochastic property makes evolutionary algorithms robust and adaptive enough to solve a non-convex optimization problem. This research implements genetic algorithm (GA), evolutionary programming (EP), and particle swarm (PS) algorithms for economic dispatch with Combined Cycle units, and makes a comparison with classical Mixed Integer Linear Programming. The electricity market equilibrium model not only helps the Independent System Operator/Regulator analyze market performance and market power, but also provides Market Participants the ability to build optimal bidding strategies based on microeconomic analysis. Supply Function Equilibrium (SFE) is attractive compared to traditional models. This research identifies a proper SFE model, which can be applied to a multiple-period situation. The equilibrium condition using discrete-time optimal control is then developed for fuel resource constraints. Finally, the research discusses the issues of multiple equilibria and mixed strategies, which are caused by the transmission network. Additionally, an advantage of the proposed model for merchant transmission planning is discussed. A market simulator is a valuable training and evaluation tool to assist sellers, buyers, and regulators to understand market performance and make better decisions. A traditional optimization model may not be enough to consider the distributed, large-scale, and complex energy market. This research compares the performance and searching paths of different artificial life techniques such as GA, EP, and PS, and looks for a proper method to emulate Generation Companies' (GENCOs) bidding strategies. After deregulation, GENCOs face risk and uncertainty associated with the fast-changing market environment. A profit-based bidding decision support system is critical for GENCOs to keep a competitive position in the new environment. Most past research does not pay special attention to the piecewise staircase characteristic of generator offer curves. This research proposes an optimal bidding strategy based on Parametric Linear Programming. The proposed algorithm is able to handle actual piecewise staircase energy offer curves. The proposed method is then extended to incorporate incomplete information based on Decision Analysis. Finally, the author develops an optimal bidding tool (GenBidding) and applies it to the RTS96 test system.

  14. A comparative study between nonlinear regression and nonparametric approaches for modelling Phalaris paradoxa seedling emergence

    USDA-ARS?s Scientific Manuscript database

    Parametric non-linear regression (PNR) techniques commonly are used to develop weed seedling emergence models. Such techniques, however, require statistical assumptions that are difficult to meet. To examine and overcome these limitations, we compared PNR with a nonparametric estimation technique. F...

  15. New techniques for the quantification and modeling of remotely sensed alteration and linear features in mineral resource assessment studies

    USGS Publications Warehouse

    Trautwein, C.M.; Rowan, L.C.

    1987-01-01

    Linear structural features and hydrothermally altered rocks that were interpreted from Landsat data have been used by the U.S. Geological Survey (USGS) in regional mineral resource appraisals for more than a decade. In the past, linear features and alterations have been incorporated into models for assessing mineral resources potential by manually overlaying these and other data sets. Recently, USGS research into computer-based geographic information systems (GIS) for mineral resources assessment programs has produced several new techniques for data analysis, quantification, and integration to meet assessment objectives.

  16. Non-linear eigensolver-based alternative to traditional SCF methods

    NASA Astrophysics Data System (ADS)

    Gavin, B.; Polizzi, E.

    2013-05-01

    The self-consistent procedure in electronic structure calculations is revisited using a highly efficient and robust algorithm for solving the non-linear eigenvector problem, i.e., H(ψ)ψ = Eψ. This new scheme is derived from a generalization of the FEAST eigenvalue algorithm to account for the non-linearity of the Hamiltonian with the occupied eigenvectors. Using a series of numerical examples and the density functional theory Kohn-Sham model, it will be shown that our approach can outperform the traditional SCF mixing-scheme techniques by providing a higher convergence rate, convergence to the correct solution regardless of the choice of the initial guess, and a significant reduction of the eigenvalue solve time in simulations.
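
    The flavor of the underlying fixed-point problem can be conveyed with a toy example. The sketch below solves a made-up non-linear eigenproblem H(ψ)ψ = Eψ by a plain damped SCF mixing iteration, i.e., the traditional scheme the paper improves upon, not the FEAST-based solver itself.

      import numpy as np

      # Toy non-linear eigenproblem H(psi) psi = E psi via damped SCF mixing.
      # The Hamiltonian and its non-linearity are invented for illustration.

      n = 50
      rng = np.random.default_rng(2)
      A = rng.standard_normal((n, n))
      H0 = (A + A.T) / 2                    # fixed linear part

      def H(psi):
          # non-linearity: a Hartree-like diagonal term built from |psi|^2
          return H0 + 0.5 * np.diag(psi**2)

      psi = np.ones(n) / np.sqrt(n)
      for it in range(200):
          evals, evecs = np.linalg.eigh(H(psi))
          new = evecs[:, 0]                 # lowest eigenvector of the frozen H
          if new @ psi < 0:                 # fix the arbitrary sign
              new = -new
          mixed = 0.5 * psi + 0.5 * new     # damping (mixing) aids convergence
          mixed /= np.linalg.norm(mixed)
          if np.linalg.norm(mixed - psi) < 1e-10:
              break
          psi = mixed

      print(f"converged in {it} iterations, E = {evals[0]:.6f}")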

  17. Linear and nonlinear stability of the Blasius boundary layer

    NASA Technical Reports Server (NTRS)

    Bertolotti, F. P.; Herbert, TH.; Spalart, P. R.

    1992-01-01

    Two new techniques for the study of the linear and nonlinear instability in growing boundary layers are presented. The first technique employs partial differential equations of parabolic type exploiting the slow change of the mean flow, disturbance velocity profiles, wavelengths, and growth rates in the streamwise direction. The second technique solves the Navier-Stokes equation for spatially evolving disturbances using buffer zones adjacent to the inflow and outflow boundaries. Results of both techniques are in excellent agreement. The linear and nonlinear development of Tollmien-Schlichting (TS) waves in the Blasius boundary layer is investigated with both techniques and with a local procedure based on a system of ordinary differential equations. The results are compared with previous work and the effects of non-parallelism and nonlinearity are clarified. The effect of nonparallelism is confirmed to be weak and, consequently, not responsible for the discrepancies between measurements and theoretical results for parallel flow.

  18. Multiple regression technique for Pth degree polynomials with and without linear cross products

    NASA Technical Reports Server (NTRS)

    Davis, J. W.

    1973-01-01

    A multiple regression technique was developed by which the nonlinear behavior of specified independent variables can be related to a given dependent variable. The polynomial expression can be of Pth degree and can incorporate N independent variables. Two cases are treated such that mathematical models can be studied both with and without linear cross products. The resulting surface fits can be used to summarize trends for a given phenomenon and provide a mathematical relationship for subsequent analysis. To implement this technique, separate computer programs were developed for the case without linear cross products and for the case incorporating such cross products; these evaluate the various constants in the model regression equation. In addition, the significance of the estimated regression equation is considered, and the standard deviation, the F statistic, the maximum absolute percent error, and the average of the absolute values of the percent error are evaluated. The computer programs and their manner of utilization are described. Sample problems are included to illustrate the use and capability of the technique; they show the output formats and typical plots comparing computer results to each set of input data.
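
    A modern equivalent of the two model forms is a few lines of least squares (the original work used dedicated computer programs). The sketch below, with invented data, fits a second-degree model in two variables with and without the linear cross product column.

      import numpy as np

      # Two model forms for P = 2, N = 2 variables:
      #   without cross products: y ~ 1, x1, x1^2, x2, x2^2
      #   with the linear cross product: the x1*x2 column is added.

      rng = np.random.default_rng(3)
      x1, x2 = rng.uniform(-1, 1, (2, 200))
      y = 1 + 2*x1 - x2 + 0.5*x1**2 + 1.5*x1*x2 + rng.normal(0, 0.05, 200)

      X_plain = np.column_stack([np.ones_like(x1), x1, x1**2, x2, x2**2])
      X_cross = np.column_stack([X_plain, x1 * x2])

      for name, X in [("no cross products", X_plain), ("with cross products", X_cross)]:
          coef, *_ = np.linalg.lstsq(X, y, rcond=None)
          resid = y - X @ coef
          print(f"{name:20s}: RMS error = {resid.std():.4f}, "
                f"max |error| = {np.abs(resid).max():.4f}")

    With the cross product included, the residuals drop to the noise level; without it, the unmodelled x1*x2 term dominates the error.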

  19. Resimulation of noise: a precision estimator for least square error curve-fitting tested for axial strain time constant imaging

    NASA Astrophysics Data System (ADS)

    Nair, S. P.; Righetti, R.

    2015-05-01

    Recent elastography techniques focus on imaging information on properties of materials which can be modeled as viscoelastic or poroelastic. These techniques often require fitting temporal strain data, acquired from either a creep or stress-relaxation experiment, to a mathematical model using least square error (LSE) parameter estimation. It is known that strain versus time for tissues undergoing creep compression follows a non-linear relationship. In non-linear cases, devising a measure of estimate reliability can be challenging. In this article, we have developed and tested a method to provide non-linear LSE parameter estimate reliability, which we call Resimulation of Noise (RoN). RoN provides a measure of reliability by estimating the spread of parameter estimates from a single experiment realization. We have tested RoN specifically for the case of axial strain time constant parameter estimation in poroelastic media. Our tests show that the RoN estimated precision has a linear relationship to the actual precision of the LSE estimator. We have also compared results from the RoN derived measure of reliability against a commonly used reliability measure: the correlation coefficient (CorrCoeff). Our results show that CorrCoeff is a poor measure of estimate reliability for non-linear LSE parameter estimation. While RoN is specifically tested only for axial strain time constant imaging, a general algorithm is provided for use in all LSE parameter estimation.
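
    The general RoN algorithm can be sketched on a toy mono-exponential decay (the paper's application is the poroelastic axial strain time constant; the model and noise level here are invented): fit the measured curve, estimate the noise from the residuals, resimulate many noisy copies of the fitted curve, refit each, and take the spread of the refitted parameters as the precision.

      import numpy as np
      from scipy.optimize import curve_fit

      def model(t, a, tau):
          return a * np.exp(-t / tau)

      rng = np.random.default_rng(4)
      t = np.linspace(0, 5, 100)
      y = model(t, 1.0, 1.5) + rng.normal(0, 0.05, t.size)   # one real acquisition

      p_hat, _ = curve_fit(model, t, y, p0=(1.0, 1.0))        # step 1: LSE fit
      sigma = np.std(y - model(t, *p_hat))                    # step 2: noise level

      taus = []
      for _ in range(500):                                    # steps 3-4: resimulate, refit
          y_sim = model(t, *p_hat) + rng.normal(0, sigma, t.size)
          p_sim, _ = curve_fit(model, t, y_sim, p0=p_hat)
          taus.append(p_sim[1])

      print(f"tau = {p_hat[1]:.3f} +/- {np.std(taus):.3f} (RoN precision)")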

  20. Toward Control of Universal Scaling in Critical Dynamics

    DTIC Science & Technology

    2016-01-27

    This program aims to synergistically combine two powerful and very successful theories for the non-linear stochastic dynamics of cooperative multi-component systems. (Principal investigators: Uwe C. Täuber, Michel Pleimling, Daniel J. Stilwell.)

  1. Non-Linear Editing for the Smaller College-Level Production Program, Rev. 2.0.

    ERIC Educational Resources Information Center

    Tetzlaff, David

    This paper focuses on a specific topic and contention: Non-linear editing earns its place in a liberal arts setting because it is a superior tool to teach the concepts of how moving picture discourse is constructed through editing. The paper first points out that most students at small liberal arts colleges are not going to wind up working…

  2. A minimax technique for time-domain design of preset digital equalizers using linear programming

    NASA Technical Reports Server (NTRS)

    Vaughn, G. L.; Houts, R. C.

    1975-01-01

    A linear programming technique is presented for the design of a preset finite-impulse response (FIR) digital filter to equalize the intersymbol interference (ISI) present in a baseband channel with known impulse response. A minimax technique is used which minimizes the maximum absolute error between the actual received waveform and a specified raised-cosine waveform. Transversal and frequency-sampling FIR digital filters are compared as to the accuracy of the approximation, the resultant ISI and the transmitted energy required. The transversal designs typically have slightly better waveform accuracy for a given distortion; however, the frequency-sampling equalizer uses fewer multipliers and requires less transmitted energy. A restricted transversal design is shown to use the least number of multipliers at the cost of a significant increase in energy and loss of waveform accuracy at the receiver.
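
    The minimax criterion maps naturally onto a linear program via an epigraph variable. The sketch below, with an invented channel and target response, chooses transversal taps w to minimize the worst-case deviation of the equalized pulse from the desired waveform; it is a simplified stand-in for the paper's raised-cosine formulation.

      import numpy as np
      from scipy.optimize import linprog

      # Minimax FIR equalizer: min t  s.t.  -t <= (C w - d)_k <= t,
      # with C the convolution matrix of the channel. Channel h and
      # target d are illustrative, not the paper's channel.

      h = np.array([0.1, 1.0, 0.4, -0.2])          # channel impulse response
      n_taps = 11
      m = len(h) + n_taps - 1                      # length of the equalized pulse

      # Convolution matrix: C @ w == np.convolve(h, w)
      C = np.zeros((m, n_taps))
      for j in range(n_taps):
          C[j:j + len(h), j] = h

      d = np.zeros(m)
      d[m // 2] = 1.0                              # target: a clean centered pulse

      # variables z = [w (free), t]; objective = t
      A_ub = np.block([[ C, -np.ones((m, 1))],
                       [-C, -np.ones((m, 1))]])
      b_ub = np.concatenate([d, -d])
      c = np.zeros(n_taps + 1)
      c[-1] = 1.0
      res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                    bounds=[(None, None)] * n_taps + [(0, None)])
      print(f"worst-case error (peak residual ISI) = {res.x[-1]:.4f}")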

  3. Using Log Linear Analysis for Categorical Family Variables.

    ERIC Educational Resources Information Center

    Moen, Phyllis

    The Goodman technique of log linear analysis is ideal for family research, because it is designed for categorical (non-quantitative) variables. Variables are dichotomized (for example, married/divorced, childless/with children) or otherwise categorized (for example, level of permissiveness, life cycle stage). Contingency tables are then…

  4. Revisiting Isotherm Analyses Using R: Comparison of Linear, Non-linear, and Bayesian Techniques

    EPA Science Inventory

    Extensive adsorption isotherm data exist for an array of chemicals of concern on a variety of engineered and natural sorbents. Several isotherm models exist that can accurately describe these data from which the resultant fitting parameters may subsequently be used in numerical ...

  5. Guidance of Nonlinear Nonminimum-Phase Dynamic Systems

    NASA Technical Reports Server (NTRS)

    Devasia, Santosh

    1996-01-01

    The research work has advanced the inversion-based guidance theory for: systems with non-hyperbolic internal dynamics; systems with parameter jumps; and systems where a redesign of the output trajectory is desired. A technique to achieve output tracking for nonminimum-phase linear systems with non-hyperbolic and near non-hyperbolic internal dynamics was developed. This approach integrated stable inversion techniques, which achieve exact tracking, with approximation techniques, which modify the internal dynamics to achieve desirable performance. Such modification of the internal dynamics was used (a) to remove non-hyperbolicity, which is an obstruction to applying stable inversion techniques, and (b) to reduce the large preactuation times needed to apply stable inversion in near non-hyperbolic cases. The method was applied to an example helicopter hover control problem with near non-hyperbolic internal dynamics, illustrating the trade-off between exact tracking and reduction of preactuation time. Future work will extend these results to the guidance of nonlinear non-hyperbolic systems. The exact output tracking problem for systems with parameter jumps was also considered. Necessary and sufficient conditions were derived for the elimination of switching-introduced output transients. Previous works studied this problem by developing a regulator that maintains exact tracking through parameter jumps (switches); such techniques, however, are only applicable to minimum-phase systems. In contrast, our approach is also applicable to nonminimum-phase systems and leads to bounded but possibly non-causal solutions. In addition, for the case when the reference trajectories are generated by an exosystem, we developed an exact-tracking controller which can be written in feedback form. As in standard regulator theory, we also obtained a linear map from the states of the exosystem to the desired system state, defined via a matrix differential equation.

  6. Linear and Non-linear Information Flows In Rainfall Field

    NASA Astrophysics Data System (ADS)

    Molini, A.; La Barbera, P.; Lanza, L. G.

    The rainfall process is the result of a complex framework of non-linear dynamical interactions between the different components of the atmosphere. It preserves the complexity and the intermittent features of the generating system in space and time, as well as the strong dependence of these properties on the scale of observation. Understanding and quantifying how the non-linearity of the generating process comes to influence single rain events constitute relevant research issues in the field of hydro-meteorology, especially in those applications where timely and effective forecasting of heavy rain events can reduce the risk of failure. This work focuses on the characterization of the non-linear properties of the observed rain process and on the influence of these features on hydrological models. Among the goals of such a survey are the search for regular structures in the rainfall phenomenon and the study of the information flows within the rain field. The research focuses on three basic evolution directions for the system: in time, in space, and between the different scales. In fact, the information flows that force the system to evolve represent in general a connection between different locations in space, different instants in time and, unless the hypothesis of scale invariance is verified a priori, the different characteristic scales. A first phase of the analysis is carried out by means of classic statistical methods; then a survey of the information flows within the field is developed by means of techniques borrowed from Information Theory; and finally an analysis of the rain signal in the time and frequency domains is performed, with particular reference to its intermittent structure. The methods adopted in this last part of the work are both the classic techniques of statistical inference and a few procedures for the detection of non-linear and non-stationary features within the process starting from measured data.

  7. An Application to the Prediction of LOD Change Based on General Regression Neural Network

    NASA Astrophysics Data System (ADS)

    Zhang, X. H.; Wang, Q. J.; Zhu, J. J.; Zhang, H.

    2011-07-01

    Traditional prediction of the LOD (length of day) change was based on linear models, such as the least square model and the autoregressive technique, etc. Due to the complex non-linear features of the LOD variation, the performances of the linear model predictors are not fully satisfactory. This paper applies a non-linear neural network - general regression neural network (GRNN) model to forecast the LOD change, and the results are analyzed and compared with those obtained with the back propagation neural network and other models. The comparison shows that the performance of the GRNN model in the prediction of the LOD change is efficient and feasible.
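
    A GRNN is, at its core, Nadaraya-Watson kernel regression over the stored training patterns, which makes a sketch short. Below it predicts one step ahead of a toy non-linear series from lagged values; the series, lag count, and smoothing parameter sigma are all illustrative assumptions, not the paper's LOD data.

      import numpy as np

      def grnn_predict(X_train, y_train, X_query, sigma=0.5):
          # Gaussian kernel weight of every training pattern for each query
          d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
          w = np.exp(-d2 / (2 * sigma**2))
          return (w @ y_train) / w.sum(axis=1)   # weighted average of targets

      # toy non-linear signal with two incommensurate periods
      t = np.arange(600)
      s = np.sin(2 * np.pi * t / 50) + 0.4 * np.sin(2 * np.pi * t / 17.3)

      lags = 5
      X = np.array([s[i:i + lags] for i in range(len(s) - lags)])
      y = s[lags:]
      X_tr, y_tr, X_te, y_te = X[:500], y[:500], X[500:], y[500:]

      pred = grnn_predict(X_tr, y_tr, X_te)
      rmse = np.sqrt(((pred - y_te) ** 2).mean())
      print(f"one-step RMSE on held-out data: {rmse:.4f}")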

  8. An equivalent frequency approach for determining non-linear effects on pre-tensioned-cable cross-braced structures

    NASA Astrophysics Data System (ADS)

    Giaccu, Gian Felice

    2018-05-01

    Pre-tensioned cable braces are widely used as bracing systems in various structural typologies. This technology is fundamentally utilized for stiffening purposes in the case of steel and timber structures. The pre-stressing force imparted to the braces provides a remarkable increase in the stiffness of the system. On the other hand, the pre-tensioning force in the braces must be properly calibrated in order to satisfactorily meet both serviceability and ultimate limit states. Dynamic properties of these systems are, however, affected by non-linear behavior due to potential slackening of the pre-tensioned braces. In recent years the author has been working on a similar problem regarding the non-linear response of cables in cable-stayed bridges and braced structures. In the present paper a displacement-based approach is used to examine the non-linear behavior of a building system. The methodology operates through linearization and yields an equivalent linearized frequency that approximately characterizes, mode by mode, the dynamic behavior of the system. The equivalent frequency depends on the mechanical characteristics of the system, the pre-tensioning level assigned to the braces, and a characteristic vibration amplitude. The proposed approach can be used as a simplified technique capable of linearizing the response of structural systems whose non-linearity is induced by the slackening of pre-tensioned braces.

  9. Using genetic algorithms to determine near-optimal pricing, investment and operating strategies in the electric power industry

    NASA Astrophysics Data System (ADS)

    Wu, Dongjun

    Network industries have technologies characterized by a spatial hierarchy, the "network," with capital-intensive interconnections and time-dependent, capacity-limited flows of products and services through the network to customers. This dissertation studies service pricing, investment and business operating strategies for the electric power network. First-best solutions for a variety of pricing and investment problems have been studied. Genetic algorithms (GAs, methods based on the idea of natural evolution) were evaluated as a primary means of solving complicated network problems, with respect to pricing as well as to investment and other operating decisions. New constraint-handling techniques in GAs have been studied and tested. The application of such constraint-handling techniques to practical non-linear optimization problems has been tested on several complex network design problems, with encouraging initial results. Genetic algorithms provide solutions that are feasible and close to optimal when the optimal solution is known; in some instances, the near-optimal solutions found for small problems by the proposed GA approach can only be verified by pushing the limits of currently available non-linear optimization software. The performance is far better than that of several commercially available GA programs, which are generally inadequate for solving any of the problems studied in this dissertation, primarily because of their poor handling of constraints. Genetic algorithms, if carefully designed, seem very promising for solving difficult problems which are intractable by traditional analytic methods.

  10. A simplified computer program for the prediction of the linear stability behavior of liquid propellant combustors

    NASA Technical Reports Server (NTRS)

    Mitchell, C. E.; Eckert, K.

    1979-01-01

    A program for predicting the linear stability of liquid propellant rocket engines is presented. The underlying model assumptions and analytical steps necessary for understanding the program and its input and output are also given. The rocket engine is modeled as a right circular cylinder with an injector with a concentrated combustion zone, a nozzle, finite mean flow, and an acoustic admittance, or the sensitive time lag theory. The resulting partial differential equations are combined into two governing integral equations by the use of the Green's function method. These equations are solved using a successive approximation technique for the small amplitude (linear) case. The computational method used as well as the various user options available are discussed. Finally, a flow diagram, sample input and output for a typical application and a complete program listing for program MODULE are presented.

  11. Optical measurement of the weak non-linearity in the eardrum vibration response to auditory stimuli

    NASA Astrophysics Data System (ADS)

    Aerts, Johan

    The mammalian hearing organ consists of the external ear (auricle and ear canal) followed by the middle ear (eardrum and ossicles) and the inner ear (cochlea). Its function is to receive incoming sound waves and convert them into nerve pulses, which are processed in the final stage by the brain. The main task of the external and middle ear is to concentrate the incoming sound waves on a smaller surface, reducing the loss that would otherwise occur in transmission from air to the inner ear fluid. In the past it has been shown that this is a linear process, thus without serious distortions, for sound waves up to pressures of 130 dB SPL (~90 Pa). However, at large pressure changes of up to several kPa, the middle ear movement clearly shows non-linear behaviour. Thus, it is possible that some small non-linear distortions are also present in the middle ear vibration at lower sound pressures. In this thesis a sensitive measurement set-up is presented to detect this weak non-linear behaviour. Essentially, this set-up consists of a loudspeaker which excites the middle ear; the resulting vibration is measured with a heterodyne vibrometer. The use of specially designed acoustic excitation signals (odd random phase multisines) enables the separation of the linear and non-linear response. The application of this technique to the middle ear demonstrates that non-linear distortions are already present in the vibration of the middle ear at a sound pressure of 93 dB SPL. This non-linear component also grows strongly with increasing sound pressure. Knowledge of this non-linear component can contribute to the improvement of modern hearing aids, which operate at higher sound pressures where the non-linearities could distort the signal considerably. It is also important to know the contribution of middle ear non-linearity to otoacoustic emissions. These are non-linearities caused by the active feedback amplifier in the inner ear, and they can be detected in the external and middle ear. These signals are used for diagnostic purposes, and it is therefore important to have an estimate of the non-linear middle ear contribution to these emissions.
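
    The excitation design is simple to sketch: excite only a random subset of odd harmonic lines with random phases, and read non-linear distortion off the non-excited lines (even lines reveal even-order distortion, unexcited odd lines reveal odd-order distortion). The line grid, levels, and toy polynomial system below are illustrative assumptions.

      import numpy as np

      # Odd random-phase multisine: only a random subset of odd harmonic
      # lines is excited; the remaining lines act as detection lines.

      rng = np.random.default_rng(5)
      N = 4096                                         # samples per period
      odd_lines = np.arange(1, 200, 2)
      excited = np.sort(rng.choice(odd_lines, 80, replace=False))
      detect_odd = np.setdiff1d(odd_lines, excited)    # unexcited odd lines
      even_lines = np.arange(2, 200, 2)

      spec = np.zeros(N // 2 + 1, dtype=complex)
      spec[excited] = np.exp(1j * rng.uniform(0, 2 * np.pi, excited.size))
      u = np.fft.irfft(spec, n=N)                      # one period of excitation
      u /= np.abs(u).max()

      y = u + 0.05 * u**2 + 0.02 * u**3                # toy weakly non-linear system

      Y = np.abs(np.fft.rfft(y))
      print(f"excited odd lines : {Y[excited].mean():.3f}")
      print(f"even detect lines : {Y[even_lines].mean():.5f}  (even-order distortion)")
      print(f"odd detect lines  : {Y[detect_odd].mean():.5f}  (odd-order distortion)")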

  12. Profiling a Mind Map User: A Descriptive Appraisal

    ERIC Educational Resources Information Center

    Tucker, Joanne M.; Armstrong, Gary R.; Massad, Victor J.

    2010-01-01

    Whether manually or through the use of software, a non-linear information organization framework known as mind mapping offers an alternative method for capturing thoughts, ideas and information to linear thinking modes such as outlining. Mind mapping is brainstorming, organizing, and problem solving. This paper examines mind mapping techniques,…

  13. Overcoming Learning Barriers through Knowledge Management

    ERIC Educational Resources Information Center

    Dror, Itiel E.; Makany, Tamas; Kemp, Jonathan

    2011-01-01

    The ability to learn highly depends on how knowledge is managed. Specifically, different techniques for note-taking utilize different cognitive processes and strategies. In this paper, we compared dyslexic and control participants when using linear and non-linear note-taking. All our participants were professionals working in the banking and…

  14. Magnetorotational Instability: Nonmodal Growth and the Relationship of Global Modes to the Shearing Box

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J Squire, A Bhattacharjee

    We study the magnetorotational instability (MRI) (Balbus & Hawley 1998) using non-modal stability techniques. Despite the spectral instability of many forms of the MRI, this proves to be a natural method of analysis that is well-suited to deal with the non-self-adjoint nature of the linear MRI equations. We find that the fastest growing linear MRI structures on both local and global domains can look very different to the eigenmodes, invariably resembling waves shearing with the background flow (shear waves). In addition, such structures can grow many times faster than the least stable eigenmode over long time periods, and be localized in a completely different region of space. These ideas lead, for both axisymmetric and non-axisymmetric modes, to a natural connection between the global MRI and the local shearing box approximation. By illustrating that the fastest growing global structure is well described by the ordinary differential equations (ODEs) governing a single shear wave, we find that the shearing box is a very sensible approximation for the linear MRI, contrary to many previous claims. Since the shear wave ODEs are most naturally understood using non-modal analysis techniques, we conclude by analyzing local MRI growth over finite time-scales using these methods. The strong growth over a wide range of wave-numbers suggests that non-modal linear physics could be of fundamental importance in MRI turbulence (Squire & Bhattacharjee 2014).

  15. Triple/quadruple patterning layout decomposition via novel linear programming and iterative rounding

    NASA Astrophysics Data System (ADS)

    Lin, Yibo; Xu, Xiaoqing; Yu, Bei; Baldick, Ross; Pan, David Z.

    2016-03-01

    As the feature size of semiconductor technology scales down to 10 nm and beyond, multiple patterning lithography (MPL) has become one of the most practical candidates for lithography, along with other emerging technologies such as extreme ultraviolet lithography (EUVL), e-beam lithography (EBL) and directed self-assembly (DSA). Due to the delay of EUVL and EBL, triple and even quadruple patterning are considered for lower metal and contact layers with tight pitches. In the process of MPL, layout decomposition is the key design stage, where a layout is split into various parts and each part is manufactured through a separate mask. For metal layers, stitching may be allowed to resolve conflicts, while it is forbidden for contact and via layers. In this paper, we focus on the application of layout decomposition where stitching is not allowed, such as for contact and via layers. We propose a linear programming and iterative rounding (LPIR) solving technique to reduce the number of non-integers in the LP relaxation problem. Experimental results show that the proposed algorithms can provide high-quality decomposition solutions efficiently while introducing as few conflicts as possible.

  16. Surface and Atmospheric Parameter Retrieval From AVIRIS Data: The Importance of Non-Linear Effects

    NASA Technical Reports Server (NTRS)

    Green Robert O.; Moreno, Jose F.

    1996-01-01

    AVIRIS data represent a new and important approach for the retrieval of atmospheric and surface parameters from optical remote sensing data. Not only as a test for future space systems, but also as an operational airborne remote sensing system, the development of algorithms to retrieve information from AVIRIS data is an important step to these new approaches and capabilities. Many things have been learned since AVIRIS became operational, and the successive technical improvements in the hardware and the more sophisticated calibration techniques employed have increased the quality of the data to the point of almost meeting optimum user requirements. However, the potential capabilities of imaging spectrometry over the standard multispectral techniques have still not been fully demonstrated. Reasons for this are the technical difficulties in handling the data, the critical aspect of calibration for advanced retrieval methods, and the lack of proper models with which to invert the measured AVIRIS radiances in all the spectral channels. To achieve the potential of imaging spectrometry, these issues must be addressed. In this paper, an algorithm to retrieve information about both atmospheric and surface parameters from AVIRIS data, by using model inversion techniques, is described. Emphasis is put on the derivation of the model itself as well as proper inversion techniques, robust to noise in the data and an inadequate ability of the model to describe natural variability in the data. The problem of non-linear effects is addressed, as it has been demonstrated to be a major source of error in the numerical values retrieved by more simple, linear-based approaches. Non-linear effects are especially critical for the retrieval of surface parameters where both scattering and absorption effects are coupled, as well as in the cases of significant multiple-scattering contributions. However, sophisticated modeling approaches can handle such non-linear effects, which are especially important over vegetated surfaces. All the data used in this study were acquired during the 1991 Multisensor Airborne Campaign (MAC-Europe), as part of the European Field Experiment on a Desertification-threatened Area (EFEDA), carried out in Spain in June-July 1991.

  17. Survey of Non-Destructive Tire Inspection Techniques

    DOT National Transportation Integrated Search

    1971-07-01

    The status of several promising methods for non-destructive tire inspection is surveyed with the conclusion that radiographic, infrared, holographic and ultrasonic techniques warrant further evaluation. A program plan is outlined to correlate non-des...

  18. Intensity Biased PSP Measurement

    NASA Technical Reports Server (NTRS)

    Subramanian, Chelakara S.; Amer, Tahani R.; Oglesby, Donald M.; Burkett, Cecil G., Jr.

    2000-01-01

    The current pressure sensitive paint (PSP) technique assumes a linear relationship (Stern-Volmer Equation) between intensity ratio (I(sub 0)/I) and pressure ratio (P/P(sub 0)) over a wide range of pressures (vacuum to ambient or higher). Although this may be valid for some PSPs, in most PSPs the relationship is nonlinear, particularly at low pressures (less than 0.2 psia when the oxygen level is low). This non-linearity can be attributed to variations in the oxygen quenching (de-activation) rates (which otherwise are assumed constant) at these pressures. Other studies suggest that some paints also have non-linear calibrations at high pressures because of heterogeneous (non-uniform) oxygen diffusion and quenching. Moreover, pressure sensitive paints require correction of the output intensity for light intensity variation, paint coating variation, model dynamics, wind-off reference pressure variation, and temperature sensitivity. Therefore, to minimize the measurement uncertainties due to these causes, an in-situ intensity correction method was developed. A non-oxygen quenched paint (which provides a constant intensity at all pressures, called non-pressure sensitive paint, NPSP) was used for the reference intensity (I(sub NPSP)) with respect to which all the PSP intensities (I) were measured. The results of this study show that in order to fully reap the benefits of this technique, a totally oxygen impermeable NPSP must be available.

  20. A non-linear regression analysis program for describing electrophysiological data with multiple functions using Microsoft Excel.

    PubMed

    Brown, Angus M

    2006-04-01

    The objective of this present study was to demonstrate a method for fitting complex electrophysiological data with multiple functions using the SOLVER add-in of the ubiquitous spreadsheet Microsoft Excel. SOLVER minimizes the difference between the sum of the squares of the data to be fit and the function(s) describing the data using an iterative generalized reduced gradient method. While it is a straightforward procedure to fit data with linear functions, and we have previously demonstrated a method of non-linear regression analysis of experimental data based upon a single function, it is more complex to fit data with multiple functions, usually requiring specialized expensive computer software. In this paper we describe an easily understood program for fitting experimentally acquired data, in this case the stimulus-evoked compound action potential from the mouse optic nerve, with multiple Gaussian functions. The program is flexible and can be applied to describe data with a wide variety of user-input functions.
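
    The same least-squares idea, outside Excel, is a few lines with scipy's curve_fit; the three-peak waveform below is synthetic rather than a recorded compound action potential.

      import numpy as np
      from scipy.optimize import curve_fit

      def gaussians(t, *p):
          # p holds (amplitude, centre, width) triplets
          y = np.zeros_like(t)
          for a, mu, sig in zip(p[0::3], p[1::3], p[2::3]):
              y += a * np.exp(-((t - mu) ** 2) / (2 * sig**2))
          return y

      rng = np.random.default_rng(6)
      t = np.linspace(0, 10, 400)
      true = (1.0, 2.0, 0.3,   0.6, 3.5, 0.5,   0.3, 5.5, 0.8)
      y = gaussians(t, *true) + rng.normal(0, 0.01, t.size)

      p0 = (0.8, 1.8, 0.4,  0.5, 3.8, 0.4,  0.2, 5.0, 1.0)   # rough initial guess
      p_fit, _ = curve_fit(gaussians, t, y, p0=p0)
      print(np.round(p_fit, 3).reshape(3, 3))   # rows: (amplitude, centre, width)

    As with SOLVER, the quality of the initial guess matters; a poor p0 can send the iterative minimization to a local optimum.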

  1. Piezoelectric Non Linear Nanomechanical Temperature and Acceleration Insensitive Clocks (PENNTAC)

    DTIC Science & Technology

    2016-07-01

    Work was performed under requirements dictated by the Defense Advanced Research Projects Agency (DARPA) program. [Figure 7: Measured PN response of the non-linear 222 MHz AlN...] ...wavelength (λ) are designed as supports for resonators in which the dimensions of the vibrating body are kept fixed. The Q extracted experimentally confirms... conditions. In this way, we are able to quantitatively predict Q due to anchor losses and qualitatively describe the trends observed experimentally.

  2. Solving large mixed linear models using preconditioned conjugate gradient iteration.

    PubMed

    Strandén, I; Lidauer, M

    1999-12-01

    Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique feasible in Jacobi and conjugate gradient based iterative methods using iteration on data is presented. In the new computing technique, the calculations in the multiplication of a vector by a matrix were reordered into three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third of that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20 and 435% more time to solve the univariate and multivariate animal models, respectively. Computations of the second-best iteration-on-data program took approximately three and five times longer for the animal and test-day models, respectively, than did the new program. Good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
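
    For reference, the surrounding PCG loop looks as follows; this standard sketch uses a Jacobi (diagonal) preconditioner and an explicit matrix, whereas the paper's contribution is precisely the three-step iteration-on-data evaluation of the matrix-vector product, which is not reproduced here.

      import numpy as np

      def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=1000):
          x = np.zeros_like(b)
          r = b - A @ x
          z = M_inv_diag * r                 # apply the (diagonal) preconditioner
          p = z.copy()
          rz = r @ z
          for it in range(max_iter):
              Ap = A @ p                     # <- the step the paper reorganizes
              alpha = rz / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              if np.linalg.norm(r) < tol * np.linalg.norm(b):
                  break
              z = M_inv_diag * r
              rz_new = r @ z
              p = z + (rz_new / rz) * p
              rz = rz_new
          return x, it

      rng = np.random.default_rng(7)
      G = rng.standard_normal((200, 200))
      A = G @ G.T + 200 * np.eye(200)        # symmetric positive definite test matrix
      b = rng.standard_normal(200)
      x, its = pcg(A, b, 1.0 / np.diag(A))
      print(f"PCG converged in {its} iterations; residual "
            f"{np.linalg.norm(b - A @ x):.2e}")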

  3. Development of a computer technique for the prediction of transport aircraft flight profile sonic boom signatures. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Coen, Peter G.

    1991-01-01

    A new computer technique for the analysis of transport aircraft sonic boom signature characteristics was developed. This new technique, based on linear theory methods, combines the previously separate equivalent area and F function development with a signature propagation method using a single geometry description. The new technique was implemented in a stand-alone computer program and was incorporated into an aircraft performance analysis program. Through these implementations, both configuration designers and performance analysts are given new capabilities to rapidly analyze an aircraft's sonic boom characteristics throughout the flight envelope.

  4. Near-optimal alternative generation using modified hit-and-run sampling for non-linear, non-convex problems

    NASA Astrophysics Data System (ADS)

    Rosenberg, D. E.; Alafifi, A.

    2016-12-01

    Water resources systems analysis often focuses on finding optimal solutions. Yet an optimal solution is optimal only for the modelled issues, and managers often seek near-optimal alternatives that address un-modelled objectives, preferences, limits, uncertainties, and other issues. Early on, Modelling to Generate Alternatives (MGA) formalized near-optimal as the region comprising the original problem constraints plus a new constraint that allowed performance within a specified tolerance of the optimal objective function value. MGA identified a few maximally different alternatives from the near-optimal region. Subsequent work applied Markov Chain Monte Carlo (MCMC) sampling to generate a larger number of alternatives that span the near-optimal region of linear problems, or select portions of it for non-linear problems. We extend the MCMC Hit-And-Run method to generate alternatives that span the full extent of the near-optimal region for non-linear, non-convex problems. First, start at a feasible hit point within the near-optimal region, then run a random distance in a random direction to a new hit point. Next, repeat until the desired number of alternatives has been generated. The key step at each iterate is to run a random distance along the line in the specified direction to a new hit point. If linear equality constraints exist, we construct an orthogonal basis and use a null space transformation to confine hits and runs to a lower-dimensional space. Linear inequality constraints define the convex bounds on the line that runs through the current hit point in the specified direction. We then use slice sampling to identify a new hit point along the line within bounds defined by the non-linear inequality constraints. This technique is computationally efficient compared to prior near-optimal alternative generation techniques such as MGA, MCMC Metropolis-Hastings, evolutionary, or firefly algorithms, because search at each iteration is confined to the hit line, the algorithm can move in one step to any point in the near-optimal region, and each iterate generates a new, feasible alternative. We use the method to generate alternatives that span the near-optimal regions of simple and more complicated water management problems and may be preferred to optimal solutions. We also discuss extensions to handle non-linear equality constraints.
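
    A stripped-down version of the hit-and-run loop is shown below for an invented non-convex 2-D near-optimal region; feasible run lengths along each random line are found by dense sampling rather than the slice sampling described above.

      import numpy as np

      # Hit-and-run sketch over a near-optimal region
      #   { x : f(x) <= f_opt + tol,  |x_i| <= 2 }
      # for an illustrative multi-modal objective; all data are made up.

      def f(x):
          return (x[0]**2 + x[1]**2) + 0.5 * np.sin(3 * x[0]) * np.sin(3 * x[1])

      f_opt, tol = f(np.zeros(2)), 0.5             # pretend x = 0 is the optimum

      def feasible(x):
          return f(x) <= f_opt + tol and np.all(np.abs(x) <= 2.0)

      rng = np.random.default_rng(8)
      x = np.zeros(2)                              # feasible starting hit point
      samples = []
      for _ in range(2000):
          d = rng.standard_normal(2)
          d /= np.linalg.norm(d)                   # random direction
          ts = np.linspace(-4, 4, 400)             # candidate run lengths
          ok = [0.0] + [t for t in ts if feasible(x + t * d)]
          x = x + rng.choice(ok) * d               # run a random feasible distance
          samples.append(x)

      samples = np.array(samples)
      print("spread of near-optimal alternatives:", samples.min(0), samples.max(0))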

  5. Software For Integer Programming

    NASA Technical Reports Server (NTRS)

    Fogle, F. R.

    1992-01-01

    Improved Exploratory Search Technique for Pure Integer Linear Programming Problems (IESIP) program optimizes objective function of variables subject to confining functions or constraints, using discrete optimization or integer programming. Enables rapid solution of problems up to 10 variables in size. Integer programming required for accuracy in modeling systems containing small number of components, distribution of goods, scheduling operations on machine tools, and scheduling production in general. Written in Borland's TURBO Pascal.

  6. Characterizing driver-response relationships in marine pelagic ecosystems for improved ocean management.

    PubMed

    Hunsicker, Mary E; Kappel, Carrie V; Selkoe, Kimberly A; Halpern, Benjamin S; Scarborough, Courtney; Mease, Lindley; Amrhein, Alisan

    2016-04-01

    Scientists and resource managers often use methods and tools that assume ecosystem components respond linearly to environmental drivers and human stressors. However, a growing body of literature demonstrates that many relationships are non-linear, where small changes in a driver prompt a disproportionately large ecological response. We aim to provide a comprehensive assessment of the relationships between drivers and ecosystem components to identify where and when non-linearities are likely to occur. We focused our analyses on one of the best-studied marine systems, pelagic ecosystems, which allowed us to apply robust statistical techniques on a large pool of previously published studies. In this synthesis, we (1) conduct a wide literature review on single driver-response relationships in pelagic systems, (2) use statistical models to identify the degree of non-linearity in these relationships, and (3) assess whether general patterns exist in the strengths and shapes of non-linear relationships across drivers. Overall we found that non-linearities are common in pelagic ecosystems, comprising at least 52% of all driver-response relationships. This is likely an underestimate, as papers with higher quality data and analytical approaches reported non-linear relationships at a higher frequency (on average 11% more). Consequently, in the absence of evidence for a linear relationship, it is safer to assume a relationship is non-linear. Strong non-linearities can lead to greater ecological and socioeconomic consequences if they are unknown (and/or unanticipated), but if known they may provide clear thresholds to inform management targets. In pelagic systems, strongly non-linear relationships are often driven by climate and trophodynamic variables but are also associated with local stressors, such as overfishing and pollution, that can be more easily controlled by managers. Even when marine resource managers cannot influence ecosystem change, they can use information about threshold responses to guide how other stressors are managed and to adapt to new ocean conditions. As methods to detect and reduce uncertainty around threshold values improve, managers will be able to better understand and account for ubiquitous non-linear relationships.

  7. Image reconstruction and scan configurations enabled by optimization-based algorithms in multispectral CT

    NASA Astrophysics Data System (ADS)

    Chen, Buxin; Zhang, Zheng; Sidky, Emil Y.; Xia, Dan; Pan, Xiaochuan

    2017-11-01

    Optimization-based algorithms for image reconstruction in multispectral (or photon-counting) computed tomography (MCT) remain a topic of active research. The challenge of optimization-based image reconstruction in MCT stems from the inherently non-linear data model, which can lead to a non-convex optimization program for which no mathematically exact solver seems to exist for achieving globally optimal solutions. In this work, based upon a non-linear data model, we design a non-convex optimization program, derive its first-order-optimality conditions, and propose an algorithm to solve the program for image reconstruction in MCT. In addition to consideration of image reconstruction for the standard scan configuration, the emphasis is on investigating the algorithm's potential for enabling non-standard scan configurations with no or minimal hardware modification to existing CT systems, which has practical implications for lowered hardware cost, enhanced scanning flexibility, and reduced imaging dose/time in MCT. Numerical studies are carried out for verification of the algorithm and its implementation, and for a preliminary demonstration and characterization of the algorithm in reconstructing images and in enabling non-standard configurations with varying scanning angular range and/or x-ray illumination coverage in MCT.

  8. Parameter and Structure Inference for Nonlinear Dynamical Systems

    NASA Technical Reports Server (NTRS)

    Morris, Robin D.; Smelyanskiy, Vadim N.; Millonas, Mark

    2006-01-01

    A great many systems can be modeled in the non-linear dynamical systems framework, as ẋ = f(x) + ξ(t), where f(x) is the potential function for the system and ξ is the excitation noise. Modeling the potential using a set of basis functions, we derive the posterior for the basis coefficients. A more challenging problem is to determine the set of basis functions that are required to model a particular system. We show, using the Bayesian Information Criterion (BIC) to rank models and the beam search technique, that we can accurately determine the structure of simple non-linear dynamical system models, and the structure of the coupling between non-linear dynamical systems where the individual systems are known. This last case has important ecological applications.
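
    A minimal version of the structure-determination step: fit nested polynomial bases to synthetic derivative data by least squares and rank them with BIC (the beam search over basis subsets is not reproduced; the true system, noise level, and basis family are invented).

      import numpy as np

      # Structure inference sketch for x' = f(x) + noise: rank nested
      # polynomial models of f with the Bayesian Information Criterion.

      rng = np.random.default_rng(9)
      x = rng.uniform(-2, 2, 400)
      xdot = x - x**3 + rng.normal(0, 0.1, x.size)     # noisy derivative samples

      def bic(max_degree):
          # basis: 1, x, ..., x^max_degree; Gaussian-error BIC
          X = np.column_stack([x**d for d in range(max_degree + 1)])
          coef, *_ = np.linalg.lstsq(X, xdot, rcond=None)
          rss = ((xdot - X @ coef) ** 2).sum()
          n, k = x.size, max_degree + 1
          return n * np.log(rss / n) + k * np.log(n)

      scores = {d: bic(d) for d in range(6)}
      best = min(scores, key=scores.get)
      print("BIC by max degree:", {d: round(s, 1) for d, s in scores.items()})
      print(f"selected polynomial degree: {best} (true model is cubic)")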

  9. Electro-Optical Sensing Apparatus and Method for Characterizing Free-Space Electromagnetic Radiation

    DOEpatents

    Zhang, Xi-Cheng; Libelo, Louis Francis; Wu, Qi

    1999-09-14

    Apparatus and methods for characterizing free-space electromagnetic energy, and in particular, apparatus/method suitable for real-time two-dimensional far-infrared imaging applications are presented. The sensing technique is based on a non-linear coupling between a low-frequency electric field and a laser beam in an electro-optic crystal. In addition to a practical counter-propagating sensing technique, a co-linear approach is described which provides longer radiated field--optical beam interaction length, thereby making imaging applications practical.

  10. ELAS: A general-purpose computer program for the equilibrium problems of linear structures. Volume 2: Documentation of the program. [subroutines and flow charts

    NASA Technical Reports Server (NTRS)

    Utku, S.

    1969-01-01

    A general purpose digital computer program for the in-core solution of linear equilibrium problems of structural mechanics is documented. The program requires minimum input for the description of the problem. The solution is obtained by means of the displacement method and the finite element technique. Almost any geometry and structure may be handled because of the availability of linear, triangular, quadrilateral, tetrahedral, hexahedral, conical, triangular torus, and quadrilateral torus elements. The assumption of piecewise linear deflection distribution insures monotonic convergence of the deflections from the stiffer side with decreasing mesh size. The stresses are provided by the best-fit strain tensors in the least squares at the mesh points where the deflections are given. The selection of local coordinate systems whenever necessary is automatic. The core memory is used by means of dynamic memory allocation, an optional mesh-point relabelling scheme and imposition of the boundary conditions during the assembly time.

  11. Optimising the extraction rate of a non-durable non-renewable resource in a monopolistic market: a mathematical programming approach.

    PubMed

    Corominas, Albert; Fossas, Enric

    2015-01-01

    We assume a monopolistic market for a non-durable non-renewable resource such as crude oil, phosphates or fossil water. Stating the problem of obtaining optimal policies on extraction and pricing of the resource as a non-linear program allows general conclusions to be drawn under diverse assumptions about the demand curve, discount rates and length of the planning horizon. We compare the results with some common beliefs about the pace of exhaustion of this kind of resources.

  12. PAN AIR: A Computer Program for Predicting Subsonic or Supersonic Linear Potential Flows About Arbitrary Configurations Using a Higher Order Panel Method. Volume 1; Theory Document (Version 1.1)

    NASA Technical Reports Server (NTRS)

    Magnus, Alfred E.; Epton, Michael A.

    1981-01-01

    An outline of the derivation of the differential equation governing linear subsonic and supersonic potential flow is given. The use of Green's Theorem to obtain an integral equation over the boundary surface is discussed. The engineering techniques incorporated in the PAN AIR (Panel Aerodynamics) program (a discretization method which solves the integral equation for arbitrary first order boundary conditions) are then discussed in detail. Items discussed include the construction of the compressibility transformations, splining techniques, imposition of the boundary conditions, influence coefficient computation (including the concept of the finite part of an integral), computation of pressure coefficients, and computation of forces and moments.

  13. Multiobjective fuzzy stochastic linear programming problems with inexact probability distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamadameen, Abdulqader Othman; Zainuddin, Zaitul Marlizawati

    This study deals with multiobjective fuzzy stochastic linear programming problems with an uncertain probability distribution, defined as fuzzy assertions by ambiguous experts. The problem formulation is presented, and two solution strategies are given: a fuzzy transformation via a ranking function, and a stochastic transformation in which the α-cut technique and linguistic hedges are used on the uncertain probability distribution. A development of Sen's method is employed to find a compromise solution, supported by an illustrative numerical example.

  14. Factors predicting health practitioners' awareness of UNHS program in Malaysian non-public hospitals.

    PubMed

    Ismail, Abdussalaam Iyanda; Abdul Majid, Abdul Halim; Zakaria, Mohd Normani; Abdullah, Nor Azimah Chew; Hamzah, Sulaiman; Mukari, Siti Zamratol-Mai Sarah

    2018-06-01

    The current study examines the effects of human resource (measured by health workers' perception of UNHS), screening equipment, program layout and screening techniques on healthcare practitioners' awareness (measured by knowledge) of universal newborn hearing screening (UNHS) in Malaysian non-public hospitals. Using a cross-sectional approach, the study collected data with a validated questionnaire to obtain information on awareness of the UNHS program among health practitioners and to test the formulated hypotheses. Of the 63 questionnaires distributed to health professionals, 51 (an 81% response rate) were returned and usable for statistical analysis. The survey instruments covering healthcare practitioners' awareness, human resource, program layout, screening instrument, and screening techniques were adapted and scaled on a 7-point Likert scale ranging from 1 (little) to 7 (many). The Partial Least Squares (PLS) algorithm and bootstrapping techniques were employed to test the hypotheses of the study. Based on the beta values, t-values and p-values (β=0.478, t=1.904, p<0.10; β=0.809, t=3.921, p<0.01; β=-0.436, t=1.870, p<0.10), human resource (measured by training), functional equipment and program layout are significant predictors of enhanced knowledge of health practitioners. Together, program layout, human resource, screening technique and screening instrument explain 71% of the variance in health practitioners' awareness. Program layout, human resource, and screening instrument have effect sizes (f2) of 0.065, 0.621, and 0.211 on health practitioners' awareness, i.e., small, large and medium effects respectively. Screening technique, however, has zero effect on health practitioners' awareness, which explains why its t-statistic is not significant. Having started the UNHS program in 2003, non-public hospitals have more experienced and well-trained employees dealing with the screening tools and instruments, and the program layout is well structured in these hospitals. Yet the issue of homogeneity exists. Non-public hospitals charge for the service they render and, being profit-driven and/or profit-making establishments, must ensure quality service; they have no option other than providing value-added and innovative services. Employees in non-public hospitals have less screening to carry out, given the low number of babies delivered in private hospitals. In addition, the non-significant relationship between screening techniques and healthcare practitioners' awareness of the UNHS program reflects the fact that the techniques practiced in public and non-public hospitals are similar and standardized. Limitations and suggestions are discussed. Copyright © 2018 Elsevier B.V. All rights reserved.

  15. Non-linear Analysis of Scalp EEG by Using Bispectra: The Effect of the Reference Choice

    PubMed Central

    Chella, Federico; D'Andrea, Antea; Basti, Alessio; Pizzella, Vittorio; Marzetti, Laura

    2017-01-01

    Bispectral analysis is a signal processing technique that makes it possible to capture the non-linear and non-Gaussian properties of EEG signals. It has found various applications in EEG research and clinical practice, including the assessment of anesthetic depth, the identification of epileptic seizures, and more recently, the evaluation of non-linear cross-frequency brain functional connectivity. However, the validity and reliability of the indices drawn from bispectral analysis of EEG signals are potentially biased by the use of a non-neutral EEG reference. The present study investigates the effects of the reference choice on the analysis of the non-linear features of EEG signals through bicoherence, as well as on the estimation of cross-frequency EEG connectivity through two different non-linear measures, i.e., the cross-bicoherence and the antisymmetric cross-bicoherence. To this end, four commonly used reference schemes were considered: the vertex electrode (Cz), the digitally linked mastoids, the average reference, and the Reference Electrode Standardization Technique (REST). The reference effects were assessed both in simulations and in a real EEG experiment. The simulations allowed us to investigate: (i) the effects of electrode density on the performance of the above references in the estimation of bispectral measures; and (ii) the effects of head model accuracy on the performance of the REST. For real data, the EEG signals recorded from 10 subjects during eyes-open resting state were examined, and the distortions induced by the reference choice in the patterns of alpha-beta bicoherence, cross-bicoherence, and antisymmetric cross-bicoherence were assessed. The results showed significant differences in the findings depending on the chosen reference, with the REST outperforming all the other references in approximating the ideal neutral reference. In conclusion, this study highlights the importance of considering the effects of the reference choice in the interpretation and comparison of the results of bispectral analysis of scalp EEG. PMID:28559790

  16. Status of ADRIANO R&D in T1015 Collaboration

    DOE PAGES

    Gatto, Corrado; Di Benedetto, V.; Mazzacane, A.

    2015-02-13

    The physics program for future High Energy and High Intensity experiments requires an energy resolution of the calorimetric component of detectors at the limits of traditional techniques, together with excellent particle identification. The novel ADRIANO technology (A Dual-readout Integrally Active Non-segmented Option), currently under development at Fermilab, is showing excellent performance in those respects. Results from detailed Monte Carlo studies of the performance with respect to energy resolution, linearity of response and transverse containment, and a preliminary optimization of the layout, are presented. A baseline configuration is chosen, with an estimated energy resolution of σ(E)/E ≈ 30%/√E, to support an extensive R&D program recently started by the T1015 Collaboration at Fermilab. Furthermore, preliminary results from several test beams at the Fermilab Test Beam Facility (FTBF) of a ~1λI prototype are presented. Future prospects with ultra-heavy glass are also summarized.

  17. A New Stochastic Equivalent Linearization Implementation for Prediction of Geometrically Nonlinear Vibrations

    NASA Technical Reports Server (NTRS)

    Muravyov, Alexander A.; Turner, Travis L.; Robinson, Jay H.; Rizzi, Stephen A.

    1999-01-01

    In this paper, the problem of random vibration of geometrically nonlinear MDOF structures is considered. The solutions obtained by application of two different versions of a stochastic linearization method are compared with exact (F-P-K) solutions. The formulation of a relatively new version of the stochastic linearization method (the energy-based version) is generalized to the MDOF system case. Also, a new method for the determination of nonlinear stiffness coefficients for MDOF structures is demonstrated. This method, in combination with the equivalent linearization technique, is implemented in a new computer program. Results in terms of root-mean-square (RMS) displacements obtained by using the new program and an existing in-house code are compared for two examples of beam-like structures.
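
    The equivalent linearization idea can be illustrated on a single-degree-of-freedom Duffing oscillator; the sketch below (illustrative parameter values, not the paper's MDOF program) iterates between the Gaussian-closure equivalent stiffness and the stationary variance of the linearized system:

        # Sketch: statistical equivalent linearization of
        # x'' + c*x' + k*(x + eps*x^3) = w(t), w = white noise, two-sided PSD S0.
        # Gaussian closure gives k_eq = k*(1 + 3*eps*E[x^2]); the linearized
        # system has stationary variance var = pi*S0/(c*k_eq). Iterate to a fixed point.
        import math

        def equivalent_linearization(k=1.0, c=0.05, eps=0.5, S0=0.01, tol=1e-12):
            k_eq, var = k, 0.0
            for _ in range(200):
                var = math.pi * S0 / (c * k_eq)       # variance of linearized system
                k_new = k * (1.0 + 3.0 * eps * var)   # uses E[x^4] = 3*var^2 (Gaussian)
                if abs(k_new - k_eq) < tol:
                    break
                k_eq = k_new
            return k_eq, math.sqrt(var)

        k_eq, rms = equivalent_linearization()
        print(f"equivalent stiffness {k_eq:.4f}, RMS displacement {rms:.4f}")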

  18. Is 3D true non linear traveltime tomography reasonable ?

    NASA Astrophysics Data System (ADS)

    Herrero, A.; Virieux, J.

    2003-04-01

    The data sets requiring 3D analysis tools in the context of seismic exploration (both onshore and offshore experiments) or natural seismicity (micro-seismicity surveys or post-event measurements) are more and more numerous. Classical linearized tomographies, and also earthquake localisation codes, need an accurate 3D background velocity model. However, if the medium is complex and a priori information is not available, a 1D analysis is not able to provide an adequate background velocity image. Moreover, the design of the acquisition layouts is often intrinsically 3D and renders even 2D approaches difficult, especially in natural seismicity cases. Thus, the solution relies on the use of a 3D true non-linear approach, which allows one to explore the model space and to identify an optimal velocity image. The problem then becomes practical, and its feasibility depends on the available computing resources (memory and time). In this presentation, we show that facing a 3D traveltime tomography problem with an extensive non-linear approach, combining fast traveltime estimators based on level-set methods with optimisation techniques such as a multiscale strategy, is feasible. Moreover, because the management of inhomogeneous inversion parameters is easier in a non-linear approach, we describe how to perform a joint non-linear inversion for the seismic velocities and the source locations.

  19. The development and validation of a numerical integration method for non-linear viscoelastic modeling

    PubMed Central

    Ramo, Nicole L.; Puttlitz, Christian M.

    2018-01-01

    Compelling evidence that many biological soft tissues display both strain- and time-dependent behavior has led to the development of fully non-linear viscoelastic modeling techniques to represent the tissue’s mechanical response under dynamic conditions. Since the current stress state of a viscoelastic material is dependent on all previous loading events, numerical analyses are complicated by the requirement of computing and storing the stress at each step throughout the load history. This requirement quickly becomes computationally expensive, and in some cases intractable, for finite element models. Therefore, we have developed a strain-dependent numerical integration approach for capturing non-linear viscoelasticity that enables calculation of the current stress from a strain-dependent history state variable stored from the preceding time step only, which improves both fitting efficiency and computational tractability. This methodology was validated based on its ability to recover non-linear viscoelastic coefficients from simulated stress-relaxation (six strain levels) and dynamic cyclic (three frequencies) experimental stress-strain data. The model successfully fit each data set with average errors in recovered coefficients of 0.3% for stress-relaxation fits and 0.1% for cyclic. The results support the use of the presented methodology to develop linear or non-linear viscoelastic models from stress-relaxation or cyclic experimental data of biological soft tissues. PMID:29293558
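
    A minimal sketch of the kind of single-history-variable update the abstract describes, assuming a one-term Prony-style relaxation and a hypothetical cubic elastic backbone (the authors' strain-dependent formulation is not reproduced):

        import numpy as np

        def stress_history(strain, dt, tau=0.5, g=0.4,
                           elastic=lambda e: 10.0*e + 200.0*e**3):
            """sigma_n = (1-g)*sig_e_n + g*h_n, with the Taylor-type recurrence
            h_n = exp(-dt/tau)*h_{n-1} + exp(-dt/(2*tau))*(sig_e_n - sig_e_{n-1})."""
            a, b = np.exp(-dt/tau), np.exp(-dt/(2.0*tau))
            sig_e_prev = elastic(strain[0])
            h, out = sig_e_prev, [sig_e_prev]        # instantaneous response at start
            for e in strain[1:]:
                sig_e = elastic(e)                   # non-linear elastic backbone
                h = a*h + b*(sig_e - sig_e_prev)     # the single stored history variable
                out.append((1.0 - g)*sig_e + g*h)
                sig_e_prev = sig_e
            return np.array(out)

        # Stress-relaxation example: strain ramps up quickly, then holds.
        t = np.arange(0.0, 5.0, 0.01)
        strain = np.minimum(t/0.1, 1.0) * 0.05
        sigma = stress_history(strain, dt=0.01)
        print(f"peak stress {sigma.max():.3f}, relaxed stress {sigma[-1]:.3f}")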

  20. Multiple imputation of rainfall missing data in the Iberian Mediterranean context

    NASA Astrophysics Data System (ADS)

    Miró, Juan Javier; Caselles, Vicente; Estrela, María José

    2017-11-01

    Given the increasing need for complete rainfall data networks, diverse methods have been proposed in recent years for filling gaps in observed precipitation series, progressively more advanced than traditional approaches. The present study validates 10 methods (6 linear, 2 non-linear and 2 hybrid) that allow multiple imputation, i.e., filling missing data in multiple incomplete series of a dense network of neighboring stations at the same time. These were applied to daily and monthly rainfall in two sectors of the Júcar River Basin Authority (east Iberian Peninsula), an area characterized by high spatial irregularity and difficulty of rainfall estimation. A classification of precipitation according to its genetic origin was applied as pre-processing, and a quantile-mapping adjustment as post-processing. The results showed, in general, better performance for the non-linear and hybrid methods; notably, the non-linear PCA (NLPCA) method considerably outperforms the Self-Organizing Maps (SOM) method among the non-linear approaches. Among linear methods, the Regularized Expectation Maximization (RegEM) method was the best, but far behind NLPCA. Applying EOF filtering as post-processing of NLPCA (the hybrid approach) yielded the best results.

  1. STICAP: A linear circuit analysis program with stiff systems capability. Volume 1: Theory manual. [network analysis

    NASA Technical Reports Server (NTRS)

    Cooke, C. H.

    1975-01-01

    STICAP (Stiff Circuit Analysis Program) is a FORTRAN 4 computer program written for the CDC-6400-6600 computer series and the SCOPE 3.0 operating system. It provides the circuit analyst with a tool for automatically computing the transient responses and frequency responses of large linear time-invariant networks, both stiff and nonstiff (algorithms and numerical integration techniques are described). The circuit description and user's program input language is engineer-oriented, making the program simple to use. Engineering theories underlying STICAP are examined. A user's manual is included which explains user interaction with the program and gives results of typical circuit design applications. Also, the program structure is depicted from a systems programmer's viewpoint, and flow charts and other software documentation are given.

  2. Primal-dual techniques for online algorithms and mechanisms

    NASA Astrophysics Data System (ADS)

    Liaghat, Vahid

    An offline algorithm is one that knows the entire input in advance. An online algorithm, however, processes its input in a serial fashion. In contrast to offline algorithms, an online algorithm works in a local fashion and has to make irrevocable decisions without having the entire input. Online algorithms are often not optimal since their irrevocable decisions may turn out to be inefficient after receiving the rest of the input. For a given online problem, the goal is to design algorithms which are competitive against the offline optimal solutions. In a classical offline scenario, it is common to see a dual analysis of problems that can be formulated as a linear or convex program. Primal-dual and dual-fitting techniques have been successfully applied to many such problems. Unfortunately, the usual tricks fall short in an online setting, since an online algorithm must make decisions without even knowing the whole program. In this thesis, we study the competitive analysis of fundamental problems in the literature, such as different variants of online matching and online Steiner connectivity, via online dual techniques. Although there are many generic tools for solving an optimization problem in the offline paradigm, much less is known for tackling online problems. The main focus of this work is to design generic techniques for solving integral linear optimization problems where the solution space is restricted via a set of linear constraints. A general family of these problems is online packing/covering problems. Our work shows that for several seemingly unrelated problems, primal-dual techniques can be successfully applied as a unifying approach for analyzing these problems. We believe this leads to generic algorithmic frameworks for solving online problems. In the first part of the thesis, we show the effectiveness of our techniques in stochastic settings and their applications in Bayesian mechanism design. In particular, we introduce new techniques for solving a fundamental linear optimization problem, namely, the stochastic generalized assignment problem (GAP). This packing problem generalizes various problems such as online matching, ad allocation, and bin packing. We furthermore show applications of such results in mechanism design by introducing Prophet Secretary, a novel Bayesian model for online auctions. In the second part of the thesis, we focus on covering problems. We develop the framework of "Disk Painting" for a general class of network design problems that can be characterized by proper functions. This class generalizes the node-weighted and edge-weighted variants of several well-known Steiner connectivity problems. We furthermore design a generic technique for solving the prize-collecting variants of these problems when there exists a dual analysis for the non-prize-collecting counterparts. Hence, we solve the online prize-collecting variants of several network design problems for the first time. Finally, we focus on designing techniques for online problems with mixed packing/covering constraints. We initiate the study of degree-bounded graph optimization problems in the online setting by designing an online algorithm with a tight competitive ratio for the degree-bounded Steiner forest problem. We hope these techniques establish a starting point for the analysis of the important class of online degree-bounded optimization problems on graphs.
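
    As a toy illustration of the primal-dual flavor of these methods (not code from the thesis), the sketch below runs a discretized multiplicative update for online fractional set cover, raising the primal variables of an uncovered arriving element while growing its dual variable in lockstep:

        # Toy online fractional set cover: x[S] are primal (fractional) picks,
        # y[e] duals; the step size and instance are illustrative assumptions.
        def online_fractional_set_cover(sets, costs, elements, step=0.01):
            x = {S: 0.0 for S in sets}
            y = {}
            for e in elements:                       # elements arrive online
                hitting = [S for S in sets if e in sets[S]]
                d = len(hitting)
                y[e] = 0.0
                while sum(x[S] for S in hitting) < 1.0:
                    y[e] += step                     # dual growth pays for primal cost
                    for S in hitting:                # multiplicative + additive update
                        x[S] = x[S]*(1.0 + step/costs[S]) + step/(d*costs[S])
            return x, y

        sets = {"A": {1, 2}, "B": {2, 3}, "C": {1, 3}}
        costs = {"A": 1.0, "B": 2.0, "C": 1.0}
        x, y = online_fractional_set_cover(sets, costs, elements=[1, 3])
        print({S: round(v, 3) for S, v in x.items()})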

  3. Monitoring stress related velocity variation in concrete with a 2 × 10⁻⁵ relative resolution using diffuse ultrasound.

    PubMed

    Larose, Eric; Hall, Stephen

    2009-04-01

    Ultrasonic waves propagating in solids have stress-dependent velocities. The relation between stress (or strain) and velocity forms the basis of non-linear acoustics. In homogeneous solids, conventional time-of-flight techniques have measured this dependence with spectacular precision. In heterogeneous media such as concrete, the direct (ballistic) wave around 500 kHz is strongly attenuated and conventional techniques are less efficient. In this manuscript, the effect of weak stress changes on the late arrivals constituting the acoustic diffuse coda is tracked. A resolution of 2 × 10⁻⁵ in relative velocity change is attained, which corresponds to a sensitivity to stress change of better than 50 kPa. Therefore, the technique described here provides an original way to measure the non-linear parameter with stress variations on the order of tens of kPa.

  4. Linear regression techniques for use in the EC tracer method of secondary organic aerosol estimation

    NASA Astrophysics Data System (ADS)

    Saylor, Rick D.; Edgerton, Eric S.; Hartsell, Benjamin E.

    A variety of linear regression techniques and simple slope estimators are evaluated for use in the elemental carbon (EC) tracer method of secondary organic carbon (OC) estimation. Linear regression techniques based on ordinary least squares are not suitable for situations where measurement uncertainties exist in both regressed variables. In the past, regression based on the method of Deming [1943. Statistical Adjustment of Data. Wiley, London] has been the preferred choice for EC tracer method parameter estimation. In agreement with Chu [2005. Stable estimate of primary OC/EC ratios in the EC tracer method. Atmospheric Environment 39, 1383-1392], we find that in the limited case where primary non-combustion OC (OC_non-comb) is assumed to be zero, the ratio-of-averages (ROA) approach provides a stable and reliable estimate of the primary OC-EC ratio, (OC/EC)_pri. In contrast with Chu [2005], however, we find that the optimal use of Deming regression (and the more general York et al. [2004. Unified equations for the slope, intercept, and standard errors of the best straight line. American Journal of Physics 72, 367-375] regression) provides excellent results as well. For the more typical case where OC_non-comb is allowed to obtain a non-zero value, we find that regression based on the method of York is the preferred choice for EC tracer method parameter estimation. In the York regression technique, detailed information on uncertainties in the measurement of OC and EC is used to improve the linear best fit to the given data. If only limited information is available on the relative uncertainties of OC and EC, then Deming regression should be used. On the other hand, use of ROA in the estimation of secondary OC, and thus the assumption of a zero OC_non-comb value, generally leads to an overestimation of the contribution of secondary OC to total measured OC.
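
    A compact sketch of the Deming estimator discussed above, with delta the ratio of the y-error variance to the x-error variance (the data below are invented for illustration):

        import numpy as np

        def deming(x, y, delta=1.0):
            """Deming slope/intercept; delta=1 is orthogonal regression,
            delta -> infinity recovers ordinary least squares."""
            x, y = np.asarray(x, float), np.asarray(y, float)
            xm, ym = x.mean(), y.mean()
            sxx = np.sum((x - xm)**2)
            syy = np.sum((y - ym)**2)
            sxy = np.sum((x - xm)*(y - ym))
            slope = (syy - delta*sxx +
                     np.sqrt((syy - delta*sxx)**2 + 4*delta*sxy**2)) / (2*sxy)
            return slope, ym - slope*xm

        # Toy OC/EC-style data with noise in both variables:
        rng = np.random.default_rng(1)
        ec = rng.uniform(0.5, 3.0, 50)
        oc = 2.2*ec + 0.4                            # "true" (OC/EC)_pri = 2.2
        slope, b0 = deming(ec + rng.normal(0, 0.1, 50),
                           oc + rng.normal(0, 0.2, 50),
                           delta=(0.2/0.1)**2)       # from the known error variances
        print(f"(OC/EC)_pri estimate {slope:.2f}, intercept {b0:.2f}")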

  5. Simulation of white light generation and near light bullets using a novel numerical technique

    NASA Astrophysics Data System (ADS)

    Zia, Haider

    2018-01-01

    An accurate and efficient simulation has been devised, employing a new numerical technique, to simulate the derivative generalised non-linear Schrödinger equation in all three spatial dimensions and time. The simulation models all pertinent effects, such as self-steepening and plasma, for the non-linear propagation of ultrafast optical radiation in bulk material. Simulation results are compared to published experimental spectral data for an example ytterbium aluminum garnet system at 3.1 μm radiation and fit to within a factor of 5. The simulation shows that there is a stability point near the end of the 2 mm crystal where a quasi-light bullet (spatial-temporal soliton) is present. Within this region, the pulse is collimated at a reduced diameter (a factor of ∼2) and there exists a near temporal soliton at the spatial center. The temporal intensity within this stable region is compressed by a factor of ∼4 compared to the input. This study shows that the simulation highlights new physical phenomena, based on the interplay of various linear, non-linear and plasma effects, that go beyond the experiment and is thus integral to achieving accurate designs of white light generation systems for optical applications. An adaptive error reduction algorithm tailor-made for this simulation is also presented in the appendix.

  6. A Technique of Treating Negative Weights in WENO Schemes

    NASA Technical Reports Server (NTRS)

    Shi, Jing; Hu, Changqing; Shu, Chi-Wang

    2000-01-01

    High order accurate weighted essentially non-oscillatory (WENO) schemes have recently been developed for finite difference and finite volume methods on both structured and unstructured meshes. A key idea in WENO schemes is a linear combination of lower order fluxes or reconstructions to obtain a high order approximation. The combination coefficients, also called linear weights, are determined by the local geometry of the mesh and the order of accuracy, and may become negative. WENO procedures cannot be applied directly to obtain a stable scheme if negative linear weights are present. Previous strategies for handling this difficulty either regroup stencils or reduce the order of accuracy to get rid of the negative linear weights. In this paper we present a simple and effective technique for handling negative linear weights without any need to get rid of them.
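
    The splitting step of such a technique can be sketched as follows: every set of linear weights, some possibly negative, is separated into two positive, renormalized sets, each used in its own WENO reconstruction (theta = 3 and the sample weights below are illustrative):

        import numpy as np

        def split_weights(gamma, theta=3.0):
            """Split possibly-negative linear weights into two positive groups."""
            gamma = np.asarray(gamma, float)
            g_plus = 0.5*(gamma + theta*np.abs(gamma))   # strictly positive part
            g_minus = g_plus - gamma                     # non-negative by construction
            s_plus, s_minus = g_plus.sum(), g_minus.sum()
            return g_plus/s_plus, s_plus, g_minus/s_minus, s_minus

        # Example with one negative linear weight (note s_plus - s_minus = 1):
        d_plus, s_plus, d_minus, s_minus = split_weights([0.6, -0.2, 0.6])
        print(d_plus, s_plus, d_minus, s_minus)
        # A reconstruction then combines: s_plus*weno(d_plus) - s_minus*weno(d_minus)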

  7. Application of non-linear dynamics to the characterization of cardiac electrical instability

    NASA Technical Reports Server (NTRS)

    Kaplan, D. T.; Cohen, R. J.

    1987-01-01

    Beat-to-beat alternation in the morphology of the ECG has been previously observed in hearts susceptible to fibrillation. In addition, fibrillation has been characterized by some as a chaotic state. Period-doubling phenomena, such as alternation, and the onset of chaos have been connected by non-linear dynamical systems theory. In this paper, we describe the use of a technique from nonlinear dynamics theory, the construction of a first return map, to assess susceptibility to fibrillation in canine experiments.

  8. Computer program for post-flight evaluation of the control surface response for an attitude controlled missile

    NASA Technical Reports Server (NTRS)

    Knauber, R. N.

    1982-01-01

    A FORTRAN IV coded computer program is presented for post-flight analysis of a missile's control surface response. It includes preprocessing of digitized telemetry data for time lags, biases, non-linear calibration changes and filtering. Measurements include autopilot attitude rate and displacement gyro outputs and four control surface deflections. Simple first-order lags are assumed for the pitch, yaw and roll axes of control. Each actuator is also assumed to be represented by a first-order lag. Mixing of pitch, yaw and roll commands to the four control surfaces is assumed. A pseudo-inverse technique is used to obtain the pitch, yaw and roll components from the four measured deflections. This program has been used for over 10 years on the NASA/SCOUT launch vehicle for post-flight analysis and was helpful in detecting incipient actuator stall due to excessive hinge moments. The program is currently set up for a CDC CYBER 175 computer system. It requires 34K words of memory and contains 675 cards. A sample problem presented herein, including the optional plotting, requires eleven (11) seconds of central processor time.
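
    The pseudo-inverse step can be illustrated in a few lines; the 4x3 mixing matrix below is hypothetical, not the actual SCOUT control-surface mixing:

        import numpy as np

        # Assumed mixing: deflections = M @ [pitch, yaw, roll]
        M = np.array([[1.0, 0.0,  1.0],
                      [1.0, 0.0, -1.0],
                      [0.0, 1.0,  1.0],
                      [0.0, 1.0, -1.0]])

        deflections = np.array([2.1, 0.1, 1.45, -0.55])   # four measured surfaces
        commands = np.linalg.pinv(M) @ deflections        # least-squares recovery
        print("pitch, yaw, roll =", np.round(commands, 3))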

  9. Incomplete data based parameter identification of nonlinear and time-variant oscillators with fractional derivative elements

    NASA Astrophysics Data System (ADS)

    Kougioumtzoglou, Ioannis A.; dos Santos, Ketson R. M.; Comerford, Liam

    2017-09-01

    Various system identification techniques exist in the literature that can handle non-stationary measured time-histories, or cases of incomplete data, or address systems described by fractional calculus models. However, there are not many (if any) techniques that can address all three aforementioned challenges simultaneously in a consistent manner. In this paper, a novel multiple-input/single-output (MISO) system identification technique is developed for parameter identification of nonlinear and time-variant oscillators with fractional derivative terms subject to incomplete non-stationary data. The technique utilizes a representation of the nonlinear restoring forces as a set of parallel linear sub-systems. In this regard, the oscillator is transformed into an equivalent MISO system in the wavelet domain. Next, a recently developed L1-norm minimization procedure based on compressive sensing theory is applied for determining the wavelet coefficients of the available incomplete non-stationary input-output (excitation-response) data. Finally, these wavelet coefficients are utilized to determine appropriately defined time- and frequency-dependent wavelet-based frequency response functions and related oscillator parameters. Several linear and nonlinear time-variant systems with fractional derivative elements are used as numerical examples to demonstrate the reliability of the technique, even in cases of noise-corrupted and incomplete data.

  10. A linear model fails to predict orientation selectivity of cells in the cat visual cortex.

    PubMed Central

    Volgushev, M; Vidyasagar, T R; Pei, X

    1996-01-01

    1. Postsynaptic potentials (PSPs) evoked by visual stimulation in simple cells of the cat visual cortex were recorded using the in vivo whole-cell technique. Responses to small spots of light presented at different positions over the receptive field, and responses to elongated bars of different orientations centred on the receptive field, were recorded. 2. To test whether a linear model can account for the orientation selectivity of cortical neurones, responses to elongated bars were compared with responses predicted by a linear model from the receptive field map obtained from flashing spots. 3. The linear model faithfully predicted the preferred orientation, but not the degree of orientation selectivity or the sharpness of orientation tuning. The ratio of optimal to non-optimal responses was always underestimated by the model. 4. Thus non-linear mechanisms, which can include suppression of non-optimal responses and/or amplification of optimal responses, are involved in the generation of orientation selectivity in the primary visual cortex. PMID:8930828

  11. A novel approach for analyzing fuzzy system reliability using different types of intuitionistic fuzzy failure rates of components.

    PubMed

    Kumar, Mohit; Yadav, Shiv Prasad

    2012-03-01

    This paper addresses fuzzy system reliability analysis using different types of intuitionistic fuzzy numbers. Until now, the literature on fuzzy system reliability has assumed that the failure rates of all components of a system follow the same type of fuzzy set or intuitionistic fuzzy set. In practical problems, however, such situations rarely occur. Therefore, in the present paper, a new algorithm is introduced to construct the membership and non-membership functions of the fuzzy reliability of a system whose components follow different types of intuitionistic fuzzy failure rates. Functions of intuitionistic fuzzy numbers are calculated to construct the membership and non-membership functions of fuzzy reliability via non-linear programming techniques. Using the proposed algorithm, membership and non-membership functions of the fuzzy reliability of a series system and a parallel system are constructed. Our study generalizes various works in the literature. Numerical examples are given to illustrate the proposed algorithm. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.

  12. A Comparison of Multivariable Control Design Techniques for a Turbofan Engine Control

    NASA Technical Reports Server (NTRS)

    Garg, Sanjay; Watts, Stephen R.

    1995-01-01

    This paper compares two previously published design procedures for two different multivariable control design techniques applied to a linear model of a jet engine. The two multivariable control design techniques compared were the Linear Quadratic Gaussian with Loop Transfer Recovery (LQG/LTR) and H-Infinity synthesis. The two control design techniques were used with specific previously published design procedures to synthesize controls which would provide equivalent closed-loop frequency response for the primary control loops while assuring adequate loop decoupling. The resulting controllers were then reduced in order to minimize the programming and data storage requirements for a typical implementation. The reduced-order linear controllers designed by each method were combined with the linear model of an advanced turbofan engine and the system performance was evaluated for the continuous linear system. Included in the performance analysis are the resulting frequency and transient responses as well as actuator usage and rate capability for each design method. The controls were also analyzed for robustness with respect to structured uncertainties in the unmodeled system dynamics. The two controls were then compared for performance capability and hardware implementation issues.

  13. Effective techniques for the identification and accommodation of disturbances

    NASA Technical Reports Server (NTRS)

    Johnson, C. D.

    1989-01-01

    The successful control of dynamic systems such as space stations or launch vehicles requires a controller design methodology that acknowledges and addresses the disruptive effects caused by the external and internal disturbances that inevitably act on such systems. These disturbances, technically defined as uncontrollable inputs, typically vary with time in an uncertain manner and usually cannot be directly measured in real time. A relatively new non-statistical technique for the modeling and (on-line) identification of those complex uncertain disturbances that are not as erratic and capricious as random noise is described. This technique applies to multi-input cases and to many of the practical disturbances associated with the control of space stations or launch vehicles. Then, a collection of smart controller design techniques is described that allows controlled dynamic systems, possibly with multi-input controls, to accommodate (cope with) such disturbances with extraordinary effectiveness. These new smart controllers are designed by non-statistical techniques and typically turn out to be unconventional forms of dynamic linear controllers (compensators) with constant coefficients. The simplicity and reliability of linear, constant-coefficient controllers is well known in the aerospace field.

  14. Preprocessing of 2-Dimensional Gel Electrophoresis Images Applied to Proteomic Analysis: A Review.

    PubMed

    Goez, Manuel Mauricio; Torres-Madroñero, Maria Constanza; Röthlisberger, Sarah; Delgado-Trejos, Edilson

    2018-02-01

    Various methods and specialized software programs are available for processing two-dimensional gel electrophoresis (2-DGE) images. However, due to the anomalies present in these images, a reliable, automated, and highly reproducible system for 2-DGE image analysis has still not been achieved. The most common anomalies found in 2-DGE images include vertical and horizontal streaking, fuzzy spots, and background noise, which greatly complicate computational analysis. In this paper, we review the preprocessing techniques applied to 2-DGE images for noise reduction, intensity normalization, and background correction. We also present a quantitative comparison of non-linear filtering techniques applied to synthetic gel images, analyzing the performance of the filters under specific conditions. Synthetic proteins were modeled as two-dimensional Gaussian distributions with adjustable parameters for changing the size, intensity, and degradation. Three types of noise were added to the images: Gaussian, Rayleigh, and exponential, with signal-to-noise ratios (SNRs) ranging from 8 to 20 decibels (dB). We compared the performance of wavelet, contourlet, total variation (TV), and wavelet-total variation (WTTV) techniques using the parameters SNR and spot efficiency. In terms of spot efficiency, contourlet and TV were more sensitive to noise than wavelet and WTTV. Wavelet worked best for images with SNRs from 10 to 20 dB, whereas WTTV performed better at high noise levels. Wavelet also presented the best performance with any level of Gaussian noise, and with low levels (20-14 dB) of Rayleigh and exponential noise, in terms of SNR. Finally, the performance of the non-linear filtering techniques was evaluated using a real 2-DGE image with previously identified proteins marked. Wavelet achieved the best detection rate for the real image. Copyright © 2018 Beijing Institute of Genomics, Chinese Academy of Sciences and Genetics Society of China. Production and hosting by Elsevier B.V. All rights reserved.

  15. Modeling susceptibility difference artifacts produced by metallic implants in magnetic resonance imaging with point-based thin-plate spline image registration.

    PubMed

    Pauchard, Y; Smith, M; Mintchev, M

    2004-01-01

    Magnetic resonance imaging (MRI) suffers from geometric distortions arising from various sources. One such source is the non-linearities associated with the presence of metallic implants, which can profoundly distort the obtained images. These non-linearities result in pixel shifts and intensity changes in the vicinity of the implant, often precluding any meaningful assessment of the entire image. This paper presents a method for correcting these distortions based on non-rigid image registration techniques. Two images from a modelled three-dimensional (3D) grid phantom were subjected to point-based thin-plate spline registration. The reference image (without distortions) was obtained from a grid model including a spherical implant, and the corresponding test image containing the distortions was obtained using a previously reported technique for spatial modelling of magnetic susceptibility artifacts. After identifying the non-recoverable area in the distorted image, the calculated spline model was able to quantitatively account for the distortions, thus facilitating their compensation. Upon completion of the compensation procedure, the non-recoverable area was removed from the reference image and the latter was compared to the compensated image. A quantitative assessment of the goodness of the proposed compensation technique is presented.
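
    A minimal sketch of point-based thin-plate spline mapping with SciPy (the landmark coordinates are invented; the paper's phantom and implant model are not reproduced):

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        # Landmark pairs (distorted -> reference), e.g. grid-phantom intersections:
        distorted = np.array([[10, 10], [10, 50], [50, 10], [50, 50], [30, 32]], float)
        reference = np.array([[10, 10], [10, 50], [50, 10], [50, 50], [30, 30]], float)

        tps = RBFInterpolator(distorted, reference, kernel='thin_plate_spline')

        # Map any distorted-image coordinates back toward the reference geometry:
        print(tps(np.array([[30.0, 31.0], [20.0, 20.0]])))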

  16. Heat Transfer Search Algorithm for Non-convex Economic Dispatch Problems

    NASA Astrophysics Data System (ADS)

    Hazra, Abhik; Das, Saborni; Basu, Mousumi

    2018-06-01

    This paper presents the Heat Transfer Search (HTS) algorithm for the non-linear economic dispatch problem. The HTS algorithm is based on the laws of thermodynamics and heat transfer. The proficiency of the suggested technique has been demonstrated on three dissimilar complicated economic dispatch problems: with valve-point effect; with prohibited operating zones; and with multiple fuels and valve-point effect. Test results acquired from the suggested technique for the economic dispatch problem have been compared to those acquired from other reported evolutionary techniques. It has been observed that the suggested HTS produces superior solutions.

  18. Physiological processes non-linearly affect electrophysiological recordings during transcranial electric stimulation.

    PubMed

    Noury, Nima; Hipp, Joerg F; Siegel, Markus

    2016-10-15

    Transcranial electric stimulation (tES) is a promising tool to non-invasively manipulate neuronal activity in the human brain. Several studies have shown behavioral effects of tES, but stimulation artifacts complicate the simultaneous investigation of neural activity with EEG or MEG. Here, we first show for EEG and MEG that, contrary to previous assumptions, artifacts do not simply reflect stimulation currents; rather, heartbeat and respiration non-linearly modulate stimulation artifacts. These modulations occur irrespective of the stimulation frequency, i.e. during both transcranial alternating and direct current stimulation (tACS and tDCS). Second, we show that, although at first sight previously employed artifact rejection methods may seem to remove artifacts, the data are still contaminated by non-linear stimulation artifacts. Because of their complex nature and dependence on the subjects' physiological state, these artifacts are prone to be mistaken for neural entrainment. In sum, our results uncover non-linear tES artifacts, show that current techniques fail to fully remove them, and pave the way for new artifact rejection methods. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Second Law of Thermodynamics Applied to Metabolic Networks

    NASA Technical Reports Server (NTRS)

    Nigam, R.; Liang, S.

    2003-01-01

    We present a simple algorithm based on linear programming that combines Kirchhoff's flux and potential laws and applies them to metabolic networks to predict thermodynamically feasible reaction fluxes. These laws represent mass conservation and energy feasibility and are widely used in electrical circuit analysis. Formulating Kirchhoff's potential law around a reaction loop in terms of the null space of the stoichiometric matrix leads to a simple representation of the law of entropy that can be readily incorporated into traditional flux balance analysis without resorting to non-linear optimization. Our technique is new in that it can easily check the fluxes obtained by flux balance analysis for thermodynamic feasibility, and modify them if they are infeasible so that they satisfy the law of entropy. We illustrate our method by applying it to the network dealing with the central metabolism of Escherichia coli. Due to its simplicity, this algorithm will be useful in studying large-scale complex metabolic networks in the cells of different organisms.
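
    The flux balance core of such an approach is a small linear program; a toy sketch (hypothetical 3-metabolite, 4-reaction network, not the E. coli model) follows, to which the entropy/loop-law check described above would then be applied:

        import numpy as np
        from scipy.optimize import linprog

        # Stoichiometric matrix S (rows = metabolites, columns = reactions):
        S = np.array([[ 1, -1,  0,  0],
                      [ 0,  1, -1,  0],
                      [ 0,  1,  0, -1]])
        c = np.array([0, 0, -1, 0])        # maximize v3  <=>  minimize -v3
        bounds = [(0, 10)]*4               # irreversible fluxes, capacity 10

        res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds)
        print("thermodynamics-unchecked optimal fluxes:", res.x)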

  20. A computational algorithm for spacecraft control and momentum management

    NASA Technical Reports Server (NTRS)

    Dzielski, John; Bergmann, Edward; Paradiso, Joseph

    1990-01-01

    Developments in the area of nonlinear control theory have shown how coordinate changes in the state and input spaces of a dynamical system can be used to transform certain nonlinear differential equations into equivalent linear equations. These techniques are applied to the control of a spacecraft equipped with momentum exchange devices. An optimal control problem is formulated that incorporates a nonlinear spacecraft model. An algorithm is developed for solving the optimization problem, using feedback linearization to transform it into an equivalent problem involving a linear dynamical constraint, and a functional approximation technique to solve for the linear dynamics in terms of the control. The original problem is transformed into an unconstrained nonlinear quadratic program that yields an approximate solution to the original problem. Two examples are presented to illustrate the results.

  1. The Computer in Educational Decision Making. An Introduction and Guide for School Administrators.

    ERIC Educational Resources Information Center

    Sanders, Susan; And Others

    This text provides educational administrators with a working knowledge of the problem-solving techniques of PERT (planning, evaluation, and review technique), Linear Programming, Queueing Theory, and Simulation. The text includes an introduction to decision-making and operations research, four chapters consisting of in-depth explanations of each…

  2. Analytical aids in land management planning

    Treesearch

    David R. Betters

    1978-01-01

    Quantitative techniques may be applied to aid in completing various phases of land management planning. Analytical procedures which have been used include a procedure for public involvement, PUBLIC; a matrix information generator, MAGE5; an allocation procedure, linear programming (LP); and an input-output economic analysis (EA). These techniques have proven useful in...

  3. Analysis of calibration data for the uranium active neutron coincidence counting collar with attention to errors in the measured neutron coincidence rate

    DOE PAGES

    Croft, Stephen; Burr, Thomas Lee; Favalli, Andrea; ...

    2015-12-10

    We report that the declared linear density of 238U and 235U in fresh low-enriched-uranium light water reactor fuel assemblies can be verified for nuclear safeguards purposes using a neutron coincidence counter collar in passive and active mode, respectively. The active-mode calibration of the Uranium Neutron Collar – Light water reactor fuel (UNCL) instrument is normally performed using a non-linear fitting technique. The fitting technique relates the measured neutron coincidence rate (the predictor) to the linear density of 235U (the response) in order to estimate model parameters of the non-linear Padé equation, which traditionally is used to model the calibration data. Alternatively, following a simple data transformation, the fitting can also be performed using standard linear fitting methods. This paper compares the performance of the non-linear technique to the linear technique, using a range of possible error variance magnitudes in the measured neutron coincidence rate. We develop the required formalism and then apply the traditional (non-linear) and alternative (linear) approaches to the same experimental and corresponding simulated representative datasets. Lastly, we find that, in this context, because of the magnitude of the errors in the predictor, it is preferable not to transform to a linear model, and it is preferable not to adjust for the errors in the predictor when inferring the model parameters.
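
    For illustration only, assuming a simple rational form D = a*rho/(1 + b*rho) for the calibration curve (the paper's actual Padé equation and data are not reproduced), the two fitting routes compare as follows; the transformation 1/D = (1/a)(1/rho) + b/a re-weights the rate errors, which is why the approaches can disagree for noisy rates:

        import numpy as np
        from scipy.optimize import curve_fit

        def pade(rho, a, b):
            return a*rho/(1.0 + b*rho)

        rng = np.random.default_rng(2)
        rho = np.linspace(2.0, 20.0, 10)              # 235U linear density (arb. units)
        D = pade(rho, 50.0, 0.08) + rng.normal(0, 2.0, rho.size)  # noisy doubles rate

        (a_nl, b_nl), _ = curve_fit(pade, rho, D, p0=(40.0, 0.05))   # non-linear fit
        slope, intercept = np.polyfit(1.0/rho, 1.0/D, 1)             # linearized fit
        a_lin, b_lin = 1.0/slope, intercept/slope

        print(f"non-linear a={a_nl:.1f} b={b_nl:.3f}; linearized a={a_lin:.1f} b={b_lin:.3f}")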

  4. Non-linear associations between laryngo-pharyngeal symptoms of gastro-oesophageal reflux disease: clues from artificial intelligence analysis

    PubMed Central

    Grossi, E

    2006-01-01

    The relationship between the different symptoms of gastro-oesophageal reflux disease remains markedly obscure, due to the high underlying non-linearity and the lack of studies focusing on the problem. The aim of this study was to evaluate the hidden relationships between the triad of symptoms related to gastro-oesophageal reflux disease using advanced mathematical techniques borrowed from the artificial intelligence field, in a cohort of patients with oesophagitis. A total of 388 patients (from 60 centres) with endoscopic evidence of oesophagitis were recruited. The severity of oesophagitis was scored by means of the Savary-Miller classification. The PST algorithm was employed. This study shows that the laryngo-pharyngeal symptoms related to gastro-oesophageal reflux disease are correlated, even if in a non-linear way. PMID:17345935

  5. Non-linear associations between laryngo-pharyngeal symptoms of gastro-oesophageal reflux disease: clues from artificial intelligence analysis.

    PubMed

    Grossi, E

    2006-10-01

    The relationship between the different symptoms of gastro-oesophageal reflux disease remains markedly obscure, due to the high underlying non-linearity and the lack of studies focusing on the problem. The aim of this study was to evaluate the hidden relationships between the triad of symptoms related to gastro-oesophageal reflux disease using advanced mathematical techniques borrowed from the artificial intelligence field, in a cohort of patients with oesophagitis. A total of 388 patients (from 60 centres) with endoscopic evidence of oesophagitis were recruited. The severity of oesophagitis was scored by means of the Savary-Miller classification. The PST algorithm was employed. This study shows that the laryngo-pharyngeal symptoms related to gastro-oesophageal reflux disease are correlated, even if in a non-linear way.

  6. L-O-S-T: Logging Optimization Selection Technique

    Treesearch

    Jerry L. Koger; Dennis B. Webster

    1984-01-01

    L-O-S-T is a FORTRAN computer program developed to systematically quantify, analyze, and improve user-selected harvesting methods. Harvesting times and costs are computed for road construction, landing construction, system move between landings, skidding, and trucking. A linear programming formulation utilizing the relationships among marginal analysis, isoquants, and...

  7. Duality in non-linear programming

    NASA Astrophysics Data System (ADS)

    Jeyalakshmi, K.

    2018-04-01

    In this paper we consider duality and converse duality for a programming problem involving convex objective and constraint functions with finite-dimensional range. We do not assume any constraint qualification. The dual is presented by reducing the problem to a standard Lagrange multiplier problem.
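
    For reference, the standard Lagrangian dual pair underlying such results can be written as follows (illustrative notation; the paper's finite-dimensional-range setting is not spelled out here):

        % Primal-dual pair for a convex program:
        \begin{align*}
        \text{(P)}\quad & \min_{x \in X} f(x) \quad \text{s.t.} \quad g(x) \le 0, \\
        \text{(D)}\quad & \max_{\lambda \ge 0} \theta(\lambda), \qquad
          \theta(\lambda) = \inf_{x \in X} \bigl[ f(x) + \lambda^{\top} g(x) \bigr].
        \end{align*}
        % Weak duality, theta(lambda) <= f(x), always holds; converse duality
        % concerns recovering a primal solution from a dual one.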

  8. Analysis of randomly time varying systems by gaussian closure technique

    NASA Astrophysics Data System (ADS)

    Dash, P. K.; Iyengar, R. N.

    1982-07-01

    The Gaussian probability closure technique is applied to study the random response of multidegree of freedom stochastically time varying systems under non-Gaussian excitations. Under the assumption that the response, the coefficient and the excitation processes are jointly Gaussian, deterministic equations are derived for the first two response moments. It is further shown that this technique leads to the best Gaussian estimate in a minimum mean square error sense. An example problem is solved which demonstrates the capability of this technique for handling non-linearity, stochastic system parameters and amplitude limited responses in a unified manner. Numerical results obtained through the Gaussian closure technique compare well with the exact solutions.

  9. Algorithmic Trading with Developmental and Linear Genetic Programming

    NASA Astrophysics Data System (ADS)

    Wilson, Garnett; Banzhaf, Wolfgang

    A developmental co-evolutionary genetic programming approach (PAM DGP) and a standard linear genetic programming (LGP) stock trading system are applied to a number of stocks across market sectors. Both GP techniques were found to be robust to market fluctuations and reactive to opportunities associated with stock price rise and fall, with PAM DGP generating notably greater profit in some stock trend scenarios. Both algorithms were very accurate at buying to achieve profit and selling to protect assets, while exhibiting both moderate trading activity and the ability to maximize or minimize investment as appropriate. The content of the trading rules produced by both algorithms is also examined in relation to stock price trend scenarios.

  10. A new neural network model for solving random interval linear programming problems.

    PubMed

    Arjmandzadeh, Ziba; Safi, Mohammadreza; Nazemi, Alireza

    2017-05-01

    This paper presents a neural network model for solving random interval linear programming problems. The original problem involving random interval variable coefficients is first transformed into an equivalent convex second order cone programming problem. A neural network model is then constructed for solving the obtained convex second order cone problem. Employing Lyapunov function approach, it is also shown that the proposed neural network model is stable in the sense of Lyapunov and it is globally convergent to an exact satisfactory solution of the original problem. Several illustrative examples are solved in support of this technique. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Electro-optical and Magneto-optical Sensing Apparatus and Method for Characterizing Free-space Electromagnetic Radiation

    DOEpatents

    Zhang, Xi-Cheng; Riordan, Jenifer Ann; Sun, Feng-Guo

    2000-08-29

    Apparatus and methods for characterizing free-space electromagnetic energy, and in particular, apparatus/method suitable for real-time two-dimensional far-infrared imaging applications are presented. The sensing technique is based on a non-linear coupling between a low-frequency electric (or magnetic) field and a laser beam in an electro-optic (or magneto-optic) crystal. In addition to a practical counter-propagating sensing technique, a co-linear approach is described which provides longer radiated field-optical beam interaction length, thereby making imaging applications practical.

  12. Quantitative body DW-MRI biomarkers uncertainty estimation using unscented wild-bootstrap.

    PubMed

    Freiman, M; Voss, S D; Mulkern, R V; Perez-Rossello, J M; Warfield, S K

    2011-01-01

    We present a new method for the uncertainty estimation of diffusion parameters for quantitative body DW-MRI assessment. Diffusion parameter uncertainty estimation from DW-MRI is necessary for clinical applications that use these parameters to assess pathology. However, uncertainty estimation using traditional techniques requires repeated acquisitions, which is undesirable in routine clinical use. Model-based bootstrap techniques, for example, assume an underlying linear model for residual rescaling and cannot be utilized directly for body diffusion parameter uncertainty estimation due to the non-linearity of the body diffusion model. To offset this limitation, our method uses the unscented transform to compute the residual-rescaling parameters from the non-linear body diffusion model, and then applies the wild-bootstrap method to infer the uncertainty of the body diffusion parameters. Validation through phantom and human subject experiments shows that our method correctly identifies the regions with higher uncertainty in body DW-MRI model parameters, with a relative error of -36% in the uncertainty values.
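
    A generic unscented-transform sketch (not the authors' implementation): sigma points of a Gaussian parameter estimate are propagated through a non-linear signal model to approximate the transformed mean and covariance; the mono-exponential diffusion model and all numbers are assumptions:

        import numpy as np

        def unscented_transform(mean, cov, f, kappa=1.0):
            n = len(mean)
            L = np.linalg.cholesky((n + kappa)*cov)
            pts = [mean] + [mean + L[:, i] for i in range(n)] \
                         + [mean - L[:, i] for i in range(n)]
            w = np.full(2*n + 1, 0.5/(n + kappa)); w[0] = kappa/(n + kappa)
            ys = np.array([f(p) for p in pts])
            y_mean = w @ ys
            y_cov = sum(wi*np.outer(y - y_mean, y - y_mean) for wi, y in zip(w, ys))
            return y_mean, y_cov

        b_values = np.array([0.0, 400.0, 800.0])          # s/mm^2, assumed
        model = lambda p: p[0]*np.exp(-b_values*p[1])     # S(b) = S0*exp(-b*ADC)
        mean = np.array([1000.0, 1.2e-3])                 # [S0, ADC] estimate
        cov = np.diag([100.0**2, (1e-4)**2])
        y_mean, y_cov = unscented_transform(mean, cov, model)
        print(y_mean, np.sqrt(np.diag(y_cov)))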

  13. Macrocell path loss prediction using artificial intelligence techniques

    NASA Astrophysics Data System (ADS)

    Usman, Abraham U.; Okereke, Okpo U.; Omizegba, Elijah E.

    2014-04-01

    The prediction of propagation loss is a practical non-linear function approximation problem which linear regression or auto-regression models are limited in their ability to handle. However, computational intelligence techniques such as artificial neural networks (ANNs) and adaptive neuro-fuzzy inference systems (ANFISs) have been shown to handle non-linear function approximation and prediction problems well. In this study, a multiple layer perceptron neural network (MLP-NN), a radial basis function neural network (RBF-NN) and an ANFIS network were trained using actual signal strength measurements taken in certain suburban areas of Bauchi metropolis, Nigeria. The trained networks were then used to predict propagation losses in the stated areas under differing conditions. The predictions were compared with the prediction accuracy of the popular Hata model. It was observed that the ANFIS model gave a better fit in all cases, having higher R2 values in each case, and on average is more robust than the MLP and RBF models as it generalises better to different data.

  14. Free-piston engine linear generator for hybrid vehicles modeling study

    NASA Astrophysics Data System (ADS)

    Callahan, T. J.; Ingram, S. K.

    1995-05-01

    Development of a free piston engine linear generator was investigated for use as an auxiliary power unit for a hybrid electric vehicle. The main focus of the program was to develop an efficient linear generator concept to convert the piston motion directly into electrical power. Computer modeling techniques were used to evaluate five different designs for linear generators. These designs included permanent magnet generators, reluctance generators, linear DC generators, and two and three-coil induction generators. The efficiency of the linear generator was highly dependent on the design concept. The two-coil induction generator was determined to be the best design, with an efficiency of approximately 90 percent.

  15. Nonlinear random response prediction using MSC/NASTRAN

    NASA Technical Reports Server (NTRS)

    Robinson, J. H.; Chiang, C. K.; Rizzi, S. A.

    1993-01-01

    An equivalent linearization technique was incorporated into MSC/NASTRAN to predict the nonlinear random response of structures by means of Direct Matrix Abstract Programming (DMAP) modifications and inclusion of the nonlinear differential stiffness module inside the iteration loop. An iterative process was used to determine the rms displacements. Numerical results obtained for validation on simple plates and beams are in good agreement with existing solutions in both the linear and linearized regions. The versatility of the implementation will enable the analyst to determine the nonlinear random responses for complex structures under combined loads. The thermo-acoustic response of a hexagonal thermal protection system panel is used to highlight some of the features of the program.

  16. Dynamic Programming for Structured Continuous Markov Decision Problems

    NASA Technical Reports Server (NTRS)

    Dearden, Richard; Meuleau, Nicholas; Washington, Richard; Feng, Zhengzhu

    2004-01-01

    We describe an approach for exploiting structure in Markov Decision Processes with continuous state variables. At each step of the dynamic programming, the state space is dynamically partitioned into regions where the value function is the same throughout the region. We first describe the algorithm for piecewise constant representations. We then extend it to piecewise linear representations, using techniques from POMDPs to represent and reason about linear surfaces efficiently. We show that for complex, structured problems, our approach exploits the natural structure so that optimal solutions can be computed efficiently.
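
    A hedged toy example of the idea: the sketch below runs ordinary value iteration on a finely discretized 1-D MDP whose optimal value function happens to be piecewise constant, then coalesces adjacent equal-valued states. The structured DP of the paper manipulates such constant regions directly instead of ever building the fine grid; the reward layout and actions here are invented for illustration.

```python
import numpy as np

# Toy 1-D continuous-state MDP, finely discretized only so we can *observe*
# the piecewise-constant value function a structured DP exploits directly.
N, gamma = 1000, 0.95
s = (np.arange(N) + 0.5) / N
in_goal = (s > 0.7) & (s < 0.9)
R_stay = np.where(in_goal, 1.0, 0.0)     # "stay" collects reward in the goal
goal_idx = np.argmax(in_goal)            # "jump" teleports into the goal

V = np.zeros(N)
for _ in range(500):                     # value iteration (Bellman backups)
    V = np.maximum(R_stay + gamma * V,   # action: stay
                   gamma * V[goal_idx])  # action: jump
# Coalesce adjacent states with equal values: the whole function collapses
# to a handful of constant regions instead of N grid values.
n_regions = 1 + np.count_nonzero(np.abs(np.diff(V)) > 1e-9)
print(f"{n_regions} constant regions (grid had {N} cells)")
```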

  17. Interior point techniques for LP and NLP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evtushenko, Y.

    By using a surjective mapping the initial constrained optimization problem is transformed to a problem in a new space with only equality constraints. For the numerical solution of the latter problem we use the generalized gradient-projection method and Newton's method. After inverse transformation to the initial space we obtain a family of numerical methods for solving optimization problems with equality and inequality constraints. In the linear programming case, after some simplification, we obtain Dikin's algorithm, the affine scaling algorithm and a generalized primal-dual interior point linear programming algorithm.
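
    For concreteness, a minimal NumPy sketch of Dikin's primal affine-scaling iteration, one of the methods recovered above, is given below; the example LP and the step fraction are arbitrary choices, and no degeneracy safeguards are included.

```python
import numpy as np

def affine_scaling(A, b, c, x, tol=1e-8, alpha=0.66):
    """Dikin's primal affine-scaling method for min c'x s.t. Ax = b, x > 0.
    `x` must be strictly feasible (Ax = b, x > 0)."""
    for _ in range(500):
        X2 = np.diag(x**2)
        # Dual estimate from the scaled normal equations.
        w = np.linalg.solve(A @ X2 @ A.T, A @ X2 @ c)
        r = c - A.T @ w                  # reduced costs
        if np.linalg.norm(x * r) < tol:  # scaled reduced costs ~ 0: optimal
            return x
        d = -X2 @ r                      # feasible descent direction (A d = 0)
        if np.all(d >= 0):
            raise ValueError("problem is unbounded")
        # Step a fixed fraction of the way to the positivity boundary.
        step = alpha * np.min(-x[d < 0] / d[d < 0])
        x = x + step * d
    return x

# Example: min -x1 - 2 x2  s.t.  x1 + x2 <= 4,  x1 + 3 x2 <= 6 (slacks x3, x4)
A = np.array([[1., 1., 1., 0.], [1., 3., 0., 1.]])
b = np.array([4., 6.])
c = np.array([-1., -2., 0., 0.])
x0 = np.array([1., 1., 2., 2.])          # strictly feasible interior point
print(affine_scaling(A, b, c, x0))       # -> close to [3, 1, 0, 0]
```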

  18. Estimation of hysteretic damping of structures by stochastic subspace identification

    NASA Astrophysics Data System (ADS)

    Bajrić, Anela; Høgsberg, Jan

    2018-05-01

    Output-only system identification techniques can estimate modal parameters of structures represented by linear time-invariant systems. However, the extension of these techniques to structures exhibiting non-linear behavior has not received much attention. This paper presents an output-only system identification method suitable for the random response of dynamic systems with hysteretic damping. The method applies the concept of Stochastic Subspace Identification (SSI) to estimate the model parameters of a dynamic system with hysteretic damping. The restoring force is represented by the Bouc-Wen model, for which an equivalent linear relaxation model is derived. Hysteretic properties can be encountered in engineering structures exposed to severe cyclic environmental loads, as well as in vibration mitigation devices, such as Magneto-Rheological (MR) dampers. The identification technique incorporates the equivalent linear damper model in the estimation procedure. Synthetic data, representing the random vibrations of systems with hysteresis, validate the system parameters estimated by the presented identification method at low and high levels of excitation amplitude.
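
    For reference, in one common parameterization the Bouc-Wen restoring force referred to above takes the form

```latex
F(t) = \alpha k\, x(t) + (1-\alpha) k\, z(t), \qquad
\dot{z} = A\dot{x} - \beta\,|\dot{x}|\,|z|^{n-1} z - \gamma\,\dot{x}\,|z|^{n}
```

    where alpha is the ratio of post- to pre-yield stiffness and A, beta, gamma and n shape the hysteresis loop; it is this differential form that the paper replaces with an equivalent linear relaxation model.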

  19. Generalized Heisenberg algebra and (non linear) pseudo-bosons

    NASA Astrophysics Data System (ADS)

    Bagarello, F.; Curado, E. M. F.; Gazeau, J. P.

    2018-04-01

    We propose a deformed version of the generalized Heisenberg algebra by using techniques borrowed from the theory of pseudo-bosons. In particular, this analysis is relevant when non-self-adjoint Hamiltonians are needed to describe a given physical system. We also discuss relations with nonlinear pseudo-bosons. Several examples are discussed.

  20. Pseudo-random number generator for the Sigma 5 computer

    NASA Technical Reports Server (NTRS)

    Carroll, S. N.

    1983-01-01

    A technique is presented for developing a pseudo-random number generator based on the linear congruential form. The two numbers used for the generator are a prime number and a corresponding primitive root, where the prime is the largest prime number that can be accurately represented on a particular computer. The primitive root is selected by applying Marsaglia's lattice test. The technique presented was applied to write a random number program for the Sigma 5 computer. The new program, named S:RANDOM1, is judged to be superior to the older program named S:RANDOM. For applications requiring several independent random number generators, a table is included showing several acceptable primitive roots. The technique and programs described can be applied to any computer having word length different from that of the Sigma 5.
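
    A minimal sketch of the construction described above, using the widely documented Park-Miller choice m = 2^31 - 1 (a Mersenne prime) and primitive root a = 16807 rather than the Sigma 5's machine-specific constants, which depend on that computer's word length:

```python
# Minimal-standard Lehmer generator: a multiplicative linear congruential
# generator with prime modulus M and primitive root A (Park & Miller).
M, A = 2**31 - 1, 16807

def lcg(seed, n):
    """Yield n pseudo-random floats in (0, 1)."""
    x = seed
    for _ in range(n):
        x = (A * x) % M
        yield x / M

print(list(lcg(seed=12345, n=5)))
```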

  1. Non-linear HRV indices under autonomic nervous system blockade.

    PubMed

    Bolea, Juan; Pueyo, Esther; Laguna, Pablo; Bailón, Raquel

    2014-01-01

    Heart rate variability (HRV) has been studied as a non-invasive technique to characterize the autonomic nervous system (ANS) regulation of the heart. Non-linear methods based on chaos theory have been used in recent decades as markers for risk stratification. However, interpretation of these non-linear methods in terms of sympathetic and parasympathetic activity is not fully established. In this work, we study linear and non-linear HRV indices during ANS blockades in order to assess their relation with sympathetic and parasympathetic activities. Power spectral content in the low frequency (0.04-0.15 Hz) and high frequency (0.15-0.4 Hz) bands of HRV, as well as correlation dimension, sample and approximate entropies were computed in a database of subjects during single and dual ANS blockade with atropine and/or propranolol. Parasympathetic blockade caused a significant decrease in the low and high frequency power of HRV, as well as in correlation dimension and sample and approximate entropies. Sympathetic blockade caused a significant increase in approximate entropy. Sympathetic activation due to postural change from supine to standing caused a significant decrease in all the investigated non-linear indices and a significant increase in the normalized power in the low frequency band. The other investigated linear indices did not show significant changes. Results suggest that parasympathetic activity has a direct relation with sample and approximate entropies.
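
    Since sample entropy is central to these findings, a simplified Python sketch of SampEn(m, r) follows. It uses the common convention r = 0.2 x SD and a slightly unbalanced template count for brevity, so it is illustrative rather than a validated implementation.

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """SampEn(m, r) of a 1-D series; r is a fraction of the series SD."""
    x = np.asarray(x, dtype=float)
    r = r_frac * np.std(x)

    def match_count(length):
        # All overlapping templates of the given length.
        t = np.lib.stride_tricks.sliding_window_view(x, length)
        # Chebyshev distance between every pair of templates.
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=-1)
        n = len(t)
        return (np.count_nonzero(d <= r) - n) / 2   # drop self-matches

    B, A = match_count(m), match_count(m + 1)
    return -np.log(A / B)

rng = np.random.default_rng(1)
print(sample_entropy(rng.normal(size=500)))   # higher for irregular series
```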

  2. Mission Operations Planning with Preferences: An Empirical Study

    NASA Technical Reports Server (NTRS)

    Bresina, John L.; Khatib, Lina; McGann, Conor

    2006-01-01

    This paper presents an empirical study of some non-exhaustive approaches to optimizing preferences within the context of constraint-based, mixed-initiative planning for mission operations. This work is motivated by the experience of deploying and operating the MAPGEN (Mixed-initiative Activity Plan GENerator) system for the Mars Exploration Rover Mission. Responsiveness to the user is one of the important requirements for MAPGEN; hence, the additional computation time needed to optimize preferences must be kept within reasonable bounds. This was the primary motivation for studying non-exhaustive optimization approaches. The specific goals of the empirical study are to assess the impact on solution quality of two greedy heuristics used in MAPGEN and to assess the improvement gained by applying a linear programming optimization technique to the final solution.

  3. Localization of Non-Linearly Modeled Autonomous Mobile Robots Using Out-of-Sequence Measurements

    PubMed Central

    Besada-Portas, Eva; Lopez-Orozco, Jose A.; Lanillos, Pablo; de la Cruz, Jesus M.

    2012-01-01

    This paper presents a state of the art of the estimation algorithms dealing with Out-of-Sequence (OOS) measurements for non-linearly modeled systems. The state of the art includes a critical analysis of the algorithm properties that takes into account the applicability of these techniques to autonomous mobile robot navigation based on the fusion of the measurements provided, delayed and OOS, by multiple sensors. In addition, it shows a representative example of the use of one of the most computationally efficient approaches in the localization module of the control software of a real robot (which has non-linear dynamics, and linear and non-linear sensors) and compares its performance against other approaches. The simulated results obtained with the selected OOS algorithm show the computational requirements that each sensor of the robot imposes on it. The real experiments show how the inclusion of the selected OOS algorithm in the control software lets the robot successfully navigate in spite of receiving many OOS measurements. Finally, the comparison highlights that not only is the selected OOS algorithm among the best performing ones of the comparison, but it also has the lowest computational and memory cost. PMID:22736962

  4. Localization of non-linearly modeled autonomous mobile robots using out-of-sequence measurements.

    PubMed

    Besada-Portas, Eva; Lopez-Orozco, Jose A; Lanillos, Pablo; de la Cruz, Jesus M

    2012-01-01

    This paper presents a state of the art of the estimation algorithms dealing with Out-of-Sequence (OOS) measurements for non-linearly modeled systems. The state of the art includes a critical analysis of the algorithm properties that takes into account the applicability of these techniques to autonomous mobile robot navigation based on the fusion of the measurements provided, delayed and OOS, by multiple sensors. In addition, it shows a representative example of the use of one of the most computationally efficient approaches in the localization module of the control software of a real robot (which has non-linear dynamics, and linear and non-linear sensors) and compares its performance against other approaches. The simulated results obtained with the selected OOS algorithm show the computational requirements that each sensor of the robot imposes on it. The real experiments show how the inclusion of the selected OOS algorithm in the control software lets the robot successfully navigate in spite of receiving many OOS measurements. Finally, the comparison highlights that not only is the selected OOS algorithm among the best performing ones of the comparison, but it also has the lowest computational and memory cost.

  5. Non-linear principal component analysis applied to Lorenz models and to North Atlantic SLP

    NASA Astrophysics Data System (ADS)

    Russo, A.; Trigo, R. M.

    2003-04-01

    A non-linear generalisation of Principal Component Analysis (PCA), denoted Non-Linear Principal Component Analysis (NLPCA), is introduced and applied to the analysis of three data sets. Non-Linear Principal Component Analysis allows for the detection and characterisation of low-dimensional non-linear structure in multivariate data sets. The method is implemented using a 5-layer feed-forward neural network introduced originally in the chemical engineering literature (Kramer, 1991). The method is described and details of its implementation are addressed. Non-Linear Principal Component Analysis is first applied to a data set sampled from the Lorenz attractor (1963). It is found that the NLPCA approximations are more representative of the data than are the corresponding PCA approximations. The same methodology was applied to the less well known Lorenz attractor (1984); however, the results obtained were not as good as those attained with the famous 'Butterfly' attractor, and further work with this model is underway in order to assess whether NLPCA techniques can be more representative of the data characteristics than the corresponding PCA approximations. The application of NLPCA to relatively 'simple' dynamical systems, such as those proposed by Lorenz, is well understood. However, the application of NLPCA to a large climatic data set is much more challenging. Here, we have applied NLPCA to the sea level pressure (SLP) field for the entire North Atlantic area, and the results show a slight increase in the associated explained variance. Finally, directions for future work are presented.

  6. Solid state high resolution multi-spectral imager CCD test phase

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The program consisted of measuring the performance characteristics of charge-coupled linear imaging devices and of a study defining a multispectral imaging system employing advanced solid-state photodetection techniques.

  7. High Power Storage System Based on Thin Film Solid Ionics.

    DTIC Science & Technology

    1988-02-01

    linear sweep voltammetry (LSV) technique (Dahn and Haering, 1981). We observe that in non-annealed film the peak at 1.2 V is very strong compared to that...1.8V. The redox stability range has been determined by cyclic voltammetry for different preparation conditions of the films. Lithium solid state hybrid...Fig. 6 Linear sweep voltammograms at 7gV/s rate of InSe films prepared at Ts=RT (a) non-annealed, (b) annealed at 475 K during 64 hours.

  8. Generation of High Purity Photon-Pair in a Short Highly Non-Linear Fiber

    DTIC Science & Technology

    2013-01-01

    Avalanche photodiode. A 10 m long HNLF fabricated by Sumitomo with a core diameter of 4 microns is fusion spliced to a single mode fiber for a...parametric down conversion (SPDC) was first observed in χ(2) nonlinear crystal [3]. However, the compatibility of a nonlinear crystal source with fiber and...

  9. Optimum Damping in a Non-Linear Base Isolation System

    NASA Astrophysics Data System (ADS)

    Jangid, R. S.

    1996-02-01

    Optimum isolation damping for minimum acceleration of a base-isolated structure subjected to earthquake ground excitation is investigated. The stochastic model of the El Centro 1940 earthquake, which preserves the non-stationary evolution of amplitude and frequency content of the ground motion, is used as the earthquake excitation. The base-isolated structure consists of a linear flexible shear-type multi-storey building supported on a base isolation system. The resilient-friction base isolator (R-FBI) is considered as the isolation system. The non-stationary stochastic response of the system is obtained by the time-dependent equivalent linearization technique, as the force-deformation behavior of the R-FBI system is non-linear. The optimum damping of the R-FBI system is obtained under important parametric variations, i.e., the coefficient of friction of the R-FBI system, the period and damping of the superstructure, and the effective period of base isolation. The criterion selected for optimality is the minimization of the top floor root mean square (r.m.s.) acceleration. It is shown that the above parameters have significant effects on the optimum isolation damping.

  10. Multivariable optimization of liquid rocket engines using particle swarm algorithms

    NASA Astrophysics Data System (ADS)

    Jones, Daniel Ray

    Liquid rocket engines are highly reliable, controllable, and efficient compared to other conventional forms of rocket propulsion. As such, they have seen wide use in the space industry and have become the standard propulsion system for launch vehicles, orbit insertion, and orbital maneuvering. Though these systems are well understood, historical optimization techniques are often inadequate due to the highly non-linear nature of the engine performance problem. In this thesis, a Particle Swarm Optimization (PSO) variant was applied to maximize the specific impulse of a finite-area combustion chamber (FAC) equilibrium flow rocket performance model by controlling the engine's oxidizer-to-fuel ratio and de Laval nozzle expansion and contraction ratios. In addition to the PSO-controlled parameters, engine performance was calculated based on propellant chemistry, combustion chamber pressure, and ambient pressure, which are provided as inputs to the program. The performance code was validated by comparison with NASA's Chemical Equilibrium with Applications (CEA) and the commercially available Rocket Propulsion Analysis (RPA) tool. Similarly, the PSO algorithm was validated by comparison with brute-force optimization, which calculates all possible solutions and subsequently determines which is the optimum. Particle Swarm Optimization was shown to be an effective optimizer capable of quick and reliable convergence for complex functions of multiple non-linear variables.
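
    The sketch below shows a basic global-best PSO loop of the kind such work builds on; the inertia and acceleration coefficients are textbook defaults, and a sphere function stands in for the far more expensive FAC engine performance model.

```python
import numpy as np

rng = np.random.default_rng(2)

def pso(f, lo, hi, n=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Basic global-best PSO; the thesis applies a variant of this scheme."""
    dim = len(lo)
    x = rng.uniform(lo, hi, (n, dim))          # particle positions
    v = np.zeros((n, dim))                     # particle velocities
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_f)]              # global best
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)]
    return g, f(g)

# Stand-in objective (a negated specific impulse would take its place).
sphere = lambda p: float(np.sum(p**2))
print(pso(sphere, lo=np.array([-5., -5., -5.]), hi=np.array([5., 5., 5.])))
```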

  11. Resistive and Capacitive Memory Effects in Oxide Insulator/ Oxide Conductor Hetero-Structures

    NASA Astrophysics Data System (ADS)

    Meyer, Rene; Miao, Maosheng; Wu, Jian; Chevallier, Christophe

    2013-03-01

    We report resistive and capacitive memory effects observed in oxide insulator/oxide conductor hetero-structures. Electronic transport properties of Pt/ZrO2/PCMO/Pt structures with ZrO2 thicknesses ranging from 20 Å to 40 Å are studied before and after applying short voltage pulses of positive and negative polarity for set and reset operation. As-processed devices display a non-linear IV characteristic which we attribute to trap-assisted tunneling through the ZrO2 tunnel oxide. Current scaling with electrode area and tunnel oxide thickness confirms uniform conduction. The set/reset operations cause an up/down shift of the IV characteristic, indicating that the conduction mechanism of both states is still dominated by tunneling. A change in the resistance is associated with a capacitance change of the device. An exponential relation between program voltages and set times is found. A model based on electric-field-mediated non-linear transport of oxygen ions across the ZrO2/PCMO interface is proposed. The change in the tunnel current is explained by ionic charge transfer between the tunnel oxide and the conductive metal oxide, changing both the tunnel barrier height and the PCMO conductivity. DFT techniques are employed to explain the conductivity change in the PCMO interfacial layer observed through capacitance measurements.

  12. A digital strategy for manometer dynamic enhancement. [for wind tunnel monitoring

    NASA Technical Reports Server (NTRS)

    Stoughton, J. W.

    1978-01-01

    The application of digital signal processing techniques to improve the non-linear dynamic characteristics of a sonar-type mercury manometer is described. The dynamic enhancement strategy quasi-linearizes the manometer characteristics and improves the effective bandwidth in the context of a wind-tunnel pressure regulation system. Model identification data and real-time hybrid simulation data demonstrate the feasibility of the approach.

  13. Implementation of Nonlinear Control Laws for an Optical Delay Line

    NASA Technical Reports Server (NTRS)

    Hench, John J.; Lurie, Boris; Grogan, Robert; Johnson, Richard

    2000-01-01

    This paper discusses the implementation of a globally stable nonlinear controller algorithm for the Real-Time Interferometer Control System Testbed (RICST) brassboard optical delay line (ODL) developed for the Interferometry Technology Program at the Jet Propulsion Laboratory. The control methodology essentially employs loop shaping to implement linear control laws, while utilizing nonlinear elements as a means of ameliorating the effects of actuator saturation in its coarse, main, and vernier stages. The linear controllers were implemented as high-order digital filters and were designed using Bode integral techniques to determine the loop shape. The nonlinear techniques encompass the areas of exact linearization, anti-windup control, nonlinear rate limiting and modal control. Details of the design procedure are given as well as data from the actual mechanism.

  14. An analysis of the effect of defect structures on catalytic surfaces by the boundary element technique

    NASA Astrophysics Data System (ADS)

    Peirce, Anthony P.; Rabitz, Herschel

    1988-08-01

    The boundary element (BE) technique is used to analyze the effect of defects on one-dimensional chemically active surfaces. The standard BE algorithm for diffusion is modified to include the effects of bulk desorption by making use of an asymptotic expansion technique to evaluate influences near boundaries and defect sites. An explicit time evolution scheme is proposed to treat the non-linear equations associated with defect sites. The proposed BE algorithm is shown to provide an efficient and convergent algorithm for modelling localized non-linear behavior. Since it exploits the actual Green's function of the linear diffusion-desorption process that takes place on the surface, the BE algorithm is extremely stable. The BE algorithm is applied to a number of interesting physical problems in which non-linear reactions occur at localized defects. The Lotka-Volterra system is considered in which the source, sink and predator-prey interaction terms are distributed at different defect sites in the domain and in which the defects are coupled by diffusion. This example provides a stringent test of the stability of the numerical algorithm. Marginal stability oscillations are analyzed for the Prigogine-Lefever reaction that occurs on a lattice of defects. Dissipative effects are observed for large perturbations to the marginal stability state, and rapid spatial reorganization of uniformly distributed initial perturbations is seen to take place. In another series of examples the effect of defect locations on the balance between desorptive processes on chemically active surfaces is considered. The effect of dynamic pulsing at various time-scales is considered for a one species reactive trapping model. Similar competitive behavior between neighboring defects previously observed for static adsorption levels is shown to persist for dynamic loading of the surface. The analysis of a more complex three species reaction process also provides evidence of competitive behavior between neighboring defect sites. The proposed BE algorithm is shown to provide a useful technique for analyzing the effect of defect sites on chemically active surfaces.

  15. SOCR Analyses - an Instructional Java Web-based Statistical Analysis Toolkit.

    PubMed

    Chu, Annie; Cui, Jenny; Dinov, Ivo D

    2009-03-01

    The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include commonly used models in undergraduate statistics courses like linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons, such as the t-test in the parametric category, and the Wilcoxon rank sum test, Kruskal-Wallis test, and Friedman's test in the non-parametric category. SOCR Analyses also includes several hypothesis test models, such as contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), hoping to contribute to the efforts of the statistical computing community. The code includes functionality for each specific analysis model and it has general utilities that can be applied in various statistical computing tasks. For example, concrete methods with an API (Application Programming Interface) have been implemented in statistical summary, least square solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of the SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is on-going and more functions and tools are being added to it, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for the most updated information and newly added models.

  16. Exploring the CAESAR database using dimensionality reduction techniques

    NASA Astrophysics Data System (ADS)

    Mendoza-Schrock, Olga; Raymer, Michael L.

    2012-06-01

    The Civilian American and European Surface Anthropometry Resource (CAESAR) database containing over 40 anthropometric measurements on over 4000 humans has been extensively explored for pattern recognition and classification purposes using the raw, original data [1-4]. However, some of the anthropometric variables would be impossible to collect in an uncontrolled environment. Here, we explore the use of dimensionality reduction methods in concert with a variety of classification algorithms for gender classification using only those variables that are readily observable in an uncontrolled environment. Several dimensionality reduction techniques are employed to learn the underlying structure of the data. These techniques include linear projections such as the classical Principal Components Analysis (PCA) and non-linear (manifold learning) techniques, such as Diffusion Maps and the Isomap technique. This paper briefly describes all three techniques, and compares three different classifiers, Naïve Bayes, Adaboost, and Support Vector Machines (SVM), for gender classification in conjunction with each of these three dimensionality reduction approaches.
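
    A hedged sketch of such a pipeline using scikit-learn is shown below; synthetic features stand in for the CAESAR measurements (the database itself is not redistributable here), and Diffusion Maps is omitted because scikit-learn has no built-in implementation.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import AdaBoostClassifier

# Stand-in for the observable anthropometric features and gender labels.
X, y = make_classification(n_samples=2000, n_features=12, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, reducer in [("PCA", PCA(n_components=5)),
                      ("Isomap", Isomap(n_components=5))]:
    Z_tr = reducer.fit_transform(X_tr)   # learn the low-dimensional structure
    Z_te = reducer.transform(X_te)
    for clf in (GaussianNB(), AdaBoostClassifier(), SVC()):
        acc = clf.fit(Z_tr, y_tr).score(Z_te, y_te)
        print(f"{name} + {type(clf).__name__}: {acc:.3f}")
```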

  17. Study of non-linear deformation of vocal folds in simulations of human phonation

    NASA Astrophysics Data System (ADS)

    Saurabh, Shakti; Bodony, Daniel

    2014-11-01

    Direct numerical simulation is performed on a two-dimensional compressible, viscous fluid interacting with a non-linear, viscoelastic solid as a model for the generation of the human voice. The vocal fold (VF) tissues are modeled as multi-layered with varying stiffness in each layer, using a finite-strain Standard Linear Solid (SLS) constitutive model implemented in a quadratic finite element code and coupled to a high-order compressible Navier-Stokes solver through a boundary-fitted fluid-solid interface. The large non-linear mesh deformation is handled using an elliptic/Poisson smoothing technique. The supra-glottal flow shows asymmetry, which in turn has a coupling effect on the motion of the VF. The fully compressible simulations give direct insight into the sound produced through the pressure distributions, and the computed vocal fold deformation helps in studying the unsteady vortical flow resulting from the fluid-structure interaction along the full phonation cycle. Supported by the National Science Foundation (CAREER Award Number 1150439).

  18. Near-infrared Raman spectroscopy for estimating biochemical changes associated with different pathological conditions of cervix

    NASA Astrophysics Data System (ADS)

    Daniel, Amuthachelvi; Prakasarao, Aruna; Ganesan, Singaravelu

    2018-02-01

    The molecular-level changes associated with oncogenesis precede the morphological changes in cells and tissues. Hence, molecular-level diagnosis would promote early diagnosis of the disease. Raman spectroscopy is capable of providing specific spectral signatures of the various biomolecules present in cells and tissues under various pathological conditions. The aim of this work is to develop a non-linear multi-class statistical methodology for discrimination of normal, neoplastic and malignant cells/tissues. The tissues were classified as normal, pre-malignant and malignant by employing Principal Component Analysis followed by an Artificial Neural Network (PC-ANN); the overall accuracy achieved was 99%. Further, to get an insight into the quantitative biochemical composition of the normal, neoplastic and malignant tissues, a linear combination of the major biochemical spectra was fit to the measured Raman spectra of the tissues using the non-negative least squares technique. This analysis confirms the changes in major biomolecules such as lipids, nucleic acids, actin, glycogen and collagen associated with the different pathological conditions. To study the efficacy of this technique in comparison with histopathology, we utilized Principal Component Analysis followed by Linear Discriminant Analysis (PC-LDA) to discriminate well differentiated, moderately differentiated and poorly differentiated squamous cell carcinoma with an accuracy of 94.0%. The results demonstrate that Raman spectroscopy has the potential to complement the established technique of histopathology.
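
    The non-negative least squares fit described above can be sketched in a few lines with SciPy; the Gaussian "reference spectra" and mixing weights below are fabricated stand-ins for the measured biochemical spectra.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical reference spectra: Gaussians on a wavenumber axis stand in for
# measured spectra of lipids, nucleic acids, actin, glycogen and collagen.
wn = np.linspace(800, 1800, 500)
centers = [1004, 1250, 1450, 1660, 1745]
refs = np.stack([np.exp(-0.5 * ((wn - c) / 20.0) ** 2) for c in centers],
                axis=1)

# Synthetic "tissue" spectrum: a non-negative mixture plus noise.
true_w = np.array([0.5, 0.1, 0.8, 0.3, 0.05])
spectrum = refs @ true_w + np.random.default_rng(3).normal(0, 0.01, wn.size)

# Non-negative least squares fit of the biochemical weights.
weights, resid = nnls(refs, spectrum)
print(np.round(weights, 3))   # recovered relative biochemical contributions
```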

  19. The Use of Shrinkage Techniques in the Estimation of Attrition Rates for Large Scale Manpower Models

    DTIC Science & Technology

    1988-07-27

    auto-regressive model combined with a linear program that solves for the coefficients using MAD. But this success has diminished with time (Rowe...'Harrison-Stevens Forecasting and the Multiprocess Dynamic Linear Model', The American Statistician, v.40, pp. 129-135, 1986. 8. Box, G. E. P. and...1950. 40. McCullagh, P. and Nelder, J., Generalized Linear Models, Chapman and Hall, 1983. 41. McKenzie, E., General Exponential Smoothing and the

  20. A tutorial description of an interior point method and its applications to security-constrained economic dispatch

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vargas, L.S.; Quintana, V.H.; Vannelli, A.

    This paper deals with the use of Successive Linear Programming (SLP) for the solution of the Security-Constrained Economic Dispatch (SCED) problem. The authors tutorially describe an Interior Point Method (IPM) for the solution of Linear Programming (LP) problems, discussing important implementation issues that really make this method far superior to the simplex method. A study of the convergence of the SLP technique and a practical criterion to avoid oscillatory behavior in the iteration process are also proposed. A comparison of the proposed method with an efficient simplex code (MINOS) is carried out by solving SCED problems on two standard IEEE systems. The results show that the interior point technique is reliable, accurate and more than two times faster than the simplex algorithm.

  1. LINEAR - DERIVATION AND DEFINITION OF A LINEAR AIRCRAFT MODEL

    NASA Technical Reports Server (NTRS)

    Duke, E. L.

    1994-01-01

    The Derivation and Definition of a Linear Model program, LINEAR, provides the user with a powerful and flexible tool for the linearization of aircraft aerodynamic models. LINEAR was developed to provide a standard, documented, and verified tool to derive linear models for aircraft stability analysis and control law design. Linear system models define the aircraft system in the neighborhood of an analysis point and are determined by the linearization of the nonlinear equations defining vehicle dynamics and sensors. LINEAR numerically determines a linear system model using nonlinear equations of motion and a user-supplied linear or nonlinear aerodynamic model. The nonlinear equations of motion used are six-degree-of-freedom equations with stationary atmosphere and flat, nonrotating earth assumptions. LINEAR is capable of extracting linearized engine effects, such as net thrust, torque, and gyroscopic effects, and including these effects in the linear system model. The point at which this linear model is defined is determined either by completely specifying the state and control variables, or by specifying an analysis point on a trajectory and directing the program to determine the control variables and the remaining state variables. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to provide easy selection of state, control, and observation variables to be used in a particular model. Thus, the order of the system model is completely under user control. Further, the program provides the flexibility of allowing alternate formulations of both the state and observation equations. Data describing the aircraft and the test case are input to the program through a terminal or formatted data files. All data can be modified interactively from case to case. The aerodynamic model can be defined in two ways: a set of nondimensional stability and control derivatives for the flight point of interest, or a full non-linear aerodynamic model as used in simulations. LINEAR is written in FORTRAN and has been implemented on a DEC VAX computer operating under VMS with a virtual memory requirement of approximately 296K of 8 bit bytes. Both an interactive and batch version are included. LINEAR was developed in 1988.
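
    Although LINEAR's own FORTRAN implementation is not reproduced here, the core numerical idea, perturbing the nonlinear state equations about an analysis point to extract the state and control matrices, can be sketched generically; the toy dynamics below are invented for illustration.

```python
import numpy as np

def linearize(f, x0, u0, eps=1e-6):
    """Central-difference Jacobians A = df/dx, B = df/du at (x0, u0)."""
    n, m = len(x0), len(u0)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

# Toy nonlinear dynamics (pitch-like): x = [angle, rate], u = [elevator].
f = lambda x, u: np.array([x[1], -1.2*np.sin(x[0]) - 0.4*x[1] + 0.8*u[0]])
A, B = linearize(f, x0=np.array([0.1, 0.0]), u0=np.array([0.0]))
print(A)
print(B)
```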

  2. Using the Multiple-Matched-Sample and Statistical Controls to Examine the Effects of Magnet School Programs on the Reading and Mathematics Performance of Students

    ERIC Educational Resources Information Center

    Yang, Yu N.; Li, Yuan H.; Tompkins, Leroy J.; Modarresi, Shahpar

    2005-01-01

    This summative evaluation of magnet programs employed a quasi-experimental design to investigate whether or not students enrolled in magnet programs gained any achievement advantage over students who were not enrolled in a magnet program. Researchers used Zero-One Linear Programming to draw multiple sets of matched samples from the non-magnet…

  3. Use of a non-linear method for including the mass uncertainty of gravimetric standards and system measurement errors in the fitting of calibration curves for XRFA freeze-dried UNO₃ standards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pickles, W.L.; McClure, J.W.; Howell, R.H.

    1978-05-01

    A sophisticated nonlinear multiparameter fitting program was used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2%-accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the "Chi-Squared Matrix" or from error relaxation techniques. It was shown that nondispersive XRFA of 0.1 to 1 mg freeze-dried UNO₃ can have an accuracy of 0.2% in 1000 s.

  4. JPL Non-NASA Programs

    NASA Technical Reports Server (NTRS)

    Cox, Robert S.

    2006-01-01

    A viewgraph presentation describing JPL's non-NASA Programs is shown. The contents include: 1) JPL/Caltech: National Security Heritage; 2) Organization and Portfolio; 3) Synergistic Areas of Interest; 4) Business Environment; 5) National Space Community; 6) New Business Environment; 7) Technology Transfer Techniques; 8) Innovative Partnership Program (IPP); and 9) JPL's Track Record.

  5. Multi-dimensional Rankings, Program Termination, and Complexity Bounds of Flowchart Programs

    NASA Astrophysics Data System (ADS)

    Alias, Christophe; Darte, Alain; Feautrier, Paul; Gonnord, Laure

    Proving the termination of a flowchart program can be done by exhibiting a ranking function, i.e., a function from the program states to a well-founded set, which strictly decreases at each program step. A standard method to automatically generate such a function is to compute invariants for each program point and to search for a ranking in a restricted class of functions that can be handled with linear programming techniques. Previous algorithms based on affine rankings either are applicable only to simple loops (i.e., single-node flowcharts) and rely on enumeration, or are not complete in the sense that they are not guaranteed to find a ranking in the class of functions they consider, if one exists. Our first contribution is to propose an efficient algorithm to compute ranking functions: It can handle flowcharts of arbitrary structure, the class of candidate rankings it explores is larger, and our method, although greedy, is provably complete. Our second contribution is to show how to use the ranking functions we generate to get upper bounds for the computational complexity (number of transitions) of the source program. This estimate is a polynomial, which means that we can handle programs with more than linear complexity. We applied the method on a collection of test cases from the literature. We also show the links and differences with previous techniques based on the insertion of counters.
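
    In symbols, a ranking function is a map into a well-founded set that every transition strictly decreases:

```latex
\rho : S \to W \ \text{(well-founded)}, \qquad
s \to s' \implies \rho(s') \prec \rho(s)
```

    As a small illustrative example (not taken from the paper's test suite), for the loop `while (x > 0) { x = x - y; y = y + 1; }` with invariant y >= 1, the affine ranking rho(x, y) = x decreases by at least one per iteration and is positive on the guard, proving termination; its initial value also bounds the number of transitions, which is exactly the complexity-bound use described above.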

  6. Detecting nonlinear dynamics of functional connectivity

    NASA Astrophysics Data System (ADS)

    LaConte, Stephen M.; Peltier, Scott J.; Kadah, Yasser; Ngan, Shing-Chung; Deshpande, Gopikrishna; Hu, Xiaoping

    2004-04-01

    Functional magnetic resonance imaging (fMRI) is a technique that is sensitive to correlates of neuronal activity. The application of fMRI to measure functional connectivity of related brain regions across hemispheres (e.g. left and right motor cortices) has great potential for revealing fundamental physiological brain processes. Primarily, functional connectivity has been characterized by linear correlations in resting-state data, which may not provide a complete description of its temporal properties. In this work, we broaden the measure of functional connectivity to study not only linear correlations, but also those arising from deterministic, non-linear dynamics. Here the delta-epsilon approach is extended and applied to fMRI time series. The method of delays is used to reconstruct the joint system defined by a reference pixel and a candidate pixel. The crux of this technique relies on determining whether the candidate pixel provides additional information concerning the time evolution of the reference. As in many correlation-based connectivity studies, we fix the reference pixel. Every brain location is then used as a candidate pixel to estimate the spatial pattern of deterministic coupling with the reference. Our results indicate that measured connectivity is often emphasized in the motor cortex contra-lateral to the reference pixel, demonstrating the suitability of this approach for functional connectivity studies. In addition, discrepancies with traditional correlation analysis provide initial evidence for non-linear dynamical properties of resting-state fMRI data. Consequently, the non-linear characterization provided from our approach may provide a more complete description of the underlying physiology and brain function measured by this type of data.
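
    The method of delays used above reconstructs state vectors from a scalar series by stacking lagged copies. A minimal sketch follows, with an invented test signal in place of an fMRI voxel series; the embedding dimension and lag are arbitrary choices.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Method-of-delays reconstruction: rows are [x(t), x(t+tau), ...]."""
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i * tau: i * tau + n] for i in range(dim)], axis=1)

# Stand-in "reference pixel" time series (an fMRI voxel would go here).
t = np.arange(2000) * 0.01
x = np.sin(t) + 0.1 * np.random.default_rng(4).normal(size=t.size)

E = delay_embed(x, dim=3, tau=5)
print(E.shape)   # (1990, 3): reconstructed state vectors for the system
```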

  7. Forest control and regulation ... a comparison of traditional methods and alternatives

    Treesearch

    LeRoy C. Hennes; Michael J. Irving; Daniel I. Navon

    1971-01-01

    Two traditional techniques of forest control and regulation, formulas and area-volume check, are compared to linear programming, as used in a new computerized planning system called Timber Resource Allocation Method (Timber RAM). Inventory data from a National Forest in California illustrate how each technique is used. The traditional methods are simpler to apply and...

  8. Reduced-Order Models Based on Linear and Nonlinear Aerodynamic Impulse Responses

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.

    1999-01-01

    This paper discusses a method for the identification and application of reduced-order models based on linear and nonlinear aerodynamic impulse responses. The Volterra theory of nonlinear systems and an appropriate kernel identification technique are described. Insight into the nature of kernels is provided by applying the method to the nonlinear Riccati equation in a non-aerodynamic application. The method is then applied to a nonlinear aerodynamic model of an RAE 2822 supercritical airfoil undergoing plunge motions using the CFL3D Navier-Stokes flow solver with the Spalart-Allmaras turbulence model. Results demonstrate the computational efficiency of the technique.
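
    For reference, the Volterra series underlying the method expresses the response as a hierarchy of convolutions with kernels h_0, h_1, h_2, ...; truncated at second order it reads

```latex
y(t) = h_0 + \int_0^t h_1(\tau)\,u(t-\tau)\,d\tau
     + \int_0^t\!\!\int_0^t h_2(\tau_1,\tau_2)\,u(t-\tau_1)\,u(t-\tau_2)\,d\tau_1\,d\tau_2 + \cdots
```

    where the first kernel is the familiar linear impulse response and the higher kernels capture the nonlinear memory; identifying these kernels from impulse-response computations is what yields the reduced-order model.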

  9. Reduced Order Models Based on Linear and Nonlinear Aerodynamic Impulse Responses

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.

    1999-01-01

    This paper discusses a method for the identification and application of reduced-order models based on linear and nonlinear aerodynamic impulse responses. The Volterra theory of nonlinear systems and an appropriate kernel identification technique are described. Insight into the nature of kernels is provided by applying the method to the nonlinear Riccati equation in a non-aerodynamic application. The method is then applied to a nonlinear aerodynamic model of an RAE 2822 supercritical airfoil undergoing plunge motions using the CFL3D Navier-Stokes flow solver with the Spalart-Allmaras turbulence model. Results demonstrate the computational efficiency of the technique.

  10. A steady and oscillatory kernel function method for interfering surfaces in subsonic, transonic and supersonic flow. [prediction analysis techniques for airfoils

    NASA Technical Reports Server (NTRS)

    Cunningham, A. M., Jr.

    1976-01-01

    The theory, results and user instructions for an aerodynamic computer program are presented. The theory is based on linear lifting surface theory, and the method is the kernel function. The program is applicable to multiple interfering surfaces which may be coplanar or noncoplanar. Local linearization was used to treat nonuniform flow problems without shocks. For cases with imbedded shocks, the appropriate boundary conditions were added to account for the flow discontinuities. The data describing nonuniform flow fields must be input from some other source such as an experiment or a finite difference solution. The results are in the form of small linear perturbations about nonlinear flow fields. The method was applied to a wide variety of problems for which it is demonstrated to be significantly superior to the uniform flow method. Program user instructions are given for easy access.

  11. Applying new seismic analysis techniques to the lunar seismic dataset: New information about the Moon and planetary seismology on the eve of InSight

    NASA Astrophysics Data System (ADS)

    Dimech, J. L.; Weber, R. C.; Knapmeyer-Endrun, B.; Arnold, R.; Savage, M. K.

    2016-12-01

    The field of planetary science is poised for a major advance with the upcoming InSight mission to Mars, due to launch in May 2018. Seismic analysis techniques adapted for use on planetary data are therefore highly relevant to the field. The heart of this project lies in the application of new seismic analysis techniques to the lunar seismic dataset to learn more about the Moon's crust and mantle structure, with particular emphasis on 'deep' moonquakes, which are situated half-way between the lunar surface and its core with no surface expression. Techniques proven to work on the Moon might also be beneficial for InSight and future planetary seismology missions, which face similar technical challenges. The techniques include: (1) an event-detection and classification algorithm based on 'Hidden Markov Models' to reclassify known moonquakes and look for new ones; Apollo 17 gravimeter and geophone data will also be included in this effort. (2) Measurements of anisotropy in the lunar mantle and crust using 'shear-wave splitting'; preliminary measurements on deep moonquakes using the MFAST program are encouraging, and continued evaluation may reveal new structural information on the Moon's mantle. (3) Probabilistic moonquake locations using NonLinLoc, a non-linear hypocenter location technique, using a modified version of the codes designed to work with the Moon's radius. Successful application may provide a new catalog of moonquake locations with rigorous uncertainty information, which would be a valuable input into (4) new fault plane constraints from focal mechanisms, using a novel application of Bayes' theorem that factors in uncertainties in hypocenter coordinates and S-P amplitude ratios. Preliminary results, such as shear-wave splitting measurements, will be presented and discussed.

  12. Response statistics of rotating shaft with non-linear elastic restoring forces by path integration

    NASA Astrophysics Data System (ADS)

    Gaidai, Oleg; Naess, Arvid; Dimentberg, Michael

    2017-07-01

    Extreme statistics of random vibrations is studied for a Jeffcott rotor under uniaxial white noise excitation. The restoring force is modelled as elastically non-linear; a comparison with a linearized restoring force shows the effect of force non-linearity on the response statistics. While analytical solutions and stability conditions are available for the linear model, this is not generally the case for the non-linear system except in some special cases. The statistics of the non-linear case are studied by applying the path integration (PI) method, which is based on the Markov property of the coupled dynamic system. The Jeffcott rotor response statistics can be obtained by solving the Fokker-Planck (FP) equation of the 4D dynamic system. An efficient implementation of the PI algorithm is applied; namely, the fast Fourier transform (FFT) is used to simulate the dynamic system's additive noise, which significantly reduces computational time compared to the classical PI. The excitation is modelled as Gaussian white noise; however, white noise with any distribution can be implemented with the same PI technique, and multidirectional Markov noise can be modelled with PI in the same way as unidirectional noise. PI is accelerated by using a Monte Carlo (MC) estimated joint probability density function (PDF) as the initial input. Symmetry of the dynamic system was utilized to afford higher mesh resolution. Both internal (rotating) and external damping are included in the mechanical model of the rotor. The main advantage of using PI rather than MC is that PI offers high accuracy in the probability distribution tail, which is of critical importance for, e.g., extreme value statistics, system reliability, and first passage probability.
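
    The PI step referred to above is the Chapman-Kolmogorov propagation of the joint PDF over one time increment,

```latex
p(\mathbf{x},\, t+\Delta t) = \int p(\mathbf{x},\, t+\Delta t \mid \mathbf{x}',\, t)\; p(\mathbf{x}',\, t)\; d\mathbf{x}'
```

    where, for Gaussian white noise and a small time step, the short-time transition kernel is close to Gaussian; the FFT acceleration mentioned in the abstract speeds up this convolution-like integral over the 4D state space.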

  13. Probing the nuclear susceptibility of mesoionic compounds using two-beam coupling with chirp-controlled pulses

    NASA Astrophysics Data System (ADS)

    Bosco, Carlos A. C.; Maciel, Glauco S.; Rakov, Nikifor; de Araújo, Cid B.; Acioli, Lúcio H.; Simas, Alfredo M.; Athayde-Filho, Petrônio F.; Miller, Joseph

    2007-11-01

    The third-order non-linear optical response of mesoionic compounds (MIC) in dimethylsulfoxide (DMSO) and methanol solutions was investigated by use of a collinear pump-probe technique with chirp-controlled femtosecond pulses. The experiments allowed the investigation of non-instantaneous nuclear processes and thermal effects induced by two-photon absorption (TPA). We found that the nuclear non-linearity of MIC in DMSO is ~1/5 that of benzene, which was used as a reference material. This result is attributed to the large rotational inertia of MIC compared to benzene. The results for MIC in methanol indicate the influence of thermal effects due to TPA.

  14. Effect of antimony (Sb) addition on the linear and non-linear optical properties of amorphous Ge-Te-Sb thin films

    NASA Astrophysics Data System (ADS)

    Kumar, P.; Kaur, J.; Tripathi, S. K.; Sharma, I.

    2017-12-01

    Non-crystalline thin films of the Ge20Te80-xSbx (x = 0, 2, 4, 6, 10) system were deposited on glass substrates using the thermal evaporation technique. The optical coefficients were accurately determined from transmission spectra using the Swanepoel envelope method in the spectral region of 400-1600 nm. The refractive index was found to increase from 2.38 to 2.62 with increasing Sb content over the entire spectral range. The dispersion of the refractive index is discussed in terms of the single-oscillator Wemple-DiDomenico model. The Tauc relation for the allowed indirect transition showed a decrease in the optical band gap. To explore the non-linearity, the spectral dependence of the third-order susceptibility of a-Ge-Te-Sb thin films was evaluated from the change of the refractive index using Miller's rule. Susceptibility values were found to increase rapidly from 10^-13 to 10^-12 (esu), with the red shift of the absorption edge. The non-linear refractive index was calculated by the Fourier and Snitzer formula; the values were of the order of 10^-12 esu and, at the telecommunication wavelength, three orders of magnitude higher than that of silica glass. The dielectric constant and optical conductivity are also reported. The prepared Sb-doped thin films, with their improved functional properties, are promising for nonlinear optical devices and might be used in high-speed communication fibers. The non-linear parameters showed good agreement with the values given in the literature.
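
    For context, the generalized Miller's rule used above is commonly written (in Gaussian units) as

```latex
\chi^{(3)} \approx A\left[\chi^{(1)}(\omega)\right]^{4}, \qquad
\chi^{(1)}(\omega) = \frac{n_0^{2}(\omega)-1}{4\pi}, \qquad
A \approx 1.7\times10^{-10}\ \mathrm{esu}
```

    so the rapid growth of the third-order susceptibility with Sb content follows directly from the fourth-power dependence on the linear susceptibility, i.e., on the refractive index.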

  15. FIRE: an SPSS program for variable selection in multiple linear regression analysis via the relative importance of predictors.

    PubMed

    Lorenzo-Seva, Urbano; Ferrando, Pere J

    2011-03-01

    We provide an SPSS program that implements currently recommended techniques and recent developments for selecting variables in multiple linear regression analysis via the relative importance of predictors. The approach consists of: (1) optimally splitting the data for cross-validation, (2) selecting the final set of predictors to be retained in the regression equation, and (3) assessing the behavior of the chosen model using standard indices and procedures. The SPSS syntax, a short manual, and data files related to this article are available as supplemental materials from brm.psychonomic-journals.org/content/supplemental.

  16. A robust approach to measuring the detective quantum efficiency of radiographic detectors in a clinical setting

    NASA Astrophysics Data System (ADS)

    McDonald, Michael C.; Kim, H. K.; Henry, J. R.; Cunningham, I. A.

    2012-03-01

    The detective quantum efficiency (DQE) is widely accepted as a primary measure of x-ray detector performance in the scientific community. A standard method for measuring the DQE, based on IEC 62220-1, requires the system to have a linear response, meaning that the detector output signals are proportional to the incident x-ray exposure. However, many systems have a non-linear response due to characteristics of the detector, or post-processing of the detector signals, that cannot be disabled and may involve unknown algorithms considered proprietary by the manufacturer. For these reasons, the DQE has not been considered a practical candidate for routine quality assurance testing in a clinical setting. In this article we describe a method that can be used to measure the DQE of both linear and non-linear systems that employ only linear image processing algorithms. The method was validated on a cesium iodide based flat-panel system that simultaneously stores a raw (linear) and processed (non-linear) image for each exposure. The resulting DQE was equivalent to a conventional standards-compliant DQE within measurement precision, and the gray-scale inversion and linear edge enhancement did not affect the DQE result. While not IEC 62220-1 compliant, the method may be adequate for QA programs.
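
    For reference, the spatial-frequency-dependent DQE being measured is conventionally written as

```latex
\mathrm{DQE}(u) = \frac{\mathrm{MTF}^{2}(u)}{\bar{q}\;\mathrm{NNPS}(u)}
```

    where q-bar is the incident photon fluence per unit area and NNPS(u) is the noise power spectrum normalized by the squared large-area signal; normalizing out the mean signal removes the overall system gain for a linear response, which is the property the method builds on for non-linear systems.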

  17. Prediction of Unsteady Flows in Turbomachinery Using the Linearized Euler Equations on Deforming Grids

    NASA Technical Reports Server (NTRS)

    Clark, William S.; Hall, Kenneth C.

    1994-01-01

    A linearized Euler solver for calculating unsteady flows in turbomachinery blade rows due to both incident gusts and blade motion is presented. The model accounts for blade loading, blade geometry, shock motion, and wake motion. Assuming that the unsteadiness in the flow is small relative to the nonlinear mean solution, the unsteady Euler equations can be linearized about the mean flow. This yields a set of linear variable coefficient equations that describe the small amplitude harmonic motion of the fluid. These linear equations are then discretized on a computational grid and solved using standard numerical techniques. For transonic flows, however, one must use a linear discretization which is a conservative linearization of the non-linear discretized Euler equations to ensure that shock impulse loads are accurately captured. Other important features of this analysis include a continuously deforming grid which eliminates extrapolation errors and hence, increases accuracy, and a new numerically exact, nonreflecting far-field boundary condition treatment based on an eigenanalysis of the discretized equations. Computational results are presented which demonstrate the computational accuracy and efficiency of the method and demonstrate the effectiveness of the deforming grid, far-field nonreflecting boundary conditions, and shock capturing techniques. A comparison of the present unsteady flow predictions to other numerical, semi-analytical, and experimental methods shows excellent agreement. In addition, the linearized Euler method presented requires one or two orders-of-magnitude less computational time than traditional time marching techniques making the present method a viable design tool for aeroelastic analyses.

  18. Going virtual with quicktime VR: new methods and standardized tools for interactive dynamic visualization of anatomical structures.

    PubMed

    Trelease, R B; Nieder, G L; Dørup, J; Hansen, M S

    2000-04-15

    Continuing evolution of computer-based multimedia technologies has produced QuickTime, a multiplatform digital media standard that is supported by stand-alone commercial programs and World Wide Web browsers. While its core functions might be most commonly employed for production and delivery of conventional video programs (e.g., lecture videos), additional QuickTime VR "virtual reality" features can be used to produce photorealistic, interactive "non-linear movies" of anatomical structures ranging in size from microscopic through gross anatomic. But what is really included in QuickTime VR and how can it be easily used to produce novel and innovative visualizations for education and research? This tutorial introduces the QuickTime multimedia environment, its QuickTime VR extensions, basic linear and non-linear digital video technologies, image acquisition, and other specialized QuickTime VR production methods. Four separate practical applications are presented for light and electron microscopy, dissectable preserved specimens, and explorable functional anatomy in magnetic resonance cinegrams.

  19. SIMD Optimization of Linear Expressions for Programmable Graphics Hardware

    PubMed Central

    Bajaj, Chandrajit; Ihm, Insung; Min, Jungki; Oh, Jinsang

    2009-01-01

    The increased programmability of graphics hardware allows efficient graphics processing unit (GPU) implementations of a wide range of general computations on commodity PCs. An important factor in such implementations is how to fully exploit the SIMD computing capacities offered by modern graphics processors. Linear expressions in the form of ȳ = Ax̄ + b̄, where A is a matrix, and x̄, ȳ and b̄ are vectors, constitute one of the most basic operations in many scientific computations. In this paper, we propose a SIMD code optimization technique that enables efficient shader codes to be generated for evaluating linear expressions. It is shown that performance can be improved considerably by efficiently packing arithmetic operations into four-wide SIMD instructions through reordering of the operations in linear expressions. We demonstrate that the presented technique can be used effectively for programming both vertex and pixel shaders for a variety of mathematical applications, including integrating differential equations and solving a sparse linear system of equations using iterative methods. PMID:19946569

  20. Hysteresis in column systems

    NASA Astrophysics Data System (ADS)

    Ivanyi, P.; Ivanyi, A.

    2015-02-01

    In this paper one column of a telescopic construction of a bell tower is investigated. The hinges at the support of the column and at the connecting joint between the upper and lower columns are modelled with rotational springs. The characteristics of the springs are assumed to be non-linear, and their hysteresis is represented with the Preisach hysteresis model. The masses of the columns and of the bell with the fly are concentrated at the top of the column. The tolling process is simulated with a cyclic load. The elements of the column are considered completely rigid. The time iteration of the non-linear equations of motion is evaluated by the Crank-Nicolson scheme, and the implemented non-linear hysteresis is handled by the fixed-point technique. The numerical simulation of the dynamic system is carried out under different combinations of soft, medium and hard hysteresis properties of the hinges.

  1. A Non-linear Geodetic Data Inversion Using ABIC for Slip Distribution on a Fault With an Unknown Dip Angle

    NASA Astrophysics Data System (ADS)

    Fukahata, Y.; Wright, T. J.

    2006-12-01

We developed a method of geodetic data inversion for slip distribution on a fault with an unknown dip angle. When fault geometry is unknown, the problem of geodetic data inversion is non-linear. A common strategy for obtaining slip distribution is to first determine the fault geometry by minimizing the squared misfit under the assumption of uniform slip on a rectangular fault, and then apply the usual linear inversion technique to estimate a slip distribution on the determined fault. It is not guaranteed, however, that the fault determined under the assumption of uniform slip gives the best fault geometry for a spatially variable slip distribution. In addition, in obtaining a uniform-slip fault model, we have to simultaneously determine the values of nine mutually dependent parameters, which is a highly non-linear, complicated process. Although the inverse problem is non-linear for cases with unknown fault geometries, the non-linearity is actually weak when we can assume the fault surface to be flat. In particular, when a clear fault trace is observed on the Earth's surface after an earthquake, we can precisely estimate the strike and the location of the fault. In this case only the dip angle has large ambiguity. In geodetic data inversion we usually need to introduce smoothness constraints in order to reconcile the reciprocal requirements of model resolution and estimation error in a natural way. Strictly speaking, the inverse problem with smoothness constraints is also non-linear, even if the fault geometry is known. This non-linearity has been resolved by introducing Akaike's Bayesian Information Criterion (ABIC), with which the optimal relative weight of observed data to smoothness constraints is objectively determined. In this study, using ABIC also to determine the optimal dip angle, we resolved the non-linearity of the inverse problem. We applied the method to InSAR data of the 1995 Dinar, Turkey earthquake and obtained a much shallower dip angle than previously reported.
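
    As a rough illustration of this strategy, the sketch below treats the dip angle as the single non-linear unknown: each candidate dip gets a smoothness-constrained linear slip inversion scored by an ABIC-like criterion, and the dip minimizing that score is selected. The forward kernel and the criterion are simplified stand-ins; the real method uses elastic dislocation Green's functions and the full ABIC expression.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def greens(dip, n_obs=50, n_patch=10):
        # hypothetical sensitivity kernel standing in for elastic dislocation
        # Green's functions: response decays with patch depth, which scales
        # with sin(dip)
        x = np.linspace(0.1, 5.0, n_obs)[:, None]
        depth = np.linspace(0.5, 3.0, n_patch)[None, :] * np.sin(np.radians(dip))
        return depth / (x**2 + depth**2)

    def abic_like(dip, y, alphas=np.logspace(-4, 2, 30)):
        G = greens(dip)
        L = np.diff(np.eye(G.shape[1]), n=2, axis=0)   # 2nd-difference smoother
        best = np.inf
        for a in alphas:                               # optimize hyperparameter
            A = np.vstack([G, np.sqrt(a) * L])
            rhs = np.r_[y, np.zeros(L.shape[0])]
            slip = np.linalg.lstsq(A, rhs, rcond=None)[0]
            rss = np.sum((y - G @ slip)**2) + a * np.sum((L @ slip)**2)
            _, logdet = np.linalg.slogdet(G.T @ G + a * L.T @ L)
            # schematic ABIC-style score: misfit + determinant - prior terms
            val = len(y) * np.log(rss) - L.shape[0] * np.log(a) + logdet
            best = min(best, val)
        return best

    y = greens(35.0) @ np.linspace(1.0, 0.0, 10)       # synthetic "observations"
    y += 1e-3 * rng.standard_normal(y.size)
    dips = np.arange(10.0, 80.0, 1.0)
    best_dip = dips[np.argmin([abic_like(d, y) for d in dips])]
    print("estimated dip:", best_dip)
    ```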

  2. The experimentation of LC7E learning model on the linear program material in terms of interpersonal intelligence on Wonogiri vocational school students

    NASA Astrophysics Data System (ADS)

    Antinah; Kusmayadi, T. A.; Husodo, B.

    2018-05-01

This study aims to determine the effect of learning model on student achievement in terms of interpersonal intelligence. The compared learning models are the LC7E and direct learning models. This research is quasi-experimental with a 2x3 factorial design. The population in this study is Grade XI students of Wonogiri vocational schools. The sample selection was done by stratified cluster random sampling. Data collection techniques used questionnaires, documentation and tests. The data analysis technique used two-way analysis of variance with unequal cells, preceded by prerequisite analyses for balance, normality and homogeneity. The conclusions of this research are: 1) the mathematics learning achievement of students taught with the LC7E learning model is better than that with direct learning; 2) the mathematics learning achievement of students with a high level of interpersonal intelligence is better than that of students with medium and low levels, and students with a medium level of interpersonal intelligence perform better than those with a low level on linear programming; 3) the LC7E learning model resulted in better mathematics learning achievement than the direct learning model for each category of students' interpersonal intelligence level on the linear program material.

  3. The experimentation of LC7E learning model on the linear program material in terms of interpersonal intelligence on Wonogiri Vocational School students

    NASA Astrophysics Data System (ADS)

    Antinah; Kusmayadi, T. A.; Husodo, B.

    2018-03-01

This study aimed to determine the effect of learning model on student achievement in terms of interpersonal intelligence. The compared learning models are the LC7E and direct learning models. This research is quasi-experimental with a 2x3 factorial design. The population in this study is Grade XI students of Wonogiri vocational schools. The sample selection was done by stratified cluster random sampling. Data collection techniques used questionnaires, documentation and tests. The data analysis technique used two-way analysis of variance with unequal cells, preceded by prerequisite analyses for balance, normality and homogeneity. The conclusions of this research are: 1) the mathematics learning achievement of students taught with the LC7E learning model is better than that with direct learning; 2) the mathematics learning achievement of students with a high level of interpersonal intelligence is better than that of students with medium and low levels, and students with a medium level of interpersonal intelligence perform better than those with a low level on linear programming; 3) the LC7E learning model resulted in better mathematics learning achievement than the direct learning model for each category of students' interpersonal intelligence level on the linear program material.

  4. Constructivist Approach to Teacher Education: An Integrative Model for Reflective Teaching

    ERIC Educational Resources Information Center

    Vijaya Kumari, S. N.

    2014-01-01

The theory of constructivism states that learning is non-linear, recursive, continuous, complex and relational. Despite the difficulty of deducing constructivist pedagogy from constructivist theories, there are models and common elements to consider in planning a new program. Reflective activities are a common feature of all the programs of…

  5. Rapid determination of thermodynamic parameters from one-dimensional programmed-temperature gas chromatography for use in retention time prediction in comprehensive multidimensional chromatography.

    PubMed

    McGinitie, Teague M; Ebrahimi-Najafabadi, Heshmatollah; Harynuk, James J

    2014-01-17

A new method for estimating the thermodynamic parameters ΔH(T0), ΔS(T0), and ΔCP for use in thermodynamic modeling of GC×GC separations has been developed. The method is an alternative to the traditional isothermal separations required to fit a three-parameter thermodynamic model to retention data. Herein, a non-linear optimization technique is used to estimate the parameters from a series of temperature-programmed separations using the Nelder-Mead simplex algorithm. With this method, the time required to obtain estimates of thermodynamic parameters for a series of analytes is significantly reduced. The new method allows for precise predictions of retention time, with an average error of only 0.2 s for 1D separations. Predictions for GC×GC separations were also in agreement with experimental measurements, having an average relative error of 0.37% for (1)tr and 2.1% for (2)tr. Copyright © 2013 Elsevier B.V. All rights reserved.
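
    A schematic of this fitting strategy, under assumed forms (a three-parameter retention model referenced to T0, a single-ramp temperature program, and a constant holdup time; all numbers are illustrative, not the paper's):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    R, T0, t_M = 8.314, 373.15, 30.0       # gas constant, ref. temp (K), holdup (s)

    def k_factor(T, dH, dS, dCp):
        """Retention factor from an assumed three-parameter model at temperature T."""
        dHT = dH + dCp * (T - T0)          # ΔH(T), ΔS(T) via ΔCp correction
        dST = dS + dCp * np.log(T / T0)
        return np.exp(-dHT / (R * T) + dST / R)

    def retention_time(params, T_start, ramp, dt=0.5, t_max=36000.0):
        """Integrate solute migration through the ramp until it elutes."""
        t, frac = 0.0, 0.0
        while frac < 1.0 and t < t_max:
            T = T_start + ramp * t
            frac += dt / (t_M * (1.0 + k_factor(T, *params)))
            t += dt
        return t

    true = (-40e3, -90.0, 50.0)            # illustrative ΔH (J/mol), ΔS, ΔCp
    programs = [(313.15, r) for r in (0.083, 0.167, 0.333)]   # three ramp rates (K/s)
    obs = [retention_time(true, T, r) for T, r in programs]

    def sse(p):                            # misfit over all programmed runs
        return sum((retention_time(p, T, r) - o)**2
                   for (T, r), o in zip(programs, obs))

    fit = minimize(sse, x0=(-30e3, -70.0, 10.0), method="Nelder-Mead")
    print(fit.x)                           # recovered (ΔH, ΔS, ΔCp)
    ```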

  6. Learning Game Evaluation Functions with a Compound Linear Machine.

    DTIC Science & Technology

    1980-03-01

This report compares a compound linear machine for learning game evaluation functions against non-learning Shannon-type programs, Samuel's Shannon-type checker program, and an advice-taking program. Samuel's checker-playing program is usually held to be the most significant example of a game-playing program (GRC, 1978:54-72). A related study performed for the Air Force recommends researching computerized game playing.

  7. Trend extraction using empirical mode decomposition and statistical empirical mode decomposition: Case study: Kuala Lumpur stock market

    NASA Astrophysics Data System (ADS)

    Jaber, Abobaker M.

    2014-12-01

Two nonparametric methods for the prediction and modeling of financial time series signals are proposed. The proposed techniques are designed to handle non-stationary and non-linear behaviour and to extract meaningful signals for reliable prediction. Using the Fourier transform (FT), the methods select the significant decomposed signals to be employed for signal prediction. The proposed techniques were developed by coupling the Holt-Winters method with empirical mode decomposition (EMD) and with its smoothed extension, statistical empirical mode decomposition (SEMD). To show the performance of the proposed techniques, we analyze the daily closing prices of the Kuala Lumpur stock market index.

  8. Non-linear dynamic characteristics and optimal control of giant magnetostrictive film subjected to in-plane stochastic excitation

    NASA Astrophysics Data System (ADS)

    Zhu, Z. W.; Zhang, W. D.; Xu, J.

    2014-03-01

The non-linear dynamic characteristics and optimal control of a giant magnetostrictive film (GMF) subjected to in-plane stochastic excitation were studied. Non-linear differential terms were introduced to interpret the hysteretic phenomena of the GMF, and the non-linear dynamic model of the GMF subjected to in-plane stochastic excitation was developed. The stochastic stability was analysed, and the probability density function was obtained. The conditions for stochastic Hopf bifurcation and noise-induced chaotic response were determined, and the fractal boundary of the system's safe basin was provided. The reliability function was solved from the backward Kolmogorov equation, and an optimal control strategy was proposed using the stochastic dynamic programming method. Numerical simulation shows that the system stability varies with the parameters, and stochastic Hopf bifurcation and chaos appear in the process; the area of the safe basin decreases when the noise intensifies, and the boundary of the safe basin becomes fractal; the system reliability is improved through stochastic optimal control. Finally, the theoretical and numerical results were verified by experiments. The results are helpful for engineering applications of GMFs.

  9. Synthesis, characterisation and optical studies of new tetraethyl-rubyrin-graphene oxide covalent adducts

    NASA Astrophysics Data System (ADS)

    Garg, Kavita; Shanmugam, Ramakrishanan; Ramamurthy, Praveen C.

    2018-02-01

A tetrathia-rubyrin and graphene oxide (GO) covalent adduct was synthesized and characterised, and its optical properties were studied. The GO-rubyrin adducts showed fluorescence quenching of rubyrin due to electron or energy transfer from rubyrin to graphene oxide, which was also reflected in the UV-vis absorption spectra. The non-linear optical responses were measured with the Z-scan technique in the nanosecond regime. Enhanced optical non-linearity was observed after attachment of GO to rubyrin, which can be ascribed to photo-induced electron or energy transfer from the electron-rich rubyrin moiety to the electron-deficient GO.

  10. Study of coherent synchrotron radiation effects by means of a new simulation code based on the non-linear extension of the operator splitting method

    NASA Astrophysics Data System (ADS)

    Dattoli, G.; Migliorati, M.; Schiavi, A.

    2007-05-01

Coherent synchrotron radiation (CSR) is one of the main problems limiting the performance of high-intensity electron accelerators. The complexity of the physical mechanisms underlying the onset of instabilities due to CSR demands accurate descriptions, capable of including the large number of features of an actual accelerating device. A code devoted to the analysis of these types of problems should be fast and reliable, conditions that are rarely achieved at the same time. In the past, codes based on Lie algebraic techniques have been very efficient in treating transport problems in accelerators. The extension of these methods to the non-linear case is ideally suited to treating CSR instability problems. We report on the development of a numerical code, based on the solution of the Vlasov equation, with the inclusion of non-linear contributions due to wake field effects. The proposed solution method exploits an algebraic technique based on exponential operators. We show that the integration procedure is capable of reproducing the onset of instability and the effects associated with bunching mechanisms leading to the growth of the instability itself. In addition, considerations on the threshold of the instability are also developed.

  11. Optimizing Requirements Decisions with KEYS

    NASA Technical Reports Server (NTRS)

    Jalali, Omid; Menzies, Tim; Feather, Martin

    2008-01-01

Recent work with NASA's Jet Propulsion Laboratory has allowed external access to five of JPL's real-world requirements models, anonymized to conceal proprietary information but retaining their computational nature. Experimentation with these models, reported herein, demonstrates a dramatic speedup in the computations performed on them. These models have a well-defined goal: select mitigations that retire risks, which in turn increases the number of attainable requirements. Identifying not only (a) the optimal solution(s) but also (b) the key factors leading to them is less well studied. Our technique, called KEYS, offers a rapid way of simultaneously identifying the solutions and their key factors. KEYS improves on prior work by several orders of magnitude. Prior experiments with simulated annealing or treatment learning took tens of minutes to hours to terminate. KEYS runs much faster: for one model, KEYS ran 13,000 times faster than treatment learning (40 minutes versus 0.18 seconds). Processing these JPL models is a non-linear optimization problem: the fewest mitigations must be selected while achieving the most requirements. Non-linear optimization is a well-studied problem. With this paper, we challenge other members of the PROMISE community to improve on our results with other techniques.

  12. Use of non-linear mixed-effects modelling and regression analysis to predict the number of somatic coliphages by plaque enumeration after 3 hours of incubation.

    PubMed

    Mendez, Javier; Monleon-Getino, Antonio; Jofre, Juan; Lucena, Francisco

    2017-10-01

The present study aimed to establish the kinetics of the appearance of coliphage plaques using the double-agar-layer titration technique, to evaluate the feasibility of using traditional coliphage plaque-forming-unit (PFU) enumeration as a rapid quantification method. Repeated measurements of the appearance of plaques of coliphages titrated according to ISO 10705-2 at different times were analysed using non-linear mixed-effects regression to determine the most suitable model of their appearance kinetics. Although this model is adequate, to simplify its applicability two linear models were developed to predict the numbers of coliphages reliably from the PFU counts determined by the ISO method after only 3 hours of incubation. One linear model, for when the number of plaques detected after 3 hours was between 4 and 26 PFU, had a linear fit of (1.48 × Counts(3 h) + 1.97); the other, for values >26 PFU, had a fit of (1.18 × Counts(3 h) + 2.95). If the number of plaques detected after 3 hours was <4 PFU, we recommend incubation for (18 ± 3) hours. The study indicates that the traditional coliphage plating technique has reasonable potential to provide results in a single working day without the need to invest in additional laboratory equipment.
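
    The decision rule quoted above transcribes directly into code; the coefficients are taken verbatim from the abstract.

    ```python
    # Direct transcription of the fitted rule: predict the final coliphage
    # count from the plaques visible after 3 h of incubation.
    def predict_final_pfu(counts_3h):
        """Predict the (18 ± 3) h PFU count from the 3 h count."""
        if counts_3h < 4:
            return None            # too few plaques: incubate the full (18 ± 3) h
        if counts_3h <= 26:
            return 1.48 * counts_3h + 1.97
        return 1.18 * counts_3h + 2.95

    print(predict_final_pfu(10))   # 16.77
    print(predict_final_pfu(40))   # 50.15
    ```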

  13. Hypervideo.

    ERIC Educational Resources Information Center

    Locatis, Craig; And Others

    1990-01-01

    Discusses methods for incorporating video into hypermedia programs. Knowledge representation in hypermedia is explained; video production techniques are discussed; comparisons between linear video, interactive video, and hypervideo are presented; appropriate conditions for hypervideo use are examined; and a need for new media research is…

  14. Using Modern C++ Idiom for the Discretisation of Sets of Coupled Transport Equations in Numerical Plasma Physics

    NASA Astrophysics Data System (ADS)

    van Dijk, Jan; Hartgers, Bart; van der Mullen, Joost

    2006-10-01

Self-consistent modelling of plasma sources requires a simultaneous treatment of multiple physical phenomena. As a result, plasma codes have a high degree of complexity. And with the growing interest in time-dependent modelling of non-equilibrium plasma in three dimensions, codes tend to become increasingly hard to explain and maintain. As a result of these trends there has been an increased interest in the software-engineering and implementation aspects of plasma modelling in our group at Eindhoven University of Technology. In this contribution we will present modern object-oriented techniques in C++ to solve an old problem: the discretisation of coupled linear(ized) equations involving multiple field variables on ortho-curvilinear meshes. The 'LinSys' code has been tailored to the transport equations that occur in plasma physics. The implementation has been made both efficient and user-friendly by using modern idioms like expression templates and template meta-programming. Live demonstrations will be given. The code is available to interested parties; please visit www.dischargemodelling.org.

  15. Low-amplitude non-linear volume vibrations of single microbubbles measured with an "acoustical camera".

    PubMed

    Renaud, Guillaume; Bosch, Johan G; Van Der Steen, Antonius F W; De Jong, Nico

    2014-06-01

    Contrast-enhanced ultrasound imaging is based on the detection of non-linear vibrational responses of a contrast agent after its intravenous administration. Improving contrast-enhanced images requires an accurate understanding of the vibrational response to ultrasound of the lipid-coated gas microbubbles that constitute most ultrasound contrast agents. Variations in the volume of microbubbles provide the most efficient radiation of ultrasound and, therefore, are the most important bubble vibrations for medical diagnostic ultrasound imaging. We developed an "acoustical camera" that measures the dynamic volume change of individual microbubbles when excited by a pressure wave. In the work described here, the technique was applied to the characterization of low-amplitude non-linear behaviors of BR14 microbubbles (Bracco Research, Geneva, Switzerland). The amplitude dependence of the resonance frequency and the damping, the prevalence of efficient subharmonic and ultraharmonic vibrations and the amplitude dependence of the response at the fundamental frequency and at the second harmonic frequency were investigated. Because of the large number of measurements, we provide a statistical characterization of the low-amplitude non-linear properties of the contrast agent. Copyright © 2014 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  16. Automatic control: the vertebral column of dogfish sharks behaves as a continuously variable transmission with smoothly shifting functions.

    PubMed

    Porter, Marianne E; Ewoldt, Randy H; Long, John H

    2016-09-15

During swimming in dogfish sharks, Squalus acanthias, both the intervertebral joints and the vertebral centra undergo significant strain. To investigate this system, unique among vertebrates, we cyclically bent isolated segments of 10 vertebrae and nine joints. For the first time in the biomechanics of fish vertebral columns, we simultaneously characterized non-linear elasticity and viscosity throughout the bending oscillation, extending recently proposed techniques for large-amplitude oscillatory shear (LAOS) characterization to large-amplitude oscillatory bending (LAOB). The vertebral column segments behave as non-linear viscoelastic springs. Elastic properties dominate for all frequencies and curvatures tested, increasing as either variable increases. Non-linearities within a bending cycle are most in evidence at the highest frequency, 2.0 Hz, and curvature, 5 m⁻¹. Viscous bending properties are greatest at low frequencies and high curvatures, with non-linear effects occurring at all frequencies and curvatures. The range of mechanical behaviors includes that of springs and brakes, with smooth transitions between them that allow for continuously variable power transmission by the vertebral column to assist in the mechanics of undulatory propulsion. © 2016. Published by The Company of Biologists Ltd.

  17. Quantitative Assessment of Arrhythmia Using Non-linear Approach: A Non-invasive Prognostic Tool

    NASA Astrophysics Data System (ADS)

    Chakraborty, Monisha; Ghosh, Dipak

    2017-12-01

An accurate prognostic tool to identify the severity of Arrhythmia is yet to be developed, owing to the complexity of the ECG signal. In this paper, we show that quantitative assessment of Arrhythmia is possible using a non-linear technique based on Hurst rescaled range analysis. Although the concept of applying non-linearity to the study of various cardiac dysfunctions is not entirely new, the novel objective of this paper is to identify the severity of the disease, to monitor different medicines and their doses, and to assess the efficiency of different medicines. The approach presented in this work is simple, which in turn will help doctors in efficient disease management. In this work, Arrhythmia ECG time series are collected from the MIT-BIH database. Normal ECG time series are acquired using the POLYPARA system. Both types of time series are analysed in the light of the non-linear approach following the rescaled range analysis method. The quantitative parameter, the fractal dimension D, is obtained from both types of time series. The major finding is that Arrhythmia ECG shows lower values of D than normal ECG. Further, this information can be used to assess the severity of Arrhythmia quantitatively, which is a new direction of prognosis, and suitable software may be developed for use in medical practice.
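
    For readers unfamiliar with the method, here is a compact rescaled-range sketch (a generic R/S implementation, not the authors' code): it estimates the Hurst exponent H from a series and reports the fractal dimension via the standard self-affine relation D = 2 - H (assumed here), the parameter the study compares between arrhythmic and normal ECG.

    ```python
    import numpy as np

    def hurst_rs(x, min_chunk=8):
        """Estimate the Hurst exponent by classical rescaled-range analysis."""
        x = np.asarray(x, float)
        sizes, rs = [], []
        n = min_chunk
        while n <= len(x) // 2:
            chunks = [x[i:i+n] for i in range(0, len(x) - n + 1, n)]
            vals = []
            for c in chunks:
                z = np.cumsum(c - c.mean())      # cumulative deviation profile
                r, s = z.max() - z.min(), c.std()
                if s > 0:
                    vals.append(r / s)           # rescaled range of this window
            sizes.append(n); rs.append(np.mean(vals))
            n *= 2
        H = np.polyfit(np.log(sizes), np.log(rs), 1)[0]   # slope of log-log fit
        return H, 2.0 - H                        # Hurst exponent, fractal dimension

    rng = np.random.default_rng(1)
    H, D = hurst_rs(rng.standard_normal(4096))   # white noise: H ≈ 0.5, D ≈ 1.5
    print(f"H = {H:.2f}, D = {D:.2f}")
    ```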

  18. Quantitative Assessment of Arrhythmia Using Non-linear Approach: A Non-invasive Prognostic Tool

    NASA Astrophysics Data System (ADS)

    Chakraborty, Monisha; Ghosh, Dipak

    2018-04-01

An accurate prognostic tool to identify the severity of Arrhythmia is yet to be developed, owing to the complexity of the ECG signal. In this paper, we show that quantitative assessment of Arrhythmia is possible using a non-linear technique based on Hurst rescaled range analysis. Although the concept of applying non-linearity to the study of various cardiac dysfunctions is not entirely new, the novel objective of this paper is to identify the severity of the disease, to monitor different medicines and their doses, and to assess the efficiency of different medicines. The approach presented in this work is simple, which in turn will help doctors in efficient disease management. In this work, Arrhythmia ECG time series are collected from the MIT-BIH database. Normal ECG time series are acquired using the POLYPARA system. Both types of time series are analysed in the light of the non-linear approach following the rescaled range analysis method. The quantitative parameter, the fractal dimension D, is obtained from both types of time series. The major finding is that Arrhythmia ECG shows lower values of D than normal ECG. Further, this information can be used to assess the severity of Arrhythmia quantitatively, which is a new direction of prognosis, and suitable software may be developed for use in medical practice.

  19. Synthesis, crystal growth and studies on non-linear optical property of new chalcones

    NASA Astrophysics Data System (ADS)

    Sarojini, B. K.; Narayana, B.; Ashalatha, B. V.; Indira, J.; Lobo, K. G.

    2006-09-01

The synthesis, crystal growth and non-linear optical (NLO) properties of new chalcone derivatives are reported. 4-Propyloxy and 4-butoxy benzaldehydes were made to undergo Claisen-Schmidt condensation with 4-methoxy, 4-nitro and 4-phenoxy acetophenones to form the corresponding chalcones. The newly synthesized compounds were characterized by analytical and spectral data. The second-harmonic generation (SHG) efficiency of these compounds was measured by the powder technique using a Nd:YAG laser. Among the tested compounds, three chalcones showed the NLO property. The chalcone 1-(4-methoxyphenyl)-3-(4-propyloxyphenyl)-2-propen-1-one exhibited an SHG conversion efficiency 2.7 times that of urea. A bulk crystal of 1-(4-methoxyphenyl)-3-(4-butoxyphenyl)-2-propen-1-one (crystal size 65 × 28 × 15 mm³) was grown by the slow-evaporation technique from acetone. The microhardness of the crystal was tested by the Vickers microhardness method.

  20. Shape component analysis: structure-preserving dimension reduction on biological shape spaces.

    PubMed

    Lee, Hao-Chih; Liao, Tao; Zhang, Yongjie Jessica; Yang, Ge

    2016-03-01

Quantitative shape analysis is required by a wide range of biological studies across diverse scales, ranging from molecules to cells and organisms. In particular, high-throughput and systems-level studies of biological structures and functions have started to produce large volumes of complex high-dimensional shape data. Analysis and understanding of high-dimensional biological shape data require dimension-reduction techniques. We have developed a technique for non-linear dimension reduction of 2D and 3D biological shape representations on their Riemannian spaces. A key feature of this technique is that it preserves distances between different shapes in an embedded low-dimensional shape space. We demonstrate an application of this technique by combining it with non-linear mean-shift clustering on the Riemannian spaces for unsupervised clustering of shapes of cellular organelles and proteins. Source code and data for reproducing results of this article are freely available at https://github.com/ccdlcmu/shape_component_analysis_Matlab. The implementation was made in MATLAB and is supported on MS Windows, Linux and Mac OS. Contact: geyang@andrew.cmu.edu. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  1. Aerofoil broadband and tonal noise modelling using stochastic sound sources and incorporated large scale fluctuations

    NASA Astrophysics Data System (ADS)

    Proskurov, S.; Darbyshire, O. R.; Karabasov, S. A.

    2017-12-01

The present work discusses modifications to the stochastic Fast Random Particle Mesh (FRPM) method featuring both tonal and broadband noise sources. The technique relies on combining the vortex-shedding-resolved flow available from an Unsteady Reynolds-Averaged Navier-Stokes (URANS) simulation with the fine-scale turbulence FRPM solution generated via stochastic velocity fluctuations in the context of vortex sound theory. In contrast to the existing literature, our method encompasses a unified treatment of broadband and tonal acoustic noise sources at the source level, thus accounting for linear source interference as well as possible non-linear source interaction effects. Once the sound sources are determined, the Acoustic Perturbation Equations (APE-4) are solved in the time domain for the sound propagation. Results of the method's application to two aerofoil benchmark cases, with both sharp and blunt trailing edges, are presented. In each case, the importance of individual linear and non-linear noise sources was investigated. Several new key features related to the unsteady implementation of the method were tested and incorporated. Encouraging results have been obtained for the benchmark test cases using the new technique, which is believed to be potentially applicable to other airframe noise problems where both tonal and broadband parts are important.

  2. Robust and efficient pharmacokinetic parameter non-linear least squares estimation for dynamic contrast enhanced MRI of the prostate.

    PubMed

    Kargar, Soudabeh; Borisch, Eric A; Froemming, Adam T; Kawashima, Akira; Mynderse, Lance A; Stinson, Eric G; Trzasko, Joshua D; Riederer, Stephen J

    2018-05-01

To describe an efficient numerical optimization technique using non-linear least squares to estimate perfusion parameters for the Tofts and extended Tofts models from dynamic contrast enhanced (DCE) MRI data, and to apply the technique to prostate cancer. Parameters were estimated by fitting the two Tofts-based perfusion models to the acquired data via non-linear least squares. We apply Variable Projection (VP) to convert the fitting problem from a multi-dimensional to a one-dimensional line search, improving computational efficiency and robustness. Using simulation and DCE-MRI studies in twenty patients with suspected prostate cancer, the VP-based solver was compared against the traditional Levenberg-Marquardt (LM) strategy for accuracy, noise amplification, robustness of convergence, and computation time. The simulation demonstrated that VP and LM were both accurate, in that the medians closely matched assumed values across typical signal-to-noise ratio (SNR) levels for both Tofts models. VP and LM showed similar noise sensitivity. Studies using the patient data showed that the VP method reliably converged and matched results from LM, with approximately 3× and 2× reductions in computation time for the standard (two-parameter) and extended (three-parameter) Tofts models. While LM failed to converge in 14% of the patient data, VP converged in 100%. The VP-based method for non-linear least squares estimation of perfusion parameters for prostate MRI is equivalent to the LM-based method in accuracy and robustness to noise, while being more reliably convergent (100%) and computationally about 3× (TM) and 2× (ETM) faster. Copyright © 2017 Elsevier Inc. All rights reserved.
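
    The core of the VP trick can be sketched in a few lines for the standard Tofts model: with kep fixed, Ct(t) = Ktrans · [Cp ⊛ exp(-kep·t)](t) is linear in Ktrans, so the fit collapses to a bounded 1-D search over kep with a closed-form linear solve inside. The arterial input function and constants below are toy choices, not the paper's protocol.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    t = np.linspace(0, 5, 200); dt = t[1] - t[0]
    cp = t * np.exp(-t / 0.8)                     # toy arterial input function

    def basis(kep):
        """Convolution of the AIF with exp(-kep t) on the time grid."""
        return np.convolve(cp, np.exp(-kep * t))[:len(t)] * dt

    def projected_residual(kep, ct):
        b = basis(kep)
        ktrans = (b @ ct) / (b @ b)               # inner linear least squares
        return np.sum((ct - ktrans * b)**2), ktrans

    # synthetic tissue curve with Ktrans = 0.3, kep = 1.2, plus noise
    ct = 0.3 * basis(1.2) + 0.002 * np.random.default_rng(2).standard_normal(len(t))
    res = minimize_scalar(lambda k: projected_residual(k, ct)[0],
                          bounds=(0.01, 10.0), method="bounded")
    kep = res.x                                   # outer 1-D line search result
    print("kep =", kep, "Ktrans =", projected_residual(kep, ct)[1])
    ```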

  3. On a difficulty in eigenfunction expansion solutions for the start-up of fluid flow

    NASA Astrophysics Data System (ADS)

    Christov, Ivan C.

    2015-11-01

Most mathematics and engineering textbooks describe the process of "subtracting off" the steady state of a linear parabolic partial differential equation as a technique for obtaining a boundary-value problem with homogeneous boundary conditions that can be solved by separation of variables (i.e., eigenfunction expansions). While this method produces the correct solution for the start-up of the flow of, e.g., a Newtonian fluid between parallel plates, it can lead to erroneous solutions to the corresponding problem for a class of non-Newtonian fluids. We show that the reason for this is the non-rigorous enforcement of the start-up condition in the textbook approach, which leads to a violation of the principle of causality. Nevertheless, these boundary-value problems can be solved correctly using eigenfunction expansions, and we present the formulation that makes this possible (in essence, an application of Duhamel's principle). The solutions obtained by this new approach are shown to agree identically with those obtained by using the Laplace transform in time only, a technique that enforces the proper start-up condition implicitly (hence, the same error cannot be committed). Supported, in part, by NSF Grant DMS-1104047 and the U.S. DOE (Contract No. DE-AC52-06NA25396) through the LANL/LDRD Program.

  4. Perceptual distortion analysis of color image VQ-based coding

    NASA Astrophysics Data System (ADS)

    Charrier, Christophe; Knoblauch, Kenneth; Cherifi, Hocine

    1997-04-01

    It is generally accepted that a RGB color image can be easily encoded by using a gray-scale compression technique on each of the three color planes. Such an approach, however, fails to take into account correlations existing between color planes and perceptual factors. We evaluated several linear and non-linear color spaces, some introduced by the CIE, compressed with the vector quantization technique for minimum perceptual distortion. To study these distortions, we measured contrast and luminance of the video framebuffer, to precisely control color. We then obtained psychophysical judgements to measure how well these methods work to minimize perceptual distortion in a variety of color space.

  5. Linear beam raster magnet driver based on H-bridge technique

    DOEpatents

    Sinkine, Nikolai I.; Yan, Chen; Apeldoorn, Cornelis; Dail, Jeffrey Glenn; Wojcik, Randolph Frank; Gunning, William

    2006-06-06

    An improved raster magnet driver for a linear particle beam is based on an H-bridge technique. Four branches of power HEXFETs form a two-by-two switch. Switching the HEXFETs in a predetermined order and at the right frequency produces a triangular current waveform. An H-bridge controller controls switching sequence and timing. The magnetic field of the coil follows the shape of the waveform and thus steers the beam using a triangular rather than a sinusoidal waveform. The system produces a raster pattern having a highly uniform raster density distribution, eliminates target heating from non-uniform raster density distributions, and produces higher levels of beam current.

  6. A computer graphics display and data compression technique

    NASA Technical Reports Server (NTRS)

    Teague, M. J.; Meyer, H. G.; Levenson, L. (Editor)

    1974-01-01

The computer program discussed is intended for the graphical presentation of a general dependent variable X that is a function of two independent variables, U and V. The required input to the program is the variation of the dependent variable with one of the independent variables for various fixed values of the other. The computer program is named CRP, and the output is provided by the SD 4060 plotter. Program CRP is an extremely flexible program that offers the user a wide variety of options. The dependent variable may be presented in either a linear or a logarithmic manner. Automatic centering of the plot is provided in the ordinate direction, and the abscissa is scaled automatically for a logarithmic plot. A description of the carpet plot technique is given along with the coordinate system used in the program. Various aspects of the program logic are discussed and detailed documentation of the data card format is presented.

  7. Comparison of some optimal control methods for the design of turbine blades

    NASA Technical Reports Server (NTRS)

    Desilva, B. M. E.; Grant, G. N. C.

    1977-01-01

    This paper attempts a comparative study of some numerical methods for the optimal control design of turbine blades whose vibration characteristics are approximated by Timoshenko beam idealizations with shear and incorporating simple boundary conditions. The blade was synthesized using the following methods: (1) conjugate gradient minimization of the system Hamiltonian in function space incorporating penalty function transformations, (2) projection operator methods in a function space which includes the frequencies of vibration and the control function, (3) epsilon-technique penalty function transformation resulting in a highly nonlinear programming problem, (4) finite difference discretization of the state equations again resulting in a nonlinear program, (5) second variation methods with complex state differential equations to include damping effects resulting in systems of inhomogeneous matrix Riccatti equations some of which are stiff, (6) quasi-linear methods based on iterative linearization of the state and adjoint equation. The paper includes a discussion of some substantial computational difficulties encountered in the implementation of these techniques together with a resume of work presently in progress using a differential dynamic programming approach.

  8. A FORTRAN program for the analysis of linear continuous and sample-data systems

    NASA Technical Reports Server (NTRS)

    Edwards, J. W.

    1976-01-01

A FORTRAN digital computer program which performs the general analysis of linearized control systems is described. State variable techniques are used to analyze continuous, discrete, and sampled-data systems. Analysis options include the calculation of system eigenvalues, transfer functions, root loci, root contours, frequency responses, power spectra, and transient responses for open- and closed-loop systems. A flexible data input format allows the user to define systems in a variety of representations. Data may be entered by inputting explicit data matrices or matrices constructed in user-written subroutines, by specifying transfer function block diagrams, or by using a combination of these methods.

  9. A robust optimization methodology for preliminary aircraft design

    NASA Astrophysics Data System (ADS)

    Prigent, S.; Maréchal, P.; Rondepierre, A.; Druot, T.; Belleville, M.

    2016-05-01

    This article focuses on a robust optimization of an aircraft preliminary design under operational constraints. According to engineers' know-how, the aircraft preliminary design problem can be modelled as an uncertain optimization problem whose objective (the cost or the fuel consumption) is almost affine, and whose constraints are convex. It is shown that this uncertain optimization problem can be approximated in a conservative manner by an uncertain linear optimization program, which enables the use of the techniques of robust linear programming of Ben-Tal, El Ghaoui, and Nemirovski [Robust Optimization, Princeton University Press, 2009]. This methodology is then applied to two real cases of aircraft design and numerical results are presented.
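
    To illustrate the kind of robust counterpart involved (a toy example, not the authors' aircraft model): with box uncertainty a ∈ [ā − δ, ā + δ] in a constraint aᵀx ≤ b and x ≥ 0, the worst case is attained at ā + δ, so the robust problem is again a linear program.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    c = np.array([1.0, 2.0])                  # objective coefficients (made up)
    a_bar = np.array([2.0, 3.0])              # nominal constraint coefficients
    delta = np.array([0.2, 0.5])              # uncertainty half-widths
    b = 12.0

    # maximize c @ x subject to a @ x <= b, x >= 0 (linprog minimizes, hence -c)
    nominal = linprog(-c, A_ub=[a_bar], b_ub=[b], bounds=[(0, None)] * 2)
    # robust counterpart: enforce the worst-case coefficients (a_bar + delta)
    robust = linprog(-c, A_ub=[a_bar + delta], b_ub=[b], bounds=[(0, None)] * 2)
    print("nominal:", nominal.x, "robust:", robust.x)   # robust shrinks the design
    ```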

  10. An application of nonlinear programming to the design of regulators of a linear-quadratic formulation

    NASA Technical Reports Server (NTRS)

    Fleming, P.

    1983-01-01

    A design technique is proposed for linear regulators in which a feedback controller of fixed structure is chosen to minimize an integral quadratic objective function subject to the satisfaction of integral quadratic constraint functions. Application of a nonlinear programming algorithm to this mathematically tractable formulation results in an efficient and useful computer aided design tool. Particular attention is paid to computational efficiency and various recommendations are made. Two design examples illustrate the flexibility of the approach and highlight the special insight afforded to the designer. One concerns helicopter longitudinal dynamics and the other the flight dynamics of an aerodynamically unstable aircraft.

  11. A morphological perceptron with gradient-based learning for Brazilian stock market forecasting.

    PubMed

    Araújo, Ricardo de A

    2012-04-01

Several linear and non-linear techniques have been proposed to solve the stock market forecasting problem. However, a limitation common to all these techniques is known as the random walk dilemma (RWD). In this scenario, forecasts generated by arbitrary models have a characteristic one-step-ahead delay with respect to the time series values, so that there is a time phase distortion in the reconstruction of stock market phenomena. In this paper, we propose a suitable model inspired by concepts from mathematical morphology (MM) and lattice theory (LT), generically called the increasing morphological perceptron (IMP). We also present a gradient steepest descent method to design the proposed IMP, based on ideas from the back-propagation (BP) algorithm and using a systematic approach to overcome the problem of non-differentiability of morphological operations. In the learning process we have included a procedure to overcome the RWD: an automatic correction step geared toward eliminating the time phase distortions that occur in stock market phenomena. Furthermore, an experimental analysis is conducted with the IMP using four complex non-linear time series forecasting problems from the Brazilian stock market. Additionally, two natural-phenomena time series are used to assess the forecasting performance of the proposed IMP on non-financial time series. Finally, the obtained results are discussed and compared to results found using models recently proposed in the literature. Copyright © 2011 Elsevier Ltd. All rights reserved.

  12. Time-domain induced polarization - an analysis of Cole-Cole parameter resolution and correlation using Markov Chain Monte Carlo inversion

    NASA Astrophysics Data System (ADS)

    Madsen, Line Meldgaard; Fiandaca, Gianluca; Auken, Esben; Christiansen, Anders Vest

    2017-12-01

The application of time-domain induced polarization (TDIP) is increasing with advances in acquisition techniques, data processing and spectral inversion schemes. Inversion of TDIP data for the spectral Cole-Cole parameters is a non-linear problem, but by applying a 1-D Markov chain Monte Carlo (MCMC) inversion algorithm, a full non-linear uncertainty analysis of the parameters and the parameter correlations can be accessed. This is essential for understanding to what degree the spectral Cole-Cole parameters can be resolved from TDIP data. MCMC inversions of synthetic TDIP data, which show bell-shaped probability distributions with a single maximum, show that the Cole-Cole parameters can be resolved from TDIP data if an acquisition range above two decades in time is applied. Linear correlations between the Cole-Cole parameters are observed, and as the acquisition range decreases, the correlations increase and become non-linear. It is further investigated how the waveform and parameter values influence the resolution of the Cole-Cole parameters. A limiting factor is the value of the frequency exponent, C. As C decreases, the resolution of all the Cole-Cole parameters decreases and the results become increasingly non-linear. While the value of the time constant, τ, must be within the acquisition range for the parameters to be well resolved, the choice between a 50 per cent and a 100 per cent duty cycle for the current injection does not influence the parameter resolution. The limits of resolution and linearity are also studied in a comparison between the MCMC and a linearized gradient-based inversion approach. The two methods are consistent for resolved models, but the linearized approach tends to underestimate the uncertainties for poorly resolved parameters due to the corresponding non-linear features. Finally, an MCMC inversion of 1-D field data verifies that spectral Cole-Cole parameters can also be resolved from time-domain field measurements.
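
    A minimal Metropolis sketch of this kind of inversion is given below, run on a synthetic frequency-domain Cole-Cole response (the paper inverts time-domain decays, so this is an illustrative analogue rather than the authors' algorithm); the posterior correlation matrix it prints is the quantity whose structure the abstract discusses.

    ```python
    import numpy as np

    w = 2 * np.pi * np.logspace(-2, 3, 40)                 # angular frequencies

    def cole_cole(p):
        """Complex resistivity: parameters rho0, chargeability m, tau, exponent C."""
        rho0, m, tau, C = p
        return rho0 * (1 - m * (1 - 1 / (1 + (1j * w * tau)**C)))

    rng = np.random.default_rng(3)
    truth = np.array([100.0, 0.3, 0.5, 0.6])
    data = cole_cole(truth) + 0.5 * (rng.standard_normal(40)
                                     + 1j * rng.standard_normal(40))

    def log_post(p):
        if np.any(p <= 0) or p[1] >= 1 or p[3] > 1:        # flat priors with bounds
            return -np.inf
        r = data - cole_cole(p)
        return -0.5 * np.sum(np.abs(r)**2) / 0.5**2        # Gaussian likelihood

    step = np.array([1.0, 0.005, 0.01, 0.01])              # per-parameter step sizes
    chain, p = [], truth * 1.3                             # deliberately off start
    lp = log_post(p)
    for _ in range(20000):
        q = p + step * rng.standard_normal(4)              # symmetric random walk
        lq = log_post(q)
        if np.log(rng.random()) < lq - lp:                 # Metropolis acceptance
            p, lp = q, lq
        chain.append(p.copy())
    chain = np.array(chain)[5000:]                         # drop burn-in
    print("posterior means:", chain.mean(0))
    print("parameter correlations:\n", np.corrcoef(chain.T).round(2))
    ```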

  13. Combined solvent- and non-uniform temperature-programmed gradient liquid chromatography. I - A theoretical investigation.

    PubMed

    Gritti, Fabrice

    2016-11-18

A new class of gradient liquid chromatography (GLC) is proposed and its performance is analyzed from a theoretical viewpoint. During the course of such gradients, both the solvent strength and the column temperature are changed simultaneously in time and space. The solvent and temperature gradients propagate along the chromatographic column at their own independent linear velocities. This class of gradient is called combined solvent- and temperature-programmed gradient liquid chromatography (CST-GLC). General expressions for the retention time, the retention factor, and the temporal peak width of the analytes at elution in CST-GLC are derived for linear solvent strength (LSS) retention models, a modified van't Hoff retention behavior, linear and non-distorted solvent gradients, and linear temperature gradients. Under these conditions, the theory predicts that CST-GLC is equivalent to a unique apparent dynamic solvent gradient. The apparent solvent gradient steepness is the sum of the solvent and temperature steepnesses. The apparent solvent linear velocity is the reciprocal of the steepness-weighted sum of the reciprocals of the actual solvent and temperature linear velocities. The advantage of CST-GLC over conventional GLC is demonstrated for the resolution of protein digests (peptide mapping) when applying smooth, retained, linear acetonitrile gradients in combination with a linear temperature gradient (from 20°C to 90°C) using 300 μm × 150 mm capillary columns packed with sub-2 μm particles. The benefit of CST-GLC is demonstrated when the temperature gradient propagates at the same velocity as the chromatographic speed. The experimental proof-of-concept for the realization of temperature ramps propagating at a finite and constant linear velocity is also briefly described. Copyright © 2016 Elsevier B.V. All rights reserved.
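
    One plausible rendering of these two statements in formulas, with b_s, b_T the solvent and temperature steepnesses and u_s, u_T their propagation velocities (notation assumed, not taken from the paper):

    ```latex
    % Apparent steepness and apparent gradient velocity in CST-GLC,
    % as read from the abstract; symbols are this note's choice.
    \begin{align}
      b_{\mathrm{app}} &= b_s + b_T, \\
      \frac{1}{u_{\mathrm{app}}} &= \frac{b_s}{b_{\mathrm{app}}}\,\frac{1}{u_s}
                                  + \frac{b_T}{b_{\mathrm{app}}}\,\frac{1}{u_T}.
    \end{align}
    ```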

  14. Optimization of Energy Efficiency and Conservation in Green Building Design Using Duelist, Killer-Whale and Rain-Water Algorithms

    NASA Astrophysics Data System (ADS)

    Biyanto, T. R.; Matradji; Syamsi, M. N.; Fibrianto, H. Y.; Afdanny, N.; Rahman, A. H.; Gunawan, K. S.; Pratama, J. A. D.; Malwindasari, A.; Abdillah, A. I.; Bethiana, T. N.; Putra, Y. A.

    2017-11-01

The development of green buildings has been growing in both design and quality, but has been limited by the issue of expensive investment. Green buildings can reduce the energy usage inside a building, especially the energy used by the cooling system. The external load plays a major role in determining the usage of the cooling system, and is affected by the type of wall sheathing, glass and roof. Proper selection of the wall, glass and roof materials is therefore very important for reducing the external load. Hence, optimization of energy efficiency and conservation in green building design is required. Since this optimization consists of integer and non-linear equations, the problem falls into mixed-integer non-linear programming (MINLP), which requires global optimization techniques such as stochastic optimization algorithms. In this paper the optimized variables, i.e. the types of glass and roof, were chosen using the Duelist, Killer-Whale and Rain-Water algorithms to obtain the optimal energy use at minimal investment. The optimization results show that single Planibel-G glass of 3.2 mm thickness with glass wool insulation provided a maximum ROI of 36.8486%, an EUI reduction of 54 kWh/m2·year, a CO2 emission reduction of 486.8971 tons/year and a reduced investment of 4,078,905,465 IDR.

  15. Aerodynamic preliminary analysis system. Part 1: Theory. [linearized potential theory

    NASA Technical Reports Server (NTRS)

    Bonner, E.; Clever, W.; Dunn, K.

    1978-01-01

    A comprehensive aerodynamic analysis program based on linearized potential theory is described. The solution treats thickness and attitude problems at subsonic and supersonic speeds. Three dimensional configurations with or without jet flaps having multiple non-planar surfaces of arbitrary planform and open or closed slender bodies of non-circular contour may be analyzed. Longitudinal and lateral-directional static and rotary derivative solutions may be generated. The analysis was implemented on a time sharing system in conjunction with an input tablet digitizer and an interactive graphics input/output display and editing terminal to maximize its responsiveness to the preliminary analysis problem. Nominal case computation time of 45 CPU seconds on the CDC 175 for a 200 panel simulation indicates the program provides an efficient analysis for systematically performing various aerodynamic configuration tradeoff and evaluation studies.

  16. Study on soluble solids content measurement of grape juice beverage based on Vis/NIRS and chemometrics

    NASA Astrophysics Data System (ADS)

    Wu, Di; He, Yong

    2007-11-01

The aim of this study is to investigate the potential of the visible and near-infrared spectroscopy (Vis/NIRS) technique for non-destructive measurement of the soluble solids content (SSC) of grape juice beverage. 380 samples were studied in this paper. Savitzky-Golay smoothing and the standard normal variate transformation were applied for the pre-processing of the spectral data. Least-squares support vector machines (LS-SVM) with an RBF kernel function were applied to develop the SSC prediction model based on the Vis/NIRS absorbance data. The determination coefficient for prediction (Rp2) of the results predicted by the LS-SVM model was 0.962 and the root mean square error of prediction (RMSEP) was 0.434137. It is concluded that the Vis/NIRS technique can quantify the SSC of grape juice beverage quickly and non-destructively. The LS-SVM model was also compared with partial least squares (PLS) and back-propagation neural network (BP-NN) methods. The results showed that LS-SVM was superior to these conventional linear and non-linear methods in predicting the SSC of grape juice beverage. The generalization ability of the LS-SVM, PLS and BP-NN models was also investigated. It is concluded that the LS-SVM regression method is a promising chemometric technique for quantitative prediction.
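
    For concreteness, here is a from-scratch LS-SVM regression sketch with an RBF kernel, the model class used above; the closed-form dual system is the textbook LS-SVM formulation, and the toy 1-D data merely stand in for absorbance spectra.

    ```python
    import numpy as np

    def rbf(X, Z, sigma=1.0):
        """RBF kernel matrix between row-sample arrays X and Z."""
        d2 = ((X[:, None, :] - Z[None, :, :])**2).sum(-1)
        return np.exp(-d2 / (2 * sigma**2))

    def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
        n = len(y)
        K = rbf(X, X, sigma)
        # LS-SVM dual system: [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
        A = np.zeros((n + 1, n + 1))
        A[0, 1:] = 1.0; A[1:, 0] = 1.0
        A[1:, 1:] = K + np.eye(n) / gamma
        sol = np.linalg.solve(A, np.r_[0.0, y])
        return sol[0], sol[1:]                     # bias b, dual weights alpha

    def lssvm_predict(Xnew, X, b, alpha, sigma=1.0):
        return rbf(Xnew, X, sigma) @ alpha + b

    rng = np.random.default_rng(4)
    X = rng.uniform(-3, 3, (80, 1))                # toy inputs (stand-in spectra)
    y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(80)
    b, alpha = lssvm_fit(X, y)
    print(lssvm_predict(np.array([[0.5]]), X, b, alpha))   # ~ sin(0.5) ≈ 0.48
    ```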

  17. Formal methods for modeling and analysis of hybrid systems

    NASA Technical Reports Server (NTRS)

    Tiwari, Ashish (Inventor); Lincoln, Patrick D. (Inventor)

    2009-01-01

    A technique based on the use of a quantifier elimination decision procedure for real closed fields and simple theorem proving to construct a series of successively finer qualitative abstractions of hybrid automata is taught. The resulting abstractions are always discrete transition systems which can then be used by any traditional analysis tool. The constructed abstractions are conservative and can be used to establish safety properties of the original system. The technique works on linear and non-linear polynomial hybrid systems: the guards on discrete transitions and the continuous flows in all modes can be specified using arbitrary polynomial expressions over the continuous variables. An exemplar tool in the SAL environment built over the theorem prover PVS is detailed. The technique scales well to large and complex hybrid systems.

  18. Medical Expenditures among Immigrant and Non-Immigrant Groups in the U.S.: Findings from the Medical Expenditures Panel Survey (2000–2008)

    PubMed Central

    Tarraf, Wassim; Miranda, Patricia Y.; González, Hector M.

    2011-01-01

    Objective To examine time trends and differences in medical expenditures between non-citizens, foreign-born, and U.S.-born citizens. Methods We used multi-year Medical Expenditures Panel Survey (2000–2008) data on non-institutionalized adults in the U.S. (N=190,965). Source specific and total medical expenditures were analyzed using regression models, bootstrap prediction techniques, and linear and non-linear decomposition methods to evaluate the relationship between immigration status and expenditures, controlling for confounding effects. Results We found that the average health expenditures between 2000 and 2008 for non-citizens immigrants ($1,836) were substantially lower compared to both foreign-born ($3,737) and U.S.-born citizens ($4,478). Differences were maintained after controlling for confounding effects. Decomposition techniques showed that the main determinants of these differences were the availability of a usual source of healthcare, insurance, and ethnicity/race. Conclusion Lower healthcare expenditures among immigrants result from disparate access to healthcare. The dissipation of demographic advantages among immigrants could prospectively produce higher pressures on the U.S. healthcare system as immigrants age and levels of chronic conditions rise. Barring a shift in policy, the brunt of the effects could be borne by an already overextended public healthcare system. PMID:22222383

  19. Progress in Studying Scintillator Proportionality: Phenomenological Model

    NASA Astrophysics Data System (ADS)

    Bizarri, G.; Cherepy, N. J.; Choong, W. S.; Hull, G.; Moses, W. W.; Payne, S. A.; Singh, J.; Valentine, J. D.; Vasilev, A. N.; Williams, R. T.

    2009-08-01

We present a model to describe the origin of the non-proportional dependence of scintillator light yield on the energy of an ionizing particle. The non-proportionality is discussed in terms of energy relaxation channels and their linear and non-linear dependences on the deposited energy. In this approach, the scintillation response is described as a function of the deposited energy and the kinetic rates of each relaxation channel. This mathematical framework allows both a qualitative interpretation and a quantitative fitting representation of the scintillation non-proportionality response as a function of the kinetic rates. The method was successfully applied to thallium-doped sodium iodide measured with SLYNCI, a new facility using the Compton coincidence technique. Finally, attention is given to the physical meaning of the dominant relaxation channels and to the potential causes of scintillation non-proportionality. We find that thallium-doped sodium iodide behaves as if the non-proportionality is due to competition between radiative recombination and non-radiative Auger processes.

  20. Simultaneous multiple non-crossing quantile regression estimation using kernel constraints

    PubMed Central

    Liu, Yufeng; Wu, Yichao

    2011-01-01

    Quantile regression (QR) is a very useful statistical tool for learning the relationship between the response variable and covariates. For many applications, one often needs to estimate multiple conditional quantile functions of the response variable given covariates. Although one can estimate multiple quantiles separately, it is of great interest to estimate them simultaneously. One advantage of simultaneous estimation is that multiple quantiles can share strength among them to gain better estimation accuracy than individually estimated quantile functions. Another important advantage of joint estimation is the feasibility of incorporating simultaneous non-crossing constraints of QR functions. In this paper, we propose a new kernel-based multiple QR estimation technique, namely simultaneous non-crossing quantile regression (SNQR). We use kernel representations for QR functions and apply constraints on the kernel coefficients to avoid crossing. Both unregularised and regularised SNQR techniques are considered. Asymptotic properties such as asymptotic normality of linear SNQR and oracle properties of the sparse linear SNQR are developed. Our numerical results demonstrate the competitive performance of our SNQR over the original individual QR estimation. PMID:22190842

  1. Thermal analyses of the International Ultraviolet Explorer (IUE) scientific instrument using the NASTRAN thermal analyzer (NTA): A general purpose summary

    NASA Technical Reports Server (NTRS)

    Jackson, C. E., Jr.

    1976-01-01

The NTA, Level 15.5.2/3, was used to provide non-linear steady-state (NLSS) and non-linear transient (NLTR) thermal predictions for the International Ultraviolet Explorer (IUE) Scientific Instrument (SI). NASTRAN structural models were used as the basis for the thermal models, which were produced by a straightforward conversion procedure. The accuracy of this technique was subsequently demonstrated by a comparison of NTA predictions with the results of a thermal vacuum test of the IUE Engineering Test Unit (ETU). Completion of these tasks was aided by the use of NTA subroutines.

  2. A Case Study on the Application of a Structured Experimental Method for Optimal Parameter Design of a Complex Control System

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2015-01-01

    This report documents a case study on the application of Reliability Engineering techniques to achieve an optimal balance between performance and robustness by tuning the functional parameters of a complex non-linear control system. For complex systems with intricate and non-linear patterns of interaction between system components, analytical derivation of a mathematical model of system performance and robustness in terms of functional parameters may not be feasible or cost-effective. The demonstrated approach is simple, structured, effective, repeatable, and cost and time efficient. This general approach is suitable for a wide range of systems.

  3. Dynamic acousto-elastic testing of concrete with a coda-wave probe: comparison with standard linear and nonlinear ultrasonic techniques.

    PubMed

    Shokouhi, Parisa; Rivière, Jacques; Lake, Colton R; Le Bas, Pierre-Yves; Ulrich, T J

    2017-11-01

    The use of nonlinear acoustic techniques in solids consists in measuring wave distortion arising from compliant features such as cracks, soft intergrain bonds and dislocations. As such, they provide very powerful nondestructive tools to monitor the onset of damage within materials. In particular, a recent technique called dynamic acousto-elasticity testing (DAET) gives unprecedented details on the nonlinear elastic response of materials (classical and non-classical nonlinear features including hysteresis, transient elastic softening and slow relaxation). Here, we provide a comprehensive set of linear and nonlinear acoustic responses on two prismatic concrete specimens; one intact and one pre-compressed to about 70% of its ultimate strength. The two linear techniques used are Ultrasonic Pulse Velocity (UPV) and Resonance Ultrasound Spectroscopy (RUS), while the nonlinear ones include DAET (fast and slow dynamics) as well as Nonlinear Resonance Ultrasound Spectroscopy (NRUS). In addition, the DAET results correspond to a configuration where the (incoherent) coda portion of the ultrasonic record is used to probe the samples, as opposed to a (coherent) first arrival wave in standard DAET tests. We find that the two visually identical specimens are indistinguishable based on parameters measured by linear techniques (UPV and RUS). On the contrary, the extracted nonlinear parameters from NRUS and DAET are consistent and orders of magnitude greater for the damaged specimen than those for the intact one. This compiled set of linear and nonlinear ultrasonic testing data including the most advanced technique (DAET) provides a benchmark comparison for their use in the field of material characterization. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Burn-injured tissue detection for debridement surgery through the combination of non-invasive optical imaging techniques.

    PubMed

    Heredia-Juesas, Juan; Thatcher, Jeffrey E; Lu, Yang; Squiers, John J; King, Darlene; Fan, Wensheng; DiMaio, J Michael; Martinez-Lorenzo, Jose A

    2018-04-01

    The process of burn debridement is a challenging technique requiring significant skill to identify the regions that need excision and their appropriate excision depths. In order to assist surgeons, a machine learning tool is being developed to provide a quantitative assessment of burn-injured tissue. This paper presents three non-invasive optical imaging techniques capable of distinguishing four kinds of tissue (healthy skin, viable wound bed, shallow burn, and deep burn) during serial burn debridement in a porcine model. All combinations of these three techniques have been studied through a k-fold cross-validation method. In terms of global performance, the combination of all three techniques significantly improves the classification accuracy with respect to just one technique, from 0.42 up to more than 0.76. Furthermore, a non-linear spatial filtering based on the mode of a small neighborhood has been applied as a post-processing technique in order to improve the performance of the classification. Using this technique, the global accuracy reaches a value close to 0.78 and, for some particular tissues and combinations of techniques, the accuracy improves by 13%.
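
    The mode-based post-processing step lends itself to a compact illustration. The sketch below (my own illustration with hypothetical array sizes, not the authors' code) replaces each pixel's class label with the most frequent label in its neighborhood, which is the kind of non-linear spatial filter described above:

        import numpy as np
        from scipy import ndimage

        def mode_filter(labels, size=3):
            """Majority-vote filter: each pixel takes the most frequent
            class label within its size-by-size neighborhood."""
            def local_mode(window):
                vals, counts = np.unique(window, return_counts=True)
                return vals[np.argmax(counts)]
            return ndimage.generic_filter(labels, local_mode, size=size)

        # Four tissue classes on a hypothetical 64 x 64 classification map.
        labels = np.random.default_rng(0).integers(0, 4, (64, 64))
        smoothed = mode_filter(labels, size=5)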

  5. Joint Services Electronics Program.

    DTIC Science & Technology

    1983-09-30

    environment. The research is under three interrelated heads: (1) algebraic methodologies for control systems design, both linear and non-linear, (2) robust... properties of the device. After study of these experimental results, we plan to design a millimeter-wave version of the Gunn device. This will... appropriate dose discretization level for an adjustable width beam. 2) Experimental Device Fabrication: In a collaborative effort with the IC design group

  6. Understanding of Materials State and its Degradation using Non-Linear Ultrasound (NLU) Approaches

    DTIC Science & Technology

    2011-01-01

    Traditional ultrasonic NDE is based on linear theory and normally relies on measuring some particular parameter (sound velocity, attenuation... velocity in the material. In most cases this technique is not considered to be very practical, as very small changes in velocity have to be measured. Hence... nonlinear elasticity) of the material the input wave distorts as it propagates. This is attributed to the difference in the wave velocities of the

  7. Linear Chord Diagrams with Long Chords

    NASA Astrophysics Data System (ADS)

    Sullivan, Everett

    A linear chord diagram of size n is a partition of the first 2n integers into sets of size two. These diagrams appear in many different contexts in combinatorics and other areas of mathematics, particularly knot theory. We explore various constraints that produce diagrams which have no short chords. A number of patterns appear from the results of these constraints which we can prove using techniques ranging from explicit bijections to non-commutative algebra.
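
    These objects are small enough to enumerate directly. A brute-force sketch (my own illustration, not the paper's proof techniques) counts diagrams in which every chord is long, i.e. spans at least k points:

        def diagrams(n):
            """Yield all linear chord diagrams of size n as lists of
            chords (i, j) with i < j over the points 1..2n."""
            def rec(points):
                if not points:
                    yield []
                    return
                first, rest = points[0], points[1:]
                for idx in range(len(rest)):
                    pair = (first, rest[idx])
                    for tail in rec(rest[:idx] + rest[idx + 1:]):
                        yield [pair] + tail
            yield from rec(tuple(range(1, 2 * n + 1)))

        def count_long(n, k):
            """Count diagrams whose every chord (i, j) has j - i >= k."""
            return sum(all(j - i >= k for i, j in d) for d in diagrams(n))

        # (2n-1)!! diagrams in total; for n = 3 there are 15, of which 5
        # avoid chords of length 1 (no two adjacent points paired).
        print(sum(1 for _ in diagrams(3)), count_long(3, 2))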

  8. SLC: The End Game

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raimondi, Pantaleo

    The design of the Stanford Linear Collider (SLC) called for a beam intensity far beyond what was practically achievable. This was due to intrinsic limitations in many subsystems and to a lack of understanding of the new physics of linear colliders. Real progress in improving the SLC performance came from precision, non-invasive diagnostics to measure and monitor the beams and from new techniques to control the emittance dilution and optimize the beams. A major contribution to the success of the last 1997-98 SLC run came from several innovative ideas for improving the performance of the Final Focus (FF). This paper describes some of the problems encountered and techniques used to overcome them. Building on the SLC experience, we will also present a new approach to the FF design for future high energy linear colliders.

  9. Automata-Based Verification of Temporal Properties on Running Programs

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Havelund, Klaus; Lan, Sonie (Technical Monitor)

    2001-01-01

    This paper presents an approach to checking a running program against its Linear Temporal Logic (LTL) specifications. LTL is a widely used logic for expressing properties of programs viewed as sets of executions. Our approach consists of translating LTL formulae to finite-state automata, which are used as observers of the program behavior. The translation algorithm we propose modifies standard LTL to Büchi automata conversion techniques to generate automata that check finite program traces. The algorithm has been implemented in a tool, which has been integrated with the generic JPaX framework for runtime analysis of Java programs.
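
    As a toy illustration of the observer idea (not the JPaX translation algorithm itself; the property and event names are hypothetical), the sketch below monitors the finite-trace response property "every p is eventually followed by q" with a two-state automaton:

        # Two-state observer for "every p is eventually followed by q".
        def make_response_observer():
            pending = [False]                  # mutable automaton state
            def step(events):
                if "p" in events:
                    pending[0] = True          # obligation opened
                if "q" in events:
                    pending[0] = False         # obligation discharged
            def verdict():
                # Finite-trace semantics: accept iff no obligation is
                # still open when the (finite) execution ends.
                return not pending[0]
            return step, verdict

        step, verdict = make_response_observer()
        for events in [{"p"}, set(), {"q"}, {"p"}]:
            step(events)
        print(verdict())   # False: the last 'p' was never answered by a 'q'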

  10. Non-linear dynamic characteristics and optimal control of giant magnetostrictive film subjected to in-plane stochastic excitation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Z. W., E-mail: zhuzhiwen@tju.edu.cn; Tianjin Key Laboratory of Non-linear Dynamics and Chaos Control, 300072, Tianjin; Zhang, W. D., E-mail: zhangwenditju@126.com

    2014-03-15

    The non-linear dynamic characteristics and optimal control of a giant magnetostrictive film (GMF) subjected to in-plane stochastic excitation were studied. Non-linear differential items were introduced to interpret the hysteretic phenomena of the GMF, and the non-linear dynamic model of the GMF subjected to in-plane stochastic excitation was developed. The stochastic stability was analysed, and the probability density function was obtained. The condition of stochastic Hopf bifurcation and noise-induced chaotic response were determined, and the fractal boundary of the system's safe basin was provided. The reliability function was solved from the backward Kolmogorov equation, and an optimal control strategy was proposed using the stochastic dynamic programming method. Numerical simulation shows that the system stability varies with the parameters, and stochastic Hopf bifurcation and chaos appear in the process; the area of the safe basin decreases when the noise intensifies, and the boundary of the safe basin becomes fractal; the system reliability is improved through stochastic optimal control. Finally, the theoretical and numerical results were confirmed by experiments. The results are helpful in the engineering applications of GMF.

  11. Water resources planning and management : A stochastic dual dynamic programming approach

    NASA Astrophysics Data System (ADS)

    Goor, Q.; Pinte, D.; Tilmant, A.

    2008-12-01

    Allocating water between different users and uses, including the environment, is one of the most challenging tasks facing water resources managers and has always been at the heart of Integrated Water Resources Management (IWRM). As water scarcity is expected to increase over time, allocation decisions among the different uses will have to be found taking into account the complex interactions between water and the economy. Hydro-economic optimization models can capture those interactions while prescribing efficient allocation policies. Many hydro-economic models found in the literature are formulated as large-scale non-linear optimization problems (NLP), seeking to maximize net benefits from the system operation while meeting operational and/or institutional constraints, and describing the main hydrological processes. However, those models rarely incorporate the uncertainty inherent to the availability of water, essentially because of the computational difficulties associated with stochastic formulations. The purpose of this presentation is to describe a stochastic programming model that can identify economically efficient allocation policies in large-scale multipurpose multireservoir systems. The model is based on stochastic dual dynamic programming (SDDP), an extension of traditional SDP that is not affected by the curse of dimensionality. SDDP identifies efficient allocation policies while considering the hydrologic uncertainty. The objective function includes the net benefits from the hydropower and irrigation sectors, as well as penalties for not meeting operational and/or institutional constraints. To implement the efficient decomposition scheme that removes the computational burden, the one-stage SDDP problem has to be a linear program. Recent developments improve the representation of the non-linear and mildly non-convex hydropower function through a convex hull approximation of the true hydropower function. The model is illustrated on a cascade of 14 reservoirs on the Nile river basin.
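
    The cut-based idea behind SDDP is easy to sketch for a single reservoir. The toy below is my own illustration, not the authors' model: prices, inflows and capacities are hypothetical, and numerical subgradients stand in for the LP duals a real SDDP code would use to build cuts.

        import numpy as np

        S_MAX = 100.0                              # reservoir capacity
        R_GRID = np.linspace(0.0, 100.0, 401)      # candidate releases
        PRICES = [3.0, 2.0, 1.0]                   # benefit per unit release
        INFLOWS = [10.0, 30.0, 50.0]               # equiprobable inflow scenarios

        def stage_value(s, w, b, cuts):
            """One-stage problem on a grid: pick release r; the future is
            represented by affine cuts a + g * s_next."""
            r = R_GRID[R_GRID <= s + w]
            s_next = np.clip(s + w - r, 0.0, S_MAX)
            future = np.min([a + g * s_next for a, g in cuts], axis=0) if cuts else 0.0
            total = b * r + future
            k = int(np.argmax(total))
            return total[k], r[k]

        # Backward pass: build approximate cuts on each stage's expected value.
        cuts = [[] for _ in PRICES] + [[]]          # terminal stage has no cuts
        for t in reversed(range(len(PRICES))):
            for s in np.linspace(0.0, S_MAX, 11):   # trial storage levels
                v = np.mean([stage_value(s, w, PRICES[t], cuts[t + 1])[0]
                             for w in INFLOWS])
                v1 = np.mean([stage_value(s + 1.0, w, PRICES[t], cuts[t + 1])[0]
                              for w in INFLOWS])
                beta = v1 - v                        # numerical subgradient
                cuts[t].append((v - beta * s, beta)) # cut through (s, v)

        # Forward pass: simulate one inflow sequence under the cut policy.
        s = 50.0
        for t, w in enumerate([10.0, 30.0, 50.0]):
            _, r = stage_value(s, w, PRICES[t], cuts[t + 1])
            print(f"stage {t}: storage {s:5.1f} -> release {r:5.1f}")
            s = float(np.clip(s + w - r, 0.0, S_MAX))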

  12. The Development of Program for Enhancing Learning Management Competency of Teachers in Non-Formal and Informal Education Centers

    ERIC Educational Resources Information Center

    Jutasong, Chanokpon; Sirisuthi, Chaiyut; Phusri-on, Songsak

    2016-01-01

    The objectives of this research are: 1) to study factors and indicators, 2) to study current situations, desirable situations and techniques, 3) to develop the Program, and 4) to study the effect of Program. It comprised 4 phases: (1) studying the factors and indicators; (2) studying the current situations, desirable situations and techniques; (3)…

  13. Electro-Optic Beam Steering Using Non-Linear Organic Materials

    DTIC Science & Technology

    1993-08-01

    York (SUNY), Buffalo, for potential application to the Hughes electro-optic beam deflector device. Evaluations include electro-optic coefficient... response time, transmission, and resistivity. Electro-optic coefficient measurements were made at 633 nm using a simple reflection technique. The

  14. A Large-Particle Monte Carlo Code for Simulating Non-Linear High-Energy Processes Near Compact Objects

    NASA Technical Reports Server (NTRS)

    Stern, Boris E.; Svensson, Roland; Begelman, Mitchell C.; Sikora, Marek

    1995-01-01

    High-energy radiation processes in compact cosmic objects are often expected to have a strongly non-linear behavior. Such behavior is shown, for example, by electron-positron pair cascades and the time evolution of relativistic proton distributions in dense radiation fields. Three independent techniques have been developed to simulate these non-linear problems: the kinetic equation approach; the phase-space density (PSD) Monte Carlo method; and the large-particle (LP) Monte Carlo method. In this paper, we present the latest version of the LP method and compare it with the other methods. The efficiency of the method in treating geometrically complex problems is illustrated by showing results of simulations of 1D, 2D and 3D systems. The method is shown to be powerful enough to treat non-spherical geometries, including such effects as bulk motion of the background plasma, reflection of radiation from cold matter, and anisotropic distributions of radiating particles. It can therefore be applied to simulate high-energy processes in such astrophysical systems as accretion discs with coronae, relativistic jets, pulsar magnetospheres and gamma-ray bursts.

  15. Multidimensional custom-made non-linear microscope: from ex-vivo to in-vivo imaging

    NASA Astrophysics Data System (ADS)

    Cicchi, R.; Sacconi, L.; Jasaitis, A.; O'Connor, R. P.; Massi, D.; Sestini, S.; de Giorgi, V.; Lotti, T.; Pavone, F. S.

    2008-09-01

    We have built a custom-made multidimensional non-linear microscope equipped with a combination of several non-linear laser imaging techniques involving fluorescence lifetime, multispectral two-photon and second-harmonic generation imaging. The optical system was mounted on a vertical honeycomb breadboard in an upright configuration, using two galvo-mirrors relayed by two spherical mirrors as scanners. A double detection system working in non-descanning mode has allowed both photon counting and a proportional regime. This experimental setup, offering high spatial (micrometric) and temporal (sub-nanosecond) resolution, has been used to image both ex-vivo and in-vivo biological samples, including cells, tissues, and living animals. Multidimensional imaging was used to spectroscopically characterize human skin lesions, such as malignant melanoma and naevi. Moreover, two-color detection of two-photon excited fluorescence was applied to in-vivo imaging of the intact neocortex of living mice, as well as to induce neuronal microlesions by femtosecond laser burning. The presented applications demonstrate the capability of the instrument to be used in a wide range of biological and biomedical studies.

  16. Automating approximate Bayesian computation by local linear regression.

    PubMed

    Thornton, Kevin R

    2009-07-07

    In several biological contexts, parameter inference often relies on computationally-intensive techniques. "Approximate Bayesian Computation", or ABC, methods based on summary statistics have become increasingly popular. A particular flavor of ABC based on using a linear regression to approximate the posterior distribution of the parameters, conditional on the summary statistics, is computationally appealing, yet no standalone tool exists to automate the procedure. Here, I describe a program to implement the method. The software package ABCreg implements the local linear-regression approach to ABC. The advantages are: 1. The code is standalone, and fully documented. 2. The program will automatically process multiple data sets, and create unique output files for each (which may be processed immediately in R), facilitating the testing of inference procedures on simulated data, or the analysis of multiple data sets. 3. The program implements two different transformation methods for the regression step. 4. Analysis options are controlled on the command line by the user, and the program is designed to output warnings for cases where the regression fails. 5. The program does not depend on any particular simulation machinery (coalescent, forward-time, etc.), and therefore is a general tool for processing the results from any simulation. 6. The code is open-source, and modular. Examples of applying the software to empirical data from Drosophila melanogaster, and testing the procedure on simulated data, are shown. In practice, ABCreg simplifies implementing ABC based on local linear regression.
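
    Not the ABCreg code itself, but the regression-adjustment step it implements is compact enough to sketch in numpy, here on a toy Normal-mean problem with hypothetical numbers:

        import numpy as np

        rng = np.random.default_rng(1)

        # Toy model: data ~ Normal(mu, 1); summary statistic = sample mean.
        def simulate(mu, n=50):
            return rng.normal(mu, 1.0, n).mean()

        s_obs = 0.3
        prior = rng.uniform(-5.0, 5.0, 50_000)              # prior draws for mu
        summaries = np.array([simulate(mu) for mu in prior])

        # Rejection step: keep the draws whose summaries are closest to s_obs.
        d = np.abs(summaries - s_obs)
        delta = np.quantile(d, 0.02)
        keep = d <= delta
        theta, s, w = prior[keep], summaries[keep], 1.0 - (d[keep] / delta) ** 2

        # Local linear regression (Epanechnikov-weighted): regress theta on
        # (s - s_obs) and project the accepted draws onto s = s_obs.
        X = np.column_stack([np.ones(s.size), s - s_obs])
        Xw = X * w[:, None]
        coef = np.linalg.solve(X.T @ Xw, Xw.T @ theta)
        theta_adj = theta - coef[1] * (s - s_obs)

        print(theta_adj.mean(), theta_adj.std())   # adjusted posterior sample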

  17. A novel technique to solve nonlinear higher-index Hessenberg differential-algebraic equations by Adomian decomposition method.

    PubMed

    Benhammouda, Brahim

    2016-01-01

    Since 1980, the Adomian decomposition method (ADM) has been extensively used as a simple powerful tool that applies directly to solve different kinds of nonlinear equations including functional, differential, integro-differential and algebraic equations. However, for differential-algebraic equations (DAEs) the ADM has been applied in only four earlier works. There, the DAEs are first pre-processed by some transformations like index reductions before applying the ADM. The drawback of such transformations is that they can involve complex algorithms, can be computationally expensive and may lead to non-physical solutions. The purpose of this paper is to propose a novel technique that applies the ADM directly to solve a class of nonlinear higher-index Hessenberg DAE systems efficiently. The main advantage of this technique is that, firstly, it avoids complex transformations like index reductions and leads to a simple general algorithm. Secondly, it reduces the computational work by solving only linear algebraic systems with a constant coefficient matrix at each iteration, except for the first iteration where the algebraic system is nonlinear (if the DAE is nonlinear with respect to the algebraic variable). To demonstrate the effectiveness of the proposed technique, we apply it to a nonlinear index-three Hessenberg DAE system with nonlinear algebraic constraints. This technique is straightforward and can be programmed in Maple or Mathematica to simulate real application problems.
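
    A minimal sketch of the ADM machinery on a scalar toy problem (my own example, not the paper's DAE systems): for y' = -y^2 with y(0) = 1, the Adomian polynomials of the nonlinearity N(y) = y^2 generate the power series of the exact solution 1/(1 + t).

        import sympy as sp

        t, lam = sp.symbols('t lambda')

        # ADM: y = sum of y_n, with y_0 = y(0) and y_{n+1} = -Integral(A_n),
        # where A_n is the n-th Adomian polynomial of N(y) = y**2.
        N = lambda u: u**2
        terms = [sp.Integer(1)]                     # y_0 = y(0) = 1
        for n in range(5):
            partial = sum(lam**k * yk for k, yk in enumerate(terms))
            A_n = sp.diff(N(partial), lam, n).subs(lam, 0) / sp.factorial(n)
            terms.append(sp.integrate(-sp.expand(A_n), (t, 0, t)))

        print(sp.expand(sum(terms)))   # 1 - t + t**2 - t**3 + t**4 - t**5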

  18. A microcomputer program for analysis of nucleic acid hybridization data

    PubMed Central

    Green, S.; Field, J.K.; Green, C.D.; Beynon, R.J.

    1982-01-01

    The study of nucleic acid hybridization is facilitated by computer-mediated fitting of theoretical models to experimental data. This paper describes a non-linear curve fitting program, using the `Patternsearch' algorithm, written in BASIC for the Apple II microcomputer. The advantages and disadvantages of using a microcomputer for local data processing are discussed. PMID:7071017
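
    Pattern search is a derivative-free direct-search method. A minimal modern sketch (in Python rather than Apple II BASIC, with a hypothetical saturation-curve model standing in for the hybridization equations):

        import numpy as np

        def pattern_search(f, x0, step=1.0, shrink=0.5, tol=1e-6, max_iter=500):
            """Direct-search minimizer: poll +/- step along each coordinate,
            accept improvements, and shrink the step when none is found."""
            x = np.asarray(x0, dtype=float)
            fx = f(x)
            for _ in range(max_iter):
                improved = False
                for i in range(x.size):
                    for delta in (step, -step):
                        trial = x.copy()
                        trial[i] += delta
                        ft = f(trial)
                        if ft < fx:
                            x, fx, improved = trial, ft, True
                if not improved:
                    step *= shrink
                    if step < tol:
                        break
            return x, fx

        # Hypothetical saturation-curve fit y = a*x/(b + x) by least squares.
        rng = np.random.default_rng(0)
        xdata = np.linspace(0.1, 10.0, 30)
        ydata = 2.0 * xdata / (1.5 + xdata) + 0.01 * rng.normal(size=30)
        sse = lambda p: np.sum((ydata - p[0] * xdata / (p[1] + xdata)) ** 2)
        print(pattern_search(sse, [1.0, 1.0]))      # converges near (2.0, 1.5)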

  19. An object-oriented computational model to study cardiopulmonary hemodynamic interactions in humans.

    PubMed

    Ngo, Chuong; Dahlmanns, Stephan; Vollmer, Thomas; Misgeld, Berno; Leonhardt, Steffen

    2018-06-01

    This work introduces an object-oriented computational model to study cardiopulmonary interactions in humans. Modeling was performed in the object-oriented programming language Matlab Simscape, where model components are connected with each other through physical connections. Constitutive and phenomenological equations of model elements are implemented based on their non-linear pressure-volume or pressure-flow relationships. The model includes more than 30 physiological compartments, which belong either to the cardiovascular or the respiratory system. The model considers non-linear behaviors of veins, pulmonary capillaries, collapsible airways, alveoli, and the chest wall. Model parameters were derived from literature values. Model validation was performed by comparing simulation results with clinical and animal data reported in the literature. The model is able to provide quantitative values of alveolar, pleural, interstitial, aortic and ventricular pressures, as well as heart and lung volumes during spontaneous breathing and mechanical ventilation. Results of the baseline simulation demonstrate the consistency of the assigned parameters. Simulation results during mechanical ventilation with PEEP trials can be directly compared with animal and clinical data given in the literature. Object-oriented programming languages can be used to model interconnected systems including model non-linearities. The model provides a useful tool to investigate cardiopulmonary activity during spontaneous breathing and mechanical ventilation.

  20. A holistic approach to movement education in sport and fitness: a systems based model.

    PubMed

    Polsgrove, Myles Jay

    2012-01-01

    The typical model used by movement professionals to enhance performance relies on the notion that a linear increase in load results in steady and progressive gains, whereby the greater the effort, the greater the gains in performance. Traditional approaches to movement progression typically rely on the proper sequencing of extrinsically based activities to facilitate the individual in reaching performance objectives. However, physical rehabilitation and physical performance rarely progress in such a linear fashion; instead they tend to evolve non-linearly and rather unpredictably. A dynamic system can be described as an entity that self-organizes into increasingly complex forms. Applying this view to the human body, practitioners could facilitate non-linear performance gains through a systems-based programming approach. Utilizing a dynamic systems view, the Holistic Approach to Movement Education (HADME) is a model designed to optimize performance by accounting for the non-linear and self-organizing traits associated with human movement. In this model, gains in performance occur through advancing individual perspectives and through optimizing sub-system performance. This inward shift of the focus of performance creates a sharper self-awareness and may lead to more optimal movements.

  1. Design and analysis of linear cascade DNA hybridization chain reactions using DNA hairpins

    NASA Astrophysics Data System (ADS)

    Bui, Hieu; Garg, Sudhanshu; Miao, Vincent; Song, Tianqi; Mokhtar, Reem; Reif, John

    2017-01-01

    DNA self-assembly has been employed non-conventionally to construct nanoscale structures and dynamic nanoscale machines. The technique of hybridization chain reactions by triggered self-assembly has been shown to form various interesting nanoscale structures, ranging from simple linear DNA oligomers to dendritic DNA structures. Inspired by earlier triggered self-assembly works, we present a system for controlled self-assembly of linear cascade DNA hybridization chain reactions using nine distinct DNA hairpins. NUPACK is employed to assist in designing DNA sequences and Matlab has been used to simulate DNA hairpin interactions. Gel electrophoresis and ensemble fluorescence reaction kinetics data indicate strong evidence of linear cascade DNA hybridization chain reactions. The completion half-time of the proposed linear cascade reactions depends linearly on the number of hairpins.

  2. Detection of Genetically Modified Sugarcane by Using Terahertz Spectroscopy and Chemometrics

    NASA Astrophysics Data System (ADS)

    Liu, J.; Xie, H.; Zha, B.; Ding, W.; Luo, J.; Hu, C.

    2018-03-01

    A methodology is proposed to distinguish genetically modified sugarcane from non-genetically modified sugarcane by using terahertz spectroscopy and chemometrics techniques, including linear discriminant analysis (LDA), support vector machine-discriminant analysis (SVM-DA), and partial least squares-discriminant analysis (PLS-DA). The classification rates of the above-mentioned methods are compared, and different types of preprocessing are considered. According to the experimental results, the best option is PLS-DA, with an identification rate of 98%. The results indicate that THz spectroscopy and chemometrics techniques are a powerful tool to identify genetically modified and non-genetically modified sugarcane.
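
    Of the three classifiers, PLS-DA amounts to PLS regression on a dummy-coded class label with a threshold on the predicted score. A sketch with synthetic stand-in spectra (the real study used measured THz spectra; all numbers below are hypothetical):

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import train_test_split

        # Synthetic stand-ins for THz absorbance spectra with a small
        # class-dependent shift (0 = non-GM, 1 = GM).
        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 300))
        y = rng.integers(0, 2, 120)
        X[y == 1] += 0.15

        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

        # PLS-DA: PLS regression on the 0/1 class label, thresholded at 0.5.
        pls = PLSRegression(n_components=5).fit(Xtr, ytr)
        pred = (pls.predict(Xte).ravel() > 0.5).astype(int)
        print("identification rate:", (pred == yte).mean())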

  3. Non-destructive analysis of sensory traits of dry-cured loins by MRI-computer vision techniques and data mining.

    PubMed

    Caballero, Daniel; Antequera, Teresa; Caro, Andrés; Ávila, María Del Mar; G Rodríguez, Pablo; Perez-Palacios, Trinidad

    2017-07-01

    Magnetic resonance imaging (MRI) combined with computer vision techniques has been proposed as an alternative or complementary technique to determine the quality parameters of food in a non-destructive way. The aim of this work was to analyze the sensory attributes of dry-cured loins using this technique. For that, different MRI acquisition sequences (spin echo, gradient echo and turbo 3D), algorithms for MRI analysis (GLCM, NGLDM, GLRLM and GLCM-NGLDM-GLRLM) and predictive data mining techniques (multiple linear regression and isotonic regression) were tested. The correlation coefficient (R) and mean absolute error (MAE) were used to validate the prediction results. The combination of spin echo, GLCM and isotonic regression produced the most accurate results. In addition, the MRI data from dry-cured loins seem to be more suitable than the data from fresh loins. The application of predictive data mining techniques to computational texture features from the MRI data of loins enables the determination of the sensory traits of dry-cured loins in a non-destructive way.
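
    The winning pipeline (GLCM texture features feeding an isotonic regression) can be sketched end to end. The images and "sensory scores" below are synthetic stand-ins for MRI slices and panel scores, and graycomatrix is the scikit-image >= 0.19 spelling:

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops
        from sklearn.isotonic import IsotonicRegression

        rng = np.random.default_rng(0)

        def texture_feature(img):
            """One GLCM descriptor (contrast) from an 8-bit image."""
            glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                                symmetric=True, normed=True)
            return graycoprops(glcm, 'contrast')[0, 0]

        # Synthetic stand-ins: the noise amplitude plays the role of tissue
        # texture, and a 'sensory score' grows (noisily) with it.
        amps = np.linspace(10.0, 120.0, 40)
        imgs = [np.clip(128 + a * rng.standard_normal((64, 64)), 0, 255).astype(np.uint8)
                for a in amps]
        scores = amps / 12.0 + rng.normal(0.0, 0.5, amps.size)

        x = np.array([texture_feature(im) for im in imgs])
        iso = IsotonicRegression(out_of_bounds='clip').fit(x, scores)
        print("R:", np.corrcoef(scores, iso.predict(x))[0, 1])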

  4. Non-linear optical techniques and optical properties of condensed molecular systems

    NASA Astrophysics Data System (ADS)

    Citroni, Margherita

    2013-06-01

    Structure, dynamics, and optical properties of molecular systems can be largely modified by the applied pressure, with remarkable consequences on their chemical stability. Several examples of selective reactions yielding technologically attractive products can be cited, which are particularly efficient when photochemical effects are exploited in conjunction with the structural conditions attained at high density. Non-linear optical techniques are a basic tool to unveil key aspects of the chemical reactivity and dynamic properties of molecules. Their application to high-pressure samples is experimentally challenging, mainly because of the small sample dimensions and of the non-linear effects generated in the anvil materials. In this talk I will present results on the electronic spectra of several aromatic crystals obtained through two-photon induced fluorescence and two-photon excitation profiles measured as a function of pressure (typically up to about 25 GPa), and discuss the relationship between the pressure-induced modifications of the electronic structure and the chemical reactivity at high pressure. I will also present the first successful pump-probe infrared measurement performed as a function of pressure on a condensed molecular system. The system under examination is liquid water, in a sapphire anvil cell, up to 1 GPa along isotherms at 298 and 363 K. These measurements give a new enlightening insight into the dynamical properties of low- and high-density water allowing a definition of the two structures.

  5. The Use of Non-Standard Devices in Finite Element Analysis

    NASA Technical Reports Server (NTRS)

    Schur, Willi W.; Broduer, Steve (Technical Monitor)

    2001-01-01

    A general mathematical description of the response behavior of thin-skin pneumatic envelopes and many other membrane and cable structures produces under-constrained systems that pose severe difficulties to analysis. These systems are mobile, and the general mathematical description exposes the mobility. Yet the response behavior of special under-constrained structures under special loadings can be accurately predicted using a constrained mathematical description. The static response behavior of systems that are infinitesimally mobile, such as a non-slack membrane subtended from a rigid or elastic boundary frame, can be easily analyzed using such a general mathematical description as afforded by the non-linear finite element method with an implicit solution scheme, if the incremental loading is guided through a suitable path. Similarly, if such structures are assembled with structural lack of fit that provides suitable self-stress, then dynamic response behavior can be predicted by the non-linear finite element method and an implicit solution scheme. An explicit solution scheme is available for evolution problems. Such a scheme can be used via the method of dynamic relaxation to obtain the solution to a static problem. In some sense, pneumatic envelopes and many other compliant structures can be said to have destiny under a specified loading system. What that means to the analyst is that what happens on the evolution path of the solution is irrelevant as long as equilibrium is achieved at destiny under full load and that equilibrium is stable in the vicinity of that load. The purpose of this paper is to alert practitioners to the fact that non-standard procedures in finite element analysis are useful and can be legitimate, although they burden their users with the requirement to use special caution. Some interesting findings that are useful to the US Scientific Balloon Program and that could not be obtained without non-standard techniques are presented.

  6. A Comparison of Potential IM-CW Lidar Modulation Techniques for ASCENDS CO2 Column Measurements From Space

    NASA Technical Reports Server (NTRS)

    Campbell, Joel F.; Lin, Bing; Nehrir, Amin R.; Harrison, F. Wallace; Obland, Michael D.; Ismail, Syed

    2014-01-01

    Global atmospheric carbon dioxide (CO2) measurements through the Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) Decadal Survey recommended space mission are critical for improving our understanding of CO2 sources and sinks. IM-CW (Intensity Modulated Continuous Wave) lidar techniques are investigated as a means of facilitating CO2 measurements from space to meet the ASCENDS science requirements. In previous laboratory and flight experiments we have successfully used linear swept frequency modulation to discriminate surface lidar returns from intermediate aerosol and cloud contamination. Furthermore, high accuracy and precision ranging to the surface as well as to the top of intermediate clouds, which is a requirement for the inversion of the CO2 column-mixing ratio from the instrument optical depth measurements, has been demonstrated with the linear swept frequency modulation technique. We are concurrently investigating advanced techniques to help improve the auto-correlation properties of the transmitted waveform implemented through physical hardware to make cloud rejection more robust in special restricted scenarios. Several different carrier based modulation techniques are compared including orthogonal linear swept, orthogonal non-linear swept, and Binary Phase Shift Keying (BPSK). Techniques are investigated that reduce or eliminate sidelobes. These techniques have excellent auto-correlation properties while possessing a finite bandwidth (by way of a new cyclic digital filter), which will reduce bias error in the presence of multiple scatterers. Our analyses show that the studied modulation techniques can increase the accuracy of CO2 column measurements from space. A comparison of various properties such as signal to noise ratio (SNR) and time-bandwidth product are discussed.
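
    The ranging properties discussed above come down to the autocorrelation of the intensity-modulation waveform. A small sketch (with hypothetical sweep parameters, not the instrument's actual settings) generates a linear swept IM envelope and checks its sidelobe level:

        import numpy as np

        fs, T = 1.0e6, 0.01                    # sample rate (Hz), sweep length (s)
        t = np.arange(int(fs * T)) / fs
        f0, f1 = 50e3, 150e3                   # swept modulation band (hypothetical)

        # Intensity modulation with a linear frequency sweep; the envelope is
        # kept non-negative because it modulates laser intensity, not field.
        phase = 2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / T * t**2)
        m = 0.5 * (1.0 + np.cos(phase))

        # Circular autocorrelation via FFT: a sharp main lobe gives precise
        # ranging; sidelobes let cloud/aerosol returns bias the surface return.
        ac = np.fft.ifft(np.abs(np.fft.fft(m - m.mean())) ** 2).real
        ac /= ac[0]
        sidelobes = np.abs(ac[100:-100])       # exclude main lobe and its mirror
        print("peak-to-max-sidelobe ratio:", 1.0 / sidelobes.max())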

  7. Active galactic nuclei as cosmological probes.

    NASA Astrophysics Data System (ADS)

    Lusso, Elisabeta; Risaliti, Guido

    2018-01-01

    I will present the latest results of our analysis of the non-linear X-ray to UV relation in a sample of optically selected quasars from the Sloan Digital Sky Survey, cross-matched with the most recent XMM-Newton and Chandra catalogues. I will show that this correlation is not only very tight, but can potentially be even tighter by including a further dependence on the emission line full-width at half maximum. This result implies that the non-linear X-ray to optical-ultraviolet luminosity relation is the manifestation of a ubiquitous physical mechanism, whose details are still unknown, that regulates the energy transfer from the accretion disc to the X-ray emitting corona in quasars. I will discuss the perspectives of AGN in the context of observational cosmology and introduce a novel technique to test the cosmological model using quasars as “standard candles”, employing the non-linear X-ray to UV relation as an absolute distance indicator.
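
    In the literature this relation is usually written in log space; the slope value below is indicative of published fits, not taken from this abstract. Rewriting luminosities as fluxes shows where the luminosity distance enters:

        \log L_{\mathrm{X}} = \gamma \,\log L_{\mathrm{UV}} + \beta,
        \qquad \gamma \approx 0.6 .
        % Writing L = 4\pi d_L^2 F turns the luminosity relation into one
        % between observed fluxes and the luminosity distance d_L:
        \log F_{\mathrm{X}} = \gamma \,\log F_{\mathrm{UV}}
          + 2(\gamma - 1)\log d_L + \beta' ,
        % so calibrated values of \gamma and \beta' convert a measured
        % (F_UV, F_X) pair into a distance -- the "standard candle" step.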

  8. NASA's mobile satellite communications program; ground and space segment technologies

    NASA Technical Reports Server (NTRS)

    Naderi, F.; Weber, W. J.; Knouse, G. H.

    1984-01-01

    This paper describes the Mobile Satellite Communications Program of the United States National Aeronautics and Space Administration (NASA). The program's objectives are to facilitate the deployment of the first generation commercial mobile satellite by the private sector, and to technologically enable future generations by developing advanced and high-risk ground and space segment technologies. These technologies are aimed at mitigating severe shortages of spectrum, orbital slots, and spacecraft EIRP which are expected to plague the high-capacity mobile satellite systems of the future. After a brief introduction of the concept of mobile satellite systems and their expected evolution, this paper outlines the critical ground and space segment technologies. Next, the Mobile Satellite Experiment (MSAT-X) is described. MSAT-X is the framework through which NASA will develop advanced ground segment technologies. An approach is outlined for the development of conformal vehicle antennas, spectrum- and power-efficient speech codecs, modulation techniques for use in non-linear faded channels, and efficient multiple access schemes. Finally, the paper concludes with a description of the current and planned NASA activities aimed at developing the complex large multibeam spacecraft antennas needed for future generation mobile satellite systems.

  9. On the complexity of a combined homotopy interior method for convex programming

    NASA Astrophysics Data System (ADS)

    Yu, Bo; Xu, Qing; Feng, Guochen

    2007-03-01

    In [G.C. Feng, Z.H. Lin, B. Yu, Existence of an interior pathway to a Karush-Kuhn-Tucker point of a nonconvex programming problem, Nonlinear Anal. 32 (1998) 761-768; G.C. Feng, B. Yu, Combined homotopy interior point method for nonlinear programming problems, in: H. Fujita, M. Yamaguti (Eds.), Advances in Numerical Mathematics, Proceedings of the Second Japan-China Seminar on Numerical Mathematics, Lecture Notes in Numerical and Applied Analysis, vol. 14, Kinokuniya, Tokyo, 1995, pp. 9-16; Z.H. Lin, B. Yu, G.C. Feng, A combined homotopy interior point method for convex programming problem, Appl. Math. Comput. 84 (1997) 193-211], a combined homotopy was constructed for solving non-convex programming and convex programming with weaker conditions, without assuming the logarithmic barrier function to be strictly convex and the solution set to be bounded. It was proven that a smooth interior path from an interior point of the feasible set to a K-K-T point of the problem exists. This shows that combined homotopy interior point methods can solve problems that commonly used interior point methods cannot solve. However, so far, there is no result on its complexity, even for linear programming. The main difficulty is that the objective function is not monotonically decreasing on the combined homotopy path. In this paper, by using a piecewise technique, under commonly used conditions, polynomiality of a combined homotopy interior point method is given for convex nonlinear programming.

  10. Method and apparatus for measuring the intensity and phase of one or more ultrashort light pulses and for measuring optical properties of materials

    DOEpatents

    Trebino, Rick P.; DeLong, Kenneth W.

    1996-01-01

    The intensity and phase of one or more ultrashort light pulses are obtained using a non-linear optical medium. Information derived from the light pulses is also used to measure optical properties of materials. Various retrieval techniques are employed. Both "instantaneously" and "non-instantaneously" responding optical mediums may be used.

  11. On structural identifiability analysis of the cascaded linear dynamic systems in isotopically non-stationary 13C labelling experiments.

    PubMed

    Lin, Weilu; Wang, Zejian; Huang, Mingzhi; Zhuang, Yingping; Zhang, Siliang

    2018-06-01

    The isotopically non-stationary 13C labelling experiments, as an emerging experimental technique, can estimate the intracellular fluxes of the cell culture during an isotopic transient period. However, to the best of our knowledge, the issue of the structural identifiability analysis of non-stationary isotope experiments is not well addressed in the literature. In this work, the local structural identifiability analysis for non-stationary cumomer balance equations is conducted based on the Taylor series approach. The numerical rank of the Jacobian matrices of the finite extended time derivatives of the measured fractions with respect to the free parameters is taken as the criterion. It turns out that only a single time point is necessary to achieve the structural identifiability analysis of the cascaded linear dynamic system of non-stationary isotope experiments. The equivalence between the local structural identifiability of the cascaded linear dynamic systems and the local optimum condition of the nonlinear least squares problem is elucidated in this work. Optimal measurement sets can then be determined for the metabolic network. Two simulated metabolic networks are adopted to demonstrate the utility of the proposed method.
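
    The Taylor-series test is mechanical enough to sketch symbolically. Below, a hypothetical two-pool linear cascade stands in for the cumomer balance equations: output derivatives at t = 0 are stacked and the Jacobian rank with respect to the flux parameters is checked.

        import sympy as sp

        v1, v2 = sp.symbols('v1 v2', positive=True)  # hypothetical flux parameters
        A = sp.Matrix([[-v1, 0], [v1, -v2]])         # linear cascade: x' = A x
        x0 = sp.Matrix([1, 0])                       # initial labelling state
        C = sp.Matrix([[0, 1]])                      # measured fraction: second pool

        # Stack y(0), y'(0), y''(0), ...; for x' = A x, y^(k)(0) = C A^k x0.
        derivs = sp.Matrix([(C * A**k * x0)[0] for k in range(4)])
        J = derivs.jacobian([v1, v2])
        print(J.rank())   # 2 = number of parameters -> locally identifiable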

  12. Non-radial stellar oscillations: application to polytropic configurations and He white dwarf models

    NASA Astrophysics Data System (ADS)

    Córsico, A. H.; Benvenuto, O. G.

    Recently in our Observatory we have developed a new stellar pulsation code, independently of other workers. This program computes eigenvalues (eigenfrequencies) and eigenfunctions of non-radial modes in spherical, unperturbed stellar models. To accomplish these calculations, the fourth-order eigenvalue problem (in the linear adiabatic approach) is solved by means of the well-known Henyey technique on the finite-difference scheme which replaces the differential equations of the problem. In order to test the code, we have computed numerous eigenmodes in polytropic configurations for several values of the index n. In this communication we show the excellent agreement between our results and the best available in the literature. We also present results of oscillations in models of white dwarf stars with homogeneous chemical composition (pure helium). These models have been obtained with the stellar evolution code of our Observatory. The calculations outlined above form a first preliminary step in a larger project whose main purpose is the study of the pulsational properties of DA, DB and DO white dwarf stars. Detailed investigations have demonstrated that such objects pulsate in non-radial g-modes with eigenperiods in the range 100-2000 sec.

  13. Time-Frequency Analyses of Tide-Gauge Sensor Data

    PubMed Central

    Erol, Serdar

    2011-01-01

    The real-world phenomena being observed by sensors are generally non-stationary in nature. The classical linear techniques for analysis and modeling of natural time-series observations are inefficient and should be replaced by non-linear techniques, whose theoretical aspects and performance vary. Hence, adopting the most appropriate technique and strategy is essential in evaluating sensors’ data. In this study, two different time-series analysis approaches, namely least squares spectral analysis (LSSA) and wavelet analysis (continuous wavelet transform, cross wavelet transform and wavelet coherence algorithms as extensions of wavelet analysis), are applied to sea-level observations recorded by tide-gauge sensors, and the advantages and drawbacks of these methods are reviewed. The analyses were carried out using sea-level observations recorded at the Antalya-II and Erdek tide-gauge stations of the Turkish National Sea-Level Monitoring System. In the analyses, the useful information hidden in the noisy signals was detected, and the common features between the two sea-level time series were clarified. The tide-gauge records have data gaps in time because of issues such as instrumental shortcomings and power outages. Concerning the difficulties of the time-frequency analysis of data with voids, the sea-level observations were preprocessed, and the missing parts were predicted using the neural network method prior to the analysis. In conclusion, the merits and limitations of the techniques in evaluating non-stationary observations by means of tide-gauge sensor records were documented, and an analysis strategy for the sequential sensor observations was presented. PMID:22163829
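
    As an illustration of the wavelet step (not the study's code; PyWavelets and synthetic data stand in for the Antalya-II/Erdek records):

        import numpy as np
        import pywt

        # Synthetic hourly sea level: a semidiurnal tide, a slow oscillation
        # and noise, standing in for a tide-gauge record.
        fs = 24.0                                  # samples per day
        t = np.arange(0.0, 60.0, 1.0 / fs)         # 60 days
        rng = np.random.default_rng(0)
        sea = (0.5 * np.sin(2 * np.pi * 1.93 * t)  # ~M2 tide, cycles per day
               + 0.2 * np.sin(2 * np.pi * t / 15.0)
               + 0.05 * rng.standard_normal(t.size))

        # Continuous wavelet transform: a time-frequency picture that copes
        # with the non-stationarity plain Fourier analysis misses.
        scales = np.arange(1, 128)
        coeffs, freqs = pywt.cwt(sea, scales, 'morl', sampling_period=1.0 / fs)
        power = np.abs(coeffs) ** 2                # scalogram (scale x time)
        print(power.shape, freqs.min(), freqs.max())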

  14. Assessing the impact of non-tidal atmospheric loading on a Kalman filter-based terrestrial reference frame

    NASA Astrophysics Data System (ADS)

    Abbondanza, Claudio; Altamimi, Zuheir; Chin, Toshio; Collilieux, Xavier; Dach, Rolf; Gross, Richard; Heflin, Michael; König, Rolf; Lemoine, Frank; Macmillan, Dan; Parker, Jay; van Dam, Tonie; Wu, Xiaoping

    2014-05-01

    The International Terrestrial Reference Frame (ITRF) adopts a piece-wise linear model to parameterize regularized station positions and velocities. The space-geodetic (SG) solutions from VLBI, SLR, GPS and DORIS used as input in the ITRF combination process account for tidal loading deformations, but ignore the non-tidal part. As a result, the non-linear signal observed in the time series of SG-derived station positions in part reflects non-tidal loading displacements not introduced in the SG data reduction. In this analysis, we assess the impact of non-tidal atmospheric loading (NTAL) corrections on the TRF computation. Focusing on the a-posteriori approach, (i) the NTAL model derived from the National Centers for Environmental Prediction (NCEP) surface pressure is removed from the SINEX files of the SG solutions used as inputs to the TRF determinations; (ii) adopting a Kalman-filter-based approach, two distinct linear TRFs are estimated combining the 4 SG solutions with (corrected TRF solution) and without the NTAL displacements (standard TRF solution). Linear fits (offset and atmospheric velocity) of the NTAL displacements removed during step (i) are estimated accounting for the station position discontinuities introduced in the SG solutions and adopting different weighting strategies. The NTAL-derived (atmospheric) velocity fields are compared to those obtained from the TRF reductions during step (ii). The consistency between the atmospheric and the TRF-derived velocity fields is examined. We show how the presence of station position discontinuities in SG solutions degrades the agreement between the velocity fields and compare the effect of the different weighting structures adopted while estimating the linear fits to the NTAL displacements. Finally, we evaluate the effect of restoring the atmospheric velocities determined through the linear fits of the NTAL displacements to the single-technique linear reference frames obtained by stacking the standard SG SINEX files. Differences between the velocity fields obtained restoring the NTAL displacements and the standard stacked linear reference frames are discussed.

  15. Detector noise statistics in the non-linear regime

    NASA Technical Reports Server (NTRS)

    Shopbell, P. L.; Bland-Hawthorn, J.

    1992-01-01

    The statistical behavior of an idealized linear detector in the presence of threshold and saturation levels is examined. It is assumed that the noise is governed by the statistical fluctuations in the number of photons emitted by the source during an exposure. Since physical detectors cannot have infinite dynamic range, our model illustrates that all devices have non-linear regimes, particularly at high count rates. The primary effect is a decrease in the statistical variance about the mean signal due to a portion of the expected noise distribution being removed via clipping. Higher order statistical moments are also examined, in particular, skewness and kurtosis. In principle, the expected distortion in the detector noise characteristics can be calibrated using flatfield observations with count rates matched to the observations. For this purpose, some basic statistical methods that utilize Fourier analysis techniques are described.
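
    The clipping effect on the moments is easy to reproduce numerically. The sketch below (toy mean level and saturation value, not calibrated to any real device) pushes Poisson photon noise through an ideal saturating detector:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)

        # Idealized detector: Poisson photon counts clipped at saturation.
        rate, full_well = 1000.0, 1050
        raw = rng.poisson(rate, 1_000_000)
        clipped = np.minimum(raw, full_well)

        # Clipping removes part of the upper tail, so the variance falls
        # below the Poisson value (variance = mean) and skewness appears.
        for name, x in (("linear", raw), ("saturating", clipped)):
            print(f"{name:10s} var/mean={x.var() / x.mean():.3f} "
                  f"skew={stats.skew(x):.3f} kurt={stats.kurtosis(x):.3f}")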

  16. In-vivo Imaging of Magnetic Fields Induced by Transcranial Direct Current Stimulation (tDCS) in Human Brain using MRI

    NASA Astrophysics Data System (ADS)

    Jog, Mayank V.; Smith, Robert X.; Jann, Kay; Dunn, Walter; Lafon, Belen; Truong, Dennis; Wu, Allan; Parra, Lucas; Bikson, Marom; Wang, Danny J. J.

    2016-10-01

    Transcranial direct current stimulation (tDCS) is an emerging non-invasive neuromodulation technique that applies mA currents at the scalp to modulate cortical excitability. Here, we present a novel magnetic resonance imaging (MRI) technique, which detects magnetic fields induced by tDCS currents. This technique is based on Ampere’s law and exploits the linear relationship between direct current and induced magnetic fields. Following validation on a phantom with a known path of electric current and induced magnetic field, the proposed MRI technique was applied to a human limb (to demonstrate in-vivo feasibility using simple biological tissue) and human heads (to demonstrate feasibility in standard tDCS applications). The results show that the proposed technique detects tDCS-induced magnetic fields as small as a nanotesla at millimeter spatial resolution. Through measurements of magnetic fields linearly proportional to the applied tDCS current, our approach opens a new avenue for direct in-vivo visualization of tDCS target engagement.

  17. Kinematical calculations of RHEED intensity oscillations during the growth of thin epitaxial films

    NASA Astrophysics Data System (ADS)

    Daniluk, Andrzej

    2005-08-01

    A practical computing algorithm working in real time has been developed for calculating the reflection high-energy electron diffraction (RHEED) intensity from the molecular beam epitaxy (MBE) growing surface. The calculations are based on the use of kinematical diffraction theory. Simple mathematical models are used for the growth simulation in order to investigate the fundamental behaviors of reflectivity change during the growth of thin epitaxial films prepared using MBE. Program summary: Title of program: GROWTH. Catalogue identifier: ADVL. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVL. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Distribution format: tar.gz. Computer for which the program is designed and others on which it has been tested: Pentium-based PC. Operating systems or monitors under which the program has been tested: Windows 9x, XP, NT. Programming language used: Object Pascal. Memory required to execute with typical data: more than 1 MB. Number of bits in a word: 64. Number of processors used: 1. Number of lines in distributed program, including test data, etc.: 10 989. Number of bytes in distributed program, including test data, etc.: 103 048. Nature of the physical problem: Reflection high-energy electron diffraction (RHEED) is a very useful technique for studying growth and surface analysis of thin epitaxial structures prepared using molecular beam epitaxy (MBE). The simplest approach to calculating the RHEED intensity during the growth of thin epitaxial films is the kinematical diffraction theory (often called the kinematical approximation), in which only a single scattering event is taken into account. The biggest advantage of this approach is that the RHEED intensity can be calculated in real time. The approach also facilitates an intuitive understanding of the growth mechanism and surface morphology [P.I. Cohen, G.S. Petrich, P.R. Pukite, G.J. Whaley, A.S. Arrott, Surf. Sci. 216 (1989) 222]. Method of solution: Epitaxial growth of thin films is modeled by a set of non-linear differential equations [P.I. Cohen, G.S. Petrich, P.R. Pukite, G.J. Whaley, A.S. Arrott, Surf. Sci. 216 (1989) 222]. The Runge-Kutta method with adaptive stepsize control was used for solving the initial value problem for the non-linear differential equations [W.H. Press, B.P. Flannery, S.A. Teukolsky, W.T. Vetterling, Numerical Recipes in Pascal: The Art of Scientific Computing, first ed., Cambridge University Press, 1989; see also: Numerical Recipes in C++, second ed., Cambridge University Press, 1992]. Typical running time: machine and user-parameter dependent. Unusual features of the program: The program is distributed in the form of a main project Growth.dpr file and an independent Rhd.pas file and should be compiled using Object Pascal compilers, including Borland Delphi.
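
    The birth-death growth equations and the kinematical intensity are compact. The sketch below (a simplified layer model evaluated at the anti-Bragg condition, not the GROWTH code itself) reproduces the familiar RHEED intensity oscillations:

        import numpy as np
        from scipy.integrate import solve_ivp

        N, flux = 8, 1.0      # layers tracked; deposition rate (ML per unit time)

        def rhs(t, theta):
            # Layer n fills in proportion to the exposed area of layer n-1;
            # the substrate (theta_0 = 1) feeds the first layer.
            full = np.concatenate(([1.0], theta))
            return flux * (full[:-1] - full[1:])

        sol = solve_ivp(rhs, (0.0, 4.0), np.zeros(N), dense_output=True)

        t = np.linspace(0.0, 4.0, 400)
        theta = np.clip(sol.sol(t), 0.0, 1.0)                  # layer coverages
        full = np.vstack([np.ones_like(t), theta])             # theta_0..theta_N
        exposed = full - np.vstack([theta, np.zeros_like(t)])  # theta_n - theta_{n+1}

        # Kinematical (single-scattering) intensity at the anti-Bragg phase:
        # alternating-sign interference of the exposed part of each layer.
        signs = (-1.0) ** np.arange(N + 1)
        intensity = (signs[:, None] * exposed).sum(axis=0) ** 2
        print(intensity[::100])   # oscillates with a one-monolayer period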

  18. The application of MINIQUASI to thermal program boundary and initial value problems

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The feasibility of applying the solution techniques of Miniquasi to the set of equations which govern a thermoregulatory model is investigated. For solving nonlinear equations and/or boundary conditions, a Taylor series expansion is required for linearization of both the equations and the boundary conditions. The solutions are iterative and, in each iteration, a problem like the linear case is solved. It is shown that Miniquasi cannot be applied to the thermoregulatory model as originally planned.

  19. An improved technique for determining reflection from semi-infinite atmospheres with linearly anisotropic phase functions. [radiative transfer]

    NASA Technical Reports Server (NTRS)

    Fricke, C. L.

    1975-01-01

    A solution to the problem of reflection from a semi-infinite atmosphere is presented, based upon Chandrasekhar's H-function method for linearly anisotropic phase functions. A modification to the Gauss quadrature formula was developed which gives about the same accuracy with 10 points as the conventional Gauss quadrature does with 100. A computer program achieving this solution is described, and results are presented for several illustrative cases.

  1. HYMOSS signal processing for pushbroom spectral imaging

    NASA Technical Reports Server (NTRS)

    Ludwig, David E.

    1991-01-01

    The objective of the Pushbroom Spectral Imaging Program was to develop on-focal-plane electronics which compensate for detector array non-uniformities. The approach taken was to implement a simple two-point calibration algorithm on the focal plane which allows for offset and linear gain correction. The key on-focal-plane features which made this technique feasible were the use of a high quality transimpedance amplifier (TIA) and an analog-to-digital converter for each detector channel. Gain compensation is accomplished by varying the feedback capacitance of the integrate-and-dump TIA. Offset correction is performed by storing offsets in a special on-focal-plane offset register and digitally subtracting the offsets from the readout data during the multiplexing operation. A custom integrated circuit was designed, fabricated, and tested on this program which proved that nonuniformity-compensated, analog-to-digital converting circuits may be used to read out infrared detectors. Irvine Sensors Corporation (ISC) successfully demonstrated the following innovative on-focal-plane functions that allow for correction of detector non-uniformities. Most of the circuit functions demonstrated on this program are finding their way onto future ICs because of their impact on reduced downstream processing, increased focal plane performance, simplified focal plane control, reduced number of dewar connections, as well as the noise immunity of a digital interface dewar. The potential commercial applications for this integrated circuit are primarily in imaging systems. These imaging systems may be used for security monitoring, manufacturing process monitoring, robotics, and for spectral imaging when used in analytical instrumentation.
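
    The two-point scheme amounts to a per-detector gain and offset estimated from two flat-field levels. A compact sketch with a hypothetical detector model and made-up numbers (not the on-chip implementation described above):

        import numpy as np

        def two_point_calibration(flat1, flat2, L1, L2):
            """Per-detector responsivity and offset from two flat fields
            taken at known irradiance levels L1 < L2."""
            gain = (flat2 - flat1) / (L2 - L1)   # counts per unit irradiance
            offset = flat1 - gain * L1           # dark/offset counts
            return gain, offset

        def correct(frame, gain, offset):
            # Offset-subtract, then gain-normalize: the two operations the
            # on-focal-plane offset register and gain trim perform.
            return (frame - offset) / gain

        rng = np.random.default_rng(0)
        true_gain = rng.uniform(0.8, 1.2, (4, 4))   # hypothetical non-uniformity
        true_off = rng.uniform(-5.0, 5.0, (4, 4))

        flat1 = true_gain * 30.0 + true_off         # flat field at L1 = 30
        flat2 = true_gain * 200.0 + true_off        # flat field at L2 = 200
        g, o = two_point_calibration(flat1, flat2, 30.0, 200.0)

        scene = true_gain * 120.0 + true_off        # uniform scene at L = 120
        print(np.allclose(correct(scene, g, o), 120.0))   # True: uniform response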

  2. Minimal-resource computer program for automatic generation of ocean wave ray or crest diagrams in shoaling waters

    NASA Technical Reports Server (NTRS)

    Poole, L. R.; Lecroy, S. R.; Morris, W. D.

    1977-01-01

    A computer program for studying linear ocean wave refraction is described. The program features random-access modular bathymetry data storage. Three bottom topography approximation techniques are available in the program which provide varying degrees of bathymetry data smoothing. Refraction diagrams are generated automatically and can be displayed graphically in three forms: Ray patterns with specified uniform deepwater ray density, ray patterns with controlled nearshore ray density, or crest patterns constructed by using a cubic polynomial to approximate crest segments between adjacent rays.

  3. Vehicle systems and payload requirements evaluation. [computer programs for identifying launch vehicle system requirements]

    NASA Technical Reports Server (NTRS)

    Rea, F. G.; Pittenger, J. L.; Conlon, R. J.; Allen, J. D.

    1975-01-01

    Techniques developed for identifying launch vehicle system requirements for NASA automated space missions are discussed. Emphasis is placed on the development of computer programs and the investigation of astrionics for OSS missions and Scout. The Earth Orbit Mission Program-1, which performs linear error analysis of launch vehicle dispersions for both vehicle and navigation system factors, is described, along with the Interactive Graphic Orbit Selection program, which allows the user to select orbits which satisfy mission requirements and to evaluate the necessary injection accuracy.

  4. Mathematical Techniques for Nonlinear System Theory.

    DTIC Science & Technology

    1978-01-01

    [Report documentation fragment] Title: Mathematical Techniques for Nonlinear System Theory; type of report: Interim; performing organization: University of Florida, Center for Mathematical System Theory, Gainesville, FL. Cited references include E.D. Sontag (1976b), "Linear systems over commutative rings: a survey", Ricerche di Automatica, 7: 1-34, and a paper in Mathematical System Theory, 9: 327-344.

  5. Mathematical analysis techniques for modeling the space network activities

    NASA Technical Reports Server (NTRS)

    Foster, Lisa M.

    1992-01-01

    The objective of the present work was to explore and identify mathematical analysis techniques, and in particular, the use of linear programming. This topic was then applied to the Tracking and Data Relay Satellite System (TDRSS) in order to understand the space network better. Finally, a small scale version of the system was modeled, variables were identified, data was gathered, and comparisons were made between actual and theoretical data.
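
    A linear program small enough to check by hand illustrates the modeling style; the link capacities, user demands and weights below are hypothetical stand-ins, not TDRSS data:

        import numpy as np
        from scipy.optimize import linprog

        # Toy relay-scheduling LP: allocate contact hours x[u, l] of three
        # users on two links, maximizing priority-weighted serviced demand.
        value = np.array([3.0, 3.0, 2.0, 2.0, 1.0, 1.0])   # weight per (u, l)
        c = -value                                          # linprog minimizes

        A_ub = [[1, 0, 1, 0, 1, 0],    # link 1 capacity (hours)
                [0, 1, 0, 1, 0, 1],    # link 2 capacity (hours)
                [1, 1, 0, 0, 0, 0],    # user 1 demand
                [0, 0, 1, 1, 0, 0],    # user 2 demand
                [0, 0, 0, 0, 1, 1]]    # user 3 demand
        b_ub = [24, 24, 10, 20, 30]

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method='highs')
        print(res.x.reshape(3, 2))     # hours per (user, link)
        print("weighted service:", -res.fun)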

  7. A Training Program in Breast Cancer Research Using NMR Techniques

    DTIC Science & Technology

    2005-07-01

    ...to explore the application of NMR molecular imaging techniques developed in this program to the detection of amyloid plaques in the Alzheimer-diseased mouse... one aim is to utilize the molecular imaging technique to exploit new applications in imaging of amyloid plaques in Alzheimer disease. An abridged summary of each... matched, non-demented elderly suggests that volumetric studies of ante-mortem neuroimages may provide an early marker of AD in aging populations.

  8. ALPS - A LINEAR PROGRAM SOLVER

    NASA Technical Reports Server (NTRS)

    Viterna, L. A.

    1994-01-01

    Linear programming is a widely-used engineering and management tool. Scheduling, resource allocation, and production planning are all well-known applications of linear programs (LP's). Most LP's are too large to be solved by hand, so over the decades many computer codes for solving LP's have been developed. ALPS, A Linear Program Solver, is a full-featured LP analysis program. ALPS can solve plain linear programs as well as more complicated mixed integer and pure integer programs. ALPS also contains an efficient solution technique for pure binary (0-1 integer) programs. One of the many weaknesses of LP solvers is the lack of interaction with the user. ALPS is a menu-driven program with no special commands or keywords to learn. In addition, ALPS contains a full-screen editor to enter and maintain the LP formulation. These formulations can be written to and read from plain ASCII files for portability. For those less experienced in LP formulation, ALPS contains a problem "parser" which checks the formulation for errors. ALPS creates fully formatted, readable reports that can be sent to a printer or output file. ALPS is written entirely in IBM's APL2/PC product, Version 1.01. The APL2 workspace containing all the ALPS code can be run on any APL2/PC system (AT or 386). On a 32-bit system, this configuration can take advantage of all extended memory. The user can also examine and modify the ALPS code. The APL2 workspace has also been "packed" to be run on any DOS system (without APL2) as a stand-alone "EXE" file, but has limited memory capacity on a 640K system. A numeric coprocessor (80X87) is optional but recommended. The standard distribution medium for ALPS is a 5.25 inch 360K MS-DOS format diskette. IBM, IBM PC and IBM APL2 are registered trademarks of International Business Machines Corporation. MS-DOS is a registered trademark of Microsoft Corporation.
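
    ALPS itself is an APL2 application, but the class of problems it handles is easy to illustrate; here is a small production-planning LP of the kind described, posed with SciPy (the numbers are hypothetical):

    ```python
    from scipy.optimize import linprog

    # Maximize profit 3*x1 + 5*x2 subject to two resource constraints.
    # linprog minimizes, so the objective is negated.
    result = linprog(
        c=[-3.0, -5.0],                 # minimize -(3*x1 + 5*x2)
        A_ub=[[1.0, 2.0],               # machine hours: x1 + 2*x2 <= 14
              [3.0, 1.0]],              # raw material:  3*x1 + x2  <= 18
        b_ub=[14.0, 18.0],
        bounds=[(0, None), (0, None)],  # non-negative production levels
    )
    print(result.x, -result.fun)        # optimal plan (4.4, 4.8), profit 37.2
    ```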

  9. Delineating chalk sand distribution of Ekofisk formation using probabilistic neural network (PNN) and stepwise regression (SWR): Case study Danish North Sea field

    NASA Astrophysics Data System (ADS)

    Haris, A.; Nafian, M.; Riyanto, A.

    2017-07-01

    The Danish North Sea fields consist of several formations (Ekofisk, Tor, and Cromer Knoll) that range in age from the Paleocene to the Miocene. In this study, the integration of seismic and well log data sets is carried out to determine the chalk sand distribution in the Danish North Sea field. The integration of seismic and well log data is performed using seismic inversion analysis and seismic multi-attribute analysis. The seismic inversion algorithm used to derive acoustic impedance (AI) is a model-based technique. The derived AI is then used as an external attribute for the input of the multi-attribute analysis. Moreover, the multi-attribute analysis is used to generate linear and non-linear transformations among well log properties. In the linear case, the transformation is selected by weighted stepwise linear regression (SWR), while the non-linear model is built using probabilistic neural networks (PNN). The porosity estimated by the PNN fits the well log data better than the SWR result. This result can be understood since the PNN performs non-linear regression, so the relationship between the attribute data and the predicted log data can be optimized. The distribution of chalk sand has been successfully identified and characterized by porosity values ranging from 23% to 30%.
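
    The PNN-style mapping is, at its core, a kernel-weighted (Parzen window) regression from seismic attributes to porosity. A minimal sketch of such a GRNN-type estimator follows, with hypothetical attribute arrays; this is illustrative, not the authors' implementation:

    ```python
    import numpy as np

    def kernel_regression(X_train, y_train, X_query, sigma=1.0):
        """Gaussian-kernel regression of the kind underlying PNN/GRNN predictors.

        X_train: (n, d) attributes at well locations; y_train: (n,) porosity;
        X_query: (m, d) attributes at trace locations to be estimated.
        """
        # Squared distances between every query sample and training sample.
        d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
        w = np.exp(-d2 / (2.0 * sigma ** 2))   # Parzen window weights
        return (w @ y_train) / w.sum(axis=1)   # kernel-weighted average
    ```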

  10. Linear-array based full-view high-resolution photoacoustic computed tomography of whole mouse brain functions in vivo

    NASA Astrophysics Data System (ADS)

    Li, Lei; Zhang, Pengfei; Wang, Lihong V.

    2018-02-01

    Photoacoustic computed tomography (PACT) is a non-invasive imaging technique offering high contrast, high resolution, and deep penetration in biological tissues. We report a photoacoustic computed tomography (PACT) system equipped with a high frequency linear array for anatomical and functional imaging of the mouse whole brain. The linear array was rotationally scanned in the coronal plane to achieve the full-view coverage. We investigated spontaneous neural activities in the deep brain by monitoring the hemodynamics and observed strong interhemispherical correlations between contralateral regions, both in the cortical layer and in the deep regions.

  11. Describing Function Techniques for the Non-Linear Analysis of the Dynamics of a Rail Vehicle Wheelset

    DOT National Transportation Integrated Search

    1975-07-01

    The describing function method of analysis is applied to investigate the influence of parametric variations on wheelset critical velocity. In addition, the relationship between the amplitude of sustained lateral oscillations and critical speed is derived.

  12. Wavelet packets for multi- and hyper-spectral imagery

    NASA Astrophysics Data System (ADS)

    Benedetto, J. J.; Czaja, W.; Ehler, M.; Flake, C.; Hirn, M.

    2010-01-01

    State of the art dimension reduction and classification schemes in multi- and hyper-spectral imaging rely primarily on the information contained in the spectral component. To better capture the joint spatial and spectral data distribution we combine the Wavelet Packet Transform with the linear dimension reduction method of Principal Component Analysis. Each spectral band is decomposed by means of the Wavelet Packet Transform and we consider a joint entropy across all the spectral bands as a tool to exploit the spatial information. Dimension reduction is then applied to the Wavelet Packets coefficients. We present examples of this technique for hyper-spectral satellite imaging. We also investigate the role of various shrinkage techniques to model non-linearity in our approach.

  13. Detection of linear polarization from SNR Cassiopeia A at low radio frequencies

    NASA Astrophysics Data System (ADS)

    Raja, Wasim; Deshpande, A.

    We report detection of weak but significant linear polarization from the supernova remnant Cas A at low radio frequencies (327 MHz) using the GMRT. The spectro-polarimetric data were analyzed using the new technique of Faraday tomography (RM-synthesis). The problem of disentangling weak sky polarization from residual instrumental polarization is discussed. A novel technique to establish the association of the apparent polarization with the source, even in the presence of instrumental leakage, is demonstrated. The anti-correlation of the polarized emission with soft X-ray counts seen at various Faraday depths provides direct evidence of the co-existence of thermal and non-thermal plasmas within the source.

  14. Numerical techniques for solving nonlinear instability problems in smokeless tactical solid rocket motors. [finite difference technique

    NASA Technical Reports Server (NTRS)

    Baum, J. D.; Levine, J. N.

    1980-01-01

    The selection of a satisfactory numerical method for calculating the propagation of steep-fronted, shock-like waveforms in a solid rocket motor combustion chamber is discussed. A number of different numerical schemes were evaluated by comparing the results obtained for three problems: the shock tube problem, the linear wave equation, and nonlinear wave propagation in a closed tube. The most promising method, a combination of the Lax-Wendroff, hybrid, and artificial compression techniques, was incorporated into an existing nonlinear instability program. The capability of the modified program to treat steep-fronted wave instabilities in low-smoke tactical motors was verified by solving a number of motor test cases with disturbance amplitudes as high as 80% of the mean pressure.

  15. Monthly reservoir inflow forecasting using a new hybrid SARIMA genetic programming approach

    NASA Astrophysics Data System (ADS)

    Moeeni, Hamid; Bonakdari, Hossein; Ebtehaj, Isa

    2017-03-01

    Forecasting reservoir inflow is one of the most important components of water resources and hydroelectric systems operation management. Seasonal autoregressive integrated moving average (SARIMA) models have been frequently used for predicting river flow. SARIMA models are linear and do not consider the random component of statistical data. To overcome this shortcoming, monthly inflow is predicted in this study based on a combination of seasonal autoregressive integrated moving average (SARIMA) and gene expression programming (GEP) models, which is a new hybrid method (SARIMA-GEP). To this end, a four-step process is employed. First, the monthly inflow datasets are pre-processed. Second, the datasets are modelled linearly with SARIMA and, in the third stage, the non-linearity of the residual series caused by linear modelling is evaluated. After confirming the non-linearity, the residuals are modelled in the fourth step using a gene expression programming (GEP) method. The proposed hybrid model is employed to predict the monthly inflow to the Jamishan Dam in west Iran. Thirty years' worth of site measurements of monthly reservoir dam inflow with extreme seasonal variations are used. The results of this hybrid model (SARIMA-GEP) are compared with SARIMA, GEP, artificial neural network (ANN) and SARIMA-ANN models. The results indicate that the SARIMA-GEP model (R2 = 78.8, VAF = 78.8, RMSE = 0.89, MAPE = 43.4, CRM = 0.053) outperforms SARIMA and GEP, and that SARIMA-ANN (R2 = 68.3, VAF = 66.4, RMSE = 1.12, MAPE = 56.6, CRM = 0.032) displays better performance than the SARIMA and ANN models. A comparison of the two hybrid models indicates the superiority of SARIMA-GEP over the SARIMA-ANN model.
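
    The four-step recipe is easy to sketch: fit the linear SARIMA stage, then fit a non-linear model to its residuals and add the two forecasts. The sketch below uses statsmodels for the SARIMA stage and, since GEP libraries are less standard, gradient boosting purely as a stand-in for the GEP stage; orders and lag counts are hypothetical:

    ```python
    import numpy as np
    from statsmodels.tsa.statespace.sarimax import SARIMAX
    from sklearn.ensemble import GradientBoostingRegressor

    def hybrid_forecast(y, order=(1, 0, 1), seasonal_order=(1, 1, 1, 12), lags=3):
        """One-step hybrid forecast: SARIMA plus a non-linear residual model.

        y is a pandas Series of monthly inflows; the GEP stage of the paper is
        replaced here by gradient boosting as an illustrative stand-in.
        """
        sarima = SARIMAX(y, order=order, seasonal_order=seasonal_order).fit(disp=False)
        resid = sarima.resid
        # Lagged residuals as features for the non-linear residual model.
        X = np.column_stack([resid.shift(k).to_numpy() for k in range(1, lags + 1)])[lags:]
        nonlin = GradientBoostingRegressor().fit(X, resid.to_numpy()[lags:])
        # Forecast = linear SARIMA part + predicted non-linear residual part.
        last = resid.to_numpy()[-lags:][::-1].reshape(1, -1)   # most recent lag first
        return sarima.forecast(1).iloc[0] + nonlin.predict(last)[0]
    ```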

  16. SPX: The Tenth International Conference on Stochastic Programming

    DTIC Science & Technology

    2004-10-01

    Conference program fragment (DTIC): listed talks include 'On structuring energy contract portfolios in competitive markets' (Antonio Alonso-Ayuso, Universidad Rey Juan Carlos, p. 28) and a mean-risk optimization talk; session ThA, 8:00-9:30, Ballroom South, 'Portfolio Optimization' (chair: Gerd Infanger, Stanford University) opens with 'The impact of serial correlation of returns on...'. A further fragment notes that a variant of the L-shaped method approximates the non-linear penalty term in the objective by a linear one ('We use the implicit LX...').

  17. Magnetorotational instability: nonmodal growth and the relationship of global modes to the shearing box

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Squire, J.; Bhattacharjee, A.

    2014-12-10

    We study magnetorotational instability (MRI) using nonmodal stability techniques. Despite the spectral instability of many forms of MRI, this proves to be a natural method of analysis that is well suited to the non-self-adjoint nature of the linear MRI equations. We find that the fastest growing linear MRI structures on both local and global domains can look very different from the eigenmodes, invariably resembling waves shearing with the background flow (shear waves). In addition, such structures can grow many times faster than the least stable eigenmode over long time periods, and be localized in a completely different region of space. These ideas lead, for both axisymmetric and non-axisymmetric modes, to a natural connection between the global MRI and the local shearing box approximation. By illustrating that the fastest growing global structure is well described by the ordinary differential equations (ODEs) governing a single shear wave, we find that the shearing box is a very sensible approximation for the linear MRI, contrary to many previous claims. Since the shear wave ODEs are most naturally understood using nonmodal analysis techniques, we conclude by analyzing local MRI growth over finite timescales using these methods. The strong growth over a wide range of wave-numbers suggests that nonmodal linear physics could be of fundamental importance in MRI turbulence.
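
    The nonmodal growth discussed here is conventionally quantified by the maximum energy amplification G(t) = ||e^{tA}||^2, which for a non-normal operator A can far exceed what the eigenvalues suggest. A toy numerical illustration follows (the matrix is hypothetical, not the MRI operator):

    ```python
    import numpy as np
    from scipy.linalg import expm

    # Toy non-normal operator: both eigenvalues are damped (-0.05 and -0.10),
    # yet the strong off-diagonal coupling drives large transient growth.
    A = np.array([[-0.05, 10.0],
                  [0.00, -0.10]])

    times = np.linspace(0.0, 60.0, 200)
    # G(t) = ||exp(tA)||_2^2 is the largest energy amplification over all
    # initial conditions; eigenvalue analysis alone would predict pure decay.
    G = [np.linalg.norm(expm(t * A), 2) ** 2 for t in times]
    print(max(G))   # orders of magnitude above 1 despite spectral stability
    ```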

  18. Linear signatures in nonlinear gyrokinetics: interpreting turbulence with pseudospectra

    DOE PAGES

    Hatch, D. R.; Jenko, F.; Navarro, A. Banon; ...

    2016-07-26

    A notable feature of plasma turbulence is its propensity to retain features of the underlying linear eigenmodes in a strongly turbulent state, a property that can be exploited to predict various aspects of the turbulence using only linear information. In this context, this work examines gradient-driven gyrokinetic plasma turbulence through three lenses: linear eigenvalue spectra, pseudospectra, and singular value decomposition (SVD). We study a reduced gyrokinetic model whose linear eigenvalue spectra include ion temperature gradient driven modes, stable drift waves, and kinetic modes representing Landau damping. The goal is to characterize in which ways, if any, these familiar ingredients are manifest in the nonlinear turbulent state. This pursuit is aided by the use of pseudospectra, which provide a more nuanced view of the linear operator by characterizing its response to perturbations. We introduce a new technique whereby the nonlinearly evolved phase space structures extracted with SVD are linked to the linear operator using concepts motivated by pseudospectra. Using this technique, we identify nonlinear structures that have connections to not only the most unstable eigenmode but also subdominant modes that are nonlinearly excited. The general picture that emerges is a system in which signatures of the linear physics persist in the turbulence, albeit in ways that cannot be fully explained by the linear eigenvalue approach; a non-modal treatment is necessary to understand key features of the turbulence.

  19. Effect of Processing Conditions on the Anelastic Behavior of Plasma Sprayed Thermal Barrier Coatings

    NASA Astrophysics Data System (ADS)

    Viswanathan, Vaishak

    2011-12-01

    Plasma sprayed ceramic materials contain an assortment of micro-structural defects, including pores, cracks, and interfaces arising from the droplet-based assemblage of the spray deposition technique. The defective architecture of the deposits introduces a novel "anelastic" response in the coatings, comprising a non-linear and hysteretic stress-strain relationship under mechanical loading. It has been established that this anelasticity can be attributed to the relative movement of the embedded defects under varying stresses. While the non-linear response of the coatings arises from the opening/closure of defects, hysteresis is produced by frictional sliding among defect surfaces. Recent studies have indicated that the anelastic behavior of coatings can be a unique descriptor of their mechanical behavior and can be related to the defect configuration. In this dissertation, a multi-variable study employing systematic processing strategies was conducted to augment the understanding of various aspects of the reported anelastic behavior. A bi-layer curvature measurement technique was adapted to measure the anelastic properties of plasma sprayed ceramics, and the anelastic parameters were quantified using a non-linear model proposed by Nakamura et al. An error analysis was conducted on the technique to determine the available margins for both experimental and computational errors, and was extended to evaluate its sensitivity to different coating microstructures. For this purpose, three coatings with significantly different microstructures were fabricated by tuning process parameters. The three coatings were then subjected systematically to different strain ranges in order to understand the origin and evolution of anelasticity in different microstructures. The last segment of this thesis examines the processing side and establishes a correlation between processing conditions and the anelastic parameters.

  20. Linear programming phase unwrapping for dual-wavelength digital holography.

    PubMed

    Wang, Zhaomin; Jiao, Jiannan; Qu, Weijuan; Yang, Fang; Li, Hongru; Tian, Ailing; Asundi, Anand

    2017-01-20

    A linear programming phase unwrapping method in dual-wavelength digital holography is proposed and verified experimentally. The proposed method uses the square of height difference as a convergence standard and theoretically gives the boundary condition in a searching process. A simulation was performed by unwrapping step structures at different levels of Gaussian noise. As a result, our method is capable of recovering the discontinuities accurately. It is robust and straightforward. In the experiment, a microelectromechanical systems sample and a cylindrical lens were measured separately. The testing results were in good agreement with true values. Moreover, the proposed method is applicable not only in digital holography but also in other dual-wavelength interferometric techniques.
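
    Dual-wavelength unwrapping rests on the synthetic (beat) wavelength Λ = λ₁λ₂/|λ₁ − λ₂|, which greatly extends the unambiguous height range. A small numerical illustration with hypothetical wavelengths and phase values (reflection geometry assumed):

    ```python
    import numpy as np

    lam1, lam2 = 532e-9, 633e-9                 # two illustrative wavelengths (m)
    lam_synth = lam1 * lam2 / abs(lam1 - lam2)  # synthetic wavelength, ~3.33 um

    # The difference of the two single-wavelength phase maps behaves like a
    # measurement at lam_synth, so steps up to lam_synth/2 are unambiguous.
    dphi = np.array([1.0, 2.5, -3.0])           # hypothetical phase differences (rad)
    height = lam_synth * dphi / (4.0 * np.pi)   # height for a reflective sample
    print(lam_synth, height)
    ```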

  1. Learning directed acyclic graphs from large-scale genomics data.

    PubMed

    Nikolay, Fabio; Pesavento, Marius; Kritikos, George; Typas, Nassos

    2017-09-20

    In this paper, we consider the problem of learning the genetic interaction map, i.e., the topology of a directed acyclic graph (DAG) of genetic interactions from noisy double-knockout (DK) data. Based on a set of well-established biological interaction models, we detect and classify the interactions between genes. We propose a novel linear integer optimization program called the Genetic-Interactions-Detector (GENIE) to identify the complex biological dependencies among genes and to compute the DAG topology that matches the DK measurements best. Furthermore, we extend the GENIE program by incorporating genetic interaction profile (GI-profile) data to further enhance the detection performance. In addition, we propose a sequential scalability technique for large sets of genes under study, in order to provide statistically significant results for real measurement data. Finally, we show via numeric simulations that the GENIE program and the GI-profile data extended GENIE (GI-GENIE) program clearly outperform the conventional techniques and present real data results for our proposed sequential scalability technique.

  2. Non-invasive current and voltage imaging techniques for integrated circuits using scanning probe microscopy. Final report, LDRD Project FY93 and FY94

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campbell, A.N.; Cole, E.I. Jr.; Tangyunyong, Paiboon

    This report describes the first practical, non-invasive technique for detecting and imaging currents internal to operating integrated circuits (ICs). This technique is based on magnetic force microscopy and was developed under Sandia National Laboratories' LDRD (Laboratory Directed Research and Development) program during FY 93 and FY 94. LDRD funds were also used to explore a related technique, charge force microscopy, for voltage probing of ICs. This report describes the technical work performed under this LDRD as well as the outcomes of the project in terms of publications and awards, intellectual property and licensing, synergistic work, potential future work, hiring of additional permanent staff, and benefits to DOE's defense programs (DP).

  3. A singular value decomposition linear programming (SVDLP) optimization technique for circular cone based robotic radiotherapy.

    PubMed

    Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen

    2018-01-05

    With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. Besides, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular collimator based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on dosimetry distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes beam weights by minimizing the deviation of soft constraints subject to hard constraints, with a constraint on the l1 norm of the beam weight. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into lower dimension for optimization, and then back-projected to reconstruct the beam weight. After beam weight optimization, the number of beams is reduced by removing the beams with low weight, and optimizing the weights of the remaining beams using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a similar plan quality to the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. In addition, the SVDLP approach is tested and compared with MultiPlan on three clinical cases of varying complexities. In general, the plans generated by the SVDLP achieve steeper dose gradient, better conformity and less damage to normal tissues. In conclusion, the SVDLP approach effectively improves the quality of treatment plan due to the use of the complete beam search space. This challenging optimization problem with the complete beam search space is effectively handled by the proposed SVD acceleration.
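
    The core of the acceleration, compressing a nearly degenerate influence matrix with a truncated SVD, optimizing in the low-dimensional space, and back-projecting to beam weights, can be sketched in a few lines. The least-squares objective below is a stand-in for the paper's constrained LP, and all shapes are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_vox, n_beams, k = 2000, 400, 40      # illustrative sizes; k = kept rank

    D = rng.random((n_vox, n_beams))       # hypothetical dose-influence matrix
    d_target = rng.random(n_vox)           # hypothetical prescribed dose

    # Truncated SVD; real influence matrices are nearly degenerate, so a small
    # k captures them well (a random D is used here only as a placeholder).
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    Uk, sk, Vtk = U[:, :k], s[:k], Vt[:k, :]

    # Optimize in the k-dimensional compressed space, then back-project the
    # solution to reconstruct the beam weights.
    z, *_ = np.linalg.lstsq(Uk * sk, d_target, rcond=None)
    w = Vtk.T @ z
    print(np.linalg.norm(D @ w - d_target))
    ```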

  5. Polymerization shrinkage of a dental resin composite determined by a fiber optic Fizeau interferometer

    NASA Astrophysics Data System (ADS)

    Arenas, Gustavo; Noriega, Sergio; Vallo, Claudia; Duchowicz, Ricardo

    2007-03-01

    A fiber optic sensing method based on a Fizeau-type interferometric scheme was employed for monitoring linear polymerization shrinkage in dental restoratives. This technique offers several advantages over the conventional methods of measuring polymerization contraction. This simple, compact, non-invasive and self-calibrating system competes with both conventional and other high-resolution bulk interferometric techniques. In this work, an analysis of the quality of interference signal and fringes visibility was performed in order to characterize their resolution and application range. The measurements of percent linear contraction as a function of the sample thickness were carried out in this study on two dental composites: Filtek P60 (3M ESPE) Posterior Restorer and Filtek Z250 (3M ESPE) Universal Restorer. The results were discussed with respect to others obtained employing alternative techniques.

  6. Analysis of VLBI, SLR and GPS Site Position Time Series

    NASA Astrophysics Data System (ADS)

    Angermann, D.; Krügel, M.; Meisel, B.; Müller, H.; Tesmer, V.

    Conventionally, the IERS terrestrial reference frame (ITRF) is realized by the adoption of a set of epoch coordinates and linear velocities for a set of global tracking stations. Due to the remarkable progress of the space geodetic observation techniques (e.g. VLBI, SLR, GPS), the accuracy and consistency of the ITRF have increased continuously. The accuracy achieved today is mainly limited by technique-related systematic errors, which are often poorly characterized or quantified. Therefore it is essential to analyze the individual techniques' solutions with respect to systematic differences, models, parameters, datum definition, etc. The main subject of this presentation is the analysis of GPS, SLR and VLBI time series of site positions. The investigations are based on SLR and VLBI solutions computed at DGFI with the software systems DOGS (SLR) and OCCAM (VLBI). The GPS time series are based on weekly IGS station coordinate solutions. We analyze the time series with respect to the issues mentioned above. In particular, we characterize the noise in the time series, identify periodic signals, and investigate non-linear effects that complicate the assignment of linear velocities to global tracking sites. One important aspect is the comparison of results obtained by different techniques at colocation sites.

  7. Soft tissue strain measurement using an optical method

    NASA Astrophysics Data System (ADS)

    Toh, Siew Lok; Tay, Cho Jui; Goh, Cho Hong James

    2008-11-01

    Digital image correlation (DIC) is a non-contact optical technique that allows the full-field estimation of strains on a surface under an applied deformation. In this project, an optimized DIC technique is applied, which achieves efficiency and accuracy in the measurement of two-dimensional deformation fields in soft tissue. This technique relies on matching the random patterns recorded in images to directly obtain surface displacements and displacement gradients, from which the strain field can be determined. Digital image correlation is a well-developed technique that has numerous and varied engineering applications, including soft and hard tissue biomechanics. Chicken drumstick ligaments were harvested and used during the experiments. The surface of each ligament was speckled with black paint to enable correlation. Results show that the stress-strain curve exhibits a bi-linear behavior, i.e., a "toe region" and a "linear elastic region". The Young's modulus obtained for the toe region is about 92 MPa and the modulus for the linear elastic region is about 230 MPa. The results are within the values reported for mammalian anterior cruciate ligaments of 150-300 MPa.

  8. Equations of motion for coupled n-body systems

    NASA Technical Reports Server (NTRS)

    Frisch, H. P.

    1980-01-01

    Computer program, developed to analyze spacecraft attitude dynamics, can be applied to large class of problems involving objects that can be simplified into component parts. Systems of coupled rigid bodies, point masses, symmetric wheels, and elastically flexible bodies can be analyzed. Program derives complete set of non-linear equations of motion in vector-dyadic format. Numerical solutions may be printed out. Program is in FORTRAN IV for batch execution and has been implemented on IBM 360.

  9. Multivariate statistical analysis: Principles and applications to coorbital streams of meteorite falls

    NASA Technical Reports Server (NTRS)

    Wolf, S. F.; Lipschutz, M. E.

    1993-01-01

    Multivariate statistical analysis techniques (linear discriminant analysis and logistic regression) can provide powerful discrimination tools which are generally unfamiliar to the planetary science community. Fall parameters were used to identify a group of 17 H chondrites (Cluster 1) that were part of a coorbital stream which intersected Earth's orbit in May, from 1855 to 1895, and can be distinguished from all other H chondrite falls. Using multivariate statistical techniques, it was demonstrated that by a totally different criterion, the labile trace element contents, and hence thermal histories, of 13 Cluster 1 meteorites are distinguishable from those of 45 non-Cluster 1 H chondrites. Here, we focus upon the principles of multivariate statistical techniques and illustrate their application using non-meteoritic and meteoritic examples.
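
    As a flavor of the discrimination step, here is a minimal scikit-learn sketch of linear discriminant analysis on hypothetical trace-element data (group sizes follow the abstract; the feature values are synthetic):

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(1)
    # Rows are meteorites, columns are labile trace-element contents (synthetic).
    X_cluster1 = rng.normal(loc=2.0, scale=0.5, size=(13, 4))
    X_other = rng.normal(loc=1.2, scale=0.5, size=(45, 4))
    X = np.vstack([X_cluster1, X_other])
    y = np.array([1] * 13 + [0] * 45)   # 1 = Cluster 1 membership

    lda = LinearDiscriminantAnalysis().fit(X, y)
    print(lda.score(X, y))              # resubstitution accuracy
    ```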

  10. Evaluation of non-intrusive flow measurement techniques for a re-entry flight experiment

    NASA Technical Reports Server (NTRS)

    Miles, R. B.; Santavicca, D. A.; Zimmermann, M.

    1983-01-01

    This study evaluates various non-intrusive techniques for the measurement of the flow field on the windward side of the Space Shuttle orbiter or a similar reentry vehicle. Included are linear (Rayleigh, Raman, Mie, Laser Doppler Velocimetry, Resonant Doppler Velocimetry) and nonlinear (Coherent Anti-Stokes Raman, Laser-Induced Fluorescence) light scattering, electron-beam fluorescence, thermal emission, and mass spectroscopy. Flow-field properties were taken from a nonequilibrium flow model by Shinn, Moss, and Simmonds at the NASA Langley Research Center. Conclusions are, when possible, based on quantitative scaling of known laboratory results to the conditions projected. Detailed discussion with researchers in the field contributed further to these conclusions and provided valuable insights regarding the experimental feasibility of each of the techniques.

  11. The Multiple Correspondence Analysis Method and Brain Functional Connectivity: Its Application to the Study of the Non-linear Relationships of Motor Cortex and Basal Ganglia.

    PubMed

    Rodriguez-Sabate, Clara; Morales, Ingrid; Sanchez, Alberto; Rodriguez, Manuel

    2017-01-01

    The complexity of basal ganglia (BG) interactions is often condensed into simple models mainly based on animal data, which present the BG in closed-loop cortico-subcortical circuits of excitatory/inhibitory pathways that analyze the incoming cortical data and return the processed information to the cortex. This study was aimed at identifying functional relationships in the BG motor loop of 24 healthy subjects who provided written, informed consent and whose BOLD activity was recorded by MRI methods. The analysis of the functional interaction between these centers by correlation techniques and multiple linear regression showed non-linear relationships which cannot be suitably addressed with these methods. The multiple correspondence analysis (MCA), an unsupervised multivariable procedure which can identify non-linear interactions, was used to study the functional connectivity of the BG when subjects were at rest. Linear methods showed the functional interactions expected according to current BG models. MCA showed additional functional interactions which were not evident when using linear methods. Seven functional configurations of the BG were identified with MCA: two involving the primary motor and somatosensory cortex, one involving the deepest BG (external-internal globus pallidus, subthalamic nucleus and substantia nigra), one with the input-output BG centers (putamen and motor thalamus), two linking the input-output centers with other BG (external pallidum and subthalamic nucleus), and one linking the external pallidum and the substantia nigra. The results provide evidence that non-linear MCA and linear methods are complementary and are best used in conjunction to more fully understand the nature of functional connectivity of brain centers.

  12. A Computer Program for Practical Semivariogram Modeling and Ordinary Kriging: A Case Study of Porosity Distribution in an Oil Field

    NASA Astrophysics Data System (ADS)

    Mert, Bayram Ali; Dag, Ahmet

    2017-12-01

    In this study, firstly, a practical and educational geostatistical program (JeoStat) was developed, and then an example analysis of porosity parameter distribution, using oilfield data, is presented. With this program, two- or three-dimensional variogram analysis can be performed using normal, log-normal or indicator-transformed data. In these analyses, JeoStat offers the user seven commonly used theoretical variogram models (spherical, Gaussian, exponential, linear, generalized linear, hole effect and Paddington mix). These theoretical models can be easily and quickly fitted to the experimental variograms using a mouse. JeoStat uses the ordinary kriging interpolation technique for computation of point or block estimates, and cross-validation testing for validation of the fitted theoretical model. All the results obtained by the analysis, as well as graphics such as histograms, variograms and kriging estimation maps, can be saved to the hard drive, including digitised graphics and maps. The numerical values of any point in a map can be inspected using the mouse and text boxes. This program is available to students, researchers, consultants and corporations of any size free of charge. The JeoStat software package and source codes are available at: http://www.jeostat.com/JeoStat_2017.0.rar.
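
    The two computational kernels of such a workflow, the experimental semivariogram and a theoretical model fitted to it, are compact. A minimal sketch using the spherical model, one of the seven JeoStat offers (function names and binning are hypothetical):

    ```python
    import numpy as np

    def experimental_semivariogram(coords, values, lags, tol):
        """Classical estimator: half the mean squared difference over all
        sample pairs whose separation falls in each lag bin (isotropic)."""
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        diff2 = (values[:, None] - values[None, :]) ** 2
        gamma = []
        for h in lags:
            mask = np.triu(np.abs(d - h) <= tol, k=1)   # count each pair once
            gamma.append(diff2[mask].mean() / 2.0 if mask.any() else np.nan)
        return np.array(gamma)

    def spherical_model(h, nugget, sill, a):
        """Spherical variogram: rises from the nugget to the sill at range a."""
        h = np.asarray(h, dtype=float)
        g = nugget + (sill - nugget) * (1.5 * h / a - 0.5 * (h / a) ** 3)
        return np.where(h < a, g, sill)
    ```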

  13. Non-linear effects of the built environment on automobile-involved pedestrian crash frequency: A machine learning approach.

    PubMed

    Ding, Chuan; Chen, Peng; Jiao, Junfeng

    2018-03-01

    Although a growing body of literature focuses on the relationship between the built environment and pedestrian crashes, limited evidence is available on the relative importance of the many built environment attributes once their mutual interaction effects and their non-linear effects on automobile-involved pedestrian crashes are accounted for. This study adopts the approach of Multiple Additive Poisson Regression Trees (MAPRT) to fill such gaps, using pedestrian collision data collected from Seattle, Washington. Traffic analysis zones are chosen as the analytical unit. The factors investigated include characteristics of the road network, street elements, land use patterns, and traffic demand. Density and the degree of mixed land use have major effects on pedestrian crash frequency, accounting for approximately 66% of the effects in total. More importantly, some factors show clear non-linear relationships with pedestrian crash frequency, challenging the linearity assumption commonly used in existing studies which employ statistical models. With various accurately identified non-linear relationships between the built environment and pedestrian crashes, this study suggests that local agencies adopt geo-spatially differentiated policies to establish a safe walking environment. These findings, especially the effective ranges of the built environment variables, provide evidence to support transport and land use planning, policy recommendations, and road safety programs.
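
    Boosted regression trees with a Poisson loss are a natural modern stand-in for the MAPRT approach, and partial dependence is the usual way to trace the non-linear effect of one built-environment variable. A hedged scikit-learn sketch with synthetic zone-level data (not the authors' implementation):

    ```python
    import numpy as np
    from sklearn.ensemble import HistGradientBoostingRegressor
    from sklearn.inspection import partial_dependence

    rng = np.random.default_rng(2)
    # Hypothetical TAZ-level predictors: density, land-use mix, demand, ...
    X = rng.random((500, 4))
    crashes = rng.poisson(lam=np.exp(1.0 + 2.0 * X[:, 0]))   # synthetic counts

    # Gradient-boosted trees with a Poisson loss, analogous in spirit to MAPRT.
    model = HistGradientBoostingRegressor(loss="poisson").fit(X, crashes)

    # Partial dependence of predicted crash frequency on the first variable
    # reveals the kind of non-linear effect the study reports.
    print(partial_dependence(model, X, features=[0])["average"])
    ```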

  14. SOCR Analyses – an Instructional Java Web-based Statistical Analysis Toolkit

    PubMed Central

    Chu, Annie; Cui, Jenny; Dinov, Ivo D.

    2011-01-01

    The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include commonly used models in undergraduate statistics courses like linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons, such as t-test in the parametric category; and Wilcoxon rank sum test, Kruskal-Wallis test, Friedman's test, in the non-parametric category. SOCR Analyses also include several hypothesis test models, such as Contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), hoping to contribute to the efforts of the statistical computing community. The code includes functionality for each specific analysis model and it has general utilities that can be applied in various statistical computing tasks. For example, concrete methods with API (Application Programming Interface) have been implemented in statistical summary, least square solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of the SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is on-going and more functions and tools are being added to it, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for most updated information and newly added models. PMID:21546994

  15. Linear Water Waves

    NASA Astrophysics Data System (ADS)

    Kuznetsov, N.; Maz'ya, V.; Vainberg, B.

    2002-08-01

    This book gives a self-contained and up-to-date account of mathematical results in the linear theory of water waves. The study of waves has many applications, including the prediction of behavior of floating bodies (ships, submarines, tension-leg platforms etc.), the calculation of wave-making resistance in naval architecture, and the description of wave patterns over bottom topography in geophysical hydrodynamics. The first section deals with time-harmonic waves. Three linear boundary value problems serve as the approximate mathematical models for these types of water waves. The next section uses a plethora of mathematical techniques in the investigation of these three problems. The techniques used in the book include integral equations based on Green's functions, various inequalities between the kinetic and potential energy, and integral identities which are indispensable for proving the uniqueness theorems. The so-called inverse procedure is applied to constructing examples of non-uniqueness, usually referred to as 'trapped modes.'

  16. Twist Model Development and Results from the Active Aeroelastic Wing F/A-18 Aircraft

    NASA Technical Reports Server (NTRS)

    Lizotte, Andrew M.; Allen, Michael J.

    2007-01-01

    Understanding the wing twist of the active aeroelastic wing (AAW) F/A-18 aircraft is a fundamental research objective for the program and offers numerous benefits. In order to clearly understand the wing flexibility characteristics, a model was created to predict real-time wing twist. A reliable twist model allows the prediction of twist for flight simulation, provides insight into aircraft performance uncertainties, and assists with computational fluid dynamic and aeroelastic issues. The left wing of the aircraft was heavily instrumented during the first phase of the active aeroelastic wing program allowing deflection data collection. Traditional data processing steps were taken to reduce flight data, and twist predictions were made using linear regression techniques. The model predictions determined a consistent linear relationship between the measured twist and aircraft parameters, such as surface positions and aircraft state variables. Error in the original model was reduced in some cases by using a dynamic pressure-based assumption. This technique produced excellent predictions for flight between the standard test points and accounted for nonlinearities in the data. This report discusses data processing techniques and twist prediction validation, and provides illustrative and quantitative results.
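
    The regression step amounts to an ordinary least-squares fit of measured twist against surface positions and state variables, with dynamic pressure added per the report's refinement. A compact sketch with hypothetical variable names:

    ```python
    import numpy as np

    def fit_twist_model(twist, params, qbar):
        """Linear regression of measured wing twist on aircraft parameters.

        twist : (n,) twist angles reduced from flight data
        params: (n, k) surface positions and aircraft state variables
        qbar  : (n,) dynamic pressure, the report's error-reducing regressor
        """
        A = np.column_stack([np.ones_like(twist), params, qbar])
        coef, *_ = np.linalg.lstsq(A, twist, rcond=None)
        return coef   # intercept plus one weight per parameter

    # Real-time prediction is then a dot product: twist_hat = A_new @ coef.
    ```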

  17. Twist Model Development and Results From the Active Aeroelastic Wing F/A-18 Aircraft

    NASA Technical Reports Server (NTRS)

    Lizotte, Andrew; Allen, Michael J.

    2005-01-01

    Understanding the wing twist of the active aeroelastic wing F/A-18 aircraft is a fundamental research objective for the program and offers numerous benefits. In order to clearly understand the wing flexibility characteristics, a model was created to predict real-time wing twist. A reliable twist model allows the prediction of twist for flight simulation, provides insight into aircraft performance uncertainties, and assists with computational fluid dynamic and aeroelastic issues. The left wing of the aircraft was heavily instrumented during the first phase of the active aeroelastic wing program allowing deflection data collection. Traditional data processing steps were taken to reduce flight data, and twist predictions were made using linear regression techniques. The model predictions determined a consistent linear relationship between the measured twist and aircraft parameters, such as surface positions and aircraft state variables. Error in the original model was reduced in some cases by using a dynamic pressure-based assumption and by using neural networks. These techniques produced excellent predictions for flight between the standard test points and accounted for nonlinearities in the data. This report discusses data processing techniques and twist prediction validation, and provides illustrative and quantitative results.

  18. An improved exploratory search technique for pure integer linear programming problems

    NASA Technical Reports Server (NTRS)

    Fogle, F. R.

    1990-01-01

    The development is documented of a heuristic method for the solution of pure integer linear programming problems. The procedure draws its methodology from the ideas of Hooke and Jeeves type 1 and 2 exploratory searches, greedy procedures, and neighborhood searches. It uses an efficient rounding method to obtain its first feasible integer point from the optimal continuous solution obtained via the simplex method. Since this method is based entirely on simple addition or subtraction of one to each variable of a point in n-space and the subsequent comparison of candidate solutions to a given set of constraints, it facilitates significant complexity improvements over existing techniques. It also obtains the same optimal solution found by the branch-and-bound technique in 44 of 45 small to moderate size test problems. Two example problems are worked in detail to show the inner workings of the method. Furthermore, using an established weighted scheme for comparing computational effort involved in an algorithm, a comparison of this algorithm is made to the more established and rigorous branch-and-bound method. A computer implementation of the procedure, in PC compatible Pascal, is also presented and discussed.
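
    The two core moves, rounding the continuous optimum to a feasible integer point and then taking unit exploratory steps while the objective improves, fit in a few lines. A toy maximization version under A x <= b, x >= 0 follows (a bare-bones sketch, not the report's full algorithm):

    ```python
    import numpy as np
    from itertools import product

    def feasible(x, A, b):
        """Integer point check for A @ x <= b with x >= 0."""
        return np.all(x >= 0) and np.all(A @ x <= b + 1e-9)

    def exploratory_integer_search(c, A, b, x_continuous):
        """Round the LP optimum, then hill-climb with +/-1 exploratory steps."""
        x = np.floor(x_continuous).astype(int)   # simple rounding to integers
        if not feasible(x, A, b):
            return None                          # a real code would repair this
        improved = True
        while improved:
            improved = False
            for i, step in product(range(len(x)), (1, -1)):
                trial = x.copy()
                trial[i] += step
                if feasible(trial, A, b) and c @ trial > c @ x:
                    x, improved = trial, True    # accept the improving move
        return x
    ```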

  19. An accelerated proximal augmented Lagrangian method and its application in compressive sensing.

    PubMed

    Sun, Min; Liu, Jing

    2017-01-01

    As a first-order method, the augmented Lagrangian method (ALM) is a benchmark solver for linearly constrained convex programming, and in practice some semi-definite proximal terms are often added to its primal variable's subproblem to make it more implementable. In this paper, we propose an accelerated PALM with indefinite proximal regularization (PALM-IPR) for convex programming with linear constraints, which generalizes the proximal terms from semi-definite to indefinite. Under mild assumptions, we establish the worst-case convergence rate of PALM-IPR in a non-ergodic sense. Finally, numerical results show that our new method is feasible and efficient for solving compressive sensing problems.
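
    For orientation, the generic proximal ALM template for minimizing f(x) subject to Ax = b, to which the paper adds acceleration and an indefinite proximal weighting D, reads (a standard sketch, not the paper's exact scheme):

    ```latex
    \begin{aligned}
    x^{k+1} &= \arg\min_{x}\; f(x) - \langle \lambda^{k},\, Ax - b\rangle
               + \tfrac{\beta}{2}\,\|Ax - b\|_{2}^{2}
               + \tfrac{1}{2}\,\|x - x^{k}\|_{D}^{2}, \\
    \lambda^{k+1} &= \lambda^{k} - \beta\,\bigl(Ax^{k+1} - b\bigr),
    \end{aligned}
    ```

    where classical analyses require the proximal weighting D to be positive semi-definite; PALM-IPR's contribution is to allow an indefinite D.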

  20. Simulations of Coherent Synchrotron Radiation Effects in Electron Machines

    NASA Astrophysics Data System (ADS)

    Migliorati, M.; Schiavi, A.; Dattoli, G.

    2007-09-01

    Coherent synchrotron radiation (CSR) generated by high intensity electron beams can be a source of undesirable effects limiting the performance of storage rings. The complexity of the physical mechanisms underlying the interplay between the electron beam and the CSR demands reliable simulation codes. In the past, codes based on Lie algebraic techniques have been very efficient for treating transport problems in accelerators. The extension of these methods to the non-linear case is ideally suited to treating wakefield-beam interaction. In this paper we report on the development of a numerical code, based on the solution of the Vlasov equation, which includes the non-linear contribution due to wakefields. The proposed solution method exploits an algebraic technique based on exponential operators. We show that, in the case of CSR wakefields, the integration procedure is capable of reproducing the onset of an instability which leads to microbunching of the beam, thus increasing the CSR at short wavelengths. In addition, considerations on the threshold of the instability for Gaussian bunches are also reported.

  2. QR code-based non-linear image encryption using Shearlet transform and spiral phase transform

    NASA Astrophysics Data System (ADS)

    Kumar, Ravi; Bhaduri, Basanta; Hennelly, Bryan

    2018-02-01

    In this paper, we propose a new quick response (QR) code-based non-linear technique for image encryption using the Shearlet transform (ST) and the spiral phase transform. The input image is first converted into a QR code and then scrambled using the Arnold transform. The scrambled image is then decomposed into five coefficients using the ST, and the first Shearlet coefficient, C1, is interchanged with a security key before performing the inverse ST. The output after the inverse ST is then modulated with a random phase mask and further spiral-phase transformed to get the final encrypted image. The first coefficient, C1, is used as a private key for decryption. The sensitivity of the security keys is analysed in terms of correlation coefficient and peak signal-to-noise ratio. The robustness of the scheme is also checked against various attacks such as noise, occlusion and special attacks. Numerical simulation results are shown in support of the proposed technique, and an optoelectronic set-up for encryption is also proposed.

  3. Comparative Evaluation of Different Optimization Algorithms for Structural Design Applications

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.

    1996-01-01

    Non-linear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Center, a project was initiated to assess the performance of eight different optimizers through the development of a computer code, CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium and large structural problems, using the eight different optimizers on a Cray YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from their performance on these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems, however, three optimizers (the two sequential quadratic programming routines DNCONG of IMSL and SQP of IDESIGN, along with the Sequential Unconstrained Minimization Technique, SUMT) outperformed the others. At the optimum, most optimizers captured an identical number of active displacement and frequency constraints, but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization, and its alleviation can improve the efficiency of the optimizers.

  4. An efficient interior-point algorithm with new non-monotone line search filter method for nonlinear constrained programming

    NASA Astrophysics Data System (ADS)

    Wang, Liwei; Liu, Xinggao; Zhang, Zeyin

    2017-02-01

    An efficient primal-dual interior-point algorithm using a new non-monotone line search filter method is presented for nonlinear constrained programming, which is widely applied in engineering optimization. The new non-monotone line search technique is introduced to lead to relaxed step acceptance conditions and improved convergence performance. It can also avoid the choice of the upper bound on the memory, which brings obvious disadvantages to traditional techniques. Under mild assumptions, the global convergence of the new non-monotone line search filter method is analysed, and fast local convergence is ensured by second order corrections. The proposed algorithm is applied to the classical alkylation process optimization problem and the results illustrate its effectiveness. Some comprehensive comparisons to existing methods are also presented.
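
    For context, the classical non-monotone Armijo-type rule that such filter methods build on accepts a step when the new objective value improves on the maximum over the last M iterates rather than on the current value alone (a textbook sketch, not the paper's condition; the memory bound M is exactly the quantity the new technique avoids choosing):

    ```latex
    f\!\left(x_{k} + \alpha_{k} d_{k}\right) \;\le\;
    \max_{0 \le j \le \min(k,\,M)} f\!\left(x_{k-j}\right)
    \;+\; \gamma\, \alpha_{k}\, \nabla f(x_{k})^{\top} d_{k},
    \qquad \gamma \in (0, 1).
    ```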

  5. Engineering Overview of a Multidisciplinary HSCT Design Framework Using Medium-Fidelity Analysis Codes

    NASA Technical Reports Server (NTRS)

    Weston, R. P.; Green, L. L.; Salas, A. O.; Samareh, J. A.; Townsend, J. C.; Walsh, J. L.

    1999-01-01

    An objective of the HPCC Program at NASA Langley has been to promote the use of advanced computing techniques to more rapidly solve the problem of multidisciplinary optimization of a supersonic transport configuration. As a result, a software system has been designed and is being implemented to integrate a set of existing discipline analysis codes, some of them CPU-intensive, into a distributed computational framework for the design of a High Speed Civil Transport (HSCT) configuration. The proposed paper will describe the engineering aspects of integrating these analysis codes and additional interface codes into an automated design system. The objective of the design problem is to optimize the aircraft weight for given mission conditions, range, and payload requirements, subject to aerodynamic, structural, and performance constraints. The design variables include both thicknesses of structural elements and geometric parameters that define the external aircraft shape. An optimization model has been adopted that uses the multidisciplinary analysis results and the derivatives of the solution with respect to the design variables to formulate a linearized model that provides input to the CONMIN optimization code, which outputs new values for the design variables. The analysis process begins by deriving the updated geometries and grids from the baseline geometries and grids using the new values for the design variables. This free-form deformation approach provides internal FEM (finite element method) grids that are consistent with aerodynamic surface grids. The next step involves using the derived FEM and section properties in a weights process to calculate detailed weights and the center-of-gravity location for specified flight conditions. The weights process computes the as-built weight, weight distribution, and weight sensitivities for given aircraft configurations at various mass cases. Currently, two mass cases are considered: cruise and gross take-off weight (GTOW). Weights information is obtained from correlations of data from three sources: 1) as-built initial structural and non-structural weights from an existing database, 2) theoretical FEM structural weights and sensitivities from Genesis, and 3) empirical as-built weight increments, non-structural weights, and weight sensitivities from FLOPS. For the aeroelastic analysis, a variable-fidelity aerodynamic analysis has been adopted. This approach uses infrequent CPU-intensive non-linear CFD to calculate a non-linear correction relative to a linear aero calculation for the same aerodynamic surface at an angle of attack that results in the same configuration lift. For efficiency, this non-linear correction is applied after each subsequent linear aero solution during the iterations between the aerodynamic and structural analyses. Convergence is achieved when the vehicle shape being used for the aerodynamic calculations is consistent with the structural deformations caused by the aerodynamic loads. To make the structural analyses more efficient, a linearized structural deformation model has been adopted, in which a single stiffness matrix can be used to solve for the deformations under all the load conditions. Using the converged aerodynamic loads, a final set of structural analyses is performed to determine the stress distributions and the buckling conditions for constraint calculation. Performance constraints are obtained by running FLOPS using drag polars that are computed using results from non-linear corrections to the linear aero code, plus several codes that provide drag increments due to skin friction, wave drag, and other miscellaneous drag contributions. The status of the integration effort will be presented in the proposed paper, and results will be provided that illustrate the degree of accuracy in the linearizations that have been employed.

  6. Derivative pricing with non-linear Fokker-Planck dynamics

    NASA Astrophysics Data System (ADS)

    Michael, Fredrick; Johnson, M. D.

    2003-06-01

    We examine how the Black-Scholes derivative pricing formula is modified when the underlying security obeys non-extensive statistics and Fokker-Planck dynamics. An unusual feature of such securities is that the volatility in the underlying Ito-Langevin equation depends implicitly on the actual market rate of return. This complicates most approaches to valuation. Here we show that progress is possible using variations of the Cox-Ross valuation technique.

  7. Social Data Analysis by Non-Linear Imbedding

    DTIC Science & Technology

    2013-09-20

    Fig. 1 shows this dimenion-reduced galaxy. This example is chosen to illustrate how our “ history independent” techniques can infer major historical...DISTRIBUTION A: Distribution approved for public release. 1989 1990 1991 Middle East 2 7 6 Weapon Nonproliferation 2 6 5 Anti- Apartheid & Human Rights...the Non-Proliferation of Nuclear Weapons) #3570 (Status of the International Convention on the Suppression and Punishment of the crime of Apartheid

  8. Stimulation of a turbofan engine for evaluation of multivariable optimal control concepts. [(computerized simulation)

    NASA Technical Reports Server (NTRS)

    Seldner, K.

    1976-01-01

    The development of control systems for jet engines requires a real-time computer simulation. The simulation provides an effective tool for evaluating control concepts and problem areas prior to actual engine testing. The development and use of a real-time simulation of the Pratt and Whitney F100-PW100 turbofan engine is described. The simulation was used in a multi-variable optimal controls research program using linear quadratic regulator theory. The simulation is used to generate linear engine models at selected operating points and evaluate the control algorithm. To reduce the complexity of the design, it is desirable to reduce the order of the linear model. A technique to reduce the order of the model; is discussed. Selected results between high and low order models are compared. The LQR control algorithms can be programmed on digital computer. This computer will control the engine simulation over the desired flight envelope.

  9. Using Microcomputers to Teach Non-Linear Equations at Sixth Form Level.

    ERIC Educational Resources Information Center

    Cheung, Y. L.

    1984-01-01

    Promotes the use of the microcomputer in mathematics instruction, reviewing approaches to teaching nonlinear equations. Examples of computer diagrams are illustrated and compared to textbook samples. An example of a problem-solving program is included. (ML)

  10. Recursive partitioned inversion of large (1500 x 1500) symmetric matrices

    NASA Technical Reports Server (NTRS)

    Putney, B. H.; Brownd, J. E.; Gomez, R. A.

    1976-01-01

    A recursive algorithm was designed to invert large, dense, symmetric, positive definite matrices using small amounts of computer core, i.e., a small fraction of the core needed to store the complete matrix. The described algorithm is a generalized Gaussian elimination technique. Other algorithms are also discussed for the Cholesky decomposition and step inversion techniques. The purpose of the inversion algorithm is to solve large linear systems of normal equations generated by working geodetic problems. The algorithm was incorporated into a computer program called SOLVE. In the past the SOLVE program has been used in obtaining solutions published as the Goddard earth models.

  11. Multidimensional deconvolution of optical microscope and ultrasound imaging using adaptive least-mean-square (LMS) inverse filtering

    NASA Astrophysics Data System (ADS)

    Sapia, Mark Angelo

    2000-11-01

    Three-dimensional microscope images typically suffer from reduced resolution due to the effects of convolution, optical aberrations and out-of-focus blurring. Two- dimensional ultrasound images are also degraded by convolutional bluffing and various sources of noise. Speckle noise is a major problem in ultrasound images. In microscopy and ultrasound, various methods of digital filtering have been used to improve image quality. Several methods of deconvolution filtering have been used to improve resolution by reversing the convolutional effects, many of which are based on regularization techniques and non-linear constraints. The technique discussed here is a unique linear filter for deconvolving 3D fluorescence microscopy or 2D ultrasound images. The process is to solve for the filter completely in the spatial-domain using an adaptive algorithm to converge to an optimum solution for de-blurring and resolution improvement. There are two key advantages of using an adaptive solution: (1)it efficiently solves for the filter coefficients by taking into account all sources of noise and degraded resolution at the same time, and (2)achieves near-perfect convergence to the ideal linear deconvolution filter. This linear adaptive technique has other advantages such as avoiding artifacts of frequency-domain transformations and concurrent adaptation to suppress noise. Ultimately, this approach results in better signal-to-noise characteristics with virtually no edge-ringing. Many researchers have not adopted linear techniques because of poor convergence, noise instability and negative valued data in the results. The methods presented here overcome many of these well-documented disadvantages and provide results that clearly out-perform other linear methods and may also out-perform regularization and constrained algorithms. In particular, the adaptive solution is most responsible for overcoming the poor performance associated with linear techniques. This linear adaptive approach to deconvolution is demonstrated with results of restoring blurred phantoms for both microscopy and ultrasound and restoring 3D microscope images of biological cells and 2D ultrasound images of human subjects (courtesy of General Electric and Diasonics, Inc.).

  12. Development of a Tomography Technique for Assessment of the Material Condition of Concrete Using Optimized Elastic Wave Parameters.

    PubMed

    Chai, Hwa Kian; Liu, Kit Fook; Behnia, Arash; Yoshikazu, Kobayashi; Shiotani, Tomoki

    2016-04-16

    Concrete is the most ubiquitous construction material. Apart from the fresh and early age properties of concrete material, its condition during the structure life span affects the overall structural performance. Therefore, development of techniques such as non-destructive testing which enable the investigation of the material condition, are in great demand. Tomography technique has become an increasingly popular non-destructive evaluation technique for civil engineers to assess the condition of concrete structures. In the present study, this technique is investigated by developing reconstruction procedures utilizing different parameters of elastic waves, namely the travel time, wave amplitude, wave frequency, and Q-value. In the development of algorithms, a ray tracing feature was adopted to take into account the actual non-linear propagation of elastic waves in concrete containing defects. Numerical simulation accompanied by experimental verifications of wave motion were conducted to obtain wave propagation profiles in concrete containing honeycomb as a defect and in assessing the tendon duct filling of pre-stressed concrete (PC) elements. The detection of defects by the developed tomography reconstruction procedures was evaluated and discussed.

  13. Aircraft model prototypes which have specified handling-quality time histories

    NASA Technical Reports Server (NTRS)

    Johnson, S. H.

    1976-01-01

    Several techniques for obtaining linear constant-coefficient airplane models from specified handling-quality time histories are discussed. One technique, the pseudodata method, solves the basic problem, yields specified eigenvalues, and accommodates state-variable transfer-function zero suppression. The method is fully illustrated for a fourth-order stability-axis small-motion model with three lateral handling-quality time histories specified. The FORTRAN program which obtains and verifies the model is included and fully documented.

  14. Benchmarking in Universities: League Tables Revisited

    ERIC Educational Resources Information Center

    Turner, David

    2005-01-01

    This paper examines the practice of benchmarking universities using a "league table" approach. Taking the example of the "Sunday Times University League Table", the author reanalyses the descriptive data on UK universities. Using a linear programming technique, data envelope analysis (DEA), the author uses the re-analysis to…

  15. Application of X-Ray Computer Tomography for Observing the Central Void Formations and the Fuel Pin Deformations of Irradiated FBR Fuel Assemblies

    NASA Astrophysics Data System (ADS)

    Katsuyama, Kozo; Nagamine, Tsuyoshi; Furuya, Hirotaka

    2010-10-01

    In order to observe the structural change in the interior of irradiated fuel assemblies, a non-destructive post-irradiation examination (PIE) technique using X-ray computer tomography (X-ray CT) was developed. This X-ray CT technique was applied to observe the central void formations and fuel pin deformations of fuel assemblies which had been irradiated at high linear heat rating. The central void sizes in all fuel pins were measured on five cross sections of the core fuel column as a parameter for evaluating fuel thermal performance. In addition, the fuel pin deformations were analyzed from X-ray CT images obtained along the axial direction of a fuel assembly at the same separation interval. A dependence of void size on the linear heat rating was seen in the fuel assembly irradiated at high linear heat rating. In addition, significant undulations of the fuel pin were observed along the axial direction, coinciding with the wrapping wire pitch in the core fuel column. Application of the developed technique should provide enhanced resolution of measurements and simplify fuel PIEs.

  16. Non-linear programming in shakedown analysis with plasticity and friction

    NASA Astrophysics Data System (ADS)

    Spagnoli, A.; Terzano, M.; Barber, J. R.; Klarbring, A.

    2017-07-01

    Complete frictional contacts, when subjected to cyclic loading, may sometimes develop a favourable situation where slip ceases after a few cycles, an occurrence commonly known as frictional shakedown. Its resemblance to shakedown in plasticity has prompted scholars to apply direct methods, derived from the classical theorems of limit analysis, in order to assess a safe limit to the external loads applied on the system. In circumstances where zones of plastic deformation develop in the material (e.g., because of the large stress concentrations near the sharp edges of a complete contact), it is reasonable to expect an effect of mutual interaction of frictional slip and plastic strains on the load limit below which the global behaviour is non dissipative, i.e., both slip and plastic strains go to zero after some dissipative load cycles. In this paper, shakedown of general two-dimensional discrete systems, involving both friction and plasticity, is discussed and the shakedown limit load is calculated using a non-linear programming algorithm based on the static theorem of limit analysis. An illustrative example related to an elastic-plastic solid containing a frictional crack is provided.

  17. Informal Evaluation.

    ERIC Educational Resources Information Center

    Engel, Brenda S.

    Intended for non-experts in evaluative techniques, this monograph presents suggestions and examples for assessing: (1) the child; (2) the classroom; and (3) the program or the school. Illustrative techniques of recordkeeping are presented. Methods of collecting data include documentation and formal records. Techniques to be used during evaluation…

  18. Helicopter Controllability

    DTIC Science & Technology

    1989-09-01

    106 3. Program CC Systems Technology, Inc. (STI) of Hawthorne, CA., develops and markets PC control system analysis and design software including...is marketed in Palo Alto, Ca., by Applied i and can be used for both linear and non- linear control system analysis. Using TUTSIM involves developing...gravity centroid ( ucg ) can be calculated as 112 n m pi - 2 zi acg n i (7-5) where pi = poles zi = zeroes n = number of poles m = number of zeroes If K

  19. SAS macro programs for geographically weighted generalized linear modeling with spatial point data: applications to health research.

    PubMed

    Chen, Vivian Yi-Ju; Yang, Tse-Chuan

    2012-08-01

    An increasing interest in exploring spatial non-stationarity has generated several specialized analytic software programs; however, few of these programs can be integrated natively into a well-developed statistical environment such as SAS. We not only developed a set of SAS macro programs to fill this gap, but also expanded the geographically weighted generalized linear modeling (GWGLM) by integrating the strengths of SAS into the GWGLM framework. Three features distinguish our work. First, the macro programs of this study provide more kernel weighting functions than the existing programs. Second, with our codes the users are able to better specify the bandwidth selection process compared to the capabilities of existing programs. Third, the development of the macro programs is fully embedded in the SAS environment, providing great potential for future exploration of complicated spatially varying coefficient models in other disciplines. We provided three empirical examples to illustrate the use of the SAS macro programs and demonstrated the advantages explained above. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  20. H∞ control for uncertain linear system over networks with Bernoulli data dropout and actuator saturation.

    PubMed

    Yu, Jimin; Yang, Chenchen; Tang, Xiaoming; Wang, Ping

    2018-03-01

    This paper investigates the H ∞ control problems for uncertain linear system over networks with random communication data dropout and actuator saturation. The random data dropout process is modeled by a Bernoulli distributed white sequence with a known conditional probability distribution and the actuator saturation is confined in a convex hull by introducing a group of auxiliary matrices. By constructing a quadratic Lyapunov function, effective conditions for the state feedback-based H ∞ controller and the observer-based H ∞ controller are proposed in the form of non-convex matrix inequalities to take the random data dropout and actuator saturation into consideration simultaneously, and the problem of non-convex feasibility is solved by applying cone complementarity linearization (CCL) procedure. Finally, two simulation examples are given to demonstrate the effectiveness of the proposed new design techniques. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  1. Elastic properties and optical absorption studies of mixed alkali borogermanate glasses

    NASA Astrophysics Data System (ADS)

    Taqiullah, S. M.; Ahmmad, Shaik Kareem; Samee, M. A.; Rahman, Syed

    2018-05-01

    First time the mixed alkali effect (MAE) has been investigated in the glass system xNa2O-(30-x)Li2O-40B2O3- 30GeO2 (0≤x≤30 mol%) through density and optical absorption studies. The present glasses were prepared by melt quench technique. The density of the present glasses varies non-linearly exhibiting mixed alkali effect. Using the density data, the elastic moduli namely Young's modulus, bulk and shear modulus show strong linear dependence as a function of compositional parameter. From the absorption edge studies, the values of optical band gap energies for all transitions have been evaluated. It was established that the type of electronic transition in the present glass system is indirect allowed. The indirect optical band gap exhibit non-linear behavior with compositional parameter showing the mixed alkali effect.

  2. Experimental and numerical analysis of pre-compressed masonry walls in two-way-bending with second order effects

    NASA Astrophysics Data System (ADS)

    Milani, Gabriele; Olivito, Renato S.; Tralli, Antonio

    2014-10-01

    The buckling behavior of slender unreinforced masonry (URM) walls subjected to axial compression and out-of-plane lateral loads is investigated through a combined experimental and numerical homogenizedapproach. After a preliminary analysis performed on a unit cell meshed by means of elastic FEs and non-linear interfaces, macroscopic moment-curvature diagrams so obtained are implemented at a structural level, discretizing masonry by means of rigid triangular elements and non-linear interfaces. The non-linear incremental response of the structure is accounted for a specific quadratic programming routine. In parallel, a wide experimental campaign is conducted on walls in two way bending, with the double aim of both validating the numerical model and investigating the behavior of walls that may not be reduced to simple cantilevers or simply supported beams. Panels investigated are dry-joint in scale square walls simply supported at the base and on a vertical edge, exhibiting the classical Rondelet's mechanism. The results obtained are compared with those provided by the numerical model.

  3. Improved Method for Linear B-Cell Epitope Prediction Using Antigen’s Primary Sequence

    PubMed Central

    Raghava, Gajendra P. S.

    2013-01-01

    One of the major challenges in designing a peptide-based vaccine is the identification of antigenic regions in an antigen that can stimulate B-cell’s response, also called B-cell epitopes. In the past, several methods have been developed for the prediction of conformational and linear (or continuous) B-cell epitopes. However, the existing methods for predicting linear B-cell epitopes are far from perfection. In this study, an attempt has been made to develop an improved method for predicting linear B-cell epitopes. We have retrieved experimentally validated B-cell epitopes as well as non B-cell epitopes from Immune Epitope Database and derived two types of datasets called Lbtope_Variable and Lbtope_Fixed length datasets. The Lbtope_Variable dataset contains 14876 B-cell epitope and 23321 non-epitopes of variable length where as Lbtope_Fixed length dataset contains 12063 B-cell epitopes and 20589 non-epitopes of fixed length. We also evaluated the performance of models on above datasets after removing highly identical peptides from the datasets. In addition, we have derived third dataset Lbtope_Confirm having 1042 epitopes and 1795 non-epitopes where each epitope or non-epitope has been experimentally validated in at least two studies. A number of models have been developed to discriminate epitopes and non-epitopes using different machine-learning techniques like Support Vector Machine, and K-Nearest Neighbor. We achieved accuracy from ∼54% to 86% using diverse s features like binary profile, dipeptide composition, AAP (amino acid pair) profile. In this study, for the first time experimentally validated non B-cell epitopes have been used for developing method for predicting linear B-cell epitopes. In previous studies, random peptides have been used as non B-cell epitopes. In order to provide service to scientific community, a web server LBtope has been developed for predicting and designing B-cell epitopes (http://crdd.osdd.net/raghava/lbtope/). PMID:23667458

  4. Neutrino masses and cosmological parameters from a Euclid-like survey: Markov Chain Monte Carlo forecasts including theoretical errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Audren, Benjamin; Lesgourgues, Julien; Bird, Simeon

    2013-01-01

    We present forecasts for the accuracy of determining the parameters of a minimal cosmological model and the total neutrino mass based on combined mock data for a future Euclid-like galaxy survey and Planck. We consider two different galaxy surveys: a spectroscopic redshift survey and a cosmic shear survey. We make use of the Monte Carlo Markov Chains (MCMC) technique and assume two sets of theoretical errors. The first error is meant to account for uncertainties in the modelling of the effect of neutrinos on the non-linear galaxy power spectrum and we assume this error to be fully correlated in Fouriermore » space. The second error is meant to parametrize the overall residual uncertainties in modelling the non-linear galaxy power spectrum at small scales, and is conservatively assumed to be uncorrelated and to increase with the ratio of a given scale to the scale of non-linearity. It hence increases with wavenumber and decreases with redshift. With these two assumptions for the errors and assuming further conservatively that the uncorrelated error rises above 2% at k = 0.4 h/Mpc and z = 0.5, we find that a future Euclid-like cosmic shear/galaxy survey achieves a 1-σ error on M{sub ν} close to 32 meV/25 meV, sufficient for detecting the total neutrino mass with good significance. If the residual uncorrelated errors indeed rises rapidly towards smaller scales in the non-linear regime as we have assumed here then the data on non-linear scales does not increase the sensitivity to the total neutrino mass. Assuming instead a ten times smaller theoretical error with the same scale dependence, the error on the total neutrino mass decreases moderately from σ(M{sub ν}) = 18 meV to 14 meV when mildly non-linear scales with 0.1 h/Mpc < k < 0.6 h/Mpc are included in the analysis of the galaxy survey data.« less

  5. Global strength assessment in oblique waves of a large gas carrier ship, based on a non-linear iterative method

    NASA Astrophysics Data System (ADS)

    Domnisoru, L.; Modiga, A.; Gasparotti, C.

    2016-08-01

    At the ship's design, the first step of the hull structural assessment is based on the longitudinal strength analysis, with head wave equivalent loads by the ships' classification societies’ rules. This paper presents an enhancement of the longitudinal strength analysis, considering the general case of the oblique quasi-static equivalent waves, based on the own non-linear iterative procedure and in-house program. The numerical approach is developed for the mono-hull ships, without restrictions on 3D-hull offset lines non-linearities, and involves three interlinked iterative cycles on floating, pitch and roll trim equilibrium conditions. Besides the ship-wave equilibrium parameters, the ship's girder wave induced loads are obtained. As numerical study case we have considered a large LPG liquefied petroleum gas carrier. The numerical results of the large LPG are compared with the statistical design values from several ships' classification societies’ rules. This study makes possible to obtain the oblique wave conditions that are inducing the maximum loads into the large LPG ship's girder. The numerical results of this study are pointing out that the non-linear iterative approach is necessary for the computation of the extreme loads induced by the oblique waves, ensuring better accuracy of the large LPG ship's longitudinal strength assessment.

  6. Comprehensive analysis of heat transfer of gold-blood nanofluid (Sisko-model) with thermal radiation

    NASA Astrophysics Data System (ADS)

    Eid, Mohamed R.; Alsaedi, Ahmed; Muhammad, Taseer; Hayat, Tasawar

    Characteristics of heat transfer of gold nanoparticles (Au-NPs) in flow past a power-law stretching surface are discussed. Sisko bio-nanofluid flow (with blood as a base fluid) in existence of non-linear thermal radiation is studied. The resulting equations system is abbreviated to model the suggested problem in non-linear PDEs. Along with initial and boundary-conditions, the equations are made non-dimensional and then resolved numerically utilizing 4th-5th order Runge-Kutta-Fehlberg (RKF45) technique with shooting integration procedure. Various flow quantities behaviors are examined for parametric consideration such as the Au-NPs volume fraction, the exponentially stretching and thermal radiation parameters. It is observed that radiation drives to shortage the thermal boundary-layer thickness and therefore resulted in better heat transfer at surface.

  7. Application of a sensitivity analysis technique to high-order digital flight control systems

    NASA Technical Reports Server (NTRS)

    Paduano, James D.; Downing, David R.

    1987-01-01

    A sensitivity analysis technique for multiloop flight control systems is studied. This technique uses the scaled singular values of the return difference matrix as a measure of the relative stability of a control system. It then uses the gradients of these singular values with respect to system and controller parameters to judge sensitivity. The sensitivity analysis technique is first reviewed; then it is extended to include digital systems, through the derivation of singular-value gradient equations. Gradients with respect to parameters which do not appear explicitly as control-system matrix elements are also derived, so that high-order systems can be studied. A complete review of the integrated technique is given by way of a simple example: the inverted pendulum problem. The technique is then demonstrated on the X-29 control laws. Results show linear models of real systems can be analyzed by this sensitivity technique, if it is applied with care. A computer program called SVA was written to accomplish the singular-value sensitivity analysis techniques. Thus computational methods and considerations form an integral part of many of the discussions. A user's guide to the program is included. The SVA is a fully public domain program, running on the NASA/Dryden Elxsi computer.

  8. "NONLINEAR DYNAMIC SYSTEMS RESPONSE TO NON-STATIONARY EXCITATION USING THE WAVELET TRANSFORM"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    SPANOS, POL D.

    2006-01-15

    The objective of this research project has been the development of techniques for estimating the power spectra of stochastic processes using wavelet transform, and the development of related techniques for determining the response of linear/nonlinear systems to excitations which are described via the wavelet transform. Both of the objectives have been achieved, and the research findings have been disseminated in papers in archival journals and technical conferences.

  9. On Non-Linear Sensitivity of Marine Biological Models to Parameter Variations

    DTIC Science & Technology

    2007-01-01

    M.B., 2002. Understanding uncertain enviromental systems. In: Grasman, J., van Straten, G. (Eds.), Predictability and Nonlinear Modelling in Natural...model evaluations to compute sensitivity indices. Comput. Phys. Commun. 145, 280–297. Saltelli, A., Andres, T.H., Homma, T., 1993. Some new techniques

  10. A Message from Home: Findings from a Program for Non-Retarded, Low-Income Preschoolers.

    ERIC Educational Resources Information Center

    Levenstein, Phyllis

    This document describes the Mother-Child Home Program (MCHP) for prevention of educational disadvantage, prepared by the Verbal Interaction Project. The MCHP consisted of 92 semi-weekly, half hour home sessions spread over two years by interviewers called "Toy Demonstrators". The latter were trained in non-didactic techniques to show a…

  11. Nonlinear imaging (NIM) of barely visible impact damage (BVID) in composite panels using a semi and full air-coupled linear and nonlinear ultrasound technique

    NASA Astrophysics Data System (ADS)

    Malfense Fierro, Gian Piero; Meo, Michele

    2018-03-01

    Two non-contact methods were evaluated to address the reliability and reproducibility concerns affecting industry adoption of nonlinear ultrasound techniques for non-destructive testing and evaluation (NDT/E) purposes. A semi and a fully air-coupled linear and nonlinear ultrasound method was evaluated by testing for barely visible impact damage (BVID) in composite materials. Air coupled systems provide various advantages over contact driven systems; such as: ease of inspection, no contact and lubrication issues and a great potential for non-uniform geometry evaluation. The semi air-coupled setup used a suction attached piezoelectric transducer to excite the sample and an array of low-cost microphones to capture the signal over the inspection area, while the second method focused on a purely air-coupled setup, using an air-coupled transducer to excite the structure and capture the signal. One of the issues facing nonlinear and any air-coupled systems is transferring enough energy to stimulate wave propagation and in the case of nonlinear ultrasound; damage regions. Results for both methods provided nonlinear imaging (NIM) of damage regions using a sweep excitation methodology, with the semi aircoupled system providing clearer results.

  12. Evaluation of a transfinite element numerical solution method for nonlinear heat transfer problems

    NASA Technical Reports Server (NTRS)

    Cerro, J. A.; Scotti, S. J.

    1991-01-01

    Laplace transform techniques have been widely used to solve linear, transient field problems. A transform-based algorithm enables calculation of the response at selected times of interest without the need for stepping in time as required by conventional time integration schemes. The elimination of time stepping can substantially reduce computer time when transform techniques are implemented in a numerical finite element program. The coupling of transform techniques with spatial discretization techniques such as the finite element method has resulted in what are known as transfinite element methods. Recently attempts have been made to extend the transfinite element method to solve nonlinear, transient field problems. This paper examines the theoretical basis and numerical implementation of one such algorithm, applied to nonlinear heat transfer problems. The problem is linearized and solved by requiring a numerical iteration at selected times of interest. While shown to be acceptable for weakly nonlinear problems, this algorithm is ineffective as a general nonlinear solution method.

  13. Computed Tomography Inspection and Analysis for Additive Manufacturing Components

    NASA Technical Reports Server (NTRS)

    Beshears, Ronald D.

    2016-01-01

    Computed tomography (CT) inspection was performed on test articles additively manufactured from metallic materials. Metallic AM and machined wrought alloy test articles with programmed flaws were inspected using a 2MeV linear accelerator based CT system. Performance of CT inspection on identically configured wrought and AM components and programmed flaws was assessed using standard image analysis techniques to determine the impact of additive manufacturing on inspectability of objects with complex geometries.

  14. Optical nonlinearity in gelatin layer film containing Au nanoparticles

    NASA Astrophysics Data System (ADS)

    Hirose, Tomohiro; Arisawa, Michiko; Omatsu, Takashige; Kuge, Ken'ichi; Hasegawa, Akira; Tateda, Mitsuhiro

    2002-09-01

    We demonstrate a novel technique to fabricate a gelatin film containing Au-nano-particles. The technique is based on silver halide photographic development. We investigated third-order non-linearity of the film by forward-four-wave-mixing technique. Peak absorption appeared at the wavelength of 560nm. Self-diffraction by the use of third order nonlinear grating formed by intense pico-second pulses was observed. Experimental diffraction efficiency was proportional to the square of the pump intensity. Third-order susceptibility c(3) of the film was estimated to be 1.8?~10^-7esu.

  15. Background correction in forensic photography. I. Photography of blood under conditions of non-uniform illumination or variable substrate color--theoretical aspects and proof of concept.

    PubMed

    Wagner, John H; Miskelly, Gordon M

    2003-05-01

    The combination of photographs taken at two or three wavelengths at and bracketing an absorbance peak indicative of a particular compound can lead to an image with enhanced visualization of the compound. This procedure works best for compounds with absorbance bands that are narrow compared with "average" chromophores. If necessary, the photographs can be taken with different exposure times to ensure that sufficient light from the substrate is detected at all three wavelengths. The combination of images is readily performed if the images are obtained with a digital camera and are then processed using an image processing program. Best results are obtained if linear images at the peak maximum, at a slightly shorter wavelength, and at a slightly longer wavelength are used. However, acceptable results can also be obtained under many conditions if non-linear photographs are used or if only two wavelengths (one of which is at the peak maximum) are combined. These latter conditions are more achievable by many "mid-range" digital cameras. Wavelength selection can either be by controlling the illumination (e.g., by using an alternate light source) or by use of narrow bandpass filters. The technique is illustrated using blood as the target analyte, using bands of light centered at 395, 415, and 435 nm. The extension of the method to detection of blood by fluorescence quenching is also described.

  16. Comparative Performance Evaluation of Rainfall-runoff Models, Six of Black-box Type and One of Conceptual Type, From The Galway Flow Forecasting System (gffs) Package, Applied On Two Irish Catchments

    NASA Astrophysics Data System (ADS)

    Goswami, M.; O'Connor, K. M.; Shamseldin, A. Y.

    The "Galway Real-Time River Flow Forecasting System" (GFFS) is a software pack- age developed at the Department of Engineering Hydrology, of the National University of Ireland, Galway, Ireland. It is based on a selection of lumped black-box and con- ceptual rainfall-runoff models, all developed in Galway, consisting primarily of both the non-parametric (NP) and parametric (P) forms of two black-box-type rainfall- runoff models, namely, the Simple Linear Model (SLM-NP and SLM-P) and the seasonally-based Linear Perturbation Model (LPM-NP and LPM-P), together with the non-parametric wetness-index-based Linearly Varying Gain Factor Model (LVGFM), the black-box Artificial Neural Network (ANN) Model, and the conceptual Soil Mois- ture Accounting and Routing (SMAR) Model. Comprised of the above suite of mod- els, the system enables the user to calibrate each model individually, initially without updating, and it is capable also of producing combined (i.e. consensus) forecasts us- ing the Simple Average Method (SAM), the Weighted Average Method (WAM), or the Artificial Neural Network Method (NNM). The updating of each model output is achieved using one of four different techniques, namely, simple Auto-Regressive (AR) updating, Linear Transfer Function (LTF) updating, Artificial Neural Network updating (NNU), and updating by the Non-linear Auto-Regressive Exogenous-input method (NARXM). The models exhibit a considerable range of variation in degree of complexity of structure, with corresponding degrees of complication in objective func- tion evaluation. Operating in continuous river-flow simulation and updating modes, these models and techniques have been applied to two Irish catchments, namely, the Fergus and the Brosna. A number of performance evaluation criteria have been used to comparatively assess the model discharge forecast efficiency.

  17. A virtual reality system for arm and hand rehabilitation

    NASA Astrophysics Data System (ADS)

    Luo, Zhiqiang; Lim, Chee Kian; Chen, I.-Ming; Yeo, Song Huat

    2011-03-01

    This paper presents a virtual reality (VR) system for upper limb rehabilitation. The system incorporates two motion track components, the Arm Suit and the Smart Glove which are composed of a range of the optical linear encoders (OLE) and the inertial measurement units (IMU), and two interactive practice applications designed for driving users to perform the required functional and non-functional motor recovery tasks. We describe the technique details about the two motion track components and the rational to design two practice applications. The experiment results show that, compared with the marker-based tracking system, the Arm Suit can accurately track the elbow and wrist positions. The repeatability of the Smart Glove on measuring the five fingers' movement can be satisfied. Given the low cost, high accuracy and easy installation, the system thus promises to be a valuable complement to conventional therapeutic programs offered in rehabilitation clinics and at home.

  18. MIDACO on MINLP space applications

    NASA Astrophysics Data System (ADS)

    Schlueter, Martin; Erb, Sven O.; Gerdts, Matthias; Kemble, Stephen; Rückmann, Jan-J.

    2013-04-01

    A numerical study on two challenging mixed-integer non-linear programming (MINLP) space applications and their optimization with MIDACO, a recently developed general purpose optimization software, is presented. These applications are the optimal control of the ascent of a multiple-stage space launch vehicle and the space mission trajectory design from Earth to Jupiter using multiple gravity assists. Additionally, an NLP aerospace application, the optimal control of an F8 aircraft manoeuvre, is discussed and solved. In order to enhance the optimization performance of MIDACO a hybridization technique, coupling MIDACO with an SQP algorithm, is presented for two of these three applications. The numerical results show, that the applications can be solved to their best known solution (or even new best solution) in a reasonable time by the considered approach. Since using the concept of MINLP is still a novelty in the field of (aero)space engineering, the demonstrated capabilities are seen as very promising.

  19. H.264/AVC digital fingerprinting based on spatio-temporal just noticeable distortion

    NASA Astrophysics Data System (ADS)

    Ait Saadi, Karima; Bouridane, Ahmed; Guessoum, Abderrezak

    2014-01-01

    This paper presents a robust adaptive embedding scheme using a modified Spatio-Temporal noticeable distortion (JND) model that is designed for tracing the distribution of the H.264/AVC video content and protecting them from unauthorized redistribution. The Embedding process is performed during coding process in selected macroblocks type Intra 4x4 within I-Frame. The method uses spread-spectrum technique in order to obtain robustness against collusion attacks and the JND model to dynamically adjust the embedding strength and control the energy of the embedded fingerprints so as to ensure their imperceptibility. Linear and non linear collusion attacks are performed to show the robustness of the proposed technique against collusion attacks while maintaining visual quality unchanged.

  20. Serenity: A subsystem quantum chemistry program.

    PubMed

    Unsleber, Jan P; Dresselhaus, Thomas; Klahr, Kevin; Schnieders, David; Böckers, Michael; Barton, Dennis; Neugebauer, Johannes

    2018-05-15

    We present the new quantum chemistry program Serenity. It implements a wide variety of functionalities with a focus on subsystem methodology. The modular code structure in combination with publicly available external tools and particular design concepts ensures extensibility and robustness with a focus on the needs of a subsystem program. Several important features of the program are exemplified with sample calculations with subsystem density-functional theory, potential reconstruction techniques, a projection-based embedding approach and combinations thereof with geometry optimization, semi-numerical frequency calculations and linear-response time-dependent density-functional theory. © 2018 Wiley Periodicals, Inc. © 2018 Wiley Periodicals, Inc.

  1. Local numerical modelling of ultrasonic guided waves in linear and nonlinear media

    NASA Astrophysics Data System (ADS)

    Packo, Pawel; Radecki, Rafal; Kijanka, Piotr; Staszewski, Wieslaw J.; Uhl, Tadeusz; Leamy, Michael J.

    2017-04-01

    Nonlinear ultrasonic techniques provide improved damage sensitivity compared to linear approaches. The combination of attractive properties of guided waves, such as Lamb waves, with unique features of higher harmonic generation provides great potential for characterization of incipient damage, particularly in plate-like structures. Nonlinear ultrasonic structural health monitoring techniques use interrogation signals at frequencies other than the excitation frequency to detect changes in structural integrity. Signal processing techniques used in non-destructive evaluation are frequently supported by modeling and numerical simulations in order to facilitate problem solution. This paper discusses known and newly-developed local computational strategies for simulating elastic waves, and attempts characterization of their numerical properties in the context of linear and nonlinear media. A hybrid numerical approach combining advantages of the Local Interaction Simulation Approach (LISA) and Cellular Automata for Elastodynamics (CAFE) is proposed for unique treatment of arbitrary strain-stress relations. The iteration equations of the method are derived directly from physical principles employing stress and displacement continuity, leading to an accurate description of the propagation in arbitrarily complex media. Numerical analysis of guided wave propagation, based on the newly developed hybrid approach, is presented and discussed in the paper for linear and nonlinear media. Comparisons to Finite Elements (FE) are also discussed.

  2. Structural performance analysis and redesign

    NASA Technical Reports Server (NTRS)

    Whetstone, W. D.

    1978-01-01

    Program performs stress buckling and vibrational analysis of large, linear, finite-element systems in excess of 50,000 degrees of freedom. Cost, execution time, and storage requirements are kept reasonable through use of sparse matrix solution techniques, and other computational and data management procedures designed for problems of very large size.

  3. Estimating School Efficiency: A Comparison of Methods Using Simulated Data.

    ERIC Educational Resources Information Center

    Bifulco, Robert; Bretschneider, Stuart

    2001-01-01

    Uses simulated data to assess the adequacy of two econometric and linear-programming techniques (data-envelopment analysis and corrected ordinary least squares) for measuring performance-based school reform. In complex data sets (simulated to contain measurement error and endogeneity), these methods are inadequate efficiency measures. (Contains 40…

  4. Linear regression analysis of survival data with missing censoring indicators.

    PubMed

    Wang, Qihua; Dinse, Gregg E

    2011-04-01

    Linear regression analysis has been studied extensively in a random censorship setting, but typically all of the censoring indicators are assumed to be observed. In this paper, we develop synthetic data methods for estimating regression parameters in a linear model when some censoring indicators are missing. We define estimators based on regression calibration, imputation, and inverse probability weighting techniques, and we prove all three estimators are asymptotically normal. The finite-sample performance of each estimator is evaluated via simulation. We illustrate our methods by assessing the effects of sex and age on the time to non-ambulatory progression for patients in a brain cancer clinical trial.

  5. Soft-decision decoding techniques for linear block codes and their error performance analysis

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1996-01-01

    The first paper presents a new minimum-weight trellis-based soft-decision iterative decoding algorithm for binary linear block codes. The second paper derives an upper bound on the probability of block error for multilevel concatenated codes (MLCC). The bound evaluates difference in performance for different decompositions of some codes. The third paper investigates the bit error probability code for maximum likelihood decoding of binary linear codes. The fourth and final paper included in this report is concerns itself with the construction of multilevel concatenated block modulation codes using a multilevel concatenation scheme for the frequency non-selective Rayleigh fading channel.

  6. The Routine Fitting of Kinetic Data to Models

    PubMed Central

    Berman, Mones; Shahn, Ezra; Weiss, Marjory F.

    1962-01-01

    A mathematical formalism is presented for use with digital computers to permit the routine fitting of data to physical and mathematical models. Given a set of data, the mathematical equations describing a model, initial conditions for an experiment, and initial estimates for the values of model parameters, the computer program automatically proceeds to obtain a least squares fit of the data by an iterative adjustment of the values of the parameters. When the experimental measures are linear combinations of functions, the linear coefficients for a least squares fit may also be calculated. The values of both the parameters of the model and the coefficients for the sum of functions may be unknown independent variables, unknown dependent variables, or known constants. In the case of dependence, only linear dependencies are provided for in routine use. The computer program includes a number of subroutines, each one of which performs a special task. This permits flexibility in choosing various types of solutions and procedures. One subroutine, for example, handles linear differential equations, another, special non-linear functions, etc. The use of analytic or numerical solutions of equations is possible. PMID:13867975

  7. Accelerated Microstructure Imaging via Convex Optimization (AMICO) from diffusion MRI data.

    PubMed

    Daducci, Alessandro; Canales-Rodríguez, Erick J; Zhang, Hui; Dyrby, Tim B; Alexander, Daniel C; Thiran, Jean-Philippe

    2015-01-15

    Microstructure imaging from diffusion magnetic resonance (MR) data represents an invaluable tool to study non-invasively the morphology of tissues and to provide a biological insight into their microstructural organization. In recent years, a variety of biophysical models have been proposed to associate particular patterns observed in the measured signal with specific microstructural properties of the neuronal tissue, such as axon diameter and fiber density. Despite very appealing results showing that the estimated microstructure indices agree very well with histological examinations, existing techniques require computationally very expensive non-linear procedures to fit the models to the data which, in practice, demand the use of powerful computer clusters for large-scale applications. In this work, we present a general framework for Accelerated Microstructure Imaging via Convex Optimization (AMICO) and show how to re-formulate this class of techniques as convenient linear systems which, then, can be efficiently solved using very fast algorithms. We demonstrate this linearization of the fitting problem for two specific models, i.e. ActiveAx and NODDI, providing a very attractive alternative for parameter estimation in those techniques; however, the AMICO framework is general and flexible enough to work also for the wider space of microstructure imaging methods. Results demonstrate that AMICO represents an effective means to accelerate the fit of existing techniques drastically (up to four orders of magnitude faster) while preserving accuracy and precision in the estimated model parameters (correlation above 0.9). We believe that the availability of such ultrafast algorithms will help to accelerate the spread of microstructure imaging to larger cohorts of patients and to study a wider spectrum of neurological disorders. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  8. Fast secant methods for the iterative solution of large nonsymmetric linear systems

    NASA Technical Reports Server (NTRS)

    Deuflhard, Peter; Freund, Roland; Walter, Artur

    1990-01-01

    A family of secant methods based on general rank-1 updates was revisited in view of the construction of iterative solvers for large non-Hermitian linear systems. As it turns out, both Broyden's good and bad update techniques play a special role, but should be associated with two different line search principles. For Broyden's bad update technique, a minimum residual principle is natural, thus making it theoretically comparable with a series of well known algorithms like GMRES. Broyden's good update technique, however, is shown to be naturally linked with a minimum next correction principle, which asymptotically mimics a minimum error principle. The two minimization principles differ significantly for sufficiently large system dimension. Numerical experiments on discretized partial differential equations of convection diffusion type in 2-D with integral layers give a first impression of the possible power of the derived good Broyden variant.

  9. Multifractality Signatures in Quasars Time Series. I. 3C 273

    NASA Astrophysics Data System (ADS)

    Belete, A. Bewketu; Bravo, J. P.; Canto Martins, B. L.; Leão, I. C.; De Araujo, J. M.; De Medeiros, J. R.

    2018-05-01

    The presence of multifractality in a time series shows different correlations for different time scales as well as intermittent behaviour that cannot be captured by a single scaling exponent. The identification of a multifractal nature allows for a characterization of the dynamics and of the intermittency of the fluctuations in non-linear and complex systems. In this study, we search for a possible multifractal structure (multifractality signature) of the flux variability in the quasar 3C 273 time series for all electromagnetic wavebands at different observation points, and the origins for the observed multifractality. This study is intended to highlight how the scaling behaves across the different bands of the selected candidate which can be used as an additional new technique to group quasars based on the fractal signature observed in their time series and determine whether quasars are non-linear physical systems or not. The Multifractal Detrended Moving Average algorithm (MFDMA) has been used to study the scaling in non-linear, complex and dynamic systems. To achieve this goal, we applied the backward (θ = 0) MFDMA method for one-dimensional signals. We observe weak multifractal (close to monofractal) behaviour in some of the time series of our candidate except in the mm, UV and X-ray bands. The non-linear temporal correlation is the main source of the observed multifractality in the time series whereas the heaviness of the distribution contributes less.

  10. Artificial Intelligence Techniques for Predicting and Mapping Daily Pan Evaporation

    NASA Astrophysics Data System (ADS)

    Arunkumar, R.; Jothiprakash, V.; Sharma, Kirty

    2017-09-01

    In this study, Artificial Intelligence techniques such as Artificial Neural Network (ANN), Model Tree (MT) and Genetic Programming (GP) are used to develop daily pan evaporation time-series (TS) prediction and cause-effect (CE) mapping models. Ten years of observed daily meteorological data such as maximum temperature, minimum temperature, relative humidity, sunshine hours, dew point temperature and pan evaporation are used for developing the models. For each technique, several models are developed by changing the number of inputs and other model parameters. The performance of each model is evaluated using standard statistical measures such as Mean Square Error, Mean Absolute Error, Normalized Mean Square Error and correlation coefficient (R). The results showed that daily TS-GP (4) model predicted better with a correlation coefficient of 0.959 than other TS models. Among various CE models, CE-ANN (6-10-1) resulted better than MT and GP models with a correlation coefficient of 0.881. Because of the complex non-linear inter-relationship among various meteorological variables, CE mapping models could not achieve the performance of TS models. From this study, it was found that GP performs better for recognizing single pattern (time series modelling), whereas ANN is better for modelling multiple patterns (cause-effect modelling) in the data.

  11. Efficient QoS-aware Service Composition

    NASA Astrophysics Data System (ADS)

    Alrifai, Mohammad; Risse, Thomas

    Web service composition requests are usually combined with endto-end QoS requirements, which are specified in terms of non-functional properties (e.g. response time, throughput and price). The goal of QoS-aware service composition is to find the best combination of services such that their aggregated QoS values meet these end-to-end requirements. Local selection techniques are very efficient but fail short in handling global QoS constraints. Global optimization techniques, on the other hand, can handle global constraints, but their poor performance render them inappropriate for applications with dynamic and real-time requirements. In this paper we address this problem and propose a solution that combines global optimization with local selection techniques for achieving a better performance. The proposed solution consists of two steps: first we use mixed integer linear programming (MILP) to find the optimal decomposition of global QoS constraints into local constraints. Second, we use local search to find the best web services that satisfy these local constraints. Unlike existing MILP-based global planning solutions, the size of the MILP model in our case is much smaller and independent on the number of available services, yields faster computation and more scalability. Preliminary experiments have been conducted to evaluate the performance of the proposed solution.

  12. Comparison of l₁-Norm SVR and Sparse Coding Algorithms for Linear Regression.

    PubMed

    Zhang, Qingtian; Hu, Xiaolin; Zhang, Bo

    2015-08-01

    Support vector regression (SVR) is a popular function estimation technique based on Vapnik's concept of support vector machine. Among many variants, the l1-norm SVR is known to be good at selecting useful features when the features are redundant. Sparse coding (SC) is a technique widely used in many areas and a number of efficient algorithms are available. Both l1-norm SVR and SC can be used for linear regression. In this brief, the close connection between the l1-norm SVR and SC is revealed and some typical algorithms are compared for linear regression. The results show that the SC algorithms outperform the Newton linear programming algorithm, an efficient l1-norm SVR algorithm, in efficiency. The algorithms are then used to design the radial basis function (RBF) neural networks. Experiments on some benchmark data sets demonstrate the high efficiency of the SC algorithms. In particular, one of the SC algorithms, the orthogonal matching pursuit is two orders of magnitude faster than a well-known RBF network designing algorithm, the orthogonal least squares algorithm.

  13. Sixth SIAM conference on applied linear algebra: Final program and abstracts. Final technical report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1997-12-31

    Linear algebra plays a central role in mathematics and applications. The analysis and solution of problems from an amazingly wide variety of disciplines depend on the theory and computational techniques of linear algebra. In turn, the diversity of disciplines depending on linear algebra also serves to focus and shape its development. Some problems have special properties (numerical, structural) that can be exploited. Some are simply so large that conventional approaches are impractical. New computer architectures motivate new algorithms, and fresh ways to look at old ones. The pervasive nature of linear algebra in analyzing and solving problems means that peoplemore » from a wide spectrum--universities, industrial and government laboratories, financial institutions, and many others--share an interest in current developments in linear algebra. This conference aims to bring them together for their mutual benefit. Abstracts of papers presented are included.« less

  14. Investigation of ODE integrators using interactive graphics. [Ordinary Differential Equations

    NASA Technical Reports Server (NTRS)

    Brown, R. L.

    1978-01-01

    Two FORTRAN programs using an interactive graphic terminal to generate accuracy and stability plots for given multistep ordinary differential equation (ODE) integrators are described. The first treats the fixed-stepsize linear case with complex variable solutions, and generates plots showing the accuracy and error response of a numerical solution to a step driving function, as well as the linear stability region. The second generates an analog to the stability region for classes of non-linear ODEs, as well as accuracy plots. Both systems can compute method coefficients from a simple specification of the method. Example plots are given.
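
    The kind of stability plot the report describes can be generated with the standard boundary-locus technique. The sketch below draws the linear stability region boundary of the two-step Adams-Bashforth method; it is a generic reconstruction, not the report's FORTRAN code.

```python
# Boundary-locus sketch of the linear stability region of the two-step
# Adams-Bashforth (AB2) method: plot h*lambda where a root of the stability
# polynomial lies on the unit circle.
import numpy as np
import matplotlib.pyplot as plt

theta = np.linspace(0.0, 2.0 * np.pi, 400)
z = np.exp(1j * theta)

rho = z**2 - z            # first characteristic polynomial of AB2
sigma = 1.5 * z - 0.5     # second characteristic polynomial of AB2
h_lambda = rho / sigma    # boundary locus

plt.plot(h_lambda.real, h_lambda.imag)
plt.xlabel(r"Re($h\lambda$)")
plt.ylabel(r"Im($h\lambda$)")
plt.title("AB2 stability region boundary")
plt.axis("equal")
plt.grid(True)
plt.show()
```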

  15. Method for distributed agent-based non-expert simulation of manufacturing process behavior

    DOEpatents

    Ivezic, Nenad; Potok, Thomas E.

    2004-11-30

    A method for distributed agent-based non-expert simulation of manufacturing process behavior on a single-processor computer comprises the steps of: object modeling a manufacturing technique having a plurality of processes; associating a distributed agent with each process; and programming each agent to respond to discrete events corresponding to the manufacturing technique, wherein each discrete event triggers a programmed response. The method can further comprise the step of transmitting the discrete events to each agent in a message loop. In addition, the programming step comprises the step of conditioning each agent to respond to a discrete event selected from the group consisting of a clock tick message, a resources received message, and a request for output production message.
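
    A toy rendering of the claimed message loop, with illustrative agent and event names that are not taken from the patent:

```python
# Toy sketch of the message-loop idea: each process agent reacts to discrete
# events dispatched sequentially on a single processor (names illustrative).
class ProcessAgent:
    def __init__(self, name):
        self.name, self.inventory = name, 0

    def on_event(self, event):
        # programmed responses to the three event kinds named in the claim
        if event == "clock_tick":
            pass                      # advance internal state per time step
        elif event == "resources_received":
            self.inventory += 1
        elif event == "request_output":
            if self.inventory > 0:
                self.inventory -= 1
                return f"{self.name}: produced one unit"
        return None

agents = [ProcessAgent("cutting"), ProcessAgent("welding")]
for event in ["resources_received", "clock_tick", "request_output"]:
    for agent in agents:              # the message loop
        result = agent.on_event(event)
        if result:
            print(result)
```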

  16. A Chip and Pixel Qualification Methodology on Imaging Sensors

    NASA Technical Reports Server (NTRS)

    Chen, Yuan; Guertin, Steven M.; Petkov, Mihail; Nguyen, Duc N.; Novak, Frank

    2004-01-01

    This paper presents a qualification methodology for imaging sensors. In addition to overall chip reliability characterization based on the sensor's overall figures of merit, such as Dark Rate, Linearity, Dark Current Non-Uniformity, Fixed Pattern Noise and Photon Response Non-Uniformity, a simulation technique is proposed and used to project pixel reliability. The projected pixel reliability is directly related to imaging quality and provides additional sensor reliability information and performance control.

  17. Analysis of Slope Limiters on Irregular Grids

    NASA Technical Reports Server (NTRS)

    Berger, Marsha; Aftosmis, Michael J.

    2005-01-01

    This paper examines the behavior of flux and slope limiters on non-uniform grids in multiple dimensions. Many slope limiters in standard use do not preserve linear solutions on irregular grids, impacting both accuracy and convergence. We rewrite some well-known limiters to highlight their underlying symmetry, and use this form to examine the properties of both traditional and novel limiter formulations on non-uniform meshes. A consistent method of handling stretched meshes is developed which preserves linearity for arbitrary mesh stretchings and reduces to common limiters on uniform meshes. In multiple dimensions we analyze the monotonicity region of the gradient vector and show that the multidimensional limiting problem may be cast as the solution of a linear programming problem. For some special cases we present a new directional limiting formulation that preserves linear solutions in multiple dimensions on irregular grids. Computational results using model problems and complex three-dimensional examples are presented, demonstrating accuracy, monotonicity and robustness.
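
    As a concrete instance of the limiting discussed above, the sketch below applies the classic minmod limiter on an irregular 1-D grid and checks that it preserves a linear field; this is a textbook construction, not one of the paper's specific formulations.

```python
# Minmod slope limiter on a non-uniform 1-D grid; a linearity-preserving
# limiter must return the exact slope of a linear field.
import numpy as np

def minmod(a, b):
    return np.where(a * b > 0.0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)),
                    0.0)

x = np.array([0.0, 0.3, 0.5, 1.0, 1.8])    # irregular cell centers
u = 2.0 * x + 1.0                           # a linear field with slope 2

fwd = (u[2:] - u[1:-1]) / (x[2:] - x[1:-1]) # one-sided difference slopes
bwd = (u[1:-1] - u[:-2]) / (x[1:-1] - x[:-2])
slopes = minmod(fwd, bwd)

print(slopes)   # prints 2.0 at every interior point: linearity preserved
```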

  18. Generalizations of Tikhonov's regularized method of least squares to non-Euclidean vector norms

    NASA Astrophysics Data System (ADS)

    Volkov, V. V.; Erokhin, V. I.; Kakaev, V. V.; Onufrei, A. Yu.

    2017-09-01

    Tikhonov's regularized method of least squares and its generalizations to non-Euclidean norms, including polyhedral, are considered. The regularized method of least squares is reduced to mathematical programming problems obtained by "instrumental" generalizations of the Tikhonov lemma on the minimal (in a certain norm) solution of a system of linear algebraic equations with respect to an unknown matrix. Further studies are needed for problems concerning the development of methods and algorithms for solving reduced mathematical programming problems in which the objective functions and admissible domains are constructed using polyhedral vector norms.
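
    For reference, the classical Euclidean case reduces to a closed-form linear solve, whereas the polyhedral-norm generalizations the paper studies lead instead to linear or quadratic programs. A minimal sketch of the Euclidean case, with illustrative data:

```python
# Classical (Euclidean) Tikhonov regularization in closed form:
# minimize ||A x - b||_2^2 + lam * ||x||_2^2.
import numpy as np

def tikhonov(A, b, lam):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

A = np.array([[1.0, 1.0],
              [1.0, 1.0001],
              [0.0, 1.0]])      # nearly rank-deficient design matrix
b = np.array([2.0, 2.0001, 1.0])
print(tikhonov(A, b, lam=1e-6)) # small lam stabilizes the ill-conditioning
```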

  19. Temporal Gain Correction for X-Ray Calorimeter Spectrometers

    NASA Technical Reports Server (NTRS)

    Porter, F. S.; Chiao, M. P.; Eckart, M. E.; Fujimoto, R.; Ishisaki, Y.; Kelley, R. L.; Kilbourne, C. A.; Leutenegger, M. A.; McCammon, D.; Mitsuda, K.

    2016-01-01

    Calorimetric X-ray detectors are very sensitive to their environment. The boundary conditions can have a profound effect on the gain, including the heat sink temperature, the local radiation temperature, the bias, and the temperature of the readout electronics. Any variation in the boundary conditions can cause temporal variations in the gain of the detector and compromise both the energy scale and the resolving power of the spectrometer. Most production X-ray calorimeter spectrometers, both on the ground and in space, have some means of tracking the gain as a function of time, often using a calibration spectral line. For small gain changes, a linear stretch correction is often sufficient. However, the detectors are intrinsically non-linear, and often the event analysis, i.e., shaping, optimal filters etc., adds additional non-linearity. Thus, for large gain variations or when the best possible precision is required, a linear stretch correction is not sufficient. Here, we discuss a new correction technique based on non-linear interpolation of the energy-scale functions. Using Astro-H SXS calibration data, we demonstrate that the correction can recover the X-ray energy to better than 1 part in 10⁴ over the entire spectral band up to above 12 keV, even for large-scale gain variations. This method will be used to correct any temporal drift of the on-orbit per-pixel gain using on-board calibration sources for the SXS instrument on the Astro-H observatory.

  20. Time history solution program, L225 (TEV126). Volume 1: Engineering and usage

    NASA Technical Reports Server (NTRS)

    Kroll, R. I.; Tornallyay, A.; Clemmons, R. E.

    1979-01-01

    Volume 1 of a two volume document is presented. The usage of the convolution program L225 (TEV 126) is described. The program calculates the time response of a linear system by convoluting the impulsive response function with the time-dependent excitation function. The convolution is performed as a multiplication in the frequency domain. Fast Fourier transform techniques are used to transform the product back into the time domain to obtain response time histories. A brief description of the analysis used is presented.
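
    The program's core operation, convolution performed as a multiplication in the frequency domain, can be sketched generically in a few lines of NumPy (toy signals, not the L225 code):

```python
# Convolve an impulsive response with an excitation by multiplying in the
# frequency domain, then transform the product back to the time domain.
import numpy as np

def fft_convolve(h, f):
    n = len(h) + len(f) - 1                  # full linear-convolution length
    H = np.fft.rfft(h, n)                    # zero-padded transforms
    F = np.fft.rfft(f, n)
    return np.fft.irfft(H * F, n)            # response time history

h = np.exp(-np.linspace(0.0, 5.0, 64))       # toy impulsive response
f = np.sin(np.linspace(0.0, 2.0 * np.pi, 64))  # toy excitation
y = fft_convolve(h, f)
assert np.allclose(y, np.convolve(h, f))     # matches direct convolution
```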

  1. Flight instrumentation specification for parameter identification: Program user's guide. [instrument errors/error analysis

    NASA Technical Reports Server (NTRS)

    Mohr, R. L.

    1975-01-01

    A set of four digital computer programs is presented which can be used to investigate the effects of instrumentation errors on the accuracy of aircraft and helicopter stability-and-control derivatives identified from flight test data. The programs assume that the differential equations of motion are linear and consist of small perturbations about a quasi-steady flight condition. It is also assumed that a Newton-Raphson optimization technique is used for identifying the estimates of the parameters. Flow charts and printouts are included.

  2. Fluence map optimization (FMO) with dose-volume constraints in IMRT using the geometric distance sorting method.

    PubMed

    Lan, Yihua; Li, Cunhua; Ren, Haozheng; Zhang, Yong; Min, Zhifang

    2012-10-21

    A new heuristic algorithm based on the so-called geometric distance sorting technique is proposed for solving the fluence map optimization problem with dose-volume constraints, one of the most essential tasks for inverse planning in IMRT. The framework of the proposed method is basically an iterative process which begins with a simple linearly constrained quadratic optimization model without considering any dose-volume constraints; the dose constraints for the voxels violating the dose-volume constraints are then gradually added into the quadratic optimization model step by step until all the dose-volume constraints are satisfied. In each iteration step, an interior point method is adopted to solve each new linearly constrained quadratic program. To choose the proper candidate voxels for the current constraint-adding step, a so-called geometric distance, defined in the transformed standard quadratic form of the fluence map optimization model, is used to guide the selection of the voxels. The new geometric distance sorting technique mostly reduces the unexpected increase of the objective function value that is inevitably caused by the constraint adding, and can be regarded as an upgrade of the traditional dose sorting technique. A geometric explanation for the proposed method is given, and a proposition is proved to support the heuristic idea. In addition, a smart constraint adding/deleting strategy is designed to ensure stable iteration convergence. The new algorithm is tested on four cases (head-and-neck, prostate, lung and oropharyngeal) and compared with the algorithm based on the traditional dose sorting technique. Experimental results show that the proposed method is more suitable for guiding the selection of new constraints than the traditional dose sorting method, especially for cases whose target regions have non-convex shapes, and is to some extent a more efficient technique for choosing constraints. By integrating the smart constraint adding/deleting scheme within the iteration framework, the new technique yields an improved algorithm for solving the fluence map optimization problem with dose-volume constraints.

  3. An Examination of the Utility of Non-Linear Dynamics Techniques for Analyzing Human Information Behaviors.

    ERIC Educational Resources Information Center

    Snyder, Herbert; Kurtze, Douglas

    1992-01-01

    Discusses the use of chaos, or nonlinear dynamics, for investigating computer-mediated communication. A comparison between real, human-generated data from a computer network and similarly constructed random-generated data is made, and mathematical procedures for determining chaos are described. (seven references) (LRW)

  4. Stability and Control CFD Investigations of a Generic 53 Degree Swept UCAV Configuration

    NASA Technical Reports Server (NTRS)

    Frink, Neal T.

    2014-01-01

    NATO STO Task Group AVT-201 on "Extended Assessment of Reliable Stability & Control Prediction Methods for NATO Air Vehicles" is studying various computational approaches to predict stability and control parameters for aircraft undergoing non-linear flight conditions. This paper contributes an assessment through correlations with wind tunnel data for the state of aerodynamic predictive capability of time-accurate RANS methodology on the group's focus configuration, a 53deg swept and twisted lambda wing UCAV, undergoing a variety of roll, pitch, and yaw motions. The vehicle aerodynamics is dominated by the complex non-linear physics of round leading-edge vortex flow separation. Correlations with experimental data are made for static longitudinal/lateral sweeps, and at varying frequencies of prescribed roll/pitch/yaw sinusoidal motion for the vehicle operating with and without control surfaces. The data and the derived understanding should prove useful to the AVT-201 team and other researchers who are developing techniques for augmenting flight simulation models from low-speed CFD predictions of aircraft traversing non-linear regions of a flight envelope.

  5. Structural characterization and observation of variable range hopping conduction mechanism at high temperature in CdSe quantum dot solids

    NASA Astrophysics Data System (ADS)

    Sinha, Subhojyoti; Kumar Chatterjee, Sanat; Ghosh, Jiten; Kumar Meikap, Ajit

    2013-03-01

    We have used Rietveld refinement technique to extract the microstructural parameters of thioglycolic acid capped CdSe quantum dots. The quantum dot formation and its efficient capping are further confirmed by HR-TEM, UV-visible and FT-IR spectroscopy. Comparative study of the variation of dc conductivity with temperature (298 K ≤ T ≤ 460 K) is given considering Arrhenius formalism, small polaron hopping and Schnakenberg model. We observe that only Schnakenberg model provides good fit to the non-linear region of the variation of dc conductivity with temperature. Experimental variation of ac conductivity and dielectric parameters with temperature (298 K ≤ T ≤ 460 K) and frequency (80 Hz ≤ f ≤ 2 MHz) are discussed in the light of hopping theory and quantum confinement effect. We have elucidated the observed non-linearity in the I-V curves (measured within ±50 V), at dark and at ambient light, in view of tunneling mechanism. Tunnel exponents and non-linearity weight factors have also been evaluated in this regard.

  6. Methods, Software and Tools for Three Numerical Applications. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    E. R. Jessup

    2000-03-01

    This is a report of the results of the authors' work supported by DOE contract DE-FG03-97ER25325. They proposed to study three numerical problems: (1) the extension of the PMESC parallel programming library; (2) the development of algorithms and software for certain generalized eigenvalue and singular value decomposition (SVD) problems; and (3) the application of techniques of linear algebra to an information retrieval technique known as latent semantic indexing (LSI).
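
    Item (3), latent semantic indexing, rests on a truncated SVD of the term-document matrix; a minimal sketch with an illustrative four-document corpus:

```python
# Latent semantic indexing in miniature: truncate the SVD of a term-document
# matrix and compare documents in the reduced concept space.
import numpy as np

# term-document count matrix (rows: terms, columns: documents)
A = np.array([[2, 0, 1, 0],
              [1, 1, 0, 0],
              [0, 2, 0, 1],
              [0, 0, 1, 2]], dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                        # number of latent concepts kept
doc_vectors = (np.diag(s[:k]) @ Vt[:k]).T    # documents in concept space

# cosine similarity between documents 0 and 2 in the reduced space
d0, d2 = doc_vectors[0], doc_vectors[2]
print(d0 @ d2 / (np.linalg.norm(d0) * np.linalg.norm(d2)))
```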

  7. Status of the Monte Carlo library least-squares (MCLLS) approach for non-linear radiation analyzer problems

    NASA Astrophysics Data System (ADS)

    Gardner, Robin P.; Xu, Libai

    2009-10-01

    The Center for Engineering Applications of Radioisotopes (CEAR) has been working for over a decade on the Monte Carlo library least-squares (MCLLS) approach for treating non-linear radiation analyzer problems including: (1) prompt gamma-ray neutron activation analysis (PGNAA) for bulk analysis, (2) energy-dispersive X-ray fluorescence (EDXRF) analyzers, and (3) carbon/oxygen tool analysis in oil well logging. This approach essentially consists of using Monte Carlo simulation to generate the libraries of all the elements to be analyzed plus any other required background libraries. These libraries are then used in the linear library least-squares (LLS) approach with unknown sample spectra to analyze for all elements in the sample. Iterations of this are used until the LLS values agree with the composition used to generate the libraries. The current status of the methods (and topics) necessary to implement the MCLLS approach is reported. This includes: (1) the Monte Carlo codes such as CEARXRF, CEARCPG, and CEARCO for forward generation of the necessary elemental library spectra for the LLS calculation for X-ray fluorescence, neutron capture prompt gamma-ray analyzers, and carbon/oxygen tools; (2) the correction of spectral pulse pile-up (PPU) distortion by Monte Carlo simulation with the code CEARIPPU; (3) generation of detector response functions (DRF) for detectors with linear and non-linear responses for Monte Carlo simulation of pulse-height spectra; and (4) the use of the differential operator (DO) technique to make the necessary iterations for non-linear responses practical. In addition to commonly analyzed single spectra, coincidence spectra or even two-dimensional (2-D) coincidence spectra can also be used in the MCLLS approach and may provide more accurate results.
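
    The linear library least-squares (LLS) step at the heart of this approach can be sketched as a non-negative least-squares fit of an unknown spectrum against elemental library spectra; the Gaussian "libraries" and amounts below are synthetic stand-ins for Monte Carlo-generated ones.

```python
# Library least-squares in miniature: fit an unknown spectrum as a
# non-negative combination of elemental library spectra (toy data).
import numpy as np
from scipy.optimize import nnls

channels = np.arange(64)

def peak(center):
    return np.exp(-0.5 * ((channels - center) / 3.0) ** 2)

libraries = np.column_stack([peak(15), peak(30), peak(45)])  # one per element
true_amounts = np.array([1.0, 0.4, 2.0])
noise = 0.01 * np.random.default_rng(1).standard_normal(64)
spectrum = libraries @ true_amounts + noise

amounts, residual = nnls(libraries, spectrum)   # the LLS fitting step
print(amounts)                                  # approx. [1.0, 0.4, 2.0]
```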

  8. Equivalent model construction for a non-linear dynamic system based on an element-wise stiffness evaluation procedure and reduced analysis of the equivalent system

    NASA Astrophysics Data System (ADS)

    Kim, Euiyoung; Cho, Maenghyo

    2017-11-01

    In most non-linear analyses, the construction of a system matrix uses a large amount of computation time, comparable to the computation time required by the solving process. If the process for computing non-linear internal force matrices is substituted with an effective equivalent model that enables the bypass of numerical integrations and assembly processes used in matrix construction, efficiency can be greatly enhanced. A stiffness evaluation procedure (STEP) establishes non-linear internal force models using polynomial formulations of displacements. To efficiently identify an equivalent model, the method has evolved such that it is based on a reduced-order system. The reduction process, however, makes the equivalent model difficult to parameterize, which significantly affects the efficiency of the optimization process. In this paper, therefore, a new STEP, E-STEP, is proposed. Based on the element-wise nature of the finite element model, the stiffness evaluation is carried out element-by-element in the full domain. Since the unit of computation for the stiffness evaluation is restricted by element size, and since the computation is independent, the equivalent model can be constructed efficiently in parallel, even in the full domain. Due to the element-wise nature of the construction procedure, the equivalent E-STEP model is easily characterized by design parameters. Various reduced-order modeling techniques can be applied to the equivalent system in a manner similar to how they are applied in the original system. The reduced-order model based on E-STEP is successfully demonstrated for the dynamic analyses of non-linear structural finite element systems under varying design parameters.

  9. A novel post-processing scheme for two-dimensional electrical impedance tomography based on artificial neural networks

    PubMed Central

    2017-01-01

    Objective Electrical Impedance Tomography (EIT) is a powerful non-invasive technique for imaging applications. The goal is to estimate the electrical properties of living tissues by measuring the potential at the boundary of the domain. Being safe with respect to patient health, non-invasive, and having no known hazards, EIT is an attractive and promising technology. However, it suffers from a particular technical difficulty, which consists of solving a nonlinear inverse problem in real time. Several nonlinear approaches have been proposed as a replacement for the linear solver, but in practice very few are capable of stable, high-quality, and real-time EIT imaging because of their very low robustness to errors and inaccurate modeling, or because they require considerable computational effort. Methods In this paper, a post-processing technique based on an artificial neural network (ANN) is proposed to obtain a nonlinear solution to the inverse problem, starting from a linear solution. While common reconstruction methods based on ANNs estimate the solution directly from the measured data, the method proposed here enhances the solution obtained from a linear solver. Conclusion Applying a linear reconstruction algorithm before applying an ANN reduces the effects of noise and modeling errors. Hence, this approach significantly reduces the error associated with solving 2D inverse problems using machine-learning-based algorithms. Significance This work presents radical enhancements in the stability of nonlinear methods for biomedical EIT applications. PMID:29206856

  10. A diffuse-interface method for two-phase flows with soluble surfactants

    PubMed Central

    Teigen, Knut Erik; Song, Peng; Lowengrub, John; Voigt, Axel

    2010-01-01

    A method is presented to solve two-phase problems involving soluble surfactants. The incompressible Navier–Stokes equations are solved along with equations for the bulk and interfacial surfactant concentrations. A non-linear equation of state is used to relate the surface tension to the interfacial surfactant concentration. The method is based on the use of a diffuse interface, which allows a simple implementation using standard finite difference or finite element techniques. Here, finite difference methods on a block-structured adaptive grid are used, and the resulting equations are solved using a non-linear multigrid method. Results are presented for a drop in shear flow in both 2D and 3D, and the effect of solubility is discussed. PMID:21218125

  11. Two-dimensional microsphere quasi-crystal: fabrication and properties

    NASA Astrophysics Data System (ADS)

    Noginova, Natalia E.; Venkateswarlu, Putcha; Kukhtarev, Nickolai V.; Sarkisov, Sergey S.; Noginov, Mikhail A.; Caulfield, H. John; Curley, Michael J.

    1996-11-01

    2D quasi-crystals were fabricated from polystyrene microspheres and characterized for their structural, diffraction, and non-linear optics properties. The quasi-crystals were produced with a method based on the Langmuir-Blodgett thin film technique. Illuminating the crystal with a laser beam, we observed a diffraction pattern in the direction of beam propagation and in the direction of back scattering, similar to the x-ray Laue pattern observed in regular crystals with hexagonal structure. The absorption spectrum of the quasi-crystal demonstrated two series of regular maxima and minima, with the spacing inversely proportional to the microsphere diameter. Illumination of the dye-doped microsphere crystal with Q-switched radiation of a Nd:YAG laser showed an enhancement of non-linear properties, in particular second harmonic generation.

  12. Application of laser scanning technique in earthquake protection of Istanbul's historical heritage buildings

    NASA Astrophysics Data System (ADS)

    Çaktı, Eser; Ercan, Tülay; Dar, Emrullah

    2017-04-01

    Istanbul's vast historical and cultural heritage is under constant threat of earthquakes. Historical records report repeated damages to the city's landmark buildings. Our efforts towards earthquake protection of several buildings in Istanbul involve earthquake monitoring via structural health monitoring systems, linear and non-linear structural modelling and analysis in search of past and future earthquake performance, shake-table testing of scaled models and non-destructive testing. More recently we have been using laser technology in monitoring structural deformations and damage in five monumental buildings which are Hagia Sophia Museum and Fatih, Sultanahmet, Süleymaniye and Mihrimah Sultan Mosques. This presentation is about these efforts with special emphasis on the use of laser scanning in monitoring of edifices.

  13. A Mixed Integer Linear Programming Approach to Electrical Stimulation Optimization Problems.

    PubMed

    Abouelseoud, Gehan; Abouelseoud, Yasmine; Shoukry, Amin; Ismail, Nour; Mekky, Jaidaa

    2018-02-01

    Electrical stimulation optimization is a challenging problem. Even when a single region is targeted for excitation, the problem remains a constrained multi-objective optimization problem. The constrained nature of the problem results from safety concerns while its multi-objectives originate from the requirement that non-targeted regions should remain unaffected. In this paper, we propose a mixed integer linear programming formulation that can successfully address the challenges facing this problem. Moreover, the proposed framework can conclusively check the feasibility of the stimulation goals. This helps researchers to avoid wasting time trying to achieve goals that are impossible under a chosen stimulation setup. The superiority of the proposed framework over alternative methods is demonstrated through simulation examples.

  14. Non-linear aeroelastic prediction for aircraft applications

    NASA Astrophysics Data System (ADS)

    de C. Henshaw, M. J.; Badcock, K. J.; Vio, G. A.; Allen, C. B.; Chamberlain, J.; Kaynes, I.; Dimitriadis, G.; Cooper, J. E.; Woodgate, M. A.; Rampurawala, A. M.; Jones, D.; Fenwick, C.; Gaitonde, A. L.; Taylor, N. V.; Amor, D. S.; Eccles, T. A.; Denley, C. J.

    2007-05-01

    Current industrial practice for the prediction and analysis of flutter relies heavily on linear methods and this has led to overly conservative design and envelope restrictions for aircraft. Although the methods have served the industry well, it is clear that for a number of reasons the inclusion of non-linearity in the mathematical and computational aeroelastic prediction tools is highly desirable. The increase in available and affordable computational resources, together with major advances in algorithms, mean that non-linear aeroelastic tools are now viable within the aircraft design and qualification environment. The Partnership for Unsteady Methods in Aerodynamics (PUMA) Defence and Aerospace Research Partnership (DARP) was sponsored in 2002 to conduct research into non-linear aeroelastic prediction methods and an academic, industry, and government consortium collaborated to address the following objectives: To develop useable methodologies to model and predict non-linear aeroelastic behaviour of complete aircraft. To evaluate the methodologies on real aircraft problems. To investigate the effect of non-linearities on aeroelastic behaviour and to determine which have the greatest effect on the flutter qualification process. These aims have been very effectively met during the course of the programme and the research outputs include: New methods available to industry for use in the flutter prediction process, together with the appropriate coaching of industry engineers. Interesting results in both linear and non-linear aeroelastics, with comprehensive comparison of methods and approaches for challenging problems. Additional embryonic techniques that, with further research, will further improve aeroelastics capability. This paper describes the methods that have been developed and how they are deployable within the industrial environment. We present a thorough review of the PUMA aeroelastics programme together with a comprehensive review of the relevant research in this domain. This is set within the context of a generic industrial process and the requirements of UK and US aeroelastic qualification. A range of test cases, from simple small DOF cases to full aircraft, have been used to evaluate and validate the non-linear methods developed and to make comparison with the linear methods in everyday use. These have focused mainly on aerodynamic non-linearity, although some results for structural non-linearity are also presented. The challenges associated with time domain (coupled computational fluid dynamics-computational structural model (CFD-CSM)) methods have been addressed through the development of grid movement, fluid-structure coupling, and control surface movement technologies. Conclusions regarding the accuracy and computational cost of these are presented. The computational cost of time-domain methods, despite substantial improvements in efficiency, remains high. However, significant advances have been made in reduced order methods, that allow non-linear behaviour to be modelled, but at a cost comparable with that of the regular linear methods. Of particular note is a method based on Hopf bifurcation that has reached an appropriate maturity for deployment on real aircraft configurations, though only limited results are presented herein. Results are also presented for dynamically linearised CFD approaches that hold out the possibility of non-linear results at a fraction of the cost of time coupled CFD-CSM methods. 
Local linearisation approaches (higher order harmonic balance and continuation method) are also presented; these have the advantage that no prior assumption of the nature of the aeroelastic instability is required, but currently these methods are limited to low DOF problems and it is thought that these will not reach a level of maturity appropriate to real aircraft problems for some years to come. Nevertheless, guidance on the most likely approaches has been derived and this forms the basis for ongoing research. It is important to recognise that the aeroelastic design and qualification requires a variety of methods applicable at different stages of the process. The methods reported herein are mapped to the process, so that their applicability and complementarity may be understood. Overall, the programme has provided a suite of methods that allow realistic consideration of non-linearity in the aeroelastic design and qualification of aircraft. Deployment of these methods is underway in the industrial environment, but full realisation of the benefit of these approaches will require appropriate engagement with the standards community so that safety standards may take proper account of the inclusion of non-linearity.

  15. Control of plasma process by use of harmonic frequency components of voltage and current

    DOEpatents

    Miller, Paul A.; Kamon, Mattan

    1994-01-01

    The present invention provides a technique for taking advantage of the intrinsic electrical non-linearity of processing plasmas to add additional control variables that affect process performance. The technique provides for the adjustment of the electrical coupling circuitry, as well as the electrical excitation level, in response to measurements of the reactor voltage and current, and uses that capability to modify the plasma characteristics to obtain the desired performance.

  16. Bayesian Approach to the Joint Inversion of Gravity and Magnetic Data, with Application to the Ismenius Area of Mars

    NASA Technical Reports Server (NTRS)

    Jewell, Jeffrey B.; Raymond, C.; Smrekar, S.; Millbury, C.

    2004-01-01

    This viewgraph presentation reviews a Bayesian approach to the inversion of gravity and magnetic data, with specific application to the Ismenius area of Mars. Many inverse problems encountered in geophysics and planetary science are well known to be non-unique (e.g. inversion of gravity data for the density structure of a body). In hopes of reducing the non-uniqueness of solutions, there has been interest in the joint analysis of data. An example is the joint inversion of gravity and magnetic data, with the assumption that the same physical anomalies generate both the observed magnetic and gravitational anomalies. In this talk, we formulate the joint analysis of different types of data in a Bayesian framework and apply the formalism to the inference of the density and remanent magnetization structure for a local region in the Ismenius area of Mars. The Bayesian approach allows prior information or constraints on the solutions to be incorporated in the inversion, with the "best" solutions those whose forward predictions most closely match the data while remaining consistent with assumed constraints. The application of this framework to the inversion of gravity and magnetic data on Mars reveals two typical challenges: the forward predictions of the data have a linear dependence on some of the quantities of interest, and a non-linear dependence on others (termed the "linear" and "non-linear" variables, respectively). For observations with Gaussian noise, a Bayesian approach to inversion for "linear" variables reduces to a linear filtering problem, with an explicitly computable "error" matrix. However, for models whose forward predictions have non-linear dependencies, inference is no longer given by such a simple linear problem, and moreover, the uncertainty in the solution is no longer completely specified by a computable "error matrix". It is therefore important to develop methods for sampling from the full Bayesian posterior to provide a complete and statistically consistent picture of model uncertainty and of what has been learned from observations. We will discuss advanced numerical techniques, including Monte Carlo Markov Chain (MCMC) sampling.

  17. Synthesis of Silane and Silicon in a Non-equilibrium Plasma Jet

    NASA Technical Reports Server (NTRS)

    Calcote, H. F.

    1978-01-01

    The original objective of this program was to determine the feasibility of high-volume, low-cost production of high-purity silane or solar cell grade silicon using a non-equilibrium plasma jet. The emphasis was changed near the end of the program to determine the feasibility of preparing photovoltaic amorphous silicon films directly using this method. The non-equilibrium plasma jet should be further evaluated as a technique for producing high efficiency photovoltaic amorphous silicon films.

  18. An implementation of the look-ahead Lanczos algorithm for non-Hermitian matrices, part 2

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.; Nachtigal, Noel M.

    1990-01-01

    It is shown how the look-ahead Lanczos process (combined with a quasi-minimal residual (QMR) approach) can be used to develop a robust black box solver for large sparse non-Hermitian linear systems. Details of an implementation of the resulting QMR algorithm are presented. It is demonstrated that the QMR method is closely related to the biconjugate gradient (BCG) algorithm; however, unlike BCG, the QMR algorithm has smooth convergence curves and good numerical properties. We report numerical experiments with our implementation of the look-ahead Lanczos algorithm, both for eigenvalue problems and linear systems. Also, program listings of FORTRAN implementations of the look-ahead algorithm and the QMR method are included.
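
    SciPy exposes a QMR solver of the same family as the FORTRAN implementation described above; a minimal sketch on a small non-symmetric sparse system (illustrative matrix):

```python
# Solve a small non-symmetric sparse system with SciPy's QMR solver.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import qmr

n = 100
# non-symmetric, diagonally dominant tridiagonal matrix
A = diags([-1.0, 2.0, -0.5], offsets=[-1, 0, 1], shape=(n, n)).tocsc()
b = np.ones(n)

x, info = qmr(A, b)                      # info == 0 signals convergence
print(info, np.linalg.norm(A @ x - b))   # residual at the default tolerance
```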

  19. The design and analysis of simple low speed flap systems with the aid of linearized theory computer programs

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.

    1985-01-01

    The purpose here is to show how two linearized theory computer programs in combination may be used for the design of low speed wing flap systems capable of high levels of aerodynamic efficiency. A fundamental premise of the study is that high levels of aerodynamic performance for flap systems can be achieved only if the flow about the wing remains predominantly attached. Based on this premise, a wing design program is used to provide idealized attached flow camber surfaces from which candidate flap systems may be derived, and, in a following step, a wing evaluation program is used to provide estimates of the aerodynamic performance of the candidate systems. Design strategies and techniques that may be employed are illustrated through a series of examples. Applicability of the numerical methods to the analysis of a representative flap system (although not a system designed by the process described here) is demonstrated in a comparison with experimental data.

  20. Interactive algebraic grid-generation technique

    NASA Technical Reports Server (NTRS)

    Smith, R. E.; Wiese, M. R.

    1986-01-01

    An algebraic grid generation technique and use of an associated interactive computer program are described. The technique, called the two-boundary technique, is based on Hermite cubic interpolation between two fixed, nonintersecting boundaries. The boundaries are referred to as the bottom and top, and they are defined by two ordered sets of points. Left and right side boundaries which intersect the bottom and top boundaries may also be specified by two ordered sets of points. When side boundaries are specified, linear blending functions are used to conform interior interpolation to the side boundaries. Spacing between physical grid coordinates is determined as a function of boundary data and uniformly spaced computational coordinates. Control functions relating computational coordinates to parametric intermediate variables that affect the distance between grid points are embedded in the interpolation formulas. A versatile control function technique with smooth cubic spline functions is presented. The technique works best in an interactive graphics environment where computational displays and user responses are quickly exchanged. An interactive computer program based on the technique, called TBGG (two-boundary grid generation), is also described.
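
    A stripped-down version of the two-boundary idea, using a cubic Hermite blend with the derivative terms dropped (zero end slopes) between synthetic bottom and top curves; the geometry is illustrative, not from the TBGG program:

```python
# Cubic Hermite blend between a bottom and a top boundary curve; the basis
# functions satisfy H0 + H1 = 1, so both boundaries are matched exactly.
import numpy as np

xi = np.linspace(0.0, 1.0, 21)    # computational coordinate along boundaries
eta = np.linspace(0.0, 1.0, 11)   # computational coordinate between them

bottom = np.stack([xi, 0.1 * np.sin(np.pi * xi)], axis=1)  # bottom boundary
top = np.stack([xi, np.ones_like(xi)], axis=1)             # top boundary

H0 = 2 * eta**3 - 3 * eta**2 + 1  # Hermite cubic basis (zero end slopes)
H1 = -2 * eta**3 + 3 * eta**2

grid = (H0[None, :, None] * bottom[:, None, :]
        + H1[None, :, None] * top[:, None, :])
print(grid.shape)  # (21, 11, 2): physical (x, y) at each (xi, eta) node
```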

  1. Dynamic output feedback control of a flexible air-breathing hypersonic vehicle via T-S fuzzy approach

    NASA Astrophysics Data System (ADS)

    Hu, Xiaoxiang; Wu, Ligang; Hu, Changhua; Wang, Zhaoqiang; Gao, Huijun

    2014-08-01

    By utilising the Takagi-Sugeno (T-S) fuzzy set approach, this paper addresses robust H∞ dynamic output feedback control for the non-linear longitudinal model of flexible air-breathing hypersonic vehicles (FAHVs). The flight control of FAHVs is highly challenging due to their unique dynamic characteristics, the intricate couplings between the engine and flight dynamics, and external disturbance. Because of the dynamics' enormous complexity, currently only the longitudinal dynamics models of FAHVs have been used for controller design. In this work, the T-S fuzzy modelling technique is utilised to approximate the non-linear dynamics of FAHVs, and a fuzzy model is developed for the output tracking problem of FAHVs. The fuzzy model contains parameter uncertainties and disturbance, so it can approximate the non-linear dynamics of FAHVs more accurately. The flexible modes of FAHVs are difficult to measure because of the complex dynamics and the strong couplings, thus a full-order dynamic output feedback controller is designed for the fuzzy model. A robust H∞ controller is designed for the obtained closed-loop system. By utilising the Lyapunov functional approach, sufficient solvability conditions for such controllers are established in terms of linear matrix inequalities. Finally, the effectiveness of the proposed T-S fuzzy dynamic output feedback control method is demonstrated by numerical simulations.

  2. Probabilistic Structural Analysis Program

    NASA Technical Reports Server (NTRS)

    Pai, Shantaram S.; Chamis, Christos C.; Murthy, Pappu L. N.; Stefko, George L.; Riha, David S.; Thacker, Ben H.; Nagpal, Vinod K.; Mital, Subodh K.

    2010-01-01

    NASA/NESSUS 6.2c is a general-purpose, probabilistic analysis program that computes probability of failure and probabilistic sensitivity measures of engineered systems. Because NASA/NESSUS uses highly computationally efficient and accurate analysis techniques, probabilistic solutions can be obtained even for extremely large and complex models. Once the probabilistic response is quantified, the results can be used to support risk-informed decisions regarding reliability for safety-critical and one-of-a-kind systems, as well as for maintaining a level of quality while reducing manufacturing costs for larger-quantity products. NASA/NESSUS has been successfully applied to a diverse range of problems in aerospace, gas turbine engines, biomechanics, pipelines, defense, weaponry, and infrastructure. This program combines state-of-the-art probabilistic algorithms with general-purpose structural analysis and lifting methods to compute the probabilistic response and reliability of engineered structures. Uncertainties in load, material properties, geometry, boundary conditions, and initial conditions can be simulated. The structural analysis methods include non-linear finite-element methods, heat-transfer analysis, polymer/ceramic matrix composite analysis, monolithic (conventional metallic) materials life-prediction methodologies, boundary element methods, and user-written subroutines. Several probabilistic algorithms are available such as the advanced mean value method and the adaptive importance sampling method. NASA/NESSUS 6.2c is structured in a modular format with 15 elements.

  3. FlyCap: Markerless Motion Capture Using Multiple Autonomous Flying Cameras.

    PubMed

    Xu, Lan; Liu, Yebin; Cheng, Wei; Guo, Kaiwen; Zhou, Guyue; Dai, Qionghai; Fang, Lu

    2017-07-18

    Aiming at automatic, convenient and non-intrusive motion capture, this paper presents a new-generation markerless motion capture technique, the FlyCap system, which captures surface motions of moving characters using multiple autonomous flying cameras (autonomous unmanned aerial vehicles (UAVs), each integrated with an RGBD video camera). During data capture, three cooperative flying cameras automatically track and follow the moving target, who performs large-scale motions in a wide space. We propose a novel non-rigid surface registration method to track and fuse the depth data of the three flying cameras for surface motion tracking of the moving target, and simultaneously calculate the pose of each flying camera. We leverage the visual-odometry information provided by the UAV platform, and formulate the surface tracking problem as a non-linear objective function that can be linearized and effectively minimized through a Gauss-Newton method. Quantitative and qualitative experimental results demonstrate plausible surface and motion reconstruction results.

  4. Chemical reactions simulated by ground-water-quality models

    USGS Publications Warehouse

    Grove, David B.; Stollenwerk, Kenneth G.

    1987-01-01

    Recent literature concerning the modeling of chemical reactions during transport in ground water is examined with emphasis on sorption reactions. The theory of transport and reactions in porous media has been well documented. Numerous equations have been developed from this theory, to provide both continuous and sequential or multistep models, with the water phase considered for both mobile and immobile phases. Chemical reactions can be either equilibrium or non-equilibrium, and can be quantified in linear or non-linear mathematical forms. Non-equilibrium reactions can be separated into kinetic and diffusional rate-limiting mechanisms. Solutions to the equations are available by either analytical expressions or numerical techniques. Saturated and unsaturated batch, column, and field studies are discussed with one-dimensional, laboratory-column experiments predominating. A summary table is presented that references the various kinds of models studied and their applications in predicting chemical concentrations in ground waters.

  5. An electron beam linear scanning mode for industrial limited-angle nano-computed tomography.

    PubMed

    Wang, Chengxiang; Zeng, Li; Yu, Wei; Zhang, Lingli; Guo, Yumeng; Gong, Changcheng

    2018-01-01

    Nano-computed tomography (nano-CT), which utilizes X-rays to study the inner structure of small objects and has been widely applied in biomedical research, electronic technology, geology, material sciences, etc., is a high-spatial-resolution and non-destructive research technique. High resolution imaging traditionally requires a nano-CT scanning mode with very high mechanical precision and stability of the object manipulator, which is difficult to achieve when the scanned object is continuously rotated. To reduce the scanning time and attain stable, high resolution imaging in industrial non-destructive testing, we study an electron beam linear scanning mode for a nano-CT system that avoids the mechanical vibration and object movement caused by continuously rotating the object. Furthermore, to further reduce the scanning time and study how small the scanning range can be while retaining acceptable spatial resolution, an alternating iterative algorithm based on ℓ0 minimization is applied to the limited-angle nano-CT reconstruction problem with the electron beam linear scanning mode. The experimental results confirm the feasibility of the electron beam linear scanning mode of the nano-CT system.

  6. An electron beam linear scanning mode for industrial limited-angle nano-computed tomography

    NASA Astrophysics Data System (ADS)

    Wang, Chengxiang; Zeng, Li; Yu, Wei; Zhang, Lingli; Guo, Yumeng; Gong, Changcheng

    2018-01-01

    Nano-computed tomography (nano-CT), which utilizes X-rays to study the inner structure of small objects and has been widely applied in biomedical research, electronic technology, geology, material sciences, etc., is a high-spatial-resolution and non-destructive research technique. High resolution imaging traditionally requires a nano-CT scanning mode with very high mechanical precision and stability of the object manipulator, which is difficult to achieve when the scanned object is continuously rotated. To reduce the scanning time and attain stable, high resolution imaging in industrial non-destructive testing, we study an electron beam linear scanning mode for a nano-CT system that avoids the mechanical vibration and object movement caused by continuously rotating the object. Furthermore, to further reduce the scanning time and study how small the scanning range can be while retaining acceptable spatial resolution, an alternating iterative algorithm based on ℓ0 minimization is applied to the limited-angle nano-CT reconstruction problem with the electron beam linear scanning mode. The experimental results confirm the feasibility of the electron beam linear scanning mode of the nano-CT system.

  7. Optimal design of neural stimulation current waveforms.

    PubMed

    Halpern, Mark

    2009-01-01

    This paper contains results on the design of electrical signals for delivering charge through electrodes to achieve neural stimulation. A generalization of the usual constant current stimulation phase to a stepped current waveform is presented. The electrode current design is then formulated as the calculation of the current step sizes to minimize the peak electrode voltage while delivering a specified charge in a given number of time steps. This design problem can be formulated as a finite linear program, or alternatively by using techniques for discrete-time linear system design.
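
    The finite linear program mentioned above can be sketched directly. Assuming a simple series RC electrode model (an assumption of this sketch, with illustrative values for R, C, the target charge and the step count), the step currents that minimize the peak electrode voltage subject to a delivered-charge constraint are:

```python
# Stepped-current waveform design as a finite LP: minimize the peak electrode
# voltage t while delivering charge Q in N steps, for a series RC electrode.
import numpy as np
from scipy.optimize import linprog

N, dt = 10, 1e-4            # number of steps, step duration (s)
R, C, Q = 1e3, 1e-6, 1e-6   # resistance, capacitance, target charge (SI)

# variables: [i_0 ... i_{N-1}, t]; objective: minimize t
c = np.zeros(N + 1)
c[-1] = 1.0

# voltage at end of step k: R*i_k + (dt/C) * sum_{j<=k} i_j <= t
A_ub = np.zeros((N, N + 1))
for k in range(N):
    A_ub[k, :k + 1] = dt / C
    A_ub[k, k] += R
    A_ub[k, -1] = -1.0
b_ub = np.zeros(N)

A_eq = np.zeros((1, N + 1))
A_eq[0, :N] = dt            # total delivered charge equals Q
b_eq = np.array([Q])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * N + [(None, None)])
print(res.x[:N])            # the optimal current staircase (A)
```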

  8. Numerical treatment for Carreau nanofluid flow over a porous nonlinear stretching surface

    NASA Astrophysics Data System (ADS)

    Eid, Mohamed R.; Mahny, Kasseb L.; Muhammad, Taseer; Sheikholeslami, Mohsen

    2018-03-01

    The impact of magnetic field and nanoparticles on the two-phase flow of a generalized non-Newtonian Carreau fluid over a permeable non-linearly stretching surface has been analyzed in the presence of suction/injection and thermal radiation. The governing PDEs with the corresponding boundary conditions are transformed into a system of non-linear ODEs with appropriate boundary conditions by using a similarity transformation, and solved numerically by the fourth-fifth order Runge-Kutta-Fehlberg method based on a shooting technique. The impacts of the non-dimensional controlling parameters on the velocity, temperature, and nanoparticle volume concentration profiles are scrutinized with the aid of graphs. The Nusselt and Sherwood numbers are studied for different settings of the governing parameters. The numerical computations are in excellent agreement with previously reported studies. It is found that the heat transfer rate is reduced with an increment of the thermal radiation parameter and, on the contrary, grows with a rising magnetic field. The opposite trend occurs in the mass transfer rate.
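
    The shooting procedure itself is generic and can be sketched on a much simpler two-point boundary value problem than the Carreau-fluid equations: here y'' + y = 0 with y(0) = 0 and y(1) = 1, integrated with SciPy's adaptive RK45 and bracketed root finding on the initial slope.

```python
# Shooting method with an adaptive Runge-Kutta integrator on a toy BVP:
# y'' + y = 0, y(0) = 0, y(1) = 1, whose exact solution is sin(x)/sin(1).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def rhs(x, state):
    y, yp = state
    return [yp, -y]

def miss(slope):
    """Residual at x = 1 when integrating with initial slope y'(0) = slope."""
    sol = solve_ivp(rhs, (0.0, 1.0), [0.0, slope], method="RK45", rtol=1e-8)
    return sol.y[0, -1] - 1.0

slope = brentq(miss, 0.1, 5.0)     # root-find the correct initial slope
print(slope, 1.0 / np.sin(1.0))    # matches the exact value 1/sin(1)
```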

  9. Linear and Nonlinear Optical Properties of Spherical Quantum Dots: Effects of Hydrogenic Impurity and Conduction Band Non-Parabolicity

    NASA Astrophysics Data System (ADS)

    Rezaei, G.; Vaseghi, B.; Doostimotlagh, N. A.

    2012-03-01

    Simultaneous effects of an on-center hydrogenic impurity and band edge non-parabolicity on the intersubband optical absorption coefficients and refractive index changes of a typical GaAs/AlxGa1-xAs spherical quantum dot are theoretically investigated using the Luttinger-Kohn effective mass equation. The electronic structure and optical properties of the system are studied by means of the matrix diagonalization technique and the compact density matrix approach, respectively. Finally, effects of the impurity, band edge non-parabolicity, incident light intensity and dot size on the linear, third-order nonlinear and total optical absorption coefficients and refractive index changes are investigated. Our results indicate that the magnitudes of these optical quantities increase and their peaks shift to higher energies as the influences of the impurity and the band edge non-parabolicity are considered. Moreover, the incident light intensity and the dot size have considerable effects on the optical absorption coefficients and refractive index changes.

  10. High-order Newton-penalty algorithms

    NASA Astrophysics Data System (ADS)

    Dussault, Jean-Pierre

    2005-10-01

    Recent efforts in differentiable non-linear programming have been focused on interior point methods, akin to penalty and barrier algorithms. In this paper, we address the classical equality constrained program solved using the simple quadratic loss penalty function/algorithm. The suggestion to use extrapolations to track the differentiable trajectory associated with penalized subproblems goes back to the classic monograph of Fiacco & McCormick. This idea was further developed by Gould, who obtained a two-step quadratically convergent algorithm using prediction steps and Newton corrections. Dussault interpreted the prediction step as a combined extrapolation with respect to the penalty parameter and the residual of the first-order optimality conditions; extrapolation with respect to the residual coincides with a Newton step. We explore here higher-order extrapolations, and thus higher-order Newton-like methods. We first consider high-order variants of the Newton-Raphson method applied to non-linear systems of equations. Next, we obtain improved asymptotic convergence results for the quadratic loss penalty algorithm by using high-order extrapolation steps.
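
    A minimal sketch of the underlying quadratic loss penalty algorithm on a toy equality-constrained program (the paper's contribution, extrapolating along the penalty trajectory, is not implemented here):

```python
# Quadratic loss penalty method: minimize f(x) subject to c(x) = 0 by
# solving a sequence of unconstrained subproblems with growing penalty.
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2   # objective
c = lambda x: x[0] + x[1] - 1.0                        # equality constraint

x = np.zeros(2)
for mu in [1.0, 10.0, 100.0, 1000.0]:                  # increasing penalty
    penalized = lambda x, mu=mu: f(x) + 0.5 * mu * c(x) ** 2
    x = minimize(penalized, x, method="BFGS").x        # warm-started solve
print(x)   # approaches the constrained minimizer (1.0, 0.0) as mu grows
```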

  11. K[AsW₂O₉], the first member of the arsenate–tungsten bronze family: Synthesis, structure, spectroscopic and non-linear optical properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alekseev, Evgeny V., E-mail: e.alekseev@fz-juelich.de; Institut für Kristallographie, RWTH Aachen, Jägerstraße 17–19 D-52066 Aachen; Felbinger, Olivier

    K[AsW₂O₉], prepared by high-temperature solid-state reaction, is the first member of the arsenate–tungsten bronze family. The structure of K[AsW₂O₉] is based on a three-dimensional (3D) oxotungstate–arsenate framework with the non-centrosymmetric P2₁2₁2₁ space group, a=4.9747(3) Å, b=9.1780(8) Å, c=16.681(2) Å. The material was characterized using X-ray diffraction, scanning electron microscopy (SEM), differential scanning calorimetry (DSC), Raman and infrared (IR) spectroscopic techniques. The results of DSC demonstrate that this phase is stable up to 1076 K. Second harmonic generation (SHG) measurements performed on a powder sample demonstrate noticeable (0.1 of LiIO₃) non-linear optical (NLO) activity. Graphical abstract: K[AsW₂O₉], the first member of the arsenate–tungsten bronze family, exhibits a new three-dimensional structure type, significant thermal stability and NLO properties. Highlights: • K[AsW₂O₉], the first member of the arsenate–tungsten bronze family, was synthesized with a solid-state reaction technique. • The structure of this phase was investigated with X-ray diffraction, IR and Raman spectroscopy and electron microscopy. • The thermal stability of the phase was determined with DSC techniques. • NLO properties were investigated.

  12. The Woodworker's Website: A Project Management Case Study

    ERIC Educational Resources Information Center

    Jance, Marsha

    2014-01-01

    A case study that focuses on building a website for a woodworking business is discussed. Project management and linear programming techniques can be used to determine the time required to complete the website project discussed in the case. This case can be assigned to students in an undergraduate or graduate decision modeling or management science…

  13. Differential Mueller matrix polarimetry technique for non-invasive measurement of glucose concentration on human fingertip.

    PubMed

    Phan, Quoc-Hung; Lo, Yu-Lung

    2017-06-26

    A differential Mueller matrix polarimetry technique is proposed for obtaining non-invasive (NI) measurements of the glucose concentration on the human fingertip. The feasibility of the proposed method is demonstrated by detecting the optical rotation angle and depolarization index of tissue phantom samples containing de-ionized (DI) water, glucose solutions with concentrations ranging from 0 to 500 mg/dL, and 2% Lipofundin. The results show that the extracted optical rotation angle increases linearly with an increasing glucose concentration, while the depolarization index decreases. The practical applicability of the proposed method is demonstrated by measuring the optical rotation angle and depolarization index properties of the human fingertips of healthy volunteers.

  14. Engineering calculations for communications satellite systems planning

    NASA Technical Reports Server (NTRS)

    Reilly, C. H.; Levis, C. A.; Mount-Campbell, C.; Gonsalvez, D. J.; Wang, C. W.; Yamamura, Y.

    1985-01-01

    Computer-based techniques for optimizing communications-satellite orbit and frequency assignments are discussed. A gradient-search code was tested against a BSS scenario derived from the RARC-83 data. Improvement was obtained, but each iteration requires about 50 minutes of IBM-3081 CPU time. Gradient-search experiments on a small FSS test problem, consisting of a single service area served by 8 satellites, showed quickest convergence when the satellites were all initially placed near the center of the available orbital arc with moderate spacing. A transformation technique is proposed for investigating the surface topography of the objective function used in the gradient-search method. A new synthesis approach is based on transforming single-entry interference constraints into corresponding constraints on satellite spacings. These constraints are used with linear objective functions to formulate the co-channel orbital assignment task as a linear-programming (LP) problem or mixed integer programming (MIP) problem. Globally optimal solutions are always found with the MIP problems, but not necessarily with the LP problems. The MIP solutions can be used to evaluate the quality of the LP solutions. The initial results are very encouraging.

  15. The discovery of indicator variables for QSAR using inductive logic programming

    NASA Astrophysics Data System (ADS)

    King, Ross D.; Srinivasan, Ashwin

    1997-11-01

    A central problem in forming accurate regression equations in QSAR studies is the selection of appropriate descriptors for the compounds under study. We describe a novel procedure for using inductive logic programming (ILP) to discover new indicator variables (attributes) for QSAR problems, and show that these improve the accuracy of the derived regression equations. ILP techniques have previously been shown to work well on drug design problems where there is a large structural component or where clear comprehensible rules are required. However, ILP techniques have had the disadvantage of only being able to make qualitative predictions (e.g. active, inactive) and not to predict real numbers (regression). We unify ILP and linear regression techniques to give a QSAR method that has the strength of ILP at describing steric structure, with the familiarity and power of linear regression. We evaluated the utility of this new QSAR technique by examining the prediction of biological activity with and without the addition of new structural indicator variables formed by ILP. In three out of five datasets examined the addition of ILP variables produced statistically better results (P < 0.01) over the original description. The new ILP variables did not increase the overall complexity of the derived QSAR equations and added insight into possible mechanisms of action. We conclude that ILP can aid in the process of drug design.

  16. Progress in multidisciplinary design optimization at NASA Langley

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.

    1993-01-01

    Multidisciplinary Design Optimization refers to some combination of disciplinary analyses, sensitivity analysis, and optimization techniques used to design complex engineering systems. The ultimate objective of this research at NASA Langley Research Center is to help the US industry reduce the costs associated with development, manufacturing, and maintenance of aerospace vehicles while improving system performance. This report reviews progress towards this objective and highlights topics for future research. Aerospace design problems selected from the author's research illustrate strengths and weaknesses in existing multidisciplinary optimization techniques. The techniques discussed include multiobjective optimization, global sensitivity equations and sequential linear programming.

  17. Simple linear and multivariate regression models.

    PubMed

    Rodríguez del Águila, M M; Benítez-Parejo, N

    2011-01-01

    In biomedical research it is common to encounter problems in which we wish to relate a response variable to one or more explanatory variables through a mathematical model. Regression techniques serve this purpose: an equation is determined that relates the variables. Although such equations can take different forms, linear equations are the most widely used and are easy to interpret. The present article describes simple and multiple linear regression models, how they are calculated, and how their applicability assumptions are checked. Illustrative examples are provided, based on the use of the freely accessible R program. Copyright © 2011 SEICAP. Published by Elsevier Espana. All rights reserved.
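
    The article's worked examples use the R program; the following analogous sketch (Python, with invented blood-pressure data) fits a simple linear model and performs a rudimentary residual check of the kind the article describes:

        import numpy as np
        from scipy import stats

        # Invented data: systolic blood pressure (response) versus age.
        age = np.array([25, 32, 41, 47, 52, 58, 63, 70], dtype=float)
        sbp = np.array([118, 121, 127, 130, 135, 140, 146, 151], dtype=float)

        # Simple linear regression: sbp = intercept + slope * age
        fit = stats.linregress(age, sbp)
        print(f"intercept={fit.intercept:.2f}  slope={fit.slope:.3f}  "
              f"r^2={fit.rvalue**2:.3f}  p={fit.pvalue:.2g}")

        # Check of the model assumptions: residuals should be centred on
        # zero and show no pattern against the predictor.
        residuals = sbp - (fit.intercept + fit.slope * age)
        print("residual mean:", round(residuals.mean(), 3))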

  18. Dynamic stability and handling qualities tests on a highly augmented, statically unstable airplane

    NASA Technical Reports Server (NTRS)

    Gera, Joseph; Bosworth, John T.

    1987-01-01

    This paper describes some novel flight tests and analysis techniques in the flight dynamics and handling qualities area. These techniques were utilized during the initial flight envelope clearance of the X-29A aircraft and were largely responsible for the completion of the flight controls clearance program without any incidents or significant delays. The resulting open-loop and closed-loop frequency responses and the time history comparison using flight and linear simulation data are discussed.
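
    As a rough illustration of the kind of comparison described (the model and gain below are invented, not the X-29A's), the sketch builds an assumed statically unstable second-order open-loop model, closes the loop with a stabilizing feedback gain, and evaluates both frequency responses as one would when overlaying flight-extracted data:

        import numpy as np
        from scipy import signal

        # Assumed open-loop short-period model G(s); the positive real pole
        # makes it statically unstable, as for the X-29A class of airframe.
        num = [2.0, 1.0]
        den = [1.0, 0.8, -1.5]
        k = 2.0                         # assumed stabilizing feedback gain

        open_loop = signal.TransferFunction(num, den)
        # Unity-feedback closed loop G/(1 + kG): same numerator,
        # denominator den + k*num.
        closed_loop = signal.TransferFunction(
            num, np.polyadd(den, k * np.asarray(num)))

        w = np.logspace(-1, 2, 200)     # rad/s
        _, mag_ol, _ = signal.bode(open_loop, w)
        _, mag_cl, _ = signal.bode(closed_loop, w)
        print(mag_ol[:3], mag_cl[:3])   # magnitudes (dB) at low frequency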

  19. Parameter estimation supplement to the Mission Analysis Evaluation and Space Trajectory Operations program (MAESTRO)

    NASA Technical Reports Server (NTRS)

    Bjorkman, W. S.; Uphoff, C. W.

    1973-01-01

    This Parameter Estimation Supplement describes the PEST computer program and gives instructions for its use in determining lunar gravitational field coefficients. PEST was developed for the RAE-B lunar orbiting mission as a means of lunar field recovery. The observations processed by PEST are short-arc osculating orbital elements, themselves the end product of an orbit determination process carried out with another program. PEST's end product is a set of harmonic coefficients to be used in long-term prediction of the lunar orbit. PEST employs some novel techniques in its estimation process, notably a square-root batch estimator and linear variational equations in the orbital elements (both osculating and mean) for the measurement sensitivities. The program's capabilities are described, and operating instructions and input/output examples are given. PEST uses MAESTRO routines for its trajectory propagation; the program structure and the subroutines that are not common to MAESTRO are described. Some theoretical background for the estimation process and a derivation of the linear variational equations for the Method 7 elements are included.
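
    The batch least-squares step at the heart of such an estimator reduces to solving the normal equations. A minimal sketch follows (Python; the sensitivity matrix, weights, and coefficient values are all invented, whereas in PEST the sensitivities would come from the linear variational equations):

        import numpy as np

        # m observations of orbital-element residuals, n harmonic coefficients.
        rng = np.random.default_rng(1)
        m, n = 120, 6
        H = rng.normal(size=(m, n))     # assumed sensitivity matrix dY/dC
        truth = np.array([2.0e-4, -1.1e-5, 3.0e-6, -8.0e-7, 5.0e-7, -2.0e-7])
        dy = H @ truth + rng.normal(0.0, 1.0e-6, m)   # observed - computed
        W = np.eye(m)                   # observation weights

        # Normal equations: (H^T W H) dc = H^T W dy
        dc = np.linalg.solve(H.T @ W @ H, H.T @ W @ dy)
        print(dc)                       # estimated coefficient corrections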

  20. Sweet promises: Candy advertising to children and implications for industry self-regulation.

    PubMed

    Harris, Jennifer L; LoDolce, Megan; Dembek, Cathryn; Schwartz, Marlene B

    2015-12-01

    Candy advertising illustrates limitations of the Children's Food and Beverage Advertising Initiative (CFBAI) self-regulatory program to improve food marketing to children. Participating companies pledge not to advertise candy in child-directed media. Yet independent analyses show that children viewed 65% more candy ads on U.S. television in 2011 than in 2007, before CFBAI implementation. The present research corroborates these findings, characterizes the increase, and examines how CFBAI-participating and non-participating companies use child-targeted techniques and media placement to advertise candy on U.S. television. Content analysis identified child-targeted messages and techniques in 2011 television candy ads, and Nielsen data (2008-2011) quantified candy advertising viewed on children's and other types of television programming. Differences between brands are evaluated according to CFBAI status and use of child-targeted techniques in ads. Data were obtained and analyzed in 2013. CFBAI-company non-approved brands represented 65% of candy ads viewed by children in 2011, up from 45% in 2008, and 77% of these ads contained child-targeted techniques. Although CFBAI companies placed only approved-brand ads on children's networks, 31% of the ads children viewed for CFBAI non-approved brands appeared on networks with higher-than-average youth audiences. CFBAI non-participating companies placed child-targeted candy ads primarily on children's networks. Despite CFBAI pledges, companies continue to advertise candy during programming with large youth audiences, using techniques that appeal to children. Both increased CFBAI participation and a more effective definition of "child-directed advertising" are required to reduce children's exposure to targeted advertising for foods that can harm their health. Copyright © 2015 Elsevier Ltd. All rights reserved.
