Sample records for orthogonal decomposition analysis

  1. Limited-memory adaptive snapshot selection for proper orthogonal decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxberry, Geoffrey M.; Kostova-Vassilevska, Tanya; Arrighi, Bill

    2015-04-02

    Reduced order models are useful for accelerating simulations in many-query contexts, such as optimization, uncertainty quantification, and sensitivity analysis. However, offline training of reduced order models can have prohibitively expensive memory and floating-point operation costs in high-performance computing applications, where memory per core is limited. To overcome this limitation for proper orthogonal decomposition, we propose a novel adaptive selection method for snapshots in time that limits offline training costs by selecting snapshots according to an error control mechanism similar to that found in adaptive time-stepping ordinary differential equation solvers. The error estimator used in this work is related to theory bounding the approximation error in time of proper orthogonal decomposition-based reduced order models, and memory usage is minimized by computing the singular value decomposition using a single-pass incremental algorithm. Results for a viscous Burgers’ test problem demonstrate convergence in the limit as the algorithm error tolerances go to zero; in this limit, the full order model is recovered to within discretization error. The resulting method can be used on supercomputers to generate proper orthogonal decomposition-based reduced order models, or as a subroutine within hyperreduction algorithms that require taking snapshots in time, or within greedy algorithms for sampling parameter space.
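
The snapshot selection scheme is the paper's contribution, but the single-pass incremental SVD it relies on can be sketched generically. Below is a minimal Brand-style rank-limited update in NumPy; the function name, tolerance, and truncation policy are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def incremental_pod(snapshots, rank, tol=1e-10):
    """Single-pass, limited-memory POD basis update: snapshots are
    processed one at a time, so the full snapshot matrix is never
    stored; only a rank-limited basis U and singular values s are kept.
    Simplified Brand-style incremental SVD (a sketch, not the paper's
    exact algorithm)."""
    U, s = None, None
    for x in snapshots:
        x = np.asarray(x, dtype=float).reshape(-1, 1)
        if U is None:
            nrm = np.linalg.norm(x)
            U = x / (nrm if nrm > 0 else 1.0)
            s = np.array([nrm])
            continue
        proj = U.T @ x            # coordinates in the current basis
        resid = x - U @ proj      # component orthogonal to the basis
        rnorm = np.linalg.norm(resid)
        if rnorm > tol:
            # grow the basis by the normalized residual direction
            q = resid / rnorm
            K = np.block([[np.diag(s), proj],
                          [np.zeros((1, len(s))), np.array([[rnorm]])]])
            Uk, sk, _ = np.linalg.svd(K)
            U, s = np.hstack([U, q]) @ Uk, sk
        else:
            # new snapshot (numerically) lies in the current span
            K = np.hstack([np.diag(s), proj])
            Uk, sk, _ = np.linalg.svd(K, full_matrices=False)
            U, s = U @ Uk, sk
        U, s = U[:, :rank], s[:rank]   # enforce the memory limit
    return U, s
```

For exactly low-rank data the truncation is lossless, so the recovered singular values match a batch SVD of the full snapshot matrix.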

  2. Effectiveness of Modal Decomposition for Tapping Atomic Force Microscopy Microcantilevers in Liquid Environment.

    PubMed

    Kim, Il Kwang; Lee, Soo Il

    2016-05-01

    The modal decomposition of tapping mode atomic force microscopy microcantilevers in liquid environments was studied experimentally. Microcantilevers with different lengths and stiffnesses and two sample surfaces with different elastic moduli were used in the experiment. The response modes of the microcantilevers were extracted as proper orthogonal modes through proper orthogonal decomposition. Smooth orthogonal decomposition was used to estimate the resonance frequency directly. The effects of the tapping setpoint and the elastic modulus of the sample under test were examined in terms of their multi-mode responses with proper orthogonal modes, proper orthogonal values, smooth orthogonal modes and smooth orthogonal values. Regardless of the stiffness of the microcantilever under test, the first mode was dominant in tapping mode atomic force microscopy under normal operating conditions. However, at lower tapping setpoints, the flexible microcantilever showed modal distortion and noise near the tip when tapping on a hard sample. The stiff microcantilever had a higher mode effect on a soft sample at lower tapping setpoints. Modal decomposition for tapping mode atomic force microscopy can thus be used to estimate the characteristics of samples in liquid environments.

  3. Constrained reduced-order models based on proper orthogonal decomposition

    DOE PAGES

    Reddy, Sohail R.; Freno, Brian Andrew; Cizmas, Paul G. A.; ...

    2017-04-09

    A novel approach is presented to constrain reduced-order models (ROM) based on proper orthogonal decomposition (POD). The Karush–Kuhn–Tucker (KKT) conditions were applied to the traditional reduced-order model to constrain the solution to user-defined bounds. The constrained reduced-order model (C-ROM) was applied and validated against the analytical solution to the first-order wave equation. C-ROM was also applied to the analysis of fluidized beds. Lastly, it was shown that the ROM and C-ROM produced accurate results and that C-ROM was less sensitive to error propagation through time than the ROM.
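
As a toy illustration of the idea (not the paper's C-ROM), bounds on a POD reconstruction can be enforced by solving the inequality-constrained least-squares problem whose optimality conditions are exactly the KKT conditions; here via SciPy's SLSQP solver. The function name and bound handling are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def constrained_pod_coeffs(Phi, u, lo, hi):
    """Fit POD coefficients a minimizing ||Phi a - u||^2 subject to
    lo <= Phi a <= hi (user-defined bounds on the reconstruction).
    SLSQP enforces the KKT conditions of this inequality-constrained
    problem; a toy stand-in for the paper's C-ROM."""
    a0 = Phi.T @ u  # unconstrained Galerkin projection as initial guess
    cons = [
        {"type": "ineq", "fun": lambda a: (Phi @ a) - lo},  # Phi a >= lo
        {"type": "ineq", "fun": lambda a: hi - (Phi @ a)},  # Phi a <= hi
    ]
    res = minimize(lambda a: np.sum((Phi @ a - u) ** 2), a0,
                   constraints=cons, method="SLSQP")
    return res.x
```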

  4. Analysis of Self-Excited Combustion Instabilities Using Decomposition Techniques

    DTIC Science & Technology

    2016-07-05

    Proper orthogonal decomposition and dynamic mode decomposition are evaluated for the study of self-excited longitudinal combustion instabilities in laboratory-scaled single-element gas turbine and rocket ... instabilities (Air Force Base, California 93524; DOI: 10.2514/1.J054557). In addition, the capabilities of the methods to deal with data sets of different spatial extents and temporal resolutions are also evaluated.

  5. Non-linear analytic and coanalytic problems ( L_p-theory, Clifford analysis, examples)

    NASA Astrophysics Data System (ADS)

    Dubinskii, Yu A.; Osipenko, A. S.

    2000-02-01

    Two new kinds of mathematical models of variational type are put forward: non-linear analytic and coanalytic problems. The formulation of these non-linear boundary-value problems is based on a decomposition of the complete scale of Sobolev spaces into the "orthogonal" sum of analytic and coanalytic subspaces. A similar decomposition is considered in the framework of Clifford analysis. Explicit examples are presented.

  6. Analyzing Transient Turbulence in a Stenosed Carotid Artery by Proper Orthogonal Decomposition

    NASA Astrophysics Data System (ADS)

    Grinberg, Leopold; Yakhot, Alexander; Karniadakis, George

    2009-11-01

    High-resolution 3D simulations (involving 100M degrees of freedom) were employed to study transient turbulent flow in a carotid arterial bifurcation with a stenosed internal carotid artery (ICA). The simulations exhibited an intermittent (in space and time) laminar-turbulent-laminar regime and reveal the mechanism of the onset of turbulence in the stenosed ICA, where the narrowing of the artery generates a strong jet flow. Time- and space-window Proper Orthogonal Decomposition (POD) was applied to quantify the different flow regimes in the occluded artery. A simplified version of the POD analysis that uses 2D slices only, more appropriate in the clinical setting, was also investigated.

  7. Model reconstruction using POD method for gray-box fault detection

    NASA Technical Reports Server (NTRS)

    Park, H. G.; Zak, M.

    2003-01-01

    This paper describes using Proper Orthogonal Decomposition (POD) method to create low-order dynamical models for the Model Filter component of Beacon-based Exception Analysis for Multi-missions (BEAM).

  8. Alternative Modal Basis Selection Procedures For Reduced-Order Nonlinear Random Response Simulation

    NASA Technical Reports Server (NTRS)

    Przekop, Adam; Guo, Xinyun; Rizzi, Stephen A.

    2012-01-01

    Three procedures to guide selection of an efficient modal basis in a nonlinear random response analysis are examined. One method is based only on proper orthogonal decomposition, while the other two additionally involve smooth orthogonal decomposition. Acoustic random response problems are employed to assess the performance of the three modal basis selection approaches. A thermally post-buckled beam exhibiting snap-through behavior, a shallowly curved arch in the auto-parametric response regime and a plate structure are used as numerical test articles. The results of a computationally taxing full-order analysis in physical degrees of freedom are taken as the benchmark for comparison with the results from the three reduced-order analyses. For the cases considered, all three methods are shown to produce modal bases resulting in accurate and computationally efficient reduced-order nonlinear simulations.

  9. Use of Proper Orthogonal Decomposition Towards Time-resolved Image Analysis of Sprays

    DTIC Science & Technology

    2011-03-15

    High-speed movies of optically dense sprays exiting a Gas-Centered Swirl Coaxial (GCSC) injector are subjected to image analysis to determine spray ... sequence prior to image analysis. Results of spray morphology, including spray boundary, widths, angles, and boundary oscillation frequencies, are ...

  10. High-speed imaging of submerged jet: visualization analysis using proper orthogonality decomposition

    NASA Astrophysics Data System (ADS)

    Liu, Yingzheng; He, Chuangxin

    2016-11-01

    In the present study, a submerged jet at low Reynolds numbers was visualized using laser-induced fluorescence and high-speed imaging in a water tank. A well-controlled calibration was performed to determine the region of linear dependence of the fluorescence intensity on dye concentration. Subsequently, the jet fluid issuing from a circular pipe was visualized using a high-speed camera. The image sequence of the visualized jet flow field was used for snapshot proper orthogonal decomposition (POD) analysis. Spatio-temporally varying structures superimposed on the unsteady fluid flow, e.g., the axisymmetric mode and the helical mode, were identified from the dominant POD modes. The coefficients of the POD modes give a strong indication of the temporal and spectral features of the corresponding unsteady events. A reconstruction using the time-mean visualization and selected POD modes was conducted to reveal the convective motion of the buried vortical structures. This work was supported by the National Natural Science Foundation of China.
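
The snapshot POD applied to an image sequence like this can be sketched with the standard method of snapshots (eigendecomposition of the temporal correlation matrix). This is a generic NumPy sketch, not the authors' processing chain.

```python
import numpy as np

def snapshot_pod(frames):
    """Snapshot POD of a time sequence of scalar fields (e.g. intensity
    images of a visualized jet). `frames` has shape (n_t, ny, nx).
    Returns spatial modes (columns), temporal coefficients, mode
    energies, and the mean field."""
    n_t = frames.shape[0]
    X = frames.reshape(n_t, -1)       # each row is one snapshot
    mean = X.mean(axis=0)
    Xf = X - mean                     # fluctuations about the mean
    C = Xf @ Xf.T / n_t               # n_t x n_t snapshot correlation matrix
    lam, A = np.linalg.eigh(C)        # eigenvalues come out ascending
    order = np.argsort(lam)[::-1]
    lam, A = lam[order], A[:, order]
    modes = Xf.T @ A                  # spatial modes (unnormalized)
    norms = np.linalg.norm(modes, axis=0)
    norms[norms == 0] = 1.0           # guard the zero-energy directions
    modes = modes / norms
    coeffs = Xf @ modes               # temporal coefficients a_k(t)
    return modes, coeffs, lam, mean
```

A field built from two patterns is reconstructed exactly by its two leading modes, which is the property the reduced-order reconstruction in the abstract exploits.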

  11. Proper orthogonal decomposition analysis for cycle-to-cycle variations of engine flow. Effect of a control device in an inlet pipe

    NASA Astrophysics Data System (ADS)

    Vu, Trung-Thanh; Guibert, Philippe

    2012-06-01

    This paper investigates cycle-to-cycle variations of the non-reacting flow inside a motored single-cylinder transparent engine in order to assess the effect of the insertion depth of a control device that can be translated linearly inside the inlet pipe. Three positions, corresponding to three insertion depths, are used to modify the main aerodynamic properties from one cycle to the next. A large database of particle image velocimetry (PIV) two-dimensional velocity fields, acquired cycle by cycle, is post-processed to discriminate specific contributions to the fluctuating flow. We performed a multiple-snapshot proper orthogonal decomposition (POD) in the tumble plane of a pent-roof SI engine. The analysis consists of a triple decomposition of each instantaneous velocity field into three distinct parts: a mean part, a coherent part, and a turbulent part. The third- and fourth-order centered statistical moments of the POD-filtered velocity field, as well as the probability density function of the PIV realizations, show that the POD separates different behaviors of the flow. In particular, the cyclic variability is found to be contained essentially in the coherent part, so the cycle-to-cycle variations of the engine flow can be derived from the corresponding POD temporal coefficients. It is shown that the in-cylinder aerodynamic dispersion can be adjusted and monitored by controlling the insertion depth of the control device inside the inlet pipe.
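
A POD-based triple decomposition of this kind can be sketched as mean + coherent + turbulent, with the coherent part spanned by the leading POD modes of the fluctuations. This is a common surrogate, assuming the number of coherent modes is a user choice; it is not the authors' exact procedure.

```python
import numpy as np

def triple_decomposition(fields, n_coherent):
    """Split each snapshot into mean + coherent + turbulent parts.
    The coherent part is the projection of the fluctuations onto the
    first `n_coherent` POD modes; the remainder is the turbulent part."""
    X = fields.reshape(fields.shape[0], -1)   # rows = snapshots
    mean = X.mean(axis=0)
    Xf = X - mean
    # snapshot POD via SVD of the fluctuation matrix
    U, s, Vt = np.linalg.svd(Xf, full_matrices=False)
    Phi = Vt[:n_coherent].T                   # leading spatial modes
    coherent = (Xf @ Phi) @ Phi.T             # projection onto leading modes
    turbulent = Xf - coherent                 # residual small-scale part
    return mean, coherent, turbulent
```

By construction the three parts sum back to the snapshots and the coherent and turbulent parts are orthogonal.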

  12. Wind Farm Flow Modeling using an Input-Output Reduced-Order Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Annoni, Jennifer; Gebraad, Pieter; Seiler, Peter

    Wind turbines in a wind farm operate individually to maximize their own power, regardless of the impact of aerodynamic interactions on neighboring turbines. There is the potential to increase power and reduce overall structural loads by properly coordinating turbines. To perform control design and analysis, a model needs to be of low computational cost while retaining the necessary dynamics seen in high-fidelity models. The objective of this work is to obtain a reduced-order model that represents the full-order flow computed using a high-fidelity model. A variety of methods, including proper orthogonal decomposition and dynamic mode decomposition, can be used to extract the dominant flow structures and obtain a reduced-order model. In this paper, we combine proper orthogonal decomposition with a system identification technique to produce an input-output reduced-order model. This technique is used to construct a reduced-order model of the flow within a two-turbine array computed using a large-eddy simulation.
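
The POD-plus-system-identification idea can be sketched as a DMDc-style least-squares fit in the reduced coordinates. This assumes a linear discrete-time input-output model and is a generic sketch, not the authors' exact method.

```python
import numpy as np

def pod_io_rom(X, Uin, r):
    """Input-output ROM sketch: project snapshots (columns of X) onto
    r POD modes, then least-squares fit the discrete-time model
    a[k+1] = A a[k] + B u[k] to the mode coefficients (Uin holds the
    inputs, one column per snapshot)."""
    Phi, _, _ = np.linalg.svd(X, full_matrices=False)
    Phi = Phi[:, :r]                      # POD basis of the flow field
    a = Phi.T @ X                         # reduced coordinates, (r, n_t)
    Z = np.vstack([a[:, :-1], Uin[:, :-1]])
    AB = a[:, 1:] @ np.linalg.pinv(Z)     # [A B] via least squares
    return Phi, AB[:, :r], AB[:, r:]
```

For data generated by a linear system the identified reduced matrix is similar to the true one, so its eigenvalues are recovered.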

  13. Alternative Modal Basis Selection Procedures for Nonlinear Random Response Simulation

    NASA Technical Reports Server (NTRS)

    Przekop, Adam; Guo, Xinyun; Rizzi, Stephen A.

    2010-01-01

    Three procedures to guide selection of an efficient modal basis in a nonlinear random response analysis are examined. One method is based only on proper orthogonal decomposition, while the other two additionally involve smooth orthogonal decomposition. Acoustic random response problems are employed to assess the performance of the three modal basis selection approaches. A thermally post-buckled beam exhibiting snap-through behavior, a shallowly curved arch in the auto-parametric response regime and a plate structure are used as numerical test articles. The results of the three reduced-order analyses are compared with the results of the computationally taxing simulation in the physical degrees of freedom. For the cases considered, all three methods are shown to produce modal bases resulting in accurate and computationally efficient reduced-order nonlinear simulations.

  14. [Detection of constitutional types of EEG using the orthogonal decomposition method].

    PubMed

    Kuznetsova, S M; Kudritskaia, O V

    1987-01-01

    The authors present an algorithm for investigating brain bioelectrical activity with the help of an orthogonal decomposition device intended for the identification of constitutional types of EEG. The method effectively solves the problem of diagnosing constitutional types of EEG, which are determined by varying degrees of hereditary predisposition to longevity or cerebral stroke.

  15. Coherent vorticity extraction in resistive drift-wave turbulence: Comparison of orthogonal wavelets versus proper orthogonal decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Futatani, S.; Bos, W.J.T.; Del-Castillo-Negrete, Diego B

    2011-01-01

    We assess two techniques for extracting coherent vortices out of turbulent flows: the wavelet-based Coherent Vorticity Extraction (CVE) and the Proper Orthogonal Decomposition (POD). The former decomposes the flow field into an orthogonal wavelet representation, and subsequent thresholding of the coefficients allows one to split the flow into organized coherent vortices with non-Gaussian statistics and a structureless, incoherent random part. POD is based on the singular value decomposition and decomposes the flow into basis functions which are optimal with respect to the retained energy for the ensemble average. Both techniques are applied to direct numerical simulation data of two-dimensional drift-wave turbulence governed by the Hasegawa-Wakatani equation, considering two limit cases: the quasi-hydrodynamic and the quasi-adiabatic regimes. The results are compared in terms of compression rate, retained energy, retained enstrophy and retained radial flux, together with the enstrophy spectrum and higher-order statistics. (c) 2010 Published by Elsevier Masson SAS on behalf of Academie des sciences.

  16. Structural Technology Evaluation and Analysis Program (STEAP). Delivery Order 0046: Multiscale Modeling of Composite Structures Subjected to Cyclic Loading

    DTIC Science & Technology

    2012-09-01

    on transformation field analysis [19], proper orthogonal decomposition [63], eigenstrains [23], and others [1, 29, 39] have brought significant ... commercial finite element software (Abaqus) along with the user material subroutine utility (UMAT) is employed to solve these problems. In this section ... Symmetric Coefficients. TFA: Transformation Field Analysis. UMAT: User Material Subroutine.

  17. Proper Orthogonal Decomposition on Experimental Multi-phase Flow in a Pipe

    NASA Astrophysics Data System (ADS)

    Viggiano, Bianca; Tutkun, Murat; Cal, Raúl Bayoán

    2016-11-01

    Multi-phase flow in a 10 cm diameter pipe is analyzed using proper orthogonal decomposition. The data were obtained using X-ray computed tomography in the Well Flow Loop at the Institute for Energy Technology in Kjeller, Norway. The system consists of two sources and two detectors; one camera records the vertical beams and one camera records the horizontal beams. The X-ray system allows measurement of phase holdup, cross-sectional phase distributions and gas-liquid interface characteristics within the pipe. The mathematical framework in the context of multi-phase flows is developed. Phase fractions of a two-phase (gas-liquid) flow are analyzed and a reduced-order description of the flow is generated. Experimental data add complexity to the analysis, since only limited quantities are available for reconstruction. Comparison between the reconstructed fields and the full data set allows observation of the important features. The mathematical description obtained from the decomposition will deepen the understanding of multi-phase flow characteristics and is applicable to fluidized beds, hydroelectric power and nuclear processes, among others.

  18. Three-pattern decomposition of global atmospheric circulation: part I—decomposition model and theorems

    NASA Astrophysics Data System (ADS)

    Hu, Shujuan; Chou, Jifan; Cheng, Jianbo

    2018-04-01

    In order to study the interactions between the atmospheric circulations at the middle-high and low latitudes from a global perspective, the authors propose a mathematical definition of three-pattern circulations, i.e., horizontal, meridional and zonal circulations, in terms of which the actual atmospheric circulation is expanded. This novel decomposition method is proved to accurately describe the actual atmospheric circulation dynamics. The authors used the NCEP/NCAR reanalysis data to calculate the climate characteristics of the three-pattern circulations, and found that the decomposition model agrees with the observed results. Further dynamical analysis indicates that the decomposition model captures the major features of global three-dimensional atmospheric motions more accurately than the traditional definitions of the Rossby wave, Hadley circulation and Walker circulation. The model realizes, for the first time, the decomposition of the global atmospheric circulation into three orthogonal circulations within the horizontal, meridional and zonal planes, offering new opportunities to study the large-scale interactions between the middle-high latitude and low latitude circulations.

  19. Proper orthogonal decomposition analysis of scanning laser Doppler vibrometer measurements of plaster status at the U.S. Capitol

    NASA Astrophysics Data System (ADS)

    Vignola, Joseph F.; Bucaro, Joseph A.; Tressler, James F.; Ellingston, Damon; Kurdila, Andrew J.; Adams, George; Marchetti, Barbara; Agnani, Alexia; Esposito, Enrico; Tomasini, Enrico P.

    2004-06-01

    A large-scale survey (~700 m2) of frescos and wall paintings was undertaken in the U.S. Capitol Building in Washington, D.C. to identify regions that may need structural repair due to detachment, delamination, or other defects. The survey encompassed eight pre-selected spaces including: Brumidi's first work at the Capitol building in the House Appropriations Committee room; the Parliamentarian's office; the House Speaker's office; the Senate Reception room; the President's Room; and three areas of the Brumidi Corridors. Roughly 60% of the area surveyed was domed or vaulted ceilings, the rest being walls. Approximately 250 scans were done ranging in size from 1 to 4 m2. The typical mesh density was 400 scan points per square meter. A common approach for post-processing time series called Proper Orthogonal Decomposition, or POD, was adapted to frequency-domain data in order to extract the essential features of the structure. We present a POD analysis for one of these panels, pinpointing regions that have experienced severe substructural degradation.

  20. Lossless and Sufficient - Invariant Decomposition of Deterministic Target

    NASA Astrophysics Data System (ADS)

    Paladini, Riccardo; Ferro Famil, Laurent; Pottier, Eric; Martorella, Marco; Berizzi, Fabrizio

    2011-03-01

    The symmetric radar scattering matrix of a reciprocal target is projected onto the circular polarization basis and decomposed into four orientation-invariant parameters, a relative phase, and a relative orientation. The physical interpretation of these results is found in the wave-particle nature of radar scattering, due to the circular polarization nature of elemental packets of energy. The proposed decomposition is based on a left orthogonal to left Special Unitary basis, providing the target description in terms of a unitary vector. A comparison between the proposed CTD and the Cameron, Kennaugh and Krogager decompositions is also presented. Validation using both anechoic chamber data and airborne EMISAR data from DTU shows the effectiveness of this decomposition for the analysis of coherent targets. In a second paper we will show the application of the rotation group U(3) to the decomposition of distributed targets into nine meaningful parameters.

  1. LOCAL ORTHOGONAL CUTTING METHOD FOR COMPUTING MEDIAL CURVES AND ITS BIOMEDICAL APPLICATIONS

    PubMed Central

    Einstein, Daniel R.; Dyedov, Vladimir

    2010-01-01

    Medial curves have a wide range of applications in geometric modeling and analysis (such as shape matching) and biomedical engineering (such as morphometry and computer assisted surgery). The computation of medial curves poses significant challenges, both in terms of theoretical analysis and practical efficiency and reliability. In this paper, we propose a definition and analysis of medial curves and also describe an efficient and robust method called local orthogonal cutting (LOC) for computing medial curves. Our approach is based on three key concepts: a local orthogonal decomposition of objects into substructures, a differential geometry concept called the interior center of curvature (ICC), and integrated stability and consistency tests. These concepts lend themselves to robust numerical techniques and result in an algorithm that is efficient and noise resistant. We illustrate the effectiveness and robustness of our approach with some highly complex, large-scale, noisy biomedical geometries derived from medical images, including lung airways and blood vessels. We also present comparisons of our method with some existing methods. PMID:20628546

  2. Modal decomposition of turbulent supersonic cavity

    NASA Astrophysics Data System (ADS)

    Soni, R. K.; Arya, N.; De, A.

    2018-06-01

    Self-sustained oscillations in a Mach 3 supersonic cavity with a length-to-depth ratio of three are investigated using wall-modeled large eddy simulation for Re_D = 3.39 × 10^5. The unsteady data obtained through the computation are used to investigate the spatial and temporal evolution of the flow field, in particular the second invariant of the velocity gradient tensor, while phase-averaged data are analyzed over a feedback cycle to study the spatial structures. This analysis is accompanied by proper orthogonal decomposition (POD), which reveals the presence of discrete vortices along the shear layer. The POD analysis is performed in both the spanwise and streamwise planes to extract the coherent flow structures. Finally, dynamic mode decomposition is performed on the data sequence to obtain dynamic information and deeper insight into the self-sustained mechanism.
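
The dynamic mode decomposition step can be sketched with the standard exact-DMD algorithm. This is a textbook NumPy sketch, not the cavity-specific pipeline.

```python
import numpy as np

def dmd(X, rank):
    """Exact dynamic mode decomposition (Schmid/Tu-style) of a snapshot
    sequence: columns of X are flow states equispaced in time. Returns
    the DMD eigenvalues and modes."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vt = np.linalg.svd(X1, full_matrices=False)
    U, s, Vt = U[:, :rank], s[:rank], Vt[:rank]
    # reduced representation of the linear propagator mapping X1 -> X2
    Atilde = (U.conj().T @ X2 @ Vt.conj().T) * (1.0 / s)
    evals, W = np.linalg.eig(Atilde)
    # exact DMD modes: lift the eigenvectors back to full state space
    modes = (X2 @ Vt.conj().T * (1.0 / s)) @ W
    return evals, modes
```

For snapshots generated by a linear map, the DMD eigenvalues recover the map's spectrum.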

  3. Spatial patterns of soil moisture connected to monthly-seasonal precipitation variability in a monsoon region

    Treesearch

    Yongqiang Liu

    2003-01-01

    The relations between monthly-seasonal soil moisture and precipitation variability are investigated by identifying the coupled patterns of the two hydrological fields using singular value decomposition (SVD). SVD is a technique of principal component analysis similar to empirical orthogonal functions (EOF). However, it is applied to two variables simultaneously and is...
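
The SVD technique for coupled patterns (often called maximum covariance analysis) amounts to an SVD of the cross-covariance matrix between the two anomaly fields. A generic sketch, with illustrative names:

```python
import numpy as np

def coupled_patterns(S, P, n_pairs):
    """Coupled patterns of two fields (rows = time, columns = grid
    points), e.g. soil moisture S and precipitation P: SVD of the
    cross-covariance between their anomalies. Returns the leading
    left/right spatial patterns and singular values."""
    Sa = S - S.mean(axis=0)               # anomalies of field 1
    Pa = P - P.mean(axis=0)               # anomalies of field 2
    C = Sa.T @ Pa / (S.shape[0] - 1)      # cross-covariance matrix
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    return U[:, :n_pairs], Vt[:n_pairs].T, s[:n_pairs]
```

When both fields share a single common mode of variability, the leading pair of patterns recovers it exactly.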

  4. An Orthogonal Evolutionary Algorithm With Learning Automata for Multiobjective Optimization.

    PubMed

    Dai, Cai; Wang, Yuping; Ye, Miao; Xue, Xingsi; Liu, Hailin

    2016-12-01

    Research on multiobjective optimization problems has become one of the most active topics in intelligent computation. In order to improve the search efficiency of an evolutionary algorithm and maintain the diversity of solutions, in this paper the learning automata (LA) is first used for quantization orthogonal crossover (QOX), and a new fitness function based on decomposition is proposed to achieve these two purposes. Based on these, an orthogonal evolutionary algorithm with LA for complex multiobjective optimization problems with continuous variables is proposed. The experimental results show that in continuous states, the proposed algorithm is able to achieve accurate Pareto-optimal sets and wide Pareto-optimal fronts efficiently. Moreover, a comparison with several existing well-known algorithms (nondominated sorting genetic algorithm II; decomposition-based multiobjective evolutionary algorithm; decomposition-based multiobjective evolutionary algorithm with an ensemble of neighborhood sizes; multiobjective optimization by LA; and a multiobjective immune algorithm with nondominated neighbor-based selection) on 15 multiobjective benchmark problems shows that the proposed algorithm finds more accurate and more evenly distributed Pareto-optimal fronts than the compared ones.

  5. Observations on the Proper Orthogonal Decomposition

    NASA Technical Reports Server (NTRS)

    Berkooz, Gal

    1992-01-01

    The Proper Orthogonal Decomposition (P.O.D.), also known as the Karhunen-Loeve expansion, is a procedure for decomposing a stochastic field in an L(2) optimal sense. It is used in diverse disciplines from image processing to turbulence. Recently the P.O.D. is receiving much attention as a tool for studying dynamics of systems in infinite dimensional space. This paper reviews the mathematical fundamentals of this theory. Also included are results on the span of the eigenfunction basis, a geometric corollary due to Chebyshev's inequality and a relation between the P.O.D. symmetry and ergodicity.
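
The L(2) optimality the paper reviews can be illustrated directly: the POD (Karhunen-Loeve) basis of an ensemble captures at least as much energy as any other orthonormal basis of the same rank. A minimal sketch, with the basis obtained from the SVD of the centered data:

```python
import numpy as np

def pod_basis(X, r):
    """Karhunen-Loeve / POD basis of an ensemble (rows of X are
    realizations): the r leading right singular vectors of the
    centered data, together with the energy each mode captures."""
    U, s, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
    return Vt[:r].T, s[:r] ** 2   # basis vectors, captured energies
```

The optimality shows up as: energy of the ensemble projected onto the POD basis is never less than onto a random orthonormal basis of the same rank.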

  6. Mode decomposition and Lagrangian structures of the flow dynamics in orbitally shaken bioreactors

    NASA Astrophysics Data System (ADS)

    Weheliye, Weheliye Hashi; Cagney, Neil; Rodriguez, Gregorio; Micheletti, Martina; Ducci, Andrea

    2018-03-01

    In this study, two mode decomposition techniques were applied and compared to assess the flow dynamics in an orbital shaken bioreactor (OSB) of cylindrical geometry and flat bottom: proper orthogonal decomposition and dynamic mode decomposition. Particle Image Velocimetry (PIV) experiments were carried out for different operating conditions including fluid height, h, and shaker rotational speed, N. A detailed flow analysis is provided for conditions when the fluid and vessel motions are in-phase (Fr = 0.23) and out-of-phase (Fr = 0.47). PIV measurements in vertical and horizontal planes were combined to reconstruct low order models of the full 3D flow and to determine its Finite-Time Lyapunov Exponent (FTLE) within OSBs. The combined results from the mode decomposition and the FTLE fields provide a useful insight into the flow dynamics and Lagrangian coherent structures in OSBs and offer a valuable tool to optimise bioprocess design in terms of mixing and cell suspension.

  7. Turbulent Flow Over Large Roughness Elements: Effect of Frontal and Plan Solidity on Turbulence Statistics and Structure

    NASA Astrophysics Data System (ADS)

    Placidi, M.; Ganapathisubramani, B.

    2018-04-01

    Wind-tunnel experiments were carried out on fully-rough boundary layers with large roughness (δ/h ≈ 10, where h is the height of the roughness elements and δ is the boundary-layer thickness). Twelve different surface conditions were created by using LEGO™ bricks of uniform height. Six cases are tested for a fixed plan solidity (λ_P) with variations in frontal density (λ_F), while the other six cases have varying λ_P for fixed λ_F. Particle image velocimetry and floating-element drag-balance measurements were performed. The current results complement those contained in Placidi and Ganapathisubramani (J Fluid Mech 782:541-566, 2015), extending the previous analysis to the turbulence statistics and spatial structure. Results indicate that mean velocity profiles in defect form agree with Townsend's similarity hypothesis with varying λ_F; however, the agreement is worse for cases with varying λ_P. The streamwise and wall-normal turbulent stresses, as well as the Reynolds shear stresses, show a lack of similarity across most examined cases. This suggests that the critical height of the roughness for which outer-layer similarity holds depends not only on the height of the roughness, but also on the local wall morphology. A new criterion based on shelter solidity, defined as the sheltered plan area per unit wall-parallel area, which is similar to the `effective shelter area' in Raupach and Shaw (Boundary-Layer Meteorol 22:79-90, 1982), is found to capture the departure of the turbulence statistics from outer-layer similarity. Despite this lack of similarity reported in the turbulence statistics, proper orthogonal decomposition analysis, as well as two-point spatial correlations, show that some form of universal flow structure is present, as all cases exhibit virtually identical proper orthogonal decomposition mode shapes and correlation fields. Finally, reduced models based on proper orthogonal decomposition reveal that the small scales of the turbulence play a significant role in assessing outer-layer similarity.

  8. A study of the Alboran sea mesoscale system by means of empirical orthogonal function decomposition of satellite data

    NASA Astrophysics Data System (ADS)

    Baldacci, A.; Corsini, G.; Grasso, R.; Manzella, G.; Allen, J. T.; Cipollini, P.; Guymer, T. H.; Snaith, H. M.

    2001-05-01

    This paper presents the results of a combined empirical orthogonal function (EOF) analysis of Advanced Very High Resolution Radiometer (AVHRR) sea surface temperature (SST) data and sea-viewing wide field-of-view sensor (SeaWiFS) chlorophyll concentration data over the Alboran Sea (Western Mediterranean), covering a period of 1 year (November 1997-October 1998). The aim of this study is to go beyond the limited temporal extent of available in situ measurements by inferring the temporal and spatial variability of the Alboran Gyre system from long temporal series of satellite observations, in order to gain insight on the interactions between the circulation and the biological activity in the system. In this context, EOF decomposition permits concise and synoptic representation of the effects of physical and biological phenomena traced by SST and chlorophyll concentration. Thus, it is possible to focus the analysis on the most significant phenomena and to understand better the complex interactions between physics and biology at the mesoscale. The results of the EOF analysis of AVHRR-SST and SeaWiFS-chlorophyll concentration data are presented and discussed in detail. These improve and complement the knowledge acquired during the in situ observational campaigns of the MAST-III Observations and Modelling of Eddy scale Geostrophic and Ageostrophic motion (OMEGA) Project.

  9. The Rigid Orthogonal Procrustes Rotation Problem

    ERIC Educational Resources Information Center

    ten Berge, Jos M. F.

    2006-01-01

    The problem of rotating a matrix orthogonally to a best least squares fit with another matrix of the same order has a closed-form solution based on a singular value decomposition. The optimal rotation matrix is not necessarily rigid, but may also involve a reflection. In some applications, only rigid rotations are permitted. Gower (1976) has…
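
    A minimal numpy sketch of the closed-form SVD solution with the rigidity constraint (determinant +1) enforced. The sign-flip step shown here is the standard correction for the reflection case the abstract mentions, not necessarily the construction of the cited paper.

```python
import numpy as np

def rigid_procrustes(A, B):
    """Find the rigid rotation T (T.T @ T = I, det T = +1) minimizing
    ||A @ T - B||_F, via the SVD of A.T @ B with a sign correction."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    T = U @ Vt                                  # unconstrained orthogonal optimum
    if np.linalg.det(T) < 0:                    # a reflection: flip the weakest direction
        U[:, -1] *= -1
        T = U @ Vt
    return T
```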

  10. On the Hodge-type decomposition and cohomology groups of k-Cauchy-Fueter complexes over domains in the quaternionic space

    NASA Astrophysics Data System (ADS)

    Chang, Der-Chen; Markina, Irina; Wang, Wei

    2016-09-01

    The k-Cauchy-Fueter operator D0(k) on the one-dimensional quaternionic space H is the Euclidean version of the spin-k/2 massless field operator on Minkowski space in physics. The k-Cauchy-Fueter equation for k ≥ 2 is overdetermined and its compatibility condition is given by the k-Cauchy-Fueter complex. In quaternionic analysis, these complexes play the role of the Dolbeault complex in several complex variables. We prove that a natural boundary value problem associated to this complex is regular. Then by using the theory of regular boundary value problems, we show the Hodge-type orthogonal decomposition, and the fact that the non-homogeneous k-Cauchy-Fueter equation D0(k) u = f on a smooth domain Ω in H is solvable if and only if f satisfies the compatibility condition and is orthogonal to the set ℋ(k)1 (Ω) of Hodge-type elements. This set is isomorphic to the first cohomology group of the k-Cauchy-Fueter complex over Ω, which is finite dimensional, while the second cohomology group is always trivial.

  11. Focused-based multifractal analysis of the wake in a wind turbine array utilizing proper orthogonal decomposition

    NASA Astrophysics Data System (ADS)

    Kadum, Hawwa; Ali, Naseem; Cal, Raúl

    2016-11-01

    Hot-wire anemometry measurements have been performed on a 3 x 3 wind turbine array to study the multifractality of the turbulent kinetic energy dissipation. A multifractal spectrum and Hurst exponents are determined at nine locations downstream of the hub height and the bottom and top tips. Higher multifractality is found at 0.5D and 1D downstream of the bottom tip and hub height. The second order of the Hurst exponent and the combination factor show an ability to predict the flow state in terms of its development. Snapshot proper orthogonal decomposition is used to identify the coherent and incoherent structures and to reconstruct the stochastic velocity using a specific number of the POD eigenfunctions. The accumulation of the turbulent kinetic energy at the top-tip location exhibits fast convergence compared to the bottom-tip and hub-height locations. The dissipations of the large and small scales are determined using the reconstructed stochastic velocities. Higher multifractality is found in the large-scale dissipation than in the small-scale dissipation, consistent with the behavior of the original signals.

  12. Harmonic analysis of traction power supply system based on wavelet decomposition

    NASA Astrophysics Data System (ADS)

    Dun, Xiaohong

    2018-05-01

    With the rapid development of high-speed railways and heavy-haul transport, AC-drive electric locomotives and EMUs operate on a large scale across the country, and the electrified railway has become the main harmonic source in China's power grid. This calls for timely monitoring, assessment, and mitigation of the power-quality problems of electrified railways. The wavelet transform was developed on the basis of Fourier analysis, with its basic idea drawn from harmonic analysis and a rigorous theoretical model; it inherits and develops the localization idea of the Gabor transform while overcoming disadvantages such as the fixed window and the lack of discrete orthogonality, and has thus become a much-studied spectral-analysis tool. Wavelet analysis takes gradually finer time-domain steps in the high-frequency part, so it can focus on any detail of the signal being analyzed, thereby comprehensively analyzing the harmonics of the traction power supply system; the pyramid algorithm is used to increase the speed of the wavelet decomposition. Matlab simulation shows that wavelet decomposition is effective for harmonic spectrum analysis of the traction power supply system.
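
    The pyramid (Mallat) algorithm mentioned above can be illustrated with the simplest orthogonal wavelet, the Haar wavelet. The abstract does not state which wavelet the paper uses, so this numpy-only sketch is generic, not the paper's implementation.

```python
import numpy as np

def haar_decompose(signal, levels):
    """Mallat pyramid with the Haar wavelet: repeatedly split the signal into
    approximation (low-pass) and detail (high-pass) coefficient halves."""
    a = np.asarray(signal, dtype=float)
    details = []
    for _ in range(levels):
        a_even, a_odd = a[0::2], a[1::2]
        details.append((a_even - a_odd) / np.sqrt(2))   # detail (high-frequency) part
        a = (a_even + a_odd) / np.sqrt(2)               # approximation (low-frequency) part
    return a, details
```

    Orthogonality shows up as exact energy preservation across the approximation and detail coefficients, which is what makes per-band harmonic energy estimates meaningful.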

  13. A copyright protection scheme for digital images based on shuffled singular value decomposition and visual cryptography.

    PubMed

    Devi, B Pushpa; Singh, Kh Manglem; Roy, Sudipta

    2016-01-01

    This paper proposes a new watermarking algorithm based on the shuffled singular value decomposition and the visual cryptography for copyright protection of digital images. It generates the ownership and identification shares of the image based on visual cryptography. It decomposes the image into low and high frequency sub-bands. The low frequency sub-band is further divided into blocks of same size after shuffling it and then the singular value decomposition is applied to each randomly selected block. Shares are generated by comparing one of the elements in the first column of the left orthogonal matrix with its corresponding element in the right orthogonal matrix of the singular value decomposition of the block of the low frequency sub-band. The experimental results show that the proposed scheme clearly verifies the copyright of the digital images, and is robust to withstand several image processing attacks. Comparison with the other related visual cryptography-based algorithms reveals that the proposed method gives better performance. The proposed method is especially resilient against the rotation attack.

  14. Modeling of a pitching and plunging airfoil using experimental flow field and load measurements

    NASA Astrophysics Data System (ADS)

    Troshin, Victor; Seifert, Avraham

    2018-01-01

    The main goal of the current paper is to outline a low-order modeling procedure of a heaving airfoil in a still fluid using experimental measurements. Due to its relative simplicity, the proposed procedure is applicable for the analysis of flow fields within complex and unsteady geometries and is suitable for analyzing experimentally obtained data. Currently, this procedure is used to model and predict the flow field evolution using a small number of low profile load sensors and flow field measurements. A time delay neural network is used to estimate the flow field. The neural network estimates the amplitudes of the most energetic modes using four sensory inputs. The modes are calculated using proper orthogonal decomposition of the flow field data obtained experimentally by time-resolved, phase-locked particle image velocimetry. To permit the use of proper orthogonal decomposition, the measured flow field is mapped onto a stationary domain using volume preserving transformation. The analysis performed by the model showed good estimation quality within the parameter range used in the training procedure. However, the performance deteriorates for cases out of this range. This situation indicates that, to improve the robustness of the model, both the decomposition and the training data sets must be diverse in terms of input parameter space. In addition, the results suggest that the property of volume preservation of the mapping does not affect the model quality as long as the model is not based on the Galerkin approximation. Thus, it may be relaxed for cases with more complex geometry and kinematics.

  15. An examination of coherent structures in a lobed mixer using multifractal measures in conjunction with the proper orthogonal decomposition

    NASA Technical Reports Server (NTRS)

    Ukeiley, L.; Varghese, M.; Glauser, M.; Valentine, D.

    1991-01-01

    A 'lobed mixer' device that enhances mixing through secondary flows and streamwise vorticity is presently studied within the framework of multifractal-measures theory, in order to deepen understanding of velocity time trace data gathered on its operation. Proper orthogonal decomposition-based knowledge of coherent structures has been applied to obtain the generalized fractal dimensions and multifractal spectrum of several proper eigenmodes for data samples of the velocity time traces; this constitutes a marked departure from previous multifractal theory applications to self-similar cascades. In certain cases, a single dimension may suffice to capture the entire spectrum of scaling exponents for the velocity time trace.

  16. Proper orthogonal decomposition-based spectral higher-order stochastic estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baars, Woutijn J., E-mail: wbaars@unimelb.edu.au; Tinney, Charles E.

    A unique routine, capable of identifying both linear and higher-order coherence in multiple-input/output systems, is presented. The technique combines two well-established methods: Proper Orthogonal Decomposition (POD) and Higher-Order Spectra Analysis. The latter of these is based on known methods for characterizing nonlinear systems by way of Volterra series. In that, both linear and higher-order kernels are formed to quantify the spectral (nonlinear) transfer of energy between the system's input and output. This reduces essentially to spectral Linear Stochastic Estimation when only first-order terms are considered, and is therefore presented in the context of stochastic estimation as spectral Higher-Order Stochastic Estimation (HOSE). The trade-off to seeking higher-order transfer kernels is that the increased complexity restricts the analysis to single-input/output systems. Low-dimensional (POD-based) analysis techniques are inserted to alleviate this void as POD coefficients represent the dynamics of the spatial structures (modes) of a multi-degree-of-freedom system. The mathematical framework behind this POD-based HOSE method is first described. The method is then tested in the context of jet aeroacoustics by modeling acoustically efficient large-scale instabilities as combinations of wave packets. The growth, saturation, and decay of these spatially convecting wave packets are shown to couple both linearly and nonlinearly in the near-field to produce waveforms that propagate acoustically to the far-field for different frequency combinations.

  17. Simulation of multivariate stationary stochastic processes using dimension-reduction representation methods

    NASA Astrophysics Data System (ADS)

    Liu, Zhangjun; Liu, Zenghui; Peng, Yongbo

    2018-03-01

    In view of the Fourier-Stieltjes integral formula of multivariate stationary stochastic processes, a unified formulation accommodating the spectral representation method (SRM) and proper orthogonal decomposition (POD) is deduced. By introducing random functions as constraints correlating the orthogonal random variables involved in the unified formulation, the dimension-reduction spectral representation method (DR-SRM) and the dimension-reduction proper orthogonal decomposition (DR-POD) are addressed. The proposed schemes are capable of representing the multivariate stationary stochastic process with a few elementary random variables, bypassing the challenges of high-dimensional random variables inherent in the conventional Monte Carlo methods. In order to accelerate the numerical simulation, the technique of Fast Fourier Transform (FFT) is integrated with the proposed schemes. For illustrative purposes, the simulation of the horizontal wind velocity field along the deck of a large-span bridge is carried out using the proposed methods with 2 and 3 elementary random variables. Numerical simulation reveals the usefulness of the dimension-reduction representation methods.
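
    The plain (non-dimension-reduced) spectral representation method can be sketched as a sum of cosines with i.i.d. uniform random phases, evaluated in one shot by an FFT. The one-sided PSD in the usage below is hypothetical, and the paper's dimension-reduction constraint is not reproduced here.

```python
import numpy as np

def srm_simulate(psd_onesided, dw, n_fft, rng):
    """Spectral representation method for a scalar stationary process:
    x(t_j) = sum_k sqrt(2 S(w_k) dw) cos(w_k t_j + phi_k), with w_k = k*dw
    and iid uniform phases phi_k, evaluated via an inverse FFT."""
    amp = np.sqrt(2.0 * psd_onesided * dw)
    phases = rng.uniform(0, 2*np.pi, size=amp.size)
    spec = np.zeros(n_fft, dtype=complex)
    spec[:amp.size] = amp * np.exp(1j * phases)
    # N * Re[ifft(spec)]_j equals the cosine sum sampled at t_j = 2*pi*j/(N*dw)
    return n_fft * np.real(np.fft.ifft(spec))
```

    Over a full period the sample variance of the realization equals the integral of the PSD, which is a convenient sanity check on the implementation.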

  18. Killing-Yano tensors in spaces admitting a hypersurface orthogonal Killing vector

    NASA Astrophysics Data System (ADS)

    Garfinkle, David; Glass, E. N.

    2013-03-01

    Methods are presented for finding Killing-Yano tensors, conformal Killing-Yano tensors, and conformal Killing vectors in spacetimes with a hypersurface orthogonal Killing vector. These methods are similar to a method developed by the authors for finding Killing tensors. In all cases one decomposes both the tensor and the equation it satisfies into pieces along the Killing vector and pieces orthogonal to the Killing vector. Solving the separate equations that result from this decomposition requires less computing than integrating the original equation. In each case, examples are given to illustrate the method.

  19. Quantitative Boltzmann-Gibbs Principles via Orthogonal Polynomial Duality

    NASA Astrophysics Data System (ADS)

    Ayala, Mario; Carinci, Gioia; Redig, Frank

    2018-06-01

    We study fluctuation fields of orthogonal polynomials in the context of particle systems with duality. We thereby obtain a systematic orthogonal decomposition of the fluctuation fields of local functions, where the order of every term can be quantified. This implies a quantitative generalization of the Boltzmann-Gibbs principle. In the context of independent random walkers, we complete this program, including also fluctuation fields in non-stationary context (local equilibrium). For other interacting particle systems with duality such as the symmetric exclusion process, similar results can be obtained, under precise conditions on the n particle dynamics.

  20. A modal analysis of lamellar diffraction gratings in conical mountings

    NASA Technical Reports Server (NTRS)

    Li, Lifeng

    1992-01-01

    A rigorous modal analysis of lamellar gratings, i.e., gratings having rectangular grooves, in conical mountings is presented. It is an extension of the analysis of Botten et al. which considered non-conical mountings. A key step in the extension is a decomposition of the electromagnetic field in the grating region into two orthogonal components. A computer program implementing this extended modal analysis is capable of dealing with plane wave diffraction by dielectric and metallic gratings with deep grooves, at arbitrary angles of incidence, and having arbitrary incident polarizations. Some numerical examples are included.

  1. On the uniqueness of the constrained space orbital variation (CSOV) technique

    NASA Technical Reports Server (NTRS)

    Bauschlicher, C. W., Jr.

    1986-01-01

    Several CSOV analyses are performed for the 1Sigma(+) state of NiCO, and it is shown that the importance of the CO sigma donation, Ni pi back donation, and interunit polarizations are virtually independent of the order of the CSOV steps, provided that the open-shell 3d sigma and 4s Ni orbitals are orthogonalized to the CO. This order of orthogonalization is consistent with the polarization of the Ni observed in the unconstrained SCF wavefunction. If instead the CO is orthogonalized to the open-shell Ni orbitals, the frozen orbital repulsion and entire CSOV analysis becomes unphysical. A comparison of the SCF and CAS SCF descriptions for the NiCO 1Sigma(+) state shows the importance of the s to d promotion and sd hybridization in reducing the repulsion and increasing the Ni to CO pi bonding. For LiF, CSOV analyses starting from both the neutral and ionic asymptotes show the bonding to be predominantly Li(+) - F(-). These examples show the uniqueness of the CSOV decomposition.

  2. Proper Orthogonal Decomposition in Optimal Control of Fluids

    NASA Technical Reports Server (NTRS)

    Ravindran, S. S.

    1999-01-01

    In this article, we present a reduced order modeling approach suitable for active control of fluid dynamical systems based on proper orthogonal decomposition (POD). The rationale behind the reduced order modeling is that numerical simulation of the Navier-Stokes equations is still too costly for the purpose of optimization and control of unsteady flows. We examine the possibility of obtaining reduced order models that reduce the computational complexity associated with the Navier-Stokes equations while capturing the essential dynamics by using the POD. The POD allows extraction of an optimal set of basis functions, perhaps few, from a computational or experimental database through an eigenvalue analysis. The solution is then obtained as a linear combination of this optimal set of basis functions by means of Galerkin projection. This makes it attractive for optimal control and estimation of systems governed by partial differential equations. We here use it in active control of fluid flows governed by the Navier-Stokes equations. We show that the resulting reduced order model can be very efficient for the computations of optimization and control problems in unsteady flows. Finally, implementational issues and numerical experiments are presented for simulations and optimal control of fluid flow through channels.
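
    For a linear surrogate x' = A x (a stand-in for the discretized Navier-Stokes dynamics discussed above), the POD/Galerkin step amounts to projecting the operator onto the leading left singular vectors of a snapshot matrix. A minimal sketch with illustrative names; the paper's full nonlinear ROM and control loop are not reproduced.

```python
import numpy as np

def pod_galerkin_rom(A, snapshots, r):
    """Project the linear dynamics x' = A x onto the leading r POD modes:
    with x ~ Phi @ a, Galerkin projection gives a' = (Phi.T @ A @ Phi) a."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    Phi = U[:, :r]                   # POD basis: leading left singular vectors
    Ar = Phi.T @ A @ Phi             # reduced operator (r x r)
    return Phi, Ar
```

    When the snapshots lie in an invariant subspace of A, the reduced operator reproduces the corresponding eigenvalues exactly; in general the ROM is only as good as the snapshot set.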

  3. On the Hilbert-Huang Transform Theoretical Developments

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Blank, Karin; Flatley, Thomas; Huang, Norden E.; Patrick, David; Hestnes, Phyllis

    2005-01-01

    One of the main heritage tools used in scientific and engineering data spectrum analysis is the Fourier Integral Transform and its high performance digital equivalent - the Fast Fourier Transform (FFT). Both carry strong a-priori assumptions about the source data, such as linearity, stationarity, and satisfaction of the Dirichlet conditions. A recent development at the National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC), known as the Hilbert-Huang Transform (HHT), proposes a novel approach to the solution for the nonlinear class of spectrum analysis problems. Using a-posteriori data processing based on the Empirical Mode Decomposition (EMD) sifting process (algorithm), followed by the normalized Hilbert Transform of the decomposition data, the HHT allows spectrum analysis of nonlinear and nonstationary data. The EMD sifting process results in a non-constrained decomposition of a source real-valued data vector into a finite set of Intrinsic Mode Functions (IMF). These functions form a near-orthogonal adaptive basis, a basis that is derived from the data. The IMFs can be further analyzed for spectrum interpretation by the classical Hilbert Transform. A new engineering spectrum analysis tool using HHT has been developed at NASA GSFC, the HHT Data Processing System (HHT-DPS). As the HHT-DPS has been successfully used and commercialized, new applications pose additional questions about the theoretical basis behind the HHT and EMD algorithms. Why is the fastest changing component of a composite signal sifted out first in the EMD sifting process? Why does the EMD sifting process seemingly converge, and why does it converge rapidly? Does an IMF have a distinctive structure? Why are the IMFs nearly orthogonal? We address these questions and develop the initial theoretical background for the HHT.
This will contribute to the development of new HHT processing options, such as real-time and 2-D processing using Field Programmable Gate Array (FPGA) computational resources and enhanced HHT synthesis, and will broaden the scope of HHT applications for signal processing.
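
    The Hilbert-spectral step of the HHT (applied after EMD, which is omitted here) can be sketched with scipy for a single mono-component signal: the analytic signal yields instantaneous amplitude and frequency.

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_attributes(imf, dt):
    """Hilbert spectral step of the HHT: build the analytic signal of a
    mono-component IMF and return instantaneous amplitude and frequency (Hz)."""
    analytic = hilbert(imf)
    amplitude = np.abs(analytic)
    phase = np.unwrap(np.angle(analytic))
    freq = np.gradient(phase, dt) / (2 * np.pi)
    return amplitude, freq
```

    For a multi-component signal this step is meaningful only after the EMD sifting has isolated each IMF, which is exactly why the sifting questions raised above matter.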

  4. Multi-scale statistical analysis of coronal solar activity

    DOE PAGES

    Gamborino, Diana; del-Castillo-Negrete, Diego; Martinell, Julio J.

    2016-07-08

    Multi-filter images of the solar corona are used to obtain temperature maps that are analyzed using techniques based on proper orthogonal decomposition (POD) in order to extract dynamical and structural information at various scales. Exploring active regions before and after a solar flare and comparing them with quiet regions, we show that the multi-scale behavior presents distinct statistical properties for each case that can be used to characterize the level of activity in a region. Information about the nature of heat transport can also be extracted from the analysis.

  5. Subgrid-scale physical parameterization in atmospheric modeling: How can we make it consistent?

    NASA Astrophysics Data System (ADS)

    Yano, Jun-Ichi

    2016-07-01

    Approaches to subgrid-scale physical parameterization in atmospheric modeling are reviewed by taking turbulent combustion flow research as a point of reference. Three major general approaches are considered for its consistent development: moment, distribution density function (DDF), and mode decomposition. The moment expansion is a standard method for describing the subgrid-scale turbulent flows both in geophysics and engineering. The DDF (commonly called PDF) approach is intuitively appealing as it deals with a distribution of variables in subgrid scale in a more direct manner. Mode decomposition was originally applied by Aubry et al (1988 J. Fluid Mech. 192 115-73) in the context of wall boundary-layer turbulence. It is specifically designed to represent coherencies in a compact manner by a low-dimensional dynamical system. Their original proposal adopts the proper orthogonal decomposition (empirical orthogonal functions) as their mode-decomposition basis. However, the methodology can easily be generalized into any decomposition basis. Among those, wavelet is a particularly attractive alternative. The mass-flux formulation that is currently adopted in the majority of atmospheric models for parameterizing convection can also be considered a special case of mode decomposition, adopting segmentally constant modes for the expansion basis. This perspective further identifies a very basic but also general geometrical constraint imposed on the mass-flux formulation: the segmentally-constant approximation. Mode decomposition can, furthermore, be understood by analogy with a Galerkin method in numerical modeling. This analogy suggests that the subgrid parameterization may be re-interpreted as a type of mesh-refinement in numerical modeling. A link between the subgrid parameterization and downscaling problems is also pointed out.

  6. LES of flow in the street canyon

    NASA Astrophysics Data System (ADS)

    Fuka, Vladimír; Brechler, Josef

    2012-04-01

    Results of computer simulation of flow over a series of street canyons are presented in this paper. The setup is adapted from an experimental study by [4] with two different shapes of buildings. The problem is simulated by an LES model CLMM (Charles University Large Eddy Microscale Model) and results are analysed using proper orthogonal decomposition and spectral analysis. The results in the channel (layout from the experiment) are compared with results with a free top boundary.

  7. Matrix-Inversion-Free Compressed Sensing With Variable Orthogonal Multi-Matching Pursuit Based on Prior Information for ECG Signals.

    PubMed

    Cheng, Yih-Chun; Tsai, Pei-Yun; Huang, Ming-Hao

    2016-05-19

    Low-complexity compressed sensing (CS) techniques for monitoring electrocardiogram (ECG) signals in wireless body sensor networks (WBSN) are presented. The prior probability of ECG sparsity in the wavelet domain is first exploited. Then, a variable orthogonal multi-matching pursuit (vOMMP) algorithm consisting of two phases is proposed. In the first phase, the orthogonal matching pursuit (OMP) algorithm is adopted to effectively augment the support set with reliable indices, and in the second phase, orthogonal multi-matching pursuit (OMMP) is employed to rescue the missing indices. The reconstruction performance is thus enhanced with the prior information and the vOMMP algorithm. Furthermore, the computation-intensive pseudo-inverse operation is simplified by the matrix-inversion-free (MIF) technique based on QR decomposition. The vOMMP-MIF CS decoder is then implemented in 90 nm CMOS technology. The QR decomposition is accomplished by two systolic arrays working in parallel. The implementation supports three settings for obtaining 40, 44, and 48 coefficients in the sparse vector. From the measurement result, the power consumption is 11.7 mW at 0.9 V and 12 MHz. Compared to prior chip implementations, our design shows good hardware efficiency and is suitable for low-energy applications.
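
    The first-phase OMP step described above can be sketched in numpy. This is the textbook algorithm, not the authors' vOMMP or their matrix-inversion-free hardware implementation, and the dictionary in the usage below is illustrative.

```python
import numpy as np

def omp(Phi, y, sparsity):
    """Orthogonal matching pursuit: greedily grow the support with the atom
    most correlated to the residual, re-fitting all selected atoms by least
    squares at every iteration."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        k = int(np.argmax(np.abs(Phi.T @ residual)))    # most correlated atom
        support.append(k)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x
```

    The repeated least-squares solve is the pseudo-inverse that the paper's MIF/QR technique is designed to avoid in hardware.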

  8. Numerical Analysis and Improved Algorithms for Lyapunov-Exponent Calculation of Discrete-Time Chaotic Systems

    NASA Astrophysics Data System (ADS)

    He, Jianbin; Yu, Simin; Cai, Jianping

    2016-12-01

    The Lyapunov exponent is an important index for describing the behavior of chaotic systems, and the largest Lyapunov exponent can be used to determine whether a system is chaotic or not. For discrete-time dynamical systems, the Lyapunov exponents are calculated by an eigenvalue method. In theory, according to the eigenvalue method, more accurate Lyapunov exponents can be obtained as the number of iterations increases, and the limits exist. However, due to the finite precision of computers and other reasons, the results may overflow, be unrecognizable, or be inaccurate, as follows: (1) the number of iterations cannot be too large; otherwise, the simulation produces an error result of NaN or Inf; (2) if NaN or Inf does not appear, then with increasing iterations all Lyapunov exponents get close to the largest Lyapunov exponent, which leads to inaccurate results; (3) from the viewpoint of numerical calculation, if the number of iterations is too small, the results are also inaccurate. Based on this analysis of Lyapunov-exponent calculation in discrete-time systems, this paper investigates two improved algorithms based on QR orthogonal decomposition and SVD orthogonal decomposition so as to solve the above-mentioned problems. Finally, some examples are given to illustrate the feasibility and effectiveness of the improved algorithms.
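
    The QR-based approach can be sketched for a discrete map: propagate a set of tangent vectors with the Jacobian, re-orthonormalize them by QR at every step (which prevents all directions from collapsing onto the fastest-growing one, issue (2) above), and average the logs of the diagonal of R. The Hénon map in the usage below is a standard test case, not necessarily one of the paper's examples.

```python
import numpy as np

def lyapunov_qr(jacobian, step, x0, n_iter):
    """Lyapunov exponents of a discrete map via QR re-orthogonalization."""
    x = np.asarray(x0, dtype=float)
    Q = np.eye(x.size)
    sums = np.zeros(x.size)
    for _ in range(n_iter):
        Q, R = np.linalg.qr(jacobian(x) @ Q)    # re-orthonormalize tangent vectors
        sums += np.log(np.abs(np.diag(R)))      # per-direction stretching rates
        x = step(x)
    return sums / n_iter
```

    A useful check: the exponents must sum to the average log-determinant of the Jacobian, which for the Hénon map is log(b) at every step.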

  9. FACETS: multi-faceted functional decomposition of protein interaction networks.

    PubMed

    Seah, Boon-Siew; Bhowmick, Sourav S; Dewey, C Forbes

    2012-10-15

    The availability of large-scale curated protein interaction datasets has given rise to the opportunity to investigate higher level organization and modularity within the protein-protein interaction (PPI) network using graph theoretic analysis. Despite the recent progress, systems level analysis of high-throughput PPIs remains a daunting task because of the amount of data they present. In this article, we propose a novel PPI network decomposition algorithm called FACETS in order to make sense of the deluge of interaction data using Gene Ontology (GO) annotations. FACETS finds not just a single functional decomposition of the PPI network, but a multi-faceted atlas of functional decompositions that portray alternative perspectives of the functional landscape of the underlying PPI network. Each facet in the atlas represents a distinct interpretation of how the network can be functionally decomposed and organized. Our algorithm maximizes the interpretative value of the atlas by optimizing inter-facet orthogonality and intra-facet cluster modularity. We tested our algorithm on the global networks from IntAct, and compared it with gold standard datasets from MIPS and KEGG. We demonstrated the performance of FACETS. We also performed a case study that illustrates the utility of our approach. Supplementary data are available at Bioinformatics online. Our software is available freely for non-commercial purposes from: http://www.cais.ntu.edu.sg/~assourav/Facets/

  10. Local Orthogonal Cutting Method for Computing Medial Curves and Its Biomedical Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiao, Xiangmin; Einstein, Daniel R.; Dyedov, Volodymyr

    2010-03-24

    Medial curves have a wide range of applications in geometric modeling and analysis (such as shape matching) and biomedical engineering (such as morphometry and computer assisted surgery). The computation of medial curves poses significant challenges, both in terms of theoretical analysis and practical efficiency and reliability. In this paper, we propose a definition and analysis of medial curves and also describe an efficient and robust method for computing medial curves. Our approach is based on three key concepts: a local orthogonal decomposition of objects into substructures, a differential geometry concept called the interior center of curvature (ICC), and integrated stability and consistency tests. These concepts lend themselves to robust numerical techniques including eigenvalue analysis, weighted least squares approximations, and numerical minimization, resulting in an algorithm that is efficient and noise resistant. We illustrate the effectiveness and robustness of our approach with some highly complex, large-scale, noisy biomedical geometries derived from medical images, including lung airways and blood vessels. We also present comparisons of our method with some existing methods.

  11. Persistent model order reduction for complex dynamical systems using smooth orthogonal decomposition

    NASA Astrophysics Data System (ADS)

    Ilbeigi, Shahab; Chelidze, David

    2017-11-01

    Full-scale complex dynamic models are not effective for parametric studies due to the inherent constraints on available computational power and storage resources. A persistent reduced order model (ROM) that is robust, stable, and provides high-fidelity simulations for a relatively wide range of parameters and operating conditions can provide a solution to this problem. The fidelity of a new framework for persistent model order reduction of large and complex dynamical systems is investigated. The framework is validated using several numerical examples including a large linear system and two complex nonlinear systems with material and geometrical nonlinearities. While the framework is used for identifying the robust subspaces obtained from both proper and smooth orthogonal decompositions (POD and SOD, respectively), the results show that SOD outperforms POD in terms of stability, accuracy, and robustness.
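
    Smooth orthogonal decomposition, as used in this framework, solves a generalized eigenproblem between the covariance of the data and the covariance of its time derivative; modes with large smooth orthogonal values are both energetic and slowly varying. A minimal sketch on synthetic data (the paper's ROM construction on top of SOD is not reproduced):

```python
import numpy as np
from scipy.linalg import eigh

def sod(X, dt):
    """Smooth orthogonal decomposition of X (n_time, n_dof): solve
    Sigma_x w = lam * Sigma_v w, where Sigma_v is the covariance of the
    numerical time derivative. Large lam = smooth, dominant mode."""
    X = X - X.mean(axis=0)
    V = np.gradient(X, dt, axis=0)          # finite-difference time derivative
    Sx = X.T @ X / X.shape[0]
    Sv = V.T @ V / V.shape[0]
    lam, W = eigh(Sx, Sv)                   # generalized symmetric eigenproblem
    order = np.argsort(lam)[::-1]
    return lam[order], W[:, order]          # smooth orthogonal values / modes
```

    For a harmonic coordinate of frequency omega, the smooth orthogonal value scales as 1/omega^2, which is why SOD separates slow from fast dynamics even when their energies are comparable.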

  12. Plane waves and structures in turbulent channel flow

    NASA Technical Reports Server (NTRS)

    Sirovich, L.; Ball, K. S.; Keefe, L. R.

    1990-01-01

    A direct simulation of turbulent flow in a channel is analyzed by the method of empirical eigenfunctions (Karhunen-Loeve procedure, proper orthogonal decomposition). This analysis reveals the presence of propagating plane waves in the turbulent flow. The velocity of propagation is determined by the flow velocity at the location of maximal Reynolds stress. The analysis further suggests that the interaction of these waves appears to be essential to the local production of turbulence via bursting or sweeping events in the turbulent boundary layer, with the additional suggestion that the fast acting plane waves act as triggers.

  13. On the physical significance of the Effective Independence method for sensor placement

    NASA Astrophysics Data System (ADS)

    Jiang, Yaoguang; Li, Dongsheng; Song, Gangbing

    2017-05-01

    Optimally deploying sparse sensors for better damage identification and structural health monitoring is always a challenging task. The Effective Independence (EI) method is one of the most influential sensor placement methods and is discussed in this paper. Specifically, the effect of different weighting coefficients on the maximization of the Fisher information matrix (FIM) and the physical significance of the re-orthogonalization of modal shapes through QR decomposition in the EI method are addressed. By analyzing the widely used EI method, we found that the absolute identification space put forward along with the EI method is preferable for ensuring the maximization of the FIM, instead of the original EI coefficient, which was post-multiplied by a weighting matrix. That is, deleting the row with the minimum EI coefficient cannot achieve the objective of maximizing the trace of the FIM as initially conceived. Furthermore, we observed that in the computation of the EI method, the sum of each retained row in the absolute identification space is a constant in each iteration. This property can be revealed distinctly by the product of the target mode and its transpose, and its form is similar to an alternative formula of the EI method through orthogonal-triangular (QR) decomposition previously proposed by the authors. With it, the physical significance of the re-orthogonalization of modal shapes through QR decomposition in the computation of the EI method can be clearly manifested from a new perspective. Finally, two simple examples are provided to demonstrate the above two observations.
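
    The basic EI iteration analyzed above deletes, one DOF at a time, the candidate with the smallest EI coefficient, i.e., the smallest diagonal entry of the projection matrix built from the target modes. A minimal numpy sketch with illustrative mode shapes; the weighting-matrix variants discussed in the paper are not included.

```python
import numpy as np

def effective_independence(Phi, n_sensors):
    """EI sensor placement: Phi is (n_candidates, n_modes). Iteratively drop
    the candidate DOF with the smallest EI coefficient, the corresponding
    diagonal entry of Phi (Phi^T Phi)^{-1} Phi^T."""
    keep = list(range(Phi.shape[0]))
    while len(keep) > n_sensors:
        P = Phi[keep]
        E = np.einsum('ij,ij->i', P @ np.linalg.inv(P.T @ P), P)  # EI coefficients
        keep.pop(int(np.argmin(E)))     # least independent contribution
    return sorted(keep)
```

    A well-known property, echoed in the abstract's observation about constant row sums, is that the EI coefficients always sum to the number of target modes (the trace of a projection matrix equals its rank).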

  14. A New Look at Rainfall Fluctuations and Scaling Properties of Spatial Rainfall Using Orthogonal Wavelets.

    NASA Astrophysics Data System (ADS)

    Kumar, Praveen; Foufoula-Georgiou, Efi

    1993-02-01

    It has been observed that the finite-dimensional distribution functions of rainfall cannot obey simple scaling laws due to rainfall intermittency (mixed distribution with an atom at zero) and the probability of rainfall being an increasing function of area. Although rainfall fluctuations do not suffer these limitations, it is interesting to note that very few attempts have been made to study them in terms of their self-similarity characteristics. This is due to the lack of unambiguous definition of fluctuations in multidimensions. This paper shows that wavelet transforms offer a convenient and consistent method for the decomposition of inhomogeneous and anisotropic rainfall fields in two dimensions and that the components of this decomposition can be looked at as fluctuations of the rainfall field. It is also shown that under some mild assumptions, the component fields can be treated as homogeneous and thus are amenable to second-order analysis, which can provide useful insight into the nature of the process. The fact that wavelet transforms are a space-scale method also provides a convenient tool to study scaling characteristics of the process. Orthogonal wavelets are used, and these properties are investigated for a squall-line storm to study the presence of self-similarity.

  15. A new look at rainfall fluctuations and scaling properties of spatial rainfall using orthogonal wavelets

    NASA Technical Reports Server (NTRS)

    Kumar, Praveen; Foufoula-Georgiou, Efi

    1993-01-01

    It has been observed that the finite-dimensional distribution functions of rainfall cannot obey simple scaling laws due to rainfall intermittency (mixed distribution with an atom at zero) and the probability of rainfall being an increasing function of area. Although rainfall fluctuations do not suffer these limitations, it is interesting to note that very few attempts have been made to study them in terms of their self-similarity characteristics. This is due to the lack of unambiguous definition of fluctuations in multidimensions. This paper shows that wavelet transforms offer a convenient and consistent method for the decomposition of inhomogeneous and anisotropic rainfall fields in two dimensions and that the components of this decomposition can be looked at as fluctuations of the rainfall field. It is also shown that under some mild assumptions, the component fields can be treated as homogeneous and thus are amenable to second-order analysis, which can provide useful insight into the nature of the process. The fact that wavelet transforms are a space-scale method also provides a convenient tool to study scaling characteristics of the process. Orthogonal wavelets are used, and these properties are investigated for a squall-line storm to study the presence of self-similarity.
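The idea of treating the detail coefficients of a two-dimensional orthogonal wavelet transform as rainfall "fluctuations" can be illustrated with the simplest orthogonal wavelet, the Haar wavelet. The papers above use more general orthogonal wavelets, so this is only a schematic sketch with a made-up field.

```python
import numpy as np

def haar2d(field):
    """One level of the 2D orthogonal Haar transform: returns the smooth
    approximation plus the three fluctuation (detail) components."""
    tl = field[0::2, 0::2]; tr = field[0::2, 1::2]
    bl = field[1::2, 0::2]; br = field[1::2, 1::2]
    approx = (tl + tr + bl + br) / 2.0
    horiz  = (tl - tr + bl - br) / 2.0   # fluctuations across columns
    vert   = (tl + tr - bl - br) / 2.0   # fluctuations across rows
    diag   = (tl - tr - bl + br) / 2.0
    return approx, horiz, vert, diag
```

Orthogonality of the transform means the energy of the field is exactly preserved across the four components, which is what makes second-order analysis of the fluctuation fields well defined.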

  16. Visual analysis of variance: a tool for quantitative assessment of fMRI data processing and analysis.

    PubMed

    McNamee, R L; Eddy, W F

    2001-12-01

    Analysis of variance (ANOVA) is widely used for the study of experimental data. Here, the reach of this tool is extended to cover the preprocessing of functional magnetic resonance imaging (fMRI) data. This technique, termed visual ANOVA (VANOVA), provides both numerical and pictorial information to aid the user in understanding the effects of various parts of the data analysis. Unlike a formal ANOVA, this method does not depend on the mathematics of orthogonal projections or strictly additive decompositions. An illustrative example is presented and the application of the method to a large number of fMRI experiments is discussed. Copyright 2001 Wiley-Liss, Inc.

  17. Superpartner mass measurement technique using 1D orthogonal decompositions of the Cambridge transverse mass variable M(T2).

    PubMed

    Konar, Partha; Kong, Kyoungchul; Matchev, Konstantin T; Park, Myeonghun

    2010-07-30

    We propose a new model-independent technique for mass measurements in missing energy events at hadron colliders. We illustrate our method with the most challenging case of a single-step decay chain. We consider inclusive same-sign chargino pair production in supersymmetry, followed by leptonic decays to sneutrinos χ+ χ+ → ℓ+ ℓ'+ ν(ℓ)ν(ℓ') and invisible decays ν(ℓ) → ν(ℓ) χ(1)(0). We introduce two one-dimensional decompositions of the Cambridge MT2 variable: M(T2∥) and M(T2⊥), on the direction of the upstream transverse momentum P→T and the direction orthogonal to it, respectively. We show that the sneutrino mass Mc can be measured directly by minimizing the number of events N(Mc) in which MT2 exceeds a certain threshold, conveniently measured from the end point M(T2⊥)(max) (Mc).

  18. Superpartner Mass Measurement Technique using 1D Orthogonal Decompositions of the Cambridge Transverse Mass Variable MT2

    NASA Astrophysics Data System (ADS)

    Konar, Partha; Kong, Kyoungchul; Matchev, Konstantin T.; Park, Myeonghun

    2010-07-01

    We propose a new model-independent technique for mass measurements in missing energy events at hadron colliders. We illustrate our method with the most challenging case of a single-step decay chain. We consider inclusive same-sign chargino pair production in supersymmetry, followed by leptonic decays to sneutrinos χ+χ+→ℓ+ℓ'+ν˜ℓν˜ℓ' and invisible decays ν˜ℓ→νℓχ˜10. We introduce two one-dimensional decompositions of the Cambridge MT2 variable: MT2∥ and MT2⊥, on the direction of the upstream transverse momentum P→T and the direction orthogonal to it, respectively. We show that the sneutrino mass Mc can be measured directly by minimizing the number of events N(M˜c) in which MT2 exceeds a certain threshold, conveniently measured from the end point MT2⊥max⁡(M˜c).
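The parallel/orthogonal splitting underlying MT2∥ and MT2⊥ is an ordinary vector projection onto the upstream transverse momentum direction. The sketch below shows only that projection step; constructing the one-dimensional MT2 variables additionally requires the usual MT2 minimization over invisible-momentum partitions, which is not reproduced here.

```python
import numpy as np

def decompose_along_upstream(p_t, u_t):
    """Split a transverse momentum vector p_t into its component along
    the upstream transverse momentum u_t and the orthogonal remainder."""
    e_par = u_t / np.linalg.norm(u_t)
    p_par = np.dot(p_t, e_par) * e_par
    p_perp = p_t - p_par
    return p_par, p_perp
```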

  19. Hemodynamics of a Patient-Specific Aneurysm Model with Proper Orthogonal Decomposition

    NASA Astrophysics Data System (ADS)

    Han, Suyue; Chang, Gary Han; Modarres-Sadeghi, Yahya

    2017-11-01

Wall shear stress (WSS) and oscillatory shear index (OSI) are two of the most widely studied hemodynamic quantities in cardiovascular systems; both have been shown to elicit biological responses of the arterial wall, which can be used to predict aneurysm development and rupture. In this study, a reduced-order model (ROM) of the hemodynamics of a patient-specific cerebral aneurysm is studied. The snapshot Proper Orthogonal Decomposition (POD) is utilized to construct the reduced-order bases of the flow using a CFD training set with known inflow parameters. It was shown that the area of low WSS and high OSI is correlated to higher POD modes. The resulting ROM can reproduce both WSS and OSI computationally for future parametric studies with significantly less computational cost. Agreement was observed between the WSS and OSI values obtained using direct CFD results and ROM results.
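The snapshot POD step described above reduces, in its basic form, to an SVD of the mean-subtracted snapshot matrix; a minimal sketch with hypothetical array shapes:

```python
import numpy as np

def snapshot_pod(snapshots, r):
    """Snapshot POD: each column of `snapshots` is a flow field at one
    time instant.  Returns the leading r POD modes (orthonormal columns)
    and the full set of singular values."""
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    return U[:, :r], s
```

The singular values quantify how much fluctuation energy each mode captures, which is the basis for statements like "low WSS and high OSI correlate with higher POD modes."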

  20. Canonical Structure and Orthogonality of Forces and Currents in Irreversible Markov Chains

    NASA Astrophysics Data System (ADS)

    Kaiser, Marcus; Jack, Robert L.; Zimmer, Johannes

    2018-03-01

    We discuss a canonical structure that provides a unifying description of dynamical large deviations for irreversible finite state Markov chains (continuous time), Onsager theory, and Macroscopic Fluctuation Theory (MFT). For Markov chains, this theory involves a non-linear relation between probability currents and their conjugate forces. Within this framework, we show how the forces can be split into two components, which are orthogonal to each other, in a generalised sense. This splitting allows a decomposition of the pathwise rate function into three terms, which have physical interpretations in terms of dissipation and convergence to equilibrium. Similar decompositions hold for rate functions at level 2 and level 2.5. These results clarify how bounds on entropy production and fluctuation theorems emerge from the underlying dynamical rules. We discuss how these results for Markov chains are related to similar structures within MFT, which describes hydrodynamic limits of such microscopic models.

  1. Particle image and acoustic Doppler velocimetry analysis of a cross-flow turbine wake

    NASA Astrophysics Data System (ADS)

    Strom, Benjamin; Brunton, Steven; Polagye, Brian

    2017-11-01

    Cross-flow turbines have advantageous properties for converting kinetic energy in wind and water currents to rotational mechanical energy and subsequently electrical power. A thorough understanding of cross-flow turbine wakes aids understanding of rotor flow physics, assists geometric array design, and informs control strategies for individual turbines in arrays. In this work, the wake physics of a scale model cross-flow turbine are investigated experimentally. Three-component velocity measurements are taken downstream of a two-bladed turbine in a recirculating water channel. Time-resolved stereoscopic particle image and acoustic Doppler velocimetry are compared for planes normal to and distributed along the turbine rotational axis. Wake features are described using proper orthogonal decomposition, dynamic mode decomposition, and the finite-time Lyapunov exponent. Consequences for downstream turbine placement are discussed in conjunction with two-turbine array experiments.

  2. Low-order modelling of shallow water equations for sensitivity analysis using proper orthogonal decomposition

    NASA Astrophysics Data System (ADS)

    Zokagoa, Jean-Marie; Soulaïmani, Azzeddine

    2012-06-01

    This article presents a reduced-order model (ROM) of the shallow water equations (SWEs) for use in sensitivity analyses and Monte-Carlo type applications. Since, in the real world, some of the physical parameters and initial conditions embedded in free-surface flow problems are difficult to calibrate accurately in practice, the results from numerical hydraulic models are almost always corrupted with uncertainties. The main objective of this work is to derive a ROM that ensures appreciable accuracy and a considerable acceleration in the calculations so that it can be used as a surrogate model for stochastic and sensitivity analyses in real free-surface flow problems. The ROM is derived using the proper orthogonal decomposition (POD) method coupled with Galerkin projections of the SWEs, which are discretised through a finite-volume method. The main difficulty of deriving an efficient ROM is the treatment of the nonlinearities involved in SWEs. Suitable approximations that provide rapid online computations of the nonlinear terms are proposed. The proposed ROM is applied to the simulation of hypothetical flood flows in the Bordeaux breakwater, a portion of the 'Rivière des Prairies' located near Laval (a suburb of Montreal, Quebec). A series of sensitivity analyses are performed by varying the Manning roughness coefficient and the inflow discharge. The results are satisfactorily compared to those obtained by the full-order finite volume model.
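For a linear full-order system dx/dt = A x, the Galerkin projection onto a POD basis mentioned above is a single matrix congruence; the nonlinear SWE terms require the additional approximations the abstract refers to, which are not shown here.

```python
import numpy as np

def galerkin_rom(A, Phi):
    """Galerkin projection of a linear operator A onto an orthonormal
    POD basis Phi: the full model dx/dt = A x reduces to da/dt = Ar a,
    with the state approximated as x ~ Phi a."""
    return Phi.T @ A @ Phi

# Toy demo: projecting a diagonal operator onto its first two coordinate
# directions keeps exactly the corresponding 2x2 block.
A = np.diag([1.0, 2.0, 3.0, 4.0])
Phi = np.eye(4)[:, :2]
Ar = galerkin_rom(A, Phi)
```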

  3. Stiffness of wobbling mass models analysed by a smooth orthogonal decomposition of the skin movement relative to the underlying bone.

    PubMed

    Dumas, Raphaël; Jacquelin, Eric

    2017-09-06

The so-called soft tissue artefacts and wobbling masses have both been widely studied in biomechanics, however most of the time separately, from either a kinematics or a dynamics point of view. As such, the estimation of the stiffness of the springs connecting the wobbling masses to the rigid-body model of the lower limb, based on the in vivo displacements of the skin relative to the underlying bone, has not been performed yet. For this estimation, the displacements of the skin markers in the bone-embedded coordinate systems are viewed as a proxy for the wobbling mass movement. The present study applied a structural vibration analysis method called smooth orthogonal decomposition to estimate this stiffness from retrospective simultaneous measurements of skin and intra-cortical pin markers during running, walking, cutting and hopping. For the translations about the three axes of the bone-embedded coordinate systems, the estimated stiffness coefficients (i.e. between 2.3 kN/m and 55.5 kN/m) as well as the corresponding forces representing the connection between bone and skin (i.e. up to 400 N) and corresponding frequencies (i.e. in the band 10-30 Hz) were in agreement with the literature. Consistently with the soft tissue artefact descriptions, the estimated stiffness coefficients were found to be subject- and task-specific. Copyright © 2017 Elsevier Ltd. All rights reserved.
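Smooth orthogonal decomposition, as used above, solves a generalized eigenproblem between the covariance of the signals and the covariance of their time derivatives; the smooth orthogonal values then yield frequency estimates. A schematic numpy version (the finite-difference derivative and the frequency relation ω ≈ 1/√λ are the standard textbook choices, not necessarily the authors' exact pipeline):

```python
import numpy as np

def sod(X, dt):
    """Smooth orthogonal decomposition: X holds one displacement time
    series per column.  Solves cov(X) psi = lam cov(dX/dt) psi; the
    smooth orthogonal values lam give frequency estimates 1/sqrt(lam)."""
    V = np.gradient(X, dt, axis=0)              # finite-difference velocities
    Sx = np.cov(X, rowvar=False)
    Sv = np.cov(V, rowvar=False)
    lam, psi = np.linalg.eig(np.linalg.solve(Sv, Sx))
    order = np.argsort(lam.real)[::-1]          # largest lam = lowest frequency
    return lam.real[order], psi[:, order].real
```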

  4. Aeroelastic System Development Using Proper Orthogonal Decomposition and Volterra Theory

    NASA Technical Reports Server (NTRS)

    Lucia, David J.; Beran, Philip S.; Silva, Walter A.

    2003-01-01

This research combines Volterra theory and proper orthogonal decomposition (POD) into a hybrid methodology for reduced-order modeling of aeroelastic systems. The outcome of the method is a set of linear ordinary differential equations (ODEs) describing the modal amplitudes associated with both the structural modes and the POD basis functions for the fluid. For this research, the structural modes are sine waves of varying frequency, and the Volterra-POD approach is applied to the fluid dynamics equations. The structural modes are treated as forcing terms which are impulsed as part of the fluid model realization. Using this approach, structural and fluid operators are coupled into a single aeroelastic operator. This coupling converts a free boundary fluid problem into an initial value problem, while preserving the parameter (or parameters) of interest for sensitivity analysis. The approach is applied to an elastic panel in supersonic cross flow. The hybrid Volterra-POD approach provides a low-order fluid model in state-space form. The linear fluid model is tightly coupled with a nonlinear panel model using an implicit integration scheme. The resulting aeroelastic model provides correct limit-cycle oscillation prediction over a wide range of panel dynamic pressure values. Time integration of the reduced-order aeroelastic model is four orders of magnitude faster than the high-order solution procedure developed for this research using traditional fluid and structural solvers.

  5. Koopman Mode Decomposition Methods in Dynamic Stall: Reduced Order Modeling and Control

    DTIC Science & Technology

    2015-11-10

…the flow phenomena by separating them into individual modes. The technique of Proper Orthogonal Decomposition (POD), see [Holmes: 1998], is a popular… sampled values h(k), k = 0, …, 2M−1, of the exponential sum… 1. Solve the following linear system… 2. Compute all zeros z_j ∈ D, j = 1, …, M, of the Prony polynomial, i.e., calculate all eigenvalues of the associated companion matrix, and form f_j = log z_j for j = 1, …, M, where log is the…

  6. Mode Analyses of Gyrokinetic Simulations of Plasma Microturbulence

    NASA Astrophysics Data System (ADS)

    Hatch, David R.

    This thesis presents analysis of the excitation and role of damped modes in gyrokinetic simulations of plasma microturbulence. In order to address this question, mode decompositions are used to analyze gyrokinetic simulation data. A mode decomposition can be constructed by projecting a nonlinearly evolved gyrokinetic distribution function onto a set of linear eigenmodes, or alternatively by constructing a proper orthogonal decomposition of the distribution function. POD decompositions are used to examine the role of damped modes in saturating ion temperature gradient driven turbulence. In order to identify the contribution of different modes to the energy sources and sinks, numerical diagnostics for a gyrokinetic energy quantity were developed for the GENE code. The use of these energy diagnostics in conjunction with POD mode decompositions demonstrates that ITG turbulence saturates largely through dissipation by damped modes at the same perpendicular spatial scales as those of the driving instabilities. This defines a picture of turbulent saturation that is very different from both traditional hydrodynamic scenarios and also many common theories for the saturation of plasma turbulence. POD mode decompositions are also used to examine the role of subdominant modes in causing magnetic stochasticity in electromagnetic gyrokinetic simulations. It is shown that the magnetic stochasticity, which appears to be ubiquitous in electromagnetic microturbulence, is caused largely by subdominant modes with tearing parity. The application of higher-order singular value decomposition (HOSVD) to the full distribution function from gyrokinetic simulations is presented. This is an effort to demonstrate the ability to characterize and extract insight from a very large, complex, and high-dimensional data-set - the 5-D (plus time) gyrokinetic distribution function.

  7. Linear dynamical modes as new variables for data-driven ENSO forecast

    NASA Astrophysics Data System (ADS)

    Gavrilov, Andrey; Seleznev, Aleksei; Mukhin, Dmitry; Loskutov, Evgeny; Feigin, Alexander; Kurths, Juergen

    2018-05-01

    A new data-driven model for analysis and prediction of spatially distributed time series is proposed. The model is based on a linear dynamical mode (LDM) decomposition of the observed data which is derived from a recently developed nonlinear dimensionality reduction approach. The key point of this approach is its ability to take into account simple dynamical properties of the observed system by means of revealing the system's dominant time scales. The LDMs are used as new variables for empirical construction of a nonlinear stochastic evolution operator. The method is applied to the sea surface temperature anomaly field in the tropical belt where the El Nino Southern Oscillation (ENSO) is the main mode of variability. The advantage of LDMs versus traditionally used empirical orthogonal function decomposition is demonstrated for this data. Specifically, it is shown that the new model has a competitive ENSO forecast skill in comparison with the other existing ENSO models.

  8. Data-driven Analysis and Prediction of Arctic Sea Ice

    NASA Astrophysics Data System (ADS)

    Kondrashov, D. A.; Chekroun, M.; Ghil, M.; Yuan, X.; Ting, M.

    2015-12-01

We present results of data-driven predictive analyses of sea ice over the main Arctic regions. Our approach relies on the Multilayer Stochastic Modeling (MSM) framework of Kondrashov, Chekroun and Ghil [Physica D, 2015] and it leads to prognostic models of sea ice concentration (SIC) anomalies on seasonal time scales. This approach is applied to monthly time series of leading principal components from the multivariate Empirical Orthogonal Function decomposition of SIC and selected climate variables over the Arctic. We evaluate the predictive skill of MSM models by performing retrospective "no-look-ahead" forecasts for up to 6 months ahead. It will be shown in particular that the memory effects included in our non-Markovian linear MSM models improve predictions of large-amplitude SIC anomalies in certain Arctic regions. Further improvements allowed by the MSM framework will adopt a nonlinear formulation, as well as alternative data-adaptive decompositions.
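The Empirical Orthogonal Function decomposition feeding the MSM models above is, computationally, an SVD of the space-time anomaly matrix; a minimal sketch with hypothetical dimensions:

```python
import numpy as np

def eof(field, n_modes):
    """EOF decomposition of a (time x space) field: returns spatial EOF
    patterns, principal-component time series, and the fraction of
    variance explained by each retained mode."""
    anomalies = field - field.mean(axis=0)      # remove the time mean
    U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)
    pcs = U[:, :n_modes] * s[:n_modes]          # PC time series
    eofs = Vt[:n_modes]                         # spatial patterns
    var_frac = s[:n_modes]**2 / np.sum(s**2)
    return eofs, pcs, var_frac
```

The leading PC time series are then the low-dimensional variables on which a stochastic evolution model such as MSM can be trained.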

  9. Rotating Wheel Wake

    NASA Astrophysics Data System (ADS)

    Lombard, Jean-Eloi; Xu, Hui; Moxey, Dave; Sherwin, Spencer

    2016-11-01

For open-wheel race cars, such as Formula One or IndyCar, the wheels are responsible for 40% of the total drag. For road cars, drag associated with the wheels and under-carriage can represent 20-60% of total drag at highway cruise speeds. Experimental observations have reported two, three or more pairs of counter-rotating vortices, the relative strength of which remains an open question. Here, the near wake of an unsteady rotating wheel is investigated numerically by means of direct numerical simulation at ReD = 400-1000 to further the understanding of the bifurcations the flow undergoes as the Reynolds number is increased. Direct numerical simulation is performed using Nektar++, and the results are compared to those of Pirozzoli et al. (2012). Both proper orthogonal decomposition and dynamic mode decomposition, as well as spectral analysis, are leveraged to gain insight into the bifurcations and subsequent topological differences of the wake as the Reynolds number is increased.

  10. Model reduction of dynamical systems by proper orthogonal decomposition: Error bounds and comparison of methods using snapshots from the solution and the time derivatives [Proper orthogonal decomposition model reduction of dynamical systems: error bounds and comparison of methods using snapshots from the solution and the time derivatives

    DOE PAGES

    Kostova-Vassilevska, Tanya; Oxberry, Geoffrey M.

    2017-09-17

In this study, we consider two proper orthogonal decomposition (POD) methods for dimension reduction of dynamical systems. The first method (M1) uses only time snapshots of the solution, while the second method (M2) augments the snapshot set with time-derivative snapshots. The goal of the paper is to analyze and compare the approximation errors resulting from the two methods by using error bounds. We derive several new bounds of the error from POD model reduction by each of the two methods. The new error bounds involve a multiplicative factor depending on the time steps between the snapshots. For method M1 the factor depends on the second power of the time step, while for method M2 the dependence is on the fourth power of the time step, suggesting that method M2 can be more accurate for small between-snapshot intervals. However, three other factors also affect the size of the error bounds. These include (i) the norm of the second (for M1) and fourth (for M2) derivatives; (ii) the first neglected singular value; and (iii) the spectral properties of the projection of the system's Jacobian in the reduced space. Because of the interplay of these factors, neither method is more accurate than the other in all cases. Finally, we present numerical examples demonstrating that when the number of collected snapshots is small and the first neglected singular value has a value of zero, method M2 results in a better approximation.

  11. Model reduction of dynamical systems by proper orthogonal decomposition: Error bounds and comparison of methods using snapshots from the solution and the time derivatives [Proper orthogonal decomposition model reduction of dynamical systems: error bounds and comparison of methods using snapshots from the solution and the time derivatives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kostova-Vassilevska, Tanya; Oxberry, Geoffrey M.

In this study, we consider two proper orthogonal decomposition (POD) methods for dimension reduction of dynamical systems. The first method (M1) uses only time snapshots of the solution, while the second method (M2) augments the snapshot set with time-derivative snapshots. The goal of the paper is to analyze and compare the approximation errors resulting from the two methods by using error bounds. We derive several new bounds of the error from POD model reduction by each of the two methods. The new error bounds involve a multiplicative factor depending on the time steps between the snapshots. For method M1 the factor depends on the second power of the time step, while for method M2 the dependence is on the fourth power of the time step, suggesting that method M2 can be more accurate for small between-snapshot intervals. However, three other factors also affect the size of the error bounds. These include (i) the norm of the second (for M1) and fourth (for M2) derivatives; (ii) the first neglected singular value; and (iii) the spectral properties of the projection of the system's Jacobian in the reduced space. Because of the interplay of these factors, neither method is more accurate than the other in all cases. Finally, we present numerical examples demonstrating that when the number of collected snapshots is small and the first neglected singular value has a value of zero, method M2 results in a better approximation.
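The two snapshot strategies compared above differ only in the snapshot set fed to the SVD. In the sketch below the derivative snapshots are approximated by finite differences purely for illustration; in the paper they would come from evaluating the model's right-hand side.

```python
import numpy as np

def pod_basis(S, r):
    """Leading r left singular vectors of the snapshot matrix S."""
    U, _, _ = np.linalg.svd(S, full_matrices=False)
    return U[:, :r]

def m1_m2_bases(snapshots, dt, r):
    """M1: POD basis from solution snapshots only (columns of `snapshots`).
    M2: snapshot set augmented with time-derivative snapshots."""
    derivs = np.gradient(snapshots, dt, axis=1)   # illustrative stand-in
    return pod_basis(snapshots, r), pod_basis(np.hstack([snapshots, derivs]), r)
```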

  12. FACETS: multi-faceted functional decomposition of protein interaction networks

    PubMed Central

    Seah, Boon-Siew; Bhowmick, Sourav S.; Forbes Dewey, C.

    2012-01-01

    Motivation: The availability of large-scale curated protein interaction datasets has given rise to the opportunity to investigate higher level organization and modularity within the protein–protein interaction (PPI) network using graph theoretic analysis. Despite the recent progress, systems level analysis of high-throughput PPIs remains a daunting task because of the amount of data they present. In this article, we propose a novel PPI network decomposition algorithm called FACETS in order to make sense of the deluge of interaction data using Gene Ontology (GO) annotations. FACETS finds not just a single functional decomposition of the PPI network, but a multi-faceted atlas of functional decompositions that portray alternative perspectives of the functional landscape of the underlying PPI network. Each facet in the atlas represents a distinct interpretation of how the network can be functionally decomposed and organized. Our algorithm maximizes interpretative value of the atlas by optimizing inter-facet orthogonality and intra-facet cluster modularity. Results: We tested our algorithm on the global networks from IntAct, and compared it with gold standard datasets from MIPS and KEGG. We demonstrated the performance of FACETS. We also performed a case study that illustrates the utility of our approach. Contact: seah0097@ntu.edu.sg or assourav@ntu.edu.sg Supplementary information: Supplementary data are available at the Bioinformatics online. Availability: Our software is available freely for non-commercial purposes from: http://www.cais.ntu.edu.sg/∼assourav/Facets/ PMID:22908217

  13. Negative values of quasidistributions and quantum wave and number statistics

    NASA Astrophysics Data System (ADS)

    Peřina, J.; Křepelka, J.

    2018-04-01

    We consider nonclassical wave and number quantum statistics, and perform a decomposition of quasidistributions for nonlinear optical down-conversion processes using Bessel functions. We show that negative values of the quasidistribution do not directly represent probabilities; however, they directly influence measurable number statistics. Negative terms in the decomposition related to the nonclassical behavior with negative amplitudes of probability can be interpreted as positive amplitudes of probability in the negative orthogonal Bessel basis, whereas positive amplitudes of probability in the positive basis describe classical cases. However, probabilities are positive in all cases, including negative values of quasidistributions. Negative and positive contributions of decompositions to quasidistributions are estimated. The approach can be adapted to quantum coherence functions.

  14. Evaluation of the Use of Second Generation Wavelets in the Coherent Vortex Simulation Approach

    NASA Technical Reports Server (NTRS)

    Goldstein, D. E.; Vasilyev, O. V.; Wray, A. A.; Rogallo, R. S.

    2000-01-01

The objective of this study is to investigate the use of the second generation bi-orthogonal wavelet transform for the field decomposition in the Coherent Vortex Simulation of turbulent flows. The performances of the bi-orthogonal second generation wavelet transform and the orthogonal wavelet transform using Daubechies wavelets with the same number of vanishing moments are compared in a priori tests using a spectral direct numerical simulation (DNS) database of isotropic turbulence fields: 256^3 and 512^3 DNS of forced homogeneous turbulence (Re_λ = 168) and 256^3 and 512^3 DNS of decaying homogeneous turbulence (Re_λ = 55). It is found that bi-orthogonal second generation wavelets can be used for coherent vortex extraction. The results of a priori tests indicate that second generation wavelets have better compression and the residual field is closer to Gaussian. However, it was found that the use of second generation wavelets results in an integral length scale for the incoherent part that is larger than that derived from orthogonal wavelets. A way of dealing with this difficulty is suggested.

  15. Asymmetric color image encryption based on singular value decomposition

    NASA Astrophysics Data System (ADS)

    Yao, Lili; Yuan, Caojin; Qiang, Junjie; Feng, Shaotong; Nie, Shouping

    2017-02-01

A novel asymmetric color image encryption approach by using singular value decomposition (SVD) is proposed. The original color image is encrypted into a ciphertext shown as an indexed image by using the proposed method. The red, green and blue components of the color image are subsequently encoded into a complex function which is then separated into U, S and V parts by SVD. The data matrix of the ciphertext is obtained by multiplying the orthogonal matrices U and V while implementing phase-truncation. Diagonal entries of the three diagonal matrices of the SVD results are extracted, scrambled and combined to construct the colormap of the ciphertext. Thus, the encrypted indexed image covers less space than the original image. For decryption, the original color image cannot be recovered without the private keys, which are obtained from phase-truncation and the orthogonality of V. Computer simulations are presented to evaluate the performance of the proposed algorithm. We also analyze the security of the proposed system.
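The phase-truncation step applied to the product of the orthogonal factors can be sketched as follows; this illustrates only that single step of the scheme, with a random complex field standing in for the encoded RGB data, and is not the authors' full algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
# Complex field standing in for the encoded RGB data (illustrative only)
F = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))

U, s, Vh = np.linalg.svd(F)
product = U @ Vh                  # unitary part, carries the phase
cipher = np.abs(product)          # phase truncation discards the phase
phase_key = np.angle(product)     # retained as private key material

# With the phase key the unitary part is restored exactly; the singular
# values s act as the remaining key material needed to rebuild F.
restored = cipher * np.exp(1j * phase_key)
```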

  16. Development of Boundary Condition Independent Reduced Order Thermal Models using Proper Orthogonal Decomposition

    NASA Astrophysics Data System (ADS)

    Raghupathy, Arun; Ghia, Karman; Ghia, Urmila

    2008-11-01

Compact Thermal Models (CTMs) to represent IC packages have traditionally been developed using the DELPHI-based (DEvelopment of Libraries of PHysical models for an Integrated design) methodology. The drawbacks of this method are presented, and an alternative method is proposed. A reduced-order model that provides the complete thermal information accurately with less computational resources can be effectively used in system-level simulations. Proper Orthogonal Decomposition (POD), a statistical method, can be used to reduce the number of degrees of freedom or variables of the computations for such a problem. POD along with the Galerkin projection allows us to create reduced-order models that reproduce the characteristics of the system with a considerable reduction in computational resources while maintaining a high level of accuracy. The goal of this work is to show that this method can be applied to obtain a boundary condition independent reduced-order thermal model for complex components. The methodology is applied to the 1D transient heat equation.

  17. A fast identification algorithm for Box-Cox transformation based radial basis function neural network.

    PubMed

    Hong, Xia

    2006-07-01

    In this letter, a Box-Cox transformation-based radial basis function (RBF) neural network is introduced using the RBF neural network to represent the transformed system output. Initially a fixed and moderate sized RBF model base is derived based on a rank revealing orthogonal matrix triangularization (QR decomposition). Then a new fast identification algorithm is introduced using Gauss-Newton algorithm to derive the required Box-Cox transformation, based on a maximum likelihood estimator. The main contribution of this letter is to explore the special structure of the proposed RBF neural network for computational efficiency by utilizing the inverse of matrix block decomposition lemma. Finally, the Box-Cox transformation-based RBF neural network, with good generalization and sparsity, is identified based on the derived optimal Box-Cox transformation and a D-optimality-based orthogonal forward regression algorithm. The proposed algorithm and its efficacy are demonstrated with an illustrative example in comparison with support vector machine regression.
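Deriving a moderate-sized model base via rank-revealing QR, as described above, amounts to column-pivoted QR on the candidate design matrix; a minimal sketch (the function name and toy matrix are ours):

```python
import numpy as np
from scipy.linalg import qr

def select_centers(design_matrix, n_centers):
    """Rank-revealing QR (column pivoting) on the candidate RBF design
    matrix: the first pivot indices point at the most linearly
    independent candidate centers."""
    _, _, piv = qr(design_matrix, pivoting=True)
    return piv[:n_centers]
```

Column pivoting orders the candidates by residual norm, so near-duplicate basis functions are pushed to the end of the pivot sequence.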

  18. Lumley's PODT definition of large eddies and a trio of numerical procedures. [Proper Orthogonal Decomposition Theorem

    NASA Technical Reports Server (NTRS)

    Payne, Fred R.

    1992-01-01

    Lumley's 1967 Moscow paper provided, for the first time, a completely rational definition of the physically useful term 'large eddy', popular for a half-century. The numerical procedures based upon his results are: (1) PODT (Proper Orthogonal Decomposition Theorem), which extracts the large-eddy structure of stochastic processes from physical or computer-simulated two-point covariances, and (2) LEIM (Large-Eddy Interaction Model), a predictive scheme for the dynamical large eddies based upon higher-order turbulence modeling. Lumley's earlier work (1964) forms the basis for the final member of the triad of numerical procedures: this predicts the global neutral modes of turbulence, which show surprising agreement with both structural eigenmodes and those obtained from the dynamical equations. The ultimate goal of improved engineering design tools for turbulence may be near at hand, partly because the power and storage of 'supermicrocomputer' workstations have finally become adequate for the demanding numerics of these procedures.
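
    The core of the PODT, extracting dominant eddy structure as eigenfunctions of a two-point covariance, can be sketched on synthetic one-dimensional data (the ensemble below is an invented stand-in for measured or simulated velocity fields):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 2 * np.pi, 64)

# Synthetic ensemble: one dominant "large eddy" shape plus weaker noise.
eddy = np.sin(x)
realizations = np.array([rng.normal(0, 1) * eddy
                         + 0.1 * rng.normal(0, 1, x.size)
                         for _ in range(500)])

# Two-point covariance R(x, x') estimated from the ensemble.
R = realizations.T @ realizations / len(realizations)

# PODT: eigenfunctions of R, ordered by energy (eigenvalue).
evals, evecs = np.linalg.eigh(R)
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]
mode1 = evecs[:, 0]                        # dominant large-eddy structure
```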

  19. Characteristic-eddy decomposition of turbulence in a channel

    NASA Technical Reports Server (NTRS)

    Moin, Parviz; Moser, Robert D.

    1989-01-01

    Lumley's proper orthogonal decomposition technique is applied to the turbulent flow in a channel. Coherent structures are extracted by decomposing the velocity field into characteristic eddies with random coefficients. A generalization of the shot-noise expansion is used to determine the characteristic eddies in homogeneous spatial directions. Three different techniques are used to determine the phases of the Fourier coefficients in the expansion: (1) one based on the bispectrum, (2) a spatial compactness requirement, and (3) a functional continuity argument. Similar results are found from each of these techniques.

  20. Fast multidimensional ensemble empirical mode decomposition for the analysis of big spatio-temporal datasets.

    PubMed

    Wu, Zhaohua; Feng, Jiaxin; Qiao, Fangli; Tan, Zhe-Min

    2016-04-13

    In this big data era, it is more urgent than ever to solve two major issues: (i) fast data transmission methods that can facilitate access to data from non-local sources and (ii) fast and efficient data analysis methods that can reveal the key information from the available data for particular purposes. Although approaches in different fields to address these two questions may differ significantly, the common part must involve data compression techniques and a fast algorithm. This paper introduces the recently developed adaptive and spatio-temporally local analysis method, namely the fast multidimensional ensemble empirical mode decomposition (MEEMD), for the analysis of a large spatio-temporal dataset. The original MEEMD uses ensemble empirical mode decomposition to decompose time series at each spatial grid point and then pieces together the temporal-spatial evolution of climate variability and change on naturally separated timescales, which is computationally expensive. By taking advantage of the high efficiency of the expression using principal component analysis/empirical orthogonal function analysis for spatio-temporally coherent data, we design a lossy compression method for climate data to facilitate its non-local transmission. We also explain the basic principles behind the fast MEEMD, which decomposes principal components instead of the original grid-wise time series to speed up the computation. Using a typical climate dataset as an example, we demonstrate that our newly designed methods can (i) compress data by one to two orders of magnitude and (ii) speed up the MEEMD algorithm by one to two orders of magnitude. © 2016 The Authors.
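
    The compression step alone can be sketched as follows; the synthetic field, grid sizes, and retained-component count are assumptions, and the subsequent EMD of the principal components (the actual fast MEEMD step) is omitted:

```python
import numpy as np

rng = np.random.default_rng(2)
nt, nx = 400, 900            # time steps x spatial grid points (assumed sizes)

# Synthetic climate-like field: a few coherent spatial patterns
# evolving in time, plus weak noise.
t = np.linspace(0, 20, nt)
patterns = rng.normal(0, 1, (3, nx))
amps = np.stack([np.sin(t), np.cos(0.5 * t), np.sin(0.2 * t)])
field = amps.T @ patterns + 0.05 * rng.normal(0, 1, (nt, nx))

# EOF/PCA compression: keep the leading k principal components.
k = 3
U, s, Vt = np.linalg.svd(field, full_matrices=False)
pcs, eofs = U[:, :k] * s[:k], Vt[:k]     # store/transmit these instead of `field`

stored = pcs.size + eofs.size
ratio = field.size / stored               # compression factor
recon = pcs @ eofs                        # lossy reconstruction at the receiver
```

The fast MEEMD would then run EMD on the k principal-component time series rather than on all nx grid-point series, which is where the order-of-magnitude speed-up comes from.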

  1. Orthogonal decomposition of left ventricular remodeling in myocardial infarction

    PubMed Central

    Zhang, Xingyu; Medrano-Gracia, Pau; Ambale-Venkatesh, Bharath; Bluemke, David A.; Cowan, Brett R; Finn, J. Paul; Kadish, Alan H.; Lee, Daniel C.; Lima, Joao A. C.; Young, Alistair A.; Suinesiaputra, Avan

    2017-01-01

    Abstract Left ventricular size and shape are important for quantifying cardiac remodeling in response to cardiovascular disease. Geometric remodeling indices have been shown to have prognostic value in predicting adverse events in the clinical literature, but these often describe interrelated shape changes. We developed a novel method for deriving orthogonal remodeling components directly from any (moderately independent) set of clinical remodeling indices. Results: Six clinical remodeling indices (end-diastolic volume index, sphericity, relative wall thickness, ejection fraction, apical conicity, and longitudinal shortening) were evaluated using cardiac magnetic resonance images of 300 patients with myocardial infarction, and 1991 asymptomatic subjects, obtained from the Cardiac Atlas Project. Partial least squares (PLS) regression of left ventricular shape models resulted in remodeling components that were optimally associated with each remodeling index. A Gram–Schmidt orthogonalization process, by which remodeling components were successively removed from the shape space in the order of shape variance explained, resulted in a set of orthonormal remodeling components. Remodeling scores could then be calculated that quantify the amount of each remodeling component present in each case. A one-factor PLS regression led to more decoupling between scores from the different remodeling components across the entire cohort, and zero correlation between clinical indices and subsequent scores. Conclusions: The PLS orthogonal remodeling components had similar power to describe differences between myocardial infarction patients and asymptomatic subjects as principal component analysis, but were better associated with well-understood clinical indices of cardiac remodeling. The data and analyses are available from www.cardiacatlas.org. PMID:28327972

  2. Bi-orthogonality relations for fluid-filled elastic cylindrical shells: Theory, generalisations and application to construct tailored Green's matrices

    NASA Astrophysics Data System (ADS)

    Ledet, Lasse S.; Sorokin, Sergey V.

    2018-03-01

    The paper addresses the classical problem of time-harmonic forced vibrations of a fluid-filled cylindrical shell considered as a multi-modal waveguide carrying infinitely many waves. The forced vibration problem is solved using tailored Green's matrices formulated in terms of eigenfunction expansions. The formulation of Green's matrix is based on special (bi-)orthogonality relations between the eigenfunctions, which are derived here for the fluid-filled shell. Further, the relations are generalised to any multi-modal symmetric waveguide. Using the orthogonality relations the transcendental equation system is converted into algebraic modal equations that can be solved analytically. Upon formulation of Green's matrices the solution space is studied in terms of completeness and convergence (uniformity and rate). Special features and findings exposed only through this modal decomposition method are elaborated and the physical interpretation of the bi-orthogonality relation is discussed in relation to the total energy flow which leads to derivation of simplified equations for the energy flow components.

  3. Fast PSP measurements of wall-pressure fluctuation in low-speed flows: improvements using proper orthogonal decomposition

    NASA Astrophysics Data System (ADS)

    Peng, Di; Wang, Shaofei; Liu, Yingzheng

    2016-04-01

    Fast pressure-sensitive paint (PSP) is very useful in flow diagnostics due to its fast response and high spatial resolution, but its applications in low-speed flows are usually challenging due to limitations of the paint's pressure sensitivity and the capability of high-speed imagers. The poor signal-to-noise ratio in low-speed cases makes it very difficult to extract useful information from the PSP data. In this study, unsteady PSP measurements were made on a flat plate behind a cylinder in a low-speed wind tunnel (flow speed from 10 to 17 m/s). Pressure fluctuations (ΔP) on the plate caused by vortex-plate interaction were recorded continuously by fast PSP (using a high-speed camera) and a microphone array. Power spectra of the pressure fluctuations and the phase-averaged ΔP obtained from PSP and the microphones were compared, showing good agreement in general. Proper orthogonal decomposition (POD) was used to reduce noise in the PSP data and extract the dominant pressure features. The PSP results reconstructed from selected POD modes were then compared to the pressure data obtained simultaneously with the microphone sensors. Based on the comparison of both instantaneous ΔP and the root-mean-square of ΔP, it was confirmed that POD analysis could effectively remove noise while preserving the instantaneous pressure information with good fidelity, especially for flows with strong periodicity. This technique extends the application range of fast PSP and can be a powerful tool for fundamental fluid mechanics research at low speed.
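
    The POD denoising idea, reconstructing the field from only the dominant modes of a truncated SVD, can be sketched on a synthetic low-SNR "movie" (all signal shapes and noise levels below are invented):

```python
import numpy as np

rng = np.random.default_rng(3)
nt, npix = 300, 400

# Synthetic "PSP movie": one periodic pressure pattern buried in noise.
t = np.arange(nt)
pattern = np.sin(np.linspace(0, 4 * np.pi, npix))
clean = np.outer(np.sin(2 * np.pi * t / 50), pattern)
noisy = clean + 1.0 * rng.normal(0, 1, clean.shape)   # poor SNR, as at low speed

# POD: keep only the dominant mode(s) to filter the noise.
U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
k = 1
denoised = U[:, :k] * s[:k] @ Vt[:k]
```

Because the periodic pressure signal concentrates in a few energetic modes while the noise spreads across all of them, truncation removes most of the noise energy.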

  4. Quantification of frequency-components contributions to the discharge of a karst spring

    NASA Astrophysics Data System (ADS)

    Taver, V.; Johannet, A.; Vinches, M.; Borrell, V.; Pistre, S.; Bertin, D.

    2013-12-01

    Karst aquifers represent important underground resources, supplying water to about 25% of the population. Nevertheless, such systems are currently underexploited because of their heterogeneity and complexity, which make field work and physical measurements expensive and frequently not representative of the whole aquifer. The systemic paradigm thus appears as a complementary approach to study and model karst aquifers in the framework of non-linear system analysis. The system's input and output signals, namely rainfall and discharge, contain information about the function performed by the physical process. Therefore, knowledge about the karst system can be improved using time series analysis, for example Fourier analysis or orthogonal decomposition [1]. Another level of analysis consists in building non-linear models to identify the rainfall/discharge relation, component by component [2]. In this context, this communication proposes to use neural networks, first to model the rainfall-runoff relation using frequency components, and second to analyze the models using the KnoX method [3] in order to quantify the importance of each component. Two different neural models were designed: (i) the recurrent model, which implements a non-linear recurrent model fed by rainfall, ETP and previously estimated discharge, and (ii) the feed-forward model, which implements a non-linear static model fed by rainfall, ETP and previously observed discharges. The first model is known to better represent the rainfall-runoff relation; the second to better predict the discharge based on previous discharge observations. The KnoX method is based on a variable selection method that simply considers the values of the parameters after training, without taking into account the non-linear behavior of the model in operation. An improvement of the KnoX method is thus proposed in order to overcome this inadequacy.
The proposed method thus leads to both a ranking and a quantification of the influence of the input variables, here the frequency components, on the output signal. Applied to the Lez karst aquifer, the combination of frequency decomposition and knowledge extraction improves understanding of the hydrological behavior. Both models and both extraction methods were applied and assessed using a fictitious reference model. A discussion is proposed in order to analyze the efficiency of the methods compared to in situ measurements and tracing. [1] D. Labat et al. 'Rainfall-runoff relations for karst springs. Part II: continuous wavelet and discrete orthogonal multiresolution.' Journal of Hydrology, Vol. 238, 2000, pp. 149-178. [2] A. Johannet et al. 'Prediction of Lez Spring Discharge (Southern France) by Neural Networks using Orthogonal Wavelet Decomposition.' IJCNN Proceedings, Brisbane, 2012. [3] L. Kong A Siou et al. 'Modélisation hydrodynamique des karsts par réseaux de neurones : Comment dépasser la boîte noire. (Karst hydrodynamic modelling using artificial neural networks: how to surpass the black box?)' Proceedings of the 9th Conference on Limestone Hydrogeology, 2011, Besançon, France.

  5. On Certain Theoretical Developments Underlying the Hilbert-Huang Transform

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Blank, Karin; Flatley, Thomas; Huang, Norden E.; Petrick, David; Hestness, Phyllis

    2006-01-01

    One of the main traditional tools used in scientific and engineering data spectral analysis is the Fourier Integral Transform and its high-performance digital equivalent - the Fast Fourier Transform (FFT). Both carry strong a priori assumptions about the source data, such as being linear and stationary, and satisfying the Dirichlet conditions. A recent development at the National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC), known as the Hilbert-Huang Transform (HHT), proposes a novel approach to the solution for the nonlinear class of spectral analysis problems. Using a posteriori data processing based on the Empirical Mode Decomposition (EMD) sifting process (algorithm), followed by the normalized Hilbert Transform of the decomposed data, the HHT allows spectral analysis of nonlinear and nonstationary data. The EMD sifting process results in a non-constrained decomposition of a source real-valued data vector into a finite set of Intrinsic Mode Functions (IMF). These functions form a nearly orthogonal, adaptive basis derived from the data. The IMFs can be further analyzed for spectral content by using the classical Hilbert Transform. A new engineering spectral analysis tool using HHT has been developed at NASA GSFC, the HHT Data Processing System (HHT-DPS). As the HHT-DPS has been successfully used and commercialized, new applications pose additional questions about the theoretical basis behind the HHT and EMD algorithms. Why is the fastest-changing component of a composite signal sifted out first in the EMD sifting process? Why does the EMD sifting process seemingly converge, and why does it converge rapidly? Does an IMF have a distinctive structure? Why are the IMFs nearly orthogonal? We address these questions and develop the initial theoretical background for the HHT. 
This will contribute to the development of new HHT processing options, such as real-time and 2-D processing using Field Programmable Gate Array (FPGA) computational resources,
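
    The sifting step that the questions above concern can be sketched in a single pass, using cubic-spline envelopes; the two-tone test signal is an assumption. It illustrates why the fastest-changing component emerges first: the mean of the extremal envelopes tracks the slower content, so subtracting it leaves the fast oscillation.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_once(x, t):
    """One EMD sifting pass: subtract the mean of the upper and lower
    envelopes (cubic splines through local maxima/minima)."""
    imax = argrelextrema(x, np.greater)[0]
    imin = argrelextrema(x, np.less)[0]
    upper = CubicSpline(t[imax], x[imax])(t)
    lower = CubicSpline(t[imin], x[imin])(t)
    return x - (upper + lower) / 2

t = np.linspace(0, 1, 2000)
fast = np.sin(2 * np.pi * 40 * t)          # fastest oscillation
slow = 0.5 * np.sin(2 * np.pi * 3 * t)
imf_candidate = sift_once(fast + slow, t)  # first sift isolates ~the fast tone
```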

  6. A Framework for Detecting Glaucomatous Progression in the Optic Nerve Head of an Eye using Proper Orthogonal Decomposition

    PubMed Central

    Balasubramanian, Madhusudhanan; Žabić, Stanislav; Bowd, Christopher; Thompson, Hilary W.; Wolenski, Peter; Iyengar, S. Sitharama; Karki, Bijaya B.; Zangwill, Linda M.

    2009-01-01

    Glaucoma is the second leading cause of blindness worldwide. Often, glaucomatous damage and changes to the optic nerve head (ONH) occur prior to visual field loss and are observable in vivo. Thus, digital image analysis is a promising choice for detecting the onset and/or progression of glaucoma. In this work, we present a new framework for detecting glaucomatous changes in the ONH of an eye using the method of proper orthogonal decomposition (POD). A baseline topograph subspace was constructed for each eye to describe the structure of the ONH of the eye at a reference/baseline condition using POD. Any glaucomatous changes in the ONH of the eye present during a follow-up exam were estimated by comparing the follow-up ONH topography with its baseline topograph subspace representation. Image correspondence measures of L1 and L2 norms, correlation, and image Euclidean distance (IMED) were used to quantify the ONH changes. An ONH topographic library built from the Louisiana State University Experimental Glaucoma study was used to evaluate the performance of the proposed method. The area under the receiver operating characteristic curve (AUC) was used to compare the diagnostic performance of the POD-induced parameters with the parameters of the Topographic Change Analysis (TCA) method. The IMED and L2 norm parameters in the POD framework provided the highest AUCs of 0.94 at 10° field of imaging and 0.91 at 15° field of imaging, compared to the TCA parameters with AUCs of 0.86 and 0.88, respectively. The proposed POD framework captures instrument measurement variability and inherent structure variability and shows promise for improving our ability to detect glaucomatous change over time in glaucoma management. PMID:19369163
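
    A minimal sketch of the framework's core idea, building a baseline subspace by POD and scoring a follow-up exam by its L2 distance from its subspace representation, using synthetic one-dimensional "topographies" as stand-ins for real ONH data:

```python
import numpy as np

rng = np.random.default_rng(4)
npix, nbase = 256, 6

# Baseline ONH "topographies": small fluctuations about a mean shape (assumed).
mean_shape = np.sin(np.linspace(0, np.pi, npix))
baseline = np.array([mean_shape + 0.02 * rng.normal(0, 1, npix)
                     for _ in range(nbase)])

# POD of the baseline exams defines the eye's baseline subspace.
U, _, _ = np.linalg.svd(baseline.T, full_matrices=False)
P = U @ U.T                                  # projector onto the subspace

def change_score(followup):
    """L2 distance between a follow-up exam and its subspace representation."""
    return np.linalg.norm(followup - P @ followup)

stable = mean_shape + 0.02 * rng.normal(0, 1, npix)            # no change
progressed = mean_shape + 0.4 * np.exp(-np.linspace(-4, 4, npix)**2)  # focal change
```

A follow-up that stays within baseline variability scores low; a structural change outside the baseline subspace scores high, which is the basis of the detection.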

  7. Parallelization of PANDA discrete ordinates code using spatial decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humbert, P.

    2006-07-01

    We present the parallel method, based on spatial domain decomposition, implemented in the 2D and 3D versions of the discrete ordinates code PANDA. The spatial mesh is orthogonal and the spatial domain decomposition is Cartesian. For 3D problems, a 3D Cartesian domain topology is created and the parallel method is based on a domain diagonal-plane ordered sweep algorithm. The parallel efficiency of the method is improved by pipelining of directions and octants. The implementation of the algorithm is straightforward using MPI blocking point-to-point communications. The efficiency of the method is illustrated by an application to the 3D-Ext C5G7 benchmark of the OECD/NEA. (authors)

  8. Comparing and improving proper orthogonal decomposition (POD) to reduce the complexity of groundwater models

    NASA Astrophysics Data System (ADS)

    Gosses, Moritz; Nowak, Wolfgang; Wöhling, Thomas

    2017-04-01

    Physically-based modeling is a widespread tool in the understanding and management of natural systems. With the high complexity of many such models and the huge number of model runs necessary for parameter estimation and uncertainty analysis, overall run times can be prohibitively long even on modern computer systems. An encouraging strategy to tackle this problem is model reduction. In this contribution, we compare different proper orthogonal decomposition (POD, Siade et al. (2010)) methods and their potential applications to groundwater models. The POD method performs a singular value decomposition on system states as simulated by the complex (e.g., PDE-based) groundwater model taken at several time steps, so-called snapshots. The singular vectors with the highest information content resulting from this decomposition are then used as a basis for projection of the system of model equations onto a subspace of much lower dimensionality than the original complex model, thereby greatly reducing complexity and accelerating run times. In its original form, this method is only applicable to linear problems. Many real-world groundwater models are non-linear, though. These non-linearities are introduced either through model structure (unconfined aquifers) or boundary conditions (certain Cauchy boundaries, like rivers with variable connection to the groundwater table). To date, applications of POD have focused on groundwater models simulating pumping tests in confined aquifers with constant head boundaries. In contrast, POD model reduction either greatly loses accuracy or does not significantly reduce model run time if the above-mentioned non-linearities are introduced. We have also found that variable Dirichlet boundaries are problematic for POD model reduction. An extension to the POD method, called POD-DEIM, has been developed for non-linear groundwater models by Stanko et al. (2016). 
This method uses spatial interpolation points to build the equation system in the reduced model space, thereby allowing the recalculation of system matrices at every time step necessary for non-linear models while retaining the speed of the reduced model. This makes POD-DEIM applicable to groundwater models simulating unconfined aquifers. However, in our analysis, the method struggled to reproduce variable river boundaries accurately and gave no advantage over the original POD method for variable Dirichlet boundaries. We have developed another extension for POD that aims to address these remaining problems by performing a second POD operation on the model matrix on the left-hand side of the equation. The method aims to at least reproduce the accuracy of the other methods where they are applicable while outperforming them for setups with changing river boundaries or variable Dirichlet boundaries. We compared the new extension with original POD and POD-DEIM for different combinations of model structures and boundary conditions. The new method shows the potential of POD extensions for applications to non-linear groundwater systems and complex boundary conditions that go beyond the current, relatively limited range of applications. References: Siade, A. J., Putti, M., and Yeh, W. W.-G. (2010). Snapshot selection for groundwater model reduction using proper orthogonal decomposition. Water Resour. Res., 46(8):W08539. Stanko, Z. P., Boyce, S. E., and Yeh, W. W.-G. (2016). Nonlinear model reduction of unconfined groundwater flow using POD and DEIM. Advances in Water Resources, 97:130-143.

  9. A Removal of Eye Movement and Blink Artifacts from EEG Data Using Morphological Component Analysis

    PubMed Central

    Wagatsuma, Hiroaki

    2017-01-01

    EEG signals contain a large amount of ocular artifacts with different time-frequency properties mixed together in the EEGs of interest. Artifact removal has been substantially dealt with by existing decomposition methods known as PCA and ICA, based on the orthogonality of signal vectors or the statistical independence of signal components. We focused on signal morphology and proposed a systematic decomposition method to identify the type of signal components on the basis of sparsity in the time-frequency domain using Morphological Component Analysis (MCA), which provides a way of reconstruction that guarantees accuracy by using multiple bases in accordance with the concept of a "dictionary." MCA was applied to decompose real EEG signals and clarify the best combination of dictionaries for this purpose. In our proposed semirealistic biological signal analysis with iEEGs recorded intracranially from the brain, those signals were successfully decomposed into their original types by a linear expansion of waveforms from redundant transforms: UDWT, DCT, LDCT, DST, and DIRAC. Our result demonstrated that the most suitable combination for EEG data analysis was UDWT, DST, and DIRAC, representing the baseline envelope, multifrequency waveforms, and spiking activities, respectively, as representative types of EEG morphologies. PMID:28194221

  10. Orthogonal decomposition of left ventricular remodeling in myocardial infarction.

    PubMed

    Zhang, Xingyu; Medrano-Gracia, Pau; Ambale-Venkatesh, Bharath; Bluemke, David A; Cowan, Brett R; Finn, J Paul; Kadish, Alan H; Lee, Daniel C; Lima, Joao A C; Young, Alistair A; Suinesiaputra, Avan

    2017-03-01

    Left ventricular size and shape are important for quantifying cardiac remodeling in response to cardiovascular disease. Geometric remodeling indices have been shown to have prognostic value in predicting adverse events in the clinical literature, but these often describe interrelated shape changes. We developed a novel method for deriving orthogonal remodeling components directly from any (moderately independent) set of clinical remodeling indices. Six clinical remodeling indices (end-diastolic volume index, sphericity, relative wall thickness, ejection fraction, apical conicity, and longitudinal shortening) were evaluated using cardiac magnetic resonance images of 300 patients with myocardial infarction, and 1991 asymptomatic subjects, obtained from the Cardiac Atlas Project. Partial least squares (PLS) regression of left ventricular shape models resulted in remodeling components that were optimally associated with each remodeling index. A Gram-Schmidt orthogonalization process, by which remodeling components were successively removed from the shape space in the order of shape variance explained, resulted in a set of orthonormal remodeling components. Remodeling scores could then be calculated that quantify the amount of each remodeling component present in each case. A one-factor PLS regression led to more decoupling between scores from the different remodeling components across the entire cohort, and zero correlation between clinical indices and subsequent scores. The PLS orthogonal remodeling components had similar power to describe differences between myocardial infarction patients and asymptomatic subjects as principal component analysis, but were better associated with well-understood clinical indices of cardiac remodeling. The data and analyses are available from www.cardiacatlas.org. © The Author 2017. Published by Oxford University Press.
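
    The Gram-Schmidt step described in this record can be sketched directly; the component vectors below are random stand-ins for the PLS-derived remodeling components, ordered (as in the abstract) by shape variance explained:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical remodeling components in a 50-D shape space, one per
# clinical index. They are correlated (interrelated indices), not orthogonal.
components = rng.normal(0, 1, (6, 50))
components[1:] += 0.5 * components[0]        # induce correlation

# Gram-Schmidt: successively remove earlier components from later ones,
# yielding an orthonormal set.
ortho = []
for c in components:
    v = c - sum((c @ q) * q for q in ortho)
    ortho.append(v / np.linalg.norm(v))
Q = np.array(ortho)

# Remodeling score: amount of each orthonormal component present in a case.
case = rng.normal(0, 1, 50)
scores = Q @ case
```

Because the rows of Q are orthonormal, the scores are decoupled: each quantifies one remodeling component independently of the others.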

  11. Swirl ratio effects on tornado-like vortices

    NASA Astrophysics Data System (ADS)

    Hashemi-Tari, Pooyan; Gurka, Roi; Hangen, Horia

    2007-11-01

    The effect of swirl ratio on the flow field of a tornado-like vortex simulator (TVS) is investigated. Different swirl ratios are obtained by changing the geometry and tangential velocity, which determine the vortex evolution. Flow visualizations, surface pressure and Particle Image Velocimetry (PIV) measurements are performed in a small TVS for swirl ratios S between 0 and 1. The PIV data were acquired for two orthogonal planes, normal and parallel to the solid boundary, at several height locations. The ratio between the angular momentum and the radial momentum, which characterizes the swirl ratio, is investigated. Statistical analysis of the turbulent field is performed, and mean and rms profiles of the velocity, stresses and vorticity are presented. A Proper Orthogonal Decomposition (POD) is performed on the vorticity field. The results are used to: (i) provide a relation between these three sets of qualitative and quantitative measurements and the swirl ratio, in an attempt to relate the fluid dynamics parameters to the forensic Fujita scale, and (ii) understand the spatio-temporal distribution of the most energetic POD modes in a tornado-like vortex.

  12. An Efficient Local Correlation Matrix Decomposition Approach for the Localization Implementation of Ensemble-Based Assimilation Methods

    NASA Astrophysics Data System (ADS)

    Zhang, Hongqin; Tian, Xiangjun

    2018-04-01

    Ensemble-based data assimilation methods often use the so-called localization scheme to improve the representation of the ensemble background error covariance (Be). Extensive research has been undertaken to reduce the computational cost of these methods by using the localized ensemble samples to localize Be by means of a direct decomposition of the local correlation matrix C. However, the computational costs of the direct decomposition of the local correlation matrix C are still extremely high due to its high dimension. In this paper, we propose an efficient local correlation matrix decomposition approach based on the concept of alternating directions. This approach is intended to avoid direct decomposition of the correlation matrix. Instead, we first decompose the correlation matrix into 1-D correlation matrices in the three coordinate directions, then construct their empirical orthogonal function decomposition at low resolution. This procedure is followed by the 1-D spline interpolation process to transform the above decompositions to the high-resolution grid. Finally, an efficient correlation matrix decomposition is achieved by computing the very similar Kronecker product. We conducted a series of comparison experiments to illustrate the validity and accuracy of the proposed local correlation matrix decomposition approach. The effectiveness of the proposed correlation matrix decomposition approach and its efficient localization implementation of the nonlinear least-squares four-dimensional variational assimilation are further demonstrated by several groups of numerical experiments based on the Advanced Research Weather Research and Forecasting model.
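
    The Kronecker-product construction at the heart of this approach can be sketched in miniature (the tiny grid sizes and Gaussian correlation shapes are assumptions, and the low-resolution EOF plus spline-interpolation steps are omitted):

```python
import numpy as np

def corr_1d(n, length):
    """Gaussian 1-D correlation matrix along one coordinate direction."""
    i = np.arange(n)
    return np.exp(-((i[:, None] - i[None, :]) / length) ** 2)

# 1-D correlation matrices in the three coordinate directions (assumed sizes).
Cx, Cy, Cz = corr_1d(8, 3.0), corr_1d(6, 2.0), corr_1d(4, 1.5)

# Alternating-directions idea: the full 3-D correlation matrix is represented
# as a Kronecker product of the small 1-D factors, so the high-dimensional
# matrix never has to be formed or decomposed directly.
C = np.kron(Cx, np.kron(Cy, Cz))           # (8*6*4) x (8*6*4), built from tiny factors
```

The payoff is that the eigen-decomposition of C follows from the decompositions of the three small factors (eigenvalues are products, eigenvectors are Kronecker products), avoiding any direct decomposition at full dimension.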

  13. Developing an Accurate CFD Based Gust Model for the Truss Braced Wing Aircraft

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.

    2013-01-01

    The increased flexibility of long endurance aircraft having high aspect ratio wings necessitates attention to gust response and perhaps the incorporation of gust load alleviation. The design of civil transport aircraft with a strut or truss-braced high aspect ratio wing furthermore requires gust response analysis in the transonic cruise range. This requirement motivates the use of high fidelity nonlinear computational fluid dynamics (CFD) for gust response analysis. This paper presents the development of a CFD based gust model for the truss braced wing aircraft. A sharp-edged gust provides the gust system identification. The result of the system identification is several thousand time steps of instantaneous pressure coefficients over the entire vehicle. This data is filtered and downsampled to provide the snapshot data set from which a reduced order model is developed. A stochastic singular value decomposition algorithm is used to obtain a proper orthogonal decomposition (POD). The POD model is combined with a convolution integral to predict the time varying pressure coefficient distribution due to a novel gust profile. Finally the unsteady surface pressure response of the truss braced wing vehicle to a one-minus-cosine gust, simulated using the reduced order model, is compared with the full CFD.

  14. Acoustics flow analysis in circular duct using sound intensity and dynamic mode decomposition

    NASA Astrophysics Data System (ADS)

    Weyna, S.

    2014-08-01

    Sound intensity generation in a hard-walled duct with acoustic flow (no mean flow) is treated experimentally and shown graphically. In the paper, numerous visualization methods illustrating the vortex flow (2D, 3D) graphically explain the diffraction and scattering phenomena occurring inside the duct and around the open-end area. Sound intensity investigation in an annular duct gives a physical picture of sound waves in any duct mode. Modal energy analysis is discussed with particular reference to acoustic orthogonal decomposition (AOD). Images of the sound intensity field below and above the "cut-off" frequency region are compared to identify acoustic modes which might resonate in the duct. The experimental results also show the effects of axial and swirling flow. However, the acoustic field is extremely complicated, because pressures in non-propagating (cut-off) modes cooperate with the particle velocities in propagating modes, and vice versa. Measurement in the cylindrical duct also demonstrates the cut-off phenomenon and the effect of reflection from the open end. The aim of the experimental study was to obtain information on low-Mach-number flows in ducts in order to improve physical understanding and validate theoretical CFD and CAA models that still may be improved.

  15. Development of a New Methodology for Computing Surface Sensible Heat Fluxes using Thermal Imagery

    NASA Astrophysics Data System (ADS)

    Morrison, T. J.; Calaf, M.; Fernando, H. J.; Price, T. A.; Pardyjak, E.

    2017-12-01

    Current numerical weather prediction models utilize similarity theory to characterize momentum, moisture, and heat fluxes. Such formulations are only valid under the ideal assumptions of spatial homogeneity, statistical stationarity, and zero subsidence. However, recent surface temperature measurements from the Mountain Terrain Atmospheric Modeling and Observations (MATERHORN) Program on the Salt Flats of Utah's West Desert show that, even under the most a priori ideal conditions, heterogeneity of the aforementioned variables exists. We present a new method to extract spatially distributed measurements of surface sensible heat flux from thermal imagery. The approach consists of using a surface energy budget, where the ground heat flux is easily computed from limited measurements using a force-restore-type methodology, the latent heat fluxes are neglected, and the energy storage is computed using a lumped-capacitance model. Preliminary validation of the method is presented using experimental data acquired from a nearby sonic anemometer during the MATERHORN campaign. Additional evaluation is required to confirm the method's validity. Further decomposition analysis of on-site instrumentation (thermal camera, cold-hotwire probes, and sonic anemometers) using Proper Orthogonal Decomposition (POD) and wavelet analysis reveals time-scale similarity between the flow and surface fluctuations.
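
    One common form of the budget arithmetic can be sketched as H = Rn - G - S with latent heat neglected; every series and coefficient below (the net radiation, the ground-heat-flux fraction, the areal heat capacity, the frame interval) is an invented placeholder, not MATERHORN data:

```python
import numpy as np

# Hypothetical time series (W m^-2 unless noted), purely illustrative.
dt = 1.0                                   # s, assumed thermal-camera frame interval
t = np.arange(0, 600, dt)
Rn = 450 + 20 * np.sin(2 * np.pi * t / 300)      # net radiation (assumed)
G = 0.3 * Rn                                      # ground heat flux (stand-in for force-restore)

# Lumped-capacitance storage: S = C_a * dT_s/dt from surface temperature.
C_a = 2.0e4                                # J m^-2 K^-1, assumed areal heat capacity
T_s = 300 + 0.5 * np.sin(2 * np.pi * t / 300)    # surface temperature, K (assumed)
S = C_a * np.gradient(T_s, dt)

H = Rn - G - S                             # sensible heat flux, latent heat neglected
```

Applied pixel-by-pixel to thermal imagery, the same arithmetic yields a spatially distributed sensible-heat-flux map.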

  16. Orthogonal recursive bisection data decomposition for high performance computing in cardiac model simulations: dependence on anatomical geometry.

    PubMed

    Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Keller, David U J; Seemann, Gunnar; Dossel, Olaf; Pitman, Michael C; Rice, John J

    2009-01-01

    The orthogonal recursive bisection (ORB) algorithm can be used as a data decomposition strategy to distribute the large data set of a cardiac model across a distributed-memory supercomputer. It has been shown previously that good scaling results can be achieved using the ORB algorithm for data decomposition. However, the ORB algorithm depends on the distribution of the computational load of each element in the data set. In this work we investigated the dependence of data decomposition and load balancing on different rotations of the anatomical data set in order to optimize load balancing. The anatomical data set was given by both ventricles of the Visible Female data set at 0.2 mm resolution. Fiber orientation was included. The data set was rotated by 90 degrees around the x, y and z axes, respectively. By either translating or simply taking the magnitude of the resulting negative coordinates, we created 14 data sets of the same anatomy with different orientations and positions in the overall volume. Computational load ratios for non-tissue vs. tissue elements used in the data decomposition were 1:1, 1:2, 1:5, 1:10, 1:25, 1:38.85, 1:50 and 1:100 to investigate the effect of different load ratios on the data decomposition. The ten Tusscher et al. (2004) electrophysiological cell model was used in monodomain simulations of 1 ms simulation time to compare performance across the different data sets and orientations. The simulations were carried out for load ratios 1:10, 1:25 and 1:38.85 on a 512-processor partition of the IBM Blue Gene/L supercomputer. The results show that the data decomposition does depend on the orientation and position of the anatomy in the global volume. The difference in total run time between the data sets is 10 s for a simulation time of 1 ms. This yields a difference of about 28 h for a simulation of 10 s simulation time. However, given larger processor partitions, the difference in run time decreases and becomes less significant. Depending on the processor partition size, future work will have to consider the orientation of the anatomy in the global volume for longer simulation runs.
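A minimal sketch of load-balanced orthogonal recursive bisection, assuming each element carries a per-element computational load. The split-at-the-load-median criterion follows the general ORB idea rather than the exact implementation used on Blue Gene/L:

```python
import numpy as np

def orb_partition(points, loads, depth):
    """Orthogonal recursive bisection: recursively split a point cloud
    along its longest axis so that each half carries roughly equal
    computational load. Returns a list of index arrays, one per
    partition (at most 2**depth partitions)."""
    idx = np.arange(len(points))

    def split(indices, d):
        if d == 0 or len(indices) <= 1:
            return [indices]
        pts = points[indices]
        axis = np.argmax(pts.max(axis=0) - pts.min(axis=0))  # longest extent
        order = indices[np.argsort(pts[:, axis])]
        cum = np.cumsum(loads[order])
        cut = int(np.searchsorted(cum, cum[-1] / 2.0))  # load median
        cut = min(max(cut, 1), len(order) - 1)          # keep halves non-empty
        return split(order[:cut], d - 1) + split(order[cut:], d - 1)

    return split(idx, depth)
```

Because the cut is placed at the cumulative-load median rather than the geometric midpoint, an anatomy rotated inside the bounding volume generally produces a different partitioning, which is the dependence the record investigates.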

  17. Hilbert complexes of nonlinear elasticity

    NASA Astrophysics Data System (ADS)

    Angoshtari, Arzhang; Yavari, Arash

    2016-12-01

    We introduce some Hilbert complexes involving second-order tensors on flat compact manifolds with boundary that describe the kinematics and the kinetics of motion in nonlinear elasticity. We then use the general framework of Hilbert complexes to write Hodge-type and Helmholtz-type orthogonal decompositions for second-order tensors. As some applications of these decompositions in nonlinear elasticity, we study the strain compatibility equations of linear and nonlinear elasticity in the presence of Dirichlet boundary conditions and the existence of stress functions on non-contractible bodies. As an application of these Hilbert complexes in computational mechanics, we briefly discuss the derivation of a new class of mixed finite element methods for nonlinear elasticity.

  18. High-frequency Total Focusing Method (TFM) imaging in strongly attenuating materials with the decomposition of the time reversal operator associated with orthogonal coded excitations

    NASA Astrophysics Data System (ADS)

    Villaverde, Eduardo Lopez; Robert, Sébastien; Prada, Claire

    2017-02-01

    In the present work, the Total Focusing Method (TFM) is used to image defects in a High Density Polyethylene (HDPE) pipe. The viscoelastic attenuation of this material corrupts the images with a high electronic noise. In order to improve the image quality, the Decomposition of the Time Reversal Operator (DORT) filtering is combined with spatial Walsh-Hadamard coded transmissions before calculating the images. Experiments on a complex HDPE joint demonstrate that this method improves the signal-to-noise ratio by more than 40 dB in comparison with the conventional TFM.

  19. Wavelet-bounded empirical mode decomposition for measured time series analysis

    NASA Astrophysics Data System (ADS)

    Moore, Keegan J.; Kurt, Mehmet; Eriten, Melih; McFarland, D. Michael; Bergman, Lawrence A.; Vakakis, Alexander F.

    2018-01-01

    Empirical mode decomposition (EMD) is a powerful technique for separating the transient responses of nonlinear and nonstationary systems into finite sets of nearly orthogonal components, called intrinsic mode functions (IMFs), which represent the dynamics on different characteristic time scales. However, a deficiency of EMD is the mixing of two or more components in a single IMF, which can drastically affect the physical meaning of the empirical decomposition results. In this paper, we present a new approach based on EMD, designated as wavelet-bounded empirical mode decomposition (WBEMD), which is a closed-loop, optimization-based solution to the problem of mode mixing. The optimization routine relies on maximizing the isolation of an IMF around a characteristic frequency. This isolation is measured by fitting a bounding function around the IMF in the frequency domain and computing the area under this function. It follows that a large (small) area corresponds to a poorly (well) separated IMF. An optimization routine is developed based on this result with the objective of minimizing the bounding-function area and with the masking signal parameters serving as free parameters, such that a well-separated IMF is extracted. As examples of application of WBEMD we apply the proposed method, first to a stationary, two-component signal, and then to the numerically simulated response of a cantilever beam with an essentially nonlinear end attachment. We find that WBEMD vastly improves upon EMD and that the extracted sets of IMFs provide insight into the underlying physics of the response of each system.

  20. Reduced-Order Modeling: New Approaches for Computational Physics

    NASA Technical Reports Server (NTRS)

    Beran, Philip S.; Silva, Walter A.

    2001-01-01

    In this paper, we review the development of new reduced-order modeling techniques and discuss their applicability to various problems in computational physics. Emphasis is given to methods based on Volterra series representations and the proper orthogonal decomposition. Results are reported for different nonlinear systems to provide clear examples of the construction and use of reduced-order models, particularly in the multi-disciplinary field of computational aeroelasticity. Unsteady aerodynamic and aeroelastic behaviors of two-dimensional and three-dimensional geometries are described. Large increases in computational efficiency are obtained through the use of reduced-order models, thereby justifying the initial computational expense of constructing these models and motivating their use for multi-disciplinary design analysis.

  1. Assessing the Transient Gust Response of a Representative Ship Airwake using Proper Orthogonal Decomposition

    DTIC Science & Technology

    Velocimetry system was then used to acquire flow field data across a series of three horizontal planes spanning from 0.25 to 1.5 times the ship hangar height...included six separate data points at gust-frequency referenced Strouhal numbers ranging from 0.430 to 1.474. A 725-Hertz time-resolved Particle Image

  2. Characterization of Flow Dynamics and Reduced-Order Description of Experimental Two-Phase Pipe Flow

    NASA Astrophysics Data System (ADS)

    Viggiano, Bianca; SkjæRaasen, Olaf; Tutkun, Murat; Cal, Raul Bayoan

    2017-11-01

    Multiphase pipe flow is investigated using proper orthogonal decomposition for tomographic X-ray data, where holdup, cross sectional phase distributions and phase interface characteristics are obtained. Instantaneous phase fractions of dispersed flow and slug flow are analyzed and a reduced-order dynamical description is generated. The dispersed flow displays coherent structures in the first few modes near the horizontal center of the pipe, representing the liquid-liquid interface location, while the slug flow case shows coherent structures that correspond to the cyclical formation and breakup of the slug in the first 10 modes. The reconstruction of the fields indicates that the main features are observed in the low-order dynamical descriptions utilizing less than 1% of the full-order model. POD temporal coefficients a1, a2 and a3 show interdependence for the slug flow case. The coefficients also describe the phase fraction holdup as a function of time for both dispersed and slug flow. These flows are highly applicable to petroleum transport pipelines, hydroelectric power and heat exchanger tubes, to name a few. The mathematical representations obtained via proper orthogonal decomposition will deepen the understanding of fundamental multiphase flow characteristics.
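The snapshot-POD procedure underlying such reduced-order descriptions can be sketched via the singular value decomposition. This is a generic illustration under the usual snapshot-matrix conventions, not the paper's tomographic pipeline:

```python
import numpy as np

def pod(snapshots, n_modes):
    """Proper orthogonal decomposition of a snapshot matrix.
    snapshots: (n_points, n_times) array, one flow snapshot per column.
    Returns spatial modes, temporal coefficients a_k(t), per-mode energy
    fractions, and the subtracted mean field."""
    mean = snapshots.mean(axis=1, keepdims=True)
    fluct = snapshots - mean                        # decompose fluctuations
    u, s, vt = np.linalg.svd(fluct, full_matrices=False)
    modes = u[:, :n_modes]                          # orthonormal spatial modes
    coeffs = np.diag(s[:n_modes]) @ vt[:n_modes]    # temporal coefficients a_k(t)
    energy = s**2 / np.sum(s**2)                    # energy fraction per mode
    return modes, coeffs, energy, mean

def reconstruct(modes, coeffs, mean):
    """Low-order reconstruction from the leading modes."""
    return mean + modes @ coeffs
```

Truncating at a small `n_modes` that captures most of the energy yields exactly the kind of reduced-order description the record refers to, with the coefficients `a_k(t)` tracking the temporal dynamics.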

  3. Geometric decompositions of collective motion

    NASA Astrophysics Data System (ADS)

    Mischiati, Matteo; Krishnaprasad, P. S.

    2017-04-01

    Collective motion in nature is a captivating phenomenon. Revealing the underlying mechanisms, which are of biological and theoretical interest, will require empirical data, modelling and analysis techniques. Here, we contribute a geometric viewpoint, yielding a novel method of analysing movement. Snapshots of collective motion are portrayed as tangent vectors on configuration space, with length determined by the total kinetic energy. Using the geometry of fibre bundles and connections, this portrait is split into orthogonal components each tangential to a lower dimensional manifold derived from configuration space. The resulting decomposition, when interleaved with classical shape space construction, is categorized into a family of kinematic modes, including rigid translations, rigid rotations, inertia tensor transformations, expansions and compressions. Snapshots of empirical data from natural collectives can be allocated to these modes and weighted by fractions of total kinetic energy. Such quantitative measures can provide insight into the variation of the driving goals of a collective, as illustrated by applying these methods to a publicly available dataset of pigeon flocking. The geometric framework may also be profitably employed in the control of artificial systems of interacting agents such as robots.

  4. Geometric decompositions of collective motion

    PubMed Central

    Krishnaprasad, P. S.

    2017-01-01

    Collective motion in nature is a captivating phenomenon. Revealing the underlying mechanisms, which are of biological and theoretical interest, will require empirical data, modelling and analysis techniques. Here, we contribute a geometric viewpoint, yielding a novel method of analysing movement. Snapshots of collective motion are portrayed as tangent vectors on configuration space, with length determined by the total kinetic energy. Using the geometry of fibre bundles and connections, this portrait is split into orthogonal components each tangential to a lower dimensional manifold derived from configuration space. The resulting decomposition, when interleaved with classical shape space construction, is categorized into a family of kinematic modes—including rigid translations, rigid rotations, inertia tensor transformations, expansions and compressions. Snapshots of empirical data from natural collectives can be allocated to these modes and weighted by fractions of total kinetic energy. Such quantitative measures can provide insight into the variation of the driving goals of a collective, as illustrated by applying these methods to a publicly available dataset of pigeon flocking. The geometric framework may also be profitably employed in the control of artificial systems of interacting agents such as robots. PMID:28484319

  5. Actuation for simultaneous motions and constraining efforts: an open chain example

    NASA Astrophysics Data System (ADS)

    Perreira, N. Duke

    1997-06-01

    A brief discussion is given of systems where simultaneous control of forces and velocities is desirable, and an example linkage with revolute and prismatic joints is selected for further analysis. The Newton-Euler approach for dynamic system analysis is applied to the example to provide a basis of comparison. Gauge invariant transformations are used to convert the dynamic equations into invariant form suitable for use in a new dynamic system analysis method known as the motion-effort approach. This approach uses constraint elimination techniques based on singular value decompositions to recast the invariant form of dynamic system equations into orthogonal sets of motion and effort equations. Desired motions and constraining efforts are partitioned into ideally obtainable and unobtainable portions, which are then used to determine the required actuation. The method is applied to the example system and an analytic estimate of its success is made.

  6. Variance decomposition in stochastic simulators.

    PubMed

    Le Maître, O P; Knio, O M; Moraes, A

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
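The variance-based sensitivities described above can be illustrated with a generic pick-freeze Monte Carlo estimator of first-order Sobol' indices. This sketch assumes independent uniform inputs and a deterministic model function, rather than the paper's Poisson-channel reformulation of a stochastic simulator:

```python
import numpy as np

def first_order_sobol(f, n_inputs, n_samples=100_000, rng=None):
    """Pick-freeze Monte Carlo estimate of first-order Sobol' indices
    S_i = Var(E[Y | X_i]) / Var(Y) for a function f of independent
    U(0,1) inputs. f maps an (n, n_inputs) array to an (n,) array."""
    rng = rng or np.random.default_rng()
    a = rng.random((n_samples, n_inputs))
    b = rng.random((n_samples, n_inputs))
    ya, yb = f(a), f(b)
    var = ya.var()
    indices = []
    for i in range(n_inputs):
        ab = b.copy()
        ab[:, i] = a[:, i]            # "freeze" coordinate i from sample A
        yab = f(ab)
        indices.append(np.mean(ya * (yab - yb)) / var)
    return np.array(indices)
```

For an additive model such as Y = X1 + 2*X2 the estimator recovers S1 = 0.2 and S2 = 0.8 up to Monte Carlo error, since the variance splits orthogonally across the inputs.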

  7. Fast polar decomposition of an arbitrary matrix

    NASA Technical Reports Server (NTRS)

    Higham, Nicholas J.; Schreiber, Robert S.

    1988-01-01

    The polar decomposition of an m x n matrix A of full rank, where m is greater than or equal to n, can be computed using a quadratically convergent algorithm. The algorithm is based on a Newton iteration involving a matrix inverse. With the use of a preliminary complete orthogonal decomposition the algorithm can be extended to arbitrary A. How to use the algorithm to compute the positive semi-definite square root of a Hermitian positive semi-definite matrix is described. A hybrid algorithm which adaptively switches from the matrix inversion based iteration to a matrix multiplication based iteration due to Kovarik, and to Bjorck and Bowie is formulated. The decision when to switch is made using a condition estimator. This matrix multiplication rich algorithm is shown to be more efficient on machines for which matrix multiplication can be executed 1.5 times faster than matrix inversion.
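The core Newton iteration is simple to state. The sketch below covers the square nonsingular real case only; the record's algorithm extends to arbitrary A via a preliminary complete orthogonal decomposition, which is not reproduced here:

```python
import numpy as np

def polar_newton(a, tol=1e-12, max_iter=100):
    """Newton iteration for the polar decomposition A = U H of a square
    nonsingular real matrix: X_{k+1} = (X_k + X_k^{-T}) / 2 converges
    quadratically to the orthogonal factor U, after which H = U^T A is
    symmetric positive semi-definite."""
    x = a.copy()
    for _ in range(max_iter):
        x_new = 0.5 * (x + np.linalg.inv(x).T)   # Newton step with matrix inverse
        done = np.linalg.norm(x_new - x, 'fro') < tol * np.linalg.norm(x_new, 'fro')
        x = x_new
        if done:
            break
    u = x
    h = u.T @ a
    return u, 0.5 * (h + h.T)   # symmetrize H to clean up rounding
```

Each step requires one matrix inversion, which is the cost the record's hybrid scheme trades for matrix multiplications once the iterate is close to orthogonal.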

  8. Variance decomposition in stochastic simulators

    NASA Astrophysics Data System (ADS)

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  9. Separating Putative Pathogens from Background Contamination with Principal Orthogonal Decomposition: Evidence for Leptospira in the Ugandan Neonatal Septisome

    PubMed Central

    Schiff, Steven J.; Kiwanuka, Julius; Riggio, Gina; Nguyen, Lan; Mu, Kevin; Sproul, Emily; Bazira, Joel; Mwanga-Amumpaire, Juliet; Tumusiime, Dickson; Nyesigire, Eunice; Lwanga, Nkangi; Bogale, Kaleb T.; Kapur, Vivek; Broach, James R.; Morton, Sarah U.; Warf, Benjamin C.; Poss, Mary

    2016-01-01

    Neonatal sepsis (NS) is responsible for over 1 million yearly deaths worldwide. In the developing world, NS is often treated without an identified microbial pathogen. Amplicon sequencing of the bacterial 16S rRNA gene can be used to identify organisms that are difficult to detect by routine microbiological methods. However, contaminating bacteria are ubiquitous in both hospital settings and research reagents and must be accounted for to make effective use of these data. In this study, we sequenced the bacterial 16S rRNA gene obtained from blood and cerebrospinal fluid (CSF) of 80 neonates presenting with NS to the Mbarara Regional Hospital in Uganda. Assuming that patterns of background contamination would be independent of pathogenic microorganism DNA, we applied a novel quantitative approach using principal orthogonal decomposition to separate background contamination from potential pathogens in sequencing data. We designed our quantitative approach contrasting blood, CSF, and control specimens and employed a variety of statistical random matrix bootstrap hypotheses to estimate statistical significance. These analyses demonstrate that Leptospira appears present in some infants presenting within 48 h of birth, indicative of infection in utero, and up to 28 days of age, suggesting environmental exposure. This organism cannot be cultured in routine bacteriological settings and is enzootic in the cattle that often live in close proximity to the rural peoples of western Uganda. Our findings demonstrate that statistical approaches to remove background organisms common in 16S sequence data can reveal putative pathogens in small volume biological samples from newborns. This computational analysis thus reveals an important medical finding that has the potential to alter therapy and prevention efforts in a critically ill population. PMID:27379237

  10. Signal detection by means of orthogonal decomposition

    NASA Astrophysics Data System (ADS)

    Hajdu, C. F.; Dabóczi, T.; Péceli, G.; Zamantzas, C.

    2018-03-01

    Matched filtering is a well-known method frequently used in digital signal processing to detect the presence of a pattern in a signal. In this paper, we suggest a time variant matched filter, which, unlike a regular matched filter, maintains a given alignment between the input signal and the template carrying the pattern, and can be realized recursively. We introduce a method to synchronize the two signals for presence detection, usable in case direct synchronization between the signal generator and the receiver is not possible or not practical. We then propose a way of realizing and extending the same filter by modifying a recursive spectral observer, which gives rise to orthogonal filter channels and also leads to another way to synchronize the two signals.
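For reference, the classical time-invariant matched filter that the proposed time-variant filter generalizes can be sketched as a normalized cross-correlation with a detection threshold. The function name and threshold convention are illustrative, not the paper's recursive realization:

```python
import numpy as np

def matched_filter(signal, template, threshold):
    """Classical matched filtering by cross-correlation: correlate the
    input with a mean-removed, unit-norm copy of the template and flag
    lags where the response exceeds a threshold. Returns the filter
    output and the indices of detections."""
    t = template - template.mean()
    t = t / np.linalg.norm(t)               # unit-energy template
    out = np.correlate(signal, t, mode='valid')
    return out, np.flatnonzero(out > threshold)
```

The response peaks at the lag where the pattern is aligned with the template, which is the alignment property the record's time-variant variant maintains explicitly.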

  11. A Generalized Framework for Reduced-Order Modeling of a Wind Turbine Wake

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamilton, Nicholas; Viggiano, Bianca; Calaf, Marc

    A reduced-order model for a wind turbine wake is sought from large eddy simulation data. Fluctuating velocity fields are combined in the correlation tensor to form the kernel of the proper orthogonal decomposition (POD). Proper orthogonal decomposition modes resulting from the decomposition represent the spatially coherent turbulence structures in the wind turbine wake; eigenvalues delineate the relative amount of turbulent kinetic energy associated with each mode. Back-projecting the POD modes onto the velocity snapshots produces dynamic coefficients that express the amplitude of each mode in time. A reduced-order model of the wind turbine wake (wakeROM) is defined through a series of polynomial parameters that quantify mode interaction and the evolution of each POD mode coefficient. The resulting system of ordinary differential equations models the wind turbine wake composed only of the large-scale turbulent dynamics identified by the POD. Tikhonov regularization is used to recalibrate the dynamical system by adding additional constraints to the minimization that determines the polynomial parameters, reducing error in the modeled mode coefficients. The wakeROM is periodically reinitialized with new initial conditions found by relating the incoming turbulent velocity to the POD mode coefficients through a series of open-loop transfer functions. The wakeROM reproduces mode coefficients to within 25.2%, quantified through the normalized root-mean-square error. A high-level view of the modeling approach is provided as a platform to discuss promising research directions, alternate processes that could benefit stability and efficiency, and desired extensions of the wakeROM.

  12. Fractal dimension of spatially extended systems

    NASA Astrophysics Data System (ADS)

    Torcini, A.; Politi, A.; Puccioni, G. P.; D'Alessandro, G.

    1991-10-01

    Properties of the invariant measure are numerically investigated in 1D chains of diffusively coupled maps. The coarse-grained fractal dimension is carefully computed in various embedding spaces, observing an extremely slow convergence towards the asymptotic value. This is in contrast with previous simulations, where the analysis of an insufficient number of points led the authors to underestimate the increase of fractal dimension as the dimension of the embedding space increases. Orthogonal decomposition is also performed, confirming that the slow convergence is intrinsically related to local nonlinear properties of the invariant measure. Finally, the Kaplan-Yorke conjecture is tested for short chains, showing that, despite the noninvertibility of the dynamical system, a good agreement is found between Lyapunov dimension and information dimension.

  13. Gibbsian Stationary Non-equilibrium States

    NASA Astrophysics Data System (ADS)

    De Carlo, Leonardo; Gabrielli, Davide

    2017-09-01

    We study the structure of stationary non-equilibrium states for interacting particle systems from a microscopic viewpoint. In particular we discuss two different discrete geometric constructions. We apply both of them to determine non-reversible transition rates corresponding to a fixed invariant measure. The first one uses the equivalence of this problem with the construction of divergence-free flows on the transition graph. Since divergence-free flows are characterized by cyclic decompositions, we can generate families of models from elementary cycles on the configuration space. The second construction is a functional discrete Hodge decomposition for translational covariant discrete vector fields. According to this, for example, the instantaneous current of any interacting particle system on a finite torus can be canonically decomposed into a gradient part, a circulation term and a harmonic component. All three components are associated with functions on the configuration space. This decomposition is unique and constructive. The stationary condition can be interpreted as an orthogonality condition with respect to a harmonic discrete vector field, and we use this decomposition to construct models having a fixed invariant measure.

  14. Low-dimensional modelling of a transient cylinder wake using double proper orthogonal decomposition

    NASA Astrophysics Data System (ADS)

    Siegel, Stefan G.; Seidel, Jürgen; Fagley, Casey; Luchtenburg, D. M.; Cohen, Kelly; McLaughlin, Thomas

    For the systematic development of feedback flow controllers, a numerical model that captures the dynamic behaviour of the flow field to be controlled is required. This poses a particular challenge for flow fields where the dynamic behaviour is nonlinear, and the governing equations cannot easily be solved in closed form. This has led to many versions of low-dimensional modelling techniques, which we extend in this work to represent better the impact of actuation on the flow. For the benchmark problem of a circular cylinder wake in the laminar regime, we introduce a novel extension to the proper orthogonal decomposition (POD) procedure that facilitates mode construction from transient data sets. We demonstrate the performance of this new decomposition by applying it to a data set from the development of the limit cycle oscillation of a circular cylinder wake simulation as well as an ensemble of transient forced simulation results. The modes obtained from this decomposition, which we refer to as the double POD (DPOD) method, correctly track the changes of the spatial modes both during the evolution of the limit cycle and when forcing is applied by transverse translation of the cylinder. The mode amplitudes, which are obtained by projecting the original data sets onto the truncated DPOD modes, can be used to construct a dynamic mathematical model of the wake that accurately predicts the wake flow dynamics within the lock-in region at low forcing amplitudes. This low-dimensional model, derived using nonlinear artificial neural network based system identification methods, is robust and accurate and can be used to simulate the dynamic behaviour of the wake flow. We demonstrate this ability not just for unforced and open-loop forced data, but also for a feedback-controlled simulation that leads to a 90% reduction in lift fluctuations. 
This indicates the possibility of constructing accurate dynamic low-dimensional models for feedback control by using unforced and transient forced data only.

  15. On the estimation of physical height changes using GRACE satellite mission data - A case study of Central Europe

    NASA Astrophysics Data System (ADS)

    Godah, Walyeldeen; Szelachowska, Małgorzata; Krynski, Jan

    2017-12-01

    The dedicated gravity satellite missions, in particular the GRACE (Gravity Recovery and Climate Experiment) mission launched in 2002, provide unique data for studying temporal variations of mass distribution in the Earth's system, and thereby the geometry and gravity field changes of the Earth. The main objective of this contribution is to estimate physical height (e.g. the orthometric/normal height) changes over Central Europe using GRACE satellite mission data, as well as to analyse and model them over the selected study area. Physical height changes were estimated from temporal variations of height anomalies and vertical displacements of the Earth surface determined over the investigated area. The release 5 (RL05) GRACE-based global geopotential models as well as load Love numbers from the Preliminary Reference Earth Model (PREM) were used as input data. Analysis of the estimated physical height changes and their modelling were performed using two methods: the seasonal decomposition method and the PCA/EOF (Principal Component Analysis/Empirical Orthogonal Function) method, and the differences obtained were discussed. The main findings reveal that physical height changes over the selected study area reach up to 22.8 mm. The obtained physical height changes can be modelled with an accuracy of 1.4 mm using the seasonal decomposition method.

  16. Developing a Complex Independent Component Analysis (CICA) Technique to Extract Non-stationary Patterns from Geophysical Time Series

    NASA Astrophysics Data System (ADS)

    Forootan, Ehsan; Kusche, Jürgen; Talpe, Matthieu; Shum, C. K.; Schmidt, Michael

    2017-12-01

    In recent decades, decomposition techniques have enabled increasingly more applications for dimension reduction, as well as extraction of additional information from geophysical time series. Traditionally, the principal component analysis (PCA)/empirical orthogonal function (EOF) method and, more recently, the independent component analysis (ICA) have been applied to extract statistically orthogonal (uncorrelated) and independent modes that represent the maximum variance of time series, respectively. PCA and ICA can be classified as stationary signal decomposition techniques since they are based on decomposing the autocovariance matrix and diagonalizing higher (than two) order statistical tensors from centered time series, respectively. However, the stationarity assumption in these techniques is not justified for many geophysical and climate variables even after removing cyclic components, e.g., the commonly removed dominant seasonal cycles. In this paper, we present a novel decomposition method, the complex independent component analysis (CICA), which can be applied to extract non-stationary (changing in space and time) patterns from geophysical time series. Here, CICA is derived as an extension of real-valued ICA, where (a) we first define a new complex dataset that contains the observed time series in its real part, and their Hilbert transformed series as its imaginary part, (b) an ICA algorithm based on diagonalization of fourth-order cumulants is then applied to decompose the new complex dataset in (a), and finally, (c) the dominant independent complex modes are extracted and used to represent the dominant space and time amplitudes and associated phase propagation patterns. The performance of CICA is examined by analyzing synthetic data constructed from multiple physically meaningful modes in a simulation framework, with known truth. 
Next, global terrestrial water storage (TWS) data from the Gravity Recovery And Climate Experiment (GRACE) gravimetry mission (2003-2016), and satellite radiometric sea surface temperature (SST) data (1982-2016) over the Atlantic and Pacific Oceans are used with the aim of demonstrating signal separations of the North Atlantic Oscillation (NAO) from the Atlantic Multi-decadal Oscillation (AMO), and the El Niño Southern Oscillation (ENSO) from the Pacific Decadal Oscillation (PDO). CICA results indicate that ENSO-related patterns can be extracted from the Gravity Recovery And Climate Experiment Terrestrial Water Storage (GRACE TWS) with an accuracy of 0.5-1 cm in terms of equivalent water height (EWH). The magnitude of errors in extracting NAO or AMO from SST data using the complex EOF (CEOF) approach reaches up to 50% of the signal itself, while it is reduced to 16% when applying CICA. Larger errors with magnitudes of 100% and 30% of the signal itself are found while separating ENSO from PDO using CEOF and CICA, respectively. We thus conclude that the CICA is more effective than CEOF in separating non-stationary patterns.
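Step (a) of the CICA construction, forming the complex dataset from a real series and its Hilbert transform, can be sketched with SciPy, whose `hilbert` returns exactly the analytic signal x + iH(x). The per-channel centering is an assumption of this sketch:

```python
import numpy as np
from scipy.signal import hilbert

def complexify(series):
    """Build the complex dataset of CICA step (a): the real part is the
    observed (centered) time series and the imaginary part is its Hilbert
    transform. series: (n_channels, n_times) array of real time series."""
    centered = series - series.mean(axis=1, keepdims=True)
    return hilbert(centered, axis=1)   # analytic signal x + i*H(x)
```

For a pure cosine the imaginary part is the corresponding sine, so the complex modes carry both amplitude and phase, which is what allows step (c) to recover phase propagation patterns.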

  17. Two fast approximate wavelet algorithms for image processing, classification, and recognition

    NASA Astrophysics Data System (ADS)

    Wickerhauser, Mladen V.

    1994-07-01

    We use large libraries of template waveforms with remarkable orthogonality properties to recast the relatively complex principal orthogonal decomposition (POD) into an optimization problem with a fast solution algorithm. Then it becomes practical to use POD to solve two related problems: recognizing or classifying images, and inverting a complicated map from a low-dimensional configuration space to a high-dimensional measurement space. In the case where the number N of pixels or measurements is more than 1000 or so, the classical O(N^3) POD algorithm becomes very costly, but it can be replaced with an approximate best-basis method that has complexity O(N^2 log N). A variation of POD can also be used to compute an approximate Jacobian for the complicated map.

  18. Reduced-order model for underwater target identification using proper orthogonal decomposition

    NASA Astrophysics Data System (ADS)

    Ramesh, Sai Sudha; Lim, Kian Meng

    2017-03-01

    Research on underwater acoustics has seen major development over the past decade due to its widespread applications in domains such as underwater communication/navigation (SONAR), seismic exploration and oceanography. In particular, acoustic signatures from partially or fully buried targets can be used in the identification of buried mines for mine countermeasures (MCM). Several techniques exist to identify target properties based on SONAR images and acoustic signatures; these methods first employ a feature extraction method to represent the dominant characteristics of a data set, followed by the use of an appropriate classifier based on neural networks or the relevance vector machine. The aim of the present study is to demonstrate the application of the proper orthogonal decomposition (POD) technique in capturing dominant features of a set of scattered pressure signals, and the subsequent use of the POD modes and coefficients in the identification of partially buried underwater target parameters such as location, size and material density. Several numerical examples are presented to demonstrate the performance of the system identification method based on POD. Although the present study is based on a 2D acoustic model, the method can be easily extended to 3D models, thereby enabling cost-effective representations of large-scale data.
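
    The use of POD modes and coefficients for identification can be sketched generically: project signals onto the leading modes and compare coefficient vectors. Everything below (the signal model, the two hypothetical target classes, the sizes) is invented for illustration and is not the paper's scattering model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "scattered pressure" dataset: each column is a signal generated by one
# of two hypothetical target classes with slightly different spectra.
t = np.linspace(0.0, 1.0, 256)
def signal(cls):
    f = 5.0 if cls == 0 else 7.0
    return np.sin(2 * np.pi * f * t) + 0.05 * rng.standard_normal(t.size)

labels = [0, 0, 0, 1, 1, 1]
X = np.column_stack([signal(c) for c in labels])

# POD modes of the training set; keep the 2 most energetic.
U, s, Vt = np.linalg.svd(X - X.mean(axis=1, keepdims=True), full_matrices=False)
modes = U[:, :2]
train_coeffs = modes.T @ X          # POD coefficients per training signal

# Identify an unseen class-1 signal by nearest neighbor in coefficient space.
test_coeffs = modes.T @ signal(1)
dists = np.linalg.norm(train_coeffs - test_coeffs[:, None], axis=0)
print(labels[int(np.argmin(dists))])  # expected: 1
```

    The design choice is that identification happens in the low-dimensional coefficient space rather than on the raw signals, which is what makes the representation cost-effective for large data sets.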

  19. Sea level reconstructions from altimetry and tide gauges using independent component analysis

    NASA Astrophysics Data System (ADS)

    Brunnabend, Sandra-Esther; Kusche, Jürgen; Forootan, Ehsan

    2017-04-01

    Many reconstructions of global and regional sea level rise derived from tide gauges and satellite altimetry have used the method of empirical orthogonal functions (EOF) to reduce noise, improve the spatial resolution of the reconstructed outputs, and investigate the different signals in climate time series. However, the second-order EOF method has some limitations, e.g. in the separation of individual physical signals into different modes of sea level variations and in the capability to physically interpret the different modes, as they are assumed to be orthogonal. Therefore, we investigate the use of the more advanced statistical signal decomposition technique called independent component analysis (ICA) to reconstruct global and regional sea level change from satellite altimetry and tide gauge records. Our results indicate that the method used has almost no influence on the reconstruction of global mean sea level change (1.6 mm/yr from 1960-2010 and 2.9 mm/yr from 1993-2013); only different numbers of modes are needed for the reconstruction. Using the ICA method is advantageous for separating independent climate variability signals from regional sea level variations, as the mixing problem of the EOF method is strongly reduced. As an example, the modes most dominated by the El Niño-Southern Oscillation (ENSO) signal are compared. Regional sea level changes near Tianjin, China, Los Angeles, USA, and Majuro, Marshall Islands are reconstructed and the contributions from ENSO are identified.

  20. Open Rotor Computational Aeroacoustic Analysis with an Immersed Boundary Method

    NASA Technical Reports Server (NTRS)

    Brehm, Christoph; Barad, Michael F.; Kiris, Cetin C.

    2016-01-01

    Reliable noise prediction capabilities are essential to enable novel fuel-efficient open rotor designs that can meet community and cabin noise standards. Toward this end, immersed boundary methods have reached a level of maturity such that they are now frequently employed for specific real-world applications within NASA. This paper demonstrates that our higher-order immersed boundary method provides the ability for aeroacoustic analysis of wake-dominated flow fields generated by highly complex geometries. This is a first-of-its-kind aeroacoustic simulation of an open rotor propulsion system employing an immersed boundary method. In addition to discussing the peculiarities of applying the immersed boundary method to this moving boundary problem, we provide a detailed aeroacoustic analysis of the noise generation mechanisms encountered in the open rotor flow. The simulation data are compared to available experimental data and other computational results employing more conventional CFD methods. The noise generation mechanisms are analyzed employing spectral analysis, proper orthogonal decomposition and the causality method.

  1. Array magnetics modal analysis for the DIII-D tokamak based on localized time-series modelling

    DOE PAGES

    Olofsson, K. Erik J.; Hanson, Jeremy M.; Shiraki, Daisuke; ...

    2014-07-14

    Here, time-series analysis of magnetics data in tokamaks is typically done using block-based fast Fourier transform methods. This work presents the development and deployment of a new set of algorithms for magnetic probe array analysis. The method is based on an estimation technique known as stochastic subspace identification (SSI). Compared with the standard coherence approach or the direct singular value decomposition approach, the new technique exhibits several beneficial properties. For example, the SSI method does not require that frequencies be orthogonal with respect to the timeframe used in the analysis. Frequencies are obtained directly as parameters of localized time-series models. The parameters are extracted by solving small-scale eigenvalue problems. Applications include maximum-likelihood regularized eigenmode pattern estimation, detection of neoclassical tearing modes (including locked mode precursors), automatic clustering of modes, and magnetics-pattern characterization of sawtooth pre- and postcursors, edge harmonic oscillations and fishbones.

  2. Transition of cavitating flow to supercavitation within Venturi nozzle - hysteresis investigation

    NASA Astrophysics Data System (ADS)

    Jiří, Kozák; Pavel, Rudolf; Rostislav, Huzlík; Martin, Hudec; Radomír, Chovanec; Ondřej, Urban; Blahoslav, Maršálek; Eliška, Maršálková; František, Pochylý; David, Štefan

    Cavitation is usually considered an undesirable phenomenon. On the other hand, it can be utilized in many applications. One technical application is the use of cavitation in water treatment, where hydrodynamic cavitation appears to be an effective way to reduce cyanobacteria within large bulks of water. The main scope of this paper is the investigation of cavitation within a Venturi nozzle during the transition from fully developed cavitation to the supercavitation regime and vice versa. The dynamics of cavitation were investigated using experimental data on pressure pulsations and analysis of high-speed videos, where FFT of the pixel intensity and Proper Orthogonal Decomposition (POD) of the records were performed to identify dominant frequencies connected with the presence of cavitation. The methodology of the semi-automated analysis of the high-speed (HS) records using the FFT is described. The obtained results were correlated, and the possible presence of hysteresis is discussed.

  3. Evaluation of a Singular Value Decomposition Approach for Impact Dynamic Data Correlation

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Lyle, Karen H.; Lessard, Wendy B.

    2003-01-01

    Impact dynamic tests are used in the automobile and aircraft industries to assess survivability of occupants during crash, to assert adequacy of the design, and to gain federal certification. Although there is no substitute for experimental tests, analytical models are often developed and used to study alternate test conditions, to conduct trade-off studies, and to improve designs. To validate results from analytical predictions, test and analysis results must be compared to determine the model adequacy. The mathematical approach evaluated in this paper decomposes observed time responses into dominant deformation shapes and their corresponding contributions to the measured response. To correlate results, orthogonality of test and analysis shapes is used as a criterion. Data from an impact test of a composite fuselage are used and compared to finite element predictions. In this example, the impact response was decomposed into multiple shapes, but only two dominant shapes explained over 85% of the measured response.

  4. Power system frequency estimation based on an orthogonal decomposition method

    NASA Astrophysics Data System (ADS)

    Lee, Chih-Hung; Tsai, Men-Shen

    2018-06-01

    In recent years, several frequency estimation techniques have been proposed by which to estimate the frequency variations in power systems. In order to properly identify power quality issues under asynchronously-sampled signals that are contaminated with noise, flicker, and harmonic and inter-harmonic components, a good frequency estimator that is able to estimate the frequency as well as the rate of frequency changes precisely is needed. However, accurately estimating the fundamental frequency becomes a very difficult task without a priori information about the sampling frequency. In this paper, a better frequency evaluation scheme for power systems is proposed. This method employs a reconstruction technique in combination with orthogonal filters, which may maintain the required frequency characteristics of the orthogonal filters and improve the overall efficiency of power system monitoring through two-stage sliding discrete Fourier transforms. The results showed that this method can accurately estimate the power system frequency under different conditions, including asynchronously sampled signals contaminated by noise, flicker, and harmonic and inter-harmonic components. The proposed approach also provides high computational efficiency.
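
    A sliding DFT, the building block behind the two-stage scheme mentioned above, updates one frequency bin recursively as the window advances instead of recomputing a full FFT per sample. Below is a generic single-bin version (a textbook construction, not the authors' specific estimator; the test tone and window length are arbitrary):

```python
import numpy as np

def sliding_dft_bin(x, N, k):
    """Track DFT bin k of a length-N sliding window over signal x."""
    w = np.exp(2j * np.pi * k / N)
    # Initial window computed directly; later windows via the recurrence
    # X_{n+1} = (X_n - x[n] + x[n+N]) * exp(2j*pi*k/N).
    X = np.sum(x[:N] * np.exp(-2j * np.pi * k * np.arange(N) / N))
    bins = [X]
    for n in range(len(x) - N):
        X = (X + x[n + N] - x[n]) * w   # one-sample recursive update
        bins.append(X)
    return np.array(bins)

# 50 Hz tone sampled at 800 Hz: bin k = 4 of a 64-sample window (50 = 4*800/64).
fs, N, k = 800.0, 64, 4
n = np.arange(256)
x = np.cos(2 * np.pi * 50.0 * n / fs)
sliding = sliding_dft_bin(x, N, k)

# Cross-check the last window against a direct FFT of the same samples.
direct = np.fft.fft(x[-N:])[k]
print(np.allclose(sliding[-1], direct))  # True
```

    The recursive update costs O(1) per sample per tracked bin, which is what makes continuous frequency monitoring computationally cheap.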

  5. Bayesian estimation of Karhunen–Loève expansions; A random subspace approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chowdhary, Kenny; Najm, Habib N.

    One of the most widely-used statistical procedures for dimensionality reduction of high-dimensional random fields is Principal Component Analysis (PCA), which is based on the Karhunen-Loève expansion (KLE) of a stochastic process with finite variance. The KLE is analogous to a Fourier series expansion for a random process, where the goal is to find an orthogonal transformation for the data such that the projection of the data onto this orthogonal subspace is optimal in the L2 sense, i.e., it minimizes the mean square error. In practice, this orthogonal transformation is determined by performing an SVD (Singular Value Decomposition) on the sample covariance matrix or on the data matrix itself. Sampling error is typically ignored when quantifying the principal components or, equivalently, the basis functions of the KLE. Furthermore, it is exacerbated when the sample size is much smaller than the dimension of the random field. In this paper, we introduce a Bayesian KLE procedure, allowing one to obtain a probabilistic model on the principal components, which can account for inaccuracies due to limited sample size. The probabilistic model is built via Bayesian inference, from which the posterior becomes the matrix Bingham density over the space of orthonormal matrices. We use a modified Gibbs sampling procedure to sample on this space and then build probabilistic Karhunen-Loève expansions over random subspaces to obtain a set of low-dimensional surrogates of the stochastic process. We illustrate this probabilistic procedure with a finite-dimensional stochastic process inspired by Brownian motion.

  6. Bayesian estimation of Karhunen–Loève expansions; A random subspace approach

    DOE PAGES

    Chowdhary, Kenny; Najm, Habib N.

    2016-04-13

    One of the most widely-used statistical procedures for dimensionality reduction of high-dimensional random fields is Principal Component Analysis (PCA), which is based on the Karhunen-Loève expansion (KLE) of a stochastic process with finite variance. The KLE is analogous to a Fourier series expansion for a random process, where the goal is to find an orthogonal transformation for the data such that the projection of the data onto this orthogonal subspace is optimal in the L2 sense, i.e., it minimizes the mean square error. In practice, this orthogonal transformation is determined by performing an SVD (Singular Value Decomposition) on the sample covariance matrix or on the data matrix itself. Sampling error is typically ignored when quantifying the principal components or, equivalently, the basis functions of the KLE. Furthermore, it is exacerbated when the sample size is much smaller than the dimension of the random field. In this paper, we introduce a Bayesian KLE procedure, allowing one to obtain a probabilistic model on the principal components, which can account for inaccuracies due to limited sample size. The probabilistic model is built via Bayesian inference, from which the posterior becomes the matrix Bingham density over the space of orthonormal matrices. We use a modified Gibbs sampling procedure to sample on this space and then build probabilistic Karhunen-Loève expansions over random subspaces to obtain a set of low-dimensional surrogates of the stochastic process. We illustrate this probabilistic procedure with a finite-dimensional stochastic process inspired by Brownian motion.
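
    The sample-based KLE described above (an eigendecomposition of the sample covariance matrix) can be checked against the one case with a closed form mentioned in the abstract, Brownian motion, whose KL eigenvalues on [0, 1] are lambda_k = ((k - 1/2) * pi)^(-2). A small NumPy sketch (grid size and sample count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate M Brownian paths on [0, 1] with n time steps.
M, n = 4000, 100
dt = 1.0 / n
W = np.cumsum(np.sqrt(dt) * rng.standard_normal((M, n)), axis=1)

# Empirical KLE: eigendecomposition of the sample covariance matrix
# (equivalently, PCA / an SVD of the data matrix).
C = (W.T @ W) / M
evals, evecs = np.linalg.eigh(C)
evals, evecs = evals[::-1], evecs[:, ::-1]   # sort descending

# Analytic leading KL eigenvalue of Brownian motion: ((1/2) * pi)^(-2).
lam1 = 1.0 / (0.5 * np.pi) ** 2
# The two values agree to a few percent (sampling + discretization error).
print(round(float(evals[0] * dt), 3), round(lam1, 3))
```

    With only a handful of paths instead of thousands, the leading eigenvalue estimate would scatter noticeably, which is exactly the small-sample regime the Bayesian procedure above is designed to quantify.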

  7. Variance decomposition in stochastic simulators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Le Maître, O. P., E-mail: olm@limsi.fr; Knio, O. M., E-mail: knio@duke.edu; Moraes, A., E-mail: alvaro.moraesgutierrez@kaust.edu.sa

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity, and of the corresponding sensitivities, in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
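
    The variance-based sensitivities built on the Sobol-Hoeffding decomposition can be illustrated with a standard pick-freeze Monte Carlo estimator on a toy function with known indices. This is a generic construction, not the paper's reaction-network machinery; the function and sample size are made up:

```python
import numpy as np

rng = np.random.default_rng(3)

def f(x):
    # Simple additive test function: variance splits exactly 1:4 between
    # inputs, so the first-order Sobol indices are S1 = 0.2 and S2 = 0.8.
    return x[:, 0] + 2.0 * x[:, 1]

M = 200_000
A = rng.uniform(size=(M, 2))
B = rng.uniform(size=(M, 2))
fA = f(A)
var = fA.var()

# Pick-freeze estimator of the first-order index S_i:
# freeze column i from A, resample the remaining columns from B.
S = []
for i in range(2):
    ABi = B.copy()
    ABi[:, i] = A[:, i]
    S.append(float(np.mean(fA * (f(ABi) - f(B))) / var))

print([round(s, 2) for s in S])  # approximately [0.2, 0.8]
```

    For an additive function the first-order indices sum to one; interaction terms in the Sobol-Hoeffding decomposition would show up as a deficit in that sum.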

  8. Characteristic eddy decomposition of turbulence in a channel

    NASA Technical Reports Server (NTRS)

    Moin, Parviz; Moser, Robert D.

    1991-01-01

    The proper orthogonal decomposition technique (Lumley's decomposition) is applied to the turbulent flow in a channel to extract coherent structures by decomposing the velocity field into characteristic eddies with random coefficients. In the homogeneous spatial directions, a generalization of the shot-noise expansion is used to determine the characteristic eddies. In this expansion, the Fourier coefficients of the characteristic eddy cannot be obtained from the second-order statistics. Three different techniques are used to determine the phases of these coefficients. They are based on: (1) the bispectrum, (2) a spatial compactness requirement, and (3) a functional continuity argument. Results from these three techniques are found to be similar in most respects. The implications of these techniques and the shot-noise expansion are discussed. The dominant eddy is found to contribute as much as 76 percent of the turbulent kinetic energy. In both 2D and 3D, the characteristic eddies consist of an ejection region straddled by streamwise vortices that leave the wall in the very short streamwise distance of about 100 wall units.

  9. Use of the wavelet transform to investigate differences in brain PET images between patient groups

    NASA Astrophysics Data System (ADS)

    Ruttimann, Urs E.; Unser, Michael A.; Rio, Daniel E.; Rawlings, Robert R.

    1993-06-01

    The suitability of the wavelet transform was studied for the analysis of glucose utilization differences between subject groups as displayed in PET images. To strengthen statistical inference, it was of particular interest to investigate the tradeoff between signal localization and image decomposition into uncorrelated components. This tradeoff is shown to be controlled by wavelet regularity, with the optimal compromise attained by third-order orthogonal spline wavelets. Testing of the ensuing wavelet coefficients identified only about 1.5% as statistically different (p < .05) from noise, which then served to resynthesize the difference images by the inverse wavelet transform. The resulting images displayed relatively uniform, noise-free regions of significant differences with, due to the good localization maintained by the wavelets, very few reconstruction artifacts.
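
    The keep-significant-coefficients-then-invert workflow described above can be sketched with a hand-rolled orthonormal Haar transform, a simpler wavelet than the third-order splines used in the study; the 1D signal, noise level, and threshold below are all made up:

```python
import numpy as np

def haar_fwd(x):
    """Multi-level orthonormal Haar transform of a length-2^k signal."""
    x = x.astype(float).copy()
    coeffs = []
    while len(x) > 1:
        s = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # smooth (approximation)
        d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail
        coeffs.append(d)
        x = s
    coeffs.append(x)                              # final approximation
    return coeffs

def haar_inv(coeffs):
    """Inverse of haar_fwd."""
    x = coeffs[-1]
    for d in reversed(coeffs[:-1]):
        s = x
        x = np.empty(2 * len(s))
        x[0::2] = (s + d) / np.sqrt(2.0)
        x[1::2] = (s - d) / np.sqrt(2.0)
    return x

rng = np.random.default_rng(4)
clean = np.repeat([0.0, 4.0, 1.0, 3.0], 64)      # piecewise-constant "signal"
noisy = clean + 0.3 * rng.standard_normal(clean.size)

# Keep only detail coefficients exceeding a significance threshold, zero the
# rest, then resynthesize by the inverse transform.
coeffs = haar_fwd(noisy)
thr = 1.0
den = [np.where(np.abs(c) > thr, c, 0.0) for c in coeffs[:-1]] + [coeffs[-1]]
denoised = haar_inv(den)

print(np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2))  # True
```

    Because the transform is orthonormal, discarding small coefficients removes mostly noise energy, which is the same rationale behind keeping only the statistically significant 1.5% of coefficients in the PET study.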

  10. Reflection of Lamb waves obliquely incident on the free edge of a plate.

    PubMed

    Santhanam, Sridhar; Demirli, Ramazan

    2013-01-01

    The reflection of obliquely incident symmetric and anti-symmetric Lamb wave modes at the edge of a plate is studied. Both in-plane and Shear-Horizontal (SH) reflected wave modes are spawned by an obliquely incident in-plane Lamb wave mode. Energy reflection coefficients are calculated for the reflected wave modes as a function of frequency and angle of incidence. This is done by using the method of orthogonal mode decomposition and by enforcing traction free conditions at the plate edge using the method of collocation. A PZT sensor network, affixed to an Aluminum plate, is used to experimentally verify the predictions of the analysis. Experimental results provide support for the analytically determined results. Copyright © 2012 Elsevier B.V. All rights reserved.

  11. Identification of Coherent Structure Dynamics in Wall-Bounded Sprays using Proper Orthogonal Decomposition

    DTIC Science & Technology

    2010-08-31

    Wall interaction of sprays emanating from Gas Centered Swirl Coaxial (GCSC) injectors were experimentally studied as a part of this ten-week project. A...American Society of Engineering Education (ASEE) Dated August 31st 2010 Abstract Wall interaction of sprays emanating from Gas Centered...Edwards Air Force Base (AFRL/EAFB) have documented atomization characteristics of a Gas -Centered Swirl Coaxial (GCSC) injector [1-2], in which the

  12. Radar Measurements of Ocean Surface Waves using Proper Orthogonal Decomposition

    DTIC Science & Technology

    2017-03-30

    rely on use of Fourier transforms (FFT) and filtering spectra on the linear dispersion relationship for ocean surface waves. This report discusses...the measured signal (e.g., Young et al., 1985). In addition, the methods often rely on filtering the FFT of radar backscatter or Doppler velocities...to those obtained with conventional FFT and dispersion curve filtering techniques (iv) Compare both results of(iii) to ground truth sensors (i .e

  13. Development of a suitcase time-of-flight mass spectrometer for in situ fault diagnosis of SF6 -insulated switchgear by detection of decomposition products.

    PubMed

    Hou, Keyong; Li, Jinxu; Qu, Tuanshuai; Tang, Bin; Zhu, Liping; Huang, Yunguang; Li, Haiyang

    2016-08-01

    Sulfur hexafluoride (SF6) gas-insulated switchgear (GIS) is an essential piece of electrical equipment in a substation, and the concentrations of SF6 decomposition products are directly relevant to the security and reliability of the substation. Detection of SF6 decomposition products can therefore be used to diagnose the condition of the GIS. The decomposition products SO2, SO2F2, and SOF2 were selected as indicators for the diagnosis. A suitcase time-of-flight mass spectrometer (TOFMS) with an orthogonal-acceleration reflectron mass analyzer was designed to perform online GIS failure analysis. An RF VUV lamp was used as the photoionization source; the sampling inlet, ion einzel lens, and vacuum system were carefully designed to improve the performance. The limit of detection (LOD) for SO2 and SO2F2 within 200 s was 1 ppm, and the sensitivity was estimated to be at least 10-fold higher than that of the previous design. Calibration curves for SO2 and SO2F2 in the range of 5-100 ppm showed excellent linearity, with correlation coefficients R(2) of 0.9951 and 0.9889, respectively. The instrument measures 663 × 496 × 338 mm, weighs 34 kg including the battery, and consumes only 70 W. The suitcase TOFMS was applied to analyze real decomposition products of SF6 inside a GIS and succeeded in revealing hidden faults. The suitcase TOFMS has wide application prospects for early warning of GIS failure. Copyright © 2016 John Wiley & Sons, Ltd.

  14. Towards reduced order modelling for predicting the dynamics of coherent vorticity structures within wind turbine wakes

    NASA Astrophysics Data System (ADS)

    Debnath, M.; Santoni, C.; Leonardi, S.; Iungo, G. V.

    2017-03-01

    The dynamics of the velocity field resulting from the interaction between the atmospheric boundary layer and a wind turbine array can significantly affect the performance of a wind power plant and the durability of wind turbines. In this work, dynamics in wind turbine wakes and instabilities of helicoidal tip vortices are detected and characterized through modal decomposition techniques. The dataset under examination consists of snapshots of the velocity field obtained from large-eddy simulations (LES) of an isolated wind turbine, for which the aerodynamic forcing exerted by the turbine blades on the atmospheric boundary layer is mimicked through the actuator line model. Particular attention is paid to the interaction between the downstream evolution of the helicoidal tip vortices and the alternate vortex shedding from the turbine tower. The LES dataset is interrogated through different modal decomposition techniques, such as proper orthogonal decomposition and dynamic mode decomposition. The dominant wake dynamics are selected for the formulation of a reduced order model, which consists of a linear time-marching algorithm where the temporal evolution of the flow dynamics is obtained from the previous temporal realization multiplied by a time-invariant operator. This article is part of the themed issue 'Wind energy in complex terrains'.

  15. On bipartite pure-state entanglement structure in terms of disentanglement

    NASA Astrophysics Data System (ADS)

    Herbut, Fedor

    2006-12-01

    Schrödinger's disentanglement [E. Schrödinger, Proc. Cambridge Philos. Soc. 31, 555 (1935)], i.e., remote state decomposition, as a physical way to study entanglement, is carried one step further with respect to previous work in investigating the qualitative side of entanglement in any bipartite state vector. Remote measurement (or, equivalently, remote orthogonal state decomposition) from previous work is generalized to remote linearly independent complete state decomposition both in the nonselective and the selective versions. The results are displayed in terms of commutative square diagrams, which show the power and beauty of the physical meaning of the (antiunitary) correlation operator inherent in the given bipartite state vector. This operator, together with the subsystem states (reduced density operators), constitutes the so-called correlated subsystem picture. It is the central part of the antilinear representation of a bipartite state vector, and it is a kind of core of its entanglement structure. The generalization of previously elaborated disentanglement expounded in this article is a synthesis of the antilinear representation of bipartite state vectors, which is reviewed, and the relevant results of [Cassinelli et al., J. Math. Anal. Appl. 210, 472 (1997)] in mathematical analysis, which are summed up. Linearly independent bases (finite or infinite) are shown to be almost as useful in some quantum mechanical studies as orthonormal ones. Finally, it is shown that linearly independent remote pure-state preparation carries the highest probability of occurrence. This singles out linearly independent remote influence from all possible ones.

  16. A non-orthogonal decomposition of flows into discrete events

    NASA Astrophysics Data System (ADS)

    Boxx, Isaac; Lewalle, Jacques

    1998-11-01

    This work is based on the formula for the inverse Hermitian wavelet transform. A signal can be interpreted as a (non-unique) superposition of near-singular, partially overlapping events arising from Dirac functions and/or their derivatives combined with diffusion. (No dynamics is implied: the dimensionless diffusion is related to the definition of the analyzing wavelets.) These events correspond to local maxima of spectral energy density. We successfully fitted model events of various orders on a succession of fields, ranging from elementary signals to one-dimensional hot-wire traces. We document edge effects, event overlap and its implications for the algorithm. The interpretation of the discrete singularities as flow events (such as coherent structures) and the fundamental non-uniqueness of the decomposition are discussed. The dynamics of these events will be examined in the companion paper.

  17. Non invasive transcostal focusing based on the decomposition of the time reversal operator: in vitro validation

    NASA Astrophysics Data System (ADS)

    Cochard, Étienne; Prada, Claire; Aubry, Jean-François; Fink, Mathias

    2010-03-01

    Thermal ablation induced by high intensity focused ultrasound has produced promising clinical results in the treatment of hepatocarcinoma and other liver tumors. However, skin burns have been reported due to the high absorption of ultrasonic energy by the ribs. This study proposes a method to produce an acoustic field focusing on a chosen target while sparing the ribs, using the decomposition of the time-reversal operator (DORT method). The idea is to apply to the transducer array an excitation weight vector which is orthogonal to the subspace of emissions focusing on the ribs. The ratio of the energies absorbed at the focal point and on the ribs has been enhanced up to 100-fold, as demonstrated by the measured specific absorption rates.
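
    The core linear-algebra step, constraining the excitation to be orthogonal to the subspace of emissions that focus on the ribs, is an orthogonal-complement projection. A sketch with invented array responses (in the real method the rib subspace comes from the decomposition of the measured time-reversal operator, not from random vectors):

```python
import numpy as np

rng = np.random.default_rng(5)

n_elems = 32
# Hypothetical orthonormal emission vectors that concentrate energy on the
# ribs (stand-ins for dominant singular vectors of the time-reversal operator).
rib_modes, _ = np.linalg.qr(rng.standard_normal((n_elems, 3)) +
                            1j * rng.standard_normal((n_elems, 3)))

# Desired excitation focusing on the target (also invented here).
target = rng.standard_normal(n_elems) + 1j * rng.standard_normal(n_elems)

# Project onto the orthogonal complement of the rib subspace:
# w = (I - R R^H) target, so w has no component along any rib mode.
w = target - rib_modes @ (rib_modes.conj().T @ target)

print(np.allclose(rib_modes.conj().T @ w, 0.0))  # True
```

    Transmitting with w instead of target sacrifices a little focal gain but, by construction, sends no energy into the emission patterns that heat the ribs.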

  18. Catalytic spectrophotometric determination of iodine in coal by pyrohydrolysis decomposition.

    PubMed

    Wu, Daishe; Deng, Haiwen; Wang, Wuyi; Xiao, Huayun

    2007-10-10

    A method for the determination of iodine in coal using pyrohydrolysis for sample decomposition is proposed. A pyrohydrolysis apparatus was constructed, and the procedure was designed to burn and hydrolyse coal steadily and completely. The parameters of pyrohydrolysis were optimized through an orthogonal experimental design. Iodine in the absorption solution was evaluated by the catalytic spectrophotometric method, and the absorbance at 420 nm was measured by a double-beam UV-visible spectrophotometer. The limits of detection and quantification of the proposed method were 0.09 microg g(-1) and 0.29 microg g(-1), respectively. After analysing some Chinese soil reference materials (SRMs), reasonable agreement was found between the measured values and the certified values. The accuracy of this approach was confirmed by the analysis of eight coals spiked with SRMs, with recoveries from 94.97 to 109.56% and a mean of 102.58%. Six repeated tests were conducted for eight coal samples, including high-sulfur coal and high-fluorine coal. Good repeatability was obtained, with relative standard deviations from 2.88 to 9.52%, averaging 5.87%. With such benefits as simplicity, precision, accuracy and economy, this approach meets the requirements on the limits of detection and quantification for analysing iodine in coal, and hence is highly suitable for routine analysis.

  19. Sampling considerations for modal analysis with damping

    NASA Astrophysics Data System (ADS)

    Park, Jae Young; Wakin, Michael B.; Gilbert, Anna C.

    2015-03-01

    Structural health monitoring (SHM) systems are critical for monitoring aging infrastructure (such as buildings or bridges) in a cost-effective manner. Wireless sensor networks that sample vibration data over time are particularly appealing for SHM applications due to their flexibility and low cost. However, in order to extend the battery life of wireless sensor nodes, it is essential to minimize the amount of vibration data these sensors must collect and transmit. In recent work, we have studied the performance of the Singular Value Decomposition (SVD) applied to the collection of data and provided new finite sample analysis characterizing conditions under which this simple technique (also known as the Proper Orthogonal Decomposition, POD) can correctly estimate the mode shapes of the structure. Specifically, we provided theoretical guarantees on the number and duration of samples required in order to estimate a structure's mode shapes to a desired level of accuracy. In that previous work, however, we considered simplified Multiple-Degree-Of-Freedom (MDOF) systems with no damping. In this paper we consider MDOF systems with proportional damping and show that, with sufficiently light damping, the POD can continue to provide accurate estimates of a structure's mode shapes. We support our discussion with new analytical insight and experimental demonstrations. In particular, we study the tradeoffs between the level of damping, the sampling rate and duration, and the accuracy to which the structure's mode shapes can be estimated.
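
    The claim that the POD/SVD recovers mode shapes under light damping can be checked on a synthetic two-sensor free response; the mode shapes, frequencies, and damping ratio below are arbitrary illustrative values, not from the paper:

```python
import numpy as np

# Two-DOF free response synthesized from two lightly damped modes with
# orthogonal mode shapes (a stand-in for measured vibration data).
t = np.arange(0.0, 10.0, 0.01)
phi1 = np.array([1.0, 1.0]) / np.sqrt(2)
phi2 = np.array([1.0, -1.0]) / np.sqrt(2)
w1, w2, zeta = 2 * np.pi * 2.0, 2 * np.pi * 9.0, 0.01
X = (np.outer(phi1, np.exp(-zeta * w1 * t) * np.cos(w1 * t)) +
     0.6 * np.outer(phi2, np.exp(-zeta * w2 * t) * np.cos(w2 * t)))

# POD / SVD of the snapshot matrix: with light damping and well-separated
# frequencies, the left singular vectors approximate the mode shapes.
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Modal Assurance Criterion (MAC) between estimated and true shapes;
# MAC = 1 means perfect agreement up to sign.
mac = lambda a, b: float(np.abs(a @ b) ** 2 / ((a @ a) * (b @ b)))
print(round(mac(U[:, 0], phi1), 3), round(mac(U[:, 1], phi2), 3))
```

    Increasing `zeta` mixes the modal responses and degrades the MAC values, which is the damping/accuracy tradeoff the paper quantifies.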

  20. Empirical Orthogonal Function (EOF) Analysis of Storm-Time GPS Total Electron Content Variations

    NASA Astrophysics Data System (ADS)

    Thomas, E. G.; Coster, A. J.; Zhang, S.; McGranaghan, R. M.; Shepherd, S. G.; Baker, J. B.; Ruohoniemi, J. M.

    2016-12-01

    Large perturbations in ionospheric density are known to occur during geomagnetic storms triggered by dynamic structures in the solar wind. These ionospheric storm effects have long attracted interest due to their impact on the propagation characteristics of radio wave communications. Over the last two decades, maps of vertically-integrated total electron content (TEC) based on data collected by worldwide networks of Global Positioning System (GPS) receivers have dramatically improved our ability to monitor the spatiotemporal dynamics of prominent storm-time features such as polar cap patches and storm enhanced density (SED) plumes. In this study, we use an empirical orthogonal function (EOF) decomposition technique to identify the primary modes of spatial and temporal variability in the storm-time GPS TEC response at midlatitudes over North America during more than 100 moderate geomagnetic storms from 2001-2013. We next examine the resulting time-varying principal components and their correlation with various geophysical indices and parameters in order to derive an analytical representation. Finally, we use a truncated reconstruction of the EOF basis functions and parameterization of the principal components to produce an empirical representation of the geomagnetic storm-time response of GPS TEC for all magnetic local times and seasons at midlatitudes in the North American sector.

  1. On Statistics of Bi-Orthogonal Eigenvectors in Real and Complex Ginibre Ensembles: Combining Partial Schur Decomposition with Supersymmetry

    NASA Astrophysics Data System (ADS)

    Fyodorov, Yan V.

    2018-06-01

    We suggest a method of studying the joint probability density (JPD) of an eigenvalue and the associated `non-orthogonality overlap factor' (also known as the `eigenvalue condition number') of the left and right eigenvectors for non-selfadjoint Gaussian random matrices of size N × N. First we derive the general finite-N expression for the JPD of a real eigenvalue λ and the associated non-orthogonality factor in the real Ginibre ensemble, and then analyze its `bulk' and `edge' scaling limits. The ensuing distribution is maximally heavy-tailed, so that all integer moments beyond normalization are divergent. A similar calculation for a complex eigenvalue z and the associated non-orthogonality factor in the complex Ginibre ensemble is presented as well and yields a distribution with a finite first moment. Its `bulk' scaling limit yields a distribution whose first moment reproduces the well-known result of Chalker and Mehlig (Phys Rev Lett 81(16):3367-3370, 1998), and we provide the `edge' scaling distribution for this case as well. Our method involves evaluating the ensemble average of products and ratios of integer and half-integer powers of characteristic polynomials for Ginibre matrices, which we perform in the framework of a supersymmetry approach. Our paper complements recent studies by Bourgade and Dubach (The distribution of overlaps between eigenvectors of Ginibre matrices, 2018. arXiv:1801.01219).

  2. Parsimonious extreme learning machine using recursive orthogonal least squares.

    PubMed

    Wang, Ning; Er, Meng Joo; Han, Min

    2014-10-01

    Novel constructive and destructive parsimonious extreme learning machines (CP- and DP-ELM) are proposed in this paper. By virtue of the proposed ELMs, parsimonious structure and excellent generalization of multi-input multi-output single hidden-layer feedforward networks (SLFNs) are obtained. The proposed ELMs are developed by innovative decomposition of the recursive orthogonal least squares procedure into sequential partial orthogonalization (SPO). The salient features of the proposed approaches are as follows: 1) initial hidden nodes are randomly generated by the ELM methodology and recursively orthogonalized into an upper triangular matrix with dramatic reduction in matrix size; 2) the constructive SPO in the CP-ELM focuses on the partial matrix with the subcolumn of the selected regressor including nonzeros as the first column, while the destructive SPO in the DP-ELM operates on the partial matrix including elements determined by the removed regressor; 3) termination criteria for CP- and DP-ELM are simplified by the additional residual error reduction method; and 4) the output weights of the SLFN need not be solved during the model selection procedure but are instead derived from the final upper triangular equation by backward substitution. Both single- and multi-output real-world regression data sets are used to verify the effectiveness and superiority of the CP- and DP-ELM in terms of parsimonious architecture and generalization accuracy. Innovative applications to nonlinear time-series modeling demonstrate superior identification results.

  3. An adaptive proper orthogonal decomposition method for model order reduction of multi-disc rotor system

    NASA Astrophysics Data System (ADS)

    Jin, Yulin; Lu, Kuan; Hou, Lei; Chen, Yushu

    2017-12-01

    The proper orthogonal decomposition (POD) method is a principal and efficient tool for order reduction of high-dimensional complex systems in many research fields. However, the robustness problem of this method remains unsolved, although several modified POD methods have been proposed to address it. In this paper, a new adaptive POD method called the interpolation Grassmann manifold (IGM) method is proposed to address the weakness of the local character of the interpolation tangent-space of Grassmann manifold (ITGM) method in a wider parametric region. The method is demonstrated on a nonlinear rotor system of 33 degrees of freedom (DOFs) with a pair of liquid-film bearings and a pedestal looseness fault. The motion region of the rotor system is divided into two parts: a simple motion region and a complex motion region. The adaptive POD method is compared with the ITGM method for large and small parameter spans in the two parametric regions to show the advantage of this method and the disadvantage of the ITGM method. Comparisons of the responses verify the accuracy and robustness of the adaptive POD method, and its computational efficiency is also analyzed. As a result, the new adaptive POD method exhibits strong robustness along with high computational efficiency and accuracy over a wide range of parameters.

  4. Localized Glaucomatous Change Detection within the Proper Orthogonal Decomposition Framework

    PubMed Central

    Balasubramanian, Madhusudhanan; Kriegman, David J.; Bowd, Christopher; Holst, Michael; Weinreb, Robert N.; Sample, Pamela A.; Zangwill, Linda M.

    2012-01-01

    Purpose. To detect localized glaucomatous structural changes using proper orthogonal decomposition (POD) framework with false-positive control that minimizes confirmatory follow-ups, and to compare the results to topographic change analysis (TCA). Methods. We included 167 participants (246 eyes) with ≥4 Heidelberg Retina Tomograph (HRT)-II exams from the Diagnostic Innovations in Glaucoma Study; 36 eyes progressed by stereo-photographs or visual fields. All other patient eyes (n = 210) were non-progressing. Specificities were evaluated using 21 normal eyes. Significance of change at each HRT superpixel between each follow-up and its nearest baseline (obtained using POD) was estimated using mixed-effects ANOVA. Locations with significant reduction in retinal height (red pixels) were determined using Bonferroni, Lehmann-Romano k-family-wise error rate (k-FWER), and Benjamini-Hochberg false discovery rate (FDR) type I error control procedures. Observed positive rate (OPR) in each follow-up was calculated as a ratio of number of red pixels within disk to disk size. Progression by POD was defined as one or more follow-ups with OPR greater than the anticipated false-positive rate. TCA was evaluated using the recently proposed liberal, moderate, and conservative progression criteria. Results. Sensitivity in progressors, specificity in normals, and specificity in non-progressors, respectively, were POD-Bonferroni = 100%, 0%, and 0%; POD k-FWER = 78%, 86%, and 43%; POD-FDR = 78%, 86%, and 43%; POD k-FWER with retinal height change ≥50 μm = 61%, 95%, and 60%; TCA-liberal = 86%, 62%, and 21%; TCA-moderate = 53%, 100%, and 70%; and TCA-conservative = 17%, 100%, and 84%. Conclusions. With a stronger control of type I errors, k-FWER in POD framework minimized confirmatory follow-ups while providing diagnostic accuracy comparable to TCA. 
Thus, POD with k-FWER shows promise to reduce the number of confirmatory follow-ups required for clinical care and studies evaluating new glaucoma treatments. (ClinicalTrials.gov number, NCT00221897.) PMID:22491406
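For readers unfamiliar with the type I error control procedures compared above, the Benjamini-Hochberg FDR step-up rule is short enough to state in code; the p-values in the example are made up for illustration.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up FDR procedure: boolean rejection mask."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m          # k*q/m for k = 1..m
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        kmax = np.max(np.nonzero(below)[0])       # largest k with p_(k) <= k*q/m
        reject[order[: kmax + 1]] = True          # reject all smaller p-values
    return reject

rejections = benjamini_hochberg([0.01, 0.02, 0.03, 0.5, 0.6], q=0.1)
```

Here the three smallest p-values fall under their step-up thresholds (0.02, 0.04, 0.06) and are rejected; the k-FWER procedure used in the study is a stricter relative of this rule.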

  5. Useful lower limits to polarization contributions to intermolecular interactions using a minimal basis of localized orthogonal orbitals: theory and analysis of the water dimer.

    PubMed

    Azar, R Julian; Horn, Paul Richard; Sundstrom, Eric Jon; Head-Gordon, Martin

    2013-02-28

    The problem of describing the energy-lowering associated with polarization of interacting molecules is considered in the overlapping regime for self-consistent field wavefunctions. The existing approach of solving for absolutely localized molecular orbital (ALMO) coefficients that are block-diagonal in the fragments is shown based on formal grounds and practical calculations to often overestimate the strength of polarization effects. A new approach using a minimal basis of polarized orthogonal local MOs (polMOs) is developed as an alternative. The polMO basis is minimal in the sense that one polarization function is provided for each unpolarized orbital that is occupied; such an approach is exact in second-order perturbation theory. Based on formal grounds and practical calculations, the polMO approach is shown to underestimate the strength of polarization effects. In contrast to the ALMO method, however, the polMO approach yields results that are very stable to improvements in the underlying AO basis expansion. Combining the ALMO and polMO approaches allows an estimate of the range of energy-lowering due to polarization. Extensive numerical calculations on the water dimer using a large range of basis sets with Hartree-Fock theory and a variety of different density functionals illustrate the key considerations. Results are also presented for the polarization-dominated Na(+)CH4 complex. Implications for energy decomposition analysis of intermolecular interactions are discussed.

  6. Approaches for Subgrid Parameterization: Does Scaling Help?

    NASA Astrophysics Data System (ADS)

    Yano, Jun-Ichi

    2016-04-01

    Arguably, scaling behavior is a well-established fact in many geophysical systems, and there are already many theoretical studies elucidating this issue. However, the scaling law has been slow to be introduced into "operational" geophysical modelling, notably in weather forecast and climate projection models. The main purpose of this presentation is to ask why, and to try to answer this question. As a reference point, the presentation reviews the three major approaches to traditional subgrid parameterization: moment, PDF (probability density function), and mode decomposition. The moment expansion is a standard method for describing subgrid-scale turbulent flows in both the atmosphere and the oceans. The PDF approach is intuitively appealing, as it deals with the distribution of subgrid-scale variables in a more direct manner. The third category, originally proposed by Aubry et al. (1988) in the context of wall boundary-layer turbulence, is specifically designed to represent coherencies in a compact manner by a low-dimensional dynamical system. Their original proposal adopts the proper orthogonal decomposition (POD, or empirical orthogonal functions, EOF) as the mode-decomposition basis. However, the methodology can easily be generalized to any decomposition basis. The mass-flux formulation currently adopted in the majority of atmospheric models for parameterizing convection can also be considered a special case of mode decomposition, adopting segmentally-constant modes as the expansion basis. The mode decomposition can, furthermore, be re-interpreted as a type of Galerkin approach for numerically modelling subgrid-scale processes. Simple extrapolation of this re-interpretation further suggests that the subgrid parameterization problem may be re-interpreted as a type of mesh-refinement problem in numerical modelling. We furthermore see a link between the subgrid parameterization and downscaling problems along this line. 
The mode decomposition approach would also be the best framework for linking the traditional parameterizations with the scaling perspectives. However, by seeing the link more clearly, we also see the strengths and weaknesses of introducing the scaling perspectives into parameterizations. Any diagnosis under a mode decomposition would immediately reveal the power-law nature of the spectrum. However, exploiting this knowledge in an operational parameterization is a different story. It is symbolic that POD studies have focused on representing the largest-scale coherency within a grid box under a high truncation; this problem is already hard enough. Looked at differently, the scaling law is a very concise way of characterizing many subgrid-scale variabilities in systems. We may even argue that the scaling law can provide almost complete subgrid-scale information for constructing a parameterization, but with a major missing link: its amplitude must be specified by an additional condition. This condition is called the "closure" in the parameterization problem, and it is known to be a tough problem. We should also realize that studies of scaling behavior tend to be statistical, in the sense that they hardly provide complete information for constructing a parameterization: can we specify the coefficients of all the decomposition modes perfectly by a scaling law when the first few leading modes are specified? Arguably, the renormalization group (RNG) is a very powerful tool for reducing a system with scaling behavior to a low dimension, say, under an appropriate mode decomposition procedure. However, the RNG is an analytical tool: it is extremely hard to apply to real, complex geophysical systems. It appears that there is still a long way to go before we can exploit the scaling law to construct operational subgrid parameterizations in an effective manner.

  7. Experimental investigation of the dynamics of a hybrid morphing wing: time resolved particle image velocimetry and force measures

    NASA Astrophysics Data System (ADS)

    Jodin, Gurvan; Scheller, Johannes; Rouchon, Jean-François; Braza, Marianna; Mit Collaboration; Imft Collaboration; Laplace Collaboration

    2016-11-01

    A quantitative characterization of the effects obtained by high frequency-low amplitude trailing edge actuation is performed. Particle image velocimetry, as well as pressure and aerodynamic force measurements, are carried out on an airfoil model. This hybrid morphing wing model is equipped with both trailing edge piezoelectric-actuators and camber control shape memory alloy actuators. It will be shown that this actuation allows for an effective manipulation of the wake turbulent structures. Frequency domain analysis and proper orthogonal decomposition show that proper actuating reduces the energy dissipation by favoring more coherent vortical structures. This modification in the airflow dynamics eventually allows for a tapering of the wake thickness compared to the baseline configuration. Hence, drag reductions relative to the non-actuated trailing edge configuration are observed. Massachusetts Institute of Technology.

  8. Motions, efforts and actuations in constrained dynamic systems: a multi-link open-chain example

    NASA Astrophysics Data System (ADS)

    Duke Perreira, N.

    1999-08-01

    The effort-motion method, which describes the dynamics of open- and closed-chain topologies of rigid bodies interconnected with revolute and prismatic pairs, is interpreted geometrically. Systems are identified for which the simultaneous control of forces and velocities is desirable, and a representative open-chain system is selected for use in the ensuing analysis. Gauge invariant transformations are used to recast the commonly used kinetic and kinematic equations into a dimensional gauge invariant form. Constraint elimination techniques based on singular value decompositions then recast the invariant equations into orthogonal and reciprocal sets of motion and effort equations written in state variable form. The ideal actuation is found that simultaneously achieves the obtainable portions of the desired constraining efforts and motions. The performance obtained by using the actuation closest to the ideal one is then evaluated.

  9. Dual domain watermarking for authentication and compression of cultural heritage images.

    PubMed

    Zhao, Yang; Campisi, Patrizio; Kundur, Deepa

    2004-03-01

    This paper proposes an approach for the combined image authentication and compression of color images by making use of a digital watermarking and data hiding framework. The digital watermark is comprised of two components: a soft-authenticator watermark for authentication and tamper assessment of the given image, and a chrominance watermark employed to improve the efficiency of compression. The multipurpose watermark is designed by exploiting the orthogonality of various domains used for authentication, color decomposition and watermark insertion. The approach is implemented as a DCT-DWT dual domain algorithm and is applied for the protection and compression of cultural heritage imagery. Analysis is provided to characterize the behavior of the scheme under ideal conditions. Simulations and comparisons of the proposed approach with state-of-the-art existing work demonstrate the potential of the overall scheme.

  10. Micropolar continuum modelling of bi-dimensional tetrachiral lattices

    PubMed Central

    Chen, Y.; Liu, X. N.; Hu, G. K.; Sun, Q. P.; Zheng, Q. S.

    2014-01-01

    The in-plane behaviour of tetrachiral lattices should be characterized by bi-dimensional orthotropic material owing to the existence of two orthogonal axes of rotational symmetry. Moreover, the constitutive model must also represent the chirality inherent in the lattices. To this end, a bi-dimensional orthotropic chiral micropolar model is developed based on the theory of irreducible orthogonal tensor decomposition. The obtained constitutive tensors display a hierarchy structure depending on the symmetry of the underlying microstructure. Eight additional material constants, in addition to five for the hemitropic case, are introduced to characterize the anisotropy under Z2 invariance. The developed continuum model is then applied to a tetrachiral lattice, and the material constants of the continuum model are analytically derived by a homogenization process. By comparing with numerical simulations for the discrete lattice, it is found that the proposed continuum model can correctly characterize the static and wave properties of the tetrachiral lattice. PMID:24808754

  11. Surface treatment process of Al-Mg alloy powder by BTSPS

    NASA Astrophysics Data System (ADS)

    Zhao, Ran; Gao, Xinbao; Lu, Yanling; Du, Fengzhen; Zhang, Li; Liu, Dazhi; Chen, Xuefang

    2018-04-01

    The surface of Al-Mg alloy powder was treated with BTSPS (bis(triethoxysilylpropyl)tetrasulfide) in order to avoid easy oxidation in air. The pH value, reaction temperature, reaction time, and reaction concentration were used as test conditions. The results show that BTSPS can form a protective film on the surface of the Al-Mg alloy powder. The best treatment was selected by an orthogonal test. The study found that the reaction time and reaction temperature have the biggest influence on the two indexes of the orthogonal test (melting enthalpy and oxidation enthalpy). The optimal conditions were as follows: pH value of 8, reaction concentration of 2%, reaction temperature of 25 °C, and reaction time of 2 h. The oxidation weight gain of the alloy reached 74.45% and the decomposition temperature of the silane film is 181.8 °C.

  12. The Zernike expansion--an example of a merit function for 2D/3D registration based on orthogonal functions.

    PubMed

    Dong, Shuo; Kettenbach, Joachim; Hinterleitner, Isabella; Bergmann, Helmar; Birkfellner, Wolfgang

    2008-01-01

    Current merit functions for 2D/3D registration usually rely on comparing pixels or small regions of images using some sort of statistical measure. Problems connected to this paradigm include the sometimes problematic behaviour of the method when noise or artefacts (for instance a guide wire) are present in the projective image. We present a merit function for 2D/3D registration which utilizes the decomposition of the X-ray and the DRR under comparison into orthogonal Zernike moments; the quality of the match is assessed by an iterative comparison of expansion coefficients. Results of an imaging study on a physical phantom show that, compared to standard cross-correlation, the Zernike-moment-based merit function is more robust when the histogram content of the images under comparison differs, and that time expenses are comparable if the merit function is constructed from only a few significant moments.

  13. Multiscale techniques for parabolic equations.

    PubMed

    Målqvist, Axel; Persson, Anna

    2018-01-01

    We use the local orthogonal decomposition technique introduced in Målqvist and Peterseim (Math Comput 83(290):2583-2603, 2014) to derive a generalized finite element method for linear and semilinear parabolic equations with spatial multiscale coefficients. We consider nonsmooth initial data and a backward Euler scheme for the temporal discretization. Optimal order convergence rate, depending only on the contrast, but not on the variations of the coefficients, is proven in the [Formula: see text]-norm. We present numerical examples, which confirm our theoretical findings.

  14. A stabilized MFE reduced-order extrapolation model based on POD for the 2D unsteady conduction-convection problem.

    PubMed

    Xia, Hong; Luo, Zhendong

    2017-01-01

    In this study, we establish a stabilized mixed finite element (MFE) reduced-order extrapolation (SMFEROE) model containing very few unknowns for the two-dimensional (2D) unsteady conduction-convection problem via the proper orthogonal decomposition (POD) technique. We analyze the existence, uniqueness, stability, and convergence of the SMFEROE solutions, and validate the correctness and dependability of the SMFEROE model by means of numerical simulations.

  15. Errors from approximation of ODE systems with reduced order models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vassilevska, Tanya

    2016-12-30

    This code calculates the error incurred when approximating systems of ordinary differential equations (ODEs) with Proper Orthogonal Decomposition (POD) Reduced Order Model (ROM) methods, and compares and analyzes the errors for two POD ROM variants. The first variant is the standard POD ROM; the second variant is a modification of the method using the values of the time derivatives (a.k.a. time-derivative snapshots). The code compares the errors from the two variants under different conditions.
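As background to the error comparison performed by this code: for the snapshot-reconstruction part of a POD ROM (as opposed to the time-integration error of the ROM itself), the Eckart-Young theorem pins the Frobenius-norm error of a rank-r truncation down exactly as the energy in the discarded singular values. This makes a handy sanity check when validating such a code; the snapshot matrix below is random, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
S = rng.standard_normal((30, 100))        # stand-in snapshot matrix (state x time)
U, s, Vt = np.linalg.svd(S, full_matrices=False)

r = 5
S_r = (U[:, :r] * s[:r]) @ Vt[:r]         # rank-r POD reconstruction
err = np.linalg.norm(S - S_r)             # Frobenius-norm truncation error
tail = np.sqrt(np.sum(s[r:] ** 2))        # energy in the discarded modes
```

By Eckart-Young, `err` and `tail` agree to machine precision, and no rank-r approximation can do better in this norm.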

  16. A theoretical formulation of wave-vortex interactions

    NASA Technical Reports Server (NTRS)

    Wu, J. Z.; Wu, J. M.

    1989-01-01

    A unified theoretical formulation for wave-vortex interaction, designated the '(omega, Pi) framework,' is presented. Based on the orthogonal decomposition of fluid dynamic interactions, the formulation can be used to study a variety of problems, including the interaction of a longitudinal (acoustic) wave and/or transverse (vortical) wave with a main vortex flow. Moreover, the formulation permits a unified treatment of wave-vortex interaction at various approximate levels, where the normal 'piston' process and tangential 'rubbing' process can be approximated differently.

  17. Definition of a parametric form of nonsingular Mueller matrices.

    PubMed

    Devlaminck, Vincent; Terrier, Patrick

    2008-11-01

    The goal of this paper is to propose a mathematical framework to define and analyze a general parametric form of an arbitrary nonsingular Mueller matrix. Starting from previous results about nondepolarizing matrices, we generalize the method to any nonsingular Mueller matrix. We address this problem in a six-dimensional space in order to introduce a transformation group with the same number of degrees of freedom, and explain why subsets of O(5,1), the orthogonal group associated with six-dimensional Minkowski space, are a physically admissible solution to this question. Generators of this group are used to define possible expressions of an arbitrary nonsingular Mueller matrix. Ultimately, the problem of decomposition of these matrices is addressed, and we point out that the "reverse" and "forward" decomposition concepts recently introduced may be inferred from the formalism we propose.

  18. Fast algorithm of adaptive Fourier series

    NASA Astrophysics Data System (ADS)

    Gao, You; Ku, Min; Qian, Tao

    2018-05-01

    Adaptive Fourier decomposition (AFD, precisely 1-D AFD or Core-AFD) was originated for the goal of positive frequency representations of signals. It achieved the goal and at the same time offered fast decompositions of signals. There then arose several types of AFDs. AFD merged with the greedy algorithm idea and, in particular, motivated the so-called pre-orthogonal greedy algorithm (Pre-OGA) that was proven to be the most efficient greedy algorithm. The cost of the advantages of the AFD type decompositions is, however, the high computational complexity due to the involvement of maximal selections of the dictionary parameters. The present paper offers one formulation of the 1-D AFD algorithm by building the FFT algorithm into it. Accordingly, the algorithm complexity is reduced from the original $\mathcal{O}(M N^2)$ to $\mathcal{O}(M N\log_2 N)$, where $N$ denotes the number of the discretization points on the unit circle and $M$ denotes the number of points in $[0,1)$. This greatly enhances the applicability of AFD. Experiments are carried out to show the high efficiency of the proposed algorithm.

  19. Polar decomposition for attitude determination from vector observations

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, Itzhack Y.

    1993-01-01

    This work treats the problem of weighted least squares fitting of a 3D Euclidean-coordinate transformation matrix to a set of unit vectors measured in the reference and transformed coordinates. A closed-form analytic solution to the problem is re-derived. The fact that the solution is the closest orthogonal matrix to some matrix defined on the measured vectors and their weights is clearly demonstrated. Several known algorithms for computing the analytic closed-form solution are considered. An algorithm is discussed which is based on the polar decomposition of matrices into the closest unitary matrix to the decomposed matrix and a Hermitian matrix. A somewhat longer improved algorithm is suggested too. A comparison of several algorithms is carried out using simulated data as well as real data from the Upper Atmosphere Research Satellite. The comparison is based on accuracy and time consumption. It is concluded that the algorithms based on polar decomposition yield a simple although somewhat less accurate solution. The precision of the latter algorithms increases with the number of measured vectors and with the accuracy of their measurement.
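A minimal numpy sketch of the polar-decomposition idea described above: the closest orthogonal matrix to the weighted attitude profile matrix B = Σᵢ wᵢ bᵢ rᵢᵀ is U Vᵀ from its SVD, with a determinant correction restricting the answer to proper rotations. The rotation, the vectors, the weights, and the noise level below are synthetic, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# A synthetic "true" attitude: a random proper rotation (det = +1).
Qm, _ = np.linalg.qr(rng.standard_normal((3, 3)))
R_true = Qm * np.sign(np.linalg.det(Qm))      # 3x3: flipping sign fixes det

# Unit reference vectors, weights, and noisy body-frame observations.
ref = rng.standard_normal((3, 20))
ref /= np.linalg.norm(ref, axis=0)
w = rng.uniform(0.5, 1.5, 20)
obs = R_true @ ref + 0.001 * rng.standard_normal((3, 20))

# Attitude profile matrix and its polar factor via SVD.
B = (obs * w) @ ref.T
U, s, Vt = np.linalg.svd(B)
R_est = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
```

Without the `det` correction, `U @ Vt` is the closest orthogonal matrix in the Frobenius sense but may be a reflection when the data are noisy or degenerate.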

  20. Towards reduced order modelling for predicting the dynamics of coherent vorticity structures within wind turbine wakes.

    PubMed

    Debnath, M; Santoni, C; Leonardi, S; Iungo, G V

    2017-04-13

    The dynamics of the velocity field resulting from the interaction between the atmospheric boundary layer and a wind turbine array can affect significantly the performance of a wind power plant and the durability of wind turbines. In this work, dynamics in wind turbine wakes and instabilities of helicoidal tip vortices are detected and characterized through modal decomposition techniques. The dataset under examination consists of snapshots of the velocity field obtained from large-eddy simulations (LES) of an isolated wind turbine, for which the aerodynamic forcing exerted by the turbine blades on the atmospheric boundary layer is mimicked through the actuator line model. Particular attention is paid to the interaction between the downstream evolution of the helicoidal tip vortices and the alternate vortex shedding from the turbine tower. The LES dataset is interrogated through different modal decomposition techniques, such as proper orthogonal decomposition and dynamic mode decomposition. The dominant wake dynamics are selected for the formulation of a reduced order model, which consists of a linear time-marching algorithm where the temporal evolution of the flow dynamics is obtained from the previous temporal realization multiplied by a time-invariant operator. This article is part of the themed issue 'Wind energy in complex terrains'. © 2017 The Author(s).
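The dynamic mode decomposition step mentioned above can be sketched as follows. The 2-D damped rotation lifted into a 10-D observation space is a synthetic stand-in for the LES velocity snapshots, chosen so that the recovered DMD eigenvalues are known in advance; only the snapshot-pair / SVD / reduced-operator mechanics reflect standard DMD.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear dynamics x_{k+1} = A2 @ x_k: a damped rotation with
# eigenvalues 0.95 * exp(+/- 0.3i), lifted into 10-D "velocity snapshots".
theta = 0.3
A2 = 0.95 * np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
P = rng.standard_normal((10, 2))               # observation (lifting) map
x = np.array([1.0, 0.0])
snaps = []
for _ in range(50):
    snaps.append(P @ x)
    x = A2 @ x
X = np.column_stack(snaps)

# DMD: fit the linear time-invariant operator mapping each snapshot to the next.
X1, X2 = X[:, :-1], X[:, 1:]
U, s, Vt = np.linalg.svd(X1, full_matrices=False)
r = 2                                          # truncation rank
Ur, sr, Vr = U[:, :r], s[:r], Vt[:r].T
Atilde = Ur.T @ X2 @ Vr / sr                   # reduced operator (r x r)
eigvals = np.linalg.eigvals(Atilde)            # DMD eigenvalues
```

The magnitude of each eigenvalue gives the growth/decay rate of that mode and its angle gives the oscillation frequency per time step, which is how shedding and tip-vortex dynamics are identified in practice.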

  1. QRS analysis using wavelet transformation for the prediction of response to cardiac resynchronization therapy: a prospective pilot study.

    PubMed

    Vassilikos, Vassilios P; Mantziari, Lilian; Dakos, Georgios; Kamperidis, Vasileios; Chouvarda, Ioanna; Chatzizisis, Yiannis S; Kalpidis, Panagiotis; Theofilogiannakos, Efstratios; Paraskevaidis, Stelios; Karvounis, Haralambos; Mochlas, Sotirios; Maglaveras, Nikolaos; Styliadis, Ioannis H

    2014-01-01

    Wider QRS and left bundle branch block morphology are related to response to cardiac resynchronization therapy (CRT). A novel time-frequency analysis of the QRS complex may provide additional information in predicting response to CRT. Signal-averaged electrocardiograms were prospectively recorded, before CRT, in orthogonal leads, and QRS decomposition into three frequency bands was performed using the Morlet wavelet transformation. Thirty-eight patients (age 65±10 years, 31 males) were studied. CRT responders (n=28) had wider baseline QRS compared to non-responders and lower QRS energies in all frequency bands. The combination of QRS duration and mean energy in the high frequency band had the best predicting ability (AUC 0.833, 95%CI 0.705-0.962, p=0.002), followed by the maximum energy in the high frequency band (AUC 0.811, 95%CI 0.663-0.960, p=0.004). Wavelet transformation of the QRS complex is useful in predicting response to CRT. © 2013.

  2. Structure analysis of turbulent liquid phase by POD and LSE techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Munir, S., E-mail: shahzad-munir@comsats.edu.pk; Muthuvalu, M. S.; Siddiqui, M. I.

    2014-10-24

    In this paper, vortical structures and turbulence characteristics of the liquid phase in both single-phase liquid flow and two-phase slug flow in pipes were studied. Two-dimensional velocity vector fields of the liquid phase were obtained by particle image velocimetry (PIV). Two cases were considered: single-phase liquid flow at 80 l/m, and slug flow obtained by introducing gas at 60 l/m while keeping the liquid flow rate the same. Proper orthogonal decomposition (POD) and linear stochastic estimation (LSE) techniques were used for the extraction of coherent structures and the analysis of turbulence in the liquid phase for both cases. POD successfully revealed large energy-containing structures. The time-dependent POD spatial mode coefficients oscillate with high frequency for high mode numbers. The energy distribution of the spatial modes was also obtained. LSE identified the coherent structures for both cases, and the reconstructed velocity fields are in good agreement with the instantaneous velocity fields.

  3. Discrete wavelet transform: a tool in smoothing kinematic data.

    PubMed

    Ismail, A R; Asfour, S S

    1999-03-01

    Motion analysis systems typically introduce noise to the displacement data recorded. Butterworth digital filters have been used to smooth the displacement data in order to obtain smoothed velocities and accelerations. However, this technique does not yield satisfactory results, especially when dealing with complex kinematic motions that occupy the low- and high-frequency bands. The use of the discrete wavelet transform, as an alternative to digital filters, is presented in this paper. The transform passes the original signal through two complementary low- and high-pass FIR filters and decomposes the signal into an approximation function and a detail function. Further decomposition of the signal results in transforming the signal into a hierarchy set of orthogonal approximation and detail functions. A reverse process is employed to perfectly reconstruct the signal (inverse transform) back from its approximation and detail functions. The discrete wavelet transform was applied to the displacement data recorded by Pezzack et al., 1977. The smoothed displacement data were twice differentiated and compared to Pezzack et al.'s acceleration data in order to choose the most appropriate filter coefficients and decomposition level on the basis of maximizing the percentage of retained energy (PRE) and minimizing the root mean square error (RMSE). Daubechies wavelet of the fourth order (Db4) at the second decomposition level showed better results than both the biorthogonal and Coiflet wavelets (PRE = 97.5%, RMSE = 4.7 rad s-2). The Db4 wavelet was then used to compress complex displacement data obtained from a noisy mathematically generated function. Results clearly indicate superiority of this new smoothing approach over traditional filters.
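The decompose / discard-details / reconstruct workflow described above can be sketched with a one-level Haar transform. The study uses the fourth-order Daubechies wavelet (Db4) at the second decomposition level; Haar is used here only because its filter pair is trivial to write down, and the signal is a made-up noisy displacement record.

```python
import numpy as np

def haar_dwt(x):
    """One decomposition level of the Haar DWT: approximation and detail parts."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_idwt(a, d):
    """Inverse one-level Haar DWT: perfectly reconstructs the signal."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

# Smooth a noisy displacement record by discarding the detail coefficients,
# which carry most of the measurement noise for a slowly varying motion.
t = np.linspace(0.0, 1.0, 256)
clean = np.sin(2.0 * np.pi * 2.0 * t)
noisy = clean + 0.1 * np.random.default_rng(4).standard_normal(t.size)
a, d = haar_dwt(noisy)
smoothed = haar_idwt(a, np.zeros_like(d))
```

Keeping the detail coefficients reconstructs the signal exactly; zeroing (or thresholding) them trades a small signal bias for a large noise reduction, which is the basis of the PRE/RMSE comparison in the paper.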

  4. Surrogate models for sheet metal stamping problem based on the combination of proper orthogonal decomposition and radial basis function

    NASA Astrophysics Data System (ADS)

    Dang, Van Tuan; Lafon, Pascal; Labergere, Carl

    2017-10-01

    In this work, a combination of Proper Orthogonal Decomposition (POD) and Radial Basis Functions (RBF) is proposed to build a surrogate model, based on the Benchmark Springback 3D bending problem from the Numisheet 2011 congress. The influence of two design parameters, the geometrical parameter of the die radius and the process parameter of the blank holder force, on the springback of the sheet after a stamping operation is analyzed. A classical Design of Experiments (DoE) with a full factorial layout samples the parameter space, and the sample points serve as input data for finite element method (FEM) simulations of the sheet metal stamping process. The basic idea is to consider the design parameters as additional dimensions of the solution space for the displacement fields. The order of the resulting high-fidelity model is reduced with the POD method, which performs model-space reduction and yields the basis functions of the low-order model. Specifically, the snapshot method is used in our work, in which the basis functions are derived from the snapshot deviations of the matrix of final displacement fields from the FEM simulations. The obtained basis functions are then used to determine the POD coefficients, and RBF is used to interpolate these POD coefficients over the parameter space. Finally, the presented POD-RBF approach, which can be used for shape optimization, performs with high accuracy.
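    The POD-plus-RBF pipeline can be sketched end to end in numpy. This is a hedged illustration under synthetic assumptions — the DoE, the stand-in "displacement fields", and the Gaussian kernel shape parameter are all invented for the example, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the FEM snapshots: displacement fields (500 dofs) on a
# 5x5 full-factorial DoE over two normalized parameters
# (die radius, blank holder force).
g1, g2 = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5))
params = np.column_stack([g1.ravel(), g2.ravel()])          # 25 design points
spatial = rng.standard_normal((3, 500))
amp = np.column_stack([np.sin(3 * params[:, 0]),
                       params[:, 1] ** 2,
                       params[:, 0] * params[:, 1]])
snaps = amp @ spatial                                       # (25, 500)

# POD of the snapshot deviations -> reduced basis and coefficients
mean = snaps.mean(axis=0)
_, _, vt = np.linalg.svd(snaps - mean, full_matrices=False)
Phi = vt[:3]                                                # 3 POD modes
coeffs = (snaps - mean) @ Phi.T                             # per design point

# Gaussian RBF interpolation of each POD coefficient over parameter space
def rbf(a, b, eps=8.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-eps * d2)

W = np.linalg.solve(rbf(params, params), coeffs)            # RBF weights

def surrogate(p):
    """Predict the full displacement field at a new parameter point."""
    return mean + (rbf(np.atleast_2d(np.asarray(p, float)), params) @ W @ Phi)[0]
```

    The surrogate interpolates the training snapshots exactly at the DoE points and gives a cheap approximation in between, which is the property exploited for shape optimization.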

  5. Exploring Omics data from designed experiments using analysis of variance multiblock Orthogonal Partial Least Squares.

    PubMed

    Boccard, Julien; Rudaz, Serge

    2016-05-12

    Many experimental factors may have an impact on chemical or biological systems. A thorough investigation of the potential effects and interactions between the factors is made possible by rationally planning the trials using systematic procedures, i.e. design of experiments. However, assessing the factors' influences often remains a challenging task when dealing with hundreds to thousands of correlated variables while only a limited number of samples is available. In that context, most existing strategies involve the ANOVA-based partitioning of sources of variation and the separate analysis of the ANOVA submatrices using multivariate methods, to account for both the intrinsic characteristics of the data and the study design. However, these approaches lack the ability to summarise the data using a single model and remain somewhat limited for detecting and interpreting subtle perturbations hidden in complex Omics datasets. In the present work, a supervised multiblock algorithm based on the Orthogonal Partial Least Squares (OPLS) framework is proposed for the joint analysis of ANOVA submatrices. This strategy has several advantages: (i) the evaluation of a unique multiblock model accounting for all sources of variation; (ii) the computation of a robust estimator (goodness of fit) for assessing the reliability of the ANOVA decomposition; (iii) the investigation of an effect-to-residuals ratio to quickly evaluate the relative importance of each effect; and (iv) an easy interpretation of the model with appropriate outputs. Case studies from metabolomics and transcriptomics, highlighting the ability of the method to handle Omics data obtained from fixed-effects full factorial designs, are proposed for illustration purposes. Signal variations are easily related to main effects or interaction terms, while relevant biochemical information can be derived from the models. Copyright © 2016 Elsevier B.V. All rights reserved.
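    The ANOVA partitioning of sources of variation that the method builds on can be sketched for a two-factor fixed-effects design. A minimal numpy illustration with synthetic data — this shows only the partitioning step, not the multiblock OPLS model itself:

```python
import numpy as np

rng = np.random.default_rng(2)

# Multivariate responses for a 3x4 full factorial design, 50 variables
# per sample (one sample per design cell, for brevity).
a_levels, b_levels, n_vars = 3, 4, 50
X = rng.standard_normal((a_levels, b_levels, n_vars))

grand = X.mean(axis=(0, 1))                          # overall mean
eff_a = X.mean(axis=1) - grand                       # factor A main effects
eff_b = X.mean(axis=0) - grand                       # factor B main effects
resid = X - grand - eff_a[:, None, :] - eff_b[None, :, :]

# The partition is exact: the effect submatrices sum back to the data.
recon = grand + eff_a[:, None, :] + eff_b[None, :, :] + resid
```

    Each effect submatrix (here `eff_a`, `eff_b`, `resid`) is what the separate or joint multivariate analyses then operate on; the effects sum to zero over their own levels by construction.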

  6. Nonlinear Reduced-Order Analysis with Time-Varying Spatial Loading Distributions

    NASA Technical Reports Server (NTRS)

    Przekop, Adam

    2008-01-01

    Oscillating shocks acting in combination with high-intensity acoustic loadings present a challenge to the design of resilient hypersonic flight vehicle structures. This paper addresses some features of this loading condition and certain aspects of a nonlinear reduced-order analysis with emphasis on system identification leading to formation of a robust modal basis. The nonlinear dynamic response of a composite structure subject to the simultaneous action of locally strong oscillating pressure gradients and high-intensity acoustic loadings is considered. The reduced-order analysis used in this work has been previously demonstrated to be both computationally efficient and accurate for time-invariant spatial loading distributions, provided that an appropriate modal basis is used. The challenge of the present study is to identify a suitable basis for loadings with time-varying spatial distributions. Using a proper orthogonal decomposition and modal expansion, it is shown that such a basis can be developed. The basis is made more robust by incrementally expanding it to account for changes in the location, frequency and span of the oscillating pressure gradient.

  7. Reduced dynamical model of the vibrations of a metal plate

    NASA Astrophysics Data System (ADS)

    Moreno, D.; Barrientos, Bernardino; Perez-Lopez, Carlos; Mendoza-Santoyo, Fernando; Guerrero, J. A.; Funes, M.

    2005-02-01

    The Proper Orthogonal Decomposition (POD) method is applied to the vibration analysis of a metal plate. The data from the vibrating metal plate were measured with a laser vibrometer. The plate was subjected to vibrations with an electrodynamic shaker over a range of frequencies from 100 to 5000 Hz. The deformation measurements were taken on a quarter of the plate in a rectangular grid of 7 x 8 points. The plate deformation measurements were used to calculate the eigenfunctions and the eigenvalues. It was found that a large fraction of the total deformation energy is contained within the first six POD modes. The essential features of the deformation are thus described by only the first six eigenfunctions. A reduced-order model for the dynamical behavior is then constructed using a Galerkin projection of the equation of motion for the vertical displacement of a plate.
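    The energy fraction captured by the leading POD modes is read off directly from the singular values of the snapshot matrix. A small numpy sketch under synthetic assumptions — a 7 x 8 grid flattened to 56 points, with six dominant modes built in:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic deformation snapshots on a 7x8 grid (56 points), dominated
# by six orthonormal modes with decaying amplitudes plus a little noise.
n_snap, n_pts = 200, 56
true_modes = np.linalg.qr(rng.standard_normal((n_pts, 6)))[0]
amps = rng.standard_normal((n_snap, 6)) * np.array([10, 6, 4, 2, 1, 0.5])
snaps = amps @ true_modes.T + 0.01 * rng.standard_normal((n_snap, n_pts))

# POD via SVD of the centered snapshot matrix: squared singular values
# are the POD eigenvalues, i.e. the energy carried by each mode.
_, s, vt = np.linalg.svd(snaps - snaps.mean(axis=0), full_matrices=False)
energy = s ** 2 / np.sum(s ** 2)
cum6 = energy[:6].sum()          # fraction captured by the first six modes
```

    For measured plate data the same `cum6` computation quantifies the claim that six modes describe the essential deformation.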

  8. Critical Evaluation of Kinetic Method Measurements: Possible Origins of Nonlinear Effects

    NASA Astrophysics Data System (ADS)

    Bourgoin-Voillard, Sandrine; Afonso, Carlos; Lesage, Denis; Zins, Emilie-Laure; Tabet, Jean-Claude; Armentrout, P. B.

    2013-03-01

    The kinetic method is a widely used approach for the determination of thermochemical data such as proton affinities (PA) and gas-phase acidities (ΔH°acid). These data are easily obtained from decompositions of noncovalent heterodimers if care is taken in the choice of the method, the references used, and the experimental conditions. Previously, several papers have focused on theoretical considerations concerning the nature of the references. Few investigations have been devoted to the conditions required to validate the quality of the experimental results. In the present work, we are interested in rationalizing the origin of nonlinear effects that can be observed with the kinetic method. It is shown that such deviations result from intrinsic properties of the systems investigated but can also be enhanced by artifacts resulting from experimental issues. Overall, it is shown that orthogonal distance regression (ODR) analysis of kinetic method data provides the optimum way of acquiring accurate thermodynamic information.
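    For a straight-line relationship, orthogonal distance regression reduces to total least squares, which one SVD solves. A minimal numpy sketch on synthetic data — the paper's analysis concerns kinetic-method plots, not this toy line, and `scipy.odr` offers a full nonlinear ODR implementation:

```python
import numpy as np

def orthogonal_fit(x, y):
    """Fit y = m*x + c minimizing *perpendicular* distances to the line
    (total least squares), via the SVD of the centered data cloud."""
    pts = np.column_stack([x - x.mean(), y - y.mean()])
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    dx, dy = vt[0]                    # dominant direction of the cloud
    m = dy / dx
    return m, y.mean() - m * x.mean()

rng = np.random.default_rng(4)
x = np.linspace(0, 10, 100) + 0.05 * rng.standard_normal(100)  # noisy x too
y = 2.0 * x + 1.0 + 0.05 * rng.standard_normal(100)
m, c = orthogonal_fit(x, y)
```

    Unlike ordinary least squares, this treats errors in both coordinates symmetrically, which is the property that makes ODR appropriate when both axes of a kinetic-method plot carry experimental uncertainty.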

  9. Unsteady flow sensing and optimal sensor placement using machine learning

    NASA Astrophysics Data System (ADS)

    Semaan, Richard

    2016-11-01

    Machine learning is used to estimate the flow state and to determine the optimal sensor placement over a two-dimensional (2D) airfoil equipped with a Coanda actuator. The analysis is based on flow field data obtained from 2D unsteady Reynolds-averaged Navier-Stokes (uRANS) simulations with different jet blowing intensities and actuation frequencies, characterizing different flow separation states. This study shows how the "random forests" algorithm can be used beyond its typical role in fluid mechanics, estimating the flow state, to also determine the optimal sensor placement. The results are compared against the current de facto standard of maximum modal amplitude location and against a brute-force approach that scans all possible sensor combinations. The results show that it is possible to simultaneously infer the state of the flow and determine the optimal sensor location without the need to perform proper orthogonal decomposition. Collaborative Research Center (CRC) 880, DFG.

  10. Low-frequency dynamics of pressure-induced turbulent separation bubbles

    NASA Astrophysics Data System (ADS)

    Weiss, Julien; Mohammed-Taifour, Abdelouahab; Lefloch, Arnaud

    2017-11-01

    We experimentally investigate a pressure-induced turbulent separation bubble (TSB), which is generated on a flat test surface through a combination of adverse and favorable pressure gradients imposed on a nominally two-dimensional, incompressible, turbulent boundary layer. We probe the flow using piezo-resistive pressure transducers, MEMS shear-stress sensors, and high-speed, 2D-2C, PIV measurements. Through the use of Fourier analysis of the wall-pressure fluctuations and Proper Orthogonal Decomposition of the velocity fields, we show that this type of flow is characterized by a self-induced, low-frequency contraction and expansion - called breathing - of the TSB. The dominant Strouhal number of this motion, based on the TSB length and the incoming velocity in the potential flow, is of the order of 0.01. We compare this motion to the low-frequency dynamics observed in laminar separation bubbles (LSBs), geometry-induced TSBs, and shock-induced separated flows.

  11. Simulation of wind turbine wakes using the actuator line technique

    PubMed Central

    Sørensen, Jens N.; Mikkelsen, Robert F.; Henningson, Dan S.; Ivanell, Stefan; Sarmast, Sasan; Andersen, Søren J.

    2015-01-01

    The actuator line technique was introduced as a numerical tool to be employed in combination with large eddy simulations to enable the study of wakes and wake interaction in wind farms. The technique is today largely used for studying basic features of wakes as well as for making performance predictions of wind farms. In this paper, we give a short introduction to the wake problem and the actuator line methodology and present a study in which the technique is employed to determine the near-wake properties of wind turbines. The presented results include a comparison of experimental results of the wake characteristics of the flow around a three-bladed model wind turbine, the development of a simple analytical formula for determining the near-wake length behind a wind turbine and a detailed investigation of wake structures based on proper orthogonal decomposition analysis of numerically generated snapshots of the wake. PMID:25583862

  12. Current variability and momentum balance in the along-shore flow for the Catalan inner-shelf.

    NASA Astrophysics Data System (ADS)

    Grifoll, M.; Aretxabaleta, A.; Espino, M.; Warner, J. C.

    2012-04-01

    This contribution examines the circulation of the inner-shelf of the Catalan Sea from an observational perspective. Measurements were obtained from a set of ADCPs deployed during March and April 2011 at 25 and 50 meters depth. Analysis reveals a strongly polarized low-frequency flow following the isobaths predominantly in the south-westward direction. The current variance is mostly explained by the two principal modes of an empirical orthogonal decomposition. The first mode represents almost 80% of the variability. Correlation values of 0.4 to 0.7 have been found between the depth-averaged along-shelf flow and the local wind and the Adjusted Sea-level Slope. The momentum balance in the along-shore direction reveals strong frictional effects and an influence of the barotropic pressure gradients. This research provides a physical framework for ongoing numerical modelling activities and climatological studies in the Catalan inner-shelf.

  13. Wall Shear Stress Distribution in a Patient-Specific Cerebral Aneurysm Model using Reduced Order Modeling

    NASA Astrophysics Data System (ADS)

    Han, Suyue; Chang, Gary Han; Schirmer, Clemens; Modarres-Sadeghi, Yahya

    2016-11-01

    We construct a reduced-order model (ROM) to study the Wall Shear Stress (WSS) distributions in image-based patient-specific aneurysms models. The magnitude of WSS has been shown to be a critical factor in growth and rupture of human aneurysms. We start the process by running a training case using Computational Fluid Dynamics (CFD) simulation with time-varying flow parameters, such that these parameters cover the range of parameters of interest. The method of snapshot Proper Orthogonal Decomposition (POD) is utilized to construct the reduced-order bases using the training CFD simulation. The resulting ROM enables us to study the flow patterns and the WSS distributions over a range of system parameters computationally very efficiently with a relatively small number of modes. This enables comprehensive analysis of the model system across a range of physiological conditions without the need to re-compute the simulation for small changes in the system parameters.
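    The method of snapshots mentioned above avoids forming the huge spatial correlation matrix by eigen-decomposing the small snapshot-by-snapshot matrix instead. A hedged numpy sketch with synthetic rank-deficient data standing in for the training CFD run:

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-in for the training CFD data: 30 snapshots of a 2000-point field
# whose dynamics live on a 5-dimensional subspace.
n_snap, n_pts, r = 30, 2000, 5
X = rng.standard_normal((n_snap, r)) @ rng.standard_normal((r, n_pts))

# Method of snapshots: eigen-decompose the small (30x30) temporal
# correlation matrix instead of the (2000x2000) spatial one.
C = X @ X.T / n_snap
lam, a = np.linalg.eigh(C)
lam, a = lam[::-1], a[:, ::-1]            # sort eigenpairs descending

# Spatial POD modes lifted from the temporal eigenvectors.
Phi = X.T @ a[:, :r]
Phi /= np.linalg.norm(Phi, axis=0)        # orthonormal spatial modes
```

    The eigenvalue spectrum `lam` shows how few modes carry the flow energy, which is why a small number of modes suffices for the WSS reconstructions across parameter variations.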

  14. Data-driven sensor placement from coherent fluid structures

    NASA Astrophysics Data System (ADS)

    Manohar, Krithika; Kaiser, Eurika; Brunton, Bingni W.; Kutz, J. Nathan; Brunton, Steven L.

    2017-11-01

    Optimal sensor placement is a central challenge in the prediction, estimation and control of fluid flows. We reinterpret sensor placement as optimizing discrete samples of coherent fluid structures for full state reconstruction. This permits a drastic reduction in the number of sensors required for faithful reconstruction, since complex fluid interactions can often be described by a small number of coherent structures. Our work optimizes point sensors using the pivoted matrix QR factorization to sample coherent structures directly computed from flow data. We apply this sampling technique in conjunction with various data-driven modal identification methods, including the proper orthogonal decomposition (POD) and dynamic mode decomposition (DMD). In contrast to POD-based sensors, DMD demonstrably enables the optimization of sensors for prediction in systems exhibiting multiple scales of dynamics. Finally, reconstruction accuracy from pivot sensors is shown to be competitive with sensors obtained using traditional computationally prohibitive optimization methods.
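    The pivoted-QR sampling idea can be sketched as a greedy column-pivoting loop over candidate sensor locations. A minimal numpy illustration — random orthonormal "modes" stand in for the POD/DMD modes, and `scipy.linalg.qr(..., pivoting=True)` would perform the same selection in one call:

```python
import numpy as np

def qr_pivot_sensors(modes, n_sensors):
    """Greedy column-pivoted QR on the (modes x locations) matrix:
    each step picks the location whose column has the largest residual
    norm after projecting out the sensors already chosen."""
    A = modes.T.copy()                 # rows: modes, cols: candidate points
    chosen = []
    for _ in range(n_sensors):
        norms = np.linalg.norm(A, axis=0)
        if chosen:
            norms[chosen] = -1.0       # never re-pick a sensor
        j = int(np.argmax(norms))
        chosen.append(j)
        q = A[:, j] / np.linalg.norm(A[:, j])
        A -= np.outer(q, q @ A)        # deflate: remove q's component
    return chosen

rng = np.random.default_rng(6)
modes = np.linalg.qr(rng.standard_normal((100, 4)))[0]   # 100 pts, 4 modes
sensors = qr_pivot_sensors(modes, 4)

# Full-state reconstruction from the 4 chosen point samples is well
# conditioned because pivoting maximizes the sampled submatrix volume.
cond = np.linalg.cond(modes[sensors, :])
```

    With the sensors chosen, a state `x = modes @ a` is recovered from its samples by solving `modes[sensors, :] @ a = x[sensors]`, which is the "discrete sampling of coherent structures" view taken in the entry.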

  15. Truncated feature representation for automatic target detection using transformed data-based decomposition

    NASA Astrophysics Data System (ADS)

    Riasati, Vahid R.

    2016-05-01

    In this work, the data covariance matrix is diagonalized to provide an orthogonal basis set from the eigenvectors of the data. The eigenvector decomposition of the data is transformed and filtered in the transform domain to truncate the data to robust features related to a specified set of targets. These truncated eigenfeatures are then combined and reconstructed for use in a composite filter, which is subsequently applied to the automatic target detection of the same class of targets. The results of testing the current technique are evaluated using the peak-correlation and peak-correlation energy metrics and are presented in this work. The inverse-transformed eigenbases of the current technique may be thought of as an injected sparsity that minimizes the data needed to represent the skeletal data-structure information associated with the set of targets under consideration.

  16. Lumley decomposition of turbulent boundary layer at high Reynolds numbers

    NASA Astrophysics Data System (ADS)

    Tutkun, Murat; George, William K.

    2017-02-01

    The decomposition proposed by Lumley in 1966 is applied to a high Reynolds number turbulent boundary layer. The experimental database was created by a hot-wire rake of 143 probes in the Laboratoire de Mécanique de Lille wind tunnel. The Reynolds numbers based on momentum thickness (Reθ) are 9800 and 19 100. Three-dimensional decomposition is performed, namely, proper orthogonal decomposition (POD) in the inhomogeneous and bounded wall-normal direction, Fourier decomposition in the homogeneous spanwise direction, and Fourier decomposition in time. The first POD modes in both cases carry nearly 50% of turbulence kinetic energy when the energy is integrated over Fourier dimensions. The eigenspectra always peak near zero frequency and most of the large scale, energy carrying features are found at the low end of the spectra. The spanwise Fourier mode which has the largest amount of energy is the first spanwise mode and its symmetrical pair. Pre-multiplied eigenspectra have only one distinct peak and it matches the secondary peak observed in the log-layer of pre-multiplied velocity spectra. Energy carrying modes obtained from the POD scale with outer scaling parameters. Full or partial reconstruction of turbulent velocity signal based only on energetic modes or non-energetic modes revealed the behaviour of urms in distinct regions across the boundary layer. When urms is based on energetic reconstruction, there exists (a) an exponential decay from near wall to log-layer, (b) a constant layer through the log-layer, and (c) another exponential decay in the outer region. The non-energetic reconstruction reveals that urms has (a) an exponential decay from the near-wall to the end of log-layer and (b) a constant layer in the outer region. Scaling of urms using the outer parameters is best when both energetic and non-energetic profiles are combined.

  17. Group theoretical methods and wavelet theory: coorbit theory and applications

    NASA Astrophysics Data System (ADS)

    Feichtinger, Hans G.

    2013-05-01

    Before the invention of orthogonal wavelet systems by Yves Meyer [1] in 1986, Gabor expansions (viewed as discretized inversion of the short-time Fourier transform [2] using overlap-add, OLA) and what are now perceived as wavelet expansions were treated more or less on an equal footing. The famous paper on painless expansions by Daubechies, Grossmann and Meyer [3] is a good example of this situation. The description of atomic decompositions for functions in modulation spaces [4] (including the classical Sobolev spaces) given by the author [5] was directly modeled on the corresponding atomic characterizations by Frazier and Jawerth [6, 7], more or less with the idea of replacing the dyadic partitions of unity on the Fourier transform side by uniform partitions of unity (so-called BUPUs, first named as such in the early work on Wiener-type spaces by the author in 1980 [8]). Watching the literature in the subsequent two decades, one can observe that interest in wavelets "took over", because it became possible to construct orthonormal wavelet systems with compact support and any given degree of smoothness [9], while in contrast the Balian-Low theorem prohibits the existence of corresponding Gabor orthonormal bases, even in the multi-dimensional case and for general symplectic lattices [10]. It is an interesting historical fact that Meyer's construction of band-limited orthonormal wavelets (the Meyer wavelet, see [11]) grew out of an attempt to prove the impossibility of the existence of such systems; the final insight was that such systems are not impossible at all, and in fact quite a variety of orthonormal wavelet systems can be constructed, as we know by now. Meanwhile it is established wisdom that wavelet theory and time-frequency analysis are two different ways of decomposing signals, in orthogonal resp. non-orthogonal ways. 
The unifying theory, covering both cases and distilling the common group-theoretical background from these two situations, led to the theory of coorbit spaces [12, 13], established by the author jointly with K. Gröchenig. Starting from an integrable and irreducible representation of some locally compact group (such as the "ax+b"-group or the Heisenberg group), one can derive families of Banach spaces having natural atomic characterizations, or alternatively a continuous transform associated with them. So in the end, function spaces on locally compact groups come into play, and their generic properties help to explain why and how it is possible to obtain (non-orthogonal) decompositions. While the unification of these two groups was one important aspect of the approach given in the late 1980s, it was also clear that this approach makes it possible to formulate and exploit the analogy to Banach spaces of analytic functions invariant under the Moebius group, which have been at the heart of this context. Recent years have seen further new instances and generalizations. Among them, shearlets and the Blaschke product should be mentioned here, as well as the increased interest in the connections between wavelet theory and complex analysis. The talk will try to summarize a few of the general principles which can be derived from the general theory, but also highlight the differences between the different groups and the signal expansions arising from the corresponding group representations. There is still a lot more to be done, also from the point of view of applications and the numerical realization of such non-orthogonal expansions.

  18. The Characteristics of Turbulence in Curved Pipes under Highly Pulsatile Flow Conditions

    NASA Astrophysics Data System (ADS)

    Kalpakli, A.; Örlü, R.; Tillmark, N.; Alfredsson, P. Henrik

    High speed stereoscopic particle image velocimetry has been employed to provide unique data from a steady and highly pulsatile turbulent flow at the exit of a 90 degree pipe bend. Both the unsteady behaviour of the Dean cells under steady conditions, the so called "swirl switching" phenomenon, as well as the secondary flow under pulsations have been reconstructed through proper orthogonal decomposition. The present data set constitutes - to the authors' knowledge - the first detailed investigation of a turbulent, pulsatile flow through a pipe bend.

  19. Galerkin Method for Nonlinear Dynamics

    NASA Astrophysics Data System (ADS)

    Noack, Bernd R.; Schlegel, Michael; Morzynski, Marek; Tadmor, Gilead

    A Galerkin method is presented for control-oriented reduced-order models (ROM). This method generalizes linear approaches elaborated by M. Morzyński et al. for the nonlinear Navier-Stokes equation. These ROM are used as plants for control design in the chapters by G. Tadmor et al., S. Siegel, and R. King in this volume. Focus is placed on empirical ROM which compress flow data in the proper orthogonal decomposition (POD). The chapter shall provide a complete description for construction of straight-forward ROM as well as the physical understanding and teste
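    The Galerkin step projects the full-order operator onto the POD basis. A minimal numpy sketch for a linear full-order model x' = Ax — the operator construction, dimensions, and explicit-Euler integrator are illustrative assumptions, not the chapter's Navier-Stokes setting:

```python
import numpy as np

rng = np.random.default_rng(7)

# Full-order linear model x' = A x on 200 states (stable, symmetric).
n = 200
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
A = Q @ np.diag(-np.linspace(0.1, 5.0, n)) @ Q.T

# Collect snapshots of one trajectory with explicit Euler.
dt, steps = 1e-3, 2000
x = Q[:, 0] + 0.1 * Q[:, 1]          # initial state in a 2D invariant subspace
snaps = []
for _ in range(steps):
    snaps.append(x)
    x = x + dt * (A @ x)
snaps = np.array(snaps)

# POD basis from the snapshots, then Galerkin projection of the dynamics:
# the reduced operator is Ar = Phi^T A Phi.
_, _, vt = np.linalg.svd(snaps, full_matrices=False)
Phi = vt[:2].T                       # two dominant POD modes (n x 2)
Ar = Phi.T @ A @ Phi                 # 2x2 reduced operator

# Integrate the ROM and lift back to the full space.
a = Phi.T @ snaps[0]
for _ in range(steps):
    a = a + dt * (Ar @ a)
x_rom = Phi @ a
err = np.linalg.norm(x_rom - x) / np.linalg.norm(x)
```

    Because the trajectory here lies exactly in a two-dimensional invariant subspace, the two-mode ROM reproduces the full-order solution to numerical precision; for nonlinear flows the same projection is applied to the nonlinear terms as well, and the quality of the modal basis becomes the decisive factor.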

  20. Numerical Schemes and Computational Studies for Dynamically Orthogonal Equations (Multidisciplinary Simulation, Estimation, and Assimilation Systems: Reports in Ocean Science and Engineering)

    DTIC Science & Technology

    2011-08-01

    heat transfers [49, 52]. However, the DO method has not yet been applied to Boussinesq flows, and the numerical challenges of the DO decomposition for...used a PCE scheme to study mixing in a two-dimensional (2D) microchannel and improved the efficiency of their solution scheme by decoupling the...to several Navier-Stokes flows and their stochastic dynamics has been studied, including mean-mode and mode-mode energy transfers for 2D flows and

  1. A Perturbation Based Decomposition of Compound-Evoked Potentials for Characterization of Nerve Fiber Size Distributions.

    PubMed

    Szlavik, Robert B

    2016-02-01

    The characterization of peripheral nerve fiber distributions, in terms of diameter or velocity, is of clinical significance because information associated with these distributions can be utilized in the differential diagnosis of peripheral neuropathies. Electro-diagnostic techniques can be applied to the investigation of peripheral neuropathies and can yield valuable diagnostic information while being minimally invasive. Nerve conduction velocity studies are single parameter tests that yield no detailed information regarding the characteristics of the population of nerve fibers that contribute to the compound-evoked potential. Decomposition of the compound-evoked potential, such that the velocity or diameter distribution of the contributing nerve fibers may be determined, is necessary if information regarding the population of contributing nerve fibers is to be ascertained from the electro-diagnostic study. In this work, a perturbation-based decomposition of compound-evoked potentials is proposed that facilitates determination of the fiber diameter distribution associated with the compound-evoked potential. The decomposition is based on representing the single fiber-evoked potential, associated with each diameter class, as being perturbed by contributions, of varying degree, from all the other diameter class single fiber-evoked potentials. The resultant estimator of the contributing nerve fiber diameter distribution is valid for relatively large separations in diameter classes. It is also useful in situations where the separation between diameter classes is small and the concomitant single fiber-evoked potentials are not orthogonal.

  2. Developing a complex independent component analysis technique to extract non-stationary patterns from geophysical time-series

    NASA Astrophysics Data System (ADS)

    Forootan, Ehsan; Kusche, Jürgen

    2016-04-01

    Geodetic/geophysical observations, such as time series of global terrestrial water storage change or of sea level and temperature change, represent samples of physical processes and therefore contain information about complex physical interactions with many inherent time scales. Extracting relevant information from these samples, for example quantifying the seasonality of a physical process or its variability due to large-scale ocean-atmosphere interactions, is not possible using simple time-series approaches. In the last decades, decomposition techniques have found increasing interest for extracting patterns from geophysical observations. Traditionally, principal component analysis (PCA) and, more recently, independent component analysis (ICA) are common techniques to extract statistically orthogonal (uncorrelated) and independent modes that represent the maximum variance of observations, respectively. PCA and ICA can be classified as stationary signal decomposition techniques since they are based on decomposing the auto-covariance matrix or diagonalizing higher (than two)-order statistical tensors from centered time series. However, the stationarity assumption is obviously not justifiable for many geophysical and climate variables, even after removing cyclic components, e.g. the seasonal cycles. In this paper, we present a new decomposition method, the complex independent component analysis (CICA, Forootan, PhD-2014), which can be applied to extract non-stationary (changing in space and time) patterns from geophysical time series. Here, CICA is derived as an extension of real-valued ICA (Forootan and Kusche, JoG-2012), where we (i) define a new complex data set using a Hilbert transformation. The complex time series contain the observed values in their real part and the temporal rate of variability in their imaginary part. (ii) An ICA algorithm based on diagonalization of fourth-order cumulants is then applied to decompose the new complex data set in (i). 
(iii) Dominant non-stationary patterns are recognized as independent complex patterns that can be used to represent the amplitude and phase propagation in space and time. We present the results of CICA on simulated and real cases, e.g. for quantifying the impact of large-scale ocean-atmosphere interaction on global mass changes. Forootan (PhD-2014) Statistical signal decomposition techniques for analyzing time-variable satellite gravimetry data, PhD Thesis, University of Bonn, http://hss.ulb.uni-bonn.de/2014/3766/3766.htm Forootan and Kusche (JoG-2012) Separation of global time-variable gravity signals into maximally independent components, Journal of Geodesy 86 (7), 477-497, doi: 10.1007/s00190-011-0532-5
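    Step (i), forming the complex signal via a Hilbert transformation, can be sketched with the standard FFT construction of the analytic signal (equivalent to `scipy.signal.hilbert`). A minimal numpy illustration on a pure oscillation:

```python
import numpy as np

def analytic_signal(x):
    """Complex (analytic) signal via the FFT: zero the negative
    frequencies and double the positive ones (Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[1:n // 2] = 2.0
        h[n // 2] = 1.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(h * X)

t = np.linspace(0, 1, 1000, endpoint=False)
x = np.cos(2 * np.pi * 5 * t)            # observed (real) values
z = analytic_signal(x)                   # complex series: x + i * Hilbert(x)

amplitude = np.abs(z)                                          # envelope
phase_rate = np.diff(np.unwrap(np.angle(z))) / (t[1] - t[0])   # inst. frequency
```

    The real part of `z` is the observation and the imaginary part its quadrature, so amplitude and phase propagation can be tracked in time, which is exactly what the complex ICA step exploits.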

  3. Decomposition of the complex system into nonlinear spatio-temporal modes: algorithm and application to climate data mining

    NASA Astrophysics Data System (ADS)

    Feigin, Alexander; Gavrilov, Andrey; Loskutov, Evgeny; Mukhin, Dmitry

    2015-04-01

    Proper decomposition of a complex system into well separated "modes" is a way to reveal and understand the mechanisms governing the system's behaviour, as well as to discover essential feedbacks and nonlinearities. The decomposition is also a natural procedure for constructing adequate and at the same time simplest models of both the corresponding sub-systems and of the system as a whole. In recent works, two new methods of decomposing the Earth's climate system into well separated modes were discussed. The first method [1-3] is based on MSSA (Multichannel Singular Spectrum Analysis) [4] for linearly expanding vector (space-distributed) time series and makes allowance for delayed correlations of the processes recorded at spatially separated points. The second one [5-7] allows the construction of nonlinear dynamic modes, but neglects delays in correlations. It was demonstrated [1-3] that the first method provides effective separation of different time scales, but prevents correct reduction of the data dimension: the slope of the variance spectrum of the spatio-temporal empirical orthogonal functions that are the "structural material" for linear spatio-temporal modes is too flat. The second method overcomes this problem: the variance spectrum of the nonlinear modes falls much more sharply [5-7]. However, neglecting time-lag correlations introduces an error of mode selection that is uncontrolled and increases with the mode time scale. In the report, we combine these two methods in such a way that the resulting algorithm allows the construction of nonlinear spatio-temporal modes. The algorithm is applied to the decomposition of (i) several hundred years of globally distributed data generated by the INM RAS Coupled Climate Model [8], and (ii) a 156-year time series of SST anomalies distributed over the globe [9]. 
    We compare the efficiency of the different decomposition methods and discuss the ability of nonlinear spatio-temporal modes to yield adequate and at the same time simplest ("optimal") models of climate systems.
    1. Feigin A.M., Mukhin D., Gavrilov A., Volodin E.M., and Loskutov E.M. (2013) "Separation of spatial-temporal patterns ("climatic modes") by combined analysis of really measured and generated numerically vector time series", AGU 2013 Fall Meeting, Abstract NG33A-1574.
    2. Feigin A., Mukhin D., Gavrilov A., Volodin E., and Loskutov E. (2014) "Approach to analysis of multiscale space-distributed time series: separation of spatio-temporal modes with essentially different time scales", Geophysical Research Abstracts, Vol. 16, EGU2014-6877.
    3. Mukhin D., Kondrashov D., Loskutov E., Gavrilov A., Feigin A., and Ghil M. (2014) "Predicting critical transitions in ENSO models, Part II: Spatially dependent models", Journal of Climate (accepted, doi: 10.1175/JCLI-D-14-00240.1).
    4. Ghil M., Allen R. M., Dettinger M. D., Ide K., Kondrashov D., et al. (2002) "Advanced spectral methods for climatic time series", Rev. Geophys. 40(1), 3.1-3.41.
    5. Mukhin D., Gavrilov A., Loskutov E. M., and Feigin A. M. (2014) "Nonlinear Decomposition of Climate Data: a New Method for Reconstruction of Dynamical Modes", AGU 2014 Fall Meeting, Abstract NG43A-3752.
    6. Gavrilov A., Mukhin D., Loskutov E., and Feigin A. (2015) "Empirical decomposition of climate data into nonlinear dynamic modes", Geophysical Research Abstracts, Vol. 17, EGU2015-627.
    7. Mukhin D., Gavrilov A., Loskutov E., Feigin A., and Kurths J. (2015) "Reconstruction of principal dynamical modes from climatic variability: nonlinear approach", Geophysical Research Abstracts, Vol. 17, EGU2015-5729.
    8. http://83.149.207.89/GCM_DATA_PLOTTING/GCM_INM_DATA_XY_en.htm.
    9. http://iridl.ldeo.columbia.edu/SOURCES/.KAPLAN/.EXTENDED/.v2/.ssta/.

  4. The Computation of Orthogonal Independent Cluster Solutions and Their Oblique Analogs in Factor Analysis.

    ERIC Educational Resources Information Center

    Hofmann, Richard J.

    A very general model for the computation of independent cluster solutions in factor analysis is presented. The model is discussed as being either orthogonal or oblique. Furthermore, it is demonstrated that for every orthogonal independent cluster solution there is an oblique analog. Using three illustrative examples, certain generalities are made…

  5. Multiscale characterization and prediction of monsoon rainfall in India using Hilbert-Huang transform and time-dependent intrinsic correlation analysis

    NASA Astrophysics Data System (ADS)

    Adarsh, S.; Reddy, M. Janga

    2017-07-01

    In this paper, the Hilbert-Huang transform (HHT) approach is used for the multiscale characterization of the All India Summer Monsoon Rainfall (AISMR) time series and monsoon rainfall time series from five homogeneous regions in India. The study employs Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) for the multiscale decomposition of monsoon rainfall in India and uses the Normalized Hilbert Transform and Direct Quadrature (NHT-DQ) scheme for the time-frequency characterization. Cross-correlation analysis in the time domain between the orthogonal modes of the All India monthly monsoon rainfall time series and those of five climate indices, namely the Quasi-Biennial Oscillation (QBO), El Niño-Southern Oscillation (ENSO), Sunspot Number (SN), Atlantic Multidecadal Oscillation (AMO), and Equatorial Indian Ocean Oscillation (EQUINOO), showed that the links of the different climate indices with monsoon rainfall are expressed well only for a few low-frequency modes and for the trend component. Furthermore, this paper investigated the hydro-climatic teleconnection of ISMR at multiple time scales using the HHT-based running correlation analysis technique called time-dependent intrinsic correlation (TDIC). The results showed that both the strength and the nature of the association between the different climate indices and ISMR vary with time scale. Stemming from this finding, a methodology employing the Multivariate extension of EMD and Stepwise Linear Regression (MEMD-SLR) is proposed for the prediction of monsoon rainfall in India. The proposed MEMD-SLR method clearly exhibited superior performance over the IMD operational forecast, M5 Model Tree (MT), and multiple linear regression methods in ISMR predictions and displayed excellent predictive skill during 1989-2012, including the four extreme events that occurred during this period.

  6. Flexible Launch Vehicle Stability Analysis Using Steady and Unsteady Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.

    2012-01-01

    Launch vehicles frequently experience a reduced stability margin through the transonic Mach number range. This reduced stability margin can be caused by aerodynamic undamping of one of the lower-frequency flexible or rigid-body modes. Analysis of the behavior of a flexible vehicle is routinely performed with quasi-steady aerodynamic line loads derived from steady rigid aerodynamics. However, a quasi-steady aeroelastic stability analysis can be unconservative at the critical Mach numbers, where experiment or unsteady computational aeroelastic analysis shows a reduced or even negative aerodynamic damping. A method of enhancing the quasi-steady aeroelastic stability analysis of a launch vehicle with unsteady aerodynamics is developed that uses unsteady computational fluid dynamics to compute the response of selected lower-frequency modes. The response is contained in a time history of the vehicle line loads. A proper orthogonal decomposition of the unsteady aerodynamic line-load response is used to reduce the data volume, and system identification is used to derive the aerodynamic stiffness, damping, and mass matrices. The results are compared with the damping and frequency computed from unsteady computational aeroelasticity and from a quasi-steady analysis. The results show that incorporating unsteady aerodynamics in this way brings the enhanced quasi-steady aeroelastic stability analysis into close agreement with the unsteady computational aeroelastic results.
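    The proper orthogonal decomposition step described in this record can be sketched with the standard snapshot SVD; the arrays and dimensions below are illustrative toy data, not the vehicle line-load history:

```python
import numpy as np

def pod_modes(snapshots, n_modes):
    """Snapshot POD via the thin SVD.

    snapshots: (n_points, n_times) array, one snapshot per column.
    Returns orthonormal spatial modes, modal energies (squared singular
    values), and time-dependent modal coefficients.
    """
    # Subtract the temporal mean so the modes describe fluctuations only.
    X = snapshots - snapshots.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    modes = U[:, :n_modes]
    energy = s[:n_modes] ** 2
    coeffs = s[:n_modes, None] * Vt[:n_modes]
    return modes, energy, coeffs

# Toy "line load" history: two oscillating structures plus noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 200)
t = np.linspace(0.0, 10.0, 100)
X = (np.outer(np.sin(x), np.cos(2.0 * t))
     + 0.3 * np.outer(np.sin(2.0 * x), np.sin(5.0 * t))
     + 0.01 * rng.standard_normal((200, 100)))
modes, energy, coeffs = pod_modes(X, 4)
```

    The truncated modes and coefficients are exactly the kind of low-dimensional surrogate on which system identification can then be performed.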

  7. Deconvolution of reacting-flow dynamics using proper orthogonal and dynamic mode decompositions

    NASA Astrophysics Data System (ADS)

    Roy, Sukesh; Hua, Jia-Chen; Barnhill, Will; Gunaratne, Gemunu H.; Gord, James R.

    2015-01-01

    Analytical and computational studies of reacting flows are extremely challenging due in part to nonlinearities of the underlying system of equations and long-range coupling mediated by heat and pressure fluctuations. However, many dynamical features of the flow can be inferred through low-order models if the flow constituents (e.g., eddies or vortices) and their symmetries, as well as the interactions among constituents, are established. Modal decompositions of high-frequency, high-resolution imaging, such as measurements of species-concentration fields through planar laser-induced fluorescence and of velocity fields through particle-image velocimetry, are the first step in the process. A methodology is introduced for deducing the flow constituents and their dynamics following modal decomposition. Proper orthogonal (POD) and dynamic mode (DMD) decompositions of two classes of problems are performed and their strengths compared. The first problem involves a cellular state generated in a flat circular flame front through symmetry breaking. The state contains two rings of cells that rotate clockwise at different rates. Both POD and DMD can be used to deconvolve the state into the two rings. In POD the contribution of each mode to the flow is quantified using the energy. Each DMD mode can be associated with an energy as well as a unique complex growth rate. Dynamic modes with the same spatial symmetry but different growth rates are found to be combined into a single POD mode. Thus, a flow can be approximated by a smaller number of POD modes. On the other hand, DMD provides a more detailed resolution of the dynamics. Two classes of reacting flows behind symmetric bluff bodies are also analyzed. In the first, symmetric pairs of vortices are released periodically from the two ends of the bluff body. The second flow contains von Karman vortices as well, with a vortex being shed from one end of the bluff body followed by a second shedding from the opposite end.
The way in which DMD can be used to deconvolve the second flow into symmetric and von Karman vortices is demonstrated. The analyses performed illustrate two distinct advantages of DMD: (1) Unlike proper orthogonal modes, each dynamic mode is associated with a unique complex growth rate. By comparing DMD spectra from multiple nominally identical experiments, it is possible to identify "reproducible" modes in a flow. We also find that although most high-energy modes are reproducible, some are not common between experimental realizations; in the examples considered, energy fails to differentiate between reproducible and nonreproducible modes. Consequently, it may not be possible to differentiate reproducible and nonreproducible modes in POD. (2) Time-dependent coefficients of dynamic modes are complex. Even in noisy experimental data, the dynamics of the phase of these coefficients (but not their magnitude) are highly regular. The phase represents the angular position of a rotating ring of cells and quantifies the downstream displacement of vortices in reacting flows. Thus, it is suggested that the dynamical characterizations of complex flows are best made through the phase dynamics of reproducible DMD modes.
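    A minimal sketch of exact DMD, the variant commonly used in such POD/DMD comparisons, shows where the unique complex growth rate of each mode comes from; the toy snapshot matrix below (one decaying oscillation) is an assumption for illustration:

```python
import numpy as np

def dmd(X, dt, r):
    """Exact DMD of snapshot matrix X (n_points x n_times), truncated to rank r."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vt = np.linalg.svd(X1, full_matrices=False)
    Ur, sr, Vr = U[:, :r], s[:r], Vt[:r].conj().T
    Atilde = Ur.conj().T @ X2 @ Vr / sr       # low-rank propagator, X2 ~ A X1
    lam, W = np.linalg.eig(Atilde)
    Phi = X2 @ Vr / sr @ W                    # exact DMD modes
    omega = np.log(lam) / dt                  # continuous-time complex growth rates
    return Phi, lam, omega

# Toy flow: one decaying oscillation, growth rate -0.2, frequency 2 rad/s.
dt = 0.1
t = np.arange(0.0, 10.0, dt)
x = np.linspace(0.0, 1.0, 50)
X = (np.outer(np.sin(np.pi * x), np.exp(-0.2 * t) * np.cos(2.0 * t))
     + np.outer(np.sin(2.0 * np.pi * x), np.exp(-0.2 * t) * np.sin(2.0 * t)))
Phi, lam, omega = dmd(X, dt, 2)
# omega recovers the conjugate pair -0.2 +/- 2i, the mode's unique growth rate.
```

    Comparing the recovered omega values across nominally identical datasets is precisely how "reproducible" modes can be identified, as discussed above.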

  8. Characterization of large-scale fluctuations and short-term variability of Seine river daily streamflow (France) over the period 1950-2008 by empirical mode decomposition and the Hilbert-Huang transform

    NASA Astrophysics Data System (ADS)

    Massei, N.; Fournier, M.

    2010-12-01

    Daily Seine river flow from 1950 to 2008 was analyzed using the Hilbert-Huang Transform (HHT). Over the last ten years, this method, which combines the so-called Empirical Mode Decomposition (EMD) multiresolution analysis and the Hilbert transform, has proven its efficiency for the analysis of transient oscillatory signals, although the mathematical foundation of the EMD is not yet fully established. HHT also provides an interesting alternative to other time-frequency or time-scale analyses of non-stationary signals, the most famous of which are wavelet-based approaches. In this application of HHT to the analysis of the hydrological variability of the Seine river, we seek to characterize the interannual patterns of daily flow, differentiate them from the short-term dynamics, and eventually interpret them in the context of regional climate regime fluctuations. To this end, HHT is also applied to the North Atlantic Oscillation (NAO) through the annual winter-months NAO index time series. For both the hydrological and the climatic signals, dominant variability scales are extracted and their temporal variations analyzed by determining the instantaneous frequency of each component. When compared to previous results obtained from the continuous wavelet transform (CWT) on the same data, the HHT results highlighted the same scales and broadly the same internal components for each signal. However, HHT allowed the identification and extraction of many more similar features (e.g., around 7 yr) between NAO and Seine flow during the 1950-2008 period than what was obtained from CWT, which is to say that variability scales in flow likely to originate from climatic regime fluctuations were more properly identified in river flow. In addition, HHT permitted a more accurate determination of singularities in the natural processes analyzed than CWT, for which the time-frequency resolution partly depends on the basic properties of the filter (i.e., the reference wavelet chosen initially). Compared to CWT, or even to discrete wavelet multiresolution analysis, HHT is auto-adaptive and non-parametric, allows an orthogonal decomposition of the signal analyzed, and provides a more accurate estimation of changing variability scales across time for highly transient signals.
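    The Hilbert step of HHT, which yields the instantaneous frequency of each intrinsic mode function after EMD, can be sketched with a numpy-only analytic signal; the chirp below is a synthetic stand-in for a real IMF:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal x + i*H[x] computed via the FFT."""
    N = len(x)
    Xf = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    if N % 2 == 0:
        h[N // 2] = 1.0
        h[1:N // 2] = 2.0      # double positive frequencies, zero negative ones
    else:
        h[1:(N + 1) // 2] = 2.0
    return np.fft.ifft(Xf * h)

# Synthetic chirp: instantaneous frequency rises linearly from 1 Hz to 3 Hz.
fs = 1000.0
t = np.arange(0.0, 10.0, 1.0 / fs)
f_true = 1.0 + 0.2 * t                       # instantaneous frequency (Hz)
x = np.cos(2.0 * np.pi * (t + 0.1 * t**2))   # phase = integral of f_true

z = analytic_signal(x)
inst_phase = np.unwrap(np.angle(z))
inst_freq = np.diff(inst_phase) * fs / (2.0 * np.pi)

# Away from the signal edges the estimate tracks the true chirp law.
mid = slice(len(t) // 4, 3 * len(t) // 4)
err = np.abs(inst_freq[mid] - f_true[:-1][mid]).max()
```

    Tracking such time-varying frequencies per component is what allows HHT to follow changing variability scales in transient signals.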

  9. Normal forms for reduced stochastic climate models

    PubMed Central

    Majda, Andrew J.; Franzke, Christian; Crommelin, Daan

    2009-01-01

    The systematic development of reduced low-dimensional stochastic climate models from observations or comprehensive high-dimensional climate models is an important topic for atmospheric low-frequency variability, climate sensitivity, and improved extended range forecasting. Here techniques from applied mathematics are utilized to systematically derive normal forms for reduced stochastic climate models for low-frequency variables. The use of a few Empirical Orthogonal Functions (EOFs) (also known as Principal Component Analysis, Karhunen–Loève analysis, and Proper Orthogonal Decomposition) depending on observational data to span the low-frequency subspace requires the assessment of dyad interactions besides the more familiar triads in the interaction between the low- and high-frequency subspaces of the dynamics. It is shown below that the dyad and multiplicative triad interactions combine with the climatological linear operator interactions to simultaneously produce both strong nonlinear dissipation and Correlated Additive and Multiplicative (CAM) stochastic noise. For a single low-frequency variable the dyad interactions and climatological linear operator alone produce a normal form with CAM noise from advection of the large scales by the small scales and simultaneously strong cubic damping. These normal forms should prove useful for developing systematic strategies for the estimation of stochastic models from climate data. As an illustrative example the one-dimensional normal form is applied below to low-frequency patterns such as the North Atlantic Oscillation (NAO) in a climate model. The results here also illustrate the shortcomings of a recent linear scalar CAM noise model proposed elsewhere for low-frequency variability. PMID:19228943
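    A scalar normal form of the kind described, with cubic damping and correlated additive and multiplicative (CAM) noise, can be illustrated by a minimal Euler-Maruyama simulation; all coefficients below are made-up placeholders, not values estimated from climate data:

```python
import numpy as np

# Illustrative scalar normal form with cubic damping and CAM noise:
#   dx = (F + a*x - c*x**3) dt + (A - B*x) dW1 + sigma dW2
F, a, c = 0.0, 0.5, 1.0          # weak linear growth, strong cubic damping
A, B, sigma = 0.4, 0.3, 0.2      # CAM noise amplitudes plus extra additive noise
dt, n_steps = 1e-3, 200_000

rng = np.random.default_rng(1)
dW = rng.normal(0.0, np.sqrt(dt), size=(n_steps - 1, 2))  # Wiener increments

x = np.empty(n_steps)
x[0] = 0.0
for n in range(n_steps - 1):
    drift = F + a * x[n] - c * x[n] ** 3
    # Euler-Maruyama step: state-dependent (A - B*x) dW1 plus additive sigma dW2.
    x[n + 1] = x[n] + drift * dt + (A - B * x[n]) * dW[n, 0] + sigma * dW[n, 1]

# The cubic damping keeps the trajectory bounded despite the linear growth term.
```

    The resulting trajectory exhibits the skewed, heavy-tailed behaviour that CAM noise is introduced to capture, while the strong cubic damping prevents divergence.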

  10. Stochastic uncertainty analysis for unconfined flow systems

    USGS Publications Warehouse

    Liu, Gaisheng; Zhang, Dongxiao; Lu, Zhiming

    2006-01-01

    A new stochastic approach proposed by Zhang and Lu (2004), called the Karhunen-Loève decomposition-based moment equation (KLME) method, has been extended to solving nonlinear, unconfined flow problems in randomly heterogeneous aquifers. This approach is based on an innovative combination of Karhunen-Loève decomposition, polynomial expansion, and perturbation methods. The random log-transformed hydraulic conductivity field (lnKS) is first expanded into a series in terms of orthogonal Gaussian standard random variables, with the coefficients obtained from the eigenvalues and eigenfunctions of the covariance function of lnKS. Next, the head h is decomposed as a perturbation expansion series Σh^(m), where h^(m) represents the mth-order head term with respect to the standard deviation of lnKS. Then h^(m) is further expanded into a polynomial series of m products of orthogonal Gaussian standard random variables whose coefficients h^(m)_{i1,i2,...,im} are deterministic and solved sequentially from low to high expansion orders using MODFLOW-2000. Finally, the statistics of head and flux are computed using simple algebraic operations on h^(m)_{i1,i2,...,im}. A series of numerical test results in 2-D and 3-D unconfined flow systems indicated that the KLME approach is effective in estimating the mean and (co)variance of both heads and fluxes and requires much less computational effort than the traditional Monte Carlo simulation technique.
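    The first step of the KLME approach, expanding lnKS in the eigenpairs of its covariance function, can be sketched in discrete form; the 1-D grid, variance, and correlation length below are illustrative assumptions, not values from the study:

```python
import numpy as np

# Discrete Karhunen-Loeve expansion of a 1-D Gaussian log-conductivity field
# with exponential covariance C(x, y) = var * exp(-|x - y| / L).
n, var, L = 200, 1.0, 0.2
x = np.linspace(0.0, 1.0, n)
C = var * np.exp(-np.abs(x[:, None] - x[None, :]) / L)

# Eigenpairs of the covariance matrix, sorted largest-first.
w, V = np.linalg.eigh(C)
order = np.argsort(w)[::-1]
w, V = w[order], V[:, order]

# Truncated expansion: lnK = sum_k sqrt(w_k) * V_k * xi_k with xi_k ~ N(0, 1).
n_terms = 20
rng = np.random.default_rng(0)
xi = rng.standard_normal(n_terms)
lnK = (np.sqrt(w[:n_terms]) * V[:, :n_terms]) @ xi

captured = w[:n_terms].sum() / w.sum()  # variance fraction kept by the truncation
```

    The rapid eigenvalue decay is what makes the truncated expansion, and hence the whole KLME machinery, so much cheaper than Monte Carlo sampling of the full field.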

  11. Regularization of Mickelsson generators for nonexceptional quantum groups

    NASA Astrophysics Data System (ADS)

    Mudrov, A. I.

    2017-08-01

    Let g' ⊂ g be a pair of Lie algebras of either symplectic or orthogonal infinitesimal endomorphisms of the complex vector spaces C^(N-2) ⊂ C^N, and let U_q(g') ⊂ U_q(g) be a pair of quantum groups with a triangular decomposition U_q(g) = U_q(g_-) U_q(g_+) U_q(h). Let Z_q(g, g') be the corresponding step algebra. We assume that its generators are rational trigonometric functions h* → U_q(g_±). We describe their regularization such that the resulting generators do not vanish for any choice of the weight.

  12. Implementing the sine transform of fermionic modes as a tensor network

    NASA Astrophysics Data System (ADS)

    Epple, Hannes; Fries, Pascal; Hinrichsen, Haye

    2017-09-01

    Based on the algebraic theory of signal processing, we recursively decompose the discrete sine transform of the first kind (DST-I) into small orthogonal block operations. Using a diagrammatic language, we then second-quantize this decomposition to construct a tensor network implementing the DST-I for fermionic modes on a lattice. The complexity of the resulting network is shown to scale as (5/4) n log n (not counting swap gates), where n is the number of lattice sites. Our method provides a systematic approach to generalizing Ferris' spectral tensor network to nontrivial boundary conditions.
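    As a sanity check on the orthogonality that the block decomposition above relies on, the normalized DST-I matrix is symmetric and its own inverse:

```python
import numpy as np

# Normalized DST-I matrix: S[j, k] = sqrt(2/(n+1)) * sin(pi*(j+1)*(k+1)/(n+1)).
n = 8
j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
S = np.sqrt(2.0 / (n + 1)) * np.sin(np.pi * (j + 1) * (k + 1) / (n + 1))

# DST-I is symmetric and involutory: applying it twice returns the input,
# so any exact factorization into blocks must preserve this orthogonality.
err_orth = np.abs(S @ S - np.eye(n)).max()
```

    Any recursive factorization of S into smaller orthogonal blocks, as in the record above, composes to the same orthogonal matrix.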

  13. Wavelet transforms with discrete-time continuous-dilation wavelets

    NASA Astrophysics Data System (ADS)

    Zhao, Wei; Rao, Raghuveer M.

    1999-03-01

    Wavelet constructions and transforms have been confined principally to the continuous-time domain. Even the discrete wavelet transform implemented through multirate filter banks is based on continuous-time wavelet functions that provide orthogonal or biorthogonal decompositions. This paper provides a novel wavelet transform construction based on the definition of discrete-time wavelets that can undergo continuous parameter dilations. The result is a transformation that has the advantage of discrete-time or digital implementation while circumventing the problem of inadequate scaling resolution seen with conventional dyadic or M-channel constructions. Examples of constructing such wavelets are presented.

  14. Studies in turbulence

    NASA Technical Reports Server (NTRS)

    Gatski, Thomas B. (Editor); Sarkar, Sutanu (Editor); Speziale, Charles G. (Editor)

    1992-01-01

    Various papers on turbulence are presented. Individual topics addressed include: modeling the dissipation rate in rotating turbulent flows, mapping closures for turbulent mixing and reaction, understanding turbulence in vortex dynamics, models for the structure and dynamics of near-wall turbulence, the complexity of turbulence near a wall, proper orthogonal decomposition, and propagating structures in wall-bounded turbulent flows. Also discussed are: constitutive relations in compressible turbulence, compressible turbulence and shock waves, direct simulation of compressible turbulence in a shear flow, structural genesis in wall-bounded turbulent flows, the vortex lattice structure of turbulent shear flows, the etiology of shear layer vortices, and trilinear coordinates in fluid mechanics.

  15. Spatial-temporal and modal analysis of propeller induced ground vortices by particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Yang, Y.; Sciacchitano, A.; Veldhuis, L. L. M.; Eitelberg, G.

    2016-10-01

    During the ground operation of aircraft, a system of vortices can be generated from the ground toward the propulsor, commonly denoted as ground vortices. Although extensive research has been conducted on ground vortices induced by turbofans, which were simplified as suction tubes, these studies do not capture well the properties of ground vortices induced by propellers, e.g., the flow phenomena due to the intermittent character of blade passing and the presence of the propeller slipstream. Therefore, an investigation of ground vortices induced by a propeller is performed to improve understanding of these phenomena. The velocity distributions in two different planes containing the vortices were measured by high-frequency Particle Image Velocimetry: a wall-parallel plane in close proximity to the ground and a wall-normal plane upstream of the propeller. The instantaneous flow fields feature highly unsteady flow in both planes. A spectral analysis is conducted on the two flow fields and the energetic frequencies are quantified. The flow fields are further evaluated by applying Proper Orthogonal Decomposition analysis to capture the coherent flow structures. Consistent flow structures with strong contributions to the turbulent kinetic energy are noticed in the two planes.

  16. Aeroelastic Uncertainty Quantification Studies Using the S4T Wind Tunnel Model

    NASA Technical Reports Server (NTRS)

    Nikbay, Melike; Heeg, Jennifer

    2017-01-01

    This paper originates from the joint efforts of an aeroelastic study team in the Applied Vehicle Technology Panel of the NATO Science and Technology Organization, Task Group AVT-191, titled "Application of Sensitivity Analysis and Uncertainty Quantification to Military Vehicle Design." We present aeroelastic uncertainty quantification studies using the SemiSpan Supersonic Transport wind tunnel model at the NASA Langley Research Center. The aeroelastic study team decided to treat both structural and aerodynamic input parameters as uncertain and represent them as samples drawn from statistical distributions, propagating them through aeroelastic analysis frameworks. Uncertainty quantification processes require many function evaluations to assess the impact of variations in numerous parameters on the vehicle characteristics, rapidly increasing the computational time requirement relative to that required to assess a system deterministically. The increased computational time is particularly prohibitive if high-fidelity analyses are employed. As a remedy, the Istanbul Technical University team employed an Euler solver in an aeroelastic analysis framework and implemented reduced order modeling with Polynomial Chaos Expansion and Proper Orthogonal Decomposition to perform the uncertainty propagation. The NASA team chose to reduce the prohibitive computational time by employing linear solution processes. The NASA team also focused on determining input sample distributions.

  17. PHYSICS OF NON-GAUSSIAN FIELDS AND THE COSMOLOGICAL GENUS STATISTIC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    James, J. Berian, E-mail: berian@berkeley.edu

    2012-05-20

    We report a technique to calculate the impact of distinct physical processes inducing non-Gaussianity on the cosmological density field. A natural decomposition of the cosmic genus statistic into an orthogonal polynomial sequence allows complete expression of the scale-dependent evolution of the topology of large-scale structure, in which effects including galaxy bias, nonlinear gravitational evolution, and primordial non-Gaussianity may be delineated. The relationship of this decomposition to previous methods for analyzing the genus statistic is briefly considered and the following applications are made: (1) the expression of certain systematics affecting topological measurements, (2) the quantification of broad deformations from Gaussianity thatmore » appear in the genus statistic as measured in the Horizon Run simulation, and (3) the study of the evolution of the genus curve for simulations with primordial non-Gaussianity. These advances improve the treatment of flux-limited galaxy catalogs for use with this measurement and further the use of the genus statistic as a tool for exploring non-Gaussianity.« less

  18. Low-dimensional and Data Fusion Techniques Applied to a Rectangular Supersonic Multi-stream Jet

    NASA Astrophysics Data System (ADS)

    Berry, Matthew; Stack, Cory; Magstadt, Andrew; Ali, Mohd; Gaitonde, Datta; Glauser, Mark

    2017-11-01

    Low-dimensional models of experimental and simulation data for a complex supersonic jet were fused to reconstruct time-dependent proper orthogonal decomposition (POD) coefficients. The jet consists of a multi-stream rectangular single expansion ramp nozzle, containing a core stream operating at M_j,1 = 1.6 and a bypass stream at M_j,3 = 1.0 with an underlying deck. POD was applied to schlieren and PIV data to acquire the spatial basis functions. These eigenfunctions were projected onto their corresponding time-dependent large eddy simulation (LES) fields to reconstruct the temporal POD coefficients. This reconstruction was able to resolve spectral peaks that were previously aliased due to the slower sampling rates of the experiments. Additionally, dynamic mode decomposition (DMD) was applied to the experimental and LES datasets, and the spatio-temporal characteristics were compared to POD. The authors would like to acknowledge AFOSR, program manager Dr. Doug Smith, for funding this research, Grant No. FA9550-15-1-0435.

  19. Improving performance of channel equalization in RSOA-based WDM-PON by QR decomposition.

    PubMed

    Li, Xiang; Zhong, Wen-De; Alphones, Arokiaswami; Yu, Changyuan; Xu, Zhaowen

    2015-10-19

    In reflective semiconductor optical amplifier (RSOA)-based wavelength division multiplexed passive optical networks (WDM-PONs), the bit rate is limited by the low modulation bandwidth of RSOAs. To overcome this limitation, we apply QR decomposition in the channel equalizer (QR-CE) to achieve successive interference cancellation (SIC) for discrete Fourier transform spreading orthogonal frequency division multiplexing (DFT-S OFDM) signals. Using an RSOA with a 3-dB modulation bandwidth of only ~800 MHz, we experimentally demonstrate a 15.5-Gb/s DFT-S OFDM transmission over 20-km SSMF with QR-CE. The experimental results show that DFT-S OFDM with QR-CE attains much better BER performance than DFT-S OFDM and OFDM with conventional channel equalizers. The impacts of several parameters on QR-CE are investigated. It is found that 2 sub-bands per OFDM symbol and 1 pilot per sub-band are sufficient to achieve optimal performance while maintaining high spectral efficiency.
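    The QR-based successive interference cancellation idea can be sketched generically; this is a plain SIC detector for y = H x + n with BPSK symbols and a made-up, well-conditioned channel matrix, not the paper's RSOA-specific QR-CE:

```python
import numpy as np

rng = np.random.default_rng(2)
H = np.array([[2.0, 0.5, 0.1, 0.0],
              [0.4, 1.8, 0.3, 0.2],
              [0.1, 0.2, 2.2, 0.5],
              [0.0, 0.1, 0.4, 1.9]])       # illustrative channel matrix
x = rng.choice([-1.0, 1.0], size=4)        # transmitted BPSK symbols
y = H @ x + 0.01 * rng.standard_normal(4)  # received signal, light noise

Q, R = np.linalg.qr(H)
z = Q.T @ y                                # z = R x + rotated noise, R upper triangular

# Detect from the last symbol upward, cancelling each decision as we go (SIC).
x_hat = np.zeros(4)
for i in range(3, -1, -1):
    residual = z[i] - R[i, i + 1:] @ x_hat[i + 1:]
    x_hat[i] = np.sign(residual / R[i, i])  # slice to the nearest BPSK symbol

# With light noise every detected symbol matches the transmitted one.
```

    The triangular structure of R is what makes the layer-by-layer cancellation possible; the equalizer in the record applies the same idea per sub-band of the DFT-S OFDM signal.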

  20. A Quasi-Steady Flexible Launch Vehicle Stability Analysis Using Steady CFD with Unsteady Aerodynamic Enhancement

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.

    2011-01-01

    Launch vehicles frequently experience a reduced stability margin through the transonic Mach number range. This reduced stability margin is caused by an undamping of the aerodynamics in one of the lower-frequency flexible or rigid-body modes. Analysis of the behavior of a flexible vehicle is routinely performed with quasi-steady aerodynamic lineloads derived from steady rigid computational fluid dynamics (CFD). However, a quasi-steady aeroelastic stability analysis can be unconservative at the critical Mach numbers where experiment or unsteady computational aeroelastic (CAE) analysis shows a reduced or even negative aerodynamic damping. This paper presents a method of enhancing the quasi-steady aeroelastic stability analysis of a launch vehicle with unsteady aerodynamics. The enhanced formulation uses unsteady CFD to compute the response of selected lower-frequency modes. The response is contained in a time history of the vehicle lineloads. A proper orthogonal decomposition of the unsteady aerodynamic lineload response is used to reduce the data volume, and system identification is used to derive the aerodynamic stiffness, damping, and mass matrices. The results of the enhanced quasi-steady aeroelastic stability analysis are compared with the damping and frequency computed from unsteady CAE analysis and from a quasi-steady analysis. The results show that incorporating unsteady aerodynamics in this way brings the enhanced quasi-steady aeroelastic stability analysis into close agreement with the unsteady CAE analysis.

  1. Adaptive sparsest narrow-band decomposition method and its applications to rolling element bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Cheng, Junsheng; Peng, Yanfeng; Yang, Yu; Wu, Zhantao

    2017-02-01

    Inspired by the ASTFA method, an adaptive sparsest narrow-band decomposition (ASNBD) method is proposed in this paper. In the ASNBD method, an optimized filter is established first. The parameters of the filter are determined by solving a nonlinear optimization problem in which a regulated differential operator serves as the objective function, so that each component is constrained to be a local narrow-band signal. Afterwards, the signal is filtered by the optimized filter to generate an intrinsic narrow-band component (INBC). ASNBD is proposed to address problems present in ASTFA: the Gauss-Newton type method applied to solve the optimization problem in ASTFA is hard to replace and very sensitive to initial values, whereas a more suitable optimization method such as a genetic algorithm (GA) can be used to solve the optimization problem in ASNBD. Meanwhile, compared with ASTFA, the decomposition results generated by ASNBD have better physical meaning because the components are constrained to be local narrow-band signals. Comparisons are made between ASNBD, ASTFA, and EMD by analyzing simulated and experimental signals. The results indicate that the ASNBD method is superior to the other two methods in generating more accurate components from noisy signals, restraining the boundary effect, possessing better orthogonality, and diagnosing rolling element bearing faults.

  2. Nonlinear model-order reduction for compressible flow solvers using the Discrete Empirical Interpolation Method

    NASA Astrophysics Data System (ADS)

    Fosas de Pando, Miguel; Schmid, Peter J.; Sipp, Denis

    2016-11-01

    Nonlinear model reduction for large-scale flows is an essential component in many fluid applications such as flow control, optimization, parameter-space exploration, and statistical analysis. In this article, we generalize the POD-DEIM method, introduced by Chaturantabut & Sorensen [1], to address nonlocal nonlinearities in the equations without loss of performance or efficiency. The nonlinear terms are represented by nested DEIM approximations using multiple expansion bases based on the Proper Orthogonal Decomposition. These extensions are imperative, for example, for applications of the POD-DEIM method to large-scale compressible flows. The efficient implementation of the presented model-reduction technique follows our earlier work [2] on linearized and adjoint analyses and takes advantage of the modular structure of our compressible flow solver. The efficacy of the nonlinear model-reduction technique is demonstrated on the flow around an airfoil and its acoustic footprint. We obtain an accurate and robust low-dimensional model that captures the main features of the full flow.
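    The core of standard DEIM, greedy selection of interpolation indices for a POD basis of the nonlinear term, can be sketched as follows; the parametrized snapshot family below is a toy assumption, not the compressible-flow nonlinearity of the record:

```python
import numpy as np

def deim_indices(U):
    """Greedy DEIM interpolation-point selection for a basis U (n x m)."""
    n, m = U.shape
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, m):
        # Interpolate the next basis vector at the points chosen so far,
        # then place the new point where the residual is largest.
        c = np.linalg.solve(U[np.ix_(idx, list(range(j)))], U[idx, j])
        r = U[:, j] - U[:, :j] @ c
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)

# Toy parametrized nonlinearity f(x; mu) sampled on a grid.
x = np.linspace(0.0, 1.0, 100)
mu = np.linspace(1.0, 10.0, 30)
F = np.exp(-np.outer(x, mu)) * np.sin(np.outer(x, mu))
U, s, _ = np.linalg.svd(F, full_matrices=False)
m = 10
Um = U[:, :m]
P = deim_indices(Um)

# DEIM approximation of one snapshot from only m sampled entries.
f = F[:, 17]
f_deim = Um @ np.linalg.solve(Um[P], f[P])
err = np.abs(f - f_deim).max()
```

    Evaluating the nonlinear term at only the m selected points, rather than on the full grid, is what removes the cost bottleneck in POD-Galerkin models; the nested variant in the record extends this idea to nonlocal nonlinearities.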

  3. Simulation of wind turbine wakes using the actuator line technique.

    PubMed

    Sørensen, Jens N; Mikkelsen, Robert F; Henningson, Dan S; Ivanell, Stefan; Sarmast, Sasan; Andersen, Søren J

    2015-02-28

    The actuator line technique was introduced as a numerical tool to be employed in combination with large eddy simulations to enable the study of wakes and wake interaction in wind farms. The technique is today widely used for studying basic features of wakes as well as for making performance predictions of wind farms. In this paper, we give a short introduction to the wake problem and the actuator line methodology and present a study in which the technique is employed to determine the near-wake properties of wind turbines. The presented results include a comparison of experimental results of the wake characteristics of the flow around a three-bladed model wind turbine, the development of a simple analytical formula for determining the near-wake length behind a wind turbine, and a detailed investigation of wake structures based on proper orthogonal decomposition analysis of numerically generated snapshots of the wake. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  4. Recharge signal identification based on groundwater level observations.

    PubMed

    Yu, Hwa-Lung; Chu, Hone-Jay

    2012-10-01

    This study applied the method of rotated empirical orthogonal functions to directly decompose the space-time groundwater level variations and determine the potential recharge zones by investigating the correlation between the identified groundwater signals and the observed local rainfall records. The approach is used to analyze the spatiotemporal process of piezometric heads estimated by the Bayesian maximum entropy method from monthly observations of 45 wells during 1999-2007 in the Pingtung Plain of Taiwan. From the results, the primary potential recharge area is located at the proximal fan areas, where the recharge process accounts for 88% of the spatiotemporal variations of piezometric heads in the study area. The decomposition of groundwater levels associated with rainfall can provide information on the recharge process, since rainfall is an important contributor to groundwater recharge in semi-arid regions. Correlation analysis shows that the identified recharge is closely associated with the temporal variation of the local precipitation, with a delay of 1-2 months in the study area.

  5. High-resolution time-frequency representation of EEG data using multi-scale wavelets

    NASA Astrophysics Data System (ADS)

    Li, Yang; Cui, Wei-Gang; Luo, Mei-Lin; Li, Ke; Wang, Lina

    2017-09-01

    An efficient time-varying autoregressive (TVAR) modelling scheme that expands the time-varying parameters onto multi-scale wavelet basis functions is presented for modelling nonstationary signals, with applications to time-frequency analysis (TFA) of electroencephalogram (EEG) signals. In the new parametric modelling framework, the time-dependent parameters of the TVAR model are locally represented using a novel multi-scale wavelet decomposition scheme, which can capture smooth trends while simultaneously tracking abrupt changes of the time-varying parameters. A forward orthogonal least squares (FOLS) algorithm, aided by a mutual information criterion, is then applied for sparse model term selection and parameter estimation. Two simulation examples illustrate that the proposed multi-scale wavelet basis functions outperform single-scale wavelet basis functions and the Kalman filter algorithm for many nonstationary processes. Furthermore, an application of the proposed method to a real EEG signal demonstrates that the new approach can provide highly time-dependent spectral resolution capability.

  6. An adaptive model order reduction by proper snapshot selection for nonlinear dynamical problems

    NASA Astrophysics Data System (ADS)

    Nigro, P. S. B.; Anndif, M.; Teixeira, Y.; Pimenta, P. M.; Wriggers, P.

    2016-04-01

    Model Order Reduction (MOR) methods are employed in many fields of Engineering in order to reduce the processing time of complex computational simulations. A usual approach to achieve this is the application of Galerkin projection to generate representative subspaces (reduced spaces). However, when strong nonlinearities are present in a dynamical system and this technique is employed several times along the simulation, it can be very inefficient. This work proposes a new adaptive strategy that ensures low computational cost and small error to deal with this problem. This work also presents a new method to select snapshots, named Proper Snapshot Selection (PSS). The objective of the PSS is to obtain a good balance between accuracy and computational cost by improving the adaptive strategy through a better snapshot selection in real time (online analysis). With this method, a substantial reduction of the subspace is possible while keeping the quality of the model, without the use of the Proper Orthogonal Decomposition (POD).

  7. Ocean Models and Proper Orthogonal Decomposition

    NASA Astrophysics Data System (ADS)

    Salas-de-Leon, D. A.

    2007-05-01

    The increasing computational developments and the better understanding of mathematical and physical systems have resulted in an increasing number of ocean models. Long ago, modelers were like a secret society, recognizing each other by codes and languages that only a select group of people was able to understand. Access to computational systems was limited: on one hand, equipment and computing time were expensive and restricted, and on the other hand, the required advanced programming languages were ones that not everybody wanted to learn. Nowadays most college freshmen own a personal computer (PC or laptop) and/or have access to more sophisticated computational systems than those available for research in the early 80's. This availability of resources has resulted in major access to all kinds of models. Today computer speed and the algorithms no longer seem to be a problem, even though some models take days to run on small computational systems. Almost every oceanographic institution has its own model; what is more, within the same institution, from one office to the next, there are different models for the same phenomena, developed by different researchers. The results do not differ substantially, since the equations are the same and the solution algorithms are similar. The algorithms, and the grids constructed with them, can be found in textbooks and/or on the internet. Every year more sophisticated models are constructed. Proper Orthogonal Decomposition is a technique that reduces the number of variables to solve while keeping the model properties, so it can be a very useful tool for diminishing the computations that have to be performed on "small" computational systems, making sophisticated models available to a greater community.

  8. Mathematical construction and perturbation analysis of Zernike discrete orthogonal points.

    PubMed

    Shi, Zhenguang; Sui, Yongxin; Liu, Zhenyu; Peng, Ji; Yang, Huaijiang

    2012-06-20

    Zernike functions are orthogonal within the unit circle, but they are not orthogonal over discrete point sets such as CCD arrays or finite element grids, and this loss of orthogonality results in reconstruction errors. By using roots of Legendre polynomials, a set of points within the unit circle can be constructed so that Zernike functions are discretely orthogonal over the set. In addition, the location tolerances of the points are studied by perturbation analysis, and the requirements on positioning precision prove not to be very strict. Computer simulations show that this approach provides a very accurate wavefront reconstruction with the proposed sampling set.
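    The radial part of this construction can be illustrated numerically. For azimuthal order m = 0, the Zernike radial polynomials reduce to Legendre polynomials in u = 2r^2 - 1, so radii placed at Gauss-Legendre nodes make the discrete radial inner product exactly orthogonal. The one-dimensional sketch below is an assumption-laden simplification of the paper's full two-dimensional point set.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

# Zernike radial polynomials with m = 0 reduce to Legendre polynomials
# in u = 2 r^2 - 1:  R_{2k}^0(r) = P_k(2 r^2 - 1)
def radial(k, r):
    u = 2 * r**2 - 1
    return np.polynomial.legendre.Legendre.basis(k)(u)

# Gauss-Legendre nodes in u give radii where discrete orthogonality holds
u, w = leggauss(8)
r = np.sqrt((u + 1) / 2)

# Discrete inner product approximating int_0^1 R_j R_k r dr (r dr = du/4)
G = np.array([[np.sum(w * radial(j, r) * radial(k, r)) / 4
               for k in range(4)] for j in range(4)])
# G is diagonal to machine precision: discrete orthogonality at these nodes
```

The diagonal entries equal the continuum normalization 1/(2(2k+1)), because an 8-point Gauss-Legendre rule integrates the degree-6 polynomial products exactly.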

  9. Low-rank plus sparse decomposition for exoplanet detection in direct-imaging ADI sequences. The LLSG algorithm

    NASA Astrophysics Data System (ADS)

    Gomez Gonzalez, C. A.; Absil, O.; Absil, P.-A.; Van Droogenbroeck, M.; Mawet, D.; Surdej, J.

    2016-05-01

    Context. Data processing constitutes a critical component of high-contrast exoplanet imaging. Its role is almost as important as the choice of a coronagraph or a wavefront control system, and it is intertwined with the chosen observing strategy. Among the data processing techniques for angular differential imaging (ADI), the most recent is the family of principal component analysis (PCA) based algorithms. It is a widely used statistical tool developed during the first half of the past century. PCA serves, in this case, as a subspace projection technique for constructing a reference point spread function (PSF) that can be subtracted from the science data for boosting the detectability of potential companions present in the data. Unfortunately, when building this reference PSF from the science data itself, PCA comes with certain limitations such as the sensitivity of the lower dimensional orthogonal subspace to non-Gaussian noise. Aims: Inspired by recent advances in machine learning algorithms such as robust PCA, we aim to propose a localized subspace projection technique that surpasses current PCA-based post-processing algorithms in terms of the detectability of companions at near real-time speed, a quality that will be useful for future direct imaging surveys. Methods: We used randomized low-rank approximation methods recently proposed in the machine learning literature, coupled with entry-wise thresholding to decompose an ADI image sequence locally into low-rank, sparse, and Gaussian noise components (LLSG). This local three-term decomposition separates the starlight and the associated speckle noise from the planetary signal, which mostly remains in the sparse term. We tested the performance of our new algorithm on a long ADI sequence obtained on β Pictoris with VLT/NACO. Results: Compared to a standard PCA approach, LLSG decomposition reaches a higher signal-to-noise ratio and has an overall better performance in the receiver operating characteristic space. 
This three-term decomposition brings a detectability boost compared to the full-frame standard PCA approach, especially in the small inner working angle region where complex speckle noise prevents PCA from discerning true companions from noise.
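    The low-rank-plus-sparse idea behind LLSG can be illustrated with a toy alternating scheme: a truncated SVD estimates the low-rank (starlight and speckle) term, and entry-wise hard thresholding collects the sparse (companion-like) term. This sketch is not the LLSG algorithm itself, which works on local patches with randomized SVDs and a Gaussian noise term; the data, rank, and threshold below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "image sequence" matrix: rank-2 background plus sparse outliers
n, m = 60, 80
L_true = 0.3 * rng.standard_normal((n, 2)) @ rng.standard_normal((2, m))
S_true = np.zeros((n, m))
spike_idx = rng.choice(n * m, size=20, replace=False)
S_true.flat[spike_idx] = 10.0
M = L_true + S_true

# Alternating scheme: truncated SVD for the low-rank term,
# entry-wise hard thresholding for the sparse term
S = np.zeros_like(M)
for _ in range(20):
    U, sv, Vt = np.linalg.svd(M - S, full_matrices=False)
    L = U[:, :2] * sv[:2] @ Vt[:2]            # rank-2 approximation
    resid = M - L
    S = np.where(np.abs(resid) > 5.0, resid, 0.0)

rel_err = np.linalg.norm(L - L_true) / np.linalg.norm(L_true)
# rel_err is small: the sparse spikes separate cleanly from the background
```

In the imaging application, the sparse term is where the planetary signal mostly ends up, which is why the separation boosts detectability.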

  10. Fault Detection of a Roller-Bearing System through the EMD of a Wavelet Denoised Signal

    PubMed Central

    Ahn, Jong-Hyo; Kwak, Dae-Ho; Koh, Bong-Hwan

    2014-01-01

    This paper investigates fault detection of a roller bearing system using a wavelet denoising scheme and proper orthogonal value (POV) of an intrinsic mode function (IMF) covariance matrix. The IMF of the bearing vibration signal is obtained through empirical mode decomposition (EMD). The signal screening process in the wavelet domain eliminates noise-corrupted portions that may lead to inaccurate prognosis of bearing conditions. We segmented the denoised bearing signal into several intervals, and decomposed each of them into IMFs. The first IMF of each segment is collected to become a covariance matrix for calculating the POV. We show that covariance matrices from healthy and damaged bearings exhibit different POV profiles, which can be a damage-sensitive feature. We also illustrate the conventional approach of feature extraction, of observing the kurtosis value of the measured signal, to compare the functionality of the proposed technique. The study demonstrates the feasibility of wavelet-based de-noising, and shows through laboratory experiments that tracking the proper orthogonal values of the covariance matrix of the IMF can be an effective and reliable measure for monitoring bearing fault. PMID:25196008
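    The proper-orthogonal-value computation at the heart of the method can be sketched as follows: segment a vibration signal, form the covariance matrix of the segments, and take its singular values as POVs. Note that the paper's wavelet denoising and EMD/IMF steps are omitted here; the raw segments stand in for the first IMFs, and the signals are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 1000
t = np.arange(fs) / fs  # 1 s of data

# Hypothetical vibration signals: damage adds periodic impulsive impacts
healthy = np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(t.size)
damaged = healthy.copy()
damaged[::100] += 3.0

def pov(signal, n_seg=10):
    """Proper orthogonal values of the segment covariance matrix."""
    X = np.array(np.split(signal, n_seg))      # rows: signal segments
    C = np.cov(X)                              # segment covariance matrix
    return np.linalg.svd(C, compute_uv=False)  # POVs (eigenvalues of C)

pov_h, pov_d = pov(healthy), pov(damaged)
# pov_d[0] > pov_h[0]: the impacts inflate the leading POV,
# making it a damage-sensitive feature
```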

  11. Projection-Based Reduced Order Modeling for Spacecraft Thermal Analysis

    NASA Technical Reports Server (NTRS)

    Qian, Jing; Wang, Yi; Song, Hongjun; Pant, Kapil; Peabody, Hume; Ku, Jentung; Butler, Charles D.

    2015-01-01

    This paper presents a mathematically rigorous, subspace projection-based reduced order modeling (ROM) methodology and an integrated framework to automatically generate reduced order models for spacecraft thermal analysis. Two key steps in the reduced order modeling procedure are described: (1) the acquisition of a full-scale spacecraft model in the ordinary differential equation (ODE) and differential algebraic equation (DAE) form to resolve its dynamic thermal behavior; and (2) the ROM to markedly reduce the dimension of the full-scale model. Specifically, proper orthogonal decomposition (POD) in conjunction with discrete empirical interpolation method (DEIM) and trajectory piece-wise linear (TPWL) methods are developed to address the strong nonlinear thermal effects due to coupled conductive and radiative heat transfer in the spacecraft environment. Case studies using NASA-relevant satellite models are undertaken to verify the capability and to assess the computational performance of the ROM technique in terms of speed-up and error relative to the full-scale model. ROM exhibits excellent agreement in spatiotemporal thermal profiles (<0.5% relative error in pertinent time scales) along with salient computational acceleration (up to two orders of magnitude speed-up) over the full-scale analysis. These findings establish the feasibility of ROM to perform rational and computationally affordable thermal analysis, develop reliable thermal control strategies for spacecraft, and greatly reduce the development cycle times and costs.
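    A minimal POD-Galerkin reduced order model can be demonstrated on a 1-D heat equation standing in for the spacecraft thermal model; the DEIM/TPWL treatment of nonlinear radiative coupling is omitted, so this linear sketch only shows the projection step.

```python
import numpy as np

# Full-order model: 1-D transient heat conduction with explicit Euler
N, dt, nu, nsteps = 50, 1e-4, 1.0, 2000
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]
A = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) * nu / dx**2

u = np.sin(np.pi * x)
snaps = []
for step in range(nsteps):
    u = u + dt * (A @ u)
    if step % 20 == 0:
        snaps.append(u.copy())
S = np.array(snaps).T                      # snapshot matrix, columns in time

# POD: left singular vectors of S span the reduced space (3 modes kept)
Phi, sigma, _ = np.linalg.svd(S, full_matrices=False)
Phi_r = Phi[:, :3]
A_r = Phi_r.T @ A @ Phi_r                  # Galerkin-projected operator

# Time-march the reduced model and compare against the full-order state
a = Phi_r.T @ np.sin(np.pi * x)
for step in range(nsteps):
    a = a + dt * (A_r @ a)
u_rom = Phi_r @ a
err = np.max(np.abs(u_rom - u))            # small: ROM tracks the FOM
```

The reduced system has dimension 3 instead of 50, which is where the speed-up reported in the paper comes from at much larger scale.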

  12. Analysis of recoverable current from one component of magnetic flux density in MREIT and MRCDI.

    PubMed

    Park, Chunjae; Lee, Byung Il; Kwon, Oh In

    2007-06-07

    Magnetic resonance current density imaging (MRCDI) provides a current density image by measuring the induced magnetic flux density within the subject with a magnetic resonance imaging (MRI) scanner. Magnetic resonance electrical impedance tomography (MREIT) has been focused on extracting some useful information of the current density and conductivity distribution in the subject Omega using measured B(z), one component of the magnetic flux density B. In this paper, we analyze the map Tau from current density vector field J to one component of magnetic flux density B(z) without any assumption on the conductivity. The map Tau provides an orthogonal decomposition J = J(P) + J(N) of the current J where J(N) belongs to the null space of the map Tau. We explicitly describe the projected current density J(P) from measured B(z). Based on the decomposition, we prove that B(z) data due to one injection current guarantee a unique determination of the isotropic conductivity under assumptions that the current is two-dimensional and the conductivity value on the surface is known. For a two-dimensional dominating current case, the projected current density J(P) provides a good approximation of the true current J without accumulating noise effects. Numerical simulations show that J(P) from measured B(z) is quite similar to the target J. Biological tissue phantom experiments compare J(P) with the reconstructed J via the reconstructed isotropic conductivity using the harmonic B(z) algorithm.

  13. Understanding Kelvin-Helmholtz instability in paraffin-based hybrid rocket fuels

    NASA Astrophysics Data System (ADS)

    Petrarolo, Anna; Kobald, Mario; Schlechtriem, Stefan

    2018-04-01

    Liquefying fuels show higher regression rates than classical polymeric ones. They are able to form, along their burning surface, a liquid layer of low viscosity and surface tension, which can become unstable (Kelvin-Helmholtz instability) due to the high-velocity gas flow in the fuel port. This causes entrainment of liquid droplets from the fuel surface into the oxidizer gas flow. To better understand the droplet entrainment mechanism, optical investigations of the combustion behaviour of paraffin-based hybrid rocket fuels in combination with gaseous oxygen were conducted in the framework of this research. Combustion tests were performed in a 2D single-slab burner at atmospheric conditions. High-speed videos were recorded and analysed with two decomposition techniques: proper orthogonal decomposition (POD) and independent component analysis (ICA) were applied to the scalar field of the flame luminosity. The most excited frequencies and wavelengths of the wave-like structures characterizing the liquid melt layer were computed. The fuel slab viscosity and the oxidizer mass flow were varied to study their influence on the liquid layer instability process. The combustion is dominated by periodic, wave-like structures for all the analysed fuels. The frequencies and wavelengths characterizing the liquid melt layer depend on the fuel viscosity and oxidizer mass flow. Moreover, for very low mass flows, no wavelength peaks are detected for the higher-viscosity fuels. This is important to better understand and predict the onset and development of the entrainment process, which is connected to the amplification of the longitudinal waves.

  14. Singular value decomposition utilizing parallel algorithms on graphical processors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kotas, Charlotte W; Barhen, Jacob

    2011-01-01

    One of the current challenges in underwater acoustic array signal processing is the detection of quiet targets in the presence of noise. In order to enable robust detection, one of the key processing steps requires data and replica whitening. This, in turn, involves the eigen-decomposition of the sample spectral matrix, Cx = (1/K) Σ_{k=1..K} X(k)X^H(k), where X(k) denotes a single frequency snapshot with an element for each element of the array. By employing the singular value decomposition (SVD) method, the eigenvectors and eigenvalues can be determined directly from the data without computing the sample covariance matrix, reducing the computational requirements for a given level of accuracy (van Trees, Optimum Array Processing). (Recall that the SVD of a complex matrix A involves determining V, Σ, and U such that A = UΣV^H, where U and V are orthonormal and Σ is a positive, real, diagonal matrix containing the singular values of A. U and V are the eigenvectors of AA^H and A^HA, respectively, while the singular values are the square roots of the eigenvalues of AA^H.) Because it is desirable to be able to compute these quantities in real time, an efficient technique for computing the SVD is vital. In addition, emerging multicore processors like graphical processing units (GPUs) are bringing parallel processing capabilities to an ever increasing number of users. Since the computational tasks involved in array signal processing are well suited for parallelization, it is expected that these computations will be implemented using GPUs as soon as users have the necessary computational tools available to them. Thus, it is important to have an SVD algorithm that is suitable for these processors. This work explores the effectiveness of two different parallel SVD implementations on an NVIDIA Tesla C2050 GPU (14 multiprocessors, 32 cores per multiprocessor, 1.15 GHz clock speed). The first algorithm is a two-step algorithm which bidiagonalizes the matrix using Householder transformations, and then diagonalizes the intermediate bidiagonal matrix through implicit QR shifts. This is similar to the approach implemented for real matrices by Lahabar and Narayanan ("Singular Value Decomposition on GPU using CUDA", IEEE International Parallel Distributed Processing Symposium 2009). The implementation is done in a hybrid manner, with the bidiagonalization stage done using the GPU while the diagonalization stage is done using the CPU, with the GPU used to update the U and V matrices. The second algorithm is based on a one-sided Jacobi scheme utilizing a sequence of pair-wise column orthogonalizations such that A is replaced by AV until the resulting matrix is sufficiently orthogonal (that is, equal to UΣ). V is obtained from the sequence of orthogonalizations, while Σ can be found from the square roots of the diagonal elements of A^H A and, once Σ is known, U can be found by column-scaling the resulting matrix. These implementations utilize CUDA Fortran and NVIDIA's CUBLAS library. The primary goal of this study is to quantify the comparative performance of these two techniques against each other and against other standard implementations (for example, MATLAB). Considering that there is significant overhead associated with transferring data to the GPU and with synchronization between the GPU and the host CPU, it is also important to understand when it is worthwhile to use the GPU, in terms of the matrix size and the number of concurrent SVDs to be calculated.
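    A compact real-valued version of the one-sided Jacobi scheme described above may help make the idea concrete; the paper targets complex matrices on the GPU, whereas this CPU sketch just shows the pairwise column orthogonalization.

```python
import numpy as np

def jacobi_svd(A, tol=1e-12, sweeps=30):
    """One-sided Jacobi SVD (real case): rotate column pairs of A until all
    columns are mutually orthogonal; then A has become U*diag(sigma)."""
    A = A.astype(float).copy()
    n = A.shape[1]
    V = np.eye(n)
    for _ in range(sweeps):
        off = 0.0
        for p in range(n - 1):
            for q in range(p + 1, n):
                alpha = A[:, p] @ A[:, p]
                beta = A[:, q] @ A[:, q]
                gamma = A[:, p] @ A[:, q]
                off = max(off, abs(gamma) / np.sqrt(alpha * beta))
                if abs(gamma) < tol:
                    continue
                # Rotation angle that zeroes the (p, q) inner product
                zeta = (beta - alpha) / (2 * gamma)
                t = np.sign(zeta) / (abs(zeta) + np.sqrt(1 + zeta**2))
                c = 1 / np.sqrt(1 + t**2)
                s = c * t
                R = np.array([[c, s], [-s, c]])
                A[:, [p, q]] = A[:, [p, q]] @ R
                V[:, [p, q]] = V[:, [p, q]] @ R
        if off < tol:       # full sweep found nothing to rotate: converged
            break
    sigma = np.linalg.norm(A, axis=0)   # singular values (unsorted)
    U = A / sigma                       # column scaling recovers U
    return U, sigma, V

rng = np.random.default_rng(3)
M = rng.standard_normal((6, 4))
U, s, V = jacobi_svd(M)
# U * diag(s) * V^T reproduces M
```

Each inner rotation touches only two columns, which is the independence the paper exploits for parallel execution.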

  15. Localized motion in random matrix decomposition of complex financial systems

    NASA Astrophysics Data System (ADS)

    Jiang, Xiong-Fei; Zheng, Bo; Ren, Fei; Qiu, Tian

    2017-04-01

    With the random matrix theory, we decompose the multi-dimensional time series of complex financial systems into a set of orthogonal eigenmode functions, which are classified into the market mode, sector mode, and random mode. In particular, the localized motion generated by the business sectors plays an important role in financial systems. Both the business sectors and their impact on the stock market are identified from the localized motion. We clarify that the localized motion induces different characteristics of the time correlations for the stock-market index and individual stocks. With a variation of a two-factor model, we reproduce the return-volatility correlations of the eigenmodes.
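    The separation of a market mode from the random-mode bulk can be illustrated with synthetic returns: the leading eigenvalue of the correlation matrix sits far above the Marchenko-Pastur edge that bounds pure-noise eigenvalues. The one-factor return model below is invented for illustration and is not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(4)
T, N = 2000, 30  # time steps, stocks (synthetic)

# One-factor toy returns: a common market mode plus idiosyncratic noise
market = rng.standard_normal(T)
returns = 0.5 * market[:, None] + rng.standard_normal((T, N))

C = np.corrcoef(returns.T)              # empirical correlation matrix
evals = np.linalg.eigvalsh(C)[::-1]     # eigenvalues, descending

# Marchenko-Pastur upper edge for pure noise with q = N/T: eigenvalues
# above it carry genuine structure (here, the market mode)
q = N / T
lam_max = (1 + np.sqrt(q)) ** 2
# evals[0] >> lam_max, while the rest of the spectrum stays in the bulk
```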

  16. Unsteady features of the flow on a bump in transonic environment

    NASA Astrophysics Data System (ADS)

    Budovsky, A. D.; Sidorenko, A. A.; Polivanov, P. A.; Vishnyakov, O. I.; Maslov, A. A.

    2016-10-01

    The study deals with experimental investigation of the unsteady features of separated flow on a profiled bump in a transonic environment. The experiments were conducted in the T-325 wind tunnel of ITAM for the following flow conditions: P0 = 1 bar, T0 = 291 K. The base flow around the model was studied by schlieren visualization, steady and unsteady wall pressure measurements, and PIV. The experimental data obtained using PIV were analyzed by the Proper Orthogonal Decomposition (POD) technique to investigate the underlying unsteady flow organization, as revealed by the POD eigenmodes. The data obtained show that the flow pulsations revealed upstream and downstream of the shock wave are correlated and interconnected.

  17. Adaptive wavelet collocation methods for initial value boundary problems of nonlinear PDE's

    NASA Technical Reports Server (NTRS)

    Cai, Wei; Wang, Jian-Zhong

    1993-01-01

    We have designed a cubic spline wavelet decomposition for the Sobolev space H(sup 2)(sub 0)(I) where I is a bounded interval. Based on a special 'point-wise orthogonality' of the wavelet basis functions, a fast Discrete Wavelet Transform (DWT) is constructed. This DWT transform will map discrete samples of a function to its wavelet expansion coefficients in O(N log N) operations. Using this transform, we propose a collocation method for the initial value boundary problem of nonlinear PDE's. Then, we test the efficiency of the DWT transform and apply the collocation method to solve linear and nonlinear PDE's.
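    A single-level Haar DWT, a much simpler stand-in for the cubic spline wavelet decomposition of the paper, shows the perfect-reconstruction mechanics of mapping samples to coarse and detail coefficients and back.

```python
import numpy as np

def haar_dwt(x):
    """Single-level Haar DWT: scaled pairwise sums (approximation)
    and differences (detail)."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_idwt(a, d):
    """Inverse single-level Haar DWT: exact reconstruction."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

x = np.array([4.0, 2.0, 5.0, 5.0])
a, d = haar_dwt(x)
x_rec = haar_idwt(a, d)   # equals x exactly
```

Applying the transform recursively to the approximation coefficients gives the full multilevel decomposition; each level costs O(N) work, so the complete transform is fast in the sense the abstract describes.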

  18. Young—Capelli symmetrizers in superalgebras†

    PubMed Central

    Brini, Andrea; Teolis, Antonio G. B.

    1989-01-01

    Let Supern[U ⊕ V] be the nth homogeneous subspace of the supersymmetric algebra of U ⊕ V, where U and V are Z2-graded vector spaces over a field K of characteristic zero. The actions of the general linear Lie superalgebras pl(U) and pl(V) span two finite-dimensional K-subalgebras of EndK(Supern[U ⊕ V]) that are the centralizers of each other. Young-Capelli symmetrizers and Young-Capelli *-symmetrizers give rise to K-linear bases of these two subalgebras containing orthogonal systems of idempotents; thus they yield complete decompositions of the two subalgebras into minimal left and right ideals, respectively. PMID:16594014

  19. Explicit treatment for Dirichlet, Neumann and Cauchy boundary conditions in POD-based reduction of groundwater models

    NASA Astrophysics Data System (ADS)

    Gosses, Moritz; Nowak, Wolfgang; Wöhling, Thomas

    2018-05-01

    In recent years, proper orthogonal decomposition (POD) has become a popular model reduction method in the field of groundwater modeling. It is used to mitigate the problem of long run times that are often associated with physically-based modeling of natural systems, especially for parameter estimation and uncertainty analysis. POD-based techniques reproduce groundwater head fields sufficiently accurate for a variety of applications. However, no study has investigated how POD techniques affect the accuracy of different boundary conditions found in groundwater models. We show that the current treatment of boundary conditions in POD causes inaccuracies for these boundaries in the reduced models. We provide an improved method that splits the POD projection space into a subspace orthogonal to the boundary conditions and a separate subspace that enforces the boundary conditions. To test the method for Dirichlet, Neumann and Cauchy boundary conditions, four simple transient 1D-groundwater models, as well as a more complex 3D model, are set up and reduced both by standard POD and POD with the new extension. We show that, in contrast to standard POD, the new method satisfies both Dirichlet and Neumann boundary conditions. It can also be applied to Cauchy boundaries, where the flux error of standard POD is reduced by its head-independent contribution. The extension essentially shifts the focus of the projection towards the boundary conditions. Therefore, we see a slight trade-off between errors at model boundaries and overall accuracy of the reduced model. The proposed POD extension is recommended where exact treatment of boundary conditions is required.
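    The subspace-splitting idea can be sketched in a few lines: remove the boundary-condition component from the snapshots before computing POD modes, then carry the boundary subspace separately so that reduced-space reconstructions satisfy the boundary exactly. The geometry below (a single Dirichlet node with random snapshots) is invented for illustration and is far simpler than the paper's groundwater models.

```python
import numpy as np

rng = np.random.default_rng(5)
N, K = 50, 40  # nodes, snapshots (synthetic)

# Head snapshots with a fixed Dirichlet value h = 1 at node 0
S = rng.random((N, K))
S[0, :] = 1.0

# Basis vector spanning the boundary subspace (indicator of the node)
g = np.zeros(N)
g[0] = 1.0

# Split: project snapshots off the boundary subspace, then compute POD
S_free = S - np.outer(g, g @ S)
Phi, sv, _ = np.linalg.svd(S_free, full_matrices=False)
basis = np.column_stack([g, Phi[:, :5]])   # boundary vector + 5 POD modes

# A reduced-space reconstruction reproduces the Dirichlet value exactly,
# because the POD modes vanish at the boundary node by construction
recon = basis @ (basis.T @ S[:, 0])
```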

  20. Regional Morphology Analysis Package (RMAP): Empirical Orthogonal Function Analysis, Background and Examples

    DTIC Science & Technology

    2007-10-01

    Regional Morphology Analysis Package (RMAP): Empirical Orthogonal Function Analysis, Background and Examples. ERDC TN-SWWRP-07-9, October 2007. Reference fragments: "Complex principal component analysis: Theory and examples," Journal of Climate and Applied Meteorology 23:1660-1673 (1984); Hotelling, H. (1933); Sediments 99, ASCE: 2,566-2,581; Von Storch, H., and A. Navarra (1995), Analysis of climate variability: Applications of statistical techniques, Berlin.

  1. Exploratory Bi-factor Analysis: The Oblique Case.

    PubMed

    Jennrich, Robert I; Bentler, Peter M

    2012-07-01

    Bi-factor analysis is a form of confirmatory factor analysis originally introduced by Holzinger and Swineford (Psychometrika 2:41-54, 1937). The bi-factor model has a general factor, a number of group factors, and an explicit bi-factor structure. Jennrich and Bentler (Psychometrika 76:537-549, 2011) introduced an exploratory form of bi-factor analysis that does not require one to provide an explicit bi-factor structure a priori. They use exploratory factor analysis and a bi-factor rotation criterion designed to produce a rotated loading matrix that has an approximate bi-factor structure. Among other things, this can be used as an aid in finding an explicit bi-factor structure for use in a confirmatory bi-factor analysis. They considered only orthogonal rotation. The purpose of this paper is to consider oblique rotation and to compare it to orthogonal rotation. Because there are many more oblique rotations of an initial loading matrix than orthogonal rotations, one expects the oblique results to approximate a bi-factor structure better than orthogonal rotations do, and this is indeed the case. A surprising result arises when oblique bi-factor rotation methods are applied to ideal data.

  2. Using Matrix and Tensor Factorizations for the Single-Trial Analysis of Population Spike Trains.

    PubMed

    Onken, Arno; Liu, Jian K; Karunasekara, P P Chamanthi R; Delis, Ioannis; Gollisch, Tim; Panzeri, Stefano

    2016-11-01

    Advances in neuronal recording techniques are leading to ever larger numbers of simultaneously monitored neurons. This poses the important analytical challenge of how to capture compactly all sensory information that neural population codes carry in their spatial dimension (differences in stimulus tuning across neurons at different locations), in their temporal dimension (temporal neural response variations), or in their combination (temporally coordinated neural population firing). Here we investigate the utility of tensor factorizations of population spike trains along space and time. These factorizations decompose a dataset of single-trial population spike trains into spatial firing patterns (combinations of neurons firing together), temporal firing patterns (temporal activation of these groups of neurons) and trial-dependent activation coefficients (strength of recruitment of such neural patterns on each trial). We validated various factorization methods on simulated data and on populations of ganglion cells simultaneously recorded in the salamander retina. We found that single-trial tensor space-by-time decompositions provided low-dimensional data-robust representations of spike trains that capture efficiently both their spatial and temporal information about sensory stimuli. Tensor decompositions with orthogonality constraints were the most efficient in extracting sensory information, whereas non-negative tensor decompositions worked well even on non-independent and overlapping spike patterns, and retrieved informative firing patterns expressed by the same population in response to novel stimuli. Our method showed that populations of retinal ganglion cells carried information in their spike timing on the ten-milliseconds-scale about spatial details of natural images. This information could not be recovered from the spike counts of these cells. 
First-spike latencies carried the majority of information provided by the whole spike train about fine-scale image features, and supplied almost as much information about coarse natural image features as firing rates. Together, these results highlight the importance of spike timing, and particularly of first-spike latencies, in retinal coding.
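    A rank-1 space-by-time factorization, the simplest instance of the tensor decompositions discussed above, can be sketched with an SVD of the trial-averaged matrix; the paper's non-negative and orthogonality-constrained variants are not implemented here, and the spike-count data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)
trials, neurons, bins = 50, 8, 20  # synthetic spike-count data

# One spatial and one temporal firing pattern, with trial-varying gain
spat = rng.random(neurons)
temp = np.exp(-(np.arange(bins) - 5.0) ** 2 / 4.0)
gain = rng.uniform(0.5, 1.5, trials)
X = gain[:, None, None] * spat[None, :, None] * temp[None, None, :]
X = X + 0.05 * rng.random((trials, neurons, bins))

# Rank-1 space-by-time factorization via SVD of the trial average
M = X.mean(axis=0)
U, sv, Vt = np.linalg.svd(M)
spat_est, temp_est = np.abs(U[:, 0]), np.abs(Vt[0])

# Per-trial activation coefficients: projection onto the learned pattern
coef = np.einsum('tnb,n,b->t', X, spat_est, temp_est)
corr = np.corrcoef(coef, gain)[0, 1]
# corr is close to 1: the coefficients recover the trial-dependent gains
```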

  3. Using Matrix and Tensor Factorizations for the Single-Trial Analysis of Population Spike Trains

    PubMed Central

    Onken, Arno; Liu, Jian K.; Karunasekara, P. P. Chamanthi R.; Delis, Ioannis; Gollisch, Tim; Panzeri, Stefano

    2016-01-01

    Advances in neuronal recording techniques are leading to ever larger numbers of simultaneously monitored neurons. This poses the important analytical challenge of how to capture compactly all sensory information that neural population codes carry in their spatial dimension (differences in stimulus tuning across neurons at different locations), in their temporal dimension (temporal neural response variations), or in their combination (temporally coordinated neural population firing). Here we investigate the utility of tensor factorizations of population spike trains along space and time. These factorizations decompose a dataset of single-trial population spike trains into spatial firing patterns (combinations of neurons firing together), temporal firing patterns (temporal activation of these groups of neurons) and trial-dependent activation coefficients (strength of recruitment of such neural patterns on each trial). We validated various factorization methods on simulated data and on populations of ganglion cells simultaneously recorded in the salamander retina. We found that single-trial tensor space-by-time decompositions provided low-dimensional data-robust representations of spike trains that capture efficiently both their spatial and temporal information about sensory stimuli. Tensor decompositions with orthogonality constraints were the most efficient in extracting sensory information, whereas non-negative tensor decompositions worked well even on non-independent and overlapping spike patterns, and retrieved informative firing patterns expressed by the same population in response to novel stimuli. Our method showed that populations of retinal ganglion cells carried information in their spike timing on the ten-milliseconds-scale about spatial details of natural images. This information could not be recovered from the spike counts of these cells. 
First-spike latencies carried the majority of information provided by the whole spike train about fine-scale image features, and supplied almost as much information about coarse natural image features as firing rates. Together, these results highlight the importance of spike timing, and particularly of first-spike latencies, in retinal coding. PMID:27814363

  4. Decadal period external magnetic field variations determined via eigenanalysis

    NASA Astrophysics Data System (ADS)

    Shore, R. M.; Whaler, K. A.; Macmillan, S.; Beggan, C.; Velímský, J.; Olsen, N.

    2016-06-01

    We perform a reanalysis of hourly mean magnetic data from ground-based observatories spanning 1997-2009 inclusive, in order to isolate (after removal of core and crustal field estimates) the spatiotemporal morphology of the external fields important to mantle induction, on (long) periods of months to a full solar cycle. Our analysis focuses on geomagnetically quiet days and middle to low latitudes. We use the climatological eigenanalysis technique called empirical orthogonal functions (EOFs), which allows us to identify discrete spatiotemporal patterns with no a priori specification of their geometry -- the form of the decomposition is controlled by the data. We apply a spherical harmonic analysis to the EOF outputs in a joint inversion for internal and external coefficients. The results justify our assumption that the EOF procedure responds primarily to the long-period external inducing field contributions. Though we cannot determine uniquely the contributory source regions of these inducing fields, we find that they have distinct temporal characteristics which enable some inference of sources. An identified annual-period pattern appears to stem from a north-south seasonal motion of the background mean external field distribution. Separate patterns of semiannual and solar-cycle-length periods appear to stem from the amplitude modulations of spatially fixed background fields.
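    EOF analysis itself reduces to an SVD of the mean-removed data matrix: left singular vectors give temporal principal components, right singular vectors give spatial patterns, and squared singular values give the variance captured by each mode. A minimal synthetic example, not the observatory data of the paper:

```python
import numpy as np

rng = np.random.default_rng(6)
T, P = 120, 40  # months, observation sites (synthetic)

# Hypothetical field: an annual-period spatial pattern plus noise
t = np.arange(T)
pattern = np.sin(np.linspace(0, np.pi, P))
field = (np.outer(np.sin(2 * np.pi * t / 12), pattern)
         + 0.1 * rng.standard_normal((T, P)))

# EOF analysis: SVD of the time-mean-removed data matrix
X = field - field.mean(axis=0)
U, sv, Vt = np.linalg.svd(X, full_matrices=False)
var_frac = sv**2 / np.sum(sv**2)   # variance captured per EOF mode
eof1 = Vt[0]                       # leading spatial pattern
# var_frac[0] dominates, and eof1 recovers the planted annual pattern
```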

  5. Synthesis, characterization and photocatalytic activity of neodymium carbonate and neodymium oxide nanoparticles

    NASA Astrophysics Data System (ADS)

    Pourmortazavi, Seied Mahdi; Rahimi-Nasrabadi, Mehdi; Aghazadeh, Mustafa; Ganjali, Mohammad Reza; Karimi, Meisam Sadeghpour; Norouzi, Parviz

    2017-12-01

    This work focuses on the application of an orthogonal array design to the optimization of a facile direct carbonization reaction for the synthesis of neodymium carbonate nanoparticles, where the product particles are prepared based on the direct precipitation of their ingredients. To optimize the method, the influences of the major operating conditions on the dimensions of the neodymium carbonate particles were quantitatively evaluated through the analysis of variance (ANOVA). It was observed that the crystals of the carbonate salt can be synthesized by controlling the neodymium concentration and flow rate, as well as the reactor temperature. Based on the results of ANOVA, 0.03 M, 2.5 mL min-1 and 30 °C are the optimum values for the above-mentioned parameters, and controlling the parameters at these values yields nanoparticles with sizes of about 31 ± 2 nm. The product of this former stage was next used as the feed for a thermal decomposition procedure yielding neodymium oxide nanoparticles. The products were studied through X-ray diffraction (XRD), SEM, TEM, FT-IR and thermal analysis techniques. In addition, the photocatalytic activities of the neodymium carbonate and neodymium oxide nanoparticles were investigated using the degradation of methyl orange (MO) under ultraviolet light.

  6. [Glossary of terms used by radiologists in image processing].

    PubMed

    Rolland, Y; Collorec, R; Bruno, A; Ramée, A; Morcet, N; Haigron, P

    1995-01-01

    We give the definition of 166 words used in image processing. Adaptivity, aliasing, analog-digital converter, analysis, approximation, arc, artifact, artificial intelligence, attribute, autocorrelation, bandwidth, boundary, brightness, calibration, class, classification, classify, centre, cluster, coding, color, compression, contrast, connectivity, convolution, correlation, data base, decision, decomposition, deconvolution, deduction, descriptor, detection, digitization, dilation, discontinuity, discretization, discrimination, disparity, display, distance, distortion, distribution, dynamic, edge, energy, enhancement, entropy, erosion, estimation, event, extrapolation, feature, file, filter, filter floaters, fitting, Fourier transform, frequency, fusion, fuzzy, Gaussian, gradient, graph, gray level, group, growing, histogram, Hough transform, Hounsfield, image, impulse response, inertia, intensity, interpolation, interpretation, invariance, isotropy, iterative, JPEG, knowledge base, label, Laplacian, learning, least squares, likelihood, matching, Markov field, mask, matching, mathematical morphology, merge (to), MIP, median, minimization, model, moiré, moment, MPEG, neural network, neuron, node, noise, norm, normal, operator, optical system, optimization, orthogonal, parametric, pattern recognition, periodicity, photometry, pixel, polygon, polynomial, prediction, pulsation, pyramidal, quantization, raster, reconstruction, recursive, region, rendering, representation space, resolution, restoration, robustness, ROC, thinning, transform, sampling, saturation, scene analysis, segmentation, separable function, sequential, smoothing, spline, split (to), shape, threshold, tree, signal, speckle, spectrum, spline, stationarity, statistical, stochastic, structuring element, support, syntactic, synthesis, texture, truncation, variance, vision, voxel, windowing.

  7. Orthogonal Multi-Carrier DS-CDMA with Frequency-Domain Equalization

    NASA Astrophysics Data System (ADS)

    Tanaka, Ken; Tomeba, Hiromichi; Adachi, Fumiyuki

    Orthogonal multi-carrier direct sequence code division multiple access (orthogonal MC DS-CDMA) is a combination of orthogonal frequency division multiplexing (OFDM) and time-domain spreading, while multi-carrier code division multiple access (MC-CDMA) is a combination of OFDM and frequency-domain spreading. In MC-CDMA, a good bit error rate (BER) performance can be achieved by using frequency-domain equalization (FDE), since the frequency diversity gain is obtained. On the other hand, the conventional orthogonal MC DS-CDMA fails to achieve any frequency diversity gain. In this paper, we propose a new orthogonal MC DS-CDMA that can obtain the frequency diversity gain by applying FDE. The conditional BER analysis is presented. The theoretical average BER performance in a frequency-selective Rayleigh fading channel is evaluated by the Monte-Carlo numerical computation method using the derived conditional BER and is confirmed by computer simulation of the orthogonal MC DS-CDMA signal transmission.

  8. Self-consistent asset pricing models

    NASA Astrophysics Data System (ADS)

    Malevergne, Y.; Sornette, D.

    2007-08-01

    We discuss the foundations of factor or regression models in the light of the self-consistency condition that the market portfolio (and more generally the risk factors) is (are) constituted of the assets whose returns it is (they are) supposed to explain. As already reported in several articles, self-consistency implies correlations between the return disturbances. As a consequence, the alphas and betas of the factor model are unobservable. Self-consistency leads to renormalized betas with zero effective alphas, which are observable with standard OLS regressions. When the conditions derived from internal consistency are not met, the model is necessarily incomplete, which means that some sources of risk cannot be replicated (or hedged) by a portfolio of stocks traded on the market, even for infinite economies. Analytical derivations and numerical simulations show that, for arbitrary choices of the proxy which are different from the true market portfolio, a modified linear regression holds with a non-zero value αi at the origin between an asset i's return and the proxy's return. Self-consistency also introduces “orthogonality” and “normality” conditions linking the betas, alphas (as well as the residuals) and the weights of the proxy portfolio. Two diagnostics based on these orthogonality and normality conditions are implemented on a basket of 323 assets which have been components of the S&P500 in the period from January 1990 to February 2005. These two diagnostics show interesting departures from dynamical self-consistency starting about 2 years before the end of the Internet bubble. Assuming that the CAPM holds with the self-consistency condition, the OLS method automatically obeys the resulting orthogonality and normality conditions and therefore provides a simple way to self-consistently assess the parameters of the model by using proxy portfolios made only of the assets which are used in the CAPM regressions. 
Finally, the factor decomposition with the self-consistency condition derives a risk-factor decomposition in the multi-factor case which is identical to the principal component analysis (PCA), thus providing a direct link between model-driven and data-driven constructions of risk factors. This correspondence shows that PCA will therefore suffer from the same limitations as the CAPM and its multi-factor generalization, namely lack of out-of-sample explanatory power and predictability. In the multi-period context, the self-consistency conditions force the betas to be time-dependent with specific constraints.

  9. Dynamics of flow control in an emulated boundary layer-ingesting offset diffuser

    NASA Astrophysics Data System (ADS)

    Gissen, A. N.; Vukasinovic, B.; Glezer, A.

    2014-08-01

    The dynamics of flow control comprising arrays of active (synthetic jets) and passive (vanes) control elements, and its effectiveness for suppressing total-pressure distortion, are investigated experimentally in an offset diffuser in the absence of internal flow separation. The experiments are conducted in a wind tunnel inlet model at speeds up to M = 0.55 using approach flow conditioning that mimics boundary layer ingestion on a Blended-Wing-Body platform. Time-dependent distortion of the dynamic total-pressure field at the `engine face' is measured using an array of forty total-pressure probes, and the control-induced distortion changes are analyzed using triple decomposition and proper orthogonal decomposition (POD). These data indicate that the small-scale synthetic jet vortices of the flow control array merge into two large-scale, counter-rotating streamwise vortices that effect significant changes in the flow distortion. The two most energetic POD modes appear to govern the distortion dynamics in either active or hybrid flow control approaches. Finally, it is shown that the present control approach is sufficiently robust to reduce distortion under different baseline inlet flow conditions.

  10. Assessment of swirl spray interaction in lab scale combustor using time-resolved measurements

    NASA Astrophysics Data System (ADS)

    Rajamanickam, Kuppuraj; Jain, Manish; Basu, Saptarshi

    2017-11-01

    Liquid fuel injection into highly turbulent swirling flows has become common practice in gas turbine combustors to improve flame stabilization. It is well known that the vortex bubble breakdown (VBB) phenomenon in strong swirling jets exhibits complicated flow structures in the spatial domain. In this study, the interaction of a hollow cone liquid sheet with such a coaxial swirling flow field has been studied experimentally using time-resolved measurements. In particular, much attention is focused on the near-field breakup mechanism (i.e. primary atomization) of the liquid sheet. The detailed characterization of the swirling gas flow field is carried out using time-resolved PIV (3.5 kHz). Furthermore, the complicated breakup mechanisms and interaction of the liquid sheet are imaged with the help of a high-speed shadow imaging system. Subsequently, proper orthogonal decomposition (POD) and dynamic mode decomposition (DMD) are implemented on the instantaneous data sets to retrieve the modal information associated with the interaction dynamics. This helps to delineate the quantitative nature of the interaction process between the liquid sheet and the swirling gas phase flow field.

  11. An Application of Rotation- and Translation-Invariant Overcomplete Wavelets to the registration of Remotely Sensed Imagery

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline; Zavorine, Ilya

    1999-01-01

    A wavelet-based image registration approach has previously been proposed by the authors. In this work, wavelet coefficient maxima obtained from an orthogonal wavelet decomposition using Daubechies filters were utilized to register images in a multi-resolution fashion. Tested on several remote sensing datasets, this method gave very encouraging results. Despite the lack of translation-invariance of these filters, we showed that when using cross-correlation as a feature matching technique, features of size larger than twice the size of the filters are correctly registered by using the low-frequency subbands of the Daubechies wavelet decomposition. Nevertheless, high-frequency subbands are still sensitive to translation effects. In this work, we are considering a rotation- and translation-invariant representation developed by E. Simoncelli and integrate it in our image registration scheme. The two types of filters, Daubechies and Simoncelli filters, are then being compared from a registration point of view, utilizing synthetic data as well as data from the Landsat/ Thematic Mapper (TM) and from the NOAA Advanced Very High Resolution Radiometer (AVHRR).

  13. A low dimensional dynamical system for the wall layer

    NASA Technical Reports Server (NTRS)

    Aubry, N.; Keefe, L. R.

    1987-01-01

    Low dimensional dynamical systems which model a fully developed turbulent wall layer were derived. The model is based on the optimally fast convergent proper orthogonal decomposition, or Karhunen-Loeve expansion. This decomposition provides a set of eigenfunctions which are derived from the autocorrelation tensor at zero time lag. Via Galerkin projection, low dimensional sets of ordinary differential equations in time, for the coefficients of the expansion, were derived from the Navier-Stokes equations. The energy loss to the unresolved modes was modeled by an eddy viscosity representation, analogous to Heisenberg's spectral model. A set of eigenfunctions and eigenvalues was obtained from direct numerical simulation of a plane channel flow at a Reynolds number of 6600, based on the mean centerline velocity and the channel width, and compared with previous work by Herzog. Using the new eigenvalues and eigenfunctions, a new ten-dimensional set of ordinary differential equations was derived using five non-zero cross-stream Fourier modes with a periodic length of 377 wall units. The dynamical system was integrated for a range of the eddy viscosity parameter alpha. The results of this work are encouraging.
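As a minimal illustration of the decomposition step (not the channel-flow computation; the snapshot matrix below is synthetic), snapshot POD reduces to an SVD followed by a Galerkin-style projection onto the leading modes:

```python
import numpy as np

# Hypothetical snapshot-POD sketch: columns of X are flow snapshots;
# the POD (Karhunen-Loeve) basis is given by the left singular vectors,
# and a low-order model retains the r most energetic modes.
rng = np.random.default_rng(1)
n_grid, n_snap, r = 200, 60, 5
X = rng.standard_normal((n_grid, 3)) @ rng.standard_normal((3, n_snap))
X += 0.01 * rng.standard_normal((n_grid, n_snap))    # small-scale "noise"

U, s, Vt = np.linalg.svd(X, full_matrices=False)
modes = U[:, :r]                  # orthonormal POD basis
coeffs = modes.T @ X              # modal expansion coefficients
X_r = modes @ coeffs              # rank-r reconstruction
err = np.linalg.norm(X - X_r) / np.linalg.norm(X)
print(f"relative reconstruction error with {r} modes: {err:.2e}")
```

In a reduced-order model, the Navier-Stokes equations are then projected onto `modes` to obtain ordinary differential equations for `coeffs` in time; the sketch above shows only the basis construction.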

  14. Modal Structures in flow past a cylinder

    NASA Astrophysics Data System (ADS)

    Murshed, Mohammad

    2017-11-01

    With the advent of data, there have been opportunities to apply formalism to detect patterns or simple relations. For instance, a phenomenon can be defined through a partial differential equation, which may not be immediately useful, whereas a formula for the evolution of a primary variable may be interpreted quite easily. Access to data alone is not enough, since the required linear algebra can strain the available computational resources. A canonical problem in the field of aerodynamics is the transient flow past a cylinder, where the viscosity can be adjusted to set the Reynolds number (Re). We observe the effect of the critical Re on certain modes of behavior over time. A 2D velocity field serves as the input for analyzing the modal structure of the flow using the proper orthogonal decomposition and Koopman mode/dynamic mode decomposition. This enables prediction of the solution further in time (taking into account the dependence on Re) and helps us evaluate and discuss the associated error in the mechanism.
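A compact sketch of the dynamic mode decomposition step mentioned above, applied to synthetic traveling-wave snapshots rather than the cylinder flow (the grid, time step, and frequencies are assumptions):

```python
import numpy as np

# Illustrative exact-DMD sketch: snapshot pairs X -> Y define a best-fit
# linear operator whose eigenvalues encode oscillation frequencies.
n, m, dt = 100, 80, 0.1
x_grid = np.linspace(0.0, 2.0 * np.pi, n)
t = np.arange(m) * dt
# superpose two traveling waves with known angular frequencies
data = (np.outer(np.sin(x_grid), np.cos(2.3 * t))
        + np.outer(np.cos(x_grid), np.sin(2.3 * t))
        + 0.5 * np.outer(np.sin(2.0 * x_grid), np.cos(5.1 * t))
        + 0.5 * np.outer(np.cos(2.0 * x_grid), np.sin(5.1 * t)))

X, Y = data[:, :-1], data[:, 1:]          # snapshot pairs x_k -> x_{k+1}
U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = 4                                      # truncation rank (exact here)
Ur, sr, Vr = U[:, :r], s[:r], Vt[:r].T
A_tilde = Ur.conj().T @ Y @ Vr / sr        # reduced linear operator
eigvals, W = np.linalg.eig(A_tilde)
dmd_modes = (Y @ Vr / sr) @ W              # exact DMD modes
freqs = np.log(eigvals).imag / dt          # continuous-time frequencies
print(np.sort(np.abs(freqs)))
```

Because the synthetic dynamics are exactly linear in a four-dimensional subspace, the recovered frequencies match the prescribed 2.3 and 5.1 rad per unit time (each appearing as a conjugate pair).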

  15. Time-frequency analysis : mathematical analysis of the empirical mode decomposition.

    DOT National Transportation Integrated Search

    2009-01-01

    Invented over 10 years ago, empirical mode decomposition (EMD) provides a nonlinear time-frequency analysis with the ability to successfully analyze nonstationary signals. Mathematical Analysis of the Empirical Mode Decomposition is a...

  16. On the equivalence of dynamically orthogonal and bi-orthogonal methods: Theory and numerical simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Minseok; Sapsis, Themistoklis P.; Karniadakis, George Em, E-mail: george_karniadakis@brown.edu

    2014-08-01

    The Karhunen–Loève (KL) decomposition provides a low-dimensional representation for random fields as it is optimal in the mean square sense. Although for many stochastic systems of practical interest, described by stochastic partial differential equations (SPDEs), solutions possess this low-dimensional character, they also have a strongly time-dependent form, and to this end a fixed-in-time basis may not describe the solution in an efficient way. Motivated by this limitation of the standard KL expansion, Sapsis and Lermusiaux (2009) [26] developed the dynamically orthogonal (DO) field equations, which allow for the simultaneous evolution of both the spatial basis where uncertainty ‘lives’ and the stochastic characteristics of uncertainty. Recently, Cheng et al. (2013) [28] introduced an alternative approach, the bi-orthogonal (BO) method, which performs the exact same tasks, i.e. it evolves the spatial basis and the stochastic characteristics of uncertainty. In the current work we examine the relation of the two approaches and we prove theoretically and illustrate numerically their equivalence, in the sense that one method is an exact reformulation of the other. We show this by deriving a linear and invertible transformation matrix, described by a matrix differential equation, that connects the BO and the DO solutions. We also examine a pathology of the BO equations that occurs when two eigenvalues of the solution cross, resulting in an instantaneous, infinite-speed, internal rotation of the computed spatial basis. We demonstrate that despite the instantaneous duration of the singularity this has important implications for the numerical performance of the BO approach. On the other hand, it is observed that the BO is more stable in nonlinear problems involving a relatively large number of modes. Several examples, linear and nonlinear, are presented to illustrate the DO and BO methods as well as their equivalence.

  17. Greedy algorithms for diffuse optical tomography reconstruction

    NASA Astrophysics Data System (ADS)

    Dileep, B. P. V.; Das, Tapan; Dutta, Pranab K.

    2018-03-01

    Diffuse optical tomography (DOT) is a noninvasive imaging modality that reconstructs the optical parameters of a highly scattering medium. However, the inverse problem of DOT is ill-posed and highly nonlinear due to the zig-zag propagation of photons that diffuse through the cross section of tissue. Conventional DOT imaging methods iteratively compute the solution of a forward diffusion equation solver, which makes the problem computationally expensive. Also, these methods fail when the geometry is complex. Recently, the theory of compressive sensing (CS) has received considerable attention because of its efficient use in biomedical imaging applications. The objective of this paper is to solve a given DOT inverse problem within a compressive sensing framework; various greedy algorithms, namely orthogonal matching pursuit (OMP), compressive sampling matching pursuit (CoSaMP), stagewise orthogonal matching pursuit (StOMP), regularized orthogonal matching pursuit (ROMP) and simultaneous orthogonal matching pursuit (S-OMP), have been studied to reconstruct the change in the absorption parameter, i.e., Δα, from the boundary data. The greedy algorithms have also been validated experimentally on a paraffin wax rectangular phantom through a well designed experimental set up. We have also studied conventional DOT methods, such as the least squares method and truncated singular value decomposition (TSVD), for comparison. One of the main features of this work is the use of fewer source-detector pairs, which can facilitate the use of DOT in routine screening applications. Performance metrics such as mean square error (MSE), normalized mean square error (NMSE), structural similarity index (SSIM), and peak signal to noise ratio (PSNR) have been used to evaluate the performance of the algorithms mentioned in this paper. 
Extensive simulation results confirm that CS based DOT reconstruction outperforms the conventional DOT imaging methods in terms of computational efficiency. The main advantage of this study is that the forward diffusion equation solver need not be repeatedly solved.
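As an illustration of the greedy family studied above, here is a minimal OMP implementation on a generic sparse-recovery problem y = A x (a toy stand-in; it does not model the DOT forward problem):

```python
import numpy as np

def omp(A, y, k):
    """Greedy OMP sketch: pick the column most correlated with the
    residual, then re-fit by least squares on the selected support."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# toy sparse-recovery problem with a random Gaussian sensing matrix
rng = np.random.default_rng(3)
m, n, k = 60, 200, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = omp(A, A @ x_true, k)
print("max recovery error:", np.max(np.abs(x_hat - x_true)))
```

With noiseless measurements and a well-conditioned random matrix, the k-sparse signal is recovered essentially exactly; CoSaMP, StOMP, ROMP and S-OMP vary this greedy selection-and-refit loop.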

  18. Spatio-Temporal Evolutions of Non-Orthogonal Equatorial Wave Modes Derived from Observations

    NASA Astrophysics Data System (ADS)

    Barton, C.; Cai, M.

    2015-12-01

    Equatorial waves have been studied extensively due to their importance to the tropical climate and weather systems. Historically, their activity is diagnosed mainly in the wavenumber-frequency domain. Recently, many studies have projected observational data onto parabolic cylinder functions (PCF), which represent the meridional structure of individual wave modes, to attain time-dependent spatial wave structures. In this study, we propose a methodology that seeks to identify individual wave modes in instantaneous fields of observations by determining their projections on PCF modes according to the equatorial wave theory. The new method has the benefit of yielding a closed system with a unique solution for all waves' spatial structures, including IG waves, for a given instantaneous observed field. We have applied our method to the ERA-Interim reanalysis dataset in the tropical stratosphere where the wave-mean flow interaction mechanism for the quasi-biennial oscillation (QBO) is well-understood. We have confirmed the continuous evolution of the selection mechanism for equatorial waves in the stratosphere from observations as predicted by the theory for the QBO. This also validates the proposed method for decomposition of observed tropical wave fields into non-orthogonal equatorial wave modes.
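The meridional basis referred to above can be sketched as Hermite-type parabolic cylinder functions; the snippet below (the grid and normalization are assumptions, not the paper's setup) builds the first few modes and checks the orthonormality that makes projections onto them well defined:

```python
import numpy as np

# Illustrative meridional structures psi_n(y) = H_n(y) exp(-y^2/2),
# with H_n the Hermite polynomials built by recurrence.
y = np.linspace(-8.0, 8.0, 4001)   # nondimensional meridional coordinate
dy = y[1] - y[0]

def meridional_mode(n):
    """Unit-normalized Hermite-type parabolic cylinder function."""
    H = [np.ones_like(y), 2.0 * y]
    for k in range(1, n):
        H.append(2.0 * y * H[k] - 2.0 * k * H[k - 1])
    psi = H[n] * np.exp(-y**2 / 2.0)
    return psi / np.sqrt(np.sum(psi**2) * dy)

modes = np.array([meridional_mode(n) for n in range(4)])
gram = modes @ modes.T * dy         # pairwise inner products
print(np.round(gram, 4))            # close to the identity matrix
```

Projecting an observed equatorial field onto these meridional structures yields the time-dependent amplitude of each wave mode, which is the starting point of the decomposition discussed above.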

  19. Quantitative tissue polarimetry using polar decomposition of 3 x 3 Mueller matrix

    NASA Astrophysics Data System (ADS)

    Swami, M. K.; Manhas, S.; Buddhiwant, P.; Ghosh, N.; Uppal, A.; Gupta, P. K.

    2007-05-01

    The polarization properties of any optical system are completely described by a sixteen-element (4 x 4) matrix called the Mueller matrix, which transforms the Stokes vector describing the polarization of the incident light into the Stokes vector of the scattered light. Measurement of all the elements of the matrix requires a minimum of sixteen measurements involving both linearly and circularly polarized light. However, for many diagnostic applications, it would be useful if all the polarization parameters of the medium (depolarization (Δ), differential attenuation of two orthogonal polarizations, i.e., diattenuation (d), and differential phase retardance of two orthogonal polarizations, i.e., retardance (δ)) could be quantified with linear polarization measurements alone. In this paper we show that for a turbid medium, like biological tissue, where the depolarization of linearly polarized light arises primarily from the randomization of the field vector's direction by multiple scattering, the polarization parameters of the medium can be obtained from the nine Mueller matrix elements involving linear polarization measurements only. Use of the approach for measurement of the polarization parameters (Δ, d and δ) of normal and malignant (squamous cell carcinoma) tissues resected from the human oral cavity is presented.

  20. John Leask Lumley: Whither Turbulence?

    NASA Astrophysics Data System (ADS)

    Leibovich, Sidney; Warhaft, Zellman

    2018-01-01

    John Lumley's contributions to the theory, modeling, and experiments on turbulent flows played a seminal role in the advancement of our understanding of this subject in the second half of the twentieth century. We discuss John's career and his personal style, including his love and deep knowledge of vintage wine and vintage cars. His intellectual contributions range from abstract theory to applied engineering. Here we discuss some of his major advances, focusing on second-order modeling, proper orthogonal decomposition, path-breaking experiments, research on geophysical turbulence, and important contributions to the understanding of drag reduction. John Lumley was also an influential teacher whose books and films have molded generations of students. These and other aspects of his professional career are described.

  1. Reduced-order modelling of parameter-dependent, linear and nonlinear dynamic partial differential equation models.

    PubMed

    Shah, A A; Xing, W W; Triantafyllidis, V

    2017-04-01

    In this paper, we develop reduced-order models for dynamic, parameter-dependent, linear and nonlinear partial differential equations using proper orthogonal decomposition (POD). The main challenges are to accurately and efficiently approximate the POD bases for new parameter values and, in the case of nonlinear problems, to efficiently handle the nonlinear terms. We use a Bayesian nonlinear regression approach to learn the snapshots of the solutions and the nonlinearities for new parameter values. Computational efficiency is ensured by using manifold learning to perform the emulation in a low-dimensional space. The accuracy of the method is demonstrated on a linear and a nonlinear example, with comparisons with a global basis approach.

  2. Reduced-order modelling of parameter-dependent, linear and nonlinear dynamic partial differential equation models

    PubMed Central

    Xing, W. W.; Triantafyllidis, V.

    2017-01-01

    In this paper, we develop reduced-order models for dynamic, parameter-dependent, linear and nonlinear partial differential equations using proper orthogonal decomposition (POD). The main challenges are to accurately and efficiently approximate the POD bases for new parameter values and, in the case of nonlinear problems, to efficiently handle the nonlinear terms. We use a Bayesian nonlinear regression approach to learn the snapshots of the solutions and the nonlinearities for new parameter values. Computational efficiency is ensured by using manifold learning to perform the emulation in a low-dimensional space. The accuracy of the method is demonstrated on a linear and a nonlinear example, with comparisons with a global basis approach. PMID:28484327

  3. Spatially coupled catalytic ignition of CO oxidation on Pt: mesoscopic versus nano-scale

    PubMed Central

    Spiel, C.; Vogel, D.; Schlögl, R.; Rupprechter, G.; Suchorski, Y.

    2015-01-01

    Spatial coupling during catalytic ignition of CO oxidation on μm-sized Pt(hkl) domains of a polycrystalline Pt foil has been studied in situ by PEEM (photoemission electron microscopy) in the 10−5 mbar pressure range. The same reaction has been examined under similar conditions by FIM (field ion microscopy) on nm-sized Pt(hkl) facets of a Pt nanotip. Proper orthogonal decomposition (POD) of the digitized FIM images has been employed to analyze spatiotemporal dynamics of catalytic ignition. The results show the essential role of the sample size and of the morphology of the domain (facet) boundary in the spatial coupling in CO oxidation. PMID:26021411

  4. Comparison of common components analysis with principal components analysis and independent components analysis: Application to SPME-GC-MS volatolomic signatures.

    PubMed

    Bouhlel, Jihéne; Jouan-Rimbaud Bouveresse, Delphine; Abouelkaram, Said; Baéza, Elisabeth; Jondreville, Catherine; Travel, Angélique; Ratel, Jérémy; Engel, Erwan; Rutledge, Douglas N

    2018-02-01

    The aim of this work is to compare a novel exploratory chemometrics method, Common Components Analysis (CCA), with Principal Components Analysis (PCA) and Independent Components Analysis (ICA). CCA consists of adapting the multi-block statistical method known as Common Components and Specific Weights Analysis (CCSWA or ComDim) by applying it to a single data matrix, with one variable per block. As an application, the three methods were applied to SPME-GC-MS volatolomic signatures of livers in an attempt to reveal volatile organic compound (VOC) markers of chicken exposure to different types of micropollutants. An application of CCA to the initial SPME-GC-MS data revealed a drift in the sample Scores along CC2, as a function of injection order, probably resulting from time-related evolution in the instrument. This drift was eliminated by orthogonalization of the data set with respect to CC2, and the resulting data are used as the orthogonalized data input into each of the three methods. Since the first step in CCA is to norm-scale all the variables, preliminary data scaling has no effect on the results, so that CCA was applied only to orthogonalized SPME-GC-MS data, while PCA and ICA were applied to the "orthogonalized", "orthogonalized and Pareto-scaled", and "orthogonalized and autoscaled" data. The comparison showed that PCA results were highly dependent on the scaling of variables, contrary to ICA where the data scaling did not have a strong influence. Nevertheless, for both PCA and ICA the clearest separations of exposed groups were obtained after autoscaling of variables. The main part of this work was to compare the CCA results using the orthogonalized data with those obtained with PCA and ICA applied to orthogonalized and autoscaled variables. The clearest separations of exposed chicken groups were obtained by CCA. CCA Loadings also clearly identified the variables contributing most to the Common Components giving separations. 
The PCA Loadings did not highlight the most influential variables for each separation, whereas the ICA Loadings highlighted the same variables as did CCA. This study shows the potential of CCA for the extraction of pertinent information from a data matrix, using a procedure based on an original optimisation criterion, to produce results that are complementary, and in some cases may be superior, to those of PCA and ICA. Copyright © 2017 Elsevier B.V. All rights reserved.
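The scaling sensitivity of PCA noted above can be reproduced in a small synthetic sketch (two correlated low-variance "marker" variables plus one high-variance nuisance variable; none of this models the SPME-GC-MS data):

```python
import numpy as np

# Illustrative sketch: PCA on raw versus autoscaled variables.
rng = np.random.default_rng(4)
n = 300
latent = rng.standard_normal(n)
marker1 = latent + 0.3 * rng.standard_normal(n)   # correlated, variance ~1
marker2 = latent + 0.3 * rng.standard_normal(n)
nuisance = 100.0 * rng.standard_normal(n)         # dominates raw variance
X = np.column_stack([marker1, marker2, nuisance])

def first_loading(M):
    """Loading vector of the first principal component (via SVD)."""
    M = M - M.mean(axis=0)
    _, _, Vt = np.linalg.svd(M, full_matrices=False)
    return Vt[0]

pc1_raw = first_loading(X)                        # nuisance dominates
X_auto = (X - X.mean(axis=0)) / X.std(axis=0)     # autoscaling
pc1_auto = first_loading(X_auto)                  # markers now dominate
print("raw loading:       ", np.round(np.abs(pc1_raw), 3))
print("autoscaled loading:", np.round(np.abs(pc1_auto), 3))
```

Without scaling, the first loading vector is essentially the high-variance nuisance axis; after autoscaling, the correlated pair carries the leading component, mirroring why the clearest group separations above were obtained on autoscaled variables.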

  5. Attitudinal Changes of the Student Teacher--A Further Analysis. An Example of an Orthogonal Comparisons Analysis Model Applied to Educational Research.

    ERIC Educational Resources Information Center

    Courtney, E. Wayne

    This report was designed to present an example of a research study involving the use of coefficients of orthogonal comparisons in analysis of variance tests of significance. A sample research report and analysis was included so as to lead the reader through the design steps. The sample study was designed to determine the extent of attitudinal…

  6. New insight in quantitative analysis of vascular permeability during immune reaction (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Kalchenko, Vyacheslav; Molodij, Guillaume; Kuznetsov, Yuri; Smolyakov, Yuri; Israeli, David; Meglinski, Igor; Harmelin, Alon

    2016-03-01

    The use of fluorescence imaging of vascular permeability has become a gold standard for assessing the inflammation process during experimental immune responses in vivo. Optical fluorescence imaging provides a very useful and simple tool for this purpose. The motivation comes from the need for robust and simple quantification and data presentation of inflammation based on vascular permeability. The change in fluorescence intensity as a function of time is a widely accepted measure of vascular permeability during inflammation related to the immune response. In the present study we propose to bring a new dimension by applying a more sophisticated approach to the analysis of the vascular reaction, using a quantitative analysis based on methods derived from astronomical observations, in particular a space-time Fourier filtering analysis followed by a polynomial orthogonal mode decomposition. We demonstrate that the temporal evolution of the fluorescence intensity observed at certain pixels correlates quantitatively with blood flow circulation under normal conditions. The approach allows determination of the regions of permeability and monitoring of both the fast kinetics related to the distribution of the contrast material in the circulatory system and the slow kinetics associated with its extravasation. Thus, we introduce a simple and convenient method for fast quantitative visualization of the leakage related to the inflammatory (immune) reaction in vivo.

  7. Computation of ancestry scores with mixed families and unrelated individuals.

    PubMed

    Zhou, Yi-Hui; Marron, James S; Wright, Fred A

    2018-03-01

    The issue of robustness to family relationships in computing genotype ancestry scores such as eigenvector projections has received increased attention in genetic association, and is particularly challenging when sets of both unrelated individuals and closely related family members are included. The current standard is to compute loadings (left singular vectors) using unrelated individuals and to compute projected scores for remaining family members. However, projected ancestry scores from this approach suffer from shrinkage toward zero. We consider two main novel strategies: (i) matrix substitution based on decomposition of a target family-orthogonalized covariance matrix, and (ii) using family-averaged data to obtain loadings. We illustrate the performance via simulations, including resampling from 1000 Genomes Project data, and analysis of a cystic fibrosis dataset. The matrix substitution approach has similar performance to the current standard, but is simple and uses only a genotype covariance matrix, while the family-average method shows superior performance. Our approaches are accompanied by novel ancillary approaches that provide considerable insight, including individual-specific eigenvalue scree plots. © 2017 The Authors. Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.

  8. Mechanical Characterization of Polysilicon MEMS: A Hybrid TMCMC/POD-Kriging Approach.

    PubMed

    Mirzazadeh, Ramin; Eftekhar Azam, Saeed; Mariani, Stefano

    2018-04-17

    Microscale uncertainties related to the geometry and morphology of polycrystalline silicon films, constituting the movable structures of micro electro-mechanical systems (MEMS), were investigated through a joint numerical/experimental approach. An on-chip testing device was designed and fabricated to deform a compliant polysilicon beam. In previous studies, we showed that the scattering in the input–output characteristics of the device can be properly described only if statistical features related to the morphology of the columnar polysilicon film and to the etching process adopted to release the movable structure are taken into account. In this work, a high fidelity finite element model of the device was used to feed a transitional Markov chain Monte Carlo (TMCMC) algorithm for the estimation of the unknown parameters governing the aforementioned statistical features. To reduce the computational cost of the stochastic analysis, a synergy of proper orthogonal decomposition (POD) and kriging interpolation was adopted. Results are reported for a batch of nominally identical tested devices, in terms of measurement error-affected probability distributions of the overall Young’s modulus of the polysilicon film and of the overetch depth.

  9. Three Dimensional Plenoptic PIV Measurements of a Turbulent Boundary Layer Overlying a Hemispherical Roughness Element

    NASA Astrophysics Data System (ADS)

    Johnson, Kyle; Thurow, Brian; Kim, Taehoon; Blois, Gianluca; Christensen, Kenneth

    2016-11-01

    Three-dimensional, three-component (3D-3C) measurements were made using a plenoptic camera on the flow around a roughness element immersed in a turbulent boundary layer. A refractive-index-matched approach allowed whole-field optical access from a single camera to a measurement volume that includes transparent solid geometries. In particular, this experiment measures the flow over a single hemispherical roughness element made of acrylic and immersed in a working fluid consisting of sodium iodide solution. Our results demonstrate that plenoptic particle image velocimetry (PIV) is a viable technique for obtaining statistically significant volumetric velocity measurements even in a complex separated flow. The ratio of boundary layer thickness to roughness height was 4.97 and the Reynolds number (based on roughness height) was 4.57×10^3. Our measurements reveal key flow features such as the spiraling legs of the shear layer, a recirculation region, and shed arch vortices. Proper orthogonal decomposition (POD) analysis was applied to the instantaneous velocity and vorticity data to extract these features. Supported by the National Science Foundation Grant No. 1235726.
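    A minimal sketch of the snapshot POD applied to such velocity data, with synthetic stand-in snapshots (the real analysis operates on PIV vector fields):

```python
import numpy as np

# Snapshot POD: stack fluctuating snapshots as columns, take an SVD,
# and keep the most energetic modes. Data here are synthetic stand-ins.
rng = np.random.default_rng(2)
n_points, n_snapshots = 2000, 60
snapshots = rng.standard_normal((n_points, n_snapshots))

# Subtract the temporal mean so POD acts on the fluctuating field.
fluct = snapshots - snapshots.mean(axis=1, keepdims=True)

Phi, sigma, Vt = np.linalg.svd(fluct, full_matrices=False)
energy = sigma**2 / np.sum(sigma**2)     # modal energy fractions

# Rank-r reconstruction from the leading POD modes.
r = 10
recon = Phi[:, :r] @ np.diag(sigma[:r]) @ Vt[:r]
residual = np.linalg.norm(fluct - recon) / np.linalg.norm(fluct)
```

For coherent flow data the leading few modes capture most of the energy, which is what lets POD isolate features like the shear-layer legs and arch vortices.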

  10. Plasma-Surface Interactions and RF Antennas

    NASA Astrophysics Data System (ADS)

    Jenkins, Thomas; Smithe, D. N.; Beckwith, K.; Davidson, B. D.; Kruger, S. E.; Pankin, A. Y.; Roark, C. M.

    2015-11-01

    Implementation of recently developed finite-difference time-domain (FDTD) modeling techniques on high-performance computing platforms allows RF power flow, and antenna near- and far-field behavior, to be studied in realistic experimental ion-cyclotron resonance heating scenarios at previously inaccessible levels of resolution. We present results and 3D animations of high-performance (10k-100k core) FDTD simulations of Alcator C-Mod's field-aligned ICRF antenna on the Titan supercomputer, considering (a) the physics of slow wave excitation in the immediate vicinity of the antenna hardware and in the scrape-off layer for various edge densities, and (b) sputtering and impurity production, as driven by self-consistent sheath potentials at antenna surfaces. Related research efforts in low-temperature plasma modeling, including the use of proper orthogonal decomposition methods for PIC/fluid modeling and the development of plasma chemistry tools (e.g. a robust and flexible reaction database, principal path reduction analysis capabilities, and improved visualization options), will also be summarized. Supported by U.S. DoE SBIR Phase I/II Award DE-SC0009501 and ALCC/OLCF.

  11. Simple techniques for improving deep neural network outcomes on commodity hardware

    NASA Astrophysics Data System (ADS)

    Colina, Nicholas Christopher A.; Perez, Carlos E.; Paraan, Francis N. C.

    2017-08-01

    We benchmark improvements in the performance of deep neural networks (DNN) on the MNIST dataset upon implementing two simple modifications to the algorithm that have little computational overhead. First is GPU parallelization on a commodity graphics card, and second is initializing the DNN with random orthogonal weight matrices prior to optimization. Eigenspectrum analysis of the weight matrices reveals that the initially orthogonal matrices remain nearly orthogonal after training. The probability distributions from which these orthogonal matrices are drawn are also shown to significantly affect the performance of these deep neural networks.
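    One common recipe for the random orthogonal initialization mentioned above is QR decomposition of a Gaussian matrix; this sketch assumes that recipe and does not reproduce the paper's comparison of source distributions:

```python
import numpy as np

# Draw a random orthogonal weight matrix via QR of a Gaussian matrix.
def random_orthogonal(n, rng):
    a = rng.standard_normal((n, n))
    q, r = np.linalg.qr(a)
    # Fix signs so the distribution is uniform (Haar) over orthogonal matrices.
    q *= np.sign(np.diag(r))
    return q

rng = np.random.default_rng(3)
W = random_orthogonal(64, rng)

# Orthogonality check: W^T W should be the identity.
deviation = np.max(np.abs(W.T @ W - np.eye(64)))
```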

  12. Enstrophy-based proper orthogonal decomposition of flow past rotating cylinder at super-critical rotating rate

    NASA Astrophysics Data System (ADS)

    Sengupta, Tapan K.; Gullapalli, Atchyut

    2016-11-01

    A spinning cylinder rotating about its axis experiences a transverse force/lift; this basic aerodynamic phenomenon is known in textbooks as the Robins-Magnus effect. Prandtl studied this flow with an inviscid, irrotational model and postulated an upper limit on the lift experienced by the cylinder at a critical rotation rate. This non-dimensional rate is the ratio of the surface speed due to rotation to the oncoming free-stream speed. Prandtl predicted a maximum lift coefficient of CLmax = 4π at the critical rotation rate of two. In recent times, evidence shows violation of this upper limit, as in the experiments of Tokumaru and Dimotakis ["The lift of a cylinder executing rotary motions in a uniform flow," J. Fluid Mech. 255, 1-10 (1993)] and in the computed solution of Sengupta et al. ["Temporal flow instability for Magnus-Robins effect at high rotation rates," J. Fluids Struct. 17, 941-953 (2003)]. In the latter reference, this was explained as a temporal instability affecting the flow at higher Reynolds numbers and rotation rates (>2). Here, we analyze the flow past a rotating cylinder at a super-critical rotation rate (=2.5) by enstrophy-based proper orthogonal decomposition (POD) of direct simulation results. POD identifies the most energetic modes and enables flow-field reconstruction with a reduced number of modes. One of the motivations for the present study is to explain the shedding of puffs of vortices at low Reynolds number (Re = 60) and high rotation rate, due to an instability originating in the vicinity of the cylinder, using the Navier-Stokes equation (NSE) computed from t = 0 to t = 300 following an impulsive start. This instability is also explained through the disturbance mechanical energy equation, established earlier in Sengupta et al. ["Temporal flow instability for Magnus-robins effect at high rotation rates," J. Fluids Struct. 17, 941-953 (2003)].

  13. Full Wave Analysis of Passive Microwave Monolithic Integrated Circuit Devices Using a Generalized Finite Difference Time Domain (GFDTD) Algorithm

    NASA Technical Reports Server (NTRS)

    Lansing, Faiza S.; Rascoe, Daniel L.

    1993-01-01

    This paper presents a modified Finite-Difference Time-Domain (FDTD) technique using a generalized conformed orthogonal grid. The use of the Conformed Orthogonal Grid Finite Difference Time Domain (GFDTD) method enables the designer to match all the circuit dimensions, hence eliminating a major source of error in the analysis.

  14. Direct and Indirect Effects of UV-B Exposure on Litter Decomposition: A Meta-Analysis

    PubMed Central

    Song, Xinzhang; Peng, Changhui; Jiang, Hong; Zhu, Qiuan; Wang, Weifeng

    2013-01-01

    Ultraviolet-B (UV-B) exposure in the course of litter decomposition may have a direct effect on decomposition rates by changing the state of photodegradation or the decomposer composition in litter, while UV-B exposure during growth periods may alter the chemical composition and physical properties of plants. Consequently, these changes will indirectly affect subsequent litter decomposition processes in soil. Although studies are available on both the positive and negative effects (including no observable effects) of UV-B exposure on litter decomposition, a comprehensive analysis leading to an adequate understanding has been lacking. Using data from 93 studies across six biomes, this introductory meta-analysis found that elevated UV-B directly increased litter decomposition rates by 7% and indirectly by 12%, while attenuated UV-B directly decreased litter decomposition rates by 23% and indirectly increased them by 7%. However, neither the positive nor the negative effects were statistically significant. Woody plant litter decomposition seemed more sensitive to UV-B than herbaceous plant litter, except under conditions of indirect effects of elevated UV-B. Furthermore, the level of UV-B intensity significantly affected the litter decomposition response to UV-B (P<0.05). UV-B effects on litter decomposition were to a large degree compounded by climatic factors (e.g., MAP and MAT) (P<0.05) and litter chemistry (e.g., lignin content) (P<0.01). Results suggest these factors likely mask the important role of UV-B in litter decomposition. No significant differences in UV-B effects on litter decomposition were found between study types (field experiment vs. laboratory incubation), litter forms (leaf vs. needle), or decay durations. Indirect effects of elevated UV-B on litter decomposition significantly increased with decay duration (P<0.001). 
Additionally, relatively small changes in UV-B exposure intensity (30%) had significant direct effects on litter decomposition (P<0.05). The intent of this meta-analysis was to improve our understanding of the overall effects of UV-B on litter decomposition. PMID:23818993

  15. Solid state gas sensors for detection of explosives and explosive precursors

    NASA Astrophysics Data System (ADS)

    Chu, Yun

    The increased number of terrorist attacks using improvised explosive devices (IEDs) over the past few years has made the trace detection of explosives a priority for the Department of Homeland Security. Considerable advances in early detection of trace explosives employing spectroscopic detection systems and other sensing devices have been made and have demonstrated outstanding performance. However, modern IEDs are not easily detectable by conventional methods, and terrorists have adapted to avoid using metallic or nitro groups in the manufacture of IEDs. Instead, more powerful but smaller compounds, such as TATP, are being used more frequently. In addition, conventional detection techniques usually require large capital investment, labor costs, and energy input, and are incapable of real-time identification, limiting their application. Thus, a low-cost detection system capable of continuous online monitoring in a passive mode is needed for explosive detection. In this dissertation, a thermodynamic-based thin film gas sensor which can reliably detect various explosive compounds was developed and demonstrated. The principle of the sensors is based on measuring the heat effect associated with the catalytic decomposition of explosive compounds present in the vapor phase. The decomposition mechanism is complicated and not well known, but it can be affected by many parameters, including catalyst, reaction temperature, and humidity. Explosives that have relatively high vapor pressure and readily sublime at room temperature, like TATP and 2,6-DNT, are ideal candidates for vapor-phase detection using the thermodynamic gas sensor. ZnO, WO3, V2O5, and SnO2 were employed as catalysts. This sensor exhibited promising sensitivity results for TATP, but poor selectivity among peroxide-based compounds. In order to improve the sensitivity and selectivity of the thermodynamic sensor, a Pd:SnO2 nanocomposite was fabricated and tested as part of this dissertation. 
Combinatorial chemistry techniques were used for catalyst discovery. Specifically, a series of tin oxide catalysts with continuously varying palladium composition was fabricated to screen for the optimum Pd loading to maximize specificity. Experimental results suggested that sensors with a 12 wt.% palladium loading generated the highest sensitivity, while an 8 wt.% palladium loading provided the greatest selectivity. XPS and XRD were used to study how the palladium doping level affects the oxidation state and crystal structure of the nanocomposite catalyst. As with any passive detection system, a necessary theme of this dissertation was the mitigation of false positives. Toward this end, an orthogonal detection system comprised of two independent sensing platforms sharing one catalyst was demonstrated using TATP, 2,6-DNT, and ammonium nitrate as target molecules. The orthogonal sensor incorporated a thermodynamic-based sensing platform to measure the heat effect associated with the decomposition of explosive molecules, and a conductometric sensing platform that monitors the change in electrical conductivity of the same catalyst when exposed to the explosive substances. Results indicate that the orthogonal sensor generates an effective response to explosives present at the part-per-billion level. In addition, with two independent sensing platforms, a built-in redundancy of results could be expected to minimize false positives.

  16. A novel anisotropic inversion approach for magnetotelluric data from subsurfaces with orthogonal geoelectric strike directions

    NASA Astrophysics Data System (ADS)

    Schmoldt, Jan-Philipp; Jones, Alan G.

    2013-12-01

    The key result of this study is the development of a novel inversion approach for cases of orthogonal, or close to orthogonal, geoelectric strike directions at different depth ranges, for example, crustal and mantle depths. Oblique geoelectric strike directions are a well-known issue in commonly employed isotropic 2-D inversion of MT data. Whereas recovery of upper (crustal) structures can, in most cases, be achieved in a straightforward manner, deriving lower (mantle) structures is more challenging with isotropic 2-D inversion in the case of an overlying region (crust) with different geoelectric strike direction. Thus, investigators may resort to computationally expensive and more limited 3-D inversion in order to derive the electric resistivity distribution at mantle depths. In the novel approaches presented in this paper, electric anisotropy is used to image 2-D structures in one depth range, whereas the other region is modelled with an isotropic 1-D or 2-D approach, as a result significantly reducing computational costs of the inversion in comparison with 3-D inversion. The 1- and 2-D versions of the novel approach were tested using a synthetic 3-D subsurface model with orthogonal strike directions at crust and mantle depths and their performance was compared to results of isotropic 2-D inversion. Structures at crustal depths were reasonably well recovered by all inversion approaches, whereas recovery of mantle structures varied significantly between the different approaches. Isotropic 2-D inversion models, despite decomposition of the electric impedance tensor and using a wide range of inversion parameters, exhibited severe artefacts thereby confirming the requirement of either an enhanced or a higher dimensionality inversion approach. 
With the anisotropic 1-D inversion approach, mantle structures of the synthetic model were recovered reasonably well with anisotropy values parallel to the mantle strike direction (in this study anisotropy was assigned to the mantle region), indicating applicability of the novel approach for basic subsurface cases. For the more complex subsurface cases, however, the anisotropic 1-D inversion approach is likely to yield implausible models of the electric resistivity distribution due to inapplicability of the 1-D approximation. Owing to the higher number of degrees of freedom, the anisotropic 2-D inversion approach can cope with more complex subsurface cases and is the recommended tool for real data sets recorded in regions with orthogonal geoelectric strike directions.

  17. Stockholder projector analysis: A Hilbert-space partitioning of the molecular one-electron density matrix with orthogonal projectors

    NASA Astrophysics Data System (ADS)

    Vanfleteren, Diederik; Van Neck, Dimitri; Bultinck, Patrick; Ayers, Paul W.; Waroquier, Michel

    2012-01-01

    A previously introduced partitioning of the molecular one-electron density matrix over atoms and bonds [D. Vanfleteren et al., J. Chem. Phys. 133, 231103 (2010)] is investigated in detail. Orthogonal projection operators are used to define atomic subspaces, as in Natural Population Analysis. The orthogonal projection operators are constructed with a recursive scheme. These operators are chemically relevant and obey a stockholder principle, familiar from the Hirshfeld-I partitioning of the electron density. The stockholder principle is extended to density matrices, where the orthogonal projectors are considered to be atomic fractions of the summed contributions. All calculations are performed as matrix manipulations in one-electron Hilbert space. Mathematical proofs and numerical evidence concerning this recursive scheme are provided in the present paper. The advantages associated with the use of these stockholder projection operators are examined with respect to covalent bond orders, bond polarization, and transferability.
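    The projector algebra underlying this partitioning can be illustrated on a toy basis; the subspace split below is synthetic, not a real molecular one-electron basis:

```python
import numpy as np

# Toy illustration of orthogonal projector properties: a projector P
# onto an "atomic" subspace is symmetric and idempotent, and
# complementary projectors resolve the identity.
rng = np.random.default_rng(4)
dim, sub = 8, 3

# Orthonormal basis for the whole space via QR.
Q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))
A = Q[:, :sub]                    # "atom A" subspace
B = Q[:, sub:]                    # complementary subspace

P_A = A @ A.T
P_B = B @ B.T

idempotent = np.allclose(P_A @ P_A, P_A)
symmetric = np.allclose(P_A, P_A.T)
resolves_identity = np.allclose(P_A + P_B, np.eye(dim))
```

The stockholder construction in the paper distributes such projectors over atoms so that their contributions sum consistently, as the resolution-of-identity check suggests.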

  18. Validating the performance of one-time decomposition for fMRI analysis using ICA with automatic target generation process.

    PubMed

    Yao, Shengnan; Zeng, Weiming; Wang, Nizhuan; Chen, Lei

    2013-07-01

    Independent component analysis (ICA) has been proven to be effective for functional magnetic resonance imaging (fMRI) data analysis. However, ICA decomposition requires iterative optimization of the unmixing matrix, whose initial values are generated randomly; this randomness of initialization leads to different decomposition results. Therefore, a single one-time decomposition is not usually reliable for fMRI data analysis. Under this circumstance, several methods based on repeated decompositions with ICA (RDICA) were proposed to reveal the stability of ICA decomposition. Although utilizing RDICA has achieved satisfying results in validating the performance of ICA decomposition, RDICA costs considerable computing time. To mitigate this problem, in this paper we propose a method, named ATGP-ICA, for fMRI data analysis. This method generates fixed initial values with an automatic target generation process (ATGP) instead of producing them randomly. We performed experimental tests on both hybrid data and fMRI data to demonstrate the effectiveness of the new method, and made a performance comparison among the traditional one-time decomposition with ICA (ODICA), RDICA, and ATGP-ICA. The proposed method not only eliminates the randomness of ICA decomposition, but also saves considerable computing time compared to RDICA. Furthermore, receiver operating characteristic (ROC) power analysis also indicated better signal reconstruction performance for ATGP-ICA than for RDICA. Copyright © 2013 Elsevier Inc. All rights reserved.

  19. Multilayer neural networks for reduced-rank approximation.

    PubMed

    Diamantaras, K I; Kung, S Y

    1994-01-01

    This paper is developed in two parts. First, the authors formulate the solution to the general reduced-rank linear approximation problem, relaxing the invertibility assumption of the input autocorrelation matrix used by previous authors. The authors' treatment unifies linear regression, Wiener filtering, full-rank approximation, auto-association networks, SVD, and principal component analysis (PCA) as special cases. The authors' analysis also shows that two-layer linear neural networks with a reduced number of hidden units, trained with the least-squares error criterion, produce weights that correspond to the generalized singular value decomposition of the input-teacher cross-correlation matrix and the input data matrix. As a corollary, the linear two-layer backpropagation model with a reduced hidden layer extracts an arbitrary linear combination of the generalized singular vector components. Second, the authors investigate artificial neural network models for the solution of the related generalized eigenvalue problem. By introducing and utilizing the extended concept of deflation (originally proposed for the standard eigenvalue problem), the authors find that a sequential version of linear BP can extract the exact generalized eigenvector components. The advantage of this approach is that it is easier to update the model structure by adding one more unit, or pruning one or more units, when the application requires it. An alternative approach for extracting the exact components is to use a set of lateral connections among the hidden units, trained in such a way as to enforce orthogonality among the upper- and lower-layer weights. The authors call this the lateral orthogonalization network (LON) and show via theoretical analysis, and verify via simulation, that the network extracts the desired components. The advantage of the LON-based model is that it can be applied in a parallel fashion so that the components are extracted concurrently. 
Finally, the authors show the application of their results to the solution of the identification problem for systems whose excitation has a non-invertible autocorrelation matrix. Previous identification methods usually rely on the invertibility assumption of the input autocorrelation and therefore cannot be applied to this case.
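    The reduced-rank linear approximation at the heart of the first part can be sketched in the simplest setting of whitened inputs, where the optimum is a truncated SVD of the input-teacher cross-correlation matrix (the generalized SVD handles the non-invertible case the authors actually treat; data here are synthetic):

```python
import numpy as np

# Reduced-rank regression sketch: find the rank-r map W minimizing
# ||Y - W X||_F. With inputs whitened so that X X^T = I, the problem
# reduces to ||W - Y X^T||_F, so the optimum is the truncated SVD of
# the cross-correlation matrix C = Y X^T.
rng = np.random.default_rng(5)
p, q, n = 20, 15, 200

# Whitened inputs: orthonormal rows.
X, _ = np.linalg.qr(rng.standard_normal((n, p)))
X = X.T                                    # p x n, with X @ X.T = I_p
Y = rng.standard_normal((q, n))

C = Y @ X.T                                # input-teacher cross-correlation
U, s, Vt = np.linalg.svd(C, full_matrices=False)

r = 5
W_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r]   # optimal rank-r map
err_r = np.linalg.norm(Y - W_r @ X)
err_full = np.linalg.norm(Y - C @ X)       # unconstrained least-squares error
```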

  20. POD analysis of the instability mode of a low-speed streak in a laminar boundary layer

    NASA Astrophysics Data System (ADS)

    Deng, Si-Chao; Pan, Chong; Wang, Jin-Jun; Rinoshika, Akira

    2017-12-01

    The instability of one single low-speed streak in a zero-pressure-gradient laminar boundary layer is investigated experimentally via both hydrogen bubble visualization and planar particle image velocimetry (PIV) measurement. A single low-speed streak is generated and destabilized by the wake of an interference wire positioned normal to the wall and in the upstream. The downstream development of the streak includes secondary instability and self-reproduction process, which leads to the generation of two additional streaks appearing on either side of the primary one. A proper orthogonal decomposition (POD) analysis of PIV measured velocity field is used to identify the components of the streak instability in the POD mode space: for a sinuous/varicose type of POD mode, its basis functions present anti-symmetric/symmetric distributions about the streak centerline in the streamwise component, and the symmetry condition reverses in the spanwise component. It is further shown that sinuous mode dominates the turbulent kinematic energy (TKE) through the whole streak evolution process, the TKE content first increases along the streamwise direction to a saturation value and then decays slowly. In contrast, varicose mode exhibits a sustained growth of the TKE content, suggesting an increasing competition of varicose instability against sinuous instability.

  1. Coherent and turbulent process analysis of the effects of a longitudinal vortex on boundary layer detachment on a NACA0015 foil

    NASA Astrophysics Data System (ADS)

    Prothin, Sebastien; Djeridi, Henda; Billard, Jean-Yves

    2014-05-01

    In this paper, the influence of a single tip vortex on boundary layer detachment is studied. This study offers a preliminary approach in order to better understand the interaction between a propeller hub vortex and the rudder installed in its wake. This configuration belongs to the field of marine propulsion and encompasses such specific problem as cavitation inception, modification of propulsive performances and induced vibrations. To better understand the complex mechanisms due to propeller-rudder interactions it was decided to emphasize configurations where the hub vortex is generated by an elliptical 3-D foil and is located upstream of a 2-D NACA0015 foil at high incidences for a Reynolds number of 5×105. The physical mechanisms were studied using Time Resolved Stereoscopic Particle Image Velocimetry (TR-SPIV) techniques. Particular attention was paid to the detachment at 25° incidence and a detailed cartography of the mean and turbulent properties of the wake is presented. Proper Orthogonal Decomposition (POD) analysis was applied in order to highlight the unsteady nature of the flow using phase averaging based on the first POD coefficients to characterize the turbulent and coherent process in the near wake of the rudder.

  2. Generalized Higher Order Orthogonal Iteration for Tensor Learning and Decomposition.

    PubMed

    Liu, Yuanyuan; Shang, Fanhua; Fan, Wei; Cheng, James; Cheng, Hong

    2016-12-01

    Low-rank tensor completion (LRTC) has successfully been applied to a wide range of real-world problems. Despite these broad, successful applications, existing LRTC methods may become very slow or even inapplicable for large-scale problems. To address this issue, a novel core tensor trace-norm minimization (CTNM) method is proposed for simultaneous tensor learning and decomposition, with much lower computational complexity. In our solution, first, the equivalence relation between the trace norm of a low-rank tensor and that of its core tensor is induced. Second, the trace norm of the core tensor is used to replace that of the whole tensor, which leads to two much smaller-scale matrix TNM problems. Finally, an efficient alternating direction augmented Lagrangian method is developed to solve our problems. Our CTNM formulation needs only O((R^N + NRI) log(√(I^N))) observations to reliably recover an Nth-order I×I×…×I tensor of n-rank (r,r,…,r), compared with the O(rI^(N-1)) observations required by tensor TNM methods (I > R ≥ r). Extensive experimental results show that CTNM is usually more accurate than these methods, and is orders of magnitude faster.

  3. Efficient model reduction of parametrized systems by matrix discrete empirical interpolation

    NASA Astrophysics Data System (ADS)

    Negri, Federico; Manzoni, Andrea; Amsallem, David

    2015-12-01

    In this work, we apply a Matrix version of the so-called Discrete Empirical Interpolation Method (MDEIM) for the efficient reduction of nonaffine parametrized systems arising from the discretization of linear partial differential equations. Dealing with affinely parametrized operators is crucial in order to enhance the online solution of reduced-order models (ROMs). However, in many cases such an affine decomposition is not readily available, and must be recovered through (often) intrusive procedures, such as the empirical interpolation method (EIM) and its discrete variant DEIM. In this paper we show that MDEIM represents a very efficient approach to deal with complex physical and geometrical parametrizations in a non-intrusive, efficient and purely algebraic way. We propose different strategies to combine MDEIM with a state approximation resulting either from a reduced basis greedy approach or Proper Orthogonal Decomposition. A posteriori error estimates accounting for the MDEIM error are also developed in the case of parametrized elliptic and parabolic equations. Finally, the capability of MDEIM to generate accurate and efficient ROMs is demonstrated on the solution of two computationally-intensive classes of problems occurring in engineering contexts, namely PDE-constrained shape optimization and parametrized coupled problems.
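    The vector-valued DEIM index selection that MDEIM generalizes can be sketched as follows (this is the classic greedy algorithm of Chaturantabut and Sorensen, not the authors' MDEIM code; data are synthetic):

```python
import numpy as np

# Greedy DEIM index selection: given a POD basis U of nonlinear-term
# snapshots, pick interpolation rows one at a time at the location of
# the largest interpolation residual.
def deim_indices(U):
    m = U.shape[1]
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for l in range(1, m):
        # Interpolate the next basis vector at the current indices...
        c = np.linalg.solve(U[idx, :l], U[idx, l])
        # ...and place the new index where the residual is largest.
        r = U[:, l] - U[:, :l] @ c
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)

rng = np.random.default_rng(6)
snapshots = rng.standard_normal((300, 40))
U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
U = U[:, :8]                       # POD basis of the nonlinear term

P = deim_indices(U)
# The selected rows must make U[P] invertible for interpolation to work.
cond = np.linalg.cond(U[P])
```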

  4. An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.

    2007-01-01

    A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs, such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends on knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined that accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters' ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.
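    The core least-squares idea can be sketched as follows; the sensitivity matrix H, the dimensions, and the closed-form tuning vector are illustrative assumptions, not the engine model itself:

```python
import numpy as np

# Sketch: many health parameters h influence the outputs through a
# sensitivity matrix H. A low-dimensional tuning vector q in a subspace
# V is chosen so that H V q approximates H h as closely as possible in
# a least-squares sense; taking V from the truncated SVD of H gives the
# optimal subspace. All matrices here are synthetic.
rng = np.random.default_rng(7)
n_out, n_health, k = 6, 10, 3      # k = tuning vector dimension

H = rng.standard_normal((n_out, n_health))
U, s, Vt = np.linalg.svd(H, full_matrices=False)
V = Vt[:k].T                       # tuning basis: top right singular vectors

h = rng.standard_normal(n_health)  # true (unknown) health-parameter shift
q = V.T @ h                        # closed-form tuning vector
q_ls, *_ = np.linalg.lstsq(H @ V, H @ h, rcond=None)  # direct LS solution

approx_err = np.linalg.norm(H @ h - H @ V @ q)
```

With V built from the leading right singular vectors, the simple projection q = Vᵀh coincides with the least-squares optimum, which is the low-dimensional representation a Kalman filter can then estimate.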

  6. An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.

    2005-01-01

    A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends upon knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined which accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters' ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.

  7. Developing Chemistry and Kinetic Modeling Tools for Low-Temperature Plasma Simulations

    NASA Astrophysics Data System (ADS)

    Jenkins, Thomas; Beckwith, Kris; Davidson, Bradley; Kruger, Scott; Pankin, Alexei; Roark, Christine; Stoltz, Peter

    2015-09-01

    We discuss the use of proper orthogonal decomposition (POD) methods in VSim, an FDTD plasma simulation code capable of both PIC/MCC and fluid modeling. POD methods efficiently generate smooth representations of noisy self-consistent or test-particle PIC data, and are thus advantageous in computing macroscopic fluid quantities from large PIC datasets (e.g. for particle-based closure computations) and in constructing optimal visual representations of the underlying physics. They may also confer performance advantages for massively parallel simulations, due to the significant reduction in dataset sizes conferred by truncated singular-value decompositions of the PIC data. We also demonstrate how complex LTP chemistry scenarios can be modeled in VSim via an interface with MUNCHKIN, a standalone python/C++/SQL code under development that identifies reaction paths for given input species, solves 1D rate equations for the time-dependent chemical evolution of the system, and generates corresponding VSim input blocks with appropriate cross-sections/reaction rates. MUNCHKIN also computes reaction rates from user-specified distribution functions, and conducts principal path analyses to reduce the number of simulated chemical reactions. Supported by U.S. Department of Energy SBIR program, Award DE-SC0009501.

  8. Comparative assessment of orthogonal polynomials for wavefront reconstruction over the square aperture.

    PubMed

    Ye, Jingfei; Gao, Zhishan; Wang, Shuai; Cheng, Jinlong; Wang, Wei; Sun, Wenqing

    2014-10-01

    Four orthogonal polynomial sets for reconstructing a wavefront over a square aperture based on the modal method are currently available, namely, the 2D Chebyshev polynomials, 2D Legendre polynomials, Zernike square polynomials and Numerical polynomials. They are all orthogonal over the full unit square domain. 2D Chebyshev polynomials are defined by the product of Chebyshev polynomials in the x and y variables, as are 2D Legendre polynomials. Zernike square polynomials are derived by the Gram-Schmidt orthogonalization process, with the integration region taken as the full unit square, which circumscribes the unit circle. Numerical polynomials are obtained by numerical calculation. The present study compares these four orthogonal polynomial sets by theoretical analysis and numerical experiments from the aspects of reconstruction accuracy, remaining errors, and robustness. Results show that the Numerical orthogonal polynomials are superior to the other three sets because of their high accuracy and robustness, even in the case of a wavefront with incomplete data.
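    As a minimal illustration of the modal method the comparison is built on, the sketch below fits a synthetic wavefront over the square aperture with products of 1-D Legendre polynomials (the "2D Legendre" set). Grid size, mode count, and the chosen coefficients are arbitrary assumptions, not values from the paper.

```python
import numpy as np
from numpy.polynomial import legendre as leg

n = 33
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x)                      # square-aperture sample grid

def mode(i, j):
    # product basis function P_i(x) * P_j(y); np.eye(i+1)[i] selects degree i
    return leg.legval(X, np.eye(i + 1)[i]) * leg.legval(Y, np.eye(j + 1)[j])

# design matrix: one column per 2D Legendre mode, one row per grid point
A = np.column_stack([mode(i, j).ravel() for i in range(4) for j in range(4)])

true_c = np.zeros(16)
true_c[1], true_c[5] = 0.5, -0.25             # two known low-order modes
w = A @ true_c                                # synthetic noise-free wavefront samples

coef, *_ = np.linalg.lstsq(A, w, rcond=None)  # modal least-squares reconstruction
```

    With noise-free data the fitted modal coefficients recover `true_c` to machine precision; the paper's comparison concerns how robustly each basis behaves once noise and missing data are introduced.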

  9. Wavefront sensing with all-digital Stokes measurements

    NASA Astrophysics Data System (ADS)

    Dudley, Angela; Milione, Giovanni; Alfano, Robert R.; Forbes, Andrew

    2014-09-01

    A long-standing goal in optics has been to measure the phase (or wavefront) of an optical field efficiently. This has led to numerous publications and commercial devices, such as phase-shift interferometry, wavefront reconstruction via modal decomposition, and Shack-Hartmann wavefront sensors. In this work we develop a new technique to extract the phase which, in contrast to the previously mentioned methods, is based on polarization (or Stokes) measurements. We outline a simple, all-digital approach using only a spatial light modulator and a polarization grating to exploit the amplitude and phase relationship between the orthogonal states of polarization to determine the phase of an optical field. We implement this technique to reconstruct the phase of static and propagating optical vortices.

  10. Interannual variation of the Antarctic Ice Sheet from a combined analysis of satellite gravimetry and altimetry data

    NASA Astrophysics Data System (ADS)

    Mémin, A.; Flament, T.; Alizier, B.; Watson, C.; Rémy, F.

    2015-07-01

    Assessment of the long-term mass balance of the Antarctic Ice Sheet (AIS), and thus the determination of its contribution to sea level rise, requires an understanding of interannual variability and associated causal mechanisms. We performed a combined analysis of surface-mass and elevation changes using data from the GRACE and Envisat satellite missions, respectively. Using empirical orthogonal functions and singular value decompositions of each data set, we find a quasi-periodic ~4.7-yr signal between 08/2002 and 10/2010 that accounts for ∼15-30% of the time variability of the filtered and detrended surface-mass and elevation data. Computation of the density of this variable mass load corresponds to snow or uncompacted firn. Changes reach maximum amplitude within the first 100 km from the coast, where they contribute up to 30-35% of the annual rate of accumulation. Extending the analysis to 09/2014 using surface-mass changes only, we find anomalies with a periodicity of about 4-6 yr that circle the AIS in about 9-10 yr. These properties connect the observed anomalies to the Antarctic Circumpolar Wave (ACW), which is known to affect several key climate variables, including precipitation. This suggests that variability in the surface-mass balance of the Antarctic Ice Sheet may also be modulated by the ACW.

  11. Orthogonal Chirp-Based Ultrasonic Positioning

    PubMed Central

    Khyam, Mohammad Omar; Ge, Shuzhi Sam; Li, Xinde; Pickering, Mark

    2017-01-01

    This paper presents a chirp-based ultrasonic positioning system (UPS) using orthogonal chirp waveforms. In the proposed method, multiple transmitters can transmit chirp signals simultaneously; as a result, the system can efficiently utilize the entire available frequency spectrum. The fundamental idea behind the proposed multiple-access scheme is to utilize the oversampling methodology of orthogonal frequency-division multiplexing (OFDM) modulation and the orthogonality of the discrete frequency components of a chirp waveform. In addition, the proposed orthogonal chirp waveforms retain all the advantages of a classical chirp waveform. The performance of the waveforms is first investigated through correlation analysis and then evaluated, in an indoor environment, through simulations and experiments for ultrasonic (US) positioning. For an operational range of approximately 1000 mm, the positioning root-mean-square errors (RMSEs) and 90% errors were 4.54 mm and 6.68 mm, respectively. PMID:28448454

  12. Orthogonal Chirp-Based Ultrasonic Positioning.

    PubMed

    Khyam, Mohammad Omar; Ge, Shuzhi Sam; Li, Xinde; Pickering, Mark

    2017-04-27

    This paper presents a chirp-based ultrasonic positioning system (UPS) using orthogonal chirp waveforms. In the proposed method, multiple transmitters can transmit chirp signals simultaneously; as a result, the system can efficiently utilize the entire available frequency spectrum. The fundamental idea behind the proposed multiple-access scheme is to utilize the oversampling methodology of orthogonal frequency-division multiplexing (OFDM) modulation and the orthogonality of the discrete frequency components of a chirp waveform. In addition, the proposed orthogonal chirp waveforms retain all the advantages of a classical chirp waveform. The performance of the waveforms is first investigated through correlation analysis and then evaluated, in an indoor environment, through simulations and experiments for ultrasonic (US) positioning. For an operational range of approximately 1000 mm, the positioning root-mean-square errors (RMSEs) and 90% errors were 4.54 mm and 6.68 mm, respectively.
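    The orthogonality property the multiple-access scheme relies on can be illustrated with a toy construction (not the paper's exact waveform design): two waveforms synthesized from disjoint sets of DFT bins, each bin carrying a quadratic spectral phase as in a discrete chirp, are exactly orthogonal over one symbol period, so their transmissions can be separated by correlation. The function name `multitone` and all numerical choices are illustrative assumptions.

```python
import numpy as np

N = 256
n = np.arange(N)

def multitone(bins):
    # sum of DFT-bin exponentials; the phase term quadratic in k mimics a
    # chirp's quadratic spectral phase but does not affect bin orthogonality
    return sum(np.exp(2j * np.pi * k * n / N) * np.exp(1j * np.pi * k ** 2 / N)
               for k in bins)

s1 = multitone(range(10, 60, 2))   # even bins in the band -> transmitter 1
s2 = multitone(range(11, 60, 2))   # odd bins in the band  -> transmitter 2

inner = np.vdot(s1, s2)            # exactly zero up to floating-point rounding
```

    Because distinct DFT bins are orthogonal over the symbol period, simultaneous transmissions built on disjoint bin sets do not interfere at a correlating receiver.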

  13. Three-Dimensional Orthogonal Co-ordinates

    ERIC Educational Resources Information Center

    Astin, J.

    1974-01-01

    A systematic approach to general orthogonal co-ordinates, suitable for use near the end of a beginning vector analysis course, is presented. It introduces students to tensor quantities and shows how equations and quantities needed in classical problems can be determined. (Author/LS)

  14. [Orthogonal Vector Projection Algorithm for Spectral Unmixing].

    PubMed

    Song, Mei-ping; Xu, Xing-wei; Chang, Chein-I; An, Ju-bai; Yao, Li

    2015-12-01

    Spectral unmixing is an important part of hyperspectral technology and is essential for material quantity analysis in hyperspectral imagery. Most linear unmixing algorithms require computations of matrix multiplication and matrix inversion or matrix determinants. These are difficult to program and especially hard to realize in hardware. At the same time, the computation costs of these algorithms increase significantly as the number of endmembers grows. Here, based on the traditional Orthogonal Subspace Projection algorithm, a new method called Orthogonal Vector Projection is proposed, built on the orthogonality principle. It simplifies the process by avoiding matrix multiplication and inversion. It first computes the final orthogonal vector via the Gram-Schmidt process for each endmember spectrum. These orthogonal vectors are then used as projection vectors for the pixel signature. The unconstrained abundance can be obtained directly by projecting the signature onto the projection vectors and computing the ratio of the projected vector length to the orthogonal vector length. Compared to the Orthogonal Subspace Projection and Least Squares Error algorithms, this method does not need matrix inversion, which is computationally costly and hard to implement in hardware. It completes the orthogonalization process by repeated vector operations, making it easy to apply in both parallel computation and hardware. The soundness of the algorithm is demonstrated through its relationship with the Orthogonal Subspace Projection and Least Squares Error algorithms. Its computational complexity is also compared with that of the other two algorithms and is found to be the lowest. Finally, experimental results on synthetic and real images are provided, giving further evidence of the effectiveness of the method.
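    The projection step described above can be sketched directly. This is a hedged reading of the abstract with synthetic data: `np.linalg.qr` stands in for the repeated Gram-Schmidt sweeps, and the endmember matrix `E` is random rather than a real spectral library.

```python
import numpy as np

rng = np.random.default_rng(1)
bands, p = 50, 3
E = rng.random((bands, p))                 # synthetic endmember spectra (columns)

def orthogonal_vector(E, j):
    # residual of endmember j after removing the span of the other endmembers;
    # QR here plays the role of the Gram-Schmidt process in the paper
    Q, _ = np.linalg.qr(np.delete(E, j, axis=1))
    return E[:, j] - Q @ (Q.T @ E[:, j])

a_true = np.array([0.5, 0.3, 0.2])
pixel = E @ a_true                          # noise-free linearly mixed pixel

# abundance = ratio of the pixel's projection to the endmember's projection
# onto the orthogonal vector -- no matrix inversion anywhere
a_est = []
for j in range(p):
    v = orthogonal_vector(E, j)
    a_est.append((v @ pixel) / (v @ E[:, j]))
a_est = np.array(a_est)
```

    Because each orthogonal vector is perpendicular to all other endmembers, projecting a mixed pixel onto it isolates exactly one abundance, which is why the noise-free estimate is exact.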

  15. Reconstructing householder vectors from Tall-Skinny QR

    DOE PAGES

    Ballard, Grey Malone; Demmel, James; Grigori, Laura; ...

    2015-08-05

    The Tall-Skinny QR (TSQR) algorithm is more communication efficient than the standard Householder algorithm for QR decomposition of matrices with many more rows than columns. However, TSQR produces a different representation of the orthogonal factor and therefore requires more software development to support the new representation. Further, implicitly applying the orthogonal factor to the trailing matrix in the context of factoring a square matrix is more complicated and costly than with the Householder representation. We show how to perform TSQR and then reconstruct the Householder vector representation with the same asymptotic communication efficiency and little extra computational cost. We demonstrate the high performance and numerical stability of this algorithm both theoretically and empirically. The new Householder reconstruction algorithm allows us to design more efficient parallel QR algorithms, with significantly lower latency cost compared to Householder QR and lower bandwidth and latency costs compared with the Communication-Avoiding QR (CAQR) algorithm. Experiments on supercomputers demonstrate the benefits of the communication cost improvements: in particular, our experiments show substantial improvements over tuned library implementations for tall-and-skinny matrices. Furthermore, we also provide algorithmic improvements to the Householder QR and CAQR algorithms, and we investigate several alternatives to the Householder reconstruction algorithm that sacrifice guarantees on numerical stability in some cases in order to obtain higher performance.
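    A two-level TSQR can be sketched serially in NumPy as a stand-in for the parallel algorithm; the block count is arbitrary, and the explicit `Q` is formed only for checking (the paper's contribution is reconstructing the Householder representation instead, which is not shown here).

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((1000, 8))            # tall-skinny: rows >> columns

# Level 1: factor row blocks independently (one block per "processor").
# Level 2: factor the stack of small R factors; in a parallel run only
# these 8x8 blocks would cross the network, which is the communication win.
blocks = np.array_split(A, 4)
local = [np.linalg.qr(b) for b in blocks]
Q2, R = np.linalg.qr(np.vstack([r for _, r in local]))

# Explicit orthogonal factor: apply each local Q to its row block of Q2
ncol = A.shape[1]
Q = np.vstack([q @ Q2[i * ncol:(i + 1) * ncol] for i, (q, _) in enumerate(local)])
```

    By construction `Q @ R` reproduces `A` and `Q` has orthonormal columns, since a block-diagonal matrix of orthonormal factors times the orthonormal `Q2` is itself orthonormal.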

  16. Direct recovery of regional tracer kinetics from temporally inconsistent dynamic ECT projections using dimension-reduced time-activity basis

    NASA Astrophysics Data System (ADS)

    Maltz, Jonathan S.

    2000-11-01

    We present an algorithm of reduced computational cost which is able to estimate kinetic model parameters directly from dynamic ECT sinograms made up of temporally inconsistent projections. The algorithm exploits the extreme degree of parameter redundancy inherent in linear combinations of the exponential functions which represent the modes of first-order compartmental systems. The singular value decomposition is employed to find a small set of orthogonal functions, the linear combinations of which are able to accurately represent all modes within the physiologically anticipated range in a given study. The reduced-dimension basis is formed as the convolution of this orthogonal set with a measured input function. The Moore-Penrose pseudoinverse is used to find coefficients of this basis. Algorithm performance is evaluated at realistic count rates using MCAT phantom and clinical 99mTc-teboroxime myocardial study data. Phantom data are modelled as originating from a Poisson process. For estimates recovered from a single slice projection set containing 2.5×10⁵ total counts, recovered tissue responses compare favourably with those obtained using more computationally intensive methods. The corresponding kinetic parameter estimates (coefficients of the new basis) exhibit negligible bias, while parameter variances are low, falling within 30% of the Cramér-Rao lower bound.
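    The basis-reduction idea can be sketched as follows, under assumed values for the time grid and rate range (the full algorithm additionally convolves the basis with a measured input function, which is omitted here):

```python
import numpy as np

t = np.linspace(0.0, 60.0, 256)               # scan time grid (arbitrary units)
rates = np.logspace(-2, 0, 200)               # assumed physiological rate range
E = np.exp(-np.outer(t, rates))               # columns: candidate exponential modes

# SVD of the mode family: a handful of left singular vectors suffices to
# represent every exponential in the anticipated range to high accuracy,
# exposing the parameter redundancy the algorithm exploits.
U, s, _ = np.linalg.svd(E, full_matrices=False)
k = int(np.sum(s / s[0] > 1e-6))              # numerical rank at relative tol 1e-6
basis = U[:, :k]

# any in-range mode is captured by the reduced orthogonal basis
test_mode = np.exp(-0.3 * t)
resid = test_mode - basis @ (basis.T @ test_mode)
err = np.linalg.norm(resid) / np.linalg.norm(test_mode)
```

    Fitting coefficients of the small orthogonal basis (via the pseudoinverse, as in the paper) is far cheaper than fitting the redundant exponential parameters directly.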

  17. Orthogonal recursive bisection as data decomposition strategy for massively parallel cardiac simulations.

    PubMed

    Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Pitman, Michael C; Rice, John J

    2011-06-01

    We present the orthogonal recursive bisection algorithm, which hierarchically segments the anatomical model structure into subvolumes that are distributed to cores. The anatomy is derived from the Visible Human Project, with electrophysiology based on the FitzHugh-Nagumo (FHN) and ten Tusscher (TT04) models with monodomain diffusion. Benchmark simulations with up to 16,384 and 32,768 cores on IBM Blue Gene/P and L supercomputers for both FHN and TT04 show good load balancing with almost perfect speedup factors that are close to linear in the number of cores. Hence, strong scaling is demonstrated. With 32,768 cores, a 1000 ms simulation of a full heart beat requires about 6.5 min of wall clock time for the FHN model. For the largest machine partitions, the simulations execute at a rate of 0.548 s (BG/P) and 0.394 s (BG/L) of wall clock time per 1 ms of simulation time. To our knowledge, these simulations show strong scaling to substantially higher numbers of cores than reported previously for organ-level simulation of the heart, thus significantly reducing run times. The ability to reduce run times could play a critical role in enabling wider use of cardiac models in research and clinical applications.
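    The partitioning strategy can be sketched with a minimal serial implementation; the point cloud, split depth, and the median-split rule are illustrative assumptions rather than details from the paper.

```python
import numpy as np

def orb(points, depth):
    # orthogonal recursive bisection: split at the median along the longest
    # axis of the bounding box, then recurse on both halves
    if depth == 0:
        return [points]
    axis = int(np.argmax(points.max(axis=0) - points.min(axis=0)))
    order = np.argsort(points[:, axis])
    half = len(points) // 2
    return (orb(points[order[:half]], depth - 1)
            + orb(points[order[half:]], depth - 1))

rng = np.random.default_rng(3)
pts = rng.random((4096, 3))            # stand-in for voxel centres of the anatomy
parts = orb(pts, 5)                    # 2**5 = 32 equally loaded subvolumes
```

    Median splits guarantee each of the 2^depth subvolumes holds the same number of points, which is the load-balancing property behind the near-linear speedups reported above.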

  18. Multimode Bose-Hubbard model for quantum dipolar gases in confined geometries

    NASA Astrophysics Data System (ADS)

    Cartarius, Florian; Minguzzi, Anna; Morigi, Giovanna

    2017-06-01

    We theoretically consider ultracold polar molecules in a wave guide. The particles are bosons: They experience a periodic potential due to an optical lattice oriented along the wave guide and are polarized by an electric field orthogonal to the guide axis. The array becomes mechanically unstable as the transverse confinement is opened in the direction orthogonal to the polarizing electric field, and can undergo a transition to a double-chain (zigzag) structure. For this geometry we derive a multimode generalized Bose-Hubbard model for determining the quantum phases of the gas at the mechanical instability, taking into account the quantum fluctuations in all directions of space. Our model limits the dimension of the numerically relevant Hilbert subspace by means of an appropriate decomposition of the field operator, which is obtained from a field theoretical model of the linear-zigzag instability. We determine the phase diagrams of small systems using exact diagonalization and find that, even for tight transverse confinement, the aspect ratio between the two transverse trap frequencies controls not only the classical but also the quantum properties of the ground state in a nontrivial way. Convergence tests at the linear-zigzag instability demonstrate that our multimode generalized Bose-Hubbard model can catch the essential features of the quantum phases of dipolar gases in confined geometries with a limited computational effort.

  19. Nonlinear analysis of gait kinematics to track changes in oxygen consumption in prolonged load carriage walking: a pilot study.

    PubMed

    Schiffman, Jeffrey M; Chelidze, David; Adams, Albert; Segala, David B; Hasselquist, Leif

    2009-09-18

    Linking human mechanical work to physiological work for the purpose of developing a model of physical fatigue is a complex problem that cannot be solved easily by conventional biomechanical analysis. The purpose of the study was to determine whether two nonlinear analysis methods can address the fundamental issue of utilizing kinematic data to track oxygen consumption from a prolonged walking trial: we evaluated the effectiveness of dynamical systems and fractal analysis in this study. Further, we selected oxygen consumption as a measure to represent the underlying physiological measure of fatigue. Three male US Army Soldier volunteers (means: 23.3 yr; 1.80 m; 77.3 kg) walked for 120 min at 1.34 m/s with a 40-kg load on a level treadmill. Gait kinematic data and oxygen consumption (VO(2)) data were collected over the 120-min period. For the fractal analysis, utilizing stride interval data, we calculated the fractal dimension. For the dynamical systems analysis, kinematic angle time series were used to estimate phase-space-warping-based features at uniform time intervals: smooth orthogonal decomposition (SOD) was used to extract slowly time-varying trends from these features. Estimated fractal dimensions showed no apparent trend or correlation with independently measured VO(2). While inter-individual differences did exist in the VO(2) data, dominant SOD time trends tracked and correlated with the VO(2) for all volunteers. Thus, dynamical systems analysis using gait kinematics may be suitable for developing a model to predict physiologic fatigue based on biomechanical work.
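    Smooth orthogonal decomposition, as used above to extract slow trends, solves a generalized eigenproblem balancing variance against roughness. The sketch below applies it to synthetic multichannel data with a planted slow drift; all signals and noise levels are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, 2000)
trend = t ** 2                                   # slow drift standing in for fatigue
X = np.column_stack([
    trend + 0.05 * rng.standard_normal(t.size),
    0.5 * trend + 0.05 * rng.standard_normal(t.size),
    0.05 * rng.standard_normal(t.size),          # a channel of pure noise
])
X -= X.mean(axis=0)

S = X.T @ X / len(X)                             # covariance of the features
V = np.diff(X, axis=0)
Sv = V.T @ V / len(V)                            # covariance of their time derivative

# SOD: generalized eigenproblem S w = lambda Sv w. A large eigenvalue means
# high variance relative to roughness, i.e. a smooth dominant trend.
w, P = np.linalg.eig(np.linalg.solve(Sv, S))
dominant = X @ np.real(P[:, np.argmax(w.real)])  # smoothest dominant coordinate
```

    The extracted coordinate tracks the planted drift while suppressing the noise channel, mirroring how the study's dominant SOD trend tracked measured VO(2).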

  20. One-Channel Surface Electromyography Decomposition for Muscle Force Estimation.

    PubMed

    Sun, Wentao; Zhu, Jinying; Jiang, Yinlai; Yokoi, Hiroshi; Huang, Qiang

    2018-01-01

    Estimating muscle force by surface electromyography (sEMG) is a non-invasive and flexible way to diagnose biomechanical diseases and control assistive devices such as prosthetic hands. To estimate muscle force using sEMG, a supervised method is commonly adopted. This requires simultaneous recording of sEMG signals and muscle force measured by additional devices to tune the variables involved. However, recording the muscle force of the lost limb of an amputee is challenging, and the supervised method has limitations in this regard. Although the unsupervised method does not require muscle force recording, it suffers from low accuracy due to a lack of reference data. To achieve accurate and easy estimation of muscle force by the unsupervised method, we propose a decomposition of one-channel sEMG signals into constituent motor unit action potentials (MUAPs) in two steps: (1) learning an orthogonal basis of sEMG signals through reconstruction independent component analysis; (2) extracting spike-like MUAPs from the basis vectors. Nine healthy subjects were recruited to evaluate the accuracy of the proposed approach in estimating muscle force of the biceps brachii. The results demonstrated that the proposed approach based on decomposed MUAPs explains more than 80% of the muscle force variability recorded at an arbitrary force level, while the conventional amplitude-based approach explains only 62.3% of this variability. With the proposed approach, we were also able to achieve grip force control of a prosthetic hand, which is one of the most important clinical applications of the unsupervised method. Experiments on two trans-radial amputees indicated that the proposed approach improves the performance of the prosthetic hand in grasping everyday objects.

  1. Orthogonal system of fractal and integrated diagnostic features in vibration analysis

    NASA Astrophysics Data System (ADS)

    Kostyukov, V. N.; Boychenko, S. N.

    2017-08-01

    The paper presents the results of studies of the orthogonality of a vibration diagnostic feature system comprising integrated features, namely root-mean-square values of vibration acceleration, vibration velocity and vibration displacement, and a fractal feature (the Hurst exponent). For diagnosing the condition of equipment from the vibration signal, the orthogonality of the vibration diagnostic features is important. Orthogonality indicates that the feature system is not redundant and allows maximum coverage of the state space of the object being diagnosed. This, in turn, increases the reliability of the machinery condition monitoring results. The studies were carried out on models of vibration signals using the programming language R.
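    The fractal feature mentioned above can be estimated with the classical rescaled-range method; this sketch (in Python rather than the paper's R, with an invented helper name `hurst_rs`) applies it to white noise, whose theoretical Hurst exponent is 0.5.

```python
import numpy as np

def hurst_rs(x, windows):
    # rescaled-range estimate: slope of log(R/S) versus log(window length)
    pts = []
    for w in windows:
        rs = []
        for start in range(0, len(x) - w + 1, w):
            seg = x[start:start + w]
            dev = np.cumsum(seg - seg.mean())       # cumulative deviation from mean
            if seg.std() > 0:
                rs.append((dev.max() - dev.min()) / seg.std())
        pts.append((np.log(w), np.log(np.mean(rs))))
    slope, _ = np.polyfit(*zip(*pts), 1)
    return slope

rng = np.random.default_rng(8)
H = hurst_rs(rng.standard_normal(4096), [16, 32, 64, 128, 256])
```

    For an uncorrelated signal the estimate lands near 0.5 (the finite-sample R/S statistic is known to be biased slightly upward); persistent vibration signatures would push it toward 1.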

  2. Understanding software faults and their role in software reliability modeling

    NASA Technical Reports Server (NTRS)

    Munson, John C.

    1994-01-01

    This study is a direct result of an on-going project to model the reliability of a large real-time control avionics system. In previous modeling efforts with this system, hardware reliability models were applied in modeling the reliability behavior of this system. In an attempt to enhance the performance of the adapted reliability models, certain software attributes were introduced in these models to control for differences between programs and also sequential executions of the same program. As the basic nature of the software attributes that affect software reliability becomes better understood in the modeling process, this information begins to have important implications for the software development process. A significant problem arises when raw attribute measures are to be used in statistical models as predictors, for example, of measures of software quality. This is because many of the metrics are highly correlated. Consider the two attributes: lines of code, LOC, and number of program statements, Stmts. In this case, it is quite obvious that a program with a high value of LOC probably will also have a relatively high value of Stmts. In the case of low-level languages, such as assembly language programs, there might be a one-to-one relationship between the statement count and the lines of code. When there is a complete absence of linear relationship among the metrics, they are said to be orthogonal or uncorrelated. Usually the lack of orthogonality is not serious enough to affect a statistical analysis. However, for the purposes of some statistical analyses such as multiple regression, the software metrics are so strongly interrelated that the regression results may be ambiguous and possibly even misleading. Typically, it is difficult to estimate the unique effects of individual software metrics in the regression equation.
The estimated values of the coefficients are very sensitive to slight changes in the data and to the addition or deletion of variables in the regression equation. Since most of the existing metrics have common elements and are linear combinations of these common elements, it seems reasonable to investigate the structure of the underlying common factors or components that make up the raw metrics. The technique we have chosen to use to explore this structure is a procedure called principal components analysis. Principal components analysis is a decomposition technique that may be used to detect and analyze collinearity in software metrics. When confronted with a large number of metrics measuring a single construct, it may be desirable to represent the set by some smaller number of variables that convey all, or most, of the information in the original set. Principal components are linear transformations of a set of random variables that summarize the information contained in the variables. The transformations are chosen so that the first component accounts for the maximal amount of variation of the measures of any possible linear transform; the second component accounts for the maximal amount of residual variation; and so on. The principal components are constructed so that they represent transformed scores on dimensions that are orthogonal. Through the use of principal components analysis, it is possible to have a set of highly related software attributes mapped into a small number of uncorrelated attribute domains. This definitively solves the problem of multi-collinearity in subsequent regression analysis. There are many software metrics in the literature, but principal component analysis reveals that there are few distinct sources of variation, i.e. dimensions, in this set of metrics. 
It would appear perfectly reasonable to characterize the measurable attributes of a program with a simple function of a small number of orthogonal metrics each of which represents a distinct software attribute domain.
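    The LOC-vs-statements argument above can be made concrete with a tiny principal components calculation on synthetic metric data (the latent "program size" factor and noise levels are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500
size = rng.normal(1000, 300, n)              # latent "program size" factor
loc = size + rng.normal(0, 20, n)            # lines of code
stmts = 0.8 * size + rng.normal(0, 20, n)    # statement count

M = np.column_stack([loc, stmts])
Z = (M - M.mean(axis=0)) / M.std(axis=0)     # standardize before PCA
evals, evecs = np.linalg.eigh(np.cov(Z.T))
explained = evals[::-1] / evals.sum()        # variance share, largest first
# nearly all variation lies along one orthogonal dimension, so the two
# collinear raw metrics collapse to a single uncorrelated "size" domain
```

    Regressing on the leading component instead of both raw metrics avoids the unstable coefficient estimates that multicollinearity causes.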

  3. Comparison of linear measurements between CBCT orthogonally synthesized cephalograms and conventional cephalograms

    PubMed Central

    Yang, S; Liu, D G

    2014-01-01

    Objectives: The purposes of the study are to investigate the consistency of linear measurements between CBCT orthogonally synthesized cephalograms and conventional cephalograms and to evaluate the influence of different magnifications on these comparisons based on a simulation algorithm. Methods: Conventional cephalograms and CBCT scans were taken on 12 dry skulls with spherical metal markers. Orthogonally synthesized cephalograms were created from CBCT data. Linear parameters on both cephalograms were measured via Photoshop CS v. 5.0 (Adobe® Systems, San Jose, CA), forming the measurement group (MG). Bland–Altman analysis was utilized to assess the agreement of the two imaging modalities. Reproducibility was investigated using a paired t-test. By a specific mathematical programme, “cepha”, corresponding linear parameters [mandibular corpus length (Go-Me), mandibular ramus length (Co-Go), posterior facial height (Go-S)] on these two types of cephalograms were calculated, forming the simulation group (SG). Bland–Altman analysis was used to assess the agreement between MG and SG. Simulated linear measurements with varying magnifications were generated based on “cepha” as well. Bland–Altman analysis was used to assess the agreement of simulated measurements between the two modalities. Results: Bland–Altman analysis indicated agreement between measurements on conventional cephalograms and orthogonally synthesized cephalograms, with a mean bias of 0.47 mm. Comparison between MG and SG showed that the difference did not reach clinical significance. The consistency between simulated measurements of both modalities with four different magnifications was demonstrated. Conclusions: Normative data of conventional cephalograms could be used for CBCT orthogonally synthesized cephalograms during this transitional period. PMID:25029593

  4. Differential Decomposition Among Pig, Rabbit, and Human Remains.

    PubMed

    Dautartas, Angela; Kenyhercz, Michael W; Vidoli, Giovanna M; Meadows Jantz, Lee; Mundorff, Amy; Steadman, Dawnie Wolfe

    2018-03-30

    While nonhuman animal remains are often utilized in forensic research to develop methods to estimate the postmortem interval, systematic studies that directly validate animals as proxies for human decomposition are lacking. The current project compared decomposition rates among pigs, rabbits, and humans at the University of Tennessee's Anthropology Research Facility across three seasonal trials that spanned nearly 2 years. The Total Body Score (TBS) method was applied to quantify decomposition changes and calculate the postmortem interval (PMI) in accumulated degree days (ADD). Decomposition trajectories were analyzed by comparing the estimated and actual ADD for each seasonal trial and by fuzzy cluster analysis. The cluster analysis demonstrated that the rabbits formed one group while pigs and humans, although more similar to each other than either to rabbits, still showed important differences in decomposition patterns. The decomposition trends show that neither nonhuman model captured the pattern, rate, and variability of human decomposition. © 2018 American Academy of Forensic Sciences.

  5. Hidden discriminative features extraction for supervised high-order time series modeling.

    PubMed

    Nguyen, Ngoc Anh Thi; Yang, Hyung-Jeong; Kim, Sunhee

    2016-11-01

    In this paper, an orthogonal Tucker-decomposition-based extraction of high-order discriminative subspaces from a tensor-based time series data structure is presented, termed Tensor Discriminative Feature Extraction (TDFE). TDFE relies on the employment of category information for the maximization of the between-class scatter and the minimization of the within-class scatter to extract optimal hidden discriminative feature subspaces that are simultaneously spanned by every modality for supervised tensor modeling. In this context, the proposed tensor-decomposition method provides the following benefits: i) it reduces dimensionality while robustly mining the underlying discriminative features, ii) it results in effective interpretable features that lead to improved classification and visualization, and iii) it reduces the processing time of the training stage and of the projection filtering by solving the generalized eigenvalue problem at each alternation step. Two real third-order tensor structures of time series datasets (an epilepsy electroencephalogram (EEG) modeled as channel×frequency bin×time frame and a microarray dataset modeled as gene×sample×time) were used for the evaluation of TDFE. The experimental results corroborate the advantages of the proposed method, with average classification accuracies of 98.26% and 89.63% for the epilepsy dataset and the microarray dataset, respectively. These averages represent an improvement on those of matrix-based algorithms and recent tensor-based, discriminant-decomposition approaches; this is especially the case considering the small number of samples that are used in practice. Copyright © 2016 Elsevier Ltd. All rights reserved.
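    An orthogonal Tucker decomposition of a third-order tensor can be sketched via truncated HOSVD; note this is only the unsupervised backbone, not the supervised TDFE criterion itself (which additionally uses class scatter). The tensor here has a planted multilinear rank of (3, 3, 3), so truncation at rank 3 per mode is exact.

```python
import numpy as np

def unfold(T, mode):
    # mode-n unfolding: rows indexed by the chosen mode, columns by the rest
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

# synthetic channel x frequency x time tensor with known multilinear rank (3,3,3)
rng = np.random.default_rng(7)
core_true = rng.standard_normal((3, 3, 3))
A1, A2, A3 = (rng.standard_normal((8, 3)),
              rng.standard_normal((9, 3)),
              rng.standard_normal((10, 3)))
T = np.einsum('abc,ia,jb,kc->ijk', core_true, A1, A2, A3)

# orthogonal factor per mode = leading left singular vectors of each unfolding
factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :3]
           for m in range(3)]
core = np.einsum('ijk,ia,jb,kc->abc', T, *factors)      # project onto all factors
T_hat = np.einsum('abc,ia,jb,kc->ijk', core, *factors)  # reconstruct
err = np.linalg.norm(T - T_hat) / np.linalg.norm(T)
```

    Each factor matrix is orthogonal by construction, and the small core tensor carries the reduced representation that a supervised criterion like TDFE would then refine.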

  6. Analysis of broadcasting satellite service feeder link power control and polarization

    NASA Technical Reports Server (NTRS)

    Sullivan, T. M.

    1982-01-01

    Statistical analyses of carrier-to-interference power ratios (C/Is) were performed in assessing 17.5 GHz feeder links using (1) fixed power and power control, and (2) orthogonal linear and orthogonal circular polarizations. The analysis methods and attenuation/depolarization data base were based on CCIR findings to the greatest possible extent. Feeder links using adaptive power control were found to neither cause nor suffer significant C/I degradation relative to that for fixed-power feeder links having similar or less stringent availability objectives. The C/Is for sharing between orthogonal linearly polarized feeder links were found to be significantly higher than those for circular polarization only in links to nominally colocated satellites from nominally colocated Earth stations in high-attenuation environments.

  7. New insights into the crowd characteristics in Mina

    NASA Astrophysics Data System (ADS)

    Wang, J. Y.; Weng, W. G.; Zhang, X. L.

    2014-11-01

    The significance of studying the characteristics of crowd behavior is indubitable for safely organizing mass activities, yet empirical material for such research is scarce. In this paper, the Mina crowd disaster is quantitatively re-investigated. Its instantaneous velocity field is extracted from video material based on the cross-correlation algorithm. The properties of the stop-and-go waves, including fluctuation frequencies, wave propagation speeds, characteristic speeds, and time- and space-averaged velocity variances, are analyzed in detail. Thus, the database of stop-and-go wave features is enriched, which is very important to crowd studies. The ‘turbulent’ flows are investigated with the proper orthogonal decomposition (POD) method, which is widely used in fluid mechanics, and time-series and spatial analyses are conducted to investigate their characteristics. The coherent structures and movement process are described by the POD method. The relationship between the jamming point and the crowd path is analyzed, and the pressure buffer recognized in this paper is consistent with Helbing's high-pressure region. The results revealed here may be helpful for facilities design, modeling crowded scenarios and the organization of large-scale mass activities.
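    Snapshot POD, as applied above to the extracted velocity fields, amounts to an SVD of the mean-subtracted snapshot matrix. The sketch below uses a synthetic travelling wave standing in for the crowd velocity data; all dimensions and signals are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
nx, nt = 400, 120
x = np.linspace(0, 2 * np.pi, nx)
t = np.linspace(0, 10, nt)

# synthetic snapshot matrix: a travelling wave plus weak noise, one column per frame
U = np.sin(x[:, None] - t[None, :]) + 0.05 * rng.standard_normal((nx, nt))
U -= U.mean(axis=1, keepdims=True)            # subtract the mean field

Phi, s, Vt = np.linalg.svd(U, full_matrices=False)
energy = s ** 2 / np.sum(s ** 2)              # fraction of "energy" per POD mode
# a travelling wave decomposes into exactly one sin/cos pair of spatial modes,
# so the first two modes capture nearly all of the variance
```

    Columns of `Phi` are the coherent spatial structures and rows of `Vt` their time coefficients, which is how POD exposes coherent motion in the crowd footage.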

  8. Eggplant fruit composition as affected by the cultivation environment and genetic constitution.

    PubMed

    San José, Raquel; Sánchez-Mata, María-Cortes; Cámara, Montaña; Prohens, Jaime

    2014-10-01

    No comprehensive reports exist on the combined effects of season, cultivation environment and genotype on eggplant (Solanum melongena) composition. We studied proximate composition, carbohydrates, total phenolics and vitamin C of eggplant fruits of three Spanish landraces, three commercial hybrids and three hybrids between landraces cultivated across two environmental conditions (open field, OF; and, greenhouse, GH) for up to four seasons. Season (S) had a larger effect than the genotype (G) for composition traits, except for total phenolics. G × S interaction was generally of low relative magnitude. Orthogonal decomposition of the season effect showed that differences within OF or GH environments were in many instances greater than those between OF and GH. Spanish landraces presented, on average, lower contents of total carbohydrates and starch and higher contents of total vitamin C, ascorbic acid, and total phenolics than commercial hybrids. Hybrids among landraces presented variable levels of heterosis for composition traits. Genotypes grown in the same season cluster together on the graph of multivariate principal components analysis. The cultivation environment has a major role in determining the composition of eggplant fruits. Environmental and genotypic differences can be exploited to obtain high quality eggplant fruits.

  9. Direct numerical simulation of turbulence in a bent pipe

    NASA Astrophysics Data System (ADS)

    Schlatter, Philipp; Noorani, Azad

    2013-11-01

    A series of direct numerical simulations of turbulent flow in a bent pipe is presented. The setup employs periodic (cyclic) boundary conditions in the axial direction, leading to a nominally infinitely long pipe. The discretisation is based on the high-order spectral element method, using the code Nek5000. Four different curvatures, defined as the ratio between pipe radius and coil radius, are considered: κ = 0 (straight), 0.01 (mild curvature), 0.1 and 0.3 (strong curvature), at bulk Reynolds numbers of up to 11700 (corresponding to Reτ = 360 in the straight-pipe case). The results show the turbulence-reducing effect of the curvature (similar to rotation), leading to near-relaminarisation on the inner side; the outer side, however, remains fully turbulent. Proper orthogonal decomposition (POD) is used to extract the dominant modes, in an effort to explain the low-frequency switching of sides inside the pipe. A number of additional interesting features are explored, including sub-straight and sub-laminar drag for specific choices of curvature and Reynolds number. In particular, the case with sub-laminar drag is investigated further, and our analysis shows the existence of a spanwise wave in the bent pipe, which in fact leads to a lower overall pressure drop.

  10. Selection of Mother Wavelet Functions for Multi-Channel EEG Signal Analysis during a Working Memory Task

    PubMed Central

    Al-Qazzaz, Noor Kamal; Hamid Bin Mohd Ali, Sawal; Ahmad, Siti Anom; Islam, Mohd Shabiul; Escudero, Javier

    2015-01-01

    We performed a comparative study to select the efficient mother wavelet (MWT) basis functions that optimally represent the signal characteristics of the electrical activity of the human brain during a working memory (WM) task recorded through electro-encephalography (EEG). Nineteen EEG electrodes were placed on the scalp following the 10–20 system. These electrodes were then grouped into five recording regions corresponding to the scalp area of the cerebral cortex. Sixty-second WM task data were recorded from ten control subjects. Forty-five MWT basis functions from orthogonal families were investigated. These functions included Daubechies (db1–db20), Symlets (sym1–sym20), and Coiflets (coif1–coif5). Using ANOVA, we determined the MWT basis functions with the most significant differences in the ability of the five scalp regions to maximize their cross-correlation with the EEG signals. The best results were obtained using “sym9” across the five scalp regions. Therefore, the most compatible MWT with the EEG signals should be selected to achieve wavelet denoising, decomposition, reconstruction, and sub-band feature extraction. This study provides a reference of the selection of efficient MWT basis functions. PMID:26593918

  11. Selection of Mother Wavelet Functions for Multi-Channel EEG Signal Analysis during a Working Memory Task.

    PubMed

    Al-Qazzaz, Noor Kamal; Bin Mohd Ali, Sawal Hamid; Ahmad, Siti Anom; Islam, Mohd Shabiul; Escudero, Javier

    2015-11-17

    We performed a comparative study to select the efficient mother wavelet (MWT) basis functions that optimally represent the signal characteristics of the electrical activity of the human brain during a working memory (WM) task recorded through electro-encephalography (EEG). Nineteen EEG electrodes were placed on the scalp following the 10-20 system. These electrodes were then grouped into five recording regions corresponding to the scalp area of the cerebral cortex. Sixty-second WM task data were recorded from ten control subjects. Forty-five MWT basis functions from orthogonal families were investigated. These functions included Daubechies (db1-db20), Symlets (sym1-sym20), and Coiflets (coif1-coif5). Using ANOVA, we determined the MWT basis functions with the most significant differences in the ability of the five scalp regions to maximize their cross-correlation with the EEG signals. The best results were obtained using "sym9" across the five scalp regions. Therefore, the most compatible MWT with the EEG signals should be selected to achieve wavelet denoising, decomposition, reconstruction, and sub-band feature extraction. This study provides a reference of the selection of efficient MWT basis functions.

  12. Dispersal of the Pearl River plume over continental shelf in summer

    NASA Astrophysics Data System (ADS)

    Chen, Zhaoyun; Gong, Wenping; Cai, Huayang; Chen, Yunzhen; Zhang, Heng

    2017-07-01

    Satellite images of turbidity were used to study the climatological, monthly, and typical snapshot distributions of the Pearl River plume over the shelf in summer from 2003 to 2016. These images show that the plume spreads offshore over the eastern shelf and is trapped near the coast over the western shelf. The eastward extension of the plume retreats from June to August. Monthly spatial variations of the plume are characterized by eastward spreading, westward spreading, or both. The time series of monthly plume area was quantified by applying the K-means clustering method to identify the turbid plume water. Decomposition of the 14-year monthly turbidity data by empirical orthogonal function (EOF) analysis isolated a 1st mode, in both the eastward and westward spreading patterns, whose time series is closely related to the Pearl River discharge, and a 2nd mode, with out-of-phase turbidity anomalies over the eastern and western shelves, that is associated with the prevailing wind direction. Eight typical plume types were detected from the satellite snapshots, characterized by coastal jet, eastward offshore spreading, westward spreading, bidirectional spreading, bulge, isolated patch, offshore branch, and offshore filaments, respectively. Their possible mechanisms are discussed.
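
    The EOF decomposition of such a space-time dataset is, in essence, an SVD of the anomaly matrix. A numpy sketch with synthetic data standing in for the 14-year monthly turbidity maps (all sizes are assumptions):

```python
import numpy as np

# Rows of D are time samples (months), columns are grid points.
rng = np.random.default_rng(1)
n_months, n_grid = 168, 300            # 14 years x 12 months, hypothetical grid
D = rng.standard_normal((n_months, n_grid))

A = D - D.mean(axis=0)                 # anomalies about the time-mean field
U, s, Vt = np.linalg.svd(A, full_matrices=False)

eofs = Vt                              # rows: spatial EOF patterns
pcs = U * s                            # columns: principal-component time series
explained = s**2 / np.sum(s**2)        # variance fraction per mode
```

    The leading PC time series (`pcs[:, 0]`) plays the role of the 1st-mode time series that the study correlates with river discharge.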

  13. Identifying patients with poststroke mild cognitive impairment by pattern recognition of working memory load-related ERP.

    PubMed

    Li, Xiaoou; Yan, Yuning; Wei, Wenshi

    2013-01-01

    The early detection of subjects with probable cognitive deficits is crucial for effective appliance of treatment strategies. This paper explored a methodology used to discriminate between evoked related potential signals of stroke patients and their matched control subjects in a visual working memory paradigm. The proposed algorithm, which combined independent component analysis and orthogonal empirical mode decomposition, was applied to extract independent sources. Four types of target stimulus features including P300 peak latency, P300 peak amplitude, root mean square, and theta frequency band power were chosen. Evolutionary multiple kernel support vector machine (EMK-SVM) based on genetic programming was investigated to classify stroke patients and healthy controls. Based on 5-fold cross-validation runs, EMK-SVM provided better classification performance compared with other state-of-the-art algorithms. Comparing stroke patients with healthy controls using the proposed algorithm, we achieved the maximum classification accuracies of 91.76% and 82.23% for 0-back and 1-back tasks, respectively. Overall, the experimental results showed that the proposed method was effective. The approach in this study may eventually lead to a reliable tool for identifying suitable brain impairment candidates and assessing cognitive function.

  14. Analysis of the Effects of Phase Noise and Frequency Offset in Orthogonal Frequency Division Multiplexing (OFDM) Systems

    DTIC Science & Technology

    2004-03-01

    12. The International Engineering Consortium, Web Forum Tutorials, OFDM for Mobile Data Communication, http://www.iec.org/, last accessed December 2003. 13. Klaus Witrisal, “Orthogonal Frequency Division Multiplexing (OFDM) for..., http://ieeexplore.ieee.org, last accessed 26 February 2003.

  15. Cerebrospinal fluid PCR analysis and biochemistry in bodies with severe decomposition.

    PubMed

    Palmiere, Cristian; Vanhaebost, Jessica; Ventura, Francesco; Bonsignore, Alessandro; Bonetti, Luca Reggiani

    2015-02-01

    The aim of this study was to assess whether Neisseria meningitidis, Listeria monocytogenes, Streptococcus pneumoniae and Haemophilus influenzae can be identified using the polymerase chain reaction technique in the cerebrospinal fluid of severely decomposed bodies with known, noninfectious causes of death or whether postmortem changes can lead to false positive results and thus erroneous diagnostic information. Biochemical investigations, postmortem bacteriology and real-time polymerase chain reaction analysis in cerebrospinal fluid were performed in a series of medico-legal autopsies that included noninfectious causes of death with decomposition, bacterial meningitis without decomposition, bacterial meningitis with decomposition, low respiratory tract infections with decomposition and abdominal infections with decomposition. In noninfectious causes of death with decomposition, postmortem investigations failed to reveal results consistent with generalized inflammation or bacterial infections at the time of death. Real-time polymerase chain reaction analysis in cerebrospinal fluid did not identify the studied bacteria in any of these cases. The results of this study highlight the usefulness of molecular approaches in bacteriology as well as the use of alternative biological samples in postmortem biochemistry in order to obtain suitable information even in corpses with severe decompositional changes. Copyright © 2014 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  16. Variations in the expansion and shear scalars for dissipative fluids

    NASA Astrophysics Data System (ADS)

    Akram, A.; Ahmad, S.; Jami, A. Rehman; Sufyan, M.; Zahid, U.

    2018-04-01

    This work is devoted to the study of some dynamical features of a spherical, relativistic, locally anisotropic stellar geometry in f(R) gravity. In this paper, a specific tanh f(R) cosmic model is taken into account. The mass function is formulated through the technique introduced by Misner and Sharp, and various useful relations are derived from it. After an orthogonal decomposition of the Riemann tensor, the tanh-modified structure scalars are calculated. The role of these tanh-modified structure scalars (MSS) is discussed through the shear, expansion, and Weyl scalar differential equations. The inhomogeneity factor is also explored for the case of a radiating, viscous, locally anisotropic spherical system and a spherical dust cloud with and without constant Ricci scalar corrections.

  17. Extreme learning machine for reduced order modeling of turbulent geophysical flows.

    PubMed

    San, Omer; Maulik, Romit

    2018-04-01

    We investigate the application of artificial neural networks to stabilize proper orthogonal decomposition-based reduced order models for quasistationary geophysical turbulent flows. An extreme learning machine concept is introduced for computing an eddy-viscosity closure dynamically to incorporate the effects of the truncated modes. We consider a four-gyre wind-driven ocean circulation problem as our prototype setting to assess the performance of the proposed data-driven approach. Our framework provides a significant reduction in computational time and effectively retains the dynamics of the full-order model during the forward simulation period beyond the training data set. Furthermore, we show that the method is robust for larger choices of time steps and can be used as an efficient and reliable tool for long time integration of general circulation models.
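
    An extreme learning machine, in its basic form, is a single-hidden-layer network whose input weights are random and fixed, with only the linear readout solved by least squares. A toy numpy sketch fitting a 1D function (not the paper's eddy-viscosity closure; all sizes are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(-1.0, 1.0, 200)[:, None]   # toy inputs
y = np.sin(3.0 * x)                        # toy targets

n_hidden = 50
W = rng.standard_normal((1, n_hidden))     # random input weights, never trained
b = rng.standard_normal(n_hidden)          # random biases, never trained
H = np.tanh(x @ W + b)                     # hidden-layer activations

beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # only the readout is fitted
y_hat = H @ beta
```

    Because training is a single linear solve, the closure can be recomputed cheaply at every time step, which is the appeal of the approach for dynamic eddy-viscosity estimation.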

  18. Extreme learning machine for reduced order modeling of turbulent geophysical flows

    NASA Astrophysics Data System (ADS)

    San, Omer; Maulik, Romit

    2018-04-01

    We investigate the application of artificial neural networks to stabilize proper orthogonal decomposition-based reduced order models for quasistationary geophysical turbulent flows. An extreme learning machine concept is introduced for computing an eddy-viscosity closure dynamically to incorporate the effects of the truncated modes. We consider a four-gyre wind-driven ocean circulation problem as our prototype setting to assess the performance of the proposed data-driven approach. Our framework provides a significant reduction in computational time and effectively retains the dynamics of the full-order model during the forward simulation period beyond the training data set. Furthermore, we show that the method is robust for larger choices of time steps and can be used as an efficient and reliable tool for long time integration of general circulation models.

  19. Mathematical model of compact type evaporator

    NASA Astrophysics Data System (ADS)

    Borovička, Martin; Hyhlík, Tomáš

    2018-06-01

    In this paper, the development of a mathematical model for an evaporator used in heat pump circuits is covered, with a focus on air dehumidification applications. The main target of this ad-hoc numerical model is to simulate heat and mass transfer in the evaporator for prescribed inlet conditions and different geometrical parameters. A simplified 2D mathematical model is developed in MATLAB. Solvers for multiple heat and mass transfer problems (plate surface temperature, condensate film temperature, local heat and mass transfer coefficients, refrigerant temperature distribution, and humid air enthalpy change) are included as subprocedures of this model. An automatic data transfer procedure is developed in order to use the results of the MATLAB model in a more complex simulation within a commercial CFD code. Finally, the Proper Orthogonal Decomposition (POD) method is introduced and implemented into the MATLAB model.

  20. Frequency-selective quantitation of short-echo time 1H magnetic resonance spectra

    NASA Astrophysics Data System (ADS)

    Poullet, Jean-Baptiste; Sima, Diana M.; Van Huffel, Sabine; Van Hecke, Paul

    2007-06-01

    Accurate and efficient filtering techniques are required to suppress large nuisance components present in short-echo time magnetic resonance (MR) spectra. This paper discusses two powerful filtering techniques used in long-echo time MR spectral quantitation, the maximum-phase FIR filter (MP-FIR) and the Hankel-Lanczos Singular Value Decomposition with Partial ReOrthogonalization (HLSVD-PRO), and shows that they can be applied to their more complex short-echo time spectral counterparts. Both filters are validated and compared through extensive simulations. Their properties are discussed. In particular, the capability of MP-FIR for dealing with macromolecular components is emphasized. Although this property does not make a large difference for long-echo time MR spectra, it can be important when quantifying short-echo time spectra.

  1. The relaxed-polar mechanism of locally optimal Cosserat rotations for an idealized nanoindentation and comparison with 3D-EBSD experiments

    NASA Astrophysics Data System (ADS)

    Fischle, Andreas; Neff, Patrizio; Raabe, Dierk

    2017-08-01

    The rotation polar(F) ∈ SO(3) arises as the unique orthogonal factor of the right polar decomposition F = polar(F) U of a given invertible matrix F ∈ GL⁺(3). In the context of nonlinear elasticity, Grioli (Boll Un Math Ital 2:252-255, 1940) discovered a geometric variational characterization of polar(F) as a unique energy-minimizing rotation. In preceding works, we have analyzed a generalization of Grioli's variational approach with weights (material parameters) μ > 0 and μ_c ≥ 0 (Grioli: μ = μ_c). The energy subject to minimization coincides with the Cosserat shear-stretch contribution arising in any geometrically nonlinear, isotropic and quadratic Cosserat continuum model formulated in the deformation gradient field F :=
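
    The orthogonal factor polar(F) can be computed directly from the SVD of F, a standard identity (this sketch shows only the decomposition itself, not the paper's weighted variational problem):

```python
import numpy as np

# If F = W diag(s) V^T is the SVD, then polar(F) = W V^T and the
# symmetric positive-definite stretch is U = V diag(s) V^T.
rng = np.random.default_rng(3)
F = rng.standard_normal((3, 3))
if np.linalg.det(F) < 0:                 # ensure F lies in GL+(3)
    F[:, 0] *= -1.0

W, s, Vt = np.linalg.svd(F)
R = W @ Vt                               # polar(F): closest rotation to F
U_stretch = Vt.T @ np.diag(s) @ Vt       # right stretch tensor
```

    For F in GL⁺(3) the product of the two orthogonal SVD factors automatically has determinant +1, so R is a proper rotation.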

  2. Thermal decomposition characteristics of microwave liquefied rape straw residues using thermogravimetric analysis

    Treesearch

    Xingyan Huang; Cornelis F. De Hoop; Jiulong Xie; Chung-Yun Hse; Jinqiu Qi; Yuzhu Chen; Feng Li

    2017-01-01

    The thermal decomposition characteristics of microwave liquefied rape straw residues with respect to liquefaction condition and pyrolysis conversion were investigated using a thermogravimetric (TG) analyzer at heating rates of 5, 20, and 50 °C min⁻¹. The hemicellulose decomposition peak was absent at the derivative thermogravimetric analysis (DTG...

  3. An improved algorithm for balanced POD through an analytic treatment of impulse response tails

    NASA Astrophysics Data System (ADS)

    Tu, Jonathan H.; Rowley, Clarence W.

    2012-06-01

    We present a modification of the balanced proper orthogonal decomposition (balanced POD) algorithm for systems with simple impulse response tails. In this new method, we use dynamic mode decomposition (DMD) to estimate the slowly decaying eigenvectors that dominate the long-time behavior of the direct and adjoint impulse responses. This is done using a new, low-memory variant of the DMD algorithm, appropriate for large datasets. We then formulate analytic expressions for the contribution of these eigenvectors to the controllability and observability Gramians. These contributions can be accounted for in the balanced POD algorithm by simply appending the impulse response snapshot matrices (direct and adjoint, respectively) with particular linear combinations of the slow eigenvectors. Aside from these additions to the snapshot matrices, the algorithm remains unchanged. By treating the tails analytically, we eliminate the need to run long impulse response simulations, lowering storage requirements and speeding up ensuing computations. To demonstrate its effectiveness, we apply this method to two examples: the linearized, complex Ginzburg-Landau equation, and the two-dimensional fluid flow past a cylinder. As expected, reduced-order models computed using an analytic tail match or exceed the accuracy of those computed using the standard balanced POD procedure, at a fraction of the cost.
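
    Standard (exact) DMD, the building block this paper modifies, estimates eigenvalues and modes of the one-step map from snapshot pairs. A self-contained numpy sketch on a toy linear system with a known slowly decaying mode (sizes and dynamics are illustrative assumptions, not the paper's low-memory variant):

```python
import numpy as np

# Toy linear system x_{k+1} = A x_k with one slowly decaying mode (eigenvalue 0.98).
rng = np.random.default_rng(4)
n, m = 20, 30
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
lam = np.linspace(0.1, 0.9, n)
lam[0] = 0.98                                  # the dominant slow eigenvalue
A = Q @ np.diag(lam) @ Q.T

X = np.empty((n, m + 1))
X[:, 0] = rng.standard_normal(n)
for k in range(m):
    X[:, k + 1] = A @ X[:, k]
X1, X2 = X[:, :-1], X[:, 1:]                   # snapshot pairs

# DMD: project the one-step map onto the leading POD subspace of X1.
r = 10
U, s, Vt = np.linalg.svd(X1, full_matrices=False)
Ur, sr, Vr = U[:, :r], s[:r], Vt[:r, :].T
A_tilde = Ur.T @ X2 @ Vr @ np.diag(1.0 / sr)   # reduced operator
eigvals, Wvec = np.linalg.eig(A_tilde)         # DMD eigenvalues
modes = X2 @ Vr @ np.diag(1.0 / sr) @ Wvec     # exact DMD modes
```

    The slowly decaying eigenvectors recovered this way are exactly the quantities whose Gramian contributions the paper then treats analytically.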

  4. Multispectral photoacoustic decomposition with localized regularization for detecting targeted contrast agent

    NASA Astrophysics Data System (ADS)

    Tavakoli, Behnoosh; Chen, Ying; Guo, Xiaoyu; Kang, Hyun Jae; Pomper, Martin; Boctor, Emad M.

    2015-03-01

    Targeted contrast agents can improve the sensitivity of imaging systems for cancer detection and for monitoring treatment. In order to accurately detect contrast agent concentration from photoacoustic (PA) images, we developed a decomposition algorithm to separate the photoacoustic absorption spectrum into components from individual absorbers. In this study, we evaluated novel prostate-specific membrane antigen (PSMA) targeted agents for imaging prostate cancer. Three agents were synthesized by conjugating PSMA-targeting urea with the optical dyes ICG, IRDye800CW, and ATTO740, respectively. In our preliminary PA study, dyes were injected into a thin-walled plastic tube embedded in a water tank. The tube was illuminated with pulsed laser light using a tunable Q-switched Nd:YAG laser. The PA signal, along with B-mode ultrasound images, was detected with a diagnostic ultrasound probe in orthogonal mode. PA spectra of each dye at 0.5 to 20 μM concentrations were estimated using the maximum PA signal extracted from images obtained at illumination wavelengths of 700-850 nm. Subsequently, we developed a nonnegative linear least-squares optimization method with localized regularization to solve the spectral unmixing. The algorithm was tested by imaging mixtures of these dyes. The concentration of each dye was estimated with about 20% error on average for almost all mixtures, despite the small separation between the dye spectra.
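
    The unmixing step described above can be sketched as a nonnegative least-squares fit of known component spectra to a measured spectrum. The dye spectra below are synthetic Gaussian stand-ins, not ICG/IRDye800CW/ATTO740 data, and the localized regularization is omitted:

```python
import numpy as np
from scipy.optimize import nnls

wavelengths = np.linspace(700, 850, 16)
S = np.column_stack([                      # columns: unit spectra of 3 "dyes"
    np.exp(-((wavelengths - 720) / 30) ** 2),
    np.exp(-((wavelengths - 780) / 25) ** 2),
    np.exp(-((wavelengths - 840) / 35) ** 2),
])
c_true = np.array([2.0, 0.5, 1.0])         # ground-truth "concentrations"
measured = S @ c_true                      # noiseless mixture spectrum

c_est, residual = nnls(S, measured)        # nonnegative least squares
```

    With noiseless data and well-separated component spectra the fit is exact; with overlapping spectra and noise, regularization of the kind the paper proposes becomes necessary.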

  5. Planned Comparisons as Better Alternatives to ANOVA Omnibus Tests.

    ERIC Educational Resources Information Center

    Benton, Roberta L.

    Analyses of data are presented to illustrate the advantages of using a priori or planned comparisons rather than omnibus analysis of variance (ANOVA) tests followed by post hoc or posteriori testing. The two types of planned comparisons considered are planned orthogonal non-trend coding contrasts and orthogonal polynomial or trend contrast coding.…
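
    Orthogonal polynomial (trend) contrasts for k equally spaced groups can be generated by QR-orthogonalizing powers of the centered group scores, a minimal numpy sketch:

```python
import numpy as np

k = 4                                               # number of groups
x = np.arange(1, k + 1, dtype=float)                # equally spaced group scores
V = np.vander(x - x.mean(), N=k, increasing=True)   # columns 1, x, x^2, x^3
Q, _ = np.linalg.qr(V)
contrasts = Q[:, 1:]                                # drop the constant column

# Each column is a trend contrast (linear, quadratic, cubic): columns sum
# to ~0 and are mutually orthogonal, so the contrast sums of squares
# partition the between-groups variation additively.
```

    These are the orthogonal polynomial (trend) contrast codes the document recommends in place of an omnibus F test.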

  6. Analysis and Simple Circuit Design of Double Differential EMG Active Electrode.

    PubMed

    Guerrero, Federico Nicolás; Spinelli, Enrique Mario; Haberman, Marcelo Alejandro

    2016-06-01

    In this paper we present an analysis of the voltage amplifier needed for double differential (DD) sEMG measurements and a novel, very simple circuit for implementing DD active electrodes. The three-input amplifier that standalone DD active electrodes require is inherently different from a differential amplifier, and general knowledge about its design is scarce in the literature. First, the figures of merit of the amplifier are defined through a decomposition of its input signal into three orthogonal modes. This analysis reveals a mode containing EMG crosstalk components that the DD electrode should reject. Then, the effect of finite input impedance is analyzed. Because there are three terminals, minimum bounds for interference rejection ratios due to electrode and input impedance unbalances with two degrees of freedom are obtained. Finally, a novel circuit design is presented, including only a quadruple operational amplifier and a few passive components. This design is nearly as simple as the branched electrode and much simpler than the three instrumentation amplifier design, while providing robust EMG crosstalk rejection and better input impedance using unity gain buffers for each electrode input. The interference rejection limits of this input stage are analyzed. An easily replicable implementation of the proposed circuit is described, together with a parameter design guideline to adjust it to specific needs. The electrode is compared with the established alternatives, and sample sEMG signals are obtained, acquired on different body locations with dry contacts, successfully rejecting interference sources.

  7. Satellite imagery in the fight against Malaria, the case for Genetic Programming

    NASA Astrophysics Data System (ADS)

    Ssentongo, J. S.; Hines, E. L.

    The analysis of multi-temporal data is a critical issue in the field of remote sensing and presents a constant challenge. The approach used here relies primarily on a method commonly used in statistics and signal processing: Empirical Orthogonal Function (EOF) analysis. Normalized Difference Vegetation Index (NDVI) and Rainfall Estimate (RFE) satellite images pertaining to the Sub-Saharan Africa region were obtained. The images are derived from the Advanced Very High Resolution Radiometer (AVHRR) on the United States National Oceanic and Atmospheric Administration (NOAA) polar orbiting satellites, spanning from January 2000 to December 2002. The region of interest was narrowed down to the Limpopo Province (Northern Province) of South Africa. EOF analyses of the space-time-intensity series of dekadal mean NDVI values have been performed. They reveal that NDVI can be accurately approximated by its principal component time series and contains a near-sinusoidal oscillation pattern. Peak greenness (essentially what NDVI measures) seasons last approximately 8 weeks. This oscillation period is very similar to that of malaria cases reported in the same period, but lags behind by 4 dekads (about 40 days). Singular Value Decomposition (SVD) of coupled fields is performed on the space-time-intensity series of dekadal mean NDVI and RFE values. Correlation analyses indicate that both malaria and greenness appear to be dependent on rainfall, the onset of their seasonal highs always following an arrival of rain. There is a greater

  8. [Laser Raman spectrum analysis of carbendazim pesticide].

    PubMed

    Wang, Xiao-bin; Wu, Rui-mei; Liu, Mu-hua; Zhang, Lu-ling; Lin, Lei; Yan, Lin-yuan

    2014-06-01

    Raman signals of solid and liquid carbendazim pesticide were collected by a laser Raman spectrometer. The acquired Raman spectrum of solid carbendazim was preprocessed by a wavelet analysis method, and the optimal combination of wavelet denoising parameters was selected through a mixed orthogonal test. The results showed that the best effect, with a signal-to-noise ratio (SNR) of 62.483, was obtained when the db2 wavelet function was used, the decomposition level was 2, the threshold selection scheme was 'rigrsure', and the rescaling mode was 'sln'. According to the vibration modes of different functional groups, the de-noised Raman bands could be divided into three regions: 1 400-2 000, 700-1 400, and 200-700 cm(-1), and the bands were assigned and analyzed. Characteristic vibrational modes were obtained in the different wavenumber ranges. Strong Raman signals were observed at 619, 725, 964, 1 022, 1 265, 1 274 and 1 478 cm(-1), respectively; these are characteristic Raman peaks of solid carbendazim pesticide. Characteristic Raman peaks were found at 629, 727, 1 001, 1 219, 1 258 and 1 365 cm(-1) in the Raman spectrum of liquid carbendazim; these peaks basically tally with those of the solid carbendazim. The results can provide a basis for the rapid screening of pesticide residues in food and agricultural products based on Raman spectra.
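
    Wavelet denoising of this kind follows a decompose-threshold-reconstruct pattern. A numpy-only illustration using a one-level Haar transform with soft universal thresholding (the study itself uses the db2 wavelet at level 2 in MATLAB; this sketch only shows the principle on a synthetic signal):

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0.0, 1.0, 256)
clean = np.sin(2 * np.pi * 5 * t)              # synthetic "spectrum"
noisy = clean + 0.2 * rng.standard_normal(t.size)

a = (noisy[0::2] + noisy[1::2]) / np.sqrt(2)   # approximation coefficients
d = (noisy[0::2] - noisy[1::2]) / np.sqrt(2)   # detail coefficients

sigma = np.median(np.abs(d)) / 0.6745          # noise estimate from details
thr = sigma * np.sqrt(2 * np.log(noisy.size))  # universal threshold
d_t = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)  # soft thresholding

denoised = np.empty_like(noisy)                # inverse one-level Haar transform
denoised[0::2] = (a + d_t) / np.sqrt(2)
denoised[1::2] = (a - d_t) / np.sqrt(2)
```

    Deeper decompositions and other wavelets (such as db2) refine the same idea by thresholding details at several scales.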

  9. Fish Pectoral Fin Hydrodynamics; Part III: Low Dimensional Models via POD Analysis

    NASA Astrophysics Data System (ADS)

    Bozkurttas, M.; Madden, P.

    2005-11-01

    The highly complex kinematics of the pectoral fin and the resulting hydrodynamics does not lend itself easily to analysis based on simple notions of pitching/heaving/paddling kinematics or lift/drag based propulsive mechanisms. A more inventive approach is needed to dissect the fin gait and gain insight into the hydrodynamic performance of the pectoral fin. The focus of the current work is on the hydrodynamics of the pectoral fin of a bluegill sunfish in steady forward motion. The 3D, time-dependent fin kinematics is obtained via a stereo-videographic technique. We employ proper orthogonal decomposition to extract the essential features of the fin gait and then use CFD to examine the hydrodynamics of simplified gaits synthesized from the POD modes. The POD spectrum shows that the first two, three and five POD modes capture 55%, 67%, and 80% of the motion respectively. The first three modes are in particular highly distinct: Mode-1 is a ``cupping'' motion where the fin cups forward as it is abducted; Mode-2 is an ``expansion'' motion where the fin expands to present a larger area during adduction and finally Mode-3 involves a ``spanwise flick'' of the dorsal edge of the fin. Numerical simulation of flow past fin gaits synthesized from these modes lead to insights into the mechanisms of thrust production; these are discussed in detail.
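
    The mode-count percentages quoted above come from the POD singular values: the cumulative energy fraction determines how many modes meet a target capture level. A tiny sketch with hypothetical singular values:

```python
import numpy as np

# Hypothetical POD singular values of a motion dataset.
s = np.array([5.0, 3.0, 2.0, 1.0, 0.5, 0.25])
energy = np.cumsum(s**2) / np.sum(s**2)        # cumulative "energy" fraction
r80 = int(np.searchsorted(energy, 0.80) + 1)   # modes needed for >= 80% energy
```

    Applied to the fin-gait data, this criterion is what yields statements such as "five modes capture 80% of the motion".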

  10. Domain decomposition for aerodynamic and aeroacoustic analyses, and optimization

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay

    1995-01-01

    The overarching theme was the domain decomposition, which intended to improve the numerical solution technique for the partial differential equations at hand; in the present study, those that governed either the fluid flow, or the aeroacoustic wave propagation, or the sensitivity analysis for a gradient-based optimization. The role of the domain decomposition extended beyond the original impetus of discretizing geometrical complex regions or writing modular software for distributed-hardware computers. It induced function-space decompositions and operator decompositions that offered the valuable property of near independence of operator evaluation tasks. The objectives have gravitated about the extensions and implementations of either the previously developed or concurrently being developed methodologies: (1) aerodynamic sensitivity analysis with domain decomposition (SADD); (2) computational aeroacoustics of cavities; and (3) dynamic, multibody computational fluid dynamics using unstructured meshes.

  11. Identifying Talent in Youth Sport: A Novel Methodology Using Higher-Dimensional Analysis.

    PubMed

    Till, Kevin; Jones, Ben L; Cobley, Stephen; Morley, David; O'Hara, John; Chapman, Chris; Cooke, Carlton; Beggs, Clive B

    2016-01-01

    Prediction of adult performance from early age talent identification in sport remains difficult. Talent identification research has generally been performed using univariate analysis, which ignores multivariate relationships. To address this issue, this study used a novel higher-dimensional model to orthogonalize multivariate anthropometric and fitness data from junior rugby league players, with the aim of differentiating future career attainment. Anthropometric and fitness data from 257 Under-15 rugby league players was collected. Players were grouped retrospectively according to their future career attainment (i.e., amateur, academy, professional). Players were blindly and randomly divided into an exploratory (n = 165) and validation dataset (n = 92). The exploratory dataset was used to develop and optimize a novel higher-dimensional model, which combined singular value decomposition (SVD) with receiver operating characteristic analysis. Once optimized, the model was tested using the validation dataset. SVD analysis revealed 60 m sprint and agility 505 performance were the most influential characteristics in distinguishing future professional players from amateur and academy players. The exploratory dataset model was able to distinguish between future amateur and professional players with a high degree of accuracy (sensitivity = 85.7%, specificity = 71.1%; p<0.001), although it could not distinguish between future professional and academy players. The validation dataset model was able to distinguish future professionals from the rest with reasonable accuracy (sensitivity = 83.3%, specificity = 63.8%; p = 0.003). Through the use of SVD analysis it was possible to objectively identify criteria to distinguish future career attainment with a sensitivity over 80% using anthropometric and fitness data alone. As such, this suggests that SVD analysis may be a useful analysis tool for research and practice within talent identification.

  12. Identifying Talent in Youth Sport: A Novel Methodology Using Higher-Dimensional Analysis

    PubMed Central

    Till, Kevin; Jones, Ben L.; Cobley, Stephen; Morley, David; O'Hara, John; Chapman, Chris; Cooke, Carlton; Beggs, Clive B.

    2016-01-01

Prediction of adult performance from early age talent identification in sport remains difficult. Talent identification research has generally been performed using univariate analysis, which ignores multivariate relationships. To address this issue, this study used a novel higher-dimensional model to orthogonalize multivariate anthropometric and fitness data from junior rugby league players, with the aim of differentiating future career attainment. Anthropometric and fitness data from 257 Under-15 rugby league players were collected. Players were grouped retrospectively according to their future career attainment (i.e., amateur, academy, professional). Players were blindly and randomly divided into an exploratory dataset (n = 165) and a validation dataset (n = 92). The exploratory dataset was used to develop and optimize a novel higher-dimensional model, which combined singular value decomposition (SVD) with receiver operating characteristic analysis. Once optimized, the model was tested using the validation dataset. SVD analysis revealed that 60 m sprint and agility 505 performance were the most influential characteristics in distinguishing future professional players from amateur and academy players. The exploratory dataset model was able to distinguish between future amateur and professional players with a high degree of accuracy (sensitivity = 85.7%, specificity = 71.1%; p<0.001), although it could not distinguish between future professional and academy players. The validation dataset model was able to distinguish future professionals from the rest with reasonable accuracy (sensitivity = 83.3%, specificity = 63.8%; p = 0.003). Through the use of SVD analysis it was possible to objectively identify criteria to distinguish future career attainment with a sensitivity over 80% using anthropometric and fitness data alone. As such, this suggests that SVD analysis may be a useful analysis tool for research and practice within talent identification. PMID:27224653
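    The pipeline described in this record combines SVD orthogonalization with a ROC threshold sweep. A minimal sketch of that idea, on entirely synthetic stand-in data (the array sizes, labels, and the choice of the first component are illustrative assumptions, not the authors' model):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical stand-ins for anthropometric/fitness measurements (players x tests)
    X = rng.normal(size=(40, 5))
    y = rng.integers(0, 2, size=40)  # 1 = future professional (illustrative label)

    # Orthogonalize the standardized multivariate data with an SVD
    Xc = (X - X.mean(axis=0)) / X.std(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt.T  # each column: player scores on one orthogonal component

    # Sweep thresholds on one component score to trace a ROC curve
    def roc_curve(score, labels):
        pts = []
        for t in np.sort(np.unique(score)):
            pred = score >= t
            tpr = (pred & (labels == 1)).sum() / max((labels == 1).sum(), 1)
            fpr = (pred & (labels == 0)).sum() / max((labels == 0).sum(), 1)
            pts.append((fpr, tpr))
        return pts

    curve = roc_curve(scores[:, 0], y)
    ```

    A cut-off maximizing sensitivity/specificity on the exploratory set would then be fixed and applied to the validation set, as in the record.
    
    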

  13. Analysis and Prediction of Sea Ice Evolution using Koopman Mode Decomposition Techniques

    DTIC Science & Technology

    2018-04-30

    Title: Analysis and Prediction of Sea Ice Evolution using Koopman Mode Decomposition Techniques. Subject: Monthly Progress Report. Total: $18,687. Abstract: The program goal is analysis of sea ice dynamical behavior using Koopman Mode Decomposition (KMD) techniques. The work in the program's first month consisted of improvements to data processing code and inclusion of additional arctic sea ice...

  14. Analysis of cured carbon-phenolic decomposition products to investigate the thermal decomposition of nozzle materials

    NASA Technical Reports Server (NTRS)

    Thompson, James M.; Daniel, Janice D.

    1989-01-01

    The development of a mass spectrometer/thermal analyzer/computer (MS/TA/Computer) system capable of providing simultaneous thermogravimetry (TG), differential thermal analysis (DTA), derivative thermogravimetry (DTG) and evolved gas detection and analysis (EGD and EGA) under both atmospheric and high pressure conditions is described. The combined system was used to study the thermal decomposition of the nozzle material that constitutes the throat of the solid rocket boosters (SRB).

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Xiaojun; Lei, Guangtsai; Pan, Guangwen

    In this paper, the continuous operator is discretized into matrix form by Galerkin's procedure, using periodic Battle-Lemarie wavelets as basis/testing functions. The polynomial decomposition of wavelets is applied to the evaluation of matrix elements, which makes the computational effort of the matrix elements no more expensive than that of the method of moments (MoM) with conventional piecewise basis/testing functions. A new algorithm is developed employing the fast wavelet transform (FWT). Owing to the localization, cancellation, and orthogonality properties of wavelets, very sparse matrices have been obtained, which are then solved by the LSQR iterative method. This algorithm is also adaptive in that one can add at will finer wavelet bases in the regions where fields vary rapidly, without any damage to the system orthogonality of the wavelet basis functions. To demonstrate the effectiveness of the new algorithm, we applied it to the evaluation of frequency-dependent resistance and inductance matrices of multiple lossy transmission lines. Numerical results agree with previously published data and laboratory measurements. The valid frequency range of the boundary integral equation results has been extended two to three decades in comparison with the traditional MoM approach. The new algorithm has been integrated into the computer-aided design tool MagiCAD, which is used for the design and simulation of high-speed digital systems and multichip modules.

  16. Image processing to optimize wave energy converters

    NASA Astrophysics Data System (ADS)

    Bailey, Kyle Marc-Anthony

    The world is turning to renewable energies as a means of ensuring the planet's future and well-being. There have been a few attempts in the past to utilize wave power as a means of generating electricity through the use of Wave Energy Converters (WECs), but only recently have they become a focal point in the renewable energy field, and over the past few years there has been a global drive to advance their efficiency. Wave power is produced by placing a mechanical device either onshore or offshore that captures the energy within ocean surface waves. This paper seeks to provide a novel and innovative way to estimate ocean wave frequency through the use of image processing, achieved by applying a complex modulated lapped orthogonal transform filter bank to satellite images of ocean waves. The filter bank provides an equal subband decomposition of the Nyquist-bounded discrete-time Fourier transform spectrum. The maximum energy of the 2D complex modulated lapped transform subband is used to determine the horizontal and vertical frequency, which subsequently can be used to determine the wave frequency in the direction of the WEC by a simple trigonometric scaling. The robustness of the proposed method is demonstrated by applications to simulated and real satellite images where the frequency is known.
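    The record's filter bank is specialized, but the underlying idea of reading horizontal and vertical wave frequency off the peak of a spectral decomposition can be sketched with a plain 2-D FFT as a simplified stand-in (the image is synthetic and the frequencies fx, fy are assumed for illustration):

    ```python
    import numpy as np

    # Synthetic "satellite image": a plane wave with a known spatial frequency
    N, fx, fy = 128, 6, 10  # image size and cycles per image width (assumed)
    x, y = np.meshgrid(np.arange(N), np.arange(N))
    img = np.cos(2 * np.pi * (fx * x + fy * y) / N)

    # Dominant horizontal/vertical frequency from the magnitude spectrum
    spec = np.abs(np.fft.rfft2(img))
    spec[0, 0] = 0.0  # ignore the mean (DC) term
    ky, kx = np.unravel_index(np.argmax(spec), spec.shape)

    # The wave frequency along any direction then follows by trigonometric
    # scaling of the (kx, ky) pair, e.g. the radial spatial frequency:
    f_radial = np.hypot(kx, ky)
    ```
    
    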

  17. Predicting Near Edge X-ray Absorption Spectra with the Spin-Free Exact-Two-Component Hamiltonian and Orthogonality Constrained Density Functional Theory.

    PubMed

    Verma, Prakash; Derricotte, Wallace D; Evangelista, Francesco A

    2016-01-12

    Orthogonality constrained density functional theory (OCDFT) provides near-edge X-ray absorption (NEXAS) spectra of first-row elements within one electronvolt of experimental values. However, with increasing atomic number, scalar relativistic effects become the dominant source of error in a nonrelativistic OCDFT treatment of core-valence excitations. In this work we report a novel implementation of the spin-free exact-two-component (X2C) one-electron treatment of scalar relativistic effects and its combination with a recently developed OCDFT approach to compute a manifold of core-valence excited states. The inclusion of scalar relativistic effects in OCDFT reduces the mean absolute error of core-valence excitations of second-row elements from 10.3 to 2.3 eV. For all the excitations considered, the results from X2C calculations are also found to be in excellent agreement with those from low-order spin-free Douglas-Kroll-Hess relativistic Hamiltonians. The X2C-OCDFT NEXAS spectra of three organotitanium complexes (TiCl4, TiCpCl3, TiCp2Cl2) are in very good agreement with unshifted experimental results and show a maximum absolute error of 5-6 eV. In addition, a decomposition of the total transition dipole moment into partial atomic contributions is proposed and applied to analyze the nature of the Ti pre-edge transitions in the three organotitanium complexes.

  18. Assessment of a new method for the analysis of decomposition gases of polymers by a combining thermogravimetric solid-phase extraction and thermal desorption gas chromatography mass spectrometry.

    PubMed

    Duemichen, E; Braun, U; Senz, R; Fabian, G; Sturm, H

    2014-08-08

    For analysis of the gaseous thermal decomposition products of polymers, the common techniques are thermogravimetry combined with Fourier transform infrared spectroscopy (TGA-FTIR) or with mass spectrometry (TGA-MS). These methods offer a simple approach to the decomposition mechanism, especially for small decomposition molecules. Complex spectra of gaseous mixtures are very often hard to identify because of overlapping signals. In this paper a new method is described in which the decomposition products are adsorbed under controlled conditions in TGA onto solid-phase extraction (SPE) material: twisters. Subsequently the twisters were analysed by thermal desorption gas chromatography mass spectrometry (TDS-GC-MS), which allows the decomposition products to be separated and identified using an MS library. The thermoplastics polyamide 66 (PA 66) and polybutylene terephthalate (PBT) were used as example polymers. The influence of the sample mass and of the purge gas flow during the decomposition process was investigated in TGA. The advantages and limitations of the method are presented in comparison to the common analysis techniques, TGA-FTIR and TGA-MS. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. Thermal decomposition kinetics of hydrazinium cerium 2,3-Pyrazinedicarboxylate hydrate: a new precursor for CeO2.

    PubMed

    Premkumar, Thathan; Govindarajan, Subbiah; Coles, Andrew E; Wight, Charles A

    2005-04-07

    The thermal decomposition kinetics of N2H5[Ce(pyrazine-2,3-dicarboxylate)2(H2O)] (Ce-P) have been studied for the first time by thermogravimetric analysis (TGA) and differential scanning calorimetry (DSC); TGA reveals an oxidative decomposition process yielding CeO2 as the final product with an activation energy of approximately 160 kJ/mol. This complex may be used as a precursor to fine-particle cerium oxides due to its low decomposition temperature.
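    Activation energies such as the ~160 kJ/mol reported here are commonly extracted from an Arrhenius plot. A generic hedged sketch (synthetic rate constants, not the paper's data): a linear fit of ln k against 1/T recovers Ea from the slope.

    ```python
    import numpy as np

    R = 8.314        # gas constant, J/(mol K)
    Ea_true = 160e3  # J/mol, matching the order of magnitude in the record
    lnA_true = 30.0  # illustrative pre-exponential factor (assumed)

    # Synthetic rate constants obeying the Arrhenius law: ln k = ln A - Ea/(R T)
    T = np.linspace(500.0, 700.0, 9)
    ln_k = lnA_true - Ea_true / (R * T)

    # Linear regression of ln k on 1/T; the slope is -Ea/R
    slope, intercept = np.polyfit(1.0 / T, ln_k, 1)
    Ea_fit = -slope * R
    ```
    
    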

  20. On the Possibility of Studying the Reactions of the Thermal Decomposition of Energy Substances by the Methods of High-Resolution Terahertz Spectroscopy

    NASA Astrophysics Data System (ADS)

    Vaks, V. L.; Domracheva, E. G.; Chernyaeva, M. B.; Pripolzin, S. I.; Revin, L. S.; Tretyakov, I. V.; Anfertyev, V. A.; Yablokov, A. A.; Lukyanenko, I. A.; Sheikov, Yu. V.

    2018-02-01

    We show prospects for using the method of high-resolution terahertz spectroscopy for a continuous analysis of the decomposition products of energy substances in the gas phase (including short-lived ones) in a wide temperature range. The experimental setup, which includes a terahertz spectrometer for studying the thermal decomposition reactions, is described. The results of analysis of the gaseous decomposition products of energy substances by the example of ammonium nitrate heated from room temperature to 167°C are presented.

  1. Application of the wavelet packet transform to vibration signals for surface roughness monitoring in CNC turning operations

    NASA Astrophysics Data System (ADS)

    García Plaza, E.; Núñez López, P. J.

    2018-01-01

    The wavelet packet transform (WPT) method decomposes a time signal into several independent time-frequency signals called packets. This enables the temporal location of transient events occurring during the monitoring of cutting processes, which is advantageous in condition monitoring and fault diagnosis. This paper proposes the monitoring of surface roughness using a single low-cost sensor that is easily implemented in numerical control machine tools in order to make on-line decisions on workpiece surface finish quality. Packet feature extraction in vibration signals was applied to correlate the sensor signals to measured surface roughness. For the successful application of the WPT method, the mother wavelet, packet decomposition level, and appropriate packet selection method should be considered, but these are poorly understood aspects in the literature. In this novel contribution, forty mother wavelets, the optimal decomposition level, and packet reduction methods were analysed, as well as identifying the effective frequency range providing the best packet feature extraction for monitoring surface finish. The results show that the mother wavelet biorthogonal 4.4 at decomposition level L3, with the fusion of the orthogonal vibration components (ax + ay + az), was the best option for the correlation of vibration signal and surface roughness. The best packets were found in the medium-high frequency DDA (6250-9375 Hz) and high frequency ADA (9375-12500 Hz) ranges, and the feed acceleration component ay was the primary source of information. The packet reduction methods forfeited packets with features relevant to the signal, leading to poor results for the prediction of surface roughness. WPT is a robust vibration signal processing method for the monitoring of surface roughness using a single sensor without other information sources; satisfactory results were obtained in comparison to other processing methods, at a low computational cost.
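    The packet decomposition and energy features described above can be sketched with orthonormal Haar filters as a stand-in for the paper's biorthogonal 4.4 wavelet (the signal, decomposition level, and energy feature are illustrative assumptions):

    ```python
    import numpy as np

    def haar_split(x):
        # One analysis step with orthonormal Haar filters: half-length
        # approximation (low-pass) and detail (high-pass) sequences
        return (x[0::2] + x[1::2]) / np.sqrt(2.0), (x[0::2] - x[1::2]) / np.sqrt(2.0)

    def wavelet_packets(x, level):
        """Full wavelet packet tree: 2**level packets tiling the frequency axis."""
        packets = [np.asarray(x, dtype=float)]
        for _ in range(level):
            packets = [half for p in packets for half in haar_split(p)]
        return packets

    # Packet energies as features, e.g. for correlating with surface roughness
    signal = np.sin(2 * np.pi * 50 * np.arange(1024) / 1024.0)
    packets = wavelet_packets(signal, 3)
    features = [float(np.sum(p ** 2)) for p in packets]
    ```

    Because the Haar filter pair is orthonormal, the packet energies sum to the signal energy, which makes them well-behaved features.
    
    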

  2. Cell edge detection in JPEG2000 wavelet domain - analysis on sigmoid function edge model.

    PubMed

    Punys, Vytenis; Maknickas, Ramunas

    2011-01-01

    Large virtual microscopy images (80K x 60K pixels and larger) are usually stored using the JPEG2000 image compression scheme. Diagnostic quantification based on image analysis might be faster if performed on the compressed data (approximately 20 times smaller than the original amount), which represent the coefficients of the wavelet transform. An analysis of possible edge detection without the reverse wavelet transform is presented in the paper. Two edge detection methods, suitable for JPEG2000 bi-orthogonal wavelets, are proposed. The methods are adjusted according to calculated parameters of a sigmoid edge model. The results of the model analysis indicate which method is more suitable for a given bi-orthogonal wavelet.

  3. Turbulence and entrainment length scales in large wind farms.

    PubMed

    Andersen, Søren J; Sørensen, Jens N; Mikkelsen, Robert F

    2017-04-13

    A number of large wind farms are modelled using large eddy simulations to elucidate the entrainment process. A reference simulation without turbines and three farm simulations with different degrees of imposed atmospheric turbulence are presented. The entrainment process is assessed using proper orthogonal decomposition, which is employed to detect the largest and most energetic coherent turbulent structures. The dominant length scales responsible for the entrainment process are shown to grow further into the wind farm, but to be limited in extent by the streamwise turbine spacing, which could be taken into account when developing farm layouts. The self-organized motion or large coherent structures also yield high correlations between the power productions of consecutive turbines, which can be exploited through dynamic farm control. This article is part of the themed issue 'Wind energy in complex terrains'. © 2017 The Author(s).

  4. Turbulence and entrainment length scales in large wind farms

    PubMed Central

    2017-01-01

    A number of large wind farms are modelled using large eddy simulations to elucidate the entrainment process. A reference simulation without turbines and three farm simulations with different degrees of imposed atmospheric turbulence are presented. The entrainment process is assessed using proper orthogonal decomposition, which is employed to detect the largest and most energetic coherent turbulent structures. The dominant length scales responsible for the entrainment process are shown to grow further into the wind farm, but to be limited in extent by the streamwise turbine spacing, which could be taken into account when developing farm layouts. The self-organized motion or large coherent structures also yield high correlations between the power productions of consecutive turbines, which can be exploited through dynamic farm control. This article is part of the themed issue ‘Wind energy in complex terrains’. PMID:28265028
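    The two records above use proper orthogonal decomposition to rank coherent structures by energy. The standard snapshot-POD computation reduces to an SVD of the mean-subtracted snapshot matrix; a minimal sketch on synthetic data (the array sizes are illustrative, not an LES field):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    # Synthetic snapshot matrix: each column is one flow-field sample in time
    n_points, n_snapshots = 200, 30
    snapshots = rng.normal(size=(n_points, n_snapshots))

    # Snapshot POD: subtract the temporal mean, then SVD of the fluctuations
    fluctuations = snapshots - snapshots.mean(axis=1, keepdims=True)
    modes, svals, _ = np.linalg.svd(fluctuations, full_matrices=False)

    # Relative energy captured by each mode (squared singular values),
    # already sorted from most to least energetic
    energy = svals ** 2 / np.sum(svals ** 2)
    ```

    The leading columns of `modes` are the most energetic coherent structures; their dominant wavelengths are what the records relate to turbine spacing.
    
    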

  5. POD-based constrained sensor placement and field reconstruction from noisy wind measurements: A perturbation study

    DOE PAGES

    Zhang, Zhongqiang; Yang, Xiu; Lin, Guang

    2016-04-14

    Sensor placement at the extrema of Proper Orthogonal Decomposition (POD) modes is efficient and leads to accurate reconstruction of the wind field from a limited number of measurements. In this paper we extend this approach to sensor placement to take into account measurement errors and detect possible malfunctioning sensors. We use 48 hourly spatial wind field data sets simulated using the Weather Research and Forecasting (WRF) model applied to the Maine Bay to evaluate the performance of our methods. Specifically, we use an exclusion disk strategy to distribute sensors when the extrema of POD modes are close. It turns out that this strategy can also reduce the error of reconstruction from noisy measurements. Also, by a cross-validation technique, we successfully locate the malfunctioning sensors.
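    The placement rule in this record (sensors at POD-mode extrema, separated by an exclusion disk) can be sketched greedily in one dimension; the sinusoidal "modes", grid, and exclusion radius below are illustrative assumptions, not the paper's data:

    ```python
    import numpy as np

    # Stand-in POD modes sampled on a 1-D grid of candidate sensor locations
    grid = np.linspace(0.0, 1.0, 200)
    modes = np.column_stack([np.sin((k + 1) * np.pi * grid) for k in range(4)])

    def place_sensors(modes, grid, r_excl=0.05):
        """Greedily place one sensor per mode at its largest available extremum,
        skipping candidates within the exclusion radius of a chosen sensor."""
        chosen = []
        for k in range(modes.shape[1]):
            order = np.argsort(-np.abs(modes[:, k]))  # strongest extrema first
            for i in order:
                if all(abs(grid[i] - grid[j]) > r_excl for j in chosen):
                    chosen.append(i)
                    break
        return chosen

    sensors = place_sensors(modes, grid)
    ```
    
    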

  6. Global sensitivity analysis for fuzzy inputs based on the decomposition of fuzzy output entropy

    NASA Astrophysics Data System (ADS)

    Shi, Yan; Lu, Zhenzhou; Zhou, Yicheng

    2018-06-01

    To analyse the component of fuzzy output entropy, a decomposition method of fuzzy output entropy is first presented. After the decomposition of fuzzy output entropy, the total fuzzy output entropy can be expressed as the sum of the component fuzzy entropy contributed by fuzzy inputs. Based on the decomposition of fuzzy output entropy, a new global sensitivity analysis model is established for measuring the effects of uncertainties of fuzzy inputs on the output. The global sensitivity analysis model can not only tell the importance of fuzzy inputs but also simultaneously reflect the structural composition of the response function to a certain degree. Several examples illustrate the validity of the proposed global sensitivity analysis, which is a significant reference in engineering design and optimization of structural systems.

  7. Wavefront analysis from its slope data

    NASA Astrophysics Data System (ADS)

    Mahajan, Virendra N.; Acosta, Eva

    2017-08-01

    In the aberration analysis of a wavefront over a certain domain, the polynomials that are orthogonal over, and represent balanced wave aberrations for, this domain are used. For example, Zernike circle polynomials are used for the analysis of a circular wavefront. Similarly, the annular polynomials are used to analyze the annular wavefronts for systems with annular pupils, as in a rotationally symmetric two-mirror system, such as the Hubble space telescope. However, when the data available for analysis are the slopes of a wavefront, as, for example, in a Shack-Hartmann sensor, we can integrate the slope data to obtain the wavefront data, and then use the orthogonal polynomials to obtain the aberration coefficients. An alternative is to find vector functions that are orthogonal to the gradients of the wavefront polynomials, and obtain the aberration coefficients directly as the inner products of these functions with the slope data. In this paper, we show that an infinite number of vector functions can be obtained in this manner. We show further that the vector functions that are irrotational are unique and propagate minimum uncorrelated additive random noise from the slope data to the aberration coefficients.
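    A common baseline for the slope-to-coefficient problem discussed above is a least-squares fit of the basis gradients to the measured slopes. A hedged sketch using a simple monomial basis as a stand-in for Zernike polynomials (the sample points, basis, and coefficients are all illustrative assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    # Sample points inside the unit disk (idealized Shack-Hartmann lenslet centres)
    pts = rng.uniform(-1.0, 1.0, size=(400, 2))
    pts = pts[np.hypot(pts[:, 0], pts[:, 1]) <= 1.0]
    x, y = pts[:, 0], pts[:, 1]

    # Stand-in aberration basis: W = a1*x + a2*y + a3*(x^2 + y^2)
    # (tilt-x, tilt-y, defocus-like); its gradients are linear in (a1, a2, a3)
    Gx = np.column_stack([np.ones_like(x), np.zeros_like(x), 2 * x])  # dW/dx
    Gy = np.column_stack([np.zeros_like(y), np.ones_like(y), 2 * y])  # dW/dy
    a_true = np.array([0.5, -0.3, 1.2])  # assumed coefficients
    slopes = np.concatenate([Gx @ a_true, Gy @ a_true])  # simulated slope data

    # Least-squares recovery of the aberration coefficients from the slopes
    A = np.vstack([Gx, Gy])
    a_fit, *_ = np.linalg.lstsq(A, slopes, rcond=None)
    ```

    The paper's contribution is the alternative route: vector functions whose inner products with the slope data give the coefficients directly, with minimum noise propagation.
    
    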

  8. Thermodynamics of the general diffusion process: Equilibrium supercurrent and nonequilibrium driven circulation with dissipation

    NASA Astrophysics Data System (ADS)

    Qian, H.

    2015-07-01

    Unbalanced probability circulation, which yields cyclic motions in phase space, is the defining characteristic of a stationary diffusion process without detailed balance. In over-damped soft matter systems, such behavior is a hallmark of the presence of a sustained external driving force accompanied by dissipation. In an under-damped and strongly correlated system, however, cyclic motions are often the consequence of a conservative dynamics. In the present paper, we give a novel interpretation of a class of diffusion processes with stationary circulation in terms of a Maxwell-Boltzmann equilibrium in which cyclic motions are on the level set of the stationary probability density function and are thus non-dissipative, e.g., a supercurrent. This implies an orthogonality between the stationary circulation J_ss(x) and the gradient of the stationary probability density f_ss(x) > 0. A sufficient and necessary condition for the orthogonality is a decomposition of the drift b(x) = j(x) + D(x)∇φ(x), where ∇·j(x) = 0 and j(x)·∇φ(x) = 0. Stationary processes with such a Maxwell-Boltzmann equilibrium have an underlying conservative dynamics and a first integral φ(x) ≡ -ln f_ss(x) = const, akin to a Hamiltonian system. At all times, an instantaneous free energy balance equation exists for a given diffusion system; and an extended energy conservation law among an entire family of diffusion processes with different parameter α can be established via a Helmholtz theorem. For the general diffusion process without the orthogonality, a nonequilibrium cycle emerges, which consists of externally driven φ-ascending steps and spontaneous φ-descending movements, alternated with iso-φ motions. The theory presented here provides a rich mathematical narrative for complex mesoscopic dynamics, in contradistinction to an earlier one [H. Qian et al., J. Stat. Phys. 107, 1129 (2002)]. This article is supplemented with comments by H. Ouerdane and a final reply by the author.

  9. Scalable parallel elastic-plastic finite element analysis using a quasi-Newton method with a balancing domain decomposition preconditioner

    NASA Astrophysics Data System (ADS)

    Yusa, Yasunori; Okada, Hiroshi; Yamada, Tomonori; Yoshimura, Shinobu

    2018-04-01

    A domain decomposition method for large-scale elastic-plastic problems is proposed. The proposed method is based on a quasi-Newton method in conjunction with a balancing domain decomposition preconditioner. The use of a quasi-Newton method overcomes two problems associated with the conventional domain decomposition method based on the Newton-Raphson method: (1) avoidance of a double-loop iteration algorithm, which generally has large computational complexity, and (2) consideration of the local concentration of nonlinear deformation, which is observed in elastic-plastic problems with stress concentration. Moreover, the application of a balancing domain decomposition preconditioner ensures scalability. Using the conventional and proposed domain decomposition methods, several numerical tests, including weak scaling tests, were performed. The convergence performance of the proposed method is comparable to that of the conventional method. In particular, in elastic-plastic analysis, the proposed method exhibits better convergence performance than the conventional method.
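    The record's key contrast is Newton-Raphson (re-deriving the tangent at every step) versus a quasi-Newton iteration (cheap secant-style updates of an approximate Jacobian). A toy illustration of the quasi-Newton idea using Broyden's rank-one update on a small nonlinear system; this is a generic sketch, nothing like the paper's preconditioned parallel FEM solver:

    ```python
    import numpy as np

    def broyden_solve(F, x0, tol=1e-10, max_iter=100):
        """Broyden's method: update an approximate Jacobian B with a rank-one
        secant correction instead of reassembling the true Jacobian."""
        x = np.asarray(x0, dtype=float)
        B = np.eye(len(x))  # initial Jacobian approximation
        for _ in range(max_iter):
            Fx = F(x)
            if np.linalg.norm(Fx) < tol:
                break
            dx = np.linalg.solve(B, -Fx)     # quasi-Newton step
            x_new = x + dx
            dF = F(x_new) - Fx
            B += np.outer(dF - B @ dx, dx) / (dx @ dx)  # secant update
            x = x_new
        return x

    # Toy smooth nonlinear system with root (1, 2) (illustrative)
    F = lambda x: np.array([x[0] + 0.1 * x[0] ** 3 - 1.1,
                            x[1] + 0.1 * x[1] ** 3 - 2.8])
    root = broyden_solve(F, [0.9, 1.9])
    ```
    
    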

  10. Orthogonal model and experimental data for analyzing wood-fiber-based tri-axial ribbed structural panels in bending

    Treesearch

    Jinghao Li; John F. Hunt; Shaoqin Gong; Zhiyong Cai

    2017-01-01

    This paper presents an analysis of 3-dimensional engineered structural panels (3DESP) made from wood-fiber-based laminated paper composites. Since the existing models for calculating the mechanical behavior of core configurations within sandwich panels are very complex, a new simplified orthogonal model (SOM) using an equivalent element has been developed. This model...

  11. Face Hallucination with Linear Regression Model in Semi-Orthogonal Multilinear PCA Method

    NASA Astrophysics Data System (ADS)

    Asavaskulkiet, Krissada

    2018-04-01

    In this paper, we propose a new face hallucination technique: reconstruction of face images in HSV color space with a semi-orthogonal multilinear principal component analysis (SO-MPCA) method. This novel hallucination technique can operate directly on tensors via tensor-to-vector projection by imposing the orthogonality constraint in only one mode. In our experiments, we use facial images from the FERET database to test our hallucination approach, which is demonstrated by extensive experiments with high-quality hallucinated color faces. The experimental results clearly demonstrate that photorealistic color face images can be generated by using the SO-MPCA subspace with a linear regression model.

  12. Asymmetric flow field flow fractionation with light scattering detection - an orthogonal sensitivity analysis.

    PubMed

    Galyean, Anne A; Filliben, James J; Holbrook, R David; Vreeland, Wyatt N; Weinberg, Howard S

    2016-11-18

    Asymmetric flow field flow fractionation (AF4) has several instrumental factors that may have a direct effect on separation performance. A sensitivity analysis was applied to ascertain the relative importance of AF4 primary instrument factor settings for the separation of a complex environmental sample. The analysis evaluated the impact of instrumental factors, namely cross flow, ramp time, focus flow, injection volume, and run buffer concentration, on the multi-angle light scattering measurement of natural organic matter (NOM) molar mass (MM). A 2^(5-1) orthogonal fractional factorial design was used to minimize analysis time while preserving the accuracy and robustness in the determination of the main effects and interactions between any two instrumental factors. By assuming that separations resulting in smaller MM measurements would be more accurate, the analysis produced a ranked list of effects estimates for factors and interactions of factors based on their relative importance in minimizing the MM. The most important and statistically significant AF4 instrumental factors were buffer concentration and cross flow. The least important was ramp time. A parallel 2^(5-2) orthogonal fractional factorial design was also employed on five environmental factors for synthetic natural water samples containing silver nanoparticles (NPs), namely: NP concentration, NP size, NOM concentration, specific conductance, and pH. None of the water quality characteristic effects or interactions were found to be significant in minimizing the measured MM; however, the interaction between NP concentration and NP size was an important effect when considering NOM recovery. This work presents a structured approach for the rigorous assessment of AF4 instrument factors and optimal settings for the separation of complex samples utilizing an efficient orthogonal fractional factorial design and appropriate graphical analysis. Copyright © 2016 Elsevier B.V. All rights reserved.
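    A 2^(5-1) half-fraction like the one in this record is built from a full two-level design in four factors plus a generator for the fifth. A generic sketch with the standard resolution V generator E = ABCD (an assumption for illustration, not necessarily the authors' exact design matrix):

    ```python
    import itertools
    import numpy as np

    # Full two-level factorial in factors A-D (16 runs, coded -1/+1)
    base = np.array(list(itertools.product([-1, 1], repeat=4)))

    # Generator E = ABCD folds the fifth factor into the same 16 runs,
    # giving a 2^(5-1) half-fraction in which main effects are not
    # aliased with two-factor interactions (resolution V)
    E = base.prod(axis=1, keepdims=True)
    design = np.hstack([base, E])  # 16 runs x 5 factors
    ```

    Orthogonality of the columns is what lets the main effects be estimated independently from only 16 of the 32 possible runs.
    
    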

  13. Orthogonality catastrophe and fractional exclusion statistics

    NASA Astrophysics Data System (ADS)

    Ares, Filiberto; Gupta, Kumar S.; de Queiroz, Amilcar R.

    2018-02-01

    We show that the N -particle Sutherland model with inverse-square and harmonic interactions exhibits orthogonality catastrophe. For a fixed value of the harmonic coupling, the overlap of the N -body ground state wave functions with two different values of the inverse-square interaction term goes to zero in the thermodynamic limit. When the two values of the inverse-square coupling differ by an infinitesimal amount, the wave function overlap shows an exponential suppression. This is qualitatively different from the usual power law suppression observed in the Anderson's orthogonality catastrophe. We also obtain an analytic expression for the wave function overlaps for an arbitrary set of couplings, whose properties are analyzed numerically. The quasiparticles constituting the ground state wave functions of the Sutherland model are known to obey fractional exclusion statistics. Our analysis indicates that the orthogonality catastrophe may be valid in systems with more general kinds of statistics than just the fermionic type.

  14. Orthogonality catastrophe and fractional exclusion statistics.

    PubMed

    Ares, Filiberto; Gupta, Kumar S; de Queiroz, Amilcar R

    2018-02-01

    We show that the N-particle Sutherland model with inverse-square and harmonic interactions exhibits orthogonality catastrophe. For a fixed value of the harmonic coupling, the overlap of the N-body ground state wave functions with two different values of the inverse-square interaction term goes to zero in the thermodynamic limit. When the two values of the inverse-square coupling differ by an infinitesimal amount, the wave function overlap shows an exponential suppression. This is qualitatively different from the usual power law suppression observed in the Anderson's orthogonality catastrophe. We also obtain an analytic expression for the wave function overlaps for an arbitrary set of couplings, whose properties are analyzed numerically. The quasiparticles constituting the ground state wave functions of the Sutherland model are known to obey fractional exclusion statistics. Our analysis indicates that the orthogonality catastrophe may be valid in systems with more general kinds of statistics than just the fermionic type.

  15. Transportation Network Analysis and Decomposition Methods

    DOT National Transportation Integrated Search

    1978-03-01

    The report outlines research in transportation network analysis using decomposition techniques as a basis for problem solutions. Two transportation network problems were considered in detail: a freight network flow problem and a scheduling problem fo...

  16. Electrochemical and Infrared Absorption Spectroscopy Detection of SF₆ Decomposition Products.

    PubMed

    Dong, Ming; Zhang, Chongxing; Ren, Ming; Albarracín, Ricardo; Ye, Rixin

    2017-11-15

    Sulfur hexafluoride (SF₆) gas-insulated electrical equipment is widely used in high-voltage (HV) and extra-high-voltage (EHV) power systems. Partial discharge (PD) and local heating can occur in the electrical equipment because of insulation faults, which results in SF₆ decomposition and ultimately generates several types of decomposition products. These SF₆ decomposition products can be qualitatively and quantitatively detected with relevant detection methods, and such detection contributes to diagnosing the internal faults and evaluating the security risks of the equipment. At present, multiple detection methods exist for analyzing the SF₆ decomposition products, and electrochemical sensing (ES) and infrared (IR) spectroscopy are well suited for application in online detection. In this study, the combination of ES with IR spectroscopy is used to detect SF₆ gas decomposition. First, the characteristics of these two detection methods are studied, and the data analysis matrix is established. Then, a qualitative and quantitative analysis ES-IR model is established by adopting a two-step approach. A SF₆ decomposition detector is designed and manufactured by combining an electrochemical sensor and IR spectroscopy technology. The detector is used to detect SF₆ gas decomposition and is verified to reliably and accurately detect the gas components and concentrations.

  17. Critical analysis of nitramine decomposition data: Activation energies and frequency factors for HMX and RDX decomposition

    NASA Technical Reports Server (NTRS)

    Schroeder, M. A.

    1980-01-01

A summary of a literature review on the thermal decomposition of HMX and RDX is presented. The decomposition apparently fits first order kinetics. Recommended values for Arrhenius parameters for HMX and RDX decomposition in the gaseous and liquid phases, and for decomposition of RDX in solution in TNT, are given. The apparent importance of autocatalysis is pointed out, as are some possible complications that may be encountered in interpreting, extending, or extrapolating kinetic data for these compounds from measurements carried out below their melting points to the higher temperatures and pressures characteristic of combustion.
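As a reminder of what first-order Arrhenius kinetics implies, the sketch below (Python, with made-up rate parameters rather than the HMX/RDX values reviewed here) shows how first-order conversion follows from the rate constant, and how an activation energy can be recovered from rate constants at two temperatures:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def first_order_fraction(t, T, A, Ea):
    """Fraction decomposed after time t (s) at temperature T (K) for
    first-order kinetics with Arrhenius rate k = A * exp(-Ea / (R*T))."""
    k = A * np.exp(-Ea / (R * T))
    return 1.0 - np.exp(-k * t)

# Illustrative (assumed, NOT the reviewed) Arrhenius parameters:
A, Ea = 1e15, 2.0e5            # pre-exponential (1/s) and activation energy (J/mol)
T1, T2 = 500.0, 550.0          # two isothermal experiment temperatures (K)
k1 = A * np.exp(-Ea / (R * T1))
k2 = A * np.exp(-Ea / (R * T2))

# Two-temperature estimate: ln(k2/k1) = (Ea/R) * (1/T1 - 1/T2)
Ea_est = R * np.log(k2 / k1) / (1.0 / T1 - 1.0 / T2)
```

This is also why extrapolation from sub-melting-point measurements is delicate: a small error in the two rate constants propagates directly into the recovered activation energy.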

  18. Decomposition techniques

    USGS Publications Warehouse

    Chao, T.T.; Sanzolone, R.F.

    1992-01-01

Sample decomposition is a fundamental and integral step in the procedure of geochemical analysis. It is often the limiting factor for sample throughput, especially with the recent application of fast, modern multi-element measurement instrumentation. The complexity of geological materials makes it necessary to choose a sample decomposition technique that is compatible with the specific objective of the analysis. When selecting a decomposition technique, consideration should be given to the chemical and mineralogical characteristics of the sample, the elements to be determined, precision and accuracy requirements, sample throughput, technical capability of personnel, and time constraints. This paper addresses these concerns and discusses the attributes and limitations of many techniques of sample decomposition, along with examples of their application to geochemical analysis. The chemical properties of reagents in their function as decomposition agents are also reviewed. The section on acid dissolution techniques addresses the various inorganic acids that are used individually or in combination in both open and closed systems. Fluxes used in sample fusion are discussed. The promising microwave-oven technology and the emerging field of automation are also examined. A section on applications highlights the use of decomposition techniques for the determination of Au, platinum group elements (PGEs), Hg, U, hydride-forming elements, rare earth elements (REEs), and multi-elements in geological materials. Partial dissolution techniques used for geochemical exploration, which have been treated in detail elsewhere, are not discussed here; nor are fire-assaying for noble metals and decomposition techniques for X-ray fluorescence or nuclear methods. © 1992.

  19. Using Microwave Sample Decomposition in Undergraduate Analytical Chemistry

    NASA Astrophysics Data System (ADS)

    Griff Freeman, R.; McCurdy, David L.

    1998-08-01

A shortcoming of many undergraduate classes in analytical chemistry is that students receive little exposure to sample preparation in chemical analysis. This paper reports the progress made in introducing microwave sample decomposition into several quantitative analysis experiments at Truman State University. Two experiments being performed in our current laboratory rotation include closed vessel microwave decomposition applied to the classical gravimetric determination of nickel and the determination of sodium in snack foods by flame atomic emission spectrometry. A third lab, using open-vessel microwave decomposition for the Kjeldahl nitrogen determination, is now ready for student trial. Microwave decomposition reduces the time needed to complete these experiments and significantly increases student awareness of the importance of sample preparation in quantitative chemical analyses, providing greater breadth and realism in the experiments.

  20. Reynolds number effect on airfoil wake structures under pitching and heaving motion

    NASA Astrophysics Data System (ADS)

    Kim, Kyung Chun; Karbasian, Hamidreza; ExpTENsys Team

    2017-11-01

Detached Eddy Simulation (DES) and particle image velocimetry (PIV) measurements were performed to investigate the wake flow characteristics of an airfoil under pitching and heaving motion. A NACA0012 airfoil was selected for the numerical simulation, and experiments were carried out in a wind tunnel and a water tunnel at Reynolds numbers of 15,000 and 90,000, respectively. The airfoil oscillated around an axis located at the quarter-chord point from the leading edge. Two different angles of attack, 20° and 30°, were selected with +/-10° maximum amplitude of oscillation. In order to extract the coherent flow structures from time-resolved PIV data, proper orthogonal decomposition (POD) analysis was performed on 1,000 instantaneous realisations for each condition using the method of snapshots. Vorticity contours and velocity profiles from both PIV and DES are in good agreement for pitching and heaving motion. At the higher Reynolds number, 3D stream-wise vortices appeared after the generation of span-wise vortices. The higher maximum angle of attack allows the leading edge vortex to grow stronger, and the angle of attack appears to be more important than the reduced frequency in influencing the growth of the leading edge vortex structure. National Research Foundation of Korea (No. 2011-0030013).
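The method of snapshots used here is compact enough to sketch directly. The fragment below (an illustrative Python implementation on synthetic data, not the authors' code) extracts POD modes from an ensemble of instantaneous fields by diagonalizing the small temporal correlation matrix instead of the large spatial one:

```python
import numpy as np

def pod_snapshots(snapshots, n_modes):
    """POD via the method of snapshots.

    snapshots: (n_points, n_snapshots) array, each column one instantaneous field.
    Returns the temporal mean, spatial modes (n_points, n_modes), and
    modal time coefficients (n_modes, n_snapshots)."""
    # Subtract the temporal mean so modes describe fluctuations
    mean = snapshots.mean(axis=1, keepdims=True)
    X = snapshots - mean
    # Temporal correlation matrix (n_snapshots x n_snapshots) -- cheap when
    # there are far fewer snapshots than grid points
    C = X.T @ X
    eigvals, eigvecs = np.linalg.eigh(C)
    order = np.argsort(eigvals)[::-1]          # sort by descending energy
    eigvecs = eigvecs[:, order]
    # Spatial modes are snapshot combinations; normalize each column
    modes = X @ eigvecs[:, :n_modes]
    modes /= np.linalg.norm(modes, axis=0)
    coeffs = modes.T @ X
    return mean, modes, coeffs

# Synthetic "flow": two oscillating spatial structures (hypothetical data)
x = np.linspace(0.0, 1.0, 200)[:, None]
t = np.linspace(0.0, 10.0, 50)[None, :]
fields = np.sin(2 * np.pi * x) * np.cos(t) + 0.3 * np.sin(4 * np.pi * x) * np.sin(2 * t)
mean, modes, coeffs = pod_snapshots(fields, n_modes=2)
```

The returned modes are mutually orthogonal by construction, since the eigenvectors of the correlation matrix are orthogonal under the `X`-induced inner product.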

  1. Dimensionality reduction of collective motion by principal manifolds

    NASA Astrophysics Data System (ADS)

    Gajamannage, Kelum; Butail, Sachit; Porfiri, Maurizio; Bollt, Erik M.

    2015-01-01

    While the existence of low-dimensional embedding manifolds has been shown in patterns of collective motion, the current battery of nonlinear dimensionality reduction methods is not amenable to the analysis of such manifolds. This is mainly due to the necessary spectral decomposition step, which limits control over the mapping from the original high-dimensional space to the embedding space. Here, we propose an alternative approach that demands a two-dimensional embedding which topologically summarizes the high-dimensional data. In this sense, our approach is closely related to the construction of one-dimensional principal curves that minimize orthogonal error to data points subject to smoothness constraints. Specifically, we construct a two-dimensional principal manifold directly in the high-dimensional space using cubic smoothing splines, and define the embedding coordinates in terms of geodesic distances. Thus, the mapping from the high-dimensional data to the manifold is defined in terms of local coordinates. Through representative examples, we show that compared to existing nonlinear dimensionality reduction methods, the principal manifold retains the original structure even in noisy and sparse datasets. The principal manifold finding algorithm is applied to configurations obtained from a dynamical system of multiple agents simulating a complex maneuver called predator mobbing, and the resulting two-dimensional embedding is compared with that of a well-established nonlinear dimensionality reduction method.

  2. Low Dimensional Study of a Supersonic Multi-Stream Jet Flow

    NASA Astrophysics Data System (ADS)

    Tenney, Andrew; Berry, Matthew; Aycock-Rizzo, Halley; Glauser, Mark; Lewalle, Jacques

    2017-11-01

In this study, the near field of a two stream supersonic jet flow is examined using low dimensional tools. The flow issues from a multi-stream nozzle as described in A near-field investigation of a supersonic, multi-stream jet: locating turbulence mechanisms through velocity and density measurements by Magstadt et al., with the bulk flow Mach number, M1, being 1.6, and the second stream Mach number, M2, reaching the sonic condition. The flow field is visualized using Particle Image Velocimetry (PIV), with frames captured at a rate of 4 Hz. Time-resolved pressure measurements are made just aft of the nozzle exit, as well as in the far-field, 86.6 nozzle hydraulic diameters away from the exit plane. The methodologies used in the analysis of this flow include Proper Orthogonal Decomposition (POD) and the continuous wavelet transform. The results from this ``no deck'' case are then compared to those found in the study conducted by Berry et al. From this comparison, we draw conclusions about the effects of the presence of an aft deck on the low dimensional flow description and near field spectral content. Supported by AFOSR Grant FA9550-15-1-0435, and AFRL, through an SBIR Grant with Spectral Energies, LLC.

  3. Data-adaptive harmonic analysis and prediction of sea level change in North Atlantic region

    NASA Astrophysics Data System (ADS)

    Kondrashov, D. A.; Chekroun, M.

    2017-12-01

This study aims to characterize North Atlantic sea level variability across temporal and spatial scales. We apply recently developed data-adaptive Harmonic Decomposition (DAH) and Multilayer Stuart-Landau Models (MSLM) stochastic modeling techniques [Chekroun and Kondrashov, 2017] to the monthly 1993-2017 dataset of combined TOPEX/Poseidon, Jason-1, and Jason-2/OSTM altimetry fields over the North Atlantic region. The key numerical feature of the DAH relies on the eigendecomposition of a matrix constructed from time-lagged spatial cross-correlations. In particular, the eigenmodes form an orthogonal set of oscillating data-adaptive harmonic modes (DAHMs) that come in pairs and in exact phase quadrature for a given temporal frequency. Furthermore, the pairs of data-adaptive harmonic coefficients (DAHCs), obtained by projecting the dataset onto the associated DAHMs, can be very efficiently modeled by a universal parametric family of simple nonlinear stochastic models - coupled Stuart-Landau oscillators stacked per frequency, and synchronized across different frequencies by the stochastic forcing. Despite the short record of the altimetry dataset, the developed DAH-MSLM model provides skillful prediction of key dynamical and statistical features of sea level variability. References M. D. Chekroun and D. Kondrashov, Data-adaptive harmonic spectra and multilayer Stuart-Landau models. HAL preprint, 2017, https://hal.archives-ouvertes.fr/hal-01537797

  4. Development of a Reduced-Order Model for Reacting Gas-Solids Flow using Proper Orthogonal Decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDaniel, Dwayne; Dulikravich, George; Cizmas, Paul

    2017-11-27

This report summarizes the objectives, tasks and accomplishments made during the three year duration of this research project. The report presents the results obtained by applying advanced computational techniques to develop reduced-order models (ROMs) in the case of reacting multiphase flows based on high fidelity numerical simulation of gas-solids flow structures in risers and vertical columns obtained by the Multiphase Flow with Interphase eXchanges (MFIX) software. The research includes a numerical investigation of reacting and non-reacting gas-solids flow systems and computational analysis that will involve model development to accelerate the scale-up process for the design of fluidization systems by providing accurate solutions that match the full-scale models. The computational work contributes to the development of a methodology for obtaining ROMs that is applicable to the system of gas-solid flows. Finally, the validity of the developed ROMs is evaluated by comparing the results against those obtained using the MFIX code. Additionally, the robustness of existing POD-based ROMs for multiphase flows is improved by avoiding non-physical solutions of the gas void fraction and ensuring that the reduced kinetics models used for reactive flows in fluidized beds are thermodynamically consistent.

  5. Parallel solution of the symmetric tridiagonal eigenproblem. Research report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jessup, E.R.

    1989-10-01

This thesis discusses methods for computing all eigenvalues and eigenvectors of a symmetric tridiagonal matrix on a distributed-memory Multiple Instruction, Multiple Data multiprocessor. Only those techniques having the potential for both high numerical accuracy and significant large-grained parallelism are investigated. These include the QL method or Cuppen's divide and conquer method based on rank-one updating to compute both eigenvalues and eigenvectors, bisection to determine eigenvalues, and inverse iteration to compute eigenvectors. To begin, the methods are compared with respect to computation time, communication time, parallel speedup, and accuracy. Experiments on an iPSC hypercube multiprocessor reveal that Cuppen's method is the most accurate approach, but bisection with inverse iteration is the fastest and most parallel. Because the accuracy of the latter combination is determined by the quality of the computed eigenvectors, the factors influencing the accuracy of inverse iteration are examined. This includes, in part, statistical analysis of the effect of a starting vector with random components. These results are used to develop an implementation of inverse iteration producing eigenvectors with lower residual error and better orthogonality than those generated by the EISPACK routine TINVIT. This thesis concludes with adaptations of methods for the symmetric tridiagonal eigenproblem to the related problem of computing the singular value decomposition (SVD) of a bidiagonal matrix.

  6. Parallel solution of the symmetric tridiagonal eigenproblem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jessup, E.R.

    1989-01-01

This thesis discusses methods for computing all eigenvalues and eigenvectors of a symmetric tridiagonal matrix on a distributed memory MIMD multiprocessor. Only those techniques having the potential for both high numerical accuracy and significant large-grained parallelism are investigated. These include the QL method or Cuppen's divide and conquer method based on rank-one updating to compute both eigenvalues and eigenvectors, bisection to determine eigenvalues, and inverse iteration to compute eigenvectors. To begin, the methods are compared with respect to computation time, communication time, parallel speedup, and accuracy. Experiments on an iPSC hyper-cube multiprocessor reveal that Cuppen's method is the most accurate approach, but bisection with inverse iteration is the fastest and most parallel. Because the accuracy of the latter combination is determined by the quality of the computed eigenvectors, the factors influencing the accuracy of inverse iteration are examined. This includes, in part, statistical analysis of the effects of a starting vector with random components. These results are used to develop an implementation of inverse iteration producing eigenvectors with lower residual error and better orthogonality than those generated by the EISPACK routine TINVIT. This thesis concludes with adaptations of methods for the symmetric tridiagonal eigenproblem to the related problem of computing the singular value decomposition (SVD) of a bidiagonal matrix.
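The bisection-plus-inverse-iteration combination discussed in this thesis can be sketched serially as follows (an illustrative Python version using the classical Sturm-sequence count; the thesis targets a distributed-memory parallel implementation):

```python
import numpy as np

def sturm_count(d, e, x):
    """Number of eigenvalues of the symmetric tridiagonal matrix with
    diagonal d and off-diagonal e that are strictly less than x,
    counted via the signs of the LDL^T pivots (Sturm sequence)."""
    count, q = 0, 1.0
    for i in range(len(d)):
        q = d[i] - x - (e[i - 1] ** 2 / q if i > 0 else 0.0)
        if q == 0.0:
            q = 1e-300          # avoid an exact zero pivot
        if q < 0:
            count += 1
    return count

def bisect_eigenvalue(d, e, k, tol=1e-12):
    """k-th smallest eigenvalue (0-based) by bisection on the Sturm count."""
    # Gershgorin bounds enclose the whole spectrum
    r = np.abs(np.concatenate(([0.0], e))) + np.abs(np.concatenate((e, [0.0])))
    lo, hi = (d - r).min(), (d + r).max()
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sturm_count(d, e, mid) <= k:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def inverse_iteration(d, e, lam, iters=5):
    """Eigenvector for an eigenvalue estimate lam: repeatedly solve
    (T - lam I) v = w from a random start and renormalize."""
    n = len(d)
    T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
    A = T - (lam + 1e-10) * np.eye(n)   # tiny shift keeps A invertible
    v = np.random.default_rng(1).standard_normal(n)
    for _ in range(iters):
        v = np.linalg.solve(A, v)
        v /= np.linalg.norm(v)
    return v

# Small check against numpy's dense symmetric solver
d = np.array([2.0, 3.0, 4.0, 5.0])
e = np.array([1.0, 1.0, 1.0])
lam0 = bisect_eigenvalue(d, e, 0)
ref = np.linalg.eigvalsh(np.diag(d) + np.diag(e, 1) + np.diag(e, -1))
```

Bisection parallelizes naturally because each eigenvalue can be bracketed independently; the orthogonality troubles the thesis addresses arise when inverse iteration is applied to clustered eigenvalues, which this sketch does not handle.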

  7. Automatic Modulation Classification of Common Communication and Pulse Compression Radar Waveforms using Cyclic Features

    DTIC Science & Technology

    2013-03-01

intermediate frequency LFM linear frequency modulation MAP maximum a posteriori MATLAB® matrix laboratory ML maximum likelihood OFDM orthogonal frequency...spectrum, frequency hopping, and orthogonal frequency division multiplexing (OFDM) modulations. Feature analysis would be a good research thrust to...determine feature relevance and decide if removing any features improves performance. Also, extending the system for simulations using a MIMO receiver or

  8. Development Of Polarimetric Decomposition Techniques For Indian Forest Resource Assessment Using Radar Imaging Satellite (Risat-1) Images

    NASA Astrophysics Data System (ADS)

    Sridhar, J.

    2015-12-01

The focus of this work is to examine polarimetric decomposition techniques, primarily Pauli decomposition and Sphere Di-Plane Helix (SDH) decomposition, for forest resource assessment. The data processing steps adopted are pre-processing (geometric correction and radiometric calibration), speckle reduction, image decomposition and image classification. Initially, to classify forest regions, unsupervised classification was applied to determine different unknown classes; the K-means clustering method was observed to give better results than the ISODATA method. Using the algorithm developed for Radar Tools, the code for the decomposition and classification techniques was written in Interactive Data Language (IDL) and applied to a RISAT-1 image of the Mysore-Mandya region of Karnataka, India. This region was chosen for studying forest vegetation and consists of agricultural lands, water and hilly regions. Polarimetric SAR data possess a high potential for classification of the earth's surface. After applying the decomposition techniques, classification was done by selecting regions of interest; post-classification, the overall accuracy was observed to be higher in the SDH-decomposed image, as SDH decomposition operates on individual pixels on a coherent basis and utilises the complete intrinsic coherent nature of polarimetric SAR data, making it particularly suited for analysis of high-resolution SAR data. The Pauli decomposition represents all the polarimetric information in a single SAR image, but interpretation of the resulting image is difficult. The SDH decomposition technique seems to produce better results and interpretation than the Pauli decomposition; however, more quantification and further analysis are being done in this area of research. The comparison of polarimetric decomposition techniques and evolutionary classification techniques will be the scope of this work.

  9. Effects of anthropogenic heavy metal contamination on litter decomposition in streams - A meta-analysis.

    PubMed

    Ferreira, Verónica; Koricheva, Julia; Duarte, Sofia; Niyogi, Dev K; Guérold, François

    2016-03-01

    Many streams worldwide are affected by heavy metal contamination, mostly due to past and present mining activities. Here we present a meta-analysis of 38 studies (reporting 133 cases) published between 1978 and 2014 that reported the effects of heavy metal contamination on the decomposition of terrestrial litter in running waters. Overall, heavy metal contamination significantly inhibited litter decomposition. The effect was stronger for laboratory than for field studies, likely due to better control of confounding variables in the former, antagonistic interactions between metals and other environmental variables in the latter or differences in metal identity and concentration between studies. For laboratory studies, only copper + zinc mixtures significantly inhibited litter decomposition, while no significant effects were found for silver, aluminum, cadmium or zinc considered individually. For field studies, coal and metal mine drainage strongly inhibited litter decomposition, while drainage from motorways had no significant effects. The effect of coal mine drainage did not depend on drainage pH. Coal mine drainage negatively affected leaf litter decomposition independently of leaf litter identity; no significant effect was found for wood decomposition, but sample size was low. Considering metal mine drainage, arsenic mines had a stronger negative effect on leaf litter decomposition than gold or pyrite mines. Metal mine drainage significantly inhibited leaf litter decomposition driven by both microbes and invertebrates, independently of leaf litter identity; no significant effect was found for microbially driven decomposition, but sample size was low. Overall, mine drainage negatively affects leaf litter decomposition, likely through negative effects on invertebrates. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Kinetic analysis of overlapping multistep thermal decomposition comprising exothermic and endothermic processes: thermolysis of ammonium dinitramide.

    PubMed

    Muravyev, Nikita V; Koga, Nobuyoshi; Meerov, Dmitry B; Pivkina, Alla N

    2017-01-25

    This study focused on kinetic modeling of a specific type of multistep heterogeneous reaction comprising exothermic and endothermic reaction steps, as exemplified by the practical kinetic analysis of the experimental kinetic curves for the thermal decomposition of molten ammonium dinitramide (ADN). It is known that the thermal decomposition of ADN occurs as a consecutive two step mass-loss process comprising the decomposition of ADN and subsequent evaporation/decomposition of in situ generated ammonium nitrate. These reaction steps provide exothermic and endothermic contributions, respectively, to the overall thermal effect. The overall reaction process was deconvoluted into two reaction steps using simultaneously recorded thermogravimetry and differential scanning calorimetry (TG-DSC) curves by considering the different physical meanings of the kinetic data derived from TG and DSC by P value analysis. The kinetic data thus separated into exothermic and endothermic reaction steps were kinetically characterized using kinetic computation methods including isoconversional method, combined kinetic analysis, and master plot method. The overall kinetic behavior was reproduced as the sum of the kinetic equations for each reaction step considering the contributions to the rate data derived from TG and DSC. During reproduction of the kinetic behavior, the kinetic parameters and contributions of each reaction step were optimized using kinetic deconvolution analysis. As a result, the thermal decomposition of ADN was successfully modeled as partially overlapping exothermic and endothermic reaction steps. The logic of the kinetic modeling was critically examined, and the practical usefulness of phenomenological modeling for the thermal decomposition of ADN was illustrated to demonstrate the validity of the methodology and its applicability to similar complex reaction processes.

  11. Spectral decomposition of seismic data with reassigned smoothed pseudo Wigner-Ville distribution

    NASA Astrophysics Data System (ADS)

    Wu, Xiaoyang; Liu, Tianyou

    2009-07-01

Seismic signals are nonstationary, mainly due to absorption and attenuation of seismic energy in strata. For spectral decomposition of seismic data, the conventional method using the short-time Fourier transform (STFT) limits temporal and spectral resolution by a predefined window length. The continuous wavelet transform (CWT) uses dilation and translation of a wavelet to produce a time-scale map; however, the wavelets utilized should be orthogonal in order to obtain satisfactory resolution. The less frequently applied Wigner-Ville distribution (WVD), although superior in energy concentration, suffers from cross-term interference (CTI) when signals are multi-component. To reduce the impact of CTI, the Cohen class uses a kernel function as a low-pass filter; nevertheless, this also weakens the energy concentration of the auto-terms. In this paper, we employ the smoothed pseudo Wigner-Ville distribution (SPWVD) with a Gaussian kernel function to reduce CTI in the time and frequency domains, and then reassign the values of the SPWVD (the reassigned SPWVD, or RSPWVD) to the center of gravity of the local energy region, so that the concentration of the distribution is maintained. We apply the method to a multi-component synthetic seismic record and compare with STFT and CWT spectra. Two field examples reveal that the RSPWVD can potentially be applied to detect low-frequency shadows caused by hydrocarbons and to delineate the spatial distribution of abnormal geological bodies more precisely.
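For readers unfamiliar with the underlying transform, a bare discrete Wigner-Ville distribution (without the smoothing and reassignment steps this paper adds) can be sketched as follows; the signal and frequency values are illustrative, not from the paper:

```python
import numpy as np

def wvd(x):
    """Discrete Wigner-Ville distribution of an analytic signal x.

    Returns an (N, N) array: rows are time instants, columns FFT bins.
    Because the lag variable enters as x[n+m] * conj(x[n-m]), a tone at
    f0 cycles/sample peaks at bin 2*f0*N (the factor-of-two compression).
    For multi-component signals this kernel also produces the cross-terms
    (CTI) that smoothing and Cohen-class kernels are designed to suppress."""
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        L = min(n, N - 1 - n)                 # largest symmetric lag available
        m = np.arange(-L, L + 1)
        kernel = np.zeros(N, dtype=complex)
        kernel[m % N] = x[n + m] * np.conj(x[n - m])
        W[n] = np.fft.fft(kernel).real        # real by Hermitian symmetry
    return W

# Analytic tone at 0.125 cycles/sample: energy should concentrate
# at bin 2 * 0.125 * N = 32 at mid-record
N, f0 = 128, 0.125
t = np.arange(N)
x = np.exp(2j * np.pi * f0 * t)
W = wvd(x)
peak_bin = W[N // 2].argmax()
```

The SPWVD of the paper adds independent time and frequency smoothing windows to this kernel, and reassignment then moves each smoothed value to the local energy centroid.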

  12. Dynamical characteristics of an electromagnetic field under conditions of total reflection

    NASA Astrophysics Data System (ADS)

    Bekshaev, Aleksandr Ya

    2018-04-01

    The dynamical characteristics of electromagnetic fields include energy, momentum, angular momentum (spin) and helicity. We analyze their spatial distributions near the planar interface between two transparent and non-dispersive media, when the incident monochromatic plane wave with arbitrary polarization is totally reflected, and an evanescent wave is formed in the medium with lower optical density. Based on the recent arguments in favor of the Minkowski definition of the electromagnetic momentum in a material medium (Philbin 2011 Phys. Rev. A 83 013823; Philbin and Allanson 2012 86 055802; Bliokh et al 2017 Phys. Rev. Lett. 119 073901), we derive the explicit expressions for the dynamical characteristics in both media, with special attention to their behavior at the interface. In particular, the ‘extraordinary’ spin and momentum components orthogonal to the plane of incidence are described, and a canonical (spin-orbital) momentum decomposition is performed that contains no singular terms. The field energy, helicity, the spin momentum and orbital momentum components are everywhere regular but experience discontinuities at the interface; the spin components parallel to the interface appear to be continuous, which testifies to the consistency of the adopted Minkowski picture. The results supply a meaningful example of the electromagnetic momentum decomposition, with separation of spatial and polarization degrees of freedom, in inhomogeneous media, and can be used in engineering the structured fields designed for optical sorting, dispatching and micromanipulation.

  13. Reproducibility of Abdominal Aortic Aneurysm Diameter Measurement and Growth Evaluation on Axial and Multiplanar Computed Tomography Reformations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dugas, Alexandre; Therasse, Eric; Kauffmann, Claude

    2012-08-15

Purpose: To compare different methods measuring abdominal aortic aneurysm (AAA) maximal diameter (Dmax) and its progression on multidetector computed tomography (MDCT) scans. Materials and Methods: Forty AAA patients with two MDCT scans acquired at different times (baseline and follow-up) were included. Three observers measured AAA diameters by seven different methods: on axial images (anteroposterior, transverse, maximal, and short-axis views) and on multiplanar reformation (MPR) images (coronal, sagittal, and orthogonal views). Diameter measurement and progression were compared over time for the seven methods. Reproducibility of the measurement methods was assessed by intraclass correlation coefficient (ICC) and Bland-Altman analysis. Results: Dmax, as measured on axial slices at baseline and follow-up (FU) MDCTs, was greater than that measured using the orthogonal method (p = 0.046 for baseline and 0.028 for FU), whereas Dmax measured with the orthogonal method was greater than those measured using all other methods (p-value range: <0.0001-0.03) except the anteroposterior diameter (p = 0.18 baseline and 0.10 FU). The greatest interobserver ICCs were obtained for the orthogonal and transverse methods (0.972) at baseline and for the orthogonal and sagittal MPR images at FU (0.973 and 0.977). The interobserver ICC of the orthogonal method to document AAA progression was greater (ICC = 0.833) than for measurements taken on axial images (ICC = 0.662-0.780) and single-plane MPR images (0.772-0.817). Conclusion: AAA Dmax measured on MDCT axial slices overestimates aneurysm size. Diameter as measured by the orthogonal method is more reproducible, especially to document AAA progression.

  14. Beyond Principal Component Analysis: A Trilinear Decomposition Model and Least Squares Estimation.

    ERIC Educational Resources Information Center

    Pham, Tuan Dinh; Mocks, Joachim

    1992-01-01

    Sufficient conditions are derived for the consistency and asymptotic normality of the least squares estimator of a trilinear decomposition model for multiway data analysis. The limiting covariance matrix is computed. (Author/SLD)

  15. Automatic network coupling analysis for dynamical systems based on detailed kinetic models.

    PubMed

    Lebiedz, Dirk; Kammerer, Julia; Brandt-Pollmann, Ulrich

    2005-10-01

    We introduce a numerical complexity reduction method for the automatic identification and analysis of dynamic network decompositions in (bio)chemical kinetics based on error-controlled computation of a minimal model dimension represented by the number of (locally) active dynamical modes. Our algorithm exploits a generalized sensitivity analysis along state trajectories and subsequent singular value decomposition of sensitivity matrices for the identification of these dominant dynamical modes. It allows for a dynamic coupling analysis of (bio)chemical species in kinetic models that can be exploited for the piecewise computation of a minimal model on small time intervals and offers valuable functional insight into highly nonlinear reaction mechanisms and network dynamics. We present results for the identification of network decompositions in a simple oscillatory chemical reaction, time scale separation based model reduction in a Michaelis-Menten enzyme system and network decomposition of a detailed model for the oscillatory peroxidase-oxidase enzyme system.

  16. Asymptotic analysis of the density of states in random matrix models associated with a slowly decaying weight

    NASA Astrophysics Data System (ADS)

    Kuijlaars, A. B. J.

    2001-08-01

    The asymptotic behavior of polynomials that are orthogonal with respect to a slowly decaying weight is very different from the asymptotic behavior of polynomials that are orthogonal with respect to a Freud-type weight. While the latter has been extensively studied, much less is known about the former. Following an earlier investigation into the zero behavior, we study here the asymptotics of the density of states in a unitary ensemble of random matrices with a slowly decaying weight. This measure is also naturally connected with the orthogonal polynomials. It is shown that, after suitable rescaling, the weak limit is the same as the weak limit of the rescaled zeros.

  17. Dual-comb self-mode-locked monolithic Yb:KGW laser with orthogonal polarizations.

    PubMed

    Chang, M T; Liang, H C; Su, K W; Chen, Y F

    2015-04-20

The dependence of lasing threshold on the output transmission is numerically analyzed to find the condition for the gain-to-loss balance for the orthogonal Np and Nm polarizations with an Ng-cut Yb:KGW laser crystal. With the numerical analysis, an orthogonally polarized dual-comb self-mode-locked operation is experimentally achieved with a coated Yb:KGW crystal to form a monolithic cavity. At a pump power of 5.2 W, the average output power, the pulse repetition rate, and the pulse duration are measured to be 0.24 (0.6) W, 25.8 (25.3) GHz, and 1.06 (1.12) ps for the output along the Np (Nm) polarization.

  18. A comparison of the gravity field over Central Europe from superconducting gravimeters, GRACE and global hydrological models, using EOF analysis

    NASA Astrophysics Data System (ADS)

    Crossley, David; de Linage, Caroline; Hinderer, Jacques; Boy, Jean-Paul; Famiglietti, James

    2012-05-01

We analyse data from seven superconducting gravimeter (SG) stations in Europe from 2002 to 2007 from the Global Geodynamics Project (GGP) and compare seasonal variations with data from GRACE and several global hydrological models - GLDAS, WGHM and ERA-Interim. Our technique is empirical orthogonal function (EOF) decomposition of the fields, which allows for the inherent incompatibility of length scales between ground and satellite observations. GGP stations below the ground surface pose a problem because part of the attraction from soil moisture comes from above the gravimeter, and this gives rise to a complex (mixed) gravity response. The first principal component (PC) of the EOF decomposition is the main indicator for comparing the fields, although for some of the series it accounts for only about 50 per cent of the variance reduction. PCs for GRACE solutions RL04 from CSR and GFZ are filtered with a cosine taper (degrees 20-40) and a Gaussian window (350 km). Significant differences are evident between GRACE solutions from different groups and filters, though they all agree reasonably well with the global hydrological models for the predominantly seasonal signal. We estimate the first PC at 10-d sampling to be accurate to 1 μGal for GGP data, 1.5 μGal for GRACE data and 1 μGal between the three global hydrological models. Within these limits the CNES/GRGS solution and ground GGP data agree at the 79 per cent level, and better when the GGP solution is restricted to the three above-ground stations. The major limitation on the GGP side comes from the water mass distribution surrounding the underground instruments that leads to a complex gravity effect. To solve this we propose a method for correcting the SG residual gravity series for the effects of soil moisture above the station.
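The EOF decomposition underlying this comparison reduces, in practice, to an SVD of the space-time anomaly matrix; the first principal component is the dominant temporal pattern shared by the fields. A minimal sketch on synthetic data (illustrative only, not the GGP/GRACE processing):

```python
import numpy as np

def eof_decomposition(field, n_modes):
    """EOF analysis of a space-time field via SVD.

    field: (n_times, n_points) array, e.g. gridded gravity maps over time.
    Returns spatial EOF patterns (n_modes, n_points), principal-component
    time series (n_times, n_modes), and variance fraction per mode."""
    anomalies = field - field.mean(axis=0)     # remove the temporal mean
    U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)
    variance_frac = s ** 2 / np.sum(s ** 2)
    pcs = U[:, :n_modes] * s[:n_modes]         # PC time series
    eofs = Vt[:n_modes]                        # orthonormal spatial patterns
    return eofs, pcs, variance_frac[:n_modes]

# Synthetic annual cycle on a small 1-D "grid" plus noise (hypothetical)
rng = np.random.default_rng(2)
t = np.arange(72)[:, None]                     # 72 monthly samples
pattern = np.sin(np.linspace(0.0, np.pi, 50))[None, :]
field = np.cos(2 * np.pi * t / 12.0) * pattern \
        + 0.05 * rng.standard_normal((72, 50))
eofs, pcs, var = eof_decomposition(field, n_modes=2)
```

With a dominant seasonal signal, as in the paper's comparison, the first mode captures most of the variance and its PC is the natural series to compare across SG, GRACE, and model fields.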

  19. Extending substructure based iterative solvers to multiple load and repeated analyses

    NASA Technical Reports Server (NTRS)

    Farhat, Charbel

    1993-01-01

    Direct solvers currently dominate commercial finite element structural software, but do not scale well in the fine granularity regime targeted by emerging parallel processors. Substructure based iterative solvers--also often called domain decomposition algorithms--lend themselves better to parallel processing, but must overcome several obstacles before earning their place in general purpose structural analysis programs. One such obstacle is the solution of systems with many or repeated right hand sides. Such systems arise, for example, in multiple load static analyses and in implicit linear dynamics computations. Direct solvers are well-suited for these problems because after the system matrix has been factored, the multiple or repeated solutions can be obtained through relatively inexpensive forward and backward substitutions. On the other hand, iterative solvers in general are ill-suited for these problems because they often must restart from scratch for every different right hand side. In this paper, we present a methodology for extending the range of applications of domain decomposition methods to problems with multiple or repeated right hand sides. Basically, we formulate the overall problem as a series of minimization problems over K-orthogonal and supplementary subspaces, and tailor the preconditioned conjugate gradient algorithm to solve them efficiently. The resulting solution method is scalable, whereas direct factorization schemes and forward and backward substitution algorithms are not. We illustrate the proposed methodology with the solution of static and dynamic structural problems, and highlight its potential to outperform forward and backward substitutions on parallel computers.
As an example, we show that for a linear structural dynamics problem with 11640 degrees of freedom, every time-step beyond time-step 15 is solved in a single iteration and consumes 1.0 second on a 32 processor iPSC-860 system; for the same problem and the same parallel processor, a pair of forward/backward substitutions at each step consumes 15.0 seconds.
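    One simple way to reuse work across right-hand sides, in the spirit of the minimization over K-orthogonal subspaces described above, is to store the A-orthogonal search directions generated by a first conjugate gradient solve and project each new right-hand side onto their span before iterating. This is a hedged sketch of the general idea on a dense SPD system, not Farhat's actual substructure-based implementation:

```python
import numpy as np

def cg_store_directions(A, b, tol=1e-10, maxit=200):
    """Plain conjugate gradients that also records the search directions."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    P = []
    for _ in range(maxit):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        P.append(p.copy())
        if np.linalg.norm(r_new) < tol:
            break
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return x, np.column_stack(P)

def project_new_rhs(A, b, P):
    """Initial guess for a new RHS: A-orthogonal projection onto span(P).
    The remaining correction would be handled by a (much shorter) CG run."""
    PAP = P.T @ A @ P                    # diagonal up to round-off
    x0 = P @ np.linalg.solve(PAP, P.T @ b)
    return x0, np.linalg.norm(b - A @ x0)

rng = np.random.default_rng(1)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)            # well-conditioned SPD matrix
b1, b2 = rng.standard_normal(50), rng.standard_normal(50)

x1, P = cg_store_directions(A, b1)
x0, res = project_new_rhs(A, b2, P)
print(f"residual for the second RHS after projection alone: {res:.2e}")
```

    The projection step costs only matrix-vector products and a small solve, which is why reusing directions can compete with forward/backward substitution once the directions are in hand.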

  20. Physico-Geometrical Kinetics of Solid-State Reactions in an Undergraduate Thermal Analysis Laboratory

    ERIC Educational Resources Information Center

    Koga, Nobuyoshi; Goshi, Yuri; Yoshikawa, Masahiro; Tatsuoka, Tomoyuki

    2014-01-01

    An undergraduate kinetic experiment of the thermal decomposition of solids by microscopic observation and thermal analysis was developed by investigating a suitable reaction, applicable techniques of thermal analysis and microscopic observation, and a reliable kinetic calculation method. The thermal decomposition of sodium hydrogen carbonate is…

  1. Mathematics of Computed Tomography

    NASA Astrophysics Data System (ADS)

    Hawkins, William Grant

    A review of the applications of the Radon transform is presented, with emphasis on emission computed tomography and transmission computed tomography. The theory of the 2D and 3D Radon transforms, and the effects of attenuation for emission computed tomography, are presented. The algebraic iterative methods, along with their importance and limitations, are reviewed. Analytic solutions of the 2D problem--the convolution and frequency filtering methods based on linear shift invariant theory, and the solution of the circular harmonic decomposition by integral transform theory--are reviewed. The relation between the invisible kernels, the inverse circular harmonic transform, and the consistency conditions is demonstrated. The discussion and review are extended to the 3D problem--convolution, frequency filtering, spherical harmonic transform solutions, and consistency conditions. The Cormack algorithm based on reconstruction with Zernike polynomials is reviewed. An analogous algorithm and set of reconstruction polynomials are developed for the spherical harmonic transform. The relations between the consistency conditions, boundary conditions and orthogonal basis functions for the 2D projection harmonics are delineated and extended to the 3D case. The equivalence of the inverse circular harmonic transform, the inverse Radon transform, and the inverse Cormack transform is presented. The use of the number of nodes of a projection harmonic as a filter is discussed. Numerical methods for the efficient implementation of angular harmonic algorithms based on orthogonal functions and stable recursion are presented. A lower bound for the signal-to-noise ratio of the Cormack algorithm is derived.

  2. Time-Gated Orthogonal Scanning Automated Microscopy (OSAM) for High-speed Cell Detection and Analysis

    NASA Astrophysics Data System (ADS)

    Lu, Yiqing; Xi, Peng; Piper, James A.; Huo, Yujing; Jin, Dayong

    2012-11-01

    We report a new development of orthogonal scanning automated microscopy (OSAM) incorporating time-gated detection to locate rare-event organisms regardless of autofluorescent background. The necessity of using long-lifetime (hundreds of microseconds) luminescent biolabels for time-gated detection implies long integration (dwell) times, resulting in slow scan speed. However, here we achieve high scan speed using a new 2-step orthogonal scanning strategy to realise on-the-fly time-gated detection and precise location of 1-μm lanthanide-doped microspheres with a signal-to-background ratio of 8.9. This enables analysis of a 15 mm × 15 mm slide area in only 3.3 minutes. We demonstrate that detection of only a few hundred photoelectrons within 100 μs is sufficient to distinguish a target event in a prototype system using ultraviolet LED excitation. Cytometric analysis of lanthanide-labelled Giardia cysts achieved a signal-to-background ratio of two orders of magnitude. Results suggest that time-gated OSAM represents a new opportunity for high-throughput background-free biosensing applications.

  3. Blind separation of positive sources by globally convergent gradient search.

    PubMed

    Oja, Erkki; Plumbley, Mark

    2004-09-01

    The instantaneous noise-free linear mixing model in independent component analysis is largely a solved problem under the usual assumption of independent nongaussian sources and full column rank mixing matrix. However, with some prior information on the sources, like positivity, new analysis and perhaps simplified solution methods may yet become possible. In this letter, we consider the task of independent component analysis when the independent sources are known to be nonnegative and well grounded, which means that they have a nonzero pdf in the region of zero. It can be shown that in this case, the solution method is basically very simple: an orthogonal rotation of the whitened observation vector into nonnegative outputs will give a positive permutation of the original sources. We propose a cost function whose minimum coincides with nonnegativity and derive the gradient algorithm under the whitening constraint, under which the separating matrix is orthogonal. We further prove that in the Stiefel manifold of orthogonal matrices, the cost function is a Lyapunov function for the matrix gradient flow, implying global convergence. Thus, this algorithm is guaranteed to find the nonnegative well-grounded independent sources. The analysis is complemented by a numerical simulation, which illustrates the algorithm.
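    A rough numerical sketch of the separation principle described above, assuming whitening is applied to the uncentered observations and using plain projected gradient descent with a QR-based retraction onto the orthogonal group (the step size and iteration count are ad hoc choices, not the letter's algorithm):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two nonnegative, well-grounded sources (nonzero density at zero),
# mixed by a hypothetical full-rank matrix A.
n = 5000
S = rng.exponential(1.0, (2, n))
A = np.array([[1.0, 0.4], [0.3, 1.0]])
X = A @ S

# Whiten with the covariance, applying the whitening matrix to the
# *uncentered* data so that nonnegative outputs remain attainable.
d, E = np.linalg.eigh(np.cov(X))
V = E @ np.diag(d ** -0.5) @ E.T
Z = V @ X

def cost(W, Z):
    """Negativity penalty J(W) = E[ ||min(Wz, 0)||^2 ]."""
    return np.sum(np.minimum(W @ Z, 0.0) ** 2) / Z.shape[1]

def retract(M):
    """QR retraction onto the orthogonal group; column signs are fixed
    via R's diagonal so the factor stays close to M."""
    Q, R = np.linalg.qr(M)
    return Q * np.sign(np.diag(R))

W = retract(rng.standard_normal((2, 2)))
for _ in range(3000):
    G = 2.0 * (np.minimum(W @ Z, 0.0) @ Z.T) / Z.shape[1]   # dJ/dW
    W = retract(W - 0.2 * G)

Y = W @ Z   # approximately a positive permutation of the sources
print(f"final negativity cost: {cost(W, Z):.2e}")
```

    Driving the negativity cost to (near) zero while keeping `W` orthogonal is exactly the rotation-into-nonnegative-outputs idea of the letter; the discrete descent here merely mimics the gradient flow whose global convergence the authors prove.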

  4. The Thermal Decomposition of Basic Copper(II) Sulfate.

    ERIC Educational Resources Information Center

    Tanaka, Haruhiko; Koga, Nobuyoshi

    1990-01-01

    Discussed is the preparation of synthetic brochantite from solution and a thermogravimetric-differential thermal analysis study of the thermal decomposition of this compound. Other analyses included are chemical analysis and IR spectroscopy. Experimental procedures and results are presented. (CW)

  5. On Stable Wall Boundary Conditions for the Hermite Discretization of the Linearised Boltzmann Equation

    NASA Astrophysics Data System (ADS)

    Sarna, Neeraj; Torrilhon, Manuel

    2018-01-01

    We define certain criteria, using the characteristic decomposition of the boundary conditions and energy estimates, which a set of stable boundary conditions for a linear initial boundary value problem, involving a symmetric hyperbolic system, must satisfy. We first use these stability criteria to show the instability of the Maxwell boundary conditions proposed by Grad (Commun Pure Appl Math 2(4):331-407, 1949). We then recognise a special block structure of the moment equations which arises due to the recursion relations and the orthogonality of the Hermite polynomials; the block structure will help us in formulating stable boundary conditions for an arbitrary order Hermite discretization of the Boltzmann equation. The formulation of stable boundary conditions relies upon an Onsager matrix which will be constructed such that the newly proposed boundary conditions stay close to the Maxwell boundary conditions at least in the lower order moments.

  6. Many-level multilevel structural equation modeling: An efficient evaluation strategy.

    PubMed

    Pritikin, Joshua N; Hunter, Michael D; von Oertzen, Timo; Brick, Timothy R; Boker, Steven M

    2017-01-01

    Structural equation models are increasingly used for clustered or multilevel data in cases where mixed regression is too inflexible. However, when there are many levels of nesting, these models can become difficult to estimate. We introduce a novel evaluation strategy, Rampart, that applies an orthogonal rotation to the parts of a model that conform to commonly met requirements. This rotation dramatically simplifies fit evaluation in a way that becomes more potent as the size of the data set increases. We validate and evaluate the implementation using a 3-level latent regression simulation study. Then we analyze data from a state-wide child behavioral health measure administered by the Oklahoma Department of Human Services. We demonstrate the efficiency of Rampart compared to other similar software using a latent factor model with a 5-level decomposition of latent variance. Rampart is implemented in OpenMx, a free and open source software.

  7. Non-fragile ?-? control for discrete-time stochastic nonlinear systems under event-triggered protocols

    NASA Astrophysics Data System (ADS)

    Sun, Ying; Ding, Derui; Zhang, Sunjie; Wei, Guoliang; Liu, Hongjian

    2018-07-01

    In this paper, the non-fragile ?-? control problem is investigated for a class of discrete-time stochastic nonlinear systems under event-triggered communication protocols, which determine whether the measurement output should be transmitted to the controller or not. The main purpose of the addressed problem is to design an event-based output feedback controller subject to gain variations guaranteeing the prescribed disturbance attenuation level described by the ?-? performance index. By utilizing the Lyapunov stability theory combined with S-procedure, a sufficient condition is established to guarantee both the exponential mean-square stability and the ?-? performance for the closed-loop system. In addition, with the help of the orthogonal decomposition, the desired controller parameter is obtained in terms of the solution to certain linear matrix inequalities. Finally, a simulation example is exploited to demonstrate the effectiveness of the proposed event-based controller design scheme.

  8. Generic simulation of multi-element ladar scanner kinematics in USU LadarSIM

    NASA Astrophysics Data System (ADS)

    Omer, David; Call, Benjamin; Pack, Robert; Fullmer, Rees

    2006-05-01

    This paper presents a generic simulation model for a ladar scanner with up to three scan elements, each having a steering, stabilization and/or pattern-scanning role. Of interest is the development of algorithms that automatically generate commands to the scan elements given beam-steering objectives out of the ladar aperture and the base motion of the sensor platform. First, a straightforward single-element body-fixed beam-steering methodology is presented. Then a unique multi-element redirective and reflective space-fixed beam-steering methodology is explained. It is shown that standard direction cosine matrix decomposition methods fail when using two orthogonal, space-fixed rotations, thus demanding the development of a new algorithm for beam steering. Finally, a related steering control methodology is presented that uses two separate optical elements mathematically combined to determine the necessary scan element commands. Limits, restrictions, and results of this methodology are presented.

  9. A nonlinear quality-related fault detection approach based on modified kernel partial least squares.

    PubMed

    Jiao, Jianfang; Zhao, Ning; Wang, Guang; Yin, Shen

    2017-01-01

    In this paper, a new nonlinear quality-related fault detection method is proposed based on kernel partial least squares (KPLS) model. To deal with the nonlinear characteristics among process variables, the proposed method maps these original variables into feature space in which the linear relationship between kernel matrix and output matrix is realized by means of KPLS. Then the kernel matrix is decomposed into two orthogonal parts by singular value decomposition (SVD) and the statistics for each part are determined appropriately for the purpose of quality-related fault detection. Compared with relevant existing nonlinear approaches, the proposed method has the advantages of simple diagnosis logic and stable performance. A widely used literature example and an industrial process are used for the performance evaluation for the proposed method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
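    The orthogonal SVD split can be illustrated in a simplified linear setting (the paper performs it in kernel feature space after KPLS): an SVD of the input-output cross-covariance yields a quality-related subspace and its orthogonal complement, and each sample's squared projections onto the two parts serve as monitoring statistics. All data below are synthetic and illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Training data: 5 process variables; the quality variable depends
# only on the first two (a synthetic, illustrative process).
n = 500
X = rng.standard_normal((n, 5))
y = X[:, 0] + X[:, 1] + 0.05 * rng.standard_normal(n)

# SVD of the input-output cross-covariance splits the input space
# into a quality-related part and its orthogonal complement.
C = X.T @ y[:, None] / n
U, s, Vt = np.linalg.svd(C, full_matrices=True)
Pr = U[:, :1] @ U[:, :1].T        # projector: quality-related subspace
Po = np.eye(5) - Pr               # projector: quality-unrelated subspace

def stats(x):
    """Squared norms of the two orthogonal projections of a sample."""
    return float(x @ Pr @ x), float(x @ Po @ x)

normal = rng.standard_normal(5)
fault = normal + 5.0 * U[:, 0]    # a fault along the quality direction
t_rel, t_unrel = stats(fault)
print(f"quality-related statistic under the fault: {t_rel:.1f}")
```

    A fault component lying in the quality-related subspace inflates only the first statistic, which is the "simple diagnosis logic" the abstract refers to.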

  10. The direct and inverse problems of an air-saturated poroelastic cylinder submitted to acoustic radiation

    NASA Astrophysics Data System (ADS)

    Ogam, Erick; Fellah, Z. E. A.

    2011-09-01

    A wave-fluid saturated poroelastic structure interaction model based on the modified Biot theory (MBT) and plane-wave decomposition using orthogonal cylindrical functions is developed. The model is employed to recover, from real data acquired in an anechoic chamber, the poromechanical properties of a soft cellular melamine cylinder submitted to audible acoustic radiation. The inverse problem of acoustic diffraction is solved by constructing an objective functional given by the total square of the difference between predictions from the MBT interaction model and diffracted field data from experiment. The ability to retrieve the intrinsic poromechanical parameters from the diffracted acoustic fields indicates that a wave initially propagating in a light fluid (air) medium is able to carry, in the absence of mechanical excitation of the specimen, information on the macroscopic mechanical properties, which depend on the microstructural and intrinsic properties of the solid phase.

  11. On iterative processes in the Krylov-Sonneveld subspaces

    NASA Astrophysics Data System (ADS)

    Ilin, Valery P.

    2016-10-01

    The iterative Induced Dimension Reduction (IDR) methods are considered for solving large systems of linear algebraic equations (SLAEs) with nonsingular nonsymmetric matrices. These approaches have been investigated by many authors and are sometimes characterized as an alternative to the classical Krylov-type processes. The key ideas of the IDR algorithms are the construction of embedded Sonneveld subspaces, which have decreasing dimensions, and orthogonalization against some fixed subspace. Other independent approaches to the analysis and optimization of the iterations are based on augmented and modified Krylov subspaces, using aggregation and deflation procedures together with various low-rank approximations of the original matrices. The goal of this paper is to show that the IDR method in Sonneveld subspaces admits an original interpretation as a modified algorithm in the Krylov subspaces. In particular, such a description is given for the multi-preconditioned semi-conjugate direction methods, which are relevant for parallel algebraic domain decomposition approaches.

  12. The role of the frequency of constituents in compound words: evidence from Basque and Spanish.

    PubMed

    Duñabeitia, Jon Andoni; Perea, Manuel; Carreiras, Manuel

    2007-12-01

    Recent data from compound word processing suggests that compounds are recognized via their constituent lexemes (Juhasz, Starr, Inhoff, & Placke, 2003). The present lexical decision experiment manipulated orthogonally the frequency of the constituents of compound words in two languages: Basque and Spanish. Basque and Spanish diverge widely in their morphological properties and in the number of existing compound words. Furthermore, the head lexeme (i.e., the most meaningful lexeme related to the whole-word meaning) in Spanish tends to be the second lexeme, whereas in Basque the percentage is more distributed. Results showed a facilitative effect of the frequency of the second lexeme, in both Basque and Spanish compounds. Thus, both Basque and Spanish readers decompose compounds into their constituents for lexical access, and this decomposition is carried out in a language-independent and blind-to-semantics manner. We examine the implications of these results for models of lexical access.

  13. Identification of reduced-order thermal therapy models using thermal MR images: theory and validation.

    PubMed

    Niu, Ran; Skliar, Mikhail

    2012-07-01

    In this paper, we develop and validate a method to identify computationally efficient site- and patient-specific models of ultrasound thermal therapies from MR thermal images. The models of the specific absorption rate of the transduced energy and the temperature response of the therapy target are identified in the reduced basis of proper orthogonal decomposition of thermal images, acquired in response to a mild thermal test excitation. The method permits dynamic reidentification of the treatment models during the therapy by recursively utilizing newly acquired images. Such adaptation is particularly important during high-temperature therapies, which are known to substantially and rapidly change tissue properties and blood perfusion. The developed theory was validated for the case of focused ultrasound heating of a tissue phantom. The experimental and computational results indicate that the developed approach produces accurate low-dimensional treatment models despite temporal and spatial noises in MR images and slow image acquisition rate.

  14. Fast and efficient indexing approach for object recognition

    NASA Astrophysics Data System (ADS)

    Hefnawy, Alaa; Mashali, Samia A.; Rashwan, Mohsen; Fikri, Magdi

    1999-08-01

    This paper introduces a fast and efficient indexing approach for both 2D and 3D model-based object recognition in the presence of rotation, translation, and scale variations of objects. The indexing entries are computed after preprocessing the data by Haar wavelet decomposition. The scheme builds on a unified image feature detection approach based on Zernike moments. A set of low level features, e.g. high precision edges and gray level corners, is estimated by a set of orthogonal Zernike moments calculated locally around every image point. High-dimensional, highly descriptive indexing entries are then calculated based on the correlation of these local features and employed for fast access to the model database to generate hypotheses. A list of the most likely candidate models is then produced by evaluating the hypotheses. Experimental results are included to demonstrate the effectiveness of the proposed indexing approach.
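    A minimal sketch of a Zernike moment computation (computed globally here, whereas the paper evaluates the moments locally around every image point). The magnitude of a Zernike moment is invariant under image rotation, which is what makes such features usable as indexing entries; the check below uses an exact 90-degree rotation:

```python
import numpy as np
from math import factorial

def zernike_moment(img, n, m):
    """Zernike moment Z_nm of a square image sampled on the unit disk
    (up to a constant pixel-area factor). Requires n >= |m|, n - |m| even."""
    N = img.shape[0]
    coords = np.linspace(-1, 1, N)
    x, y = np.meshgrid(coords, coords)
    rho = np.hypot(x, y)
    theta = np.arctan2(y, x)
    disk = rho <= 1.0

    # Radial polynomial R_nm(rho).
    R = np.zeros_like(rho)
    for k in range((n - abs(m)) // 2 + 1):
        c = ((-1) ** k * factorial(n - k)
             / (factorial(k)
                * factorial((n + abs(m)) // 2 - k)
                * factorial((n - abs(m)) // 2 - k)))
        R += c * rho ** (n - 2 * k)

    basis = R * np.exp(-1j * m * theta)
    return (n + 1) / np.pi * np.sum(img[disk] * basis[disk])

# |Z_nm| is invariant under rotation of the image about the disk centre.
img = np.zeros((65, 65))
img[20:30, 35:50] = 1.0                      # an off-centre bright patch
z1 = zernike_moment(img, 4, 2)
z2 = zernike_moment(np.rot90(img), 4, 2)
print(abs(z1), abs(z2))
```

    Rotation only shifts the phase of Z_nm by e^(i m φ), so the magnitudes of the two moments agree to floating-point accuracy.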

  15. Flow turbulence topology in regular porous media: From macroscopic to microscopic scale with direct numerical simulation

    NASA Astrophysics Data System (ADS)

    Chu, Xu; Weigand, Bernhard; Vaikuntanathan, Visakh

    2018-06-01

    Microscopic analysis of turbulence topology in a regular porous medium is presented with a series of direct numerical simulations. The regular porous media consist of square cylinders in a staggered array. Triply periodic boundary conditions enable efficient investigations in a representative elementary volume. Three flow patterns—channel with sudden contraction, impinging surface, and wake—are observed and studied quantitatively, in contrast to the qualitative experimental studies reported in the literature. Among these, shear layers in the channel show the highest turbulence intensity due to a favorable pressure gradient and shed due to an adverse pressure gradient downstream. The turbulent energy budget indicates a strong production rate after the flow contraction and a strong dissipation on both shear and impinging walls. Energy spectra and pre-multiplied spectra detect large scale energetic structures in the shear layer and a breakup of scales in the impinging layer. However, these large scale structures break into less energetic small structures at high Reynolds number conditions. This suggests an absence of coherent structures in densely packed porous media at high Reynolds numbers. Anisotropy analysis with a barycentric map shows that the turbulence in porous media is highly isotropic in the macro-scale, which is not the case in the micro-scale. Finally, proper orthogonal decomposition is employed to distinguish the energy-conserving structures. The results support the pore scale prevalence hypothesis. However, energetic coherent structures are observed in the case with sparsely packed porous media.

  16. Coherent flow structures and heat transfer in a duct with electromagnetic forcing

    NASA Astrophysics Data System (ADS)

    Himo, Rawad; Habchi, Charbel

    2018-04-01

    Coherent vortices are generated electromagnetically in a square duct flow. The vortices are induced by a Lorentz force applied in a small section near the entrance of the duct. The flow structure complexity increases with the electromagnetic forcing, since the primary vortices propagating along the duct detach to generate secondary smaller streamwise vortices and hairpin-like structures. The Reynolds number based on the mean flow velocity and hydraulic diameter is 500, and five cases were studied by varying the electromagnetic forcing. Even though this Reynolds number is relatively low, a periodic sequence of hairpin-like structures was observed for the high forcing cases. This mechanism enhances the mixing process between the different flow regions, resulting in an increase in thermal performance that reaches 66% relative to the duct flow without forcing. In addition to the flow complexity, lower forcing cases remained steady, unlike high Lorentz forces, which induced periodic instabilities with a Strouhal number around 0.59 for the transient eddies. The effect of the flow structure on the heat transfer is analyzed qualitatively and quantitatively using numerical simulations based on the finite volume method. Moreover, proper orthogonal decomposition (POD) analysis was performed on the flow structures to evaluate the most energetic modes contributing to the flow. It is found from the POD analysis that the primary streamwise vortices and hairpin legs are the flow structures that contribute most to the heat transfer process.
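    Snapshot POD of the kind used above can be sketched with the classical method of snapshots, which eigendecomposes the small temporal correlation matrix instead of the large spatial one. The snapshot matrix below is a synthetic stand-in for the simulated flow fields:

```python
import numpy as np

rng = np.random.default_rng(4)

# Snapshot matrix: 200 grid points x 80 time samples, built from two
# "coherent structures" plus small-scale noise (illustrative only).
xg = np.linspace(0, 2 * np.pi, 200)
t = np.linspace(0, 10, 80)
X = (np.outer(np.sin(xg), np.cos(2 * t))
     + 0.3 * np.outer(np.sin(3 * xg), np.sin(5 * t))
     + 0.01 * rng.standard_normal((200, 80)))
X = X - X.mean(axis=1, keepdims=True)

# Method of snapshots: eigendecompose the small (80 x 80) temporal
# correlation matrix instead of the (200 x 200) spatial one.
m = X.shape[1]
lam, Acoef = np.linalg.eigh(X.T @ X / m)
lam, Acoef = lam[::-1], Acoef[:, ::-1]          # sort descending

modes = X @ Acoef / np.sqrt(np.maximum(lam, 1e-30) * m)  # spatial modes
energy = lam / lam.sum()
print(f"two most energetic modes: {100 * energy[:2].sum():.2f}% of energy")
```

    Ranking the eigenvalues gives the energy content of each mode, which is the basis for statements like "the primary streamwise vortices are the most energetic structures".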

  17. Separation of spatial-temporal patterns ('climatic modes') by combined analysis of really measured and generated numerically vector time series

    NASA Astrophysics Data System (ADS)

    Feigin, A. M.; Mukhin, D.; Volodin, E. M.; Gavrilov, A.; Loskutov, E. M.

    2013-12-01

    The new method of decomposition of the Earth's climate system into well separated spatial-temporal patterns ('climatic modes') is discussed. The method is based on: (i) a generalization of MSSA (Multichannel Singular Spectral Analysis) [1] for expanding vector (space-distributed) time series in a basis of spatial-temporal empirical orthogonal functions (STEOF), which makes allowance for delayed correlations between processes recorded at spatially separated points; (ii) expanding both real SST data and numerically generated SST data several times longer in the STEOF basis; (iii) use of the numerically produced STEOF basis to exclude 'too slow' (and thus not correctly represented) processes from the real data. Applying the method with vector time series generated numerically by the INM RAS Coupled Climate Model [2], we separate from real SST anomaly data [3] two climatic modes possessing noticeably different time scales: 3-5 and 9-11 years. Relations of the separated modes to ENSO and PDO are investigated. Possible applications of the spatial-temporal climatic patterns concept to prognosis of climate system evolution are discussed. 1. Ghil, M., R. M. Allen, M. D. Dettinger, K. Ide, D. Kondrashov, et al. (2002) "Advanced spectral methods for climatic time series", Rev. Geophys. 40(1), 3.1-3.41. 2. http://83.149.207.89/GCM_DATA_PLOTTING/GCM_INM_DATA_XY_en.htm 3. http://iridl.ldeo.columbia.edu/SOURCES/.KAPLAN/.EXTENDED/.v2/.ssta/
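    Step (i) amounts to an SVD of a block trajectory matrix built by stacking lagged copies of every channel; the left singular vectors are the spatial-temporal EOFs. A toy sketch (the channel contents, window length, and noise level are illustrative):

```python
import numpy as np

def st_eofs(X, L):
    """Multichannel SSA: stack L lagged copies of every channel into a
    block trajectory matrix and SVD it. Columns of U are the
    spatial-temporal EOFs; frac gives each mode's variance share."""
    d, n = X.shape
    K = n - L + 1
    traj = np.vstack([np.stack([X[c, i:i + K] for i in range(L)])
                      for c in range(d)])               # (d*L, K)
    U, s, Vt = np.linalg.svd(traj, full_matrices=False)
    return U, s ** 2 / np.sum(s ** 2)

rng = np.random.default_rng(5)
t = np.arange(400)
# Two channels share one oscillation (with a mutual delay); a third
# channel is pure noise.
X = np.stack([np.sin(2 * np.pi * t / 36),
              np.sin(2 * np.pi * (t - 5) / 36),
              0.1 * rng.standard_normal(400)])

U, frac = st_eofs(X, L=60)
print(f"leading pair of ST-EOFs carries {100 * frac[:2].sum():.1f}% of variance")
```

    A shared oscillation shows up as a dominant pair of ST-EOFs even when the channels are mutually delayed, which is exactly what the lagged embedding buys over plain EOF analysis.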

  18. Electrochemical and Infrared Absorption Spectroscopy Detection of SF6 Decomposition Products

    PubMed Central

    Dong, Ming; Ren, Ming; Ye, Rixin

    2017-01-01

    Sulfur hexafluoride (SF6) gas-insulated electrical equipment is widely used in high-voltage (HV) and extra-high-voltage (EHV) power systems. Partial discharge (PD) and local heating can occur in the electrical equipment because of insulation faults, which results in SF6 decomposition and ultimately generates several types of decomposition products. These SF6 decomposition products can be qualitatively and quantitatively detected with relevant detection methods, and such detection contributes to diagnosing the internal faults and evaluating the security risks of the equipment. At present, multiple detection methods exist for analyzing the SF6 decomposition products, and electrochemical sensing (ES) and infrared (IR) spectroscopy are well suited for application in online detection. In this study, the combination of ES with IR spectroscopy is used to detect SF6 gas decomposition. First, the characteristics of these two detection methods are studied, and the data analysis matrix is established. Then, a qualitative and quantitative analysis ES-IR model is established by adopting a two-step approach. A SF6 decomposition detector is designed and manufactured by combining an electrochemical sensor and IR spectroscopy technology. The detector is used to detect SF6 gas decomposition and is verified to reliably and accurately detect the gas components and concentrations. PMID:29140268

  19. On orthogonal expansions of the space of vector functions which are square-summable over a given domain and the vector analysis operators

    NASA Technical Reports Server (NTRS)

    Bykhovskiy, E. B.; Smirnov, N. V.

    1983-01-01

    The Hilbert space L2(omega) of vector functions is studied. A breakdown of L2(omega) into orthogonal subspaces is discussed and the properties of the operators for projection onto these subspaces are investigated from the standpoint of preserving the differential properties of the vectors being projected. Finally, the properties of the operators are examined.

  20. Hydrophilic Strong Anion Exchange (hSAX) Chromatography for Highly Orthogonal Peptide Separation of Complex Proteomes

    PubMed Central

    2013-01-01

    Due to its compatibility with and orthogonality to reversed phase (RP) liquid chromatography (LC) separation, ion exchange chromatography, mainly strong cation exchange (SCX), has often been the first choice in multidimensional LC experiments in proteomics. Here, we have tested the ability of three strong anion exchange (SAX) columns differing in their hydrophobicity to fractionate RAW264.7 macrophage cell lysate. IonPac AS24, a strong anion exchange material with ultralow hydrophobicity, was demonstrated to be superior to the other materials for fractionation and separation of tryptic peptides from both a mixture of 6 proteins and mouse cell lysate. The chromatography displayed very high orthogonality and high robustness owing to the hydrophilicity of the column chemistry, which we termed hydrophilic strong anion exchange (hSAX). Mass spectrometry analysis of 34 SAX fractions from RAW264.7 macrophage cell lysate digest resulted in the identification of 9469 unique proteins and 126318 distinct peptides in one week of instrument time. Moreover, when compared to an optimized high pH/low pH RP separation approach, the method presented here raised the identification of proteins and peptides by 10 and 28%, respectively. This novel hSAX approach provides robust, reproducible, and highly orthogonal separation of complex protein digest samples for deep coverage proteome analysis. PMID:23294059

  1. Microbial genomics, transcriptomics and proteomics: new discoveries in decomposition research using complementary methods.

    PubMed

    Baldrian, Petr; López-Mondéjar, Rubén

    2014-02-01

    Molecular methods for the analysis of biomolecules have undergone rapid technological development in the last decade. The advent of next-generation sequencing methods and improvements in instrumental resolution enabled the analysis of complex transcriptome, proteome and metabolome data, as well as a detailed annotation of microbial genomes. The mechanisms of decomposition by model fungi have been described in unprecedented detail by the combination of genome sequencing, transcriptomics and proteomics. The increasing number of available genomes for fungi and bacteria shows that the genetic potential for decomposition of organic matter is widespread among taxonomically diverse microbial taxa, while expression studies document the importance of the regulation of expression in decomposition efficiency. Importantly, high-throughput methods of nucleic acid analysis used for the analysis of metagenomes and metatranscriptomes indicate the high diversity of decomposer communities in natural habitats and their taxonomic composition. Today, the metaproteomics of natural habitats is of interest. In combination with advanced analytical techniques to explore the products of decomposition and the accumulation of information on the genomes of environmentally relevant microorganisms, advanced methods in microbial ecophysiology should increase our understanding of the complex processes of organic matter transformation.

  2. Synthesis and photophysical properties of a single bond linked tetracene dimer

    NASA Astrophysics Data System (ADS)

    Sun, Tingting; Shen, Li; Liu, Heyuan; Sun, Xuan; Li, Xiyou

    2016-07-01

    A tetracene dimer linked directly by a single bond has been successfully prepared by using electron withdrawing groups to improve the stability. The molecular structure of this dimer is characterized by 1H NMR, MALDI-TOF mass spectrometry, and elemental analysis. The minimized molecular structure and X-ray crystallography reveal that the tetracene subunits of this dimer adopt an orthogonal configuration. Its absorption spectrum differs significantly from that of its monomeric counterpart, suggesting the presence of strong interactions between the two tetracene subunits. The excited state of this dimer is delocalized over both tetracene subunits, which is significantly different from that of orthogonal anthracene dimers but similar to that observed for an orthogonal pentacene dimer. Most of the excited states of this dimer decay by radiative channels, which is different from the localized twisted charge transfer state (LTCT) channel of anthracene dimers and the singlet fission (SF) channel of pentacene dimers. The results of this research suggest that similar orthogonal configurations cause different properties for acene dimers with different conjugation lengths.

  3. Laser Ignition of Nitramine Composite Propellants and Crack Propagation and Branching in Burning Solid Propellants

    DTIC Science & Technology

    1987-10-01

    ..." Proceedings of the 16th JANNAF Combustion Meeting, Sept. 1979, Vol. II, pp. 13-34. 44. Schroeder, M. A., "Critical Analysis of Nitramine Decomposition..." Proceedings of the 19th JANNAF Combustion Meeting, Oct. 1982. 47. Schroeder, M. A., "Critical Analysis of Nitramine Decomposition Data: Activation..." ...the surface of the propellant. This is consistent with the decomposition mechanism considered by Boggs [48] and Schroeder [43]. They concluded that the...

  4. Multi-Centrality Graph Spectral Decompositions and Their Application to Cyber Intrusion Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Pin-Yu; Choudhury, Sutanay; Hero, Alfred

    Many modern datasets can be represented as graphs, and hence spectral decompositions such as graph principal component analysis (PCA) can be useful. Distinct from previous graph decomposition approaches based on subspace projection of a single topological feature, e.g., the centered graph adjacency matrix (graph Laplacian), we propose spectral decomposition approaches to graph PCA and graph dictionary learning that integrate multiple features, including graph walk statistics, centrality measures and graph distances to reference nodes. In this paper we propose a new PCA method for single graph analysis, called multi-centrality graph PCA (MC-GPCA), and a new dictionary learning method for ensembles of graphs, called multi-centrality graph dictionary learning (MC-GDL), both based on spectral decomposition of multi-centrality matrices. As an application to cyber intrusion detection, MC-GPCA can be an effective indicator of anomalous connectivity patterns and MC-GDL can provide a discriminative basis for attack classification.
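    The core computation behind a multi-centrality PCA of this kind can be sketched as follows. This is a minimal illustration, not the authors' MC-GPCA implementation: the toy graph, the choice of degree centrality and distance-to-a-reference-node as the two features, and the plain-NumPy PCA are all assumptions made for the example.

```python
import numpy as np

# Toy undirected graph as an adjacency matrix (assumed example data).
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 1, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)
n = A.shape[0]

# Feature 1: degree centrality (degree / (n - 1)).
degree_centrality = A.sum(axis=1) / (n - 1)

# Feature 2: shortest-path distance to a reference node (node 0),
# computed by Floyd-Warshall on the unweighted graph.
D = np.where(A > 0, 1.0, np.inf)
np.fill_diagonal(D, 0.0)
for k in range(n):
    D = np.minimum(D, D[:, [k]] + D[[k], :])
dist_to_ref = D[:, 0]

# Stack the features into a multi-centrality matrix, standardize the
# columns, and take the leading principal component via SVD.
X = np.column_stack([degree_centrality, dist_to_ref])
Xc = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[0]   # projection of each node onto the first PC
print(scores.round(3))
```

Nodes whose scores deviate strongly from the rest are candidates for anomalous connectivity in this simplified picture.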

  5. Determination of seasonals using wavelets in terms of noise parameters changeability

    NASA Astrophysics Data System (ADS)

    Klos, Anna; Bogusz, Janusz; Figurski, Mariusz

    2015-04-01

    Reliable velocities from GNSS-derived time series are of high importance nowadays. How the seasonal signals are determined and subtracted can introduce autocorrelation into the time series and affect the uncertainties of the linear parameters. Periodic changes in GNSS time series are commonly modeled as the sum of annual and semi-annual terms with amplitudes and phases constant in time, and Least-Squares Estimation (LSE) is generally used to fit these sine waves. However, not only the time-variability of the seasonals but also their higher harmonics should be considered. In this research, we focused on more than 230 globally distributed IGS stations processed at the Military University of Technology EPN Local Analysis Centre (MUT LAC) in Bernese 5.0 software. The network was divided into 7 sub-networks with a few overlapping stations and processed separately with the newest models. We propose a wavelet-based determination and removal of the trend and of the seasonal terms over the whole frequency spectrum between the Chandler and quarter-annual periods from the North, East and Up components, and compare it with the LSE-determined values. We used the symmetric, orthogonal Meyer wavelet and assumed nine levels of decomposition. Details 6 to 9 were analyzed as periodic components with frequencies between 0.3 and 2.5 cpy, and the characteristic oscillations in each frequency band were identified. Details below level 6, summed with the detrended approximation, were treated as residua. The power spectral densities (PSDs) of the original and decomposed data were stacked for the North, East and Up components of each sub-network to show how much power was removed at each decomposition level. Moreover, the noise character of each frequency band (in terms of the spectral indices of power-law dependencies) was estimated with a spectral method and compared across the processed sub-networks. Frequencies up to 0.7 cpy are characterized by lower spectral indices, whereas higher frequencies are close to white noise. Because the decomposition levels overlap, the choice of frequency window becomes the main issue in spectral index estimation. Our results were compared with those obtained by Maximum Likelihood Estimation (MLE), and the differences as well as their impact on velocity uncertainties were examined; the spectral indices estimated in the time and frequency domains differ by at most 0.15. We also compared the power removed by the wavelet decomposition levels with that subtracted by LSE assuming the same periodicities. Compared with LSE, the wavelet-based approach leaves residua closer to white noise, with lower power-law amplitudes, which reduces velocity uncertainties. The last approximation was analyzed as a non-linear long-term trend and compared with the LSE-determined linear one; the two trends differ by up to 0.3 mm/yr in the most extreme case, which makes wavelet decomposition useful for velocity determination.
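    The multi-level wavelet decomposition described above can be illustrated with a minimal sketch. Two assumptions are made for brevity: a Haar wavelet stands in for the symmetric Meyer wavelet used in the study, and a synthetic daily series replaces the GNSS data. The point is only the mechanics: nine levels split the signal into frequency bands while conserving energy (Parseval's relation for an orthogonal DWT).

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: returns (approximation, detail)."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def haar_wavedec(x, levels):
    """Multi-level decomposition: [approx_L, detail_L, ..., detail_1]."""
    details = []
    approx = np.asarray(x, dtype=float)
    for _ in range(levels):
        approx, d = haar_dwt(approx)
        details.append(d)
    return [approx] + details[::-1]

# Synthetic daily series: linear trend + annual cycle + white noise (assumed data).
rng = np.random.default_rng(0)
t = np.arange(4096)
x = 0.001 * t + 5.0 * np.sin(2 * np.pi * t / 365.25) + rng.normal(0, 1, t.size)

coeffs = haar_wavedec(x, levels=9)
# Energy is preserved across the decomposition (orthogonal transform).
energy_in = np.sum(x**2)
energy_out = sum(np.sum(c**2) for c in coeffs)
print(abs(energy_in - energy_out) < 1e-6 * energy_in)  # True
```

Summing back selected detail levels then isolates a chosen frequency band, which is the operation the study performs with the Meyer wavelet to separate seasonal signals from residua.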

  6. A Riemann-Hilbert approach to asymptotic questions for orthogonal polynomials

    NASA Astrophysics Data System (ADS)

    Deift, P.; Kriecherbauer, T.; McLaughlin, K. T.-R.; Venakides, S.; Zhou, X.

    2001-08-01

    A few years ago the authors introduced a new approach to study asymptotic questions for orthogonal polynomials. In this paper we give an overview of our method and review the results which have been obtained in Deift et al. (Internat. Math. Res. Notices (1997) 759, Comm. Pure Appl. Math. 52 (1999) 1491, 1335), Deift (Orthogonal Polynomials and Random Matrices: A Riemann-Hilbert Approach, Courant Lecture Notes, Vol. 3, New York University, 1999), Kriecherbauer and McLaughlin (Internat. Math. Res. Notices (1999) 299) and Baik et al. (J. Amer. Math. Soc. 12 (1999) 1119). We mainly consider orthogonal polynomials with respect to weights on the real line which are either (1) Freud-type weights dα(x) = e^(-Q(x)) dx (Q a polynomial or Q(x) = x^β, β > 0), or (2) varying weights dα_n(x) = e^(-nV(x)) dx (V analytic, lim_(x→∞) V(x)/log x = ∞). We obtain Plancherel-Rotach-type asymptotics in the entire complex plane as well as asymptotic formulae with error estimates for the leading coefficients, for the recurrence coefficients, and for the zeros of the orthogonal polynomials. Our proof starts from an observation of Fokas et al. (Comm. Math. Phys. 142 (1991) 313) that the orthogonal polynomials can be determined as solutions of certain matrix valued Riemann-Hilbert problems. We analyze the Riemann-Hilbert problems by a steepest descent type method introduced by Deift and Zhou (Ann. Math. 137 (1993) 295) and further developed in Deift and Zhou (Comm. Pure Appl. Math. 48 (1995) 277) and Deift et al. (Proc. Nat. Acad. Sci. USA 95 (1998) 450). A crucial step in our analysis is the use of the well-known equilibrium measure which describes the asymptotic distribution of the zeros of the orthogonal polynomials.
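    The Fokas-Its-Kitaev observation cited above can be stated compactly (standard form; normalization conventions vary between references):

```latex
\text{Find } Y : \mathbb{C}\setminus\mathbb{R} \to \mathbb{C}^{2\times 2} \text{ analytic, such that}
\quad
Y_+(x) = Y_-(x)\begin{pmatrix} 1 & w(x) \\ 0 & 1 \end{pmatrix}, \; x \in \mathbb{R},
\qquad
Y(z)\, z^{-n\sigma_3} \to I, \; z \to \infty,
```

where w is the orthogonality weight and σ_3 = diag(1, -1). The unique solution has the monic orthogonal polynomial π_n(z) as its (1,1) entry, which is the starting point for the steepest-descent analysis described in the abstract.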

  7. Oblique rotation in canonical correlation analysis reformulated as maximizing the generalized coefficient of determination.

    PubMed

    Satomura, Hironori; Adachi, Kohei

    2013-07-01

    To facilitate the interpretation of canonical correlation analysis (CCA) solutions, procedures have been proposed in which CCA solutions are orthogonally rotated to a simple structure. In this paper, we consider oblique rotation for CCA to provide solutions that are much easier to interpret, though only orthogonal rotation is allowed in the existing formulations of CCA. Our task is thus to reformulate CCA so that its solutions have the freedom of oblique rotation. Such a task can be achieved using Yanai's (Jpn. J. Behaviormetrics 1:46-54, 1974; J. Jpn. Stat. Soc. 11:43-53, 1981) generalized coefficient of determination for the objective function to be maximized in CCA. The resulting solutions are proved to include the existing orthogonal ones as special cases and to be rotated obliquely without affecting the objective function value, where ten Berge's (Psychometrika 48:519-523, 1983) theorems on suborthonormal matrices are used. A real data example demonstrates that the proposed oblique rotation can provide simple, easily interpreted CCA solutions.

  8. Nonlinear mode decomposition: A noise-robust, adaptive decomposition method

    NASA Astrophysics Data System (ADS)

    Iatsenko, Dmytro; McClintock, Peter V. E.; Stefanovska, Aneta

    2015-09-01

    The signals emanating from complex systems are usually composed of a mixture of different oscillations which, for a reliable analysis, should be separated from each other and from the inevitable background of noise. Here we introduce an adaptive decomposition tool—nonlinear mode decomposition (NMD)—which decomposes a given signal into a set of physically meaningful oscillations for any waveform, simultaneously removing the noise. NMD is based on the powerful combination of time-frequency analysis techniques—which, together with the adaptive choice of their parameters, make it extremely noise robust—and surrogate data tests used to identify interdependent oscillations and to distinguish deterministic from random activity. We illustrate the application of NMD to both simulated and real signals and demonstrate its qualitative and quantitative superiority over other approaches, such as (ensemble) empirical mode decomposition, Karhunen-Loève expansion, and independent component analysis. We point out that NMD is likely to be applicable and useful in many different areas of research, such as geophysics, finance, and the life sciences. The necessary MATLAB codes for running NMD are freely available for download.

  9. Coupling coefficients for tensor product representations of quantum SU(2)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Groenevelt, Wolter, E-mail: w.g.m.groenevelt@tudelft.nl

    2014-10-15

    We study tensor products of infinite dimensional irreducible *-representations (not corepresentations) of the SU(2) quantum group. We obtain (generalized) eigenvectors of certain self-adjoint elements using spectral analysis of Jacobi operators associated to well-known q-hypergeometric orthogonal polynomials. We also compute coupling coefficients between different eigenvectors corresponding to the same eigenvalue. Since the continuous spectrum has multiplicity two, the corresponding coupling coefficients can be considered as 2 × 2-matrix-valued orthogonal functions. We compute explicitly the matrix elements of these functions. The coupling coefficients can be considered as q-analogs of Bessel functions. As a result, we obtain several q-integral identities involving q-hypergeometric orthogonal polynomials and q-Bessel-type functions.

  10. Multiple-taper spectral analysis: A stand-alone C-subroutine

    NASA Astrophysics Data System (ADS)

    Lees, Jonathan M.; Park, Jeffrey

    1995-03-01

    A simple set of subroutines in ANSI C is presented for multiple-taper spectrum estimation. The multitaper approach provides an optimal spectrum estimate by minimizing spectral leakage while reducing the variance of the estimate by averaging orthogonal eigenspectrum estimates. The orthogonal tapers are Slepian nπ prolate functions applied as windows to the time series. Because the taper functions are orthogonal, combining them to achieve an average spectrum does not introduce spurious correlations as standard smoothed single-taper estimates do. Furthermore, estimates of the degrees of freedom and F-test values at each frequency provide diagnostics for determining levels of confidence in narrow-band (single-frequency) periodicities. The program provided is portable and has been tested on both Unix and Macintosh systems.
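    The averaging of orthogonal eigenspectra can be sketched in a few lines. This is an illustrative Python re-implementation, not the ANSI C subroutines described in the record; the test signal and the parameter choices (NW = 4, seven tapers) are assumptions for the example.

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(x, nw=4.0, n_tapers=7, fs=1.0):
    """Average of orthogonal eigenspectra using Slepian (DPSS) tapers."""
    x = np.asarray(x, dtype=float)
    n = x.size
    tapers = dpss(n, nw, Kmax=n_tapers)          # shape (n_tapers, n)
    # One eigenspectrum per taper; averaging reduces variance without the
    # spurious correlation introduced by smoothing a single-taper estimate.
    eig_specs = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, eig_specs.mean(axis=0)

# Sinusoid at 50 Hz in white noise (assumed test signal).
fs = 1000.0
t = np.arange(2048) / fs
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 50.0 * t) + 0.5 * rng.normal(size=t.size)
freqs, psd = multitaper_psd(x, fs=fs)
print(freqs[np.argmax(psd)])  # peak near 50 Hz
```

The time-bandwidth product NW and the number of tapers (at most 2·NW − 1 is the usual rule of thumb) trade frequency resolution against variance reduction.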

  11. Finite Element Analysis Of Influence Of Flank Wear Evolution On Forces In Orthogonal Cutting Of 42CrMo4 Steel

    NASA Astrophysics Data System (ADS)

    Madajewski, Marek; Nowakowski, Zbigniew

    2017-01-01

    This paper analyzes the influence of flank wear on forces in orthogonal turning of 42CrMo4 steel and evaluates the capacity of a finite element model to provide such force values. Data on the magnitudes of the feed and cutting forces were obtained from measurements with a force tensiometer in experimental tests as well as from finite element analysis of the chip formation process in ABAQUS/Explicit software. For the study, an insert with a complex rake face was selected, and flank wear was simulated by a grinding operation on its flank face. The aim of grinding the insert surface was to obtain even flat wear along the cutting edge, which after measurement could be modeled in a CAD program and applied in the FE analysis for the selected range of wear widths. By comparing both sets of force values as a function of flank wear under the given cutting conditions, the FEA model was validated, and it was established that it can be applied to analyze other physical aspects of machining. The force analysis found that wear progression increases the cutting force magnitude and sharply increases the feed force magnitude. Analysis of the Fc/Ff force ratio revealed that flank wear has a significant impact on the resultant force in orthogonal cutting and on the magnitudes of its components in the cutting and feed directions. The surge in force values can transfer substantial loads to the machine-tool interface.

  12. Characterization of a Heated Liquid Jet in Crossflow

    NASA Astrophysics Data System (ADS)

    Wiest, Heather K.

    The liquid jet in crossflow (LJICF) is a widely utilized fuel injection method for airbreathing propulsion devices such as low NOx gas turbine combustors, turbojet afterburners, scramjet/ramjet engines, and rotating detonation engines (RDEs). This flow field allows for efficient fuel-air mixing, as aerodynamic forces from the crossflow augment atomization. Additionally, increases in the thermal demands of advanced aeroengines necessitate the use of fuel as a primary coolant. The resulting higher fuel temperatures can cause flash atomization of the liquid fuel as it is injected into a crossflow, potentially leading to a large reduction in jet penetration. While many experimental works have characterized the overall atomization process of a room-temperature liquid jet in an ambient-temperature, ambient-pressure crossflow, the aggressive conditions associated with flash atomization, especially in an air crossflow at elevated temperatures and pressures, have been less studied. A successful test campaign was conducted to study the effects of fuel temperature on a liquid jet injected transversely into a steady air crossflow at ambient as well as elevated temperature and pressure conditions. Modifications were made to an existing optically accessible rig, and a new fuel injector was designed for this study. Backlit imaging was utilized to record changes in the overall spray characteristics and jet trajectory as the fuel temperature and crossflow conditions were adjusted. Three primary analysis techniques were applied to the heated LJICF data: linear regression of detected edges to determine trajectory correlations, an exploratory study of pixel intensity variations in both time and space, and modal decomposition of the data.
The overall objectives of this study were to assess the trajectory, breakup, and mixing of the LJICF under varying jet and crossflow conditions; to develop a trajectory correlation predicting changes in jet penetration due to fuel temperature increases; and to characterize the changes in the underlying physics of the LJICF flow field. Based on visual inspection, increasing the fuel temperature leads to a finer and denser fuel spray. With increasingly elevated liquid temperatures, the penetration of the jet typically decreases. At or near flashing conditions, the jet tended to penetrate upstream before bending over in the crossflow and experienced a rapid expansion that caused the jet column to increase in width. Two trajectory correlations were determined, one for each set of crossflow conditions, based on normalized axial distance, normalized liquid viscosity, and normalized jet diameter as liquid is vaporized. The pixel intensity analysis showed that the highest temperature jet in the ambient temperature and pressure crossflow exhibited periodic behavior that was also found using various modal techniques, including proper orthogonal decomposition and dynamic mode decomposition. Dominant frequencies determined for most test cases were associated with the bulk or flapping motion of the jet. Most notably, the DMD analysis in this study was successful in identifying robust modes across different subgroupings of the data even though the modes identified were not the highest power modes in each DMD spectrum.
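    The proper orthogonal decomposition used in the modal analysis reduces, once snapshot data are stacked into a matrix, to a singular value decomposition. The following is a generic sketch on synthetic data, not the study's spray images; the two planted spatial structures and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic snapshot matrix: each column is one snapshot (e.g. a flattened
# pixel-intensity field); two coherent structures plus weak noise (assumed data).
n_points, n_snaps = 500, 80
t = np.linspace(0, 2 * np.pi, n_snaps)
s1 = np.sin(np.linspace(0, np.pi, n_points))        # spatial structure 1
s2 = np.sin(2 * np.linspace(0, np.pi, n_points))    # spatial structure 2
X = (np.outer(s1, np.cos(5 * t)) + 0.4 * np.outer(s2, np.sin(9 * t))
     + 0.01 * rng.normal(size=(n_points, n_snaps)))

# POD: subtract the mean field, then SVD; columns of U are the POD modes
# and the singular values rank them by energy content.
Xm = X - X.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(Xm, full_matrices=False)
energy = s**2 / np.sum(s**2)
print(energy[:3].round(3))  # the two planted structures dominate
```

The rows of Vt give the temporal coefficients of each mode, which is where the periodic (flapping) behavior mentioned in the abstract would show up.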

  13. [Relationships between decomposition rate of leaf litter and initial quality across the alpine timberline ecotone in Western Sichuan, China].

    PubMed

    Yang, Lin; Deng, Chang-chun; Chen, Ya-mei; He, Run-lian; Zhang, Jian; Liu, Yang

    2015-12-01

    The relationships between the litter decomposition rate and the initial litter quality of 14 representative plants in the alpine forest ecotone of western Sichuan were investigated in this paper. The decomposition rate k of the litter ranged from 0.16 to 1.70. Woody leaf litter and moss litter decomposed much more slowly, shrub litter decomposed somewhat faster, and herbaceous litter decomposed fastest among all plant forms. There were significant linear regression relationships between the litter decomposition rate and the N content, lignin content, phenolics content, C/N, C/P and lignin/N. By path analysis, lignin/N and hemicellulose content together explained 78.4% of the variation in the litter decomposition rate (k). Lignin/N alone explained 69.5% of the variation in k, and its direct path coefficient on k was -0.913. Principal component analysis (PCA) showed that the contribution rate of the first sort axis to k and the decomposition time (t) reached 99.2%. Significant positive correlations existed between lignin/N, lignin content, C/N, C/P and the first sort axis, with the closest relationship between lignin/N and the first sort axis (r = 0.923). Lignin/N was the key quality factor affecting the plant litter decomposition rate across the alpine timberline ecotone: the higher the initial lignin/N, the lower the decomposition rate of leaf litter.
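    Decomposition rates of this kind are conventionally obtained by fitting the single negative exponential model X(t) = X0·e^(-kt) to mass-loss data. A minimal sketch on synthetic data (the sampling times, noise level, and the "true" k = 0.8 yr^-1 are assumptions; the paper's k values ranged from 0.16 to 1.70):

```python
import numpy as np

# Synthetic mass-remaining fractions following exp(-k*t) with mild
# multiplicative noise (assumed data, k = 0.8 per year).
rng = np.random.default_rng(3)
t = np.array([0.25, 0.5, 1.0, 1.5, 2.0, 3.0])            # years
frac = np.exp(-0.8 * t) * np.exp(rng.normal(0, 0.02, t.size))

# Log-linear least squares: ln(frac) = -k * t  =>  slope = -k.
slope, intercept = np.polyfit(t, np.log(frac), 1)
k_hat = -slope
print(round(k_hat, 2))  # close to the assumed 0.8
```

With k in hand, relationships to litter quality indices such as lignin/N are then explored by regression, as in the abstract.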

  14. A Dual Super-Element Domain Decomposition Approach for Parallel Nonlinear Finite Element Analysis

    NASA Astrophysics Data System (ADS)

    Jokhio, G. A.; Izzuddin, B. A.

    2015-05-01

    This article presents a new domain decomposition method for nonlinear finite element analysis introducing the concept of dual partition super-elements. The method extends ideas from the displacement frame method and is ideally suited for parallel nonlinear static/dynamic analysis of structural systems. In the new method, domain decomposition is realized by replacing one or more subdomains in a "parent system," each with a placeholder super-element, where the subdomains are processed separately as "child partitions," each wrapped by a dual super-element along the partition boundary. The analysis of the overall system, including the satisfaction of equilibrium and compatibility at all partition boundaries, is realized through direct communication between all pairs of placeholder and dual super-elements. The proposed method has particular advantages for matrix solution methods based on the frontal scheme, and can be readily implemented for existing finite element analysis programs to achieve parallelization on distributed memory systems with minimal intervention, thus overcoming memory bottlenecks typically faced in the analysis of large-scale problems. Several examples are presented in this article which demonstrate the computational benefits of the proposed parallel domain decomposition approach and its applicability to the nonlinear structural analysis of realistic structural systems.

  15. Relationship between the Decomposition Process of Coarse Woody Debris and Fungal Community Structure as Detected by High-Throughput Sequencing in a Deciduous Broad-Leaved Forest in Japan

    PubMed Central

    Yamashita, Satoshi; Masuya, Hayato; Abe, Shin; Masaki, Takashi; Okabe, Kimiko

    2015-01-01

    We examined the relationship between the community structure of wood-decaying fungi, detected by high-throughput sequencing, and the decomposition rate using 13 years of data from a forest dynamics plot. For molecular analysis and wood density measurements, drill dust samples were collected from logs and stumps of Fagus and Quercus in the plot. Regression using a negative exponential model between wood density and time since death revealed that the decomposition rate of Fagus was greater than that of Quercus. The residual between the expected value obtained from the regression curve and the observed wood density was used as a decomposition rate index. Principal component analysis showed that the fungal community compositions of both Fagus and Quercus changed with time since death. Principal component analysis axis scores were used as an index of fungal community composition. A structural equation model for each wood genus was used to assess the effect of fungal community structure traits on the decomposition rate and how the fungal community structure was determined by the traits of coarse woody debris. Results of the structural equation model suggested that the decomposition rate of Fagus was affected by two fungal community composition components: one that was affected by time since death and another that was not affected by the traits of coarse woody debris. In contrast, the decomposition rate of Quercus was not affected by coarse woody debris traits or fungal community structure. These findings suggest that, in the case of Fagus coarse woody debris, the fungal community structure is related to the decomposition process of its host substrate. Because fungal community structure is affected partly by the decay stage and wood density of its substrate, these factors influence each other. Further research on interactive effects is needed to improve our understanding of the relationship between fungal community structure and the woody debris decomposition process. 
PMID:26110605

  16. Parallel processing methods for space based power systems

    NASA Technical Reports Server (NTRS)

    Berry, F. C.

    1993-01-01

    This report presents a method for load-flow analysis of a power system using a decomposition approach. The power system for the Space Shuttle is used as a basis to build a model for the load-flow analysis. To test the decomposition method, simulations were performed on power systems of 16, 25, 34, 43, 52, 61, 70, and 79 nodes. Each of the power systems was divided into subsystems and simulated under steady-state conditions. The results from these tests have been found to be as accurate as tests performed using a standard serial simulator. The division of the power systems into different subsystems was done by assigning a processor to each area. There were 13 transputers available; therefore, up to 13 different subsystems could be simulated at the same time. This report presents preliminary results for a load-flow analysis using a decomposition principle. The report shows that the decomposition algorithm for load-flow analysis is well suited for parallel processing and provides increases in the speed of execution.

  17. Doppler Global Velocimeter Development for the Large Wind Tunnels at Ames Research Center

    NASA Technical Reports Server (NTRS)

    Reinath, Michael S.

    1997-01-01

    Development of an optical, laser-based flow-field measurement technique for large wind tunnels is described. The technique uses laser sheet illumination and charge-coupled device detectors to rapidly measure flow-field velocity distributions over large planar regions of the flow. Sample measurements are presented that illustrate the capability of the technique. An analysis of measurement uncertainty, which focuses on the random component of uncertainty, shows that precision uncertainty is not dependent on the measured velocity magnitude. For a single-image measurement, the analysis predicts a precision uncertainty of +/-5 m/s. When multiple images are averaged, this uncertainty is shown to decrease. For an average of 100 images, for example, the analysis shows that a precision uncertainty of +/-0.5 m/s can be expected. Sample applications show that vectors aligned with an orthogonal coordinate system are difficult to measure directly. An algebraic transformation is presented which converts measured vectors to the desired orthogonal components. Uncertainty propagation is then used to show how the uncertainty propagates from the direct measurements to the orthogonal components. For a typical forward-scatter viewing geometry, the propagation analysis predicts precision uncertainties of +/-4, +/-7, and +/-6 m/s, respectively, for the U, V, and W components at 68% confidence.
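    The transformation from direct measurements to orthogonal components, and the accompanying uncertainty propagation, can be sketched with a linear model m = A·u, so that u = A⁻¹m and Σ_u = A⁻¹ Σ_m A⁻ᵀ. The geometry matrix A and the measurement values below are assumptions for illustration, not the instrument's actual viewing geometry.

```python
import numpy as np

# Rows of A are assumed unit sensitivity directions of the three direct
# measurements; m = A @ u maps the orthogonal components u = (U, V, W)
# to the measured components m.
A = np.array([
    [0.80, 0.60, 0.00],
    [0.00, 0.71, 0.71],
    [0.60, 0.00, 0.80],
])
Ainv = np.linalg.inv(A)

# Direct measurements with +/-5 m/s precision uncertainty each (per the
# single-image analysis in the abstract), assumed uncorrelated.
m = np.array([12.0, 3.5, -4.0])
sigma_m = 5.0
cov_m = (sigma_m**2) * np.eye(3)

# Transform to orthogonal components and propagate the covariance.
u = Ainv @ m
cov_u = Ainv @ cov_m @ Ainv.T
sigma_u = np.sqrt(np.diag(cov_u))   # 1-sigma uncertainty of U, V, W
print(u.round(2), sigma_u.round(1))
```

Depending on the conditioning of A, the component uncertainties can exceed the raw measurement uncertainty, which is consistent with the +/-4, +/-7, +/-6 m/s figures quoted for a forward-scatter geometry.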

  18. Stabilization of the Thermal Decomposition of Poly(Propylene Carbonate) Through Copper Ion Incorporation and Use in Self-Patterning

    NASA Astrophysics Data System (ADS)

    Spencer, Todd J.; Chen, Yu-Chun; Saha, Rajarshi; Kohl, Paul A.

    2011-06-01

    Incorporation of copper ions into poly(propylene carbonate) (PPC) films cast from γ-butyrolactone (GBL), trichloroethylene (TCE) or methylene chloride (MeCl) solutions containing a photo-acid generator is shown to stabilize the PPC from thermal decomposition. Copper ions were introduced into the PPC mixtures by bringing the polymer mixture into contact with copper metal. The metal was oxidized and dissolved into the PPC mixture. The dissolved copper interferes with the decomposition mechanism of PPC, raising its decomposition temperature. Thermogravimetric analysis shows that copper ions make PPC more stable by up to 50°C. Spectroscopic analysis indicates that copper ions may stabilize terminal carboxylic acid groups, inhibiting PPC decomposition. The change in thermal stability based on PPC exposure to patterned copper substrates was used to provide a self-aligned patterning method for PPC on copper traces without the need for an additional photopatterning registration step. Thermal decomposition of PPC is then used to create air isolation regions around the copper traces. The spatial resolution of the self-patterning PPC process is limited by the lateral diffusion of the copper ions within the PPC. The concentration profiles of copper within the PPC, patterning resolution, and temperature effects on the PPC decomposition have been studied.

  19. Neural image analysis for estimating aerobic and anaerobic decomposition of organic matter based on the example of straw decomposition

    NASA Astrophysics Data System (ADS)

    Boniecki, P.; Nowakowski, K.; Slosarz, P.; Dach, J.; Pilarski, K.

    2012-04-01

    The purpose of the project was to identify the degree of organic matter decomposition by means of a neural model based on graphical information derived from image analysis. Empirical data (photographs of compost content at various stages of maturation) were used to generate an optimal neural classifier (Boniecki et al. 2009, Nowakowski et al. 2009). The best classification properties were found in an RBF (Radial Basis Function) artificial neural network, which demonstrates that the process is non-linear.

  20. THE SPITZER SURVEY OF STELLAR STRUCTURE IN GALAXIES (S4G): MULTI-COMPONENT DECOMPOSITION STRATEGIES AND DATA RELEASE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salo, Heikki; Laurikainen, Eija; Laine, Jarkko

    The Spitzer Survey of Stellar Structure in Galaxies (S4G) is a deep 3.6 and 4.5 μm imaging survey of 2352 nearby (<40 Mpc) galaxies. We describe the S4G data analysis pipeline 4, which is dedicated to two-dimensional structural surface brightness decompositions of 3.6 μm images, using GALFIT3.0. Besides automatic 1-component Sérsic fits and 2-component Sérsic bulge + exponential disk fits, we present human-supervised multi-component decompositions, which include, when judged appropriate, a central point source, bulge, disk, and bar components. Comparison of the fitted parameters indicates that multi-component models are needed to obtain reliable estimates for the bulge Sérsic index and bulge-to-total light ratio (B/T), confirming earlier results. Here, we describe the preparations of input data done for decompositions, give examples of our decomposition strategy, and describe the data products released via IRSA and via our web page (www.oulu.fi/astronomy/S4G-PIPELINE4/MAIN). These products include all the input data and decomposition files in electronic form, making it easy to extend the decompositions to suit specific science purposes. We also provide our IDL-based visualization tools (GALFIDL) developed for displaying/running GALFIT decompositions, as well as our mask editing procedure (MASK-EDIT) used in data preparation. A detailed analysis of the bulge, disk, and bar parameters derived from multi-component decompositions will be published separately.
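    The building block of such decompositions, the Sérsic profile, is easy to evaluate directly. This is a generic sketch with toy parameters; the b_n formula used here is the common Capaccioli approximation, an assumption for the example rather than necessarily what GALFIT uses internally.

```python
import numpy as np

def sersic(r, I_e, r_e, n):
    """Sersic profile I(r) = I_e * exp(-b_n * ((r/r_e)**(1/n) - 1)).
    Uses the Capaccioli approximation b_n ~ 1.9992*n - 0.3271
    (reasonable for roughly 0.5 < n < 10)."""
    b_n = 1.9992 * n - 0.3271
    return I_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

r = np.linspace(0.1, 10.0, 5)
# n = 1 gives an exponential disk; n = 4 the de Vaucouleurs-like bulge.
disk = sersic(r, I_e=1.0, r_e=3.0, n=1.0)
bulge = sersic(r, I_e=1.0, r_e=1.0, n=4.0)
print(sersic(np.array([3.0]), 1.0, 3.0, 1.0))  # I(r_e) == I_e by construction
```

A bulge + disk decomposition then fits a sum of such components (plus a point source and bar where appropriate) to the observed surface brightness.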

  1. Multidisciplinary Optimization Methods for Aircraft Preliminary Design

    NASA Technical Reports Server (NTRS)

    Kroo, Ilan; Altus, Steve; Braun, Robert; Gage, Peter; Sobieski, Ian

    1994-01-01

    This paper describes a research program aimed at improved methods for multidisciplinary design and optimization of large-scale aeronautical systems. The research involves new approaches to system decomposition, interdisciplinary communication, and methods of exploiting coarse-grained parallelism for analysis and optimization. A new architecture, that involves a tight coupling between optimization and analysis, is intended to improve efficiency while simplifying the structure of multidisciplinary, computation-intensive design problems involving many analysis disciplines and perhaps hundreds of design variables. Work in two areas is described here: system decomposition using compatibility constraints to simplify the analysis structure and take advantage of coarse-grained parallelism; and collaborative optimization, a decomposition of the optimization process to permit parallel design and to simplify interdisciplinary communication requirements.

  2. Decomposition of Copper (II) Sulfate Pentahydrate: A Sequential Gravimetric Analysis.

    ERIC Educational Resources Information Center

    Harris, Arlo D.; Kalbus, Lee H.

    1979-01-01

    Describes an improved experiment of the thermal dehydration of copper (II) sulfate pentahydrate. The improvements described here are control of the temperature environment and a quantitative study of the decomposition reaction to a thermally stable oxide. Data will suffice to show sequential gravimetric analysis. (Author/SA)
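    The expected mass plateaus in such a sequential gravimetric analysis follow directly from molar masses. A quick check using standard atomic weights (values rounded; a worked illustration, not data from the experiment described):

```python
# Expected remaining-mass fractions for the thermal decomposition sequence
# CuSO4.5H2O -> CuSO4 -> CuO, computed from standard atomic weights.
M = {"Cu": 63.546, "S": 32.06, "O": 15.999, "H": 1.008}

m_pentahydrate = M["Cu"] + M["S"] + 4 * M["O"] + 5 * (2 * M["H"] + M["O"])
m_anhydrous = M["Cu"] + M["S"] + 4 * M["O"]
m_oxide = M["Cu"] + M["O"]

# Fraction of the starting mass remaining after each step.
frac_anhydrous = m_anhydrous / m_pentahydrate   # after losing 5 H2O (~64%)
frac_oxide = m_oxide / m_pentahydrate           # after decomposition to CuO (~32%)
print(round(frac_anhydrous, 3), round(frac_oxide, 3))
```

Comparing measured plateau masses against these fractions is what lets the sequential gravimetric data confirm each decomposition step.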

  3. Generalized decompositions of dynamic systems and vector Lyapunov functions

    NASA Astrophysics Data System (ADS)

    Ikeda, M.; Siljak, D. D.

    1981-10-01

    The notion of decomposition is generalized to provide more freedom in constructing vector Lyapunov functions for stability analysis of nonlinear dynamic systems. A generalized decomposition is defined as a disjoint decomposition of a system which is obtained by expanding the state-space of a given system. An inclusion principle is formulated for the solutions of the expansion to include the solutions of the original system, so that stability of the expansion implies stability of the original system. Stability of the expansion can then be established by standard disjoint decompositions and vector Lyapunov functions. The applicability of the new approach is demonstrated using the Lotka-Volterra equations.

  4. Origami interleaved tube cellular materials

    NASA Astrophysics Data System (ADS)

    Cheung, Kenneth C.; Tachi, Tomohiro; Calisch, Sam; Miura, Koryo

    2014-09-01

    A novel origami cellular material based on a deployable cellular origami structure is described. The structure is bi-directionally flat-foldable in two orthogonal (x and y) directions and is relatively stiff in the third orthogonal (z) direction. While such mechanical orthotropicity is well known in cellular materials with extruded two dimensional geometry, the interleaved tube geometry presented here consists of two orthogonal axes of interleaved tubes with high interfacial surface area and relative volume that changes with fold-state. In addition, the foldability still allows for fabrication by a flat lamination process, similar to methods used for conventional expanded two dimensional cellular materials. This article presents the geometric characteristics of the structure together with corresponding kinematic and mechanical modeling, explaining the orthotropic elastic behavior of the structure with classical dimensional scaling analysis.

  5. Performance Comparison of Orthogonal and Quasi-orthogonal Codes in Quasi-Synchronous Cellular CDMA Communication

    NASA Astrophysics Data System (ADS)

    Jos, Sujit; Kumar, Preetam; Chakrabarti, Saswat

    Orthogonal and quasi-orthogonal codes are an integral part of any DS-CDMA-based cellular system. Orthogonal codes are ideal for use in perfectly synchronous scenarios like downlink cellular communication. Quasi-orthogonal codes are preferred over orthogonal codes in uplink communication, where perfect synchronization cannot be achieved. In this paper, we compare orthogonal and quasi-orthogonal codes in the presence of timing synchronization error. This gives insight into the synchronization demands of DS-CDMA systems employing the two classes of sequences. The synchronization error considered is smaller than the chip duration. Monte-Carlo simulations have been carried out to verify the analytical and numerical results.

  6. TiO2 Immobilized on Manihot Carbon: Optimal Preparation and Evaluation of Its Activity in the Decomposition of Indigo Carmine

    PubMed Central

    Antonio-Cisneros, Cynthia M.; Dávila-Jiménez, Martín M.; Elizalde-González, María P.; García-Díaz, Esmeralda

    2015-01-01

    Applications of carbon-TiO2 materials have attracted attention in nanotechnology due to their synergistic effects. We report the immobilization of TiO2 on carbon prepared from residues of the plant Manihot, commercial TiO2 and glycerol. The objective was to obtain a moderate loading of the anatase phase by preserving the carbonaceous external surface and micropores of the composite. Two preparation methods were compared, including mixing dry precursors and immobilization using a glycerol slurry. The evaluation of the micropore blocking was performed using nitrogen adsorption isotherms. The results indicated that it was possible to use Manihot residues and glycerol to prepare an anatase-containing material with a basic surface and a significant SBET value. The activities of the prepared materials were tested in a decomposition assay of indigo carmine. The TiO2/carbon composite eliminated nearly 100% of the dye under UV irradiation using the optimal conditions found by a Taguchi L4 orthogonal array considering the specific surface, temperature and initial concentration. The reaction was monitored by UV-Vis spectrophotometry and LC-ESI-(Qq)-TOF-MS, enabling the identification of some intermediates. No isatin-5-sulfonic acid was detected after a 60 min photocatalytic reaction, and three sulfonated aromatic amines, including 4-amino-3-hydroxybenzenesulfonic acid, 2-(2-amino-5-sulfophenyl)-2-oxoacetic acid and 2-amino-5-sulfobenzoic acid, were present in the reaction mixture. PMID:25588214
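    The Taguchi L4 orthogonal array mentioned above is a four-run design accommodating up to three two-level factors. A minimal sketch of the array and a check of its defining balance property (the level assignments are illustrative, not the paper's actual experimental settings):

```python
import collections
import itertools

# Standard L4 (2^3) orthogonal array: 4 runs, 3 two-level factors.
# Columns could stand for specific surface, temperature and initial
# concentration at low (0) / high (1) levels -- illustrative only.
L4 = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

def is_orthogonal(array):
    """Every pair of columns must contain each level combination
    equally often (the balance property of orthogonal arrays)."""
    n_cols = len(array[0])
    for i, j in itertools.combinations(range(n_cols), 2):
        counts = collections.Counter((row[i], row[j]) for row in array)
        levels_i = {row[i] for row in array}
        levels_j = {row[j] for row in array}
        if set(counts) != set(itertools.product(levels_i, levels_j)):
            return False
        if len(set(counts.values())) != 1:
            return False
    return True

print(is_orthogonal(L4))   # True
```

    Because every pair of factor levels is balanced, main effects can be estimated from only four runs instead of the full 2^3 = 8 factorial.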

  7. Augmenting the decomposition of EMG signals using supervised feature extraction techniques.

    PubMed

    Parsaei, Hossein; Gangeh, Mehrdad J; Stashuk, Daniel W; Kamel, Mohamed S

    2012-01-01

    Electromyographic (EMG) signal decomposition is the process of resolving an EMG signal into its constituent motor unit potential trains (MUPTs). In this work, the possibility of improving decomposition results using two supervised feature extraction methods, Fisher discriminant analysis (FDA) and supervised principal component analysis (SPCA), is explored. Using the MUP labels provided by a decomposition-based quantitative EMG system as training data for FDA and SPCA, the MUPs are transformed into a new feature space such that the MUPs of a single MU become as close as possible to each other while those created by different MUs become as far apart as possible. The MUPs are then reclassified using a certainty-based classification algorithm. Evaluation results using 10 simulated EMG signals comprising 3-11 MUPTs demonstrate that FDA and SPCA on average improve the decomposition accuracy by 6%. The improvement for the most difficult-to-decompose signal is about 12%, which shows the proposed approach is most beneficial in the decomposition of more complex signals.
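    The Fisher discriminant step described above projects MUP features so that samples from the same motor unit cluster together while different units separate. A minimal two-class sketch on synthetic data (the feature vectors and cluster parameters are made up for illustration; the paper's system works on real MUP waveform features and multiple motor units):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-ins for MUP feature vectors from two motor units
mu_a = rng.normal([0.0, 0.0], 0.5, size=(50, 2))
mu_b = rng.normal([2.0, 1.0], 0.5, size=(50, 2))

def fisher_direction(X0, X1):
    """Two-class Fisher discriminant direction w = Sw^{-1} (m1 - m0)."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # pooled within-class scatter matrix
    Sw = (X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)
    return np.linalg.solve(Sw, m1 - m0)

w = fisher_direction(mu_a, mu_b)
# Project onto w and reclassify by the midpoint of the projected class means
proj_a, proj_b = mu_a @ w, mu_b @ w
threshold = (proj_a.mean() + proj_b.mean()) / 2.0
accuracy = 0.5 * (np.mean(proj_a < threshold) + np.mean(proj_b > threshold))
print(accuracy)
```

    The paper's pipeline additionally uses a certainty-based classifier in the transformed space; the midpoint rule here is just the simplest possible stand-in.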

  8. TENSOR DECOMPOSITIONS AND SPARSE LOG-LINEAR MODELS

    PubMed Central

    Johndrow, James E.; Bhattacharya, Anirban; Dunson, David B.

    2017-01-01

    Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. We derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions. PMID:29332971

  9. Evaluating litter decomposition and soil organic matter dynamics in earth system models: contrasting analysis of long-term litter decomposition and steady-state soil carbon

    NASA Astrophysics Data System (ADS)

    Bonan, G. B.; Wieder, W. R.

    2012-12-01

    Decomposition is a large term in the global carbon budget, but models of the earth system that simulate carbon cycle-climate feedbacks are largely untested with respect to litter decomposition. Here, we demonstrate a protocol to document model performance with respect to both long-term (10-year) litter decomposition and steady-state soil carbon stocks. First, we test the soil organic matter parameterization of the Community Land Model version 4 (CLM4), the terrestrial component of the Community Earth System Model, with data from the Long-term Intersite Decomposition Experiment Team (LIDET). The LIDET dataset is a 10-year study of litter decomposition at multiple sites across North America and Central America. We show results for 10-year litter decomposition simulations compared with LIDET for 9 litter types and 20 sites in tundra, grassland, and boreal, conifer, deciduous, and tropical forest biomes. We show additional simulations with DAYCENT, a version of the CENTURY model, to ask how well an established ecosystem model matches the observations. The results reveal a large discrepancy between the laboratory microcosm studies used to parameterize the CLM4 litter decomposition and the LIDET field study. Simulated carbon loss is more rapid than the observations across all sites, despite using the LIDET-provided climatic decomposition index to constrain temperature and moisture effects on decomposition. Nitrogen immobilization is similarly biased high. Closer agreement with the observations requires much lower decomposition rates, obtained with the assumption that nitrogen severely limits decomposition. DAYCENT better replicates the observations, for both carbon mass remaining and nitrogen, without requiring nitrogen limitation of decomposition. Second, we compare global observationally-based datasets of soil carbon with simulated steady-state soil carbon stocks for both models. The model simulations were forced with observationally-based estimates of annual litterfall and a model-derived climatic decomposition index. While comparison with the LIDET 10-year litterbag study reveals sharp contrasts between CLM4 and DAYCENT, simulations of steady-state soil carbon show less difference between the models. Both CLM4 and DAYCENT significantly underestimate soil carbon. Sensitivity analyses highlight causes of the low soil carbon bias. The terrestrial biogeochemistry of earth system models must be critically tested with observations, and the consequences of particular model choices must be documented. Long-term litter decomposition experiments such as LIDET provide a real-world, process-oriented benchmark to evaluate models and can critically inform model development. Analysis of steady-state soil carbon estimates reveals additional, but here different, inferences about model performance.

  10. Application-Dedicated Selection of Filters (ADSF) using covariance maximization and orthogonal projection.

    PubMed

    Hadoux, Xavier; Kumar, Dinesh Kant; Sarossy, Marc G; Roger, Jean-Michel; Gorretta, Nathalie

    2016-05-19

    Visible and near-infrared (Vis-NIR) spectra are generated by the combination of numerous low-resolution features. Spectral variables are thus highly correlated, which can cause problems for selecting the most appropriate ones for a given application. Decomposition bases such as Fourier or wavelet generally help highlight important spectral features, but are by nature constrained to have both positive and negative components. In addition to complicating the interpretability of the selected features, this impedes their use in application-dedicated sensors. In this paper we propose a new method for feature selection: Application-Dedicated Selection of Filters (ADSF). This method relaxes the shape constraint by enabling the selection of any type of user-defined custom feature. By considering only relevant features, based on the underlying nature of the data, high regularization of the final model can be obtained, even in the small-sample-size context often encountered in spectroscopic applications. For larger-scale deployment of application-dedicated sensors, these predefined feature constraints can lead to application-specific optical filters, e.g., lowpass, highpass, bandpass or bandstop filters with positive-only coefficients. In a similar fashion to Partial Least Squares, ADSF successively selects features using covariance maximization and deflates their influence using orthogonal projection, in order to optimally tune the selection to the data with limited redundancy. ADSF is well suited for spectroscopic data as it can deal with large numbers of highly correlated variables in supervised learning, even with many correlated responses. Copyright © 2016 Elsevier B.V. All rights reserved.
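    The select-then-deflate loop described above can be sketched as follows. This is a simplified illustration in the spirit of ADSF/PLS, with hypothetical single-band indicator filters standing in for user-defined optical filter shapes; it is not the authors' exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_vars = 80, 30
X = rng.normal(size=(n_samples, n_vars))
# Response driven by variables 5 and 20 plus noise
y = X[:, 5] + 0.5 * X[:, 20] + 0.1 * rng.normal(size=n_samples)

# Hypothetical candidate "filters": single-band indicators standing in
# for user-defined optical filter shapes
filters = np.eye(n_vars)

def adsf_like(X, y, filters, k=2):
    """Greedy selection: pick the filter whose response has maximal
    |covariance| with y, deflate X by orthogonal projection, repeat."""
    Xd = X - X.mean(axis=0)
    yd = y - y.mean()
    chosen = []
    for _ in range(k):
        scores = Xd @ filters.T                  # all filter responses
        best = int(np.argmax(np.abs(scores.T @ yd)))
        chosen.append(best)
        t = Xd @ filters[best]                   # score of the chosen filter
        Xd = Xd - np.outer(t, t @ Xd) / (t @ t)  # deflate its influence
    return chosen

selected = adsf_like(X, y, filters)
print(selected)
```

    The deflation step removes the selected filter's contribution from the data, so the next selection is tuned to what remains, limiting redundancy between chosen filters.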

  11. Hydrological signals in height and gravity in northeastern Italy inferred from principal components analysis

    NASA Astrophysics Data System (ADS)

    Zerbini, S.; Raicich, F.; Richter, B.; Gorini, V.; Errico, M.

    2010-04-01

    This work describes a study of GPS heights, gravity and hydrological time series collected by stations located in northeastern Italy. During the last 12 years, changes in the long-term behavior of the GPS height and gravity time series are observed. In particular, starting in 2004-2005, a height increase is observed over the whole area. The temporal and spatial variability of these parameters has been studied, as well as that of key hydrological variables, namely precipitation, hydrological balance and water table, by using Empirical Orthogonal Function (EOF) analysis. The coupled variability between the GPS heights and the hydrological balance and precipitation data has been investigated by means of the Singular Value Decomposition (SVD) approach. Significant common patterns in the spatial and temporal variability of these parameters have been recognized. In particular, hydrology-induced variations are clearly observable starting in 2002-2003 in the southern part of the Po Plain for the longest time series, and from 2004-2005 over the whole area. These findings, obtained by means of purely mathematical approaches, are supported by sound physical interpretation suggesting that the climate-related fluctuations in the regional/local hydrological regime are one of the main contributors to the observed variations. A regional-scale signal has been identified in the GPS station heights; it is characterized by the opposite behavior of the southern and northern stations in response to the hydrological forcing. At Medicina, in the southern Po Plain, the EOF analysis has shown a marked common signal between the GPS heights and the Superconducting Gravimeter (SG) data over both long and short periods.
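    EOF analysis of the kind used above amounts to an SVD of the anomaly (mean-removed) data matrix: the right singular vectors are the spatial patterns (EOFs), the scaled left singular vectors are the principal-component time series, and the squared singular values give the variance explained. A minimal sketch on synthetic station data with one dominant shared mode (all numbers illustrative, not the Italian GPS/gravity series):

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(120)                            # e.g. 120 monthly samples
mode = np.sin(2 * np.pi * t / 12.0)           # one shared seasonal signal
loadings = np.array([1.0, 0.8, -0.5, -0.9])   # spatial pattern over 4 stations
data = np.outer(mode, loadings) + 0.1 * rng.normal(size=(120, 4))

anomalies = data - data.mean(axis=0)          # remove the temporal mean
U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)
eofs = Vt                                     # rows: spatial patterns (EOFs)
pcs = U * s                                   # columns: PC time series
var_frac = s**2 / np.sum(s**2)                # variance explained per EOF
print(var_frac)
```

    With one strong common signal, the leading EOF captures nearly all the variance; in the real data, comparing leading EOFs across GPS, gravity and hydrological fields is what reveals their coupled variability.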

  12. Introducing Network Analysis into Science Education: Methodological Research Examining Secondary School Students' Understanding of "Decomposition"

    ERIC Educational Resources Information Center

    Schizas, Dimitrios; Katrana, Evagelia; Stamou, George

    2013-01-01

    In the present study we used the technique of word association tests to assess students' cognitive structures during the learning period. In particular, we tried to investigate what students living near a protected area in Greece (Dadia forest) knew about the phenomenon of decomposition. Decomposition was chosen as a stimulus word because it…

  13. Microbial ecological succession during municipal solid waste decomposition.

    PubMed

    Staley, Bryan F; de Los Reyes, Francis L; Wang, Ling; Barlaz, Morton A

    2018-04-28

    The decomposition of landfilled refuse proceeds through distinct phases, each defined by varying environmental factors such as volatile fatty acid concentration, pH, and substrate quality. The succession of microbial communities in response to these changing conditions was monitored in a laboratory-scale simulated landfill to minimize measurement difficulties experienced at field scale. 16S rRNA gene sequences retrieved at separate stages of decomposition showed significant succession in both Bacteria and methanogenic Archaea. A majority of Bacteria sequences in landfilled refuse belong to members of the phylum Firmicutes, while Proteobacteria levels fluctuated and Bacteroidetes levels increased as decomposition proceeded. Roughly 44% of archaeal sequences retrieved under conditions of low pH and high acetate were strictly hydrogenotrophic (Methanomicrobiales, Methanobacteriales). Methanosarcina was present at all stages of decomposition. Correspondence analysis showed bacterial population shifts were attributed to carboxylic acid concentration and solids hydrolysis, while archaeal populations were affected to a higher degree by pH. T-RFLP analysis showed specific taxonomic groups responded differently and exhibited unique responses during decomposition, suggesting that species composition and abundance within Bacteria and Archaea are highly dynamic. This study shows landfill microbial demographics are highly variable across both spatial and temporal transects.

  14. Disentangling Intracycle Interferences in Photoelectron Momentum Distributions Using Orthogonal Two-Color Laser Fields

    NASA Astrophysics Data System (ADS)

    Xie, Xinhua; Wang, Tian; Yu, ShaoGang; Lai, XuanYang; Roither, Stefan; Kartashov, Daniil; Baltuška, Andrius; Liu, XiaoJun; Staudte, André; Kitzler, Markus

    2017-12-01

    We use orthogonally polarized two-color (OTC) laser pulses to separate quantum paths in the multiphoton ionization of Ar atoms. Our OTC pulses consist of 400 and 800 nm light at a relative intensity ratio of 10:1. We find a hitherto unobserved interference in the photoelectron momentum distribution, which exhibits a strong dependence on the relative phase of the OTC pulse. Analysis of model calculations reveals that the interference is caused by quantum pathways from nonadjacent quarter cycles.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Banerjee, Tirtha; Vercauteren, Nikki; Muste, Marian

    Flume experiments with particle imaging velocimetry (PIV) were conducted recently to study a complex flow problem where wind shear acts on the surface of a static water body in presence of flexible emergent vegetation and induces a rich dynamics of wave–turbulence–vegetation interaction inside the water body without any gravitational gradient. The experiments were aimed at mimicking realistic vegetated wetlands and the present work is targeted to improve the understanding of the coherent structures associated with this interaction by employing a combination of techniques such as quadrant analysis, proper orthogonal decomposition (POD), Shannon entropy and mutual information content (MIC). The turbulent transfer of momentum is found to be dominated by organized motions such as sweeps and ejections, while the wave component of vertical momentum transport does not show any such preference. Furthermore, by reducing the data using POD we see that wave energy for large flow depths and turbulent energy for all water depths is concentrated among the top few modes, which can allow development of simple reduced order models. Vegetation flexibility is found to induce several roll type structures, however if the vegetation density is increased, drag effects dominate over flexibility and organize the flow. The interaction between waves and turbulence is also found to be highest among flexible sparse vegetation. But, rapidly evolving parts of the flow such as the air–water interface reduces wave–turbulence interaction.
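    The POD data reduction used above can be sketched with the snapshot method: collect the flow snapshots as columns of a matrix, take its SVD, and keep the few modes that capture most of the energy. A synthetic illustration (two made-up spatial structures, not the flume PIV data):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 100)          # spatial grid
t = np.linspace(0.0, 1.0, 40)           # time instants
# Synthetic snapshot matrix: two spatial structures with time-varying weights
snapshots = (np.outer(np.sin(np.pi * x), np.cos(2 * np.pi * t))
             + 0.3 * np.outer(np.sin(2 * np.pi * x), np.sin(4 * np.pi * t)))

# POD via the thin SVD of the snapshot matrix
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99)) + 1   # modes needed for 99% energy

# Rank-r reconstruction and its relative error
recon = U[:, :r] @ np.diag(s[:r]) @ Vt[:r]
err = np.linalg.norm(snapshots - recon) / np.linalg.norm(snapshots)
print(r, err)
```

    The energy concentration among the top few modes is exactly the property the study exploits when it notes that wave and turbulent energy collapse onto a handful of POD modes, enabling reduced-order models.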

  16. POD Model Reconstruction for Gray-Box Fault Detection

    NASA Technical Reports Server (NTRS)

    Park, Han; Zak, Michail

    2007-01-01

    Proper orthogonal decomposition (POD) is the mathematical basis of a method of constructing low-order mathematical models for the "gray-box" fault-detection algorithm that is a component of a diagnostic system known as beacon-based exception analysis for multi-missions (BEAM). POD has been successfully applied in reducing computational complexity by generating simple models that can be used for control and simulation for complex systems such as fluid flows. In the present application to BEAM, POD brings the same benefits to automated diagnosis. BEAM is a method of real-time or offline, automated diagnosis of a complex dynamic system. The gray-box approach makes it possible to utilize incomplete or approximate knowledge of the dynamics of the system that one seeks to diagnose. In the gray-box approach, a deterministic model of the system is used to filter a time series of system sensor data to remove the deterministic components of the time series from further examination. What is left after the filtering operation is a time series of residual quantities that represent the unknown (or at least unmodeled) aspects of the behavior of the system. Stochastic modeling techniques are then applied to the residual time series. The procedure for detecting abnormal behavior of the system then becomes one of looking for statistical differences between the residual time series and the predictions of the stochastic model.

  17. Observation and analysis of in vivo vocal fold tissue instabilities produced by nonlinear source-filter coupling: A case study

    PubMed Central

    Zañartu, Matías; Mehta, Daryush D.; Ho, Julio C.; Wodicka, George R.; Hillman, Robert E.

    2011-01-01

    Different source-related factors can lead to vocal fold instabilities and bifurcations referred to as voice breaks. Nonlinear coupling in phonation suggests that changes in acoustic loading can also be responsible for this unstable behavior. However, no in vivo visualization of tissue motion during these acoustically induced instabilities has been reported. Simultaneous recordings of laryngeal high-speed videoendoscopy, acoustics, aerodynamics, electroglottography, and neck skin acceleration are obtained from a participant consistently exhibiting voice breaks during pitch glide maneuvers. Results suggest that acoustically induced and source-induced instabilities can be distinguished at the tissue level. Differences in vibratory patterns are described through kymography and phonovibrography; measures of glottal area, open/speed quotient, and amplitude/phase asymmetry; and empirical orthogonal function decomposition. Acoustically induced tissue instabilities appear abruptly and exhibit irregular vocal fold motion after the bifurcation point, whereas source-induced ones show a smoother transition. These observations are also reflected in the acoustic and acceleration signals. Added aperiodicity is observed after the acoustically induced break, and harmonic changes appear prior to the bifurcation for the source-induced break. Both types of breaks appear to be subcritical bifurcations due to the presence of hysteresis and amplitude changes after the frequency jumps. These results are consistent with previous studies and the nonlinear source-filter coupling theory. PMID:21303014

  18. Response of a tethered aerostat to simulated turbulence

    NASA Astrophysics Data System (ADS)

    Stanney, Keith A.; Rahn, Christopher D.

    2006-09-01

    Aerostats are lighter-than-air vehicles tethered to the ground by a cable and used for broadcasting, communications, surveillance, and drug interdiction. The dynamic response of tethered aerostats subject to extreme atmospheric turbulence often dictates survivability. This paper develops a theoretical model that predicts the planar response of a tethered aerostat subject to atmospheric turbulence and simulates the response to 1000 simulated hurricane scale turbulent time histories. The aerostat dynamic model assumes the aerostat hull to be a rigid body with non-linear fluid loading, instantaneous weathervaning for planar response, and a continuous tether. Galerkin's method discretizes the coupled aerostat and tether partial differential equations to produce a non-linear initial value problem that is integrated numerically given initial conditions and wind inputs. The proper orthogonal decomposition theorem generates, based on Hurricane Georges wind data, turbulent time histories that possess the sequential behavior of actual turbulence, are spectrally accurate, and have non-Gaussian density functions. The generated turbulent time histories are simulated to predict the aerostat response to severe turbulence. The resulting probability distributions for the aerostat position, pitch angle, and confluence point tension predict the aerostat behavior in high gust environments. The dynamic results can be up to twice as large as a static analysis indicating the importance of dynamics in aerostat modeling. The results uncover a worst case wind input consisting of a two-pulse vertical gust.

  19. Modal analysis of annual runoff volume and sediment load in the Yangtze river-lake system for the period 1956-2013.

    PubMed

    Chen, Huai; Zhu, Lijun; Wang, Jianzhong; Fan, Hongxia; Wang, Zhihuan

    2017-07-01

    This study focuses on detecting trends in annual runoff volume and sediment load in the Yangtze river-lake system. Time series of annual runoff volume and sediment load at 19 hydrological gauging stations for the period 1956-2013 were collected. Based on the Mann-Kendall test at the 1% significance level, annual sediment loads in the Yangtze River, the Dongting Lake and the Poyang Lake were found to have significantly descending trends. The power spectrum estimation indicated that predominant oscillations with periods of 8 and 20 years are embedded in the runoff volume series, probably related to the El Niño Southern Oscillation (2-7 years) and the Pacific Decadal Oscillation (20-30 years). Based on dominant components (capturing more than roughly 90% of the total energy) extracted by the proper orthogonal decomposition method, total change ratios (CRT) of runoff volume and sediment load during the last 58 years were evaluated. For sediment load, the mean CRT value in the Yangtze River is about -65%, and those in the Dongting Lake and the Poyang Lake are -92.2% and -87.9% respectively. In particular, the CRT value of the sediment load in the channel inflow of the Dongting Lake is as low as -99.7%. The Three Gorges Dam has intercepted a large amount of sediment load and decreased the sediment load downstream.
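    The Mann-Kendall test used above is a rank-based trend test: it counts concordant minus discordant pairs in the series and normalizes that count by its variance under the no-trend null. A minimal sketch without tie correction (the series is synthetic, not the Yangtze records):

```python
import math

def mann_kendall_z(series):
    """Mann-Kendall trend statistic Z (no tie correction; a sketch)."""
    n = len(series)
    s = sum(
        (series[j] > series[i]) - (series[j] < series[i])
        for i in range(n - 1)
        for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        return (s - 1) / math.sqrt(var_s)
    if s < 0:
        return (s + 1) / math.sqrt(var_s)
    return 0.0

declining = [100.0 - 1.5 * k for k in range(30)]   # monotone decrease
z = mann_kendall_z(declining)
print(z)   # well below -2.58, i.e. significant at the 1% level
```

    At the 1% level used in the study, |Z| > 2.58 (two-sided) flags a significant trend; real hydrological series with ties would need the tie-corrected variance.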

  20. Degradation of folic acid wastewater by electro-Fenton with three-dimensional electrode and its kinetic study

    PubMed Central

    Xiaochao, Gu; Jin, Tian; Xiaoyun, Li; Bin, Zhou; Xujing, Zheng; Jin, Xu

    2018-01-01

    The three-dimensional electro-Fenton method was used in the folic acid wastewater pretreatment process. In this study, we researched the degradation of folic acid and the effects of different parameters such as the air sparging rate, current density, pH and reaction time on chemical oxygen demand (COD) removal in folic acid wastewater. A four-level and four-factor orthogonal test was designed and optimal reaction conditions to pretreat folic acid wastewater by three-dimensional electrode were determined: air sparge rate 0.75 l min−1, current density 10.26 mA cm−2, pH 5 and reaction time 90 min. Under these conditions, the removal of COD reached 94.87%. LC-MS results showed that the electro-Fenton method led to an initial folic acid decomposition into p-aminobenzoyl-glutamic acid (PGA) and xanthopterin (XA); then part of the XA was oxidized to pterine-6-carboxylic acid (PCA) and the remaining part of XA was converted to pterin and carbon dioxide. The kinetics analysis of the folic acid degradation process during pretreatment was carried out by using simulated folic acid wastewater, and it could be proved that the degradation of folic acid by using the three-dimensional electro-Fenton method was a second-order reaction process. This study provided a reference for industrial folic acid treatment. PMID:29410807
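    The second-order kinetics conclusion above rests on the integrated rate law: for dC/dt = -kC^2, 1/C(t) = 1/C0 + k t, so a plot of 1/C against time is linear with slope k. A quick numerical check with illustrative values (k_true and C0 below are made up, not the paper's measured constants):

```python
import numpy as np

# Integrated second-order rate law: for dC/dt = -k C^2,
# 1/C(t) = 1/C0 + k t, so 1/C vs t is linear with slope k.
# k_true and C0 are illustrative values, not the paper's constants.
k_true, C0 = 0.02, 50.0
t = np.linspace(0.0, 90.0, 10)            # reaction time, e.g. minutes
C = 1.0 / (1.0 / C0 + k_true * t)         # concentration over time

slope, intercept = np.polyfit(t, 1.0 / C, 1)
print(slope, 1.0 / intercept)             # recovers k_true and C0
```

    In practice the linearity of the 1/C-versus-t plot (rather than ln C or C itself) is what identifies the degradation as second order.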

  1. Application of least median of squared orthogonal distance (LMD) and LMD-based reweighted least squares (RLS) methods on the stock-recruitment relationship

    NASA Astrophysics Data System (ADS)

    Wang, Yan-Jun; Liu, Qun

    1999-03-01

    Analysis of stock-recruitment (SR) data is most often done by fitting various SR relationship curves to the data. Fish population dynamics data often have stochastic variations and measurement errors, which usually result in a biased regression analysis. This paper presents a robust regression method, least median of squared orthogonal distance (LMD), which is insensitive to abnormal values in the dependent and independent variables in a regression analysis. Outliers that have significantly different variance from the rest of the data can be identified in a residual analysis. Then, the least squares (LS) method is applied to the SR data with defined outliers being down weighted. The application of LMD and LMD-based Reweighted Least Squares (RLS) method to simulated and real fisheries SR data is explored.
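    The LMD criterion above minimizes the median of squared orthogonal distances, which tolerates gross outliers that would wreck an ordinary least-squares fit. A sketch via random search over lines through sample point pairs (synthetic linear data with planted outliers, not fisheries SR observations, and a simplification of the actual estimator):

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0.0, 10.0, 40)
y = 2.0 * x + 1.0 + 0.2 * rng.normal(size=40)   # underlying line y = 2x + 1
y[3], y[30] = 60.0, -40.0                        # two gross outliers

def lmd_line(x, y, n_trials=500, rng=rng):
    """Pick the line minimizing the median squared orthogonal distance,
    searching over lines through random pairs of sample points (a sketch
    of the LMD idea, not the paper's exact estimator)."""
    best, best_med = (0.0, 0.0), np.inf
    for _ in range(n_trials):
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[i] == x[j]:
            continue
        a = (y[j] - y[i]) / (x[j] - x[i])            # candidate slope
        b = y[i] - a * x[i]                          # candidate intercept
        d2 = (y - a * x - b) ** 2 / (a**2 + 1.0)     # squared orthogonal distances
        med = np.median(d2)
        if med < best_med:
            best, best_med = (a, b), med
    return best

a, b = lmd_line(x, y)
print(a, b)   # close to the true slope 2 and intercept 1 despite outliers
```

    Points whose distance to the fitted line is far above the median can then be flagged as outliers and down-weighted in the subsequent reweighted least-squares (RLS) pass, as the abstract describes.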

  2. Compositions of orthogonal glutamyl-tRNA and aminoacyl-tRNA synthetase pairs and uses thereof

    DOEpatents

    Anderson, J Christopher [San Francisco, CA; Schultz, Peter G [La Jolla, CA; Santoro, Stephen [Cambridge, MA

    2009-05-05

    Compositions and methods of producing components of protein biosynthetic machinery that include glutamyl orthogonal tRNAs, glutamyl orthogonal aminoacyl-tRNA synthetases, and orthogonal pairs of glutamyl tRNAs/synthetases are provided. Methods for identifying these orthogonal pairs are also provided along with methods of producing proteins using these orthogonal pairs.

  3. A Posteriori Error Analysis and Uncertainty Quantification for Adaptive Multiscale Operator Decomposition Methods for Multiphysics Problems

    DTIC Science & Technology

    2014-04-01

    Barrier methods for critical exponent problems in geometric analysis and mathematical physics, J. Erway and M. Holst, submitted for publication. TR-14-33: A Posteriori Error Analysis and Uncertainty Quantification for Adaptive Multiscale Operator Decomposition Methods for Multiphysics Problems. Approved for public release; distribution is unlimited. April 2014. HDTRA1-09-1-0036. Donald Estep and Michael

  4. A decomposition model and voxel selection framework for fMRI analysis to predict neural response of visual stimuli.

    PubMed

    Raut, Savita V; Yadav, Dinkar M

    2018-03-28

    This paper presents an fMRI signal analysis methodology using geometric mean curve decomposition (GMCD) and a mutual information-based voxel selection framework. Previously, fMRI signal analysis has been conducted using the empirical mean curve decomposition (EMCD) model and voxel selection on the raw fMRI signal. The former loses the frequency component, while the latter suffers from signal redundancy. Both challenges are addressed by our methodology, in which the frequency component is retained by decomposing the raw fMRI signal using the geometric mean rather than the arithmetic mean, and voxels are selected from the EMCD signal using GMCD components rather than the raw fMRI signal. The proposed methodology is adopted for predicting the neural response. Experiments are conducted on the openly available fMRI data of six subjects, and comparisons are made with existing decomposition models and voxel selection frameworks. Subsequently, the effect of the number of selected voxels and the selection constraints is analyzed. The comparative results and the analysis demonstrate the superiority and reliability of the proposed methodology.

  5. Gene features selection for three-class disease classification via multiple orthogonal partial least square discriminant analysis and S-plot using microarray data.

    PubMed

    Yang, Mingxing; Li, Xiumin; Li, Zhibin; Ou, Zhimin; Liu, Ming; Liu, Suhuan; Li, Xuejun; Yang, Shuyu

    2013-01-01

    DNA microarray analysis is characterized by obtaining a large number of gene variables from a small number of observations. Cluster analysis is widely used to analyze DNA microarray data to make classification and diagnosis of disease. Because there are so many irrelevant and insignificant genes in a dataset, a feature selection approach must be employed in data analysis. The performance of cluster analysis of this high-throughput data depends on whether the feature selection approach chooses the most relevant genes associated with disease classes. Here we proposed a new method using multiple Orthogonal Partial Least Squares-Discriminant Analysis (mOPLS-DA) models and S-plots to select the most relevant genes to conduct three-class disease classification and prediction. We tested our method using Golub's leukemia microarray data. For three classes with subtypes, we proposed hierarchical orthogonal partial least squares-discriminant analysis (OPLS-DA) models and S-plots to select features for two main classes and their subtypes. For three classes in parallel, we employed three OPLS-DA models and S-plots to choose marker genes for each class. The power of feature selection to classify and predict three-class disease was evaluated using cluster analysis. Further, the general performance of our method was tested using four public datasets and compared with those of four other feature selection methods. The results revealed that our method effectively selected the most relevant features for disease classification and prediction, and its performance was better than that of the other methods.

  6. Critical Analysis of Nitramine Decomposition Data: Activation Energies and Frequency Factors for HMX and RDX Decomposition

    DTIC Science & Technology

    1985-09-01

    The activation energies (larger than the net energies of reaction for the same transitions) represent energy needed for "freeing up" of HMX or RDX molecules. Michael A. Schroeder, September 1985. Approved for public release; distribution unlimited.

  7. About decomposition approach for solving the classification problem

    NASA Astrophysics Data System (ADS)

    Andrianova, A. A.

    2016-11-01

    This article describes the application of an algorithm using decomposition methods to solve the binary classification problem of constructing a linear classifier based on the Support Vector Machine method. Decomposition reduces the volume of calculations, in particular by opening up possibilities to build parallel versions of the algorithm, which is a very important advantage for solving problems with big data. The results of computational experiments conducted using the decomposition approach are analyzed. The experiments use a known data set for the binary classification problem.

  8. GHGs and air pollutants embodied in China's international trade: Temporal and spatial index decomposition analysis.

    PubMed

    Liu, Zhengyan; Mao, Xianqiang; Song, Peng

    2017-01-01

    Temporal index decomposition analysis and spatial index decomposition analysis were applied to understand the driving forces of the emissions embodied in China's exports and net exports during 2002-2011. The accumulated emissions embodied in exports accounted for approximately 30% of total emissions in China; although the contribution of the sectoral total emissions intensity (technique effect) declined, the scale effect was largely responsible for the mounting emissions associated with exports, and the composition effect played a largely insignificant role. Calculations of the emissions embodied in net exports suggest that China is generally in an environmentally inferior position compared with its major trade partners. The differences in economy-wide emission intensities between China and its major trade partners were the biggest contributor to this situation, and the trade balance effect played a less important role. However, a lower degree of specialization in pollution-intensive products in exports than in imports helped to slightly reduce the emissions embodied in net exports. The temporal index decomposition analysis results suggest that China should take effective measures to optimize its export and supply-side structure and reduce total emissions intensity. The spatial index decomposition analysis suggests that a more aggressive import policy would be useful for curbing domestic and global emissions, and that the transfer of advanced production and emission control technologies from developed to developing countries should be a compulsory global environmental policy option to mitigate the possible leakage of pollution emissions caused by international trade.
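    Index decomposition of this kind is commonly carried out with the logarithmic-mean Divisia index (LMDI-I), which splits an emissions change exactly into scale, composition, and technique effects. A hedged sketch with invented three-sector numbers (the LMDI form is standard; the data are illustrative only):

```python
import numpy as np

def logmean(a, b):
    """Logarithmic mean L(a, b) = (a - b) / (ln a - ln b), with L(a, a) = a."""
    return np.where(np.isclose(a, b), a, (a - b) / (np.log(a) - np.log(b)))

# Hypothetical 3-sector export data for a base year (0) and end year (T)
Q0, QT = 100.0, 180.0                    # total export volume (scale)
s0 = np.array([0.5, 0.3, 0.2])           # sector shares (composition)
sT = np.array([0.45, 0.30, 0.25])
f0 = np.array([2.0, 5.0, 1.0])           # emissions per unit output (technique)
fT = np.array([1.6, 4.0, 0.9])

E0, ET = Q0 * s0 * f0, QT * sT * fT      # sectoral embodied emissions
L = logmean(ET, E0)

scale_effect       = (L * np.log(QT / Q0)).sum()
composition_effect = (L * np.log(sT / s0)).sum()
technique_effect   = (L * np.log(fT / f0)).sum()
total_change = ET.sum() - E0.sum()
```

    LMDI-I is exactly additive: because ln(E_iT/E_i0) splits into the three factor ratios, the three effects sum identically to the observed change, with no residual term.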

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karaulanov, Todor; Savukov, Igor; Kim, Young Jin

    We constructed a spin-exchange relaxation-free (SERF) magnetometer with a small angle between the pump and probe beams, facilitating a multi-channel design with a flat pancake cell. This configuration provides almost complete overlap of the beams in the cell and prevents the pump beam from entering the probe detection channel. By coupling the lasers in multi-mode fibers, without an optical isolator or field modulation, we demonstrate a sensitivity of 10 fT/√Hz for frequencies between 10 Hz and 100 Hz. In addition to the experimental study of sensitivity, we present a theoretical analysis of the SERF magnetometer response to magnetic fields for small-angle and parallel-beam configurations, and show that at optimal DC offset fields the magnetometer response is comparable to that in the orthogonal-beam configuration. Based on the analysis, we also derive fundamental and probe-limited sensitivities for the arbitrary non-orthogonal geometry. The expected practical and fundamental sensitivities are of the same order as those in the orthogonal geometry. As a result, we anticipate that our design will be useful for magnetoencephalography (MEG) and magnetocardiography (MCG) applications.

  10. Data-driven Inference and Investigation of Thermosphere Dynamics and Variations

    NASA Astrophysics Data System (ADS)

    Mehta, P. M.; Linares, R.

    2017-12-01

    This paper presents a methodology for data-driven inference and investigation of thermosphere dynamics and variations. The approach uses data-driven modal analysis to extract the most energetic modes of variation for neutral thermospheric species using proper orthogonal decomposition, where the time-independent modes or basis represent the dynamics and the time-dependent coefficients or amplitudes represent the model parameters. The data-driven modal analysis approach, combined with sparse, discrete observations, is used to infer amplitudes for the dynamic modes and to calibrate the energy content of the system. In this work, two different data types, namely the number density measurements from TIMED/GUVI and the mass density measurements from CHAMP/GRACE, are simultaneously ingested for an accurate and self-consistent specification of the thermosphere. The assimilation process is achieved with a non-linear least squares solver and allows estimation/tuning of the model parameters or amplitudes rather than the drivers. In this work, we use the Naval Research Lab's MSIS model to derive the most energetic modes for six different species: He, O, N2, O2, H, and N. We examine the dominant drivers of variations for helium in MSIS and observe that seasonal latitudinal variation accounts for about 80% of the dynamic energy, with a strong preference of helium for the winter hemisphere. We also observe enhanced helium presence near the poles at GRACE altitudes during periods of low solar activity (Feb 2007), as previously deduced. We will also examine the storm-time response of helium derived from observations. The results are expected to be useful in tuning/calibration of physics-based models.
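    The proper orthogonal decomposition step described above reduces, in practice, to a singular value decomposition of mean-removed snapshots: the left singular vectors are the time-independent spatial modes, and the scaled right singular vectors are the time-dependent amplitudes. A small self-contained sketch on a synthetic density field (the field and its two coherent structures are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
nx, nt = 64, 200
x = np.linspace(0, 2 * np.pi, nx)
t = np.linspace(0, 10, nt)

# Synthetic "species density" field: two coherent structures plus weak noise
field = (np.outer(np.sin(x), np.cos(2 * np.pi * t / 5))
         + 0.3 * np.outer(np.sin(2 * x), np.sin(2 * np.pi * t / 3))
         + 0.01 * rng.normal(size=(nx, nt)))

# POD via SVD of the mean-removed snapshot matrix
mean = field.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(field - mean, full_matrices=False)

energy = S**2 / (S**2).sum()       # fraction of variance captured per mode
modes = U[:, :2]                   # time-independent spatial modes
amps = np.diag(S[:2]) @ Vt[:2]     # time-dependent amplitudes
recon = mean + modes @ amps        # rank-2 reconstruction of the field
```

    Here two modes capture essentially all of the variance; in the paper's setting the retained amplitudes are the quantities inferred from the sparse TIMED/GUVI and CHAMP/GRACE observations.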

  11. A novel coupling of noise reduction algorithms for particle flow simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zimoń, M.J., E-mail: malgorzata.zimon@stfc.ac.uk; James Weir Fluids Lab, Mechanical and Aerospace Engineering Department, The University of Strathclyde, Glasgow G1 1XJ; Reese, J.M.

    2016-09-15

    Proper orthogonal decomposition (POD) and its extension based on time-windows have been shown to greatly improve the effectiveness of recovering smooth ensemble solutions from noisy particle data. However, to successfully de-noise any molecular system, a large number of measurements still need to be provided. In order to achieve a better efficiency in processing time-dependent fields, we have combined POD with a well-established signal processing technique, wavelet-based thresholding. In this novel hybrid procedure, the wavelet filtering is applied within the POD domain and referred to as WAVinPOD. The algorithm exhibits promising results when applied to both synthetically generated signals and particle data. In this work, the simulations compare the performance of our new approach with standard POD or wavelet analysis in extracting smooth profiles from noisy velocity and density fields. Numerical examples include molecular dynamics and dissipative particle dynamics simulations of unsteady force- and shear-driven liquid flows, as well as a phase separation phenomenon. Simulation results confirm that WAVinPOD preserves the dimensionality reduction obtained using POD, while improving its filtering properties through the sparse representation of data in the wavelet basis. This paper shows that WAVinPOD outperforms the other estimators for both synthetically generated signals and particle-based measurements, achieving a higher signal-to-noise ratio from a smaller number of samples. The new filtering methodology offers significant computational savings, particularly for multi-scale applications seeking to couple continuum information with atomistic models. It is the first time that a rigorous analysis has compared de-noising techniques for particle-based fluid simulations.

  12. Regular flow reversals in Rayleigh-Bénard convection in a horizontal magnetic field.

    PubMed

    Tasaka, Yuji; Igaki, Kazuto; Yanagisawa, Takatoshi; Vogt, Tobias; Zuerner, Till; Eckert, Sven

    2016-04-01

    Magnetohydrodynamic Rayleigh-Bénard convection was studied experimentally using a liquid metal inside a box with a square horizontal cross section and aspect ratio of five. Systematic flow measurements were performed by means of ultrasonic velocity profiling that can capture time variations of instantaneous velocity profiles. Applying a horizontal magnetic field organizes the convective motion into a flow pattern of quasi-two-dimensional rolls arranged parallel to the magnetic field. The number of rolls has the tendency to decrease with increasing Rayleigh number Ra and to increase with increasing Chandrasekhar number Q. We explored convection regimes in a parameter range, at 2×10^{3}

  13. Decompositions of the polyhedral product functor with applications to moment-angle complexes and related spaces

    PubMed Central

    Bahri, A.; Bendersky, M.; Cohen, F. R.; Gitler, S.

    2009-01-01

    This article gives a natural decomposition of the suspension of a generalized moment-angle complex or partial product space which arises as the polyhedral product functor described below. The introduction and application of the smash product moment-angle complex provides a precise identification of the stable homotopy type of the values of the polyhedral product functor. One direct consequence is an analysis of the associated cohomology. For the special case of the complements of certain subspace arrangements, the geometrical decomposition implies the homological decomposition in earlier work of others as described below. Because the splitting is geometric, an analogous homological decomposition for a generalized moment-angle complex applies for any homology theory. Implied, therefore, is a decomposition for the Stanley–Reisner ring of a finite simplicial complex, and natural generalizations. PMID:19620727

  14. Decompositions of the polyhedral product functor with applications to moment-angle complexes and related spaces.

    PubMed

    Bahri, A; Bendersky, M; Cohen, F R; Gitler, S

    2009-07-28

    This article gives a natural decomposition of the suspension of a generalized moment-angle complex or partial product space which arises as the polyhedral product functor described below. The introduction and application of the smash product moment-angle complex provides a precise identification of the stable homotopy type of the values of the polyhedral product functor. One direct consequence is an analysis of the associated cohomology. For the special case of the complements of certain subspace arrangements, the geometrical decomposition implies the homological decomposition in earlier work of others as described below. Because the splitting is geometric, an analogous homological decomposition for a generalized moment-angle complex applies for any homology theory. Implied, therefore, is a decomposition for the Stanley-Reisner ring of a finite simplicial complex, and natural generalizations.

  15. Three geographic decomposition approaches in transportation network analysis

    DOT National Transportation Integrated Search

    1980-03-01

    This document describes the results of research into the application of geographic decomposition techniques to practical transportation network problems. Three approaches are described for the solution of the traffic assignment problem. One approach ...

  16. Application of Decomposition to Transportation Network Analysis

    DOT National Transportation Integrated Search

    1976-10-01

    This document reports preliminary results of five potential applications of the decomposition techniques from mathematical programming to transportation network problems. The five application areas are (1) the traffic assignment problem with fixed de...

  17. A Low-Complexity Euclidean Orthogonal LDPC Architecture for Low Power Applications.

    PubMed

    Revathy, M; Saravanan, R

    2015-01-01

    Low-density parity-check (LDPC) codes have been implemented in the latest digital video broadcasting, broadband wireless access (WiMax), and fourth-generation wireless standards. In this paper, we propose a highly efficient LDPC decoder architecture for low power applications. This study also considers the design and analysis of the check node and variable node units and the Euclidean orthogonal generator in the LDPC decoder architecture. The Euclidean orthogonal generator is used to reduce the error rate of the proposed LDPC architecture and can be incorporated between the check and variable node architectures. The proposed decoder design is synthesized on the Xilinx 9.2i platform and simulated using ModelSim, targeting 45 nm devices. The synthesis report shows that the proposed architecture greatly reduces power consumption and hardware utilization compared with conventional architectures.

  18. Isoconversional approach for non-isothermal decomposition of un-irradiated and photon-irradiated 5-fluorouracil.

    PubMed

    Mohamed, Hala Sh; Dahy, AbdelRahman A; Mahfouz, Refaat M

    2017-10-25

    Kinetic analysis of the non-isothermal decomposition of un-irradiated and photon-beam-irradiated 5-fluorouracil (5-FU), an anti-cancer drug, was carried out in static air. Thermal decomposition of 5-FU proceeds in two steps: one minor step in the temperature range 270-283°C, followed by the major step in the range 285-360°C. The non-isothermal data for un-irradiated and photon-irradiated 5-FU were analyzed using linear (Tang) and non-linear (Vyazovkin) isoconversional methods. Application of these model-free methods to the present kinetic data showed a clear dependence of the activation energy on the extent of conversion. For un-irradiated 5-FU, the non-isothermal data analysis indicates that the decomposition is generally described by the A3 and A4 models for the minor and major decomposition steps, respectively. For a photon-irradiated sample of 5-FU with a total absorbed dose of 10 Gy, the decomposition is controlled by the A2 model throughout the conversion range. The activation energies calculated for photon-irradiated 5-FU were lower than the values obtained from the thermal decomposition of the un-irradiated sample, probably due to additional nucleation sites created by photon irradiation. The decomposition path was investigated by intrinsic reaction coordinate (IRC) calculations at the B3LYP/6-311++G(d,p) level of DFT. Two transition states were involved in the process, via homolytic rupture of an NH bond and ring scission, respectively. Published by Elsevier B.V.

  19. Thermal Decomposition Behavior of Hydroxytyrosol (HT) in Nitrogen Atmosphere Based on TG-FTIR Methods.

    PubMed

    Tu, Jun-Ling; Yuan, Jiao-Jiao

    2018-02-13

    The thermal decomposition behavior of olive hydroxytyrosol (HT) was first studied using thermogravimetry (TG). Cracked chemical bonds and evolved gases during the thermal decomposition of HT were also investigated using thermogravimetry coupled with infrared spectroscopy (TG-FTIR). Thermogravimetry-differential thermogravimetry (TG-DTG) curves revealed that the thermal decomposition of HT began at 262.8 °C and ended at 409.7 °C with a main mass loss. It was demonstrated that a high heating rate (over 20 K·min⁻¹) restrained the thermal decomposition of HT, resulting in an obvious thermal hysteresis. Furthermore, a thermal decomposition kinetics investigation of HT indicated that the non-isothermal decomposition mechanism was one-dimensional diffusion (D1), with integral form g(x) = x² and differential form f(x) = 1/(2x). Four combined approaches were employed to calculate the activation energy (E = 128.50 kJ·mol⁻¹) and the Arrhenius preexponential factor (ln A = 24.39 min⁻¹). In addition, a tentative mechanism of HT thermal decomposition was further developed. The results provide a theoretical reference for the potential thermal stability of HT.

  20. Polarimetric Decomposition Analysis of the Deepwater Horizon Oil Slick Using L-Band UAVSAR Data

    NASA Technical Reports Server (NTRS)

    Jones, Cathleen; Minchew, Brent; Holt, Benjamin

    2011-01-01

    We report here an analysis of the polarization dependence of L-band radar backscatter from the main slick of the Deepwater Horizon oil spill, with specific attention to the utility of polarimetric decomposition analysis for discrimination of oil from clean water and identification of variations in the oil characteristics. For this study we used data collected with the UAVSAR instrument from opposing look directions directly over the main oil slick. We find that both the Cloude-Pottier and Shannon entropy polarimetric decomposition methods offer promise for oil discrimination, with the Shannon entropy method yielding the same information as contained in the Cloude-Pottier entropy and averaged intensity parameters, but with significantly less computational complexity.

  1. A data-driven method to enhance vibration signal decomposition for rolling bearing fault analysis

    NASA Astrophysics Data System (ADS)

    Grasso, M.; Chatterton, S.; Pennacchi, P.; Colosimo, B. M.

    2016-12-01

    Health condition analysis and diagnostics of rotating machinery requires the capability of properly characterizing the information content of sensor signals in order to detect and identify possible fault features. Time-frequency analysis plays a fundamental role, as it allows determining both the existence and the causes of a fault. The separation of components belonging to different time-frequency scales, either associated to healthy or faulty conditions, represents a challenge that motivates the development of effective methodologies for multi-scale signal decomposition. In this framework, the Empirical Mode Decomposition (EMD) is a flexible tool, thanks to its data-driven and adaptive nature. However, the EMD usually yields an over-decomposition of the original signals into a large number of intrinsic mode functions (IMFs). The selection of the most relevant IMFs is a challenging task, and the reference literature lacks automated methods to achieve a synthetic decomposition into few physically meaningful modes while avoiding the generation of spurious or meaningless modes. The paper proposes a novel automated approach aimed at generating a decomposition into a minimal number of relevant modes, called Combined Mode Functions (CMFs), each consisting of a sum of adjacent IMFs that share similar properties. The final number of CMFs is selected in a fully data-driven way, leading to an enhanced characterization of the signal content without any information loss. A novel criterion to assess the dissimilarity between adjacent CMFs is proposed, based on probability density functions of frequency spectra. The method is suitable for analyzing vibration signals that may be periodically acquired within the operating life of rotating machinery. A rolling element bearing fault analysis based on experimental data is presented to demonstrate the performance of the method and the benefits it provides.
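    The merging idea behind Combined Mode Functions can be sketched as a greedy pass over adjacent modes: merge neighbours whose frequency spectra are similar, and start a new CMF when they are not. In the sketch below an L1 (total-variation) distance between normalised spectra stands in for the paper's probability-density-based criterion, and band-limited noise stands in for real IMFs:

```python
import numpy as np

rng = np.random.default_rng(3)

def band_noise(lo, hi, n=1024):
    """Surrogate 'IMF': random signal whose spectrum occupies bins [lo, hi)."""
    spec = np.zeros(n // 2 + 1, dtype=complex)
    spec[lo:hi] = rng.normal(size=hi - lo) + 1j * rng.normal(size=hi - lo)
    return np.fft.irfft(spec, n)

def spectral_pdf(sig):
    mag = np.abs(np.fft.rfft(sig))
    return mag / mag.sum()

def merge_modes(imfs, threshold=0.9):
    """Greedily merge adjacent modes with similar spectral content into CMFs."""
    cmfs = [imfs[0]]
    for imf in imfs[1:]:
        # Total-variation distance between normalised spectra: 0 = identical,
        # 1 = disjoint supports (stand-in for the paper's PDF dissimilarity)
        d = 0.5 * np.abs(spectral_pdf(cmfs[-1]) - spectral_pdf(imf)).sum()
        if d < threshold:
            cmfs[-1] = cmfs[-1] + imf    # similar scale: same CMF
        else:
            cmfs.append(imf)             # distinct scale: start a new CMF
    return cmfs

imfs = [band_noise(180, 220), band_noise(185, 225),   # two high-frequency modes
        band_noise(8, 16), band_noise(10, 18)]        # two low-frequency modes
cmfs = merge_modes(imfs)
```

    The four surrogate modes collapse into two CMFs, one per physical scale, mirroring the paper's goal of a minimal number of meaningful modes.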

  2. Ozone decomposition

    PubMed Central

    Batakliev, Todor; Georgiev, Vladimir; Anachkov, Metody; Rakovsky, Slavcho

    2014-01-01

    Catalytic ozone decomposition is of great significance because ozone is a toxic substance commonly found or generated in human environments (aircraft cabins, offices with photocopiers, laser printers, sterilizers). Considerable work on ozone decomposition has been reported in the literature. This review provides a comprehensive summary of that literature, concentrating on analysis of the physico-chemical properties, synthesis and catalytic decomposition of ozone. This is supplemented by a review of kinetics and catalyst characterization which ties together the previously reported results. Noble metals and oxides of transition metals have been found to be the most active substances for ozone decomposition. The high price of precious metals stimulated the use of metal oxide catalysts and particularly catalysts based on manganese oxide. It has been determined that the kinetics of ozone decomposition is first order. A mechanism of the reaction of catalytic ozone decomposition is discussed, based on detailed spectroscopic investigations of the catalytic surface, showing the existence of peroxide and superoxide surface intermediates. PMID:26109880
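    A first-order rate law of the kind reported here is straightforward to verify from concentration-time data, since ln C plotted against t is then a straight line with slope -k. A minimal sketch with an illustrative rate constant (not a value from the review):

```python
import numpy as np

# Synthetic ozone concentration decaying with first-order rate k = 0.12 s^-1
# (rate constant and sampling times are illustrative only)
k_true = 0.12
t = np.linspace(0.0, 30.0, 16)
c = np.exp(-k_true * t)             # C(t) = C0 exp(-k t), with C0 = 1

# First-order kinetics: ln C = ln C0 - k t, so fit a line in (t, ln C)
slope, intercept = np.polyfit(t, np.log(c), 1)
k_est = -slope
```

    Deviation from linearity in this plot is the usual diagnostic that the reaction is not simple first order.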

  3. Pressure-dependent decomposition kinetics of the energetic material HMX up to 3.6 GPa.

    PubMed

    Glascoe, Elizabeth A; Zaug, Joseph M; Burnham, Alan K

    2009-12-03

    The effect of pressure on the global thermal decomposition rate of the energetic material HMX was studied. HMX was precompressed in a diamond anvil cell (DAC) and heated at various rates. The parent species population was monitored as a function of time and temperature using Fourier transform infrared (FTIR) spectroscopy. Global decomposition rates were determined by fitting the fraction reacted to the extended-Prout-Tompkins nucleation-growth model and the Friedman isoconversional method. The results of these experiments and analysis indicate that pressure accelerates the decomposition at low-to-moderate pressures (i.e., between ambient pressure and 0.1 GPa) and decelerates the decomposition at higher pressures. The decomposition acceleration is attributed to pressure-enhanced autocatalysis, whereas the deceleration at high pressures is attributed to pressure-inhibiting bond homolysis step(s), which would result in an increase in volume. These results indicate that both the beta- and delta-polymorphs of HMX are sensitive to pressure in the thermally induced decomposition kinetics.
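    The Friedman isoconversional method mentioned above exploits the fact that, at a fixed conversion α, ln(dα/dt) plotted against 1/T across heating rates is a line of slope -E/R. A sketch with illustrative Arrhenius parameters (not the HMX values from the study), integrating a first-order non-isothermal model and then regressing at α = 0.5:

```python
import numpy as np

R, E, A = 8.314, 130e3, 1e12          # J/(mol K), J/mol, 1/s (illustrative)

def T_at_alpha(beta, alpha_target, T0=400.0, dT=0.01):
    """March dα/dT = (A/β) e^{-E/RT} (1 - α) until the target conversion."""
    alpha, T = 0.0, T0
    while alpha < alpha_target:
        alpha += (A / beta) * np.exp(-E / (R * T)) * (1 - alpha) * dT
        T += dT
    return T

betas = np.array([5.0, 10.0, 20.0]) / 60.0    # heating rates, K/s
alpha = 0.5
Ts = np.array([T_at_alpha(b, alpha) for b in betas])
rates = A * np.exp(-E / (R * Ts)) * (1 - alpha)   # dα/dt at α = 0.5

# Friedman: ln(dα/dt) vs 1/T at fixed α is a line of slope -E/R
slope = np.polyfit(1.0 / Ts, np.log(rates), 1)[0]
E_est = -slope * R
```

    Repeating the fit over a grid of α values is what reveals the conversion dependence of E discussed in the nitramine and 5-FU records above.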

  4. Direct Numerical Simulation of Pebble Bed Flows: Database Development and Investigation of Low-Frequency Temporal Instabilities

    DOE PAGES

    Fick, Lambert H.; Merzari, Elia; Hassan, Yassin A.

    2017-02-20

    Computational analyses of fluid flow through packed pebble bed domains using the Reynolds-averaged Navier-Stokes framework have had limited success in the past. Because of a lack of high-fidelity experimental or computational data, optimization of Reynolds-averaged closure models for these geometries has not been extensively developed. In the present study, direct numerical simulation was employed to develop a high-fidelity database that can be used for optimizing Reynolds-averaged closure models for pebble bed flows. A face-centered cubic domain with periodic boundaries was used. Flow was simulated at a Reynolds number of 9308 and cross-verified by using available quasi-DNS data. During the simulations, low-frequency instability modes were observed that affected the stationary solution. Furthermore, these instabilities were investigated by using the method of proper orthogonal decomposition, and a correlation was found between the time-dependent asymmetry of the averaged velocity profile data and the behavior of the highest-energy eigenmodes.

  5. International journal of computational fluid dynamics real-time prediction of unsteady flow based on POD reduced-order model and particle filter

    NASA Astrophysics Data System (ADS)

    Kikuchi, Ryota; Misaka, Takashi; Obayashi, Shigeru

    2016-04-01

    An integrated method consisting of a proper orthogonal decomposition (POD)-based reduced-order model (ROM) and a particle filter (PF) is proposed for real-time prediction of an unsteady flow field. The proposed method is validated using identical twin experiments of an unsteady flow field around a circular cylinder for Reynolds numbers of 100 and 1000. In this study, a PF is employed (ROM-PF) to modify the temporal coefficients of the ROM based on observation data, because the prediction capability of the ROM alone is limited by stability issues. The proposed method reproduces the unsteady flow field several orders of magnitude faster than a reference numerical simulation based on the Navier-Stokes equations. Furthermore, the effects of parameters related to observation and simulation on the prediction accuracy are studied. Most of the energy modes of the unsteady flow field are captured, and it is possible to stably predict the long-term evolution with ROM-PF.
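    The ROM-PF coupling can be illustrated in miniature: let the reduced state be two POD temporal coefficients with known dynamics (here a simple rotation, standing in for the Galerkin-projected ROM), and let a bootstrap particle filter correct them from noisy observations of one coefficient. All dynamics and noise levels below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# Reduced state: two temporal coefficients obeying a known rotation
theta = 0.1
F = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
sig_proc, sig_obs = 0.02, 0.3

steps, n_part = 100, 500
truth = np.array([1.0, 0.0])
particles = rng.normal(scale=0.5, size=(n_part, 2))
weights = np.full(n_part, 1.0 / n_part)

errs = []
for _ in range(steps):
    truth = F @ truth
    z = truth[0] + rng.normal(scale=sig_obs)        # sparse, noisy observation
    # Predict: propagate particles through the ROM dynamics plus process noise
    particles = particles @ F.T + rng.normal(scale=sig_proc, size=(n_part, 2))
    # Update: reweight by the Gaussian observation likelihood
    weights *= np.exp(-0.5 * ((z - particles[:, 0]) / sig_obs) ** 2)
    weights /= weights.sum()
    est = weights @ particles                       # posterior-mean coefficients
    errs.append(est[0] - truth[0])
    # Systematic resampling keeps the ensemble from degenerating
    idx = np.searchsorted(np.cumsum(weights),
                          (rng.random() + np.arange(n_part)) / n_part)
    particles, weights = particles[idx], np.full(n_part, 1.0 / n_part)

rmse = np.sqrt(np.mean(np.square(errs)))
```

    The filtered estimate tracks the coefficient more accurately than the raw observations, which is the mechanism by which the PF stabilises the ROM prediction.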

  6. Multiplexing of spatial modes in the mid-IR region

    NASA Astrophysics Data System (ADS)

    Gailele, Lucas; Maweza, Loyiso; Dudley, Angela; Ndagano, Bienvenu; Rosales-Guzman, Carmelo; Forbes, Andrew

    2017-02-01

    Traditional optical communication systems optimize multiplexing in polarization and wavelength, both transmitted in fiber and free-space, to attain high-bandwidth data communication. Yet despite these technologies, we are expected to reach a bandwidth ceiling in the near future. Communication using orbital angular momentum (OAM) carrying modes offers an infinite-dimensional state space, providing a means to increase link capacity by multiplexing spatially overlapping modes in both the azimuthal and radial degrees of freedom. OAM modes are multiplexed and de-multiplexed by the use of spatial light modulators (SLMs). Complex amplitude modulation of the laser beam's phase and amplitude is employed to generate Laguerre-Gaussian (LG) modes. Modal decomposition is employed to detect these modes due to their orthogonality as they propagate in space. We demonstrate data transfer by sending images as a proof-of-concept in a lab-based scheme. We demonstrate the creation and detection of OAM modes in the mid-IR region as a precursor to a mid-IR free-space communication link.
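    The modal decomposition used for detection relies on the orthogonality of the azimuthal harmonics exp(ilφ): projecting the field onto each harmonic isolates the power in each OAM channel. A numerical sketch with an invented two-mode field (charges l = +3 and l = -1):

```python
import numpy as np

# Polar grid over the beam cross-section
r = np.linspace(0.01, 1.0, 100)
phi = np.linspace(0.0, 2 * np.pi, 256, endpoint=False)
RR, PP = np.meshgrid(r, phi, indexing="ij")

# Invented two-mode field: OAM charges l = +3 (strong) and l = -1 (weak)
radial = RR * np.exp(-RR**2)
field = radial * (0.8 * np.exp(1j * 3 * PP) + 0.6 * np.exp(-1j * PP))

# Modal decomposition: overlap integrals ∫∫ E e^{-ilφ} r dr dφ; the harmonics
# exp(ilφ) are orthogonal over [0, 2π), so each projection isolates one channel
ls = np.arange(-5, 6)
power = np.array([np.abs((field * np.exp(-1j * l * PP) * RR).sum()) ** 2
                  for l in ls])
power /= power.sum()
dominant = ls[power.argmax()]
```

    In the experiment the same projection is performed optically, by displaying the conjugate harmonic on an SLM and measuring the on-axis intensity.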

  7. Extending the length and time scales of Gram-Schmidt Lyapunov vector computations

    NASA Astrophysics Data System (ADS)

    Costa, Anthony B.; Green, Jason R.

    2013-08-01

    Lyapunov vectors have found growing interest recently due to their ability to characterize systems out of thermodynamic equilibrium. The computation of orthogonal Gram-Schmidt vectors requires multiplication and QR decomposition of large matrices, which grow as N² with the particle count. This expense has limited such calculations to relatively small systems and short time scales. Here, we detail two implementations of an algorithm for computing Gram-Schmidt vectors. The first is a distributed-memory message-passing method using ScaLAPACK. The second uses the newly released MAGMA library for GPUs. We compare the performance of both codes for Lennard-Jones fluids from N=100 to 1300 between Intel Nehalem/InfiniBand DDR and NVIDIA C2050 architectures. To the best of our knowledge, these are the largest systems for which the Gram-Schmidt Lyapunov vectors have been computed, and the first time their calculation has been GPU-accelerated. We conclude that Lyapunov vector calculations can be significantly extended in length and time by leveraging the power of GPU-accelerated linear algebra.
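    The Gram-Schmidt/QR procedure at the heart of these computations is compact for small systems: propagate a set of tangent vectors with the Jacobian, re-orthonormalise them with a QR factorization, and accumulate the logarithms of the diagonal of R. A sketch for the Hénon map (a well-known 2-D test case, not the Lennard-Jones systems of the paper):

```python
import numpy as np

# Benettin-style Gram-Schmidt (QR) computation of Lyapunov exponents for the
# Hénon map with the classic parameters a = 1.4, b = 0.3
a, b = 1.4, 0.3
x, y = 0.1, 0.1
Q = np.eye(2)                             # orthonormal tangent vectors
lyap_sums = np.zeros(2)
n_skip, n_steps = 100, 20000

for i in range(n_skip + n_steps):
    J = np.array([[-2.0 * a * x, 1.0],    # Jacobian at the current point
                  [b, 0.0]])
    x, y = 1.0 - a * x * x + y, b * x     # advance the orbit
    Q, R = np.linalg.qr(J @ Q)            # re-orthonormalise tangent vectors
    if i >= n_skip:                       # discard the transient
        lyap_sums += np.log(np.abs(np.diag(R)))

lyapunov = lyap_sums / n_steps            # exponents, largest first
```

    For an N-particle system the Jacobian-times-basis product and the QR step are exactly the N²-scaling matrix operations the paper offloads to ScaLAPACK or MAGMA.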

  8. Experimental Design for Estimating Unknown Hydraulic Conductivity in a Confined Aquifer using a Genetic Algorithm and a Reduced Order Model

    NASA Astrophysics Data System (ADS)

    Ushijima, T.; Yeh, W.

    2013-12-01

    An optimal experimental design algorithm is developed to select locations for a network of observation wells that provides the maximum information about unknown hydraulic conductivity in a confined, anisotropic aquifer. The design employs a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. Because the formulated problem is non-convex and contains integer variables (necessitating a combinatorial search), for a realistically scaled model the problem may be difficult, if not impossible, to solve through traditional mathematical programming techniques. Genetic Algorithms (GAs) are designed to search for the global optimum; however, because a GA requires a large number of calls to the groundwater model, the formulated optimization problem may still be infeasible to solve. To overcome this, Proper Orthogonal Decomposition (POD) is applied to the groundwater model to reduce its dimension. The information matrix in the full model space can then be searched without solving the full model.
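    The maximal-information criterion itself is easy to state in code: a design's score is the sum of squared sensitivities of its observations. The sketch below replaces the paper's GA and POD-reduced groundwater model with a random sensitivity matrix and exhaustive enumeration (feasible only at toy size), and adds an invented spacing constraint so the choice is a genuine combinatorial search:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)

# Hypothetical sensitivity matrix: row i holds d(head at candidate well i) /
# d(conductivity parameter j); in the paper these come from the reduced model
n_wells, n_params, k = 10, 4, 3
S = rng.normal(size=(n_wells, n_params))

def information(design):
    """Maximal-information criterion: sum of squared sensitivities."""
    return float(np.sum(S[list(design)] ** 2))

# Invented design constraint: selected wells must be at least 2 positions apart
feasible = [c for c in combinations(range(n_wells), k)
            if all(w2 - w1 >= 2 for w1, w2 in zip(c, c[1:]))]
best = max(feasible, key=information)
```

    At realistic scales the enumeration is replaced by the GA, and each call to `information` requires a (reduced) model solve, which is where the POD savings enter.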

  9. New Control Over Silicone Synthesis using SiH Chemistry: The Piers-Rubinsztajn Reaction.

    PubMed

    Brook, Michael A

    2018-06-18

    There is a strong imperative to synthesize polymers with highly controlled structures and narrow property ranges. Silicone polymers do not lend themselves to this paradigm because acids or bases lead to siloxane equilibration and loss of structure. By contrast, elegant levels of control are possible when using the Piers-Rubinsztajn reaction and analogues, in which the hydrophobic, strong Lewis acid B(C₆F₅)₃ activates SiH groups, permitting the synthesis of precise siloxanes under mild conditions in high yield; siloxane decomposition processes are slow under these conditions. A broad range of oxygen nucleophiles including alkoxysilanes, silanols, phenols, and aryl alkyl ethers participate in the reaction to create elastomers, foams and green composites, for example, derived from lignin. In addition, the process permits the synthesis of monofunctional dendrons that can be assembled into larger entities including highly branched silicones and dendrimers, either using the Piers-Rubinsztajn process alone or in combination with hydrosilylation or other orthogonal reactions. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Localised burst reconstruction from space-time PODs in a turbulent channel

    NASA Astrophysics Data System (ADS)

    Garcia-Gutierrez, Adrian; Jimenez, Javier

    2017-11-01

    The traditional proper orthogonal decomposition of the turbulent velocity fluctuations in a channel is extended to time under the assumption that the attractor is statistically stationary and can be treated as periodic for long-enough times. The objective is to extract space- and time-localised eddies that optimally represent the kinetic energy (and two-event correlation) of the flow. Using time-resolved data of a small-box simulation at Reτ = 1880, minimal for y/h ≲ 0.25, PODs are computed from the two-point spectral-density tensor Φ(kx, kz, y, y', ω). They are Fourier components in x, z, and time, and depend on y and on the temporal frequency ω or, equivalently, on the convection velocity c = ω/kx. Although the latter depends on y, a spatially and temporally localised 'burst' can be synthesised by adding a range of PODs with specific phases. The results are localised bursts that are amplified and tilted, in a time-periodic version of Orr-like behaviour. Funded by the ERC COTURB project.

  11. Direct Numerical Simulation of Pebble Bed Flows: Database Development and Investigation of Low-Frequency Temporal Instabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fick, Lambert H.; Merzari, Elia; Hassan, Yassin A.

    Computational analyses of fluid flow through packed pebble bed domains using the Reynolds-averaged Navier-Stokes framework have had limited success in the past. Because of a lack of high-fidelity experimental or computational data, optimization of Reynolds-averaged closure models for these geometries has not been extensively developed. In the present study, direct numerical simulation was employed to develop a high-fidelity database that can be used for optimizing Reynolds-averaged closure models for pebble bed flows. A face-centered cubic domain with periodic boundaries was used. Flow was simulated at a Reynolds number of 9308 and cross-verified by using available quasi-DNS data. During the simulations, low-frequency instability modes were observed that affected the stationary solution. Furthermore, these instabilities were investigated by using the method of proper orthogonal decomposition, and a correlation was found between the time-dependent asymmetry of the averaged velocity profile data and the behavior of the highest-energy eigenmodes.

  12. Phase retrieval in annulus sector domain by non-iterative methods

    NASA Astrophysics Data System (ADS)

    Wang, Xiao; Mao, Heng; Zhao, Da-zun

    2008-03-01

    Phase retrieval can be achieved by solving the intensity transport equation (ITE) under the paraxial approximation. For the case of uniform illumination, a Neumann boundary condition is involved, which makes the solving process more complicated. The primary mirror of a large-aperture telescope is usually segmented, and the shape of a segment is often like an annulus sector; accordingly, it is necessary to analyze phase retrieval in the annulus sector domain. Two non-iterative methods are considered for recovering the phase. The matrix method is based on the decomposition of the solution into a series of orthogonalized polynomials, while the frequency filtering method depends on the inverse computation process of the ITE. Simulations show that both methods can eliminate the effect of the Neumann boundary condition, save considerable computation time, and recover the distorted phase well. The wavefront error (WFE) RMS can be less than 0.05 wavelength, even when some noise is added.
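The frequency-filtering route can be sketched in the simplest setting, uniform illumination on a periodic square grid; this is a textbook FFT Poisson solve, not the paper's annulus-sector treatment:

```python
import numpy as np

# Minimal sketch of the "frequency filtering" (FFT-based) ITE solver for
# uniform illumination: I0 * laplacian(phi) = -k * dI/dz is inverted in
# Fourier space. Periodic grid only; the paper's geometry is not reproduced.
def tie_solve(dIdz, I0, wavelength, dx):
    """Recover the phase phi from the intensity derivative dI/dz."""
    k = 2.0 * np.pi / wavelength
    n = dIdz.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    lap = -(2.0 * np.pi) ** 2 * (FX ** 2 + FY ** 2)  # Fourier symbol of laplacian
    lap[0, 0] = 1.0                                  # avoid division by zero at DC
    phi_hat = np.fft.fft2(-k / I0 * dIdz) / lap
    phi_hat[0, 0] = 0.0                              # phase fixed up to a constant
    return np.fft.ifft2(phi_hat).real

# Round trip on a smooth periodic phase (synthetic):
n, dx, I0, wl = 64, 1.0, 1.0, 0.5
X, Y = np.meshgrid(np.arange(n) * dx, np.arange(n) * dx, indexing="ij")
phi_true = np.cos(2 * np.pi * X / (n * dx)) + np.sin(2 * np.pi * Y / (n * dx))
lap_phi = -(2 * np.pi / (n * dx)) ** 2 * phi_true    # analytic laplacian
dIdz = -(I0 / (2 * np.pi / wl)) * lap_phi
phi_rec = tie_solve(dIdz, I0, wl, dx)
print(np.max(np.abs(phi_rec - phi_true)))  # small reconstruction error
```

The non-trivial part of the paper is precisely what this sketch omits: handling the Neumann boundary condition on the annulus-sector support rather than assuming periodicity.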

  13. Identification of Reduced-Order Thermal Therapy Models Using Thermal MR Images: Theory and Validation

    PubMed Central

    2013-01-01

    In this paper, we develop and validate a method to identify computationally efficient site- and patient-specific models of ultrasound thermal therapies from MR thermal images. The models of the specific absorption rate of the transduced energy and the temperature response of the therapy target are identified in the reduced basis of proper orthogonal decomposition of thermal images, acquired in response to a mild thermal test excitation. The method permits dynamic re-identification of the treatment models during the therapy by recursively utilizing newly acquired images. Such adaptation is particularly important during high-temperature therapies, which are known to substantially and rapidly change tissue properties and blood perfusion. The developed theory was validated for the case of focused ultrasound heating of a tissue phantom. The experimental and computational results indicate that the developed approach produces accurate low-dimensional treatment models despite temporal and spatial noise in the MR images and a slow image acquisition rate. PMID:22531754

  14. Compositions of orthogonal lysyl-tRNA and aminoacyl-tRNA synthetase pairs and uses thereof

    DOEpatents

    Anderson, J Christopher [San Francisco, CA; Wu, Ning [Brookline, MA; Santoro, Stephen [Cambridge, MA; Schultz, Peter G [La Jolla, CA

    2009-12-29

    Compositions and methods for producing components of protein biosynthetic machinery, including orthogonal lysyl-tRNAs, orthogonal lysyl-aminoacyl-tRNA synthetases, and orthogonal lysyl-tRNA/synthetase pairs that incorporate homoglutamines into proteins in response to a four-base codon, are provided. Methods for identifying these orthogonal pairs are also provided, along with methods of producing proteins with homoglutamines using these orthogonal pairs.

  15. Compositions of orthogonal lysyl-tRNA and aminoacyl-tRNA synthetase pairs and uses thereof

    DOEpatents

    Anderson, J Christopher [San Francisco, CA; Wu, Ning [Brookline, MA; Santoro, Stephen [Cambridge, MA; Schultz, Peter G [La Jolla, CA

    2011-10-04

    Compositions and methods for producing components of protein biosynthetic machinery, including orthogonal lysyl-tRNAs, orthogonal lysyl-aminoacyl-tRNA synthetases, and orthogonal lysyl-tRNA/synthetase pairs that incorporate homoglutamines into proteins in response to a four-base codon, are provided. Methods for identifying these orthogonal pairs are also provided, along with methods of producing proteins with homoglutamines using these orthogonal pairs.

  16. Compositions of orthogonal lysyl-tRNA and aminoacyl-tRNA synthetase pairs and uses thereof

    DOEpatents

    Anderson, J Christopher [San Francisco, CA; Wu, Ning [Brookline, MA; Santoro, Stephen [Cambridge, MA; Schultz, Peter G [La Jolla, CA

    2009-08-18

    Compositions and methods for producing components of protein biosynthetic machinery, including orthogonal lysyl-tRNAs, orthogonal lysyl-aminoacyl-tRNA synthetases, and orthogonal lysyl-tRNA/synthetase pairs that incorporate homoglutamines into proteins in response to a four-base codon, are provided. Methods for identifying these orthogonal pairs are also provided, along with methods of producing proteins with homoglutamines using these orthogonal pairs.

  17. An analysis of scatter decomposition

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Saltz, Joel H.

    1990-01-01

    A formal analysis of a powerful mapping technique known as scatter decomposition is presented. Scatter decomposition divides an irregular computational domain into a large number of equal-sized pieces and distributes them modularly among processors. A probabilistic model of workload in one dimension is used to explain formally why, and when, scatter decomposition works. The first result is that if correlation in workload is a convex function of distance, then scattering a more finely decomposed domain yields a lower average processor workload variance. The second result shows that if the workload process is stationary Gaussian and the correlation function decreases linearly in distance until becoming zero and then remains zero, scattering a more finely decomposed domain yields a lower expected maximum processor workload. Finally, it is shown that if the correlation function decreases linearly across the entire domain, then among all mappings that assign an equal number of domain pieces to each processor, scatter decomposition minimizes the average processor workload variance. The dependence of these results on the assumption of decreasing correlation is illustrated with situations where a coarser granularity actually achieves better load balance.
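The first result can be illustrated numerically: with a spatially correlated workload, a scattered (modular) mapping evens out per-processor loads better than a contiguous block mapping. A minimal sketch with synthetic workload data:

```python
import numpy as np

# Scatter decomposition on a 1-D domain with a spatially correlated
# (smoothed) workload; the workload data are synthetic.
rng = np.random.default_rng(1)
n_cells, n_procs = 240, 8
work = np.convolve(rng.random(n_cells), np.ones(24) / 24, mode="same")

# Scatter mapping: cell i goes to processor i mod P ("dealt like cards").
scatter_loads = np.zeros(n_procs)
for i, w in enumerate(work):
    scatter_loads[i % n_procs] += w

# Block mapping for comparison: contiguous chunks of the domain.
block_loads = work.reshape(n_procs, -1).sum(axis=1)

# With correlated workload, scattering lowers the load variance.
print(np.var(scatter_loads) < np.var(block_loads))
```

Each processor's scattered pieces sample the whole domain, so regional workload fluctuations average out; a block mapping concentrates each region's fluctuation on one processor.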

  18. An optimization approach for fitting canonical tensor decompositions.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson

    Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as a sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Nonlinear least squares (NLS) methods have also been recommended previously; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.
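The ALS baseline that the paper compares against can be sketched in a few lines of NumPy for a 3-way tensor; this is a generic textbook CP-ALS, not the authors' code, and the sizes and data are illustrative:

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Kronecker product of B (J, R) and C (K, R) -> (J*K, R)."""
    J, R = B.shape
    K = C.shape[0]
    return (B[:, None, :] * C[None, :, :]).reshape(J * K, R)

def cp_als(X, rank, n_iter=500, seed=0):
    """Fit X[i,j,k] ~ sum_r A[i,r] B[j,r] C[k,r] by alternating least squares."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A, B, C = (rng.standard_normal((n, rank)) for n in (I, J, K))
    X1 = X.reshape(I, J * K)                      # mode-1 unfolding
    X2 = np.moveaxis(X, 1, 0).reshape(J, I * K)   # mode-2 unfolding
    X3 = np.moveaxis(X, 2, 0).reshape(K, I * J)   # mode-3 unfolding
    for _ in range(n_iter):                       # solve one factor at a time
        A = X1 @ np.linalg.pinv(khatri_rao(B, C)).T
        B = X2 @ np.linalg.pinv(khatri_rao(A, C)).T
        C = X3 @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C

# Recover a synthetic rank-2 tensor:
rng = np.random.default_rng(3)
A0, B0, C0 = (rng.standard_normal((n, 2)) for n in (5, 4, 3))
X = np.einsum("ir,jr,kr->ijk", A0, B0, C0)
A, B, C = cp_als(X, rank=2)
Xhat = np.einsum("ir,jr,kr->ijk", A, B, C)
print(np.linalg.norm(X - Xhat) / np.linalg.norm(X))  # small residual
```

Each ALS sweep is a linear least-squares solve per factor; the gradient-based methods the paper proposes optimize all factors jointly instead.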

  19. Perfluoropolyalkylether decomposition on catalytic aluminas

    NASA Technical Reports Server (NTRS)

    Morales, Wilfredo

    1994-01-01

    The decomposition of Fomblin Z25, a commercial perfluoropolyalkylether liquid lubricant, was studied using the Penn State Micro-oxidation Test and a thermal gravimetric/differential scanning calorimetry unit. The micro-oxidation tests were conducted using 440C stainless steel and pure iron catalyst specimens, whereas the thermal gravimetric/differential scanning calorimetry tests were conducted using catalytic alumina pellets. Analysis of the thermal data, high-pressure liquid chromatography data, and x-ray photoelectron spectroscopy data supports the conclusion that there are two different decomposition mechanisms for Fomblin Z25 and that reductive sites on the catalytic surfaces are responsible for its decomposition.

  20. Modelling the influence of ectomycorrhizal decomposition on plant nutrition and soil carbon sequestration in boreal forest ecosystems.

    PubMed

    Baskaran, Preetisri; Hyvönen, Riitta; Berglund, S Linnea; Clemmensen, Karina E; Ågren, Göran I; Lindahl, Björn D; Manzoni, Stefano

    2017-02-01

    Tree growth in boreal forests is limited by nitrogen (N) availability. Most boreal forest trees form symbiotic associations with ectomycorrhizal (ECM) fungi, which improve the uptake of inorganic N and also have the capacity to decompose soil organic matter (SOM) and to mobilize organic N ('ECM decomposition'). To study the effects of 'ECM decomposition' on ecosystem carbon (C) and N balances, we performed a sensitivity analysis on a model of C and N flows between plants, SOM, saprotrophs, ECM fungi, and inorganic N stores. The analysis indicates that C and N balances were sensitive to model parameters regulating ECM biomass and decomposition. Under low N availability, the optimal C allocation to ECM fungi, above which the symbiosis switches from mutualism to parasitism, increases with increasing relative involvement of ECM fungi in SOM decomposition. Under low N conditions, increased ECM organic N mining promotes tree growth but decreases soil C storage, leading to a negative correlation between C stores above- and below-ground. The interplay between plant production and soil C storage is sensitive to the partitioning of decomposition between ECM fungi and saprotrophs. Better understanding of interactions between functional guilds of soil fungi may significantly improve predictions of ecosystem responses to environmental change. © 2016 The Authors. New Phytologist © 2016 New Phytologist Trust.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glascoe, E A; Zaug, J M; Burnham, A K

    The effect of pressure on the thermal decomposition rate of the energetic material HMX was studied. HMX was precompressed in a diamond anvil cell (DAC) and heated at various rates. The parent species population was monitored as a function of time and temperature using Fourier transform infrared (FTIR) spectroscopy. Decomposition rates were determined by fitting the fraction reacted to the extended Prout-Tompkins nucleation-growth model and the Friedman isoconversional method. The results of these experiments and analyses indicate that pressure accelerates the decomposition at low to moderate pressures (i.e., between ambient pressure and 1 GPa) and decelerates it at higher pressures. The acceleration is attributed to pressure-enhanced autocatalysis, whereas the deceleration at high pressures is attributed to pressure inhibiting the bond homolysis step(s), which would result in an increase in volume. These results indicate that the thermally induced decomposition kinetics of both β- and δ-phase HMX are sensitive to pressure.

  2. GHGs and air pollutants embodied in China’s international trade: Temporal and spatial index decomposition analysis

    PubMed Central

    Liu, Zhengyan; Mao, Xianqiang; Song, Peng

    2017-01-01

    Temporal index decomposition analysis and spatial index decomposition analysis were applied to understand the driving forces of the emissions embodied in China's exports and net exports, respectively, during 2002-2011. The accumulated emissions embodied in exports accounted for approximately 30% of the total emissions in China; although the contribution of the sectoral total emissions intensity (technique effect) declined, the scale effect was largely responsible for the mounting emissions associated with exports, and the composition effect played a largely insignificant role. Calculations of the emissions embodied in net exports suggest that China is generally in an environmentally inferior position compared with its major trade partners. The differences in the economy-wide emission intensities between China and its major trade partners were the biggest contributor to this situation, and the trade balance effect played a less important role. However, a lower degree of specialization in pollution-intensive products in exports than in imports helped to slightly reduce the emissions embodied in net exports. The temporal index decomposition analysis results suggest that China should take effective measures to optimize its export and supply-side structure and reduce the total emissions intensity. The spatial index decomposition analysis suggests that a more aggressive import policy would be useful for curbing domestic and global emissions, and that the transfer of advanced production and emission control technologies from developed to developing countries should be a compulsory global environmental policy option to mitigate the leakage of pollution emissions caused by international trade. PMID:28441399
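The scale/composition/technique split used in index decomposition analysis is commonly computed with the logarithmic-mean Divisia index (LMDI-I); a minimal sketch with made-up two-sector numbers (generic LMDI, not necessarily the paper's exact formulation):

```python
import numpy as np

# LMDI-I sketch: sectoral emissions E_i = Q * S_i * I_i, with total export
# scale Q, sectoral share S_i, and sectoral emission intensity I_i.
# All numbers below are illustrative, not the paper's data.
def logmean(a, b):
    return np.where(np.isclose(a, b), a, (a - b) / (np.log(a) - np.log(b)))

def lmdi_effects(Q0, S0, I0, Q1, S1, I1):
    """Additive scale, composition, and technique effects between two years."""
    E0, E1 = Q0 * S0 * I0, Q1 * S1 * I1
    w = logmean(E1, E0)                       # logarithmic-mean weights
    scale = np.sum(w * np.log(Q1 / Q0))
    composition = np.sum(w * np.log(S1 / S0))
    technique = np.sum(w * np.log(I1 / I0))
    return scale, composition, technique

# Two sectors, base year vs. end year:
Q0, Q1 = 100.0, 180.0
S0, S1 = np.array([0.6, 0.4]), np.array([0.5, 0.5])
I0, I1 = np.array([2.0, 1.0]), np.array([1.5, 0.9])
effects = lmdi_effects(Q0, S0, I0, Q1, S1, I1)
# LMDI-I is exactly additive: the effects sum to the total emissions change.
print(sum(effects), (Q1 * S1 * I1).sum() - (Q0 * S0 * I0).sum())
```

The additivity follows from the identity E1_i − E0_i = L(E1_i, E0_i) · ln(E1_i/E0_i), which is why LMDI decompositions leave no unexplained residual term.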

  3. Iterative filtering decomposition based on local spectral evolution kernel

    PubMed Central

    Wang, Yang; Wei, Guo-Wei; Yang, Siyang

    2011-01-01

    Synthesizing information, achieving understanding, and deriving insight from increasingly massive, time-varying, noisy and possibly conflicting data sets are some of the most challenging tasks of the present information age. Traditional technologies, such as the Fourier transform and wavelet multi-resolution analysis, are inadequate to handle all of the above-mentioned tasks. The empirical mode decomposition (EMD) has emerged as a new powerful tool for resolving many challenging problems in data processing and analysis. Recently, an iterative filtering decomposition (IFD) has been introduced to address the stability and efficiency problems of the EMD. Another data analysis technique is the local spectral evolution kernel (LSEK), which provides a near-perfect low-pass filter with desirable time-frequency localizations. The present work utilizes the LSEK to further stabilize the IFD, and offers an efficient, flexible and robust scheme for information extraction, complexity reduction, and signal and image understanding. The performance of the present LSEK-based IFD is intensively validated over a wide range of data processing tasks, including mode decomposition, analysis of time-varying data, and information extraction from nonlinear dynamic systems. The utility, robustness and usefulness of the proposed LSEK-based IFD are demonstrated via a large number of applications, such as the analysis of stock market data, the decomposition of ocean wave magnitudes, the understanding of physiologic signals, and information recovery from noisy images. The performance of the proposed method is compared with that of existing methods in the literature. Our results indicate that the LSEK-based IFD improves both the efficiency and the stability of conventional EMD algorithms. PMID:22350559
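The core iterative-filtering idea can be sketched by standing in a plain moving average for the paper's LSEK filter (an assumption made purely for illustration): a fixed low-pass filter supplies the "local mean" that EMD would build from envelopes, and repeated de-trending extracts one intrinsic-mode-like component at a time.

```python
import numpy as np

# Iterative filtering sketch with a moving-average low-pass filter standing
# in for the LSEK filter of the paper. Signal below is a synthetic two-tone.
def moving_average(x, width):
    pad = width // 2
    xp = np.pad(x, pad, mode="reflect")          # tame the boundaries
    return np.convolve(xp, np.ones(width) / width, mode="valid")

def sift(signal, width, n_iter=5):
    """Extract one intrinsic-mode-like component by repeated de-trending."""
    comp = signal.copy()
    for _ in range(n_iter):
        comp = comp - moving_average(comp, width)  # keep the fast part
    return comp

def ifd(signal, widths):
    """Peel components off from fast to slow; return components and residual."""
    comps, residual = [], signal.copy()
    for w in widths:
        c = sift(residual, w)
        comps.append(c)
        residual = residual - c
    return comps, residual

# A 25-sample window separates the 40 Hz tone from the 3 Hz tone.
t = np.linspace(0.0, 1.0, 1000)
x = np.sin(2 * np.pi * 40 * t) + np.sin(2 * np.pi * 3 * t)
comps, residual = ifd(x, widths=[25])
```

A 25-sample moving average at this sampling rate has a spectral null near 40 Hz, so the fast tone survives the de-trending while the 3 Hz trend is progressively removed; the LSEK filter plays the same role with much better frequency localization.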

  4. A novel all-optical label processing for OPS networks based on multiple OOC sequences from multiple-groups OOC

    NASA Astrophysics Data System (ADS)

    Qiu, Kun; Zhang, Chongfu; Ling, Yun; Wang, Yibo

    2007-11-01

    This paper proposes, for the first time to the best of our knowledge, an all-optical label processing scheme using multiple optical orthogonal code sequences (MOOCS) for optical packet switching (OPS) (MOOCS-OPS) networks. In this scheme, multiple optical orthogonal codes (MOOC) from multiple-group optical orthogonal codes (MGOOC) are permuted and combined to obtain the MOOCS for the optical labels, which effectively enlarges the set of optical codes available for labels. Optical label processing (OLP) schemes are reviewed and analyzed, the principles of MOOCS-based optical labels for OPS networks are given, and the MOOCS-OPS topology and the key units for realizing MOOCS-based optical label packets are then studied in detail. The performance of this all-optical label processing technology is analyzed, and corresponding simulations are performed. The analysis and results show that the proposed scheme can overcome the shortage of available optical orthogonal code (OOC)-based optical labels caused by the limited number of single OOCs with short code lengths, and indicate that the MOOCS-OPS scheme is feasible.

  5. Analysis of exergy efficiency of a super-critical compressed carbon dioxide energy-storage system based on the orthogonal method.

    PubMed

    He, Qing; Hao, Yinping; Liu, Hui; Liu, Wenyi

    2018-01-01

    Super-critical compressed carbon dioxide energy storage (SC-CCES) is a new type of gas energy-storage technology. This paper used the orthogonal experimental method and variance analysis to identify the factors, and the interactions among them, that significantly affect the thermodynamic characteristics of the SC-CCES system in the energy-storage process, the energy-release process, and the overall cycle. Results show that the interactions between components have little influence on any of these processes; the significant factors relate mainly to the characteristics of the individual system components, which provides a reference for optimizing the thermal properties of the energy-storage system.
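The orthogonal-array idea behind this kind of factor screening can be sketched with the smallest two-level array, L4; the factor assignment and responses below are made up for illustration:

```python
import numpy as np

# L4 orthogonal array: three two-level factors arranged so that every pair
# of levels appears equally often, letting main effects be read off from
# level means. The "efficiency" responses are hypothetical.
L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])
response = np.array([10.0, 14.0, 13.0, 17.0])

for j in range(L4.shape[1]):
    lo = response[L4[:, j] == 0].mean()
    hi = response[L4[:, j] == 1].mean()
    print(f"factor {j}: effect = {hi - lo:+.1f}")
```

With these numbers, factors 0 and 1 show clear main effects while factor 2 shows none, the kind of significance screening that the paper performs (with variance analysis) on the components of the SC-CCES system.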

  6. Analysis of exergy efficiency of a super-critical compressed carbon dioxide energy-storage system based on the orthogonal method

    PubMed Central

    He, Qing; Liu, Hui; Liu, Wenyi

    2018-01-01

    Super-critical compressed carbon dioxide energy storage (SC-CCES) is a new type of gas energy-storage technology. This paper used the orthogonal experimental method and variance analysis to identify the factors, and the interactions among them, that significantly affect the thermodynamic characteristics of the SC-CCES system in the energy-storage process, the energy-release process, and the overall cycle. Results show that the interactions between components have little influence on any of these processes; the significant factors relate mainly to the characteristics of the individual system components, which provides a reference for optimizing the thermal properties of the energy-storage system. PMID:29634742

  7. Security analysis of orthogonal-frequency-division-multiplexing-based continuous-variable quantum key distribution with imperfect modulation

    NASA Astrophysics Data System (ADS)

    Zhang, Hang; Mao, Yu; Huang, Duan; Li, Jiawei; Zhang, Ling; Guo, Ying

    2018-05-01

    We introduce a reliable scheme for continuous-variable quantum key distribution (CV-QKD) using orthogonal frequency division multiplexing (OFDM). As a spectrally efficient multiplexing technique, OFDM allows a large number of closely spaced orthogonal subcarrier signals to carry data on several parallel streams or channels. We place emphasis on the modulator impairments that inevitably arise in an OFDM system and analyze how these impairments affect the OFDM-based CV-QKD system. Moreover, we also evaluate the security in the asymptotic limit and against the Pirandola-Laurenza-Ottaviani-Banchi upper bound. Results indicate that although imperfect modulation brings about a slight decrease in the secret key bit rate of each subcarrier, the multiplexing technique combined with CV-QKD yields a desirable improvement in the total secret key bit rate, raising it by about an order of magnitude.

  8. A Low-Complexity Euclidean Orthogonal LDPC Architecture for Low Power Applications

    PubMed Central

    Revathy, M.; Saravanan, R.

    2015-01-01

    Low-density parity-check (LDPC) codes have been adopted in the latest digital video broadcasting, broadband wireless access (WiMax), and fourth-generation wireless standards. In this paper, we propose a highly efficient LDPC decoder architecture for low-power applications. This study also covers the design and analysis of the check node and variable node units and the Euclidean orthogonal generator in the LDPC decoder architecture. The Euclidean orthogonal generator, which can be incorporated between the check node and variable node architecture, is used to reduce the error rate of the proposed LDPC architecture. The proposed decoder design was synthesized on the Xilinx 9.2i platform and simulated using ModelSim, targeting 45 nm devices. The synthesis report shows that the proposed architecture greatly reduces power consumption and hardware utilization compared with conventional architectures. PMID:26065017

  9. Decomposition and particle release of a carbon nanotube/epoxy nanocomposite at elevated temperatures

    NASA Astrophysics Data System (ADS)

    Schlagenhauf, Lukas; Kuo, Yu-Ying; Bahk, Yeon Kyoung; Nüesch, Frank; Wang, Jing

    2015-11-01

    Carbon nanotubes (CNTs) as fillers in nanocomposites have attracted significant attention, and one of their applications is as flame retardants. For such nanocomposites, the possible release of CNTs at elevated temperatures, after decomposition of the polymer matrix, poses potential health threats. We investigated the airborne particle release from a decomposing multi-walled carbon nanotube (MWCNT)/epoxy nanocomposite in order to measure a possible release of MWCNTs. An experimental set-up was established in which the samples are decomposed in a furnace at a constant heating rate under an ambient-air or nitrogen atmosphere. Particle analysis was performed with aerosol measurement devices and by transmission electron microscopy (TEM) of collected particles. Further, by applying a thermal denuder, it was also possible to measure only the non-volatile particles. The tested samples and their decomposition kinetics were characterized by thermogravimetric analysis (TGA). Particle release was investigated for neat epoxy, nanocomposites with 0.1 and 1 wt% MWCNTs, and nanocomposites with functionalized MWCNTs. The results showed that the added MWCNTs had little effect on the decomposition kinetics of the investigated samples, but the weight of the residues remaining after decomposition was influenced significantly. Decomposition in different atmospheres showed the release of a higher number of particles at temperatures below 300 °C when air was used. Analysis of collected particles by TEM revealed that no detectable amount of MWCNTs was released, although micrometer-sized fibrous particles were collected.

  10. A hyperspectral imagery anomaly detection algorithm based on local three-dimensional orthogonal subspace projection

    NASA Astrophysics Data System (ADS)

    Zhang, Xing; Wen, Gongjian

    2015-10-01

    Anomaly detection (AD) is increasingly important in hyperspectral imagery analysis, with many practical applications. The local orthogonal subspace projection (LOSP) detector is a popular anomaly detector that exploits local endmembers/eigenvectors around the pixel under test (PUT) to construct a background subspace. However, this subspace takes advantage only of the spectral information, while the spatial correlation of the background clutter is neglected, which makes the anomaly detection result sensitive to the accuracy of the estimated subspace. In this paper, a local three-dimensional orthogonal subspace projection (3D-LOSP) algorithm is proposed. First, using spectral and spatial information jointly, three directional background subspaces are created along the image height direction, the image width direction, and the spectral direction, respectively. Then, the three corresponding orthogonal subspaces are calculated. After that, each vector of the local cube along the three directions is projected onto the corresponding orthogonal subspace. Finally, a composite score is formed from the three directional operators. In 3D-LOSP, anomalies are redefined as targets that are not only spectrally different from the background but also spatially distinct. Thanks to the addition of the spatial information, the robustness of the anomaly detection result is greatly improved by the proposed 3D-LOSP algorithm. It is noteworthy that the proposed algorithm is an extension of LOSP, and this idea can inspire many other spectral-based anomaly detection methods. Experiments with real hyperspectral images have demonstrated the stability of the detection results.
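The orthogonal-subspace-projection step that LOSP and 3D-LOSP share can be sketched for one direction only, with synthetic spectra; the local-cube construction and the three-direction fusion of 3D-LOSP are omitted here:

```python
import numpy as np

# Orthogonal subspace projection (OSP) anomaly scoring sketch: project the
# test pixel onto the orthogonal complement of a background subspace
# spanned by the leading local eigenvectors. Data below are synthetic.
def osp_score(pixel, background, n_dims=2):
    """Norm of the pixel component orthogonal to the background subspace."""
    mean = background.mean(axis=0)
    _, _, Vt = np.linalg.svd(background - mean, full_matrices=False)
    B = Vt[:n_dims].T                          # (bands, n_dims), orthonormal
    centered = pixel - mean
    resid = centered - B @ (B.T @ centered)    # remove the background part
    return float(np.linalg.norm(resid))

# Background spectra lie near a 2-D subspace; an anomaly leaves it.
rng = np.random.default_rng(0)
bands = 20
basis = rng.standard_normal((2, bands))
background = (rng.standard_normal((100, 2)) @ basis
              + 0.01 * rng.standard_normal((100, bands)))
normal_pixel = rng.standard_normal(2) @ basis
anomalous_pixel = normal_pixel + rng.standard_normal(bands)
print(osp_score(normal_pixel, background) < osp_score(anomalous_pixel, background))  # -> True
```

3D-LOSP repeats this projection along the height, width, and spectral directions of a local cube and fuses the three residual scores, which is what makes its anomalies spatially as well as spectrally distinct.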

  11. Limitations of Dower's inverse transform for the study of atrial loops during atrial fibrillation.

    PubMed

    Guillem, María S; Climent, Andreu M; Bollmann, Andreas; Husser, Daniela; Millet, José; Castells, Francisco

    2009-08-01

    Spatial characteristics of atrial fibrillatory waves have been extracted by using the vectorcardiogram (VCG) during atrial fibrillation (AF). However, the VCG is usually not recorded in clinical practice, and atrial loops are instead derived from the 12-lead electrocardiogram (ECG). We evaluated the suitability of reconstructing orthogonal leads from the 12-lead ECG for fibrillatory waves in AF. We used the Physikalisch-Technische Bundesanstalt diagnostic ECG database, which contains 15 simultaneously recorded signals (12-lead ECG and three Frank orthogonal leads) from 13 patients during AF. Frank leads were derived from the 12-lead ECG by using Dower's inverse transform. Derived leads were then compared to the true Frank leads in terms of the relative error achieved. We calculated the orientation of the AF loops of both the recorded orthogonal leads and the derived leads and measured the difference in estimated orientation. We also investigated the relationship of the derivation errors with fibrillatory wave amplitude, frequency, wave residuum, and fit of the AF loops to a plane. Errors in the derivation of AF loops were 68 +/- 31%, and errors in the estimation of orientation were 35.85 +/- 20.43 degrees. We did not find any correlation between these errors and amplitude, frequency, or the other parameters. In conclusion, Dower's inverse transform should not be used to derive orthogonal leads from the 12-lead ECG for the analysis of fibrillatory wave loops in AF. Spatial parameters obtained after this derivation may differ from those obtained from recorded orthogonal leads.
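The two comparison metrics, relative derivation error and loop-orientation difference, can be sketched generically; the loops below are synthetic stand-ins, not ECG data, and the Dower inverse-transform coefficients themselves are not reproduced here:

```python
import numpy as np

# Comparison metrics for derived vs. recorded orthogonal-lead loops:
# relative error between the signals, and the angle between the normals
# of the least-squares planes fitted to each 3-D loop.
def relative_error(derived, true):
    """Relative derivation error, in percent."""
    return 100.0 * np.linalg.norm(derived - true) / np.linalg.norm(true)

def loop_normal(xyz):
    """Unit normal of the least-squares plane through a 3-D loop (n, 3)."""
    centered = xyz - xyz.mean(axis=0)
    return np.linalg.svd(centered, full_matrices=False)[2][-1]  # least variance

def orientation_difference(a, b):
    """Sign-insensitive angle, in degrees, between two loop normals."""
    c = abs(float(np.dot(loop_normal(a), loop_normal(b))))
    return float(np.degrees(np.arccos(min(c, 1.0))))

# A planar loop and a copy tilted 30 degrees about the x-axis:
t = np.linspace(0.0, 2.0 * np.pi, 200)
loop = np.column_stack([np.cos(t), np.sin(t), np.zeros_like(t)])
th = np.radians(30.0)
R = np.array([[1.0, 0.0, 0.0],
              [0.0, np.cos(th), -np.sin(th)],
              [0.0, np.sin(th), np.cos(th)]])
tilted = loop @ R.T
print(round(orientation_difference(loop, tilted), 1))  # -> 30.0
```

Taking the smallest-variance singular vector as the loop normal is one reasonable reading of "loop orientation"; the paper's exact orientation definition may differ.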

  12. Stress Regression Analysis of Asphalt Concrete Deck Pavement Based on Orthogonal Experimental Design and Interlayer Contact

    NASA Astrophysics Data System (ADS)

    Wang, Xuntao; Feng, Jianhu; Wang, Hu; Hong, Shidi; Zheng, Supei

    2018-03-01

    A three-dimensional finite element model of a box girder bridge and its asphalt concrete deck pavement was established in ANSYS, with the interlayer bonding of the deck pavement assumed to be a contact bonding condition. Orthogonal experimental design was used to arrange the test plans for the material parameters, and the effect of the different material parameters on the mechanical response of the asphalt concrete surface layer was evaluated with a multiple linear regression model using the results of the finite element analysis. Results indicate that the stress regression equations predict the stress in the asphalt concrete surface layer well, and that the elastic modulus of the waterproof layer has a significant influence on the stress values of the asphalt concrete surface layer.

  13. Methods and compositions for the production of orthogonal tRNA-aminoacyl tRNA synthetase pairs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schultz, Peter G.; Wang, Lei; Anderson, John Christopher

    2015-10-20

    This invention provides compositions and methods for generating components of protein biosynthetic machinery including orthogonal tRNAs, orthogonal aminoacyl-tRNA synthetases, and orthogonal pairs of tRNAs/synthetases. Methods for identifying orthogonal pairs are also provided. These components can be used to incorporate unnatural amino acids into proteins in vivo.

  14. Methods and composition for the production of orthogonal tRNA-aminoacyltRNA synthetase pairs

    DOEpatents

    Schultz, Peter G.; Wang, Lei; Anderson, John Christopher; Chin, Jason; Liu, David R.; Magliery, Thomas J.; Meggers, Eric L.; Mehl, Ryan Aaron; Pastrnak, Miro; Santoro, Stephen William; Zhang, Zhiwen

    2010-05-11

    This invention provides compositions and methods for generating components of protein biosynthetic machinery including orthogonal tRNAs, orthogonal aminoacyl-tRNA synthetases, and orthogonal pairs of tRNAs/synthetases. Methods for identifying orthogonal pairs are also provided. These components can be used to incorporate unnatural amino acids into proteins in vivo.

  15. Methods and composition for the production of orthogonal tRNA-aminoacyltRNA synthetase pairs

    DOEpatents

    Schultz, Peter G [La Jolla, CA; Wang, Lei [San Diego, CA; Anderson, John Christopher [San Diego, CA; Chin, Jason [Cambridge, GB; Liu, David R [Lexington, MA; Magliery, Thomas J [North Haven, CT; Meggers, Eric L [Philadelphia, PA; Mehl, Ryan Aaron [Lancaster, PA; Pastrnak, Miro [San Diego, CA; Santoro, Steven William [Cambridge, MA; Zhang, Zhiwen [San Diego, CA

    2012-05-22

    This invention provides compositions and methods for generating components of protein biosynthetic machinery including orthogonal tRNAs, orthogonal aminoacyl-tRNA synthetases, and orthogonal pairs of tRNAs/synthetases. Methods for identifying orthogonal pairs are also provided. These components can be used to incorporate unnatural amino acids into proteins in vivo.

  16. Methods and compositions for the production of orthogonal tRNA-aminoacyl tRNA synthetase pairs

    DOEpatents

    Schultz, Peter; Wang, Lei; Anderson, John Christopher; Chin, Jason; Liu, David R.; Magliery, Thomas J.; Meggers, Eric L.; Mehl, Ryan Aaron; Pastrnak, Miro; Santoro, Stephen William; Zhang, Zhiwen

    2006-08-01

    This invention provides compositions and methods for generating components of protein biosynthetic machinery including orthogonal tRNAs, orthogonal aminoacyl-tRNA synthetases, and orthogonal pairs of tRNAs/synthetases. Methods for identifying orthogonal pairs are also provided. These components can be used to incorporate unnatural amino acids into proteins in vivo.

  17. Methods and composition for the production of orthogonal tRNA-aminoacyl tRNA synthetase pairs

    DOEpatents

    Schultz, Peter G [La Jolla, CA; Wang, Lei [San Diego, CA; Anderson, John Christopher [San Diego, CA; Chin, Jason W [San Diego, CA; Liu, David R [Lexington, MA; Magliery, Thomas J [North Haven, CT; Meggers, Eric L [Philadelphia, PA; Mehl, Ryan Aaron [San Diego, CA; Pastrnak, Miro [San Diego, CA; Santoro, Stephen William [San Diego, CA; Zhang, Zhiwen [San Diego, CA

    2012-05-08

    This invention provides compositions and methods for generating components of protein biosynthetic machinery including orthogonal tRNAs, orthogonal aminoacyl-tRNA synthetases, and orthogonal pairs of tRNAs/synthetases. Methods for identifying orthogonal pairs are also provided. These components can be used to incorporate unnatural amino acids into proteins in vivo.

  18. Methods and compositions for the production of orthogonal tRNA-aminoacyl-tRNA synthetase pairs

    DOEpatents

    Schultz, Peter G [La Jolla, CA; Wang, Lei [San Diego, CA; Anderson, John Christopher [San Diego, CA; Chin, Jason W [San Diego, CA; Liu, David R [Lexington, MA; Magliery, Thomas J [North Haven, CT; Meggers, Eric L [Philadelphia, PA; Mehl, Ryan Aaron [San Diego, CA; Pastrnak, Miro [San Diego, CA; Santoro, Stephen William [San Diego, CA; Zhang, Zhiwen [San Diego, CA

    2011-09-06

    This invention provides compositions and methods for generating components of protein biosynthetic machinery including orthogonal tRNAs, orthogonal aminoacyl-tRNA synthetases, and orthogonal pairs of tRNAs/synthetases. Methods for identifying orthogonal pairs are also provided. These components can be used to incorporate unnatural amino acids into proteins in vivo.

  19. Methods and composition for the production of orthogonal tRNA-aminoacyltRNA synthetase pairs

    DOEpatents

    Schultz, Peter G [La Jolla, CA]; Wang, Lei [San Diego, CA]; Anderson, John Christopher [San Diego, CA]; Chin, Jason [Cambridge, GB]; Liu, David R [Lexington, MA]; Magliery, Thomas J [North Haven, CT]; Meggers, Eric L [Philadelphia, PA]; Mehl, Ryan Aaron [Lancaster, PA]; Pastrnak, Miro [San Diego, CA]; Santoro, Steven William [Cambridge, MA]; Zhang, Zhiwen [San Diego, CA]

    2008-04-08

    This invention provides compositions and methods for generating components of protein biosynthetic machinery including orthogonal tRNAs, orthogonal aminoacyl-tRNA synthetases, and orthogonal pairs of tRNAs/synthetases. Methods for identifying orthogonal pairs are also provided. These components can be used to incorporate unnatural amino acids into proteins in vivo.

  20. s-core network decomposition: A generalization of k-core analysis to weighted networks

    NASA Astrophysics Data System (ADS)

    Eidsaa, Marius; Almaas, Eivind

    2013-12-01

    A broad range of systems spanning biology, technology, and social phenomena may be represented and analyzed as complex networks. Recent studies of such networks using k-core decomposition have uncovered groups of nodes that play important roles. Here, we present s-core analysis, a generalization of k-core (or k-shell) analysis to complex networks where the links have different strengths or weights. We demonstrate the s-core decomposition approach on two random networks (ER and configuration model with scale-free degree distribution) where the link weights are (i) random, (ii) correlated, and (iii) anticorrelated with the node degrees. Finally, we apply the s-core decomposition approach to the protein-interaction network of the yeast Saccharomyces cerevisiae in the context of two gene-expression experiments: oxidative stress in response to cumene hydroperoxide (CHP), and fermentation stress response (FSR). We find that the innermost s-cores are (i) different from innermost k-cores, (ii) different for the two stress conditions CHP and FSR, and (iii) enriched with proteins whose biological functions give insight into how yeast manages these specific stresses.
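The pruning rule behind s-core decomposition can be sketched in a few lines: repeatedly remove every node whose strength (the sum of its incident link weights) falls below the threshold s, until no such node remains. The adjacency structure and threshold below are invented for illustration.

```python
# Minimal sketch of s-core decomposition for a weighted, undirected graph:
# the s-core is the maximal subgraph in which every node's strength
# (sum of incident link weights) is at least s.

def node_strength(adj, node):
    """Strength = sum of weights of links incident to the node."""
    return sum(adj[node].values())

def s_core(adj, s):
    """Iteratively prune nodes whose strength drops below s."""
    core = {u: dict(nbrs) for u, nbrs in adj.items()}  # work on a copy
    changed = True
    while changed:
        changed = False
        for u in list(core):
            if node_strength(core, u) < s:
                for v in core[u]:          # drop u's links from its neighbours
                    del core[v][u]
                del core[u]
                changed = True
    return set(core)

# Toy weighted network: a tightly connected triangle plus a weak pendant node.
adj = {
    "a": {"b": 2.0, "c": 2.0},
    "b": {"a": 2.0, "c": 2.0, "d": 0.5},
    "c": {"a": 2.0, "b": 2.0},
    "d": {"b": 0.5},
}
print(s_core(adj, 3.0))  # the weakly attached node "d" is pruned first
```

Setting s equal to an (unweighted) degree threshold with unit weights recovers ordinary k-core analysis, which is the sense in which s-cores generalize k-cores.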

  1. Catalytic and inhibiting effects of lithium peroxide and hydroxide on sodium chlorate decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cannon, J.C.; Zhang, Y.

    1995-09-01

    Chemical oxygen generators based on sodium chlorate and lithium perchlorate are used in airplanes, submarines, diving, and mine rescue. Catalytic decomposition of sodium chlorate in the presence of cobalt oxide, lithium peroxide, and lithium hydroxide is studied using thermal gravimetric analysis. Lithium peroxide and hydroxide are both moderately active catalysts for the decomposition of sodium chlorate when used alone, and inhibitors when used with the more active catalyst cobalt oxide.

  2. Detection of decomposition volatile organic compounds in soil following removal of remains from a surface deposition site.

    PubMed

    Perrault, Katelynn A; Stefanuto, Pierre-Hugues; Stuart, Barbara H; Rai, Tapan; Focant, Jean-François; Forbes, Shari L

    2015-09-01

    Cadaver-detection dogs use volatile organic compounds (VOCs) to search for human remains including those deposited on or beneath soil. Soil can act as a sink for VOCs, causing loading of decomposition VOCs in the soil following soft tissue decomposition. The objective of this study was to chemically profile decomposition VOCs from surface decomposition sites after remains were removed from their primary location. Pig carcasses were used as human analogues and were deposited on a soil surface to decompose for 3 months. The remains were then removed from each site and VOCs were collected from the soil for 7 months thereafter and analyzed by comprehensive two-dimensional gas chromatography-time-of-flight mass spectrometry (GC×GC-TOFMS). Decomposition VOCs diminished within 6 weeks and hydrocarbons were the most persistent compound class. Decomposition VOCs could still be detected in the soil after 7 months using Principal Component Analysis. This study demonstrated that the decomposition VOC profile, while detectable by GC×GC-TOFMS in the soil, was considerably reduced and altered in composition upon removal of remains. Chemical reference data is provided by this study for future investigations of canine alert behavior in scenarios involving scattered or scavenged remains.

  3. Bi-orthogonal Symbol Mapping and Detection in Optical CDMA Communication System

    NASA Astrophysics Data System (ADS)

    Liu, Maw-Yang

    2017-12-01

    In this paper, a bi-orthogonal symbol mapping and detection scheme is investigated for a time-spreading wavelength-hopping optical CDMA communication system. The carrier-hopping prime code, whose out-of-phase autocorrelation is zero, is used as the signature sequence. Based on the orthogonality of the carrier-hopping prime code, an equal-weight orthogonal signaling scheme can be constructed, and the proposed bi-orthogonal symbol mapping and detection scheme can be developed on top of it. The transmitted binary data bits are mapped into corresponding bi-orthogonal symbols, using the orthogonal matrix code and its complement. In the receiver, the received bi-orthogonal data symbol is fed into a maximum likelihood decoder for detection. Under such symbol mapping and detection, the proposed scheme greatly enlarges the Euclidean distance between symbols; hence, the system performance is drastically improved.
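As a rough illustration of bi-orthogonal signaling, the sketch below uses Walsh-Hadamard rows as a stand-in for the carrier-hopping prime-code matrix (the actual code construction from the paper is not reproduced): each symbol is an orthogonal row or its complement, and maximum likelihood detection picks the row with the largest correlation magnitude.

```python
# Hedged sketch of bi-orthogonal symbol mapping and ML detection.
# With M orthogonal codewords plus their complements there are 2M symbols,
# so each symbol carries log2(2M) bits.

def hadamard(n):
    """Sylvester construction: 2^n x 2^n Hadamard matrix with +/-1 entries."""
    H = [[1]]
    for _ in range(n):
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

def encode(bits, H):
    """Leading bits select the orthogonal row; the last bit selects
    the row itself or its complement."""
    index = int("".join(map(str, bits[:-1])), 2)
    sign = 1 if bits[-1] == 0 else -1
    return [sign * x for x in H[index]]

def detect(received, H):
    """ML detection: correlate with every row, take the largest magnitude,
    and read the sign to recover the complement bit."""
    corrs = [sum(r * h for r, h in zip(received, row)) for row in H]
    best = max(range(len(H)), key=lambda i: abs(corrs[i]))
    sign_bit = 0 if corrs[best] >= 0 else 1
    index_bits = [int(b) for b in format(best, f"0{len(H).bit_length() - 1}b")]
    return index_bits + [sign_bit]

H = hadamard(2)               # 4 orthogonal codewords -> 8 symbols, 3 bits each
word = encode([1, 0, 1], H)
print(detect(word, H))
```

Because the complements double the symbol set without adding codeword length, the minimum Euclidean distance grows relative to simple on-off signaling, which is the performance mechanism the abstract refers to.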

  4. Spin-exchange relaxation-free magnetometer with nearly parallel pump and probe beams

    DOE PAGES

    Karaulanov, Todor; Savukov, Igor; Kim, Young Jin

    2016-03-22

    We constructed a spin-exchange relaxation-free (SERF) magnetometer with a small angle between the pump and probe beams, facilitating a multi-channel design with a flat pancake cell. This configuration provides almost complete overlap of the beams in the cell and prevents the pump beam from entering the probe detection channel. By coupling the lasers into multi-mode fibers, without an optical isolator or field modulation, we demonstrate a sensitivity of 10 fT/√Hz for frequencies between 10 Hz and 100 Hz. In addition to the experimental study of sensitivity, we present a theoretical analysis of the SERF magnetometer response to magnetic fields for small-angle and parallel-beam configurations, and show that at optimal DC offset fields the magnetometer response is comparable to that in the orthogonal-beam configuration. Based on the analysis, we also derive fundamental and probe-limited sensitivities for an arbitrary non-orthogonal geometry. The expected practical and fundamental sensitivities are of the same order as those in the orthogonal geometry. As a result, we anticipate that our design will be useful for magnetoencephalography (MEG) and magnetocardiography (MCG) applications.

  5. Site-specific incorporation of redox active amino acids into proteins

    DOEpatents

    Alfonta, Lital [San Diego, CA; Schultz, Peter G [La Jolla, CA; Zhang, Zhiwen [Austin, TX

    2011-08-30

    Compositions and methods of producing components of protein biosynthetic machinery that include orthogonal tRNAs, orthogonal aminoacyl-tRNA synthetases, and orthogonal pairs of tRNAs/synthetases, which incorporate redox active amino acids into proteins are provided. Methods for identifying these orthogonal pairs are also provided along with methods of producing proteins with redox active amino acids using these orthogonal pairs.

  6. Site-specific incorporation of redox active amino acids into proteins

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alfonta, Lital; Schultz, Peter G.; Zhang, Zhiwen

    Compositions and methods of producing components of protein biosynthetic machinery that include orthogonal tRNAs, orthogonal aminoacyl-tRNA synthetases, and orthogonal pairs of tRNAs/synthetases, which incorporate redox active amino acids into proteins are provided. Methods for identifying these orthogonal pairs are also provided along with methods of producing proteins with redox active amino acids using these orthogonal pairs.

  7. Site specific incorporation of keto amino acids into proteins

    DOEpatents

    Schultz, Peter G [La Jolla, CA; Wang, Lei [San Diego, CA

    2011-03-22

    Compositions and methods of producing components of protein biosynthetic machinery that include orthogonal tRNAs, orthogonal aminoacyl-tRNA synthetases, and orthogonal pairs of tRNAs/synthetases, which incorporate keto amino acids into proteins are provided. Methods for identifying these orthogonal pairs are also provided along with methods of producing proteins with keto amino acids using these orthogonal pairs.

  8. Site-specific incorporation of redox active amino acids into proteins

    DOEpatents

    Alfonta, Lital [San Diego, CA; Schultz, Peter G [La Jolla, CA; Zhang, Zhiwen [San Diego, CA

    2012-02-14

    Compositions and methods of producing components of protein biosynthetic machinery that include orthogonal tRNAs, orthogonal aminoacyl-tRNA synthetases, and orthogonal pairs of tRNAs/synthetases, which incorporate redox active amino acids into proteins are provided. Methods for identifying these orthogonal pairs are also provided along with methods of producing proteins with redox active amino acids using these orthogonal pairs.

  9. Site specific incorporation of keto amino acids into proteins

    DOEpatents

    Schultz, Peter G [La Jolla, CA; Wang, Lei [San Diego, CA

    2008-10-07

    Compositions and methods of producing components of protein biosynthetic machinery that include orthogonal tRNAs, orthogonal aminoacyl-tRNA synthetases, and orthogonal pairs of tRNAs/synthetases, which incorporate keto amino acids into proteins are provided. Methods for identifying these orthogonal pairs are also provided along with methods of producing proteins with keto amino acids using these orthogonal pairs.

  10. Site-specific incorporation of redox active amino acids into proteins

    DOEpatents

    Alfonta, Lital; Schultz, Peter G.; Zhang, Zhiwen

    2010-10-12

    Compositions and methods of producing components of protein biosynthetic machinery that include orthogonal tRNAs, orthogonal aminoacyl-tRNA synthetases, and orthogonal pairs of tRNAs/synthetases, which incorporate redox active amino acids into proteins are provided. Methods for identifying these orthogonal pairs are also provided along with methods of producing proteins with redox active amino acids using these orthogonal pairs.

  11. Site specific incorporation of keto amino acids into proteins

    DOEpatents

    Schultz, Peter G [La Jolla, CA; Wang, Lei [San Diego, CA

    2011-12-06

    Compositions and methods of producing components of protein biosynthetic machinery that include orthogonal tRNAs, orthogonal aminoacyl-tRNA synthetases, and orthogonal pairs of tRNAs/synthetases, which incorporate keto amino acids into proteins are provided. Methods for identifying these orthogonal pairs are also provided along with methods of producing proteins with keto amino acids using these orthogonal pairs.

  12. Site-specific incorporation of redox active amino acids into proteins

    DOEpatents

    Alfonta, Lital [San Diego, CA; Schultz, Peter G [La Jolla, CA; Zhang, Zhiwen [San Diego, CA

    2009-02-24

    Compositions and methods of producing components of protein biosynthetic machinery that include orthogonal tRNAs, orthogonal aminoacyl-tRNA synthetases, and orthogonal pairs of tRNAs/synthetases, which incorporate redox active amino acids into proteins are provided. Methods for identifying these orthogonal pairs are also provided along with methods of producing proteins with redox active amino acids using these orthogonal pairs.

  13. Site specific incorporation of keto amino acids into proteins

    DOEpatents

    Schultz, Peter G [La Jolla, CA; Wang, Lei [San Diego, CA

    2012-02-14

    Compositions and methods of producing components of protein biosynthetic machinery that include orthogonal tRNAs, orthogonal aminoacyl-tRNA synthetases, and orthogonal pairs of tRNAs/synthetases, which incorporate keto amino acids into proteins are provided. Methods for identifying these orthogonal pairs are also provided along with methods of producing proteins with keto amino acids using these orthogonal pairs.

  14. Construction of Optimally Reduced Empirical Model by Spatially Distributed Climate Data

    NASA Astrophysics Data System (ADS)

    Gavrilov, A.; Mukhin, D.; Loskutov, E.; Feigin, A.

    2016-12-01

    We present an approach to the empirical reconstruction of an evolution operator in stochastic form from space-distributed time series. The main problem in empirical modeling is choosing phase variables that efficiently reduce the dimension of the model with minimal loss of information about the system's dynamics, which in turn yields a more robust model and a better-quality reconstruction. For this purpose we incorporate two key steps into the model. The first is a standard preliminary reduction of the observed time series dimension by decomposition over an empirical basis (e.g., the empirical orthogonal function basis or its nonlinear or spatio-temporal generalizations). The second is construction of an evolution operator on the principal components (PCs), the time series obtained by the decomposition. In this step we introduce a new way of reducing the dimension of the embedding in which the evolution operator is constructed, based on choosing proper combinations of delayed PCs that capture the most significant spatio-temporal couplings. The evolution operator is sought as a nonlinear random mapping parameterized by artificial neural networks (ANNs). A Bayesian approach is used to learn the model and to find the optimal hyperparameters: the number of PCs, the dimension of the embedding, and the degree of nonlinearity of the ANN. We present results of applying the method to climate data (sea surface temperature, sea level pressure) and compare them with the same method based on a non-reduced embedding. The study is supported by the Government of the Russian Federation (agreement #14.Z50.31.0033 with the Institute of Applied Physics of RAS).
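The first step described above, reduction over an empirical orthogonal function (EOF) basis, can be illustrated with a toy calculation; for brevity a power iteration extracts only the leading EOF, where a real application would use an SVD and retain several modes. The data are invented.

```python
# Illustrative sketch of EOF reduction: find the dominant spatial pattern of
# a space-time field and project the data onto it to get the first principal
# component (PC) time series.

def leading_eof(data, iters=200):
    """data: list of time snapshots, each a list over spatial points.
    Returns the leading EOF (unit spatial pattern) and its PC time series."""
    n_space = len(data[0])
    means = [sum(snap[j] for snap in data) / len(data) for j in range(n_space)]
    anom = [[snap[j] - means[j] for j in range(n_space)] for snap in data]
    # spatial covariance matrix of the anomalies
    cov = [[sum(a[i] * a[j] for a in anom) / len(anom)
            for j in range(n_space)] for i in range(n_space)]
    v = [1.0] * n_space
    for _ in range(iters):  # power iteration for the dominant eigenvector
        w = [sum(cov[i][j] * v[j] for j in range(n_space))
             for i in range(n_space)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    pc = [sum(a[j] * v[j] for j in range(n_space)) for a in anom]  # projection
    return v, pc

# Toy field: one coherent spatial pattern modulated in time, plus a constant.
pattern = [1.0, 2.0, -1.0]
data = [[5 + t_amp * p for p in pattern] for t_amp in (1.0, -2.0, 3.0, -2.0)]
eof, pc = leading_eof(data)
```

The PC series `pc` is exactly the kind of reduced phase variable on which the evolution operator is then constructed in the second step.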

  15. Effective metrics and a fully covariant description of constitutive tensors in electrodynamics

    NASA Astrophysics Data System (ADS)

    Schuster, Sebastian; Visser, Matt

    2017-12-01

    Using electromagnetism to study analogue space-times is tantamount to considering consistency conditions for when a given (meta-)material would provide an analogue space-time model or, vice versa, characterizing which given metric could be modeled with a (meta-)material. While the consistency conditions themselves are by now well known and studied, the form the metric takes once they are satisfied is not. This question is most easily answered by keeping the formalisms of the two research fields here in contact as close to each other as possible. While fully covariant formulations of the electrodynamics of media have been around for a long while, they are usually abandoned for (3+1)- or six-dimensional formalisms. Here we use the fully unified and fully covariant approach. This enables us even to generalize the consistency conditions for the existence of an effective metric to arbitrary background metrics beyond flat space-time electrodynamics. We also show how the familiar matrices for permittivity ε, permeability μ⁻¹, and magnetoelectric effects ζ can be seen as the three independent pieces of the Bel decomposition of the constitutive tensor Z^{abcd}, i.e., the components of an orthogonal decomposition with respect to a given observer with four-velocity V^a. Finally, we use the Moore-Penrose pseudoinverse and the closely related pseudodeterminant to obtain the desired reconstruction of the effective metric in terms of the permittivity tensor ε^{ab}, the permeability tensor (μ⁻¹)^{ab}, and the magnetoelectric tensor ζ^{ab}, as an explicit function g_eff(ε, μ⁻¹, ζ).

  16. Thermal decomposition behavior of nano/micro bimodal feedstock with different solids loading

    NASA Astrophysics Data System (ADS)

    Oh, Joo Won; Lee, Won Sik; Park, Seong Jin

    2018-01-01

    Debinding is one of the most critical processes in powder injection molding. Parts are vulnerable to defect formation during debinding, and the long processing time of debinding lowers the production rate of the whole process. To determine optimal debinding conditions, the decomposition behavior of the feedstock must be understood. Since nano powder affects the decomposition behavior of feedstock, its effect needs to be investigated for nano/micro bimodal feedstocks. In this research, the effect of nano powder on the decomposition behavior of nano/micro bimodal feedstock has been studied. Bimodal powders were fabricated with different ratios of nano powder, and the critical solids loading of each powder was measured with a torque rheometer. Three feedstocks were fabricated for each powder depending on the solids loading condition. Thermogravimetric analysis (TGA) was carried out to analyze the thermal decomposition behavior of the feedstocks, and the decomposition activation energy was calculated. The results indicate that nano powder has a limited effect on feedstocks at solids loadings below the optimal range, whereas it strongly influences the decomposition behavior at the optimal solids loading by causing polymer chain scission due to high viscosity.
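The abstract does not state how the activation energy was calculated; a common choice for TGA data collected at several heating rates is the Kissinger method, sketched below with invented peak temperatures.

```python
# Hedged sketch of the Kissinger method for a decomposition activation
# energy: ln(beta / Tp^2) = C - Ea / (R * Tp), fitted by least squares over
# runs at different heating rates beta with decomposition peak temperatures
# Tp. All numerical values below are hypothetical.

import math

R = 8.314  # gas constant, J/(mol K)

def kissinger_ea(heating_rates, peak_temps):
    """Least-squares slope of ln(beta/Tp^2) vs 1/Tp gives -Ea/R."""
    xs = [1.0 / tp for tp in peak_temps]
    ys = [math.log(b / tp ** 2) for b, tp in zip(heating_rates, peak_temps)]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return -slope * R  # activation energy in J/mol

# Hypothetical peak temperatures (K) shifting upward with heating rate (K/min)
betas = [5.0, 10.0, 20.0]
tps = [650.0, 663.0, 677.0]
print(f"Ea = {kissinger_ea(betas, tps) / 1000:.0f} kJ/mol")
```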

  17. Search for memory effects in methane hydrate: structure of water before hydrate formation and after hydrate decomposition.

    PubMed

    Buchanan, Piers; Soper, Alan K; Thompson, Helen; Westacott, Robin E; Creek, Jefferson L; Hobson, Greg; Koh, Carolyn A

    2005-10-22

    Neutron diffraction with HD isotope substitution has been used to study the formation and decomposition of the methane clathrate hydrate. Using this atomistic technique coupled with simultaneous gas consumption measurements, we have successfully tracked the formation of the sI methane hydrate from a water/gas mixture and then the subsequent decomposition of the hydrate from initiation to completion. These studies demonstrate that the application of neutron diffraction with simultaneous gas consumption measurements provides a powerful method for studying the clathrate hydrate crystal growth and decomposition. We have also used neutron diffraction to examine the water structure before the hydrate growth and after the hydrate decomposition. From the neutron-scattering curves and the empirical potential structure refinement analysis of the data, we find that there is no significant difference between the structure of water before the hydrate formation and the structure of water after the hydrate decomposition. Nor is there any significant change to the methane hydration shell. These results are discussed in the context of widely held views on the existence of memory effects after the hydrate decomposition.

  18. Conditioned empirical orthogonal functions for interpolation of runoff time series along rivers: Application to reconstruction of missing monthly records

    NASA Astrophysics Data System (ADS)

    Li, Lingqi; Gottschalk, Lars; Krasovskaia, Irina; Xiong, Lihua

    2018-01-01

    Reconstruction of missing runoff data is important for resolving the contradiction between the common occurrence of gaps and the fundamental need for complete time series in reliable hydrological research. The conventional empirical orthogonal function (EOF) approach has been documented as useful for interpolating hydrological series based on a spatiotemporal decomposition of runoff variation patterns, without additional measurements (e.g., precipitation, land cover). This study develops a new EOF-based approach (abbreviated CEOF) that conditions the EOF expansion on the oscillations at the outlet (or any other reference station) of a target basin and creates a set of residual series by removing the dependence on this reference series, in order to redefine the amplitude functions (components). This development allows a transparent hydrological interpretation of the dimensionless components and thereby strengthens their capacity to explain various runoff regimes in a basin. The two approaches are demonstrated on discharge observations from the Ganjiang basin, China. Two alternatives for determining the amplitude functions, based on centred and standardised series respectively, are tested. The convergence of the reconstruction at different sites as a function of the number of components, and its relation to the characteristics of the site, are analysed. Results indicate that the CEOF approach offers an efficient way to restore runoff records with only one to four components; it is more advantageous in nested large basins than at headwater sites and often performs better than the EOF approach when using standardised series, especially in improving infilling accuracy for low flows. Comparisons against other interpolation methods (nearest neighbour, linear regression, inverse distance weighting) further confirm the advantage of the EOF-based approaches in avoiding spatial and temporal inconsistencies in the estimated series.
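The conditioning step, removing a station's dependence on a complete reference series such as the basin outlet, can be illustrated by a plain least-squares regression (the subsequent EOF expansion of the residuals is omitted here); all discharge values are invented.

```python
# Simplified sketch of conditioning a gauge record on a reference series:
# fit station = a * reference + b over the non-missing pairs, then use the
# fitted relation to fill gaps in the station record.

def fit_on_reference(station, reference):
    """Least-squares fit of station on reference, ignoring missing values."""
    pairs = [(r, s) for r, s in zip(reference, station) if s is not None]
    n = len(pairs)
    rbar = sum(r for r, _ in pairs) / n
    sbar = sum(s for _, s in pairs) / n
    a = (sum((r - rbar) * (s - sbar) for r, s in pairs)
         / sum((r - rbar) ** 2 for r, _ in pairs))
    return a, sbar - a * rbar

def fill_gaps(station, reference):
    """Replace each missing station value by the regression prediction."""
    a, b = fit_on_reference(station, reference)
    return [s if s is not None else a * r + b
            for s, r in zip(station, reference)]

outlet = [120.0, 80.0, 200.0, 150.0, 60.0]   # complete reference series (m3/s)
gauge = [35.0, 22.0, None, 44.0, 18.0]       # interior record with one gap
print(fill_gaps(gauge, outlet))
```

In the CEOF approach proper, it is the residuals of such a fit, not the raw series, that feed the EOF expansion, which is what gives the amplitude functions their hydrological interpretability.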

  19. Generalization of Jacobi's Decomposition Theorem to the Rotation and Translation of a Solid in a Fluid.

    NASA Astrophysics Data System (ADS)

    Chiang, Rong-Chang

    Jacobi found that the rotation of a symmetrical heavy top about a fixed point is composed of two torque-free rotations of two triaxial bodies about their centers of mass. His discovery rests on the fact that the orthogonal matrix representing the rotation of a symmetrical heavy top decomposes into a product of two orthogonal matrices, each representing the torque-free rotation of a triaxial body. This theorem is generalized here to Kirchhoff's case of the rotation and translation of a symmetrical solid in a fluid. The generalization requires the explicit computation, by means of theta functions, of the nine direction cosines between the rotating body axes and the fixed space axes. The addition theorem for theta functions makes it possible to decompose the rotational matrix into a product of similar matrices. The basic idea of using the addition theorem is simple, but carrying the computation through is quite involved, and the full proof turns out to be a lengthy process of evaluating rather long and complex expressions. For the translational motion we give a new treatment: the position of the center of mass as a function of time is found by a direct evaluation of the elliptic integral, using a new theta-function interpretation of Legendre's reduction formula for the elliptic integral. To complete the solution we have also studied the physical aspects of the motion. A complete examination of all possible manifolds of the steady helical cases yields a full qualitative description of the motion. Many numerical examples and graphs are given to illustrate the rotation and translation of the solid in a fluid.

  20. Keratin decomposition by trogid beetles: evidence from a feeding experiment and stable isotope analysis

    NASA Astrophysics Data System (ADS)

    Sugiura, Shinji; Ikeda, Hiroshi

    2014-03-01

    The decomposition of vertebrate carcasses is an important ecosystem function. Soft tissues of dead vertebrates are rapidly decomposed by diverse animals. However, decomposition of hard tissues such as hairs and feathers is much slower because only a few animals can digest keratin, a protein that is concentrated in hairs and feathers. Although beetles of the family Trogidae are considered keratin feeders, their ecological function has rarely been explored. Here, we investigated the keratin-decomposition function of trogid beetles in heron-breeding colonies where keratin was frequently supplied as feathers. Three trogid species were collected from the colonies and observed feeding on heron feathers under laboratory conditions. We also measured the nitrogen (δ15N) and carbon (δ13C) stable isotope ratios of two trogid species that were maintained on a constant diet (feathers from one heron individual) during 70 days under laboratory conditions. We compared the isotopic signatures of the trogids with the feathers to investigate isotopic shifts from the feathers to the consumers for δ15N and δ13C. We used mixing models (MixSIR and SIAR) to estimate the main diets of individual field-collected trogid beetles. The analysis indicated that heron feathers were more important as food for trogid beetles than were soft tissues under field conditions. Together, the feeding experiment and stable isotope analysis provided strong evidence of keratin decomposition by trogid beetles.
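A two-source, one-isotope mixing model, the linear special case of models such as MixSIR and SIAR (which are Bayesian and handle more sources and uncertainties), solves for the dietary proportion directly; the δ15N values and trophic shift below are invented for illustration.

```python
# Minimal two-source, one-isotope mixing model: after correcting the
# consumer's signature for the trophic shift, solve
#   d_consumer - shift = p * d_source1 + (1 - p) * d_source2
# for the proportion p of source 1 in the diet.

def two_source_proportion(d_consumer, d_source1, d_source2, trophic_shift=0.0):
    """Return the fraction of the diet attributable to source 1."""
    corrected = d_consumer - trophic_shift
    return (corrected - d_source2) / (d_source1 - d_source2)

# Hypothetical delta-15N values: feather keratin vs soft-tissue signatures.
p_feathers = two_source_proportion(
    d_consumer=12.0, d_source1=13.0, d_source2=8.0, trophic_shift=1.5)
print(f"estimated feather fraction of diet: {p_feathers:.2f}")
```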
