NASA Astrophysics Data System (ADS)
Sridhar, J.
2015-12-01
The focus of this work is to examine polarimetric decomposition techniques, primarily the Pauli decomposition and the Sphere-Diplane-Helix (SDH) decomposition, for forest resource assessment. The data processing steps adopted are pre-processing (geometric correction and radiometric calibration), speckle reduction, image decomposition and image classification. Initially, to classify forest regions, unsupervised classification was applied to determine different unknown classes; the K-means clustering method was observed to give better results than the ISODATA method. Using the algorithm developed for Radar Tools, the code for the decomposition and classification techniques was implemented in Interactive Data Language (IDL) and applied to a RISAT-1 image of the Mysore-Mandya region of Karnataka, India. This region was chosen for studying forest vegetation and consists of agricultural lands, water and hilly regions. Polarimetric SAR data possess a high potential for classification of the earth's surface. After applying the decomposition techniques, classification was done by selecting regions of interest; post-classification, the overall accuracy was observed to be higher for the SDH-decomposed image, as SDH operates on individual pixels on a coherent basis and utilises the complete intrinsic coherent nature of polarimetric SAR data, making it particularly suited for analysis of high-resolution SAR data. The Pauli decomposition represents all the polarimetric information in a single SAR image, but interpretation of the resulting image is difficult. The SDH decomposition technique appears to produce better results and easier interpretation than the Pauli decomposition; further quantification and analysis are ongoing. The comparison of polarimetric decomposition techniques and evolutionary classification techniques will be the scope of future work.
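As a concrete illustration of the simpler of the two techniques discussed above, the following minimal sketch applies the Pauli decomposition to a quad-pol scattering matrix: the components |S_HH - S_VV|, 2|S_HV| and |S_HH + S_VV| are conventionally mapped to red, green and blue. The synthetic data and array names are illustrative assumptions, not taken from the paper (which used IDL rather than Python).

```python
import numpy as np

def pauli_decomposition(s_hh, s_hv, s_vv):
    """Pauli decomposition of a reciprocal polarimetric scattering matrix.

    Returns the three component magnitudes commonly mapped to RGB:
    red   = |S_HH - S_VV| / sqrt(2)   (double-bounce scattering)
    green = sqrt(2) |S_HV|            (volume scattering)
    blue  = |S_HH + S_VV| / sqrt(2)   (surface / odd-bounce scattering)
    """
    red = np.abs(s_hh - s_vv) / np.sqrt(2)
    green = np.sqrt(2) * np.abs(s_hv)
    blue = np.abs(s_hh + s_vv) / np.sqrt(2)
    return red, green, blue

# Illustrative synthetic quad-pol image (complex scattering amplitudes).
rng = np.random.default_rng(0)
shape = (128, 128)
s_hh, s_hv, s_vv = (rng.normal(size=shape) + 1j * rng.normal(size=shape)
                    for _ in range(3))
r, g, b = pauli_decomposition(s_hh, s_hv, s_vv)
rgb = np.stack([c / c.max() for c in (r, g, b)], axis=-1)  # normalised composite
```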
Conceptual design optimization study
NASA Technical Reports Server (NTRS)
Hollowell, S. J.; Beeman, E. R., II; Hiyama, R. M.
1990-01-01
The feasibility of applying multilevel functional decomposition and optimization techniques to conceptual design of advanced fighter aircraft was investigated. Applying the functional decomposition techniques to the conceptual design phase appears to be feasible. The initial implementation of the modified design process will optimize wing design variables. A hybrid approach, combining functional decomposition techniques for generation of aerodynamic and mass properties linear sensitivity derivatives with existing techniques for sizing mission performance and optimization, is proposed.
A knowledge-based tool for multilevel decomposition of a complex design problem
NASA Technical Reports Server (NTRS)
Rogers, James L.
1989-01-01
Although much work has been done in applying artificial intelligence (AI) tools and techniques to problems in different engineering disciplines, only recently has the application of these tools begun to spread to the decomposition of complex design problems. A new tool based on AI techniques has been developed to implement a decomposition scheme suitable for multilevel optimization and display of data in an N x N matrix format.
Time series decomposition methods were applied to meteorological and air quality data and their numerical model estimates. Decomposition techniques express a time series as the sum of a small number of independent modes which hypothetically represent identifiable forcings, thereb...
Mechanism of thermal decomposition of K2FeO4 and BaFeO4: A review
NASA Astrophysics Data System (ADS)
Sharma, Virender K.; Machala, Libor
2016-12-01
This paper reviews the thermal decomposition of potassium ferrate(VI) (K2FeO4) and barium ferrate(VI) (BaFeO4) in air and in a nitrogen atmosphere. Mössbauer spectroscopy and nuclear forward scattering (NFS) of synchrotron radiation are reviewed as approaches to advance understanding of the electron-transfer processes involved in the reduction of ferrate(VI) to Fe(III) phases. Direct evidence of Fe(V) and Fe(IV) as intermediate iron species, obtained using the applied techniques, is given. Thermal decomposition of K2FeO4 involved Fe(V), Fe(IV), and K3FeO3 as intermediate species, while BaFeO3 (i.e., Fe(IV)) was the only intermediate species during the decomposition of BaFeO4. The nature of the ferrite species formed as the final Fe(III) products of the thermal decomposition of K2FeO4 and BaFeO4 under different conditions is evaluated. The steps of the mechanisms of thermal decomposition of ferrate(VI), which reasonably explain the experimental observations of the applied approaches in conjunction with thermal and surface techniques, are summarized.
Characteristic-eddy decomposition of turbulence in a channel
NASA Technical Reports Server (NTRS)
Moin, Parviz; Moser, Robert D.
1989-01-01
Lumley's proper orthogonal decomposition technique is applied to the turbulent flow in a channel. Coherent structures are extracted by decomposing the velocity field into characteristic eddies with random coefficients. A generalization of the shot-noise expansion is used to determine the characteristic eddies in homogeneous spatial directions. Three different techniques are used to determine the phases of the Fourier coefficients in the expansion: (1) one based on the bispectrum, (2) a spatial compactness requirement, and (3) a functional continuity argument. Similar results are found from each of these techniques.
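For readers unfamiliar with the method, the snapshot formulation of Lumley's proper orthogonal decomposition reduces to an SVD of a matrix of flow snapshots; the sketch below is a minimal version of that idea, with the snapshot matrix and its dimensions as illustrative assumptions.

```python
import numpy as np

def snapshot_pod(snapshots):
    """Proper orthogonal decomposition via SVD of a snapshot matrix.

    snapshots: (n_points, n_snapshots) array, one flattened velocity
    field per column. Returns spatial modes, singular values and the
    (random) temporal coefficients of each mode.
    """
    mean = snapshots.mean(axis=1, keepdims=True)
    fluct = snapshots - mean               # decompose fluctuations only
    modes, sigma, vt = np.linalg.svd(fluct, full_matrices=False)
    coeffs = np.diag(sigma) @ vt           # temporal coefficients
    return modes, sigma, coeffs

# Illustrative data: 1000 spatial points, 200 snapshots.
u = np.random.default_rng(1).normal(size=(1000, 200))
modes, sigma, coeffs = snapshot_pod(u)
energy_fraction = sigma**2 / np.sum(sigma**2)  # modal kinetic-energy share
```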
The use of the modified Cholesky decomposition in divergence and classification calculations
NASA Technical Reports Server (NTRS)
Van Rooy, D. L.; Lynn, M. S.; Snyder, C. H.
1973-01-01
This report analyzes the use of the modified Cholesky decomposition technique as applied to the feature selection and classification algorithms used in the analysis of remote sensing data (e.g., as in LARSYS). This technique is approximately 30% faster in classification and a factor of 2-3 faster in divergence, as compared with LARSYS. Also numerical stability and accuracy are slightly improved. Other methods necessary to deal with numerical stability problems are briefly discussed.
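A hedged sketch of why a Cholesky factorization speeds up Gaussian maximum-likelihood classification of the kind LARSYS performs (the report's exact modified algorithm is not reproduced here): the factor yields the log-determinant cheaply and turns the Mahalanobis distance into a single triangular solve. The class statistics below are illustrative.

```python
import numpy as np
from scipy.linalg import solve_triangular

def gaussian_discriminant(x, mean, cov):
    """Log-likelihood discriminant for one class via a Cholesky factor.

    The factorization is done once per class; each pixel then costs one
    triangular solve instead of a full matrix inversion.
    """
    L = np.linalg.cholesky(cov)                      # cov = L @ L.T
    log_det = 2.0 * np.sum(np.log(np.diag(L)))
    z = solve_triangular(L, x - mean, lower=True)    # z = L^{-1} (x - mean)
    return -0.5 * (log_det + z @ z)                  # log N(x; mean, cov) + const

# Illustrative 4-band pixel and class statistics.
rng = np.random.default_rng(2)
a = rng.normal(size=(4, 4))
cov = a @ a.T + 4 * np.eye(4)    # symmetric positive definite
mean = rng.normal(size=4)
pixel = rng.normal(size=4)
score = gaussian_discriminant(pixel, mean, cov)
```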
Identification of particle-laden flow features from wavelet decomposition
NASA Astrophysics Data System (ADS)
Jackson, A.; Turnbull, B.
2017-12-01
A wavelet decomposition based technique is applied to air pressure data obtained from laboratory-scale powder snow avalanches. This technique is shown to be a powerful tool for identifying both repeatable and chaotic features at any frequency within the signal. Additionally, this technique is demonstrated to be a robust method for the removal of noise from the signal as well as being capable of removing other contaminants from the signal. Whilst powder snow avalanches are the focus of the experiments analysed here, the features identified can provide insight to other particle-laden gravity currents and the technique described is applicable to a wide variety of experimental signals.
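A minimal sketch of wavelet-based decomposition and noise removal in the spirit described above, using the PyWavelets package (a choice assumed here; the paper's own implementation is not specified): the signal is decomposed into frequency bands and reconstructed after suppressing small detail coefficients.

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=5):
    """Decompose a 1-D signal, soft-threshold details, reconstruct."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Universal threshold estimated from the finest detail level.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

# Illustrative noisy pressure trace.
t = np.linspace(0.0, 1.0, 2048)
clean = np.sin(2 * np.pi * 5 * t) * np.exp(-2 * t)
noisy = clean + 0.3 * np.random.default_rng(3).normal(size=t.size)
denoised = wavelet_denoise(noisy)
```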
NASA Astrophysics Data System (ADS)
Othman, Adel A. A.; Fathy, M.; Negm, Adel
2018-06-01
The Temsah field is located offshore in the eastern part of the Nile Delta. The main reservoirs of the area are Middle Pliocene and consist mainly of siliciclastics deposited in a confined deep-marine environment. The distribution pattern of the reservoir facies is of limited scale, indicating fast lateral and vertical changes that are not easy to resolve by applying conventional seismic attributes. The target of the present study is to create geophysical workflows that better image the channel sand distribution in the study area. We applied both the average absolute amplitude and the energy attribute, which indicate the distribution of the sand bodies in the study area but fail to fully describe the channel geometry, so another tool offering a more detailed geometric description is needed. The spectral decomposition analysis method, based on the discrete Fourier transform, is an alternative technique that can provide better results. Spectral decomposition performed over the upper channel shows that the frequency in the eastern part of the channel is the same as the frequency in places where the wells are drilled, which confirms the connection of the eastern and western parts of the upper channel. The results suggest that application of the spectral decomposition method leads to reliable inferences. Hence, using the spectral decomposition method alone or along with other attributes has a positive impact on reserves growth and increased production; the reserve estimate in the study area increases to 75 bcf.
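A hedged sketch of DFT-based spectral decomposition on a single trace, using SciPy's short-time Fourier transform (the trace, sampling rate and window length are illustrative assumptions, not the study's workflow): the output is amplitude as a function of time and frequency, from which single-frequency sections can be extracted.

```python
import numpy as np
from scipy.signal import stft

fs = 250.0                        # sampling frequency, Hz (4 ms seismic)
t = np.arange(0, 2.0, 1.0 / fs)
trace = np.sin(2 * np.pi * 15 * t) + 0.5 * np.sin(2 * np.pi * 30 * t)

# Short-time Fourier transform: amplitude spectrum per time window.
freqs, times, spectrum = stft(trace, fs=fs, nperseg=64)
amplitude = np.abs(spectrum)

def frequency_slice(target_hz):
    """Amplitude of the spectral component nearest target_hz, vs. time."""
    idx = np.argmin(np.abs(freqs - target_hz))
    return amplitude[idx]

slice_15hz = frequency_slice(15.0)   # compare against, e.g., a 30 Hz slice
```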
Performance of Scattering Matrix Decomposition and Color Spaces for Synthetic Aperture Radar Imagery
2010-03-01
Excerpts (table-of-contents fragments): "Color Spaces and Synthetic Aperture Radar (SAR) Multicolor Imaging"; "Colorimetry"; "Decomposition Techniques on SAR Polarimetry and Colorimetry applied to SAR Imagery". Colorimetry is also introduced, presenting the fundamentals of the RGB and CMY color spaces as defined for polarimetric SAR systems.
ERIC Educational Resources Information Center
Feng, Mingyu; Beck, Joseph E.; Heffernan, Neil T.
2009-01-01
A basic question about instructional interventions is how effective they are in promoting student learning. This paper presents a study to determine the relative efficacy of different instructional strategies by applying an educational data mining technique, learning decomposition. We use logistic regression to determine how much learning is caused by…
New monitoring by thermogravimetry for radiation degradation of EVA
NASA Astrophysics Data System (ADS)
Boguski, J.; Przybytniak, G.; Łyczko, K.
2014-07-01
The radiation ageing of ethylene vinyl-acetate copolymer (EVA) used as a cable jacket in nuclear power plants was carried out by gamma irradiation, and the degradation was monitored by thermogravimetric analysis (TGA). The isothermal decomposition rate of EVA in air at 400 °C decreased with increasing dose and with decreasing dose rate. The behavior of the EVA cable jacket indicated that the decomposition rate at 400 °C was reduced with increasing oxidation. The elongation at break measured by tensile testing of the radiation-aged EVA was closely related to the decomposition rate at 400 °C; therefore, TGA might be applied as a diagnostic technique for cable degradation.
Benhammouda, Brahim
2016-01-01
Since 1980, the Adomian decomposition method (ADM) has been extensively used as a simple, powerful tool that applies directly to solve different kinds of nonlinear equations, including functional, differential, integro-differential and algebraic equations. However, for differential-algebraic equations (DAEs) the ADM has been applied in only four earlier works. There, the DAEs are first pre-processed by transformations like index reductions before applying the ADM. The drawback of such transformations is that they can involve complex algorithms, can be computationally expensive and may lead to non-physical solutions. The purpose of this paper is to propose a novel technique that applies the ADM directly to solve a class of nonlinear higher-index Hessenberg DAE systems efficiently. The main advantages of this technique are that, first, it avoids complex transformations like index reductions and leads to a simple general algorithm; and second, it reduces the computational work by solving only linear algebraic systems with a constant coefficient matrix at each iteration, except for the first iteration, where the algebraic system is nonlinear (if the DAE is nonlinear with respect to the algebraic variable). To demonstrate the effectiveness of the proposed technique, we apply it to a nonlinear index-three Hessenberg DAE system with nonlinear algebraic constraints. This technique is straightforward and can be programmed in Maple or Mathematica to simulate real application problems.
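A minimal worked example of the classical ADM itself (not the paper's Hessenberg-DAE algorithm), solving y' = -y^2, y(0) = 1 in SymPy; the known exact solution 1/(1+t) lets one check the resulting series.

```python
import sympy as sp

t, tau, lam = sp.symbols("t tau lambda")

def adomian_solve(n_terms=6):
    """Adomian decomposition for y' = -y**2, y(0) = 1 (exact: 1/(1+t)).

    y = sum(y_n); the nonlinearity y**2 is expanded in Adomian
    polynomials A_n, and each y_{n+1} = -Integral(A_n, 0..t).
    """
    y = [sp.Integer(1)]                 # y_0 carries the initial condition
    for n in range(n_terms - 1):
        # A_n = (1/n!) d^n/dlam^n [ N(sum lam^k y_k) ] at lam = 0
        series = sum(lam**k * y[k] for k in range(len(y)))
        a_n = sp.diff(series**2, lam, n).subs(lam, 0) / sp.factorial(n)
        y.append(-sp.integrate(a_n.subs(t, tau), (tau, 0, t)))
    return sp.expand(sum(y))

approx = adomian_solve()
# 1 - t + t**2 - t**3 + ... : the Taylor series of 1/(1+t), as expected.
```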
Characteristic eddy decomposition of turbulence in a channel
NASA Technical Reports Server (NTRS)
Moin, Parviz; Moser, Robert D.
1991-01-01
The proper orthogonal decomposition technique (Lumley's decomposition) is applied to the turbulent flow in a channel to extract coherent structures by decomposing the velocity field into characteristic eddies with random coefficients. In the homogeneous spatial directions, a generalization of the shot-noise expansion is used to determine the characteristic eddies. In this expansion, the Fourier coefficients of the characteristic eddy cannot be obtained from the second-order statistics. Three different techniques are used to determine the phases of these coefficients. They are based on: (1) the bispectrum, (2) a spatial compactness requirement, and (3) a functional continuity argument. Results from these three techniques are found to be similar in most respects. The implications of these techniques and the shot-noise expansion are discussed. The dominant eddy is found to contribute as much as 76 percent to the turbulent kinetic energy. In both 2D and 3D, the characteristic eddies consist of an ejection region straddled by streamwise vortices that leave the wall in the very short streamwise distance of about 100 wall units.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gallagher, Neal B.; Blake, Thomas A.; Gassman, Paul L.
2006-07-01
Multivariate curve resolution (MCR) is a powerful technique for extracting chemical information from measured spectra of complex mixtures. The difficulty with applying MCR to soil reflectance measurements is that light scattering artifacts can contribute much more variance to the measurements than the analyte(s) of interest. Two methods were integrated into an MCR decomposition to account for light scattering effects. First, an extended mixture model using pure analyte spectra augmented with scattering 'spectra' was used for the measured spectra. Second, second-derivative preprocessed spectra, which have higher selectivity than the unprocessed spectra, were included in a second block as part of the decomposition. The conventional alternating least squares (ALS) algorithm was modified to simultaneously decompose the measured and second-derivative spectra in a two-block decomposition. Equality constraints were also included to incorporate information about sampling conditions. The result was an MCR decomposition that provided interpretable spectra from soil reflectance measurements.
Coarse-to-fine markerless gait analysis based on PCA and Gauss-Laguerre decomposition
NASA Astrophysics Data System (ADS)
Goffredo, Michela; Schmid, Maurizio; Conforto, Silvia; Carli, Marco; Neri, Alessandro; D'Alessio, Tommaso
2005-04-01
Human movement analysis is generally performed through the use of marker-based systems, which allow reconstructing, with high levels of accuracy, the trajectories of markers placed on specific points of the human body. Marker-based systems, however, show some drawbacks that can be overcome by the use of video systems applying markerless techniques. In this paper, a specifically designed computer vision technique for the detection and tracking of relevant body points is presented. It is based on the Gauss-Laguerre decomposition, and a principal component analysis (PCA) technique is used to circumscribe the region of interest. Results obtained in both synthetic and experimental tests show a significant reduction of the computational costs, with no significant reduction of the tracking accuracy.
A technique for plasma velocity-space cross-correlation
NASA Astrophysics Data System (ADS)
Mattingly, Sean; Skiff, Fred
2018-05-01
An advance in experimental plasma diagnostics is presented and used to make the first measurement of a plasma velocity-space cross-correlation matrix. The velocity space correlation function can detect collective fluctuations of plasmas through a localized measurement. An empirical decomposition, singular value decomposition, is applied to this Hermitian matrix in order to obtain the plasma fluctuation eigenmode structure on the ion distribution function. A basic theory is introduced and compared to the modes obtained by the experiment. A full characterization of these modes is left for future work, but an outline of this endeavor is provided. Finally, the requirements for this experimental technique in other plasma regimes are discussed.
NASA Technical Reports Server (NTRS)
Consoli, Robert David; Sobieszczanski-Sobieski, Jaroslaw
1990-01-01
Advanced multidisciplinary analysis and optimization methods, namely system sensitivity analysis and non-hierarchical system decomposition, are applied to reduce the cost and improve the visibility of an automated vehicle design synthesis process. This process is inherently complex due to the large number of functional disciplines and associated interdisciplinary couplings. Recent developments in system sensitivity analysis as applied to complex non-hierarchic multidisciplinary design optimization problems enable the decomposition of these complex interactions into sub-processes that can be evaluated in parallel. The application of these techniques results in significant cost, accuracy, and visibility benefits for the entire design synthesis process.
Tissue artifact removal from respiratory signals based on empirical mode decomposition.
Liu, Shaopeng; Gao, Robert X; John, Dinesh; Staudenmayer, John; Freedson, Patty
2013-05-01
On-line measurement of respiration plays an important role in monitoring human physical activities. Such measurement commonly employs sensing belts secured around the rib cage and abdomen of the test subject. Affected by the movement of body tissues, respiratory signals typically have a low signal-to-noise ratio. Removing tissue artifacts is therefore critical to ensuring effective respiration analysis. This paper presents a signal decomposition technique for tissue artifact removal from respiratory signals, based on the empirical mode decomposition (EMD). An algorithm based on mutual information and power criteria was devised to automatically select appropriate intrinsic mode functions for tissue artifact removal and respiratory signal reconstruction. Performance of the EMD algorithm was evaluated through simulations and real-life experiments (N = 105). Comparison with the low-pass filtering that has conventionally been applied confirmed the effectiveness of the technique in tissue artifact removal.
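A minimal sketch of EMD-based artifact removal in the spirit described above, using the PyEMD package; the package choice, the synthetic signal and the crude frequency-band selection rule are assumptions standing in for the paper's mutual-information and power criteria.

```python
import numpy as np
from PyEMD import EMD  # pip install EMD-signal

# Illustrative respiration-like signal with a slow tissue-movement drift.
fs = 50.0
t = np.arange(0, 30, 1 / fs)
respiration = np.sin(2 * np.pi * 0.3 * t)       # ~18 breaths per minute
artifact = 0.8 * np.sin(2 * np.pi * 0.02 * t)   # slow tissue movement
signal = respiration + artifact

imfs = EMD().emd(signal, t)

# Crude selection rule: keep IMFs whose dominant frequency lies in a
# plausible respiratory band (the paper's criteria are more principled).
kept = []
for imf in imfs:
    spectrum = np.abs(np.fft.rfft(imf))
    f_dom = np.fft.rfftfreq(imf.size, 1 / fs)[np.argmax(spectrum)]
    if 0.1 <= f_dom <= 1.0:
        kept.append(imf)
reconstructed = np.sum(kept, axis=0)
```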
Optimal cost design of water distribution networks using a decomposition approach
NASA Astrophysics Data System (ADS)
Lee, Ho Min; Yoo, Do Guen; Sadollah, Ali; Kim, Joong Hoon
2016-12-01
Water distribution network decomposition, which is an engineering approach, is adopted to increase the efficiency of obtaining the optimal cost design of a water distribution network using an optimization algorithm. This study applied the source tracing tool in EPANET, which is a hydraulic and water quality analysis model, to the decomposition of a network to improve the efficiency of the optimal design process. The proposed approach was tested by carrying out the optimal cost design of two water distribution networks, and the results were compared with other optimal cost designs derived from previously proposed optimization algorithms. The proposed decomposition approach using the source tracing technique enables the efficient decomposition of an actual large-scale network, and the results can be combined with the optimal cost design process using an optimization algorithm. This proves that the final design in this study is better than those obtained with other previously proposed optimization algorithms.
Electrochemical Protection of Thin Film Electrodes in Solid State Nanopores
Harrer, Stefan; Waggoner, Philip S.; Luan, Binquan; Afzali-Ardakani, Ali; Goldfarb, Dario L.; Peng, Hongbo; Martyna, Glenn; Rossnagel, Stephen M.; Stolovitzky, Gustavo A.
2011-01-01
We have eliminated electrochemical surface oxidation and reduction as well as water decomposition inside sub-5-nm wide nanopores in conducting TiN membranes using a surface passivation technique. Nanopore ionic conductances, and therefore pore diameters, were unchanged in passivated pores after applying potentials of ±4.5 V for as long as 24 h. Water decomposition was eliminated by using aqueous 90% glycerol solvent. The use of a protective self-assembled monolayer of hexadecylphosphonic acid was also investigated. PMID:21597142
NASA Astrophysics Data System (ADS)
Udomsungworagul, A.; Charnsethikul, P.
2018-03-01
This article introduces a methodology to solve large-scale two-phase linear programming problems, with a case study of multiple-time-period animal diet problems under uncertainty in both the nutrient content of raw materials and finished-product demand. Assumptions allowing multiple product formulas to be manufactured in the same time period and allowing raw-material and finished-product inventory to be held have been added. Dantzig-Wolfe decomposition, Benders decomposition and the column generation technique have been combined and applied to solve the problem. The proposed procedure was programmed using VBA and the Solver tool in Microsoft Excel. A case study was used and tested in terms of efficiency and effectiveness trade-offs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Supriya; Srivastava, Pratibha; Singh, Gurdip, E-mail: gsingh4us@yahoo.com
2013-02-15
Graphical abstract: Prepared nanoferrites were characterized by FE-SEM and bright-field TEM micrographs. The catalytic effect of these nanoferrites was evaluated on the thermal decomposition of ammonium perchlorate using TG and TG-DSC techniques. The kinetics of thermal decomposition of AP were evaluated using isothermal TG data by model-fitting as well as isoconversional methods. Highlights: synthesis of ferrite nanostructures (~20.0 nm) by a wet-chemical method under different synthetic conditions; characterization using XRD, FE-SEM, EDS, TEM, HRTEM and SAED patterns; catalytic activity of ferrite nanostructures on AP thermal decomposition by thermal techniques; burning-rate measurements of CSPs with ferrite nanostructures; kinetics of thermal decomposition of AP + nanoferrites. Abstract: In this paper, nanoferrites of Mn, Co and Ni were synthesized by a wet-chemical method and characterized by X-ray diffraction (XRD), field-emission scanning electron microscopy (FE-SEM), energy-dispersive X-ray spectra (EDS), transmission electron microscopy (TEM) and high-resolution transmission electron microscopy (HR-TEM). Their catalytic activity was investigated on the thermal decomposition of ammonium perchlorate (AP) and composite solid propellants (CSPs) using thermogravimetry (TG), TG coupled with differential scanning calorimetry (TG-DSC) and ignition-delay measurements. Kinetics of the thermal decomposition of AP + nanoferrites have also been investigated using isoconversional and model-fitting approaches applied to isothermal TG decomposition data. The burning rate of CSPs was considerably enhanced by these nanoferrites. Addition of nanoferrites to AP shifted the high-temperature decomposition peak toward lower temperature. All these studies reveal that ferrite nanorods show catalytic activity superior to that of nanospheres and nanocubes.
Mode decomposition and Lagrangian structures of the flow dynamics in orbitally shaken bioreactors
NASA Astrophysics Data System (ADS)
Weheliye, Weheliye Hashi; Cagney, Neil; Rodriguez, Gregorio; Micheletti, Martina; Ducci, Andrea
2018-03-01
In this study, two mode decomposition techniques were applied and compared to assess the flow dynamics in an orbital shaken bioreactor (OSB) of cylindrical geometry and flat bottom: proper orthogonal decomposition and dynamic mode decomposition. Particle Image Velocimetry (PIV) experiments were carried out for different operating conditions including fluid height, h, and shaker rotational speed, N. A detailed flow analysis is provided for conditions when the fluid and vessel motions are in-phase (Fr = 0.23) and out-of-phase (Fr = 0.47). PIV measurements in vertical and horizontal planes were combined to reconstruct low order models of the full 3D flow and to determine its Finite-Time Lyapunov Exponent (FTLE) within OSBs. The combined results from the mode decomposition and the FTLE fields provide a useful insight into the flow dynamics and Lagrangian coherent structures in OSBs and offer a valuable tool to optimise bioprocess design in terms of mixing and cell suspension.
X-Ray Thomson Scattering Without the Chihara Decomposition
NASA Astrophysics Data System (ADS)
Magyar, Rudolph; Baczewski, Andrew; Shulenburger, Luke; Hansen, Stephanie B.; Desjarlais, Michael P.; Sandia National Laboratories Collaboration
X-Ray Thomson Scattering is an important experimental technique used in dynamic compression experiments to measure the properties of warm dense matter. The fundamental property probed in these experiments is the electronic dynamic structure factor that is typically modeled using an empirical three-term decomposition (Chihara, J. Phys. F, 1987). One of the crucial assumptions of this decomposition is that the system's electrons can be either classified as bound to ions or free. This decomposition may not be accurate for materials in the warm dense regime. We present unambiguous first principles calculations of the dynamic structure factor independent of the Chihara decomposition that can be used to benchmark these assumptions. Results are generated using a finite-temperature real-time time-dependent density functional theory applied for the first time in these conditions. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Security Administration under contract DE-AC04-94AL85000.
Characterization of agricultural land using singular value decomposition
NASA Astrophysics Data System (ADS)
Herries, Graham M.; Danaher, Sean; Selige, Thomas
1995-11-01
A method is defined and tested for the characterization of agricultural land from multi-spectral imagery, based on singular value decomposition (SVD) and key vector analysis. The SVD technique, which bears a close resemblance to multivariate statistical techniques, has previously been successfully applied to problems of signal extraction for marine data and forestry species classification. In this study the SVD technique is used as a classifier for agricultural regions, using airborne Daedalus ATM data with 1 m resolution. The specific region chosen is an experimental research farm in Bavaria, Germany. This farm has a large number of crops within a very small region and hence is not amenable to existing techniques. There are a number of other significant factors that render existing techniques, such as the maximum likelihood algorithm, less suitable for this area. These include very dynamic terrain and tessellated-pattern soil differences, which together cause large variations in the growth characteristics of the crops. The SVD technique is applied to this data set using a multi-stage classification approach, removing unwanted land-cover classes one step at a time. Typical classification accuracies for SVD are of the order of 85-100%. Preliminary results indicate that it is a fast and efficient classifier with the ability to differentiate between crop types such as wheat, rye, potatoes and clover. The results of characterizing three sub-classes of winter wheat are also shown.
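A hedged sketch of an SVD subspace classifier in this spirit (the paper's exact key-vector algorithm is not reproduced; the training data are illustrative): each class gets a low-rank basis from the SVD of its training spectra, and a pixel is assigned to the class whose subspace reconstructs it best.

```python
import numpy as np

def class_basis(training_spectra, rank=3):
    """Left singular vectors spanning a class subspace.

    training_spectra: (n_bands, n_samples) matrix of training pixels.
    """
    u, _, _ = np.linalg.svd(training_spectra, full_matrices=False)
    return u[:, :rank]

def classify(pixel, bases):
    """Assign the pixel to the class whose subspace reconstructs it best."""
    residuals = [np.linalg.norm(pixel - b @ (b.T @ pixel)) for b in bases]
    return int(np.argmin(residuals))

# Illustrative: 12 spectral bands, two crop classes, 50 samples each.
rng = np.random.default_rng(4)
wheat = rng.normal(1.0, 0.1, size=(12, 50))
clover = rng.normal(0.5, 0.1, size=(12, 50))
bases = [class_basis(wheat), class_basis(clover)]
label = classify(rng.normal(1.0, 0.1, size=12), bases)  # expect class 0
```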
Wang, Deyun; Wei, Shuai; Luo, Hongyuan; Yue, Chenqiang; Grunder, Olivier
2017-02-15
The randomness, non-stationarity and irregularity of air quality index (AQI) series make AQI forecasting difficult. To enhance forecast accuracy, a novel hybrid forecasting model combining a two-phase decomposition technique and an extreme learning machine (ELM) optimized by the differential evolution (DE) algorithm is developed for AQI forecasting in this paper. In phase I, complementary ensemble empirical mode decomposition (CEEMD) is utilized to decompose the AQI series into a set of intrinsic mode functions (IMFs) with different frequencies; in phase II, in order to further handle the high-frequency IMFs, which would otherwise increase the forecast difficulty, variational mode decomposition (VMD) is employed to decompose the high-frequency IMFs into a number of variational modes (VMs). Then, the ELM model optimized by the DE algorithm is applied to forecast all the IMFs and VMs. Finally, the forecast value of each high-frequency IMF is obtained by adding up the forecast results of all corresponding VMs, and the forecast series of AQI is obtained by aggregating the forecast results of all IMFs. To verify and validate the proposed model, two daily AQI series from July 1, 2014 to June 30, 2016, collected from Beijing and Shanghai in China, are taken as test cases for the empirical study. The experimental results show that the proposed hybrid model based on the two-phase decomposition technique is remarkably superior to all other considered models in forecast accuracy.
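A minimal NumPy sketch of the ELM forecaster at the core of this hybrid (the CEEMD, VMD and DE-optimization stages are omitted; network size and data are illustrative assumptions): hidden-layer weights are random and fixed, and only the output weights are solved by least squares.

```python
import numpy as np

class ELM:
    """Single-hidden-layer extreme learning machine (regression)."""

    def __init__(self, n_hidden=32, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, x, y):
        n_features = x.shape[1]
        # Random input weights and biases stay fixed; only beta is trained.
        self.w = self.rng.normal(size=(n_features, self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        h = np.tanh(x @ self.w + self.b)
        self.beta, *_ = np.linalg.lstsq(h, y, rcond=None)
        return self

    def predict(self, x):
        return np.tanh(x @ self.w + self.b) @ self.beta

# Illustrative one-step-ahead forecast of one decomposed component.
rng = np.random.default_rng(5)
series = np.sin(np.linspace(0, 20, 500)) + 0.1 * rng.normal(size=500)
lag = 4
x = np.stack([series[i : i + lag] for i in range(len(series) - lag)])
y = series[lag:]
model = ELM().fit(x[:400], y[:400])
forecast = model.predict(x[400:])
```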
Analysis and Prediction of Sea Ice Evolution using Koopman Mode Decomposition Techniques
Koopman mode analysis was newly applied to southern hemisphere sea ice concentration data. The resulting Koopman modes from analysis of both the southern and northern hemisphere sea ice concentration data show geographical regions where sea ice coverage has decreased over multiyear time scales.
NASA Astrophysics Data System (ADS)
Elbeih, Ahmed; Abd-Elghany, Mohamed; Elshenawy, Tamer
2017-03-01
The vacuum stability test (VST) is mainly used to study the compatibility and stability of energetic materials. In this work, VST was investigated as a means to study the thermal decomposition kinetics of four cyclic nitramines, 1,3,5-trinitro-1,3,5-triazinane (RDX), 1,3,5,7-tetranitro-1,3,5,7-tetrazocane (HMX), cis-1,3,4,6-tetranitrooctahydroimidazo-[4,5-d]imidazole (BCHMX) and 2,4,6,8,10,12-hexanitro-2,4,6,8,10,12-hexaazaisowurtzitane (ε-HNIW, CL-20), each bonded by a polyurethane matrix based on hydroxyl-terminated polybutadiene (HTPB). Model-fitting and model-free (isoconversional) methods were applied to determine the decomposition kinetics from the VST results. For comparison, the decomposition kinetics were determined isothermally by the ignition delay technique and non-isothermally by the Advanced Kinetics and Technology Solution (AKTS) software. The activation energies for thermolysis obtained by the isoconversional method based on the VST technique for RDX/HTPB, HMX/HTPB, BCHMX/HTPB and CL20/HTPB were 157.1, 203.1, 190.0 and 176.8 kJ mol-1, respectively. The model-fitting method showed that the mechanism of thermal decomposition of BCHMX/HTPB is controlled by a nucleation model, while all the other studied PBXs are controlled by diffusion models. A linear relationship between the ignition temperatures and the activation energies was observed. BCHMX/HTPB is an interesting new PBX still at the research stage.
NASA Astrophysics Data System (ADS)
Haris, A.; Pradana, G. S.; Riyanto, A.
2017-07-01
The tectonic setting of the Bird's Head region of Papua Island is an important model for petroleum systems in the eastern part of Indonesia. Current exploration started with the oil seepage findings in the Bintuni and Salawati Basins. Biogenic gas in shallow layers has turned out to be an interesting issue in hydrocarbon exploration: the appearance of hydrocarbon accumulations in a shallow layer with a dry gas type makes biogenic gas appealing for further research. This paper aims at delineating the sweet-spot hydrocarbon potential in a shallow layer by applying the spectral decomposition technique. Spectral decomposition decomposes the seismic signal into individual frequencies, which have significant geological meaning. One spectral decomposition method is the continuous wavelet transform (CWT), which transforms the seismic signal into time and frequency simultaneously, making time-frequency map analysis easier. When time resolution increases, frequency resolution decreases, and vice versa. In this study, we perform a low-frequency shadow-zone analysis in which the amplitude anomaly at a low frequency of 15 Hz was observed and compared to the amplitude at mid (20 Hz) and high (30 Hz) frequencies. The amplitude anomaly observed at low frequency disappears at high frequency. Spectral decomposition using the CWT algorithm has been successfully applied to delineate the sweet-spot zone.
Integrated structure/control design - Present methodology and future opportunities
NASA Technical Reports Server (NTRS)
Weisshaar, T. A.; Newsom, J. R.; Zeiler, T. A.; Gilbert, M. G.
1986-01-01
Attention is given to current methodology applied to the integration of the optimal design process for structures and controls. Multilevel linear decomposition techniques proved to be most effective in organizing the computational efforts necessary for ISCD (integrated structures and control design) tasks. With the development of large orbiting space structures and actively controlled, high performance aircraft, there will be more situations in which this concept can be applied.
Signal processing techniques were applied to high-resolution time series data obtained from conductivity loggers placed upstream and downstream of a wastewater treatment facility along a river. Data was collected over 14-60 days, and several seasons. The power spectral densit...
Numeric Modified Adomian Decomposition Method for Power System Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dimitrovski, Aleksandar D; Simunovic, Srdjan; Pannala, Sreekanth
This paper investigates the applicability of the numeric Wazwaz-El Sayed modified Adomian Decomposition Method (WES-ADM) for time-domain simulation of power systems. WES-ADM is a numerical approximation method, based on a modified Adomian decomposition (ADM) technique, for the solution of nonlinear ordinary differential equations. The nonlinear terms in the differential equations are approximated using Adomian polynomials. In this paper WES-ADM is applied to time-domain simulations of multimachine power systems. The WECC 3-generator, 9-bus system and the IEEE 10-generator, 39-bus system have been used to test the applicability of the approach. Several fault scenarios have been tested. It has been found that the proposed approach is faster than the trapezoidal method with comparable accuracy.
Chao, T.T.; Sanzolone, R.F.
1992-01-01
Sample decomposition is a fundamental and integral step in the procedure of geochemical analysis. It is often the limiting factor for sample throughput, especially with the recent application of fast, modern multi-element measurement instrumentation. The complexity of geological materials makes it necessary to choose a sample decomposition technique that is compatible with the specific objective of the analysis. When selecting a decomposition technique, consideration should be given to the chemical and mineralogical characteristics of the sample, the elements to be determined, precision and accuracy requirements, sample throughput, the technical capability of personnel, and time constraints. This paper addresses these concerns and discusses the attributes and limitations of many techniques of sample decomposition, along with examples of their application to geochemical analysis. The chemical properties of reagents as decomposition agents are also reviewed. The section on acid dissolution techniques addresses the various inorganic acids that are used individually or in combination in both open and closed systems. Fluxes used in sample fusion are discussed. The promising microwave-oven technology and the emerging field of automation are also examined. A section on applications highlights the use of decomposition techniques for the determination of Au, platinum group elements (PGEs), Hg, U, hydride-forming elements, rare earth elements (REEs), and multiple elements in geological materials. Partial dissolution techniques used for geochemical exploration, which have been treated in detail elsewhere, are not discussed here; nor are fire assaying for noble metals or decomposition techniques for X-ray fluorescence or nuclear methods. © 1992.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Futatani, S.; Bos, W.J.T.; Del-Castillo-Negrete, Diego B
2011-01-01
We assess two techniques for extracting coherent vortices out of turbulent flows: wavelet-based coherent vorticity extraction (CVE) and proper orthogonal decomposition (POD). The former decomposes the flow field into an orthogonal wavelet representation; subsequent thresholding of the coefficients allows one to split the flow into organized coherent vortices with non-Gaussian statistics and an incoherent random part that is structureless. POD is based on the singular value decomposition and decomposes the flow into basis functions that are optimal with respect to the retained energy for the ensemble average. Both techniques are applied to direct numerical simulation data of two-dimensional drift-wave turbulence governed by the Hasegawa-Wakatani equation, considering two limit cases: the quasi-hydrodynamic and the quasi-adiabatic regimes. The results are compared in terms of compression rate, retained energy, retained enstrophy and retained radial flux, together with the enstrophy spectrum and higher-order statistics.
DeMAID/GA an Enhanced Design Manager's Aid for Intelligent Decomposition
NASA Technical Reports Server (NTRS)
Rogers, J. L.
1996-01-01
Many companies are looking for new tools and techniques to aid a design manager in making decisions that can reduce the time and cost of a design cycle. One such tool is the Design Manager's Aid for Intelligent Decomposition (DeMAID). Since the initial public release of DeMAID in 1989, much research has been done in the areas of decomposition, concurrent engineering, parallel processing, and process management, and many new tools and techniques have emerged. Based on these recent research and development efforts, numerous enhancements have been added to DeMAID to further aid the design manager in saving both cost and time in a design cycle. The key enhancement, a genetic algorithm (GA), will be available in the next public release, called DeMAID/GA. The GA sequences the design processes to minimize the cost and time of converging to a solution. The major enhancements in the upgrade of DeMAID to DeMAID/GA are discussed in this paper. A sample conceptual design project is used to show how these enhancements can be applied to improve the design cycle.
Signal processing techniques were applied to high-resolution time series data obtained from conductivity loggers placed upstream and downstream of an oil and gas wastewater treatment facility along a river. Data was collected over 14-60 days. The power spectral density was us...
NASA Astrophysics Data System (ADS)
Jain, Shobhit; Tiso, Paolo; Haller, George
2018-06-01
We apply two recently formulated mathematical techniques, Slow-Fast Decomposition (SFD) and Spectral Submanifold (SSM) reduction, to a von Kármán beam with geometric nonlinearities and viscoelastic damping. SFD identifies a global slow manifold in the full system which attracts solutions at rates faster than typical rates within the manifold. An SSM, the smoothest nonlinear continuation of a linear modal subspace, is then used to further reduce the beam equations within the slow manifold. This two-stage, mathematically exact procedure results in a drastic reduction of the finite-element beam model to a one-degree-of-freedom nonlinear oscillator. We also introduce the technique of spectral quotient analysis, which gives the number of modes relevant for reduction as output rather than input to the reduction process.
Wahba, Maram A; Ashour, Amira S; Napoleon, Sameh A; Abd Elnaby, Mustafa M; Guo, Yanhui
2017-12-01
Basal cell carcinoma is one of the most common malignant skin lesions. Automated lesion identification and classification using image processing techniques is highly desirable to reduce diagnosis errors. In this study, a novel technique is applied to classify skin lesion images into two classes, namely malignant basal cell carcinoma and benign nevus. A hybrid combination of bi-dimensional empirical mode decomposition and gray-level difference method features is proposed after hair removal. The combined features are then classified using a quadratic support vector machine (Q-SVM). The proposed system achieved outstanding performance of 100% accuracy, sensitivity and specificity compared to other support vector machine procedures as well as to different extracted features. Basal cell carcinoma is effectively classified using Q-SVM with the proposed combined features.
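A hedged sketch of the classification stage using scikit-learn (an assumption; the paper's toolchain is not stated): a quadratic SVM is a support vector classifier with a degree-2 polynomial kernel applied to the combined feature vectors, here replaced by synthetic stand-ins.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Illustrative stand-ins for the combined BEMD + gray-level-difference
# features: 40-dimensional vectors for the two lesion classes.
rng = np.random.default_rng(6)
carcinoma = rng.normal(1.0, 0.5, size=(60, 40))
nevus = rng.normal(-1.0, 0.5, size=(60, 40))
features = np.vstack([carcinoma, nevus])
labels = np.array([1] * 60 + [0] * 60)

x_tr, x_te, y_tr, y_te = train_test_split(
    features, labels, test_size=0.25, random_state=0)

# Quadratic SVM = support vector classifier with a degree-2 polynomial kernel.
qsvm = SVC(kernel="poly", degree=2, C=1.0).fit(x_tr, y_tr)
print("accuracy:", accuracy_score(y_te, qsvm.predict(x_te)))
```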
Parallel pivoting combined with parallel reduction
NASA Technical Reports Server (NTRS)
Alaghband, Gita
1987-01-01
Parallel algorithms for the triangularization of large, sparse, and unsymmetric matrices are presented. The method combines parallel reduction with a new parallel pivoting technique, control over the generation of fill-ins, and a check for numerical stability, all done in parallel with the work distributed over the active processes. The parallel technique uses the compatibility relation between pivots to identify parallel pivot candidates and uses the Markowitz number of pivots to minimize fill-in. This technique is not a preordering of the sparse matrix and is applied dynamically as the decomposition proceeds.
Rank-based decompositions of morphological templates.
Sussner, P; Ritter, G X
2000-01-01
Methods for matrix decomposition have found numerous applications in image processing, in particular for the problem of template decomposition. Since existing matrix decomposition techniques are mainly concerned with the linear domain, we consider it timely to investigate matrix decomposition techniques in the nonlinear domain with applications in image processing. The mathematical basis for these investigations is the new theory of rank within minimax algebra. Thus far, only minimax decompositions of rank 1 and rank 2 matrices into outer product expansions are known to the image processing community. We derive a heuristic algorithm for the decomposition of matrices having arbitrary rank.
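A small illustration of the rank-1 case already known to the image processing community, under the assumption of grayscale morphology with additive structuring elements and periodic boundaries (not the paper's general rank-k algorithm): a separable, minimax rank-1 template t[i, j] = r[i] + c[j] lets one 2-D dilation be replaced by two cheaper 1-D dilations.

```python
import numpy as np

def dilate_1d(image, se, axis):
    """Grayscale dilation of a 2-D image by a 1-D additive structuring
    element, with periodic boundaries for brevity."""
    pad = len(se) // 2
    out = np.full(image.shape, -np.inf)
    for k, w in enumerate(se):
        out = np.maximum(out, np.roll(image, k - pad, axis=axis) + w)
    return out

def dilate_2d(image, template):
    """Grayscale dilation by a full 2-D additive template."""
    pad_i, pad_j = template.shape[0] // 2, template.shape[1] // 2
    out = np.full(image.shape, -np.inf)
    for i in range(template.shape[0]):
        for j in range(template.shape[1]):
            shifted = np.roll(image, (i - pad_i, j - pad_j), axis=(0, 1))
            out = np.maximum(out, shifted + template[i, j])
    return out

# Rank-1 (separable) 3x3 template in minimax algebra: t[i, j] = r[i] + c[j].
r = np.array([0.0, 1.0, 0.0])
c = np.array([0.5, 2.0, 0.5])
template = r[:, None] + c[None, :]

image = np.random.default_rng(7).normal(size=(32, 32))
direct = dilate_2d(image, template)
separable = dilate_1d(dilate_1d(image, c, axis=1), r, axis=0)
assert np.allclose(direct, separable)   # the rank-1 decomposition is exact
```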
ERIC Educational Resources Information Center
Barrera-Osorio, Felipe; Garcia-Moreno, Vicente; Patrinos, Harry Anthony; Porta, Emilio
2011-01-01
The Oaxaca-Blinder technique was originally used in labor economics to decompose earnings gaps and to estimate the level of discrimination. It has since been applied to other social issues, including education, where it can be used to assess how much of a gap is due to differences in characteristics (explained variation) and how much is due to…
Application of response surface techniques to helicopter rotor blade optimization procedure
NASA Technical Reports Server (NTRS)
Henderson, Joseph Lynn; Walsh, Joanne L.; Young, Katherine C.
1995-01-01
In multidisciplinary optimization problems, response surface techniques can be used to replace the complex analyses that define the objective function and/or constraints with simple functions, typically polynomials. In this work a response surface is applied to the design optimization of a helicopter rotor blade. In previous work, this problem has been formulated with a multilevel approach. Here, the response surface takes advantage of this decomposition and is used to replace the lower level, a structural optimization of the blade. Problems that were encountered and important considerations in applying the response surface are discussed. Preliminary results are also presented that illustrate the benefits of using the response surface.
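A minimal sketch of the core idea, fitting a quadratic response surface to samples of an expensive analysis and optimizing the cheap surrogate instead; the stand-in objective, bounds and sample count are illustrative assumptions, not the rotor-blade analysis itself.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from scipy.optimize import minimize

def expensive_analysis(x):
    """Stand-in for a costly structural analysis of two design variables."""
    return (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2 + 3.0

# Sample the design space and fit a quadratic polynomial surrogate.
rng = np.random.default_rng(8)
samples = rng.uniform(-2, 2, size=(30, 2))
responses = np.array([expensive_analysis(x) for x in samples])
surface = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
surface.fit(samples, responses)

# Optimize the cheap surrogate instead of the expensive analysis.
result = minimize(lambda x: surface.predict(x.reshape(1, -1))[0],
                  x0=np.zeros(2), bounds=[(-2, 2)] * 2)
print("surrogate optimum near", result.x)   # expect approx (1.0, -0.5)
```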
NASA Astrophysics Data System (ADS)
Son, Youn-Suk; Kim, Ki-Joon; Kim, Ji-Yong; Kim, Jo-Chun
2010-12-01
We applied a hybrid technique, combining the catalyst technique with existing electron beam (EB) technology, to assess the decomposition characteristics of ethylbenzene and toluene. The removal efficiency of ethylbenzene in the EB-catalyst hybrid turned out to be 30% greater than that of EB-only treatment. We concluded that ethylbenzene was decomposed more easily than toluene by EB irradiation. We compared the independent effects of the EB-catalyst hybrid and catalyst-only methods, and observed that the EB-catalyst hybrid demonstrated approximately 6% improvement for decomposing toluene and 20% improvement for decomposing ethylbenzene. The G-values for ethylbenzene increased with initial concentration and reactor type: for example, the G-values by reactor type at 2800 ppmC were 7.5-10.9 (EB-only) and 12.9-25.7 (EB-catalyst hybrid). We also observed a significant decrease in by-products, alongside the improved removal efficiencies associated with the EB-catalyst hybrid technique.
NASA Astrophysics Data System (ADS)
Murat, M.
2017-12-01
Color-blended frequency decomposition is a seismic attribute that can be used to draw out and visualize geomorphological features, enabling a better understanding of reservoir architecture and connectivity for both exploration and field development planning. Color-blended frequency decomposition was applied to seismic data in several areas of interest in the Deepwater Gulf of Mexico. The objective was stratigraphic characterization to better define reservoir extent, highlight depositional features, identify thicker reservoir zones and examine potential connectivity issues due to stratigraphic variability. Frequency decomposition is a technique for analyzing changes in seismic frequency caused by changes in reservoir thickness, lithology and fluid content. This technique decomposes or separates the seismic frequency spectrum into discrete bands of frequency-limited seismic data using digital filters. The workflow consists of frequency (spectral) decomposition, RGB color blending of three frequency slices, and horizon or stratal slicing of the color-blended frequency data for interpretation. Patterns were visualized and identified in the data that were not obvious on standard stacked seismic sections. These seismic patterns were interpreted and compared to known geomorphological patterns and their environment of deposition. From this we inferred the distribution of potential reservoir sand versus non-reservoir shale, and even finer-scale details such as the overall direction of sediment transport and relative thickness. In exploratory areas, stratigraphic characterization from spectral decomposition is used for prospect risking and well planning. Where well control exists, we can validate the seismic observations and our interpretation and use the stratigraphic/geomorphological information to better inform decisions on the need for and placement of development wells.
Decomposability and scalability in space-based observatory scheduling
NASA Technical Reports Server (NTRS)
Muscettola, Nicola; Smith, Stephen F.
1992-01-01
In this paper, we discuss issues of problem and model decomposition within the HSTS scheduling framework. HSTS was developed and originally applied in the context of the Hubble Space Telescope (HST) scheduling problem, motivated by the limitations of the current solution and, more generally, the insufficiency of classical planning and scheduling approaches in this problem context. We first summarize the salient architectural characteristics of HSTS and their relationship to previous scheduling and AI planning research. Then, we describe some key problem decomposition techniques supported by HSTS and underlying our integrated planning and scheduling approach, and we discuss the leverage they provide in solving space-based observatory scheduling problems.
Low-dimensional and Data Fusion Techniques Applied to a Rectangular Supersonic Multi-stream Jet
NASA Astrophysics Data System (ADS)
Berry, Matthew; Stack, Cory; Magstadt, Andrew; Ali, Mohd; Gaitonde, Datta; Glauser, Mark
2017-11-01
Low-dimensional models of experimental and simulation data for a complex supersonic jet were fused to reconstruct time-dependent proper orthogonal decomposition (POD) coefficients. The jet consists of a multi-stream rectangular single expansion ramp nozzle, containing a core stream operating at M_j,1 = 1.6, and bypass stream at M_j,3 = 1.0 with an underlying deck. POD was applied to schlieren and PIV data to acquire the spatial basis functions. These eigenfunctions were projected onto their corresponding time-dependent large eddy simulation (LES) fields to reconstruct the temporal POD coefficients. This reconstruction was able to resolve spectral peaks that were previously aliased due to the slower sampling rates of the experiments. Additionally, dynamic mode decomposition (DMD) was applied to the experimental and LES datasets, and the spatio-temporal characteristics were compared to POD. The authors would like to acknowledge AFOSR, program manager Dr. Doug Smith, for funding this research, Grant No. FA9550-15-1-0435.
Yongqiang Liu
2003-01-01
The relations between monthly-seasonal soil moisture and precipitation variability are investigated by identifying the coupled patterns of the two hydrological fields using singular value decomposition (SVD). SVD is a technique of principal component analysis similar to empirical orthogonal functions (EOF). However, it is applied to two variables simultaneously and is...
NASA Astrophysics Data System (ADS)
Pandey, Rishi Kumar; Mishra, Hradyesh Kumar
2017-11-01
In this paper, a semi-analytic numerical technique for the solution of the time-space fractional telegraph equation is applied. This numerical technique is based on coupling the homotopy analysis method and the Sumudu transform. It shows a clear advantage over mesh methods like the finite difference method, and also over polynomial methods such as the perturbation and Adomian decomposition methods. It easily transforms the complex fractional-order derivatives into the simple time domain and interprets the results with the same meaning.
Characterising laser beams with liquid crystal displays
NASA Astrophysics Data System (ADS)
Dudley, Angela; Naidoo, Darryl; Forbes, Andrew
2016-02-01
We show how one can determine the various properties of light, from the modal content of laser beams to decoding the information stored in optical fields carrying orbital angular momentum, by performing a modal decomposition. Although the modal decomposition of light has been known for a long time, applied mostly to pattern recognition, we illustrate how this technique can be implemented with the use of liquid-crystal displays. We show experimentally how liquid crystal displays can be used to infer the intensity, phase, wavefront, Poynting vector, and orbital angular momentum density of unknown optical fields. This measurement technique makes use of a single spatial light modulator (liquid crystal display), a Fourier transforming lens and detector (CCD or photo-diode). Such a diagnostic tool is extremely relevant to the real-time analysis of solid-state and fibre laser systems as well as mode division multiplexing as an emerging technology in optical communication.
NASA Astrophysics Data System (ADS)
Lahmiri, Salim
2016-08-01
The main purpose of this work is to explore the usefulness of fractal descriptors estimated in multi-resolution domains to characterize biomedical digital image texture. In this regard, three multi-resolution techniques are considered: the well-known discrete wavelet transform (DWT), the empirical mode decomposition (EMD), and the newly introduced variational mode decomposition (VMD). The original image is decomposed by the DWT, EMD, and VMD into different scales. Then, Fourier-spectrum-based fractal descriptors are estimated at specific scales and directions to characterize the image. A support vector machine (SVM) was used to perform supervised classification. The empirical study was applied to the problem of distinguishing between normal brain magnetic resonance images (MRI) and abnormal ones affected by Alzheimer's disease (AD). Our results demonstrate that fractal descriptors estimated in the VMD domain outperform those estimated in the DWT and EMD domains, and also those directly estimated from the original image.
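A hedged sketch of one common Fourier-spectrum fractal descriptor (the paper's exact estimator is not specified): for fractal-like textures the radially averaged power spectrum follows a power law P(f) ~ f^(-beta), and the fitted slope serves as the descriptor fed to a classifier. The convention D = (8 - beta)/2 used below is one of several in the literature.

```python
import numpy as np

def spectral_slope(image):
    """Slope beta of the radially averaged power spectrum P(f) ~ f**(-beta)."""
    n = image.shape[0]
    f = np.fft.fftshift(np.fft.fft2(image))
    power = np.abs(f) ** 2
    y, x = np.indices(power.shape)
    radius = np.hypot(x - n // 2, y - n // 2).astype(int)
    # Radial average of the power spectrum over rings of constant frequency.
    radial = (np.bincount(radius.ravel(), power.ravel())
              / np.bincount(radius.ravel()))
    freqs = np.arange(1, n // 2)
    beta, _ = np.polyfit(np.log(freqs), np.log(radial[1 : n // 2]), 1)
    return -beta

texture = np.random.default_rng(9).normal(size=(128, 128))
beta = spectral_slope(texture)     # ~0 for white noise, larger when smoother
fractal_dim = (8.0 - beta) / 2.0   # descriptor fed to the SVM classifier
```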
Automatic single-image-based rain streaks removal via image decomposition.
Kang, Li-Wei; Lin, Chia-Wen; Fu, Yu-Hsiang
2012-04-01
Rain removal from a video is a challenging problem and has been recently investigated extensively. Nevertheless, the problem of rain removal from a single image was rarely studied in the literature, where no temporal information among successive images can be exploited, making the problem very challenging. In this paper, we propose a single-image-based rain removal framework via properly formulating rain removal as an image decomposition problem based on morphological component analysis. Instead of directly applying a conventional image decomposition technique, the proposed method first decomposes an image into the low- and high-frequency (HF) parts using a bilateral filter. The HF part is then decomposed into a "rain component" and a "nonrain component" by performing dictionary learning and sparse coding. As a result, the rain component can be successfully removed from the image while preserving most original image details. Experimental results demonstrate the efficacy of the proposed algorithm.
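A sketch of only the first stage described above, splitting an image into low- and high-frequency parts with a bilateral filter via OpenCV; the parameter values and synthetic input are illustrative assumptions, and the dictionary-learning stage is omitted.

```python
import cv2
import numpy as np

# Illustrative input; in practice, a rain-affected photograph.
image = np.clip(np.random.default_rng(10).normal(
    128, 40, size=(256, 256)), 0, 255).astype(np.uint8)

# Edge-preserving smoothing: the bilateral filter keeps structure in the
# low-frequency part, so oriented rain streaks fall into the residual.
low_freq = cv2.bilateralFilter(image, d=9, sigmaColor=75, sigmaSpace=75)
high_freq = image.astype(np.int16) - low_freq.astype(np.int16)

# The high-frequency part would next be decomposed into rain / non-rain
# components via dictionary learning and sparse coding (not shown).
```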
Image compression using singular value decomposition
NASA Astrophysics Data System (ADS)
Swathi, H. R.; Sohini, Shah; Surbhi; Gopichand, G.
2017-11-01
We often need to transmit and store images in many applications. The smaller the image, the lower the cost associated with transmission and storage, so we often need to apply data compression techniques to reduce the storage space consumed by an image. One approach is to apply singular value decomposition (SVD) to the image matrix. In this method, the digital image is given to SVD, which refactors it into three matrices. The singular values are used to refactor the image, and at the end of this process the image is represented with a smaller set of values, reducing the storage space required. The goal is to achieve compression while preserving the important features that describe the original image. SVD can be applied to any arbitrary m × n matrix, square or rectangular, invertible or not. Compression ratio and mean square error are used as performance metrics.
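A minimal sketch of the method as described, assuming a grayscale image held in a NumPy array: keep the k largest singular values and measure the compression ratio and mean square error.

```python
import numpy as np

def svd_compress(image, k):
    """Rank-k approximation of a grayscale image via truncated SVD."""
    u, s, vt = np.linalg.svd(image, full_matrices=False)
    approx = (u[:, :k] * s[:k]) @ vt[:k]          # keep k singular values
    stored = k * (image.shape[0] + image.shape[1] + 1)
    ratio = image.size / stored                   # compression ratio
    mse = np.mean((image - approx) ** 2)          # mean square error
    return approx, ratio, mse

image = np.random.default_rng(11).uniform(0, 255, size=(256, 256))
approx, ratio, mse = svd_compress(image, k=20)
```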
Integrand reduction for two-loop scattering amplitudes through multivariate polynomial division
NASA Astrophysics Data System (ADS)
Mastrolia, Pierpaolo; Mirabella, Edoardo; Ossola, Giovanni; Peraro, Tiziano
2013-04-01
We describe the application of a novel approach for the reduction of scattering amplitudes, based on multivariate polynomial division, which we have recently presented. This technique yields the complete integrand decomposition for arbitrary amplitudes, regardless of the number of loops. It allows for the determination of the residue at any multiparticle cut, whose knowledge is a mandatory prerequisite for applying the integrand-reduction procedure. By using the division modulo Gröbner basis, we can derive a simple integrand recurrence relation that generates the multiparticle pole decomposition for integrands of arbitrary multiloop amplitudes. We apply the new reduction algorithm to the two-loop planar and nonplanar diagrams contributing to the five-point scattering amplitudes in N=4 super Yang-Mills and N=8 supergravity in four dimensions, whose numerator functions contain up to rank-two terms in the integration momenta. We determine all polynomial residues parametrizing the cuts of the corresponding topologies and subtopologies. We obtain the integral basis for the decomposition of each diagram from the polynomial form of the residues. Our approach is well suited for a seminumerical implementation, and its general mathematical properties provide an effective algorithm for the generalization of the integrand-reduction method to all orders in perturbation theory.
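A toy illustration of the underlying algebra in SymPy (the physics content, loop momenta and actual cut equations, is vastly simplified; the polynomials are illustrative): reducing a numerator modulo the Gröbner basis of the "cut" ideal yields the remainder that plays the role of the polynomial residue.

```python
import sympy as sp

x, y = sp.symbols("x y")

# Toy "cut" conditions playing the role of vanishing denominators.
cut_ideal = [x**2 + y**2 - 1, x * y]

# A toy "numerator" to be reduced at this multiple cut.
numerator = x**3 * y + x**2 * y**2 + x + 1

g = sp.groebner(cut_ideal, x, y, order="lex")
quotients, remainder = g.reduce(numerator)

# remainder parametrizes the residue at the cut; the identity below is
# the multivariate polynomial division: numerator = sum(q_i * g_i) + r.
check = sp.expand(sum(q * b for q, b in zip(quotients, g.exprs))
                  + remainder - numerator)
assert check == 0
```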
Coating for components requiring hydrogen peroxide compatibility
NASA Technical Reports Server (NTRS)
Yousefiani, Ali (Inventor)
2010-01-01
The present invention provides a heretofore-unknown use for zirconium nitride as a hydrogen peroxide compatible protective coating that was discovered to be useful to protect components that catalyze the decomposition of hydrogen peroxide or corrode when exposed to hydrogen peroxide. A zirconium nitride coating of the invention may be applied to a variety of substrates (e.g., metals) using art-recognized techniques, such as plasma vapor deposition. The present invention further provides components and articles of manufacture having hydrogen peroxide compatibility, particularly components for use in aerospace and industrial manufacturing applications. The zirconium nitride barrier coating of the invention provides protection from corrosion by reaction with hydrogen peroxide, as well as prevention of hydrogen peroxide decomposition.
Fast modal decomposition for optical fibers using digital holography.
Lyu, Meng; Lin, Zhiquan; Li, Guowei; Situ, Guohai
2017-07-26
Eigenmode decomposition of the light field at the output end of optical fibers can provide fundamental insights into the nature of electromagnetic-wave propagation through the fibers. Here we present a fast and complete modal decomposition technique for step-index optical fibers. The proposed technique employs digital holography to measure the light field at the output end of the multimode optical fiber, and utilizes the modal orthonormal property of the basis modes to calculate the modal coefficients of each mode. Optical experiments were carried out to demonstrate the proposed decomposition technique, showing that this approach is fast, accurate and cost-effective.
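A sketch of the coefficient-extraction step under the stated orthonormality, with the measured field and basis modes assumed given on a common grid:

```python
import numpy as np

def modal_coefficients(field, modes, dx, dy):
    """Project a measured complex field onto orthonormal basis modes.

    c_j = <psi_j, E>, approximated by a discrete inner product on the grid.
    """
    return np.array([np.sum(np.conj(m) * field) * dx * dy for m in modes])

# Mode powers follow as |c_j|**2, and sum_j c_j * psi_j resynthesizes the
# field as a completeness check on the decomposition.
```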
NASA Technical Reports Server (NTRS)
Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.
1989-01-01
Several techniques for static and dynamic load balancing in vision systems are presented. These techniques are novel in the sense that they capture the computational requirements of a task by examining the data when it is produced. Furthermore, they can be applied to many vision systems because many algorithms in different systems are either the same or have similar computational characteristics. These techniques are evaluated by applying them to a parallel implementation of the algorithms in a motion estimation system on a hypercube multiprocessor system. The motion estimation system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from different time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters. It is shown that the performance gains when these data decomposition and load balancing techniques are used are significant and that the overhead of using them is minimal.
NASA Astrophysics Data System (ADS)
Forootan, Ehsan; Kusche, Jürgen
2016-04-01
Geodetic/geophysical observations, such as time series of global terrestrial water storage change or of sea level and temperature change, represent samples of physical processes and therefore contain information about complex physical interactions with many inherent time scales. Extracting relevant information from these samples, for example quantifying the seasonality of a physical process or its variability due to large-scale ocean-atmosphere interactions, is not possible with simple time series approaches. In the last decades, decomposition techniques have found increasing interest for extracting patterns from geophysical observations. Traditionally, principal component analysis (PCA) and more recently independent component analysis (ICA) are common techniques to extract statistical orthogonal (uncorrelated) and independent modes that represent the maximum variance of observations, respectively. PCA and ICA can be classified as stationary signal decomposition techniques since they are based on decomposing the auto-covariance matrix or on diagonalizing higher (than two)-order statistical tensors from centered time series. However, the stationarity assumption is obviously not justifiable for many geophysical and climate variables even after removing cyclic components, e.g., the seasonal cycles. In this paper, we present a new decomposition method, the complex independent component analysis (CICA, Forootan, PhD-2014), which can be applied to extract non-stationary (changing in space and time) patterns from geophysical time series. Here, CICA is derived as an extension of real-valued ICA (Forootan and Kusche, JoG-2012), where we (i) define a new complex data set using a Hilbert transformation, in which the complex time series contain the observed values in their real part and the temporal rate of variability in their imaginary part; (ii) apply an ICA algorithm based on diagonalization of fourth-order cumulants to decompose the new complex data set in (i); and (iii) recognize dominant non-stationary patterns as independent complex patterns that can be used to represent the space and time amplitude and phase propagations. We present the results of CICA on simulated and real cases, e.g., for quantifying the impact of large-scale ocean-atmosphere interaction on global mass changes. Forootan (PhD-2014) Statistical signal decomposition techniques for analyzing time-variable satellite gravimetry data, PhD Thesis, University of Bonn, http://hss.ulb.uni-bonn.de/2014/3766/3766.htm Forootan and Kusche (JoG-2012) Separation of global time-variable gravity signals into maximally independent components, Journal of Geodesy 86 (7), 477-497, doi: 10.1007/s00190-011-0532-5
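Step (i) can be sketched with scipy's Hilbert transform; the data here is a random stand-in, and the complex fourth-order-cumulant ICA of steps (ii)-(iii) is not shown:

```python
import numpy as np
from scipy.signal import hilbert

X = np.random.randn(180, 500)        # stand-in: 180 epochs x 500 grid points

# Analytic signal along the time axis: real part is the centered observation,
# imaginary part its Hilbert transform (the temporal rate of variability).
Xc = hilbert(X - X.mean(axis=0), axis=0)
```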
How to Compute the Partial Fraction Decomposition without Really Trying
ERIC Educational Resources Information Center
Brazier, Richard; Boman, Eugene
2007-01-01
For various reasons there has been a recent trend in college and high school calculus courses to de-emphasize teaching the Partial Fraction Decomposition (PFD) as an integration technique. This is regrettable because the Partial Fraction Decomposition is considerably more than an integration technique. It is, in fact, a general purpose tool which…
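For reference, the decomposition itself is one line in a computer algebra system; the rational function below is an arbitrary example:

```python
from sympy import symbols, apart

x = symbols("x")

# PFD of (5x + 1)/((x - 1)(x + 2)).
print(apart((5*x + 1) / (x**2 + x - 2)))   # -> 2/(x - 1) + 3/(x + 2)
```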
Analysis and Prediction of Sea Ice Evolution using Koopman Mode Decomposition Techniques
2018-04-30
The program goal is analysis of sea ice dynamical behavior using Koopman Mode Decomposition (KMD) techniques. The work in the program's first month consisted of improvements to data processing code and inclusion of additional arctic sea ice
Le, Huy Q.; Molloi, Sabee
2011-01-01
Purpose: Energy resolving detectors provide more than one spectral measurement in one image acquisition. The purpose of this study is to investigate, with simulation, the ability to decompose four materials using energy discriminating detectors and least squares minimization techniques. Methods: Three least squares parameter estimation decomposition techniques were investigated for four-material breast imaging tasks in the image domain. The first technique treats the voxel as if it consisted of fractions of all the materials. The second method assumes that a voxel primarily contains one material and divides the decomposition process into segmentation and quantification tasks. The third is similar to the second method but a calibration was used. The simulated computed tomography (CT) system consisted of an 80 kVp spectrum and a CdZnTe (CZT) detector that could resolve the x-ray spectrum into five energy bins. A postmortem breast specimen was imaged with flat panel CT to provide a model for the digital phantoms. Hydroxyapatite (HA) (50, 150, 250, 350, 450, and 550 mg/ml) and iodine (4, 12, 20, 28, 36, and 44 mg/ml) contrast elements were embedded into the glandular region of the phantoms. Calibration phantoms consisted of a 30/70 glandular-to-adipose tissue ratio with embedded HA (100, 200, 300, 400, and 500 mg/ml) and iodine (5, 15, 25, 35, and 45 mg/ml). The x-ray transport process was simulated where the Beer–Lambert law, Poisson process, and CZT absorption efficiency were applied. Qualitative and quantitative evaluations of the decomposition techniques were performed and compared. The effect of breast size was also investigated. Results: The first technique decomposed iodine adequately but failed for other materials. The second method separated the materials but was unable to quantify the materials. With the addition of a calibration, the third technique provided good separation and quantification of hydroxyapatite, iodine, glandular, and adipose tissues. Quantification with this technique was accurate with errors of 9.83% and 6.61% for HA and iodine, respectively. Calibration at one point (one breast size) showed increased errors as the mismatch in breast diameters between calibration and measurement increased. A four-point calibration successfully decomposed breast diameters spanning the entire range from 8 to 20 cm. For a 14 cm breast, errors were reduced from 5.44% to 1.75% and from 6.17% to 3.27% with the multipoint calibration for HA and iodine, respectively. Conclusions: The results of the simulation study showed that a CT system based on CZT detectors in conjunction with a least squares minimization technique can be used to decompose four materials. The calibrated least squares parameter estimation decomposition technique performed the best, separating and accurately quantifying the concentrations of hydroxyapatite and iodine. PMID:21361193
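The first ("all fractions") technique reduces to a small linear least-squares solve per voxel; a sketch with illustrative numbers, where M is a calibrated bin-by-material response matrix:

```python
import numpy as np

# 5 energy bins x 4 materials (HA, iodine, glandular, adipose); values are
# synthetic stand-ins for a calibrated response matrix.
M = np.random.rand(5, 4)
true_fractions = np.array([0.10, 0.05, 0.50, 0.35])
y = M @ true_fractions                      # simulated voxel measurement

# Recover the material fractions of the voxel by least squares.
fractions, *_ = np.linalg.lstsq(M, y, rcond=None)
```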
NASA Astrophysics Data System (ADS)
Cicone, A.; Zhou, H.; Piersanti, M.; Materassi, M.; Spogli, L.
2017-12-01
Nonlinear and nonstationary signals are ubiquitous in real life. Their decomposition and analysis are of crucial importance in many research fields. Traditional techniques, like the Fourier and wavelet transforms, have proved to be limited in this context. In the last two decades, new kinds of nonlinear methods have been developed which are able to unravel hidden features of these kinds of signals. In this poster we present a new method, called Adaptive Local Iterative Filtering (ALIF). This technique, originally developed to study mono-dimensional signals, unlike any other algorithm proposed so far, can be easily generalized to study two or higher dimensional signals. Furthermore, unlike most of the similar methods, it does not require any a priori assumption on the signal itself, so that the technique can be applied as it is to any kind of signal. Applications of the ALIF algorithm to real-life signal analysis will be presented, such as the behavior of the water level near the coastline in the presence of a tsunami, the length-of-day signal, the pressure measured at ground level on a global grid, and radio power scintillation from GNSS signals.
NASA Astrophysics Data System (ADS)
Jaber, Abobaker M.
2014-12-01
Two nonparametric methods for prediction and modeling of financial time series signals are proposed. The proposed techniques are designed to handle non-stationarity and non-linearity and to extract meaningful signals for reliable prediction. Using the Fourier transform (FT), the methods select the significant decomposed signals to be employed for prediction. The proposed techniques were developed by coupling the Holt-Winters method with Empirical Mode Decomposition (EMD) and with its smoothed extension (SEMD). To show the performance of the proposed techniques, we analyze the daily closing price of the Kuala Lumpur stock market index.
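A sketch of the coupling, assuming the third-party PyEMD package for the decomposition and statsmodels for Holt-Winters smoothing; the price series is synthetic:

```python
import numpy as np
from PyEMD import EMD                                   # assumed package
from statsmodels.tsa.holtwinters import ExponentialSmoothing

prices = np.cumsum(np.random.randn(500)) + 100          # stand-in closing prices

# Decompose into intrinsic mode functions, forecast each component with
# Holt-Winters exponential smoothing, and sum the component forecasts.
imfs = EMD().emd(prices)
horizon = 10
forecast = sum(ExponentialSmoothing(imf).fit().forecast(horizon) for imf in imfs)
```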
Shoot litter breakdown and zinc dynamics of an aquatic plant, Schoenoplectus californicus.
Arreghini, Silvana; de Cabo, Laura; Serafini, Roberto José María; Fabrizio de Iorio, Alicia
2018-07-03
Decomposition of plant debris is an important process in determining the structure and function of aquatic ecosystems. The aims were to find a mathematical model fitting the decomposition process of Schoenoplectus californicus shoots containing different Zn concentrations, to compare the decomposition rates, and to assess metal accumulation/mobilization during decomposition. A litterbag technique was applied with shoots containing three levels of Zn: collected from an unpolluted river (RIV) and from experimental populations at low (LoZn) and high (HiZn) Zn supply. The double exponential model explained S. californicus shoot decomposition; at first, the higher initial proportion of the refractory fraction in RIV detritus determined a lower decay rate. Until 68 days, RIV and LoZn detritus behaved like a source of metal, releasing soluble/weakly bound zinc into the water; after 68 days, they became like a sink. However, HiZn detritus showed rapid release into the water during the first 8 days, changing to the sink condition up to 68 days, and then returning to the source condition up to 369 days. Knowledge of the role of detritus (sink/source) will allow defining a correct management of the vegetation used for zinc removal and provide a valuable tool for environmental remediation and rehabilitation planning.
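The double exponential litter decay model referred to is conventionally written as follows (a standard textbook form, not quoted from the paper), with a labile fraction A decaying at rate k_1 and a refractory fraction (1 - A) at the slower rate k_2:

```latex
\[
  \frac{X_t}{X_0} \;=\; A\,e^{-k_1 t} \;+\; (1 - A)\,e^{-k_2 t},
  \qquad k_1 > k_2 ,
\]
```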
Shahid, Muhammad; Xue, Xinkai; Fan, Chao; Ninham, Barry W; Pashley, Richard M
2015-06-25
An enhanced thermal decomposition of chemical compounds in aqueous solution has been achieved at reduced solution temperatures. The technique exploits hitherto unrecognized properties of a bubble column evaporator (BCE). It offers better heat transfer efficiency than conventional heat transfer equipment. This is obtained via a continuous flow of hot, dry air bubbles of optimal (1-3 mm) size. Optimal bubble size is maintained by using the bubble coalescence inhibition property of some salts. This novel method is illustrated by a study of thermal decomposition of ammonium bicarbonate (NH4HCO3) and potassium persulfate (K2S2O8) in aqueous solutions. The decomposition occurs at significantly lower temperatures than those needed in bulk solution. The process appears to work via the continuous production of hot (e.g., 150 °C) dry air bubbles, which do not heat the solution significantly but produce a transient hot surface layer around each rising bubble. This causes the thermal decomposition of the solute. The decomposition occurs due to the effective collision of the solute with the surface of the hot bubbles. The new process could, for example, be applied to the regeneration of the ammonium bicarbonate draw solution used in forward osmosis.
A Tensor-Train accelerated solver for integral equations in complex geometries
NASA Astrophysics Data System (ADS)
Corona, Eduardo; Rahimian, Abtin; Zorin, Denis
2017-04-01
We present a framework using the Quantized Tensor Train (QTT) decomposition to accurately and efficiently solve volume and boundary integral equations in three dimensions. We describe how the QTT decomposition can be used as a hierarchical compression and inversion scheme for matrices arising from the discretization of integral equations. For a broad range of problems, computational and storage costs of the inversion scheme are extremely modest, O(log N), and once the inverse is computed, it can be applied in O(N log N). We analyze the QTT ranks for hierarchically low-rank matrices and discuss their relationship to commonly used hierarchical compression techniques such as FMM and HSS. We prove that the QTT ranks are bounded for translation-invariant systems and argue that this behavior extends to non-translation-invariant volume and boundary integrals. For volume integrals, the QTT decomposition provides an efficient direct solver requiring significantly less memory compared to other fast direct solvers. We present results demonstrating the remarkable performance of the QTT-based solver when applied to both translation- and non-translation-invariant volume integrals in 3D. For boundary integral equations, we demonstrate that using a QTT decomposition to construct preconditioners for a Krylov subspace method leads to an efficient and robust solver with a small memory footprint. We test the QTT preconditioners in the iterative solution of an exterior elliptic boundary value problem (Laplace) formulated as a boundary integral equation in complex, multiply connected geometries.
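The underlying tensor-train factorization can be sketched in a few lines of numpy via sequential truncated SVDs (TT-SVD); this illustrates the decomposition itself, not the paper's QTT inversion scheme, and the tolerance is an illustrative choice:

```python
import numpy as np

def tt_svd(tensor, tol=1e-10):
    """Factor a d-way tensor into tensor-train cores by sequential SVDs."""
    shape, cores, r_prev = tensor.shape, [], 1
    mat = tensor.reshape(1, -1)
    for n in shape[:-1]:
        mat = mat.reshape(r_prev * n, -1)
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r = max(1, int(np.sum(s > tol * s[0])))       # truncation rank
        cores.append(U[:, :r].reshape(r_prev, n, r))
        mat = s[:r, None] * Vt[:r]                    # carry the remainder
        r_prev = r
    cores.append(mat.reshape(r_prev, shape[-1], 1))
    return cores

cores = tt_svd(np.random.rand(4, 4, 4, 4))            # four 3-way TT cores
```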
Enhanced algorithms for stochastic programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishna, Alamuru S.
1993-09-01
In this dissertation, we present some of the recent advances made in solving two-stage stochastic linear programming problems of large size and complexity. Decomposition and sampling are two fundamental components of techniques to solve stochastic optimization problems. We describe improvements to the current techniques in both these areas. We studied different ways of using importance sampling techniques in the context of stochastic programming by varying the choice of approximation functions used in this method. We have concluded that approximating the recourse function by a computationally inexpensive piecewise-linear function is highly efficient. This reduces the problem from finding the mean of a computationally expensive function to finding that of a computationally inexpensive function. We then implemented various variance reduction techniques to estimate the mean of the piecewise-linear function. This method achieved similar variance reductions in orders of magnitude less time than applying variance-reduction techniques directly to the given problem. In solving a stochastic linear program, the expected value problem is usually solved before the stochastic problem, both to obtain a starting solution and to speed up the algorithm by making use of the information obtained from the expected value solution. We have devised a new decomposition scheme to improve the convergence of this algorithm.
New evidence favoring multilevel decomposition and optimization
NASA Technical Reports Server (NTRS)
Padula, Sharon L.; Polignone, Debra A.
1990-01-01
The issue of the utility of multilevel decomposition and optimization remains controversial. To date, only the structural optimization community has actively developed and promoted multilevel optimization techniques. However, even this community acknowledges that multilevel optimization is ideally suited for a rather limited set of problems. It is warned that decomposition typically requires eliminating local variables by using global variables and that this in turn causes ill-conditioning of the multilevel optimization by adding equality constraints. The purpose is to suggest a new multilevel optimization technique. This technique uses behavior variables, in addition to design variables and constraints, to decompose the problem. The new technique removes the need for equality constraints, simplifies the decomposition of the design problem, simplifies the programming task, and improves the convergence speed of multilevel optimization compared to conventional optimization.
Using dynamic mode decomposition for real-time background/foreground separation in video
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kutz, Jose Nathan; Grosek, Jacob; Brunton, Steven
The technique of dynamic mode decomposition (DMD) is disclosed herein for the purpose of robustly separating video frames into background (low-rank) and foreground (sparse) components in real-time. Foreground/background separation is achieved at the computational cost of just one singular value decomposition (SVD) and one linear equation solve, thus producing results orders of magnitude faster than robust principal component analysis (RPCA). Additional techniques, including techniques for analyzing the video for multi-resolution time-scale components, and techniques for reusing computations to allow processing of streaming video in real time, are also described herein.
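A minimal exact-DMD sketch of the separation idea (one SVD plus one linear solve); the frame matrix, truncation rank, and sizes are synthetic stand-ins:

```python
import numpy as np

X = np.random.rand(4096, 100)                 # 64x64 frames as columns, 100 snapshots
X1, X2 = X[:, :-1], X[:, 1:]

U, s, Vt = np.linalg.svd(X1, full_matrices=False)
r = 10                                        # truncation rank (illustrative)
Ur, Sr_inv, Vr = U[:, :r], np.diag(1.0 / s[:r]), Vt[:r].conj().T

Atilde = Ur.conj().T @ X2 @ Vr @ Sr_inv       # low-rank evolution operator
eigvals, W = np.linalg.eig(Atilde)
Phi = X2 @ Vr @ Sr_inv @ W                    # exact DMD modes

# The mode with eigenvalue closest to 1 is the (near-static) background.
bg = np.argmin(np.abs(eigvals - 1.0))
b = np.linalg.lstsq(Phi, X[:, 0], rcond=None)[0]
background = np.outer(Phi[:, bg], b[bg] * eigvals[bg] ** np.arange(X.shape[1]))
foreground = X - background.real              # sparse residual
```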
Low-rank matrix decomposition and spatio-temporal sparse recovery for STAP radar
Sen, Satyabrata
2015-08-04
We develop space-time adaptive processing (STAP) methods by leveraging the advantages of sparse signal processing techniques in order to detect a slowly moving target. We observe that the inherent sparse characteristics of a STAP problem can be formulated as the low-rankness of the clutter covariance matrix when compared to the total adaptive degrees-of-freedom, and also as the sparse interference spectrum on the spatio-temporal domain. By exploiting these sparse properties, we propose two approaches for estimating the interference covariance matrix. In the first approach, we consider a constrained matrix rank minimization problem (RMP) to decompose the sample covariance matrix into a low-rank positive semidefinite and a diagonal matrix. The solution of the RMP is obtained by applying the trace minimization technique and the singular value decomposition with a matrix shrinkage operator. Our second approach deals with the atomic norm minimization problem to recover the clutter response-vector that has a sparse support on the spatio-temporal plane. We use convex-relaxation-based standard sparse-recovery techniques to find the solutions. With extensive numerical examples, we demonstrate the performance of the proposed STAP approaches in both ideal and practical scenarios, involving Doppler-ambiguous clutter ridges and spatial and temporal decorrelation effects. As a result, the low-rank matrix decomposition based solution requires as few secondary measurements as twice the clutter rank to attain a near-ideal STAP performance, whereas the spatio-temporal sparsity based approach needs a considerably smaller number of secondary data.
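The shrinkage step of the first approach can be sketched as soft-thresholding of singular values; the threshold tau is an assumed tuning parameter, not a value from the paper:

```python
import numpy as np

def shrink_low_rank(R, tau):
    """Split a sample covariance into low-rank + diagonal parts.

    Soft-threshold the singular values of R to obtain the low-rank (clutter)
    component; the diagonal of the residual models the noise term.
    """
    U, s, Vh = np.linalg.svd(R, full_matrices=False)
    L = (U * np.maximum(s - tau, 0.0)) @ Vh
    D = np.diag(np.diag(R - L))
    return L, D
```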
Biondi, M; Vanzi, E; De Otto, G; Banci Buonamici, F; Belmonte, G M; Mazzoni, L N; Guasti, A; Carbone, S F; Mazzei, M A; La Penna, A; Foderà, E; Guerreri, D; Maiolino, A; Volterrani, L
2016-12-01
Many studies have aimed at validating the application of Dual Energy Computed Tomography (DECT) in clinical practice where conventional CT is not exhaustive. An example is given by bone marrow oedema detection, in which DECT based on water/calcium (W/Ca) decomposition was applied. In this paper a new DECT approach, based on water/cortical bone (W/CB) decomposition, was investigated. Eight patients suffering from marrow oedema were scanned with MRI and DECT. Two-material density decomposition was performed in ROIs corresponding to normal bone marrow and oedema. These regions were drawn on DECT images using MRI information. Both W/Ca and W/CB were considered as material bases. Scatter plots of W/Ca and W/CB concentrations were made for each ROI in order to evaluate whether oedema could be distinguished from normal bone marrow. Thresholds were defined on the scatter plots in order to produce DECT images where oedema regions were highlighted through color maps. The agreement between these images and MR was scored by two expert radiologists. For all the patients, the best scores were obtained using W/CB density decomposition. In all cases, DECT color map images based on W/CB decomposition showed better agreement with MR in bone marrow oedema identification with respect to W/Ca decomposition. This result encourages further studies to evaluate whether DECT based on W/CB decomposition could be an alternative technique to MR, which would be important when short scanning duration is relevant, as in the case of elderly or trauma patients. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Multiple-component Decomposition from Millimeter Single-channel Data
NASA Astrophysics Data System (ADS)
Rodríguez-Montoya, Iván; Sánchez-Argüelles, David; Aretxaga, Itziar; Bertone, Emanuele; Chávez-Dagostino, Miguel; Hughes, David H.; Montaña, Alfredo; Wilson, Grant W.; Zeballos, Milagros
2018-03-01
We present an implementation of a blind source separation algorithm to remove foregrounds from millimeter surveys made by single-channel instruments. In order to make such a decomposition possible over single-wavelength data, we generate levels of artificial redundancy, then perform a blind decomposition, calibrate the resulting maps, and lastly measure physical information. We simulate the reduction pipeline using mock data: atmospheric fluctuations, extended astrophysical foregrounds, and point-like sources; we then apply the same methodology to the Aztronomical Thermal Emission Camera/ASTE survey of the Great Observatories Origins Deep Survey–South (GOODS-S). In both applications, our technique robustly decomposes redundant maps into their underlying components, reducing flux bias, improving signal-to-noise ratio, and minimizing information loss. In particular, GOODS-S is decomposed into four independent physical components: one of them is the already-known map of point sources, two are atmospheric and systematic foregrounds, and the fourth component is an extended emission that can be interpreted as the confusion background of faint sources.
Torres, Ana M; Lopez, Jose J; Pueo, Basilio; Cobos, Maximo
2013-04-01
Plane-wave decomposition (PWD) methods using microphone arrays have been shown to be a very useful tool within the applied acoustics community for their multiple applications in room acoustics analysis and synthesis. While many theoretical aspects of PWD have been previously addressed in the literature, the practical advantages of the PWD method to assess the acoustic behavior of real rooms have been barely explored so far. In this paper, the PWD method is employed to analyze the sound field inside a selected set of real rooms having a well-defined purpose. To this end, a circular microphone array is used to capture and process a number of impulse responses at different spatial positions, providing angle-dependent data for both direct and reflected wavefronts. The detection of reflected plane waves is performed by means of image processing techniques applied over the raw array response data and over the PWD data, showing the usefulness of image-processing-based methods for room acoustics analysis.
Restrepo-Agudelo, Sebastian; Roldan-Vasco, Sebastian; Ramirez-Arbelaez, Lina; Cadavid-Arboleda, Santiago; Perez-Giraldo, Estefania; Orozco-Duque, Andres
2017-08-01
Visual inspection is a widely used method for evaluating the surface electromyographic (sEMG) signal during deglutition, a process highly dependent on the examiner's expertise. A less subjective, automated technique is desirable to improve onset detection in swallowing-related muscles, which have a low signal-to-noise ratio. In this work, we acquired sEMG measured in infrahyoid muscles with high baseline noise from ten healthy adults during water swallowing tasks. Two methods were applied to find the combination of cutoff frequencies that achieves the most accurate onset detection: a method based on discrete wavelet decomposition, and fixed-step variations of the low and high cutoff frequencies of a digital bandpass filter. The Teager-Kaiser Energy operator, root mean square, and a simple threshold method were applied for both techniques. Results show a narrowing of the effective bandwidth relative to the parameters recommended in the literature for sEMG acquisition. Both a level-3 decomposition with mother wavelet db4 and a bandpass filter with cutoff frequencies between 130 and 180 Hz were optimal for onset detection in infrahyoid muscles. The proposed methodologies recognized the onset time with predictive power above 0.95, which is similar to previous findings in larger and more superficial limb muscles. Copyright © 2017 Elsevier Ltd. All rights reserved.
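A sketch of the wavelet variant with an illustrative threshold, using pywt for the level-3 db4 decomposition and the Teager-Kaiser Energy operator on the detail band:

```python
import numpy as np
import pywt

x = np.random.randn(2000)                  # stand-in for an infrahyoid sEMG trace

coeffs = pywt.wavedec(x, "db4", level=3)   # [cA3, cD3, cD2, cD1]
d3 = coeffs[1]                             # level-3 detail coefficients

tkeo = d3[1:-1] ** 2 - d3[:-2] * d3[2:]    # TKEO: x[n]^2 - x[n-1]*x[n+1]
rms = np.sqrt(np.mean(tkeo ** 2))
onset_idx = np.argmax(tkeo > 3 * rms)      # first sample above the threshold
```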
Li, Yu-Hua; Cheng, Su-Wen; Yuan, Chung-Shin; Lai, Tzu-Fan; Hung, Chung-Hsuang
2018-06-05
Chinese cooking fume is one of the sources of volatile organic compounds (VOCs) in the air. An innovative control technology combining photocatalytic degradation and ozone oxidation (UV/TiO2+O3) was developed to decompose VOCs in the cooking fume. A fiberglass filter (FGF) coated with TiO2 was prepared by an impregnation procedure. A continuous-flow reaction system was self-designed by combining photocatalysis with an advanced ozone oxidation technique. By passing the simulated cooking fume through the FGF, the VOC decomposition efficiency could be increased by about 10%. The decomposition efficiency of VOCs in the cooking fume increased and then decreased with the inlet VOC concentration; a maximum VOC decomposition efficiency of 64% was obtained at 100 ppm. A similar trend was observed for reaction temperature, with VOC decomposition efficiencies ranging from 64 to 68%. Moreover, inlet ozone concentration had a positive effect on the decomposition of VOCs in the cooking fume for inlet ozone ≤ 1000 ppm and leveled off for inlet ozone > 1000 ppm. A VOC decomposition efficiency of 34% was achieved solely by ozone oxidation, with or without near-UV irradiation. Maximum VOC decomposition efficiencies of 75% and 94% could be achieved by the O3+UV/TiO2 and UV/TiO2+O3 techniques, respectively. The maximum decomposition efficiency of VOCs decreased to 79% for the UV/TiO2+O3 technique when water was added to the oil fume. Comparing the chromatographic species of VOCs in the oil fume before and after decomposition by the UV/TiO2+O3 technique, we found that both TVOC and individual VOC species in the oil fume were effectively decomposed. Copyright © 2018 Elsevier Ltd. All rights reserved.
Traffic Simulations on Parallel Computers Using Domain Decomposition Techniques
DOT National Transportation Integrated Search
1995-01-01
Large scale simulations of Intelligent Transportation Systems (ITS) can only be achieved by using the computing resources offered by parallel computing architectures. Domain decomposition techniques are proposed which allow the performance of traffic...
Through-wall image enhancement using fuzzy and QR decomposition.
Riaz, Muhammad Mohsin; Ghafoor, Abdul
2014-01-01
A QR decomposition and fuzzy logic based scheme is proposed for through-wall image enhancement. QR decomposition is less complex compared to singular value decomposition. A fuzzy inference engine assigns weights to different overlapping subspaces. Quantitative measures and visual inspection are used to analyze existing and proposed techniques.
Teodoro, Douglas; Lovis, Christian
2013-01-01
Background Antibiotic resistance is a major worldwide public health concern. In clinical settings, timely antibiotic resistance information is key for care providers as it allows appropriate targeted treatment or improved empirical treatment when the specific results of the patient are not yet available. Objective To improve antibiotic resistance trend analysis algorithms by building a novel, fully data-driven forecasting method from the combination of trend extraction and machine learning models for enhanced biosurveillance systems. Methods We investigate a robust model for extraction and forecasting of antibiotic resistance trends using a decade of microbiology data. Our method consists of breaking down the resistance time series into independent oscillatory components via the empirical mode decomposition technique. The resulting waveforms describing intrinsic resistance trends serve as the input for the forecasting algorithm. The algorithm applies the delay coordinate embedding theorem together with the k-nearest neighbor framework to project mappings from past events into the future dimension and estimate the resistance levels. Results The algorithms that decompose the resistance time series and filter out high frequency components showed statistically significant performance improvements in comparison with a benchmark random walk model. We present further qualitative use-cases of antibiotic resistance trend extraction, where empirical mode decomposition was applied to highlight the specificities of the resistance trends. Conclusion The decomposition of the raw signal was found not only to yield valuable insight into the resistance evolution, but also to produce novel models of resistance forecasters with boosted prediction performance, which could be utilized as a complementary method in the analysis of antibiotic resistance trends. PMID:23637796
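The forecasting stage (delay-coordinate embedding plus k-nearest neighbors) can be sketched as follows; the embedding dimension, lag, and k are illustrative choices, and the series is synthetic:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

series = np.cumsum(np.random.randn(520))   # stand-in for one resistance mode
m, lag = 4, 1                              # embedding dimension and delay

# Delay-coordinate embedding: map each past window to the next value.
emb = np.array([series[i : i + m * lag : lag]
                for i in range(len(series) - m * lag)])
X, y = emb, series[m * lag :]

model = KNeighborsRegressor(n_neighbors=5).fit(X[:-1], y[:-1])
prediction = model.predict(X[-1:])         # one-step-ahead estimate
```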
Double-Resonance Facilitated Decomposition of Emission Spectra
NASA Astrophysics Data System (ADS)
Kato, Ryota; Ishikawa, Haruki
2016-06-01
Emission spectra provide us with rich information about excited-state processes such as proton transfer, charge transfer, and so on. In cases where more than one excited state is involved, emission spectra from different excited states sometimes overlap, and a decomposition of the overlapped spectra is desired. One method to perform such a decomposition is the time-resolved fluorescence technique, which uses differences in the time evolution of the components involved. However, in the gas phase, the sample concentration is frequently too small to carry out this method. On the other hand, the double-resonance technique is a very powerful tool to discriminate or identify a common species in gas-phase spectra. Thus, in the present study, we applied the double-resonance technique to resolve overlapped emission spectra. When transient IR absorption spectra of the excited state are available, we can label the population of a certain species by IR excitation with a proper selection of the IR wavenumbers. We can then obtain the emission spectra of the labeled species by subtracting the emission spectra with IR labeling from those without IR. In the present study, we chose the charge-transfer emission spectra of cyanophenyldisilane (CPDS) as a test system. One of us reported that two charge-transfer (CT) states are involved in the intramolecular charge-transfer (ICT) process of the CPDS-water cluster and recorded the transient IR spectra. As expected, we have succeeded in resolving the CT emission spectra of the CPDS-water cluster by the double-resonance facilitated decomposition technique. In the present paper, we will report the details of the experimental scheme and the results of the decomposition of the emission spectra. H. Ishikawa, et al., Phys. Chem. Chem. Phys., 9, 117 (2007).
Daily water level forecasting using wavelet decomposition and artificial intelligence techniques
NASA Astrophysics Data System (ADS)
Seo, Youngmin; Kim, Sungwon; Kisi, Ozgur; Singh, Vijay P.
2015-01-01
Reliable water level forecasting for reservoir inflow is essential for reservoir operation. The objective of this paper is to develop and apply two hybrid models for daily water level forecasting and investigate their accuracy. These two hybrid models are the wavelet-based artificial neural network (WANN) and the wavelet-based adaptive neuro-fuzzy inference system (WANFIS). Wavelet decomposition is employed to decompose an input time series into approximation and detail components. The decomposed time series are used as inputs to artificial neural networks (ANN) and an adaptive neuro-fuzzy inference system (ANFIS) for the WANN and WANFIS models, respectively. Based on statistical performance indexes, the WANN and WANFIS models are found to produce better efficiency than the ANN and ANFIS models. WANFIS7-sym10 yields the best performance among all models. It is found that wavelet decomposition improves the accuracy of ANN and ANFIS. This study evaluates the accuracy of the WANN and WANFIS models for different mother wavelets, including Daubechies, Symmlet and Coiflet wavelets. It is found that model performance is dependent on input sets and mother wavelets, and that wavelet decomposition using the mother wavelet db10 can further improve the efficiency of the ANN and ANFIS models. Results obtained from this study indicate that the conjunction of wavelet decomposition and artificial intelligence models can be a useful tool for accurately forecasting daily water levels and can yield better efficiency than conventional forecasting models.
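A minimal sketch of the WANN idea with pywt and scikit-learn; the wavelet, level, network size, and train/test split are illustrative (and transforming the full series at once, as here, is a simplification that leaks future information at the boundaries):

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

levels = np.cumsum(np.random.randn(600))          # stand-in daily water levels

# Decompose into one approximation and three detail subseries of full length.
coeffs = pywt.wavedec(levels, "db10", level=3)
recs = [pywt.upcoef("a", coeffs[0], "db10", level=3, take=len(levels))]
recs += [pywt.upcoef("d", c, "db10", level=3 - i, take=len(levels))
         for i, c in enumerate(coeffs[1:])]

X = np.column_stack(recs)[:-1]                    # subseries values at day t
y = levels[1:]                                    # level at day t+1

ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000).fit(X[:-100], y[:-100])
r2 = ann.score(X[-100:], y[-100:])                # hold-out R^2
```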
Adserias-Garriga, Joe; Hernández, Marta; Quijada, Narciso M; Rodríguez Lázaro, David; Steadman, Dawnie; Garcia-Gil, Jesús
2017-09-01
Understanding human decomposition is critical for its use in postmortem interval (PMI) estimation, having a significant impact on forensic investigations. In recognition of the need to establish the scientific basis for PMI estimation, several studies on decomposition have been carried out in recent years. The aims of the present study were: (i) to identify soil microbiota communities involved in human decomposition through high-throughput sequencing (HTS) of DNA sequences from the different bacteria, (ii) to monitor quantitatively and qualitatively the decay of such signature species, and (iii) to describe successional changes in bacterial populations from the early putrefaction state until skeletonization. Three individuals donated to the University of Tennessee FAC were studied. Soil samples around the body were taken from the placement of the donor until the advanced decay/dry remains stage. Bacterial DNA extracts were obtained from the samples, HTS techniques were applied, and bioinformatic data analysis was performed. The three cadavers showed similar overall successional changes. At the beginning of the decomposition process the soil microbiome consisted of diverse indigenous soil bacterial communities. As decomposition advanced, Firmicutes community abundance increased in the soil during the bloat stage. The growth curve of Firmicutes from human remains can be used to estimate time since death under Tennessee summer conditions. Copyright © 2017 Elsevier B.V. All rights reserved.
Evaluating the performance of distributed approaches for modal identification
NASA Astrophysics Data System (ADS)
Krishnan, Sriram S.; Sun, Zhuoxiong; Irfanoglu, Ayhan; Dyke, Shirley J.; Yan, Guirong
2011-04-01
In this paper, two modal identification approaches appropriate for use in a distributed computing environment are applied to a full-scale, complex structure. The natural excitation technique (NExT) used in conjunction with a condensed eigensystem realization algorithm (ERA), and the frequency domain decomposition with peak-picking (FDD-PP), are both applied to sensor data acquired from a 57.5-ft, 10-bay highway sign truss structure. Monte-Carlo simulations are performed on a numerical example to investigate the statistical properties and noise sensitivity of the two distributed algorithms. Experimental results are provided and discussed.
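FDD-PP reduces to an SVD of the cross-spectral density matrix at each frequency line; a sketch with synthetic multichannel data, where the window length and the crude three-peak pick are illustrative choices:

```python
import numpy as np
from scipy.signal import csd

fs, n_ch = 256, 4
acc = np.random.randn(n_ch, 8192)          # stand-in for accelerometer channels

f, _ = csd(acc[0], acc[0], fs=fs, nperseg=1024)
G = np.zeros((len(f), n_ch, n_ch), dtype=complex)
for i in range(n_ch):
    for j in range(n_ch):
        G[:, i, j] = csd(acc[i], acc[j], fs=fs, nperseg=1024)[1]

# First singular value of G(f); its peaks indicate natural frequencies.
s1 = np.linalg.svd(G, compute_uv=False)[:, 0]
picked = f[np.argsort(s1)[-3:]]            # crude pick of the 3 largest values
```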
NASA Astrophysics Data System (ADS)
Cicone, Antonio; Zhou, Haomin; Piersanti, Mirko; Materassi, Massimo; Spogli, Luca
2017-04-01
Nonlinear and nonstationary signals are ubiquitous in real life. Their decomposition and analysis are of crucial importance in many research fields. Traditional techniques, like the Fourier and wavelet transforms, have proved to be limited in this context. In the last two decades, new kinds of nonlinear methods have been developed which are able to unravel hidden features of these kinds of signals. In this talk we will review the state of the art and present a new method, called Adaptive Local Iterative Filtering (ALIF). This method, developed originally to study mono-dimensional signals, unlike any other technique proposed so far, can be easily generalized to study two or higher dimensional signals. Furthermore, unlike most of the similar methods, it does not require any a priori assumption on the signal itself, so that the method can be applied as it is to any kind of signal. Applications of the ALIF algorithm to real-life signal analysis will be presented, such as the behavior of the water level near the coastline in the presence of a Tsunami, the length-of-day signal, the temperature and pressure measured at ground level on a global grid, and the radio power scintillation from GNSS signals.
Pseudo-fault signal assisted EMD for fault detection and isolation in rotating machines
NASA Astrophysics Data System (ADS)
Singh, Dheeraj Sharan; Zhao, Qing
2016-12-01
This paper presents a novel data-driven technique for the detection and isolation of faults that generate impacts in rotating equipment. The technique is built upon the principles of empirical mode decomposition (EMD), envelope analysis, and a pseudo-fault signal for fault separation. First, the most dominant intrinsic mode function (IMF), which contains all the necessary information about the faults, is identified using EMD of the raw signal. The envelope of this IMF is often modulated by multiple vibration sources and noise. A second-level decomposition is performed by applying pseudo-fault signal (PFS) assisted EMD on the envelope. A pseudo-fault signal is constructed based on the known fault characteristic frequency of the particular machine. The objective of using the external (pseudo-fault) signal is to isolate the different fault frequencies present in the envelope. The pseudo-fault signal serves dual purposes: (i) it solves the mode mixing problem inherent in EMD, and (ii) it isolates and quantifies a particular fault frequency component. The proposed technique is suitable for real-time implementation and has been validated on simulated fault data and on experimental data from a bearing and a gear-box set-up, respectively.
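The envelope-analysis step can be sketched with a Hilbert envelope and its spectrum; the sampling rate is illustrative, and 'imf' stands in for the dominant IMF already extracted by EMD:

```python
import numpy as np
from scipy.signal import hilbert

fs = 12000
imf = np.random.randn(fs)                     # one second of stand-in signal

envelope = np.abs(hilbert(imf))               # Hilbert envelope of the IMF
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(len(envelope), d=1 / fs)

peak_freq = freqs[np.argmax(spectrum)]        # compare against the known
                                              # fault characteristic frequency
```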
The Benefits of Using Time-Frequency Analysis with Synthetic Aperture Focusing Technique
DOE Office of Scientific and Technical Information (OSTI.GOV)
Albright, Austin P; Clayton, Dwight A
2015-01-01
Improvements in detection and resolution are always desired and needed. There are various instruments available for the inspection of concrete structures that can be used with confidence for detecting different defects. However, more often than not that confidence is heavily dependent on the experience of the operator rather than the clear, objective discernibility of the output of the instrument. The challenge of objective discernment is amplified when the concrete structures contain multiple layers of reinforcement, are of significant thickness, or both, such as concrete structures in nuclear power plants. We seek to improve and extend the usefulness of results produced using the synthetic aperture focusing technique (SAFT) on data collected from thick, complex concrete structures. A secondary goal is to improve existing SAFT results, with regards to repeatedly and objectively identifying defects and/or internal structure of concrete structures. Towards these goals, we are applying the time-frequency technique of wavelet packet decomposition and reconstruction using a mother wavelet that possesses the exact reconstruction property. However, instead of analyzing the coefficients of each decomposition node, we select and reconstruct specific nodes based on the frequency band each contains to produce a frequency-band-specific time-series representation. SAFT is then applied to these frequency-specific reconstructions, allowing SAFT to be used to visualize the reflectivity of a frequency band and that band's interaction with the contents of the concrete structure. We apply our technique to data sets collected using a commercial ultrasonic linear array (MIRA) from two 1.5 m × 2 m × 25 cm concrete test specimens. One specimen contains multiple layers of rebar. The other contains honeycomb, crack, and rebar bonding defect analogs. This approach opens up a multitude of possibilities for improved detection, readability, and overall improved objectivity. We will focus on improved defect/reinforcement isolation in thick and multilayered reinforcement environments. Additionally, the ability to empirically explore the possibility of a frequency-band-defect-type relationship or sensitivity becomes available.
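The node-selection-and-reconstruction step can be sketched with pywt's wavelet packets; the wavelet, depth, and node path are illustrative choices:

```python
import numpy as np
import pywt

x = np.random.randn(2048)                  # stand-in for an ultrasonic A-scan

wp = pywt.WaveletPacket(data=x, wavelet="db8", mode="symmetric", maxlevel=3)

# Copy a single level-3 node into an otherwise empty packet tree and
# reconstruct a frequency-band-specific time series for SAFT imaging.
band = pywt.WaveletPacket(data=None, wavelet="db8", mode="symmetric", maxlevel=3)
band["aad"] = wp["aad"].data
x_band = band.reconstruct(update=False)
```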
Exploring Ultrafast Structural Dynamics for Energetic Enhancement or Disruption
2016-03-01
In a pump-push/dump-probe experiment, a secondary laser pulse (push/dump) is used after the initial perturbation due to the pump pulse. The pump-push/dump-probe technique is a difficult experiment that requires a highly stable laser source.
Kumar, Nitin; Radin, Maxwell D.; Wood, Brandon C.; ...
2015-04-13
A viable Li/O2 battery will require the development of stable electrolytes that do not continuously decompose during cell operation. Some recent experiments suggest that reactions occurring at the interface between the liquid electrolyte and the solid lithium peroxide (Li2O2) discharge phase are a major contributor to these instabilities. To clarify the mechanisms associated with these reactions, a variety of atomistic simulation techniques, including classical Monte Carlo, van der Waals-augmented density functional theory, ab initio molecular dynamics, and various solvation models, are used to study the initial decomposition of the common electrolyte solvent, dimethoxyethane (DME), on surfaces of Li2O2. Comparisons are made between the two predominant Li2O2 surface charge states by calculating decomposition pathways on peroxide-terminated (O2^2-) and superoxide-terminated (O2^1-) facets. For both terminations, DME decomposition proceeds exothermically via a two-step process comprised of hydrogen abstraction (H-abstraction) followed by nucleophilic attack. In the first step, abstracted H dissociates a surface O2 dimer and combines with a dissociated oxygen to form a hydroxide ion (OH-). The remaining surface oxygen then attacks the DME, resulting in a DME fragment that is strongly bound to the Li2O2 surface. DME decomposition is predicted to be more exothermic on the peroxide facet; nevertheless, the rate of DME decomposition is faster on the superoxide termination. The impact of solvation (explicit vs implicit) and an applied electric field on the reaction energetics is investigated. Finally, our calculations suggest that surface-mediated electrolyte decomposition should out-pace liquid-phase processes such as solvent auto-oxidation by dissolved O2.
NASA Astrophysics Data System (ADS)
Forootan, Ehsan; Kusche, Jürgen; Talpe, Matthieu; Shum, C. K.; Schmidt, Michael
2017-12-01
In recent decades, decomposition techniques have enabled increasingly more applications for dimension reduction, as well as extraction of additional information from geophysical time series. Traditionally, the principal component analysis (PCA)/empirical orthogonal function (EOF) method and more recently the independent component analysis (ICA) have been applied to extract statistical orthogonal (uncorrelated) and independent modes that represent the maximum variance of time series, respectively. PCA and ICA can be classified as stationary signal decomposition techniques since they are based on decomposing the autocovariance matrix and diagonalizing higher (than two) order statistical tensors from centered time series, respectively. However, the stationarity assumption in these techniques is not justified for many geophysical and climate variables even after removing cyclic components, e.g., the commonly removed dominant seasonal cycles. In this paper, we present a novel decomposition method, the complex independent component analysis (CICA), which can be applied to extract non-stationary (changing in space and time) patterns from geophysical time series. Here, CICA is derived as an extension of real-valued ICA, where (a) we first define a new complex dataset that contains the observed time series in its real part, and their Hilbert-transformed series as its imaginary part, (b) an ICA algorithm based on diagonalization of fourth-order cumulants is then applied to decompose the new complex dataset in (a), and finally, (c) the dominant independent complex modes are extracted and used to represent the dominant space and time amplitudes and associated phase propagation patterns. The performance of CICA is examined by analyzing synthetic data constructed from multiple physically meaningful modes in a simulation framework with known truth. Next, global terrestrial water storage (TWS) data from the Gravity Recovery And Climate Experiment (GRACE) gravimetry mission (2003-2016), and satellite radiometric sea surface temperature (SST) data (1982-2016) over the Atlantic and Pacific Oceans are used with the aim of demonstrating signal separations of the North Atlantic Oscillation (NAO) from the Atlantic Multi-decadal Oscillation (AMO), and the El Niño Southern Oscillation (ENSO) from the Pacific Decadal Oscillation (PDO). CICA results indicate that ENSO-related patterns can be extracted from GRACE TWS with an accuracy of 0.5-1 cm in terms of equivalent water height (EWH). The magnitude of errors in extracting NAO or AMO from SST data using the complex EOF (CEOF) approach reaches up to 50% of the signal itself, while it is reduced to 16% when applying CICA. Larger errors with magnitudes of 100% and 30% of the signal itself are found while separating ENSO from PDO using CEOF and CICA, respectively. We thus conclude that CICA is more effective than CEOF in separating non-stationary patterns.
Koch, Ina; Nöthen, Joachim; Schleiff, Enrico
2017-01-01
Motivation: Arabidopsis thaliana is a well-established model system for the analysis of the basic physiological and metabolic pathways of plants. Nevertheless, the system is not yet fully understood, although many mechanisms are described and information for many processes exists. However, the combination and interpretation of the large amount of biological data remain a big challenge, not only because data sets for metabolic paths are still incomplete. Moreover, they are often inconsistent, because they come from different experiments of various scales, regarding, for example, accuracy and/or significance. Here, theoretical modeling is powerful for formulating hypotheses on pathways and the dynamics of the metabolism, even if the biological data are incomplete. To develop reliable mathematical models, they have to be proven for consistency. This is still a challenging task because many verification techniques fail already for middle-sized models. Consequently, new methods, like decomposition methods or reduction approaches, are developed to circumvent this problem. Methods: We present a new semi-quantitative mathematical model of the metabolism of Arabidopsis thaliana. We used the Petri net formalism to express the complex reaction system in a mathematically unique manner. To verify the model for correctness and consistency we applied concepts of network decomposition and network reduction such as transition invariants, common transition pairs, and invariant transition pairs. Results: We formulated the core metabolism of Arabidopsis thaliana based on recent knowledge from the literature, including the Calvin cycle, glycolysis and citric acid cycle, glyoxylate cycle, urea cycle, sucrose synthesis, and the starch metabolism. By applying network decomposition and reduction techniques at steady-state conditions, we suggest a straightforward mathematical modeling process. We demonstrate that potential steady-state pathways exist, which provide the fixed carbon to nearly all parts of the network, especially to the citric acid cycle. There is a close cooperation of important metabolic pathways, e.g., the de novo synthesis of uridine-5-monophosphate, the γ-aminobutyric acid shunt, and the urea cycle. The presented approach extends the established methods for a feasible interpretation of biological network models, in particular of large and complex models. PMID:28713420
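Transition invariants, used here for the consistency check, are nonnegative integer vectors x with C·x = 0, where C is the place-by-transition incidence matrix; a toy two-reaction cycle (not part of the Arabidopsis model) illustrates the computation:

```python
from sympy import Matrix

# Incidence matrix of a reversible two-reaction cycle: 2 places x 2 transitions.
C = Matrix([[-1,  1],
            [ 1, -1]])

# T-invariants span the nullspace of C; here the single invariant [1, 1]
# says that firing both transitions once restores the initial marking.
for v in C.nullspace():
    print(v.T)          # Matrix([[1, 1]])
```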
Koch, Ina; Nöthen, Joachim; Schleiff, Enrico
2017-01-01
Motivation: Arabidopsis thaliana is a well-established model system for the analysis of the basic physiological and metabolic pathways of plants. Nevertheless, the system is not yet fully understood, although many mechanisms are described and information for many processes exists. The combination and interpretation of the large amount of biological data remain a big challenge, not only because data sets for metabolic pathways are still incomplete, but also because they are often inconsistent, as they come from different experiments that vary, for example, in scale, accuracy and/or significance. Here, theoretical modeling is a powerful way to formulate hypotheses about pathways and the dynamics of the metabolism, even if the biological data are incomplete. To be reliable, mathematical models have to be checked for consistency. This is still a challenging task, because many verification techniques fail even for medium-sized models. Consequently, new methods, such as decomposition or reduction approaches, have been developed to circumvent this problem. Methods: We present a new semi-quantitative mathematical model of the metabolism of Arabidopsis thaliana. We used the Petri net formalism to express the complex reaction system in a mathematically unique manner. To verify the model for correctness and consistency, we applied concepts of network decomposition and network reduction such as transition invariants, common transition pairs, and invariant transition pairs. Results: We formulated the core metabolism of Arabidopsis thaliana based on recent knowledge from the literature, including the Calvin cycle, glycolysis and the citric acid cycle, the glyoxylate cycle, the urea cycle, sucrose synthesis, and starch metabolism. By applying network decomposition and reduction techniques at steady-state conditions, we suggest a straightforward mathematical modeling process. We demonstrate that potential steady-state pathways exist which provide fixed carbon to nearly all parts of the network, especially to the citric acid cycle. There is a close cooperation of important metabolic pathways, e.g., the de novo synthesis of uridine-5-monophosphate, the γ-aminobutyric acid shunt, and the urea cycle. The presented approach extends the established methods for a feasible interpretation of biological network models, in particular of large and complex models. PMID:28713420
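To make the verification concept concrete, the following toy sketch (not the authors' tool) computes transition invariants of a three-reaction Petri net cycle: nonnegative integer vectors x with C x = 0, where C is the place-by-transition incidence matrix.

```python
# Illustrative sketch: a transition invariant (T-invariant) of a Petri net is a
# nonnegative integer vector x with C @ x = 0, where C is the place-by-transition
# incidence matrix. Toy net: A -t1-> B -t2-> C -t3-> A (a closed cycle).
from functools import reduce
import sympy as sp

C = sp.Matrix([
    [-1,  0,  1],   # place A: consumed by t1, produced by t3
    [ 1, -1,  0],   # place B: produced by t1, consumed by t2
    [ 0,  1, -1],   # place C: produced by t2, consumed by t3
])

for v in C.nullspace():
    g = reduce(sp.gcd, v)          # scale to the smallest integer vector
    x = v / g
    print(x.T)                     # -> [1, 1, 1]: firing each transition once
                                   #    reproduces the marking (a steady-state cycle)
```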
Molloi, Sabee; Ding, Huanjun; Feig, Stephen
2015-01-01
Purpose: The purpose of this study was to compare the precision of mammographic breast density measurement using radiologist reader assessment, histogram threshold segmentation, fuzzy C-mean segmentation and spectral material decomposition. Materials and Methods: Spectral mammography images from a total of 92 consecutive asymptomatic women (50–69 years old) who presented for annual screening mammography were retrospectively analyzed for this study. Breast density was estimated using radiologist reader assessment (10 readers), standard histogram thresholding, a fuzzy C-mean algorithm and spectral material decomposition. The breast density correlation between left and right breasts was used to assess the precision of these techniques in measuring breast composition relative to dual-energy material decomposition. Results: In comparison to the other techniques, breast density measurements using dual-energy material decomposition showed the highest correlation. The relative standard error of estimate for breast density measurements from left and right breasts using radiologist reader assessment, standard histogram thresholding, the fuzzy C-mean algorithm and dual-energy material decomposition was calculated to be 1.95, 2.87, 2.07 and 1.00, respectively. Conclusion: The results indicate that the precision of dual-energy material decomposition was approximately a factor of two higher than that of the other techniques, as judged by the correlation of breast density measurements from right and left breasts. PMID:26031229
Steganography based on pixel intensity value decomposition
NASA Astrophysics Data System (ADS)
Abdulla, Alan Anwar; Sellahewa, Harin; Jassim, Sabah A.
2014-05-01
This paper focuses on steganography based on pixel intensity value decomposition. A number of existing schemes such as binary, Fibonacci, Prime, Natural, Lucas, and Catalan-Fibonacci (CF) are evaluated in terms of payload capacity and stego quality. A new technique based on a specific representation is proposed to decompose pixel intensity values into 16 (virtual) bit-planes suitable for embedding purposes. The proposed decomposition has a desirable property whereby the sum of all bit-planes does not exceed the maximum pixel intensity value, i.e. 255. Experimental results demonstrate that the proposed technique offers an effective compromise between payload capacity and stego quality of existing embedding techniques based on pixel intensity value decomposition. Its capacity is equal to that of binary and Lucas, while it offers a higher capacity than Fibonacci, Prime, Natural, and CF when the secret bits are embedded in 1st Least Significant Bit (LSB). When the secret bits are embedded in higher bit-planes, i.e., 2nd LSB to 8th Most Significant Bit (MSB), the proposed scheme has more capacity than Natural numbers based embedding. However, from the 6th bit-plane onwards, the proposed scheme offers better stego quality. In general, the proposed decomposition scheme has less effect in terms of quality on pixel value when compared to most existing pixel intensity value decomposition techniques when embedding messages in higher bit-planes.
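As a concrete illustration of pixel intensity value decomposition, the sketch below implements the Fibonacci (Zeckendorf) scheme, one of the existing techniques the paper evaluates. Note that its plane weights sum to well above 255, unlike the proposed 16-plane representation, whose exact construction is not given here.

```python
# Sketch of Fibonacci pixel decomposition (one of the schemes the paper evaluates):
# each 8-bit intensity is written as a sum of non-consecutive Fibonacci numbers
# (Zeckendorf form), yielding 12 "virtual" bit-planes for values 0..255.
FIBS = [1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233]   # basis covering 0..255

def fib_planes(value: int) -> list[int]:
    """Greedy Zeckendorf decomposition -> one bit per Fibonacci plane (LSB first)."""
    bits = [0] * len(FIBS)
    for i in range(len(FIBS) - 1, -1, -1):   # largest basis element first
        if FIBS[i] <= value:
            bits[i] = 1
            value -= FIBS[i]
    return bits

def from_planes(bits: list[int]) -> int:
    return sum(b * f for b, f in zip(bits, FIBS))

bits = fib_planes(200)                       # 200 = 144 + 55 + 1
print(bits, from_planes(bits) == 200)        # embedding flips bits in a chosen plane
```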
Sparse Gaussian elimination with controlled fill-in on a shared memory multiprocessor
NASA Technical Reports Server (NTRS)
Alaghband, Gita; Jordan, Harry F.
1989-01-01
It is shown that in sparse matrices arising from electronic circuits, it is possible to do computations on many diagonal elements simultaneously. A technique for obtaining an ordered compatible set directly from the ordered incompatible table is given. The ordering is based on the Markowitz number of the pivot candidates. This technique generates a set of compatible pivots with the property of generating few fills. A novel heuristic algorithm is presented that combines the idea of an order-compatible set with a limited binary tree search to generate several sets of compatible pivots in linear time. An elimination set for reducing the matrix is generated and selected on the basis of a minimum Markowitz sum number. The parallel pivoting technique presented is a stepwise algorithm and can be applied to any submatrix of the original matrix. Thus, it is not a preordering of the sparse matrix and is applied dynamically as the decomposition proceeds. Parameters are suggested to obtain a balance between parallelism and fill-ins. Results of applying the proposed algorithms on several large application matrices using the HEP multiprocessor (Kowalik, 1985) are presented and analyzed.
Biney, Paul O; Gyamerah, Michael; Shen, Jiacheng; Menezes, Bruna
2015-03-01
A new multi-stage kinetic model has been developed for TGA pyrolysis of arundo, corn stover, sawdust and switch grass that accounts for the initial biomass weight (W0). The biomass samples were decomposed in a nitrogen atmosphere from 23°C to 900°C in a TGA at a single 20°C/min ramp rate, in contrast with the isoconversion technique. The decomposition was divided into multiple stages based on the absolute local minimum values of the conversion derivative, (dx/dT), obtained from DTG curves. This resulted in three decomposition stages for arundo, corn stover and sawdust, and four stages for switch grass. A linearized multi-stage model was applied to the TGA data for each stage to determine the pre-exponential factor, activation energy, and reaction order. The activation energies ranged from 54.7 to 60.9 kJ/mol, 62.9 to 108.7 kJ/mol, and 18.4 to 257.9 kJ/mol for the first, second and third decomposition stages, respectively. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Shao, Feng; Evanschitzky, Peter; Fühner, Tim; Erdmann, Andreas
2009-10-01
This paper employs the Waveguide decomposition method as an efficient rigorous electromagnetic field (EMF) solver to investigate three-dimensional mask-induced imaging artifacts in EUV lithography. The major mask-diffraction-induced imaging artifacts are first identified by applying a Zernike analysis to the mask near-field spectrum of 2D lines/spaces. Three-dimensional mask features such as 22 nm semidense/dense contacts/posts, isolated elbows and line-ends are then investigated in terms of lithographic results. After that, the 3D mask-induced imaging artifacts such as feature-orientation-dependent best focus shift, process window asymmetries, and other aberration-like phenomena are explored for the studied mask features. The simulation results can help lithographers understand the causes of EUV-specific imaging artifacts and devise illumination- and feature-dependent strategies for their compensation in optical proximity correction (OPC) for EUV masks. Finally, an efficient approach combining the Zernike analysis with the Waveguide decomposition technique is proposed to characterize the impact of mask properties for the future OPC process.
NASA Astrophysics Data System (ADS)
Torregrosa, A. J.; Broatch, A.; Margot, X.; García-Tíscar, J.
2016-08-01
An experimental methodology is proposed to assess the noise emission of centrifugal turbocompressors like those of automotive turbochargers. A step-by-step procedure is detailed, starting from the theoretical considerations of sound measurement in flow ducts and examining specific experimental setup guidelines and signal processing routines. Special care is taken regarding some limiting factors that adversely affect the measuring of sound intensity in ducts, namely calibration, sensor placement and frequency ranges and restrictions. In order to provide illustrative examples of the proposed techniques and results, the methodology has been applied to the acoustic evaluation of a small automotive turbocharger in a flow bench. Samples of raw pressure spectra, decomposed pressure waves, calibration results, accurate surge characterization and final compressor noise maps and estimated spectrograms are provided. The analysis of selected frequency bands successfully shows how different, known noise phenomena of particular interest such as mid-frequency "whoosh noise" and low-frequency surge onset are correlated with operating conditions of the turbocharger. Comparison against external inlet orifice intensity measurements shows good correlation and improvement with respect to alternative wave decomposition techniques.
Al-Qazzaz, Noor Kamal; Ali, Sawal; Ahmad, Siti Anom; Escudero, Javier
2017-07-01
The aim of the present study was to discriminate the electroencephalogram (EEG) of 5 patients with vascular dementia (VaD), 15 patients with stroke-related mild cognitive impairment (MCI), and 15 control normal subjects during a working memory (WM) task. We used independent component analysis (ICA) and wavelet transform (WT) as a hybrid preprocessing approach for EEG artifact removal. Three different features were extracted from the cleaned EEG signals: spectral entropy (SpecEn), permutation entropy (PerEn) and Tsallis entropy (TsEn). Two classification schemes were applied - support vector machine (SVM) and k-nearest neighbors (kNN) - with fuzzy neighborhood preserving analysis with QR-decomposition (FNPAQR) as a dimensionality reduction technique. The FNPAQR dimensionality reduction technique increased the SVM classification accuracy from 82.22% to 90.37% and from 82.6% to 86.67% for kNN. These results suggest that FNPAQR consistently improves the discrimination of VaD, MCI patients and control normal subjects and it could be a useful feature selection to help the identification of patients with VaD and MCI.
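For illustration, here is a hedged sketch of one of the three extracted features, permutation entropy, computed from ordinal patterns with NumPy; the embedding dimension and delay below are illustrative choices, not the study's settings.

```python
# Sketch of one of the three features, permutation entropy (PerEn), computed
# from the ordinal patterns of an EEG channel (embedding dimension m, delay tau).
import numpy as np
from math import factorial

def permutation_entropy(x, m=3, tau=1):
    n = len(x) - (m - 1) * tau
    patterns = {}
    for i in range(n):
        window = x[i : i + m * tau : tau]
        key = tuple(np.argsort(window))          # ordinal pattern of the window
        patterns[key] = patterns.get(key, 0) + 1
    p = np.array(list(patterns.values()), dtype=float) / n
    return -np.sum(p * np.log(p)) / np.log(factorial(m))   # normalized to [0, 1]

rng = np.random.default_rng(1)
print(permutation_entropy(rng.standard_normal(2048)))      # close to 1 for white noise
```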
SDSS-IV MaNGA: bulge-disc decomposition of IFU data cubes (BUDDI)
NASA Astrophysics Data System (ADS)
Johnston, Evelyn J.; Häußler, Boris; Aragón-Salamanca, Alfonso; Merrifield, Michael R.; Bamford, Steven; Bershady, Matthew A.; Bundy, Kevin; Drory, Niv; Fu, Hai; Law, David; Nitschelm, Christian; Thomas, Daniel; Roman Lopes, Alexandre; Wake, David; Yan, Renbin
2017-02-01
With the availability of large integral field unit (IFU) spectral surveys of nearby galaxies, there is now the potential to extract spectral information from across the bulges and discs of galaxies in a systematic way. This information can address questions such as how these components built up with time, how galaxies evolve and whether their evolution depends on other properties of the galaxy such as its mass or environment. We present bulge-disc decomposition of IFU data cubes (BUDDI), a new approach to fit the two-dimensional light profiles of galaxies as a function of wavelength to extract the spectral properties of these galaxies' discs and bulges. The fitting is carried out using GALFITM, a modified form of GALFIT which can fit multiwaveband images simultaneously. The benefit of this technique over traditional multiwaveband fits is that the stellar populations of each component can be constrained using knowledge over the whole image and spectrum available. The decomposition has been developed using commissioning data from the Sloan Digital Sky Survey-IV Mapping Nearby Galaxies at APO (MaNGA) survey with redshifts z < 0.14 and coverage of at least 1.5 effective radii for a spatial resolution of 2.5 arcsec full width at half-maximum and field of view of > 22 arcsec, but can be applied to any IFU data of a nearby galaxy with similar or better spatial resolution and coverage. We present an overview of the fitting process, the results from our tests, and we finish with example stellar population analyses of early-type galaxies from the MaNGA survey to give an indication of the scientific potential of applying bulge-disc decomposition to IFU data.
Liu, Zhengyan; Mao, Xianqiang; Song, Peng
2017-01-01
Temporal index decomposition analysis and spatial index decomposition analysis were applied to understand the driving forces of the emissions embodied in China's exports and net exports during 2002-2011, respectively. The accumulated emissions embodied in exports accounted for approximately 30% of the total emissions in China; although the contribution of the sectoral total emissions intensity (technique effect) declined, the scale effect was largely responsible for the mounting emissions associated with export, and the composition effect played a largely insignificant role. Calculations of the emissions embodied in net exports suggest that China is generally in an environmentally inferior position compared with its major trade partners. The differences in the economy-wide emission intensities between China and its major trade partners were the biggest contribution to this reality, and the trade balance effect played a less important role. However, a lower degree of specialization in pollution intensive products in exports than in imports helped to reduce slightly the emissions embodied in net exports. The temporal index decomposition analysis results suggest that China should take effective measures to optimize export and supply-side structure and reduce the total emissions intensity. According to spatial index decomposition analysis, it is suggested that a more aggressive import policy was useful for curbing domestic and global emissions, and the transfer of advanced production technologies and emission control technologies from developed to developing countries should be a compulsory global environmental policy option to mitigate the possible leakage of pollution emissions caused by international trade.
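A small sketch may clarify how a temporal index decomposition separates scale, composition and technique effects; this uses the additive LMDI form with log-mean weights and entirely made-up two-sector numbers, not the paper's data.

```python
# Sketch of a temporal index decomposition (LMDI form): emissions E = sum_i Q*s_i*e_i,
# with Q total exports (scale), s_i sectoral shares (composition), and e_i sectoral
# emission intensities (technique). All numbers below are illustrative.
import numpy as np

def logmean(a, b):
    return np.where(np.isclose(a, b), a, (a - b) / (np.log(a) - np.log(b)))

Q0, Q1 = 100.0, 180.0                                  # export scale in years 0 and 1
s0, s1 = np.array([0.6, 0.4]), np.array([0.5, 0.5])    # sector shares
e0, e1 = np.array([2.0, 1.0]), np.array([1.5, 0.9])    # emission intensities

E0, E1 = Q0 * s0 * e0, Q1 * s1 * e1                    # sectoral emissions
w = logmean(E1, E0)                                    # LMDI weights

scale = np.sum(w * np.log(Q1 / Q0))
composition = np.sum(w * np.log(s1 / s0))
technique = np.sum(w * np.log(e1 / e0))
print(scale + composition + technique, E1.sum() - E0.sum())   # additive: equal
```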
Wavelet-based techniques for the gamma-ray sky
DOE Office of Scientific and Technical Information (OSTI.GOV)
McDermott, Samuel D.; Fox, Patrick J.; Cholis, Ilias
2016-07-01
Here, we demonstrate how the image analysis technique of wavelet decomposition can be applied to the gamma-ray sky to separate emission on different angular scales. New structures on scales that differ from the scales of the conventional astrophysical foreground and background uncertainties can be robustly extracted, allowing a model-independent characterization with no presumption of exact signal morphology. As a test case, we generate mock gamma-ray data to demonstrate our ability to extract extended signals without assuming a fixed spatial template. For some point source luminosity functions, our technique also allows us to differentiate a diffuse signal in gamma-rays from dark matter annihilation and extended gamma-ray point source populations in a data-driven way.
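The following flat-sky sketch (an assumption; the paper works on the sphere with its own wavelet choices) shows the basic operation with PyWavelets: decompose a mock counts map and reconstruct the emission at a single angular scale.

```python
# Flat-sky sketch of separating emission by angular scale with a 2-D wavelet
# decomposition (PyWavelets). Purely illustrative; not the paper's pipeline.
import numpy as np
import pywt

rng = np.random.default_rng(2)
sky = rng.poisson(5.0, size=(256, 256)).astype(float)    # mock counts map

coeffs = pywt.wavedec2(sky, wavelet="bior4.4", level=4)

# Keep only one decomposition level (one angular scale); zero the rest.
selected = [np.zeros_like(coeffs[0])]
for lvl, details in enumerate(coeffs[1:], start=1):
    keep = (lvl == 2)                                    # choose the scale of interest
    selected.append(tuple(d if keep else np.zeros_like(d) for d in details))

scale_map = pywt.waverec2(selected, wavelet="bior4.4")
print(scale_map.shape)
```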
DOE Office of Scientific and Technical Information (OSTI.GOV)
Le, Huy Q.; Molloi, Sabee
Purpose: Energy resolving detectors provide more than one spectral measurement in one image acquisition. The purpose of this study is to investigate, with simulation, the ability to decompose four materials using energy discriminating detectors and least squares minimization techniques. Methods: Three least squares parameter estimation decomposition techniques were investigated for four-material breast imaging tasks in the image domain. The first technique treats the voxel as if it consisted of fractions of all the materials. The second method assumes that a voxel primarily contains one material and divides the decomposition process into segmentation and quantification tasks. The third is similar to the second method, but a calibration was used. The simulated computed tomography (CT) system consisted of an 80 kVp spectrum and a CdZnTe (CZT) detector that could resolve the x-ray spectrum into five energy bins. A postmortem breast specimen was imaged with flat panel CT to provide a model for the digital phantoms. Hydroxyapatite (HA) (50, 150, 250, 350, 450, and 550 mg/ml) and iodine (4, 12, 20, 28, 36, and 44 mg/ml) contrast elements were embedded into the glandular region of the phantoms. Calibration phantoms consisted of a 30/70 glandular-to-adipose tissue ratio with embedded HA (100, 200, 300, 400, and 500 mg/ml) and iodine (5, 15, 25, 35, and 45 mg/ml). The x-ray transport process was simulated where the Beer-Lambert law, Poisson process, and CZT absorption efficiency were applied. Qualitative and quantitative evaluations of the decomposition techniques were performed and compared. The effect of breast size was also investigated. Results: The first technique decomposed iodine adequately but failed for other materials. The second method separated the materials but was unable to quantify the materials. With the addition of a calibration, the third technique provided good separation and quantification of hydroxyapatite, iodine, glandular, and adipose tissues. Quantification with this technique was accurate with errors of 9.83% and 6.61% for HA and iodine, respectively. Calibration at one point (one breast size) showed increased errors as the mismatch in breast diameters between calibration and measurement increased. A four-point calibration successfully decomposed breast diameters spanning the entire range from 8 to 20 cm. For a 14 cm breast, errors were reduced from 5.44% to 1.75% and from 6.17% to 3.27% with the multipoint calibration for HA and iodine, respectively. Conclusions: The results of the simulation study showed that a CT system based on CZT detectors in conjunction with a least squares minimization technique can be used to decompose four materials. The calibrated least squares parameter estimation decomposition technique performed the best, separating and accurately quantifying the concentrations of hydroxyapatite and iodine.
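A minimal sketch of the calibrated least-squares step, with invented basis values: per-bin attenuation is modeled as a linear combination of material basis vectors obtained from calibration, and concentrations follow from a least-squares solve.

```python
# Sketch of the calibrated least-squares step: per-energy-bin attenuation is
# modeled as A @ c, with A (bins x materials) fitted from calibration phantoms
# and c the material concentrations. All numbers are illustrative stand-ins.
import numpy as np

# Calibration-derived basis: attenuation per unit concentration in 5 energy bins
# for hydroxyapatite, iodine, glandular, and adipose tissue (made-up values).
A = np.array([
    [0.90, 1.60, 0.30, 0.25],
    [0.70, 1.90, 0.28, 0.23],
    [0.55, 1.20, 0.26, 0.21],
    [0.42, 0.80, 0.24, 0.20],
    [0.33, 0.55, 0.22, 0.18],
])

c_true = np.array([0.15, 0.02, 0.60, 0.23])       # HA, iodine, glandular, adipose
y = A @ c_true + 1e-3 * np.random.default_rng(3).standard_normal(5)   # noisy bins

c_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(c_hat, 3))                          # estimated concentrations
```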
Constraint-based integration of planning and scheduling for space-based observatory management
NASA Technical Reports Server (NTRS)
Muscettola, Nicola; Smith, Steven F.
1994-01-01
Progress toward the development of effective, practical solutions to space-based observatory scheduling problems within the HSTS scheduling framework is reported. HSTS was developed and originally applied in the context of the Hubble Space Telescope (HST) short-term observation scheduling problem. The work was motivated by the limitations of the current solution and, more generally, by the insufficiency of classical planning and scheduling approaches in this problem context. HSTS has subsequently been used to develop improved heuristic solution techniques in related scheduling domains and is currently being applied to develop a scheduling tool for the upcoming Submillimeter Wave Astronomy Satellite (SWAS) mission. The salient architectural characteristics of HSTS and their relationship to previous scheduling and AI planning research are summarized. Then, some key problem decomposition techniques underlying the integrated planning and scheduling approach to the HST problem are described; research results indicate that these techniques provide leverage in solving space-based observatory scheduling problems. Finally, more recently developed constraint-posting scheduling procedures and the current SWAS application focus are summarized.
NASA Astrophysics Data System (ADS)
Al-Hayani, Nazar; Al-Jawad, Naseer; Jassim, Sabah A.
2014-05-01
Video compression and encryption are essential for secure real-time video transmission, and applying both techniques simultaneously is a challenge when both size and quality matter in multimedia transmission. In this paper we propose a new technique for video compression and encryption. Both encryption and compression are based on edges extracted from the high-frequency sub-bands of a wavelet decomposition. The compression algorithm is based on a hybrid of discrete wavelet transforms, the discrete cosine transform, vector quantization, wavelet-based edge detection, and phase sensing. The compression encoding algorithm treats the video reference and non-reference frames in two different ways. The encryption algorithm uses the A5 cipher combined with a chaotic logistic map to encrypt the significant parameters and wavelet coefficients. Both algorithms can be applied simultaneously after applying the discrete wavelet transform on each individual frame. Experimental results show that the proposed algorithms achieve high compression, acceptable quality, and resistance to statistical and brute-force attacks, with low computational processing.
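To illustrate the chaotic component of the encryption, here is a hedged sketch of a logistic-map keystream XORed with quantized coefficients; the A5 cipher stage and the paper's exact parameter choices are omitted.

```python
# Sketch of the chaotic part of the encryption: a logistic-map keystream XORed
# with quantized wavelet coefficients. The A5 cipher stage of the paper is omitted,
# and the seed/parameter values are illustrative.
import numpy as np

def logistic_keystream(n, x0=0.3141, r=3.99):
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)                 # chaotic logistic map iteration
        out[i] = int(x * 256) & 0xFF          # quantize state to one byte
    return out

coeffs = np.arange(16, dtype=np.uint8)        # stand-in for quantized coefficients
ks = logistic_keystream(coeffs.size)
cipher = coeffs ^ ks
assert np.array_equal(cipher ^ ks, coeffs)    # XOR keystream is self-inverting
```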
Sengottuvel, S; Khan, Pathan Fayaz; Mariyappa, N; Patel, Rajesh; Saipriya, S; Gireesan, K
2018-06-01
Cutaneous measurements of electrogastrogram (EGG) signals are heavily contaminated by artifacts due to cardiac activity, breathing, motion artifacts, and electrode drifts whose effective elimination remains an open problem. A common methodology is proposed by combining independent component analysis (ICA) and ensemble empirical mode decomposition (EEMD) to denoise gastric slow-wave signals in multichannel EGG data. Sixteen electrodes are fixed over the upper abdomen to measure the EGG signals under three gastric conditions, namely, preprandial, postprandial immediately, and postprandial 2 h after food for three healthy subjects and a subject with a gastric disorder. Instantaneous frequencies of intrinsic mode functions that are obtained by applying the EEMD technique are analyzed to individually identify and remove each of the artifacts. A critical investigation on the proposed ICA-EEMD method reveals its ability to provide a higher attenuation of artifacts and lower distortion than those obtained by the ICA-EMD method and conventional techniques, like bandpass and adaptive filtering. Characteristic changes in the slow-wave frequencies across the three gastric conditions could be determined from the denoised signals for all the cases. The results therefore encourage the use of the EEMD-based technique for denoising gastric signals to be used in clinical practice.
A hybrid perturbation-Galerkin technique for partial differential equations
NASA Technical Reports Server (NTRS)
Geer, James F.; Anderson, Carl M.
1990-01-01
A two-step hybrid perturbation-Galerkin technique for improving the usefulness of perturbation solutions to partial differential equations which contain a parameter is presented and discussed. In the first step of the method, the leading terms in the asymptotic expansion(s) of the solution about one or more values of the perturbation parameter are obtained using standard perturbation methods. In the second step, the perturbation functions obtained in the first step are used as trial functions in a Bubnov-Galerkin approximation. This semi-analytical, semi-numerical hybrid technique appears to overcome some of the drawbacks of the perturbation and Galerkin methods when they are applied by themselves, while combining some of the good features of each. The technique is illustrated first by a simple example. It is then applied to the problem of determining the flow of a slightly compressible fluid past a circular cylinder and to the problem of determining the shape of a free surface due to a sink above the surface. Solutions obtained by the hybrid method are compared with other approximate solutions, and its possible application to certain problems associated with domain decomposition is discussed.
Changing vacancy balance in ZnO by tuning synthesis between zinc/oxygen lean conditions
NASA Astrophysics Data System (ADS)
Venkatachalapathy, Vishnukanthan; Galeckas, Augustinas; Zubiaga, Asier; Tuomisto, Filip; Kuznetsov, Andrej Yu.
2010-08-01
The nature of intrinsic defects in ZnO films grown by metal organic vapor phase epitaxy was studied by positron annihilation and photoluminescence spectroscopy techniques. The supply of Zn and O during the film synthesis was varied by applying different growth temperatures (325-485 °C), affecting decomposition of the metal organic precursors. The microscopic identification of vacancy complexes was derived from a systematic variation in the defect balance in accordance with Zn/O supply trends.
Breast tissue decomposition with spectral distortion correction: A postmortem study
Ding, Huanjun; Zhao, Bo; Baturin, Pavlo; Behroozi, Farnaz; Molloi, Sabee
2014-01-01
Purpose: To investigate the feasibility of an accurate measurement of water, lipid, and protein composition of breast tissue using a photon-counting spectral computed tomography (CT) with spectral distortion corrections. Methods: Thirty-eight postmortem breasts were imaged with a cadmium-zinc-telluride-based photon-counting spectral CT system at 100 kV. The energy-resolving capability of the photon-counting detector was used to separate photons into low and high energy bins with a splitting energy of 42 keV. The estimated mean glandular dose for each breast ranged from 1.8 to 2.2 mGy. Two spectral distortion correction techniques were implemented, respectively, on the raw images to correct the nonlinear detector response due to pulse pileup and charge-sharing artifacts. Dual energy decomposition was then used to characterize each breast in terms of water, lipid, and protein content. In the meantime, the breasts were chemically decomposed into their respective water, lipid, and protein components to provide a gold standard for comparison with dual energy decomposition results. Results: The accuracy of the tissue compositional measurement with spectral CT was determined by comparing to the reference standard from chemical analysis. The averaged root-mean-square error in percentage composition was reduced from 15.5% to 2.8% after spectral distortion corrections. Conclusions: The results indicate that spectral CT can be used to quantify the water, lipid, and protein content in breast tissue. The accuracy of the compositional analysis depends on the applied spectral distortion correction technique. PMID:25281953
Decomposition of Multi-player Games
NASA Astrophysics Data System (ADS)
Zhao, Dengji; Schiffel, Stephan; Thielscher, Michael
Research in General Game Playing aims at building systems that learn to play unknown games without human intervention. We contribute to this endeavour by generalising the established technique of decomposition from AI Planning to multi-player games. To this end, we present a method for the automatic decomposition of previously unknown games into independent subgames, and we show how a general game player can exploit a successful decomposition for game tree search.
Lu, Yan; Li, Gang; Liu, Wei; Yuan, Hongyan; Xiao, Dan
2018-08-15
Refractory ores underpin many sectors of the economy and are widely applied in various fields; however, the complexity of their chemical composition and the diversity of crystallinity in the mineral phases mean that the sample pre-treatment of refractory ore remains a challenge. In this work, complete decomposition of a refractory ore sample can be achieved simply by exposing the solid fusion agent and the ore sample to microwave irradiation for a few minutes, induced by a drop of water. A digestion time of 15 min for 3.0 g of a solid fusion agent mixture of sodium peroxide/sodium carbonate (Na2O2/Na2CO3) in a corundum crucible under microwave heating is sufficient to decompose 0.1 g of refractory ore sample. A good solid agent for microwave digestion should meet the following conditions: a good decomposition ability, an outstanding ability to absorb microwave energy and convert it into heat quickly, and a melting point higher than the decomposition temperature of the ore sample. The induction effect of water plays an important role in the microwave digestion: the energy released by the reaction of water with the solid fusion agent (Na2O2) is the key to decomposing refractory ore samples with a solid fusion agent, as it supplies the energy required for the microwave digestion to complete successfully. The technique has good reproducibility and precision; the RSD% values for Mo, Fe, Ti, Cr and W in the refractory ore samples were all better than 6%, except for Be at about 8% owing to matrix effects. Meanwhile, the analytical results for the elements in the refractory ore samples obtained by the microwave digestion technique were all in good agreement with those of the traditional fusion method, except for Cr in the mixture ore samples. The non-linear dependence of the electromagnetic and thermal properties of the solid fusion agent on temperature under microwave irradiation, and the selective heating of microwaves, are fully exploited in this simple technique. Compared with the traditional fusion decomposition method, this microwave digestion technique is a simple, economical, fast and energy-saving sample pre-treatment technique. Copyright © 2018 Elsevier B.V. All rights reserved.
Data-driven sensor placement from coherent fluid structures
NASA Astrophysics Data System (ADS)
Manohar, Krithika; Kaiser, Eurika; Brunton, Bingni W.; Kutz, J. Nathan; Brunton, Steven L.
2017-11-01
Optimal sensor placement is a central challenge in the prediction, estimation and control of fluid flows. We reinterpret sensor placement as optimizing discrete samples of coherent fluid structures for full state reconstruction. This permits a drastic reduction in the number of sensors required for faithful reconstruction, since complex fluid interactions can often be described by a small number of coherent structures. Our work optimizes point sensors using the pivoted matrix QR factorization to sample coherent structures directly computed from flow data. We apply this sampling technique in conjunction with various data-driven modal identification methods, including the proper orthogonal decomposition (POD) and dynamic mode decomposition (DMD). In contrast to POD-based sensors, DMD demonstrably enables the optimization of sensors for prediction in systems exhibiting multiple scales of dynamics. Finally, reconstruction accuracy from pivot sensors is shown to be competitive with sensors obtained using traditional computationally prohibitive optimization methods.
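The core placement step can be sketched in a few lines: compute POD modes from snapshot data and take the column pivots of a QR factorization of the transposed modes as sensor locations. The data below are random stand-ins, not flow snapshots.

```python
# Sketch of the paper's core step: choose sensor locations as the column pivots
# of a pivoted QR factorization applied to the (transposed) POD modes of the data.
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(4)
X = rng.standard_normal((1000, 400))     # snapshots: 1000 spatial points x 400 times

U, s, Vh = np.linalg.svd(X, full_matrices=False)
r = 10
Ur = U[:, :r]                            # leading POD modes

_, _, piv = qr(Ur.T, pivoting=True)      # pivoted QR on the transposed mode matrix
sensors = piv[:r]                        # first r pivots = sensor locations
print(sorted(sensors))

# Full-state reconstruction from r point measurements X[sensors, :]:
a = np.linalg.solve(Ur[sensors, :], X[sensors, :])   # modal amplitudes
X_hat = Ur @ a                                       # full-state estimate
```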
NASA Astrophysics Data System (ADS)
Emge, Darren K.; Adalı, Tülay
2014-06-01
As the availability and use of imaging methodologies continue to increase, there is a fundamental need to jointly analyze data collected from multiple modalities. This analysis is further complicated when the size or resolution of the images differ, implying that the observation lengths of each modality can vary widely. To address this expanding landscape, we introduce the multiset singular value decomposition (MSVD), which can perform a joint analysis on any number of modalities regardless of their individual observation lengths. Through simulations, we show the intermodal relationships across the different modalities that are revealed by the MSVD. We apply the MSVD to forensic fingerprint analysis, showing that MSVD joint analysis successfully identifies relevant similarities for further analysis, significantly reducing the processing time required. This reduction takes the technique from a laboratory method to a useful forensic tool with applications across the law enforcement and security regimes.
Divergence-free approach for obtaining decompositions of quantum-optical processes
NASA Astrophysics Data System (ADS)
Sabapathy, K. K.; Ivan, J. S.; García-Patrón, R.; Simon, R.
2018-02-01
Operator-sum representations of quantum channels can be obtained by applying the channel to one subsystem of a maximally entangled state and deploying the channel-state isomorphism. However, for continuous-variable systems, such schemes contain natural divergences since the maximally entangled state is ill defined. We introduce a method that avoids such divergences by utilizing finitely entangled (squeezed) states and then taking the limit of arbitrary large squeezing. Using this method, we derive an operator-sum representation for all single-mode bosonic Gaussian channels where a unique feature is that both quantum-limited and noisy channels are treated on an equal footing. This technique facilitates a proof that the rank-1 Kraus decomposition for Gaussian channels at its respective entanglement-breaking thresholds, obtained in the overcomplete coherent-state basis, is unique. The methods could have applications to simulation of continuous-variable channels.
Forecasting stochastic neural network based on financial empirical mode decomposition.
Wang, Jie; Wang, Jun
2017-06-01
In an attempt to improve the forecasting accuracy of stock price fluctuations, a new one-step-ahead model is developed in this paper which combines empirical mode decomposition (EMD) with a stochastic time strength neural network (STNN). EMD is a processing technique introduced to extract all the oscillatory modes embedded in a series, and the STNN model is established to weight the occurrence time of the historical data. A linear regression confirms the predictive ability of the proposed model, and the effectiveness of EMD-STNN is revealed clearly by comparing the predicted results with those of traditional models. Moreover, a new evaluation method (q-order multiscale complexity invariant distance) is applied to measure the predicted results for real stock index series, and the empirical results show that the proposed model indeed performs well in forecasting stock market fluctuations. Copyright © 2017 Elsevier Ltd. All rights reserved.
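A hedged sketch of the decompose-then-forecast idea follows. It assumes the PyEMD package is available (an assumption about tooling, installed as "EMD-signal"), and it replaces the paper's STNN with a plain least-squares autoregression per intrinsic mode function for brevity.

```python
# Sketch of the decomposition-then-forecast idea. Assumes the PyEMD package
# ("pip install EMD-signal"); the paper's STNN is replaced here by an ordinary
# autoregression on each intrinsic mode function (IMF) for illustration.
import numpy as np
from PyEMD import EMD

rng = np.random.default_rng(5)
t = np.arange(600)
price = np.cumsum(rng.standard_normal(600)) + 5 * np.sin(t / 20)   # mock index series

imfs = EMD().emd(price)                       # oscillatory modes plus residual

def ar_one_step(x, p=5):
    """Least-squares AR(p) one-step-ahead forecast."""
    rows = np.array([x[i : i + p] for i in range(len(x) - p)])
    coef, *_ = np.linalg.lstsq(rows, x[p:], rcond=None)
    return x[-p:] @ coef

forecast = sum(ar_one_step(imf) for imf in imfs)   # recombine the mode forecasts
print(forecast)
```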
NASA Technical Reports Server (NTRS)
Siljak, D. D.; Weissenberger, S.; Cuk, S. M.
1973-01-01
This report presents the development and description of the decomposition aggregation approach to stability investigations of high dimension mathematical models of dynamic systems. The high dimension vector differential equation describing a large dynamic system is decomposed into a number of lower dimension vector differential equations which represent interconnected subsystems. Then a method is described by which the stability properties of each subsystem are aggregated into a single vector Liapunov function, representing the aggregate system model, consisting of subsystem Liapunov functions as components. A linear vector differential inequality is then formed in terms of the vector Liapunov function. The matrix of the model, which reflects the stability properties of the subsystems and the nature of their interconnections, is analyzed to conclude over-all system stability characteristics. The technique is applied in detail to investigate the stability characteristics of a dynamic model of a hypothetical spinning Skylab.
Szlavik, Robert B
2016-02-01
The characterization of peripheral nerve fiber distributions, in terms of diameter or velocity, is of clinical significance because information associated with these distributions can be utilized in the differential diagnosis of peripheral neuropathies. Electro-diagnostic techniques can be applied to the investigation of peripheral neuropathies and can yield valuable diagnostic information while being minimally invasive. Nerve conduction velocity studies are single parameter tests that yield no detailed information regarding the characteristics of the population of nerve fibers that contribute to the compound-evoked potential. Decomposition of the compound-evoked potential, such that the velocity or diameter distribution of the contributing nerve fibers may be determined, is necessary if information regarding the population of contributing nerve fibers is to be ascertained from the electro-diagnostic study. In this work, a perturbation-based decomposition of compound-evoked potentials is proposed that facilitates determination of the fiber diameter distribution associated with the compound-evoked potential. The decomposition is based on representing the single fiber-evoked potential, associated with each diameter class, as being perturbed by contributions, of varying degree, from all the other diameter class single fiber-evoked potentials. The resultant estimator of the contributing nerve fiber diameter distribution is valid for relatively large separations in diameter classes. It is also useful in situations where the separation between diameter classes is small and the concomitant single fiber-evoked potentials are not orthogonal.
Duemichen, E; Braun, U; Senz, R; Fabian, G; Sturm, H
2014-08-08
For the analysis of the gaseous thermal decomposition products of polymers, the common techniques are thermogravimetry combined with Fourier transform infrared spectroscopy (TGA-FTIR) and with mass spectrometry (TGA-MS). These methods offer a simple approach to the decomposition mechanism, especially for small decomposition molecules. Complex spectra of gaseous mixtures are often hard to identify because of overlapping signals. In this paper a new method is described in which the decomposition products are adsorbed under controlled TGA conditions onto solid-phase extraction (SPE) material: twisters. Subsequently, the twisters were analysed with thermal desorption gas chromatography mass spectrometry (TDS-GC-MS), which allows the decomposition products to be separated and identified using an MS library. The thermoplastics polyamide 66 (PA 66) and polybutylene terephthalate (PBT) were used as example polymers. The influence of the sample mass and of the purge gas flow during the decomposition process was investigated in TGA. The advantages and limitations of the method are presented in comparison with the common analysis techniques, TGA-FTIR and TGA-MS. Copyright © 2014 Elsevier B.V. All rights reserved.
Gálvez, Akemi; Iglesias, Andrés
2013-01-01
Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently.
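The inner linear step of this pipeline can be sketched as follows: given a data parameterization (plain chord length here, standing in for the firefly-optimized one), the spline coefficients follow from a linear least-squares fit; the knot count and degree below are illustrative.

```python
# Sketch of the inner linear step of the fitting pipeline: given a data
# parameterization, the spline coefficients follow from a least-squares solve.
# Chord length stands in for the firefly-optimized parameterization.
import numpy as np
from scipy.interpolate import make_lsq_spline

rng = np.random.default_rng(6)
pts = np.column_stack([np.linspace(0, 2 * np.pi, 80),
                       np.sin(np.linspace(0, 2 * np.pi, 80))])
pts += 0.01 * rng.standard_normal(pts.shape)          # noisy sample points

# Chord-length parameterization (the quantity the firefly algorithm optimizes).
d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))]
u = d / d[-1]

k, n_knots = 3, 8                                     # cubic spline, uniform knots
t = np.r_[[0.0] * (k + 1), np.linspace(0, 1, n_knots)[1:-1], [1.0] * (k + 1)]
spl_x = make_lsq_spline(u, pts[:, 0], t, k=k)         # least-squares fit per coordinate
spl_y = make_lsq_spline(u, pts[:, 1], t, k=k)
print(spl_x(0.5), spl_y(0.5))                         # evaluate the fitted curve
```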
Exploiting symmetries in the modeling and analysis of tires
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Andersen, C. M.; Tanner, John A.
1989-01-01
A computational procedure is presented for reducing the size of the analysis models of tires having unsymmetric material, geometry and/or loading. The two key elements of the procedure when applied to anisotropic tires are: (1) decomposition of the stiffness matrix into the sum of an orthotropic and nonorthotropic parts; and (2) successive application of the finite-element method and the classical Rayleigh-Ritz technique. The finite-element method is first used to generate few global approximation vectors (or modes). Then the amplitudes of these modes are computed by using the Rayleigh-Ritz technique. The proposed technique has high potential for handling practical tire problems with anisotropic materials, unsymmetric imperfections and asymmetric loading. It is also particularly useful for use with three-dimensional finite-element models of tires.
Multispectral Wavefronts Retrieval in Digital Holographic Three-Dimensional Imaging Spectrometry
NASA Astrophysics Data System (ADS)
Yoshimori, Kyu
2010-04-01
This paper deals with a recently developed passive interferometric technique for retrieving a set of spectral components of wavefronts propagating from a spatially incoherent, polychromatic object. The technique is based on measurement of a 5-D spatial coherence function using a suitably designed interferometer. By applying signal processing, including aperture synthesis and spectral decomposition, one may obtain a set of wavefronts of different spectral bands. Since each wavefront is equivalent to the complex Fresnel hologram at a particular spectrum of the polychromatic object, application of the conventional Fresnel transform yields a 3-D image for each spectrum. Thus, this technique of multispectral wavefront retrieval provides a new type of 3-D imaging spectrometry based on fully passive interferometry. Experimental results are also shown to demonstrate the validity of the method.
Convex decomposition techniques applied to handlebodies
NASA Astrophysics Data System (ADS)
Ortiz, Marcos A.
Contact structures on 3-manifolds are 2-plane fields satisfying a set of conditions. The study of contact structures can be traced back for over two hundred years, and has been of interest to mathematicians such as Hamilton, Jacobi, Cartan, and Darboux. In the late 1900s, the study of these structures gained momentum as the work of Eliashberg and Bennequin described subtleties in these structures that could be used to find new invariants. In particular, it was discovered that contact structures fall into two classes: tight and overtwisted. While overtwisted contact structures are relatively well understood, tight contact structures remain an area of active research. One area of active study, in particular, is the classification of tight contact structures on 3-manifolds. This began with Eliashberg, who showed that the standard contact structure in real three-dimensional space is unique, and it has been expanded on since. Some major advancements and new techniques were introduced by Kanda, Honda, Etnyre, Kazez, Matic, and others. Convex decomposition theory was one product of these explorations. This technique involves cutting a manifold along convex surfaces (i.e. surfaces arranged in a particular way in relation to the contact structure) and investigating a particular set on these cutting surfaces to say something about the original contact structure. In the cases where the cutting surfaces are fairly nice, in some sense, Honda established a correspondence between information on the cutting surfaces and the tight contact structures supported by the original manifold. In this thesis, convex surface theory is applied to the case of handlebodies with a restricted class of dividing sets. For some cases, classification is achieved, and for others, some interesting patterns arise and are investigated.
Phase-based motion magnification video for monitoring of vital signals using the Hermite transform
NASA Astrophysics Data System (ADS)
Brieva, Jorge; Moya-Albor, Ernesto
2017-11-01
In this paper we present a new Eulerian phase-based motion magnification technique using the Hermite Transform (HT) decomposition, which is inspired by the Human Visual System (HVS). We test our method on a sequence of the breathing of a newborn baby and on a video sequence that shows the heartbeat on the wrist. We detect and magnify the heart pulse by applying our technique. Our motion magnification approach is compared to the Laplacian phase-based approach by means of quantitative metrics (based on the RMS error and the Fourier transform) that measure the quality of both reconstruction and magnification. In addition, a noise robustness analysis is performed for the two methods.
Relaxations to Sparse Optimization Problems and Applications
NASA Astrophysics Data System (ADS)
Skau, Erik West
Parsimony is a fundamental property that is applied to many characteristics in a variety of fields. Of particular interest are optimization problems that apply rank, dimensionality, or support in a parsimonious manner. In this thesis we study some optimization problems and their relaxations, and focus on properties and qualities of the solutions of these problems. The Gramian tensor decomposition problem attempts to decompose a symmetric tensor as a sum of rank one tensors. We approach the Gramian tensor decomposition problem with a relaxation to a semidefinite program. We study conditions which ensure that the solution of the relaxed semidefinite problem gives the minimal Gramian rank decomposition. Sparse representations with learned dictionaries are one of the leading image modeling techniques for image restoration. When learning these dictionaries from a set of training images, the sparsity parameter of the dictionary learning algorithm strongly influences the content of the dictionary atoms. We describe geometrically the content of trained dictionaries and how it changes with the sparsity parameter. We use statistical analysis to characterize how the different content is used in sparse representations. Finally, a method to control the structure of the dictionaries is demonstrated, allowing us to learn a dictionary which can later be tailored for specific applications. Variations of dictionary learning can be broadly applied to a variety of applications. We explore a pansharpening problem with a triple factorization variant of coupled dictionary learning. Another application of dictionary learning is computer vision. Computer vision relies heavily on object detection, which we explore with a hierarchical convolutional dictionary learning model. Data fusion of disparate modalities is a growing topic of interest. We do a case study to demonstrate the benefit of using social media data with satellite imagery to estimate hazard extents. In this case study analysis we apply a maximum entropy model, guided by the social media data, to estimate the flooded regions during a 2013 flood in Boulder, CO and show that the results are comparable to those obtained using expert information.
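As a small illustration of the dictionary learning setting described above, the sketch below learns a sparse dictionary from stand-in patch data with scikit-learn; `alpha` is the sparsity parameter whose influence on atom content the thesis characterizes. The random patches are an assumption made for self-containment.

```python
# Sketch of sparse dictionary learning on image patches (scikit-learn). The
# sparsity knob `alpha` is the parameter whose effect on dictionary content
# the text describes; data here are random stand-ins for training patches.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(7)
patches = rng.standard_normal((2000, 64))         # 2000 8x8 patches, flattened
patches -= patches.mean(axis=1, keepdims=True)    # remove each patch's DC component

dico = MiniBatchDictionaryLearning(n_components=100, alpha=1.0,
                                   batch_size=200, random_state=0)
codes = dico.fit_transform(patches)               # sparse codes per patch
atoms = dico.components_                          # learned dictionary atoms
print(atoms.shape, float((codes != 0).mean()))    # sparsity level of the codes
```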
Decomposition of Metrosideros polymorpha leaf litter along elevational gradients in Hawaii
Paul G. Scowcroft; Douglas R. Turner; Peter M. Vitousek
2000-01-01
We examined interactions between temperature, soil development, and decomposition on three elevational gradients, the upper and lower ends of each being situated on a common lava flow or ash deposit. We used the reciprocal transplant technique to estimate decomposition rates of Metrosideros polymorpha leaf litter during a three-year period at warm...
NASA Astrophysics Data System (ADS)
Kafka, Orion L.; Yu, Cheng; Shakoor, Modesar; Liu, Zeliang; Wagner, Gregory J.; Liu, Wing Kam
2018-04-01
A data-driven mechanistic modeling technique is applied to a system representative of a broken-up inclusion ("stringer") within drawn nickel-titanium wire or tube, e.g., as used for arterial stents. The approach uses a decomposition of the problem into a training stage and a prediction stage. It is applied to compute the fatigue crack incubation life of a microstructure of interest under high-cycle fatigue. A parametric study of a matrix-inclusion-void microstructure is conducted. The results indicate that, within the range studied, a larger void between halves of the inclusion increases fatigue life, while larger inclusion diameter reduces fatigue life.
Peng, Bo; Kowalski, Karol
2017-01-25
In this paper, we apply reverse Cuthill-McKee (RCM) algorithm to transform two-electron integral tensors to their block diagonal forms. By further applying Cholesky decomposition (CD) on each of the diagonal blocks, we are able to represent the high-dimensional two-electron integral tensors in terms of permutation matrices and low-rank Cholesky vectors. This representation facilitates low-rank factorizations of high-dimensional tensor contractions in post-Hartree-Fock calculations. Finally, we discuss the second-order Møller-Plesset (MP2) method and the linear coupled-cluster model with doubles (L-CCD) as examples to demonstrate the efficiency of this technique in representing the two-electron integrals in a compact form.
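The two-stage idea can be sketched on a generic sparse symmetric matrix (standing in for the matricized two-electron integral tensor): RCM reordering with SciPy, then a Cholesky factorization of a diagonal block. The positive-definite shift below is a toy device, not part of the paper's method.

```python
# Sketch of the two-stage idea on a generic sparse symmetric matrix: RCM
# reordering to concentrate structure near the diagonal, then a Cholesky
# factorization of a diagonal block.
import numpy as np
from scipy.sparse import random as sprandom
from scipy.sparse.csgraph import reverse_cuthill_mckee

S = sprandom(60, 60, density=0.05, random_state=8)
A = (S + S.T).tocsr()                          # symmetric sparsity pattern

perm = reverse_cuthill_mckee(A, symmetric_mode=True)
B = A[perm][:, perm].toarray()                 # RCM-reordered matrix

blk = B[:20, :20]                              # a diagonal block
blk = blk + 20 * np.eye(20)                    # shift to make it positive definite (toy)
L = np.linalg.cholesky(blk)                    # Cholesky factor of the block
print(np.allclose(L @ L.T, blk))
```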
Defect inspection using a time-domain mode decomposition technique
NASA Astrophysics Data System (ADS)
Zhu, Jinlong; Goddard, Lynford L.
2018-03-01
In this paper, we propose a technique called time-varying frequency scanning (TVFS) to meet the challenges in killer-defect inspection. The proposed technique enables the dynamic monitoring of defects by checking for hopping in the instantaneous frequency data, and the classification of defect types by comparing the difference in frequencies. The TVFS technique utilizes the bidimensional empirical mode decomposition (BEMD) method to separate the defect information from the sea of system errors. This significantly improves the signal-to-noise ratio (SNR) and, moreover, potentially enables reference-free defect inspection.
a Hybrid Method in Vegetation Height Estimation Using Polinsar Images of Campaign Biosar
NASA Astrophysics Data System (ADS)
Dehnavi, S.; Maghsoudi, Y.
2015-12-01
Recently, there has been plenty of research on the retrieval of forest height from PolInSAR data. This paper aims at evaluating a hybrid method for vegetation height estimation based on L-band multi-polarized airborne SAR images. The SAR data used in this paper were collected by the airborne E-SAR system. The objective of this research is firstly to describe each interferometric cross-correlation as a sum of contributions corresponding to single-bounce, double-bounce and volume scattering processes. Then, an ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques) algorithm is implemented to determine the interferometric phase of each local scatterer (ground and canopy). Secondly, the canopy height is estimated by the phase differencing method, according to the RVoG (Random Volume over Ground) concept. The applied model-based decomposition method is distinctive in that it is not limited to a specific type of vegetation, unlike previous decomposition techniques. In fact, the use of a generalized probability density function based on the nth power of a cosine-squared function, characterized by two parameters, makes the method applicable to different vegetation types. Experimental results show the efficiency of the approach for vegetation height estimation in the test site.
A leakage-free resonance sparse decomposition technique for bearing fault detection in gearboxes
NASA Astrophysics Data System (ADS)
Osman, Shazali; Wang, Wilson
2018-03-01
Most rotating machinery deficiencies are related to defects in rolling element bearings. Reliable bearing fault detection still remains a challenging task, especially for bearings in gearboxes, as bearing-defect-related features are nonstationary and modulated by gear mesh vibration. A new leakage-free resonance sparse decomposition (LRSD) technique is proposed in this paper for early bearing fault detection in gearboxes. In the proposed LRSD technique, a leakage-free filter is suggested to remove strong gear mesh and shaft running signatures. A kurtosis and cosine distance measure is suggested to select an appropriate redundancy r and quality factor Q. The signal residual is processed by signal sparse decomposition for highpass and lowpass resonance analysis to extract representative features for bearing fault detection. The effectiveness of the proposed technique is verified by a succession of experimental tests corresponding to different gearbox and bearing conditions.
Mohn, Joachim; Gutjahr, Wilhelm; Toyoda, Sakae; Harris, Eliza; Ibraim, Erkan; Geilmann, Heike; Schleppi, Patrick; Kuhn, Thomas; Lehmann, Moritz F; Decock, Charlotte; Werner, Roland A; Yoshida, Naohiro; Brand, Willi A
2016-09-08
In the last few years, the study of N2O site-specific nitrogen isotope composition has been established as a powerful technique to disentangle N2O emission pathways. This trend has been accelerated by significant analytical progress in the field of isotope-ratio mass spectrometry (IRMS) and, more recently, quantum cascade laser absorption spectroscopy (QCLAS). Methods: The ammonium nitrate (NH4NO3) decomposition technique provides a strategy to scale the 15N site-specific (SP ≡ δ15Nα - δ15Nβ) and bulk (δ15Nbulk = (δ15Nα + δ15Nβ)/2) isotopic composition of N2O against the international standard for the 15N/14N isotope ratio (AIR-N2). Within the current project, 15N fractionation effects during thermal decomposition of NH4NO3 on the N2O site preference were studied using static and dynamic decomposition techniques. The validity of the NH4NO3 decomposition technique to link NH4+ and NO3- moiety-specific δ15N analysis by IRMS to the site-specific nitrogen isotopic composition of N2O was confirmed. However, the accuracy of this approach for the calibration of δ15Nα and δ15Nβ values was found to be limited by non-quantitative NH4NO3 decomposition in combination with substantially different isotope enrichment factors for the conversion of the NO3- or NH4+ nitrogen atom into the α or β position of the N2O molecule. The study reveals that the completeness and reproducibility of the NH4NO3 decomposition reaction currently confine the anchoring of N2O site-specific isotopic composition to the international isotope ratio scale AIR-N2. The authors suggest establishing a set of N2O isotope reference materials with appropriate site-specific isotopic composition, as community standards, to improve inter-laboratory compatibility.
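The two defining relations quoted above translate directly into code; this trivial helper (with hypothetical per-mil input values) is included only to fix the sign and averaging conventions:

    def n2o_site_values(d15N_alpha, d15N_beta):
        # SP = d15N(alpha) - d15N(beta); bulk = arithmetic mean of the two
        sp = d15N_alpha - d15N_beta
        bulk = 0.5 * (d15N_alpha + d15N_beta)
        return sp, bulk

    sp, bulk = n2o_site_values(15.2, -3.4)   # hypothetical per-mil values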
JPEG2000-coded image error concealment exploiting convex sets projections.
Atzori, Luigi; Ginesu, Giaime; Raccis, Alessio
2005-04-01
Transmission errors in JPEG2000 can be grouped into three main classes, depending on the affected area: LL, high frequencies at the lower decomposition levels, and high frequencies at the higher decomposition levels. The first class of errors is the most annoying but can be concealed by exploiting the signal's spatial correlation, as in a number of techniques proposed in the past; the second is less annoying but more difficult to address; the latter is often imperceptible. In this paper, we address the problem of concealing the second class of errors when high bit-planes are damaged, by proposing a new approach based on the theory of projections onto convex sets. Accordingly, the error effects are masked by iteratively applying two procedures: low-pass (LP) filtering in the spatial domain and restoration of the uncorrupted wavelet coefficients in the transform domain. It was observed that uniform LP filtering introduced undesired side effects that offset the advantages. This problem has been overcome by applying an adaptive solution, which exploits an edge map to choose the optimal filter mask size. Simulation results demonstrated the efficiency of the proposed approach.
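A minimal sketch of the two alternating projections (using PyWavelets and SciPy; the single decomposition level, the db2 wavelet, the fixed 3 x 3 mask, the iteration count, and an even-sized image are simplifying assumptions; the paper adapts the mask size via an edge map):

    import numpy as np
    import pywt
    from scipy.ndimage import uniform_filter

    def pocs_conceal(img, damaged, wavelet='db2', iters=10):
        # img: decoded image whose high-frequency coefficients are partly bad
        # damaged: boolean mask (detail-subband shape) flagging bad coefficients
        cA0, (cH0, cV0, cD0) = pywt.dwt2(img, wavelet)
        ok = ~damaged
        est = img.astype(float)
        for _ in range(iters):
            est = uniform_filter(est, size=3)           # projection 1: low-pass
            _, (cH, cV, cD) = pywt.dwt2(est, wavelet)
            cH[ok], cV[ok], cD[ok] = cH0[ok], cV0[ok], cD0[ok]  # projection 2:
            est = pywt.idwt2((cA0, (cH, cV, cD)), wavelet)      # keep good data
        return est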
Application of decomposition techniques to the preliminary design of a transport aircraft
NASA Technical Reports Server (NTRS)
Rogan, J. E.; Mcelveen, R. P.; Kolb, M. A.
1986-01-01
A multifaceted decomposition of a nonlinear constrained optimization problem describing the preliminary design process for a transport aircraft has been made. Flight dynamics, flexible aircraft loads and deformations, and preliminary structural design subproblems appear prominently in the decomposition. The use of design process decomposition for scheduling design projects, a new system integration approach to configuration control, and the application of object-centered programming to a new generation of design tools are discussed.
1983-07-01
... the decomposition reaction (Leider, 1981; Kageyama, 1973; Wolfrom, 1956); 2) hydrolysis of linkages between glucose units (Urbanski, 1964); 3) ... dehydration), 2) acceleration period (to 50 percent decomposition), 3) first-order reaction rate period. The products of thermal decomposition of ...
o ... simple mechanism to clean an entire building at once.
o Depending on the contaminant, thermal decomposition and/or hydrolysis may occur.
o May be ...
ERIC Educational Resources Information Center
Man, Yiu-Kwong
2012-01-01
Partial fraction decomposition is a useful technique often taught at senior secondary or undergraduate levels to handle integrations, inverse Laplace transforms or linear ordinary differential equations, etc. In recent years, an improved Heaviside's approach to partial fraction decomposition was introduced and developed by the author. An important…
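For readers who want to experiment, a computer algebra system reproduces such a decomposition in one line (a generic textbook example, not the author's improved Heaviside approach):

    from sympy import apart, symbols

    x = symbols('x')
    # (3x + 5)/((x - 1)(x - 2))  ->  -8/(x - 1) + 11/(x - 2)
    print(apart((3*x + 5) / ((x - 1)*(x - 2))))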
NASA Astrophysics Data System (ADS)
Cui, Ximing; Wang, Zhe; Kang, Yihua; Pu, Haiming; Deng, Zhiyang
2018-05-01
Singular value decomposition (SVD) has been proven to be an effective de-noising tool for flaw echo signal feature detection in ultrasonic non-destructive evaluation (NDE). However, the uncertainty in the arbitrary manner of selecting an effective singular value weakens the robustness of this technique. Improper selection of effective singular values will lead to bad performance of SVD de-noising. What is more, the computational complexity of SVD is too large for it to be applied in real-time applications. In this paper, to eliminate the uncertainty in SVD de-noising, a novel flaw indicator, named the maximum singular value indicator (MSI), based on short-time SVD (STSVD), is proposed for flaw feature detection from a measured signal in ultrasonic NDE. In this technique, the measured signal is first truncated into overlapping short-time data segments to localize the feature information of a transient flaw echo signal, and then the MSI can be obtained from the SVD of each short-time data segment. Research shows that this indicator can clearly indicate the location of ultrasonic flaw signals, and the computational complexity of this STSVD-based indicator is significantly reduced with the algorithm proposed in this paper. Both simulation and experiments show that this technique is very efficient for real-time application in flaw detection from noisy data.
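The indicator itself is easy to prototype; in this NumPy sketch the window length, hop size, and the reshaping of each segment into a matrix (window length divisible by the row count) are assumptions made for illustration, since the paper's exact segment-to-matrix construction may differ:

    import numpy as np

    def msi(signal, win=64, hop=8, rows=8):
        # Maximum singular value of each overlapping short-time segment
        out = []
        for start in range(0, len(signal) - win + 1, hop):
            seg = signal[start:start + win].reshape(rows, -1)
            out.append(np.linalg.svd(seg, compute_uv=False)[0])
        return np.asarray(out)   # peaks indicate candidate flaw locations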
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watanabe, K.; Uetsuka, H.; Ohnuma, H.
The infrared chemiluminescence technique has been applied to the selective formation of syngas (CO + H2) from the oxidation of small alkanes on Pt, the decomposition and oxidation of CH3OH and HCOOH on Pt and Ni, and CO oxidation on Pd(111) and Pd(110). The different internal (vibrational and rotational) energy states of the CO and CO2 products have been observed, which reflect the differences in the dynamics of these reactions.
Numerical computation of linear instability of detonations
NASA Astrophysics Data System (ADS)
Kabanov, Dmitry; Kasimov, Aslan
2017-11-01
We propose a method to study linear stability of detonations by direct numerical computation. The linearized governing equations together with the shock-evolution equation are solved in the shock-attached frame using a high-resolution numerical algorithm. The computed results are processed by the Dynamic Mode Decomposition technique to generate dispersion relations. The method is applied to the reactive Euler equations with simple-depletion chemistry as well as more complex multistep chemistry. The results are compared with those known from normal-mode analysis. We acknowledge financial support from King Abdullah University of Science and Technology.
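The post-processing step named above, Dynamic Mode Decomposition, has a standard formulation that can be sketched compactly (exact DMD in NumPy under the usual snapshot-matrix conventions; this is a generic implementation, not the authors' code):

    import numpy as np

    def dmd_eigs(X, dt):
        # X: (n_states x n_snapshots) matrix of linearized-solution snapshots
        X1, X2 = X[:, :-1], X[:, 1:]
        U, s, Vh = np.linalg.svd(X1, full_matrices=False)
        Atilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
        lam = np.linalg.eigvals(Atilde)
        omega = np.log(lam.astype(complex)) / dt   # continuous-time eigenvalues
        return omega.real, omega.imag              # growth rates, frequencies

Scanning the frequencies against the growth rates over a family of runs is what assembles a dispersion relation for comparison with normal-mode analysis.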
Three geographic decomposition approaches in transportation network analysis
DOT National Transportation Integrated Search
1980-03-01
This document describes the results of research into the application of geographic decomposition techniques to practical transportation network problems. Three approaches are described for the solution of the traffic assignment problem. One approach ...
Application of Decomposition to Transportation Network Analysis
DOT National Transportation Integrated Search
1976-10-01
This document reports preliminary results of five potential applications of the decomposition techniques from mathematical programming to transportation network problems. The five application areas are (1) the traffic assignment problem with fixed de...
Transportation Network Analysis and Decomposition Methods
DOT National Transportation Integrated Search
1978-03-01
The report outlines research in transportation network analysis using decomposition techniques as a basis for problem solutions. Two transportation network problems were considered in detail: a freight network flow problem and a scheduling problem fo...
NASA Astrophysics Data System (ADS)
Chang, Jianhua; Zhu, Lingyan; Li, Hongxu; Xu, Fan; Liu, Binggang; Yang, Zhenbo
2018-01-01
Empirical mode decomposition (EMD) is widely used to analyze non-linear and non-stationary signals for noise reduction. In this study, a novel EMD-based denoising method, referred to as EMD with soft thresholding and roughness penalty (EMD-STRP), is proposed for Lidar signal denoising. With the proposed method, the relevant and irrelevant intrinsic mode functions are first distinguished via a correlation coefficient. Then, the soft thresholding technique is applied to the irrelevant modes, and the roughness penalty technique is applied to the relevant modes to extract as much information as possible. The effectiveness of the proposed method was evaluated using three typical signals contaminated by white Gaussian noise. The denoising performance was then compared to the denoising capabilities of other techniques, such as correlation-based EMD partial reconstruction, correlation-based EMD hard thresholding, and wavelet transform. The use of EMD-STRP on the measured Lidar signal resulted in the noise being efficiently suppressed, with an improved signal-to-noise ratio of 22.25 dB and an extended detection range of 11 km.
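The selection-plus-thresholding step can be sketched as follows; the decomposition itself is assumed to be done elsewhere (imfs precomputed by any EMD implementation), the correlation cutoff and threshold values are illustrative, and the roughness-penalty smoothing of relevant modes is omitted for brevity:

    import numpy as np
    import pywt

    def strp_like_denoise(signal, imfs, rho=0.3, thr=0.1):
        rec = np.zeros_like(signal, dtype=float)
        for imf in imfs:
            c = np.corrcoef(signal, imf)[0, 1]    # relevance of this mode
            if abs(c) < rho:                      # irrelevant: soft-threshold
                rec += pywt.threshold(imf, thr, mode='soft')
            else:                                 # relevant: kept as-is here
                rec += imf                        # (smoothing omitted)
        return rec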
Interface conditions for domain decomposition with radical grid refinement
NASA Technical Reports Server (NTRS)
Scroggs, Jeffrey S.
1991-01-01
Interface conditions for coupling the domains in a physically motivated domain decomposition method are discussed. The domain decomposition is based on an asymptotic-induced method for the numerical solution of hyperbolic conservation laws with small viscosity. The method consists of multiple stages. The first stage is to obtain a first approximation using a first-order method, such as the Godunov scheme. Subsequent stages of the method involve solving internal-layer problems via a domain decomposition. The method is derived and justified via singular perturbation techniques.
Application of decomposition techniques to the preliminary design of a transport aircraft
NASA Technical Reports Server (NTRS)
Rogan, J. E.; Kolb, M. A.
1987-01-01
A nonlinear constrained optimization problem describing the preliminary design process for a transport aircraft has been formulated. A multifaceted decomposition of the optimization problem has been made. Flight dynamics, flexible aircraft loads and deformations, and preliminary structural design subproblems appear prominently in the decomposition. The use of design process decomposition for scheduling design projects, a new system integration approach to configuration control, and the application of object-centered programming to a new generation of design tools are discussed.
Cicone, A; Liu, J; Zhou, H
2016-04-13
Chemicals released in the air can be extremely dangerous for human beings and the environment. Hyperspectral images can be used to identify chemical plumes; however, the task can be extremely challenging. Assuming we know a priori that some chemical plume, with a known frequency spectrum, has been photographed using a hyperspectral sensor, we can use standard techniques such as the so-called matched filter or adaptive cosine estimator, plus a properly chosen threshold value, to identify the position of the chemical plume. However, due to noise and inadequate sensing, the accurate identification of chemical pixels is not easy even in this apparently simple situation. In this paper, we present a post-processing tool that, in a completely adaptive and data-driven fashion, allows us to improve the performance of any classification method in identifying the boundaries of a plume. This is done using the multidimensional iterative filtering (MIF) algorithm (Cicone et al. 2014 (http://arxiv.org/abs/1411.6051); Cicone & Zhou 2015 (http://arxiv.org/abs/1507.07173)), which is a non-stationary signal decomposition method like the pioneering empirical mode decomposition method (Huang et al. 1998 Proc. R. Soc. Lond. A 454, 903. (doi:10.1098/rspa.1998.0193)). Moreover, based on the MIF technique, we also propose a pre-processing method that allows us to decorrelate and mean-centre a hyperspectral dataset. The cosine similarity measure, which often fails in practice, appears to become a successful and outperforming classifier when equipped with such a pre-processing method. We show some examples of the proposed methods when applied to real-life problems.
NASA Astrophysics Data System (ADS)
Aied, H.; González, A.; Cantero, D.
2016-01-01
The growth of heavy traffic together with aggressive environmental loads poses a threat to the safety of an aging bridge stock. Often, damage is only detected via visual inspection at a point when repair costs can be quite significant. Ideally, bridge managers would want to identify a stiffness change as soon as possible, i.e., as it is occurring, to plan for prompt measures before reaching a prohibitive cost. Recent developments in signal processing techniques such as wavelet analysis and empirical mode decomposition (EMD) have aimed to address this need by identifying a stiffness change from a localised feature in the structural response to traffic. However, the effectiveness of these techniques is limited by the roughness of the road profile, the vehicle speed and the noise level. In this paper, ensemble empirical mode decomposition (EEMD) is applied for the first time to the acceleration response of a bridge model to a moving load with the purpose of capturing sudden stiffness changes. EEMD is more adaptive and appears to be better suited to non-linear signals than wavelets, and it reduces the mode mixing problem present in EMD. EEMD is tested in a variety of theoretical 3D vehicle-bridge interaction scenarios. Stiffness changes are successfully identified, even for small affected regions, relatively poor profiles, high vehicle speeds and significant noise. The latter is due to the ability of EEMD to separate high frequency components associated with sudden stiffness changes from other frequency components associated with the vehicle-bridge interaction system.
NASA Astrophysics Data System (ADS)
Sun, Benyuan; Yue, Shihong; Cui, Ziqiang; Wang, Huaxiang
2015-12-01
As an advanced measurement technique that is non-radiative, non-intrusive, rapid in response, and low in cost, the electrical tomography (ET) technique has developed rapidly in recent decades. The ET imaging algorithm plays an important role in the ET imaging process. Linear back projection (LBP) is the most widely used ET algorithm due to its advantages of a dynamic imaging process, real-time response, and easy realization. But the LBP algorithm is of low spatial resolution due to the natural 'soft field' effect and 'ill-posed solution' problems; thus its applicable range is greatly limited. In this paper, an original data decomposition method is proposed, in which every ET measurement is decomposed into two independent new data based on the positive and negative sensing areas of the measurement. Consequently, the total number of measurements is extended to twice the number of the original data, thus effectively reducing the 'ill-posed solution' problem. On the other hand, an index to measure the 'soft field' effect is proposed. The index shows that the decomposed data can distinguish between the different contributions of various units (pixels) for any ET measurement, and can efficiently reduce the 'soft field' effect in the ET imaging process. In light of the data decomposition method, a new linear back projection algorithm is proposed to improve the spatial resolution of the ET image. A series of simulations and experiments validate the proposed algorithm in terms of real-time performance and improved spatial resolution.
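A sketch of classical LBP plus the sign-based splitting of the sensitivity map described above (NumPy; the exact normalization used in the paper is not reproduced, and every pixel is assumed to have nonzero total sensitivity):

    import numpy as np

    def lbp(S, b):
        # S: (n_meas x n_pix) sensitivity matrix, b: normalized measurements
        return (S.T @ b) / (S.T @ np.ones_like(b))   # grey level per pixel

    def split_sensing_areas(S):
        # Positive and negative sensing areas of each measurement; each
        # datum can then contribute two rows, doubling the data set
        return np.clip(S, 0.0, None), np.clip(S, None, 0.0)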
Integrated ensemble noise-reconstructed empirical mode decomposition for mechanical fault detection
NASA Astrophysics Data System (ADS)
Yuan, Jing; Ji, Feng; Gao, Yuan; Zhu, Jun; Wei, Chenjun; Zhou, Yu
2018-05-01
A new branch of fault detection utilizes noise, such as enhancing, adding or estimating it, so as to improve the signal-to-noise ratio (SNR) and extract the fault signatures. Here, ensemble noise-reconstructed empirical mode decomposition (ENEMD) is a novel noise utilization method to ameliorate mode mixing and denoise the intrinsic mode functions (IMFs). Despite the possibility of superior performance in detecting weak and multiple faults, the method still suffers from two major problems: a user-defined parameter and weak capability in high-SNR cases. Hence, integrated ensemble noise-reconstructed empirical mode decomposition is proposed to overcome these drawbacks, improved by two noise estimation techniques for different SNRs as well as a noise estimation strategy. Independent of any artificial setup, noise estimation by minimax thresholding is improved for the low-SNR case, which is especially effective for signature enhancement. For approximating weak noise precisely, noise estimation by local reconfiguration using singular value decomposition (SVD) is proposed for the high-SNR case, which is particularly powerful for reducing mode mixing. Here, the sliding window for projecting the phase space is optimally designed by correlation minimization, and the appropriate singular order for the local reconfiguration used to estimate the noise is determined by the inflection point of the increasing trend of normalized singular entropy. Furthermore, the noise estimation strategy, i.e. the selection approach for the two estimation techniques along with the critical case, is developed and discussed for different SNRs by means of the possible noise-only IMF family. The method is validated by repeatable simulations to demonstrate its overall performance and especially to confirm its noise estimation capability. Finally, the method is applied to detect a local wear fault in a dual-axis stabilized platform and a gear crack in an operating electric locomotive to verify its effectiveness and feasibility.
On the decomposition of synchronous state machines using sequence invariant state machines
NASA Technical Reports Server (NTRS)
Hebbalalu, K.; Whitaker, S.; Cameron, K.
1992-01-01
This paper presents a few techniques for the decomposition of synchronous state machines of medium to large sizes into smaller component machines. The methods are based on the nature of the transitions and sequences of states in the machine and on the number and variety of inputs to the machine. The results of the decomposition, and of using the Sequence Invariant State Machine (SISM) design technique for generating the component machines, include much easier and quicker design and implementation processes. Furthermore, there is increased flexibility in making modifications to the original design, leading to negligible re-design time.
Muravyev, Nikita V; Monogarov, Konstantin A; Asachenko, Andrey F; Nechaev, Mikhail S; Ananyev, Ivan V; Fomenkov, Igor V; Kiselev, Vitaly G; Pivkina, Alla N
2016-12-21
Thermal decomposition of a novel promising high-performance explosive dihydroxylammonium 5,5'-bistetrazole-1,1'-diolate (TKX-50) was studied using a number of thermal analysis techniques (thermogravimetry, differential scanning calorimetry, and accelerating rate calorimetry, ARC). To obtain more comprehensive insight into the kinetics and mechanism of TKX-50 decomposition, a variety of complementary thermoanalytical experiments were performed under various conditions. Non-isothermal and isothermal kinetics were obtained at both atmospheric and low (up to 0.3 Torr) pressures. The gas products of thermolysis were detected in situ using IR spectroscopy, and the structure of solid-state decomposition products was determined by X-ray diffraction and scanning electron microscopy. Diammonium 5,5'-bistetrazole-1,1'-diolate (ABTOX) was directly identified to be the most important intermediate of the decomposition process. The important role of bistetrazole diol (BTO) in the mechanism of TKX-50 decomposition was also rationalized by thermolysis experiments with mixtures of TKX-50 and BTO. Several widely used thermoanalytical data processing techniques (Kissinger, isoconversional, formal kinetic approaches, etc.) were independently benchmarked against the ARC data, which are more germane to the real storage and application conditions of energetic materials. Our study revealed that none of the Arrhenius parameters reported before can properly describe the complex two-stage decomposition process of TKX-50. In contrast, we showed the superior performance of the isoconversional methods combined with isothermal measurements, which yielded the most reliable kinetic parameters of TKX-50 thermolysis. In contrast with the existing reports, the thermal stability of TKX-50 was determined in the ARC experiments to be lower than that of hexogen, but close to that of hexanitrohexaazaisowurtzitane (CL-20).
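Of the benchmarked data-processing techniques, the Kissinger method is the simplest to state in code: the slope of ln(beta/Tp^2) against 1/Tp gives -Ea/R. The heating rates and peak temperatures below are illustrative placeholders, not the study's data:

    import numpy as np

    def kissinger_ea(beta, Tp):
        # beta: heating rates (K/min); Tp: DSC peak temperatures (K)
        beta, Tp = np.asarray(beta, float), np.asarray(Tp, float)
        slope, _ = np.polyfit(1.0 / Tp, np.log(beta / Tp**2), 1)
        return -slope * 8.314            # activation energy in J/mol

    Ea = kissinger_ea([2, 5, 10, 20], [480.0, 489.0, 497.0, 506.0])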
ERIC Educational Resources Information Center
Schizas, Dimitrios; Katrana, Evagelia; Stamou, George
2013-01-01
In the present study we used the technique of word association tests to assess students' cognitive structures during the learning period. In particular, we tried to investigate what students living near a protected area in Greece (Dadia forest) knew about the phenomenon of decomposition. Decomposition was chosen as a stimulus word because it…
Low-dimensional modelling of a transient cylinder wake using double proper orthogonal decomposition
NASA Astrophysics Data System (ADS)
Siegel, Stefan G.; Seidel, Jürgen; Fagley, Casey; Luchtenburg, D. M.; Cohen, Kelly; McLaughlin, Thomas
For the systematic development of feedback flow controllers, a numerical model that captures the dynamic behaviour of the flow field to be controlled is required. This poses a particular challenge for flow fields where the dynamic behaviour is nonlinear, and the governing equations cannot easily be solved in closed form. This has led to many versions of low-dimensional modelling techniques, which we extend in this work to represent better the impact of actuation on the flow. For the benchmark problem of a circular cylinder wake in the laminar regime, we introduce a novel extension to the proper orthogonal decomposition (POD) procedure that facilitates mode construction from transient data sets. We demonstrate the performance of this new decomposition by applying it to a data set from the development of the limit cycle oscillation of a circular cylinder wake simulation as well as an ensemble of transient forced simulation results. The modes obtained from this decomposition, which we refer to as the double POD (DPOD) method, correctly track the changes of the spatial modes both during the evolution of the limit cycle and when forcing is applied by transverse translation of the cylinder. The mode amplitudes, which are obtained by projecting the original data sets onto the truncated DPOD modes, can be used to construct a dynamic mathematical model of the wake that accurately predicts the wake flow dynamics within the lock-in region at low forcing amplitudes. This low-dimensional model, derived using nonlinear artificial neural network based system identification methods, is robust and accurate and can be used to simulate the dynamic behaviour of the wake flow. We demonstrate this ability not just for unforced and open-loop forced data, but also for a feedback-controlled simulation that leads to a 90% reduction in lift fluctuations. This indicates the possibility of constructing accurate dynamic low-dimensional models for feedback control by using unforced and transient forced data only.
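The baseline POD step that DPOD extends can be written in a few lines (snapshot POD via the SVD in NumPy; the DPOD extension of building separate mode sets from binned transient data, and the authors' neural-network system identification, are not reproduced here):

    import numpy as np

    def pod_modes(snapshots, r):
        # snapshots: (n_points x n_times) data matrix
        fluct = snapshots - snapshots.mean(axis=1, keepdims=True)
        U, s, Vh = np.linalg.svd(fluct, full_matrices=False)
        modes = U[:, :r]                    # spatial modes
        amps = np.diag(s[:r]) @ Vh[:r, :]   # temporal amplitudes (projection)
        return modes, amps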
NASA Astrophysics Data System (ADS)
Herrera, I.; Herrera, G. S.
2015-12-01
Most geophysical systems are macroscopic physical systems. The behavior of such systems is predicted by means of computational models whose basic constituents are partial differential equations (PDEs) [1]. Due to the enormous size of the discretized versions of such PDEs, it is necessary to apply highly parallelized supercomputers. For these, at present, the most efficient software is based on non-overlapping domain decomposition methods (DDM). However, a limiting feature of present state-of-the-art techniques is the kind of discretizations used in them. Recently, I. Herrera and co-workers, using 'non-overlapping discretizations', have produced the DVS software, which overcomes this limitation [2]. The DVS software can be applied to a great variety of geophysical problems and achieves very high parallel efficiencies (90% or so [3]). It is therefore very suitable for effectively exploiting the most advanced parallel supercomputers available at present. In a parallel talk at this AGU Fall Meeting, Graciela Herrera Z. will present how this software is being applied to advance MODFLOW. Key words: Parallel Software for Geophysics, High Performance Computing, HPC, Parallel Computing, Domain Decomposition Methods (DDM). References: [1] Herrera, I. and Pinder, G. F., "Mathematical Modelling in Science and Engineering: An Axiomatic Approach", John Wiley, 243 p., 2012. [2] Herrera, I., de la Cruz, L. M. and Rosas-Medina, A., "Non-Overlapping Discretization Methods for Partial Differential Equations", Numer. Meth. Part. D. E., 30: 1427-1454, 2014, DOI 10.1002/num.21852. (Open source) [3] Herrera, I. and Contreras, Iván, "An Innovative Tool for Effectively Applying Highly Parallelized Software to Problems of Elasticity", Geofísica Internacional, 2015 (in press).
NASA Astrophysics Data System (ADS)
Svintsitskiy, Dmitry A.; Kardash, Tatyana Yu.; Slavinskaya, Elena M.; Stonkus, Olga A.; Koscheev, Sergei V.; Boronin, Andrei I.
2018-01-01
The mixed silver-copper oxide Ag2Cu2O3 with a paramelaconite crystal structure is a promising material for catalytic applications. The as-prepared sample of Ag2Cu2O3 consisted of brick-like particles extended along the [001] direction. A combination of physicochemical techniques such as TEM, XPS and XRD was applied to investigate the structural features of this mixed silver-copper oxide. The thermal stability of Ag2Cu2O3 was investigated using in situ XRD under different reaction conditions, including a catalytic CO + O2 mixture. The first step of Ag2Cu2O3 decomposition was accompanied by the appearance of ensembles consisting of silver nanoparticles with sizes of 5-15 nm. Silver nanoparticles were strongly oriented to each other and to the surface of the initial Ag2Cu2O3 bricks. Based on the XRD data, it was shown that the release of silver occurred along the a and b axes of the paramelaconite structure. Partial decomposition of Ag2Cu2O3 accompanied by the formation of silver nanoparticles was observed during prolonged air storage under ambient conditions. The high reactivity is discussed as a reason for spontaneous decomposition during Ag2Cu2O3 storage. The full decomposition of the mixed oxide into metallic silver and copper (II) oxide took place at temperatures higher than 300 °C regardless of the nature of the reaction medium (helium, air, CO + O2). Catalytic properties of partially and fully decomposed samples of mixed silver-copper oxide were measured in low-temperature CO oxidation and C2H4 epoxidation reactions.
Wavelet-bounded empirical mode decomposition for measured time series analysis
NASA Astrophysics Data System (ADS)
Moore, Keegan J.; Kurt, Mehmet; Eriten, Melih; McFarland, D. Michael; Bergman, Lawrence A.; Vakakis, Alexander F.
2018-01-01
Empirical mode decomposition (EMD) is a powerful technique for separating the transient responses of nonlinear and nonstationary systems into finite sets of nearly orthogonal components, called intrinsic mode functions (IMFs), which represent the dynamics on different characteristic time scales. However, a deficiency of EMD is the mixing of two or more components in a single IMF, which can drastically affect the physical meaning of the empirical decomposition results. In this paper, we present a new approach based on EMD, designated as wavelet-bounded empirical mode decomposition (WBEMD), which is a closed-loop, optimization-based solution to the problem of mode mixing. The optimization routine relies on maximizing the isolation of an IMF around a characteristic frequency. This isolation is measured by fitting a bounding function around the IMF in the frequency domain and computing the area under this function. It follows that a large (small) area corresponds to a poorly (well) separated IMF. An optimization routine is developed based on this result, with the objective of minimizing the bounding-function area and with the masking signal parameters serving as free parameters, such that a well-separated IMF is extracted. As examples of the application of WBEMD, we apply the proposed method first to a stationary, two-component signal and then to the numerically simulated response of a cantilever beam with an essentially nonlinear end attachment. We find that WBEMD vastly improves upon EMD and that the extracted sets of IMFs provide insight into the underlying physics of the response of each system.
Efficient morse decompositions of vector fields.
Chen, Guoning; Mischaikow, Konstantin; Laramee, Robert S; Zhang, Eugene
2008-01-01
Existing topology-based vector field analysis techniques rely on the ability to extract individual trajectories such as fixed points, periodic orbits, and separatrices, which are sensitive to noise and to errors introduced by simulation and interpolation. This can make such vector field analysis unsuitable for rigorous interpretations. We advocate the use of Morse decompositions, which are robust with respect to perturbations, to encode the topological structures of a vector field in the form of a directed graph, called a Morse connection graph (MCG). While an MCG exists for every vector field, it need not be unique. Previous techniques for computing MCGs, while fast, are overly conservative and usually result in MCGs that are too coarse to be useful for applications. To address this issue, we present a new technique for performing Morse decomposition based on the concept of tau-maps, which typically provides finer MCGs than existing techniques. Furthermore, the choice of tau provides a natural tradeoff between the fineness of the MCGs and the computational cost. We provide efficient implementations of Morse decomposition based on tau-maps, which include the use of forward and backward mapping techniques and an adaptive approach to constructing better approximations of the images of the triangles in the meshes used for simulation. Furthermore, we propose the use of spatial tau-maps in addition to the original temporal tau-maps. These techniques provide additional trade-offs between the quality of the MCGs and the speed of computation. We demonstrate the utility of our technique with various examples in the plane and on surfaces, including engine simulation data sets.
Spatial, temporal, and hybrid decompositions for large-scale vehicle routing with time windows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bent, Russell W
This paper studies the use of decomposition techniques to quickly find high-quality solutions to large-scale vehicle routing problems with time windows. It considers an adaptive decomposition scheme which iteratively decouples a routing problem based on the current solution. Earlier work considered vehicle-based decompositions that partition the vehicles across the subproblems. The subproblems can then be optimized independently and merged easily. This paper argues that vehicle-based decompositions, although very effective on various problem classes, also have limitations. In particular, they do not accommodate temporal decompositions and may produce spatial decompositions that are not focused enough. This paper then proposes customer-based decompositions, which generalize vehicle-based decouplings and allow for focused spatial and temporal decompositions. Experimental results on class R2 of the extended Solomon benchmarks demonstrate the benefits of the customer-based adaptive decomposition scheme and its spatial, temporal, and hybrid instantiations. In particular, they show that customer-based decompositions bring significant benefits over large neighborhood search, in contrast to vehicle-based decompositions.
Pointwise Partial Information Decomposition Using the Specificity and Ambiguity Lattices
NASA Astrophysics Data System (ADS)
Finn, Conor; Lizier, Joseph
2018-04-01
What are the distinct ways in which a set of predictor variables can provide information about a target variable? When does a variable provide unique information, when do variables share redundant information, and when do variables combine synergistically to provide complementary information? The redundancy lattice from the partial information decomposition of Williams and Beer provided a promising glimpse at the answer to these questions. However, this structure was constructed using a much criticised measure of redundant information, and despite sustained research, no completely satisfactory replacement measure has been proposed. In this paper, we take a different approach, applying the axiomatic derivation of the redundancy lattice to a single realisation from a set of discrete variables. To overcome the difficulty associated with signed pointwise mutual information, we apply this decomposition separately to the unsigned entropic components of pointwise mutual information which we refer to as the specificity and ambiguity. This yields a separate redundancy lattice for each component. Then based upon an operational interpretation of redundancy, we define measures of redundant specificity and ambiguity enabling us to evaluate the partial information atoms in each lattice. These atoms can be recombined to yield the sought-after multivariate information decomposition. We apply this framework to canonical examples from the literature and discuss the results and the various properties of the decomposition. In particular, the pointwise decomposition using specificity and ambiguity satisfies a chain rule over target variables, which provides new insights into the so-called two-bit-copy example.
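The basic split that the lattice construction builds on is simply the two entropic parts of the pointwise mutual information, i(s;t) = h(s) - h(s|t); a small NumPy sketch (the joint distribution is an illustrative copy-type example, and the full lattice evaluation is of course not reproduced):

    import numpy as np

    def pointwise_components(p_st, s, t):
        # p_st: joint distribution table; s, t: realised indices
        p_s = p_st.sum(axis=1)[s]
        p_t = p_st.sum(axis=0)[t]
        specificity = -np.log2(p_s)               # h(s)
        ambiguity = -np.log2(p_st[s, t] / p_t)    # h(s|t)
        return specificity, ambiguity, specificity - ambiguity

    p = np.zeros((4, 4))
    np.fill_diagonal(p, 0.25)                     # perfect copy channel
    print(pointwise_components(p, 0, 0))          # (2.0, 0.0, 2.0)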
NASA Astrophysics Data System (ADS)
Zhang, Yan; Sun, JinWei; Rolfe, Peter
2010-12-01
Near-infrared spectroscopy (NIRS) can be used as the basis of non-invasive neuroimaging that may allow the measurement of haemodynamic changes in the human brain evoked by applied stimuli. Since this technique is very sensitive, physiological interference arising from the cardiac cycle and breathing can significantly affect the signal quality. Such interference is difficult to remove by conventional techniques because it occurs not only in the extracerebral layer but also in the brain tissue itself. Previous work on this problem employing temporal filtering, spatial filtering, and adaptive filtering has exhibited good performance for recovering brain activity data in evoked response studies. However, in this study, we present a time-frequency adaptive method for physiological interference reduction based on the combination of empirical mode decomposition (EMD) and Hilbert spectral analysis (HSA). Monte Carlo simulations based on a five-layered slab model of a human adult head were implemented to evaluate our methodology. We applied an EMD algorithm to decompose the NIRS time series derived from Monte Carlo simulations into a series of intrinsic mode functions (IMFs). In order to identify the IMFs associated with the physiological interference, the extracted components were then Hilbert transformed, from which the instantaneous frequencies could be acquired. By reconstructing the NIRS signal from properly selected IMFs, the physiological interference is effectively filtered out and the evoked brain response recovered with an even higher signal-to-noise ratio (SNR). The results obtained demonstrated that EMD, combined with HSA, can effectively separate, identify and remove the contamination from the evoked brain response obtained with NIRS using a simple single source-detector pair.
An Alternate Method for Estimating Dynamic Height from XBT Profiles Using Empirical Vertical Modes
NASA Technical Reports Server (NTRS)
Lagerloef, Gary S. E.
1994-01-01
A technique is presented that applies modal decomposition to estimate dynamic height (0-450 db) from Expendable Bathythermograph (XBT) temperature profiles. Salinity-Temperature-Depth (STD) data are used to establish empirical relationships between vertically integrated temperature profiles and empirical dynamic height modes. These are then applied to XBT data to estimate dynamic height. A standard error of 0.028 dynamic meters is obtained for the waters of the Gulf of Alaska, an ocean region subject to substantial freshwater buoyancy forcing and with a T-S relationship that has considerable scatter. The residual error is a substantial improvement relative to the conventional T-S correlation technique when applied to this region. Systematic errors between estimated and true dynamic height were evaluated. The 20-year-long time series at Ocean Station P (50 deg N, 145 deg W) indicated weak variations in the error interannually, but not seasonally. There were no evident systematic alongshore variations in the error in the ocean boundary current regime near the perimeter of the Alaska gyre. The results prove satisfactory for the purpose of this work, which is to generate dynamic height from XBT data for co-analysis with satellite altimeter data, given that the altimeter height precision is likewise on the order of 2-3 cm. While the technique has not been applied to other ocean regions where the T-S relation has less scatter, it is suggested that it could provide some improvement over previously applied methods as well.
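The "train on STD, apply to XBT" step reduces to a regression; the sketch below (NumPy least squares on hypothetical arrays) shows only this generic final step, not the paper's specific modal formulation:

    import numpy as np

    def fit_dh_map(int_temp_std, dyn_height_std):
        # int_temp_std: (n_profiles x n_features) vertically integrated
        # temperature features from STD casts; dyn_height_std: targets
        X = np.column_stack([int_temp_std, np.ones(len(int_temp_std))])
        coef, *_ = np.linalg.lstsq(X, dyn_height_std, rcond=None)
        return coef

    def estimate_dh(coef, int_temp_xbt):
        X = np.column_stack([int_temp_xbt, np.ones(len(int_temp_xbt))])
        return X @ coef    # dynamic height estimates from XBT profiles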
Application of Petri net based analysis techniques to signal transduction pathways.
Sackmann, Andrea; Heiner, Monika; Koch, Ina
2006-11-02
Signal transduction pathways are usually modelled using classical quantitative methods, which are based on ordinary differential equations (ODEs). However, some difficulties are inherent in this approach. On the one hand, the kinetic parameters involved are often unknown and have to be estimated. With increasing size and complexity of signal transduction pathways, the estimation of missing kinetic data is not possible. On the other hand, ODE-based models do not support any explicit insights into possible (signal-) flows within the network. Moreover, a huge amount of qualitative data is available due to high-throughput techniques. In order to get information on the system's behaviour, qualitative analysis techniques have been developed. Applications of the known qualitative analysis methods concern mainly metabolic networks. Petri net theory provides a variety of established analysis techniques, which are also applicable to signal transduction models. In this context special properties have to be considered and new dedicated techniques have to be designed. We apply Petri net theory to model and analyse signal transduction pathways first qualitatively before continuing with quantitative analyses. This paper demonstrates how to systematically build a discrete model that provably reflects the qualitative biological behaviour without any knowledge of kinetic parameters. The mating pheromone response pathway in Saccharomyces cerevisiae serves as a case study. We propose an approach for model validation of signal transduction pathways based on the network structure only. For this purpose, we introduce the new notion of feasible t-invariants, which represent minimal self-contained subnets being active under a given input situation. Each of these subnets stands for a signal flow in the system. We define maximal common transition sets (MCT-sets), which can be used for t-invariant examination and net decomposition into the smallest biologically meaningful functional units. The paper demonstrates how Petri net analysis techniques can promote a deeper understanding of signal transduction pathways. The new concepts of feasible t-invariants and MCT-sets have been proven to be useful for model validation and the interpretation of the biological system behaviour. Whereas MCT-sets provide a decomposition of the net into disjunctive subnets, feasible t-invariants describe subnets, which generally overlap. This work contributes to qualitative modelling and to the analysis of large biological networks by their fully automatic decomposition into biologically meaningful modules.
Differential Decomposition Among Pig, Rabbit, and Human Remains.
Dautartas, Angela; Kenyhercz, Michael W; Vidoli, Giovanna M; Meadows Jantz, Lee; Mundorff, Amy; Steadman, Dawnie Wolfe
2018-03-30
While nonhuman animal remains are often utilized in forensic research to develop methods to estimate the postmortem interval, systematic studies that directly validate animals as proxies for human decomposition are lacking. The current project compared decomposition rates among pigs, rabbits, and humans at the University of Tennessee's Anthropology Research Facility across three seasonal trials that spanned nearly 2 years. The Total Body Score (TBS) method was applied to quantify decomposition changes and calculate the postmortem interval (PMI) in accumulated degree days (ADD). Decomposition trajectories were analyzed by comparing the estimated and actual ADD for each seasonal trial and by fuzzy cluster analysis. The cluster analysis demonstrated that the rabbits formed one group while pigs and humans, although more similar to each other than either to rabbits, still showed important differences in decomposition patterns. The decomposition trends show that neither nonhuman model captured the pattern, rate, and variability of human decomposition. © 2018 American Academy of Forensic Sciences.
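Accumulated degree days, the time axis used above, is a running sum of daily mean temperatures above a base value; a minimal sketch (the 0 degree C base is a common convention and an assumption here):

    import numpy as np

    def accumulated_degree_days(daily_mean_temp_c, base=0.0):
        t = np.asarray(daily_mean_temp_c, dtype=float)
        return np.cumsum(np.clip(t - base, 0.0, None))

    # e.g. [12.5, 15.0, 9.0, -2.0] -> [12.5, 27.5, 36.5, 36.5]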
Objective fitting of hemoglobin dynamics in traumatic bruises based on temperature depth profiling
NASA Astrophysics Data System (ADS)
Vidovič, Luka; Milanič, Matija; Majaron, Boris
2014-02-01
Pulsed photothermal radiometry (PPTR) allows noninvasive measurement of laser-induced temperature depth profiles. The obtained profiles provide information on the depth distribution of absorbing chromophores, such as melanin and hemoglobin. We apply this technique to objectively characterize the mass diffusion and decomposition rate of extravasated hemoglobin during the bruise healing process. In the present study, we introduce objective fitting of PPTR data obtained over the course of the bruise healing process. By applying Monte Carlo simulation of laser energy deposition and simulation of the corresponding PPTR signal, quantitative analysis of the underlying bruise healing processes is possible. Introduction of objective fitting enables an objective comparison between the simulated and experimental PPTR signals. In this manner, we avoid reconstruction of laser-induced depth profiles and thus the inherent loss of information in that process. This approach enables us to determine the value of hemoglobin mass diffusivity, which is controversial in the existing literature. Such information will be a valuable addition to existing bruise age determination techniques.
Applications of molecular modeling in coal research
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlson, G.A.; Faulon, J.L.
Over the past several years, molecular modeling has been applied to study various characteristics of coal molecular structures. Powerful workstations coupled with molecular force-field-based software packages have been used to study coal and coal-related molecules. Early work involved determination of the minimum-energy three-dimensional conformations of various published coal structures (Given, Wiser, Solomon and Shinn), and the dominant role of van der Waals and hydrogen bonding forces in defining the energy-minimized structures. These studies have been extended to explore various physical properties of coal structures, including density, microporosity, surface area, and fractal dimension. Other studies have related structural characteristics to cross-link density and have explored small molecule interactions with coal. Finally, recent studies using a structural elucidation (molecular builder) technique have constructed statistically diverse coal structures based on quantitative and qualitative data on coal and its decomposition products. This technique is also being applied to study coalification processes based on postulated coalification chemistry.
A compositional approach to building applications in a computational environment
NASA Astrophysics Data System (ADS)
Roslovtsev, V. V.; Shumsky, L. D.; Wolfengagen, V. E.
2014-04-01
The paper presents an approach to creating an applicative computational environment to feature computational processes and data decomposition, and a compositional approach to application building. The approach in question is based on the notion of combinator - both in systems with variable binding (such as λ-calculi) and those allowing programming without variables (combinatory logic style). We present a computation decomposition technique based on objects' structural decomposition, with the focus on computation decomposition. The computational environment's architecture is based on a network with nodes playing several roles simultaneously.
An Optimal Strategy for Accurate Bulge-to-disk Decomposition of Disk Galaxies
NASA Astrophysics Data System (ADS)
Gao, Hua; Ho, Luis C.
2017-08-01
The development of two-dimensional (2D) bulge-to-disk decomposition techniques has shown their advantages over traditional one-dimensional (1D) techniques, especially for galaxies with non-axisymmetric features. However, the full potential of 2D techniques has yet to be fully exploited. Secondary morphological features in nearby disk galaxies, such as bars, lenses, rings, disk breaks, and spiral arms, are seldom accounted for in 2D image decompositions, even though some image-fitting codes, such as GALFIT, are capable of handling them. We present detailed, 2D multi-model and multi-component decomposition of high-quality R-band images of a representative sample of nearby disk galaxies selected from the Carnegie-Irvine Galaxy Survey, using the latest version of GALFIT. The sample consists of five barred and five unbarred galaxies, spanning Hubble types from S0 to Sc. Traditional 1D decomposition is also presented for comparison. In detailed case studies of the 10 galaxies, we successfully model the secondary morphological features. Through a comparison of best-fit parameters obtained from different input surface brightness models, we identify morphological features that significantly impact bulge measurements. We show that nuclear and inner lenses/rings and disk breaks must be properly taken into account to obtain accurate bulge parameters, whereas outer lenses/rings and spiral arms have a negligible effect. We provide an optimal strategy to measure bulge parameters of typical disk galaxies, as well as prescriptions to estimate realistic uncertainties of them, which will benefit subsequent decomposition of a larger galaxy sample.
Planetary Gears Feature Extraction and Fault Diagnosis Method Based on VMD and CNN.
Liu, Chang; Cheng, Gang; Chen, Xihui; Pang, Yusong
2018-05-11
Given local weak feature information, a novel feature extraction and fault diagnosis method for planetary gears based on variational mode decomposition (VMD), singular value decomposition (SVD), and convolutional neural network (CNN) is proposed. VMD was used to decompose the original vibration signal to mode components. The mode matrix was partitioned into a number of submatrices and local feature information contained in each submatrix was extracted as a singular value vector using SVD. The singular value vector matrix corresponding to the current fault state was constructed according to the location of each submatrix. Finally, by training a CNN using singular value vector matrices as inputs, planetary gear fault state identification and classification was achieved. The experimental results confirm that the proposed method can successfully extract local weak feature information and accurately identify different faults. The singular value vector matrices of different fault states have a distinct difference in element size and waveform. The VMD-based partition extraction method is better than ensemble empirical mode decomposition (EEMD), resulting in a higher CNN total recognition rate of 100% with fewer training times (14 times). Further analysis demonstrated that the method can also be applied to the degradation recognition of planetary gears. Thus, the proposed method is an effective feature extraction and fault diagnosis technique for planetary gears.
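The submatrix-to-singular-values feature construction can be sketched as follows (NumPy; the VMD modes are assumed precomputed by any VMD implementation, and the equal split along the time axis is a simplifying assumption that may differ in detail from the paper's partitioning):

    import numpy as np

    def singular_value_features(modes, n_seg):
        # modes: (K x N) matrix whose rows are VMD mode components
        K, N = modes.shape
        seg = N // n_seg
        feats = [np.linalg.svd(modes[:, j*seg:(j+1)*seg], compute_uv=False)
                 for j in range(n_seg)]
        return np.stack(feats)   # (n_seg x K) singular value vector matrix

The resulting matrix is what would be fed to the CNN as a fault-state image.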
NASA Astrophysics Data System (ADS)
Poggi, Valerio; Ermert, Laura; Burjanek, Jan; Michel, Clotaire; Fäh, Donat
2015-01-01
Frequency domain decomposition (FDD) is a well-established spectral technique used in civil engineering to analyse and monitor the modal response of buildings and structures. The method is based on singular value decomposition of the cross-power spectral density matrix from simultaneous array recordings of ambient vibrations. This method is advantageous in that it retrieves not only the resonance frequencies of the investigated structure, but also the corresponding modal shapes, without the need for an absolute reference. This is an important piece of information, which can be used to validate the consistency of numerical models and analytical solutions. We apply this approach using advanced signal processing to evaluate the resonance characteristics of 2-D Alpine sedimentary valleys. In this study, we present the results obtained at Martigny, in the Rhône valley (Switzerland). For the analysis, we use 2 hr of ambient vibration recordings from a linear seismic array deployed perpendicularly to the valley axis. Only the horizontal-axial direction (SH) of the ground motion is considered. Using the FDD method, six separate resonant frequencies are retrieved together with their corresponding modal shapes. We compare the mode shapes with results from classical standard spectral ratios and numerical simulations of ambient vibration recordings.
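The core of FDD is an SVD of the cross-power spectral density matrix at each frequency line; a compact SciPy sketch (a generic FDD skeleton, not the authors' processing chain; sensor count, segment length, and peak picking are left open):

    import numpy as np
    from scipy.signal import csd

    def fdd_first_singular_values(records, fs, nperseg=1024):
        # records: (n_sensors x n_samples) simultaneous ambient recordings
        n = records.shape[0]
        f, _ = csd(records[0], records[0], fs=fs, nperseg=nperseg)
        G = np.empty((len(f), n, n), dtype=complex)
        for i in range(n):
            for j in range(n):
                _, G[:, i, j] = csd(records[i], records[j],
                                    fs=fs, nperseg=nperseg)
        s1 = np.array([np.linalg.svd(Gk, compute_uv=False)[0] for Gk in G])
        # Peaks of s1 mark resonance frequencies; the first singular vectors
        # at those lines give the corresponding mode shapes.
        return f, s1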
Wavelet domain textual coding of Ottoman script images
NASA Astrophysics Data System (ADS)
Gerek, Oemer N.; Cetin, Enis A.; Tewfik, Ahmed H.
1996-02-01
Image coding using the wavelet transform, DCT, and similar transform techniques is well established. On the other hand, these coding methods neither take into account the special characteristics of the images in a database nor are they suitable for fast database search. In this paper, the digital archiving of Ottoman printings is considered. Ottoman documents are printed in Arabic letters. Witten et al. describe a scheme based on finding the characters in binary document images and encoding the positions of the repeated characters. This method efficiently compresses document images and is suitable for database search, but it cannot be applied to Ottoman or Arabic documents, as the concept of character is different in Ottoman or Arabic. Typically, one has to deal with compound structures consisting of a group of letters, so the matching criterion is defined over those compound structures. Furthermore, the text images are gray tone or color images for Ottoman scripts, for reasons described in the paper. In our method the compound structure matching is carried out in the wavelet domain, which reduces the search space and increases the compression ratio. In addition to the wavelet transformation, which corresponds to the linear subband decomposition, we also used nonlinear subband decomposition. The filters in the nonlinear subband decomposition have the property of preserving edges in the low resolution subband image.
NASA Astrophysics Data System (ADS)
Naseer, Muhammad Tayyab; Asim, Shazia
2017-10-01
Unconventional resource shales can play a critical role in economic growth throughout the world. The hydrocarbon potential of faulted/fractured shales is the most significant challenge for unconventional prospect generation. The continuous wavelet transform (CWT) of spectral decomposition (SD) technology is applied for shale gas prospects on high-resolution 3D seismic data from the Miano area in the Indus platform, SW Pakistan. Schmoker's technique reveals high-quality shales with total organic carbon (TOC) of 9.2% distributed in the western regions. The seismic amplitude, root-mean-square (RMS), and most positive curvature attributes show limited ability to resolve the prospective fractured shale components. The CWT is used to identify the hydrocarbon-bearing faulted/fractured compartments encased within the non-hydrocarbon-bearing shale units. The hydrocarbon-bearing shales exhibit higher amplitudes (4694 dB and 3439 dB) than the non-reservoir shales (3290 dB). Cross plots between sweetness, 22 Hz spectral decomposition, and the seismic amplitudes are found to be more effective tools than conventional seismic attribute mapping for discriminating the seal and reservoir elements within the incised-valley petroleum system. Rock physics analysis distinguishes the productive sediments from the non-productive sediments, suggesting the potential for future shale play exploration.
Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei
2017-09-01
The laser induced breakdown spectroscopy (LIBS) technique is an effective method to detect material composition by obtaining the plasma emission spectrum. The overlapping peaks in the spectrum are a fundamental problem in the qualitative and quantitative analysis of LIBS. Based on a curve fitting method, this paper studies an error compensation method to achieve the decomposition and correction of overlapping peaks. The vital step is that the fitting residual is fed back to the overlapping peaks and multiple curve fitting passes are performed to obtain a lower residual result. For the quantitative experiments on Cu, the Cu-Fe overlapping peaks in the range of 321-327 nm obtained from the LIBS spectrum of five different concentrations of CuSO4·5H2O solution were decomposed and corrected using the curve fitting and error compensation methods. Compared with the curve fitting method alone, the error compensation reduced the fitting residual by about 18.12-32.64% and improved the correlation by about 0.86-1.82%. Then, the calibration curve between the intensity and concentration of Cu was established. The error compensation method exhibits a higher linear correlation between the intensity and concentration of Cu, and can be applied to the decomposition and correction of overlapping peaks in the LIBS spectrum.
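A hedged sketch of the residual-feedback idea for two overlapping peaks, here interpreted as alternately re-fitting each peak against the data minus the current estimate of the other; the Gaussian peak shape, two-peak model and pass count are assumptions, not the paper's exact procedure:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, a, mu, sig):
    return a * np.exp(-(x - mu) ** 2 / (2 * sig ** 2))

def fit_with_error_compensation(x, y, p0, n_passes=4):
    """Decompose two overlapping peaks, feeding the fitting residual of
    each pass back into the next fit to drive the residual down."""
    a1, m1, s1, a2, m2, s2 = p0
    for _ in range(n_passes):
        # Fit peak 1 on the data with the current estimate of peak 2 removed
        (a1, m1, s1), _ = curve_fit(gauss, x, y - gauss(x, a2, m2, s2), p0=[a1, m1, s1])
        # Fit peak 2 on the remaining residual
        (a2, m2, s2), _ = curve_fit(gauss, x, y - gauss(x, a1, m1, s1), p0=[a2, m2, s2])
    return a1, m1, s1, a2, m2, s2

# Synthetic Cu-Fe-like overlap near 321-327 nm (illustrative values only)
rng = np.random.default_rng(0)
x = np.linspace(321, 327, 400)
y = gauss(x, 1.0, 323.0, 0.4) + gauss(x, 0.7, 324.2, 0.5) + 0.01 * rng.normal(size=x.size)
print(fit_with_error_compensation(x, y, p0=[1.0, 323.0, 0.5, 0.6, 324.0, 0.5]))
```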
NASA Astrophysics Data System (ADS)
Hu, Yijia; Zhong, Zhong; Zhu, Yimin; Ha, Yao
2018-04-01
In this paper, a statistical forecast model using the time-scale decomposition method is established for seasonal prediction of the rainfall during the flood period (FPR) over the middle and lower reaches of the Yangtze River Valley (MLYRV). This method decomposes the rainfall over the MLYRV into three time-scale components, namely, the interannual component with periods shorter than 8 years, the interdecadal component with periods from 8 to 30 years, and the multidecadal component with periods longer than 30 years. Then, predictors are selected for the three time-scale components of FPR through correlation analysis. Finally, a statistical forecast model is established using the multiple linear regression technique to predict the three time-scale components of the FPR, respectively. The results show that this forecast model can capture the interannual and interdecadal variation of FPR. The hindcast of FPR over the 14 years from 2001 to 2014 shows that the FPR can be predicted successfully in 11 out of the 14 years. This forecast model performs better than the model using the traditional scheme without time-scale decomposition. Therefore, the statistical forecast model using the time-scale decomposition technique has good skill and application value in the operational prediction of FPR over the MLYRV.
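A sketch of the two computational steps, under the assumption that the time-scale separation can be approximated with zero-phase Butterworth filters on an annual series (the paper's exact decomposition method may differ), followed by ordinary least-squares regression of each component on its predictors:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def timescale_components(annual_series):
    """Split an annual rainfall series into interannual (<8 yr),
    interdecadal (8-30 yr) and multidecadal (>30 yr) components."""
    b_hi, a_hi = butter(3, 1 / 8, btype="highpass", fs=1)   # periods < 8 yr
    b_lo, a_lo = butter(3, 1 / 30, btype="lowpass", fs=1)   # periods > 30 yr
    interannual = filtfilt(b_hi, a_hi, annual_series)
    multidecadal = filtfilt(b_lo, a_lo, annual_series)
    interdecadal = annual_series - interannual - multidecadal
    return interannual, interdecadal, multidecadal

def fit_component(component, predictors):
    """Multiple linear regression of one time-scale component on its
    selected predictors (the correlation-based predictor selection
    step is not reproduced here)."""
    X = np.column_stack([np.ones(len(component)), predictors])
    coef, *_ = np.linalg.lstsq(X, component, rcond=None)
    return coef        # forecast = X_new @ coef; sum the three components
```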
NASA Astrophysics Data System (ADS)
Shahzad, Syed Jawad Hussain; Kumar, Ronald Ravinesh; Ali, Sajid; Ameer, Saba
2016-09-01
The interdependence of the Greek and other European stock markets and the subsequent portfolio implications are examined in the wavelet and variational mode decomposition domains. In applying the decomposition techniques, we analyze the structural properties of the data and distinguish between short- and long-term dynamics of stock market returns. First, GARCH-type models are fitted to obtain the standardized residuals. Next, different copula functions are evaluated, and based on the conventional information criteria and the time varying parameter, the Joe-Clayton copula is chosen to model the tail dependence between the stock markets. The short-run lower tail dependence time paths show a sudden increase in comovement during the global financial crisis. The results of the long-run dependence suggest that European stock markets have higher interdependence with the Greek stock market. Individual countries' Value at Risk (VaR) separates the countries into two distinct groups. Finally, the two-asset portfolio VaR measures identify potential markets for diversification of Greek stock market investments.
Study on the relevance of some of the description methods for plateau-honed surfaces
NASA Astrophysics Data System (ADS)
Yousfi, M.; Mezghani, S.; Demirci, I.; El Mansori, M.
2014-01-01
Much work has been undertaken in recent years on the determination of a complete parametric description of plateau-honed surfaces, with the intention of making a link between the process conditions, the surface topography and the required functional performances. Different advanced techniques (plateau/valley decomposition using the normalized Abbott-Firestone curve or morphological operators, multiscale decomposition using the continuous wavelet transform, etc.) have been proposed and applied in different studies. This paper re-examines the current state of developments and discusses the relevance of the different proposed parameters and characterization methods for plateau-honed surfaces by considering the manufacturing-characterization-function control loop. The relevance of appropriate characterization is demonstrated through two experimental studies. They consider the effect of the main plateau honing process variables (the abrasive grit size and abrasive indentation velocity in finish-honing, and the plateau-honing stage duration and pressure) on cylinder liner surface textures and hydrodynamic friction of the ring-pack system.
A New Strategy for ECG Baseline Wander Elimination Using Empirical Mode Decomposition
NASA Astrophysics Data System (ADS)
Shahbakhti, Mohammad; Bagheri, Hamed; Shekarchi, Babak; Mohammadi, Somayeh; Naji, Mohsen
2016-06-01
Electrocardiogram (ECG) signals may be affected by various artifacts and noises that have biological and external sources. Baseline wander (BW) is a low-frequency artifact that may be caused by breathing, body movements and loose sensor contact. In this paper, a novel method based on empirical mode decomposition (EMD) for removal of baseline noise from the ECG is presented. Compared to other EMD-based methods, the novelty of this research lies in determining the optimal number of decomposition levels for ECG BW de-noising using the mean power frequency (MPF), while also reducing processing time. To evaluate the performance of the proposed method, a fifth-order Butterworth high-pass filter (BHPF) with a cut-off frequency of 0.5 Hz and a wavelet approach are applied for comparison. Three performance indices, signal-to-noise ratio (SNR), mean square error (MSE) and correlation coefficient (CC), between the pure and filtered signals are used to evaluate the presented techniques. Results suggest that the EMD-based method outperforms the other filtering methods.
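A rough sketch of the approach using the PyEMD package (assumed installed): decompose, then subtract the trailing low-frequency IMFs whose mean power frequency falls below a threshold. The 0.5 Hz figure mirrors the comparison filter's cut-off; the stopping rule here is an assumption, whereas the paper derives the optimal number of levels from the MPF:

```python
import numpy as np
from scipy.signal import welch
from PyEMD import EMD   # PyEMD (EMD-signal) package, assumed installed

def mean_power_frequency(x, fs):
    f, p = welch(x, fs=fs, nperseg=min(len(x), 1024))
    return np.sum(f * p) / np.sum(p)

def remove_baseline_wander(ecg, fs, mpf_threshold=0.5):
    """Estimate the baseline as the sum of the slowest IMFs and subtract it."""
    imfs = EMD().emd(ecg)
    baseline = np.zeros_like(ecg)
    for imf in imfs[::-1]:                     # last IMFs are the slowest
        if mean_power_frequency(imf, fs) < mpf_threshold:
            baseline += imf
        else:
            break
    return ecg - baseline
```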
DeMAID/GA USER'S GUIDE Design Manager's Aid for Intelligent Decomposition with a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Rogers, James L.
1996-01-01
Many companies are looking for new tools and techniques to aid a design manager in making decisions that can reduce the time and cost of a design cycle. One tool that is available to aid in this decision-making process is the Design Manager's Aid for Intelligent Decomposition (DeMAID). Since the initial release of DeMAID in 1989, numerous enhancements have been added to aid the design manager in saving both cost and time in a design cycle. The key enhancement is a genetic algorithm (GA), and the enhanced version is called DeMAID/GA. The GA orders the sequence of design processes to minimize the cost and time to converge to a solution. These enhancements, as well as the existing features of the original version of DeMAID, are described. Two sample problems are used to show how these enhancements can be applied to improve the design cycle. This report serves as a user's guide for DeMAID/GA.
Cyclic Mario worlds — color-decomposition for one-loop QCD
NASA Astrophysics Data System (ADS)
Kälin, Gregor
2018-04-01
We present a new color decomposition for QCD amplitudes at one-loop level as a generalization of the Del Duca-Dixon-Maltoni and Johansson-Ochirov decomposition at tree level. Starting from a minimal basis of planar primitive amplitudes we write down a color decomposition that is free of linear dependencies among appearing primitive amplitudes or color factors. The conjectured decomposition applies to any number of quark flavors and is independent of the choice of gauge group and matter representation. The results also hold for higher-dimensional or supersymmetric extensions of QCD. We provide expressions for any number of external quark-antiquark pairs and gluons.
Techniques for Reaeration of Hydropower Releases.
1983-02-01
peak production from air induction through the baffle ring. The other aeration technique at Norris required modifications to the vacuum-breaker system...of Gas Tracers for Reaeration," Jour. Environ. Div., Proc. Amer. Soc. Civil Engr., 104, 215, April. Rathbun, R. E., 1979, "Estimating the Gas and Dye ...or dissolved in the water, and--last but not least--by the decomposition of bottom mud and by oxidation of the decomposition products stirred up out
Tomographic reconstruction of tokamak plasma light emission using wavelet-vaguelette decomposition
NASA Astrophysics Data System (ADS)
Schneider, Kai; Nguyen van Yen, Romain; Fedorczak, Nicolas; Brochard, Frederic; Bonhomme, Gerard; Farge, Marie; Monier-Garbet, Pascale
2012-10-01
Images acquired by cameras installed in tokamaks are difficult to interpret because the three-dimensional structure of the plasma is flattened in a non-trivial way. Nevertheless, taking advantage of the slow variation of the fluctuations along magnetic field lines, the optical transformation may be approximated by a generalized Abel transform, for which we proposed in Nguyen van yen et al., Nucl. Fus., 52 (2012) 013005, an inversion technique based on the wavelet-vaguelette decomposition. After validation of the new method using an academic test case and numerical data obtained with the Tokam 2D code, we present an application to an experimental movie obtained in the tokamak Tore Supra. A comparison with a classical regularization technique for ill-posed inverse problems, the singular value decomposition, allows us to assess the efficiency. The superiority of the wavelet-vaguelette technique is reflected in preserving local features, such as blobs and fronts, in the denoised emissivity map.
NASA Astrophysics Data System (ADS)
Le, Thien-Phu
2017-10-01
The frequency-scale domain decomposition technique has recently been proposed for operational modal analysis. The technique is based on the Cauchy mother wavelet. In this paper, the approach is extended to the Morlet mother wavelet, which is very popular in signal processing due to its superior time-frequency localization. Based on the regressive form and an appropriate norm of the Morlet mother wavelet, the continuous wavelet transform of the power spectral density of ambient responses enables modes in the frequency-scale domain to be highlighted. Analytical developments first demonstrate the link between modal parameters and the local maxima of the continuous wavelet transform modulus. The link formula is then used as the foundation of the proposed modal identification method. Its practical procedure, combined with the singular value decomposition algorithm, is presented step by step. The proposition is finally verified using numerical examples and a laboratory test.
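As a rough illustration of the pipeline, the sketch below estimates the power spectral density of an ambient response and takes its continuous wavelet transform, so that local maxima of the modulus in the resulting frequency-scale plane can be picked as modes. It substitutes pywt's standard 'morl' wavelet for the regressive Morlet form and norm developed in the paper, so it outlines the idea rather than the authors' method:

```python
import numpy as np
import pywt
from scipy.signal import welch

def frequency_scale_map(response, fs, scales=np.arange(1, 64)):
    """Morlet CWT of the power spectral density of an ambient response;
    local maxima of |coefs| highlight modes in the frequency-scale plane."""
    f, psd = welch(response, fs=fs, nperseg=2048)
    coefs, _ = pywt.cwt(psd, scales, "morl")
    return f, np.abs(coefs)          # modulus map, shape (n_scales, n_freq)
```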
Analysis of Self-Excited Combustion Instabilities Using Decomposition Techniques
2016-07-05
Proper orthogonal decomposition and dynamic mode decomposition are evaluated for the study of self-excited longitudinal combustion instabilities in laboratory-scaled single-element gas turbine and rocket... (DOI: 10.2514/1.J054557). In addition, we also evaluate the capabilities of the methods to deal with data sets of different spatial extents and temporal resolution.
Domain decomposition for aerodynamic and aeroacoustic analyses, and optimization
NASA Technical Reports Server (NTRS)
Baysal, Oktay
1995-01-01
The overarching theme was domain decomposition, which was intended to improve the numerical solution technique for the partial differential equations at hand; in the present study, those that governed either the fluid flow, the aeroacoustic wave propagation, or the sensitivity analysis for a gradient-based optimization. The role of the domain decomposition extended beyond the original impetus of discretizing geometrically complex regions or writing modular software for distributed-hardware computers. It induced function-space decompositions and operator decompositions that offered the valuable property of near independence of operator evaluation tasks. The objectives centered on the extension and implementation of methodologies that were either previously developed or concurrently under development: (1) aerodynamic sensitivity analysis with domain decomposition (SADD); (2) computational aeroacoustics of cavities; and (3) dynamic, multibody computational fluid dynamics using unstructured meshes.
NASA Astrophysics Data System (ADS)
Kumar, Ravi; Bhaduri, Basanta; Nishchal, Naveen K.
2018-01-01
In this study, we propose a quick response (QR) code based nonlinear optical image encryption technique using the spiral phase transform (SPT), equal modulus decomposition (EMD) and singular value decomposition (SVD). First, the primary image is converted into a QR code and then multiplied with a spiral phase mask (SPM). Next, the product is spiral phase transformed with a particular spiral phase function, and further, EMD is performed on the output of the SPT, which results in two complex images, Z1 and Z2. Among these, Z1 is further Fresnel propagated with distance d, and Z2 is reserved as a decryption key. Afterwards, SVD is performed on the Fresnel propagated output to get three decomposed matrices, i.e. one diagonal matrix and two unitary matrices. The two unitary matrices are modulated with two different SPMs and then the inverse SVD is performed using the diagonal matrix and modulated unitary matrices to get the final encrypted image. Numerical simulation results confirm the validity and effectiveness of the proposed technique. The proposed technique is robust against noise attacks, specific attacks, and brute force attacks. Simulation results are presented in support of the proposed idea.
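The EMD step (here meaning equal modulus decomposition, not empirical mode decomposition) admits a compact numerical sketch. The construction below is the commonly used one, splitting a complex field into two components of equal modulus with a random phase mask; whether it matches this paper's variant exactly is an assumption:

```python
import numpy as np

def equal_modulus_decomposition(F, rng=np.random.default_rng(0)):
    """Split a complex field F into Z1 + Z2 with |Z1| == |Z2|."""
    A, theta = np.abs(F), np.angle(F)
    # Random phase mask; offset kept within (-1.2, 1.2) rad so that
    # cos(theta - phi) stays away from zero and Z1 stays bounded
    phi = theta + rng.uniform(-1.2, 1.2, F.shape)
    Z1 = A / (2 * np.cos(theta - phi)) * np.exp(1j * phi)
    Z2 = F - Z1        # one can verify |Z2| == |Z1| analytically
    return Z1, Z2

F = np.exp(2j * np.pi * np.random.default_rng(1).random((4, 4)))  # toy field
Z1, Z2 = equal_modulus_decomposition(F)
assert np.allclose(np.abs(Z1), np.abs(Z2)) and np.allclose(Z1 + Z2, F)
```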
Automatic indexing of compound words based on mutual information for Korean text retrieval
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pan Koo Kim; Yoo Kun Cho
In this paper, we present an automatic indexing technique for compound words suitable to an agglutinative language, specifically Korean. First, we present the construction conditions for composing compound words as indexing terms. We also present the decomposition rules applicable to consecutive nouns to extract the full content of a text. Finally, we propose a measure to estimate the usefulness of a term, mutual information, to calculate the degree of word association of compound words, based on the information theoretic notion. By applying this method, our system has raised the precision rate of compound words from 72% to 87%.
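The mutual information measure itself is a one-liner; below is a hedged sketch of scoring adjacent noun pairs from corpus counts (the construction conditions and decomposition rules for Korean are omitted, and the count variables are illustrative):

```python
import math
from collections import Counter

def mutual_information(pair_counts, word_counts, total_pairs, total_words):
    """Pointwise mutual information log2(p(x,y) / (p(x) p(y))) of adjacent
    noun pairs, used as the degree of word association when deciding
    whether to index a pair as one compound term."""
    mi = {}
    for (w1, w2), c in pair_counts.items():
        p_xy = c / total_pairs
        p_x = word_counts[w1] / total_words
        p_y = word_counts[w2] / total_words
        mi[(w1, w2)] = math.log2(p_xy / (p_x * p_y))
    return mi

pairs = Counter({("information", "retrieval"): 8, ("compound", "word"): 5})
words = Counter({"information": 20, "retrieval": 10, "compound": 9, "word": 30})
print(mutual_information(pairs, words, total_pairs=100, total_words=500))
```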
Partial information decomposition as a spatiotemporal filter.
Flecker, Benjamin; Alford, Wesley; Beggs, John M; Williams, Paul L; Beer, Randall D
2011-09-01
Understanding the mechanisms of distributed computation in cellular automata requires techniques for characterizing the emergent structures that underlie information processing in such systems. Recently, techniques from information theory have been brought to bear on this problem. Building on this work, we utilize the new technique of partial information decomposition to show that previous information-theoretic measures can confound distinct sources of information. We then propose a new set of filters and demonstrate that they more cleanly separate out the background domains, particles, and collisions that are typically associated with information storage, transfer, and modification in cellular automata.
Use of system identification techniques for improving airframe finite element models using test data
NASA Technical Reports Server (NTRS)
Hanagud, Sathya V.; Zhou, Weiyu; Craig, James I.; Weston, Neil J.
1991-01-01
A method for using system identification techniques to improve airframe finite element models was developed and demonstrated. The method uses linear sensitivity matrices to relate changes in selected physical parameters to changes in total system matrices. The values for these physical parameters were determined using constrained optimization with singular value decomposition. The method was confirmed using both simple and complex finite element models for which pseudo-experimental data was synthesized directly from the finite element model. The method was then applied to a real airframe model which incorporated all the complexities and details of a large finite element model and for which extensive test data was available. The method was shown to work, and the differences between the identified model and the measured results were considered satisfactory.
Parallel processing approach to transform-based image coding
NASA Astrophysics Data System (ADS)
Normile, James O.; Wright, Dan; Chu, Ken; Yeh, Chia L.
1991-06-01
This paper describes a flexible parallel processing architecture designed for use in real time video processing. The system consists of floating point DSP processors connected to each other via fast serial links; each processor has access to a globally shared memory. A multiple bus architecture in combination with a dual ported memory allows communication with a host control processor. The system has been applied to prototyping of video compression and decompression algorithms. The decomposition of transform based algorithms for decompression into a form suitable for parallel processing is described. A technique for automatic load balancing among the processors is developed and discussed, and results are presented with image statistics and data rates. Finally, techniques for accelerating the system throughput are analyzed and results from the application of one such modification are described.
Frequency-selective quantitation of short-echo time 1H magnetic resonance spectra
NASA Astrophysics Data System (ADS)
Poullet, Jean-Baptiste; Sima, Diana M.; Van Huffel, Sabine; Van Hecke, Paul
2007-06-01
Accurate and efficient filtering techniques are required to suppress large nuisance components present in short-echo time magnetic resonance (MR) spectra. This paper discusses two powerful filtering techniques used in long-echo time MR spectral quantitation, the maximum-phase FIR filter (MP-FIR) and the Hankel-Lanczos Singular Value Decomposition with Partial ReOrthogonalization (HLSVD-PRO), and shows that they can be applied to their more complex short-echo time spectral counterparts. Both filters are validated and compared through extensive simulations. Their properties are discussed. In particular, the capability of MP-FIR for dealing with macromolecular components is emphasized. Although this property does not make a large difference for long-echo time MR spectra, it can be important when quantifying short-echo time spectra.
Multidisciplinary design optimization using multiobjective formulation techniques
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Pagaldipti, Narayanan S.
1995-01-01
This report addresses the development of a multidisciplinary optimization procedure using an efficient semi-analytical sensitivity analysis technique and multilevel decomposition for the design of aerospace vehicles. A semi-analytical sensitivity analysis procedure is developed for calculating computational grid sensitivities and aerodynamic design sensitivities. Accuracy and efficiency of the sensitivity analysis procedure are established through comparison of the results with those obtained using a finite difference technique. The developed sensitivity analysis technique is then used within a multidisciplinary optimization procedure for designing aerospace vehicles. The optimization problem, with the integration of aerodynamics and structures, is decomposed into two levels. Optimization is performed for improved aerodynamic performance at the first level and improved structural performance at the second level. Aerodynamic analysis is performed by solving the three-dimensional parabolized Navier-Stokes equations. A nonlinear programming technique and an approximate analysis procedure are used for optimization. The procedure developed is applied to design the wing of a high speed aircraft. Results obtained show significant improvements in the aircraft aerodynamic and structural performance when compared to a reference or baseline configuration. The use of the semi-analytical sensitivity technique provides significant computational savings.
Bailey, E A; Dutton, A W; Mattingly, M; Devasia, S; Roemer, R B
1998-01-01
Reduced-order modelling techniques can make important contributions in the control and state estimation of large systems. In hyperthermia, reduced-order modelling can provide a useful tool by which a large thermal model can be reduced to the most significant subset of its full-order modes, making real-time control and estimation possible. Two such reduction methods, one based on modal decomposition and the other on balanced realization, are compared in the context of simulated hyperthermia heat transfer problems. The results show that the modal decomposition reduction method has three significant advantages over that of balanced realization. First, modal decomposition reduced models result in less error, when compared to the full-order model, than balanced realization reduced models of similar order in problems with low or moderate advective heat transfer. Second, because the balanced realization based methods require a priori knowledge of the sensor and actuator placements, the reduced-order model is not robust to changes in sensor or actuator locations, a limitation not present in modal decomposition. Third, the modal decomposition transformation is less demanding computationally. On the other hand, in thermal problems dominated by advective heat transfer, numerical instabilities make modal decomposition based reduction problematic. Modal decomposition methods are therefore recommended for reduction of models in which advection is not dominant and research continues into methods to render balanced realization based reduction more suitable for real-time clinical hyperthermia control and estimation.
Lamb Waves Decomposition and Mode Identification Using Matching Pursuit Method
2009-01-01
Wigner-Ville distribution (WVD). However, WVD suffers from severe interferences, called cross-terms. Cross-terms are the area of a time-frequency... transform (STFT), wavelet transform, Wigner-Ville distribution, matching pursuit decomposition, etc. ... MP decomposition using a chirplet dictionary was applied to a simulated S0 mode Lamb wave shown previously in Figure 2a. Wigner-Ville distribution of...
Zhou, Xuhui; Xu, Xia; Zhou, Guiyao; Luo, Yiqi
2018-02-01
Temperature sensitivity of soil organic carbon (SOC) decomposition is one of the major uncertainties in predicting climate-carbon (C) cycle feedback. Results from previous studies are highly contradictory, with old soil C decomposition being more, similarly, or less sensitive to temperature than decomposition of young fractions. The contradictory results stem partly from difficulties in distinguishing old from young SOC and their changes over time in experiments with or without isotopic techniques. In this study, we conducted a long-term field incubation experiment with deep soil collars (PVC tubes, 0-70 cm in depth, 10 cm in diameter) that excluded root C input, to examine the apparent temperature sensitivity of SOC decomposition under ambient and warming treatments from 2002 to 2008. The data from the experiment were infused into a multi-pool soil C model to estimate the intrinsic temperature sensitivity of SOC decomposition and the C residence times of three SOC fractions (i.e., active, slow, and passive) using a data assimilation (DA) technique. As active SOC with short C residence times was progressively depleted in the deep soil collars under both ambient and warming treatments, the residence times of the whole SOC became longer over time. Concomitantly, the estimated apparent and intrinsic temperature sensitivities of SOC decomposition also became gradually higher over time as more than 50% of active SOC was depleted. Thus, the temperature sensitivity of soil C decomposition in deep soil collars was positively correlated with the mean C residence times. However, the regression slope of the temperature sensitivity against the residence time was lower under the warming treatment than under ambient temperature, indicating that other processes also regulate the temperature sensitivity of SOC decomposition. These results indicate that old SOC decomposition is more sensitive to temperature than young components, making the old C more vulnerable to a future warmer climate. © 2017 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Lafare, Antoine E. A.; Peach, Denis W.; Hughes, Andrew G.
2016-02-01
The daily groundwater level (GWL) response in the Permo-Triassic Sandstone aquifers in the Eden Valley, England (UK), has been studied using the seasonal trend decomposition by LOESS (STL) technique. The hydrographs from 18 boreholes in the Permo-Triassic Sandstone were decomposed into three components: seasonality, general trend and remainder. The decomposition was analysed first visually, then using tools involving a variance ratio, time-series hierarchical clustering and correlation analysis. Differences and similarities in decomposition pattern were explained using the physical and hydrogeological information associated with each borehole. The Penrith Sandstone exhibits vertical and horizontal heterogeneity, whereas the more homogeneous St Bees Sandstone groundwater hydrographs characterize a well-identified seasonality; however, exceptions can be identified. A stronger trend component is obtained in the silicified parts of the northern Penrith Sandstone, while the southern Penrith, containing Brockram (breccias) Formation, shows a greater relative variability of the seasonal component. Other boreholes drilled as shallow/deep pairs show differences in responses, revealing the potential vertical heterogeneities within the Penrith Sandstone. The differences in bedrock characteristics between and within the Penrith and St Bees Sandstone formations appear to influence the GWL response. The de-seasonalized and de-trended GWL time series were then used to characterize the response, for example in terms of memory effect (autocorrelation analysis). By applying the STL method, it is possible to analyse GWL hydrographs leading to better conceptual understanding of the groundwater flow. Thus, variation in groundwater response can be used to gain insight into the aquifer physical properties and understand differences in groundwater behaviour.
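statsmodels ships an STL implementation, so the per-borehole decomposition can be sketched directly; this assumes a gap-free daily series with an annual cycle (period=365) and default smoothing, which need not match the settings used in the study:

```python
import pandas as pd
from statsmodels.tsa.seasonal import STL

def decompose_hydrograph(daily_levels: pd.Series):
    """STL decomposition of a daily groundwater-level series into
    seasonal, trend and remainder components."""
    result = STL(daily_levels, period=365, robust=True).fit()
    return result.seasonal, result.trend, result.resid

# The remainder supports the follow-up analyses, e.g. a memory-effect
# check via autocorrelation: resid.autocorr(lag=30) for a pandas Series.
```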
Keough, Natalie; Myburgh, Jolandie; Steyn, Maryna
2017-07-01
Decomposition studies often use pigs as proxies for human cadavers. However, differences in decomposition sequences/rates relative to humans have not been scientifically examined. Descriptions of five main decomposition stages (humans) were developed and refined by Galloway and later by Megyesi. However, whether these changes/processes are alike in pigs is unclear. Any differences can have significant effects when pig models are used for human PMI estimation. This study compared human decomposition models to the changes observed in pigs. Twenty pig carcasses (50-90 kg) were left to decompose over five months and decompositional features were recorded. Total body scores (TBS) were calculated. Significant differences were observed during early decomposition between pigs and humans. An amended scoring system to be used in future studies was developed. Standards for PMI estimation derived from porcine models may not directly apply to humans and may need adjustment. Porcine models, however, remain valuable for studying variables influencing decomposition. © 2016 American Academy of Forensic Sciences.
Approximation methods for stochastic petri nets
NASA Technical Reports Server (NTRS)
Jungnitz, Hauke Joerg
1992-01-01
Stochastic Marked Graphs are a concurrent decision free formalism provided with a powerful synchronization mechanism generalizing conventional Fork Join Queueing Networks. In some particular cases the analysis of the throughput can be done analytically. Otherwise the analysis suffers from the classical state explosion problem. Embedded in the divide and conquer paradigm, approximation techniques are introduced for the analysis of stochastic marked graphs and Macroplace/Macrotransition-nets (MPMT-nets), a new subclass introduced herein. MPMT-nets are a subclass of Petri nets that allow limited choice, concurrency and sharing of resources. The modeling power of MPMT-nets is much larger than that of marked graphs; e.g., MPMT-nets can model manufacturing flow lines with unreliable machines and dataflow graphs where choice and synchronization occur. The basic idea leads to the notion of a cut to split the original net system into two subnets. The cuts lead to two aggregated net systems where one of the subnets is reduced to a single transition. A further reduction leads to a basic skeleton. The generalization of the idea leads to multiple cuts, where single cuts can be applied recursively, leading to a hierarchical decomposition. Based on the decomposition, a response time approximation technique for the performance analysis is introduced. Also, delay equivalence, which has previously been introduced in the context of marked graphs by Woodside et al., Marie's method and flow equivalent aggregation are applied to the aggregated net systems. The experimental results show that response time approximation converges quickly and shows reasonable accuracy in most cases. The convergence of Marie's method is slower, but the accuracy is generally better. Delay equivalence often fails to converge, while flow equivalent aggregation can lead to potentially bad results if a strong dependence of the mean completion time on the interarrival process exists.
INDDGO: Integrated Network Decomposition & Dynamic programming for Graph Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Groer, Christopher S; Sullivan, Blair D; Weerapurage, Dinesh P
2012-10-01
It is well-known that dynamic programming algorithms can utilize tree decompositions to provide a way to solve some NP-hard problems on graphs where the complexity is polynomial in the number of nodes and edges in the graph, but exponential in the width of the underlying tree decomposition. However, there has been relatively little computational work done to determine the practical utility of such dynamic programming algorithms. We have developed software to construct tree decompositions using various heuristics and have created a fast, memory-efficient dynamic programming implementation for solving maximum weighted independent set. We describe our software and the algorithms we have implemented, focusing on memory saving techniques for the dynamic programming. We compare the running time and memory usage of our implementation with other techniques for solving maximum weighted independent set, including a commercial integer programming solver and a semi-definite programming solver. Our results indicate that it is possible to solve some instances where the underlying decomposition has width much larger than suggested by the literature. For certain types of problems, our dynamic programming code runs several times faster than these other methods.
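For intuition, the width-1 special case of this dynamic program is the classic maximum weighted independent set recursion on a tree; the sketch below illustrates the include/exclude table that the tree-decomposition DP generalizes (an illustration, not INDDGO's code):

```python
import sys

def mwis_tree(adj, weights, root=0):
    """Maximum weighted independent set on a tree by dynamic programming.

    adj: adjacency lists of a tree; weights: node -> weight.
    dp(v) returns (best with v included, best with v excluded)."""
    sys.setrecursionlimit(100000)
    def dp(v, parent):
        incl, excl = weights[v], 0.0
        for u in adj[v]:
            if u != parent:
                i_u, e_u = dp(u, v)
                incl += e_u               # v in the set: children must be out
                excl += max(i_u, e_u)     # v out: children choose freely
        return incl, excl
    return max(dp(root, -1))

# Path 0-1-2 with weights 3, 2, 3: the optimal set {0, 2} has weight 6
print(mwis_tree({0: [1], 1: [0, 2], 2: [1]}, {0: 3, 1: 2, 2: 3}))
```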
Solventless synthesis, morphology, structure and magnetic properties of iron oxide nanoparticles
NASA Astrophysics Data System (ADS)
Das, Bratati; Kusz, Joachim; Reddy, V. Raghavendra; Zubko, Maciej; Bhattacharjee, Ashis
2017-12-01
In this study we report the solventless synthesis of iron oxide through thermal decomposition of acetyl ferrocene, as well as its mixtures with maleic anhydride, and the characterization of the synthesized product by various comprehensive physical techniques. The morphology, size and structure of the reaction products were investigated by scanning electron microscopy, transmission electron microscopy and X-ray powder diffraction, respectively. Physical characterization techniques such as FT-IR spectroscopy, dc magnetization measurements and 57Fe Mössbauer spectroscopy were employed to characterize the magnetic properties of the product. The results of these studies unequivocally established that the synthesized materials are hematite. The thermal decomposition has been studied with the help of thermogravimetry, and a reaction pathway for the synthesis of hematite has been proposed. It is noted that maleic anhydride in the solid reaction environment as well as the gaseous reaction atmosphere strongly affect the reaction yield as well as the particle size. In general, a method of preparing hematite nanoparticles through the solventless thermal decomposition of organometallic compounds and the possible use of a reaction promoter are discussed in detail.
NASA Astrophysics Data System (ADS)
Sierra, Carlos A.; Trumbore, Susan E.; Davidson, Eric A.; Vicca, Sara; Janssens, I.
2015-03-01
The sensitivity of soil organic matter decomposition to global environmental change is a topic of prominent relevance for the global carbon cycle. Decomposition depends on multiple factors that are being altered simultaneously as a result of global environmental change; therefore, it is important to study the sensitivity of the rates of soil organic matter decomposition with respect to multiple and interacting drivers. In this manuscript, we present an analysis of the potential response of decomposition rates to simultaneous changes in temperature and moisture. To address this problem, we first present a theoretical framework to study the sensitivity of soil organic matter decomposition when multiple driving factors change simultaneously. We then apply this framework to models and data at different levels of abstraction: (1) to a mechanistic model that addresses the limitation of enzyme activity by simultaneous effects of temperature and soil water content, the latter controlling substrate supply and oxygen concentration for microbial activity; (2) to different mathematical functions used to represent temperature and moisture effects on decomposition in biogeochemical models. To contrast model predictions at these two levels of organization, we compiled different data sets of observed responses in field and laboratory studies. Then we applied our conceptual framework to: (3) observations of heterotrophic respiration at the ecosystem level; (4) laboratory experiments looking at the response of heterotrophic respiration to independent changes in moisture and temperature; and (5) ecosystem-level experiments manipulating soil temperature and water content simultaneously.
NASA Astrophysics Data System (ADS)
Fan, Hong-Yi; Fan, Yue
2002-01-01
By virtue of the technique of integration within an ordered product of operators and the Schmidt decomposition of the entangled state |η〉, we reduce the general projection calculation in the theory of quantum teleportation to as simple a form as possible and present a general formalism for teleporting quantum states of continuous variables. The project was supported by the National Natural Science Foundation of China and the Educational Ministry Foundation of China.
Low-rank Atlas Image Analyses in the Presence of Pathologies
Liu, Xiaoxiao; Niethammer, Marc; Kwitt, Roland; Singh, Nikhil; McCormick, Matt; Aylward, Stephen
2015-01-01
We present a common framework, for registering images to an atlas and for forming an unbiased atlas, that tolerates the presence of pathologies such as tumors and traumatic brain injury lesions. This common framework is particularly useful when a sufficient number of protocol-matched scans from healthy subjects cannot be easily acquired for atlas formation and when the pathologies in a patient cause large appearance changes. Our framework combines a low-rank-plus-sparse image decomposition technique with an iterative, diffeomorphic, group-wise image registration method. At each iteration of image registration, the decomposition technique estimates a “healthy” version of each image as its low-rank component and estimates the pathologies in each image as its sparse component. The healthy version of each image is used for the next iteration of image registration. The low-rank and sparse estimates are refined as the image registrations iteratively improve. When that framework is applied to image-to-atlas registration, the low-rank image is registered to a pre-defined atlas, to establish correspondence that is independent of the pathologies in the sparse component of each image. Ultimately, image-to-atlas registrations can be used to define spatial priors for tissue segmentation and to map information across subjects. When that framework is applied to unbiased atlas formation, at each iteration, the average of the low-rank images from the patients is used as the atlas image for the next iteration, until convergence. Since each iteration’s atlas is comprised of low-rank components, it provides a population-consistent, pathology-free appearance. Evaluations of the proposed methodology are presented using synthetic data as well as simulated and clinical tumor MRI images from the brain tumor segmentation (BRATS) challenge from MICCAI 2012. PMID:26111390
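The low-rank-plus-sparse step is essentially robust PCA; a generic sketch via the inexact augmented Lagrange multiplier iteration is given below (singular value thresholding for the low-rank part, soft thresholding for the sparse part). Parameter choices follow common defaults, and the paper couples this step with diffeomorphic registration, which is not shown:

```python
import numpy as np

def rpca(D, lam=None, tol=1e-7, max_iter=500):
    """Decompose D ~ L + S with L low-rank ("healthy" appearance) and
    S sparse (pathologies), by an inexact ALM robust-PCA iteration."""
    m, n = D.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    norm_D = np.linalg.norm(D, "fro")
    mu = 1.25 / np.linalg.norm(D, 2)
    Y = D / max(np.linalg.norm(D, 2), np.abs(D).max() / lam)
    L, S = np.zeros_like(D), np.zeros_like(D)
    for _ in range(max_iter):
        # Singular value thresholding: low-rank update
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1 / mu, 0)) @ Vt
        # Soft thresholding: sparse update
        T = D - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0)
        Y += mu * (D - L - S)
        if np.linalg.norm(D - L - S, "fro") < tol * norm_D:
            break
    return L, S
```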
NASA Astrophysics Data System (ADS)
Aliouane, Leila; Ouadfeul, Sid-Ali; Rabhi, Abdessalem; Rouina, Fouzi; Benaissa, Zahia; Boudella, Amar
2013-04-01
The main goal of this work is to compare two lithofacies segmentation techniques for a reservoir interval. The first is based on Kohonen's Self-Organizing Map (SOM) neural network. The second is based on the Walsh transform decomposition. Application to real well-log data from two boreholes located in the Algerian Sahara shows that the Self-Organizing Map is able to provide more lithological detail than the lithofacies model given by the Walsh decomposition. Keywords: Comparison, Lithofacies, SOM, Walsh References: 1) Aliouane, L., Ouadfeul, S., Boudella, A., 2011, Fractal analysis based on the continuous wavelet transform and lithofacies classification from well-logs data using the self-organizing map neural network, Arabian Journal of Geosciences, doi: 10.1007/s12517-011-0459-4 2) Aliouane, L., Ouadfeul, S., Djarfour, N., Boudella, A., 2012, Petrophysical Parameters Estimation from Well-Logs Data Using Multilayer Perceptron and Radial Basis Function Neural Networks, Lecture Notes in Computer Science Volume 7667, 2012, pp 730-736, doi: 10.1007/978-3-642-34500-5_86 3) Ouadfeul, S. and Aliouane, L., 2011, Multifractal analysis revisited by the continuous wavelet transform applied in lithofacies segmentation from well-logs data, International Journal of Applied Physics and Mathematics, Vol. 01, No. 01. 4) Ouadfeul, S., Aliouane, L., 2012, Lithofacies Classification Using the Multilayer Perceptron and the Self-organizing Neural Networks, Lecture Notes in Computer Science Volume 7667, 2012, pp 737-744, doi: 10.1007/978-3-642-34500-5_87 5) Weisstein, Eric W. "Fast Walsh Transform." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/FastWalshTransform.html
Constrained reduced-order models based on proper orthogonal decomposition
Reddy, Sohail R.; Freno, Brian Andrew; Cizmas, Paul G. A.; ...
2017-04-09
A novel approach is presented to constrain reduced-order models (ROM) based on proper orthogonal decomposition (POD). The Karush–Kuhn–Tucker (KKT) conditions were applied to the traditional reduced-order model to constrain the solution to user-defined bounds. The constrained reduced-order model (C-ROM) was applied and validated against the analytical solution to the first-order wave equation. C-ROM was also applied to the analysis of fluidized beds. Lastly, it was shown that the ROM and C-ROM produced accurate results and that C-ROM was less sensitive to error propagation through time than the ROM.
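For reference, the POD basis underlying such reduced-order models is obtained from an SVD of a snapshot matrix; a minimal sketch follows (the energy threshold is an illustrative choice):

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """POD of a snapshot matrix whose columns are solution states.

    Returns the smallest orthonormal basis Phi capturing the requested
    energy fraction; a ROM then evolves coefficients a with q ~ Phi @ a."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(cum, energy)) + 1
    return U[:, :r]
```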
TE/TM decomposition of electromagnetic sources
NASA Technical Reports Server (NTRS)
Lindell, Ismo V.
1988-01-01
Three methods are given by which bounded EM sources can be decomposed into two parts radiating transverse electric (TE) and transverse magnetic (TM) fields with respect to a given constant direction in space. The theory applies source equivalence and nonradiating source concepts, which lead to decomposition methods based on a recursive formula or two differential equations for the determination of the TE and TM components of the original source. Decompositions for a dipole in terms of point, line, and plane sources are studied in detail. The planar decomposition is seen to match an earlier result given by Clemmow (1963). As an application of the point decomposition method, it is demonstrated that the general exact image expression for the Sommerfeld half-space problem, previously derived through heuristic reasoning, can be obtained more straightforwardly through the present decomposition method.
Bahri, A.; Bendersky, M.; Cohen, F. R.; Gitler, S.
2009-01-01
This article gives a natural decomposition of the suspension of a generalized moment-angle complex or partial product space which arises as the polyhedral product functor described below. The introduction and application of the smash product moment-angle complex provides a precise identification of the stable homotopy type of the values of the polyhedral product functor. One direct consequence is an analysis of the associated cohomology. For the special case of the complements of certain subspace arrangements, the geometrical decomposition implies the homological decomposition in earlier work of others as described below. Because the splitting is geometric, an analogous homological decomposition for a generalized moment-angle complex applies for any homology theory. Implied, therefore, is a decomposition for the Stanley–Reisner ring of a finite simplicial complex, and natural generalizations. PMID:19620727
Students' Understanding of Quadratic Equations
ERIC Educational Resources Information Center
López, Jonathan; Robles, Izraim; Martínez-Planell, Rafael
2016-01-01
Action-Process-Object-Schema theory (APOS) was applied to study student understanding of quadratic equations in one variable. This required proposing a detailed conjecture (called a genetic decomposition) of the mental constructions students may make to understand quadratic equations. The genetic decomposition which was proposed can contribute to help…
A unified material decomposition framework for quantitative dual- and triple-energy CT imaging.
Zhao, Wei; Vernekohl, Don; Han, Fei; Han, Bin; Peng, Hao; Yang, Yong; Xing, Lei; Min, James K
2018-04-21
Many clinical applications depend critically on the accurate differentiation and classification of different types of materials in patient anatomy. This work introduces a unified framework for accurate nonlinear material decomposition and applies it, for the first time, to the concept of triple-energy CT (TECT) for enhanced material differentiation and classification, as well as to dual-energy CT (DECT). We express the polychromatic projection as a linear combination of line integrals of material-selective images. The material decomposition is then cast as a problem of minimizing the least-squares difference between measured and estimated CT projections. The optimization problem is solved iteratively by updating the line integrals. The proposed technique is evaluated using several numerical phantom measurements under different scanning protocols. The triple-energy data acquisition is implemented at the scales of micro-CT and clinical CT imaging with a commercial "TwinBeam" dual-source DECT configuration and a fast kV-switching DECT configuration. Material decomposition and quantitative comparison with a photon counting detector and with the presence of a bow-tie filter are also performed. The proposed method provides quantitative material- and energy-selective images for realistic configurations of both DECT and TECT measurements. Compared to the polychromatic kV CT images, virtual monochromatic images show superior image quality. For the mouse phantom, quantitative measurements show that the differences between gadodiamide and iodine concentrations obtained using TECT and idealized photon counting CT (PCCT) are smaller than 8 and 1 mg/mL, respectively. TECT outperforms DECT for multicontrast CT imaging and is robust with respect to spectrum estimation. For the thorax phantom, the differences between the concentrations of the contrast map and the corresponding true reference values are smaller than 7 mg/mL for all of the realistic configurations. A unified framework for both DECT and TECT imaging has been established for the accurate extraction of material compositions using currently available commercial DECT configurations. The novel technique is promising to provide an urgently needed solution for several CT-based diagnostic and therapy applications, especially for the diagnosis of cardiovascular and abdominal diseases where multicontrast imaging is involved. © 2018 American Association of Physicists in Medicine.
Prediction of drag at subsonic and transonic speeds using Euler methods
NASA Technical Reports Server (NTRS)
Nikfetrat, K.; Van Dam, C. P.; Vijgen, P. M. H. W.; Chang, I. C.
1992-01-01
A technique for the evaluation of aerodynamic drag from flowfield solutions based on the Euler equations is discussed. The technique is limited to steady attached flows around three-dimensional configurations in the absence of active systems such as surface blowing/suction and propulsion. It allows the decomposition of the total drag into induced drag and wave drag and, consequently, it provides more information on the drag sources than the conventional surface-pressure integration technique. The induced drag is obtained from the integration of the kinetic energy (per unit distance) of the trailing vortex system on a wake plane and the wave drag is obtained from the integration of the entropy production on a plane just downstream of the shocks. The drag-evaluation technique is applied to three-dimensional flowfield solutions for the ONERA M6 wing as well as an aspect-ratio-7 wing with an elliptic spanwise chord distribution and an NACA-0012 section shape. Comparisons between the drag obtained with the present technique and the drag based on the integration of surface pressures are presented for two Euler codes.
NASA Astrophysics Data System (ADS)
Berger, Lukas; Kleinheinz, Konstantin; Attili, Antonio; Bisetti, Fabrizio; Pitsch, Heinz; Mueller, Michael E.
2018-05-01
Modelling unclosed terms in partial differential equations typically involves two steps: First, a set of known quantities needs to be specified as input parameters for a model, and second, a specific functional form needs to be defined to model the unclosed terms by the input parameters. Both steps involve a certain modelling error, with the former known as the irreducible error and the latter referred to as the functional error. Typically, only the total modelling error, which is the sum of functional and irreducible error, is assessed, but the concept of the optimal estimator enables the separate analysis of the total and the irreducible errors, yielding a systematic modelling error decomposition. In this work, attention is paid to the techniques themselves required for the practical computation of irreducible errors. Typically, histograms are used for optimal estimator analyses, but this technique is found to add a non-negligible spurious contribution to the irreducible error if models with multiple input parameters are assessed. Thus, the error decomposition of an optimal estimator analysis becomes inaccurate, and misleading conclusions concerning modelling errors may be drawn. In this work, numerically accurate techniques for optimal estimator analyses are identified and a suitable evaluation of irreducible errors is presented. Four different computational techniques are considered: a histogram technique, artificial neural networks, multivariate adaptive regression splines, and an additive model based on a kernel method. For multiple input parameter models, only artificial neural networks and multivariate adaptive regression splines are found to yield satisfactorily accurate results. Beyond a certain number of input parameters, the assessment of models in an optimal estimator analysis even becomes practically infeasible if histograms are used. The optimal estimator analysis in this paper is applied to modelling the filtered soot intermittency in large eddy simulations using a dataset of a direct numerical simulation of a non-premixed sooting turbulent flame.
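The histogram technique criticized in the paper amounts to estimating the optimal estimator E[q|π] by binned conditional means and then the irreducible error as the mean squared deviation from it; a sketch is below. As the paper notes, with several input parameters this binned estimate acquires a spurious contribution, and a regression surrogate such as an artificial neural network becomes the accurate alternative:

```python
import numpy as np

def irreducible_error_histogram(params, target, bins=32):
    """Optimal-estimator analysis with a histogram technique.

    params: (N,) or (N, d) input parameters; target: (N,) unclosed term.
    Returns the binned estimate of E[(target - E[target | params])^2]."""
    params = np.atleast_2d(params.T).T                  # ensure (N, d)
    d = params.shape[1]
    edges = [np.linspace(p.min(), p.max(), bins + 1) for p in params.T]
    idx = np.stack([np.clip(np.digitize(p, e) - 1, 0, bins - 1)
                    for p, e in zip(params.T, edges)], axis=1)
    flat = np.ravel_multi_index(idx.T, (bins,) * d)     # joint bin index
    sums = np.bincount(flat, weights=target, minlength=bins ** d)
    counts = np.bincount(flat, minlength=bins ** d)
    cond_mean = sums[flat] / counts[flat]               # E[target | bin]
    return np.mean((target - cond_mean) ** 2)
```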
ERIC Educational Resources Information Center
Koga, Nobuyoshi; Goshi, Yuri; Yoshikawa, Masahiro; Tatsuoka, Tomoyuki
2014-01-01
An undergraduate kinetic experiment of the thermal decomposition of solids by microscopic observation and thermal analysis was developed by investigating a suitable reaction, applicable techniques of thermal analysis and microscopic observation, and a reliable kinetic calculation method. The thermal decomposition of sodium hydrogen carbonate is…
Concurrent airline fleet allocation and aircraft design with profit modeling for multiple airlines
NASA Astrophysics Data System (ADS)
Govindaraju, Parithi
A "System of Systems" (SoS) approach is particularly beneficial in analyzing complex large scale systems comprised of numerous independent systems -- each capable of independent operations in their own right -- that when brought in conjunction offer capabilities and performance beyond the constituents of the individual systems. The variable resource allocation problem is a type of SoS problem, which includes the allocation of "yet-to-be-designed" systems in addition to existing resources and systems. The methodology presented here expands upon earlier work that demonstrated a decomposition approach that sought to simultaneously design a new aircraft and allocate this new aircraft along with existing aircraft in an effort to meet passenger demand at minimum fleet level operating cost for a single airline. The result of this describes important characteristics of the new aircraft. The ticket price model developed and implemented here enables analysis of the system using profit maximization studies instead of cost minimization. A multiobjective problem formulation has been implemented to determine characteristics of a new aircraft that maximizes the profit of multiple airlines to recognize the fact that aircraft manufacturers sell their aircraft to multiple customers and seldom design aircraft customized to a single airline's operations. The route network characteristics of two simple airlines serve as the example problem for the initial studies. The resulting problem formulation is a mixed-integer nonlinear programming problem, which is typically difficult to solve. A sequential decomposition strategy is applied as a solution methodology by segregating the allocation (integer programming) and aircraft design (non-linear programming) subspaces. After solving a simple problem considering two airlines, the decomposition approach is then applied to two larger airline route networks representing actual airline operations in the year 2005. The decomposition strategy serves as a promising technique for future detailed analyses. Results from the profit maximization studies favor a smaller aircraft in terms of passenger capacity due to its higher yield generation capability on shorter routes while results from the cost minimization studies favor a larger aircraft due to its lower direct operating cost per seat mile.
Massively Parallel Dantzig-Wolfe Decomposition Applied to Traffic Flow Scheduling
NASA Technical Reports Server (NTRS)
Rios, Joseph Lucio; Ross, Kevin
2009-01-01
Optimal scheduling of air traffic over the entire National Airspace System is a computationally difficult task. To speed computation, Dantzig-Wolfe decomposition is applied to a known linear integer programming approach for assigning delays to flights. The optimization model is proven to have the block-angular structure necessary for Dantzig-Wolfe decomposition. The subproblems for this decomposition are solved in parallel via independent computation threads. Experimental evidence suggests that as the number of subproblems/threads increases (and their respective sizes decrease), the solution quality, convergence, and runtime improve. A demonstration of this is provided by using one flight per subproblem, which is the finest possible decomposition. This results in thousands of subproblems and associated computation threads. This massively parallel approach is compared to one with few threads and to standard (non-decomposed) approaches in terms of solution quality and runtime. Since this method generally provides a non-integral (relaxed) solution to the original optimization problem, two heuristics are developed to generate an integral solution. Dantzig-Wolfe followed by these heuristics can provide a near-optimal (sometimes optimal) solution to the original problem hundreds of times faster than standard (non-decomposed) approaches. In addition, when massive decomposition is employed, the solution is shown to be more likely integral, which obviates the need for an integerization step. These results indicate that nationwide, real-time, high fidelity, optimal traffic flow scheduling is achievable for (at least) 3 hour planning horizons.
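The pricing stage lends itself to a few lines of code. Below is a toy sketch, not the paper's implementation: one flight per subproblem, each independently choosing the delay with the lowest reduced cost given the master problem's dual prices on sector-time capacity; Python processes stand in for the computation threads, and all names and data are hypothetical.

    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def price_flight(args):
        # One Dantzig-Wolfe subproblem = one flight: reduced cost of
        # delay d is the delay cost minus the dual value of the
        # sector-time slot the delayed flight would occupy.
        flight_id, slots, duals = args
        costs = [delay - duals[slot] for delay, slot in enumerate(slots)]
        best = int(np.argmin(costs))
        return flight_id, best, costs[best]

    def parallel_pricing(flights, duals, workers=8):
        jobs = [(fid, slots, duals) for fid, slots in flights.items()]
        with ProcessPoolExecutor(max_workers=workers) as ex:
            return list(ex.map(price_flight, jobs))

    if __name__ == "__main__":
        duals = {("S1", t): 0.2 * t for t in range(4)}   # toy dual prices
        flights = {"F1": [("S1", t) for t in range(4)],
                   "F2": [("S1", t) for t in range(4)]}
        print(parallel_pricing(flights, duals, workers=2))

A full implementation would iterate: the master LP combines the returned columns, updates the duals, and the subproblems are priced again until no column with negative reduced cost remains.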
Non-invasive quantitative pulmonary V/Q imaging using Fourier decomposition MRI at 1.5T.
Kjørstad, Åsmund; Corteville, Dominique M R; Henzler, Thomas; Schmid-Bindert, Gerald; Zöllner, Frank G; Schad, Lothar R
2015-12-01
Techniques for quantitative pulmonary perfusion and ventilation using the Fourier Decomposition method were recently demonstrated. We combine these two techniques and show that ventilation-perfusion (V/Q) imaging is possible using only a single MR acquisition of less than thirty seconds. The Fourier Decomposition method is used in combination with two quantification techniques, which extract baselines from within the images themselves and thus allow quantification. For the perfusion, a region assumed to consist of 100% blood is utilized, while for the ventilation the zero-frequency component is used. V/Q imaging is then done by dividing the quantified ventilation map by the quantified perfusion map. The techniques were used on ten healthy volunteers and fifteen patients diagnosed with lung cancer. A mean V/Q ratio of 1.15 ± 0.22 was found for the healthy volunteers and a mean V/Q ratio of 1.93 ± 0.83 for the non-afflicted lung in the patients. The mean V/Q ratio in the afflicted (tumor-bearing) lung was found to be 1.61 ± 1.06. Functional defects were clearly visible in many of the patient images, but 5 of 15 patient images had to be excluded due to artifacts or low SNR, indicating a lack of robustness. Non-invasive, quantitative V/Q imaging is possible using Fourier Decomposition MRI. The method requires only a single acquisition of less than 30 seconds, but robustness in patients remains an issue. Copyright © 2015. Published by Elsevier GmbH.
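A compact sketch of the combined pipeline is given below, assuming a registered 2D+time image series and nominal respiratory (~0.25 Hz) and cardiac (~1 Hz) frequencies; the normalization choices are simplified stand-ins for the paper's baseline regions.

    import numpy as np

    def vq_map(series, dt, resp_hz=0.25, card_hz=1.0):
        # series: registered lung images, shape (T, ny, nx); dt in s.
        spec = np.abs(np.fft.rfft(series, axis=0))
        freqs = np.fft.rfftfreq(series.shape[0], d=dt)
        vent = spec[np.argmin(np.abs(freqs - resp_hz))]  # respiratory peak
        perf = spec[np.argmin(np.abs(freqs - card_hz))]  # cardiac peak
        dc = spec[0]                                     # zero-frequency baseline
        vent_q = vent / np.maximum(dc, 1e-9)             # quantified ventilation
        perf_q = perf / perf.max()                       # stand-in for 100%-blood ROI
        return vent_q / np.maximum(perf_q, 1e-9)         # V/Q ratio map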
Learning inverse kinematics: reduced sampling through decomposition into virtual robots.
de Angulo, Vicente Ruiz; Torras, Carme
2008-12-01
We propose a technique to speed up the learning of the inverse kinematics of a robot manipulator by decomposing it into two or more virtual robot arms. Unlike previous decomposition approaches, this one does not place any requirement on the robot architecture and is thus completely general. Parametrized self-organizing maps are particularly adequate for this type of learning and permit a direct comparison between results obtained with and without the decomposition. Experimentation shows that time reductions of up to two orders of magnitude are easily attained.
The Use of Decompositions in International Trade Textbooks.
ERIC Educational Resources Information Center
Highfill, Jannett K.; Weber, William V.
1994-01-01
Asserts that international trade, as compared with international finance or even international economics, is primarily an applied microeconomics field. Discusses decomposition analysis in relation to international trade and tariffs. Reports on an evaluation of the treatment of this topic in eight college-level economics textbooks. (CFR)
Use of system identification techniques for improving airframe finite element models using test data
NASA Technical Reports Server (NTRS)
Hanagud, Sathya V.; Zhou, Weiyu; Craig, James I.; Weston, Neil J.
1993-01-01
A method for using system identification techniques to improve airframe finite element models using test data was developed and demonstrated. The method uses linear sensitivity matrices to relate changes in selected physical parameters to changes in the total system matrices. The values for these physical parameters were determined using constrained optimization with singular value decomposition. The method was confirmed using both simple and complex finite element models for which pseudo-experimental data was synthesized directly from the finite element model. The method was then applied to a real airframe model which incorporated all of the complexities and details of a large finite element model and for which extensive test data was available. The method was shown to work, and the differences between the identified model and the measured results were considered satisfactory.
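The core numerical step, updating physical parameters from a linear sensitivity relation, reduces to a truncated-SVD pseudo-inverse. A minimal sketch, with the truncation tolerance as an assumed regularization choice:

    import numpy as np

    def update_parameters(S, residual, tol=1e-6):
        # S: sensitivity matrix relating parameter changes to changes in
        # the assembled system matrices (flattened); residual: measured-
        # minus-model data. Truncating small singular values regularizes
        # the ill-conditioned inverse problem.
        U, s, Vt = np.linalg.svd(S, full_matrices=False)
        s_inv = np.where(s > tol * s[0], 1.0 / s, 0.0)
        return Vt.T @ (s_inv * (U.T @ residual))   # parameter updates

In practice the update is applied iteratively, and bounds on the physical parameters supply the constraints mentioned in the abstract.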
NASA Astrophysics Data System (ADS)
Crockett, R. G. M.; Perrier, F.; Richon, P.
2009-04-01
Building on independent investigations by research groups at both IPGP, France, and the University of Northampton, UK, hourly-sampled radon time-series of durations exceeding one year have been investigated for periodic and anomalous phenomena using a variety of established and novel techniques. These time-series have been recorded in locations having no routine human activity and are thus effectively free of significant anthropogenic influences. With regard to periodic components, the long durations of these time-series allow, in principle, very high frequency resolutions for established spectral-measurement techniques such as Fourier and maximum-entropy. However, as has been widely observed, the stochastic nature of radon emissions from rocks and soils, coupled with sensitivity to a wide variety of influences such as temperature, wind-speed and soil moisture-content, has made interpretation of the results obtained by such techniques very difficult, with uncertain results in many cases. We report here developments in the investigation of radon time-series for periodic and anomalous phenomena using spectral-decomposition techniques. These techniques, in variously separating 'high', 'middle' and 'low' frequency components, effectively 'de-noise' the data by allowing components of interest to be isolated from others which might serve to obscure weaker information-containing components. Once isolated, these components can be investigated using a variety of techniques. Whilst this is very much work in the early stages of development, spectral-decomposition methods have been used successfully to indicate the presence of diurnal and sub-diurnal cycles in radon concentration which we provisionally attribute to tidal influences. These methods have also been used to enhance the identification of short-duration anomalies attributable to a variety of causes including, for example, earthquakes and rapid large-magnitude changes in weather conditions. Keywords: radon; earthquakes; tidal influences; anomalies; time series; spectral decomposition.
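The band-splitting idea can be sketched in a few lines: zero all Fourier coefficients outside a band and invert, once per band. The band edges below (in cycles per day, for hourly sampling) are illustrative choices, not the groups' settings.

    import numpy as np

    def band_components(x, bands=((0.0, 0.5), (0.5, 2.0), (2.0, 12.0))):
        # x: hourly radon series; band edges in cycles/day.
        f = np.fft.rfftfreq(len(x), d=1.0 / 24.0)   # frequencies in cycles/day
        X = np.fft.rfft(x - np.mean(x))
        parts = []
        for lo, hi in bands:
            mask = (f >= lo) & (f < hi)
            parts.append(np.fft.irfft(X * mask, n=len(x)))
        return parts   # 'low', 'middle', 'high' components

The middle band isolates the diurnal and sub-diurnal cycles mentioned above, while short-duration anomalies survive mainly in the high band.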
Reconstructing Past Admixture Processes from Local Genomic Ancestry Using Wavelet Transformation
Sanderson, Jean; Sudoyo, Herawati; Karafet, Tatiana M.; Hammer, Michael F.; Cox, Murray P.
2015-01-01
Admixture between long-separated populations is a defining feature of the genomes of many species. The mosaic block structure of admixed genomes can provide information about past contact events, including the time and extent of admixture. Here, we describe an improved wavelet-based technique that better characterizes ancestry block structure from observed genomic patterns. Principal components analysis is first applied to genomic data to identify the primary population structure, followed by wavelet decomposition to develop a new characterization of local ancestry information along the chromosomes. For testing purposes, this method is applied to human genome-wide genotype data from Indonesia, as well as virtual genetic data generated using genome-scale sequential coalescent simulations under a wide range of admixture scenarios. Time of admixture is inferred using an approximate Bayesian computation framework, providing robust estimates of both admixture times and their associated levels of uncertainty. Crucially, we demonstrate that this revised wavelet approach, which we have released as the R package adwave, provides improved statistical power over existing wavelet-based techniques and can be used to address a broad range of admixture questions. PMID:25852078
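A rough Python sketch of the two-stage idea (the released adwave package itself is R) is shown below; the standardized genotype input, the Haar wavelet, and the per-scale variance summary are assumptions made for illustration.

    import numpy as np
    import pywt
    from sklearn.decomposition import PCA

    def local_ancestry_signal(G, individual):
        # G: (n_individuals, n_snps) standardized genotypes with SNPs
        # ordered along the chromosome. PC1 loadings weight each SNP by
        # how well it separates the source populations, so the weighted
        # genotypes of one admixed individual form a noisy square wave
        # whose blocks reflect ancestry switches.
        pca = PCA(n_components=1).fit(G)
        return G[individual] * pca.components_[0]

    def wavelet_variance(signal, wavelet="haar", level=6):
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        return [np.mean(c**2) for c in coeffs[1:]]   # one value per scale

The scale at which the wavelet variance peaks tracks typical ancestry block length, which is what shrinks as admixture recedes further into the past.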
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sakai, H.; Yako, K.
2009-08-26
Angular distributions of the double differential cross sections for the ⁴⁸Ca(p,n) and the ⁴⁸Ti(n,p) reactions were measured at 300 MeV. A multipole decomposition technique was applied to the spectra to extract the Gamow-Teller (GT) transition strengths. In the (n,p) spectrum, extra B(GT⁺) strengths were observed beyond 8 MeV excitation energy which are not predicted by the shell model calculation. These extra B(GT⁺) strengths contribute significantly to the nuclear matrix element of the 2ν2β decay.
Through Wall Radar Classification of Human Micro-Doppler Using Singular Value Decomposition Analysis
Ritchie, Matthew; Ash, Matthew; Chen, Qingchao; Chetty, Kevin
2016-01-01
The ability to detect the presence as well as classify the activities of individuals behind visually obscuring structures is of significant benefit to police, security and emergency services in many situations. This paper presents the analysis from a series of experimental results generated using a through-the-wall (TTW) Frequency Modulated Continuous Wave (FMCW) C-Band radar system named Soprano. The objective of this analysis was to classify whether an individual was carrying an item in both hands or not using micro-Doppler information from a FMCW sensor. The radar was deployed at a standoff distance, of approximately 0.5 m, outside a residential building and used to detect multiple people walking within a room. Through the application of digital filtering, it was shown that significant suppression of the primary wall reflection is possible, significantly enhancing the target signal to clutter ratio. Singular Value Decomposition (SVD) signal processing techniques were then applied to the micro-Doppler signatures from different individuals. Features from the SVD information have been used to classify whether the person was carrying an item or walking free handed. Excellent performance of the classifier was achieved in this challenging scenario with accuracies up to 94%, suggesting that future through wall radar sensors may have the ability to reliably recognize many different types of activities in TTW scenarios using these techniques. PMID:27589760
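The feature-extraction step can be sketched as follows; the window lengths and the particular features taken from the SVD are illustrative assumptions.

    import numpy as np
    from scipy.signal import spectrogram

    def svd_features(iq, fs, k=3):
        # Micro-Doppler signature: short-time spectrum of the complex
        # radar return, then an SVD of the time-frequency matrix. The
        # leading singular values/vectors summarize the limb motion.
        f, t, S = spectrogram(iq, fs=fs, nperseg=256, noverlap=192,
                              return_onesided=False)
        U, s, Vt = np.linalg.svd(np.abs(S), full_matrices=False)
        return np.concatenate([s[:k] / s.sum(),    # normalized SV energies
                               U[:, 0], Vt[0]])    # dominant f/t profiles

Feeding such feature vectors to any standard classifier separates free-handed from carrying gaits; the paper reports accuracies up to 94% with features of this kind.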
Application of Conjugate Gradient methods to tidal simulation
Barragy, E.; Carey, G.F.; Walters, R.A.
1993-01-01
A harmonic decomposition technique is applied to the shallow water equations to yield a complex, nonsymmetric, nonlinear, Helmholtz type problem for the sea surface and an accompanying complex, nonlinear diagonal problem for the velocities. The equation for the sea surface is linearized using successive approximation and then discretized with linear, triangular finite elements. The study focuses on applying iterative methods to solve the resulting complex linear systems. The comparative evaluation includes both standard iterative methods for the real subsystems and complex versions of the well known Bi-Conjugate Gradient and Bi-Conjugate Gradient Squared methods. Several Incomplete LU type preconditioners are discussed, and the effects of node ordering, rejection strategy, domain geometry and Coriolis parameter (affecting asymmetry) are investigated. Implementation details for the complex case are discussed. Performance studies are presented and comparisons made with a frontal solver.
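The flavor of the complex iterative solves can be reproduced with SciPy on a toy tridiagonal stand-in for the finite element matrix; the coefficient value, the unequal off-diagonals mimicking Coriolis-induced asymmetry, and the ILU drop tolerance are all assumptions.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 2000
    k2 = (0.05 + 0.02j)**2                        # assumed complex coefficient
    A = sp.diags([-1.05 * np.ones(n - 1),         # asymmetry stands in for Coriolis
                  (2.0 - k2) * np.ones(n),
                  -0.95 * np.ones(n - 1)],
                 offsets=[-1, 0, 1], format="csc", dtype=complex)
    b = np.ones(n, dtype=complex)

    ilu = spla.spilu(A, drop_tol=1e-4)            # incomplete LU preconditioner
    M = spla.LinearOperator(A.shape, ilu.solve, dtype=complex)
    x, info = spla.bicgstab(A, b, M=M)            # info == 0 on convergence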
Control of birhythmicity: A self-feedback approach
NASA Astrophysics Data System (ADS)
Biswas, Debabrata; Banerjee, Tanmoy; Kurths, Jürgen
2017-06-01
Birhythmicity occurs in many natural and artificial systems. In this paper, we propose a self-feedback scheme to control birhythmicity. To establish the efficacy and generality of the proposed control scheme, we apply it on three birhythmic oscillators from diverse fields of natural science, namely, an energy harvesting system, the p53-Mdm2 network for protein genesis (the OAK model), and a glycolysis model (modified Decroly-Goldbeter model). Using the harmonic decomposition technique and energy balance method, we derive the analytical conditions for the control of birhythmicity. A detailed numerical bifurcation analysis in the parameter space establishes that the control scheme is capable of eliminating birhythmicity and it can also induce transitions between different forms of bistability. As the proposed control scheme is quite general, it can be applied for control of several real systems, particularly in biochemical and engineering systems.
NASA Astrophysics Data System (ADS)
Yahyaei, Mohsen; Bashiri, Mahdi
2017-12-01
The hub location problem arises in a variety of domains such as transportation and telecommunication systems. In many real-world situations, hub facilities are subject to disruption. This paper deals with the multiple allocation hub location problem in the presence of facility failures. To model the problem, a two-stage stochastic formulation is developed. In the proposed model, the number of scenarios grows exponentially with the number of facilities. To alleviate this issue, two approaches are applied simultaneously. The first is sample average approximation (SAA), which approximates the two-stage stochastic problem via sampling. Then, by applying the multi-cut Benders decomposition approach, computational performance is enhanced. Numerical studies show the effective performance of the SAA in terms of optimality gap for small problem instances with numerous scenarios. Moreover, the performance of multi-cut Benders decomposition is assessed through comparison with the classic version, and the computational results reveal the superiority of the multi-cut approach regarding computational time and number of iterations.
Use of zerotree coding in a high-speed pyramid image multiresolution decomposition
NASA Astrophysics Data System (ADS)
Vega-Pineda, Javier; Cabrera, Sergio D.; Lucero, Aldo
1995-03-01
A Zerotree (ZT) coding scheme is applied as a post-processing stage to avoid transmitting zero data in the High-Speed Pyramid (HSP) image compression algorithm. This algorithm has features that increase the capability of the ZT coding to give very high compression rates. In this paper the impact of the ZT coding scheme is analyzed and quantified. The HSP algorithm creates a discrete-time multiresolution analysis based on a hierarchical decomposition technique that is a subsampling pyramid. The filters used to create the image residues and expansions can be related to wavelet representations. According to the pixel coordinates and the level in the pyramid, N² different wavelet basis functions of various sizes and rotations are linearly combined. The HSP algorithm is computationally efficient because of the simplicity of the required operations and, as a consequence, it can be implemented very easily with VLSI hardware. This is the HSP's principal advantage over other compression schemes. The ZT coding technique transforms the different quantized image residue levels created by the HSP algorithm into a bit stream. The use of ZTs further compresses the already compressed image by taking advantage of parent-child relationships (trees) between the pixels of the residue images at different levels of the pyramid. Zerotree coding uses the links between zeros along the hierarchical structure of the pyramid to avoid transmitting those that form all-zero branches. Compression performance and algorithm complexity of the combined HSP-ZT method are compared with those of the JPEG standard technique.
Postmortem validation of breast density using dual-energy mammography
Molloi, Sabee; Ducote, Justin L.; Ding, Huanjun; Feig, Stephen A.
2014-01-01
Purpose: Mammographic density has been shown to be an indicator of breast cancer risk and also reduces the sensitivity of screening mammography. Currently, there is no accepted standard for measuring breast density. Dual energy mammography has been proposed as a technique for accurate measurement of breast density. The purpose of this study is to validate its accuracy in postmortem breasts and compare it with other existing techniques. Methods: Forty postmortem breasts were imaged using a dual energy mammography system. Glandular and adipose equivalent phantoms of uniform thickness were used to calibrate a dual energy basis decomposition algorithm. Dual energy decomposition was applied after scatter correction to calculate breast density. Breast density was also estimated using radiologist reader assessment, standard histogram thresholding and a fuzzy C-mean algorithm. Chemical analysis was used as the reference standard to assess the accuracy of different techniques to measure breast composition. Results: Breast density measurements using radiologist reader assessment, standard histogram thresholding, fuzzy C-mean algorithm, and dual energy were in good agreement with the measured fibroglandular volume fraction using chemical analysis. The standard error estimates using radiologist reader assessment, standard histogram thresholding, fuzzy C-mean, and dual energy were 9.9%, 8.6%, 7.2%, and 4.7%, respectively. Conclusions: The results indicate that dual energy mammography can be used to accurately measure breast density. The variability in breast density estimation using dual energy mammography was lower than reader assessment rankings, standard histogram thresholding, and fuzzy C-mean algorithm. Improved quantification of breast density is expected to further enhance its utility as a risk factor for breast cancer. PMID:25086548
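At its core, calibrated dual-energy decomposition is a per-pixel 2x2 linear solve. A minimal sketch, with the calibration matrix values purely illustrative:

    import numpy as np

    def fibroglandular_fraction(logI_low, logI_high, mu):
        # mu: 2x2 calibration matrix of effective attenuation
        # coefficients; rows = (low, high) energy, columns =
        # (glandular, adipose), measured from the phantom series.
        t = np.einsum("ij,jyx->iyx", np.linalg.inv(mu),
                      np.stack([logI_low, logI_high]))
        tg, ta = t                                 # basis-material thicknesses
        return tg / np.maximum(tg + ta, 1e-9)      # volumetric density map

    mu = np.array([[0.80, 0.50],                   # illustrative values only
                   [0.45, 0.30]])

Here logI_low and logI_high are the scatter-corrected log-attenuation images for the two energy bins; the output is the fibroglandular volume fraction that the study compares against chemical analysis.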
Developing Classroom Research Modules Through In Depth Understanding of the Research Process
NASA Astrophysics Data System (ADS)
Guilbert, K.; Soong, J.; Cotrufo, M.
2012-12-01
Students of low income families often have fewer opportunities, especially in regards to science, than their peers of higher socioeconomic upbringing. This opportunity deficit can stifle their interest in science before it begins. As an elementary teacher at a Title 1 school, I strive to enrich the scientific opportunities for my students. I gained exposure to soil science by participating in a litter decomposition experiment and the Summer Soil Institute at Colorado State University through an NSF funded Research Experience for Teachers program (RET). My participation in the RET provided me with the tools necessary to implement in depth research in my 5th grade classroom. A teacher's greatest tool is having a deep understanding of a topic prior to relaying it to students. This depth of knowledge needs to be coupled with a general understanding of the research process and techniques that are being used by contemporary scientists. Applying these ideas, I created a long-term decomposition module for my students that can be used as a model for teachers to create meaningful research opportunities for students.
Urban Land: Study of Surface Run-off Composition and Its Dynamics
NASA Astrophysics Data System (ADS)
Palagin, E. D.; Gridneva, M. A.; Bykova, P. G.
2017-11-01
The qualitative composition of urban land surface run-off is liable to significant variations. To study surface run-off dynamics, to examine its behaviour and to discover the reasons for these variations, it is appropriate to apply time series analysis techniques. A seasonal decomposition procedure was applied to the time series of monthly dynamics, with an annual cycle of seasonal variations, using a multiplicative model. The results of the quantitative chemical analysis of surface wastewater of the 22nd Partsjezd outlet in Samara for the period of 2004-2016 were used as basic data. As a result of the analysis, a seasonal pattern of variations in the composition of surface run-off in Samara was identified. Seasonal indices were defined for 15 wastewater quality indicators: BOD (full), suspended solids, mineralization, chlorides, sulphates, ammonium ion, nitrite anion, nitrate anion, phosphates (phosphorus), total iron, copper, zinc, aluminium, petroleum products, and synthetic surfactants (anion-active). Based on the seasonal decomposition of the time series data, the contribution of the trend, seasonal and random components to the variability of the surface run-off indicators was estimated.
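In Python terms, the procedure corresponds to a standard multiplicative decomposition with a 12-month period; the file name below is hypothetical.

    import pandas as pd
    from statsmodels.tsa.seasonal import seasonal_decompose

    series = pd.read_csv("runoff_bod.csv", index_col=0,   # hypothetical file
                         parse_dates=True).squeeze("columns")
    result = seasonal_decompose(series, model="multiplicative", period=12)
    seasonal_indices = result.seasonal[:12]   # one index per calendar month
    trend, residual = result.trend, result.resid

The seasonal component of the multiplicative model is dimensionless, which is what makes it directly usable as a set of seasonal indices per indicator.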
Geometric decompositions of collective motion
NASA Astrophysics Data System (ADS)
Mischiati, Matteo; Krishnaprasad, P. S.
2017-04-01
Collective motion in nature is a captivating phenomenon. Revealing the underlying mechanisms, which are of biological and theoretical interest, will require empirical data, modelling and analysis techniques. Here, we contribute a geometric viewpoint, yielding a novel method of analysing movement. Snapshots of collective motion are portrayed as tangent vectors on configuration space, with length determined by the total kinetic energy. Using the geometry of fibre bundles and connections, this portrait is split into orthogonal components each tangential to a lower dimensional manifold derived from configuration space. The resulting decomposition, when interleaved with classical shape space construction, is categorized into a family of kinematic modes, including rigid translations, rigid rotations, inertia tensor transformations, expansions and compressions. Snapshots of empirical data from natural collectives can be allocated to these modes and weighted by fractions of total kinetic energy. Such quantitative measures can provide insight into the variation of the driving goals of a collective, as illustrated by applying these methods to a publicly available dataset of pigeon flocking. The geometric framework may also be profitably employed in the control of artificial systems of interacting agents such as robots.
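A simplified numerical version of the mode split, covering only the rigid translation, rigid rotation, and residual (shape-changing) parts, is sketched below; equal masses and a least-squares angular velocity are assumptions.

    import numpy as np

    def energy_fractions(x, v, m=None):
        # x, v: (N, 3) agent positions and velocities.
        N = len(x)
        m = np.ones(N) if m is None else m
        vc = np.average(v, axis=0, weights=m)            # translation mode
        r = x - np.average(x, axis=0, weights=m)
        # best-fit angular velocity about the centroid: solve I w = L
        L = np.sum(m[:, None] * np.cross(r, v - vc), axis=0)
        rr = np.einsum("ni,ni->n", r, r)
        I = np.einsum("n,nij->ij", m,
                      rr[:, None, None] * np.eye(3)
                      - np.einsum("ni,nj->nij", r, r))
        w = np.linalg.solve(I, L)
        v_rot = np.cross(w, r)                           # rotation mode
        v_res = v - vc - v_rot                           # remaining modes
        ke = lambda u: 0.5 * np.sum(m[:, None] * u**2)
        tot = ke(v)
        return (ke(np.broadcast_to(vc, v.shape)) / tot,
                ke(v_rot) / tot, ke(v_res) / tot)

With the mass-weighted inner product these three pieces are mutually orthogonal, so the fractions sum to one; the full framework refines v_res further into inertia-tensor, expansion/compression and shape modes.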
NASA Astrophysics Data System (ADS)
Ghoraani, Behnaz; Krishnan, Sridhar
2009-12-01
The number of people affected by speech problems is increasing as the modern world places increasing demands on the human voice via mobile telephones, voice recognition software, and interpersonal verbal communications. In this paper, we propose a novel methodology for automatic pattern classification of pathological voices. The main contribution of this paper is extraction of meaningful and unique features using Adaptive time-frequency distribution (TFD) and nonnegative matrix factorization (NMF). We construct Adaptive TFD as an effective signal analysis domain to dynamically track the nonstationarity in the speech and utilize NMF as a matrix decomposition (MD) technique to quantify the constructed TFD. The proposed method extracts meaningful and unique features from the joint TFD of the speech, and automatically identifies and measures the abnormality of the signal. Depending on the abnormality measure of each signal, we classify the signal into normal or pathological. The proposed method is applied on the Massachusetts Eye and Ear Infirmary (MEEI) voice disorders database which consists of 161 pathological and 51 normal speakers, and an overall classification accuracy of 98.6% was achieved.
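A reduced sketch of the quantification stage is below, with an ordinary spectrogram standing in for the paper's adaptive TFD and sklearn's NMF as the matrix decomposition; window sizes and rank are assumptions.

    import numpy as np
    from scipy.signal import spectrogram
    from sklearn.decomposition import NMF

    def tfd_nmf_features(x, fs, rank=2):
        # Factorize the nonnegative time-frequency matrix into spectral
        # bases W and temporal activations H.
        f, t, S = spectrogram(x, fs=fs, nperseg=512, noverlap=384)
        model = NMF(n_components=rank, init="nndsvd", max_iter=500)
        W = model.fit_transform(S)     # (freq, rank) spectral bases
        H = model.components_          # (rank, time) activations
        return W, H

Departures of the basis/activation pairs from a smooth harmonic structure can then be scored as the abnormality measure that drives the normal/pathological decision.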
X-ray Thomson Scattering in Warm Dense Matter without the Chihara Decomposition.
Baczewski, A D; Shulenburger, L; Desjarlais, M P; Hansen, S B; Magyar, R J
2016-03-18
X-ray Thomson scattering is an important experimental technique used to measure the temperature, ionization state, structure, and density of warm dense matter (WDM). The fundamental property probed in these experiments is the electronic dynamic structure factor. In most models, this is decomposed into three terms [J. Chihara, J. Phys. F 17, 295 (1987)] representing the response of tightly bound, loosely bound, and free electrons. Accompanying this decomposition is the classification of electrons as either bound or free, which is useful for gapped and cold systems but becomes increasingly questionable as temperatures and pressures increase into the WDM regime. In this work we provide unambiguous first principles calculations of the dynamic structure factor of warm dense beryllium, independent of the Chihara form, by treating bound and free states under a single formalism. The computational approach is real-time finite-temperature time-dependent density functional theory (TDDFT) being applied here for the first time to WDM. We compare results from TDDFT to Chihara-based calculations for experimentally relevant conditions in shock-compressed beryllium.
Using Microwave Sample Decomposition in Undergraduate Analytical Chemistry
NASA Astrophysics Data System (ADS)
Griff Freeman, R.; McCurdy, David L.
1998-08-01
A shortcoming of many undergraduate classes in analytical chemistry is that students receive little exposure to sample preparation in chemical analysis. This paper reports the progress made in introducing microwave sample decomposition into several quantitative analysis experiments at Truman State University. Two experiments being performed in our current laboratory rotation include closed vessel microwave decomposition applied to the classical gravimetric determination of nickel and the determination of sodium in snack foods by flame atomic emission spectrometry. A third lab, using open-vessel microwave decomposition for the Kjeldahl nitrogen determination is now ready for student trial. Microwave decomposition reduces the time needed to complete these experiments and significantly increases the student awareness of the importance of sample preparation in quantitative chemical analyses, providing greater breadth and realism in the experiments.
Video rate morphological processor based on a redundant number representation
NASA Astrophysics Data System (ADS)
Kuczborski, Wojciech; Attikiouzel, Yianni; Crebbin, Gregory A.
1992-03-01
This paper presents a video rate morphological processor for automated visual inspection of printed circuit boards, integrated circuit masks, and other complex objects. Inspection algorithms are based on gray-scale mathematical morphology. The hardware complexity of the known methods for real-time implementation of gray-scale morphology, the umbra transform and the threshold decomposition, has prompted us to propose a novel technique which applies an arithmetic system without carry propagation. After considering several arithmetic systems, a redundant number representation has been selected for implementation. Two options are analyzed here. The first is a pure signed digit number representation (SDNR) with the base of 4. The second option is a combination of the base-2 SDNR (to represent gray levels of images) and the conventional twos complement code (to represent gray levels of structuring elements). The operating principle of the morphological processor is based on the concept of the digit level systolic array. Individual processing units and small memory elements create a pipeline. The memory elements store current image windows (kernels). All operation primitives of the processing units apply a unified direction of digit processing: most significant digit first (MSDF). The implementation technology is based on field programmable gate arrays by Xilinx. This paper justifies a new approach to logic design: decomposition of Boolean functions instead of Boolean minimization.
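For contrast with the hardware view, the underlying gray-scale operations are one-liners in software; a typical inspection primitive is the opening residue, which highlights features smaller than the structuring element (sizes here are illustrative).

    import numpy as np
    from scipy import ndimage

    def opening_residue(image, size=5):
        # Gray-scale opening suppresses bright structures smaller than
        # the structuring element; the residue isolates them, a common
        # defect cue in PCB and IC-mask inspection.
        opened = ndimage.grey_opening(image, size=(size, size))
        return image - opened

The point of the SDNR/systolic design above is to evaluate exactly such erosions and dilations at video rate, digit by digit, without carry propagation.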
Standing wave contributions to the linear interference effect in stratosphere-troposphere coupling
NASA Astrophysics Data System (ADS)
Watt-Meyer, Oliver; Kushner, Paul
2014-05-01
A body of literature by Hayashi and others [Hayashi 1973, 1977, 1979; Pratt, 1976] developed a decomposition of the wavenumber-frequency spectrum into standing and travelling waves. These techniques directly decompose the power spectrum—that is, the amplitudes squared—into standing and travelling parts and therefore, incorrectly, do not allow for a term representing the covariance between these waves. We propose a simple decomposition based on the 2D Fourier transform which allows one to directly compute the variance of the standing and travelling waves, as well as the covariance between them. Applying this decomposition to geopotential height anomalies in the Northern Hemisphere winter, we show the dominance of standing waves for planetary wavenumbers 1 through 3, especially in the stratosphere, and that wave-1 anomalies have a significant westward travelling component in the high-latitude (60N to 80N) troposphere. Variations in the relative zonal phasing between a wave anomaly and the background climatological wave pattern—the "linear interference" effect—are known to explain a large part of the planetary wave driving of the polar stratosphere in both hemispheres. While the linear interference effect is robust across observations, models of varying degrees of complexity, and responses to various types of perturbations, it is not well understood dynamically. We use the above-described decomposition into standing and travelling waves to investigate the drivers of linear interference. We find that the linear part of the wave activity flux is primarily driven by the standing waves at all vertical levels. This can be understood by noting that the longitudinal positions of the antinodes of the standing waves are typically close to being aligned with the maximum and minimum of the background climatology. We discuss implications for the predictability of wave activity flux, and hence polar vortex strength variability.
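The essence of the 2D Fourier decomposition can be sketched as follows; treating the common amplitude of oppositely travelling components as the standing part is the simplification made here, and an even-size grid is assumed.

    import numpy as np

    def standing_travelling(z):
        # z: anomaly field, shape (time, longitude), both sizes even.
        Z = np.fft.fft2(z) / z.size
        T, K = z.shape
        east = np.abs(Z[1:T // 2, 1:K // 2])            # (+freq, +k) quadrant
        west = np.abs(Z[1:T // 2, -1:-(K // 2):-1])     # mirrored (+freq, -k)
        standing = np.minimum(east, west)               # common amplitude
        return standing, east - standing, west - standing

Working with the complex coefficients instead of amplitudes is what additionally yields the standing-travelling covariance term that the Hayashi-type power-spectrum methods discard.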
Yin, Xiao-Li; Gu, Hui-Wen; Liu, Xiao-Lu; Zhang, Shan-Hui; Wu, Hai-Long
2018-03-05
Multiway calibration in combination with spectroscopic technique is an attractive tool for online or real-time monitoring of target analyte(s) in complex samples. However, how to choose a suitable multiway calibration method for the resolution of spectroscopic-kinetic data is a troubling problem in practical application. In this work, for the first time, three-way and four-way fluorescence-kinetic data arrays were generated during the real-time monitoring of the hydrolysis of irinotecan (CPT-11) in human plasma by excitation-emission matrix fluorescence. Alternating normalization-weighted error (ANWE) and alternating penalty trilinear decomposition (APTLD) were used as three-way calibration for the decomposition of the three-way kinetic data array, whereas alternating weighted residual constraint quadrilinear decomposition (AWRCQLD) and alternating penalty quadrilinear decomposition (APQLD) were applied as four-way calibration to the four-way kinetic data array. The quantitative results of the two kinds of calibration models were fully compared from the perspective of predicted real-time concentrations, spiked recoveries of initial concentration, and analytical figures of merit. The comparison study demonstrated that both three-way and four-way calibration models could achieve real-time quantitative analysis of the hydrolysis of CPT-11 in human plasma under certain conditions. However, it was also found that both of them possess some critical advantages and shortcomings during the process of dynamic analysis. The conclusions obtained in this paper can provide some helpful guidance for the reasonable selection of multiway calibration models to achieve the real-time quantitative analysis of target analyte(s) in complex dynamic systems. Copyright © 2017 Elsevier B.V. All rights reserved.
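Both calibration families fit the same low-rank multilinear model, so a generic PARAFAC decomposition conveys the structure; the tensorly call below is a stand-in for ANWE/APTLD (which differ in their loss functions and constraints), and the array sizes are made up.

    import numpy as np
    import tensorly as tl
    from tensorly.decomposition import parafac

    rng = np.random.default_rng(0)
    X = tl.tensor(rng.random((40, 60, 12)))    # (excitation, emission, time)
    weights, factors = parafac(X, rank=3, n_iter_max=500)
    excitation, emission, kinetics = factors   # one loading matrix per mode

For the four-way case the same model gains a fourth mode, which is what the quadrilinear methods (AWRCQLD, APQLD) exploit to stabilize the decomposition.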
Li, Xiaoyan; Holobar, Ales; Gazzoni, Marco; Merletti, Roberto; Rymer, William Zev; Zhou, Ping
2015-05-01
Recent advances in high-density surface electromyogram (EMG) decomposition have made it a feasible task to discriminate single motor unit activity from surface EMG interference patterns, thus providing a noninvasive approach for examination of motor unit control properties. In the current study, we applied high-density surface EMG recording and decomposition techniques to assess motor unit firing behavior alterations poststroke. Surface EMG signals were collected using a 64-channel 2-D electrode array from the paretic and contralateral first dorsal interosseous (FDI) muscles of nine hemiparetic stroke subjects at different isometric discrete contraction levels between 2 to 10 N with a 2 N increment step. Motor unit firing rates were extracted through decomposition of the high-density surface EMG signals and compared between paretic and contralateral muscles. Across the nine tested subjects, paretic FDI muscles showed decreased motor unit firing rates compared with contralateral muscles at different contraction levels. Regression analysis indicated a linear relation between the mean motor unit firing rate and the muscle contraction level for both paretic and contralateral muscles (p < 0.001), with the former demonstrating a lower increment rate (0.32 pulses per second (pps)/N) compared with the latter (0.67 pps/N). The coefficient of variation (averaged over the contraction levels) of the motor unit firing rates for the paretic muscles (0.21 ± 0.012) was significantly higher than for the contralateral muscles (0.17 ± 0.014) (p < 0.05). This study provides direct evidence of motor unit firing behavior alterations poststroke using surface EMG, which can be an important factor contributing to hemiparetic muscle weakness.
NASA Technical Reports Server (NTRS)
Chavez, Patrick F.
1987-01-01
The effort at Sandia National Labs. on the methodologies and techniques being used to generate strict hexahedral finite element meshes from a solid model is described. The functionality of the modeler is used to decompose the solid into a set of nonintersecting meshable finite element primitives. The description of the decomposition is exported, via a Boundary Representative format, to the meshing program which uses the information for complete finite element model specification. Particular features of the program are discussed in some detail along with future plans for development which includes automation of the decomposition using artificial intelligence techniques.
Jaber, Abobaker M; Ismail, Mohd Tahir; Altaher, Alsaidi M
2014-01-01
This paper mainly forecasts the daily closing price of stock markets. We propose a two-stage technique that combines the empirical mode decomposition (EMD) with nonparametric methods of local linear quantile (LLQ) regression. We use the proposed technique, EMD-LLQ, to forecast two stock index time series. Detailed experiments are implemented for the proposed method, in which the EMD-LLQ, EMD, and Holt-Winters methods are compared. The proposed EMD-LLQ model is determined to be superior to the EMD and Holt-Winters methods in predicting the stock closing prices.
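Stage one is reproducible with the PyEMD package (pip name EMD-signal); the input file is hypothetical, and fitting a smoother per component and summing the component forecasts follows the paper's scheme, with LLQ as the smoother.

    import numpy as np
    from PyEMD import EMD

    prices = np.loadtxt("close.csv")              # hypothetical input series
    emd = EMD()
    emd.emd(prices)
    imfs, residue = emd.get_imfs_and_residue()    # oscillatory modes + trend

Each IMF is narrow-band and far smoother than the raw price series, which is why simple nonparametric fits per IMF can outperform a single model fit to the raw data.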
Fast Sampling Gas Chromatography (GC) System for Speciation in a Shock Tube
2016-10-31
Validation experiments for the GC sampling system produced consistent results for cold shock experiments, and both techniques capture similar ethylene decomposition rates for temperature-dependent shock experiments.
Comparison of Techniques for Sampling Adult Necrophilous Insects From Pig Carcasses.
Cruise, Angela; Hatano, Eduardo; Watson, David W; Schal, Coby
2018-02-06
Studies of the pre-colonization interval and mechanisms driving necrophilous insect ecological succession depend on effective sampling of adult insects and knowledge of their diel and successional activity patterns. The number of insects trapped, their diversity, and diel periodicity were compared with four sampling methods on neonate pigs. Sampling method, time of day and decomposition age of the pigs significantly affected the number of insects sampled from pigs. We also found significant interactions of sampling method and decomposition day, time of sampling and decomposition day. No single method was superior to the other methods during all three decomposition days. Sampling times after noon yielded the largest samples during the first 2 d of decomposition. On day 3 of decomposition however, all sampling times were equally effective. Therefore, to maximize insect collections from neonate pigs, the method used to sample must vary by decomposition day. The suction trap collected the most species-rich samples, but sticky trap samples were the most diverse, when both species richness and evenness were factored into a Shannon diversity index. Repeated sampling during the noon to 18:00 hours period was most effective to obtain the maximum diversity of trapped insects. The integration of multiple sampling techniques would most effectively sample the necrophilous insect community. However, because all four tested methods were deficient at sampling beetle species, future work should focus on optimizing the most promising methods, alone or in combinations, and incorporate hand-collections of beetles. © The Author(s) 2018. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Image-based spectral distortion correction for photon-counting x-ray detectors
Ding, Huanjun; Molloi, Sabee
2012-01-01
Purpose: To investigate the feasibility of using an image-based method to correct for distortions induced by various artifacts in the x-ray spectrum recorded with photon-counting detectors for their application in breast computed tomography (CT). Methods: The polyenergetic incident spectrum was simulated with the tungsten anode spectral model using the interpolating polynomials (TASMIP) code and carefully calibrated to match the x-ray tube in this study. Experiments were performed on a Cadmium-Zinc-Telluride (CZT) photon-counting detector with five energy thresholds. Energy bins were adjusted to evenly distribute the recorded counts above the noise floor. BR12 phantoms of various thicknesses were used for calibration. A nonlinear function was selected to fit the count correlation between the simulated and the measured spectra in the calibration process. To evaluate the proposed spectral distortion correction method, an empirical fitting derived from the calibration process was applied on the raw images recorded for polymethyl methacrylate (PMMA) phantoms of 8.7, 48.8, and 100.0 mm. Both the corrected counts and the effective attenuation coefficient were compared to the simulated values for each of the five energy bins. The feasibility of applying the proposed method to quantitative material decomposition was tested using a dual-energy imaging technique with a three-material phantom that consisted of water, lipid, and protein. The performance of the spectral distortion correction method was quantified using the relative root-mean-square (RMS) error with respect to the expected values from simulations or areal analysis of the decomposition phantom. Results: The implementation of the proposed method reduced the relative RMS error of the output counts in the five energy bins with respect to the simulated incident counts from 23.0%, 33.0%, and 54.0% to 1.2%, 1.8%, and 7.7% for 8.7, 48.8, and 100.0 mm PMMA phantoms, respectively. The accuracy of the effective attenuation coefficient of PMMA estimate was also improved with the proposed spectral distortion correction. Finally, the relative RMS error of water, lipid, and protein decompositions in dual-energy imaging was significantly reduced from 53.4% to 6.8% after correction was applied. Conclusions: The study demonstrated that dramatic distortions in the recorded raw image yielded from a photon-counting detector could be expected, which presents great challenges for applying the quantitative material decomposition method in spectral CT. The proposed semi-empirical correction method can effectively reduce these errors caused by various artifacts, including pulse pileup and charge sharing effects. Furthermore, rather than detector-specific simulation packages, the method requires a relatively simple calibration process and knowledge about the incident spectrum. Therefore, it may be used as a generalized procedure for the spectral distortion correction of different photon-counting detectors in clinical breast CT systems. PMID:22482608
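The calibration step amounts to fitting, per energy bin, a nonlinear map from measured to simulated incident counts over the phantom-thickness series and applying it to the raw images. A sketch with an assumed quadratic form and made-up count values:

    import numpy as np
    from scipy.optimize import curve_fit

    def correction(n, a, b, c):
        # assumed functional form; the paper states only that a
        # nonlinear function was selected for the count correlation
        return a * n**2 + b * n + c

    measured = np.array([1.2e4, 2.3e4, 4.1e4, 6.6e4, 9.0e4])  # illustrative
    incident = np.array([1.3e4, 2.6e4, 5.0e4, 8.9e4, 1.3e5])  # illustrative
    popt, _ = curve_fit(correction, measured, incident)
    corrected = correction(measured, *popt)   # apply per bin to raw images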
NASA Astrophysics Data System (ADS)
Li, Yongfu; Chen, Na; Harmon, Mark E.; Li, Yuan; Cao, Xiaoyan; Chappell, Mark A.; Mao, Jingdong
2015-10-01
A feedback between decomposition and litter chemical composition occurs with decomposition altering composition that in turn influences the decomposition rate. Elucidating the temporal pattern of chemical composition is vital to understand this feedback, but the effects of plant species and climate on chemical changes remain poorly understood, especially over multiple years. In a 10-year decomposition experiment with litter of four species (Acer saccharum, Drypetes glauca, Pinus resinosa, and Thuja plicata) from four sites that range from the arctic to tropics, we determined the abundance of 11 litter chemical constituents that were grouped into waxes, carbohydrates, lignin/tannins, and proteins/peptides using advanced 13C solid-state NMR techniques. Decomposition generally led to an enrichment of waxes and a depletion of carbohydrates, whereas the changes of other chemical constituents were inconsistent. Inconsistent convergence in chemical compositions during decomposition was observed among different litter species across a range of site conditions, whereas one litter species converged under different climate conditions. Our data clearly demonstrate that plant species rather than climate greatly alters the temporal pattern of litter chemical composition, suggesting the decomposition-chemistry feedback varies among different plant species.
Li, Yongfu; Chen, Na; Harmon, Mark E.; Li, Yuan; Cao, Xiaoyan; Chappell, Mark A.; Mao, Jingdong
2015-01-01
A feedback between decomposition and litter chemical composition occurs with decomposition altering composition that in turn influences the decomposition rate. Elucidating the temporal pattern of chemical composition is vital to understand this feedback, but the effects of plant species and climate on chemical changes remain poorly understood, especially over multiple years. In a 10-year decomposition experiment with litter of four species (Acer saccharum, Drypetes glauca, Pinus resinosa, and Thuja plicata) from four sites that range from the arctic to tropics, we determined the abundance of 11 litter chemical constituents that were grouped into waxes, carbohydrates, lignin/tannins, and proteins/peptides using advanced 13C solid-state NMR techniques. Decomposition generally led to an enrichment of waxes and a depletion of carbohydrates, whereas the changes of other chemical constituents were inconsistent. Inconsistent convergence in chemical compositions during decomposition was observed among different litter species across a range of site conditions, whereas one litter species converged under different climate conditions. Our data clearly demonstrate that plant species rather than climate greatly alters the temporal pattern of litter chemical composition, suggesting the decomposition-chemistry feedback varies among different plant species. PMID:26515033
Cerebrospinal fluid PCR analysis and biochemistry in bodies with severe decomposition.
Palmiere, Cristian; Vanhaebost, Jessica; Ventura, Francesco; Bonsignore, Alessandro; Bonetti, Luca Reggiani
2015-02-01
The aim of this study was to assess whether Neisseria meningitidis, Listeria monocytogenes, Streptococcus pneumoniae and Haemophilus influenzae can be identified using the polymerase chain reaction technique in the cerebrospinal fluid of severely decomposed bodies with known, noninfectious causes of death or whether postmortem changes can lead to false positive results and thus erroneous diagnostic information. Biochemical investigations, postmortem bacteriology and real-time polymerase chain reaction analysis in cerebrospinal fluid were performed in a series of medico-legal autopsies that included noninfectious causes of death with decomposition, bacterial meningitis without decomposition, bacterial meningitis with decomposition, low respiratory tract infections with decomposition and abdominal infections with decomposition. In noninfectious causes of death with decomposition, postmortem investigations failed to reveal results consistent with generalized inflammation or bacterial infections at the time of death. Real-time polymerase chain reaction analysis in cerebrospinal fluid did not identify the studied bacteria in any of these cases. The results of this study highlight the usefulness of molecular approaches in bacteriology as well as the use of alternative biological samples in postmortem biochemistry in order to obtain suitable information even in corpses with severe decompositional changes. Copyright © 2014 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
Buchanan, Piers; Soper, Alan K; Thompson, Helen; Westacott, Robin E; Creek, Jefferson L; Hobson, Greg; Koh, Carolyn A
2005-10-22
Neutron diffraction with HD isotope substitution has been used to study the formation and decomposition of the methane clathrate hydrate. Using this atomistic technique coupled with simultaneous gas consumption measurements, we have successfully tracked the formation of the sI methane hydrate from a water/gas mixture and then the subsequent decomposition of the hydrate from initiation to completion. These studies demonstrate that the application of neutron diffraction with simultaneous gas consumption measurements provides a powerful method for studying the clathrate hydrate crystal growth and decomposition. We have also used neutron diffraction to examine the water structure before the hydrate growth and after the hydrate decomposition. From the neutron-scattering curves and the empirical potential structure refinement analysis of the data, we find that there is no significant difference between the structure of water before the hydrate formation and the structure of water after the hydrate decomposition. Nor is there any significant change to the methane hydration shell. These results are discussed in the context of widely held views on the existence of memory effects after the hydrate decomposition.
The trait contribution to wood decomposition rates of 15 Neotropical tree species.
van Geffen, Koert G; Poorter, Lourens; Sass-Klaassen, Ute; van Logtestijn, Richard S P; Cornelissen, Johannes H C
2010-12-01
The decomposition of dead wood is a critical uncertainty in models of the global carbon cycle. Despite this, relatively few studies have focused on dead wood decomposition, with a strong bias to higher latitudes. Especially the effect of interspecific variation in species traits on differences in wood decomposition rates remains unknown. In order to fill these gaps, we applied a novel method to study long-term wood decomposition of 15 tree species in a Bolivian semi-evergreen tropical moist forest. We hypothesized that interspecific differences in species traits are important drivers of variation in wood decomposition rates. Wood decomposition rates (fractional mass loss) varied between 0.01 and 0.31 yr⁻¹. We measured 10 different chemical, anatomical, and morphological traits for all species. The species' average traits were useful predictors of wood decomposition rates, particularly the average diameter (dbh) of the tree species (R² = 0.41). Lignin concentration further increased the proportion of explained interspecific variation in wood decomposition (both negative relations, cumulative R² = 0.55), although it did not significantly explain variation in wood decomposition rates if considered alone. When dbh values of the actual dead trees sampled for decomposition rate determination were used as a predictor variable, the final model (including dead tree dbh and lignin concentration) explained even more variation in wood decomposition rates (R² = 0.71), underlining the importance of dbh in wood decomposition. Other traits, including wood density, wood anatomical traits, macronutrient concentrations, and the amount of phenolic extractives, could not significantly explain the variation in wood decomposition rates. The surprising results of this multi-species study, in which for the first time a large set of traits is explicitly linked to wood decomposition rates, merit further testing in other forest ecosystems.
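The final regression is an ordinary two-predictor linear model; the sketch below recreates its shape on synthetic numbers (all values invented), not the study's data.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    dbh = rng.uniform(5, 60, 15)                  # dead-tree diameter (invented)
    lignin = rng.uniform(15, 35, 15)              # lignin concentration (invented)
    k = 0.25 - 0.002 * dbh - 0.004 * lignin + rng.normal(0, 0.02, 15)
    X = sm.add_constant(np.column_stack([dbh, lignin]))
    fit = sm.OLS(k, X).fit()                      # k ~ dbh + lignin
    print(fit.rsquared)                           # cf. R² = 0.71 reported

Both fitted slopes are negative, mirroring the reported negative relations of decomposition rate with dbh and lignin.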
Decomposition of aquatic plants in lakes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Godshalk, G.L.
1977-01-01
This study was carried out to systematically determine the effects of temperature and oxygen concentration, two environmental parameters crucial to lake metabolism in general, on the decomposition of five species of aquatic vascular plants of three growth forms in a Michigan lake. Samples of dried plant material were decomposed in flasks in the laboratory under three different oxygen regimes (aerobic-to-anaerobic, strictly anaerobic, and aerated), each at 10°C and 25°C. In addition, in situ decomposition of the same species was monitored using the litter bag technique under four conditions.
Forecasting hotspots in East Kutai, Kutai Kartanegara, and West Kutai as early warning information
NASA Astrophysics Data System (ADS)
Wahyuningsih, S.; Goejantoro, R.; Rizki, N. A.
2018-04-01
The aims of this research are to model hotspots and to forecast hotspots for 2017 in East Kutai, Kutai Kartanegara and West Kutai. The methods used in this research were Holt exponential smoothing, Holt's additive damped trend method, Holt-Winters' additive method, the additive decomposition method, the multiplicative decomposition method, the Loess decomposition method and the Box-Jenkins method. Among the smoothing techniques, additive decomposition performed better than Holt's exponential smoothing. The hotspot models obtained using the Box-Jenkins method were the autoregressive integrated moving average models ARIMA(1,1,0), ARIMA(0,2,1), and ARIMA(0,1,0). Comparing the results from all methods used in this research on the basis of the root mean squared error (RMSE) shows that the Loess decomposition method is the best time series model, because it has the smallest RMSE. Thus the Loess decomposition model was used to forecast the number of hotspots. The forecasting results indicate that the hotspot pattern tends to increase at the end of 2017 in Kutai Kartanegara and West Kutai, but remains stationary in East Kutai.
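As a hedged illustration of the winning approach, the sketch below decomposes a synthetic monthly hotspot series with a Loess-based (STL) decomposition and scores the in-sample fit against simple exponential smoothing by RMSE; the series, seasonal period and model settings are stand-ins, not values from the study.

import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL
from statsmodels.tsa.holtwinters import SimpleExpSmoothing

rng = np.random.default_rng(0)
idx = pd.date_range("2013-01", periods=48, freq="MS")
y = pd.Series(50 + 20 * np.sin(2 * np.pi * np.arange(48) / 12)
              + rng.normal(0, 5, 48), index=idx)      # toy hotspot counts

stl_fit = STL(y, period=12, robust=True).fit()
stl_pred = stl_fit.trend + stl_fit.seasonal           # in-sample reconstruction
ses_pred = SimpleExpSmoothing(y).fit().fittedvalues

def rmse(obs, pred):
    return float(np.sqrt(np.mean((obs - pred) ** 2)))

print("RMSE  STL:", rmse(y, stl_pred), " SES:", rmse(y, ses_pred))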
Mössbauer study of the thermal decomposition of alkali tris(oxalato)ferrates(III)
NASA Astrophysics Data System (ADS)
Brar, A. S.; Randhawa, B. S.
1985-07-01
The thermal decomposition of alkali (Li, Na, K, Cs, NH4) tris(oxalato)ferrates(III) has been studied at different temperatures up to 700°C using Mössbauer, infrared spectroscopy, and thermogravimetric techniques. The formation of different intermediates has been observed during thermal decomposition. The decomposition in these complexes starts at different temperatures, i.e., at 200°C in the case of lithium, cesium, and ammonium ferrate(III), 250°C in the case of sodium, and 270°C in the case of potassium tris(oxalato)ferrate(III). The intermediates Fe(II)C2O4, K6Fe(II)2(ox)5, and Cs2Fe(II)(ox)2(H2O)2 are formed during thermal decomposition of lithium, potassium, and cesium tris(oxalato)ferrates(III), respectively. In the case of sodium and ammonium tris(oxalato)ferrates(III), the decomposition occurs without reduction to the iron(II) state and leads directly to α-Fe2O3.
Decomposition-Based Decision Making for Aerospace Vehicle Design
NASA Technical Reports Server (NTRS)
Borer, Nicholas K.; Mavris, DImitri N.
2005-01-01
Most practical engineering systems design problems have multiple and conflicting objectives. Furthermore, the satisfactory attainment level for each objective (requirement) is likely uncertain early in the design process. Systems with long design cycle times will exhibit more of this uncertainty throughout the design process. This is further complicated if the system is expected to perform for a relatively long period of time, as now it will need to grow as new requirements are identified and new technologies are introduced. These points identify a need for a systems design technique that enables decision making amongst multiple objectives in the presence of uncertainty. Traditional design techniques deal with a single objective or a small number of objectives that are often aggregates of the overarching goals sought through the generation of a new system. Other requirements, although uncertain, are viewed as static constraints to this single or multiple objective optimization problem. With either of these formulations, enabling tradeoffs between the requirements, objectives, or combinations thereof is a slow, serial process that becomes increasingly complex as more criteria are added. This research proposal outlines a technique that attempts to address these and other idiosyncrasies associated with modern aerospace systems design. The proposed formulation first recasts systems design into a multiple criteria decision making problem. The now multiple objectives are decomposed to discover the critical characteristics of the objective space. Tradeoffs between the objectives are considered amongst these critical characteristics by comparison to a probabilistic ideal tradeoff solution. The proposed formulation represents a radical departure from traditional methods. A pitfall of this technique is in the validation of the solution: in a multi-objective sense, how can a decision maker justify a choice between non-dominated alternatives? A series of examples help the reader to observe how this technique can be applied to aerospace systems design and compare the results of this so-called Decomposition-Based Decision Making to more traditional design approaches.
A New View of Earthquake Ground Motion Data: The Hilbert Spectral Analysis
NASA Technical Reports Server (NTRS)
Huang, Norden; Busalacchi, Antonio J. (Technical Monitor)
2000-01-01
A brief description of the newly developed Empirical Mode Decomposition (EMD) and Hilbert Spectral Analysis (HSA) method is given. The decomposition is adaptive and can be applied to both nonlinear and nonstationary data. An example of the method applied to a sample earthquake record is given. The results indicate that low-frequency components totally missed by Fourier analysis are clearly identified by the new method. Comparisons with wavelet and windowed Fourier analysis show the new method offers much better temporal and frequency resolutions.
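A minimal sketch of the two ingredients named above, assuming a 1-D NumPy signal: one EMD sifting pass built from cubic-spline envelopes, followed by a Hilbert transform to extract instantaneous frequency. A production EMD (e.g. the PyEMD package) adds stopping criteria, boundary handling, and extraction of successive IMFs.

import numpy as np
from scipy.signal import argrelextrema, hilbert
from scipy.interpolate import CubicSpline

def sift_once(x):
    # One EMD sifting pass: subtract the mean of the extrema envelopes.
    t = np.arange(len(x))
    maxima = argrelextrema(x, np.greater)[0]
    minima = argrelextrema(x, np.less)[0]
    if len(maxima) < 4 or len(minima) < 4:
        return None                            # too few extrema: treat x as a residue
    upper = CubicSpline(maxima, x[maxima])(t)  # upper envelope
    lower = CubicSpline(minima, x[minima])(t)  # lower envelope
    return x - (upper + lower) / 2.0           # candidate IMF

t = np.linspace(0.0, 1.0, 1000)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
imf = sift_once(x)

# Hilbert spectral step: instantaneous frequency of the extracted component.
phase = np.unwrap(np.angle(hilbert(imf)))
inst_freq = np.diff(phase) / (2 * np.pi * (t[1] - t[0]))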
Automated detection of microaneurysms using robust blob descriptors
NASA Astrophysics Data System (ADS)
Adal, K.; Ali, S.; Sidibé, D.; Karnowski, T.; Chaum, E.; Mériaudeau, F.
2013-03-01
Microaneurysms (MAs) are among the first signs of diabetic retinopathy (DR) and can be seen as round dark-red structures in digital color fundus photographs of the retina. In recent years, automated computer-aided detection and diagnosis (CAD) of MAs has attracted many researchers due to its low-cost and versatile nature. In this paper, the MA detection problem is modeled as finding interest points in a given image, and several interest point descriptors are introduced and integrated with machine learning techniques to detect MAs. The proposed approach starts by applying a novel fundus image contrast enhancement technique using the Singular Value Decomposition (SVD) of fundus images. Then, a Hessian-based candidate selection algorithm is applied to extract image regions which are more likely to be MAs. For each candidate region, robust low-level blob descriptors such as Speeded Up Robust Features (SURF) and the Intensity Normalized Radon Transform are extracted to characterize candidate MA regions. The combined features are then classified using an SVM trained on ten manually annotated training images. The performance of the overall system is evaluated on the Retinopathy Online Challenge (ROC) competition database. Preliminary results show the competitiveness of the proposed candidate selection techniques against state-of-the-art methods as well as the promise of the proposed descriptors for localization of MAs in fundus images.
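The abstract does not spell out the exact SVD enhancement recipe, so the sketch below shows only a generic SVD-based adjustment: since the leading singular value of an image mostly encodes global illumination, shrinking it relative to the rest flattens illumination and raises local contrast. The boost knob and the intensity range are assumptions for illustration.

import numpy as np

def svd_enhance(img, boost=0.85):
    # Rescale the leading singular mode; boost < 1 is an assumed contrast knob.
    img = img.astype(float)
    u, s, vt = np.linalg.svd(img, full_matrices=False)
    s = s.copy()
    s[0] *= boost                    # leading mode carries mean brightness
    out = (u * s) @ vt               # reconstruct with the adjusted spectrum
    return np.clip(out, 0.0, 1.0)    # assuming intensities in [0, 1]

enhanced = svd_enhance(np.random.default_rng(0).uniform(0, 1, (128, 128)))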
Optimal Multi-scale Demand-side Management for Continuous Power-Intensive Processes
NASA Astrophysics Data System (ADS)
Mitra, Sumit
With the advent of deregulation in electricity markets and an increasing share of intermittent power generation sources, the profitability of industrial consumers that operate power-intensive processes has become directly linked to the variability in energy prices. Thus, for industrial consumers that are able to adjust to the fluctuations, time-sensitive electricity prices (as part of so-called Demand-Side Management (DSM) in the smart grid) offer potential economic incentives. In this thesis, we introduce optimization models and decomposition strategies for the multi-scale Demand-Side Management of continuous power-intensive processes. On an operational level, we derive a mode formulation for scheduling under time-sensitive electricity prices. The formulation is applied to air separation plants and cement plants to minimize the operating cost. We also describe how a mode formulation can be used for industrial combined heat and power plants that are co-located at integrated chemical sites to increase operating profit by adjusting their steam and electricity production according to their inherent flexibility. Furthermore, a robust optimization formulation is developed to address the uncertainty in electricity prices by accounting for correlations and multiple ranges in the realization of the random variables. On a strategic level, we introduce a multi-scale model that provides an understanding of the value of flexibility of the current plant configuration and the value of additional flexibility in terms of retrofits for Demand-Side Management under product demand uncertainty. The integration of multiple time scales leads to large-scale two-stage stochastic programming problems, for which we need to apply decomposition strategies in order to obtain a good solution within a reasonable amount of time. Hence, we describe two decomposition schemes that can be applied to solve two-stage stochastic programming problems: first, a hybrid bi-level decomposition scheme with novel Lagrangean-type and subset-type cuts to strengthen the relaxation; second, an enhanced cross-decomposition scheme that integrates Benders decomposition and Lagrangean decomposition on a scenario basis. To demonstrate the effectiveness of our developed methodology, we provide several industrial case studies throughout the thesis.
Data-driven process decomposition and robust online distributed modelling for large-scale processes
NASA Astrophysics Data System (ADS)
Shu, Zhang; Lijuan, Li; Lijuan, Yao; Shipin, Yang; Tao, Zou
2018-02-01
With the increasing attention on networked control, system decomposition and distributed models are of significant importance in the implementation of model-based control strategies. In this paper, a data-driven system decomposition and online distributed subsystem modelling algorithm is proposed for large-scale chemical processes. The key controlled variables are first partitioned by an affinity propagation clustering algorithm into several clusters. Each cluster can be regarded as a subsystem. Then the inputs of each subsystem are selected by offline canonical correlation analysis between all process variables and its controlled variables. Process decomposition is then realised after the screening of input and output variables. When the system decomposition is finished, the online subsystem modelling can be carried out by recursively block-wise renewing the samples. The proposed algorithm was applied to the Tennessee Eastman process and its validity was verified.
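A hedged sketch of the first step, clustering controlled variables with affinity propagation so that each cluster defines a subsystem; the feature matrix (variables by samples) is synthetic, and the scikit-learn estimator stands in for the paper's own implementation.

import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(13)
cv_profiles = rng.normal(size=(12, 200))       # 12 controlled variables, 200 samples
ap = AffinityPropagation(random_state=0).fit(cv_profiles)
for k in np.unique(ap.labels_):
    print("subsystem", k, "-> variables", np.where(ap.labels_ == k)[0])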
Fast heap transform-based QR-decomposition of real and complex matrices: algorithms and codes
NASA Astrophysics Data System (ADS)
Grigoryan, Artyom M.
2015-03-01
In this paper, we describe a new look on the application of Givens rotations to the QR-decomposition problem, which is similar to the method of Householder transformations. We apply the concept of the discrete heap transform, or signal-induced unitary transforms, which had been introduced by Grigoryan (2006) and used in signal and image processing. Both cases of real and complex nonsingular matrices are considered and examples of performing QR-decomposition of square matrices are given. The proposed method of QR-decomposition for the complex matrix is novel, differs from the known method of complex Givens rotations, and is based on analytical equations for the heap transforms. Many examples illustrating the proposed heap-transform method of QR-decomposition are given, algorithms are described in detail, and MATLAB-based codes are included.
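For reference, a textbook Givens-rotation QR-decomposition of a real square matrix looks like the sketch below; the paper's heap-transform variant differs in how the rotations are generated, so this is a baseline, not the authors' algorithm.

import numpy as np

def givens_qr(A):
    A = A.astype(float)
    m, n = A.shape
    Q = np.eye(m)
    R = A.copy()
    for j in range(n):
        for i in range(m - 1, j, -1):     # zero out entries below the diagonal
            a, b = R[i - 1, j], R[i, j]
            r = np.hypot(a, b)
            if r == 0.0:
                continue
            c, s = a / r, b / r
            G = np.array([[c, s], [-s, c]])    # 2x2 rotation acting on rows i-1, i
            R[[i - 1, i], :] = G @ R[[i - 1, i], :]
            Q[:, [i - 1, i]] = Q[:, [i - 1, i]] @ G.T   # accumulate Q
    return Q, R

A = np.random.default_rng(1).normal(size=(4, 4))
Q, R = givens_qr(A)
assert np.allclose(Q @ R, A) and np.allclose(Q.T @ Q, np.eye(4))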
NASA Astrophysics Data System (ADS)
Cafiero, M.; Lloberas-Valls, O.; Cante, J.; Oliver, J.
2016-04-01
A domain decomposition technique is proposed which is capable of properly connecting arbitrary non-conforming interfaces. The strategy essentially consists in considering a fictitious zero-width interface between the non-matching meshes which is discretized using a Delaunay triangulation. Continuity is satisfied across domains through normal and tangential stresses provided by the discretized interface and inserted in the formulation in the form of Lagrange multipliers. The final structure of the global system of equations resembles the dual assembly of substructures where the Lagrange multipliers are employed to nullify the gap between domains. A new approach to handle floating subdomains is outlined which can be implemented without significantly altering the structure of standard industrial finite element codes. The effectiveness of the developed algorithm is demonstrated through a patch test example and a number of tests that highlight the accuracy of the methodology and independence of the results with respect to the framework parameters. Considering its high degree of flexibility and non-intrusive character, the proposed domain decomposition framework is regarded as an attractive alternative to other established techniques such as the mortar approach.
Fernandez, D.P.; Neff, J.C.; Belnap, J.; Reynolds, R.L.
2006-01-01
Decomposition is central to understanding ecosystem carbon exchange and nutrient-release processes. Unlike mesic ecosystems, which have been extensively studied, xeric landscapes have received little attention; as a result, abiotic soil-respiration regulatory processes are poorly understood in xeric environments. To provide a more complete and quantitative understanding about how abiotic factors influence soil respiration in xeric ecosystems, we conducted soil-respiration and decomposition-cloth measurements in the cold desert of southeast Utah. Our study evaluated when and to what extent soil texture, moisture, temperature, organic carbon, and nitrogen influence soil respiration and examined whether the inverse-texture hypothesis applies to decomposition. Within our study site, the effect of texture on moisture, as described by the inverse-texture hypothesis, was evident, but its effect on decomposition was not. Our results show temperature and moisture to be the dominant abiotic controls of soil respiration. Specifically, temporal offsets in temperature and moisture conditions appear to have a strong control on soil respiration, with the highest fluxes occurring in spring when temperature and moisture were favorable. These temporal offsets resulted in decomposition rates that were controlled by soil moisture and temperature thresholds. The highest fluxes of CO2 occurred when soil temperature was between 10 and 16°C and volumetric soil moisture was greater than 10%. Decomposition-cloth results, which integrate decomposition processes across several months, support the soil-respiration results and further illustrate the seasonal patterns of high respiration rates during spring and low rates during summer and fall. Results from this study suggest that the parameters used to predict soil respiration in mesic ecosystems likely do not apply in cold-desert environments.
A test of the hierarchical model of litter decomposition.
Bradford, Mark A; Veen, G F Ciska; Bonis, Anne; Bradford, Ella M; Classen, Aimee T; Cornelissen, J Hans C; Crowther, Thomas W; De Long, Jonathan R; Freschet, Gregoire T; Kardol, Paul; Manrubia-Freixa, Marta; Maynard, Daniel S; Newman, Gregory S; Logtestijn, Richard S P; Viketoft, Maria; Wardle, David A; Wieder, William R; Wood, Stephen A; van der Putten, Wim H
2017-12-01
Our basic understanding of plant litter decomposition informs the assumptions underlying widely applied soil biogeochemical models, including those embedded in Earth system models. Confidence in projected carbon cycle-climate feedbacks therefore depends on accurate knowledge about the controls regulating the rate at which plant biomass is decomposed into products such as CO2. Here we test underlying assumptions of the dominant conceptual model of litter decomposition. The model posits that a primary control on the rate of decomposition at regional to global scales is climate (temperature and moisture), with the controlling effects of decomposers negligible at such broad spatial scales. Using a regional-scale litter decomposition experiment at six sites spanning from northern Sweden to southern France, capturing both within- and among-site variation in putative controls, we find that contrary to predictions from the hierarchical model, decomposer (microbial) biomass strongly regulates decomposition at regional scales. Furthermore, the size of the microbial biomass dictates the absolute change in decomposition rates with changing climate variables. Our findings suggest the need for revision of the hierarchical model, with decomposers acting as both local- and broad-scale controls on litter decomposition rates, necessitating their explicit consideration in global biogeochemical models.
ERIC Educational Resources Information Center
Wiederholt, Erwin
1983-01-01
DTA is a technique in which the temperature difference between sample/reference is measured as a function of temperature, while both are subject to a controlled temperature program. Use of a simple DTA-apparatus in demonstrating catalytic effects of manganese dioxide and aluminum oxide on decomposition temperature of potassium chlorate is…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Darbar, Devendrasinh; Department of Mechanical Engineering, National University of Singapore, 117576; Department of Physics, National University of Singapore, 117542
2016-01-15
Highlights: • MgCo2O4 was prepared by an oxalate decomposition method and an electrospinning technique. • Electrospun MgCo2O4 shows a reversible capacity of 795 mAh g⁻¹ after 50 cycles, versus 227 mAh g⁻¹ for the oxalate-decomposition MgCo2O4. • Electrospun MgCo2O4 shows good cycling stability and electrochemical performance. - Abstract: Magnesium cobalt oxide, MgCo2O4, was synthesized by an oxalate decomposition method and by an electrospinning technique. The electrochemical performances, structures, phase formation and morphology of MgCo2O4 synthesized by the two methods are compared. Scanning electron microscope (SEM) studies show spherical and fiber-type morphology, respectively, for the oxalate decomposition and electrospinning methods. The electrospun nanofibers of MgCo2O4 calcined at 650 °C showed a very good reversible capacity of 795 mAh g⁻¹ after 50 cycles, compared to a bulk material capacity of 227 mAh g⁻¹ at a current rate of 60 mA g⁻¹. MgCo2O4 nanofiber showed a reversible capacity of 411 mAh g⁻¹ (at cycle) at a current density of 240 mA g⁻¹. The improved performance was attributed to the improved conductivity of MgO, which may act as a buffer layer leading to improved cycling stability. Cyclic voltammetry studies at a scan rate of 0.058 mV/s show a main cathodic peak at around 1.0 V and an anodic peak at 2.1 V vs. Li.
NASA Technical Reports Server (NTRS)
McDowell, Mark
2004-01-01
An integrated algorithm for decomposing overlapping particle images (multi-particle objects) and determining each object's constituent particle centroid(s) has been developed using image analysis techniques. The centroid-finding algorithm uses a modified eight-direction search method for finding the perimeter of any enclosed object. The centroid is calculated using the intensity-weighted center of mass of the object. The overlap decomposition algorithm further analyzes the object data and breaks it down into its constituent particle centroid(s). This is accomplished with an artificial neural network, feature-based technique and provides an efficient way of decomposing overlapping particles. Combining the centroid-finding and overlap decomposition routines into a single algorithm allows us to accurately predict the error associated with finding the centroid(s) of particles in our experiments. This algorithm has been tested using real, simulated, and synthetic data, and the results are presented and discussed.
A Three-way Decomposition of a Total Effect into Direct, Indirect, and Interactive Effects
VanderWeele, Tyler J.
2013-01-01
Recent theory in causal inference has provided concepts for mediation analysis and effect decomposition that allow one to decompose a total effect into a direct and an indirect effect. Here, it is shown that what is often taken as an indirect effect can in fact be further decomposed into a “pure” indirect effect and a mediated interactive effect, thus yielding a three-way decomposition of a total effect (direct, indirect, and interactive). This three-way decomposition applies to difference scales and also to additive ratio scales and additive hazard scales. Assumptions needed for the identification of each of these three effects are discussed and simple formulae are given for each when regression models allowing for interaction are used. The three-way decomposition is illustrated by examples from genetic and perinatal epidemiology, and discussion is given to what is gained over the traditional two-way decomposition into simply a direct and an indirect effect.
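Under linear outcome and mediator models with exposure-mediator interaction, the three components have simple closed forms; the sketch below encodes the standard regression-based expressions, with illustrative coefficient values and covariate level (verify against the paper before use).

# Assumed models:
#   E[Y|a,m,c] = th0 + th1*a + th2*m + th3*a*m + th4*c
#   E[M|a,c]   = b0 + b1*a + b2*c
def three_way(th1, th2, th3, b0, b1, b2, a, a_star, c):
    d = a - a_star
    pde = (th1 + th3 * (b0 + b1 * a_star + b2 * c)) * d   # pure direct effect
    pie = (th2 + th3 * a_star) * b1 * d                   # pure indirect effect
    int_med = th3 * b1 * d * d                            # mediated interaction
    return pde, pie, int_med   # total effect = pde + pie + int_med

pde, pie, imed = three_way(0.8, 0.5, 0.3, 1.0, 0.6, 0.2, a=1, a_star=0, c=0.5)
print(pde + pie + imed)        # recovers the total effect for this contrast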
Card, Allison; Cross, Peter; Moffatt, Colin; Simmons, Tal
2015-07-01
Twenty Sus scrofa carcasses were used to study the effect of the presence of clothing on the decomposition rate and the colonization locations of Diptera species; 10 unclothed control carcasses were compared to 10 clothed experimental carcasses over 58 days. Data collection occurred at regular accumulated degree day intervals; the level of decomposition as Total Body Score (TBSsurf), the pattern of decomposition, and the Diptera present were documented. Results indicated a statistically significant difference in the rate of decomposition (t(427) = 2.59, p = 0.010), with unclothed carcasses decomposing faster than clothed carcasses. However, the overall decomposition rates from the two carcass groups are too similar to separate when applying a 95% CI, which means that, although statistically significant, from a practical forensic point of view they are not sufficiently dissimilar as to warrant the application of different formulae to estimate the postmortem interval. Further results demonstrated that clothing provided blow flies with additional colonization locations.
Domain Decomposition By the Advancing-Partition Method for Parallel Unstructured Grid Generation
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.; Zagaris, George
2009-01-01
A new method of domain decomposition has been developed for generating unstructured grids in subdomains either sequentially or using multiple computers in parallel. Domain decomposition is a crucial and challenging step for parallel grid generation. Prior methods are generally based on auxiliary, complex, and computationally intensive operations for defining partition interfaces and usually produce grids of lower quality than those generated in single domains. The new technique, referred to as "Advancing Partition," is based on the Advancing-Front method, which partitions a domain as part of the volume mesh generation in a consistent and "natural" way. The benefits of this approach are: 1) the process of domain decomposition is highly automated, 2) partitioning of domain does not compromise the quality of the generated grids, and 3) the computational overhead for domain decomposition is minimal. The new method has been implemented in NASA's unstructured grid generation code VGRID.
Parallel processing for pitch splitting decomposition
NASA Astrophysics Data System (ADS)
Barnes, Levi; Li, Yong; Wadkins, David; Biederman, Steve; Miloslavsky, Alex; Cork, Chris
2009-10-01
Decomposition of an input pattern in preparation for a double patterning process is an inherently global problem in which the influence of a local decomposition decision can be felt across an entire pattern. In spite of this, a large portion of the work can be massively distributed. Here, we discuss the advantages of geometric distribution for polygon operations with limited range of influence. Further, we have found that even the naturally global "coloring" step can, in large part, be handled in a geometrically local manner. In some practical cases, up to 70% of the work can be distributed geometrically. We also describe the methods for partitioning the problem into local pieces and present scaling data up to 100 CPUs. These techniques reduce DPT decomposition runtime by orders of magnitude.
NASA Astrophysics Data System (ADS)
Neuer, Marcus J.
2013-11-01
A technique for the spectral identification of strontium-90 is shown, utilising a Maximum-Likelihood deconvolution. Different deconvolution approaches are discussed and summarised. Based on the intensity distribution of the beta emission and Geant4 simulations, a combined response matrix is derived, tailored to the β⁻ detection process in sodium iodide detectors. It includes scattering effects and attenuation by applying a base material decomposition extracted from Geant4 simulations with a CAD model for a realistic detector system. Inversion results of measurements show the agreement between deconvolution and reconstruction. A detailed investigation with additional masking sources like ⁴⁰K, ²²⁶Ra and ¹³¹I shows that a contamination of strontium can be found in the presence of these nuisance sources. Identification algorithms for strontium are presented based on the derived technique. For the implementation of blind identification, an exemplary masking ratio is calculated.
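A minimal sketch of a Maximum-Likelihood (MLEM, Richardson-Lucy-type) deconvolution of the kind described, assuming a measured spectrum y and a response matrix R whose columns give the detector response to a unit emission in each source bin; R and y here are toy stand-ins, not a Geant4-derived response.

import numpy as np

def mlem(R, y, n_iter=200):
    x = np.full(R.shape[1], y.sum() / R.shape[1])   # flat initial estimate
    sens = R.sum(axis=0)                            # column sensitivities
    for _ in range(n_iter):
        proj = R @ x
        proj[proj <= 0] = 1e-12                     # guard against division by zero
        x *= (R.T @ (y / proj)) / sens              # multiplicative ML update
    return x

rng = np.random.default_rng(2)
R = np.abs(rng.normal(size=(64, 16)))
R /= R.sum(axis=0)                                  # normalize each response column
x_true = np.zeros(16); x_true[4] = 100.0; x_true[11] = 40.0
y = rng.poisson(R @ x_true).astype(float)           # Poisson-counted measurement
x_hat = mlem(R, y)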
Inverse transport calculations in optical imaging with subspace optimization algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Tian, E-mail: tding@math.utexas.edu; Ren, Kui, E-mail: ren@math.utexas.edu
2014-09-15
Inverse boundary value problems for the radiative transport equation play an important role in optics-based medical imaging techniques such as diffuse optical tomography (DOT) and fluorescence optical tomography (FOT). Despite the rapid progress in the mathematical theory and numerical computation of these inverse problems in recent years, developing robust and efficient reconstruction algorithms remains a challenging task and an active research topic. We propose here a robust reconstruction method that is based on subspace minimization techniques. The method splits the unknown transport solution (or a functional of it) into low-frequency and high-frequency components, and uses singular value decomposition to analytically recover part of the low-frequency information. Minimization is then applied to recover part of the high-frequency components of the unknowns. We present some numerical simulations with synthetic data to demonstrate the performance of the proposed algorithm.
Segmentation of ECG from Surface EMG Using DWT and EMD: A Comparison Study
NASA Astrophysics Data System (ADS)
Shahbakhti, Mohammad; Heydari, Elnaz; Luu, Gia Thien
2014-10-01
The electrocardiographic (ECG) signal is a major artifact during recording of the surface electromyography (SEMG). Removal of this artifact is one of the important tasks before SEMG analysis for biomedical goals. In this paper, the application of the discrete wavelet transform (DWT) and empirical mode decomposition (EMD) for elimination of the ECG artifact from SEMG is investigated. The focus of this research is to reach the optimized number of decomposition levels using the mean power frequency (MPF) for both techniques. In order to implement the proposed methods, ten simulated and three real ECG-contaminated SEMG signals have been tested. Signal-to-noise ratio (SNR) and mean square error (MSE) between the filtered and the pure signals are applied as the performance indexes of this research. The obtained results suggest that both techniques can remove the ECG artifact from SEMG signals reasonably well; however, DWT performs better and faster on real data.
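A hedged sketch of the DWT route: decompose a contaminated SEMG segment, zero the approximation band that carries most of the low-frequency ECG energy, and score the result with the SNR and MSE indexes used in the paper. The wavelet, level count, and signals are illustrative, not the paper's settings.

import numpy as np
import pywt

def dwt_remove_ecg(x, wavelet="db4", level=5):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    coeffs[0] = np.zeros_like(coeffs[0])       # drop the approximation band
    return pywt.waverec(coeffs, wavelet)[:len(x)]

def snr(clean, est):
    return 10 * np.log10(np.sum(clean**2) / np.sum((clean - est)**2))

t = np.arange(4000) / 1000.0                   # 4 s at 1 kHz
semg = np.random.default_rng(3).normal(0, 1, t.size)   # broadband EMG stand-in
ecg = 2.0 * np.sin(2 * np.pi * 1.2 * t) ** 21          # crude QRS-like bumps
filtered = dwt_remove_ecg(semg + ecg)
print("SNR (dB):", snr(semg, filtered), "MSE:", np.mean((semg - filtered)**2))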
NASA Astrophysics Data System (ADS)
Du, Kongchang; Zhao, Ying; Lei, Jiaqiang
2017-09-01
In hydrological time series prediction, singular spectrum analysis (SSA) and discrete wavelet transform (DWT) are widely used as preprocessing techniques for artificial neural network (ANN) and support vector machine (SVM) predictors. These hybrid or ensemble models seem to largely reduce the prediction error. In the current literature, researchers apply these techniques to the whole observed time series and then obtain a set of reconstructed or decomposed time series as inputs to the ANN or SVM. However, through two comparative experiments and mathematical deduction, we found that this usage of SSA and DWT in building hybrid models is incorrect. Since SSA and DWT use 'future' values to perform the calculation, the series generated by SSA reconstruction or DWT decomposition contain information about 'future' values. These hybrid models therefore report spuriously high prediction performance and may cause large errors in practice.
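The point at issue can be made concrete in a few lines: decomposing the whole series before the train/test split leaks future samples into the training inputs, whereas a causal protocol transforms only the data observed up to each forecast origin. The wavelet and window choices below are illustrative.

import numpy as np
import pywt

series = np.random.default_rng(4).normal(size=1000)
split = 800

# Leaky (incorrect): DWT over the full series, then split the coefficients;
# every coefficient already mixes in post-split samples.
leaky_inputs = pywt.wavedec(series, "db4", level=3)

# Causal (correct): at each forecast origin t, transform only series[:t].
causal_inputs = [pywt.wavedec(series[:t], "db4", level=3)
                 for t in range(split, len(series))]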
Sladkevich, Sergey; Dupont, Anne-Laurence; Sablier, Michel; Seghouane, Dalila; Cole, Richard B
2016-11-01
Cellulose paper degradation products forming in the "tideline" area at the wet-dry interface of pure cellulose paper were analyzed using gas chromatography-electron ionization-mass spectrometry (GC-EI-MS) and high-resolution electrospray ionization-mass spectrometry (ESI-MS, LTQ Orbitrap) techniques. Different extraction protocols were employed in order to solubilize the products of oxidative cellulose decomposition, i.e., a direct solvent extraction or a more laborious chromophore release and identification (CRI) technique aiming to reveal products responsible for paper discoloration in the tideline area. Several groups of low molecular weight compounds were identified, suggesting a complex pathway of cellulose decomposition in the tidelines formed at the cellulose-water-oxygen interface. Our findings, namely the appearance of a wide range of linear saturated carboxylic acids (from formic to nonanoic), support the oxidative autocatalytic mechanism of decomposition. In addition, the identification of several furanic compounds (which can be, in part, responsible for paper discoloration) plus anhydro carbohydrate derivatives sheds more light on the pathways of cellulose decomposition. Most notably, the mechanisms of tideline formation in the presence of molecular oxygen appear surprisingly similar to pathways of pyrolytic cellulose degradation. More complex chromophore compounds were not detected in this study, thereby revealing a difference between this short-term tideline experiment and longer-term cellulose aging.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gropp, W.D.; Keyes, D.E.
1988-03-01
The authors discuss the parallel implementation of preconditioned conjugate gradient (PCG)-based domain decomposition techniques for self-adjoint elliptic partial differential equations in two dimensions on several architectures. The complexity of these methods is described on a variety of message-passing parallel computers as a function of the size of the problem, the number of processors, and the relative communication speeds of the processors. They show that communication startup costs are very important, and that even the small amount of global communication in these methods can significantly reduce the performance of many message-passing architectures.
Intelligent transportation systems data compression using wavelet decomposition technique.
DOT National Transportation Integrated Search
2009-12-01
Intelligent Transportation Systems (ITS) generate massive amounts of traffic data, which poses challenges for data storage, transmission and retrieval. Data compression and reconstruction techniques play an important role in ITS data processing…
Actuation for simultaneous motions and constraining efforts: an open chain example
NASA Astrophysics Data System (ADS)
Perreira, N. Duke
1997-06-01
A brief discussion of systems where simultaneous control of forces and velocities is desirable is given, and an example linkage with revolute and prismatic joints is selected for further analysis. The Newton-Euler approach for dynamic system analysis is applied to the example to provide a basis of comparison. Gauge-invariant transformations are used to convert the dynamic equations into an invariant form suitable for use in a new dynamic system analysis method known as the motion-effort approach. This approach uses constraint elimination techniques based on singular value decompositions to recast the invariant form of the dynamic system equations into orthogonal sets of motion and effort equations. Desired motions and constraining efforts are partitioned into ideally obtainable and unobtainable portions, which are then used to determine the required actuation. The method is applied to the example system and an analytic estimate of its success is made.
Smolinski, Tomasz G; Buchanan, Roger; Boratyn, Grzegorz M; Milanova, Mariofanna; Prinz, Astrid A
2006-01-01
Background: Independent Component Analysis (ICA) proves to be useful in the analysis of neural activity, as it allows for identification of distinct sources of activity. Applied to measurements registered in a controlled setting and under exposure to an external stimulus, it can facilitate analysis of the impact of the stimulus on those sources. The link between the stimulus and a given source can be verified by a classifier that is able to "predict" the condition a given signal was registered under, solely based on the components. However, the ICA's assumption about statistical independence of sources is often unrealistic and turns out to be insufficient to build an accurate classifier. Therefore, we propose to utilize a novel method, based on hybridization of ICA, multi-objective evolutionary algorithms (MOEA), and rough sets (RS), that attempts to improve the effectiveness of signal decomposition techniques by providing them with "classification-awareness." Results: The preliminary results described here are very promising, and further investigation of other MOEAs and/or RS-based classification accuracy measures should be pursued. Even a quick visual analysis of those results can provide an interesting insight into the problem of neural activity analysis. Conclusion: We present a methodology of classificatory decomposition of signals. One of the main advantages of our approach is the fact that rather than solely relying on often unrealistic assumptions about statistical independence of sources, components are generated in the light of an underlying classification problem itself.
NASA Astrophysics Data System (ADS)
Trugman, Daniel T.; Shearer, Peter M.
2017-04-01
Earthquake source spectra contain fundamental information about the dynamics of earthquake rupture. However, the inherent tradeoffs in separating source and path effects, when combined with limitations in recorded signal bandwidth, make it challenging to obtain reliable source spectral estimates for large earthquake data sets. We present here a stable and statistically robust spectral decomposition method that iteratively partitions the observed waveform spectra into source, receiver, and path terms. Unlike previous methods of its kind, our new approach provides formal uncertainty estimates and does not assume self-similar scaling in earthquake source properties. Its computational efficiency allows us to examine large data sets (tens of thousands of earthquakes) that would be impractical to analyze using standard empirical Green's function-based approaches. We apply the spectral decomposition technique to P wave spectra from five areas of active contemporary seismicity in Southern California: the Yuha Desert, the San Jacinto Fault, and the Big Bear, Landers, and Hector Mine regions of the Mojave Desert. We show that the source spectra are generally consistent with an increase in median Brune-type stress drop with seismic moment but that this observed deviation from self-similar scaling is both model dependent and varies in strength from region to region. We also present evidence for significant variations in median stress drop and stress drop variability on regional and local length scales. These results both contribute to our current understanding of earthquake source physics and have practical implications for the next generation of ground motion prediction assessments.
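A hedged sketch of the core idea: each observed log amplitude spectrum (event i, station j) is modeled as a source term plus a receiver term, and the terms are estimated by alternating means over the residuals. The path term and the normalization needed to pin down the constant trade-off between terms are omitted here; data are synthetic.

import numpy as np

rng = np.random.default_rng(11)
n_ev, n_st, n_f = 50, 20, 30                       # events, stations, frequency bins
true_src = rng.normal(size=(n_ev, 1, n_f))
true_rec = rng.normal(size=(1, n_st, n_f))
logspec = true_src + true_rec + rng.normal(0, 0.1, size=(n_ev, n_st, n_f))

source = np.zeros((n_ev, n_f))
receiver = np.zeros((n_st, n_f))
for _ in range(20):                                # alternate the partition
    receiver = (logspec - source[:, None, :]).mean(axis=0)
    source = (logspec - receiver[None, :, :]).mean(axis=1)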
Educational Outcomes and Socioeconomic Status: A Decomposition Analysis for Middle-Income Countries
ERIC Educational Resources Information Center
Nieto, Sandra; Ramos, Raúl
2015-01-01
This article analyzes the factors that explain the gap in educational outcomes between the top and bottom quartile of students in different countries, according to their socioeconomic status. To do so, it uses PISA microdata for 10 middle-income and 2 high-income countries, and applies the Oaxaca-Blinder decomposition method. Its results show that…
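For readers unfamiliar with the method, a two-fold Oaxaca-Blinder decomposition splits the mean gap into a part explained by characteristics and a part attributed to coefficients, as in the hedged sketch below; the data and variable names are synthetic, not PISA microdata.

import numpy as np

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

rng = np.random.default_rng(5)
n = 500
Xa = np.column_stack([np.ones(n), rng.normal(1.0, 1, n)])   # top group: [const, SES index]
Xb = np.column_stack([np.ones(n), rng.normal(0.0, 1, n)])   # bottom group
ya = Xa @ np.array([500.0, 30.0]) + rng.normal(0, 20, n)
yb = Xb @ np.array([480.0, 20.0]) + rng.normal(0, 20, n)

ba, bb = ols(Xa, ya), ols(Xb, yb)
gap = ya.mean() - yb.mean()
explained = (Xa.mean(0) - Xb.mean(0)) @ bb        # endowments (characteristics)
unexplained = Xa.mean(0) @ (ba - bb)              # coefficients (returns)
print(gap, explained + unexplained)               # the two parts sum to the gap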
Rostami, Javad; Chen, Jingming; Tse, Peter W.
2017-01-01
Ultrasonic guided waves have been extensively applied for non-destructive testing of plate-like structures, particularly pipes, in the past two decades. If a structure has a simple geometry, the obtained guided wave signals are easy to interpret. However, any small degree of complexity in the geometry, such as contact with other materials, may considerably complicate the interpretation of guided wave signals. The problem deepens if defects have irregular shapes, as with natural corrosion. Signal processing techniques that have been proposed for guided wave analysis are generally good for simple signals obtained in a highly controlled experimental environment. In fact, guided wave signals in a real situation, such as natural corrosion in wall-covered pipes, are much more complicated. Considering pipes in residential buildings that pass through concrete walls, in this paper we introduce Smooth Empirical Mode Decomposition (SEMD) to efficiently separate overlapped guided waves. As empirical mode decomposition (EMD), which is a good candidate for analyzing non-stationary signals, suffers from some shortcomings, wavelet transform was adopted in the sifting stage of EMD to improve its outcome in SEMD. However, the selection of the mother wavelet that best suits our purpose plays an important role. Since in guided wave inspection the incident waves are well known and are usually tone-burst signals, we tailored a complex tone-burst signal to be used as our mother wavelet. In the sifting stage of EMD, wavelet de-noising was applied to eliminate unwanted frequency components from each IMF. SEMD greatly enhances the performance of EMD in guided wave analysis for highly contaminated signals. In our experiment on concrete-covered pipes with natural corrosion, this method not only separates the concrete wall indication clearly in the time domain signal; a natural corrosion defect with complex geometry that was hidden inside the concrete section was also successfully exposed.
Analysis and visualization of single-trial event-related potentials
NASA Technical Reports Server (NTRS)
Jung, T. P.; Makeig, S.; Westerfield, M.; Townsend, J.; Courchesne, E.; Sejnowski, T. J.
2001-01-01
In this study, a linear decomposition technique, independent component analysis (ICA), is applied to single-trial multichannel EEG data from event-related potential (ERP) experiments. Spatial filters derived by ICA blindly separate the input data into a sum of temporally independent and spatially fixed components arising from distinct or overlapping brain or extra-brain sources. Both the data and their decomposition are displayed using a new visualization tool, the "ERP image," that can clearly characterize single-trial variations in the amplitudes and latencies of evoked responses, particularly when sorted by a relevant behavioral or physiological variable. These tools were used to analyze data from a visual selective attention experiment on 28 control subjects plus 22 neurological patients whose EEG records were heavily contaminated with blink and other eye-movement artifacts. Results show that ICA can separate artifactual, stimulus-locked, response-locked, and non-event-related background EEG activities into separate components, a taxonomy not obtained from conventional signal averaging approaches. This method allows: (1) removal of pervasive artifacts of all types from single-trial EEG records, (2) identification and segregation of stimulus- and response-locked EEG components, (3) examination of differences in single-trial responses, and (4) separation of temporally distinct but spatially overlapping EEG oscillatory activities with distinct relationships to task events. The proposed methods also allow the interaction between ERPs and the ongoing EEG to be investigated directly. We studied the between-subject component stability of ICA decomposition of single-trial EEG epochs by clustering components with similar scalp maps and activation power spectra. Components accounting for blinks, eye movements, temporal muscle activity, event-related potentials, and event-modulated alpha activities were largely replicated across subjects. Applying ICA and ERP image visualization to the analysis of sets of single trials from event-related EEG (or MEG) experiments can increase the information available from ERP (or ERF) data.
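A compact sketch of the decomposition step, with scikit-learn's FastICA standing in for the infomax ICA used in the study: mixed channel data are unmixed into temporally independent components, an artifact component is dropped, and the remaining components are projected back to the channels. Shapes follow sklearn's (samples, channels) convention; all signals are synthetic.

import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(6)
t = np.linspace(0, 8, 2000)
sources = np.c_[np.sin(7 * t),                  # oscillatory brain-like activity
                np.sign(np.sin(0.5 * t)),       # slow square wave: blink-like artifact
                rng.laplace(size=t.size)]       # background activity
eeg = sources @ rng.normal(size=(3, 3)).T       # mixed signals at the "electrodes"

ica = FastICA(n_components=3, random_state=0)
acts = ica.fit_transform(eeg)                   # temporally independent activations
keep = [0, 2]                                   # hypothetical: artifact found at index 1 by inspection
cleaned = acts[:, keep] @ ica.mixing_[:, keep].T + ica.mean_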
Dabbs, Gretchen R
2010-10-10
An increasing number of anthropological decomposition studies are utilizing accumulated degree days (ADD) to quantify and estimate the post-mortem interval (PMI) at given decompositional stages, or the number of ADD required for certain events, such as tooth exfoliation, to occur. This study addresses the utility of retroactively applying temperature data from the closest National Weather Service (NWS) station to these calculations as prescribed in the past. Hourly temperature readings were collected for 154 days at a research site in Farmington, AR between June 30 and December 25, 2008. These were converted to average daily temperatures by calculating the mean of the 24 hourly values, following the NWS reporting procedure. These data were compared to comparable data from the Owl Creek and Drake Field NWS stations, the two closest to the research site, located 5.7 and 9.9 km away, respectively. Paired samples t-tests between the research site and each of the NWS stations show significant differences between the average daily temperature data collected at the research station and both Owl Creek (2.0°C, p<0.001) and Drake Field (0.6°C, p<0.001). When applied to a simulated recovery effort, the farther NWS station also proved to provide the better model for the recovery site. Using a published equation for estimating post-mortem interval from ADD and total body decomposition scores (Megyesi et al., 2005 [1]), the Drake Field data produced estimates of PMI more closely mirroring those of the research site than did Owl Creek. This demonstrates that instead of automatically choosing the nearest NWS station, care must be taken when choosing an NWS station for retroactively gathering temperature data for application of PMI estimation techniques using accumulated degree days, to ensure the station adequately reflects temperature conditions at the recovery site.
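A sketch of the ADD bookkeeping described above, together with a hedged rendering of the widely cited Megyesi et al. (2005) regression, log10(ADD) = 0.002 x TBS^2 + 1.81; the constants and base temperature should be checked against the original paper before any casework use.

import numpy as np

def accumulated_degree_days(daily_mean_temps_c, base=0.0):
    # Sum of daily mean temperatures above the (assumed) base temperature.
    t = np.asarray(daily_mean_temps_c, dtype=float)
    return np.clip(t - base, 0.0, None).sum()

def add_from_tbs(tbs):
    # ADD predicted from a Total Body Score, per the published regression.
    return 10 ** (0.002 * tbs**2 + 1.81)

station_temps = [22.1, 24.5, 23.0, 19.8, 21.2]    # e.g. NWS daily means, deg C
print(accumulated_degree_days(station_temps))
print(add_from_tbs(20))                           # ADD consistent with TBS = 20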
DOE Office of Scientific and Technical Information (OSTI.GOV)
Westraadt, J.E., E-mail: johan.westraadt@nmmu.ac.za; Olivier, E.J.; Neethling, J.H.
2015-11-15
Spinodal decomposition (SD) is an important phenomenon in materials science and engineering. For example, it is considered to be responsible for the 475 °C embrittlement of stainless steels comprising the bcc (ferrite) or bct (martensite) phases. Structural characterization of the evolving minute nano-scale concentration fluctuations during SD in the Fe–Cr system is, however, a notable challenge, and has mainly been considered accessible via atom probe tomography (APT) and small-angle neutron scattering. The standard tool for nanostructure characterization, viz. transmission electron microscopy (TEM), has only been successfully applied to late stages of SD when embrittlement is already severe. However, we here demonstrate that the structural evolution in the early stages of SD in binary Fe–Cr, and alloys based on the binary, is accessible via analytical scanning TEM. An Fe–36 wt% Cr alloy aged at 500 °C for 1, 10 and 100 h is investigated using an aberration-corrected microscope, and it is found that highly coherent and interconnected Cr-rich regions develop. The wavelength of decomposition is rather insensitive to the sample thickness and is quantified to 2, 3 and 6 nm after ageing for 1, 10 and 100 h, in reasonable agreement with prior APT analysis. The concentration amplitude is more sensitive to the sample thickness and acquisition parameters, but the TEM analysis is in good agreement with APT analysis for the longest ageing time. These findings open up combinatorial TEM studies where both local crystallography and chemistry are required. - Highlights: • STEM-EELS analysis was successfully applied to resolve early-stage SD in Fe–Cr. • The compositional wavelength measured with STEM-EELS compares well to previous APT studies. • The compositional amplitude measured with STEM-EELS is a function of experimental parameters. • STEM-EELS allows for combinatorial studies of SD using complementary techniques.
Electroencephalographic compression based on modulated filter banks and wavelet transform.
Bazán-Prieto, Carlos; Cárdenas-Barrera, Julián; Blanco-Velasco, Manuel; Cruz-Roldán, Fernando
2011-01-01
Due to the large volume of information generated in an electroencephalographic (EEG) study, compression is needed for storage, processing or transmission for analysis. In this paper we evaluate and compare two lossy compression techniques applied to EEG signals: schemes with decomposition by filter banks or by the wavelet packet transform, seeking the best compression ratio, the best quality and the most efficient real-time implementation. Due to specific properties of EEG signals, we propose a quantization stage adapted to the dynamic range of each band, seeking higher quality. The results show that the compressor with the filter bank performs better than the transform methods. Quantization adapted to the dynamic range significantly enhances the quality.
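A hedged sketch of the transform-plus-quantization idea: a wavelet-packet decomposition of an EEG epoch with a uniform quantizer whose step adapts to each band's dynamic range. The wavelet, tree depth, and bit depth are illustrative; the paper's filter-bank compressor is not reproduced here.

import numpy as np
import pywt

def compress_epoch(x, wavelet="db4", maxlevel=4, bits=8):
    wp = pywt.WaveletPacket(x, wavelet, maxlevel=maxlevel)
    coded = {}
    for node in wp.get_level(maxlevel, "natural"):
        c = node.data
        lo, hi = c.min(), c.max()
        step = (hi - lo) / (2**bits - 1) or 1.0       # step from the band's range
        coded[node.path] = (lo, step, np.round((c - lo) / step).astype(np.uint8))
    return coded

def decompress_epoch(coded, n, wavelet="db4", maxlevel=4):
    wp = pywt.WaveletPacket(None, wavelet, maxlevel=maxlevel)
    for path, (lo, step, q) in coded.items():
        wp[path] = lo + step * q.astype(float)        # dequantize each band
    return wp.reconstruct()[:n]

x = np.random.default_rng(7).normal(size=1024)
x_hat = decompress_epoch(compress_epoch(x), len(x))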
Asymmetric latent semantic indexing for gene expression experiments visualization.
González, Javier; Muñoz, Alberto; Martos, Gabriel
2016-08-01
We propose a new method to visualize gene expression experiments inspired by the latent semantic indexing technique originally proposed in the textual analysis context. Using the correspondence word-gene, document-experiment, we define an asymmetric similarity measure of association for genes that accounts for potential hierarchies in the data, the key to obtaining meaningful gene mappings. We use the polar decomposition to obtain the sources of asymmetry of the similarity matrix, which are later combined with previous knowledge. Genetic classes of genes are identified by means of a mixture model applied in the genes' latent space. We describe the steps of the procedure and we show its utility on the Human Cancer dataset.
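The polar decomposition step can be sketched in a few lines with SciPy: an asymmetric similarity matrix A factors as A = UP with U orthogonal and P symmetric positive semidefinite, so P carries the symmetric association structure and U the asymmetry. The similarity matrix below is a random stand-in for real gene-gene similarities.

import numpy as np
from scipy.linalg import polar

A = np.abs(np.random.default_rng(8).normal(size=(6, 6)))   # asymmetric similarities
U, P = polar(A)              # right polar decomposition: A = U @ P
assert np.allclose(U @ P, A)
assert np.allclose(P, P.T)   # the symmetric factor used for the mapping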
A multiscale decomposition approach to detect abnormal vasculature in the optic disc.
Agurto, Carla; Yu, Honggang; Murray, Victor; Pattichis, Marios S; Nemeth, Sheila; Barriga, Simon; Soliz, Peter
2015-07-01
This paper presents a multiscale method to detect neovascularization in the optic disc (NVD) using fundus images. Our method is applied to a manually selected region of interest (ROI) containing the optic disc. All the vessels in the ROI are segmented by adaptively combining contrast enhancement methods with a vessel segmentation technique. Textural features extracted using multiscale amplitude-modulation frequency-modulation, morphological granulometry, and fractal dimension are used. A linear SVM is used to perform the classification, which is tested by means of 10-fold cross-validation. The performance is evaluated using 300 images achieving an AUC of 0.93 with maximum accuracy of 88%.
Perrin, Stephane; Baranski, Maciej; Froehly, Luc; Albero, Jorge; Passilly, Nicolas; Gorecki, Christophe
2015-11-01
We report a simple method, based on intensity measurements, for the characterization of the wavefront and aberrations produced by micro-optical focusing elements. This method employs the setup presented earlier in [Opt. Express 22, 13202 (2014)] for measurements of the 3D point spread function, on which a basic phase-retrieval algorithm is applied. This combination allows for retrieval of the wavefront generated by the micro-optical element and, in addition, quantification of the optical aberrations through the wavefront decomposition with Zernike polynomials. The optical setup requires only an in-motion imaging system. The technique, adapted for the optimization of micro-optical component fabrication, is demonstrated by characterizing a planoconvex microlens.
A new method for QRS detection in ECG signals using QRS-preserving filtering techniques.
Sharma, Tanushree; Sharma, Kamalesh K
2018-03-28
Detection of QRS complexes in ECG signals is required for various purposes, such as determination of heart rate, feature extraction and classification. The problem of automatic QRS detection in ECG signals is complicated by the presence of noise spectrally overlapping with the QRS frequency range. As a solution to this problem, we propose the use of least-squares-optimisation-based smoothing techniques that suppress the noise peaks in the ECG while preserving the QRS complexes. We also propose a novel nonlinear transformation technique that is applied after the smoothing operations, which equalises the QRS amplitudes without boosting the suppressed noise peaks. After these preprocessing operations, the R-peaks can finally be detected with high accuracy. The proposed technique has a low computational load and, therefore, it can be used for real-time QRS detection in a wearable device such as a Holter monitor or for fast offline QRS detection. The offline and real-time versions of the proposed technique have been evaluated on the standard MIT-BIH database. The offline implementation is found to perform better than state-of-the-art techniques based on wavelet transforms, empirical mode decomposition, etc., and the real-time implementation also shows improved performance over existing real-time QRS detection techniques.
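A hedged sketch of the pipeline shape only: least-squares (Savitzky-Golay) smoothing to suppress noise peaks, a squared-derivative nonlinearity to emphasize QRS energy, and peak picking with a refractory-period constraint. The paper's smoother and its amplitude-equalizing transformation differ in detail; the signal and thresholds here are toys.

import numpy as np
from scipy.signal import savgol_filter, find_peaks

def detect_qrs(ecg, fs):
    win = int(0.06 * fs) | 1                       # ~60 ms window, forced odd
    smooth = savgol_filter(ecg, window_length=win, polyorder=3)
    feature = np.gradient(smooth) ** 2             # emphasize steep QRS slopes
    peaks, _ = find_peaks(feature,
                          height=0.3 * feature.max(),
                          distance=int(0.25 * fs)) # ~250 ms refractory period
    return peaks

fs = 360                                           # MIT-BIH sampling rate
t = np.arange(10 * fs) / fs
ecg = np.sin(2 * np.pi * 1.1 * t) ** 41 + 0.1 * np.random.default_rng(9).normal(size=t.size)
print(detect_qrs(ecg, fs))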
NASA Astrophysics Data System (ADS)
Huang, Yan; Wang, Zhihui
2015-12-01
With the development of FPGA, DSP Builder is widely applied to design system-level algorithms. The CL multi-wavelet algorithm is more advanced and effective than scalar wavelets in signal decomposition. Thus, a system for the CL multi-wavelet based on DSP Builder is designed for the first time in this paper. The system mainly contains three parts: a pre-filtering subsystem, a one-level decomposition subsystem and a two-level decomposition subsystem. It can be converted into the hardware language VHDL by the Signal Compiler block that can be used in Quartus II. Analysis of the energy indicator shows that this system outperforms the Daubechies wavelet in signal decomposition. Furthermore, it has proved to be suitable for the implementation of signal fusion based on SoPC hardware, and it will become a solid foundation in this new field.
NASA Astrophysics Data System (ADS)
Debnath, M.; Santoni, C.; Leonardi, S.; Iungo, G. V.
2017-03-01
The dynamics of the velocity field resulting from the interaction between the atmospheric boundary layer and a wind turbine array can significantly affect the performance of a wind power plant and the durability of wind turbines. In this work, dynamics in wind turbine wakes and instabilities of helicoidal tip vortices are detected and characterized through modal decomposition techniques. The dataset under examination consists of snapshots of the velocity field obtained from large-eddy simulations (LES) of an isolated wind turbine, for which the aerodynamic forcing exerted by the turbine blades on the atmospheric boundary layer is mimicked through the actuator line model. Particular attention is paid to the interaction between the downstream evolution of the helicoidal tip vortices and the alternate vortex shedding from the turbine tower. The LES dataset is interrogated through different modal decomposition techniques, such as proper orthogonal decomposition and dynamic mode decomposition. The dominant wake dynamics are selected for the formulation of a reduced-order model, which consists of a linear time-marching algorithm where the temporal evolution of the flow dynamics is obtained from the previous temporal realization multiplied by a time-invariant operator.
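Of the two techniques named, snapshot POD is the simpler to sketch: with mean-subtracted velocity snapshots as columns of X, the SVD yields spatial modes, their energy ranking, and temporal coefficients for a reduced-order reconstruction. The data below are synthetic stand-ins for LES snapshots.

import numpy as np

rng = np.random.default_rng(10)
n_points, n_snaps = 2000, 120
X = rng.normal(size=(n_points, n_snaps))          # stand-in for LES snapshots
X = X - X.mean(axis=1, keepdims=True)             # remove the temporal mean

U, s, Vt = np.linalg.svd(X, full_matrices=False)
energy = s**2 / np.sum(s**2)                      # fractional modal energy
modes = U[:, :5]                                  # leading spatial POD modes
coeffs = np.diag(s[:5]) @ Vt[:5]                  # their temporal coefficients
X_rom = modes @ coeffs                            # rank-5 reduced-order field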
Ge, Ni-Na; Wei, Yong-Kai; Zhao, Feng; Chen, Xiang-Rong; Ji, Guang-Fu
2014-07-01
The electronic structure and initial decomposition of the high explosive HMX under shock loading are examined. The simulation is performed using quantum molecular dynamics in conjunction with the multi-scale shock technique (MSST). A self-consistent charge density-functional tight-binding (SCC-DFTB) method is adopted. The results show that the N-N-C angle changes drastically under shock-wave compression along lattice vector b at a shock velocity of 11 km/s, which is the main cause of the insulator-to-metal transition in the HMX system. The metallization pressure (about 130 GPa) of condensed-phase HMX is predicted for the first time. We also detect the formation of several key products of condensed-phase HMX decomposition, such as NO2, NO, N2, N2O, H2O, CO, and CO2, all of which have been observed in previous experimental studies. Moreover, the initial decomposition products include H2, because C-H bond breaking is a primary reaction pathway under extreme conditions, which provides new insight into the initial decomposition mechanism of HMX under shock loading at the atomistic level.
Analysis on Vertical Scattering Signatures in Forestry with PolInSAR
NASA Astrophysics Data System (ADS)
Guo, Shenglong; Li, Yang; Zhang, Jingjing; Hong, Wen
2014-11-01
We apply an accurate topographic phase to the Freeman-Durden decomposition of polarimetric SAR interferometry (PolInSAR) data. The cross-correlation matrix obtained from PolInSAR observations can be decomposed into three scattering-mechanism matrices accounting for odd-bounce, double-bounce and volume scattering. We estimate the phase based on the Random Volume over Ground (RVoG) model and use it as the initial input parameter of the numerical method that solves for the decomposition parameters. In addition, the modified volume scattering model introduced by Y. Yamaguchi is applied to the PolInSAR target decomposition in forest areas, rather than the pure random volume scattering proposed by Freeman and Durden, to best fit the actual measured data. This method can accurately retrieve the magnitude associated with each mechanism and its location along the vertical dimension. We test the algorithms with L- and P-band simulated data.
Isothermal Decomposition of Hydrogen Peroxide Dihydrate
NASA Technical Reports Server (NTRS)
Loeffler, M. J.; Baragiola, R. A.
2011-01-01
We present a new method of growing pure solid hydrogen peroxide in an ultra-high vacuum environment and apply it to determine the thermal stability of the dihydrate compound that forms when water and hydrogen peroxide are mixed at low temperatures. Using infrared spectroscopy and thermogravimetric analysis, we quantified the isothermal decomposition of the metastable dihydrate at 151.6 K. This decomposition occurs by fractional distillation through the preferential sublimation of water, which leads to the formation of pure hydrogen peroxide. The results imply that in an astronomical environment where condensed mixtures of H2O2 and H2O are shielded from radiolytic decomposition and warmed to temperatures where sublimation is significant, highly concentrated or even pure hydrogen peroxide may form.
Small-scale thermal studies of volatile homemade explosives
Sandstrom, Mary M.; Brown, Geoffrey W.; Warner, Kirsten F.; ...
2016-01-26
Several homemade or improvised explosive mixtures that either contained volatile components or produced volatile products were examined using standard small-scale safety and thermal (SSST) testing that employed differential scanning calorimetry (DSC) techniques (constant heating rate and standard sample holders). KClO3 and KClO4 mixtures with dodecane exhibited different enthalpy behavior in a vented sample holder than in a sealed sample holder. The standard configuration produced profiles that exhibited only endothermic transitions; the sealed system produced profiles with additional exothermic transitions absent from the standard-configuration profiles. When H2O2/fuel mixtures were examined, the volatilization of the peroxide (endothermic) dominated the profiles. When a sealed sample holder was used, the energetic releases of the mixture could be clearly observed. For AN and AN mixtures, the high-temperature decomposition appears as an intense endothermic event, and even a nominally sealed sample holder did not adequately contain the system. Only when a high-pressure-rated sample holder was used could the high-temperature decomposition of the AN be detected as an exothermic release. The testing was conducted during a proficiency (round-robin type) test that included three U.S. Department of Energy and two U.S. Department of Defense laboratories. In the course of this proficiency test, certain HMEs exhibited thermal behavior that was not adequately accounted for by standard techniques. Further examination of this atypical behavior highlighted issues that may not have been recognized previously because some of these materials are not routinely tested. More importantly, if not recognized, the SSST testing results could lead to inaccurate safety assessments. This study thus provides examples where standard techniques can be applied and results obtained, yet those results may be misleading in establishing thermal properties.
Ananth, D V N; Nagesh Kumar, G V
2016-05-01
With the increase in electric power demand, transmission lines are forced to operate close to their full load, and drastic changes in weather conditions push lines toward their thermal limits, so the system operates with a reduced security margin. To meet the increased power demand, a doubly fed induction generator (DFIG) based wind generation system is a better alternative, and a STATCOM can be adopted to improve power flow capability and increase security. As per modern grid codes, a DFIG needs to operate without losing synchronism during severe grid faults, a capability called low-voltage ride-through (LVRT). Hence, an enhanced field-oriented control technique (EFOC) was adopted in the rotor-side converter of the DFIG to improve power transfer and to improve dynamic and transient stability. A STATCOM is coordinated with the system to obtain better stability and enhanced operation during grid faults. In the EFOC technique, the rotor flux reference changes its value from synchronous speed to zero during the fault, injecting current at the rotor slip frequency. In this process, the DC-offset component of the flux and its decomposition during symmetric and asymmetric faults are controlled. The offset decomposition of flux is oscillatory in conventional field-oriented control, whereas EFOC aims to damp it quickly. This paper mitigates voltage dips and limits surge currents to enhance the operation of the DFIG during symmetrical and asymmetrical faults. System performance was compared with and without a STATCOM for different fault types (single line to ground, double line to ground and triple line to ground) occurring at the point of common coupling, with a very small fault resistance of 0.001 Ω. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Information Visualization Techniques for Effective Cross-Discipline Communication
NASA Astrophysics Data System (ADS)
Fisher, Ward
2013-04-01
Collaboration between research groups in different fields is a common occurrence, but it can often be frustrating due to the absence of a common vocabulary. This lack of a shared context can make expressing important concepts and discussing results difficult. This problem may be further exacerbated when communicating to an audience of laypeople. Without a clear frame of reference, simple concepts are often rendered difficult to understand at best, and unintelligible at worst. An easy way to alleviate this confusion is with the use of clear, well-designed visualizations to illustrate an idea, process or conclusion. There exist a number of well-described machine-learning and statistical techniques which can be used to illuminate the information present within complex high-dimensional datasets. Once the information has been separated from the data, clear communication becomes a matter of selecting an appropriate visualization. Ideally, the visualization is information-rich but data-scarce. Anything from a simple bar chart, to a line chart with confidence intervals, to an animated set of 3D point clouds can be used to render a complex idea as an easily understood image. Several case studies will be presented in this work. In the first study, we will examine how a complex statistical analysis was applied to a high-dimensional dataset, and how the results were succinctly communicated to an audience of microbiologists and chemical engineers. Next, we will examine a technique used to illustrate the concept of the singular value decomposition, as used in the field of computer vision, to a lay audience of undergraduate students from mixed majors. We will then examine a case where a simple animated line plot was used to communicate an approach to signal decomposition, and will finish with a discussion of the tools available to create these visualizations.
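As a concrete example of the SVD illustration mentioned above, the classic lay demonstration reconstructs an image from its k strongest singular components; the short sketch below assumes a 2-D grayscale array and is illustrative only.

```python
import numpy as np

def low_rank_preview(image, k):
    """Rank-k SVD approximation: the visual 'compression' demo often used
    to explain the singular value decomposition to a lay audience."""
    U, s, Vt = np.linalg.svd(image.astype(float), full_matrices=False)
    # Keep only the k strongest components; increasing k sharpens the image.
    return (U[:, :k] * s[:k]) @ Vt[:k]
```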
Modal identification of structures by a novel approach based on FDD-wavelet method
NASA Astrophysics Data System (ADS)
Tarinejad, Reza; Damadipour, Majid
2014-02-01
An important application of system identification in structural dynamics is the determination of natural frequencies, mode shapes and damping ratios during operation, which can then be used for calibrating numerical models. In this paper, the combination of two advanced methods of Operational Modal Analysis (OMA), Frequency Domain Decomposition (FDD) and the Continuous Wavelet Transform (CWT), based on a novel cyclic averaging of correlation functions (CACF) technique, is used for identification of dynamic properties. With this technique, the autocorrelation of averaged correlation functions is used instead of the original signals. The integration of the FDD and CWT methods overcomes their individual deficiencies and takes advantage of their unique capabilities. The FDD method accurately estimates the natural frequencies and mode shapes of structures in the frequency domain. The CWT method, on the other hand, works in the time-frequency domain, decomposing a signal at different frequencies and determining the damping coefficients. In this paper, a new formulation applied to the wavelet transform of the averaged correlation function of an ambient response is proposed. This enables accurate estimation of damping ratios from weak (noise) or strong (earthquake) vibrations and from long or short records. For this purpose, the modified Morlet wavelet, which has two free parameters, is used. The optimum values of these two parameters are obtained by a technique that minimizes the entropy of the wavelet coefficient matrix. The capabilities of the novel FDD-wavelet method in the system identification of various dynamic systems with regular or irregular distributions of mass and stiffness are illustrated. This combined approach is superior to classic methods and yields results that agree well with the exact solutions of the numerical models.
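A minimal FDD sketch consistent with the description (not the authors' CACF-augmented implementation): estimate the output cross-spectral matrix with Welch cross-spectra and take its SVD at each frequency; peaks of the first singular value indicate natural frequencies, and the corresponding singular vectors approximate mode shapes. The nperseg value is an assumed parameter.

```python
import numpy as np
from scipy.signal import csd

def fdd_singular_values(Y, fs, nperseg=1024):
    """Frequency Domain Decomposition sketch. Y: channels x samples."""
    n = Y.shape[0]
    f, _ = csd(Y[0], Y[0], fs=fs, nperseg=nperseg)
    G = np.zeros((len(f), n, n), dtype=complex)     # cross-spectral matrix G(f)
    for i in range(n):
        for j in range(n):
            _, G[:, i, j] = csd(Y[i], Y[j], fs=fs, nperseg=nperseg)
    # SVD at every frequency line; plot svals[:, 0] and pick its peaks.
    svals = np.array([np.linalg.svd(Gf, compute_uv=False) for Gf in G])
    return f, svals
```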
Empirical Mode Decomposition of Geophysical Well-log Data of Bombay Offshore Basin, Mumbai, India
NASA Astrophysics Data System (ADS)
Siddharth Gairola, Gaurav; Chandrasekhar, Enamundram
2016-04-01
Geophysical well-log data manifest the nonlinear behaviour of the physical properties of heterogeneous subsurface layers as a function of depth. Nonlinear data analysis techniques must therefore be used to quantify the degree of heterogeneity in the subsurface lithologies. One such nonlinear, data-adaptive technique is empirical mode decomposition (EMD), which decomposes the data into oscillatory signals of different wavelengths called intrinsic mode functions (IMFs). In the present study, EMD has been applied to the gamma-ray and neutron porosity logs of two wells, Well B and Well C, located in the western offshore basin of India, to perform heterogeneity analysis and compare the results with those obtained from multifractal studies of the same data sets. By establishing a relationship between the IMF number (m) and the mean wavelength associated with each IMF (Im), a heterogeneity index (ρ) associated with the subsurface layers can be determined from the relation Im = kρ^m, where k is a constant. The ρ values bear an inverse relation to the heterogeneity of the subsurface: smaller ρ values indicate higher heterogeneity and vice versa. The ρ values estimated for the different limestone payzones identified in the wells clearly show that Well C has a higher degree of heterogeneity than Well B. This correlates well with the estimated Vshale values for the limestone reservoir zone, which show higher shale content in Well C than in Well B. The ρ values determined for different payzones of both wells will be used to quantify the degree of heterogeneity in different wells. The multifractal behaviour of each IMF of both logs in both wells will be compared and discussed in terms of their heterogeneity indices.
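A sketch of the heterogeneity-index estimate described above. It assumes the PyEMD package (EMD-signal) for the decomposition step; the zero-crossing estimate of the mean wavelength and the log-linear fit of Im = kρ^m are illustrative, not the authors' exact procedure.

```python
import numpy as np
from PyEMD import EMD   # the EMD-signal package; an assumption, any EMD code works

def heterogeneity_index(log, depth_step=1.0):
    """Estimate rho from Im = k * rho**m using the IMFs of a well log."""
    imfs = EMD().emd(log)                       # intrinsic mode functions
    mean_wavelengths = []
    for imf in imfs:
        crossings = np.sum(np.diff(np.sign(imf)) != 0)
        # Two zero crossings per oscillation -> mean wavelength in depth units.
        mean_wavelengths.append(2.0 * len(imf) * depth_step / max(crossings, 1))
    m = np.arange(1, len(imfs) + 1)
    # Fit log(Im) = log(k) + m*log(rho); the slope gives rho.
    slope, _ = np.polyfit(m, np.log(mean_wavelengths), 1)
    return np.exp(slope)
```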
NASA Technical Reports Server (NTRS)
Huff, Timothy L.
2002-01-01
Thermogravimetric analysis (TGA) is widely employed in the thermal characterization of non-metallic materials, yielding valuable information on the decomposition characteristics of a sample over a wide temperature range. However, a potential wealth of chemical information is lost during the process, as the gases generated during thermal decomposition escape through the exhaust line. Fourier transform infrared spectroscopy (FT-IR) is a powerful analytical technique for identifying chemical constituents in any material state; in this application, the gas phase. By linking these two techniques, evolved gases generated during the TGA process are directed into an appropriately equipped infrared spectrometer for chemical speciation. Consequently, both thermal decomposition and chemical characterization of a material may be obtained in a single sample run. In practice, a heated transfer line connects the two instruments while a purge gas stream directs the evolved gases into the FT-IR. The purge gas can be either high-purity air or an inert gas such as nitrogen, allowing oxidative and pyrolytic processes to be examined, respectively. The FT-IR data are collected in real time, allowing continuous monitoring of chemical compositional changes over the course of thermal decomposition. Using this coupled technique, an array of diverse materials has been examined, including composites, plastics, rubber, fiberglass epoxy resins, polycarbonates, silicones, lubricants and fluorocarbon materials. The benefit of combining these two methodologies is of particular importance in the aerospace community, where newly developed materials have little available reference data. By providing both thermal and chemical data simultaneously, a more definitive and comprehensive characterization of the material is possible. Additionally, this procedure has been found to be a viable screening technique for certain materials, with the generated data useful in selecting other appropriate analytical procedures for further material characterization.
A general solution strategy of modified power method for higher mode solutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Peng; Lee, Hyunsuk; Lee, Deokjung, E-mail: deokjung@unist.ac.kr
2016-01-15
A general solution strategy of the modified power iteration method for calculating higher eigenmodes has been developed and applied in continuous energy Monte Carlo simulation. The new approach adopts four features: 1) eigendecomposition of the transfer matrix, 2) weight cancellation for higher modes, 3) population control with higher mode weights, and 4) a stabilization technique for statistical fluctuations using multi-cycle accumulations. The numerical tests of neutron transport eigenvalue problems successfully demonstrate that the new strategy can significantly accelerate the fission source convergence with stable convergence behavior while obtaining multiple higher eigenmodes at the same time. The advantages of the new strategy can be summarized as 1) the replacement of the cumbersome solution step of high-order polynomial equations required by Booth's original method with a simple matrix eigendecomposition, 2) faster fission source convergence in inactive cycles, 3) more stable behavior in both inactive and active cycles, and 4) smaller variances in active cycles. Advantages 3 and 4 can be attributed to the lower sensitivity of the new strategy to statistical fluctuations due to the multi-cycle accumulations. The application of the modified power method to continuous energy Monte Carlo simulation and the higher eigenmodes up to 4th order are reported for the first time in this paper. Highlights: •The modified power method is applied to continuous energy Monte Carlo simulation. •A transfer matrix is introduced to generalize the modified power method. •All-mode-based population control is applied to obtain the higher eigenmodes. •Statistical fluctuations can be greatly reduced using accumulated tally results. •Fission source convergence is accelerated with higher mode solutions.
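For illustration only, here is a deterministic power-iteration-with-deflation sketch for the higher eigenmodes of a symmetric matrix. The paper's method operates on Monte Carlo fission sources with weight cancellation and population control, which this toy version does not model.

```python
import numpy as np

def higher_modes(A, n_modes, iters=500):
    """Power iteration with deflation for a symmetric matrix A:
    returns the n_modes largest-magnitude eigenpairs."""
    n = A.shape[0]
    modes, eigvals = [], []
    for _ in range(n_modes):
        x = np.random.rand(n)
        for _ in range(iters):
            x = A @ x
            for v in modes:                  # project out already-converged modes
                x -= (v @ x) * v
            x /= np.linalg.norm(x)
        eigvals.append(x @ A @ x)            # Rayleigh quotient estimate
        modes.append(x)
    return np.array(eigvals), np.array(modes)
```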
NASA Astrophysics Data System (ADS)
Trugman, Daniel Taylor
The complexity of the earthquake rupture process makes earthquakes inherently unpredictable. Seismic hazard forecasts often presume that the rate of earthquake occurrence can be adequately modeled as a space-time homogeneous or stationary Poisson process and that the relation between the dynamical source properties of small and large earthquakes obeys self-similar scaling relations. While these simplified models provide useful approximations and encapsulate the first-order statistical features of the historical seismic record, they are inconsistent with the complexity underlying earthquake occurrence and can lead to misleading assessments of seismic hazard when applied in practice. The six principal chapters of this thesis explore the extent to which the behavior of real earthquakes deviates from these simplified models, and the implications that the observed deviations have for our understanding of earthquake rupture processes and seismic hazard. Chapter 1 provides a brief thematic overview and introduction to the scope of this thesis. Chapter 2 examines the complexity of the 2010 M7.2 El Mayor-Cucapah earthquake, focusing on the relation between its unexpected and unprecedented occurrence and anthropogenic stresses from the nearby Cerro Prieto Geothermal Field. Chapter 3 compares long-term changes in seismicity within California's three largest geothermal fields in an effort to characterize the relative influence of natural and anthropogenic stress transients on local seismic hazard. Chapter 4 describes a hybrid, hierarchical clustering algorithm that can be used to relocate earthquakes using waveform cross-correlation, and applies the new algorithm to study the spatiotemporal evolution of two recent seismic swarms in western Nevada. Chapter 5 describes a new spectral decomposition technique that can be used to analyze the dynamic source properties of large datasets of earthquakes, and applies this approach to revisit the question of self-similar scaling of southern California seismicity. Chapter 6 builds upon these results and applies the same spectral decomposition technique to examine the source properties of several thousand recent earthquakes in southern Kansas that are likely human-induced by massive oil and gas operations in the region. Chapter 7 studies the connection between source spectral properties and earthquake hazard, focusing on spatial variations in dynamic stress drop and its influence on ground motion amplitudes. Finally, Chapter 8 provides a summary of the key findings of and relations between these studies, and outlines potential avenues of future research.
The wider determinants of inequalities in health: a decomposition analysis
2011-01-01
Background The common starting point of many studies scrutinizing the factors underlying health inequalities is that material, cultural-behavioural, and psycho-social factors affect the distribution of health systematically through income, education, occupation, wealth or similar indicators of socioeconomic structure. However, little is known about whether, and to what extent, these factors can assert systematic influence on the distribution of health of a population independent of the effects channelled through income, education, or wealth. Methods Using representative data from the German Socioeconomic Panel, we apply Fields' regression-based decomposition techniques to decompose variations in health into their sources. Controlling for income, education, occupation, and wealth, we assess the relative importance of the explanatory factors over and above their effect on the variation in health channelled through the commonly applied measures of socioeconomic status. Results The analysis suggests that three main factors persistently contribute to the variance in health: the capability score, cultural-behavioural variables and, to a lesser extent, the materialist approach. Of the three, the capability score illustrates the explanatory power of interaction and compound effects, as it captures the individual's socioeconomic, social, and psychological resources in relation to his/her exposure to life challenges. Conclusion Models that take a reductionist perspective and do not allow for the possibility that health inequalities are generated by factors over and above their effect on the variation in health channelled through one of the socioeconomic measures are underspecified and may fail to capture the determinants of health inequalities. PMID:21791075
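A minimal sketch of a Fields-style regression-based variance decomposition, in which the share attributed to factor k is b_k·cov(x_k, y)/var(y). Variable names are hypothetical and the paper's actual specification is richer.

```python
import numpy as np

def fields_shares(X, y, names):
    """Fields-style decomposition sketch: share of factor k is
    b_k * cov(x_k, y) / var(y). X: observations x factors."""
    X1 = np.column_stack([np.ones(len(y)), X])           # add intercept
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)        # OLS coefficients
    shares = {name: beta[k + 1] * np.cov(X[:, k], y)[0, 1] / np.var(y, ddof=1)
              for k, name in enumerate(names)}
    shares["residual"] = 1.0 - sum(shares.values())      # unexplained variance share
    return shares
```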
Tailored multivariate analysis for modulated enhanced diffraction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caliandro, Rocco; Guccione, Pietro; Nico, Giovanni
2015-10-21
Modulated enhanced diffraction (MED) is a technique allowing the dynamic structural characterization of crystalline materials subjected to an external stimulus, which is particularly suited for in situ and operando structural investigations at synchrotron sources. Contributions from the (active) part of the crystal system that varies synchronously with the stimulus can be extracted by an offline analysis, which can only be applied in the case of periodic stimuli and linear system responses. In this paper a new decomposition approach based on multivariate analysis is proposed. The standard principal component analysis (PCA) is adapted to treat MED data: specific figures of merit based on their scores and loadings are found, and the directions of the principal components obtained by PCA are modified to maximize such figures of merit. As a result, a general method to decompose MED data, called optimum constrained components rotation (OCCR), is developed, which produces very precise results on simulated data, even in the case of nonperiodic stimuli and/or nonlinear responses. The multivariate analysis approach is able to supply in one shot both the diffraction pattern related to the active atoms (through the OCCR loadings) and the time dependence of the system response (through the OCCR scores). When applied to real data, OCCR was able to supply only the latter information, as the former was hindered by changes in abundances of different crystal phases, which occurred besides structural variations in the specific case considered. To develop a decomposition procedure able to cope with this combined effect represents the next challenge in MED analysis.
The Distributed Diagonal Force Decomposition Method for Parallelizing Molecular Dynamics Simulations
Boršnik, Urban; Miller, Benjamin T.; Brooks, Bernard R.; Janežič, Dušanka
2011-01-01
Parallelization is an effective way to reduce the computational time needed for molecular dynamics simulations. We describe a new parallelization method, the distributed-diagonal force decomposition method, with which we extend and improve the existing force decomposition methods. Our new method requires less data communication during molecular dynamics simulations than replicated data and current force decomposition methods, increasing the parallel efficiency. It also dynamically load-balances the processors' computational load throughout the simulation. The method is readily implemented in existing molecular dynamics codes and it has been incorporated into the CHARMM program, allowing its immediate use in conjunction with the many molecular dynamics simulation techniques that are already present in the program. We also present the design of the Force Decomposition Machine, a cluster of personal computers and networks that is tailored to running molecular dynamics simulations using the distributed diagonal force decomposition method. The design is expandable and provides various degrees of fault resilience. This approach is easily adaptable to computers with Graphics Processing Units because it is independent of the processor type being used. PMID:21793007
NASA Astrophysics Data System (ADS)
Liu, Peng; Wang, Yanfei
2018-04-01
We study problems associated with seismic data decomposition and migration imaging. We first represent the seismic data utilizing Gaussian beam basis functions, which have nonzero curvature, and then consider the sparse decomposition technique. The sparse decomposition problem is an l0-norm constrained minimization problem. In solving the l0-norm minimization, a polynomial Radon transform is performed to achieve sparsity, and a fast gradient descent method is used to calculate the waveform functions. The waveform functions can subsequently be used for sparse Gaussian beam migration. Compared with traditional sparse Gaussian beam methods, the seismic data can be properly reconstructed employing fewer Gaussian beams with nonzero initial curvature. The migration approach described in this paper is more efficient than the traditional sparse Gaussian beam migration.
Becky A. Ball; Mark D. Hunter; John S. Kominoski; Christopher M. Swan; Mark A. Bradford
2008-01-01
Although litter decomposition is a fundamental ecological process, most of our understanding comes from studies of single-species decay. Recently, litter-mixing studies have tested whether monoculture data can be applied to mixed-litter systems. These studies have mainly attempted to detect non-additive effects of litter mixing, which address potential consequences of...
Application of composite dictionary multi-atom matching in gear fault diagnosis.
Cui, Lingli; Kang, Chenhui; Wang, Huaqing; Chen, Peng
2011-01-01
Sparse decomposition based on matching pursuit is an adaptive sparse representation method for signals. This paper proposes a composite-dictionary multi-atom matching decomposition and reconstruction algorithm, introducing threshold de-noising in the reconstruction step. Based on the structural characteristics of gear fault signals, a composite dictionary combining an impulse time-frequency dictionary and a Fourier dictionary was constructed, and a genetic algorithm was applied to search for the best-matching atom. Analysis of simulated gear fault signals demonstrated the effectiveness of the hard threshold, and the impulse and harmonic characteristic components could be extracted separately. The robustness of the composite-dictionary multi-atom matching algorithm at different noise levels was also investigated. To address the effect of data length on computational efficiency, an improved segmented decomposition and reconstruction algorithm was proposed, significantly enhancing the efficiency of the decomposition. The multi-atom matching algorithm is moreover shown to be superior to single-atom matching in both computational efficiency and robustness. Finally, the algorithm was applied to engineering gear fault signals, with good results.
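A bare-bones matching pursuit over a composite dictionary, for illustration. The dictionary columns are assumed unit-norm (for example, impulse time-frequency atoms stacked beside Fourier atoms), and an exhaustive correlation search stands in for the genetic-algorithm atom search used in the paper.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy matching pursuit. dictionary: samples x atoms, unit-norm columns.
    Returns the sparse reconstruction and the final residual."""
    residual = signal.astype(float).copy()
    recon = np.zeros_like(residual)
    for _ in range(n_atoms):
        corr = dictionary.T @ residual           # correlate every atom
        best = np.argmax(np.abs(corr))           # best-matching atom
        coef = corr[best]
        recon += coef * dictionary[:, best]      # add atom to the reconstruction
        residual -= coef * dictionary[:, best]   # remove it from the residual
    return recon, residual
```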
Foster, Ken; Anwar, Nasim; Pogue, Rhea; Morré, Dorothy M.; Keenan, T. W.; Morré, D. James
2003-01-01
Seasonal decomposition analyses were applied to the statistical evaluation of an oscillating plasma membrane NADH oxidase activity with a temperature-compensated period of 24 min. The decomposition fits were used to validate the cyclic oscillatory pattern. Three measured values, the mean absolute percentage error (MAPE), a measure of the periodic oscillation, the mean absolute deviation (MAD), a measure of the absolute average deviations from the fitted values, and the mean squared deviation (MSD), a measure of the deviation from the fitted values, plus R-squared and the Henriksson-Merton p value, were used to evaluate accuracy. Decomposition was carried out by fitting a trend line to the data, then detrending the data, if necessary, by subtracting the trend component. The data, with or without detrending, were then smoothed by subtracting a centered moving average of length equal to the period length determined by Fourier analysis. Finally, the time series were decomposed into cyclic and error components. The findings not only validate the periodic nature of the major oscillations but also suggest that the minor intervening fluctuations recur within each period with a reproducible pattern. PMID:19330112
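The procedure described (trend fit, detrending, centered-moving-average smoothing, splitting into cyclic and error components) is close to what statsmodels' seasonal_decompose performs; below is a sketch on a synthetic 24-minute oscillation. All numbers are hypothetical, not the paper's measurements.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Hypothetical oscillation: 24 min period, sampled once per minute.
t = np.arange(240)
activity = 0.05 * t + np.sin(2 * np.pi * t / 24) + np.random.normal(0, 0.2, t.size)
series = pd.Series(activity,
                   index=pd.date_range("2003-01-01", periods=t.size, freq="min"))

# Trend extraction, centered-moving-average smoothing and decomposition
# into trend, periodic (cyclic) and residual (error) components in one call.
result = seasonal_decompose(series, model="additive", period=24)
print(result.seasonal.head())
print("residual std:", result.resid.dropna().std())
```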
Advanced Background Subtraction Applied to Aeroacoustic Wind Tunnel Testing
NASA Technical Reports Server (NTRS)
Bahr, Christopher J.; Horne, William C.
2015-01-01
An advanced form of background subtraction is presented and applied to aeroacoustic wind tunnel data. A variant of this method has seen use in other fields such as climatology and medical imaging. The technique, based on an eigenvalue decomposition of the background noise cross-spectral matrix, is robust in situations where isolated background auto-spectral levels are measured to be higher than the levels of the combined source and background signals. It also provides an alternative estimate of the cross-spectrum, which might previously have been poorly defined for low signal-to-noise-ratio measurements. Simulated results indicate performance similar to conventional background subtraction when the subtracted spectra are weaker than the true contaminating background levels, and superior performance when the subtracted spectra are stronger. Experimental results show limited success in recovering signal behavior for data where conventional background subtraction fails. They also demonstrate the new subtraction technique's ability to maintain a proper coherence relationship in the modified cross-spectral matrix. Beamforming and deconvolution results indicate the method can successfully separate sources. Results also show a reduced need for diagonal removal in phased-array processing, at least for the limited data sets considered.
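One plausible eigenvalue-based subtraction step consistent with this description (not necessarily the authors' exact algorithm): subtract the background cross-spectral matrix and project the difference onto its non-negative eigenspace, so the result remains a valid, positive semi-definite CSM even when background levels locally exceed the combined signal.

```python
import numpy as np

def subtract_background(C_total, C_bg):
    """Eigenvalue-decomposition background subtraction sketch for Hermitian
    cross-spectral matrices at one frequency line."""
    D = C_total - C_bg
    w, V = np.linalg.eigh(D)           # Hermitian eigendecomposition
    w = np.clip(w, 0.0, None)          # discard negative-energy directions
    return (V * w) @ V.conj().T        # reassemble a valid PSD matrix
```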
Synthesis and Characterization of Cholesterol Nano Particles by Using w/o Microemulsion Technique
NASA Astrophysics Data System (ADS)
Vyas, Poorvesh M.; Vasant, Sonal R.; Hajiyani, Rakesh R.; Joshi, Mihir J.
2010-10-01
Cholesterol is one of the most abundant and well-known steroids in the animal kingdom. Cholesterol-rich micro-emulsions and nano-emulsions are useful for the treatment of breast cancer and gynecologic cancers. Nanoparticles of cholesterol and other pharmaceutically important materials have been reported. In the present investigation, nanoparticles of cholesterol were synthesized by a direct precipitation technique using a triton X-100/water/n-butanol micro-emulsion. The average particle size of the cholesterol nanoparticles, estimated by applying Scherrer's formula to the powder X-ray diffraction pattern, was found to be 22 nm. The nanoparticles were observed using TEM, and the particle size was found to lie in the range 15-31 nm. The particle size distribution was studied through DLS. The nanoparticles were characterized using FT-IR spectroscopy, and the force constants were calculated for the O-H, C-H and C-O bonds. The thermal response of the cholesterol nanoparticles was studied by TGA, which showed that the nanoparticles were stable up to 200 °C and then decomposed. Kinetic and thermodynamic parameters of the decomposition process were also calculated by applying the Coats and Redfern formula to the thermogram.
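For reference, the Scherrer estimate mentioned above is D = Kλ/(β cos θ). A worked example with illustrative numbers follows; the peak width and angle are assumptions, not the paper's measured pattern.

```python
import numpy as np

# Scherrer estimate D = K*lambda / (beta * cos(theta)); illustrative numbers only.
K = 0.9                     # shape factor
lam = 0.15406               # Cu K-alpha wavelength, nm
beta = np.deg2rad(0.40)     # peak FWHM converted to radians (assumed)
theta = np.deg2rad(15.0)    # Bragg angle, i.e. half of 2-theta (assumed)
D = K * lam / (beta * np.cos(theta))
print(f"crystallite size ~ {D:.1f} nm")   # ~20 nm, comparable to the reported 22 nm
```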
Domain decomposition for a mixed finite element method in three dimensions
Cai, Z.; Parashkevov, R.R.; Russell, T.F.; Wilson, J.D.; Ye, X.
2003-01-01
We consider the solution of the discrete linear system resulting from a mixed finite element discretization applied to a second-order elliptic boundary value problem in three dimensions. Based on a decomposition of the velocity space, these equations can be reduced to a discrete elliptic problem by eliminating the pressure through the use of substructures of the domain. The practicality of the reduction relies on a local basis, presented here, for the divergence-free subspace of the velocity space. We consider additive and multiplicative domain decomposition methods for solving the reduced elliptic problem, and their uniform convergence is established.
NASA Astrophysics Data System (ADS)
Yamamoto, Takanori; Bannai, Hideo; Nagasaki, Masao; Miyano, Satoru
We present new decomposition heuristics for finding the optimal solution of the maximum-weight connected graph problem, which is known to be NP-hard. Previous optimal algorithms for the problem decompose the input graph into subgraphs using heuristics based on node degree. We propose new heuristics based on betweenness centrality measures, and show through computational experiments that they tend to reduce the number of subgraphs in the decomposition, and therefore can reduce the computational time needed to find the optimal solution. The method is further applied to the analysis of biological pathway data.
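A toy version of this idea, assuming networkx and an undirected input graph: rank nodes by betweenness centrality and remove the top-ranked ones so the graph falls apart into smaller subgraphs for the exact solver. The authors' decomposition is more elaborate; this only illustrates the centrality-guided splitting.

```python
import networkx as nx

def split_by_betweenness(G, n_cuts=1):
    """Remove the highest-betweenness nodes so the graph decomposes into
    smaller connected subgraphs."""
    bc = nx.betweenness_centrality(G)
    cut = sorted(bc, key=bc.get, reverse=True)[:n_cuts]
    H = G.copy()
    H.remove_nodes_from(cut)
    parts = [H.subgraph(c).copy() for c in nx.connected_components(H)]
    return cut, parts
```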
Real-time simulation of biological soft tissues: a PGD approach.
Niroomandi, S; González, D; Alfaro, I; Bordeu, F; Leygue, A; Cueto, E; Chinesta, F
2013-05-01
We introduce here a novel approach for the numerical simulation of nonlinear, hyperelastic soft tissues at the kilohertz feedback rates necessary for haptic rendering. This approach is based upon the use of proper generalized decomposition techniques, a generalization of proper orthogonal decomposition (POD). Proper generalized decomposition can be considered a means of a priori model order reduction and provides a physics-based meta-model without the need for prior computer experiments. The suggested strategy is thus composed of an offline phase, in which a general meta-model is computed, and an online evaluation phase in which results are obtained in real time. Results are provided that show the potential of the proposed technique, together with benchmark tests that show the accuracy of the method. Copyright © 2013 John Wiley & Sons, Ltd.
Decomposition of the Inequality of Income Distribution by Income Types—Application for Romania
NASA Astrophysics Data System (ADS)
Andrei, Tudorel; Oancea, Bogdan; Richmond, Peter; Dhesi, Gurjeet; Herteliu, Claudiu
2017-09-01
This paper identifies the salient factors that characterize the inequality of the income distribution for Romania. Data analysis is rigorously carried out using techniques from classical statistics, in particular the Theil index, and a decomposition of the inequalities measured by the Theil index is performed. The study relies on an exhaustive data set (11.1 million records for 2014) of the total personal gross income of Romanian citizens.
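For context, the Theil T index and its standard between/within subgroup decomposition can be computed in a few lines. This sketch decomposes by population groups, whereas the paper decomposes by income types, so it illustrates the index rather than the paper's exact procedure; positive incomes are assumed.

```python
import numpy as np

def theil(x):
    """Theil T index of a vector of positive incomes."""
    x = np.asarray(x, dtype=float)
    r = x / x.mean()
    return float(np.mean(r * np.log(r)))

def theil_decomposition(incomes, groups):
    """Decomposition T = sum_g s_g*T_g + sum_g s_g*ln(mu_g/mu),
    where s_g is the income share of group g."""
    x, g = np.asarray(incomes, float), np.asarray(groups)
    mu, total = x.mean(), theil(x)
    within = between = 0.0
    for label in np.unique(g):
        xg = x[g == label]
        s = xg.sum() / x.sum()              # income share of the group
        within += s * theil(xg)
        between += s * np.log(xg.mean() / mu)
    return {"total": total, "within": within, "between": between}
```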
ERIC Educational Resources Information Center
Humphreys, Patrick; Wisudha, Ayleen
As a demonstration of the application of heuristic devices to decision-theoretical techniques, an interactive computer program known as MAUD (Multiattribute Utility Decomposition) has been designed to support decision or choice problems that can be decomposed into component factors, or to act as a tool for investigating the microstructure of a…
NASA Astrophysics Data System (ADS)
Yehya, F.; Chaudhary, A. K.; Srinivas, D.; Muralidharan, K.
2015-11-01
We report a novel time-resolved photoacoustic-based technique for studying the thermal decomposition mechanisms of secondary explosives such as RDX (hexahydro-1,3,5-trinitro-1,3,5-triazine), picric acid, 4,6-dinitro-5-(4-nitro-1H-imidazol-1-yl)-1H-benzo[d][1,2,3]triazole, and 5-chloro-1-(4-nitrophenyl)-1H-tetrazole. A comparison of the thermal decomposition mechanisms of these secondary explosives was made by detecting the NO2 molecules released under controlled pyrolysis between 25 and 350 °C. The results show excellent agreement with thermogravimetric and differential thermal analysis (TGA-DTA) results. A specially designed stainless-steel PA cell was filled with explosive vapor and pumped using second-harmonic (λ = 532 nm) pulses of 7 ns duration at a 10 Hz repetition rate from a Q-switched Nd:YAG laser. The combination of PA and TGA-DTA techniques enables the study of NO2 generation, and this method can be used to gauge the performance of these explosives as rocket fuels. The minimum detection limits for the four explosives ranged from 38 ppmv to 69 ppbv, depending on their respective vapor pressures.
On the use of the singular value decomposition for text retrieval
DOE Office of Scientific and Technical Information (OSTI.GOV)
Husbands, P.; Simon, H.D.; Ding, C.
2000-12-04
The use of the Singular Value Decomposition (SVD) has been proposed for text retrieval in several recent works. This technique uses the SVD to project very high dimensional document and query vectors into a low dimensional space. In this new space it is hoped that the underlying structure of the collection is revealed, thus enhancing retrieval performance. Theoretical results have provided some evidence for this claim, and to some extent experiments have confirmed it. However, these studies have mostly used small test collections and simplified document models. In this work we investigate the use of the SVD on large document collections. We show that, if interpreted as a mechanism for representing the terms of the collection, this technique alone is insufficient for dealing with the variability in term occurrence. Section 2 introduces the text retrieval concepts necessary for our work. A short description of our experimental architecture is presented in Section 3. Section 4 describes how term occurrence variability affects the SVD and then shows how the decomposition influences retrieval performance. A possible way of improving SVD-based techniques is presented in Section 5, and conclusions are given in Section 6.
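A minimal latent-semantic-indexing sketch of the kind evaluated in this work: truncate the SVD of the term-document matrix and rank documents by cosine similarity in the reduced space. The rank k and the simplified query projection are assumptions.

```python
import numpy as np
from scipy.sparse.linalg import svds

def lsi_scores(term_doc, queries, k=100):
    """LSI sketch. term_doc: terms x docs (sparse or dense);
    queries: terms x n_queries. Returns a queries x docs similarity matrix."""
    U, s, Vt = svds(term_doc.astype(float), k=k)     # truncated SVD
    docs = (s[:, None] * Vt).T                       # document coordinates, docs x k
    q = queries.T @ U                                # project queries into the k-space
    docs /= np.linalg.norm(docs, axis=1, keepdims=True)
    q /= np.linalg.norm(q, axis=1, keepdims=True)
    return q @ docs.T                                # cosine similarities
```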
Evaluation and characterization of the methane-carbon dioxide decomposition reaction
NASA Technical Reports Server (NTRS)
Davenport, R. J.; Schubert, F. H.; Shumar, J. W.; Steenson, T. S.
1975-01-01
A program was conducted to evaluate and characterize the carbon dioxide-methane (CO2-CH4) decomposition reaction, i.e., CO2 + CH4 = 2C + 2H2O. The primary objective was to determine the feasibility of applying this reaction at low temperatures as a technique for recovering the oxygen (O2) remaining in the CO2 which exits mixed with CH4 from a Sabatier CO2 reduction subsystem (as part of an air revitalization system of a manned spacecraft). A test unit was designed, fabricated, and assembled for characterizing the performance of various catalysts for the reaction and ultraviolet activation of the CH4 and CO2. The reactor included in the test unit was designed to have sufficient capacity to evaluate catalyst charges of up to 76 g (0.17 lb). The test stand contained the necessary instrumentation and controls to obtain the data required to characterize the performance of the catalysts and sensitizers tested: flow control and measurement, temperature control and measurement, product and inlet gas analysis, and pressure measurement. A product assurance program was performed implementing the concepts of quality control and safety into the program effort.
Computational Chemistry and Lubrication
NASA Technical Reports Server (NTRS)
Zehe, Michael J.
1998-01-01
Members of NASA Lewis Research Center's Tribology and Surface Science Branch are applying high-level computational chemistry techniques to the development of new lubrication systems for space applications and for future advanced aircraft engines. The next generation of gas turbine engines will require a liquid lubricant to function at temperatures in excess of 350 C in oxidizing environments. Conventional hydrocarbon-based lubricants are incapable of operating in these extreme environments, but a class of compounds known as the perfluoropolyether (PFAE) liquids shows promise for such applications. These commercially available products are already being used as lubricants in conditions where low vapor pressure and chemical stability are crucial, such as in satellite bearings and composite disk platters. At higher temperatures, however, these compounds undergo a decomposition process that is assisted (catalyzed) by metal and metal oxide bearing surfaces. This decomposition process severely limits the applicability of PFAEs at higher temperatures. A great deal of laboratory experimentation has revealed that the extent of fluid degradation depends on the chemical properties of the bearing surface materials. Lubrication engineers would like to understand the chemical breakdown mechanism to design a less vulnerable PFAE or to develop a chemical additive to block this degradation.
3-D discrete shearlet transform and video processing.
Negi, Pooran Singh; Labate, Demetrio
2012-06-01
In this paper, we introduce a digital implementation of the 3-D shearlet transform and illustrate its application to problems of video denoising and enhancement. The shearlet representation is a multiscale pyramid of well-localized waveforms defined at various locations and orientations, which was introduced to overcome the limitations of traditional multiscale systems in dealing with multidimensional data. While the shearlet approach shares the general philosophy of curvelets and surfacelets, it is based on a very different mathematical framework, which is derived from the theory of affine systems and uses shearing matrices rather than rotations. This allows a natural transition from the continuous setting to the digital setting and a more flexible mathematical structure. The 3-D digital shearlet transform algorithm presented in this paper consists of a cascade of a multiscale decomposition and a directional filtering stage. The filters employed in this decomposition are implemented as finite-length filters, which ensures that the transform is local and numerically efficient. To illustrate its performance, the 3-D discrete shearlet transform is applied to problems of video denoising and enhancement, and compared against other state-of-the-art multiscale techniques, including curvelets and surfacelets.
NASA Astrophysics Data System (ADS)
Mieloszyk, M.; Opoka, S.; Ostachowicz, W.
2015-07-01
This paper presents an application of Fibre Bragg Grating (FBG) sensors for Structural Health Monitoring (SHM) of an offshore wind energy support structure model. The analysed structure is a tripod equipped with 16 FBG sensors. From the wide variety of Operational Modal Analysis (OMA) methods, the Frequency Domain Decomposition (FDD) technique is used in this paper, under the assumption that the input loading is similar to white-noise excitation. The FDD method can be applied using different sets of sensors, i.e., one containing all FBG sensors and another with sensors localised only on a particular tripod leg. The cases considered during the investigation were damaged and undamaged scenarios and different support conditions. The damage was simulated as a dismantled flange on an upper brace in one of the tripod legs. First, the model was fixed to an anti-shaker table and investigated in air under impulse excitations. Next, the tripod was submerged in a water basin in order to check the quality of the measurement set-up under different environmental conditions. In this case the model was excited by regular waves.
NASA Astrophysics Data System (ADS)
Heinkenschloss, Matthias
2005-01-01
We study a class of time-domain decomposition-based methods for the numerical solution of large-scale linear quadratic optimal control problems. Our methods are based on a multiple shooting reformulation of the linear quadratic optimal control problem as a discrete-time optimal control (DTOC) problem. The optimality conditions for this DTOC problem lead to a linear block tridiagonal system. The diagonal blocks are invertible and are related to the original linear quadratic optimal control problem restricted to smaller time-subintervals. This motivates the application of block Gauss-Seidel (GS)-type methods for the solution of the block tridiagonal systems. Numerical experiments show that the spectral radii of the block GS iteration matrices are larger than one for typical applications, but that the eigenvalues of the iteration matrices decay to zero quickly. Hence, while the GS method is not expected to converge for typical applications, it can be effective as a preconditioner for Krylov-subspace methods. This is confirmed by our numerical tests. A byproduct of this research is the insight that certain instantaneous control techniques can be viewed as the application of one step of the forward block GS method applied to the DTOC optimality system.
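For illustration, here is a scalar forward Gauss-Seidel sweep of the kind that, applied blockwise to the tridiagonal optimality system, serves as the preconditioner discussed above. This toy version ignores the block structure and simply performs sweeps on a generic matrix.

```python
import numpy as np

def gauss_seidel_sweep(A, b, x, sweeps=1):
    """Forward Gauss-Seidel sweeps for A x = b, usable as a preconditioner
    application inside a Krylov solver (scalar analogue of the block method)."""
    n = len(b)
    for _ in range(sweeps):
        for i in range(n):
            # Use already-updated entries x[:i] and old entries x[i+1:].
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x
```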
NASA Astrophysics Data System (ADS)
Guarnieri, Fernando L.; Tsurutani, Bruce T.; Vieira, Luis E. A.; Hajra, Rajkumar; Echer, Ezequiel; Mannucci, Anthony J.; Gonzalez, Walter D.
2018-01-01
The purpose of this study is to present a wavelet interactive filtering and reconstruction technique and apply this to the solar wind magnetic field components detected at the L1 Lagrange point ˜ 0.01 AU upstream of the Earth. These filtered interplanetary magnetic field (IMF) data are fed into a model to calculate a time series which we call AE∗. This model was adjusted assuming that magnetic reconnection associated with southward-directed IMF Bz is the main mechanism transferring energy into the magnetosphere. The calculated AE∗ was compared to the observed AE (auroral electrojet) index using cross-correlation analysis. The results show correlations as high as 0.90. Empirical removal of the high-frequency, short-wavelength Alfvénic component in the IMF by wavelet decomposition is shown to dramatically improve the correlation between AE∗ and the observed AE index. It is envisioned that this AE∗ can be used as the main input for a model to forecast relativistic electrons in the Earth's outer radiation belts, which are delayed by ˜ 1 to 2 days from intense AE events.
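A sketch of the kind of wavelet filtering described, assuming the PyWavelets package; the wavelet family and the number of detail levels zeroed are assumptions, not the authors' choices.

```python
import numpy as np
import pywt

def remove_alfvenic_component(bz, wavelet="db4", cut_level=3):
    """Wavelet filtering sketch: decompose IMF Bz, zero the short-wavelength
    (Alfvenic) detail coefficients, and reconstruct the low-frequency signal
    that would feed the AE* model. cut_level must not exceed the
    decomposition depth."""
    coeffs = pywt.wavedec(bz, wavelet)
    for lvl in range(1, cut_level + 1):
        coeffs[-lvl] = np.zeros_like(coeffs[-lvl])   # finest scales sit at the end
    return pywt.waverec(coeffs, wavelet)
```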
Applying matching pursuit decomposition time-frequency processing to UGS footstep classification
NASA Astrophysics Data System (ADS)
Larsen, Brett W.; Chung, Hugh; Dominguez, Alfonso; Sciacca, Jacob; Kovvali, Narayan; Papandreou-Suppappola, Antonia; Allee, David R.
2013-06-01
The challenge of rapid footstep detection and classification in remote locations has long been an important area of study for defense technology and national security. Also, as the military seeks to create effective and disposable unattended ground sensors (UGS), computational complexity and power consumption have become essential considerations in the development of classification techniques. In response to these issues, a research project at the Flexible Display Center at Arizona State University (ASU) has experimented with footstep classification using the matching pursuit decomposition (MPD) time-frequency analysis method. The MPD provides a parsimonious signal representation by iteratively selecting matched signal components from a pre-determined dictionary. The resulting time-frequency representation of the decomposed signal provides distinctive features for different types of footsteps, including footsteps during walking or running activities. The MPD features were used in a Bayesian classification method to successfully distinguish between the different activities. The computational cost of the iterative MPD algorithm was reduced, without significant loss in performance, using a modified MPD with a dictionary consisting of signals matched to cadence temporal gait patterns obtained from real seismic measurements. The classification results were demonstrated with real data from footsteps under various conditions recorded using a low-cost seismic sensor.
Corrected confidence bands for functional data using principal components.
Goldsmith, J; Greven, S; Crainiceanu, C
2013-03-01
Functional principal components (FPC) analysis is widely used to decompose and express functional observations. Curve estimates implicitly condition on basis functions and other quantities derived from FPC decompositions; however these objects are unknown in practice. In this article, we propose a method for obtaining correct curve estimates by accounting for uncertainty in FPC decompositions. Additionally, pointwise and simultaneous confidence intervals that account for both model- and decomposition-based variability are constructed. Standard mixed model representations of functional expansions are used to construct curve estimates and variances conditional on a specific decomposition. Iterated expectation and variance formulas combine model-based conditional estimates across the distribution of decompositions. A bootstrap procedure is implemented to understand the uncertainty in principal component decomposition quantities. Our method compares favorably to competing approaches in simulation studies that include both densely and sparsely observed functions. We apply our method to sparse observations of CD4 cell counts and to dense white-matter tract profiles. Code for the analyses and simulations is publicly available, and our method is implemented in the R package refund on CRAN. Copyright © 2013, The International Biometric Society.
Mode Decomposition Methods for Soil Moisture Prediction
NASA Astrophysics Data System (ADS)
Jana, R. B.; Efendiev, Y. R.; Mohanty, B.
2014-12-01
Lack of reliable, well-distributed, long-term datasets for model validation is a bottleneck for most exercises in soil moisture analysis and prediction. Understanding what factors drive soil hydrological processes at different scales, and their variability, is critical to furthering our ability to model the various components of the hydrologic cycle accurately. For this, a comprehensive dataset with measurements across scales is necessary. Intensive fine-resolution sampling of soil moisture over extended periods of time is financially and logistically prohibitive, and even installing a few long-term monitoring stations is expensive and requires careful siting. The concept of time-stable locations has long been used to find locations that reflect the mean soil moisture of the watershed under all wetness conditions; however, the soil moisture variability across the watershed is lost when measuring only at time-stable locations. We present here a study using techniques such as Dynamic Mode Decomposition (DMD) and the Discrete Empirical Interpolation Method (DEIM) that extends the concept of time-stable locations to identify locations that capture not simply the watershed-average soil moisture, but also enough information to reconstruct the dynamics at all locations in the watershed. As with time stability, the initial analysis depends on an intensive sampling history. The DMD/DEIM method is an application of model reduction techniques for nonlinearly related measurements. Using this technique, we are able to determine the number of sampling points required for a given prediction accuracy across the watershed, and the locations of those points. Locations with higher energetics in the basis domain are chosen first. We present case studies across watersheds in the US and India. The technique can easily be applied to other hydro-climates.
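A compact DEIM point-selection sketch: given a basis of soil-moisture modes (rows are locations), it greedily picks one sampling location per mode. This is the standard DEIM algorithm, not the authors' full workflow.

```python
import numpy as np

def deim_points(U):
    """DEIM greedy selection. U: locations x modes (e.g. POD/DMD basis).
    Returns the indices of the selected sampling locations."""
    idx = [int(np.argmax(np.abs(U[:, 0])))]          # strongest entry of mode 1
    for j in range(1, U.shape[1]):
        # Interpolate the new mode at the already-selected points ...
        c = np.linalg.solve(U[np.ix_(idx, range(j))], U[idx, j])
        # ... and pick the location where the interpolation error is worst.
        r = U[:, j] - U[:, :j] @ c
        idx.append(int(np.argmax(np.abs(r))))
    return idx
```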
Augmenting the decomposition of EMG signals using supervised feature extraction techniques.
Parsaei, Hossein; Gangeh, Mehrdad J; Stashuk, Daniel W; Kamel, Mohamed S
2012-01-01
Electromyographic (EMG) signal decomposition is the process of resolving an EMG signal into its constituent motor unit potential trains (MUPTs). In this work, the possibility of improving the decomposition results using two supervised feature extraction methods, Fisher discriminant analysis (FDA) and supervised principal component analysis (SPCA), is explored. Using the MUP labels provided by a decomposition-based quantitative EMG system as training data for FDA and SPCA, the MUPs are transformed into a new feature space such that the MUPs of a single MU become as close as possible to each other while those created by different MUs become as far apart as possible. The MUPs are then reclassified using a certainty-based classification algorithm. Evaluation results on 10 simulated EMG signals comprising 3-11 MUPTs demonstrate that FDA and SPCA improve the decomposition accuracy by 6% on average. The improvement for the most difficult-to-decompose signal is about 12%, which shows that the proposed approach is most beneficial in the decomposition of more complex signals.
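A minimal sketch of the supervised-projection-then-reclassify idea using scikit-learn. LDA stands in for FDA, and a nearest-centroid classifier stands in for the certainty-based classifier used in the paper.

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import NearestCentroid

def refine_mup_labels(features, labels):
    """Learn a discriminant projection from the decomposer's provisional MUP
    labels, then reclassify the MUPs in the new space."""
    lda = LinearDiscriminantAnalysis()
    z = lda.fit_transform(features, labels)   # MUPs of one MU cluster together
    clf = NearestCentroid().fit(z, labels)    # simple stand-in classifier
    return clf.predict(z)
```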
A fast new algorithm for a robot neurocontroller using inverse QR decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morris, A.S.; Khemaissia, S.
2000-01-01
A new adaptive neural network controller for robots is presented. The controller is based on direct adaptive techniques. Unlike many neural network controllers in the literature, inverse dynamical model evaluation is not required. A numerically robust, computationally efficient processing scheme for neural network weight estimation is described, namely, the inverse QR decomposition (INVQR). The inverse QR decomposition and a weighted recursive least-squares (WRLS) method for neural network weight estimation are derived using Cholesky factorization of the data matrix. The algorithm that performs the efficient INVQR of the underlying space-time data matrix may be implemented in parallel on a triangular array, and its systolic architecture is well suited for VLSI implementation. Another important benefit of the INVQR decomposition is that it solves directly for the time-recursive least-squares filter vector, while avoiding the sequential back-substitution step required by QR decomposition approaches.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glascoe, E A; Hsu, P C; Springer, H K
PBXN-9, an HMX formulation, is thermally damaged and thermally decomposed in order to determine the morphological changes and decomposition kinetics that occur in the material after mild to moderate heating. The material and its constituents were decomposed using standard thermal analysis techniques (DSC and TGA), and the decomposition kinetics are reported using different kinetic models. Pressed parts and prill were thermally damaged, i.e. heated to temperatures that resulted in material changes but did not result in significant decomposition or explosion, and analyzed. In general, the thermally damaged samples showed a significant increase in porosity, a decrease in density, and a small amount of weight loss. These PBXN-9 samples appear to sustain more thermal damage than similar HMX-Viton A formulations, most likely because of the decomposition/evaporation of a volatile plasticizer and a polymorphic transition of the HMX from the β to the δ phase.
Feng, Wenting; Liang, Junyi; Hale, Lauren E; Jung, Chang Gyo; Chen, Ji; Zhou, Jizhong; Xu, Minggang; Yuan, Mengting; Wu, Liyou; Bracho, Rosvel; Pegoraro, Elaine; Schuur, Edward A G; Luo, Yiqi
2017-11-01
Quantifying soil organic carbon (SOC) decomposition under warming is critical to predicting carbon-climate feedbacks. According to the substrate regulating principle, SOC decomposition would decrease as labile SOC declines under field warming, but observations of SOC decomposition under warming do not always support this prediction. This discrepancy could result from varying changes in SOC components and soil microbial communities under warming. This study aimed to determine the decomposition of SOC components with different turnover times after being subjected to long-term field warming and/or root exclusion to limit C input, and to test whether SOC decomposition is driven by substrate lability under warming. Taking advantage of a 12-year field warming experiment in a prairie, we assessed the decomposition of SOC components by incubating soils from control and warmed plots, with and without root exclusion, for 3 years. We assayed SOC decomposition from these incubations by combining inverse modeling and microbial functional genes during decomposition with a metagenomic technique (GeoChip). The decomposition of SOC components with turnover times of years and decades, which contributed 95% of total cumulative CO2 respiration, was greater in soils from warmed plots. But the decomposition of labile SOC was similar in warmed plots compared to the control. The diversity of C-degradation microbial genes generally declined with time during the incubation in all treatments, suggesting shifts of microbial functional groups as substrate composition changed. Compared to the control, soils from warmed plots showed a significant increase in the signal intensities of microbial genes involved in degrading complex organic compounds, implying enhanced potential abilities of microbial catabolism. These are likely responsible for accelerated decomposition of SOC components with slow turnover rates. Overall, the shifted microbial community induced by long-term warming accelerates the decomposition of SOC components with slow turnover rates and thus amplifies the positive feedback to climate change. © 2017 John Wiley & Sons Ltd.
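The inverse-modeling step referred to above typically fits a multi-pool first-order decay model to cumulative respiration. A minimal two-pool sketch with SciPy, using synthetic data in place of the incubation record (pool sizes and rate constants are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def cumulative_co2(t, f_fast, k_fast, k_slow, c_total):
    """Cumulative respiration from a two-pool first-order SOC model."""
    fast = f_fast * c_total * (1.0 - np.exp(-k_fast * t))
    slow = (1.0 - f_fast) * c_total * (1.0 - np.exp(-k_slow * t))
    return fast + slow

# Synthetic 3-year incubation record (days, mg C per g soil), for illustration.
t = np.linspace(0, 1095, 40)
obs = cumulative_co2(t, 0.05, 0.03, 0.0004, 20.0)
obs += np.random.default_rng(2).normal(0, 0.05, t.size)

p0 = (0.1, 0.01, 1e-4, 15.0)
popt, _ = curve_fit(cumulative_co2, t, obs, p0=p0,
                    bounds=([0, 0, 0, 0], [1, 1, 0.01, 100]))
print(dict(zip(["f_fast", "k_fast", "k_slow", "c_total"], popt)))
```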
Wind Farm Flow Modeling using an Input-Output Reduced-Order Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Annoni, Jennifer; Gebraad, Pieter; Seiler, Peter
Wind turbines in a wind farm operate individually to maximize their own power regardless of the impact of aerodynamic interactions on neighboring turbines. There is the potential to increase power and reduce overall structural loads by properly coordinating turbines. To perform control design and analysis, a model needs to be of low computational cost but retain the necessary dynamics seen in high-fidelity models. The objective of this work is to obtain a reduced-order model that represents the full-order flow computed using a high-fidelity model. A variety of methods, including proper orthogonal decomposition and dynamic mode decomposition, can be used to extract the dominant flow structures and obtain a reduced-order model. In this paper, we combine proper orthogonal decomposition with a system identification technique to produce an input-output reduced-order model. This technique is used to construct a reduced-order model of the flow within a two-turbine array computed using a large-eddy simulation.
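The POD step is simply the SVD of a mean-subtracted snapshot matrix. A minimal sketch (the system-identification stage that turns the temporal coefficients into an input-output model is not shown):

```python
import numpy as np

def pod_modes(snapshots, r):
    """Proper orthogonal decomposition of a snapshot matrix.

    snapshots -- (n_points, n_snapshots) array of flow-field samples
    r         -- number of modes to retain
    Returns r dominant spatial modes, temporal coefficients, and energies.
    """
    mean = snapshots.mean(axis=1, keepdims=True)
    X = snapshots - mean                  # fluctuations about the mean flow
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    modes = U[:, :r]                      # spatial POD modes
    coeffs = np.diag(s[:r]) @ Vt[:r]      # temporal coefficients
    energy = s[:r] ** 2 / (s ** 2).sum()  # captured-energy fractions
    return modes, coeffs, energy

# Example: 1000 grid points, 200 snapshots of a synthetic velocity field.
rng = np.random.default_rng(3)
snaps = rng.normal(size=(1000, 200))
modes, coeffs, energy = pod_modes(snaps, r=10)
```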
Approximation, abstraction and decomposition in search and optimization
NASA Technical Reports Server (NTRS)
Ellman, Thomas
1992-01-01
In this paper, I discuss four different areas of my research. One portion of my research has focused on automatic synthesis of search control heuristics for constraint satisfaction problems (CSPs). I have developed techniques for automatically synthesizing two types of heuristics for CSPs, including filtering functions, which remove portions of a search space from consideration. A second portion of my research has focused on automatic synthesis of hierarchic algorithms for solving CSPs; I have developed a technique for constructing hierarchic problem solvers based on numeric interval algebra. A third portion of my research is focused on automatic decomposition of design optimization problems. We are using the design of racing yacht hulls as a testbed domain for this research; decomposition is especially important in the design of complex physical shapes such as yacht hulls. The final portion of my research is focused on intelligent model selection in design optimization. The model selection problem results from the difficulty of using exact models to analyze the performance of candidate designs.
Preparation and catalytic activities of LaFeO3 and Fe2O3 for HMX thermal decomposition.
Wei, Zhi-Xian; Xu, Yan-Qing; Liu, Hai-Yan; Hu, Chang-Wen
2009-06-15
Perovskite-type LaFeO3 and α-Fe2O3 with high specific surface areas were directly prepared with appropriate stearic acid-nitrate ratios by a novel stearic acid solution combustion method. The obtained powders were characterized by XRD, FT-IR and XPS techniques. The catalytic activities of perovskite-type LaFeO3 and α-Fe2O3 for the thermal decomposition of octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine (HMX) were investigated by TG and TG-EGA techniques. The experimental results show that the catalytic activity of perovskite-type LaFeO3 was much higher than that of α-Fe2O3 because of the higher concentration of surface-adsorbed oxygen (Oad) and hydroxyl groups on LaFeO3. The study points out a potential way to develop new and more active perovskite-type catalysts for HMX thermal decomposition.
Separable decompositions of bipartite mixed states
NASA Astrophysics Data System (ADS)
Li, Jun-Li; Qiao, Cong-Feng
2018-04-01
We present a practical scheme for the decomposition of a bipartite mixed state into a sum of direct products of local density matrices, using the technique developed in Li and Qiao (Sci. Rep. 8:1442, 2018). In the scheme, the correlation matrix which characterizes the bipartite entanglement is first decomposed into two matrices composed of the Bloch vectors of local states. Then, we show that the symmetries of the Bloch vectors are consistent with those of the correlation matrix, and that the magnitudes of the local Bloch vectors are lower bounded by the correlation matrix. Concrete examples of separable decompositions of bipartite mixed states are presented for illustration.
Removing non-stationary noise in spectrum sensing using matrix factorization
NASA Astrophysics Data System (ADS)
van Bloem, Jan-Willem; Schiphorst, Roel; Slump, Cornelis H.
2013-12-01
Spectrum sensing is key to many applications, such as dynamic spectrum access (DSA) systems or telecom regulators who need to measure the utilization of frequency bands. The International Telecommunication Union (ITU) recommends a 10 dB threshold above the noise to decide whether a channel is occupied or not. However, radio frequency (RF) receiver front-ends are non-ideal. This means that the obtained data are distorted with noise and imperfections from the analog front-end. As part of the front-end, the automatic gain control (AGC) circuitry mainly affects the sensing performance, as strong adjacent signals lift the noise level. To enhance the performance of spectrum sensing significantly, we focus in this article on techniques to remove the noise caused by the AGC from the sensing data. To do this we have applied matrix factorization techniques, i.e., SVD (singular value decomposition) and NMF (non-negative matrix factorization), which enable signal space analysis. In addition, we use live measurement results to verify the performance and to remove the effects of the AGC from the sensing data using the above-mentioned techniques, applied to block-wise available spectrum data. It is shown that the occupancy in the industrial, scientific and medical (ISM) band, obtained using energy detection with the ITU recommended threshold, can overestimate spectrum usage by 60%.
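As a rough illustration of the NMF route, a broadband noise-floor component can be factored out of block-wise spectra; the component-selection heuristic below is an assumption for the sketch, not the authors' procedure:

```python
import numpy as np
from sklearn.decomposition import NMF

# Block-wise power spectra: rows = time blocks, columns = frequency bins.
rng = np.random.default_rng(4)
carrier = np.exp(-0.5 * ((np.arange(256) - 80) / 5.0) ** 2)
signal = np.outer(rng.random(100), carrier)             # narrowband user
agc_noise = np.outer(rng.random(100), np.full(256, 0.2))  # lifting noise floor
spectra = signal + agc_noise + 0.01 * rng.random((100, 256))

# Factor the non-negative data matrix into a few additive components; a
# broadband (flat) component is then identified and subtracted.
nmf = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
W = nmf.fit_transform(spectra)   # per-block activations
H = nmf.components_              # spectral basis vectors
flattest = np.argmin(H.std(axis=1) / H.mean(axis=1))  # broadband component
denoised = spectra - np.outer(W[:, flattest], H[flattest])
```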
NASA Astrophysics Data System (ADS)
Benner, Ronald; Hatcher, Patrick G.; Hedges, John I.
1990-07-01
Changes in the chemical composition of mangrove (Rhizophora mangle) leaves during decomposition in tropical estuarine waters were characterized using solid-state 13C nuclear magnetic resonance (NMR) and elemental (CHNO) analysis. Carbohydrates were the most abundant components of the leaves accounting for about 50 wt% of senescent tissues. Tannins were estimated to account for about 20 wt% of leaf tissues, and lipid components, cutin, and possibly other aliphatic biopolymers in leaf cuticles accounted for about 15 wt%. Carbohydrates were generally less resistant to decomposition than the other constituents and decreased in relative concentration during decomposition. Tannins were of intermediate resistance to decomposition and remained in fairly constant proportion during decomposition. Paraffinic components were very resistant to decomposition and increased in relative concentration as decomposition progressed. Lignin was a minor component of all leaf tissues. Standard methods for the colorimetric determination of tannins (Folin-Dennis reagent) and the gravimetric determination of lignin (Klason lignin) were highly inaccurate when applied to mangrove leaves. The N content of the leaves was particularly dynamic with values ranging from 1.27 wt% in green leaves to 0.65 wt% in senescent yellow leaves attached to trees. During decomposition in the water the N content initially decreased to 0.51 wt% due to leaching, but values steadily increased thereafter to 1.07 wt% in the most degraded leaf samples. The absolute mass of N in the leaves increased during decomposition indicating that N immobilization was occurring as decomposition progressed.
Layout compliance for triple patterning lithography: an iterative approach
NASA Astrophysics Data System (ADS)
Yu, Bei; Garreton, Gilda; Pan, David Z.
2014-10-01
As the semiconductor process scales further down, the industry encounters many lithography-related issues. In the 14nm logic node and beyond, triple patterning lithography (TPL) is one of the most promising techniques for the Metal1 layer and possibly the Via0 layer. Layout decomposition, one of the most challenging problems in TPL, has recently received more attention from both industry and academia. Ideally the decomposer should point out locations in the layout that are not triple-patterning decomposable and therefore require manual intervention by designers. A traditional decomposition flow is an iterative process, where each iteration consists of an automatic layout decomposition step and a manual layout modification task. However, due to the NP-hardness of triple patterning layout decomposition, automatic full-chip layout decomposition requires long computational times, and design closure issues therefore linger in the traditional flow. Challenged by this issue, we present a novel incremental layout decomposition framework to facilitate accelerated iterative decomposition. In the first iteration, our decomposer not only points out all conflicts, but also provides suggestions to fix them. After the layout modification, instead of solving the full-chip problem from scratch, our decomposer provides a quick solution for a selected portion of the layout. We believe this framework is efficient and designer-friendly.
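At its core, TPL layout decomposition is 3-coloring of the conflict graph: features closer than the minimum same-mask spacing get an edge and must receive different masks. A tiny backtracking sketch (not the paper's incremental algorithm):

```python
def three_color(graph):
    """graph: dict node -> set of conflicting nodes. Returns coloring or None."""
    nodes = sorted(graph, key=lambda n: -len(graph[n]))  # hardest nodes first
    colors = {}

    def assign(i):
        if i == len(nodes):
            return True
        node = nodes[i]
        for c in range(3):                        # the three masks
            if all(colors.get(nb) != c for nb in graph[node]):
                colors[node] = c
                if assign(i + 1):
                    return True
                del colors[node]
        return False                              # conflict: needs a layout fix

    return colors if assign(0) else None

# K4 is not 3-colorable, so it is reported as a native TPL conflict.
k4 = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}}
print(three_color(k4))        # None
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(three_color(triangle))  # e.g. {0: 0, 1: 1, 2: 2}
```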
NASA Astrophysics Data System (ADS)
Soldner, Dominic; Brands, Benjamin; Zabihyan, Reza; Steinmann, Paul; Mergheim, Julia
2017-10-01
Computing the macroscopic material response of a continuum body commonly involves the formulation of a phenomenological constitutive model. However, the response is mainly influenced by the heterogeneous microstructure. Computational homogenisation can be used to determine the constitutive behaviour on the macro-scale by solving a boundary value problem at the micro-scale for every so-called macroscopic material point within a nested solution scheme. Hence, this procedure requires the repeated solution of similar microscopic boundary value problems. To reduce the computational cost, model order reduction techniques can be applied. An important aspect thereby is the robustness of the obtained reduced model. Within this study, reduced-order modelling (ROM) for the geometrically nonlinear case using hyperelastic materials is applied to the boundary value problem on the micro-scale. This involves Proper Orthogonal Decomposition (POD) for the primary unknown and hyper-reduction methods for the arising nonlinearity. Three methods for hyper-reduction, differing in how the nonlinearity is approximated and in the subsequent projection, are compared in terms of accuracy and robustness. Introducing interpolation or Gappy-POD based approximations may not preserve the symmetry of the system tangent, rendering the widely used Galerkin projection sub-optimal. Hence, a different projection related to a Gauss-Newton scheme (Gauss-Newton with Approximated Tensors, GNAT) is favoured to obtain an optimal projection and a robust reduced model.
Analytical electron microscope study of eight ataxites
NASA Technical Reports Server (NTRS)
Novotny, P. M.; Goldstein, J. I.; Williams, D. B.
1982-01-01
Optical and electron optical (SEM, TEM, AEM) techniques were employed to investigate the fine structure of eight ataxite iron meteorites. Structural studies indicated that the ataxites can be divided into two groups: a Widmanstaetten decomposition group and a martensite decomposition group. The Widmanstaetten decomposition group has a Type I plessite microstructure, and the central taenite regions contain highly dislocated lath martensite. The steep M-shaped Ni gradients in the taenite are consistent with the fast cooling rates, of not less than 500 °C/My, observed for this group. The martensite decomposition group has a Type III plessite microstructure and contains all the chemical group IVB ataxites. The maximum taenite Ni contents vary from 47.5 to 52.7 wt% and are consistent with slow cooling to low temperatures of not greater than 350 °C at cooling rates of not greater than 25 °C/My.
Data-driven Applications for the Sun-Earth System
NASA Astrophysics Data System (ADS)
Kondrashov, D. A.
2016-12-01
Advances in observational and data mining techniques allow extracting information from the large volume of Sun-Earth observational data that can be assimilated into first-principles physical models. However, the equations governing Sun-Earth phenomena are typically nonlinear, complex, and high-dimensional. The high computational demand of solving the full governing equations over a large range of scales precludes the use of a variety of useful assimilative tools that rely on applied mathematical and statistical techniques for quantifying uncertainty and predictability. Effective use of such tools requires the development of computationally efficient methods to facilitate the fusion of data with models. This presentation will provide an overview of various existing as well as newly developed data-driven techniques adopted from the atmospheric and oceanic sciences that have proved useful for space physics applications, such as a computationally efficient implementation of the Kalman filter in radiation belt modeling, solar wind gap-filling by Singular Spectrum Analysis, and a low-rank procedure for assimilation of low-altitude ionospheric magnetic perturbations into the Lyon-Fedder-Mobarry (LFM) global magnetospheric model. Reduced-order non-Markovian inverse modeling and novel data-adaptive decompositions of Sun-Earth datasets will also be demonstrated.
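As an example of the data-adaptive decompositions mentioned, basic Singular Spectrum Analysis reconstructs a series from the leading SVD components of its trajectory (Hankel) matrix; the iterative gap-filling variant builds on this. A minimal sketch:

```python
import numpy as np

def ssa_reconstruct(x, window, components):
    """Reconstruct the part of series x carried by the chosen SSA components."""
    n = len(x)
    k = n - window + 1
    # Trajectory (Hankel) matrix: lagged copies of the series as columns.
    X = np.column_stack([x[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Keep selected elementary matrices, then Hankelize by anti-diagonal averaging.
    Xr = sum(s[j] * np.outer(U[:, j], Vt[j]) for j in components)
    rec = np.zeros(n)
    counts = np.zeros(n)
    for col in range(k):
        rec[col:col + window] += Xr[:, col]
        counts[col:col + window] += 1
    return rec / counts

# Denoise a synthetic quasi-periodic series with the two leading components.
t = np.linspace(0, 10, 500)
noisy = np.sin(2 * np.pi * 0.5 * t) + 0.3 * np.random.default_rng(5).normal(size=500)
trend = ssa_reconstruct(noisy, window=60, components=[0, 1])
```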
NASA Astrophysics Data System (ADS)
Li, Dong; Cheng, Tao; Zhou, Kai; Zheng, Hengbiao; Yao, Xia; Tian, Yongchao; Zhu, Yan; Cao, Weixing
2017-07-01
Red edge position (REP), defined as the wavelength of the inflexion point in the red edge region (680-760 nm) of the reflectance spectrum, has been widely used to estimate foliar chlorophyll content from reflectance spectra. A number of techniques have been developed for REP extraction in the past three decades, but most of them require data-specific parameterization, and the consistency of their performance from leaf to canopy levels remains poorly understood. In this study, we propose a new technique (WREP) to extract REPs based on the application of the continuous wavelet transform to reflectance spectra. The REP is determined by the zero-crossing wavelength in the red edge region of a wavelet-transformed spectrum for a number of scales of wavelet decomposition. The new technique is simple to implement and requires no parameterization from the user as long as continuous wavelet transforms are applied to reflectance spectra. Its performance was evaluated for estimating leaf chlorophyll content (LCC) and canopy chlorophyll content (CCC) of cereal crops (i.e. rice and wheat) and compared with traditional techniques including linear interpolation, linear extrapolation, polynomial fitting and inverted Gaussian fitting. Our results demonstrated that WREP obtained the best estimation accuracy for both LCC and CCC compared to the traditional techniques. High scales of wavelet decomposition were favorable for the estimation of CCC and low scales for the estimation of LCC. The difference in optimal scale reveals the underlying mechanism of signature transfer from leaf to canopy levels. In addition, crop-specific models were required for the estimation of CCC over the full range. However, a common model could be built with the REPs extracted at Scale 5 of the WREP technique for wheat and rice crops when CCC was less than 2 g/m2 (R2 = 0.73, RMSE = 0.26 g/m2). This insensitivity of WREP to crop type indicates the potential for aerial mapping of chlorophyll content across growth seasons of cereal crops. The new REP extraction technique provides new insight for understanding the spectral changes in the red edge region in response to chlorophyll variation from leaf to canopy levels.
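A minimal sketch of the WREP idea under stated assumptions: a Mexican-hat wavelet at a single scale stands in for the full continuous wavelet transform, the scale-to-width mapping is illustrative, and the spectrum is synthetic:

```python
import numpy as np

def ricker(width, points=101):
    """Mexican-hat (Ricker) wavelet sampled on `points` points."""
    t = np.arange(points) - (points - 1) / 2.0
    return (1 - (t / width) ** 2) * np.exp(-0.5 * (t / width) ** 2)

def wrep(wavelengths, reflectance, scale=5):
    """Zero-crossing of the wavelet-transformed spectrum in 680-760 nm."""
    w = ricker(2 ** scale / 4.0)   # illustrative scale-to-width mapping
    coeffs = np.convolve(reflectance, w, mode="same")
    band = (wavelengths >= 680) & (wavelengths <= 760)
    wl, c = wavelengths[band], coeffs[band]
    idx = np.where(np.diff(np.sign(c)) != 0)[0]
    if idx.size == 0:
        return np.nan
    i = idx[0]                     # first sign change in the red edge
    # Linear interpolation between the two bracketing samples.
    return wl[i] - c[i] * (wl[i + 1] - wl[i]) / (c[i + 1] - c[i])

# Synthetic sigmoid red edge with its inflexion near 715 nm.
wl = np.arange(400, 901, 1.0)
refl = 0.05 + 0.45 / (1 + np.exp(-(wl - 715) / 12))
print(wrep(wl, refl))              # ~715
```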
Inventory control of raw material using silver meal heuristic method in PR. Trubus Alami Malang
NASA Astrophysics Data System (ADS)
Ikasari, D. M.; Lestari, E. R.; Prastya, E.
2018-03-01
The purpose of this study was to compare the total inventory cost calculated using the method applied by PR. Trubus Alami with that of the Silver Meal Heuristic (SMH) method. The study started by forecasting cigarette demand from July 2016 to June 2017 (48 weeks) using the additive decomposition forecasting method. Additive decomposition was used because it has the lowest Mean Absolute Deviation (MAD) and Mean Squared Deviation (MSD) compared to other methods such as multiplicative decomposition, moving average, single exponential smoothing, and double exponential smoothing. The forecasting results were then converted to raw material needs and further processed with the SMH method to obtain the inventory cost. As expected, the results show that the order frequency using the SMH method was smaller than that of the method applied by Trubus Alami, which affected the total inventory cost. The results suggest that the SMH method gave a 29.41% lower inventory cost, a difference of IDR 21,290,622. The findings therefore indicate that PR. Trubus Alami should apply the SMH method if the company wants to reduce its total inventory cost.
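The Silver-Meal heuristic itself is compact: keep extending each order horizon while the average cost per period keeps falling. A sketch with illustrative numbers (the paper's demands and costs are not reproduced here):

```python
def silver_meal(demand, setup_cost, holding_cost):
    """Silver-Meal heuristic: each order covers whole periods, chosen so the
    average cost per covered period stops decreasing."""
    orders = []                      # (period ordered, order quantity)
    t, n = 0, len(demand)
    while t < n:
        best_T, best_avg = 1, float(setup_cost)  # covering just period t
        hold = 0.0
        for T in range(2, n - t + 1):
            hold += holding_cost * (T - 1) * demand[t + T - 1]
            avg = (setup_cost + hold) / T
            if avg > best_avg:       # stop at the first increase
                break
            best_T, best_avg = T, avg
        orders.append((t, sum(demand[t:t + best_T])))
        t += best_T
    return orders

# Example: 8 weekly demands, setup IDR 500k per order, holding 2k/unit/week.
weekly = [90, 120, 80, 70, 140, 60, 100, 110]
print(silver_meal(weekly, setup_cost=500_000, holding_cost=2_000))
# -> [(0, 290), (3, 270), (6, 210)]
```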
Method of Suppressing Sublimation in Advanced Thermoelectric Devices
NASA Technical Reports Server (NTRS)
Sakamoto, Jeffrey S. (Inventor); Caillat, Thierry (Inventor); Fleurial, Jean-Pierre (Inventor); Snyder, G. Jeffrey (Inventor)
2009-01-01
A method of applying a physical barrier to suppress thermal decomposition near a surface of a thermoelectric material including applying a continuous metal foil to a predetermined portion of the surface of the thermoelectric material, physically binding the continuous metal foil to the surface of the thermoelectric material using a binding member, and heating in a predetermined atmosphere the applied and physically bound continuous metal foil and the thermoelectric material to a sufficient temperature in order to promote bonding between the continuous metal foil and the surface of the thermoelectric material. The continuous metal foil forms a physical barrier to enclose a predetermined portion of the surface. Thermal decomposition is suppressed at the surface of the thermoelectric material enclosed by the physical barrier when the thermoelectric element is in operation.
Debnath, M; Santoni, C; Leonardi, S; Iungo, G V
2017-04-13
The dynamics of the velocity field resulting from the interaction between the atmospheric boundary layer and a wind turbine array can significantly affect the performance of a wind power plant and the durability of wind turbines. In this work, dynamics in wind turbine wakes and instabilities of helicoidal tip vortices are detected and characterized through modal decomposition techniques. The dataset under examination consists of snapshots of the velocity field obtained from large-eddy simulations (LES) of an isolated wind turbine, for which the aerodynamic forcing exerted by the turbine blades on the atmospheric boundary layer is mimicked through the actuator line model. Particular attention is paid to the interaction between the downstream evolution of the helicoidal tip vortices and the alternate vortex shedding from the turbine tower. The LES dataset is interrogated through different modal decomposition techniques, such as proper orthogonal decomposition and dynamic mode decomposition. The dominant wake dynamics are selected for the formulation of a reduced-order model, which consists of a linear time-marching algorithm where the temporal evolution of the flow dynamics is obtained from the previous temporal realization multiplied by a time-invariant operator. This article is part of the themed issue 'Wind energy in complex terrains'. © 2017 The Author(s).
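A minimal sketch of exact dynamic mode decomposition, the second technique named above; the reduced-order model then marches the identified modes forward using the eigenvalues as the time-invariant operator:

```python
import numpy as np

def dmd(snapshots, r):
    """Exact dynamic mode decomposition of an equally spaced snapshot sequence.

    snapshots -- (n_points, n_times) array
    r         -- truncation rank
    Returns DMD modes and their discrete-time complex eigenvalues.
    """
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    U, s, V = U[:, :r], s[:r], Vt[:r].conj().T
    # Low-rank representation of the linear operator mapping X to Y.
    A_tilde = U.conj().T @ Y @ V / s
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ V / s @ W            # exact DMD modes
    return modes, eigvals

# Toy usage on synthetic snapshots (500 grid points, 60 time steps).
rng = np.random.default_rng(10)
snaps = rng.normal(size=(500, 60))
modes, eigvals = dmd(snaps, r=8)     # |eigvals| > 1 flags growing dynamics
```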
Canonical decomposition of magnetotelluric responses: Experiment on 1D anisotropic structures
NASA Astrophysics Data System (ADS)
Guo, Ze-qiu; Wei, Wen-bo; Ye, Gao-feng; Jin, Sheng; Jing, Jian-en
2015-08-01
Horizontal electrical heterogeneity of the subsurface earth originates mostly from structural complexity and electrical anisotropy, and local near-surface electrical heterogeneity severely distorts regional electromagnetic responses. Conventional distortion analyses for magnetotelluric soundings are primarily physical decomposition methods with respect to isotropic models, which mostly presume that the geoelectric distribution of geological structures follows local and regional patterns represented by 3D/2D models. Given the widespread anisotropy of earth media, the possible confusion between 1D anisotropic responses and 2D isotropic responses, and the defects of physical decomposition methods, we propose to conduct modeling experiments with canonical decomposition in terms of 1D layered anisotropic models. This method is a mathematical decomposition method based on eigenstate analyses, as distinguished from distortion analyses, and can be used to recover electrical information such as strike directions and maximum and minimum conductivity. We tested this method with numerical simulation experiments on several 1D synthetic models, which showed that canonical decomposition is quite effective at revealing geological anisotropy information. Finally, against the background of anisotropy indicated by previous geological and seismological studies, canonical decomposition was applied to real data acquired in the North China Craton for 1D anisotropy analysis, and the result shows that, with effective modeling and cautious interpretation, canonical decomposition can be another good method to detect the anisotropy of geological media.
Du, Jingjing; Zhang, Yuyan; Guo, Wei; Li, Ningyun; Gao, Chaoshuai; Cui, Minghui; Lin, Zhongdian; Wei, Mingbao; Zhang, Hongzhong
2018-05-15
Titanium dioxide (TiO2) nanoparticles have been applied in diverse commercial products, which could lead to toxic effects on aquatic microbes and inhibit some important ecosystem processes. This study aimed to investigate the chronic impacts of TiO2 nanoparticles at different concentrations (5, 50, and 500 mg L-1) on Populus nigra L. leaf decomposition in the freshwater ecosystem. After 50 days of decomposition, a significant decrease in decomposition rates was observed at higher concentrations of TiO2 nanoparticles. During the period of litter decomposition, exposure to TiO2 nanoparticles led to decreases in extracellular enzyme activities, caused by the reduction of microbial, especially fungal, biomass. In addition, the diversity and composition of the fungal community associated with litter decomposition were strongly affected by the concentrations of TiO2 nanoparticles. The abundance of Tricladium chaetocladium decreased with increasing concentrations of TiO2 nanoparticles, indicating the species' small contribution to litter decomposition. In conclusion, this study provided evidence for the chronic exposure effects of TiO2 nanoparticles on litter decomposition and, further, on the functions of freshwater ecosystems. Copyright © 2018 Elsevier B.V. All rights reserved.
Electrocardiogram signal denoising based on empirical mode decomposition technique: an overview
NASA Astrophysics Data System (ADS)
Han, G.; Lin, B.; Xu, Z.
2017-03-01
The electrocardiogram (ECG) signal is a nonlinear, non-stationary weak signal which reflects whether the heart is functioning normally or abnormally. The ECG signal is susceptible to various kinds of noise such as high/low frequency noise, powerline interference and baseline wander. Hence, the removal of noise from the ECG signal is a vital link in ECG signal processing and plays a significant role in the detection and diagnosis of heart diseases. This review describes recent developments in ECG signal denoising based on the Empirical Mode Decomposition (EMD) technique, including high frequency noise removal, powerline interference separation, baseline wander correction, the combination of EMD with other methods, and the EEMD technique. EMD is a promising, though not yet perfect, method for processing nonlinear and non-stationary signals like the ECG. Combining EMD with other algorithms is a good way to improve the performance of noise cancellation. The pros and cons of the EMD technique in ECG signal denoising are discussed in detail. Finally, future work and challenges in ECG signal denoising based on EMD are clarified.
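A minimal sketch of EMD-based denoising by partial reconstruction, assuming the third-party PyEMD package (pip install EMD-signal); the test signal and the number of discarded IMFs are illustrative:

```python
import numpy as np
from PyEMD import EMD   # third-party package, assumed available

fs = 360.0                                   # a typical ECG sampling rate
t = np.arange(0, 5, 1 / fs)
ecg_like = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.sin(2 * np.pi * 12 * t)
noisy = ecg_like + 0.2 * np.random.default_rng(6).normal(size=t.size)

# IMFs come out ordered from highest to lowest frequency; high-frequency
# noise concentrates in the first few.
imfs = EMD().emd(noisy, t)
denoised = imfs[2:].sum(axis=0)   # drop the first two IMFs (count is signal-dependent)
```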
NASA Astrophysics Data System (ADS)
Bo, Zheng; Hao, Han; Yang, Shiling; Zhu, Jinhui; Yan, Jianhua; Cen, Kefa
2018-04-01
This work reports the catalytic performance of vertically-oriented graphene (VG) supported manganese oxide catalysts for toluene decomposition in a post plasma-catalysis (PPC) system. Dense networks of VGs were synthesized on carbon paper (CP) via a microwave plasma-enhanced chemical vapor deposition (PECVD) method. A constant-current approach was applied in a conventional three-electrode electrochemical system for the electrodeposition of Mn3O4 catalysts on the VGs. The as-obtained catalysts were characterized and investigated for ozone conversion and toluene decomposition in a PPC system. Experimental results show that the Mn3O4 catalyst loading mass on VG-coated CP was significantly higher than that on pristine CP (almost 1.8 times for an electrodeposition current of 10 mA). Moreover, the decoration with VGs led to both enhanced catalytic activity for ozone conversion and increased toluene decomposition, showing great promise for the effective decomposition of volatile organic compounds in PPC systems.
Comparative kinetic analysis on thermal degradation of some cephalosporins using TG and DSC data
2013-01-01
Background: The thermal decomposition of cephalexin, cefadroxil and cefoperazone under non-isothermal conditions was studied using the TG and DSC methods. In the case of TG, a hyphenated technique including EGA was used. Results: The kinetic analysis was performed using the TG and DSC data in air for the first step of the cephalosporins' decomposition at four heating rates. Both the TG and DSC data were processed, according to an appropriate strategy, with the following kinetic methods: Kissinger-Akahira-Sunose, Friedman, and NPK, in order to obtain realistic kinetic parameters even though the decomposition process is a complex one. The EGA data offer some valuable indications about a possible decomposition mechanism. The obtained data indicate a rather good agreement between the activation energy values obtained by the different methods, whereas the EGA data and the chemical structures give a possible explanation of the observed differences in thermal stability. A complete kinetic analysis needs a data processing strategy using two or more methods, and the kinetic methods must also be applied to the different types of experimental data (TG and DSC). Conclusion: The simultaneous use of DSC and TG data for the kinetic analysis, coupled with evolved gas analysis (EGA), provided a more complete picture of the degradation of the three cephalosporins. It was possible to estimate kinetic parameters using three different kinetic methods, which allowed us to compare the Ea values obtained from different experimental data, TG and DSC. The thermodegradation being a complex process, both differential and integral methods based on the single-step hypothesis are inadequate for obtaining reliable kinetic parameters. Only the modified NPK method allowed an objective separation of the temperature and conversion influences on the reaction rate and at the same time ascertained the existence of two simultaneous steps. PMID:23594763
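For reference, the Kissinger-Akahira-Sunose method reduces to a straight-line fit: at a fixed conversion, ln(β/T²) is linear in 1/T with slope −Ea/R. A sketch with made-up temperatures (the paper's measured values are not reproduced here):

```python
import numpy as np

# Kissinger-Akahira-Sunose: slope of ln(beta/T^2) versus 1/T gives -Ea/R.
# Illustrative temperatures (K) at a fixed conversion for four heating rates.
R = 8.314                                  # J mol-1 K-1
beta = np.array([2.0, 5.0, 10.0, 20.0])    # heating rates, K/min
T = np.array([463.0, 475.0, 484.0, 494.0]) # made-up isoconversional temperatures

slope, intercept = np.polyfit(1.0 / T, np.log(beta / T**2), 1)
Ea = -slope * R                            # apparent activation energy, J/mol
print(f"Ea = {Ea / 1000:.1f} kJ/mol")
```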
Mitigation of Manhole Events Caused by Secondary Cable Failure
NASA Astrophysics Data System (ADS)
Zhang, Lili
"Manhole event" refers to a range of phenomena, such as smokers, fires and explosions which occur on underground electrical infrastructure, primarily in major cities. The most common cause of manhole events is decomposition of secondary cable initiated by an electric fault. The work presented in this thesis addresses various aspects related to the evolution and mitigation of the manhole events caused by secondary cable insulation failure. Manhole events develop as a result of thermal decomposition of organic materials present in the cable duct and manholes. Polymer characterization techniques are applied to intensively study the materials properties as related to manhole events, mainly the thermal decomposition behaviors of the polymers present in the cable duct. Though evolved gas analysis, the combustible gases have been quantitatively identified. Based on analysis and knowledge of field conditions, manhole events is divided into at least two classes, those in which exothermic chemical reactions dominate and those in which electrical energy dominates. The more common form of manhole event is driven by air flow down the duct. Numerical modeling of smolder propagation in the cable duct demonstrated that limiting air flow is effective in reducing the generation rate of combustible gas, in other words, limiting manhole events to relatively minor "smokers". Besides manhole events, another by-product of secondary cable insulation breakdown is stray voltage. The danger to personnel due to stray voltage is mostly caused by the 'step potential'. The amplitude of step potential as a result of various types of insulation defects is calculated using Finite Element Analysis (FEA) program.
Discrete wavelet transform: a tool in smoothing kinematic data.
Ismail, A R; Asfour, S S
1999-03-01
Motion analysis systems typically introduce noise to the recorded displacement data. Butterworth digital filters have been used to smooth displacement data in order to obtain smoothed velocities and accelerations. However, this technique does not yield satisfactory results, especially when dealing with complex kinematic motions that occupy the low- and high-frequency bands. The use of the discrete wavelet transform, as an alternative to digital filters, is presented in this paper. The transform passes the original signal through two complementary low- and high-pass FIR filters and decomposes the signal into an approximation function and a detail function. Further decomposition transforms the signal into a hierarchical set of orthogonal approximation and detail functions. A reverse process is employed to perfectly reconstruct the signal (inverse transform) from its approximation and detail functions. The discrete wavelet transform was applied to the displacement data recorded by Pezzack et al. (1977). The smoothed displacement data were twice differentiated and compared to Pezzack et al.'s acceleration data in order to choose the most appropriate filter coefficients and decomposition level on the basis of maximizing the percentage of retained energy (PRE) and minimizing the root mean square error (RMSE). The Daubechies wavelet of the fourth order (Db4) at the second decomposition level showed better results than both the biorthogonal and Coiflet wavelets (PRE = 97.5%, RMSE = 4.7 rad s-2). The Db4 wavelet was then used to compress complex displacement data obtained from a noisy, mathematically generated function. The results clearly indicate the superiority of this new smoothing approach over traditional filters.
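A minimal sketch of Db4, level-2 smoothing with the PyWavelets package; zeroing all detail coefficients is one simple variant (the paper selects coefficients and levels to maximize retained energy):

```python
import numpy as np
import pywt

# Synthetic noisy displacement record standing in for motion-capture data.
t = np.linspace(0, 1, 512)
displacement = (np.sin(2 * np.pi * 2 * t)
                + 0.05 * np.random.default_rng(7).normal(size=t.size))

coeffs = pywt.wavedec(displacement, "db4", level=2)   # [cA2, cD2, cD1]
coeffs[1:] = [np.zeros_like(c) for c in coeffs[1:]]   # drop the detail functions
smoothed = pywt.waverec(coeffs, "db4")[: t.size]

# Differentiate the smoothed data to get velocity and acceleration.
velocity = np.gradient(smoothed, t)
acceleration = np.gradient(velocity, t)
```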
Hu, Yuntao; Zheng, Qing; Wanek, Wolfgang
2017-09-05
Soil fluxomics analysis can provide pivotal information for understanding soil biochemical pathways and their regulation, but direct measurement methods are rare. Here, we describe an approach to measure soil extracellular metabolite (amino sugar and amino acid) concentrations and fluxes based on a 15N isotope pool dilution technique via liquid chromatography and high-resolution mass spectrometry. We produced commercially unavailable 15N- and 13C-labeled amino sugars and amino acids by hydrolyzing peptidoglycan isolated from isotopically labeled bacterial biomass and used them as tracers (15N) and internal standards (13C). High-resolution (Orbitrap Exactive) MS with a resolution of 50,000 allowed us to separate different stable isotope labeled analogues across a large range of metabolites. The utilization of 13C internal standards greatly improved the accuracy and reliability of absolute quantification. We successfully applied this method to two types of soils and quantified the extracellular gross fluxes of 2 amino sugars, 18 amino acids, and 4 amino acid enantiomers. Compared to the influx and efflux rates of most amino acids, similar ones were found for glucosamine, indicating that this amino sugar is released through peptidoglycan and chitin decomposition and serves as an important nitrogen source for soil microorganisms. D-Alanine and D-glutamic acid derived from peptidoglycan decomposition exhibited similar turnover rates as their L-enantiomers. This novel approach offers new strategies to advance our understanding of the production and transformation pathways of soil organic N metabolites, including the unknown contributions of peptidoglycan and chitin decomposition to soil organic N cycling.
Microencapsulation of Flavors in Carnauba Wax
Milanovic, Jelena; Manojlovic, Verica; Levic, Steva; Rajic, Nevenka; Nedovic, Viktor; Bugarski, Branko
2010-01-01
The subject of this study is the development of flavor wax formulations intended for food and feed products. The melt dispersion technique was applied for the encapsulation of ethyl vanillin in wax microcapsules. The surface morphology of the microparticles was investigated using a scanning electron microscope (SEM), while the loading content was determined by HPLC measurements. This study shows that the decomposition process under heating proceeds in several steps: vanillin evaporation occurs at around 200 °C, while matrix degradation starts at 250 °C and progresses with maxima at around 360, 440 and 520 °C. The results indicate that carnauba wax is an attractive matrix material for the encapsulation of flavors, improving their functionality and stability in products. PMID:22315575
Fractional-order Fourier analysis for ultrashort pulse characterization.
Brunel, Marc; Coetmellec, Sébastien; Lelek, Mickael; Louradour, Frédéric
2007-06-01
We report what we believe to be the first experimental demonstration of ultrashort pulse characterization using fractional-order Fourier analysis. The analysis is applied to the interpretation of spectral interferometry resolved in time (SPIRIT) traces [which are spectral phase interferometry for direct electric field reconstruction (SPIDER)-like interferograms]. First, the fractional-order Fourier transformation is shown to naturally allow the determination of the cubic spectral phase coefficient of the pulses to be analyzed. A simultaneous determination of both the cubic and quadratic spectral phase coefficients of the pulses using the fractional-order Fourier series expansion is further demonstrated. This latter technique consists of localizing relative maxima in a 2D map of the decomposition coefficients. It is further used to reconstruct or filter SPIRIT traces.
NASA Astrophysics Data System (ADS)
Hayeemasae, N.; Surya, I.; Ismail, H.
2018-02-01
This paper deals with the morphology and thermal stability of nano titanium dioxide (TiO2) filled natural rubber composites, and also suggests a new method of incorporating the TiO2. Aqueous dispersions of nano TiO2 at loadings of 0, 2, 4, 6 and 8 phr were dispersed in natural rubber latex; the resulting compounds were then dried prior to mixing with other ingredients on a two-roll mill. By applying this technique, the homogeneity of the compound is significantly improved, as can be clearly seen from the observed morphology. Adding TiO2 shifts the decomposition temperature and changes the char residue at all nano TiO2 loadings.
NASA Astrophysics Data System (ADS)
Tallant, D. R.; Jungst, R. G.
1981-04-01
A dual-beam diode laser spectrometer was constructed using off-axis reflective optics. The spectrometer was amplitude modulated for direct absorption measurements or frequency modulated to obtain derivative spectra. The spectrometer had high throughput, was easy to operate and align, provided good dual-beam compensation, and showed no evidence of the interference effects that were observed in diode laser spectrometers using refractive optics. Unpurged, using second-derivative techniques, the instrument measured 108 parts per million CO (10-cm absorption cell, atmospheric pressure broadened) with good signal/noise. With the replacement of marginal instrumental components, the signal/noise was substantially increased. This instrument was developed to monitor the evolution of decomposition gases in sealed containers of small volume at atmospheric pressure.
NASA Astrophysics Data System (ADS)
Tjong, Tiffany; Yihaa’ Roodhiyah, Lisa; Nurhasan; Sutarno, Doddy
2018-04-01
In this work, an inversion scheme was performed using vector finite element (VFE) based 2-D magnetotelluric (MT) forward modelling. We use an inversion scheme with the singular value decomposition (SVD) method to improve the accuracy of MT inversion. The inversion scheme was applied to the transverse electric (TE) mode of MT. The SVD method was used in this inversion to decompose the Jacobian matrices. The singular values obtained from the decomposition process were analyzed. This enabled us to determine the importance of the data and therefore to define a threshold for the truncation process. The truncation of singular values in the inversion process could improve the resulting model.
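A truncated-SVD model update of the kind described can be sketched in a few lines; the relative threshold rule and the toy Jacobian are illustrative assumptions:

```python
import numpy as np

def tsvd_step(J, residual, threshold=1e-3):
    """One truncated-SVD least-squares model update.

    J        -- Jacobian (sensitivity) matrix, n_data x n_model
    residual -- data misfit vector, n_data
    Singular values below `threshold` times the largest are truncated,
    suppressing poorly constrained model directions.
    """
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    keep = s > threshold * s[0]        # drop small singular values
    return Vt[keep].T @ ((U[:, keep].T @ residual) / s[keep])

# Toy ill-conditioned problem: the truncated update stays stable.
rng = np.random.default_rng(8)
J = rng.normal(size=(50, 20))
J[:, -1] = J[:, 0] * (1 + 1e-9)        # nearly degenerate model directions
dm = tsvd_step(J, rng.normal(size=50), threshold=1e-4)
print(np.linalg.norm(dm))              # bounded, unlike the naive solve
```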
Catalytic effect on ultrasonic decomposition of cellulose
NASA Astrophysics Data System (ADS)
Nomura, Shinfuku; Wakida, Kousuke; Mukasa, Shinobu; Toyota, Hiromichi
2018-07-01
Cellulase is introduced as a catalyst into the ultrasonic welding method for cellulose decomposition in order to obtain glucose. By adding cellulase to the welding process, the cellulose in filter paper decomposes into glucose, 5-hydroxymethylfurfural (5-HMF), furfural, and oligosaccharides. The amount of glucose from hydrolysis was increased by ultrasonic welding of filter paper immersed in water. The most glucose was obtained with 100 W ultrasonic irradiation; however, when 200 W was applied, the glucose itself dehydrated and was converted into 5-HMF owing to ultrasonic thermolysis. Therefore, there is an optimum welding power for the production of glucose from cellulose decomposition.
Using domain decomposition in the multigrid NAS parallel benchmark on the Fujitsu VPP500
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, J.C.H.; Lung, H.; Katsumata, Y.
1995-12-01
In this paper, we demonstrate how domain decomposition can be applied to the multigrid algorithm to convert the code for MPP architectures. We also discuss the performance and scalability of this implementation on the new product line of Fujitsu's vector parallel computer, the VPP500. This computer uses Fujitsu's well-known vector processor as the PE, each rated at 1.6 GFLOPS. A high-speed crossbar network rated at 800 MB/s provides the inter-PE communication. The results show that physical domain decomposition is the best way to solve MG problems on the VPP500.
Watermarking scheme based on singular value decomposition and homomorphic transform
NASA Astrophysics Data System (ADS)
Verma, Deval; Aggarwal, A. K.; Agarwal, Himanshu
2017-10-01
A semi-blind watermarking scheme based on singular value decomposition (SVD) and the homomorphic transform is proposed. This scheme ensures the digital security of an eight-bit gray scale image by inserting an invisible eight-bit gray scale watermark into it. The key approach of the scheme is to apply the homomorphic transform to the host image to obtain its reflectance component. The watermark is embedded into the singular values obtained by applying singular value decomposition to the reflectance component. Peak signal-to-noise ratio (PSNR), normalized correlation coefficient (NCC) and mean structural similarity index measure (MSSIM) are used to evaluate the performance of the scheme. Invisibility of the watermark is ensured by visual inspection and the high PSNR values of the watermarked images. Presence of the watermark is ensured by visual inspection and high NCC and MSSIM values for the extracted watermarks. Robustness of the scheme is verified by high NCC and MSSIM values for attacked watermarked images.
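The singular-value embedding at the heart of such schemes can be sketched as below; this is a classic SVD-domain construction applied to whatever component the homomorphic transform yields (whether it matches the authors' exact scheme is an assumption), and the side information kept for extraction is what makes it semi-blind:

```python
import numpy as np

def embed(host, watermark, alpha=0.05):
    """Embed a watermark into the singular values of the host component."""
    U, s, Vt = np.linalg.svd(host, full_matrices=False)
    Uw, sw, Vtw = np.linalg.svd(np.diag(s) + alpha * watermark,
                                full_matrices=False)
    watermarked = U @ np.diag(sw) @ Vt
    return watermarked, (Uw, Vtw, s)      # side information for extraction

def extract(watermarked, side, alpha=0.05):
    Uw, Vtw, s = side
    sw = np.linalg.svd(watermarked, compute_uv=False)
    D = Uw @ np.diag(sw) @ Vtw            # reassemble the modified matrix
    return (D - np.diag(s)) / alpha

# Toy round trip; `host` stands in for the reflectance component.
rng = np.random.default_rng(9)
host = rng.random((64, 64))
wm = rng.random((64, 64))
marked, side = embed(host, wm)
print(np.allclose(extract(marked, side), wm, atol=1e-8))  # True
```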
Castellanos-Barliza, Jeiner; León Peláez, Juan Diego
2011-03-01
Several factors control decomposition in terrestrial ecosystems, such as humidity, temperature, litter quality and microbial activity. We investigated the effects of rainfall and of soil plowing prior to the establishment of Acacia mangium plantations, using the litterbag technique over a six-month period, in forest plantations in the Bajo Cauca region, Colombia. The annual decomposition constants (k) of the simple exponential model oscillated between 1.24 and 1.80, while the k1 and k2 decomposition constants of the double exponential model were 0.88-1.81 and 0.58-7.01, respectively. At the end of the study, the mean residual dry matter (RDM) was 47% of the initial value for the three sites. We found a slow N, Ca and Mg release pattern from the A. mangium leaf litter, while phosphorus (P) showed a dominant immobilization phase, suggesting its low availability in soils. Chemical leaf litter quality parameters (e.g. N and P concentrations, C/N and N/P ratios, and phenol content) showed an important influence on decomposition rates. The results of this study indicated that rainfall plays an important role in the decomposition process, but soil plowing does not.
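Both litterbag models are one-line functions, so k values like those above can be recovered from mass-loss data by nonlinear least squares. A sketch with illustrative data (not the study's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

# Single and double exponential litterbag models; t in years, y = fraction
# of initial dry mass remaining.
single = lambda t, k: np.exp(-k * t)
double = lambda t, a, k1, k2: a * np.exp(-k1 * t) + (1 - a) * np.exp(-k2 * t)

t = np.array([0.0, 0.08, 0.17, 0.25, 0.33, 0.42, 0.5])   # sampling dates
y = np.array([1.0, 0.85, 0.74, 0.66, 0.59, 0.53, 0.47])  # residual dry matter

(k,), _ = curve_fit(single, t, y, p0=[1.0])
(a, k1, k2), _ = curve_fit(double, t, y, p0=[0.5, 1.0, 0.5],
                           bounds=([0, 0, 0], [1, 10, 10]))
print(f"single-model k = {k:.2f} per year")   # ~1.5 for these numbers
```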
NDMA formation by chloramination of ranitidine: kinetics and mechanism.
Roux, Julien Le; Gallard, Hervé; Croué, Jean-Philippe; Papot, Sébastien; Deborde, Marie
2012-10-16
The kinetics of decomposition of the pharmaceutical ranitidine (a major precursor of NDMA) during chloramination was investigated, and some decomposition byproducts were identified using high performance liquid chromatography coupled with mass spectrometry (HPLC-MS). The reaction between monochloramine and ranitidine followed second-order kinetics and was acid-catalyzed. Decomposition of ranitidine formed different byproducts depending on the applied monochloramine concentration. Most identified products were chlorinated and hydroxylated analogues of ranitidine. In excess of monochloramine, nucleophilic substitution between ranitidine and monochloramine led to byproducts that are critical intermediates in the formation of NDMA, for example, a carbocation formed from the decomposition of the methylfuran moiety of ranitidine. A complete mechanism is proposed to explain the high formation yield of NDMA from chloramination of ranitidine. These results are of great importance for understanding the formation of NDMA by chloramination of tertiary amines.
Campos, Xochi; Germino, Matthew; de Graaff, Marie-Anne
2017-01-01
Aims: Changing precipitation regimes in semiarid ecosystems will affect the balance of soil carbon (C) input and release, but the net effect on soil C storage is unclear. We asked how changes in the amount and timing of precipitation affect litter decomposition and soil C stabilization in semiarid ecosystems. Methods: The study took place at a long-term (18 years) ecohydrology experiment located in Idaho. Precipitation treatments consisted of a doubling of annual precipitation (+200 mm) added either in the cold-dormant season or in the growing season. Experimental plots were planted with big sagebrush (Artemisia tridentata) or with crested wheatgrass (Agropyron cristatum). We quantified decomposition of sagebrush leaf litter, and we assessed organic soil C (SOC) in aggregates and in silt and clay fractions. Results: We found that (1) increased precipitation applied in the growing season consistently enhanced decomposition rates relative to the ambient treatment, and (2) precipitation applied in the dormant season enhanced soil C stabilization. Conclusions: These data indicate that prolonged increases in precipitation can promote soil C storage in semiarid ecosystems, but only if these increases happen at times of the year when conditions allow precipitation to promote plant C input rates to soil.
Absolute continuity for operator valued completely positive maps on C∗-algebras
NASA Astrophysics Data System (ADS)
Gheondea, Aurelian; Kavruk, Ali Şamil
2009-02-01
Motivated by applicability to quantum operations, quantum information, and quantum probability, we investigate the notion of absolute continuity for operator valued completely positive maps on C∗-algebras, previously introduced by Parthasarathy [in Athens Conference on Applied Probability and Time Series Analysis I (Springer-Verlag, Berlin, 1996), pp. 34-54]. We obtain an intrinsic definition of absolute continuity, we show that the Lebesgue decomposition defined by Parthasarathy is the maximal one among all other Lebesgue-type decompositions and that this maximal Lebesgue decomposition does not depend on the jointly dominating completely positive map, we obtain more flexible formulas for calculating the maximal Lebesgue decomposition, and we point out the nonuniqueness of the Lebesgue decomposition as well as a sufficient condition for uniqueness. In addition, we consider Radon-Nikodym derivatives for absolutely continuous completely positive maps that, in general, are unbounded positive self-adjoint operators affiliated to a certain von Neumann algebra, and we obtain a spectral approximation by bounded Radon-Nikodym derivatives. An application to the existence of the infimum of two completely positive maps is indicated, and formulas in terms of Choi's matrices for the Lebesgue decomposition of completely positive maps in matrix algebras are obtained.
Investigation of automated task learning, decomposition and scheduling
NASA Technical Reports Server (NTRS)
Livingston, David L.; Serpen, Gursel; Masti, Chandrashekar L.
1990-01-01
The details and results of research conducted in the application of neural networks to task planning and decomposition are presented. Task planning and decomposition are operations that humans perform in a reasonably efficient manner. Without the use of good heuristics and usually much human interaction, automatic planners and decomposers generally do not perform well, due to the intractable nature of the problems under consideration. The human-like performance of neural networks has shown promise for generating acceptable solutions to intractable problems such as planning and decomposition; this was the primary reasoning behind the study. The basis for the work is the use of state machines to model tasks. State machine models provide a useful means for examining the structure of tasks, since many formal techniques have been developed for their analysis and synthesis. The approach integrates the strong algebraic foundations of state machines with the heretofore trial-and-error approach to neural network synthesis.
Vertical decomposition with Genetic Algorithm for Multiple Sequence Alignment
2011-01-01
Background: Many bioinformatics studies begin with a multiple sequence alignment as the foundation for their research, because multiple sequence alignment can be a useful technique for studying molecular evolution and analyzing sequence-structure relationships. Results: In this paper, we propose a Vertical Decomposition with Genetic Algorithm (VDGA) for Multiple Sequence Alignment (MSA). In VDGA, we divide the sequences vertically into two or more subsequences, then solve them individually using a guide tree approach, and finally combine all the subsequences to generate a new multiple sequence alignment. This technique is applied to the solutions of the initial generation and of each child generation within VDGA. We used two mechanisms to generate an initial population in this research: the first is to generate guide trees with randomly selected sequences, and the second is shuffling the sequences inside such trees. Two different genetic operators have been implemented with VDGA. To test the performance of our algorithm, we compared it with existing well-known methods, namely PRRP, CLUSTALX, DIALIGN, HMMT, SB_PIMA, ML_PIMA, MULTALIGN, and PILEUP8, and also with other methods based on Genetic Algorithms (GA), such as SAGA, MSA-GA and RBT-GA, by solving a number of benchmark datasets from BAliBase 2.0. Conclusions: The experimental results showed that VDGA with three vertical divisions was the most successful variant for most of the test cases, in comparison to the other divisions considered with VDGA. The experimental results also confirmed that VDGA outperformed the other methods considered in this research. PMID:21867510
NASA Technical Reports Server (NTRS)
Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.
1989-01-01
Computer vision systems employ a sequence of vision algorithms in which the output of an algorithm is the input of the next algorithm in the sequence. Algorithms that constitute such systems exhibit vastly different computational characteristics, and therefore, require different data decomposition techniques and efficient load balancing techniques for parallel implementation. However, since the input data for a task is produced as the output data of the previous task, this information can be exploited to perform knowledge based data decomposition and load balancing. Presented here are algorithms for a motion estimation system. The motion estimation is based on the point correspondence between the involved images which are a sequence of stereo image pairs. Researchers propose algorithms to obtain point correspondences by matching feature points among stereo image pairs at any two consecutive time instants. Furthermore, the proposed algorithms employ non-iterative procedures, which results in saving considerable amounts of computation time. The system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from consecutive time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters.
A Decomposition Method for Security Constrained Economic Dispatch of a Three-Layer Power System
NASA Astrophysics Data System (ADS)
Yang, Junfeng; Luo, Zhiqiang; Dong, Cheng; Lai, Xiaowen; Wang, Yang
2018-01-01
This paper proposes a new decomposition method for the security-constrained economic dispatch in a three-layer large-scale power system. The decomposition is realized using two main techniques. The first is to use Ward equivalencing-based network reduction to reduce the number of variables and constraints in the high-layer model without sacrificing accuracy. The second is to develop a price response function to exchange signal information between neighboring layers, which significantly improves the information exchange efficiency of each iteration and results in fewer iterations and less computational time. The case studies based on the duplicated RTS-79 system demonstrate the effectiveness and robustness of the proposed method.
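The network-reduction step can be illustrated with a short sketch. This is a generic Ward reduction via a Schur complement on the bus admittance matrix, not the authors' implementation; the 4-bus matrix below is a toy example:

```python
import numpy as np

def ward_equivalent(Y, keep):
    """Reduce admittance matrix Y to the buses listed in `keep` by
    eliminating all other buses with a Schur complement."""
    n = Y.shape[0]
    elim = [i for i in range(n) if i not in keep]
    Ykk = Y[np.ix_(keep, keep)]
    Yke = Y[np.ix_(keep, elim)]
    Yek = Y[np.ix_(elim, keep)]
    Yee = Y[np.ix_(elim, elim)]
    # Equivalent admittance seen from the retained buses.
    return Ykk - Yke @ np.linalg.solve(Yee, Yek)

# Toy 4-bus network: keep buses 0 and 1, eliminate buses 2 and 3.
Y = np.array([[ 3., -1., -1., -1.],
              [-1.,  2., -1.,  0.],
              [-1., -1.,  3., -1.],
              [-1.,  0., -1.,  2.]])
print(ward_equivalent(Y, [0, 1]))
```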
Solid-state reaction kinetics of neodymium doped magnesium hydrogen phosphate system
NASA Astrophysics Data System (ADS)
Gupta, Rashmi; Slathia, Goldy; Bamzai, K. K.
2018-05-01
Neodymium doped magnesium hydrogen phosphate (NdMHP) crystals were grown by using the gel encapsulation technique. Structural characterization of the grown crystals has been carried out by single crystal X-ray diffraction (XRD), which revealed that NdMHP crystals crystallize in the orthorhombic crystal system with space group Pbca. Kinetics of the decomposition of the grown crystals has been studied by non-isothermal analysis. The decomposition temperatures and weight losses were estimated from thermogravimetric/differential thermal analysis (TG/DTA) in conjunction with DSC studies. The various steps involved in the thermal decomposition of the material have been analysed using the Horowitz-Metzger, Coats-Redfern and Piloyan-Novikova equations to evaluate the kinetic parameters.
Multitasking domain decomposition fast Poisson solvers on the Cray Y-MP
NASA Technical Reports Server (NTRS)
Chan, Tony F.; Fatoohi, Rod A.
1990-01-01
The results of multitasking implementation of a domain decomposition fast Poisson solver on eight processors of the Cray Y-MP are presented. The object of this research is to study the performance of domain decomposition methods on a Cray supercomputer and to analyze the performance of different multitasking techniques using highly parallel algorithms. Two implementations of multitasking are considered: macrotasking (parallelism at the subroutine level) and microtasking (parallelism at the do-loop level). A conventional FFT-based fast Poisson solver is also multitasked. The results of different implementations are compared and analyzed. A speedup of over 7.4 on the Cray Y-MP running in a dedicated environment is achieved for all cases.
Temporally flickering nanoparticles for compound cellular imaging and super resolution
NASA Astrophysics Data System (ADS)
Ilovitsh, Tali; Danan, Yossef; Meir, Rinat; Meiri, Amihai; Zalevsky, Zeev
2016-03-01
This work presents the use of flickering nanoparticles for imaging biological samples. The method has high noise immunity and enables the detection of overlapping types of GNPs at significantly sub-diffraction distances, making it attractive for super-resolving localization microscopy techniques. The method utilizes a lock-in technique in which the sample is imaged with time-modulated laser beams, one per type of gold nanoparticle (GNP) labeling the sample, so that the scattered light flickers at known temporal frequencies. The final image, in which the GNPs are spatially separated, is obtained by post-processing that extracts the spectral components corresponding to the different modulation frequencies. This allows the simultaneous super-resolved imaging of multiple types of GNPs that label targets of interest within biological samples. Additionally, applying the K-factor image decomposition algorithm as a further post-processing step can improve the performance of the proposed approach.
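A minimal sketch of the demodulation idea on synthetic data with two known modulation frequencies; the array sizes, frequencies, and noise level below are illustrative and not taken from the paper:

```python
import numpy as np

# Synthetic stack: two GNP types flicker at f1 and f2 (frames, y, x).
fs, n_frames = 100.0, 200
t = np.arange(n_frames) / fs
f1, f2 = 10.0, 17.0
img = np.zeros((n_frames, 32, 32))
img[:, 8, 8]   = 1 + np.cos(2 * np.pi * f1 * t)   # GNP type 1
img[:, 20, 20] = 1 + np.cos(2 * np.pi * f2 * t)   # GNP type 2
img += 0.1 * np.random.randn(*img.shape)          # detection noise

# Demodulate: FFT along time, keep the bins at the known frequencies.
spec = np.fft.rfft(img, axis=0)
freqs = np.fft.rfftfreq(n_frames, d=1 / fs)
map1 = np.abs(spec[np.argmin(np.abs(freqs - f1))])
map2 = np.abs(spec[np.argmin(np.abs(freqs - f2))])
print(map1.argmax(), map2.argmax())  # flattened peaks at the labeled pixels
```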
Micromorphology of pelletized soil conditioners
NASA Astrophysics Data System (ADS)
Hirsch, Florian; Dietrich, Nils; Knoop, Christine; Raab, Thomas
2017-04-01
Soil conditioners produced by anaerobic digestion and subsequent composting of organic household waste bear the potential to improve unproductive farmland with a reduced risk of input of unwanted pollutants into the soils. Within the VeNGA project (http://www.biogas-network.de/venga), soil conditioners from anaerobically digested organic household waste are tested for their potential to increase plant growth in glasshouse and field experiments. Because the production techniques of these soil conditioners may influence their physical and chemical behaviour in the soil, two different techniques for pelletizing the soil conditioners were applied. We present findings from a pot experiment with cereal that was sampled after two months for micromorphological analyses. We visualize the decomposition and the physical behaviour of the soil conditioners. Pellets produced in an agglomeration mixer result in dense balls that are only slightly decomposed after the trial. By contrast, the soil conditioners created under pressure in a screw extruder are rich in voids and have the potential of retaining more soil water.
Hydrothermal Oxidation of Fecal Sludge: Experimental Investigations and Kinetic Modeling.
Hübner, Tobias; Roth, Markus; Vogel, Frédéric
2016-11-23
Hydrothermal oxidation (HTO) provides an efficient technique to completely destroy wet organic wastes. In this study, HTO was applied to treat fecal sludge at well-defined experimental conditions. Four different kinetic models were fitted to the obtained data. Among others, a distributed activation energy model (DAEM) was applied. A total of 33 experiments were carried out in an unstirred batch reactor with pressurized air as the oxidant at temperatures of <470 °C, oxygen-to-fuel equivalence ratios between 0 and 1.9, feed concentrations between 3.9 and 9.8 mol TOC L−1 (TOC = total organic carbon), and reaction times between 86 and 1572 s. Decomposition of the fecal sludge was monitored by means of the conversion of TOC to CO2 and CO. In the presence of oxygen, ignition of the reaction was observed around 300 °C, followed by further rapid decomposition of the organic material. The TOC was completely decomposed to CO2 within 25 min at 470 °C and an oxygen-to-fuel equivalence ratio of 1.2. CO was formed as an intermediate product, and no other combustible products were found in the gas. At certain reaction conditions, the formation of unwanted coke and tarlike products occurred. The reaction temperature and oxygen-to-fuel equivalence ratio showed a significant influence on TOC conversion, while the initial TOC concentration did not. Conversion of TOC to CO2 could be well described with a first-order rate law and an activation energy of 39 kJ mol−1.
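A short illustration of what the fitted first-order Arrhenius rate law implies. The activation energy is taken from the abstract, while the pre-exponential factor is not reported and is set to an arbitrary placeholder here:

```python
import numpy as np

R, Ea = 8.314, 39e3  # J/(mol K); Ea from the fitted first-order rate law

def k(T, A):
    """Arrhenius rate constant; the pre-exponential factor A (1/s) is not
    reported in the abstract and is an arbitrary placeholder here."""
    return A * np.exp(-Ea / (R * T))

for T_C in (300.0, 470.0):  # ignition and full-conversion temperatures
    print(T_C, k(T_C + 273.15, A=1.0))

# Independent of A: the rate speed-up between 300 and 470 degrees C.
print(k(743.15, 1.0) / k(573.15, 1.0))
```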
Tailored multivariate analysis for modulated enhanced diffraction
Caliandro, Rocco; Guccione, Pietro; Nico, Giovanni; ...
2015-10-21
Modulated enhanced diffraction (MED) is a technique allowing the dynamic structural characterization of crystalline materials subjected to an external stimulus, which is particularly suited for in situ and operando structural investigations at synchrotron sources. Contributions from the (active) part of the crystal system that varies synchronously with the stimulus can be extracted by an offline analysis, which can only be applied in the case of periodic stimuli and linear system responses. In this paper a new decomposition approach based on multivariate analysis is proposed. The standard principal component analysis (PCA) is adapted to treat MED data: specific figures of merit based on the scores and loadings are found, and the directions of the principal components obtained by PCA are modified to maximize such figures of merit. As a result, a general method to decompose MED data, called optimum constrained components rotation (OCCR), is developed, which produces very precise results on simulated data, even in the case of nonperiodic stimuli and/or nonlinear responses. Furthermore, the multivariate analysis approach is able to supply in one shot both the diffraction pattern related to the active atoms (through the OCCR loadings) and the time dependence of the system response (through the OCCR scores). However, when applied to real data, OCCR was able to supply only the latter information, as the former was hindered by changes in the abundances of different crystal phases, which occurred besides structural variations in the specific case considered. Developing a decomposition procedure able to cope with this combined effect represents the next challenge in MED analysis.
NASA Technical Reports Server (NTRS)
Huff, Timothy L.; Griffin, Dennis E. (Technical Monitor)
2001-01-01
Thermogravimetric analysis (TGA) is widely employed in the thermal characterization of non-metallic materials, yielding valuable information on decomposition characteristics of a sample over a wide temperature range. However, a potential wealth of chemical information is lost during the process, with the evolving gases generated during thermal decomposition escaping through the exhaust line. Fourier transform infrared spectroscopy (FT-IR) is a powerful analytical technique for determining many chemical constituents in any material state; in this application, the gas phase. By linking these two techniques, evolving gases generated during the TGA process are directed into an appropriately equipped infrared spectrometer for chemical speciation. Consequently, both thermal decomposition and chemical characterization of a material may be obtained in a single sample run. In practice, a heated transfer line is employed to connect the two instruments while a purge gas stream directs the evolving gases into the FT-IR. The purge gas can be either high purity air or an inert gas such as nitrogen, allowing oxidative and pyrolytic processes to be examined, respectively. The FT-IR data are collected in real time, allowing continuous monitoring of chemical compositional changes over the course of thermal decomposition. Using this coupled technique, an array of diverse materials has been examined, including composites, plastics, rubber, fiberglass epoxy resins, polycarbonates, silicones, lubricants and fluorocarbon materials. The benefit of combining these two methodologies is of particular importance in the aerospace community, where newly developed materials have little reference data available. By providing both thermal and chemical data simultaneously, a more definitive and comprehensive characterization of the material is possible. Additionally, this procedure has been found to be a viable screening technique for certain materials, with the generated data useful in the selection of other appropriate analytical procedures for further material characterization.
Dong, Yang; Qi, Ji; He, Honghui; He, Chao; Liu, Shaoxiong; Wu, Jian; Elson, Daniel S; Ma, Hui
2017-08-01
Polarization imaging has been recognized as a potentially powerful technique for probing the microstructural information and optical properties of complex biological specimens. Recently, we have reported a Mueller matrix microscope by adding the polarization state generator and analyzer (PSG and PSA) to a commercial transmission-light microscope, and applied it to differentiate human liver and cervical cancerous tissues with fibrosis. In this paper, we apply the Mueller matrix microscope for quantitative detection of human breast ductal carcinoma samples at different stages. The Mueller matrix polar decomposition and transformation parameters of the breast ductal tissues in different regions and at different stages are calculated and analyzed. For more quantitative comparisons, several widely-used image texture feature parameters are also calculated to characterize the difference in the polarimetric images. The experimental results indicate that the Mueller matrix microscope and the polarization parameters can facilitate the quantitative detection of breast ductal carcinoma tissues at different stages.
Nanoscale precipitation in a maraging steel studied by APFIM.
Stiller, Krystyna; Hättestrand, Mats
2004-06-01
This article summarizes findings from our previous investigations and recent studies concerning precipitation in a maraging steel of type 13Cr-9Ni-2Mo-2Cu (at.%) with small additions of Ti (1 at.%) and Al (0.7 at.%). The material was investigated after aging at 475 degrees C for up to 400 h using both conventional and three-dimensional atom-probe analyses. The process of phase decomposition in the steel proved to be complicated. It consisted of precipitation of several phases with different chemistry. A Cu-rich phase was the first to precipitate and Mo was last in the precipitation sequence. The influence of the complex precipitation path on the material properties is discussed. The investigation clearly demonstrated the usefulness of the applied techniques for studying nanoscale precipitation. It is also shown that complementary methods (such as TEM and EFTEM), giving structural and chemical information on a larger scale, must be applied to explain the good properties of the steel after prolonged aging.
Soil organic matter decomposition follows plant productivity response to sea-level rise
NASA Astrophysics Data System (ADS)
Mueller, Peter; Jensen, Kai; Megonigal, James Patrick
2015-04-01
The accumulation of soil organic matter (SOM) is an important mechanism for many tidal wetlands to keep pace with sea-level rise. SOM accumulation is governed by the rates of production and decomposition of organic matter. While plant productivity responses to sea-level rise are well understood, far less is known about the response of SOM decomposition to accelerated sea-level rise. Here we quantified the effects of sea-level rise on SOM decomposition by exposing planted and unplanted tidal marsh monoliths to experimentally manipulated flood duration. The study was performed in a field-based mesocosm facility at the Smithsonian Global Change Research Wetland, a microtidal brackish marsh in Maryland, US. SOM decomposition was quantified as CO2 efflux, with plant- and SOM-derived CO2 separated using a stable carbon isotope approach. Despite the dogma that decomposition rates are inversely related to flooding, SOM mineralization was not sensitive to varying flood duration over a 35 cm range in surface elevation in unplanted mesocosms. In the presence of plants, decomposition rates were strongly and positively related to aboveground biomass (p≤0.01, R2≥0.59). We conclude that rates of soil carbon loss through decomposition are driven by plant responses to sea level in this intensively studied tidal marsh. If our result applies more generally to tidal wetlands, it has important implications for modeling carbon sequestration and marsh accretion in response to accelerated sea-level rise.
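The isotope-based source partitioning can be sketched with a standard two-end-member mixing model; the delta-13C values below are hypothetical, not measurements from the study:

```python
def som_fraction(delta_total, delta_plant, delta_som):
    """Fraction of CO2 efflux derived from SOM, from a two-end-member
    stable-isotope mixing model (all values in per mil)."""
    return (delta_total - delta_plant) / (delta_som - delta_plant)

# Hypothetical delta-13C values, e.g. a C4 plant on C3-derived SOM:
print(som_fraction(delta_total=-18.0, delta_plant=-13.0, delta_som=-27.0))
# -> ~0.36, i.e. about 36% of the efflux would be SOM-derived
```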
An analysis of scatter decomposition
NASA Technical Reports Server (NTRS)
Nicol, David M.; Saltz, Joel H.
1990-01-01
A formal analysis of a powerful mapping technique known as scatter decomposition is presented. Scatter decomposition divides an irregular computational domain into a large number of equal-sized pieces, and distributes them modularly among processors. A probabilistic model of workload in one dimension is used to formally explain why, and when, scatter decomposition works. The first result is that if correlation in workload is a convex function of distance, then scattering a more finely decomposed domain yields a lower average processor workload variance. The second result shows that if the workload process is stationary Gaussian and the correlation function decreases linearly in distance until becoming zero and then remains zero, scattering a more finely decomposed domain yields a lower expected maximum processor workload. Finally it is shown that if the correlation function decreases linearly across the entire domain, then among all mappings that assign an equal number of domain pieces to each processor, scatter decomposition minimizes the average processor workload variance. The dependence of these results on the assumption of decreasing correlation is illustrated with situations where a coarser granularity actually achieves better load balance.
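A small numerical sketch of the mapping itself (modular assignment of equal-sized pieces) on a correlated 1-D workload; the workload model and sizes are illustrative:

```python
import numpy as np

def scatter_map(n_pieces, n_procs):
    """Assign equal-sized domain pieces to processors modularly."""
    return np.arange(n_pieces) % n_procs

# Correlated 1-D workload: compare coarse vs. fine scattering on 4 processors.
rng = np.random.default_rng(0)
work = np.convolve(rng.normal(size=1024), np.ones(64) / 64, mode="same")
for n_pieces in (8, 64):
    piece_load = np.array([p.sum() for p in np.array_split(work, n_pieces)])
    owner = scatter_map(n_pieces, 4)
    proc_load = np.bincount(owner, weights=piece_load, minlength=4)
    print(n_pieces, proc_load.var())  # finer scattering tends to lower variance
```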
Image splitting and remapping method for radiological image compression
NASA Astrophysics Data System (ADS)
Lo, Shih-Chung B.; Shen, Ellen L.; Mun, Seong K.
1990-07-01
A new decomposition method using image splitting and gray-level remapping has been proposed for image compression, particularly for images with high contrast resolution. The effects of this method are especially evident in our radiological image compression study. In our experiments, we tested the impact of this decomposition method on image compression by employing it with two coding techniques on a set of clinically used CT images and several laser film digitized chest radiographs. One of the compression techniques used was full-frame bit-allocation in the discrete cosine transform domain, which has been proven to be an effective technique for radiological image compression. The other compression technique used was vector quantization with pruned tree-structured encoding, which through recent research has also been found to produce a low mean-square-error and a high compression ratio. The parameters we used in this study were mean-square-error and the bit rate required for the compressed file. In addition to these parameters, the difference between the original and reconstructed images will be presented so that the specific artifacts generated by both techniques can be discerned by visual perception.
Boll, Daniel T; Marin, Daniele; Redmon, Grace M; Zink, Stephen I; Merkle, Elmar M
2010-04-01
The purpose of our study was to evaluate whether two-point Dixon MRI using a 2D decomposition technique facilitates metabolite differentiation between lipids and iron in standardized in vitro liver phantoms with in vivo patient validation and allows semiquantitative in vitro assessment of metabolites associated with steatosis, iron overload, and combined disease. The acrylamide-based phantoms were made to reproduce the T1- and T2-weighted MRI appearances of physiologic hepatic parenchyma and hepatic steatosis-iron overload by the admixture of triglycerides and ferumoxides. Combined disease was simulated using joint admixtures of triglycerides and ferumoxides at various concentrations. For phantom validation, 30 patients were included, of whom 10 had steatosis, 10 had iron overload, and 10 had no liver disease. For MRI an in-phase/opposed-phase T1-weighted sequence with TR/TE(opposed-phase)/TE(in-phase) of 4.19/1.25/2.46 was used. Fat/water series were obtained by Dixon-based algorithms. In-phase and opposed-phase and fat/water ratios were calculated. Statistical cluster analysis assessed ratio pairs of physiologic liver, steatosis, iron overload, and combined disease in 2D metabolite discrimination plots. Statistical assessment proved that metabolite decomposition in phantoms simulating steatosis (1.77|0.22; in-phase/opposed-phase|fat/water ratios), iron overload (0.75|0.21), and healthy control subjects (1.09|0.05) formed three clusters with distinct ratio pairs. Patient validation for hepatic steatosis (3.29|0.51), iron overload (0.56|0.41), and normal control subjects (0.99|0.05) confirmed this clustering (p < 0.001). One-dimensional analysis assessing in vitro combined disease only with in-phase/opposed-phase ratios would have failed to characterize metabolites. The 2D analysis plotting in-phase/opposed-phase and fat/water ratios (2.16|0.59) provided accurate semiquantitative metabolite decomposition (p < 0.001). MR Dixon imaging facilitates metabolite decomposition of intrahepatic lipids and iron using in vitro phantoms with in vivo patient validation. The proposed decomposition technique identified distinct in-phase/opposed-phase and fat/water ratios for in vitro steatosis, iron overload, and combined disease.
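A minimal sketch of the classic two-point Dixon recombination behind the fat/water series, with toy pixel values; this is the textbook formula, not the vendor algorithm used in the study:

```python
import numpy as np

def dixon_two_point(in_phase, opposed_phase):
    """Classic two-point Dixon recombination of magnitude images:
    water-dominant and fat-dominant series from an IP/OP pair."""
    water = 0.5 * (in_phase + opposed_phase)
    fat   = 0.5 * (in_phase - opposed_phase)
    return water, fat

ip = np.array([[2.0, 1.2]])   # toy in-phase pixel values
op = np.array([[0.4, 1.0]])   # toy opposed-phase pixel values
water, fat = dixon_two_point(ip, op)
print(water, fat, ip / op, fat / water)  # the IP/OP and fat/water ratio pair
```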
NASA Astrophysics Data System (ADS)
Fontaine, Joseph Henry
The focus of this dissertation is the development of an Unmanned Undersea Vehicle (UUV) liquid propellant employing Hydroxyl Ammonium Nitrate (HAN) as the oxidizer. Hydroxyl Ammonium Nitrate is a highly acidic aqueous based liquid oxidizer. Therefore, in order to achieve efficient combustion of a propellant using this oxidizer, the fuel must be highly water soluble and compatible with the oxidizer to prevent a premature ignition prior to being heated within the combustion chamber. An extensive search for the fuel to be used with this oxidizer was conducted. Propylene glycol was chosen as the fuel for this propellant, and the propellant given the name RF-402. The propellant development process will first evaluate the propellant's thermal stability and kinetic parameters using a Differential Scanning Calorimeter (DSC). The purpose of the thermal stability analysis is to determine the temperature at which the propellant decomposition begins for the future safe handling of the propellant and the optimization of the combustion chamber. Additionally, the thermogram results will provide information regarding any undesirable endotherms prior to the decomposition and whether or not the decomposition process is a multi-step process. The Arrhenius type kinetic parameters will be determined using the ASTM method for thermally unstable materials. The activation energy and pre-exponential factor of the propellant will be determined by evaluating the decomposition peak temperature over a temperature scan rate ranging from 1°C per minute to 10°C per minute. The kinetic parameters of the propellant will be compared to those of 81 wt% HAN to determine if the HAN decomposition is controlling the overall decomposition of the propellant RF-402. The lifetime of individual droplets will be analyzed using both experimental and theoretical techniques. The theoretical technique will involve modeling the lifetime of an individual droplet in a combustion-chamber-like operating environment. The experimental technique will consist of subjecting droplets suspended from a fine gauge thermocouple to an instantaneous hot gas source and recording their temperature response while imaging them with a high power video microscope to determine the physical response of the droplet. This analysis will be the foundation for all future efforts in developing a propulsion system employing the use of RF-402.
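The scan-rate-based kinetic evaluation can be sketched with a Kissinger-type fit, which underlies the ASTM approach for thermally unstable materials; the DSC peak temperatures below are hypothetical, not data from this work:

```python
import numpy as np

# Kissinger-type evaluation: ln(beta / Tp^2) vs 1/Tp is linear with
# slope -Ea/R and intercept ln(A*R/Ea).
R = 8.314
beta = np.array([1, 2, 5, 10]) / 60.0        # scan rates in K/s
Tp = np.array([420.0, 427.0, 436.0, 443.0])  # hypothetical peak temps, K

slope, intercept = np.polyfit(1.0 / Tp, np.log(beta / Tp**2), 1)
Ea = -slope * R                   # activation energy, J/mol
A = np.exp(intercept) * Ea / R    # pre-exponential factor estimate, 1/s
print(f"Ea = {Ea / 1000:.1f} kJ/mol, A = {A:.3g} 1/s")
```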
Yang, Chia Cheng; Chang, Shu Hao; Hong, Bao Zhen; Chi, Kai Hsien; Chang, Moo Been
2008-10-01
Development of effective PCDD/F (polychlorinated dibenzo-p-dioxin and dibenzofuran) control technologies is essential for environmental engineers and researchers. In this study, a PCDD/F-containing gas stream generating system was developed to investigate the efficiency and effectiveness of innovative PCDD/F control technologies. The system designed and constructed can stably generate a gas stream with PCDD/F concentrations ranging from 1.0 to 100 ng TEQ Nm−3, while a reproducibility test indicates that the PCDD/F recovery efficiencies are between 93% and 112%. This new PCDD/F-containing gas stream generating device is first applied in the investigation of the catalytic PCDD/F control technology. The catalytic decomposition of PCDD/Fs was evaluated with two types of commercial V2O5-WO3/TiO2-based catalysts (catalyst A and catalyst B) at controlled temperature, water vapor content, and space velocity. PCDD/F destruction efficiencies of 84% and 91% are achieved with catalysts A and B, respectively, at 280 degrees C with a space velocity of 5000 h−1. The results also indicate that the presence of water vapor inhibits PCDD/F decomposition due to its competition with PCDD/F molecules for adsorption on the active vanadia sites for both catalysts. In addition, this study combined the integral reaction and Mars-Van Krevelen models to calculate the activation energies of OCDD and OCDF decomposition. The activation energies of OCDD and OCDF decomposition via catalysis are calculated as 24.8 kJ mol−1 and 25.2 kJ mol−1, respectively.
NASA Astrophysics Data System (ADS)
Dehghan, Mehdi; Mohammadi, Vahid
2017-08-01
In this research, we investigate the numerical solution of nonlinear Schrödinger equations in two and three dimensions. The numerical meshless method used here is the RBF-FD technique. The main advantage of this method is that the required derivatives are approximated by a finite difference technique on each local support domain Ωi. On each Ωi, we need to solve only a small linear system of algebraic equations with a conditionally positive definite interpolation matrix of order 1. This scheme is efficient, and its computational cost is the same as that of the moving least squares (MLS) approximation. A challenging issue in this approach is choosing a suitable shape parameter for the interpolation matrix. To overcome this, an algorithm established by Sarra (2012) is applied, which computes the condition number of the local interpolation matrix using the singular value decomposition (SVD) to obtain its smallest and largest singular values. Moreover, an explicit method based on a fourth-order accurate Runge-Kutta formula is applied to approximate the time variable; this also decreases the computational cost at each time step, since no nonlinear system needs to be solved. To compare the RBF-FD method with another meshless technique, the moving kriging least squares (MKLS) approximation is also considered for the studied model. Our results demonstrate the ability of the present approach to solve the model investigated in this work.
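A simplified sketch of the SVD-based shape-parameter strategy on a 1-D multiquadric interpolation matrix; the target condition-number range, node set, and update factors are illustrative simplifications of the algorithm of Sarra (2012):

```python
import numpy as np

def mq_matrix(x, eps):
    """Multiquadric RBF interpolation matrix on 1-D nodes x."""
    r = np.abs(x[:, None] - x[None, :])
    return np.sqrt(1.0 + (eps * r) ** 2)

def tune_shape(x, kmin=1e11, kmax=1e13, eps=2.0):
    """Adjust eps until cond(A), computed via the SVD, lies in a target
    range; a simplified version of the condition-number strategy."""
    for _ in range(100):
        s = np.linalg.svd(mq_matrix(x, eps), compute_uv=False)
        kappa = s[0] / s[-1]       # largest over smallest singular value
        if kappa < kmin:
            eps *= 0.9             # too well-conditioned: flatten the RBF
        elif kappa > kmax:
            eps *= 1.1             # too ill-conditioned: sharpen it
        else:
            break
    return eps, kappa

x = np.linspace(0.0, 1.0, 15)
print(tune_shape(x))
```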
Asymmetric optical image encryption using Kolmogorov phase screens and equal modulus decomposition
NASA Astrophysics Data System (ADS)
Kumar, Ravi; Bhaduri, Basanta; Quan, Chenggen
2017-11-01
An asymmetric technique for optical image encryption is proposed using Kolmogorov phase screens (KPSs) and equal modulus decomposition (EMD). The KPSs are generated using the power spectral density of Kolmogorov turbulence. The input image is first randomized and then Fresnel propagated with distance d. Further, the output in the Fresnel domain is modulated with a random phase mask, and the gyrator transform (GT) of the modulated image is obtained with an angle α. The EMD is operated on the GT spectrum to get the complex images, Z1 and Z2. Among these, Z2 is reserved as a private key for decryption and Z1 is propagated through a medium consisting of four KPSs, located at specified distances, to get the final encrypted image. The proposed technique provides a large set of security keys and is robust against various potential attacks. Numerical simulation results validate the effectiveness and security of the proposed technique.
Domain decomposition methods in aerodynamics
NASA Technical Reports Server (NTRS)
Venkatakrishnan, V.; Saltz, Joel
1990-01-01
Compressible Euler equations are solved for two-dimensional problems by a preconditioned conjugate gradient-like technique. An approximate Riemann solver is used to compute the numerical fluxes to second order accuracy in space. Two ways to achieve parallelism are tested, one which makes use of parallelism inherent in triangular solves and the other which employs domain decomposition techniques. The vectorization/parallelism in triangular solves is realized by the use of a reordering technique called wavefront ordering. This process involves the interpretation of the triangular matrix as a directed graph and the analysis of the data dependencies. It is noted that the factorization can also be done in parallel with the wavefront ordering. The performances of two ways of partitioning the domain, strips and slabs, are compared. Results on the Cray Y-MP are reported for an inviscid transonic test case. The performances of linear algebra kernels are also reported.
New spectrophotometric assay for pilocarpine.
El-Masry, S; Soliman, R
1980-07-01
A quick method for the determination of pilocarpine in eye drops in the presence of decomposition products is described. The method involves complexation of the alkaloid with bromocresol purple at pH 6. After treatment with 0.1N NaOH, the liberated dye is measured at 580 nm. The method has a relative standard deviation of 1.99%, and has been successfully applied to the analysis of 2 batches of pilocarpine eye drops. The recommended method was also used to monitor the stability of a pilocarpine nitrate solution in 0.05N NaOH at 65 degrees C. The BPC method failed to detect any significant decomposition after 2 h incubation, but the recommended method revealed 87.5% decomposition.
Singular-value decomposition of a tomosynthesis system
Burvall, Anna; Barrett, Harrison H.; Myers, Kyle J.; Dainty, Christopher
2010-01-01
Tomosynthesis is an emerging technique with potential to replace mammography, since it gives 3D information at a relatively small increase in dose and cost. We present an analytical singular-value decomposition of a tomosynthesis system, which provides the measurement component of any given object. The method is demonstrated on an example object. The measurement component can be used as a reconstruction of the object, and can also be utilized in future observer studies of tomosynthesis image quality.
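The notion of a measurement component can be sketched in a few lines: project the object onto the row space of the system matrix. The toy sizes below are arbitrary, and a real tomosynthesis operator would replace the random matrix:

```python
import numpy as np

# The measurement component of an object f is its projection onto the row
# space of the system matrix H (the part of f the system can "see").
rng = np.random.default_rng(1)
H = rng.normal(size=(40, 100))   # toy system: 40 measurements, 100 voxels
f = rng.normal(size=100)         # toy object

U, s, Vt = np.linalg.svd(H, full_matrices=False)
rank = np.sum(s > s[0] * 1e-10)
f_meas = Vt[:rank].T @ (Vt[:rank] @ f)   # measurement component
f_null = f - f_meas                      # null component, invisible to H

print(np.allclose(H @ f_null, 0.0))      # True: H cannot see f_null
```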
Ultrasonic technique for imaging tissue vibrations: preliminary results.
Sikdar, Siddhartha; Beach, Kirk W; Vaezy, Shahram; Kim, Yongmin
2005-02-01
We propose an ultrasound (US)-based technique for imaging vibrations in the blood vessel walls and surrounding tissue caused by eddies produced during flow through narrowed or punctured arteries. Our approach is to utilize the clutter signal, normally suppressed in conventional color flow imaging, to detect and characterize local tissue vibrations. We demonstrate the feasibility of visualizing the origin and extent of vibrations relative to the underlying anatomy and blood flow in real-time and their quantitative assessment, including measurements of the amplitude, frequency and spatial distribution. We present two signal-processing algorithms, one based on phase decomposition and the other based on spectral estimation using eigen decomposition for isolating vibrations from clutter, blood flow and noise using an ensemble of US echoes. In simulation studies, the computationally efficient phase-decomposition method achieved 96% sensitivity and 98% specificity for vibration detection and was robust to broadband vibrations. Somewhat higher sensitivity (98%) and specificity (99%) could be achieved using the more computationally intensive eigen decomposition-based algorithm. Vibration amplitudes as low as 1 μm were measured accurately in phantom experiments. Real-time tissue vibration imaging at typical color-flow frame rates was implemented on a software-programmable US system. Vibrations were studied in vivo in a stenosed femoral bypass vein graft in a human subject and in a punctured femoral artery and incised spleen in an animal model.
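A minimal sketch of eigen-decomposition-based clutter suppression on a synthetic slow-time ensemble; the sizes, frequencies, and amplitudes are illustrative, and the paper's full algorithms include detection logic not shown here:

```python
import numpy as np

def eigen_clutter_filter(ensemble, n_clutter=1):
    """Project out the dominant eigenvectors of the slow-time correlation
    matrix; strong, slowly varying clutter lives in that subspace."""
    # ensemble: array (n_samples, ensemble_len) of slow-time echoes
    R = ensemble.conj().T @ ensemble / ensemble.shape[0]
    w, V = np.linalg.eigh(R)                # eigenvalues ascending
    basis = V[:, -n_clutter:]               # dominant (clutter) subspace
    P = np.eye(R.shape[0]) - basis @ basis.conj().T
    return ensemble @ P

# Toy data: strong slow clutter + weak fast vibration + noise.
rng = np.random.default_rng(0)
n = np.arange(16)
clutter = 50.0 * np.exp(1j * 2 * np.pi * 0.01 * n)
vibration = 1.0 * np.exp(1j * 2 * np.pi * 0.30 * n)
ens = clutter + vibration + 0.1 * rng.normal(size=(100, 16))
filtered = eigen_clutter_filter(ens, n_clutter=1)
print(np.abs(np.fft.fft(filtered.mean(axis=0))).argmax())  # vibration bin
```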
DOE Office of Scientific and Technical Information (OSTI.GOV)
Behrens, R.; Minier, L.; Bulusu, S.
1998-12-31
The time-dependent, solid-phase thermal decomposition behavior of 2,4-dinitroimidazole (2,4-DNI) has been measured utilizing simultaneous thermogravimetric modulated beam mass spectrometry (STMBMS) methods. The decomposition products consist of gaseous and non-volatile polymeric products. The temporal behavior of the gas formation rates of the identified products indicate that the overall thermal decomposition process is complex. In isothermal experiments with 2,4-DNI in the solid phase, four distinguishing features are observed: (1) elevated rates of gas formation are observed during the early stages of the decomposition, which appear to be correlated to the presence of exogenous water in the sample; (2) this is followed by a period of relatively constant rates of gas formation; (3) next, the rates of gas formation accelerate, characteristic of an autocatalytic reaction; (4) finally, the 2,4-DNI is depleted and gaseous decomposition products continue to evolve at a decreasing rate. A physicochemical and mathematical model of the decomposition of 2,4-DNI has been developed and applied to the experimental results. The first generation of this model is described in this paper. Differences between the first generation of the model and the experimental data collected under different conditions suggest refinements for the next generation of the model.
Double Bounce Component in Cross-Polarimetric SAR from a New Scattering Target Decomposition
NASA Astrophysics Data System (ADS)
Hong, Sang-Hoon; Wdowinski, Shimon
2013-08-01
Common vegetation scattering theories assume that the Synthetic Aperture Radar (SAR) cross-polarization (cross-pol) signal represents solely volume scattering. We found this assumption incorrect based on SAR phase measurements acquired over the south Florida Everglades wetlands indicating that the cross-pol radar signal often samples the water surface beneath the vegetation. Based on these new observations, we propose that the cross-pol measurement consists of both volume scattering and double bounce components. The simplest multi-bounce scattering mechanism that generates a cross-pol signal occurs by rotated dihedrals. Thus, we use the rotated dihedral mechanism with a probability density function to revise some of the vegetation scattering theories and develop a three-component decomposition algorithm with single bounce, double bounce from both co-pol and cross-pol, and volume scattering components. We applied the new decomposition analysis to both urban and rural environments using Radarsat-2 quad-pol datasets. The decomposition of San Francisco's urban area shows higher double bounce scattering and reduced volume scattering compared to other common three-component decompositions. The decomposition of the rural Everglades area shows that the relations between volume and cross-pol double bounce depend on the vegetation density. The new decomposition can be useful for better understanding vegetation scattering behavior over various surfaces and for estimating above-ground biomass using SAR observations.
Decomposition odour profiling in the air and soil surrounding vertebrate carrion.
Forbes, Shari L; Perrault, Katelynn A
2014-01-01
Chemical profiling of decomposition odour is conducted in the environmental sciences to detect malodourous target sources in air, water or soil. More recently decomposition odour profiling has been employed in the forensic sciences to generate a profile of the volatile organic compounds (VOCs) produced by decomposed remains. The chemical profile of decomposition odour is still being debated with variations in the VOC profile attributed to the sample collection technique, method of chemical analysis, and environment in which decomposition occurred. To date, little consideration has been given to the partitioning of odour between different matrices and the impact this has on developing an accurate VOC profile. The purpose of this research was to investigate the decomposition odour profile surrounding vertebrate carrion to determine how VOCs partition between soil and air. Four pig carcasses (Sus scrofa domesticus L.) were placed on a soil surface to decompose naturally and their odour profile monitored over a period of two months. Corresponding control sites were also monitored to determine the VOC profile of the surrounding environment. Samples were collected from the soil below and the air (headspace) above the decomposed remains using sorbent tubes and analysed using gas chromatography-mass spectrometry. A total of 249 compounds were identified but only 58 compounds were common to both air and soil samples. This study has demonstrated that soil and air samples produce distinct subsets of VOCs that contribute to the overall decomposition odour. Sample collection from only one matrix will reduce the likelihood of detecting the complete spectrum of VOCs, which further confounds the issue of determining a complete and accurate decomposition odour profile. Confirmation of this profile will enhance the performance of cadaver-detection dogs that are tasked with detecting decomposition odour in both soil and air to locate victim remains.
NASA Astrophysics Data System (ADS)
Wang, Pan-Pan; Yu, Qiang; Hu, Yong-Jun; Miao, Chang-Xin
2017-11-01
Current research in broken rotor bar (BRB) fault detection in induction motors is primarily focused on a high-frequency resolution analysis of the stator current. Compared with a discrete Fourier transformation, the parametric spectrum estimation technique has a higher frequency accuracy and resolution. However, the existing detection methods based on parametric spectrum estimation cannot realize online detection, owing to the large computational cost. To improve the efficiency of BRB fault detection, a new detection method based on the min-norm algorithm and least square estimation is proposed in this paper. First, the stator current is filtered using a band-pass filter and divided into short overlapped data windows. The min-norm algorithm is then applied to determine the frequencies of the fundamental and fault characteristic components within each overlapped data window. Next, based on the frequency values obtained, a model of the fault current signal is constructed. Subsequently, a linear least squares problem solved through singular value decomposition is designed to estimate the amplitudes and phases of the related components. Finally, the proposed method is applied to a simulated current and an actual motor; the results indicate that the method not only retains the frequency accuracy and resolution of the parametric spectrum estimation technique, but also reduces the computational cost sufficiently to make online detection feasible.
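The amplitude and phase estimation step can be sketched as a linear least squares fit solved through the SVD, assuming the component frequencies are already known (here hard-coded rather than estimated by the min-norm algorithm):

```python
import numpy as np

def ls_amplitudes(x, fs, freqs):
    """Fit amplitudes and phases of known-frequency sinusoids by linear
    least squares (np.linalg.lstsq uses an SVD-based solver)."""
    t = np.arange(len(x)) / fs
    # Design matrix: one cosine/sine pair per frequency component.
    M = np.column_stack([f(2 * np.pi * fr * t)
                         for fr in freqs for f in (np.cos, np.sin)])
    coef, *_ = np.linalg.lstsq(M, x, rcond=None)
    c, s = coef[0::2], coef[1::2]
    return np.hypot(c, s), np.arctan2(-s, c)   # amplitudes, phases

fs = 1000.0
t = np.arange(400) / fs
x = 10 * np.cos(2 * np.pi * 50 * t) + 0.3 * np.cos(2 * np.pi * 44 * t - 1.0)
amp, ph = ls_amplitudes(x, fs, [50.0, 44.0])
print(amp, ph)  # ~[10, 0.3] and ~[0, -1.0]
```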
Mechanical and Assembly Units of Viral Capsids Identified via Quasi-Rigid Domain Decomposition
Polles, Guido; Indelicato, Giuliana; Potestio, Raffaello; Cermelli, Paolo; Twarock, Reidun; Micheletti, Cristian
2013-01-01
Key steps in a viral life-cycle, such as self-assembly of a protective protein container or in some cases also subsequent maturation events, are governed by the interplay of physico-chemical mechanisms involving various spatial and temporal scales. These salient aspects of a viral life cycle are hence well described and rationalised from a mesoscopic perspective. Accordingly, various experimental and computational efforts have been directed towards identifying the fundamental building blocks that are instrumental for the mechanical response, or constitute the assembly units, of a few specific viral shells. Motivated by these earlier studies we introduce and apply a general and efficient computational scheme for identifying the stable domains of a given viral capsid. The method is based on elastic network models and quasi-rigid domain decomposition. It is first applied to a heterogeneous set of well-characterized viruses (CCMV, MS2, STNV, STMV) for which the known mechanical or assembly domains are correctly identified. The validated method is next applied to other viral particles such as L-A, Pariacoto and polyoma viruses, whose fundamental functional domains are still unknown or debated and for which we formulate verifiable predictions. The numerical code implementing the domain decomposition strategy is made freely available.
Effect of applied strain on phase separation of Fe-28 at.% Cr alloy: 3D phase-field simulation
NASA Astrophysics Data System (ADS)
Zhu, Lihui; Li, Yongsheng; Liu, Chengwei; Chen, Shi; Shi, Shujing; Jin, Shengshun
2018-04-01
A quantitative simulation of the separation of the α′ phase in Fe-28 at.% Cr alloy under applied strain is performed by utilizing a three-dimensional phase-field model. The elongation of the Cr-enriched α′ phase becomes obvious under applied uniaxial strain as the phase separation mechanism changes from spinodal decomposition at 700 K to nucleation and growth at 773 K. The applied strain shows a significant influence on the early stage of phase separation, and the influence is enlarged at elevated temperature. The steady-state coarsening governed by spinodal decomposition is substantially affected by the applied strain for low-temperature aging, while the influence is reduced as the temperature increases and the phase separation mechanism changes to nucleation and growth. The peak value of the particle size distribution (PSD) decreases, and the PSD at 773 K becomes broader under the applied strain. The simulation results for the separation of the Cr-enriched α′ phase under applied strain provide a further understanding of the strain effect on the phase separation of Fe-Cr alloys from the metastable to the spinodal region.
Prediction of Microstructure in HAZ of Welds
NASA Astrophysics Data System (ADS)
Khurana, S. P.; Yancey, R.; Jung, G.
2004-06-01
A modeling technique for predicting microstructure in the heat-affected zone (HAZ) of the hypoeutectoid steels is presented. This technique aims at predicting the phase fractions of ferrite, pearlite, bainite and martensite present in the HAZ after the cool down of a weld. The austenite formation kinetics and austenite decomposition kinetics are calculated using the transient temperature profile. The thermal profile in the weld and the HAZ is calculated by finite-element analysis (FEA). Two kinds of austenite decomposition models are included. The final phase fractions are predicted with the help of a continuous cooling transformation (CCT) diagram of the material. In the calculation of phase fractions either the experimental CCT diagram or the mathematically calculated CCT diagram can be used.
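Where the cooling curve is stepped through isothermal increments, transformation fractions are often evaluated with JMAK (Avrami) kinetics; a tiny illustration, with arbitrary rate constants that are not from this paper:

```python
import numpy as np

def jmak(t, k, n):
    """JMAK (Avrami) transformed phase fraction after time t at constant T."""
    return 1.0 - np.exp(-k * t**n)

# Arbitrary illustrative constants for one isothermal step.
print(jmak(np.array([1.0, 10.0, 100.0]), k=0.01, n=2.0))
```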
NASA Astrophysics Data System (ADS)
Wang, Xiaochen; Shao, Yun; Tian, Wei; Li, Kun
2018-06-01
This study explored different methodologies using a C-band RADARSAT-2 quad-polarized Synthetic Aperture Radar (SAR) image located over China's Yellow Sea to investigate polarization decomposition parameters for identifying mixed floating pollutants against a complex ocean background. It was found that no single polarization decomposition met the demand for detecting and classifying multiple floating pollutants, even with a quad-polarized SAR image. Furthermore, considering that Yamaguchi decomposition is sensitive to vegetation and the algal variety Enteromorpha prolifera, while H/A/alpha decomposition is sensitive to oil spills, a combination of parameters deduced from these two decompositions was proposed for marine environmental monitoring of mixed floating sea surface pollutants. A combination of volume scattering, surface scattering, and scattering entropy was the best indicator for classifying mixed floating pollutants against a complex ocean background. The Kappa coefficients for Enteromorpha prolifera and oil spills were 0.7514 and 0.8470, respectively, evidence that the composite polarized parameters based on quad-polarized SAR imagery proposed in this research are an effective monitoring method for complex marine pollution.
NASA Astrophysics Data System (ADS)
Lahmiri, Salim
2016-02-01
Multiresolution analysis techniques including continuous wavelet transform, empirical mode decomposition, and variational mode decomposition are tested in the context of interest rate next-day variation prediction. In particular, multiresolution analysis techniques are used to decompose interest rate actual variation and a feedforward neural network for training and prediction. The particle swarm optimization technique is adopted to optimize its initial weights. For comparison purposes, the autoregressive moving average model, the random walk process and the naive model are used as the main reference models. In order to show the feasibility of the presented hybrid models that combine multiresolution analysis techniques and a feedforward neural network optimized by particle swarm optimization, we used a set of six illustrative interest rates, including Moody's seasoned Aaa corporate bond yield, Moody's seasoned Baa corporate bond yield, 3-Month, 6-Month and 1-Year treasury bills, and the effective federal fund rate. The forecasting results show that all multiresolution-based prediction systems outperform the conventional reference models on the criteria of mean absolute error, mean absolute deviation, and root mean-squared error. Therefore, it is advantageous to adopt hybrid multiresolution techniques and soft computing models to forecast interest rate daily variations as they provide good forecasting performance.
A practical material decomposition method for x-ray dual spectral computed tomography.
Hu, Jingjing; Zhao, Xing
2016-03-17
X-ray dual spectral CT (DSCT) scans the measured object with two different x-ray spectra, and the acquired rawdata can be used to perform the material decomposition of the object. Direct calibration methods allow a faster material decomposition for DSCT and can be separated in two groups: image-based and rawdata-based. The image-based method is an approximative method, and beam hardening artifacts remain in the resulting material-selective images. The rawdata-based method generally obtains better image quality than the image-based method, but this method requires geometrically consistent rawdata. However, today's clinical dual energy CT scanners usually measure different rays for different energy spectra and acquire geometrically inconsistent rawdata sets, and thus cannot meet the requirement. This paper proposes a practical material decomposition method to perform rawdata-based material decomposition in the case of inconsistent measurement. This method first yields the desired consistent rawdata sets from the measured inconsistent rawdata sets, and then employs rawdata-based technique to perform material decomposition and reconstruct material-selective images. The proposed method was evaluated by use of simulated FORBILD thorax phantom rawdata and dental CT rawdata, and simulation results indicate that this method can produce highly quantitative DSCT images in the case of inconsistent DSCT measurements.
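For intuition, the image-based variant the paper contrasts with can be sketched as a per-pixel two-material solve under a simplified monochromatic model; the attenuation values below are assumed for illustration only, and the paper's rawdata-based method works in the projection domain instead:

```python
import numpy as np

# Each reconstructed pixel gives two attenuation values (low/high spectrum);
# with known basis-material attenuations the fractions follow from a 2x2 solve.
mu = np.array([[0.28, 0.45],      # material 1 at low/high energy (1/cm), assumed
               [0.20, 0.18]]).T   # material 2 at low/high energy, assumed
pixels = np.array([[0.26, 0.38],  # measured (mu_low, mu_high) per pixel
                   [0.22, 0.25]])
fractions = np.linalg.solve(mu, pixels.T).T
print(fractions)                  # per-pixel basis-material fractions
```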
Estimating the decomposition of predictive information in multivariate systems
NASA Astrophysics Data System (ADS)
Faes, Luca; Kugiumtzis, Dimitris; Nollo, Giandomenico; Jurysta, Fabrice; Marinazzo, Daniele
2015-03-01
In the study of complex systems from observed multivariate time series, insight is often sought into the evolution of one system, which can be explained by the information storage of the system and the information transfer from other interacting systems. We present a framework for the model-free estimation of information storage and information transfer computed as the terms composing the predictive information about the target of a multivariate dynamical process. The approach tackles the curse of dimensionality employing a nonuniform embedding scheme that selects progressively, among the past components of the multivariate process, only those that contribute most, in terms of conditional mutual information, to the present target process. Moreover, it computes all information-theoretic quantities using a nearest-neighbor technique designed to compensate the bias due to the different dimensionality of individual entropy terms. The resulting estimators of prediction entropy, storage entropy, transfer entropy, and partial transfer entropy are tested on simulations of coupled linear stochastic and nonlinear deterministic dynamic processes, demonstrating the superiority of the proposed approach over the traditional estimators based on uniform embedding. The framework is then applied to multivariate physiologic time series, resulting in physiologically well-interpretable information decompositions of cardiovascular and cardiorespiratory interactions during head-up tilt and of joint brain-heart dynamics during sleep.
2011-01-01
Background Singular value decomposition (SVD) is a powerful technique for information retrieval; it helps uncover relationships between elements that are not prima facie related. SVD was initially developed to reduce the time needed for information retrieval and analysis of very large data sets in the complex internet environment. Since information retrieval from large-scale genome and proteome data sets has a similar level of complexity, SVD-based methods could also facilitate data analysis in this research area. Results We found that SVD applied to amino acid sequences demonstrates relationships and provides a basis for producing clusters and cladograms, demonstrating evolutionary relatedness of species that correlates well with Linnaean taxonomy. The choice of a reasonable number of singular values is crucial for SVD-based studies. We found that fewer singular values are needed to produce biologically significant clusters when SVD is employed. Subsequently, we developed a method to determine the lowest number of singular values and fewest clusters needed to guarantee biological significance; this system was developed and validated by comparison with Linnaean taxonomic classification. Conclusions By using SVD, we can reduce uncertainty concerning the appropriate rank value necessary to perform accurate information retrieval analyses. In tests, clusters that we developed with SVD perfectly matched what was expected based on Linnaean taxonomy.
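A toy sketch of applying the SVD to amino acid sequence data: build a k-mer count matrix and use a low-rank representation for clustering. The sequences and rank are illustrative, not those of the study:

```python
import numpy as np
from itertools import product

def kmer_matrix(seqs, k=2, alphabet="ACDEFGHIKLMNPQRSTVWY"):
    """Rows = sequences, columns = k-mer counts over the amino acid alphabet."""
    kmers = {"".join(p): i for i, p in enumerate(product(alphabet, repeat=k))}
    M = np.zeros((len(seqs), len(kmers)))
    for r, s in enumerate(seqs):
        for j in range(len(s) - k + 1):
            if s[j:j + k] in kmers:
                M[r, kmers[s[j:j + k]]] += 1
    return M

# Toy sequences; real studies use full protein data sets.
seqs = ["MKTAYIAKQR", "MKTAYLAKQR", "GGSGGSGGSG", "GGSGGAGGSG"]
M = kmer_matrix(seqs)
U, S, Vt = np.linalg.svd(M, full_matrices=False)
coords = U[:, :2] * S[:2]     # rank-2 representation for clustering
print(np.round(coords, 2))    # the two sequence families separate
```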
Gas Evolution in Operating Lithium-Ion Batteries Studied In Situ by Neutron Imaging
Michalak, Barbara; Sommer, Heino; Mannes, David; Kaestner, Anders; Brezesinski, Torsten; Janek, Jürgen
2015-01-01
Gas generation as a result of electrolyte decomposition is one of the major issues of high-performance rechargeable batteries. Here, we report the direct observation of gassing in operating lithium-ion batteries using neutron imaging. This technique can be used to obtain qualitative as well as quantitative information by applying a new analysis approach. Special emphasis is placed on high voltage LiNi0.5Mn1.5O4/graphite pouch cells. Continuous gassing due to oxidation and reduction of electrolyte solvents is observed. To separate gas evolution reactions occurring on the anode from those associated with the cathode interface and to gain more insight into the gassing behavior of LiNi0.5Mn1.5O4/graphite cells, neutron experiments were also conducted systematically on other cathode/anode combinations, including LiFePO4/graphite, LiNi0.5Mn1.5O4/Li4Ti5O12 and LiFePO4/Li4Ti5O12. In addition, the data were supported by gas pressure measurements. The results suggest that metal dissolution in the electrolyte and decomposition products resulting from the high potentials adversely affect the gas generation, particularly in the first charge cycle (i.e., during graphite solid-electrolyte interface layer formation).
Graichen, Uwe; Eichardt, Roland; Fiedler, Patrique; Strohmeier, Daniel; Zanow, Frank; Haueisen, Jens
2015-01-01
Important requirements for the analysis of multichannel EEG data are efficient techniques for signal enhancement, signal decomposition, feature extraction, and dimensionality reduction. We propose a new approach for spatial harmonic analysis (SPHARA) that extends the classical spatial Fourier analysis to EEG sensors positioned non-uniformly on the surface of the head. The proposed method is based on the eigenanalysis of the discrete Laplace-Beltrami operator defined on a triangular mesh. We present several ways to discretize the continuous Laplace-Beltrami operator and compare the properties of the resulting basis functions computed using these discretization methods. We apply SPHARA to somatosensory evoked potential data from eleven volunteers and demonstrate the ability of the method for spatial data decomposition, dimensionality reduction and noise suppression. When employing SPHARA for dimensionality reduction, a significantly more compact representation can be achieved using the FEM approach, compared to the other discretization methods. Using FEM, to recover 95% and 99% of the total energy of the EEG data, on average only 35% and 58% of the coefficients are necessary. The capability of SPHARA for noise suppression is shown using artificial data. We conclude that SPHARA can be used for spatial harmonic analysis of multi-sensor data at arbitrary positions and can be utilized in a variety of other applications.
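A simplified sketch of the decomposition idea using an unweighted graph Laplacian of the sensor mesh; the paper instead discretizes the Laplace-Beltrami operator (e.g. by FEM), so this toy version only conveys the structure of the method:

```python
import numpy as np

def sphara_basis(n_sensors, triangles):
    """Eigenbasis of a simplified (unweighted graph) Laplacian built from
    the triangular sensor mesh; columns of V are spatial harmonics."""
    A = np.zeros((n_sensors, n_sensors))
    for i, j, k in triangles:
        for a, b in ((i, j), (j, k), (k, i)):
            A[a, b] = A[b, a] = 1.0
    L = np.diag(A.sum(axis=1)) - A
    return np.linalg.eigh(L)   # eigenvalues ascending, eigenvectors as columns

# Toy mesh of 5 sensors; EEG data shaped (n_samples, n_sensors).
tris = [(0, 1, 2), (1, 2, 3), (2, 3, 4)]
w, V = sphara_basis(5, tris)
eeg = np.random.randn(100, 5)
coeffs = eeg @ V                          # spatial harmonic decomposition
low_pass = coeffs[:, :3] @ V[:, :3].T     # keep the 3 smoothest harmonics
print(w)
```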
Bian, Xihui; Li, Shujuan; Lin, Ligang; Tan, Xiaoyao; Fan, Qingjie; Li, Ming
2016-06-21
Accurate prediction of the model is fundamental to the successful analysis of complex samples. To utilize the abundant information embedded in the frequency and time domains, a novel regression model is presented for quantitative analysis of hydrocarbon contents in fuel oil samples. The proposed method, high and low frequency unfolded PLSR (HLUPLSR), integrates empirical mode decomposition (EMD) and an unfolding strategy with partial least squares regression (PLSR). In the proposed method, the original signals are first decomposed into a finite number of intrinsic mode functions (IMFs) and a residue by EMD. Second, the former high frequency IMFs are summed into a high frequency matrix, and the latter IMFs and the residue are summed into a low frequency matrix. Finally, the two matrices are unfolded into an extended matrix along the variable dimension, and the PLSR model is built between the extended matrix and the target values. Coupled with ultraviolet (UV) spectroscopy, HLUPLSR has been applied to determine hydrocarbon contents of light gas oil and diesel fuel samples. Compared with single PLSR and other signal processing techniques, the proposed method shows superior prediction ability and better model interpretation. Therefore, the HLUPLSR method provides a promising tool for quantitative analysis of complex samples. Copyright © 2016 Elsevier B.V. All rights reserved.
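A rough sketch of the HLUPLSR pipeline, under stated assumptions: synthetic spectra replace the UV data, the PyEMD package supplies the EMD step, scikit-learn supplies PLSR, and n_high (how many leading IMFs count as "high frequency") and the component count are arbitrary choices.

```python
# Minimal sketch of the EMD + unfolding + PLSR idea (illustrative data and settings).
import numpy as np
from PyEMD import EMD
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 200))     # stand-in for UV spectra (assumption)
y = X[:, 50:60].sum(axis=1) + 0.1 * rng.normal(size=20)

def unfold_high_low(x, n_high=2):
    imfs = EMD()(x)                # rows: IMFs ordered high to low frequency
    high = imfs[:n_high].sum(axis=0)      # high frequency part
    low = imfs[n_high:].sum(axis=0)       # remaining IMFs plus residue
    return np.concatenate([high, low])    # "unfold" along the variable axis

X_ext = np.vstack([unfold_high_low(x) for x in X])
pls = PLSRegression(n_components=5).fit(X_ext, y)
print(pls.score(X_ext, y))
```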
DNA and RNA profiling of excavated human remains with varying postmortem intervals.
van den Berge, M; Wiskerke, D; Gerretsen, R R R; Tabak, J; Sijen, T
2016-11-01
When postmortem intervals (PMIs) increase, such as with longer burial times, human remains suffer increasingly from the taphonomic effects of decomposition processes such as autolysis and putrefaction. In this study, various DNA analysis techniques and a messenger RNA (mRNA) profiling method were applied to examine for trends in nucleic acid degradation and the postmortem interval. The DNA analysis techniques include highly sensitive DNA quantitation (with and without degradation index), standard and low template STR profiling, insertion and null alleles (INNUL) of retrotransposable elements typing, and mitochondrial DNA profiling. The mRNA profiling system used targets genes with tissue-specific expression for seven human organs, as reported by Lindenbergh et al. (Int J Legal Med 127:891-900), and has been applied to forensic evidentiary traces but not to excavated tissues. The techniques were applied to a total of 81 brain, lung, liver, skeletal muscle, heart, kidney and skin samples obtained from 19 excavated graves with burial times ranging from 4 to 42 years. Results show that brain and heart are the organs in which both DNA and RNA remain remarkably stable, notwithstanding long PMIs. The other organ tissues either show poor overall profiling results or vary in DNA and RNA profiling success, with sometimes DNA and other times RNA profiling being more successful. No straightforward relations were observed between nucleic acid profiling results and the PMI. This study shows that not only DNA but also RNA molecules can be remarkably stable and used for profiling of long-buried human remains, which supports forensic applications. The insight that brain and heart tissues tend to provide the best profiling results may change sampling policies in identification cases of degraded cadavers.
NASA Astrophysics Data System (ADS)
Muñoz-Rojas, Miriam; Luna-Ramos, Lourdes; Oyonarte, Cecilio; Sole Benet, Albert
2017-04-01
Water availability plays a fundamental part in controlling biotic processes in arid ecosystems. However, recent evidence suggests that other decisive drivers take part in these processes. Despite low annual rainfall and microbial activity, unexplained high rates of litter decomposition, net nitrogen mineralization, soil enzymatic activity and carbon turnover have been observed in arid ecosystems. These observations have been partly explained by photodegradation, a process that consists of the breakdown of organic matter via solar (UV) radiation and that can increase decomposition rates and lead to changes in the balance of carbon and nutrients between plants, soil and atmosphere. A complete understanding of these mechanisms and their drivers in arid ecosystems remains a critical challenge for the scientific community at the global level. In this research, we conducted a multi-site field experiment to test the effects of photodegradation on the decomposition of organic amendments used in ecosystem restoration. The study was carried out over 12 months in two study areas: the Pilbara region in Western Australia (Southern Hemisphere) and the Cabo de Gata Nijar Natural Park, South Spain (Northern Hemisphere). At both sites, four treatments were applied in replicated plots (1x1 m, n=4): a control (C) with no soil amendment; organic amendment covering the soil surface (AS); organic amendment incorporated into the soil (AI); and a combination of both techniques, covering the surface and incorporated into the soil (AS-AI). Different organic amendments (native mulch versus compost) and soil substrates were used at each site according to local practices, but at both sites these were applied to increase soil organic matter up to 2%. At the two locations, a radiometer and a logger with soil temperature and soil moisture probes were installed to monitor UV radiation and soil conditions for the duration of the trial. Soil microbial activity, soil CO2 efflux, and the organic matter fractions (including total OC and hydro-soluble C) were measured repeatedly during the experiment. At the end of the experiment, levels of the soluble fraction of C, soil CO2 efflux and soil microbial activity were significantly (p < 0.05) higher in the surface-amended plots at both sites. These increases at the surface reflect a fast C decomposition process that can be directly related to UV radiation, evidencing the critical role of photodegradation in the decomposition of organic matter. These processes can be critical at global scales as they can contribute to forcing biogeochemical cycles; however, responses will vary depending on the type of substrate and organic amendment.
NASA Astrophysics Data System (ADS)
Abrokwah, K.; O'Reilly, A. M.
2017-12-01
Groundwater is an important resource that is extracted every day because of its invaluable use for domestic, industrial and agricultural purposes. The need for sustaining groundwater resources is clearly indicated by declining water levels and has motivated efforts to model and forecast groundwater levels accurately. In this study, spectral decomposition of climatic forcing time series was used to develop hybrid wavelet analysis (WA) and moving window average (MWA) artificial neural network (ANN) models. These techniques are explored by modeling historical groundwater levels in order to provide understanding of potential causes of the observed groundwater-level fluctuations. Selection of the appropriate decomposition level for WA and window size for MWA helps in understanding the important time scales of climatic forcing, such as rainfall, that influence water levels. The discrete wavelet transform (DWT) is used to decompose the input time-series data into various levels of approximation and detail wavelet coefficients, whilst the MWA acts as a low-pass signal-filtering technique for removing high-frequency signals from the input data. The variables used to develop and validate the models were daily average rainfall measurements from five National Oceanic and Atmospheric Administration (NOAA) weather stations and daily water-level measurements from two wells recorded from 1978 to 2008 in central Florida, USA. Using different decomposition levels and different window sizes, several WA-ANN and MWA-ANN models for simulating the water levels were created and their relative performances compared against each other. The WA-ANN models performed better than the corresponding MWA-ANN models; also, higher decomposition levels of the input signal by the DWT gave the best results. The results obtained show the applicability and feasibility of hybrid WA-ANN and MWA-ANN models for simulating daily water levels using only climatic forcing time series as model inputs.
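A minimal sketch of one common WA-ANN variant, under stated assumptions: a synthetic rainfall series replaces the NOAA data, PyWavelets supplies the DWT, a scikit-learn MLP stands in for the ANN, and the 'db4' wavelet and level=3 are illustrative choices rather than the paper's settings.

```python
# Minimal sketch of a hybrid wavelet-ANN model (illustrative data and settings).
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
n = 1024
rain = rng.gamma(shape=0.5, scale=2.0, size=n)            # synthetic daily rainfall
level = np.convolve(rain, np.ones(60) / 60, mode="same")  # sluggish "water level"

# Decompose rainfall and rebuild one subseries per decomposition level.
coeffs = pywt.wavedec(rain, "db4", level=3)
subseries = []
for i in range(len(coeffs)):
    keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
    subseries.append(pywt.waverec(keep, "db4")[:n])
X = np.column_stack(subseries)                            # ANN inputs

ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
ann.fit(X[:800], level[:800])
print(ann.score(X[800:], level[800:]))                    # held-out skill
```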
siGnum: graphical user interface for EMG signal analysis.
Kaur, Manvinder; Mathur, Shilpi; Bhatia, Dinesh; Verma, Suresh
2015-01-01
Electromyography (EMG) signals, which represent the electrical activity of muscles, can be used for various clinical and biomedical applications. These are complicated and highly varying signals that depend on the anatomical location and physiological properties of the muscles. EMG signals acquired from the muscles require advanced methods for detection, decomposition and processing. This paper proposes a novel Graphical User Interface (GUI), siGnum, developed in MATLAB, that applies efficient and effective techniques to process raw EMG signals and decompose them in a simpler manner. It can be used independently of the MATLAB software by employing a deploy tool. This would enable researchers to gain a good understanding of EMG signals and their analysis procedures, which can be utilized for more powerful, flexible and efficient applications in the near future.
Thermal analysis applied to irradiated propolis
NASA Astrophysics Data System (ADS)
Matsuda, Andrea Harumi; Machado, Luci Brocardo; del Mastro, Nélida Lucia
2002-03-01
Propolis is a resinous hive product collected by bees. Raw propolis requires a decontamination procedure, and irradiation appears to be a promising technique for this purpose. The valuable properties of propolis for the food and pharmaceutical industries have led to increasing interest in its technological behavior. Thermal analysis is a chemical analysis that gives information about changes on heating, of great importance for technological applications. Ground propolis samples were 60Co gamma irradiated with 0 and 10 kGy. Thermogravimetry curves showed a similar multi-stage decomposition pattern for both irradiated and unirradiated samples up to 600°C. Similarly, through differential scanning calorimetry, the melting points of irradiated and unirradiated samples were found to coincide. The results suggest that the irradiation process does not interfere with the thermal properties of propolis when irradiated up to 10 kGy.
A roadmap for bridging basic and applied research in forensic entomology.
Tomberlin, J K; Mohr, R; Benbow, M E; Tarone, A M; VanLaerhoven, S
2011-01-01
The National Research Council issued a report in 2009 that heavily criticized the forensic sciences. The report made several recommendations that if addressed would allow the forensic sciences to develop a stronger scientific foundation. We suggest a roadmap for decomposition ecology and forensic entomology hinging on a framework built on basic research concepts in ecology, evolution, and genetics. Unifying both basic and applied research fields under a common umbrella of terminology and structure would facilitate communication in the field and the production of scientific results. It would also help to identify novel research areas leading to a better understanding of principal underpinnings governing ecosystem structure, function, and evolution while increasing the accuracy of and ability to interpret entomological evidence collected from crime scenes. By following the proposed roadmap, a bridge can be built between basic and applied decomposition ecology research, culminating in science that could withstand the rigors of emerging legal and cultural expectations.
Senroy, Nilanjan [New Delhi, IN]; Suryanarayanan, Siddharth [Littleton, CO]
2011-03-15
A computer-implemented method of signal processing is provided. The method includes generating one or more masking signals based upon a computed Fourier transform of a received signal. The method further includes determining one or more intrinsic mode functions (IMFs) of the received signal by performing a masking-signal-based empirical mode decomposition (EMD) using the one or more masking signals.
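A minimal sketch, under stated assumptions, of the masking-signal EMD idea the patent describes: a masking sinusoid whose frequency is derived from the FFT peak of the input is added and subtracted before sifting, and the resulting first IMFs are averaged to reduce mode mixing. PyEMD supplies the EMD step; the mask amplitude, the 1.6x frequency factor, and the test signal are illustrative choices.

```python
# Minimal sketch of masking-signal-based EMD (illustrative settings).
import numpy as np
from PyEMD import EMD

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)  # test signal

# Build the masking signal from the dominant FFT component.
spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), 1 / fs)
f_peak = freqs[np.argmax(spec[1:]) + 1]            # skip the DC bin
mask = 1.0 * np.sin(2 * np.pi * 1.6 * f_peak * t)  # mask above the peak

emd = EMD()
imf_plus = emd(x + mask)[0]          # first IMF with the mask added
imf_minus = emd(x - mask)[0]         # first IMF with the mask subtracted
imf1 = 0.5 * (imf_plus + imf_minus)  # mask cancels; mode mixing is reduced
print(np.corrcoef(imf1, np.sin(2 * np.pi * 50 * t))[0, 1])
```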
A two-stage linear discriminant analysis via QR-decomposition.
Ye, Jieping; Li, Qi
2005-06-01
Linear Discriminant Analysis (LDA) is a well-known method for feature extraction and dimension reduction. It has been used widely in many applications involving high-dimensional data, such as image and text classification. An intrinsic limitation of classical LDA is the so-called singularity problem; that is, it fails when all scatter matrices are singular. Many LDA extensions were proposed in the past to overcome the singularity problem. Among these extensions, PCA+LDA, a two-stage method, received relatively more attention. In PCA+LDA, the LDA stage is preceded by an intermediate dimension reduction stage using Principal Component Analysis (PCA). Most previous LDA extensions are computationally expensive, and not scalable, due to the use of the Singular Value Decomposition or the Generalized Singular Value Decomposition. In this paper, we propose a two-stage LDA method, namely LDA/QR, which aims to overcome the singularity problem of classical LDA while achieving efficiency and scalability simultaneously. The key difference between LDA/QR and PCA+LDA lies in the first stage, where LDA/QR applies QR decomposition to a small matrix involving the class centroids, while PCA+LDA applies PCA to the total scatter matrix involving all training data points. We further justify the proposed algorithm by showing the relationship between LDA/QR and previous LDA methods. Extensive experiments on face images and text documents are presented to show the effectiveness of the proposed algorithm.
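A minimal sketch, under stated assumptions, of the two-stage LDA/QR idea: stage 1 projects the data onto an orthonormal basis of the class-centroid matrix obtained by QR decomposition, and stage 2 runs classical LDA in that small subspace, where the scatter matrices are no longer singular. The toy data and dimensions are illustrative.

```python
# Minimal sketch of LDA/QR (illustrative toy data, not the authors' implementation).
import numpy as np

rng = np.random.default_rng(3)
d, n_per, classes = 500, 10, 3                 # high-dimensional, few samples
X = np.vstack([rng.normal(loc=c, size=(n_per, d)) for c in range(classes)])
y = np.repeat(np.arange(classes), n_per)

# Stage 1: QR of the (d x k) centroid matrix instead of PCA of all the data.
C = np.stack([X[y == c].mean(axis=0) for c in range(classes)], axis=1)
Q, _ = np.linalg.qr(C)                         # d x k orthonormal basis
Z = X @ Q                                      # reduce to k dimensions

# Stage 2: classical LDA on the reduced data (scatter matrices are k x k).
mean = Z.mean(axis=0)
Sw = sum((Z[y == c] - Z[y == c].mean(0)).T @ (Z[y == c] - Z[y == c].mean(0))
         for c in range(classes))
Sb = sum(Z[y == c].shape[0] * np.outer(Z[y == c].mean(0) - mean,
                                       Z[y == c].mean(0) - mean)
         for c in range(classes))
evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
order = np.argsort(evals.real)[::-1][:classes - 1]
G = Q @ evecs[:, order].real                   # d x (k-1) discriminant transform
print(G.shape)
```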
2017-01-01
Soil fluxomics analysis can provide pivotal information for understanding soil biochemical pathways and their regulation, but direct measurement methods are rare. Here, we describe an approach to measure soil extracellular metabolite (amino sugar and amino acid) concentrations and fluxes based on a 15N isotope pool dilution technique via liquid chromatography and high-resolution mass spectrometry. We produced commercially unavailable 15N and 13C labeled amino sugars and amino acids by hydrolyzing peptidoglycan isolated from isotopically labeled bacterial biomass and used them as tracers (15N) and internal standards (13C). High-resolution (Orbitrap Exactive) MS with a resolution of 50 000 allowed us to separate different stable isotope labeled analogues across a large range of metabolites. The utilization of 13C internal standards greatly improved the accuracy and reliability of absolute quantification. We successfully applied this method to two types of soils and quantified the extracellular gross fluxes of 2 amino sugars, 18 amino acids, and 4 amino acid enantiomers. Influx and efflux rates similar to those of most amino acids were found for glucosamine, indicating that this amino sugar is released through peptidoglycan and chitin decomposition and serves as an important nitrogen source for soil microorganisms. d-Alanine and d-glutamic acid derived from peptidoglycan decomposition exhibited turnover rates similar to those of their l-enantiomers. This novel approach offers new strategies to advance our understanding of the production and transformation pathways of soil organic N metabolites, including the unknown contributions of peptidoglycan and chitin decomposition to soil organic N cycling. PMID:28776982
Sparse Tensor Decomposition for Haplotype Assembly of Diploids and Polyploids.
Hashemi, Abolfazl; Zhu, Banghua; Vikalo, Haris
2018-03-21
Haplotype assembly is the task of reconstructing haplotypes of an individual from a mixture of sequenced chromosome fragments. Haplotype information enables studies of the effects of genetic variations on an organism's phenotype. Most of the mathematical formulations of haplotype assembly are known to be NP-hard, and haplotype assembly becomes even more challenging as sequencing technology advances and the length of the paired-end reads and inserts increases. Assembly of haplotypes of polyploid organisms is considerably more difficult than in the case of diploids. Hence, scalable and accurate schemes with provable performance are desired for haplotype assembly of both diploid and polyploid organisms. We propose a framework that formulates haplotype assembly from sequencing data as a sparse tensor decomposition. We cast the problem as that of decomposing a tensor having special structural constraints and missing a large fraction of its entries into a product of two factors, U and [Formula: see text]; tensor [Formula: see text] reveals haplotype information while U is a sparse matrix encoding the origin of erroneous sequencing reads. An algorithm, AltHap, which reconstructs haplotypes of either diploid or polyploid organisms by iteratively solving this decomposition problem, is proposed. The performance and convergence properties of AltHap are theoretically analyzed and, in doing so, guarantees on the achievable minimum error correction scores and correct phasing rate are established. The developed framework is applicable to diploid, biallelic and polyallelic polyploid species. The code for AltHap is freely available from https://github.com/realabolfazl/AltHap . AltHap was tested in a number of different scenarios and was shown to compare favorably to state-of-the-art methods in applications to haplotype assembly of diploids, and significantly outperforms existing techniques when applied to haplotype assembly of polyploids.
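A heavily simplified sketch of the decomposition core underlying this kind of formulation, under stated assumptions: a partially observed matrix is completed as a product of two factors by alternating least squares over the observed entries only. Real haplotype assembly adds sparsity and discrete allele constraints that AltHap handles and this toy version does not.

```python
# Minimal sketch of factorizing a partially observed matrix (toy analogue, not AltHap).
import numpy as np

rng = np.random.default_rng(4)
m, n, k = 40, 30, 2                       # reads x sites, ploidy-like rank
U_true = rng.normal(size=(m, k))
V_true = rng.normal(size=(k, n))
R = U_true @ V_true
mask = rng.random((m, n)) < 0.3           # only 30% of entries observed

U = rng.normal(size=(m, k))
V = rng.normal(size=(k, n))
for _ in range(50):                       # alternating least squares
    for i in range(m):                    # update each row of U
        obs = mask[i]
        if obs.any():
            U[i] = np.linalg.lstsq(V[:, obs].T, R[i, obs], rcond=None)[0]
    for j in range(n):                    # update each column of V
        obs = mask[:, j]
        if obs.any():
            V[:, j] = np.linalg.lstsq(U[obs], R[obs, j], rcond=None)[0]

err = np.linalg.norm((U @ V - R)[mask]) / np.linalg.norm(R[mask])
print(f"relative error on observed entries: {err:.2e}")
```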
Attention trees and semantic paths
NASA Astrophysics Data System (ADS)
Giusti, Christian; Pieroni, Goffredo G.; Pieroni, Laura
2007-02-01
In the last few decades several techniques for image content extraction, often based on segmentation, have been proposed. It has been suggested that under the assumption of very general image content, segmentation becomes unstable and classification becomes unreliable. According to recent psychological theories, certain image regions attract the attention of human observers more than others and, generally, the image main meaning appears concentrated in those regions. Initially, regions attracting our attention are perceived as a whole and hypotheses on their content are formulated; successively the components of those regions are carefully analyzed and a more precise interpretation is reached. It is interesting to observe that an image decomposition process performed according to these psychological visual attention theories might present advantages with respect to a traditional segmentation approach. In this paper we propose an automatic procedure generating image decomposition based on the detection of visual attention regions. A new clustering algorithm taking advantage of the Delaunay-Voronoi diagrams for achieving the decomposition target is proposed. By applying that algorithm recursively, starting from the whole image, a transformation of the image into a tree of related meaningful regions is obtained (Attention Tree). Successively, a semantic interpretation of the leaf nodes is carried out by using a structure of Neural Networks (Neural Tree) assisted by a knowledge base (Ontology Net). Starting from leaf nodes, paths toward the root node across the Attention Tree are attempted. The task of the path consists in relating the semantics of each child-parent node pair and, consequently, in merging the corresponding image regions. The relationship detected in this way between two tree nodes generates, as a result, the extension of the interpreted image area through each step of the path. The construction of several Attention Trees has been performed and partial results will be shown.
López-Rodríguez, Patricia; Escot-Bocanegra, David; Fernández-Recio, Raúl; Bravo, Ignacio
2015-01-01
Radar high resolution range profiles are widely used among the target recognition community for the detection and identification of flying targets. In this paper, singular value decomposition is applied to extract the relevant information and to model each aircraft as a subspace. The identification algorithm is based on the angle between subspaces and takes place in a transformed domain. In order to have a wide database of radar signatures and evaluate the performance, simulated range profiles are used as the recognition database, while the test samples comprise data of actual range profiles collected in a measurement campaign. Thanks to the modeling of aircraft as subspaces, only the valuable information of each target is used in the recognition process. Thus, one of the main advantages of using singular value decomposition is that it helps to overcome the notable dissimilarities found in the shape and signal-to-noise ratio between actual and simulated profiles due to their difference in nature. Despite these differences, the recognition rates obtained with the algorithm are quite promising. PMID:25551484
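A minimal sketch, under stated assumptions, of subspace-based target recognition: each target's training profiles are compressed by SVD into an orthonormal subspace, and a test profile is assigned to the target whose subspace makes the smallest principal angle with it. Toy profiles stand in for simulated and measured radar data, and the rank r=3 is an arbitrary choice.

```python
# Minimal sketch of SVD subspace modeling plus angle-between-subspaces matching.
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(5)
n_range = 64

def target_subspace(profiles, r=3):
    """Orthonormal basis spanning the dominant SVD directions of the profiles."""
    U, _, _ = np.linalg.svd(profiles.T, full_matrices=False)
    return U[:, :r]

# Two synthetic target classes with different structure (assumption).
base_a, base_b = rng.normal(size=(2, n_range))
train_a = base_a + 0.1 * rng.normal(size=(20, n_range))
train_b = base_b + 0.1 * rng.normal(size=(20, n_range))
S = {"A": target_subspace(train_a), "B": target_subspace(train_b)}

test = base_b + 0.3 * rng.normal(size=n_range)   # noisy profile of target B
angles = {name: subspace_angles(U, test[:, None]).min() for name, U in S.items()}
print(min(angles, key=angles.get))               # smallest angle wins: "B"
```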
NASA Astrophysics Data System (ADS)
Pandey, Anil; Niwa, Syunta; Morii, Yoshinari; Ikezawa, Shunjiro
2012-10-01
In order to decompose CO2 and NOx [1], we have developed the large-flow atmospheric microwave plasma (LAMP) [2]. It is very important to apply it to industrial innovation, so we have studied applying the LAMP in motorcars. The characteristics of the developed LAMP are that the apparatus is inexpensive and the decomposition efficiencies of CO2 and NOx are high. A vertical configuration between the exhaust gas pipe and the waveguide was shown to be suitable [2]. The system was set up in the car body with a battery and an inverter. The battery is shared between the engine and the inverter. In automotive applications the flow is large, so the LAMP, with its merits of large flow capacity, highly efficient decomposition, and inexpensive apparatus, is well suited. [1] H. Barankova, L. Bardos, ISSP 2011, Kyoto. [2] S. Ikezawa, S. Parajulee, S. Sharma, A. Pandey, ISSP 2011, Kyoto (2011) pp. 28-31; S. Ikezawa, S. Niwa, Y. Morii, JJAP meeting 2012, March 16, Waseda U. (2012).
NASA Astrophysics Data System (ADS)
Bai, X. T.; Wu, Y. H.; Zhang, K.; Chen, C. Z.; Yan, H. P.
2017-12-01
This paper focuses mainly on the calculation and analysis of the radiation noise of the angular contact ball bearing applied to the ceramic motorized spindle. A dynamic model containing the main working conditions and structural parameters is established based on the dynamic theory of rolling bearings. The sub-source decomposition method is introduced for the calculation of the radiation noise of the bearing, and a comparative experiment is used to check the precision of the method. A comparison between the contributions of different components is then carried out in the frequency domain based on the sub-source decomposition method. The spectra of the radiation noise of different components under various rotation speeds are used as the basis for assessing the contribution of different eigenfrequencies to the radiation noise of the components, and the proportions of friction noise and impact noise are evaluated as well. The results of the research provide a theoretical basis for the calculation of bearing noise and offer a reference for the impact of different components on the radiation noise of the bearing under different rotation speeds.
SOI layout decomposition for double patterning lithography on high-performance computer platforms
NASA Astrophysics Data System (ADS)
Verstov, Vladimir; Zinchenko, Lyudmila; Makarchuk, Vladimir
2014-12-01
In the paper, silicon-on-insulator layout decomposition algorithms for double patterning lithography on high performance computing platforms are discussed. Our approach is based on the use of a contradiction graph and a modified concurrent breadth-first search algorithm. We evaluate our technique on the 45 nm Nangate Open Cell Library, including non-Manhattan geometry. Experimental results show that our soft computing algorithms decompose the layout successfully and that the minimal distance between polygons in the layout is increased.
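A minimal sketch, under stated assumptions, of double-patterning decomposition as graph 2-coloring: polygons closer than the lithography spacing form the edges of a conflict ("contradiction") graph, and a breadth-first search assigns alternating masks, reporting an odd cycle as an undecomposable spot. The coordinates and the spacing threshold are illustrative; the paper's concurrent BFS and non-Manhattan handling are not reproduced.

```python
# Minimal sketch of BFS-based two-mask layout coloring (toy layout).
from collections import deque

polygons = [(0, 0), (1, 0), (2, 0), (0, 1)]   # polygon centers (toy layout)
MIN_SPACE = 1.2                               # same-mask spacing rule (assumption)

def conflict(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5 < MIN_SPACE

n = len(polygons)
adj = [[j for j in range(n) if j != i and conflict(polygons[i], polygons[j])]
       for i in range(n)]

color = [None] * n
for start in range(n):                        # BFS over each connected component
    if color[start] is not None:
        continue
    color[start] = 0
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if color[v] is None:
                color[v] = 1 - color[u]       # alternate masks along edges
                queue.append(v)
            elif color[v] == color[u]:
                raise ValueError(f"odd cycle at polygons {u},{v}: needs a stitch")

print(color)   # mask assignment, e.g. [0, 1, 0, 1]
```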
Koopman Mode Decomposition Methods in Dynamic Stall: Reduced Order Modeling and Control
2015-11-10
… the flow phenomena by separating them into individual modes. The technique of Proper Orthogonal Decomposition (POD), see [Holmes: 1998], is a popular … Given sampled values h(k), k = 0, …, 2M−1, of the exponential sum: 1. Solve the following linear system … 2. Compute all zeros z_j ∈ D, j = 1, …, M, of the Prony polynomial, i.e., calculate all eigenvalues of the associated companion matrix, and form f_j = log z_j for j = 1, …, M, where log is the principal branch of the complex logarithm.
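The fragment above outlines a classical Prony-type step; a minimal sketch under stated assumptions follows: from 2M samples of a sum of M complex exponentials, a Hankel linear system yields the Prony polynomial, its roots are the companion-matrix eigenvalues, and the exponents are recovered as f_j = log(z_j). The exponents and amplitudes below are illustrative.

```python
# Minimal sketch of the Prony method for a sum of M complex exponentials.
import numpy as np

M = 3
f_true = np.array([-0.05 + 2.0j, -0.02 + 0.7j, -0.10 + 1.3j])  # exponents
c_true = np.array([1.0, 0.5, 2.0])                              # amplitudes
k = np.arange(2 * M)
h = (c_true[None, :] * np.exp(np.outer(k, f_true))).sum(axis=1)  # 2M samples

# 1. Solve the Hankel system H p = -h_tail for the Prony coefficients.
H = np.array([[h[i + j] for j in range(M)] for i in range(M)])
p = np.linalg.solve(H, -h[M:2 * M])

# 2. Roots of the Prony polynomial z^M + p_{M-1} z^{M-1} + ... + p_0,
#    i.e. the eigenvalues of its companion matrix; then f_j = log z_j.
z = np.roots(np.concatenate(([1.0], p[::-1])))
f_est = np.log(z)
print(np.sort_complex(f_est))   # matches f_true up to ordering
```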
Sparse Solution of Fiber Orientation Distribution Function by Diffusion Decomposition
Yeh, Fang-Cheng; Tseng, Wen-Yih Isaac
2013-01-01
Fiber orientation is the key information in diffusion tractography. Several deconvolution methods have been proposed to obtain fiber orientations by estimating a fiber orientation distribution function (ODF). However, the L2 regularization used in deconvolution often leads to false fibers that compromise the specificity of the results. To address this problem, we propose a method called diffusion decomposition, which obtains a sparse solution of fiber ODF by decomposing the diffusion ODF obtained from q-ball imaging (QBI), diffusion spectrum imaging (DSI), or generalized q-sampling imaging (GQI). A simulation study, a phantom study, and an in-vivo study were conducted to examine the performance of diffusion decomposition. The simulation study showed that diffusion decomposition was more accurate than both constrained spherical deconvolution and the ball-and-sticks model. The phantom study showed that the angular error of diffusion decomposition was significantly lower than those of constrained spherical deconvolution at 30° crossing and the ball-and-sticks model at 60° crossing. The in-vivo study showed that diffusion decomposition can be applied to QBI, DSI, or GQI, and the resolved fiber orientations were consistent regardless of the diffusion sampling schemes and diffusion reconstruction methods. The performance of diffusion decomposition was further demonstrated by resolving crossing fibers on a 30-direction QBI dataset and a 40-direction DSI dataset. In conclusion, diffusion decomposition can improve angular resolution and resolve crossing fibers in datasets with low SNR and substantially reduced number of diffusion encoding directions. These advantages may be valuable for human connectome studies and clinical research. PMID:24146772
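A heavily simplified 1-D analogue, under stated assumptions, of obtaining a sparse fiber ODF by decomposition: a measured orientation profile is expressed over a dictionary of single-fiber response functions at candidate orientations, and non-negative least squares yields a sparse set of fiber weights. Real diffusion decomposition works on the sphere; the angles and the response width are illustrative.

```python
# Minimal 1-D analogue of sparse ODF decomposition via NNLS (toy, not the paper's method).
import numpy as np
from scipy.optimize import nnls

angles = np.linspace(0, 180, 181)                    # candidate orientations (deg)

def response(center, width=15.0):
    """Single-fiber response centered at `center` (wrap-around in angle)."""
    d = np.minimum(np.abs(angles - center), 180 - np.abs(angles - center))
    return np.exp(-0.5 * (d / width) ** 2)

D = np.column_stack([response(a) for a in angles])   # response dictionary
odf = response(40) + 0.7 * response(100)             # two crossing fibers
odf += 0.02 * np.random.default_rng(7).normal(size=odf.size)

w, _ = nnls(D, odf)                                  # sparse non-negative weights
peaks = angles[w > 0.2 * w.max()]
print(peaks)                                         # clusters near 40 and 100 deg
```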
Hybrid spectral CT reconstruction
Clark, Darin P.
2017-01-01
Current photon counting x-ray detector (PCD) technology faces limitations associated with spectral fidelity and photon starvation. One strategy for addressing these limitations is to supplement PCD data with high-resolution, low-noise data acquired with an energy-integrating detector (EID). In this work, we propose an iterative, hybrid reconstruction technique which combines the spectral properties of PCD data with the resolution and signal-to-noise characteristics of EID data. Our hybrid reconstruction technique is based on an algebraic model of data fidelity which substitutes the EID data into the data fidelity term associated with the PCD reconstruction, resulting in a joint reconstruction problem. Within the split Bregman framework, these data fidelity constraints are minimized subject to additional constraints on spectral rank and on joint intensity-gradient sparsity measured between the reconstructions of the EID and PCD data. Following a derivation of the proposed technique, we apply it to the reconstruction of a digital phantom which contains realistic concentrations of iodine, barium, and calcium encountered in small-animal micro-CT. The results of this experiment suggest reliable separation and detection of iodine at concentrations ≥ 5 mg/ml and barium at concentrations ≥ 10 mg/ml in 2-mm features for EID and PCD data reconstructed with inherent spatial resolutions of 176 μm and 254 μm, respectively (point spread function, FWHM). Furthermore, hybrid reconstruction is demonstrated to enhance spatial resolution within material decomposition results and to improve low-contrast detectability by as much as 2.6 times relative to reconstruction with PCD data only. The parameters of the simulation experiment are based on an in vivo micro-CT experiment conducted in a mouse model of soft-tissue sarcoma. Material decomposition results produced from this in vivo data demonstrate the feasibility of distinguishing two K-edge contrast agents with a spectral separation on the order of the energy resolution of the PCD hardware. PMID:28683124
[Progress in Raman spectroscopic measurement of methane hydrate].
Xu, Feng; Zhu, Li-hua; Wu, Qiang; Xu, Long-jun
2009-09-01
Complex thermodynamic and kinetic problems are involved in methane hydrate formation and decomposition, and these problems are crucial to understanding the mechanisms of hydrate formation and decomposition. However, such information has been difficult to obtain accurately because methane hydrate is stable only under low-temperature, high-pressure conditions; only in recent years has methane hydrate been measured in situ using Raman spectroscopy. Raman spectroscopy, a non-destructive and non-invasive technique, is used to study the vibrational modes of molecules. Studies of methane hydrate using Raman spectroscopy have developed over the last decade. The Raman spectra of CH4 in the vapor phase and in the hydrate phase are presented in this paper. Progress in research on methane hydrate formation thermodynamics, formation kinetics, decomposition kinetics and decomposition mechanisms based on Raman spectroscopic measurements in the laboratory and the deep sea is reviewed. Formation thermodynamic studies, including in situ observation of the formation conditions of methane hydrate, analysis of structure, and determination of hydrate cage occupancy and hydration numbers using Raman spectroscopy, are emphasized. Regarding formation kinetics, research on the variation in hydrate cage amount and methane concentration in water during hydrate growth using Raman spectroscopy is also introduced. For methane hydrate decomposition, investigations of the decomposition mechanism, the variation of the cage occupancy ratio and the formulation of the decomposition rate in porous media are described. Important aspects of future hydrate research based on Raman spectroscopy are discussed.
NASA Astrophysics Data System (ADS)
Wang, Lynn T.-N.; Madhavan, Sriram
2018-03-01
A pattern matching and rule-based polygon clustering methodology with DFM scoring is proposed to detect decomposition-induced manufacturability detractors and fix the layout designs prior to manufacturing. A pattern matcher scans the layout for pre-characterized patterns from a library. If a pattern is detected, rule-based clustering identifies the neighboring polygons that interact with those captured by the pattern. Then, DFM scores are computed for the possible layout fixes; the fix with the best score is applied. The proposed methodology was applied to the metal 2 layer of two 20nm products with a chip area of 11 mm2. All the hotspots were resolved, and the number of DFM spacing violations decreased by 7-15%.
Concrete Condition Assessment Using Impact-Echo Method and Extreme Learning Machines
Zhang, Jing-Kui; Yan, Weizhong; Cui, De-Mi
2016-01-01
The impact-echo (IE) method is a popular non-destructive testing (NDT) technique widely used for measuring the thickness of plate-like structures and for detecting certain defects inside concrete elements or structures. However, the IE method is not effective for full condition assessment (i.e., defect detection, defect diagnosis, defect sizing and location), because the simple frequency spectrum analysis involved in the existing IE method is not sufficient to capture the IE signal patterns associated with different conditions. In this paper, we attempt to enhance the IE technique and enable it for full condition assessment of concrete elements by introducing advanced machine learning techniques for performing comprehensive analysis and pattern recognition of IE signals. Specifically, we use wavelet decomposition for extracting signatures or features out of the raw IE signals and apply extreme learning machine, one of the recently developed machine learning techniques, as classification models for full condition assessment. To validate the capabilities of the proposed method, we build a number of specimens with various types, sizes, and locations of defects and perform IE testing on these specimens in a lab environment. Based on analysis of the collected IE signals using the proposed machine learning based IE method, we demonstrate that the proposed method is effective in performing full condition assessment of concrete elements or structures. PMID:27023563
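A minimal sketch, under stated assumptions, of the processing chain this abstract describes: wavelet-decomposition energies of an IE signal serve as features, classified by an extreme learning machine (a fixed random hidden layer with a least-squares output readout). Synthetic ring-down signals stand in for real impact-echo recordings; the 'db4' wavelet and layer sizes are illustrative.

```python
# Minimal sketch of wavelet features + extreme learning machine (illustrative data).
import numpy as np
import pywt

rng = np.random.default_rng(8)

def ie_signal(defect):
    """Toy IE signal: defective specimens ring at a lower frequency."""
    t = np.linspace(0, 1, 512)
    f = 30 if defect else 60
    return np.sin(2 * np.pi * f * t) * np.exp(-4 * t) + 0.1 * rng.normal(size=512)

def wavelet_energy(x):
    return np.array([np.sum(c ** 2) for c in pywt.wavedec(x, "db4", level=5)])

X = np.vstack([wavelet_energy(ie_signal(d)) for d in ([0] * 50 + [1] * 50)])
y = np.array([0] * 50 + [1] * 50)
X = (X - X.mean(axis=0)) / X.std(axis=0)               # standardize features

# Extreme learning machine: random hidden layer, solve only the output weights.
W = rng.normal(size=(X.shape[1], 40))                  # fixed random input weights
H = np.tanh(X @ W + rng.normal(size=40))               # hidden activations
beta = np.linalg.pinv(H) @ np.eye(2)[y]                # least-squares readout
pred = (H @ beta).argmax(axis=1)
print((pred == y).mean())                              # training accuracy
```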
NASA Astrophysics Data System (ADS)
Kim, Eng-Chan; Cho, Jae-Hwan; Kim, Min-Hye; Kim, Ki-Hong; Choi, Cheon-Woong; Seok, Jong-min; Na, Kil-Ju; Han, Man-Seok
2013-03-01
This study was conducted on 20 patients who had undergone pedicle screw fixation between March and December 2010 to quantitatively compare a conventional fat suppression technique, CHESS (chemical shift selective suppression), and a new technique, IDEAL (iterative decomposition of water and fat with echo asymmetry and least squares estimation). The general efficacy and usefulness of the IDEAL technique were also evaluated. Fat-suppressed transverse-relaxation-weighted images and longitudinal-relaxation-weighted images were obtained before and after contrast injection using these two techniques with a 1.5T MR (magnetic resonance) scanner. The obtained images were analyzed for image distortion, susceptibility artifacts and homogeneous fat removal in the target region. The results showed that image distortion due to the susceptibility artifacts caused by implanted metal was lower in the images obtained using the IDEAL technique compared with those obtained using the CHESS technique. The results of a qualitative analysis also showed that, compared with the CHESS technique, fewer susceptibility artifacts and more homogeneous fat removal were found in the images obtained using the IDEAL technique in a comparative evaluation of the axial plane images before and after contrast injection. In summary, compared with the CHESS technique, the IDEAL technique showed a lower occurrence of susceptibility artifacts caused by metal and lower image distortion. In addition, more homogeneous fat removal was achieved with the IDEAL technique.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vincenti, H.; Vay, J. -L.
Due to discretization effects and truncation to finite domains, many electromagnetic simulations present non-physical modifications of Maxwell's equations in space that may generate spurious signals affecting the overall accuracy of the result. Such modifications for instance occur when Perfectly Matched Layers (PMLs) are used at simulation domain boundaries to simulate open media. Another example is the use of arbitrary order Maxwell solvers with domain decomposition techniques, which may under some conditions involve stencil truncations at subdomain boundaries, resulting in small spurious errors that do eventually build up. In each case, a careful evaluation of the characteristics and magnitude of the errors resulting from these approximations, and their impact at any frequency and angle, requires detailed analytical and numerical studies. To this end, we present a general analytical approach that enables the evaluation of numerical discretization errors of fully three-dimensional arbitrary order finite-difference Maxwell solvers, with arbitrary modification of the local stencil in the simulation domain. The analytical model is validated against simulations of the domain decomposition technique and PMLs, when these are used with very high-order Maxwell solvers, as well as in the infinite order limit of pseudo-spectral solvers. Results confirm that the new analytical approach enables exact predictions in each case. It also confirms that the domain decomposition technique can be used with very high-order Maxwell solvers and a reasonably low number of guard cells with negligible effects on the whole accuracy of the simulation.
Understanding neurodynamical systems via Fuzzy Symbolic Dynamics.
Dobosz, Krzysztof; Duch, Włodzisław
2010-05-01
Neurodynamical systems are characterized by a large number of signal streams, measuring activity of individual neurons, local field potentials, aggregated electrical (EEG) or magnetic potentials (MEG), oxygen use (fMRI) or activity of simulated neurons. Various basis set decomposition techniques are used to analyze such signals, trying to discover components that carry meaningful information, but these techniques tell us little about the global activity of the whole system. A novel technique called Fuzzy Symbolic Dynamics (FSD) is introduced to aid understanding of the multidimensional dynamical system's behavior. It is based on a fuzzy partitioning of the signal space that defines a non-linear mapping of the system's trajectory to the low-dimensional space of membership function activations. This allows for visualization of the trajectory, showing various aspects of observed signals that may be difficult to discover by looking at individual components, or to notice otherwise. FSD mapping can be applied to raw signals, transformed signals (for example, ICA components), or to signals defined in the time-frequency domain. To illustrate the method, two FSD visualizations are presented: a model system with artificial radial oscillatory sources, and the output layer (50 neurons) of a Respiratory Rhythm Generator (RRG) composed of 300 spiking neurons. Copyright © 2009 Elsevier Ltd. All rights reserved.
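A minimal sketch, under stated assumptions, of the FSD mapping: Gaussian membership functions partition the signal space, and the high-dimensional trajectory is mapped to the low-dimensional space of membership activations for visualization. The toy trajectory and the prototype placement are illustrative; in practice the prototypes are chosen or optimized.

```python
# Minimal sketch of a Fuzzy Symbolic Dynamics mapping (illustrative data).
import numpy as np

rng = np.random.default_rng(9)

# Toy 10-dimensional "neural" trajectory visiting two activity regions.
T, dim = 400, 10
centers = rng.normal(size=(2, dim))
states = np.repeat([0, 1, 0, 1], T // 4)
traj = centers[states] + 0.3 * rng.normal(size=(T, dim))

def membership(x, prototype, sigma=1.0):
    """Gaussian membership of point x in the fuzzy region around prototype."""
    return np.exp(-np.sum((x - prototype) ** 2) / (2 * sigma ** 2))

# Two membership functions give a 2-D image of the high-dimensional trajectory.
y = np.array([[membership(x, centers[0]), membership(x, centers[1])]
              for x in traj])
print(y[:3])   # each row: 2-D activation coordinates, ready for plotting
```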
Different techniques of multispectral data analysis for vegetation fraction retrieval
NASA Astrophysics Data System (ADS)
Kancheva, Rumiana; Georgiev, Georgi
2012-07-01
Vegetation monitoring is one of the most important applications of remote sensing technologies. With respect to farmlands, the assessment of crop condition constitutes the basis of monitoring growth, development, and yield processes. Plant condition is defined by a set of biometric variables, such as density, height, biomass amount, leaf area index, etc. The canopy cover fraction is closely related to these variables and is indicative of the growth process. At the same time, it is a defining factor of the spectral signatures of the soil-vegetation system. That is why spectral mixture decomposition is a primary objective in remotely sensed data processing and interpretation, specifically in agricultural applications. The actual usefulness of the applied methods depends on their prediction reliability. The goal of this paper is to present and compare different techniques for quantitative endmember extraction from the reflectance of soil-crop patterns. These techniques include: linear spectral unmixing, two-dimensional spectra analysis, spectral ratio analysis (vegetation indices), spectral derivative analysis (red edge position), and colorimetric analysis (tristimulus value sums, chromaticity coordinates and dominant wavelength). The objective is to reveal their potential, accuracy and robustness for plant fraction estimation from multispectral data. Regression relationships have been established between crop canopy cover and various spectral estimators.
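A minimal sketch, under stated assumptions, of the first technique listed, linear spectral unmixing: a pixel's reflectance is modeled as a mixture of soil and vegetation endmember spectra, and the cover fractions are recovered by bounded least squares. The 4-band endmember reflectances are illustrative values, not a real spectral library.

```python
# Minimal sketch of linear spectral unmixing for vegetation fraction (toy spectra).
import numpy as np
from scipy.optimize import lsq_linear

# Endmember reflectances in 4 bands, e.g. green/red/NIR1/NIR2 (assumption).
soil = np.array([0.12, 0.18, 0.25, 0.28])
veg = np.array([0.08, 0.05, 0.45, 0.50])
E = np.column_stack([soil, veg])

pixel = 0.35 * soil + 0.65 * veg + 0.005 * np.random.default_rng(10).normal(size=4)

# Fractions constrained to [0, 1]; sum-to-one enforced softly by an
# extra, heavily weighted equation row.
A = np.vstack([E, 1e3 * np.ones((1, 2))])
b = np.concatenate([pixel, [1e3]])
res = lsq_linear(A, b, bounds=(0.0, 1.0))
print(res.x)   # ~ [0.35, 0.65]: soil and vegetation cover fractions
```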
Improving the Design of Laboratory Worksheets.
ERIC Educational Resources Information Center
McDowell, E. T.; Wadding, R. E. L.
1985-01-01
Describes a technique to improve the adaptation, development, and utilization of laboratory worksheets. Two sample worksheets (on decomposition of ethanol and on solubility of potassium chloride) are included. (JN)
Influence of gamma-irradiation on the non-isothermal decomposition of calcium-gadolinium oxalate
NASA Astrophysics Data System (ADS)
Moharana, S. C.; Praharaj, J.; Bhatta, D.
Thermal decomposition of co-precipitated unirradiated and irradiated Ca-Gd oxalate has been studied using differential thermal analysis (DTA) and thermogravimetric (TG) techniques. The reaction occurs in two stages, corresponding to the decomposition of gadolinium oxalate (Gd-Ox) followed by that of calcium oxalate (Ca-Ox). The kinetic parameters for both stages are calculated using solid state reaction models and the Coats-Redfern equation. Co-precipitation as well as irradiation alters the DTA peak temperatures and the kinetic parameters of Ca-Ox. The decomposition of Gd-Ox follows the two-dimensional contracting area (R2) mechanism, while that of Ca-Ox follows the Avrami-Erofeev (A2) mechanism (n = 2); these mechanisms are also exhibited by the co-precipitated and irradiated samples. Co-precipitation decreases the energy of activation and the pre-exponential factor of the individual components, but the reverse phenomenon takes place upon irradiation of the co-precipitate. The mechanisms underlying these phenomena are explored.
A variance-decomposition approach to investigating multiscale habitat associations
Lawler, J.J.; Edwards, T.C.
2006-01-01
The recognition of the importance of spatial scale in ecology has led many researchers to take multiscale approaches to studying habitat associations. However, few of the studies that investigate habitat associations at multiple spatial scales have considered the potential effects of cross-scale correlations in measured habitat variables. When cross-scale correlations in such studies are strong, conclusions drawn about the relative strength of habitat associations at different spatial scales may be inaccurate. Here we adapt and demonstrate an analytical technique based on variance decomposition for quantifying the influence of cross-scale correlations on multiscale habitat associations. We used the technique to quantify the variation in nest-site locations of Red-naped Sapsuckers (Sphyrapicus nuchalis) and Northern Flickers (Colaptes auratus) associated with habitat descriptors at three spatial scales. We demonstrate how the method can be used to identify components of variation that are associated only with factors at a single spatial scale as well as shared components of variation that represent cross-scale correlations. Despite the fact that no explanatory variables in our models were highly correlated (r < 0.60), we found that shared components of variation reflecting cross-scale correlations accounted for roughly half of the deviance explained by the models. These results highlight the importance of both conducting habitat analyses at multiple spatial scales and of quantifying the effects of cross-scale correlations in such analyses. Given the limits of conventional analytical techniques, we recommend alternative methods, such as the variance-decomposition technique demonstrated here, for analyzing habitat associations at multiple spatial scales. © The Cooper Ornithological Society 2006.
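A minimal sketch, under stated assumptions, of two-scale variance partitioning: unique and shared components are obtained from the explained variance of models fit with each scale's predictors separately and together. The synthetic "local" and "landscape" variables share a common driver, so a large shared component emerges, mirroring the cross-scale correlation effect the abstract describes.

```python
# Minimal sketch of variance (deviance) partitioning across two spatial scales.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(11)
n = 300
shared = rng.normal(size=n)                    # cross-scale correlated driver
local = shared + 0.5 * rng.normal(size=n)      # local-scale habitat variable
landscape = shared + 0.5 * rng.normal(size=n)  # landscape-scale variable
y = local + landscape + rng.normal(size=n)     # e.g. a nest-site occurrence index

def r2(*cols):
    X = np.column_stack(cols)
    return LinearRegression().fit(X, y).score(X, y)

r2_local, r2_land, r2_both = r2(local), r2(landscape), r2(local, landscape)
unique_local = r2_both - r2_land               # variation only at the local scale
unique_land = r2_both - r2_local               # only at the landscape scale
shared_part = r2_both - unique_local - unique_land  # cross-scale correlation
print(f"unique local {unique_local:.2f}, unique landscape {unique_land:.2f}, "
      f"shared {shared_part:.2f}")
```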
NASA Astrophysics Data System (ADS)
Royle, Samuel H.; Montgomery, Wren; Kounaves, Samuel P.; Sephton, Mark A.
2017-12-01
Three Mars missions have analyzed the composition of surface samples using thermal extraction techniques. The temperatures of decomposition have been used as diagnostic information for the materials present. One compound of great current interest is perchlorate, a relatively recently discovered component of Mars' surface geochemistry that leads to deleterious effects on organic matter during thermal extraction. Knowledge of the thermal decomposition behavior of perchlorate salts is essential for mineral identification and possible avoidance of confounding interactions with organic matter. We have performed a series of experiments which reveal that the hydration state of magnesium perchlorate has a significant effect on decomposition temperature, with differing temperature releases of oxygen corresponding to different perchlorate hydration states (peak of O2 release shifts from 500 to 600°C as the proportion of the tetrahydrate form in the sample increases). Changes in crystallinity/crystal size may also have a secondary effect on the temperature of decomposition, and although these surface effects appear to be minor for our samples, further investigation may be warranted. A less than full appreciation of the hydration state of perchlorate salts during thermal extraction analyses could lead to misidentification of the number and the nature of perchlorate phases present.
Baldrian, Petr; López-Mondéjar, Rubén
2014-02-01
Molecular methods for the analysis of biomolecules have undergone rapid technological development in the last decade. The advent of next-generation sequencing methods and improvements in instrumental resolution enabled the analysis of complex transcriptome, proteome and metabolome data, as well as a detailed annotation of microbial genomes. The mechanisms of decomposition by model fungi have been described in unprecedented detail by the combination of genome sequencing, transcriptomics and proteomics. The increasing number of available genomes for fungi and bacteria shows that the genetic potential for decomposition of organic matter is widespread among taxonomically diverse microbial taxa, while expression studies document the importance of the regulation of expression in decomposition efficiency. Importantly, high-throughput methods of nucleic acid analysis used for the analysis of metagenomes and metatranscriptomes indicate the high diversity of decomposer communities in natural habitats and their taxonomic composition. Today, the metaproteomics of natural habitats is of interest. In combination with advanced analytical techniques to explore the products of decomposition and the accumulation of information on the genomes of environmentally relevant microorganisms, advanced methods in microbial ecophysiology should increase our understanding of the complex processes of organic matter transformation.
Navarro, Pedro J; Fernández-Isla, Carlos; Alcover, Pedro María; Suardíaz, Juan
2016-07-27
This paper presents a robust method for defect detection in textures, entropy-based automatic selection of the wavelet decomposition level (EADL), based on a wavelet reconstruction scheme and applicable to a wide variety of structural and statistical textures. Two main features are presented. The first is an original use of the normalized absolute function value (NABS), calculated from the wavelet coefficients derived at different decomposition levels, to identify textures in which the defect can be isolated by eliminating the texture pattern at the first decomposition level. The second is the use of Shannon's entropy, calculated over detail subimages, for automatic selection of the band for image reconstruction, which, unlike other techniques, such as those based on the co-occurrence matrix or on energy calculation, provides a lower decomposition level, thus avoiding excessive degradation of the image and allowing a more accurate defect segmentation. A metric analysis of the results of the proposed method with nine different thresholding algorithms determined that selecting the appropriate thresholding method is important to achieve optimum performance in defect detection. As a consequence, several different thresholding algorithms are proposed depending on the type of texture.
Error reduction in EMG signal decomposition
Kline, Joshua C.
2014-01-01
Decomposition of the electromyographic (EMG) signal into constituent action potentials and the identification of individual firing instances of each motor unit in the presence of ambient noise are inherently probabilistic processes, whether performed manually or with automated algorithms. Consequently, they are subject to errors. We set out to classify and reduce these errors by analyzing 1,061 motor-unit action-potential trains (MUAPTs), obtained by decomposing surface EMG (sEMG) signals recorded during human voluntary contractions. Decomposition errors were classified into two general categories: location errors representing variability in the temporal localization of each motor-unit firing instance and identification errors consisting of falsely detected or missed firing instances. To mitigate these errors, we developed an error-reduction algorithm that combines multiple decomposition estimates to determine a more probable estimate of motor-unit firing instances with fewer errors. The performance of the algorithm is governed by a trade-off between the yield of MUAPTs obtained above a given accuracy level and the time required to perform the decomposition. When applied to a set of sEMG signals synthesized from real MUAPTs, the identification error was reduced by an average of 1.78%, improving the accuracy to 97.0%, and the location error was reduced by an average of 1.66 ms. The error-reduction algorithm in this study is not limited to any specific decomposition strategy. Rather, we propose it be used for other decomposition methods, especially when analyzing precise motor-unit firing instances, as occurs when measuring synchronization. PMID:25210159
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.; Young, Katherine C.; Pritchard, Jocelyn I.; Adelman, Howard M.; Mantay, Wayne R.
1995-01-01
This paper describes an integrated aerodynamic/dynamic/structural (IADS) optimization procedure for helicopter rotor blades. The procedure combines performance, dynamics, and structural analyses with a general-purpose optimizer using multilevel decomposition techniques. At the upper level, the structure is defined in terms of global quantities (stiffness, mass, and average strains). At the lower level, the structure is defined in terms of local quantities (detailed dimensions of the blade structure and stresses). The IADS procedure provides an optimization technique that is compatible with industrial design practices in which the aerodynamic and dynamic designs are performed at a global level and the structural design is carried out at a detailed level with considerable dialog and compromise among the aerodynamic, dynamic, and structural groups. The IADS procedure is demonstrated for several examples.
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.; Young, Katherine C.; Pritchard, Jocelyn I.; Adelman, Howard M.; Mantay, Wayne R.
1994-01-01
This paper describes an integrated aerodynamic, dynamic, and structural (IADS) optimization procedure for helicopter rotor blades. The procedure combines performance, dynamics, and structural analyses with a general purpose optimizer using multilevel decomposition techniques. At the upper level, the structure is defined in terms of global quantities (stiffnesses, mass, and average strains). At the lower level, the structure is defined in terms of local quantities (detailed dimensions of the blade structure and stresses). The IADS procedure provides an optimization technique that is compatible with industrial design practices in which the aerodynamic and dynamic design is performed at a global level and the structural design is carried out at a detailed level with considerable dialogue and compromise among the aerodynamic, dynamic, and structural groups. The IADS procedure is demonstrated for several cases.
Accelerated decomposition techniques for large discounted Markov decision processes
NASA Astrophysics Data System (ADS)
Larach, Abdelhadi; Chafik, S.; Daoui, C.
2017-12-01
Many hierarchical techniques for solving large Markov decision processes (MDPs) are based on partitioning the state space into strongly connected components (SCCs) that can be classified into levels. In each level, smaller problems named restricted MDPs are solved, and these partial solutions are then combined to obtain the global solution. In this paper, we first propose a novel algorithm, a variant of Tarjan's algorithm, that simultaneously finds the SCCs and the levels they belong to. Second, a new definition of the restricted MDPs is presented to improve some hierarchical solutions of discounted MDPs using a value iteration (VI) algorithm based on a list of state-action successors. Finally, a robotic motion-planning example and experimental results are presented to illustrate the benefit of the proposed decomposition algorithms.
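A minimal sketch, under stated assumptions, of SCC-based decomposition of a discounted MDP: the SCCs of the transition graph are found (here with SciPy's strongly connected components routine standing in for the paper's Tarjan variant), ordered into levels by a topological sort of the condensation, and value iteration is run on each restricted MDP from the deepest level up. The 4-state, 2-action MDP is an illustrative toy.

```python
# Minimal sketch of hierarchical value iteration over SCC levels (toy MDP).
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

# P[a][s, s'] transition probabilities, R[s, a] rewards; 2 actions, 4 states.
P = np.array([
    [[0.9, 0.1, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0],
     [0.0, 0.0, 0.5, 0.5], [0.0, 0.0, 0.4, 0.6]],
    [[0.2, 0.8, 0.0, 0.0], [0.0, 0.3, 0.7, 0.0],
     [0.0, 0.0, 1.0, 0.0], [0.0, 0.0, 0.0, 1.0]],
])
R = np.array([[1.0, 0.0], [0.0, 2.0], [0.5, 0.5], [1.0, 0.0]])
gamma, n = 0.9, 4

reach = csr_matrix((P.sum(axis=0) > 0).astype(int))      # s -> s' edges
n_scc, label = connected_components(reach, connection="strong")

# Condensation edges and a topological order of the SCCs (Kahn's algorithm).
comp_edges = {(label[s], label[t]) for s, t in zip(*reach.nonzero())
              if label[s] != label[t]}
indeg = [0] * n_scc
for _, v in comp_edges:
    indeg[v] += 1
order = [u for u in range(n_scc) if indeg[u] == 0]
for u in order:
    for a, b in comp_edges:
        if a == u:
            indeg[b] -= 1
            if indeg[b] == 0:
                order.append(b)

V = np.zeros(n)
for scc in reversed(order):               # solve sink components first
    states = np.flatnonzero(label == scc)
    for _ in range(500):                  # value iteration on the restricted MDP
        for s in states:
            V[s] = max(R[s, a] + gamma * P[a, s] @ V for a in range(2))
print(V)
```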
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matuttis, Hans-Georg; Wang, Xiaoxing
Decomposition methods of the Suzuki-Trotter type of various orders have been derived in different fields. Applying them to both classical ordinary differential equations (ODEs) and quantum systems allows one to judge their effectiveness and gives new insights for many-body quantum mechanics, where reference data are scarce. Further, based on data for a 6 × 6 system, we conclude that sampling with sign (the minus-sign problem) is probably detrimental to the accuracy of fermionic simulations with determinant algorithms.
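A minimal sketch, under stated assumptions, of a second-order Suzuki-Trotter (Strang) decomposition applied to a classical ODE: the harmonic-oscillator flow exp(t(A+B)) is approximated by the symmetric product exp(tA/2) exp(tB) exp(tA/2), where A updates position and B updates momentum. The step size and time span are illustrative.

```python
# Minimal sketch of a second-order Strang splitting for q' = p, p' = -q.
import numpy as np

def strang_step(q, p, dt):
    q = q + 0.5 * dt * p        # half step of the "A" flow (drift)
    p = p - dt * q              # full step of the "B" flow (kick)
    q = q + 0.5 * dt * p        # second half step of "A"
    return q, p

q, p, dt = 1.0, 0.0, 0.01
for _ in range(int(2 * np.pi / dt)):   # integrate roughly one period
    q, p = strang_step(q, p, dt)

print(q, p)   # close to (1, 0); the error scales as O(dt^2)
```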
A Market-oriented Approach To Maximizing Product Benefits: Cases in U.S. Forest Products Industries
Vijay S. Reddy; Robert J. Bush; Ronen Roudik
1996-01-01
Conjoint analysis, a decompositional customer preference modelling technique, has seen little application to forest products. However, the technique provides useful information for marketing decisions by quantifying consumer preference functions for multiattribute product alternatives. The results of a conjoint analysis include the contribution of each attribute and...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sitaraman, Hariswaran; Grout, Ray
2015-10-30
Load balancing strategies for hybrid solvers that couple grid-based partial differential equation solution with particle tracking are presented in this paper. A typical Message Passing Interface (MPI) based parallelization of grid-based solves is done using a spatial domain decomposition, while particle tracking is primarily done using one of two techniques. One technique is to distribute the particles to the MPI ranks to whose grid they belong, while the other is to share the particles equally among all ranks, irrespective of their spatial location. The former technique provides spatial locality for field interpolation but cannot assure load balance in terms of the number of particles, which is achieved by the latter. The two techniques are compared for a case of particle tracking in a homogeneous isotropic turbulence box as well as a turbulent jet case. We performed a strong scaling study to more than 32,000 cores, which results in particle densities representative of anticipated exascale machines. The use of alternative implementations of MPI collectives and efficient load equalization strategies is studied to reduce data communication overheads.
Integrative omics analysis. A study based on Plasmodium falciparum mRNA and protein data.
Tomescu, Oana A; Mattanovich, Diethard; Thallinger, Gerhard G
2014-01-01
Technological improvements have shifted the focus from data generation to data analysis. The availability of large amounts of data from transcriptomics, proteomics and metabolomics experiments raises new questions concerning suitable integrative analysis methods. We compare three integrative analysis techniques (co-inertia analysis, generalized singular value decomposition and integrative biclustering) by applying them to gene and protein abundance data from the six life cycle stages of Plasmodium falciparum. Co-inertia analysis is an analysis method used to visualize and explore gene and protein data. The generalized singular value decomposition has shown its potential in the analysis of two transcriptome data sets. Integrative Biclustering applies biclustering to gene and protein data. Using CIA, we visualize the six life cycle stages of Plasmodium falciparum, as well as GO terms, in a 2D plane and interpret the spatial configuration. With GSVD, we decompose the transcriptomic and proteomic data sets into matrices with biologically meaningful interpretations and explore the processes captured by the data sets. IBC identifies groups of genes, proteins, GO terms and life cycle stages of Plasmodium falciparum. We show method-specific results as well as a network view of the life cycle stages based on the results common to all three methods. Additionally, by combining the results of the three methods, we create a three-fold validated network of life cycle stage specific GO terms: sporozoites are associated with transcription and transport; merozoites with entry into host cell as well as biosynthetic and metabolic processes; rings with oxidation-reduction processes; trophozoites with glycolysis and energy production; schizonts with antigenic variation and immune response; gametocytes with DNA packaging and mitochondrial transport. Furthermore, the network connectivity underlines the separation of the intraerythrocytic cycle from the gametocyte and sporozoite stages. Using integrative analysis techniques, we can integrate knowledge from different levels and obtain a wider view of the system under study. The overlap between method-specific and common results is considerable, even if the basic mathematical assumptions are very different. The three-fold validated network of life cycle stage characteristics of Plasmodium falciparum could identify a large amount of the known associations from literature in only one study. PMID:25033389
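A minimal sketch, under stated assumptions, of the core of co-inertia analysis, the first technique compared: after column-centering two paired data sets (such as gene and protein abundances over the same samples), the SVD of their cross-covariance yields co-inertia axes that maximize the covariance between the projected sample scores. The toy matrices are illustrative stand-ins for the omics data.

```python
# Minimal sketch of co-inertia analysis via SVD of the cross-covariance.
import numpy as np

rng = np.random.default_rng(12)
n = 6                                   # shared samples (e.g. life cycle stages)
latent = rng.normal(size=(n, 2))        # hidden biological signal
genes = latent @ rng.normal(size=(2, 100)) + 0.5 * rng.normal(size=(n, 100))
prots = latent @ rng.normal(size=(2, 40)) + 0.5 * rng.normal(size=(n, 40))

X = genes - genes.mean(axis=0)
Y = prots - prots.mean(axis=0)
U, s, Vt = np.linalg.svd(X.T @ Y / (n - 1), full_matrices=False)

scores_X = X @ U[:, :2]                 # sample scores from the gene side
scores_Y = Y @ Vt.T[:, :2]              # sample scores from the protein side
rv = [np.corrcoef(scores_X[:, k], scores_Y[:, k])[0, 1] for k in range(2)]
print(rv)                               # high correlations: shared structure
```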
Vergara, Victor M; Ulloa, Alvaro; Calhoun, Vince D; Boutte, David; Chen, Jiayu; Liu, Jingyu
2014-09-01
Multi-modal data analysis techniques, such as the Parallel Independent Component Analysis (pICA), are essential in neuroscience, medical imaging and genetic studies. The pICA algorithm allows the simultaneous decomposition of up to two data modalities, achieving better performance than separate ICA decompositions and enabling the discovery of links between modalities. However, advances in data acquisition techniques facilitate the collection of more than two data modalities from each subject. Examples of commonly measured modalities include genetic information, structural magnetic resonance imaging (MRI) and functional MRI. In order to take full advantage of the available data, this work extends the pICA approach to incorporate three modalities in one comprehensive analysis. Simulations demonstrate the three-way pICA performance in identifying pairwise links between modalities and estimating independent components which more closely resemble the true sources than components found by pICA or separate ICA analyses. In addition, the three-way pICA algorithm is applied to real experimental data obtained from a study that investigates genetic effects on alcohol dependence. Considered data modalities include functional MRI (contrast images during an alcohol exposure paradigm), gray matter concentration images from structural MRI and genetic single nucleotide polymorphisms (SNP). The three-way pICA approach identified links between a SNP component (pointing to brain function and mental disorder associated genes, including BDNF, GRIN2B and NRG1), a functional component related to increased activation in the precuneus area, and a gray matter component comprising part of the default mode network and the caudate. Although such findings need further verification, the simulation and in vivo results validate the three-way pICA algorithm presented here as a useful tool in biomedical data fusion applications. Copyright © 2014 Elsevier Inc. All rights reserved.
Ge, Ni-Na; Wei, Yong-Kai; Song, Zhen-Fei; Chen, Xiang-Rong; Ji, Guang-Fu; Zhao, Feng; Wei, Dong-Qing
2014-07-24
Molecular dynamics simulations in conjunction with the multiscale shock technique (MSST) are performed to study the initial chemical processes and the anisotropy of the shock sensitivity of condensed-phase HMX under shock loadings applied along the a, b, and c lattice vectors. A self-consistent charge density-functional tight-binding (SCC-DFTB) method was employed. Our results show a difference between lattice vectors a (or c) and lattice vector b in the response to a shock wave velocity of 11 km/s, which is investigated through the reaction temperature and the relative sliding rate between adjacent slipping planes. The responses along lattice vectors a and c are similar to each other, with reaction temperatures reaching 7000 K, but quite different from that along lattice vector b, where the reaction temperature reaches only 4000 K. Compared with the relative sliding rates for shock wave propagation along lattice vectors a (18 Å/ps) and c (21 Å/ps), the rate between adjacent slipping planes along lattice vector b is only 0.2 Å/ps. This small relative sliding rate means that the temperature and energy under shock loading increase more slowly, which is the main reason for the lower sensitivity under shock wave compression along lattice vector b. In addition, C-H bond dissociation is the primary pathway for HMX decomposition in the early stages under high shock loading from the various directions. Compared with the observations for shock velocities V(imp) = 10 and 11 km/s, the homolytic cleavage of the N-NO2 bond was clearly suppressed with increasing pressure.
NASA Astrophysics Data System (ADS)
Dumencu, A.; Horbaniuc, B.; Dumitraşcu, G.
2016-08-01
The analytical treatment of unsteady conduction heat transfer under actual conditions represents a very difficult (if not insurmountable) problem, due to the issues related to finding analytical solutions of the conduction heat transfer equation. Various techniques have been developed to overcome these difficulties, among which are the alternating-directions method and the decomposition method, both particularly suited to two-dimensional heat propagation. The paper applies both techniques in order to verify whether the results they provide are in good accordance. The studied case consists of a long hollow cylinder, with a time-dependent temperature field that varies in both the radial and the axial directions. The implicit technique is used in both methods and involves simultaneously solving a set of equations for all of the nodes at each time step, successively for each of the two directions. Gauss elimination is used to obtain the solution of the set, representing the nodal temperatures. The two techniques show very good agreement, and since the decomposition method is easier to implement and faster to run, it appears to be the more recommendable technique.
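For readers unfamiliar with the implicit technique, here is a minimal sketch of one implicit (backward Euler) sweep along a single direction, with the tridiagonal system solved by Gauss elimination (the Thomas algorithm), as is done per direction in the alternating-directions scheme; the grid size, diffusivity and fixed-temperature boundaries are illustrative assumptions.

    import numpy as np

    def implicit_sweep(T, alpha, dt, dx):
        # One backward-Euler step along one direction: solve the tridiagonal
        # system arising from the implicit discretization by Gauss elimination.
        n = len(T)
        r = alpha * dt / dx**2
        a = np.full(n, -r)          # sub-diagonal
        b = np.full(n, 1 + 2 * r)   # main diagonal
        c = np.full(n, -r)          # super-diagonal
        b[0] = b[-1] = 1.0          # fixed-temperature (Dirichlet) boundaries
        c[0] = a[-1] = 0.0
        d = T.astype(float).copy()
        for i in range(1, n):       # forward elimination
            m = a[i] / b[i - 1]
            b[i] -= m * c[i - 1]
            d[i] -= m * d[i - 1]
        x = np.empty(n)
        x[-1] = d[-1] / b[-1]
        for i in range(n - 2, -1, -1):   # back substitution
            x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
        return x

    T = np.linspace(100.0, 20.0, 51)                   # illustrative nodal temperatures
    T = implicit_sweep(T, alpha=1e-5, dt=0.1, dx=0.01)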
Radiolytic Synthesis of Pt-Particle/ABS Catalysts for H₂O₂ Decomposition in Contact Lens Cleaning.
Ohkubo, Yuji; Aoki, Tomonori; Seino, Satoshi; Mori, Osamu; Ito, Issaku; Endo, Katsuyoshi; Yamamura, Kazuya
2017-08-23
A container used in contact lens cleaning requires a Pt plating weight of 1.5 mg for H₂O₂ decomposition, although Pt is an expensive material. Techniques that decrease the amount of Pt are therefore needed. In this study, Pt nanoparticles instead of a Pt plating film were supported on a substrate of acrylonitrile-butadiene-styrene copolymer (ABS). This was achieved by the reduction of Pt ions in an aqueous solution containing the ABS substrate using high-energy electron-beam irradiation. The Pt nanoparticles supported on the ABS substrate (Pt-particle/ABS) had a size of 4-10 nm. The amount of Pt required for Pt-particle/ABS was 250 times less than that required for an ABS substrate covered with a Pt plating film (Pt-film/ABS). The catalytic activity for H₂O₂ decomposition was estimated by measuring the residual H₂O₂ concentration after immersing the catalyst for 360 min. The Pt-particle/ABS catalyst had a considerably higher specific catalytic activity for H₂O₂ decomposition than the Pt-film/ABS catalyst. In addition, sterilization performance was estimated from the initial rate of H₂O₂ decomposition over 60 min. The Pt-particle/ABS catalyst demonstrated a better sterilization performance than the Pt-film/ABS catalyst. The difference between Pt-particle/ABS and Pt-film/ABS was shown to reflect the size of the O₂ bubbles formed during H₂O₂ decomposition.
Modeling pollen time series using seasonal-trend decomposition procedure based on LOESS smoothing
NASA Astrophysics Data System (ADS)
Rojo, Jesús; Rivero, Rosario; Romero-Morte, Jorge; Fernández-González, Federico; Pérez-Badia, Rosa
2017-02-01
Analysis of airborne pollen concentrations provides valuable information on plant phenology and is thus a useful tool in agriculture (for predicting harvests in crops such as the olive and for deciding when to apply phytosanitary treatments) as well as in medicine and the environmental sciences. Variations in airborne pollen concentrations, moreover, are indicators of changing plant life cycles. By modeling pollen time series, we can not only identify the variables influencing pollen levels but also predict future pollen concentrations. In this study, airborne pollen time series were modeled using the seasonal-trend decomposition procedure based on LOESS (LOcally wEighted Scatterplot Smoothing), known as STL. The data series, daily Poaceae pollen concentrations over the period 2006-2014, was broken down into seasonal and residual (stochastic) components. The seasonal component was compared with data on Poaceae flowering phenology obtained by field sampling. Residuals were fitted to a model generated from daily temperature and rainfall values, and daily pollen concentrations, using partial least squares regression (PLSR). This method was then applied to predict daily pollen concentrations for 2014 (independent validation data) using results for the seasonal component of the time series and estimates of the residual component for the period 2006-2013. Correlation between predicted and observed values was r = 0.79 (correlation coefficient) for the pre-peak period (i.e., the period prior to the peak pollen concentration) and r = 0.63 for the post-peak period. Separate analysis of each of the components of the pollen data series enables the sources of variability to be identified more accurately than by analysis of the original non-decomposed data series, and for this reason, this procedure has proved to be a suitable technique for analyzing the main environmental factors influencing airborne pollen concentrations.
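A minimal sketch of the STL step, using the statsmodels implementation; the synthetic daily series below merely stands in for the Poaceae pollen data, and the residual component is what would feed the PLSR model.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.seasonal import STL

    idx = pd.date_range("2006-01-01", "2013-12-31", freq="D")
    t = np.arange(len(idx))
    rng = np.random.default_rng(0)
    pollen = np.clip(50 * np.sin(2 * np.pi * t / 365.25 - 1.5) + 40
                     + rng.normal(0, 10, len(idx)), 0, None)   # synthetic stand-in series
    series = pd.Series(pollen, index=idx)

    res = STL(series, period=365, robust=True).fit()
    seasonal = res.seasonal          # compared with flowering phenology in the study
    resid = res.resid                # would be fitted by PLSR on weather data
    trend = res.trend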
"Techniques for Teachers" Section
ERIC Educational Resources Information Center
Manchester, P.; Ed.
1976-01-01
Presents articles on: the measurement of fluoride levels in natural waters; the effect of sunlight on the rate of chlorine decomposition in a swimming pool; and curriculum development in Australian agricultural science. (MLH)
Paving the way to a full chip gate level double patterning application
NASA Astrophysics Data System (ADS)
Haffner, Henning; Meiring, Jason; Baum, Zachary; Halle, Scott
2007-10-01
Double patterning lithography processes can offer significant yield enhancement for challenging circuit designs. Many decomposition techniques (i.e., ways of dividing the layout design into first and second exposures) are possible, but the focus of this paper is on the use of a secondary "cut" mask to trim away extraneous features left from the first exposure. This approach has the advantage that each exposure only needs to support a subset of critical features (e.g. dense lines with the first exposure, isolated spaces with the second one). The extraneous features ("printing assist features" or PrAFs) are designed to support the process window of critical features, much like the role of subresolution assist features (SRAFs) in conventional processes. However, the printing nature of PrAFs leads to many more design options, and hence a greater process and decomposition parameter exploration space, than are available for SRAFs. A decomposition scheme using PrAFs was developed for a gate level process. A critical driver of the work was to deliver improved across-chip linewidth variation (ACLV) performance versus an optimized single exposure process while providing support for a larger range of critical features. A variety of PrAF techniques were investigated by simulation, with a PrAF scheme similar to standard SRAF rules being chosen as the optimal solution [1]. This paper discusses aspects of the code development for an automated PrAF generation and placement scheme and the subsequent decomposition of a layout into two mask levels. While PrAF placement and decomposition is straightforward for layouts with pitch and orientation restrictions, it becomes rather complex for unrestricted layout styles. Because this higher complexity yields more irregularly shaped PrAFs, mask making becomes another critical driver of the optimum placement and clean-up strategies. Examples are given of how those challenges are met or can be successfully circumvented. During the subsequent decomposition of the PrAF-enhanced layout into two independent mask levels, various geometric decomposition parameters have to be considered; for example, the removal of PrAFs has to be guaranteed by a minimum required overlap of the cut mask opening past any PrAF edge. Process assumptions such as CD tolerances and overlay, as well as inter-level relationship ground rules, need to be considered to successfully optimize the final decomposition scheme. Furthermore, simulation and experimental results regarding not only ACLV but also across-device linewidth variation (ADLV) are analyzed.
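As a toy illustration of one such geometric decomposition parameter, the check below verifies that a cut-mask opening extends past every PrAF edge by a minimum overlap; the rectangle representation and the margin value are assumptions for illustration only.

    def covers_with_margin(cut, praf, margin=2.0):
        # cut, praf: axis-aligned rectangles (x0, y0, x1, y1); the cut-mask
        # opening must extend past every PrAF edge by at least `margin`.
        return (cut[0] <= praf[0] - margin and cut[1] <= praf[1] - margin and
                cut[2] >= praf[2] + margin and cut[3] >= praf[3] + margin)

    print(covers_with_margin((0, 0, 10, 10), (3, 3, 7, 7)))   # True
    print(covers_with_margin((0, 0, 8, 8), (3, 3, 7, 7)))     # False: overlap too small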
Dynamic Load Balancing Based on Constrained K-D Tree Decomposition for Parallel Particle Tracing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jiang; Guo, Hanqi; Yuan, Xiaoru
Particle tracing is a fundamental technique in flow field data visualization. In this work, we present a novel dynamic load balancing method for parallel particle tracing. Specifically, we employ a constrained k-d tree decomposition approach to dynamically redistribute tasks among processes. Each process is initially assigned a regularly partitioned block along with a duplicated ghost layer, subject to the memory limit. During particle tracing, the k-d tree decomposition is performed dynamically by constraining the cutting planes to the overlap range of the duplicated data. This ensures that particles are reassigned to processes as evenly as possible while the newly assigned particles of each process still lie within its block. Results show the good load balance and high efficiency of our method.
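A schematic of the balancing idea, reduced to a recursive median split of particle positions (the essence of a k-d tree decomposition); the constraint that cutting planes must stay within the ghost-layer overlap is noted in a comment but not enforced in this toy version.

    import numpy as np

    def kdtree_partition(points, max_depth):
        # Recursive median split: each leaf receives a nearly equal particle count.
        # The paper additionally constrains each cutting plane to lie within the
        # ghost-layer overlap of the original blocks; that check is omitted here.
        def split(ids, depth):
            if depth == max_depth or len(ids) < 2:
                return [ids]
            axis = depth % points.shape[1]
            cut = np.median(points[ids, axis])
            left = ids[points[ids, axis] <= cut]
            right = ids[points[ids, axis] > cut]
            if len(left) == 0 or len(right) == 0:
                return [ids]
            return split(left, depth + 1) + split(right, depth + 1)
        return split(np.arange(len(points)), 0)

    pts = np.random.default_rng(1).random((10000, 3))
    leaves = kdtree_partition(pts, max_depth=3)       # 8 nearly equal leaves
    print([len(leaf) for leaf in leaves])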
Hydrogen Peroxide - Material Compatibility Studied by Microcalorimetry
NASA Technical Reports Server (NTRS)
Homung, Steven D.; Davis, Dennis D.; Baker, David; Popp, Christopher G.
2003-01-01
Environmental and toxicity concerns with current hypergolic propellants have led to a renewed interest in propellant grade hydrogen peroxide (HP) for propellant applications. Storability and stability have always been issues with HP. Contamination or contact of HP with metallic surfaces may cause decomposition, which can result in the evolution of heat and gas, leading to increased pressure or thermal hazards. The NASA Johnson Space Center White Sands Test Facility has developed a technique to monitor the decomposition of hydrogen peroxide at temperatures ranging from 25 to 60 C. Using isothermal microcalorimetry, we have measured decomposition rates at the picomole/s/g level, showing the catalytic effects of materials of construction. In this paper we will present the results of testing with Class 1 and 2 materials in 90 percent hydrogen peroxide.
Stability of chromium (III) sulfate in atmospheres containing oxygen and sulfur
NASA Technical Reports Server (NTRS)
Jacob, K. T.; Rao, B. D.; Nelson, H. G.
1978-01-01
The stability of chromium sulfate in the temperature range from 880 K to 1040 K was determined by employing a dynamic gas-solid equilibration technique. The solid chromium sulfate was equilibrated in a gas stream of controlled SO3 potential. Thermogravimetric and differential thermal analyses were used to follow the decomposition of chromium sulfate. X-ray diffraction analysis indicated that the decomposition product was crystalline Cr2O3 and that the mutual solubility between Cr2(SO4)3 and Cr2O3 was negligible. Over the temperature range investigated, the decomposition pressures were sufficiently high that chromium sulfate is not expected to form on commercial alloys containing chromium when exposed to gaseous environments containing oxygen and sulfur (such as those encountered in coal gasification).
Near-lossless multichannel EEG compression based on matrix and tensor decompositions.
Dauwels, Justin; Srinivasan, K; Reddy, M Ramasubba; Cichocki, Andrzej
2013-05-01
A novel near-lossless compression algorithm for multichannel electroencephalogram (MC-EEG) is proposed based on matrix/tensor decomposition models. MC-EEG is represented in suitable multiway (multidimensional) forms to efficiently exploit temporal and spatial correlations simultaneously. Several matrix/tensor decomposition models are analyzed in view of efficient decorrelation of the multiway forms of MC-EEG. A compression algorithm is built based on the principle of “lossy plus residual coding,” consisting of a matrix/tensor decomposition-based coder in the lossy layer followed by arithmetic coding in the residual layer. This approach guarantees a specifiable maximum absolute error between original and reconstructed signals. The compression algorithm is applied to three different scalp EEG datasets and an intracranial EEG dataset, each with different sampling rate and resolution. The proposed algorithm achieves attractive compression ratios compared to compressing individual channels separately. For similar compression ratios, the proposed algorithm achieves nearly fivefold lower average error compared to a similar wavelet-based volumetric MC-EEG compression algorithm.
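The following sketch illustrates the "lossy plus residual coding" principle with a rank-8 SVD standing in for the paper's matrix/tensor decomposition models: uniformly quantizing the residual with step q bounds the maximum absolute reconstruction error by q/2. The rank and step size are illustrative choices, not values from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    eeg = rng.normal(0.0, 1.0, (32, 1024))            # channels x samples (synthetic)

    # Lossy layer: rank-8 SVD approximation (stand-in for the paper's
    # matrix/tensor decomposition coder).
    U, s, Vt = np.linalg.svd(eeg, full_matrices=False)
    lossy = (U[:, :8] * s[:8]) @ Vt[:8]

    # Residual layer: uniform quantization with step q guarantees that the
    # reconstruction error never exceeds q/2 in absolute value.
    q = 0.05
    codes = np.round((eeg - lossy) / q).astype(np.int32)   # entropy-coded in practice
    recon = lossy + codes * q
    assert np.max(np.abs(eeg - recon)) <= q / 2 + 1e-12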
Guo, Feng; Cheng, Xin-lu; Zhang, Hong
2012-04-12
Whether the first step in the decomposition of nitromethane is proton dissociation or C-N bond scission is a controversial issue. We applied reactive force field (ReaxFF) molecular dynamics to probe the initial decomposition mechanisms of nitromethane. By comparing simulations with impact on (010) surfaces and without impact (heating only), we found that proton dissociation is the first step of the pyrolysis of nitromethane; the C-N bond dissociates on the same time scale in the impact simulations, whereas in the non-impact simulations C-N bond dissociation takes place at a later time. At the end of these simulations, a large number of clusters are formed. By analyzing the trajectories, we discuss the role of the hydrogen bond in the initial process of nitromethane decomposition, the intermediates observed early in the simulations, and the formation of clusters consisting of C-N-C-N chain/ring structures.
Lin, Mu-Chien; Kao, Jui-Chung
2016-04-15
Bioremediation is currently extensively employed in the elimination of coastal oil pollution, but it is not very effective, as the process takes several months to degrade oil. Among the components of oil, benzene is difficult to degrade owing to its chemical stability. This paper describes an experimental study on the decomposition of benzene by nanoscale titanium dioxide (TiO2) photocatalysis. The photocatalyst is illuminated with 360-nm ultraviolet light to generate peroxide ions, which results in complete decomposition of benzene, yielding CO2 and H2O. In this study, a nonwoven fabric was coated with the photocatalyst and benzene. Using the Double-Shot Py-GC system on the residual component, complete decomposition of the benzene was verified after 4 h of exposure to ultraviolet light. The method proposed in this study can be directly applied to the elimination of marine oil pollution. Further studies will be conducted on coastal oil pollution in situ. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Luo, Hongyuan; Wang, Deyun; Yue, Chenqiang; Liu, Yanling; Guo, Haixiang
2018-03-01
In this paper, a hybrid decomposition-ensemble learning paradigm combined with error correction is proposed for improving the forecast accuracy of daily PM10 concentration. The proposed learning paradigm consists of two sub-models: (1) a PM10 concentration forecasting model; (2) an error correction model. In the proposed model, fast ensemble empirical mode decomposition (FEEMD) and variational mode decomposition (VMD) are applied to decompose the original PM10 concentration series and the error sequence, respectively. The extreme learning machine (ELM) model optimized by the cuckoo search (CS) algorithm is utilized to forecast the components generated by FEEMD and VMD. To demonstrate the effectiveness and accuracy of the proposed model, two real-world PM10 concentration series, collected from Beijing and Harbin in China, are adopted for the empirical study. The results show that the proposed model performs remarkably better than all other considered models without error correction, which indicates its superior performance.
Rotational-path decomposition based recursive planning for spacecraft attitude reorientation
NASA Astrophysics Data System (ADS)
Xu, Rui; Wang, Hui; Xu, Wenming; Cui, Pingyuan; Zhu, Shengying
2018-02-01
Spacecraft reorientation is a common task in many space missions. With multiple pointing constraints, the constrained spacecraft reorientation planning problem is very difficult to solve. To deal with this problem, an efficient rotational-path decomposition based recursive planning (RDRP) method is proposed in this paper. A uniform pointing-constraint-ignored attitude rotation planning process is designed to solve all rotations without considering pointing constraints. The whole path is then checked node by node. If any pointing constraint is violated, the nearest critical increment approach is used to generate feasible alternative nodes in the process of rotational-path decomposition. As the planned path of each subdivision may still violate pointing constraints, multiple decompositions may be needed, and the reorientation planning is therefore designed in a recursive manner. Simulation results demonstrate the effectiveness of the proposed method, which has been successfully applied onboard the two SPARK microsatellites, developed by the Shanghai Engineering Center for Microsatellites and launched on 22 December 2016, to solve the constrained attitude reorientation planning problem.
Layout decomposition of self-aligned double patterning for 2D random logic patterning
NASA Astrophysics Data System (ADS)
Ban, Yongchan; Miloslavsky, Alex; Lucas, Kevin; Choi, Soo-Han; Park, Chul-Hong; Pan, David Z.
2011-04-01
Self-aligned double patterning (SADP) has been adopted as a promising solution for sub-30nm technology nodes due to its lower overlay error and better process tolerance. SADP is in production use for 1D dense patterns with good pitch control, such as NAND Flash memory applications, but it is still challenging to apply SADP to 2D random logic patterns. The favored type of SADP for complex logic interconnects is a two-mask approach using a core mask and a trim mask. In this paper, we first describe layout decomposition methods for spacer-type double patterning lithography, then report a type of SADP-compliant layout, and finally report SADP applications on a Samsung 22nm SRAM layout. For SADP decomposition, we propose several SADP-aware layout coloring algorithms and a method of generating lithography-friendly core mask patterns. Experimental results on 22nm node designs show that our proposed layout decomposition for SADP effectively decomposes any given layout.
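A minimal two-coloring sketch over a conflict graph, in the spirit of the SADP-aware coloring algorithms mentioned above: features closer than the minimum same-mask spacing must land on different masks, and an odd conflict cycle means the layout is not decomposable without modification. The graph encoding is an illustrative assumption, not the paper's data structure.

    from collections import deque

    def two_color(n, conflicts):
        # conflicts: pairs of features that are too close to share a mask.
        adj = [[] for _ in range(n)]
        for i, j in conflicts:
            adj[i].append(j)
            adj[j].append(i)
        color = [-1] * n
        for s in range(n):
            if color[s] != -1:
                continue
            color[s] = 0
            queue = deque([s])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if color[v] == -1:
                        color[v] = 1 - color[u]
                        queue.append(v)
                    elif color[v] == color[u]:
                        return None    # odd conflict cycle: not decomposable as-is
        return color

    print(two_color(4, [(0, 1), (1, 2), (2, 3)]))   # [0, 1, 0, 1]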
van Daalen, Marjolijn A; de Kat, Dorothée S; Oude Grotebevelsborg, Bernice F L; de Leeuwe, Roosje; Warnaar, Jeroen; Oostra, Roelof Jan; M Duijst-Heesters, Wilma L J
2017-03-01
This study aimed to develop an aquatic decomposition scoring (ADS) method and investigated the predictive value of this method in estimating the postmortem submersion interval (PMSI) of bodies recovered from the North Sea. This method, consisting of an ADS item list and a pictorial reference atlas, showed a high interobserver agreement (Krippendorff's alpha ≥ 0.93) and hence proved to be valid. The scoring method was applied to data collected from closed cases (cases in which the PMSI was known) concerning bodies recovered from the North Sea from 1990 to 2013. Thirty-eight cases met the inclusion criteria and were scored by quantifying the observed total aquatic decomposition score (TADS). Statistical analysis demonstrated that TADS accurately predicts the PMSI (p < 0.001), confirming that the decomposition process in the North Sea is strongly correlated with time. © 2017 American Academy of Forensic Sciences.
Pernin, Céline; Cortet, Jérôme; Joffre, Richard; Le Petit, Jean; Torre, Franck
2006-01-01
Effects of sewage sludge on litter mesofauna communities (Collembola and Acari) and cork oak (Quercus suber L.) leaf litter decomposition have been studied during 18 mo using litterbags in an in situ experimental forest firebreak in southeastern France. The sludge (2.74 t DM ha(-1) yr(-1)) was applied to fertilize and maintain a pasture created on the firebreak. Litterbag colonization had similar dynamics on both the control and fertilized plots and followed a typical Mediterranean pattern showing a greater abundance in spring and autumn and a lower abundance in summer. After 9 mo of litter colonization, Collembola and Acari, but mainly Oribatida, were more abundant on the sludge-fertilized plot. Leaf litter decomposition showed a similar pattern on both plots, but it was faster on the control plot. Furthermore, leaves from the fertilized plot were characterized by greater nitrogen content. Both chemical composition of leaves and sludges and the decomposition state of leaves have significantly affected the mesofauna community composition from each plot.
Singer, S S
1985-08-01
(Hydroxyalkyl)nitrosoureas and the related cyclic carbamates N-nitrosooxazolidones are potent carcinogens. The decompositions of four such compounds, 1-nitroso-1-(2-hydroxyethyl)urea (I), 3-nitrosooxazolid-2-one (II), 1-nitroso-1-(2-hydroxypropyl)urea (III), and 5-methyl-3-nitrosooxazolid-2-one (IV), in aqueous buffers at physiological pH were studied to determine if any obvious differences in decomposition pathways could account for the variety of tumors obtained from these four compounds. The products predicted by the literature mechanisms for nitrosourea and nitrosooxazolidone decompositions (which were derived from experiments at pH 10-12) were indeed the products formed, including glycols, active carbonyl compounds, epoxides, and, from the oxazolidones, cyclic carbonates. Furthermore, it was shown that in pH 6.4-7.4 buffer epoxides were stable reaction products. However, in the presence of hepatocytes, most of the epoxide was converted to glycol. The analytical methods developed were then applied to the analysis of the decomposition products of some related dialkylnitrosoureas, and similar results were obtained. The formation of chemically reactive secondary products and the possible relevance of these results to carcinogenesis studies are discussed.
Liu, Zhichao; Wu, Qiong; Zhu, Weihua; Xiao, Heming
2015-04-28
Density functional theory with dispersion correction (DFT-D) was employed to study the effects of vacancies and pressure on the structure and initial decomposition of crystalline 5-nitro-2,4-dihydro-3H-1,2,4-triazol-3-one (β-NTO), a high-energy insensitive explosive. A comparative analysis of the chemical behavior of NTO in the ideal bulk crystal and in vacancy-containing crystals under applied hydrostatic compression was carried out. Our calculated formation energies, vacancy interaction energies, electron density differences, and frontier orbitals reveal that the stability of NTO can be effectively manipulated by changing the molecular environment. Bimolecular hydrogen transfer is suggested to be a potential initial chemical reaction in the vacancy-containing NTO solid at 50 GPa, in contrast to C-NO2 bond dissociation, which initiates decomposition in the gas phase. The vacancy defects introduced into the ideal bulk NTO crystal produce localized sites where the initial decomposition is preferentially accelerated, promoting further decomposition. Our results may shed some light on the influence of the molecular environment on the initial decomposition pathways in molecular explosives.
Improving 3D Wavelet-Based Compression of Hyperspectral Images
NASA Technical Reports Server (NTRS)
Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh
2009-01-01
Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, "images" signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially-low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. The resulting data are converted to sign-magnitude form and compressed in a manner similar to that of a baseline hyperspectral-image-compression method. The mean values are encoded in the compressed bit stream and added back to the data at the appropriate decompression step. The overhead incurred by encoding the mean values, only a few bits per spectral band, is negligible with respect to the huge size of a typical hyperspectral data set. The other method is denoted modified decomposition. This method is so named because it involves a modified version of a commonly used multiresolution wavelet decomposition, known in the art as the 3D Mallat decomposition, in which (a) the first of multiple stages of a 3D wavelet transform is applied to the entire dataset and (b) subsequent stages are applied only to the horizontally-, vertically-, and spectrally-low-pass subband from the preceding stage. In the modified decomposition, in stages after the first, not only is the spatially-low-pass, spectrally-low-pass subband further decomposed, but also spatially-low-pass, spectrally-high-pass subbands are further decomposed spatially. Either method can be used alone to improve the quality of a reconstructed image (see figure). Alternatively, the two methods can be combined by first performing modified decomposition, then subtracting the mean values from spatial planes of spatially-low-pass subbands.
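A hedged sketch of the mean-subtraction idea using PyWavelets, with a single dwtn stage standing in for the full multiresolution decomposition described above; the subband key 'aaa' (all-low-pass) follows pywt's naming convention, and the data cube is synthetic.

    import numpy as np
    import pywt

    cube = np.random.default_rng(0).random((16, 64, 64))   # bands x rows x cols (synthetic)
    coeffs = pywt.dwtn(cube, wavelet="db2", axes=(0, 1, 2))
    lll = coeffs["aaa"]                      # spectrally- and spatially-low-pass subband

    plane_means = lll.mean(axis=(1, 2))      # one mean per spectral plane
    coeffs["aaa"] = lll - plane_means[:, None, None]
    # ...encode coeffs plus plane_means (a few bits per band); at decompression,
    # the means are added back before the inverse transform:
    coeffs["aaa"] = coeffs["aaa"] + plane_means[:, None, None]
    restored = pywt.idwtn(coeffs, wavelet="db2", axes=(0, 1, 2))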
Multi-scale pixel-based image fusion using multivariate empirical mode decomposition.
Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P; McDonald-Maier, Klaus D
2015-05-08
A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences.
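Since MEMD itself is not available in standard Python libraries, the sketch below assumes a hypothetical memd() routine returning IMFs of shape (n_imfs, n_images, H, W) and shows one plausible pixel-level fusion rule: at each scale, take the contribution from the image whose IMF has the largest local magnitude. This illustrates the idea of scale-aligned pixel-level fusion, not the paper's exact rule.

    import numpy as np

    def fuse_from_imfs(imfs):
        # imfs: (n_imfs, n_images, H, W), e.g. from a hypothetical memd() call.
        n_imfs, n_images, H, W = imfs.shape
        fused = np.zeros((H, W))
        for k in range(n_imfs):
            pick = np.abs(imfs[k]).argmax(axis=0)    # winning input image per pixel
            fused += np.take_along_axis(imfs[k], pick[None], axis=0)[0]
        return fused

    demo = np.random.default_rng(0).random((4, 2, 32, 32))  # stand-in for memd() output
    print(fuse_from_imfs(demo).shape)                        # (32, 32)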
Spinodal Decomposition in Functionally Graded Super Duplex Stainless Steel and Weld Metal
NASA Astrophysics Data System (ADS)
Hosseini, Vahid A.; Thuvander, Mattias; Wessman, Sten; Karlsson, Leif
2018-07-01
Low-temperature phase separations (T < 500 °C), resulting in changes in mechanical and corrosion properties, of super duplex stainless steel (SDSS) base and weld metals were investigated for short heat treatment times (0.5 to 600 minutes). A novel heat treatment technique, where a stationary arc produces a steady state temperature gradient for selected times, was employed to fabricate functionally graded materials. Three different initial material conditions including 2507 SDSS, remelted 2507 SDSS, and 2509 SDSS weld metal were investigated. Selective etching of ferrite significantly decreased in regions heat treated at 435 °C to 480 °C already after 3 minutes due to rapid phase separations. Atom probe tomography results revealed spinodal decomposition of ferrite and precipitation of Cu particles. Microhardness mapping showed that as-welded microstructure and/or higher Ni content accelerated decomposition. The arc heat treatment technique combined with microhardness mapping and electrolytical etching was found to be a successful approach to evaluate kinetics of low-temperature phase separations in SDSS, particularly at its earlier stages. A time-temperature transformation diagram was proposed showing the kinetics of 475 °C-embrittlement in 2507 SDSS.
Overlapping Community Detection based on Network Decomposition
NASA Astrophysics Data System (ADS)
Ding, Zhuanlian; Zhang, Xingyi; Sun, Dengdi; Luo, Bin
2016-04-01
Community detection in complex networks has become a vital step in understanding the structure and dynamics of networks in various fields. However, traditional node clustering and the relatively new link clustering methods have inherent drawbacks for discovering overlapping communities. Node clustering is inadequate to capture pervasive overlaps, while link clustering is often criticized for its high computational cost and ambiguous definition of communities. Overlapping community detection therefore remains a formidable challenge. In this work, we propose a new overlapping community detection algorithm based on network decomposition, called NDOCD. Specifically, NDOCD iteratively splits the network by removing all links in derived link communities, which are identified by a node clustering technique. The network decomposition reduces computation time, and the elimination of noise links improves the quality of the obtained communities. Moreover, we employ a node clustering technique rather than a link similarity measure to discover link communities, so NDOCD avoids an ambiguous definition of community and is less time-consuming. We test our approach on both synthetic and real-world networks. Results demonstrate the superior performance of our approach in both computation time and accuracy compared to state-of-the-art algorithms.
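An illustrative reading of the NDOCD loop using networkx, with greedy modularity clustering standing in for the paper's node-clustering step: derive link communities from node clusters, strip those links from the network, and repeat. The minimum community size and termination rule are assumptions for this sketch.

    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    def ndocd_like(G, min_size=3):
        G = G.copy()
        communities = []
        while G.number_of_edges() > 0:
            parts = [p for p in greedy_modularity_communities(G) if len(p) >= min_size]
            removed = 0
            for nodes in parts:
                links = list(G.subgraph(nodes).edges())
                if links:
                    communities.append(set(nodes))   # nodes may recur across passes: overlap
                    G.remove_edges_from(links)       # network decomposition step
                    removed += len(links)
            if removed == 0:
                break
        return communities

    print(ndocd_like(nx.karate_club_graph()))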
Performance Comparison of Superresolution Array Processing Algorithms. Revised
1998-06-15
... plane waves is finite is the MUSIC algorithm [16]. MUSIC, which denotes Multiple Signal Classification, is an extension of the method of Pisarenko [18]. ... MUSIC is but one member of a class of methods based upon the decomposition of covariance data into eigenvectors and eigenvalues. Such techniques ... techniques relative to the classical methods; however, results for MUSIC are included in this report. All of the techniques reviewed have application to ...
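For reference, a compact numpy sketch of the MUSIC idea as summarized above: eigendecompose the sample covariance, keep the noise-subspace eigenvectors, and scan steering vectors for peaks of the pseudospectrum. The array geometry, noise level and crude peak picking are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    M, d, n_src, snaps = 8, 0.5, 2, 200               # sensors, spacing in wavelengths
    true_angles = np.deg2rad([-20.0, 35.0])

    def steer(theta):
        # steering vector of a uniform linear array
        return np.exp(2j * np.pi * d * np.arange(M) * np.sin(theta))

    A = np.column_stack([steer(t) for t in true_angles])
    S = rng.normal(size=(n_src, snaps)) + 1j * rng.normal(size=(n_src, snaps))
    noise = 0.1 * (rng.normal(size=(M, snaps)) + 1j * rng.normal(size=(M, snaps)))
    X = A @ S + noise

    R = X @ X.conj().T / snaps                        # sample covariance
    w, V = np.linalg.eigh(R)                          # eigenvalues in ascending order
    En = V[:, : M - n_src]                            # noise-subspace eigenvectors

    grid = np.deg2rad(np.linspace(-90, 90, 721))
    P = np.array([1.0 / np.linalg.norm(En.conj().T @ steer(t)) ** 2 for t in grid])
    print(np.rad2deg(grid[np.argsort(P)[-2:]]))       # crude pick of the two largest peaks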
A Four-Stage Hybrid Model for Hydrological Time Series Forecasting
Di, Chongli; Yang, Xiaohua; Wang, Xiaochao
2014-01-01
Hydrological time series forecasting remains a difficult task due to its complicated nonlinear, non-stationary and multi-scale characteristics. To solve this difficulty and improve the prediction accuracy, a novel four-stage hybrid model is proposed for hydrological time series forecasting based on the principle of ‘denoising, decomposition and ensemble’. The proposed model has four stages, i.e., denoising, decomposition, components prediction and ensemble. In the denoising stage, the empirical mode decomposition (EMD) method is utilized to reduce the noises in the hydrological time series. Then, an improved method of EMD, the ensemble empirical mode decomposition (EEMD), is applied to decompose the denoised series into a number of intrinsic mode function (IMF) components and one residual component. Next, the radial basis function neural network (RBFNN) is adopted to predict the trend of all of the components obtained in the decomposition stage. In the final ensemble prediction stage, the forecasting results of all of the IMF and residual components obtained in the third stage are combined to generate the final prediction results, using a linear neural network (LNN) model. For illustration and verification, six hydrological cases with different characteristics are used to test the effectiveness of the proposed model. The proposed hybrid model performs better than conventional single models, the hybrid models without denoising or decomposition and the hybrid models based on other methods, such as the wavelet analysis (WA)-based hybrid models. In addition, the denoising and decomposition strategies decrease the complexity of the series and reduce the difficulties of the forecasting. With its effective denoising and accurate decomposition ability, high prediction precision and wide applicability, the new model is very promising for complex time series forecasting. This new forecast model is an extension of nonlinear prediction models. PMID:25111782
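A skeleton of the 'decomposition and ensemble' core under stated assumptions: EEMD from the PyEMD package (pip name EMD-signal), a plain least-squares autoregression standing in for the paper's RBF neural network, simple summation standing in for its linear-network ensemble, and the denoising stage omitted.

    import numpy as np
    from PyEMD import EEMD   # assumes the EMD-signal package is installed

    def ar_forecast(x, order=4, steps=12):
        # Least-squares autoregression, a stand-in for the RBFNN component model.
        rows = np.array([x[i:i + order] for i in range(len(x) - order)])
        coef, *_ = np.linalg.lstsq(rows, x[order:], rcond=None)
        history = list(x[-order:])
        out = []
        for _ in range(steps):
            nxt = float(np.dot(coef, history[-order:]))
            out.append(nxt)
            history.append(nxt)
        return np.array(out)

    t = np.arange(600)
    flow = 10 + np.sin(2 * np.pi * t / 73) + 0.3 * np.random.default_rng(0).normal(size=600)
    imfs = EEMD().eemd(flow)                          # IMF components plus final residue
    forecast = sum(ar_forecast(c) for c in imfs)      # ensemble of component forecasts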
NASA Astrophysics Data System (ADS)
Papacharalampous, Georgia; Tyralis, Hristos; Koutsoyiannis, Demetris
2018-02-01
We investigate the predictability of monthly temperature and precipitation by applying automatic univariate time series forecasting methods to a sample of 985 40-year-long monthly temperature and 1552 40-year-long monthly precipitation time series. The methods include a naïve one based on the monthly values of the last year, as well as the random walk (with drift), AutoRegressive Fractionally Integrated Moving Average (ARFIMA), exponential smoothing state-space model with Box-Cox transformation, ARMA errors, Trend and Seasonal components (BATS), simple exponential smoothing, Theta and Prophet methods. Prophet is a recently introduced model inspired by the nature of time series forecasted at Facebook and has not been applied to hydrometeorological time series before, while the use of random walk, BATS, simple exponential smoothing and Theta is rare in hydrology. The methods are tested in performing multi-step ahead forecasts for the last 48 months of the data. We further investigate how different choices for handling seasonality and non-normality affect the performance of the models. The results indicate that: (a) all the examined methods apart from the naïve and random walk ones are accurate enough to be used in long-term applications; (b) monthly temperature and precipitation can be forecasted to a level of accuracy which can barely be improved using other methods; (c) the externally applied classical seasonal decomposition results mostly in better forecasts compared to the automatic seasonal decomposition used by the BATS and Prophet methods; and (d) Prophet is competitive, especially when it is combined with externally applied classical seasonal decomposition.
NASA Astrophysics Data System (ADS)
Crockett, R. G. M.; Gillmore, G. K.
2009-04-01
During the second half of 2002, the University of Northampton Radon Research Group operated two continuous hourly-sampling radon detectors 2.25 km apart in Northampton, in the (English) East Midlands. This period included the Dudley earthquake (22/09/2002), which was widely noticed by members of the public in the Northampton area. At various periods during 2008 the Group operated another pair of continuous hourly-sampling radon detectors a similar distance apart in Northampton; one such period included the Market Rasen earthquake (27/02/2008), which was also widely noticed locally. During each period of monitoring, two time-series of radon readings were obtained, one from each detector. These have been analysed for evidence of simultaneous similar anomalies, the premise being that large disturbances occurring at large distances (relative to the detector separation) should produce simultaneous similar anomalies, whereas simultaneous anomalies occurring by chance will be dissimilar. As previously reported, cross-correlating the two 2002 time-series over periods of 1-30 days duration, rolled forward through the time-series at one-hour intervals, produced two periods of significant correlation, i.e. two periods of simultaneous similar behaviour in the radon concentrations. One of these periods corresponded in time to the Dudley earthquake; the other corresponded to a smaller earthquake which occurred in the English Channel (26/08/2002). We here report subsequent investigation of the 2002 and 2008 time-series using spectral-decomposition techniques. These techniques have revealed additional simultaneous similar behaviour in the two radon concentrations, not revealed by the rolling correlation on the raw data, corresponding in time to the Manchester earthquake swarm of October 2002 and the Market Rasen earthquake of February 2008. The spectral-decomposition techniques effectively 'de-noise' the data and also remove lower-frequency variations (e.g. tidal variations), revealing the simultaneous similarities. While this is very much work in progress, such techniques raise the possibility that simultaneous real-time monitoring of radon levels, for short-term simultaneous anomalies, at several locations in earthquake areas might provide the core of an earthquake-prediction method. Keywords: Radon; earthquakes; time series; cross-correlation; spectral-decomposition; real-time simultaneous monitoring.
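A minimal pandas sketch of the rolling cross-correlation step on two synthetic hourly series; the 7-day (168-hour) window, the shared diurnal signal and the anomaly threshold are illustrative assumptions rather than the study's actual settings.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    hours = pd.date_range("2002-07-01", periods=4000, freq="H")
    diurnal = np.sin(2 * np.pi * np.arange(4000) / 24)       # shared signal component
    det1 = pd.Series(diurnal + rng.normal(0, 1, 4000), index=hours)
    det2 = pd.Series(diurnal + rng.normal(0, 1, 4000), index=hours)

    rolling_r = det1.rolling(window=168).corr(det2)          # 7-day window, stepped hourly
    anomalies = rolling_r[rolling_r > rolling_r.quantile(0.99)]
    print(anomalies.head())                                  # candidate simultaneous episodes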
Hyde, Embriette R.; Haarmann, Daniel P.; Lynne, Aaron M.; Bucheli, Sibyl R.; Petrosino, Joseph F.
2013-01-01
Human decomposition is a mosaic system with an intimate association between biotic and abiotic factors. Despite the integral role of bacteria in the decomposition process, few studies have catalogued bacterial biodiversity for terrestrial scenarios. To explore the microbiome of decomposition, two cadavers were placed at the Southeast Texas Applied Forensic Science facility and allowed to decompose under natural conditions. The bloat stage of decomposition, a stage easily identified in taphonomy and readily attributed to microbial physiology, was targeted. Each cadaver was sampled at two time points, at the onset and end of the bloat stage, from various body sites including internal locations. Bacterial samples were analyzed by pyrosequencing of the 16S rRNA gene. Our data show a shift from aerobic bacteria to anaerobic bacteria in all body sites sampled and demonstrate variation in community structure between bodies, between sample sites within a body, and between initial and end points of the bloat stage within a sample site. These data are best not viewed as points of comparison but rather additive data sets. While some species recovered are the same as those observed in culture-based studies, many are novel. Our results are preliminary and add to a larger emerging data set; a more comprehensive study is needed to further dissect the role of bacteria in human decomposition. PMID:24204941
Liorni, I; Parazzini, M; Fiocchi, S; Guadagnin, V; Ravazzani, P
2014-01-01
Polynomial Chaos (PC) is a decomposition method used to build a meta-model that approximates the unknown response of a model. In this paper the PC method is applied to stochastic dosimetry to assess the variability of human exposure due to changes in the orientation of the B-field vector with respect to the human body. In detail, the exposure of a pregnant woman at 7 months of gestational age is analysed to build a statistical meta-model of the induced electric field for each fetal tissue and for the fetal whole body by means of the PC expansion, as a function of the B-field orientation, considering uniform exposure at 50 Hz.
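A one-dimensional illustration of a polynomial-chaos meta-model for a Gaussian input, fitting probabilists' Hermite polynomials by least squares; the toy response function stands in for the dosimetry code, which cannot be reproduced from the abstract.

    import numpy as np
    from numpy.polynomial.hermite_e import hermevander

    rng = np.random.default_rng(0)
    xi = rng.normal(size=200)                 # standard-normal germ (orientation proxy)
    y = np.sin(0.8 * xi) + 0.1 * xi ** 2      # toy response, stand-in for the dosimetry model

    deg = 5
    Phi = hermevander(xi, deg)                # probabilists' Hermite basis at the samples
    coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)

    xi_new = rng.normal(size=100_000)
    y_meta = hermevander(xi_new, deg) @ coef  # cheap surrogate evaluations
    print(y_meta.mean(), y_meta.std())        # exposure statistics from the meta-model

Once fitted, the meta-model replaces the expensive dosimetry solver in Monte Carlo studies of exposure variability, which is the practical payoff of the PC expansion.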
Chemistry That Applies. What Works Clearinghouse Intervention Report
ERIC Educational Resources Information Center
What Works Clearinghouse, 2012
2012-01-01
"Chemistry That Applies" is an instructional unit designed to help students in grades 8-10 understand the law of conservation of matter. It consists of 24 lessons organized in four clusters. Working in groups, students explore four chemical reactions: burning, rusting, the decomposition of water, and the reaction of baking soda and…
NASREN: Standard reference model for telerobot control
NASA Technical Reports Server (NTRS)
Albus, J. S.; Lumia, R.; Mccain, H.
1987-01-01
A hierarchical architecture is described which supports space station telerobots in a variety of modes. The system is divided into three hierarchies: task decomposition, world model, and sensory processing. Goals at each level of the task decomposition hierarchy are divided both spatially and temporally into simpler commands for the next lower level. This decomposition is repeated until, at the lowest level, the drive signals to the robot actuators are generated. To accomplish its goals, task decomposition modules must often use information stored in the world model. The purpose of the sensory system is to update the world model as rapidly as possible to keep the model in registration with the physical world. The architecture of the entire control system hierarchy is described, along with how it can be applied to space telerobot applications.
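A schematic of the task-decomposition hierarchy in miniature: each level splits a goal into simpler commands for the level below until actuator drive signals emerge, while a world model holds the state that decomposition modules consult. All names and the two-way split are illustrative, not part of the NASREN specification.

    class WorldModel:
        # kept in registration with the physical world by sensory processing
        def __init__(self):
            self.state = {}
        def update(self, observation):
            self.state.update(observation)

    def decompose(goal, level):
        # each level divides a goal spatially/temporally into simpler commands
        if level == 0:
            return ["drive:" + goal]                  # actuator drive signals
        subgoals = [goal + "." + str(i) for i in (1, 2)]
        return [cmd for g in subgoals for cmd in decompose(g, level - 1)]

    wm = WorldModel()
    wm.update({"arm_pose": (0.1, 0.2, 0.3)})          # sensory-processing update
    print(decompose("grasp", 2))                      # four leaf drive commands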
Sugiyama, Kazuo; Suzuki, Katsunori; Kuwasima, Shusuke; Aoki, Yosuke; Yajima, Tatsuhiko
2009-01-01
The decomposition of a poly(amide-imide) thin film coated on a solid copper wire was attempted using atmospheric pressure non-equilibrium plasma. The plasma was produced by applying microwave power to an electrically conductive material in a gas mixture of argon, oxygen, and hydrogen. The poly(amide-imide) thin film was easily decomposed by argon-oxygen mixed gas plasma and an oxidized copper surface was obtained. The reduction of the oxidized surface with argon-hydrogen mixed gas plasma rapidly yielded a metallic copper surface. A continuous plasma heat-treatment process using a combination of both the argon-oxygen plasma and argon-hydrogen plasma was found to be suitable for the decomposition of the poly(amide-imide) thin film coated on the solid copper wire.