NASA Astrophysics Data System (ADS)
Sridhar, J.
2015-12-01
The focus of this work is to examine polarimetric decomposition techniques, primarily Pauli decomposition and Sphere Di-Plane Helix (SDH) decomposition, for forest resource assessment. The data processing steps adopted are pre-processing (geometric correction and radiometric calibration), speckle reduction, image decomposition, and image classification. Initially, to classify forest regions, unsupervised classification was applied to identify unknown classes; the K-means clustering method gave better results than the ISODATA method. Using the algorithms developed for Radar Tools, the code for the decomposition and classification techniques was implemented in Interactive Data Language (IDL) and applied to a RISAT-1 image of the Mysore-Mandya region of Karnataka, India. This region was chosen for studying forest vegetation and consists of agricultural lands, water bodies, and hilly terrain. Polarimetric SAR data possess high potential for classification of the earth's surface. After applying the decomposition techniques, classification was performed by selecting regions of interest; post-classification, the overall accuracy was higher for the SDH-decomposed image, as SDH operates on individual pixels on a coherent basis and utilises the complete intrinsic coherent nature of polarimetric SAR data, making SDH decomposition particularly suited to the analysis of high-resolution SAR data. The Pauli decomposition represents all the polarimetric information in a single SAR image, but the resulting image is difficult to interpret. The SDH decomposition technique appears to produce better results and easier interpretation than the Pauli decomposition, although further quantification and analysis are ongoing in this area of research. The comparison of polarimetric decomposition techniques with evolutionary classification techniques will be the scope of this work.
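The Pauli decomposition mentioned above has a compact closed form: each pixel's 2x2 scattering matrix is projected onto the Pauli basis, and the three resulting intensities are commonly displayed as an RGB composite. A minimal sketch on synthetic data (not the RISAT-1/IDL processing chain of the abstract; the function and variable names are illustrative):

```python
import numpy as np

def pauli_decompose(s_hh, s_hv, s_vv):
    """Pauli decomposition of a monostatic (reciprocal) scattering matrix.

    Returns the three Pauli intensities, usually mapped to an RGB
    composite (|b|^2 -> red, |c|^2 -> green, |a|^2 -> blue)."""
    a = (s_hh + s_vv) / np.sqrt(2)   # surface (odd-bounce) scattering
    b = (s_hh - s_vv) / np.sqrt(2)   # double-bounce scattering
    c = np.sqrt(2) * s_hv            # volume (cross-pol) scattering
    return np.abs(a) ** 2, np.abs(b) ** 2, np.abs(c) ** 2

# Synthetic single-look complex channels for a small pixel patch
rng = np.random.default_rng(0)
shape = (2, 2)
s_hh = rng.normal(size=shape) + 1j * rng.normal(size=shape)
s_hv = rng.normal(size=shape) + 1j * rng.normal(size=shape)
s_vv = rng.normal(size=shape) + 1j * rng.normal(size=shape)

p_a, p_b, p_c = pauli_decompose(s_hh, s_hv, s_vv)
# The Pauli basis is unitary, so total power (span) is preserved per pixel
span = np.abs(s_hh)**2 + 2 * np.abs(s_hv)**2 + np.abs(s_vv)**2
print(np.allclose(p_a + p_b + p_c, span))  # True
```

The span-preservation check is what makes the three intensities interpretable as a partition of the backscattered power among scattering mechanisms.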
Conceptual design optimization study
NASA Technical Reports Server (NTRS)
Hollowell, S. J.; Beeman, E. R., II; Hiyama, R. M.
1990-01-01
The feasibility of applying multilevel functional decomposition and optimization techniques to conceptual design of advanced fighter aircraft was investigated. Applying the functional decomposition techniques to the conceptual design phase appears to be feasible. The initial implementation of the modified design process will optimize wing design variables. A hybrid approach, combining functional decomposition techniques for generation of aerodynamic and mass properties linear sensitivity derivatives with existing techniques for sizing mission performance and optimization, is proposed.
Time series decomposition methods were applied to meteorological and air quality data and their numerical model estimates. Decomposition techniques express a time series as the sum of a small number of independent modes which hypothetically represent identifiable forcings, thereb...
Mechanism of thermal decomposition of K2FeO4 and BaFeO4: A review
NASA Astrophysics Data System (ADS)
Sharma, Virender K.; Machala, Libor
2016-12-01
This paper reviews the thermal decomposition of potassium ferrate(VI) (K2FeO4) and barium ferrate(VI) (BaFeO4) in air and nitrogen atmospheres. Mössbauer spectroscopy and nuclear forward scattering (NFS) of synchrotron radiation are reviewed to advance understanding of the electron-transfer processes involved in the reduction of ferrate(VI) to Fe(III) phases. Direct evidence of Fe(V) and Fe(IV) as intermediate iron species, obtained with these techniques, is presented. Thermal decomposition of K2FeO4 involves Fe(V), Fe(IV), and K3FeO3 as intermediate species, while BaFeO3 (i.e. Fe(IV)) is the only intermediate species during the decomposition of BaFeO4. The nature of the ferrite species, formed as the final Fe(III) products of thermal decomposition of K2FeO4 and BaFeO4 under different conditions, is evaluated. The steps of the mechanisms of thermal decomposition of ferrate(VI), which reasonably explain the experimental observations of the applied approaches in conjunction with thermal and surface techniques, are summarized.
Pointwise Partial Information Decomposition Using the Specificity and Ambiguity Lattices
NASA Astrophysics Data System (ADS)
Finn, Conor; Lizier, Joseph
2018-04-01
What are the distinct ways in which a set of predictor variables can provide information about a target variable? When does a variable provide unique information, when do variables share redundant information, and when do variables combine synergistically to provide complementary information? The redundancy lattice from the partial information decomposition of Williams and Beer provided a promising glimpse at the answer to these questions. However, this structure was constructed using a much criticised measure of redundant information, and despite sustained research, no completely satisfactory replacement measure has been proposed. In this paper, we take a different approach, applying the axiomatic derivation of the redundancy lattice to a single realisation from a set of discrete variables. To overcome the difficulty associated with signed pointwise mutual information, we apply this decomposition separately to the unsigned entropic components of pointwise mutual information which we refer to as the specificity and ambiguity. This yields a separate redundancy lattice for each component. Then based upon an operational interpretation of redundancy, we define measures of redundant specificity and ambiguity enabling us to evaluate the partial information atoms in each lattice. These atoms can be recombined to yield the sought-after multivariate information decomposition. We apply this framework to canonical examples from the literature and discuss the results and the various properties of the decomposition. In particular, the pointwise decomposition using specificity and ambiguity satisfies a chain rule over target variables, which provides new insights into the so-called two-bit-copy example.
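The specificity/ambiguity split described above rests on the identity i(s;t) = h(t) - h(t|s): the pointwise mutual information is the difference between two unsigned surprisals. A hedged sketch of that split computed from empirical probabilities, applied to the canonical copy example (names are illustrative; this is not the authors' code and omits the lattice machinery):

```python
import math

def pointwise_components(pairs, s, t):
    """Split pointwise mutual information i(s;t) = h(t) - h(t|s) into its
    unsigned entropic parts: the specificity h(t) and the ambiguity h(t|s)."""
    n = len(pairs)
    p_t = sum(1 for _, tv in pairs if tv == t) / n
    p_s = sum(1 for sv, _ in pairs if sv == s) / n
    p_st = sum(1 for sv, tv in pairs if sv == s and tv == t) / n
    specificity = -math.log2(p_t)        # h(t): a-priori surprise of the target value
    ambiguity = -math.log2(p_st / p_s)   # h(t|s): surprise remaining once s is known
    return specificity, ambiguity, specificity - ambiguity  # last term is the pmi

# Copy example: the target equals the (uniform, binary) predictor
pairs = [(0, 0), (1, 1)] * 4
spec, amb, pmi = pointwise_components(pairs, s=0, t=0)
print(spec, amb, pmi)  # 1.0 0.0 1.0 -- one fully specific bit, no ambiguity
```

Both components are non-negative by construction, which is what lets a separate redundancy lattice be built over each, avoiding the sign problem of raw pointwise mutual information.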
Differential Decomposition Among Pig, Rabbit, and Human Remains.
Dautartas, Angela; Kenyhercz, Michael W; Vidoli, Giovanna M; Meadows Jantz, Lee; Mundorff, Amy; Steadman, Dawnie Wolfe
2018-03-30
While nonhuman animal remains are often utilized in forensic research to develop methods to estimate the postmortem interval, systematic studies that directly validate animals as proxies for human decomposition are lacking. The current project compared decomposition rates among pigs, rabbits, and humans at the University of Tennessee's Anthropology Research Facility across three seasonal trials that spanned nearly 2 years. The Total Body Score (TBS) method was applied to quantify decomposition changes and calculate the postmortem interval (PMI) in accumulated degree days (ADD). Decomposition trajectories were analyzed by comparing the estimated and actual ADD for each seasonal trial and by fuzzy cluster analysis. The cluster analysis demonstrated that the rabbits formed one group while pigs and humans, although more similar to each other than either to rabbits, still showed important differences in decomposition patterns. The decomposition trends show that neither nonhuman model captured the pattern, rate, and variability of human decomposition. © 2018 American Academy of Forensic Sciences.
Cyclic Mario worlds — color-decomposition for one-loop QCD
NASA Astrophysics Data System (ADS)
Kälin, Gregor
2018-04-01
We present a new color decomposition for QCD amplitudes at one-loop level as a generalization of the Del Duca-Dixon-Maltoni and Johansson-Ochirov decomposition at tree level. Starting from a minimal basis of planar primitive amplitudes we write down a color decomposition that is free of linear dependencies among appearing primitive amplitudes or color factors. The conjectured decomposition applies to any number of quark flavors and is independent of the choice of gauge group and matter representation. The results also hold for higher-dimensional or supersymmetric extensions of QCD. We provide expressions for any number of external quark-antiquark pairs and gluons.
Lamb Waves Decomposition and Mode Identification Using Matching Pursuit Method
2009-01-01
Time-frequency representations used for Lamb wave analysis include the short-time Fourier transform (STFT), the wavelet transform, the Wigner-Ville distribution (WVD), and matching pursuit (MP) decomposition. The WVD, however, suffers from severe interferences, called cross-terms, which appear as spurious regions in the time-frequency plane. MP decomposition using a chirplet dictionary was applied to a simulated S0-mode Lamb wave (Figure 2a of the report).
Keough, Natalie; Myburgh, Jolandie; Steyn, Maryna
2017-07-01
Decomposition studies often use pigs as proxies for human cadavers. However, differences in decomposition sequences/rates relative to humans have not been scientifically examined. Descriptions of five main decomposition stages (humans) were developed and refined by Galloway and later by Megyesi. However, whether these changes/processes are alike in pigs is unclear. Any differences can have significant effects when pig models are used for human PMI estimation. This study compared human decomposition models to the changes observed in pigs. Twenty pigs (50-90 kg) were decomposed over five months and decompositional features recorded. Total body scores (TBS) were calculated. Significant differences were observed during early decomposition between pigs and humans. An amended scoring system to be used in future studies was developed. Standards for PMI estimation derived from porcine models may not directly apply to humans and may need adjustment. Porcine models, however, remain valuable to study variables influencing decomposition. © 2016 American Academy of Forensic Sciences.
NASA Astrophysics Data System (ADS)
Sierra, Carlos A.; Trumbore, Susan E.; Davidson, Eric A.; Vicca, Sara; Janssens, I.
2015-03-01
The sensitivity of soil organic matter decomposition to global environmental change is a topic of prominent relevance for the global carbon cycle. Decomposition depends on multiple factors that are being altered simultaneously as a result of global environmental change; therefore, it is important to study the sensitivity of the rates of soil organic matter decomposition with respect to multiple and interacting drivers. In this manuscript, we present an analysis of the potential response of decomposition rates to simultaneous changes in temperature and moisture. To address this problem, we first present a theoretical framework to study the sensitivity of soil organic matter decomposition when multiple driving factors change simultaneously. We then apply this framework to models and data at different levels of abstraction: (1) to a mechanistic model that addresses the limitation of enzyme activity by simultaneous effects of temperature and soil water content, the latter controlling substrate supply and oxygen concentration for microbial activity; (2) to different mathematical functions used to represent temperature and moisture effects on decomposition in biogeochemical models. To contrast model predictions at these two levels of organization, we compiled different data sets of observed responses in field and laboratory studies. Then we applied our conceptual framework to: (3) observations of heterotrophic respiration at the ecosystem level; (4) laboratory experiments looking at the response of heterotrophic respiration to independent changes in moisture and temperature; and (5) ecosystem-level experiments manipulating soil temperature and water content simultaneously.
A knowledge-based tool for multilevel decomposition of a complex design problem
NASA Technical Reports Server (NTRS)
Rogers, James L.
1989-01-01
Although much work has been done in applying artificial intelligence (AI) tools and techniques to problems in different engineering disciplines, only recently has the application of these tools begun to spread to the decomposition of complex design problems. A new tool based on AI techniques has been developed to implement a decomposition scheme suitable for multilevel optimization and display of data in an N x N matrix format.
New monitoring by thermogravimetry for radiation degradation of EVA
NASA Astrophysics Data System (ADS)
Boguski, J.; Przybytniak, G.; Łyczko, K.
2014-07-01
The radiation ageing of ethylene vinyl-acetate copolymer (EVA) used as cable jacket in nuclear power plants was carried out by gamma irradiation, and the degradation was monitored by thermogravimetric analysis (TGA). The EVA decomposition rate in air under isothermal conditions at 400 °C decreased with increasing dose and also with decreasing dose rate. The behavior of the EVA cable jacket indicated that the decomposition rate at 400 °C was reduced with increasing oxidation. The elongation at break measured by tensile testing of the radiation-aged EVA was closely related to the decomposition rate at 400 °C; therefore, TGA may be applied as a diagnostic technique for cable degradation.
Constrained reduced-order models based on proper orthogonal decomposition
Reddy, Sohail R.; Freno, Brian Andrew; Cizmas, Paul G. A.; ...
2017-04-09
A novel approach is presented to constrain reduced-order models (ROM) based on proper orthogonal decomposition (POD). The Karush–Kuhn–Tucker (KKT) conditions were applied to the traditional reduced-order model to constrain the solution to user-defined bounds. The constrained reduced-order model (C-ROM) was applied and validated against the analytical solution to the first-order wave equation. C-ROM was also applied to the analysis of fluidized beds. Lastly, it was shown that the ROM and C-ROM produced accurate results and that C-ROM was less sensitive to error propagation through time than the ROM.
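The POD step underlying the reduced-order model above amounts to a singular value decomposition of a snapshot matrix; the KKT-constrained step of the paper is not reproduced here. A minimal sketch on a travelling-wave snapshot set (illustrative, not the authors' C-ROM code):

```python
import numpy as np

def pod_basis(snapshots, n_modes):
    """Proper orthogonal decomposition of a snapshot matrix (states x times).

    The left singular vectors are the POD modes; truncating to n_modes gives
    the reduced basis Phi, and Phi.T @ x gives the reduced coordinates."""
    u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return u[:, :n_modes], s

# Snapshots of a travelling sine wave sampled on a 1-D grid
x = np.linspace(0, 2 * np.pi, 64)
times = np.linspace(0, 1, 20)
snapshots = np.array([np.sin(x - 2 * np.pi * ti) for ti in times]).T  # 64 x 20

phi, sing_vals = pod_basis(snapshots, n_modes=2)
# sin(x - c) = sin(x)cos(c) - cos(x)sin(c), so two modes capture it exactly
recon = phi @ (phi.T @ snapshots)
print(np.allclose(recon, snapshots))  # True
```

In a full ROM the governing equations are then projected onto Phi; the paper's contribution is constraining that projected solution to user-defined bounds via the KKT conditions.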
TE/TM decomposition of electromagnetic sources
NASA Technical Reports Server (NTRS)
Lindell, Ismo V.
1988-01-01
Three methods are given by which bounded EM sources can be decomposed into two parts radiating transverse electric (TE) and transverse magnetic (TM) fields with respect to a given constant direction in space. The theory applies source equivalence and nonradiating source concepts, which lead to decomposition methods based on a recursive formula or two differential equations for the determination of the TE and TM components of the original source. Decompositions for a dipole in terms of point, line, and plane sources are studied in detail. The planar decomposition is seen to match to an earlier result given by Clemmow (1963). As an application of the point decomposition method, it is demonstrated that the general exact image expression for the Sommerfeld half-space problem, previously derived through heuristic reasoning, can be more straightforwardly obtained through the present decomposition method.
Bahri, A.; Bendersky, M.; Cohen, F. R.; Gitler, S.
2009-01-01
This article gives a natural decomposition of the suspension of a generalized moment-angle complex or partial product space which arises as the polyhedral product functor described below. The introduction and application of the smash product moment-angle complex provides a precise identification of the stable homotopy type of the values of the polyhedral product functor. One direct consequence is an analysis of the associated cohomology. For the special case of the complements of certain subspace arrangements, the geometrical decomposition implies the homological decomposition in earlier work of others as described below. Because the splitting is geometric, an analogous homological decomposition for a generalized moment-angle complex applies for any homology theory. Implied, therefore, is a decomposition for the Stanley–Reisner ring of a finite simplicial complex, and natural generalizations. PMID:19620727
Students' Understanding of Quadratic Equations
ERIC Educational Resources Information Center
López, Jonathan; Robles, Izraim; Martínez-Planell, Rafael
2016-01-01
Action-Process-Object-Schema theory (APOS) was applied to study student understanding of quadratic equations in one variable. This required proposing a detailed conjecture (called a genetic decomposition) of mental constructions students may do to understand quadratic equations. The genetic decomposition which was proposed can contribute to help…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gallagher, Neal B.; Blake, Thomas A.; Gassman, Paul L.
2006-07-01
Multivariate curve resolution (MCR) is a powerful technique for extracting chemical information from spectra measured on complex mixtures. The difficulty with applying MCR to soil reflectance measurements is that light-scattering artifacts can contribute much more variance to the measurements than the analyte(s) of interest. Two methods were integrated into an MCR decomposition to account for light-scattering effects. First, an extended mixture model using pure analyte spectra augmented with scattering 'spectra' was used for the measured spectra. Second, second-derivative preprocessed spectra, which have higher selectivity than the unprocessed spectra, were included as a second block of the decomposition. The conventional alternating least squares (ALS) algorithm was modified to simultaneously decompose the measured and second-derivative spectra in a two-block decomposition. Equality constraints were also included to incorporate information about sampling conditions. The result was an MCR decomposition that provided interpretable spectra from soil reflectance measurements.
NASA Astrophysics Data System (ADS)
Othman, Adel A. A.; Fathy, M.; Negm, Adel
2018-06-01
The Temsah field is located offshore in the eastern part of the Nile Delta. The main reservoirs of the area are Middle Pliocene, consisting mainly of siliciclastics associated with a confined deep-marine environment. The distribution pattern of the reservoir facies is of limited scale, indicating fast lateral and vertical changes that are not easy to resolve by applying conventional seismic attributes. The target of the present study is to create geophysical workflows to better image the channel sand distribution in the study area. We applied both the average absolute amplitude and the energy attribute, which indicated the distribution of the sand bodies in the study area but failed to fully describe the channel geometry. Another tool, offering a more detailed description of the geometry, is therefore needed. Spectral decomposition, based on the discrete Fourier transform, is an alternative technique that can provide better results. Spectral decomposition performed over the upper channel shows that the frequency in the eastern part of the channel is the same as in places where the wells are drilled, which confirms the connection between the eastern and western parts of the upper channel. The results suggest that application of the spectral decomposition method leads to reliable inferences. Hence, using spectral decomposition alone or along with other attributes has a positive impact on reserves growth and increased production; the reserves in the study area increase to 75 bcf.
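At its core, spectral decomposition evaluates the amplitude of selected frequency components along a trace with a sliding windowed DFT, so that tuned geological features (such as channels) light up at particular frequencies. A hedged, single-frequency sketch on a synthetic trace (not the workflow or software used in the study; all names and parameters are illustrative):

```python
import numpy as np

def spectral_slice(trace, fs, f_target, win_len):
    """Amplitude of one frequency component along a trace, computed with a
    sliding Hann-windowed DFT -- the core of spectral decomposition."""
    k = int(round(f_target * win_len / fs))  # DFT bin closest to f_target
    half = win_len // 2
    amps = np.zeros(len(trace))
    for i in range(half, len(trace) - half):
        win = trace[i - half:i + half] * np.hanning(win_len)
        amps[i] = np.abs(np.fft.rfft(win)[k])
    return amps

# Synthetic trace: a 30 Hz burst standing in for a tuned channel response
fs, win_len = 500, 50
t = np.arange(500) / fs
trace = np.zeros(500)
trace[200:300] = np.sin(2 * np.pi * 30 * t[200:300])

amps = spectral_slice(trace, fs, f_target=30, win_len=win_len)
print(amps[250] > 10 * amps[100])  # True: the burst lights up at 30 Hz
```

Repeating this for a range of target frequencies over every trace yields the frequency volumes that are compared across a channel, as in the connectivity argument of the abstract.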
Massively Parallel Dantzig-Wolfe Decomposition Applied to Traffic Flow Scheduling
NASA Technical Reports Server (NTRS)
Rios, Joseph Lucio; Ross, Kevin
2009-01-01
Optimal scheduling of air traffic over the entire National Airspace System is a computationally difficult task. To speed computation, Dantzig-Wolfe decomposition is applied to a known linear integer programming approach for assigning delays to flights. The optimization model is proven to have the block-angular structure necessary for Dantzig-Wolfe decomposition. The subproblems for this decomposition are solved in parallel via independent computation threads. Experimental evidence suggests that as the number of subproblems/threads increases (and their respective sizes decrease), the solution quality, convergence, and runtime improve. A demonstration of this is provided by using one flight per subproblem, which is the finest possible decomposition. This results in thousands of subproblems and associated computation threads. This massively parallel approach is compared to one with few threads and to standard (non-decomposed) approaches in terms of solution quality and runtime. Since this method generally provides a non-integral (relaxed) solution to the original optimization problem, two heuristics are developed to generate an integral solution. Dantzig-Wolfe followed by these heuristics can provide a near-optimal (sometimes optimal) solution to the original problem hundreds of times faster than standard (non-decomposed) approaches. In addition, when massive decomposition is employed, the solution is shown to be more likely integral, which obviates the need for an integerization step. These results indicate that nationwide, real-time, high fidelity, optimal traffic flow scheduling is achievable for (at least) 3 hour planning horizons.
The Use of Decompositions in International Trade Textbooks.
ERIC Educational Resources Information Center
Highfill, Jannett K.; Weber, William V.
1994-01-01
Asserts that international trade, as compared with international finance or even international economics, is primarily an applied microeconomics field. Discusses decomposition analysis in relation to international trade and tariffs. Reports on an evaluation of the treatment of this topic in eight college-level economics textbooks. (CFR)
NASA Astrophysics Data System (ADS)
Yahyaei, Mohsen; Bashiri, Mahdi
2017-12-01
The hub location problem arises in a variety of domains such as transportation and telecommunication systems. In many real-world situations, hub facilities are subject to disruption. This paper deals with the multiple-allocation hub location problem in the presence of facility failures. To model the problem, a two-stage stochastic formulation is developed. In the proposed model, the number of scenarios grows exponentially with the number of facilities. To alleviate this issue, two approaches are applied simultaneously. The first is sample average approximation (SAA), which approximates the two-stage stochastic problem via sampling. The second is a multi-cut Benders decomposition approach, which enhances computational performance. Numerical studies show the effective performance of the SAA in terms of optimality gap for small problem instances with numerous scenarios. Moreover, the performance of multi-cut Benders decomposition is assessed through comparison with the classic version, and the computational results reveal the superiority of the multi-cut approach regarding computational time and number of iterations.
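Sample average approximation replaces an expectation over an intractable scenario set with an average over a Monte Carlo sample, then optimizes against that sample. The sketch below uses a simple newsvendor problem as a stand-in for the paper's hub-location model (all names, distributions, and parameters are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
demand_samples = rng.poisson(100, size=5000)  # sampled demand scenarios

def saa_objective(order_qty, demand, price=2.0, cost=1.0):
    """Sample average approximation of expected newsvendor profit (negated,
    so that smaller is better)."""
    sales = np.minimum(order_qty, demand)
    return -(price * sales - cost * order_qty).mean()

# Solve the SAA problem by grid search over first-stage decisions
candidates = np.arange(50, 150)
best = candidates[np.argmin([saa_objective(q, demand_samples)
                             for q in candidates])]
print(best)  # close to the true quantile solution (about 100 here)
```

In the paper's setting the first-stage decision is hub placement rather than an order quantity, and the second stage is solved by multi-cut Benders decomposition instead of direct evaluation, but the sampling principle is the same.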
Using Microwave Sample Decomposition in Undergraduate Analytical Chemistry
NASA Astrophysics Data System (ADS)
Griff Freeman, R.; McCurdy, David L.
1998-08-01
A shortcoming of many undergraduate classes in analytical chemistry is that students receive little exposure to sample preparation in chemical analysis. This paper reports the progress made in introducing microwave sample decomposition into several quantitative analysis experiments at Truman State University. Two experiments being performed in our current laboratory rotation include closed vessel microwave decomposition applied to the classical gravimetric determination of nickel and the determination of sodium in snack foods by flame atomic emission spectrometry. A third lab, using open-vessel microwave decomposition for the Kjeldahl nitrogen determination is now ready for student trial. Microwave decomposition reduces the time needed to complete these experiments and significantly increases the student awareness of the importance of sample preparation in quantitative chemical analyses, providing greater breadth and realism in the experiments.
The trait contribution to wood decomposition rates of 15 Neotropical tree species.
van Geffen, Koert G; Poorter, Lourens; Sass-Klaassen, Ute; van Logtestijn, Richard S P; Cornelissen, Johannes H C
2010-12-01
The decomposition of dead wood is a critical uncertainty in models of the global carbon cycle. Despite this, relatively few studies have focused on dead wood decomposition, with a strong bias to higher latitudes. Especially the effect of interspecific variation in species traits on differences in wood decomposition rates remains unknown. In order to fill these gaps, we applied a novel method to study long-term wood decomposition of 15 tree species in a Bolivian semi-evergreen tropical moist forest. We hypothesized that interspecific differences in species traits are important drivers of variation in wood decomposition rates. Wood decomposition rates (fractional mass loss) varied between 0.01 and 0.31 yr(-1). We measured 10 different chemical, anatomical, and morphological traits for all species. The species' average traits were useful predictors of wood decomposition rates, particularly the average diameter (dbh) of the tree species (R2 = 0.41). Lignin concentration further increased the proportion of explained inter-specific variation in wood decomposition (both negative relations, cumulative R2 = 0.55), although it did not significantly explain variation in wood decomposition rates if considered alone. When dbh values of the actual dead trees sampled for decomposition rate determination were used as a predictor variable, the final model (including dead tree dbh and lignin concentration) explained even more variation in wood decomposition rates (R2 = 0.71), underlining the importance of dbh in wood decomposition. Other traits, including wood density, wood anatomical traits, macronutrient concentrations, and the amount of phenolic extractives could not significantly explain the variation in wood decomposition rates. The surprising results of this multi-species study, in which for the first time a large set of traits is explicitly linked to wood decomposition rates, merits further testing in other forest ecosystems.
A New View of Earthquake Ground Motion Data: The Hilbert Spectral Analysis
NASA Technical Reports Server (NTRS)
Huang, Norden; Busalacchi, Antonio J. (Technical Monitor)
2000-01-01
A brief description of the newly developed Empirical Mode Decomposition (EMD) and Hilbert Spectral Analysis (HSA) method will be given. The decomposition is adaptive and can be applied to both nonlinear and nonstationary data. An example of the method applied to a sample earthquake record will be given. The results indicate that low-frequency components, totally missed by Fourier analysis, are clearly identified by the new method. Comparisons with wavelet and windowed Fourier analysis show that the new method offers much better temporal and frequency resolution.
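The Hilbert spectral step assigns each sample an instantaneous frequency via the analytic signal; the EMD sifting that precedes it is omitted here. A sketch using the standard FFT construction of the analytic signal (the same construction scipy.signal.hilbert uses) on a synthetic tone, not an earthquake record:

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal: zero the negative frequencies,
    double the positive ones, and inverse-transform."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spec * h)

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = np.cos(2 * np.pi * 50 * t)  # 50 Hz test tone

z = analytic_signal(x)
phase = np.unwrap(np.angle(z))
inst_freq = np.diff(phase) * fs / (2 * np.pi)  # instantaneous frequency, Hz
print(round(float(np.median(inst_freq)), 1))  # 50.0
```

Because the frequency estimate comes from a local phase derivative rather than a global basis, it can track nonstationary content -- the property the abstract credits for recovering low-frequency components that Fourier analysis misses.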
Optimal Multi-scale Demand-side Management for Continuous Power-Intensive Processes
NASA Astrophysics Data System (ADS)
Mitra, Sumit
With the advent of deregulation in electricity markets and an increasing share of intermittent power generation sources, the profitability of industrial consumers that operate power-intensive processes has become directly linked to the variability in energy prices. Thus, for industrial consumers that are able to adjust to the fluctuations, time-sensitive electricity prices (as part of so-called Demand-Side Management (DSM) in the smart grid) offer potential economical incentives. In this thesis, we introduce optimization models and decomposition strategies for the multi-scale Demand-Side Management of continuous power-intensive processes. On an operational level, we derive a mode formulation for scheduling under time-sensitive electricity prices. The formulation is applied to air separation plants and cement plants to minimize the operating cost. We also describe how a mode formulation can be used for industrial combined heat and power plants that are co-located at integrated chemical sites to increase operating profit by adjusting their steam and electricity production according to their inherent flexibility. Furthermore, a robust optimization formulation is developed to address the uncertainty in electricity prices by accounting for correlations and multiple ranges in the realization of the random variables. On a strategic level, we introduce a multi-scale model that provides an understanding of the value of flexibility of the current plant configuration and the value of additional flexibility in terms of retrofits for Demand-Side Management under product demand uncertainty. The integration of multiple time scales leads to large-scale two-stage stochastic programming problems, for which we need to apply decomposition strategies in order to obtain a good solution within a reasonable amount of time. 
Hence, we describe two decomposition schemes that can be applied to solve two-stage stochastic programming problems: First, a hybrid bi-level decomposition scheme with novel Lagrangean-type and subset-type cuts to strengthen the relaxation. Second, an enhanced cross-decomposition scheme that integrates Benders decomposition and Lagrangean decomposition on a scenario basis. To demonstrate the effectiveness of our developed methodology, we provide several industrial case studies throughout the thesis.
Data-driven process decomposition and robust online distributed modelling for large-scale processes
NASA Astrophysics Data System (ADS)
Shu, Zhang; Lijuan, Li; Lijuan, Yao; Shipin, Yang; Tao, Zou
2018-02-01
With the increasing attention on networked control, system decomposition and distributed models are of significant importance in the implementation of model-based control strategies. In this paper, a data-driven system decomposition and online distributed subsystem modelling algorithm is proposed for large-scale chemical processes. The key controlled variables are first partitioned by the affinity propagation clustering algorithm into several clusters, each of which can be regarded as a subsystem. Then the inputs of each subsystem are selected by offline canonical correlation analysis between all process variables and the subsystem's controlled variables. Process decomposition is thus realised after the screening of input and output variables. Once the system decomposition is finished, online subsystem modelling can be carried out by recursively renewing the samples block-wise. The proposed algorithm was applied to the Tennessee Eastman process and its validity was verified.
Fast heap transform-based QR-decomposition of real and complex matrices: algorithms and codes
NASA Astrophysics Data System (ADS)
Grigoryan, Artyom M.
2015-03-01
In this paper, we describe a new look at the application of Givens rotations to the QR-decomposition problem, which is similar to the method of Householder transformations. We apply the concept of the discrete heap transform, or signal-induced unitary transforms, which was introduced by Grigoryan (2006) and used in signal and image processing. Both cases of real and complex nonsingular matrices are considered, and examples of performing QR-decomposition of square matrices are given. The proposed method of QR-decomposition for the complex matrix is novel, differs from the known method of complex Givens rotation, and is based on analytical equations for the heap transforms. Many examples illustrating the proposed heap-transform method of QR-decomposition are given, algorithms are described in detail, and MATLAB-based codes are included.
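For reference, the classical Givens-rotation QR factorization that the heap-transform method is compared against can be sketched in a few lines (this is the textbook algorithm, not Grigoryan's heap-transform method; variable names are illustrative):

```python
import numpy as np

def givens_qr(a):
    """QR decomposition of a real matrix via Givens rotations.

    Each rotation mixes two adjacent rows to zero one subdiagonal entry;
    accumulating the rotations into Q keeps A = Q @ R invariant."""
    m, n = a.shape
    r = a.astype(float).copy()
    q = np.eye(m)
    for j in range(n):
        for i in range(m - 1, j, -1):
            x, y = r[i - 1, j], r[i, j]
            if y == 0.0:
                continue
            rad = np.hypot(x, y)
            c, s = x / rad, y / rad
            g = np.array([[c, s], [-s, c]])      # rotation zeroing r[i, j]
            r[[i - 1, i], :] = g @ r[[i - 1, i], :]
            q[:, [i - 1, i]] = q[:, [i - 1, i]] @ g.T
    return q, r

a = np.array([[4.0, 1.0], [3.0, 2.0], [0.0, 5.0]])
q, r = givens_qr(a)
print(np.allclose(q @ r, a), np.allclose(np.tril(r, -1), 0.0))  # True True
```

Each rotation touches only two rows, which is why Givens-based schemes (including the heap-transform variant) parallelise well and preserve sparsity better than Householder reflections.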
Fernandez, D.P.; Neff, J.C.; Belnap, J.; Reynolds, R.L.
2006-01-01
Decomposition is central to understanding ecosystem carbon exchange and nutrient-release processes. Unlike mesic ecosystems, which have been extensively studied, xeric landscapes have received little attention; as a result, abiotic soil-respiration regulatory processes are poorly understood in xeric environments. To provide a more complete and quantitative understanding about how abiotic factors influence soil respiration in xeric ecosystems, we conducted soil-respiration and decomposition-cloth measurements in the cold desert of southeast Utah. Our study evaluated when and to what extent soil texture, moisture, temperature, organic carbon, and nitrogen influence soil respiration and examined whether the inverse-texture hypothesis applies to decomposition. Within our study site, the effect of texture on moisture, as described by the inverse texture hypothesis, was evident, but its effect on decomposition was not. Our results show temperature and moisture to be the dominant abiotic controls of soil respiration. Specifically, temporal offsets in temperature and moisture conditions appear to have a strong control on soil respiration, with the highest fluxes occurring in spring when temperature and moisture were favorable. These temporal offsets resulted in decomposition rates that were controlled by soil moisture and temperature thresholds. The highest fluxes of CO2 occurred when soil temperature was between 10 and 16 °C and volumetric soil moisture was greater than 10%. Decomposition-cloth results, which integrate decomposition processes across several months, support the soil-respiration results and further illustrate the seasonal patterns of high respiration rates during spring and low rates during summer and fall. Results from this study suggest that the parameters used to predict soil respiration in mesic ecosystems likely do not apply in cold-desert environments. © Springer 2006.
A test of the hierarchical model of litter decomposition.
Bradford, Mark A; Veen, G F Ciska; Bonis, Anne; Bradford, Ella M; Classen, Aimee T; Cornelissen, J Hans C; Crowther, Thomas W; De Long, Jonathan R; Freschet, Gregoire T; Kardol, Paul; Manrubia-Freixa, Marta; Maynard, Daniel S; Newman, Gregory S; Logtestijn, Richard S P; Viketoft, Maria; Wardle, David A; Wieder, William R; Wood, Stephen A; van der Putten, Wim H
2017-12-01
Our basic understanding of plant litter decomposition informs the assumptions underlying widely applied soil biogeochemical models, including those embedded in Earth system models. Confidence in projected carbon cycle-climate feedbacks therefore depends on accurate knowledge about the controls regulating the rate at which plant biomass is decomposed into products such as CO2. Here we test underlying assumptions of the dominant conceptual model of litter decomposition. The model posits that a primary control on the rate of decomposition at regional to global scales is climate (temperature and moisture), with the controlling effects of decomposers negligible at such broad spatial scales. Using a regional-scale litter decomposition experiment at six sites spanning from northern Sweden to southern France, capturing both within- and among-site variation in putative controls, we find that, contrary to predictions from the hierarchical model, decomposer (microbial) biomass strongly regulates decomposition at regional scales. Furthermore, the size of the microbial biomass dictates the absolute change in decomposition rates with changing climate variables. Our findings suggest the need for revision of the hierarchical model, with decomposers acting as both local- and broad-scale controls on litter decomposition rates, necessitating their explicit consideration in global biogeochemical models.
A Three-way Decomposition of a Total Effect into Direct, Indirect, and Interactive Effects
VanderWeele, Tyler J.
2013-01-01
Recent theory in causal inference has provided concepts for mediation analysis and effect decomposition that allow one to decompose a total effect into a direct and an indirect effect. Here, it is shown that what is often taken as an indirect effect can in fact be further decomposed into a “pure” indirect effect and a mediated interactive effect, thus yielding a three-way decomposition of a total effect (direct, indirect, and interactive). This three-way decomposition applies to difference scales and also to additive ratio scales and additive hazard scales. Assumptions needed for the identification of each of these three effects are discussed and simple formulae are given for each when regression models allowing for interaction are used. The three-way decomposition is illustrated by examples from genetic and perinatal epidemiology, and discussion is given to what is gained over the traditional two-way decomposition into simply a direct and an indirect effect. PMID:23354283
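The three-way decomposition can be checked numerically in a simple special case: linear models for the mediator and the outcome with an exposure-mediator interaction. The coefficients below are hypothetical, and this sketch covers only the linear difference-scale case, not the paper's general counterfactual formulation:

```python
# Toy linear structural model (hypothetical coefficients):
#   M = g0 + g1*A                     (mediator model)
#   Y = b0 + b1*A + b2*M + b3*A*M    (outcome model with interaction)
g0, g1 = 0.5, 1.2
b0, b1, b2, b3 = 0.3, 0.8, 0.6, 0.4

def EY(a_direct, a_mediator):
    """E[Y(a, M(a*))] under the linear model above."""
    m = g0 + g1 * a_mediator
    return b0 + b1 * a_direct + b2 * m + b3 * a_direct * m

total = EY(1, 1) - EY(0, 0)          # total effect of A: 0 -> 1
pure_direct   = b1 + b3 * g0         # direct effect with mediator held at M(0)
pure_indirect = b2 * g1              # mediator shift alone, no interaction
mediated_int  = b3 * g1              # interaction operating through the mediator
assert abs(total - (pure_direct + pure_indirect + mediated_int)) < 1e-12
```

The three components sum exactly to the total effect, which is the identity the abstract describes.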
Card, Allison; Cross, Peter; Moffatt, Colin; Simmons, Tal
2015-07-01
Twenty Sus scrofa carcasses were used to study the effect the presence of clothing had on decomposition rate and colonization locations of Diptera species; 10 unclothed control carcasses were compared to 10 clothed experimental carcasses over 58 days. Data collection occurred at regular accumulated degree day intervals; the level of decomposition as Total Body Score (TBSsurf), the pattern of decomposition, and the Diptera present were documented. Results indicated a statistically significant difference in the rate of decomposition (t(427) = 2.59, p = 0.010), with unclothed carcasses decomposing faster than clothed carcasses. However, the overall decomposition rates of the two carcass groups are too similar to separate when applying a 95% CI, which means that, although statistically significant, from a practical forensic point of view they are not sufficiently dissimilar as to warrant the application of different formulae to estimate the postmortem interval. Further results demonstrated that clothing provided blow flies with additional colonization locations. © 2015 American Academy of Forensic Sciences.
Performance of Scattering Matrix Decomposition and Color Spaces for Synthetic Aperture Radar Imagery
2010-03-01
Table-of-contents excerpts: color spaces and Synthetic Aperture Radar (SAR) multicolor imaging; colorimetry; decomposition techniques on SAR polarimetry and colorimetry applied to SAR imagery; fundamentals of the RGB and CMY color spaces as defined for polarimetric SAR systems.
ERIC Educational Resources Information Center
Feng, Mingyu; Beck, Joseph E.; Heffernan, Neil T.
2009-01-01
A basic question about instructional interventions is how effective they are in promoting student learning. This paper presents a study to determine the relative efficacy of different instructional strategies by applying an educational data mining technique, learning decomposition. We use logistic regression to determine how much learning is caused by…
Educational Outcomes and Socioeconomic Status: A Decomposition Analysis for Middle-Income Countries
ERIC Educational Resources Information Center
Nieto, Sandra; Ramos, Raúl
2015-01-01
This article analyzes the factors that explain the gap in educational outcomes between the top and bottom quartile of students in different countries, according to their socioeconomic status. To do so, it uses PISA microdata for 10 middle-income and 2 high-income countries, and applies the Oaxaca-Blinder decomposition method. Its results show that…
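A minimal two-fold Oaxaca-Blinder sketch with a single covariate and invented toy data illustrates the explained/unexplained split the article relies on. All names and numbers below are hypothetical, not PISA data:

```python
def ols(x, y):
    """Simple closed-form OLS fit y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# Hypothetical outcome scores and one covariate for top/bottom groups
x_top, y_top = [2.0, 3.0, 4.0, 5.0], [5.1, 6.0, 7.2, 8.0]
x_bot, y_bot = [1.0, 2.0, 2.5, 3.0], [3.0, 3.9, 4.1, 4.8]

aT, bT = ols(x_top, y_top)
aB, bB = ols(x_bot, y_bot)
mxT, mxB = sum(x_top) / 4, sum(x_bot) / 4
gap = sum(y_top) / 4 - sum(y_bot) / 4

explained   = bB * (mxT - mxB)              # endowment (covariate) component
unexplained = (aT - aB) + (bT - bB) * mxT   # coefficient component
assert abs(gap - (explained + unexplained)) < 1e-9
```

Because an OLS line passes through the sample means, the two components reproduce the mean gap exactly; the real analysis extends this to many covariates.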
NASA Astrophysics Data System (ADS)
Huang, Yan; Wang, Zhihui
2015-12-01
With the development of FPGA, DSP Builder is widely applied to design system-level algorithms. The CL multi-wavelet algorithm is more advanced and effective than scalar wavelets in signal decomposition. Thus, a system of CL multi-wavelets based on DSP Builder is designed for the first time in this paper. The system mainly contains three parts: a pre-filtering subsystem, a one-level decomposition subsystem and a two-level decomposition subsystem. It can be converted into the hardware language VHDL by the Signal Compiler block that can be used in Quartus II. Analysis of the energy indicator shows that this system outperforms the Daubechies wavelet in signal decomposition. Furthermore, it has proved to be suitable for the implementation of signal fusion based on SoPC hardware, and it will become a solid foundation in this new field.
Optimal cost design of water distribution networks using a decomposition approach
NASA Astrophysics Data System (ADS)
Lee, Ho Min; Yoo, Do Guen; Sadollah, Ali; Kim, Joong Hoon
2016-12-01
Water distribution network decomposition, an engineering approach, is adopted to increase the efficiency of obtaining the optimal cost design of a water distribution network with an optimization algorithm. This study applied the source tracing tool in EPANET, a hydraulic and water quality analysis model, to the decomposition of a network to improve the efficiency of the optimal design process. The proposed approach was tested by carrying out the optimal cost design of two water distribution networks, and the results were compared with optimal cost designs derived from previously proposed optimization algorithms. The proposed decomposition approach using the source tracing technique enables the efficient decomposition of an actual large-scale network, and the results can be combined with the optimal cost design process using an optimization algorithm. This demonstrates that the final design in this study is better than those obtained with previously proposed optimization algorithms.
Analysis on Vertical Scattering Signatures in Forestry with PolInSAR
NASA Astrophysics Data System (ADS)
Guo, Shenglong; Li, Yang; Zhang, Jingjing; Hong, Wen
2014-11-01
We apply an accurate topographic phase to the Freeman-Durden decomposition for polarimetric SAR interferometry (PolInSAR) data. The cross-correlation matrix obtained from PolInSAR observations can be decomposed into three scattering mechanism matrices accounting for odd-bounce, double-bounce and volume scattering. We estimate the topographic phase based on the Random Volume over Ground (RVoG) model and use it as the initial input parameter of the numerical method that solves for the decomposition parameters. In addition, the modified volume scattering model introduced by Y. Yamaguchi is applied to the PolInSAR target decomposition in forest areas, rather than the purely random volume scattering proposed by Freeman-Durden, to better fit the measured data. This method can accurately retrieve the magnitude associated with each mechanism and its location along the vertical dimension. We test the algorithms with L- and P-band simulated data.
Isothermal Decomposition of Hydrogen Peroxide Dihydrate
NASA Technical Reports Server (NTRS)
Loeffler, M. J.; Baragiola, R. A.
2011-01-01
We present a new method of growing pure solid hydrogen peroxide in an ultra high vacuum environment and apply it to determine thermal stability of the dihydrate compound that forms when water and hydrogen peroxide are mixed at low temperatures. Using infrared spectroscopy and thermogravimetric analysis, we quantified the isothermal decomposition of the metastable dihydrate at 151.6 K. This decomposition occurs by fractional distillation through the preferential sublimation of water, which leads to the formation of pure hydrogen peroxide. The results imply that in an astronomical environment where condensed mixtures of H2O2 and H2O are shielded from radiolytic decomposition and warmed to temperatures where sublimation is significant, highly concentrated or even pure hydrogen peroxide may form.
NASA Astrophysics Data System (ADS)
Haris, A.; Pradana, G. S.; Riyanto, A.
2017-07-01
The tectonic setting of the Bird's Head of Papua Island is an important model for petroleum systems in eastern Indonesia. Exploration in the area began with the discovery of oil seepage in the Bintuni and Salawati Basins. Hydrocarbon accumulations of dry gas type in shallow layers make biogenic gas an interesting target for further research. This paper aims at delineating sweet spots of hydrocarbon potential in a shallow layer by applying the spectral decomposition technique. Spectral decomposition decomposes the seismic signal into individual frequency components that carry significant geological meaning. One spectral decomposition method is the Continuous Wavelet Transform (CWT), which maps the seismic signal into time and frequency simultaneously, easing time-frequency map analysis. When time resolution increases, frequency resolution decreases, and vice versa. In this study, we performed low-frequency shadow zone analysis: an amplitude anomaly observed at a low frequency of 15 Hz was compared with the amplitude at mid (20 Hz) and high (30 Hz) frequencies, and the anomaly present at low frequency disappears at high frequency. Spectral decomposition using the CWT algorithm was successfully applied to delineate the sweet spot zone.
Becky A. Ball; Mark D. Hunter; John S. Kominoski; Christopher M. Swan; Mark A. Bradford
2008-01-01
Although litter decomposition is a fundamental ecological process, most of our understanding comes from studies of single-species decay. Recently, litter-mixing studies have tested whether monoculture data can be applied to mixed-litter systems. These studies have mainly attempted to detect non-additive effects of litter mixing, which address potential consequences of...
Application of composite dictionary multi-atom matching in gear fault diagnosis.
Cui, Lingli; Kang, Chenhui; Wang, Huaqing; Chen, Peng
2011-01-01
The sparse decomposition based on matching pursuit is an adaptive sparse expression method for signals. This paper proposes a composite dictionary multi-atom matching decomposition and reconstruction algorithm, and introduces threshold de-noising into the reconstruction algorithm. Based on the structural characteristics of gear fault signals, a composite dictionary combining the impulse time-frequency dictionary and the Fourier dictionary was constituted, and a genetic algorithm was applied to search for the best matching atom. The analysis results of gear fault simulation signals indicated the effectiveness of the hard threshold, and the impulse or harmonic characteristic components could be separately extracted. Meanwhile, the robustness of the composite dictionary multi-atom matching algorithm at different noise levels was investigated. To address the effect of data length on the calculation efficiency of the algorithm, an improved segmented decomposition and reconstruction algorithm was proposed, which significantly enhanced the calculation efficiency. In addition, the multi-atom matching algorithm was shown to be superior to the single-atom matching algorithm in both calculation efficiency and robustness. Finally, the above algorithm was applied to gear fault engineering signals, and achieved good results.
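The greedy atom-selection step at the heart of matching pursuit can be sketched over a small orthonormal dictionary. The cosine/sine atoms below are illustrative stand-ins, not the paper's impulse time-frequency or Fourier dictionaries, and no genetic search or thresholding is included:

```python
import math

def matching_pursuit(signal, atoms, n_iter):
    """Greedy matching pursuit: at each step pick the unit-norm atom with the
    largest inner product with the residual and subtract its projection."""
    residual = signal[:]
    approx = [0.0] * len(signal)
    for _ in range(n_iter):
        best, best_dot = None, 0.0
        for atom in atoms:
            d = sum(r * a for r, a in zip(residual, atom))
            if abs(d) > abs(best_dot):
                best, best_dot = atom, d
        if best is None:
            break
        for i in range(len(signal)):
            residual[i] -= best_dot * best[i]
            approx[i] += best_dot * best[i]
    return approx, residual

N = 8
def unit(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

# Small orthonormal dictionary: three cosine atoms and one sine atom
atoms = [unit([math.cos(2 * math.pi * f * t / N) for t in range(N)]) for f in (1, 2, 3)]
atoms.append(unit([math.sin(2 * math.pi * t / N) for t in range(N)]))

# Signal built from two dictionary atoms; two pursuit steps recover it exactly
signal = [2.0 * atoms[1][t] + 0.7 * atoms[3][t] for t in range(N)]
approx, residual = matching_pursuit(signal, atoms, 2)
```

With a correlated (non-orthogonal) composite dictionary, as in the paper, the residual decays more slowly, which is what motivates the improved segmented scheme.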
Foster, Ken; Anwar, Nasim; Pogue, Rhea; Morré, Dorothy M.; Keenan, T. W.; Morré, D. James
2003-01-01
Seasonal decomposition analyses were applied to the statistical evaluation of an oscillating activity for a plasma membrane NADH oxidase activity with a temperature-compensated period of 24 min. The decomposition fits were used to validate the cyclic oscillatory pattern. Three measures, mean absolute percentage error (MAPE), a measure of the periodic oscillation, mean absolute deviation (MAD), a measure of the absolute average deviation from the fitted values, and mean squared deviation (MSD), a measure of the standard deviation from the fitted values, plus R-squared and the Henriksson-Merton p value, were used to evaluate accuracy. Decomposition was carried out by fitting a trend line to the data, then detrending the data, if necessary, by subtracting the trend component. The data, with or without detrending, were then smoothed by subtracting a centered moving average of length equal to the period length determined by Fourier analysis. Finally, the time series were decomposed into cyclic and error components. The findings not only validate the periodic nature of the major oscillations but suggest, as well, that the minor intervening fluctuations also recur within each period with a reproducible pattern of recurrence. PMID:19330112
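The detrend-then-phase-average procedure described above can be sketched as a minimal additive decomposition. This assumes an odd period and synthetic data, and is not the authors' exact implementation (they fit a trend line and determine the period by Fourier analysis):

```python
def decompose(series, period):
    """Minimal additive decomposition: centered moving average -> trend,
    phase-averaged detrended values -> cyclic component, remainder -> error.
    Assumes an odd period and a series comfortably longer than one period."""
    n = len(series)
    half = period // 2
    trend = [None] * n
    for i in range(half, n - half):
        window = series[i - half:i + half + 1]   # one full period, centered
        trend[i] = sum(window) / len(window)
    detrended = [series[i] - trend[i] for i in range(half, n - half)]
    # average the detrended values by phase within the period
    cycle_means = []
    for p in range(period):
        vals = [detrended[i] for i in range(len(detrended)) if (i + half) % period == p]
        cycle_means.append(sum(vals) / len(vals))
    cyclic = [cycle_means[i % period] for i in range(n)]
    error = [series[i] - trend[i] - cyclic[i] for i in range(half, n - half)]
    return trend, cyclic, error

seasonal = [2.0, -1.0, 0.0, 1.0, -2.0]           # zero-mean over the period
series = [0.5 * i + seasonal[i % 5] for i in range(30)]
trend, cyclic, error = decompose(series, 5)
```

For a linear trend plus a zero-mean periodic component, the moving average recovers the trend exactly in the interior and the error component vanishes.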
Domain decomposition for a mixed finite element method in three dimensions
Cai, Z.; Parashkevov, R.R.; Russell, T.F.; Wilson, J.D.; Ye, X.
2003-01-01
We consider the solution of the discrete linear system resulting from a mixed finite element discretization applied to a second-order elliptic boundary value problem in three dimensions. Based on a decomposition of the velocity space, these equations can be reduced to a discrete elliptic problem by eliminating the pressure through the use of substructures of the domain. The practicality of the reduction relies on a local basis, presented here, for the divergence-free subspace of the velocity space. We consider additive and multiplicative domain decomposition methods for solving the reduced elliptic problem, and their uniform convergence is established.
NASA Astrophysics Data System (ADS)
Yamamoto, Takanori; Bannai, Hideo; Nagasaki, Masao; Miyano, Satoru
We present new decomposition heuristics for finding the optimal solution for the maximum-weight connected graph problem, which is known to be NP-hard. Previous optimal algorithms for solving the problem decompose the input graph into subgraphs using heuristics based on node degree. We propose new heuristics based on betweenness centrality measures, and show through computational experiments that our new heuristics tend to reduce the number of subgraphs in the decomposition, and therefore could lead to the reduction in computational time for finding the optimal solution. The method is further applied to analysis of biological pathway data.
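A minimal sketch of the idea, assuming an unweighted graph: compute node betweenness with Brandes' algorithm and split the graph at the highest-betweenness node. The toy graph and the one-node split are illustrative only; the paper's heuristics for the maximum-weight connected graph problem are more elaborate:

```python
from collections import deque

def betweenness(adj):
    """Brandes' algorithm on an unweighted graph {node: set(neighbors)}.
    Counts ordered source-target pairs (no division by 2)."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        stack, pred = [], {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        q = deque([s])
        while q:                         # BFS, counting shortest paths
            v = q.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        delta = {v: 0.0 for v in adj}
        while stack:                     # back-propagate dependencies
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1.0 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

def components(adj, removed):
    """Connected components after deleting the nodes in `removed`."""
    seen, comps = set(removed), []
    for s in adj:
        if s in seen:
            continue
        comp, q = {s}, deque([s])
        seen.add(s)
        while q:
            v = q.popleft()
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    comp.add(w)
                    q.append(w)
        comps.append(comp)
    return comps

# Two triangles {0,1,2} and {4,5,6} joined through bridge node 3
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4},
       4: {3, 5, 6}, 5: {4, 6}, 6: {4, 5}}
bc = betweenness(adj)
bridge = max(bc, key=bc.get)         # highest-betweenness node
parts = components(adj, {bridge})    # removing it decomposes the graph
```

Splitting at high-betweenness nodes tends to cut few shortest paths per removed node, which is the intuition behind reducing the number of subgraphs.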
Corrected confidence bands for functional data using principal components.
Goldsmith, J; Greven, S; Crainiceanu, C
2013-03-01
Functional principal components (FPC) analysis is widely used to decompose and express functional observations. Curve estimates implicitly condition on basis functions and other quantities derived from FPC decompositions; however these objects are unknown in practice. In this article, we propose a method for obtaining correct curve estimates by accounting for uncertainty in FPC decompositions. Additionally, pointwise and simultaneous confidence intervals that account for both model- and decomposition-based variability are constructed. Standard mixed model representations of functional expansions are used to construct curve estimates and variances conditional on a specific decomposition. Iterated expectation and variance formulas combine model-based conditional estimates across the distribution of decompositions. A bootstrap procedure is implemented to understand the uncertainty in principal component decomposition quantities. Our method compares favorably to competing approaches in simulation studies that include both densely and sparsely observed functions. We apply our method to sparse observations of CD4 cell counts and to dense white-matter tract profiles. Code for the analyses and simulations is publicly available, and our method is implemented in the R package refund on CRAN. Copyright © 2013, The International Biometric Society.
NASA Astrophysics Data System (ADS)
Benner, Ronald; Hatcher, Patrick G.; Hedges, John I.
1990-07-01
Changes in the chemical composition of mangrove (Rhizophora mangle) leaves during decomposition in tropical estuarine waters were characterized using solid-state 13C nuclear magnetic resonance (NMR) and elemental (CHNO) analysis. Carbohydrates were the most abundant components of the leaves accounting for about 50 wt% of senescent tissues. Tannins were estimated to account for about 20 wt% of leaf tissues, and lipid components, cutin, and possibly other aliphatic biopolymers in leaf cuticles accounted for about 15 wt%. Carbohydrates were generally less resistant to decomposition than the other constituents and decreased in relative concentration during decomposition. Tannins were of intermediate resistance to decomposition and remained in fairly constant proportion during decomposition. Paraffinic components were very resistant to decomposition and increased in relative concentration as decomposition progressed. Lignin was a minor component of all leaf tissues. Standard methods for the colorimetric determination of tannins (Folin-Dennis reagent) and the gravimetric determination of lignin (Klason lignin) were highly inaccurate when applied to mangrove leaves. The N content of the leaves was particularly dynamic with values ranging from 1.27 wt% in green leaves to 0.65 wt% in senescent yellow leaves attached to trees. During decomposition in the water the N content initially decreased to 0.51 wt% due to leaching, but values steadily increased thereafter to 1.07 wt% in the most degraded leaf samples. The absolute mass of N in the leaves increased during decomposition indicating that N immobilization was occurring as decomposition progressed.
NASA Technical Reports Server (NTRS)
Consoli, Robert David; Sobieszczanski-Sobieski, Jaroslaw
1990-01-01
Advanced multidisciplinary analysis and optimization methods, namely system sensitivity analysis and non-hierarchical system decomposition, are applied to reduce the cost and improve the visibility of an automated vehicle design synthesis process. This process is inherently complex due to the large number of functional disciplines and associated interdisciplinary couplings. Recent developments in system sensitivity analysis as applied to complex non-hierarchic multidisciplinary design optimization problems enable the decomposition of these complex interactions into sub-processes that can be evaluated in parallel. The application of these techniques results in significant cost, accuracy, and visibility benefits for the entire design synthesis process.
Inventory control of raw material using silver meal heuristic method in PR. Trubus Alami Malang
NASA Astrophysics Data System (ADS)
Ikasari, D. M.; Lestari, E. R.; Prastya, E.
2018-03-01
The purpose of this study was to compare the total inventory cost calculated using the method applied by PR. Trubus Alami with that of the Silver Meal Heuristic (SMH) method. The study started by forecasting cigarette demand from July 2016 to June 2017 (48 weeks) using the additive decomposition forecasting method. Additive decomposition was used because it has the lowest Mean Absolute Deviation (MAD) and Mean Squared Deviation (MSD) compared to other methods such as multiplicative decomposition, moving average, single exponential smoothing, and double exponential smoothing. The forecasting results were then converted into raw material needs, and the inventory cost was calculated using the SMH method. As expected, the results show that the order frequency using the SMH method was smaller than that using the method applied by PR. Trubus Alami, which affected the total inventory cost. The results suggest that the SMH method gave a 29.41% lower inventory cost, a difference of IDR 21,290,622. The findings therefore indicate that PR. Trubus Alami should apply the SMH method if the company wants to reduce its total inventory cost.
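The SMH stopping rule, extend the current lot over future periods while the average cost per period keeps decreasing, can be sketched as follows. The demand and cost figures are hypothetical, not the company's data:

```python
def silver_meal(demand, setup_cost, hold_cost):
    """Silver-Meal lot-sizing heuristic: for each order, add future periods to
    the lot while the average per-period cost (setup + holding) decreases."""
    orders = []                      # list of (order period, lot size)
    t, n = 0, len(demand)
    while t < n:
        T = 1
        cost = float(setup_cost)     # covering only period t: no holding cost
        avg = cost
        while t + T < n:
            # demand of period t+T would be held in stock for T periods
            new_cost = cost + hold_cost * T * demand[t + T]
            new_avg = new_cost / (T + 1)
            if new_avg >= avg:       # average period cost starts rising: stop
                break
            cost, avg, T = new_cost, new_avg, T + 1
        orders.append((t, sum(demand[t:t + T])))
        t += T
    return orders

# Hypothetical 4-period demand, setup cost 150, holding cost 1 per unit-period
orders = silver_meal([50, 60, 90, 70], setup_cost=150, hold_cost=1)
```

Here the heuristic places two orders, one covering periods 0-1 and one covering periods 2-3, because extending either lot further would raise the average cost per period.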
Method of Suppressing Sublimation in Advanced Thermoelectric Devices
NASA Technical Reports Server (NTRS)
Sakamoto, Jeffrey S. (Inventor); Caillat, Thierry (Inventor); Fleurial, Jean-Pierre (Inventor); Snyder, G. Jeffrey (Inventor)
2009-01-01
A method of applying a physical barrier to suppress thermal decomposition near a surface of a thermoelectric material including applying a continuous metal foil to a predetermined portion of the surface of the thermoelectric material, physically binding the continuous metal foil to the surface of the thermoelectric material using a binding member, and heating in a predetermined atmosphere the applied and physically bound continuous metal foil and the thermoelectric material to a sufficient temperature in order to promote bonding between the continuous metal foil and the surface of the thermoelectric material. The continuous metal foil forms a physical barrier to enclose a predetermined portion of the surface. Thermal decomposition is suppressed at the surface of the thermoelectric material enclosed by the physical barrier when the thermoelectric element is in operation.
Canonical decomposition of magnetotelluric responses: Experiment on 1D anisotropic structures
NASA Astrophysics Data System (ADS)
Guo, Ze-qiu; Wei, Wen-bo; Ye, Gao-feng; Jin, Sheng; Jing, Jian-en
2015-08-01
Horizontal electrical heterogeneity of the subsurface earth mostly originates from structural complexity and electrical anisotropy, and local near-surface electrical heterogeneity severely distorts regional electromagnetic responses. Conventional distortion analyses for magnetotelluric soundings are primarily physical decomposition methods with respect to isotropic models, which mostly presume that the geoelectric distribution of geological structures follows local and regional patterns represented by 3D/2D models. Given the widespread anisotropy of earth media, the confusion between 1D anisotropic responses and 2D isotropic responses, and the defects of physical decomposition methods, we propose to conduct modeling experiments with canonical decomposition in terms of 1D layered anisotropic models. The method is a mathematical decomposition based on eigenstate analyses, as distinguished from distortion analyses, and can be used to recover electrical information such as strike directions and maximum and minimum conductivity. We tested this method with numerical simulation experiments on several 1D synthetic models, which showed that canonical decomposition is quite effective in revealing geological anisotropy. Finally, against the background of anisotropy indicated by previous geological and seismological studies, canonical decomposition is applied to real data acquired in the North China Craton for 1D anisotropy analyses. The result shows that, with effective modeling and cautious interpretation, canonical decomposition can be another good method to detect anisotropy of geological media.
Du, Jingjing; Zhang, Yuyan; Guo, Wei; Li, Ningyun; Gao, Chaoshuai; Cui, Minghui; Lin, Zhongdian; Wei, Mingbao; Zhang, Hongzhong
2018-05-15
Titanium dioxide (TiO2) nanoparticles have been applied in diverse commercial products, which could lead to toxic effects on aquatic microbes and inhibit some important ecosystem processes. This study investigated the chronic impacts of TiO2 nanoparticles at different concentrations (5, 50, and 500 mg L-1) on Populus nigra L. leaf decomposition in a freshwater ecosystem. After 50 days of decomposition, a significant decrease in decomposition rates was observed at higher concentrations of TiO2 nanoparticles. During litter decomposition, exposure to TiO2 nanoparticles led to decreases in extracellular enzyme activities, caused by the reduction of microbial, especially fungal, biomass. In addition, the diversity and composition of the fungal community associated with litter decomposition were strongly affected by the concentrations of TiO2 nanoparticles. The abundance of Tricladium chaetocladium decreased with increasing concentrations of TiO2 nanoparticles, indicating the species' small contribution to litter decomposition. In conclusion, this study provided evidence for the chronic effects of TiO2 nanoparticle exposure on litter decomposition and, further, on the functions of freshwater ecosystems. Copyright © 2018 Elsevier B.V. All rights reserved.
Mode decomposition and Lagrangian structures of the flow dynamics in orbitally shaken bioreactors
NASA Astrophysics Data System (ADS)
Weheliye, Weheliye Hashi; Cagney, Neil; Rodriguez, Gregorio; Micheletti, Martina; Ducci, Andrea
2018-03-01
In this study, two mode decomposition techniques were applied and compared to assess the flow dynamics in an orbital shaken bioreactor (OSB) of cylindrical geometry and flat bottom: proper orthogonal decomposition and dynamic mode decomposition. Particle Image Velocimetry (PIV) experiments were carried out for different operating conditions including fluid height, h, and shaker rotational speed, N. A detailed flow analysis is provided for conditions when the fluid and vessel motions are in-phase (Fr = 0.23) and out-of-phase (Fr = 0.47). PIV measurements in vertical and horizontal planes were combined to reconstruct low order models of the full 3D flow and to determine its Finite-Time Lyapunov Exponent (FTLE) within OSBs. The combined results from the mode decomposition and the FTLE fields provide a useful insight into the flow dynamics and Lagrangian coherent structures in OSBs and offer a valuable tool to optimise bioprocess design in terms of mixing and cell suspension.
NASA Astrophysics Data System (ADS)
Bo, Zheng; Hao, Han; Yang, Shiling; Zhu, Jinhui; Yan, Jianhua; Cen, Kefa
2018-04-01
This work reports the catalytic performance of vertically-oriented graphene (VG) supported manganese oxide catalysts toward toluene decomposition in a post plasma-catalysis (PPC) system. Dense networks of VGs were synthesized on carbon paper (CP) via a microwave plasma-enhanced chemical vapor deposition (PECVD) method. A constant-current approach was applied in a conventional three-electrode electrochemical system for the electrodeposition of Mn3O4 catalysts on VGs. The as-obtained catalysts were characterized and investigated for ozone conversion and toluene decomposition in a PPC system. Experimental results show that the Mn3O4 catalyst loading mass on VG-coated CP was significantly higher than that on pristine CP (almost 1.8 times for an electrodeposition current of 10 mA). Moreover, the decoration of VGs led to both enhanced catalytic activity for ozone conversion and increased toluene decomposition, exhibiting great promise in PPC systems for the effective decomposition of volatile organic compounds.
X-Ray Thomson Scattering Without the Chihara Decomposition
NASA Astrophysics Data System (ADS)
Magyar, Rudolph; Baczewski, Andrew; Shulenburger, Luke; Hansen, Stephanie B.; Desjarlais, Michael P.; Sandia National Laboratories Collaboration
X-Ray Thomson Scattering is an important experimental technique used in dynamic compression experiments to measure the properties of warm dense matter. The fundamental property probed in these experiments is the electronic dynamic structure factor that is typically modeled using an empirical three-term decomposition (Chihara, J. Phys. F, 1987). One of the crucial assumptions of this decomposition is that the system's electrons can be classified as either bound to ions or free. This decomposition may not be accurate for materials in the warm dense regime. We present unambiguous first principles calculations of the dynamic structure factor independent of the Chihara decomposition that can be used to benchmark these assumptions. Results are generated using finite-temperature real-time time-dependent density functional theory, applied for the first time under these conditions. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
NASA Astrophysics Data System (ADS)
Tjong, Tiffany; Yihaa’ Roodhiyah, Lisa; Nurhasan; Sutarno, Doddy
2018-04-01
In this work, an inversion scheme was performed using a vector finite element (VFE) based 2-D magnetotelluric (MT) forward modelling. We use an inversion scheme with the singular value decomposition (SVD) method to improve the accuracy of MT inversion. The inversion scheme was applied to the transverse electric (TE) mode of MT. The SVD method was used in this inversion to decompose the Jacobian matrices. Singular values obtained from the decomposition process were analyzed. This enabled us to determine the importance of the data and therefore to define a threshold for the truncation process. Truncating small singular values in the inversion process could improve the resulting model.
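The truncation idea can be illustrated with a minimal truncated-SVD solve of a linearized step J m = d. The Jacobian below is a hypothetical ill-conditioned toy, not the VFE-MT code itself:

```python
import numpy as np

def tsvd_solve(J, d, rel_threshold=1e-3):
    """Solve J m = d via truncated SVD: singular values below
    rel_threshold * s_max are discarded, damping unstable model directions."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    keep = s > rel_threshold * s[0]         # truncate small singular values
    s_inv = np.zeros_like(s)
    s_inv[keep] = 1.0 / s[keep]
    return Vt.T @ (s_inv * (U.T @ d)), int(keep.sum())

# ill-conditioned toy Jacobian: third column nearly dependent on the first
J = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0 + 1e-12]])
d = np.array([2.0, 1.0, 2.0])
m, rank = tsvd_solve(J, d)
print(rank)   # the near-null direction is truncated
```

Discarding the near-zero singular value removes the model direction to which the data are essentially insensitive, which is exactly the regularizing effect the truncation threshold provides.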
Catalytic effect on ultrasonic decomposition of cellulose
NASA Astrophysics Data System (ADS)
Nomura, Shinfuku; Wakida, Kousuke; Mukasa, Shinobu; Toyota, Hiromichi
2018-07-01
Cellulase used as a catalyst is introduced into the ultrasonic welding method for cellulose decomposition in order to obtain glucose. By adding cellulase in the welding process, filter-paper cellulose decomposes into glucose, 5-hydroxymethylfurfural (5-HMF), furfural, and oligosaccharides. The amount of glucose from hydrolysis was increased by ultrasonic welding of filter paper immersed in water. The most glucose was obtained with 100 W ultrasonic irradiation; however, when 200 W was applied, the glucose itself underwent dehydration and was converted into 5-HMF owing to ultrasonically induced thermolysis. Therefore, there is an optimum welding power for the production of glucose from cellulose decomposition.
Using domain decomposition in the multigrid NAS parallel benchmark on the Fujitsu VPP500
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, J.C.H.; Lung, H.; Katsumata, Y.
1995-12-01
In this paper, we demonstrate how domain decomposition can be applied to the multigrid algorithm to convert the code for MPP architectures. We also discuss the performance and scalability of this implementation on the new product line of Fujitsu's vector parallel computer, the VPP500. This computer uses Fujitsu's well-known vector processor as its processing element (PE), each rated at 1.6 GFLOPS. A high-speed crossbar network rated at 800 MB/s provides the inter-PE communication. The results show that physical domain decomposition is the best way to solve MG problems on the VPP500.
Watermarking scheme based on singular value decomposition and homomorphic transform
NASA Astrophysics Data System (ADS)
Verma, Deval; Aggarwal, A. K.; Agarwal, Himanshu
2017-10-01
A semi-blind watermarking scheme based on singular value decomposition (SVD) and homomorphic transform is proposed. This scheme ensures the digital security of an eight-bit gray-scale image by inserting an invisible eight-bit gray-scale watermark into it. The key approach of the scheme is to apply the homomorphic transform to the host image to obtain its reflectance component. The watermark is embedded into the singular values obtained by applying singular value decomposition to the reflectance component. Peak signal-to-noise ratio (PSNR), normalized correlation coefficient (NCC) and mean structural similarity index measure (MSSIM) are used to evaluate the performance of the scheme. Invisibility of the watermark is ensured by visual inspection and the high PSNR values of watermarked images. Presence of the watermark is ensured by visual inspection and high values of NCC and MSSIM for extracted watermarks. Robustness of the scheme is verified by high values of NCC and MSSIM for attacked watermarked images.
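A minimal sketch of the embed/extract idea follows, under loud simplifications: `log1p` stands in for the homomorphic reflectance step, and a monotone ramp watermark is used so the perturbed singular values stay in descending order (the published scheme embeds an image watermark and tunes the embedding strength):

```python
import numpy as np

def embed(host, watermark, alpha=0.05):
    """Embed a watermark into the singular values of the (log-domain)
    host image; returns the watermarked image and the key (original s)."""
    log_host = np.log1p(host.astype(float))       # crude homomorphic step
    U, s, Vt = np.linalg.svd(log_host)
    s_marked = s + alpha * watermark[: s.size]    # perturb singular values
    marked = U @ np.diag(s_marked) @ Vt
    return np.expm1(marked), s                    # keep original s as key

def extract(marked, s_key, alpha=0.05):
    """Semi-blind extraction using the stored singular values."""
    s_m = np.linalg.svd(np.log1p(marked), compute_uv=False)
    return (s_m - s_key) / alpha

rng = np.random.default_rng(0)
host = rng.integers(0, 256, size=(8, 8)).astype(float)
wm = np.linspace(1.0, 0.0, 8)                     # decreasing ramp watermark
marked, key = embed(host, wm)
recovered = extract(marked, key)
```

The ramp keeps `s + alpha * wm` sorted, so the SVD of the marked image returns the perturbed values in the same order and the extraction is exact up to floating-point error.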
NDMA formation by chloramination of ranitidine: kinetics and mechanism.
Roux, Julien Le; Gallard, Hervé; Croué, Jean-Philippe; Papot, Sébastien; Deborde, Marie
2012-10-16
The kinetics of decomposition of the pharmaceutical ranitidine (a major precursor of NDMA) during chloramination was investigated and some decomposition byproducts were identified by using high performance liquid chromatography coupled with mass spectrometry (HPLC-MS). The reaction between monochloramine and ranitidine followed second order kinetics and was acid-catalyzed. Decomposition of ranitidine formed different byproducts depending on the applied monochloramine concentration. Most identified products were chlorinated and hydroxylated analogues of ranitidine. In excess of monochloramine, nucleophilic substitution between ranitidine and monochloramine led to byproducts that are critical intermediates involved in the formation of NDMA, for example, a carbocation formed from the decomposition of the methylfuran moiety of ranitidine. A complete mechanism is proposed to explain the high formation yield of NDMA from chloramination of ranitidine. These results are of great importance to understand the formation of NDMA by chloramination of tertiary amines.
Campos, Xochi; Germino, Matthew; de Graaff, Marie-Anne
2017-01-01
Aims: Changing precipitation regimes in semiarid ecosystems will affect the balance of soil carbon (C) input and release, but the net effect on soil C storage is unclear. We asked how changes in the amount and timing of precipitation affect litter decomposition and soil C stabilization in semiarid ecosystems. Methods: The study took place at a long-term (18 years) ecohydrology experiment located in Idaho. Precipitation treatments consisted of a doubling of annual precipitation (+200 mm) added either in the cold dormant season or in the growing season. Experimental plots were planted with big sagebrush (Artemisia tridentata) or with crested wheatgrass (Agropyron cristatum). We quantified decomposition of sagebrush leaf litter, and we assessed soil organic C (SOC) in aggregates and in silt and clay fractions. Results: We found that: (1) increased precipitation applied in the growing season consistently enhanced decomposition rates relative to the ambient treatment, and (2) precipitation applied in the dormant season enhanced soil C stabilization. Conclusions: These data indicate that prolonged increases in precipitation can promote soil C storage in semiarid ecosystems, but only if these increases happen at times of the year when conditions allow precipitation to promote plant C input rates to soil.
Absolute continuity for operator valued completely positive maps on C∗-algebras
NASA Astrophysics Data System (ADS)
Gheondea, Aurelian; Kavruk, Ali Şamil
2009-02-01
Motivated by applicability to quantum operations, quantum information, and quantum probability, we investigate the notion of absolute continuity for operator valued completely positive maps on C∗-algebras, previously introduced by Parthasarathy [in Athens Conference on Applied Probability and Time Series Analysis I (Springer-Verlag, Berlin, 1996), pp. 34-54]. We obtain an intrinsic definition of absolute continuity, we show that the Lebesgue decomposition defined by Parthasarathy is the maximal one among all other Lebesgue-type decompositions and that this maximal Lebesgue decomposition does not depend on the jointly dominating completely positive map, we obtain more flexible formulas for calculating the maximal Lebesgue decomposition, and we point out the nonuniqueness of the Lebesgue decomposition as well as a sufficient condition for uniqueness. In addition, we consider Radon-Nikodym derivatives for absolutely continuous completely positive maps that, in general, are unbounded positive self-adjoint operators affiliated to a certain von Neumann algebra, and we obtain a spectral approximation by bounded Radon-Nikodym derivatives. An application to the existence of the infimum of two completely positive maps is indicated, and formulas in terms of Choi's matrices for the Lebesgue decomposition of completely positive maps in matrix algebras are obtained.
Soil organic matter decomposition follows plant productivity response to sea-level rise
NASA Astrophysics Data System (ADS)
Mueller, Peter; Jensen, Kai; Megonigal, James Patrick
2015-04-01
The accumulation of soil organic matter (SOM) is an important mechanism for many tidal wetlands to keep pace with sea-level rise. SOM accumulation is governed by the rates of production and decomposition of organic matter. While plant productivity responses to sea-level rise are well understood, far less is known about the response of SOM decomposition to accelerated sea-level rise. Here we quantified the effects of sea-level rise on SOM decomposition by exposing planted and unplanted tidal marsh monoliths to experimentally manipulated flood duration. The study was performed in a field-based mesocosm facility at the Smithsonian Global Change Research Wetland, a microtidal brackish marsh in Maryland, US. SOM decomposition was quantified as CO2 efflux, with plant- and SOM-derived CO2 separated using a stable carbon isotope approach. Despite the dogma that decomposition rates are inversely related to flooding, SOM mineralization was not sensitive to varying flood duration over a 35 cm range in surface elevation in unplanted mesocosms. In the presence of plants, decomposition rates were strongly and positively related to aboveground biomass (p≤0.01, R2≥0.59). We conclude that rates of soil carbon loss through decomposition are driven by plant responses to sea level in this intensively studied tidal marsh. If our result applies more generally to tidal wetlands, it has important implications for modeling carbon sequestration and marsh accretion in response to accelerated sea-level rise.
Yang, Chia Cheng; Chang, Shu Hao; Hong, Bao Zhen; Chi, Kai Hsien; Chang, Moo Been
2008-10-01
Development of effective PCDD/F (polychlorinated dibenzo-p-dioxin and dibenzofuran) control technologies is essential for environmental engineers and researchers. In this study, a PCDD/F-containing gas stream generating system was developed to investigate the efficiency and effectiveness of innovative PCDD/F control technologies. The system designed and constructed can stably generate the gas stream with the PCDD/F concentration ranging from 1.0 to 100 ng TEQ Nm^-3, while reproducibility tests indicate that the PCDD/F recovery efficiencies are between 93% and 112%. This new PCDD/F-containing gas stream generating device is first applied in the investigation of the catalytic PCDD/F control technology. The catalytic decomposition of PCDD/Fs was evaluated with two types of commercial V2O5-WO3/TiO2-based catalysts (catalyst A and catalyst B) at controlled temperature, water vapor content, and space velocity. 84% and 91% PCDD/F destruction efficiencies are achieved with catalysts A and B, respectively, at 280 °C with a space velocity of 5000 h^-1. The results also indicate that the presence of water vapor inhibits PCDD/F decomposition due to its competition with PCDD/F molecules for adsorption on the active vanadia sites for both catalysts. In addition, this study combined the integral reaction and Mars-Van Krevelen model to calculate the activation energies of OCDD and OCDF decomposition. The activation energies of OCDD and OCDF decomposition via catalysis are calculated as 24.8 kJ mol^-1 and 25.2 kJ mol^-1, respectively.
New spectrophotometric assay for pilocarpine.
El-Masry, S; Soliman, R
1980-07-01
A quick method for the determination of pilocarpine in eye drops in the presence of decomposition products is described. The method involves complexation of the alkaloid with bromocresol purple at pH 6. After treatment with 0.1N NaOH, the liberated dye is measured at 580 nm. The method has a relative standard deviation of 1.99%, and has been successfully applied to the analysis of 2 batches of pilocarpine eye drops. The recommended method was also used to monitor the stability of a pilocarpine nitrate solution in 0.05N NaOH at 65 degrees C. The BPC method failed to detect any significant decomposition after 2 h incubation, but the recommended method revealed 87.5% decomposition.
NASA Astrophysics Data System (ADS)
Udomsungworagul, A.; Charnsethikul, P.
2018-03-01
This article introduces a methodology to solve large-scale two-phase linear programming, with a case study of multiple-time-period animal diet problems under uncertainty in both the nutrient content of raw materials and finished-product demand. Assumptions allowing multiple product formulas to be manufactured in the same time period and allowing raw-material and finished-product inventories to be held have been added. Dantzig-Wolfe decomposition, Benders decomposition, and the column generation technique were combined and applied to solve the problem. The proposed procedure was programmed using VBA and the Solver tool in Microsoft Excel. A case study was used and tested in terms of efficiency and effectiveness trade-offs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Behrens, R.; Minier, L.; Bulusu, S.
1998-12-31
The time-dependent, solid-phase thermal decomposition behavior of 2,4-dinitroimidazole (2,4-DNI) has been measured utilizing simultaneous thermogravimetric modulated beam mass spectrometry (STMBMS) methods. The decomposition products consist of gaseous and non-volatile polymeric products. The temporal behavior of the gas formation rates of the identified products indicates that the overall thermal decomposition process is complex. In isothermal experiments with 2,4-DNI in the solid phase, four distinguishing features are observed: (1) elevated rates of gas formation are observed during the early stages of the decomposition, which appear to be correlated to the presence of exogenous water in the sample; (2) this is followed by a period of relatively constant rates of gas formation; (3) next, the rates of gas formation accelerate, characteristic of an autocatalytic reaction; (4) finally, the 2,4-DNI is depleted and gaseous decomposition products continue to evolve at a decreasing rate. A physicochemical and mathematical model of the decomposition of 2,4-DNI has been developed and applied to the experimental results. The first generation of this model is described in this paper. Differences between the first generation of the model and the experimental data collected under different conditions suggest refinements for the next generation of the model.
Double Bounce Component in Cross-Polarimetric SAR from a New Scattering Target Decomposition
NASA Astrophysics Data System (ADS)
Hong, Sang-Hoon; Wdowinski, Shimon
2013-08-01
Common vegetation scattering theories assume that the Synthetic Aperture Radar (SAR) cross-polarization (cross-pol) signal represents solely volume scattering. We found this assumption incorrect based on SAR phase measurements acquired over the south Florida Everglades wetlands indicating that the cross-pol radar signal often samples the water surface beneath the vegetation. Based on these new observations, we propose that the cross-pol measurement consists of both volume scattering and double bounce components. The simplest multi-bounce scattering mechanism that generates a cross-pol signal occurs by rotated dihedrals. Thus, we use the rotated dihedral mechanism with a probability density function to revise some of the vegetation scattering theories and develop a three-component decomposition algorithm with single bounce, double bounce from both co-pol and cross-pol, and volume scattering components. We applied the new decomposition analysis to both urban and rural environments using Radarsat-2 quad-pol datasets. The decomposition of San Francisco's urban area shows higher double bounce scattering and reduced volume scattering compared to other common three-component decompositions. The decomposition of the rural Everglades area shows that the relations between volume and cross-pol double bounce depend on the vegetation density. The new decomposition can be useful to better understand vegetation scattering behavior over various surfaces and for the estimation of above-ground biomass using SAR observations.
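For orientation, the coherent Pauli decomposition that such scattering models build on can be written in a few lines. This is the textbook Pauli basis on a single scattering matrix, not the authors' three-component model-based algorithm:

```python
import numpy as np

def pauli_components(S_hh, S_hv, S_vv):
    """Classic Pauli decomposition of a quad-pol scattering matrix:
    |a|^2 ~ single (odd) bounce, |b|^2 ~ double (even) bounce,
    |c|^2 ~ 45-degree-rotated dihedral / cross-pol (volume-related) term."""
    a = (S_hh + S_vv) / np.sqrt(2)
    b = (S_hh - S_vv) / np.sqrt(2)
    c = np.sqrt(2) * S_hv
    return np.abs(a) ** 2, np.abs(b) ** 2, np.abs(c) ** 2

# idealized trihedral (odd-bounce) target: S_hh = S_vv, S_hv = 0
p_odd, p_even, p_cross = pauli_components(1.0 + 0j, 0j, 1.0 + 0j)
```

The three powers sum to the total span |HH|^2 + 2|HV|^2 + |VV|^2, so the decomposition conserves power; applied per pixel it yields the familiar RGB Pauli image.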
Mechanical and Assembly Units of Viral Capsids Identified via Quasi-Rigid Domain Decomposition
Polles, Guido; Indelicato, Giuliana; Potestio, Raffaello; Cermelli, Paolo; Twarock, Reidun; Micheletti, Cristian
2013-01-01
Key steps in a viral life-cycle, such as self-assembly of a protective protein container or in some cases also subsequent maturation events, are governed by the interplay of physico-chemical mechanisms involving various spatial and temporal scales. These salient aspects of a viral life cycle are hence well described and rationalised from a mesoscopic perspective. Accordingly, various experimental and computational efforts have been directed towards identifying the fundamental building blocks that are instrumental for the mechanical response, or constitute the assembly units, of a few specific viral shells. Motivated by these earlier studies we introduce and apply a general and efficient computational scheme for identifying the stable domains of a given viral capsid. The method is based on elastic network models and quasi-rigid domain decomposition. It is first applied to a heterogeneous set of well-characterized viruses (CCMV, MS2, STNV, STMV) for which the known mechanical or assembly domains are correctly identified. The validated method is next applied to other viral particles such as L-A, Pariacoto and polyoma viruses, whose fundamental functional domains are still unknown or debated and for which we formulate verifiable predictions. The numerical code implementing the domain decomposition strategy is made freely available. PMID:24244139
Effect of applied strain on phase separation of Fe-28 at.% Cr alloy: 3D phase-field simulation
NASA Astrophysics Data System (ADS)
Zhu, Lihui; Li, Yongsheng; Liu, Chengwei; Chen, Shi; Shi, Shujing; Jin, Shengshun
2018-04-01
A quantitative simulation of the separation of the α′ phase in Fe-28 at.% Cr alloy under the effects of applied strain is performed by utilizing a three-dimensional phase-field model. The elongation of the Cr-enriched α′ phase becomes obvious under applied uniaxial strain as the phase separation mechanism changes from spinodal decomposition at 700 K to nucleation and growth at 773 K. The applied strain shows a significant influence on the early-stage phase separation, and the influence is enlarged at elevated temperature. The steady-state coarsening under the spinodal decomposition mechanism is substantially affected by the applied strain for low-temperature aging, while the influence is reduced as the temperature increases and the phase separation mechanism changes to nucleation and growth. Under applied strain, the peak value of the particle size distribution (PSD) decreases, and the PSD at 773 K broadens. The simulation results for the separation of the Cr-enriched α′ phase under applied strain provide a further understanding of the strain effect on the phase separation of Fe-Cr alloys from the metastable region to the spinodal region.
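The spinodal-decomposition mechanism discussed here is commonly modeled with the Cahn-Hilliard equation; below is a minimal 1-D explicit sketch with illustrative parameters, not the paper's quantitative 3-D Fe-Cr model with applied strain:

```python
import numpy as np

def cahn_hilliard_1d(c0, steps=40000, dt=1e-3, dx=1.0, W=1.0, kappa=1.0, M=1.0):
    """Minimal explicit 1-D Cahn-Hilliard integration with periodic
    boundaries: c_t = M * d2(mu)/dx2, mu = W*(c^3 - c) - kappa * c_xx."""
    c = c0.copy()
    for _ in range(steps):
        lap_c = (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx**2
        mu = W * (c**3 - c) - kappa * lap_c          # chemical potential
        lap_mu = (np.roll(mu, 1) - 2 * mu + np.roll(mu, -1)) / dx**2
        c = c + dt * M * lap_mu                       # conservative update
    return c

rng = np.random.default_rng(1)
c0 = 0.01 * rng.standard_normal(128)   # small fluctuation about c = 0
c = cahn_hilliard_1d(c0)
# fluctuations inside the spinodal region grow toward the +/-1 phases
```

Because the update is the Laplacian of a potential, the mean composition is conserved while the unstable long-wavelength modes amplify, which is the defining signature of spinodal decomposition.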
NASA Astrophysics Data System (ADS)
Wang, Xiaochen; Shao, Yun; Tian, Wei; Li, Kun
2018-06-01
This study explored different methodologies using a C-band RADARSAT-2 quad-polarized Synthetic Aperture Radar (SAR) image acquired over China's Yellow Sea to investigate polarization decomposition parameters for identifying mixed floating pollutants against a complex ocean background. It was found that a solitary polarization decomposition did not meet the demand for detecting and classifying multiple floating pollutants, even with quad-polarized SAR imagery. Furthermore, considering that Yamaguchi decomposition is sensitive to vegetation and the algal variety Enteromorpha prolifera, while H/A/alpha decomposition is sensitive to oil spills, a combination of parameters deduced from these two decompositions was proposed for marine environmental monitoring of mixed floating sea-surface pollutants. A combination of volume scattering, surface scattering, and scattering entropy was the best indicator for classifying mixed floating pollutants against a complex ocean background. The Kappa coefficients for Enteromorpha prolifera and oil spills were 0.7514 and 0.8470, respectively, evidence that the composite polarimetric parameters based on quad-polarized SAR imagery proposed in this research constitute an effective monitoring method for complex marine pollution.
A roadmap for bridging basic and applied research in forensic entomology.
Tomberlin, J K; Mohr, R; Benbow, M E; Tarone, A M; VanLaerhoven, S
2011-01-01
The National Research Council issued a report in 2009 that heavily criticized the forensic sciences. The report made several recommendations that if addressed would allow the forensic sciences to develop a stronger scientific foundation. We suggest a roadmap for decomposition ecology and forensic entomology hinging on a framework built on basic research concepts in ecology, evolution, and genetics. Unifying both basic and applied research fields under a common umbrella of terminology and structure would facilitate communication in the field and the production of scientific results. It would also help to identify novel research areas leading to a better understanding of principal underpinnings governing ecosystem structure, function, and evolution while increasing the accuracy of and ability to interpret entomological evidence collected from crime scenes. By following the proposed roadmap, a bridge can be built between basic and applied decomposition ecology research, culminating in science that could withstand the rigors of emerging legal and cultural expectations.
Senroy, Nilanjan [New Delhi, IN; Suryanarayanan, Siddharth [Littleton, CO
2011-03-15
A computer-implemented method of signal processing is provided. The method includes generating one or more masking signals based upon a computed Fourier transform of a received signal. The method further includes determining one or more intrinsic mode functions (IMFs) of the received signal by performing a masking-signal-based empirical mode decomposition (EMD) using the one or more masking signals.
The use of the modified Cholesky decomposition in divergence and classification calculations
NASA Technical Reports Server (NTRS)
Van Rooy, D. L.; Lynn, M. S.; Snyder, C. H.
1973-01-01
The use of the modified Cholesky decomposition technique is analyzed as applied to the feature selection and classification algorithms used in the analysis of remote sensing data (e.g., as in LARSYS). This technique is approximately 30% faster in classification and a factor of 2-3 faster in divergence, as compared with LARSYS. Numerical stability and accuracy are also slightly improved. Other methods necessary to deal with numerical stability problems are briefly discussed.
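The speedup comes from replacing explicit covariance inverses with triangular solves. A minimal Gaussian maximum-likelihood classifier sketch using a (standard, not modified) Cholesky factorization, with hypothetical class statistics:

```python
import numpy as np

def gaussian_discriminant(x, mean, cov):
    """Gaussian ML classifier kernel: Mahalanobis^2 + log|cov|,
    computed with a Cholesky factorization instead of an explicit inverse."""
    L = np.linalg.cholesky(cov)              # cov = L @ L.T
    z = np.linalg.solve(L, x - mean)         # triangular solve
    log_det = 2.0 * np.sum(np.log(np.diag(L)))
    return z @ z + log_det                   # z@z == (x-m)^T cov^-1 (x-m)

def classify(x, class_stats):
    """Assign x to the class minimizing the discriminant."""
    scores = [gaussian_discriminant(x, m, c) for m, c in class_stats]
    return int(np.argmin(scores))

# two hypothetical spectral classes
stats = [(np.array([0.0, 0.0]), np.eye(2)),
         (np.array([3.0, 3.0]), np.eye(2))]
print(classify(np.array([2.9, 3.2]), stats))   # → 1
```

Since cov^-1 = L^-T L^-1, the triangular solve gives the Mahalanobis distance without forming the inverse, and the log-determinant falls out of the factor's diagonal for free.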
A two-stage linear discriminant analysis via QR-decomposition.
Ye, Jieping; Li, Qi
2005-06-01
Linear Discriminant Analysis (LDA) is a well-known method for feature extraction and dimension reduction. It has been used widely in many applications involving high-dimensional data, such as image and text classification. An intrinsic limitation of classical LDA is the so-called singularity problems; that is, it fails when all scatter matrices are singular. Many LDA extensions were proposed in the past to overcome the singularity problems. Among these extensions, PCA+LDA, a two-stage method, received relatively more attention. In PCA+LDA, the LDA stage is preceded by an intermediate dimension reduction stage using Principal Component Analysis (PCA). Most previous LDA extensions are computationally expensive, and not scalable, due to the use of Singular Value Decomposition or Generalized Singular Value Decomposition. In this paper, we propose a two-stage LDA method, namely LDA/QR, which aims to overcome the singularity problems of classical LDA, while achieving efficiency and scalability simultaneously. The key difference between LDA/QR and PCA+LDA lies in the first stage, where LDA/QR applies QR decomposition to a small matrix involving the class centroids, while PCA+LDA applies PCA to the total scatter matrix involving all training data points. We further justify the proposed algorithm by showing the relationship among LDA/QR and previous LDA methods. Extensive experiments on face images and text documents are presented to show the effectiveness of the proposed algorithm.
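A minimal sketch of the two stages on hypothetical data follows: QR on the class-centroid matrix, then the small reduced LDA problem. The published algorithm includes refinements omitted here, and the tiny ridge term below is an assumption added for numerical safety:

```python
import numpy as np

def lda_qr(X, y):
    """Two-stage LDA/QR sketch: (1) QR-factorize the class-centroid matrix
    for an orthonormal basis Q; (2) solve the small reduced LDA problem
    in that basis. X is (n_samples, n_features)."""
    classes = np.unique(y)
    C = np.stack([X[y == k].mean(axis=0) for k in classes], axis=1)
    Q, _ = np.linalg.qr(C)                  # stage 1: QR on centroids (d x k)
    Z = X @ Q                               # project data down to k dims
    mean = Z.mean(axis=0)
    Sb = np.zeros((Q.shape[1], Q.shape[1]))
    Sw = np.zeros_like(Sb)
    for k in classes:
        Zk = Z[y == k]
        d = Zk.mean(axis=0) - mean
        Sb += len(Zk) * np.outer(d, d)      # between-class scatter
        Sw += (Zk - Zk.mean(axis=0)).T @ (Zk - Zk.mean(axis=0))
    # stage 2: small eigenproblem Sw^-1 Sb in the reduced space
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + 1e-8 * np.eye(len(Sw)), Sb))
    order = np.argsort(evals.real)[::-1]
    return Q @ evecs.real[:, order[: len(classes) - 1]]   # d x (k-1)

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 0.1, (20, 50)),   # high-dimensional data,
               rng.normal(0.5, 0.1, (20, 50))])  # few samples per class
y = np.array([0] * 20 + [1] * 20)
G = lda_qr(X, y)          # works although the full-space Sw is singular
proj = X @ G
```

With 40 samples in 50 dimensions the full within-class scatter is singular, so classical LDA fails; the reduced problem is only k x k and remains well posed.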
López-Rodríguez, Patricia; Escot-Bocanegra, David; Fernández-Recio, Raúl; Bravo, Ignacio
2015-01-01
Radar high-resolution range profiles are widely used among the target recognition community for the detection and identification of flying targets. In this paper, singular value decomposition is applied to extract the relevant information and to model each aircraft as a subspace. The identification algorithm is based on the angle between subspaces and takes place in a transformed domain. In order to have a wide database of radar signatures and to evaluate performance, simulated range profiles are used as the recognition database, while the test samples comprise actual range profiles collected in a measurement campaign. Thanks to the modeling of aircraft as subspaces, only the valuable information of each target is used in the recognition process. Thus, one of the main advantages of using singular value decomposition is that it helps to overcome the notable dissimilarities in shape and signal-to-noise ratio between actual and simulated profiles due to their difference in nature. Despite these differences, the recognition rates obtained with the algorithm are quite promising. PMID:25551484
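The angle-between-subspaces classifier can be sketched with QR plus SVD (principal angles). The signature subspaces and test profile below are hypothetical stand-ins for the simulated and measured range profiles:

```python
import numpy as np

def subspace_angle(A, B):
    """Smallest principal angle between the column spans of A and B:
    the singular values of Qa^T Qb are the cosines of the principal angles."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.arccos(np.clip(s[0], -1.0, 1.0))   # largest cosine -> smallest angle

def identify(test_profile, target_subspaces):
    """Assign the range profile to the closest target subspace."""
    angles = [subspace_angle(test_profile[:, None], S) for S in target_subspaces]
    return int(np.argmin(angles))

# hypothetical 2-D signature subspaces inside a 6-D profile space
S1 = np.array([[1., 0.], [0., 1.], [0., 0.], [0., 0.], [0., 0.], [0., 0.]])
S2 = np.array([[0., 0.], [0., 0.], [0., 0.], [0., 0.], [1., 0.], [0., 1.]])
x = np.array([0.9, 0.4, 0.05, 0.0, 0.0, 0.1])   # mostly in span(S1)
print(identify(x, [S1, S2]))   # → 0
```

Because the comparison is between subspaces rather than raw profiles, global amplitude differences between simulated and measured data cancel out, which is the robustness property the abstract highlights.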
NASA Astrophysics Data System (ADS)
Pandey, Anil; Niwa, Syunta; Morii, Yoshinari; Ikezawa, Shunjiro
2012-10-01
In order to decompose CO2 and NOx [1], we have developed the large-flow atmospheric microwave plasma (LAMP) [2]. It is very important to apply it to industrial innovation, so we have studied applying the LAMP in motorcars. The characteristics of the developed LAMP are that the apparatus is cheap and the decomposition efficiencies for CO2 and NOx are high. A vertical configuration between the exhaust-gas pipe and the waveguide was shown to be suitable [2]. The system was set up in the car body with a battery and an inverter; the battery is shared between the engine and the inverter. In motorcar applications the flow is large, so the LAMP, with its merits of large flow capacity, highly efficient decomposition, and cheap apparatus, will be superior. [1] H. Barankova, L. Bardos, ISSP 2011, Kyoto. [2] S. Ikezawa, S. Parajulee, S. Sharma, A. Pandey, ISSP 2011, Kyoto (2011) pp. 28-31; S. Ikezawa, S. Niwa, Y. Morii, JJAP meeting 2012, March 16, Waseda U. (2012).
NASA Astrophysics Data System (ADS)
Bai, X. T.; Wu, Y. H.; Zhang, K.; Chen, C. Z.; Yan, H. P.
2017-12-01
This paper mainly focuses on the calculation and analysis of the radiation noise of the angular contact ball bearing applied in the ceramic motorized spindle. A dynamic model containing the main working conditions and structural parameters is established based on the dynamic theory of rolling bearings. The sub-source decomposition method is introduced for the calculation of the radiation noise of the bearing, and a comparative experiment is adopted to check the precision of the method. The contributions of the different components are then compared in the frequency domain based on the sub-source decomposition method. The radiation-noise spectra of the different components under various rotation speeds are used as the basis for assessing the contribution of different eigenfrequencies to the radiation noise of the components, and the proportions of friction noise and impact noise are evaluated as well. The results of the research provide a theoretical basis for the calculation of bearing noise and offer a reference on the impact of the different components on the radiation noise of the bearing at different rotation speeds.
Sparse Solution of Fiber Orientation Distribution Function by Diffusion Decomposition
Yeh, Fang-Cheng; Tseng, Wen-Yih Isaac
2013-01-01
Fiber orientation is the key information in diffusion tractography. Several deconvolution methods have been proposed to obtain fiber orientations by estimating a fiber orientation distribution function (ODF). However, the L2 regularization used in deconvolution often leads to false fibers that compromise the specificity of the results. To address this problem, we propose a method called diffusion decomposition, which obtains a sparse solution of fiber ODF by decomposing the diffusion ODF obtained from q-ball imaging (QBI), diffusion spectrum imaging (DSI), or generalized q-sampling imaging (GQI). A simulation study, a phantom study, and an in-vivo study were conducted to examine the performance of diffusion decomposition. The simulation study showed that diffusion decomposition was more accurate than both constrained spherical deconvolution and ball-and-sticks model. The phantom study showed that the angular error of diffusion decomposition was significantly lower than those of constrained spherical deconvolution at 30° crossing and ball-and-sticks model at 60° crossing. The in-vivo study showed that diffusion decomposition can be applied to QBI, DSI, or GQI, and the resolved fiber orientations were consistent regardless of the diffusion sampling schemes and diffusion reconstruction methods. The performance of diffusion decomposition was further demonstrated by resolving crossing fibers on a 30-direction QBI dataset and a 40-direction DSI dataset. In conclusion, diffusion decomposition can improve angular resolution and resolve crossing fibers in datasets with low SNR and substantially reduced number of diffusion encoding directions. These advantages may be valuable for human connectome studies and clinical research. PMID:24146772
NASA Astrophysics Data System (ADS)
Wang, Lynn T.-N.; Madhavan, Sriram
2018-03-01
A pattern matching and rule-based polygon clustering methodology with DFM scoring is proposed to detect decomposition-induced manufacturability detractors and fix the layout designs prior to manufacturing. A pattern matcher scans the layout for pre-characterized patterns from a library. If a pattern is detected, rule-based clustering identifies the neighboring polygons that interact with those captured by the pattern. Then, DFM scores are computed for the possible layout fixes: the fix with the best score is applied. The proposed methodology was applied to two 20nm products with a chip area of 11 mm2 on the metal 2 layer. All the hotspots were resolved. The number of DFM spacing violations decreased by 7-15%.
Error reduction in EMG signal decomposition
Kline, Joshua C.
2014-01-01
Decomposition of the electromyographic (EMG) signal into constituent action potentials and the identification of individual firing instances of each motor unit in the presence of ambient noise are inherently probabilistic processes, whether performed manually or with automated algorithms. Consequently, they are subject to errors. We set out to classify and reduce these errors by analyzing 1,061 motor-unit action-potential trains (MUAPTs), obtained by decomposing surface EMG (sEMG) signals recorded during human voluntary contractions. Decomposition errors were classified into two general categories: location errors representing variability in the temporal localization of each motor-unit firing instance and identification errors consisting of falsely detected or missed firing instances. To mitigate these errors, we developed an error-reduction algorithm that combines multiple decomposition estimates to determine a more probable estimate of motor-unit firing instances with fewer errors. The performance of the algorithm is governed by a trade-off between the yield of MUAPTs obtained above a given accuracy level and the time required to perform the decomposition. When applied to a set of sEMG signals synthesized from real MUAPTs, the identification error was reduced by an average of 1.78%, improving the accuracy to 97.0%, and the location error was reduced by an average of 1.66 ms. The error-reduction algorithm in this study is not limited to any specific decomposition strategy. Rather, we propose it be used for other decomposition methods, especially when analyzing precise motor-unit firing instances, as occurs when measuring synchronization. PMID:25210159
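The error-reduction idea above, combining multiple decomposition estimates into a more probable set of motor-unit firing instances, can be illustrated as a simple voting scheme. This is a reconstruction for illustration only, not the authors' algorithm; the function name, tolerance window, and vote threshold are assumptions:

```python
import numpy as np

def combine_firing_estimates(estimates, tol_ms=2.0, min_votes=2):
    """Combine several firing-instance estimates (times in ms) for one
    motor unit: keep instants detected by at least `min_votes` estimates
    within a +/- tol_ms window, reporting their average location."""
    all_times = np.sort(np.concatenate([np.asarray(e, float) for e in estimates]))
    used = np.zeros(len(all_times), dtype=bool)
    combined = []
    for i, t in enumerate(all_times):
        if used[i]:
            continue
        # gather every detection (from any estimate) close to this instant
        close = np.abs(all_times - t) <= tol_ms
        # count how many independent estimates contain a nearby detection
        votes = sum(np.any(np.abs(np.asarray(e) - t) <= tol_ms) for e in estimates)
        if votes >= min_votes:
            combined.append(float(all_times[close].mean()))
        used |= close
    return combined
```

A detection supported by only one estimate (e.g. a falsely detected firing) is discarded, which is the intuition behind the reduced identification error reported above.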
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matuttis, Hans-Georg; Wang, Xiaoxing
Decomposition methods of the Suzuki-Trotter type of various orders have been derived in different fields. Applying them to both classical ordinary differential equations (ODEs) and quantum systems allows one to judge their effectiveness and gives new insights for many-body quantum mechanics, where reference data are scarce. Further, based on data for a 6 × 6 system, we conclude that sampling with sign (minus-sign problem) is probably detrimental to the accuracy of fermionic simulations with determinant algorithms.
Electrochemical Protection of Thin Film Electrodes in Solid State Nanopores
Harrer, Stefan; Waggoner, Philip S.; Luan, Binquan; Afzali-Ardakani, Ali; Goldfarb, Dario L.; Peng, Hongbo; Martyna, Glenn; Rossnagel, Stephen M.; Stolovitzky, Gustavo A.
2011-01-01
We have eliminated electrochemical surface oxidation and reduction as well as water decomposition inside sub-5-nm wide nanopores in conducting TiN membranes using a surface passivation technique. Nanopore ionic conductances, and therefore pore diameters, were unchanged in passivated pores after applying potentials of ±4.5 V for as long as 24 h. Water decomposition was eliminated by using aqueous 90% glycerol solvent. The use of a protective self-assembled monolayer of hexadecylphosphonic acid was also investigated. PMID:21597142
Near-lossless multichannel EEG compression based on matrix and tensor decompositions.
Dauwels, Justin; Srinivasan, K; Reddy, M Ramasubba; Cichocki, Andrzej
2013-05-01
A novel near-lossless compression algorithm for multichannel electroencephalogram (MC-EEG) is proposed based on matrix/tensor decomposition models. MC-EEG is represented in suitable multiway (multidimensional) forms to efficiently exploit temporal and spatial correlations simultaneously. Several matrix/tensor decomposition models are analyzed in view of efficient decorrelation of the multiway forms of MC-EEG. A compression algorithm is built based on the principle of “lossy plus residual coding,” consisting of a matrix/tensor decomposition-based coder in the lossy layer followed by arithmetic coding in the residual layer. This approach guarantees a specifiable maximum absolute error between original and reconstructed signals. The compression algorithm is applied to three different scalp EEG datasets and an intracranial EEG dataset, each with different sampling rate and resolution. The proposed algorithm achieves attractive compression ratios compared to compressing individual channels separately. For similar compression ratios, the proposed algorithm achieves nearly fivefold lower average error compared to a similar wavelet-based volumetric MC-EEG compression algorithm.
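The "lossy plus residual coding" principle above, which guarantees a specifiable maximum absolute error, can be sketched for integer-valued samples: quantize the residual between the signal and any lossy approximation with an odd step, and the reconstruction error is bounded. This is a minimal sketch of the residual layer only; the function names and the choice of lossy approximation are assumptions, and a real coder would entropy-code the symbols:

```python
import numpy as np

def near_lossless_encode(x, lossy_approx, max_err):
    """Quantize the residual with step 2*max_err + 1; for integer data this
    bounds the absolute reconstruction error by max_err."""
    residual = x - lossy_approx
    step = 2 * max_err + 1
    return np.round(residual / step).astype(np.int64)  # residual-layer symbols

def near_lossless_decode(lossy_approx, q, max_err):
    """Add the dequantized residual back onto the lossy reconstruction."""
    step = 2 * max_err + 1
    return lossy_approx + q * step
```

Because the residual is an integer and the quantizer step is odd, the nearest multiple of the step is always within max_err of the true residual, which is exactly the "specifiable maximum absolute error" property.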
Guo, Feng; Cheng, Xin-lu; Zhang, Hong
2012-04-12
Whether proton dissociation or C-N bond scission is the first step in the decomposition of nitromethane is a controversial issue. We applied reactive force field (ReaxFF) molecular dynamics to probe the initial decomposition mechanisms of nitromethane. By comparing impact simulations on the (010) surface with non-impact (heating only) simulations, we found that proton dissociation is the first step of the pyrolysis of nitromethane; the C-N bond decomposes on the same time scale in the impact simulations, but in the non-impact simulation C-N bond dissociation takes place at a later time. At the end of these simulations, a large number of clusters are formed. By analyzing the trajectories, we discuss the role of the hydrogen bond in the initial process of nitromethane decomposition, the intermediates observed early in the simulations, and the formation of clusters consisting of C-N-C-N chain/ring structures.
Lin, Mu-Chien; Kao, Jui-Chung
2016-04-15
Bioremediation is currently extensively employed in the elimination of coastal oil pollution, but it is not very effective, as the process takes several months to degrade oil. Among the components of oil, benzene is difficult to degrade due to its stable characteristics. This paper describes an experimental study on the decomposition of benzene by nanoscale titanium dioxide (TiO2) photocatalysis. The photocatalyst is illuminated with 360-nm ultraviolet light to generate peroxide ions, which results in complete decomposition of benzene into CO2 and H2O. In this study, a nonwoven fabric is coated with the photocatalyst and benzene. Using the Double-Shot Py-GC system on the residual component, complete decomposition of the benzene was verified after 4 h of exposure to ultraviolet light. The method proposed in this study can be directly applied to the elimination of marine oil pollution. Further studies will be conducted on coastal oil pollution in situ. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Luo, Hongyuan; Wang, Deyun; Yue, Chenqiang; Liu, Yanling; Guo, Haixiang
2018-03-01
In this paper, a hybrid decomposition-ensemble learning paradigm combining error correction is proposed for improving the forecast accuracy of daily PM10 concentration. The proposed learning paradigm consists of the following two sub-models: (1) a PM10 concentration forecasting model; (2) an error correction model. In the proposed model, fast ensemble empirical mode decomposition (FEEMD) and variational mode decomposition (VMD) are applied to disassemble the original PM10 concentration series and the error sequence, respectively. The extreme learning machine (ELM) model optimized by the cuckoo search (CS) algorithm is utilized to forecast the components generated by FEEMD and VMD. In order to prove the effectiveness and accuracy of the proposed model, two real-world PM10 concentration series, collected from Beijing and Harbin in China, are adopted to conduct the empirical study. The results show that the proposed model performs remarkably better than all other considered models without error correction, which indicates the superior performance of the proposed model.
Rotational-path decomposition based recursive planning for spacecraft attitude reorientation
NASA Astrophysics Data System (ADS)
Xu, Rui; Wang, Hui; Xu, Wenming; Cui, Pingyuan; Zhu, Shengying
2018-02-01
Spacecraft reorientation is a common task in many space missions. With multiple pointing constraints, the constrained spacecraft reorientation planning problem is very difficult to solve. To deal with this problem, an efficient rotational-path decomposition based recursive planning (RDRP) method is proposed in this paper. A uniform attitude rotation planning process first solves all rotations while ignoring pointing constraints. The whole path is then checked node by node. If any pointing constraint is violated, the nearest critical increment approach is used to generate feasible alternative nodes in the process of rotational-path decomposition. As the planned path of each subdivision may still violate pointing constraints, multiple decompositions may be needed, and the reorientation planning is designed in a recursive manner. Simulation results demonstrate the effectiveness of the proposed method. The proposed method has been successfully applied onboard the two SPARK microsatellites, developed by the Shanghai Engineering Center for Microsatellites and launched on 22 December 2016, to solve the constrained attitude reorientation planning problem.
Layout decomposition of self-aligned double patterning for 2D random logic patterning
NASA Astrophysics Data System (ADS)
Ban, Yongchan; Miloslavsky, Alex; Lucas, Kevin; Choi, Soo-Han; Park, Chul-Hong; Pan, David Z.
2011-04-01
Self-aligned double patterning (SADP) has been adopted as a promising solution for sub-30nm technology nodes due to its lower overlay problem and better process tolerance. SADP is in production use for 1D dense patterns with good pitch control, such as NAND Flash memory applications, but it is still challenging to apply SADP to 2D random logic patterns. The favored type of SADP for complex logic interconnects is a two-mask approach using a core mask and a trim mask. In this paper, we first describe layout decomposition methods for spacer-type double patterning lithography, then report a type of SADP-compliant layout, and finally report SADP applications on a Samsung 22nm SRAM layout. For SADP decomposition, we propose several SADP-aware layout coloring algorithms and a method of generating lithography-friendly core mask patterns. Experimental results on 22nm node designs show that our proposed layout decomposition for SADP effectively decomposes any given layouts.
van Daalen, Marjolijn A; de Kat, Dorothée S; Oude Grotebevelsborg, Bernice F L; de Leeuwe, Roosje; Warnaar, Jeroen; Oostra, Roelof Jan; M Duijst-Heesters, Wilma L J
2017-03-01
This study aimed to develop an aquatic decomposition scoring (ADS) method and investigated the predictive value of this method in estimating the postmortem submersion interval (PMSI) of bodies recovered from the North Sea. This method, consisting of an ADS item list and a pictorial reference atlas, showed a high interobserver agreement (Krippendorff's alpha ≥ 0.93) and hence proved to be valid. The scoring method was applied to data collected from closed cases, i.e., cases in which the PMSI was known, concerning bodies recovered from the North Sea from 1990 to 2013. Thirty-eight cases met the inclusion criteria and were scored by quantifying the observed total aquatic decomposition score (TADS). Statistical analysis demonstrated that TADS accurately predicts the PMSI (p < 0.001), confirming that the decomposition process in the North Sea is strongly correlated to time. © 2017 American Academy of Forensic Sciences.
Pernin, Céline; Cortet, Jérôme; Joffre, Richard; Le Petit, Jean; Torre, Franck
2006-01-01
Effects of sewage sludge on litter mesofauna communities (Collembola and Acari) and cork oak (Quercus suber L.) leaf litter decomposition have been studied during 18 mo using litterbags in an in situ experimental forest firebreak in southeastern France. The sludge (2.74 t DM ha(-1) yr(-1)) was applied to fertilize and maintain a pasture created on the firebreak. Litterbag colonization had similar dynamics on both the control and fertilized plots and followed a typical Mediterranean pattern showing a greater abundance in spring and autumn and a lower abundance in summer. After 9 mo of litter colonization, Collembola and Acari, but mainly Oribatida, were more abundant on the sludge-fertilized plot. Leaf litter decomposition showed a similar pattern on both plots, but it was faster on the control plot. Furthermore, leaves from the fertilized plot were characterized by greater nitrogen content. Both chemical composition of leaves and sludges and the decomposition state of leaves have significantly affected the mesofauna community composition from each plot.
Singer, S S
1985-08-01
(Hydroxyalkyl)nitrosoureas and the related cyclic carbamates N-nitrosooxazolidones are potent carcinogens. The decompositions of four such compounds, 1-nitroso-1-(2-hydroxyethyl)urea (I), 3-nitrosooxazolid-2-one (II), 1-nitroso-1-(2-hydroxypropyl)urea (III), and 5-methyl-3-nitrosooxazolid-2-one (IV), in aqueous buffers at physiological pH were studied to determine if any obvious differences in decomposition pathways could account for the variety of tumors obtained from these four compounds. The products predicted by the literature mechanisms for nitrosourea and nitrosooxazolidone decompositions (which were derived from experiments at pH 10-12) were indeed the products formed, including glycols, active carbonyl compounds, epoxides, and, from the oxazolidones, cyclic carbonates. Furthermore, it was shown that in pH 6.4-7.4 buffer epoxides were stable reaction products. However, in the presence of hepatocytes, most of the epoxide was converted to glycol. The analytical methods developed were then applied to the analysis of the decomposition products of some related dialkylnitrosoureas, and similar results were obtained. The formation of chemically reactive secondary products and the possible relevance of these results to carcinogenesis studies are discussed.
Liu, Zhichao; Wu, Qiong; Zhu, Weihua; Xiao, Heming
2015-04-28
Density functional theory with dispersion correction (DFT-D) was employed to study the effects of vacancy and pressure on the structure and initial decomposition of crystalline 5-nitro-2,4-dihydro-3H-1,2,4-triazol-3-one (β-NTO), a high-energy insensitive explosive. A comparative analysis of the chemical behaviors of NTO in the ideal bulk crystal and in vacancy-containing crystals under applied hydrostatic compression was carried out. Our calculated formation energies, vacancy interaction energies, electron density differences, and frontier orbitals reveal that the stability of NTO can be effectively manipulated by changing the molecular environment. Bimolecular hydrogen transfer is suggested to be a potential initial chemical reaction in the vacancy-containing NTO solid at 50 GPa, in contrast to the gas phase, where C-NO2 bond dissociation is the initial decomposition step. The vacancy defects introduced into the ideal bulk NTO crystal can produce a localized site where the initial decomposition is preferentially accelerated and then promotes further decomposition. Our results may shed some light on the influence of the molecular environment on the initial decomposition pathways in molecular explosives.
Improving 3D Wavelet-Based Compression of Hyperspectral Images
NASA Technical Reports Server (NTRS)
Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh
2009-01-01
Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, images signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. 
The resulting data are converted to sign-magnitude form and compressed in a manner similar to that of a baseline hyperspectral-image-compression method. The mean values are encoded in the compressed bit stream and added back to the data at the appropriate decompression step. The overhead incurred by encoding the mean values (only a few bits per spectral band) is negligible with respect to the huge size of a typical hyperspectral data set. The other method is denoted modified decomposition. This method is so named because it involves a modified version of a commonly used multiresolution wavelet decomposition, known in the art as the 3D Mallat decomposition, in which (a) the first of multiple stages of a 3D wavelet transform is applied to the entire dataset and (b) subsequent stages are applied only to the horizontally-, vertically-, and spectrally-low-pass subband from the preceding stage. In the modified decomposition, in stages after the first, not only is the spatially-low-pass, spectrally-low-pass subband further decomposed, but also spatially-low-pass, spectrally-high-pass subbands are further decomposed spatially. Either method can be used alone to improve the quality of a reconstructed image (see figure). Alternatively, the two methods can be combined by first performing modified decomposition, then subtracting the mean values from spatial planes of spatially-low-pass subbands.
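The mean-subtraction step described above can be sketched for a subband stored as a (bands, rows, cols) array: compute the mean of each spatial plane, subtract it, and keep the means as side information for the decoder. The function name and array layout are assumptions made for illustration:

```python
import numpy as np

def mean_subtract(subband):
    """Subtract the mean of each spatial plane of a spatially-low-pass
    subband, returning the zero-mean data plus the per-plane means
    (side information to be encoded in the bit stream)."""
    means = subband.mean(axis=(1, 2))            # one mean per spectral plane
    centered = subband - means[:, None, None]     # zero-mean spatial planes
    return centered, means
```

Decompression simply adds `means` back, so the only overhead is one encoded mean per spectral plane, matching the negligible-overhead argument above.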
A Four-Stage Hybrid Model for Hydrological Time Series Forecasting
Di, Chongli; Yang, Xiaohua; Wang, Xiaochao
2014-01-01
Hydrological time series forecasting remains a difficult task due to its complicated nonlinear, non-stationary and multi-scale characteristics. To solve this difficulty and improve the prediction accuracy, a novel four-stage hybrid model is proposed for hydrological time series forecasting based on the principle of ‘denoising, decomposition and ensemble’. The proposed model has four stages, i.e., denoising, decomposition, components prediction and ensemble. In the denoising stage, the empirical mode decomposition (EMD) method is utilized to reduce the noises in the hydrological time series. Then, an improved method of EMD, the ensemble empirical mode decomposition (EEMD), is applied to decompose the denoised series into a number of intrinsic mode function (IMF) components and one residual component. Next, the radial basis function neural network (RBFNN) is adopted to predict the trend of all of the components obtained in the decomposition stage. In the final ensemble prediction stage, the forecasting results of all of the IMF and residual components obtained in the third stage are combined to generate the final prediction results, using a linear neural network (LNN) model. For illustration and verification, six hydrological cases with different characteristics are used to test the effectiveness of the proposed model. The proposed hybrid model performs better than conventional single models, the hybrid models without denoising or decomposition and the hybrid models based on other methods, such as the wavelet analysis (WA)-based hybrid models. In addition, the denoising and decomposition strategies decrease the complexity of the series and reduce the difficulties of the forecasting. With its effective denoising and accurate decomposition ability, high prediction precision and wide applicability, the new model is very promising for complex time series forecasting. This new forecast model is an extension of nonlinear prediction models. PMID:25111782
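The "decomposition and ensemble" principle of the hybrid model above can be sketched in a few lines: decompose the series into components, forecast each component separately, and sum the component forecasts. Here a moving-average trend/residual split stands in for the EEMD components and a naive persistence rule stands in for the RBFNN and LNN forecasters, so this is only an illustration of the pipeline shape, not the paper's model:

```python
import numpy as np

def decompose_forecast_ensemble(series, window=12, horizon=1):
    """Decompose into trend + residual, forecast each component with a
    persistence rule, and sum the component forecasts (the 'ensemble')."""
    kernel = np.ones(window) / window
    trend = np.convolve(series, kernel, mode="same")  # smooth component
    resid = series - trend                            # remaining component
    # naive per-component forecasts: repeat the last observed value
    trend_fc = np.full(horizon, trend[-1])
    resid_fc = np.full(horizon, resid[-1])
    return trend_fc + resid_fc
```

In the actual model each EEMD component would get its own trained predictor; the point of the sketch is that the final forecast is a recombination of per-component forecasts.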
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Supriya; Srivastava, Pratibha; Singh, Gurdip, E-mail: gsingh4us@yahoo.com
2013-02-15
Graphical abstract: The prepared nanoferrites were characterized by FE-SEM and bright-field TEM micrographs. The catalytic effect of these nanoferrites on the thermal decomposition of ammonium perchlorate was evaluated using TG and TG–DSC techniques. The kinetics of thermal decomposition of AP were evaluated from isothermal TG data by model-fitting as well as isoconversional methods. Highlights: ► Synthesis of ferrite nanostructures (∼20.0 nm) by a wet-chemical method under different synthetic conditions. ► Characterization using XRD, FE-SEM, EDS, TEM, HRTEM and SAED patterns. ► Catalytic activity of ferrite nanostructures on AP thermal decomposition by thermal techniques. ► Burning rate measurements of CSPs with ferrite nanostructures. ► Kinetics of thermal decomposition of AP + nanoferrites. -- Abstract: In this paper, nanoferrites of Mn, Co and Ni were synthesized by a wet chemical method and characterized by X-ray diffraction (XRD), field emission scanning electron microscopy (FE-SEM), energy dispersive X-ray spectroscopy (EDS), transmission electron microscopy (TEM) and high resolution transmission electron microscopy (HR-TEM). Their catalytic activity was investigated on the thermal decomposition of ammonium perchlorate (AP) and composite solid propellants (CSPs) using thermogravimetry (TG), TG coupled with differential scanning calorimetry (TG–DSC) and ignition delay measurements. The kinetics of thermal decomposition of AP + nanoferrites have also been investigated using isoconversional and model-fitting approaches applied to isothermal TG data. The burning rate of CSPs was considerably enhanced by these nanoferrites. Addition of nanoferrites to AP shifted the high-temperature decomposition peak toward lower temperature. All these studies reveal that ferrite nanorods show catalytic activity superior to that of nanospheres and nanocubes.
Biondi, M; Vanzi, E; De Otto, G; Banci Buonamici, F; Belmonte, G M; Mazzoni, L N; Guasti, A; Carbone, S F; Mazzei, M A; La Penna, A; Foderà, E; Guerreri, D; Maiolino, A; Volterrani, L
2016-12-01
Many studies have aimed at validating the application of Dual Energy Computed Tomography (DECT) in clinical practice where conventional CT is not exhaustive. An example is bone marrow oedema detection, to which DECT based on water/calcium (W/Ca) decomposition has been applied. In this paper a new DECT approach, based on water/cortical bone (W/CB) decomposition, was investigated. Eight patients suffering from marrow oedema were scanned with MRI and DECT. Two-material density decomposition was performed in ROIs corresponding to normal bone marrow and oedema. These regions were drawn on DECT images using MRI information. Both W/Ca and W/CB were considered as material bases. Scatter plots of W/Ca and W/CB concentrations were made for each ROI in order to evaluate if oedema could be distinguished from normal bone marrow. Thresholds were defined on the scatter plots in order to produce DECT images where oedema regions were highlighted through color maps. The agreement between these images and MR was scored by two expert radiologists. For all the patients, the best scores were obtained using W/CB density decomposition. In all cases, DECT color map images based on W/CB decomposition showed better agreement with MR in bone marrow oedema identification with respect to W/Ca decomposition. This result encourages further studies in order to evaluate if DECT based on W/CB decomposition could be an alternative technique to MR, which would be important when short scanning duration is relevant, as in the case of elderly or trauma patients. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Papacharalampous, Georgia; Tyralis, Hristos; Koutsoyiannis, Demetris
2018-02-01
We investigate the predictability of monthly temperature and precipitation by applying automatic univariate time series forecasting methods to a sample of 985 40-year-long monthly temperature and 1552 40-year-long monthly precipitation time series. The methods include a naïve one based on the monthly values of the last year, as well as the random walk (with drift), AutoRegressive Fractionally Integrated Moving Average (ARFIMA), exponential smoothing state-space model with Box-Cox transformation, ARMA errors, Trend and Seasonal components (BATS), simple exponential smoothing, Theta and Prophet methods. Prophet is a recently introduced model inspired by the nature of time series forecasted at Facebook and has not been applied to hydrometeorological time series before, while the use of random walk, BATS, simple exponential smoothing and Theta is rare in hydrology. The methods are tested in performing multi-step ahead forecasts for the last 48 months of the data. We further investigate how different choices of handling the seasonality and non-normality affect the performance of the models. The results indicate that: (a) all the examined methods apart from the naïve and random walk ones are accurate enough to be used in long-term applications; (b) monthly temperature and precipitation can be forecasted to a level of accuracy which can barely be improved using other methods; (c) the externally applied classical seasonal decomposition results mostly in better forecasts compared to the automatic seasonal decomposition used by the BATS and Prophet methods; and (d) Prophet is competitive, especially when it is combined with externally applied classical seasonal decomposition.
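The naïve benchmark described above, forecasting each future month with the value observed in the same month of the last year, is simple enough to state in full. The function name is an assumption made for illustration:

```python
import numpy as np

def naive_monthly_forecast(series, horizon=48):
    """Forecast each future month with the value observed in the same
    month of the last full year of the series."""
    last_year = series[-12:]                    # the 12 most recent months
    reps = int(np.ceil(horizon / 12))           # repeat the yearly pattern
    return np.tile(last_year, reps)[:horizon]
```

Benchmarks like this one matter because, per the results above, more elaborate methods barely improve on the best automatic forecasts for monthly temperature and precipitation.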
Hyde, Embriette R.; Haarmann, Daniel P.; Lynne, Aaron M.; Bucheli, Sibyl R.; Petrosino, Joseph F.
2013-01-01
Human decomposition is a mosaic system with an intimate association between biotic and abiotic factors. Despite the integral role of bacteria in the decomposition process, few studies have catalogued bacterial biodiversity for terrestrial scenarios. To explore the microbiome of decomposition, two cadavers were placed at the Southeast Texas Applied Forensic Science facility and allowed to decompose under natural conditions. The bloat stage of decomposition, a stage easily identified in taphonomy and readily attributed to microbial physiology, was targeted. Each cadaver was sampled at two time points, at the onset and end of the bloat stage, from various body sites including internal locations. Bacterial samples were analyzed by pyrosequencing of the 16S rRNA gene. Our data show a shift from aerobic bacteria to anaerobic bacteria in all body sites sampled and demonstrate variation in community structure between bodies, between sample sites within a body, and between initial and end points of the bloat stage within a sample site. These data are best not viewed as points of comparison but rather additive data sets. While some species recovered are the same as those observed in culture-based studies, many are novel. Our results are preliminary and add to a larger emerging data set; a more comprehensive study is needed to further dissect the role of bacteria in human decomposition. PMID:24204941
Liorni, I; Parazzini, M; Fiocchi, S; Guadagnin, V; Ravazzani, P
2014-01-01
Polynomial Chaos (PC) is a decomposition method used to build a meta-model that approximates the unknown response of a model. In this paper the PC method is applied to stochastic dosimetry to assess the variability of human exposure due to changes in the orientation of the B-field vector with respect to the human body. Specifically, the exposure of a pregnant woman at 7 months of gestational age is analyzed to build a statistical meta-model of the induced electric field for each fetal tissue and for the fetal whole body, by means of the PC expansion as a function of the B-field orientation, considering uniform exposure at 50 Hz.
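The non-intrusive PC idea described in the abstract above can be sketched with a one-dimensional Legendre expansion; the model function, sample size, and degree below are illustrative assumptions, not the dosimetry model used in the paper.

```python
import numpy as np
from numpy.polynomial import legendre as L

# Hypothetical model response: a smooth function of the B-field orientation,
# mapped to a uniform random input xi in [-1, 1].
def model(xi):
    return 1.0 + 0.5 * xi + 0.2 * xi**2

rng = np.random.default_rng(0)
xi = rng.uniform(-1.0, 1.0, 200)   # samples of the random input
y = model(xi)                      # corresponding model evaluations

# Non-intrusive PC: least-squares fit of Legendre coefficients (degree 4).
coef = L.legfit(xi, y, deg=4)

# The meta-model approximates the response at unseen inputs.
xi_test = np.linspace(-1.0, 1.0, 50)
err = np.max(np.abs(L.legval(xi_test, coef) - model(xi_test)))
```

For a uniform input on [-1, 1], the zeroth Legendre coefficient of the fitted expansion is the meta-model's estimate of the mean response, which is one reason the PC surrogate is convenient for variability assessment.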
A Tensor-Train accelerated solver for integral equations in complex geometries
NASA Astrophysics Data System (ADS)
Corona, Eduardo; Rahimian, Abtin; Zorin, Denis
2017-04-01
We present a framework using the Quantized Tensor Train (QTT) decomposition to accurately and efficiently solve volume and boundary integral equations in three dimensions. We describe how the QTT decomposition can be used as a hierarchical compression and inversion scheme for matrices arising from the discretization of integral equations. For a broad range of problems, computational and storage costs of the inversion scheme are extremely modest, O(log N), and once the inverse is computed, it can be applied in O(N log N). We analyze the QTT ranks for hierarchically low-rank matrices and discuss their relationship to commonly used hierarchical compression techniques such as FMM and HSS. We prove that the QTT ranks are bounded for translation-invariant systems and argue that this behavior extends to non-translation-invariant volume and boundary integrals. For volume integrals, the QTT decomposition provides an efficient direct solver requiring significantly less memory compared to other fast direct solvers. We present results demonstrating the remarkable performance of the QTT-based solver when applied to both translation and non-translation invariant volume integrals in 3D. For boundary integral equations, we demonstrate that using a QTT decomposition to construct preconditioners for a Krylov subspace method leads to an efficient and robust solver with a small memory footprint. We test the QTT preconditioners in the iterative solution of an exterior elliptic boundary value problem (Laplace) formulated as a boundary integral equation in complex, multiply connected geometries.
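The core QTT idea, reshaping a length-2^d vector into a d-dimensional binary tensor and compressing it by sequential SVDs, can be sketched as follows. This is a minimal TT-SVD, not the authors' solver; the tolerance and the test function are assumptions.

```python
import numpy as np

def tt_decompose(v, tol=1e-12):
    """Quantized TT: factor a length-2^d vector into d rank-adaptive cores."""
    d = int(np.log2(v.size))
    cores, r = [], 1
    mat = v.reshape(r * 2, -1)
    for _ in range(d - 1):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        rank = max(1, int(np.sum(s > tol * s[0])))   # truncate tiny singular values
        cores.append(U[:, :rank].reshape(r, 2, rank))
        r = rank
        mat = (s[:rank, None] * Vt[:rank]).reshape(r * 2, -1)
    cores.append(mat.reshape(r, 2, 1))
    return cores

def tt_reconstruct(cores):
    """Contract the core chain back into a flat vector."""
    out = cores[0]
    for c in cores[1:]:
        out = np.tensordot(out, c, axes=([out.ndim - 1], [0]))
    return out.reshape(-1)

# Samples of a smooth exponential on a uniform grid (2^10 points).
v = np.exp(np.linspace(0.0, 1.0, 1024))
cores = tt_decompose(v)
```

An exponential sampled on a uniform dyadic grid factors exactly across bits, so every QTT rank here is 1 and the vector is stored in 10 tiny cores; the abstract's point is that bounded ranks of this kind also hold for translation-invariant operator matrices.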
Chemistry That Applies. What Works Clearinghouse Intervention Report
ERIC Educational Resources Information Center
What Works Clearinghouse, 2012
2012-01-01
"Chemistry That Applies" is an instructional unit designed to help students in grades 8-10 understand the law of conservation of matter. It consists of 24 lessons organized in four clusters. Working in groups, students explore four chemical reactions: burning, rusting, the decomposition of water, and the reaction of baking soda and…
NASREN: Standard reference model for telerobot control
NASA Technical Reports Server (NTRS)
Albus, J. S.; Lumia, R.; Mccain, H.
1987-01-01
A hierarchical architecture is described which supports space station telerobots in a variety of modes. The system is divided into three hierarchies: task decomposition, world model, and sensory processing. Goals at each level of the task decomposition hierarchy are divided both spatially and temporally into simpler commands for the next lower level. This decomposition is repeated until, at the lowest level, the drive signals to the robot actuators are generated. To accomplish its goals, task decomposition modules must often use information stored in the world model. The purpose of the sensory system is to update the world model as rapidly as possible to keep the model in registration with the physical world. The architecture of the entire control system hierarchy is described, along with how it can be applied to space telerobot applications.
Sugiyama, Kazuo; Suzuki, Katsunori; Kuwasima, Shusuke; Aoki, Yosuke; Yajima, Tatsuhiko
2009-01-01
The decomposition of a poly(amide-imide) thin film coated on a solid copper wire was attempted using atmospheric pressure non-equilibrium plasma. The plasma was produced by applying microwave power to an electrically conductive material in a gas mixture of argon, oxygen, and hydrogen. The poly(amide-imide) thin film was easily decomposed by argon-oxygen mixed gas plasma and an oxidized copper surface was obtained. The reduction of the oxidized surface with argon-hydrogen mixed gas plasma rapidly yielded a metallic copper surface. A continuous plasma heat-treatment process using a combination of both the argon-oxygen plasma and argon-hydrogen plasma was found to be suitable for the decomposition of the poly(amide-imide) thin film coated on the solid copper wire.
Numeric Modified Adomian Decomposition Method for Power System Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dimitrovski, Aleksandar D; Simunovic, Srdjan; Pannala, Sreekanth
This paper investigates the applicability of the numeric Wazwaz El Sayed modified Adomian Decomposition Method (WES-ADM) for time-domain simulation of power systems. WES-ADM is a numerical method based on a modified Adomian decomposition (ADM) technique and provides a numerical approximation to the solution of nonlinear ordinary differential equations. The nonlinear terms in the differential equations are approximated using Adomian polynomials. In this paper WES-ADM is applied to time-domain simulations of multimachine power systems. The WECC 3-generator, 9-bus system and the IEEE 10-generator, 39-bus system have been used to test the applicability of the approach. Several fault scenarios have been tested. It has been found that the proposed approach is faster than the trapezoidal method with comparable accuracy.
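As a minimal illustration of the classical ADM that WES-ADM builds on (not the WES-ADM scheme itself), consider y' = -y^2 with y(0) = 1, whose exact solution is 1/(1+t). For a quadratic nonlinearity the Adomian polynomials reduce to a Cauchy product of the series terms.

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Classical ADM for y' = -y**2, y(0) = 1 (exact solution 1/(1+t)).
# Each term y_n is stored as polynomial coefficients in t (ascending order).
terms = [np.array([1.0])]                 # y0 = 1 from the initial condition
for n in range(8):
    # For N(y) = y**2 the Adomian polynomial is A_n = sum_{i+j=n} y_i * y_j.
    A_n = np.zeros(1)
    for i in range(n + 1):
        A_n = P.polyadd(A_n, P.polymul(terms[i], terms[n - i]))
    terms.append(-P.polyint(A_n))          # y_{n+1} = -integral_0^t A_n ds

series = terms[0]
for y_n in terms[1:]:
    series = P.polyadd(series, y_n)        # partial sum 1 - t + t^2 - ...

# Inside the radius of convergence the partial sum tracks the exact solution.
t = 0.3
approx = P.polyval(t, series)
exact = 1.0 / (1.0 + t)
```

The recurrence reproduces the geometric series for 1/(1+t) term by term, which is the behavior the numeric variant exploits over short time steps.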
Iterative image-domain decomposition for dual-energy CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niu, Tianye; Dong, Xue; Petrongolo, Michael
2014-04-15
Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its capability of material decomposition. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical values of DECT. Existing denoising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. In this work, the authors propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. The regularization term enforces the image smoothness by calculating the square sum of neighboring pixel value differences. To retain the boundary sharpness of the decomposed images, the authors detect the edges in the CT images before decomposition. These edge pixels have small weights in the calculation of the regularization term. Distinct from the existing denoising algorithms applied on the images before or after decomposition, the method has an iterative process for noise suppression, with decomposition performed in each iteration. The authors implement the proposed algorithm using a standard conjugate gradient algorithm. The method performance is evaluated using an evaluation phantom (Catphan©600) and an anthropomorphic head phantom.
The results are compared with those generated using direct matrix inversion with no noise suppression, a denoising method applied on the decomposed images, and an existing algorithm with similar formulation as the proposed method but with an edge-preserving regularization term. Results: On the Catphan phantom, the method maintains the same spatial resolution on the decomposed images as that of the CT images before decomposition (8 pairs/cm) while significantly reducing their noise standard deviation. Compared to that obtained by the direct matrix inversion, the noise standard deviation in the images decomposed by the proposed algorithm is reduced by over 98%. Without considering the noise correlation properties in the formulation, the denoising scheme degrades the spatial resolution to 6 pairs/cm for the same level of noise suppression. Compared to the edge-preserving algorithm, the method achieves better low-contrast detectability. A quantitative study is performed on the contrast-rod slice of Catphan phantom. The proposed method achieves lower electron density measurement error as compared to that by the direct matrix inversion, and significantly reduces the error variation by over 97%. On the head phantom, the method reduces the noise standard deviation of decomposed images by over 97% without blurring the sinus structures. Conclusions: The authors propose an iterative image-domain decomposition method for DECT. The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. By exploring the full variance-covariance properties of the decomposed images and utilizing the edge predetection, the proposed algorithm shows superior performance on noise suppression with high image spatial resolution and low-contrast detectability.
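The noise amplification that motivates the iterative method can be reproduced with a toy per-pixel two-material matrix inversion; the sensitivity matrix and noise level below are illustrative assumptions, not calibrated DECT values.

```python
import numpy as np

# Hypothetical 2x2 material sensitivity matrix: attenuation of two basis
# materials at low/high kVp (illustrative values, nearly collinear rows).
A = np.array([[0.25, 0.45],
              [0.20, 0.30]])

rng = np.random.default_rng(1)
true = np.array([0.7, 0.3])   # basis-material fractions in one pixel

# 10000 noisy realizations of the dual-energy measurement for that pixel.
meas = A @ true + rng.normal(0.0, 0.002, size=(10000, 2))

# Direct image-domain decomposition: invert A per pixel (no regularization).
decomp = meas @ np.linalg.inv(A).T

# Ratio of decomposed-image noise to measurement noise: the ill-conditioned
# inversion amplifies noise, which is what the iterative method suppresses.
gain = decomp.std(axis=0).mean() / 0.002
```

Even though the inversion is unbiased (the mean decomposition recovers the true fractions), the noise standard deviation grows by an order of magnitude, matching the degradation the abstract describes for unregularized direct decomposition.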
Plants mediate soil organic matter decomposition in response to sea level rise.
Mueller, Peter; Jensen, Kai; Megonigal, James Patrick
2016-01-01
Tidal marshes have a large capacity for producing and storing organic matter, making their role in the global carbon budget disproportionate to land area. Most of the organic matter stored in these systems is in soils where it contributes 2-5 times more to surface accretion than an equal mass of minerals. Soil organic matter (SOM) sequestration is the primary process by which tidal marshes become perched high in the tidal frame, decreasing their vulnerability to accelerated relative sea level rise (RSLR). Plant growth responses to RSLR are well understood and represented in century-scale forecast models of soil surface elevation change. We understand far less about the response of SOM decomposition to accelerated RSLR. Here we quantified the effects of flooding depth and duration on SOM decomposition by exposing planted and unplanted field-based mesocosms to experimentally manipulated relative sea level over two consecutive growing seasons. SOM decomposition was quantified as CO2 efflux, with plant- and SOM-derived CO2 separated via δ13CO2. Despite the dominant paradigm that decomposition rates are inversely related to flooding, SOM decomposition in the absence of plants was not sensitive to flooding depth and duration. The presence of plants had a dramatic effect on SOM decomposition, increasing SOM-derived CO2 flux by up to 267% and 125% (in 2012 and 2013, respectively) compared to unplanted controls in the two growing seasons. Furthermore, plant stimulation of SOM decomposition was strongly and positively related to plant biomass and in particular aboveground biomass. We conclude that SOM decomposition rates are not directly driven by relative sea level and its effect on oxygen diffusion through soil, but indirectly by plant responses to relative sea level. If this result applies more generally to tidal wetlands, it has important implications for models of SOM accumulation and surface elevation change in response to accelerated RSLR.
© 2015 John Wiley & Sons Ltd.
Functional and Structural Succession of Soil Microbial Communities below Decomposing Human Cadavers
Cobaugh, Kelly L.; Schaeffer, Sean M.; DeBruyn, Jennifer M.
2015-01-01
The ecological succession of microbes during cadaver decomposition has garnered interest in both basic and applied research contexts (e.g. community assembly and dynamics; forensic indicator of time since death). Yet current understanding of microbial ecology during decomposition is almost entirely based on plant litter. We know very little about microbes recycling carcass-derived organic matter despite the unique decomposition processes. Our objective was to quantify the taxonomic and functional succession of microbial populations in soils below decomposing cadavers, testing the hypotheses that a) periods of increased activity during decomposition are associated with particular taxa; and b) human-associated taxa are introduced to soils, but do not persist outside their host. We collected soils from beneath four cadavers throughout decomposition, and analyzed soil chemistry, microbial activity and bacterial community structure. As expected, decomposition resulted in pulses of soil C and nutrients (particularly ammonia) and stimulated microbial activity. There was no change in total bacterial abundances, however we observed distinct changes in both function and community composition. During active decay (7 - 12 days postmortem), respiration and biomass production rates were high: the community was dominated by Proteobacteria (increased from 15.0 to 26.1% relative abundance) and Firmicutes (increased from 1.0 to 29.0%), with reduced Acidobacteria abundances (decreased from 30.4 to 9.8%). Once decay rates slowed (10 - 23 d postmortem), respiration was elevated, but biomass production rates dropped dramatically; this community with low growth efficiency was dominated by Firmicutes (increased to 50.9%) and other anaerobic taxa. Human-associated bacteria, including the obligately anaerobic Bacteroides, were detected at high concentrations in soil throughout decomposition, up to 198 d postmortem. 
Our results revealed the pattern of functional and compositional succession in soil microbial communities during decomposition of human-derived organic matter, provided insight into decomposition processes, and identified putative predictor populations for time since death estimation. PMID:26067226
Integrand reduction for two-loop scattering amplitudes through multivariate polynomial division
NASA Astrophysics Data System (ADS)
Mastrolia, Pierpaolo; Mirabella, Edoardo; Ossola, Giovanni; Peraro, Tiziano
2013-04-01
We describe the application of a novel approach for the reduction of scattering amplitudes, based on multivariate polynomial division, which we have recently presented. This technique yields the complete integrand decomposition for arbitrary amplitudes, regardless of the number of loops. It allows for the determination of the residue at any multiparticle cut, whose knowledge is a mandatory prerequisite for applying the integrand-reduction procedure. By using the division modulo Gröbner basis, we can derive a simple integrand recurrence relation that generates the multiparticle pole decomposition for integrands of arbitrary multiloop amplitudes. We apply the new reduction algorithm to the two-loop planar and nonplanar diagrams contributing to the five-point scattering amplitudes in N=4 super Yang-Mills and N=8 supergravity in four dimensions, whose numerator functions contain up to rank-two terms in the integration momenta. We determine all polynomial residues parametrizing the cuts of the corresponding topologies and subtopologies. We obtain the integral basis for the decomposition of each diagram from the polynomial form of the residues. Our approach is well suited for a seminumerical implementation, and its general mathematical properties provide an effective algorithm for the generalization of the integrand-reduction method to all orders in perturbation theory.
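The key mechanism, multivariate polynomial division modulo a Gröbner basis yielding a canonical residue, can be sketched with SymPy on a toy ideal. The generators below are hypothetical stand-ins, not actual loop propagators.

```python
import sympy as sp

x, y = sp.symbols('x y')

# Toy ideal standing in for a set of cut conditions (hypothetical generators).
polys = [x**2 + y**2 - 1, x * y]
G = sp.groebner(polys, x, y, order='lex')

# Divide a toy "numerator" modulo the Groebner basis:
#   N = sum_i q_i * g_i + residue,
# where the residue is unique for a Groebner basis, independent of the
# listing order of the generators.
N = x**3 + y**3 + x
quotients, residue = sp.reduced(N, list(G.exprs), x, y, order='lex')

# Verify the division identity exactly.
check = sp.expand(sum(q * g for q, g in zip(quotients, G.exprs)) + residue - N)
```

For this ideal the lex Gröbner basis is {x² + y² − 1, xy, y³ − y} and the residue of N works out to 2x + y; in the integrand-reduction setting the analogous residue parametrizes the multiparticle cut.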
Subgrid-scale physical parameterization in atmospheric modeling: How can we make it consistent?
NASA Astrophysics Data System (ADS)
Yano, Jun-Ichi
2016-07-01
Approaches to subgrid-scale physical parameterization in atmospheric modeling are reviewed by taking turbulent combustion flow research as a point of reference. Three major general approaches are considered for its consistent development: moment, distribution density function (DDF), and mode decomposition. The moment expansion is a standard method for describing the subgrid-scale turbulent flows both in geophysics and engineering. The DDF (commonly called PDF) approach is intuitively appealing as it deals with a distribution of variables in subgrid scale in a more direct manner. Mode decomposition was originally applied by Aubry et al (1988 J. Fluid Mech. 192 115-73) in the context of wall boundary-layer turbulence. It is specifically designed to represent coherencies in a compact manner by a low-dimensional dynamical system. Their original proposal adopts the proper orthogonal decomposition (empirical orthogonal functions) as its mode-decomposition basis. However, the methodology can easily be generalized to any decomposition basis. Among those, the wavelet is a particularly attractive alternative. The mass-flux formulation that is currently adopted in the majority of atmospheric models for parameterizing convection can also be considered a special case of mode decomposition, adopting segmentally constant modes for the expansion basis. This perspective further identifies a very basic but also general geometrical constraint imposed on the mass-flux formulation: the segmentally constant approximation. Mode decomposition can, furthermore, be understood by analogy with a Galerkin method in numerical modeling. This analogy suggests that the subgrid parameterization may be re-interpreted as a type of mesh refinement in numerical modeling. A link between the subgrid parameterization and downscaling problems is also pointed out.
Low-dimensional and Data Fusion Techniques Applied to a Rectangular Supersonic Multi-stream Jet
NASA Astrophysics Data System (ADS)
Berry, Matthew; Stack, Cory; Magstadt, Andrew; Ali, Mohd; Gaitonde, Datta; Glauser, Mark
2017-11-01
Low-dimensional models of experimental and simulation data for a complex supersonic jet were fused to reconstruct time-dependent proper orthogonal decomposition (POD) coefficients. The jet consists of a multi-stream rectangular single expansion ramp nozzle, containing a core stream operating at Mj,1 = 1.6 and a bypass stream at Mj,3 = 1.0 with an underlying deck. POD was applied to schlieren and PIV data to acquire the spatial basis functions. These eigenfunctions were projected onto their corresponding time-dependent large eddy simulation (LES) fields to reconstruct the temporal POD coefficients. This reconstruction was able to resolve spectral peaks that were previously aliased due to the slower sampling rates of the experiments. Additionally, dynamic mode decomposition (DMD) was applied to the experimental and LES datasets, and the spatio-temporal characteristics were compared to POD. The authors would like to acknowledge AFOSR, program manager Dr. Doug Smith, for funding this research, Grant No. FA9550-15-1-0435.
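The projection step described above, spatial POD modes from one data source paired with time histories from another, can be sketched on synthetic snapshots. The two-mode "flow" below is a stand-in, not the jet data.

```python
import numpy as np

nx, nt = 64, 200
x = np.linspace(0.0, 2.0 * np.pi, nx)
t = np.linspace(0.0, 10.0, nt)

# Synthetic snapshot matrix: two spatial structures with known dynamics
# (a stand-in for schlieren/PIV fields; columns are time snapshots).
snapshots = (np.outer(np.sin(x), np.cos(2.0 * np.pi * t))
             + 0.3 * np.outer(np.sin(2.0 * x), np.sin(4.0 * np.pi * t)))

# Snapshot POD via SVD: columns of U are spatial modes, rows of (s * Vt)
# are the time-dependent POD coefficients.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
coeffs = s[:, None] * Vt

# Projecting data onto a spatial mode recovers that mode's coefficient;
# this is the operation used to pair experimental modes with LES fields.
a0 = U[:, 0] @ snapshots
```

Because the synthetic field is built from exactly two structures, the singular values drop to numerical zero after the second mode, a convenient sanity check that the decomposition found the embedded low-dimensional dynamics.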
Stability analysis of gyroscopic systems with delay via decomposition
NASA Astrophysics Data System (ADS)
Aleksandrov, A. Yu.; Zhabko, A. P.; Chen, Y.
2018-05-01
A mechanical system described by second-order linear differential equations with a positive parameter multiplying the velocity forces and with time delay in the positional forces is studied. Using the decomposition method and Lyapunov-Krasovskii functionals, conditions are obtained under which the asymptotic stability of two auxiliary first-order subsystems implies that, for sufficiently large values of the parameter, the original system is also asymptotically stable. Moreover, it is shown that the proposed approach can be applied to the stability investigation of linear gyroscopic systems with switched positional forces.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, Ning; Shen, Tielong; Kurtz, Richard
The properties of nano-scale interstitial dislocation loops under the coupling effect of stress and temperature are studied using atomistic simulation methods and experiments. The decomposition of a loop by the emission of smaller loops is identified as one of the major mechanisms to release the localized stress induced by the coupling effect, which is validated by the TEM observations. The classical conservation law of Burgers vector cannot be applied during such decomposition process. The dislocation network is formed from the decomposed loops, which may initiate the irradiation creep much earlier than expected through the mechanism of climb-controlled glide of dislocations.
Długosz, Maciej; Trylska, Joanna
2008-01-01
We present a method for describing and comparing global electrostatic properties of biomolecules based on the spherical harmonic decomposition of electrostatic potential data. Unlike other approaches, our method does not require any prior three-dimensional structural alignment. The electrostatic potential, given as a volumetric data set from a numerical solution of the Poisson or Poisson–Boltzmann equation, is represented with descriptors that are rotation invariant. The method can be applied to large and structurally diverse sets of biomolecules, enabling them to be clustered according to their electrostatic features. PMID:18624502
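A minimal sketch of rotation-invariant descriptors from a spherical harmonic decomposition: the per-degree norms of the coefficients are unchanged by rotation. This is shown here only for a z-axis rotation, which multiplies each order-m coefficient by a phase; the random coefficients are illustrative, not computed from a Poisson-Boltzmann solution.

```python
import numpy as np

def descriptors(coeffs, lmax):
    """Map coefficients c[(l, m)] to rotation-invariant per-degree norms."""
    return np.array([np.sqrt(sum(abs(coeffs[(l, m)])**2
                                 for m in range(-l, l + 1)))
                     for l in range(lmax + 1)])

lmax = 3
rng = np.random.default_rng(3)
# Hypothetical expansion coefficients of a potential on a sphere.
coeffs = {(l, m): rng.normal() + 1j * rng.normal()
          for l in range(lmax + 1) for m in range(-l, l + 1)}

# A rotation by phi0 about z multiplies c_{l,m} by exp(-i*m*phi0);
# the per-degree norms, and hence the descriptors, are unchanged.
phi0 = 0.7
rotated = {(l, m): c * np.exp(-1j * m * phi0) for (l, m), c in coeffs.items()}
```

Comparing descriptor vectors instead of raw coefficients is what removes the need for prior structural alignment in the method described above.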
Characteristic-eddy decomposition of turbulence in a channel
NASA Technical Reports Server (NTRS)
Moin, Parviz; Moser, Robert D.
1989-01-01
Lumley's proper orthogonal decomposition technique is applied to the turbulent flow in a channel. Coherent structures are extracted by decomposing the velocity field into characteristic eddies with random coefficients. A generalization of the shot-noise expansion is used to determine the characteristic eddies in homogeneous spatial directions. Three different techniques are used to determine the phases of the Fourier coefficients in the expansion: (1) one based on the bispectrum, (2) a spatial compactness requirement, and (3) a functional continuity argument. Similar results are found from each of these techniques.
Resolvent estimates in homogenisation of periodic problems of fractional elasticity
NASA Astrophysics Data System (ADS)
Cherednichenko, Kirill; Waurick, Marcus
2018-03-01
We provide operator-norm convergence estimates for solutions to a time-dependent equation of fractional elasticity in one spatial dimension, with rapidly oscillating coefficients that represent the material properties of a viscoelastic composite medium. Assuming periodicity in the coefficients, we prove operator-norm convergence estimates for an operator fibre decomposition obtained by applying to the original fractional elasticity problem the Fourier-Laplace transform in time and Gelfand transform in space. We obtain estimates on each fibre that are uniform in the quasimomentum of the decomposition and in the period of oscillations of the coefficients as well as quadratic with respect to the spectral variable. On the basis of these uniform estimates we derive operator-norm-type convergence estimates for the original fractional elasticity problem, for a class of sufficiently smooth densities of applied forces.
NASA Astrophysics Data System (ADS)
Hu, Bin; Dong, Qunxi; Hao, Yanrong; Zhao, Qinglin; Shen, Jian; Zheng, Fang
2017-08-01
Objective. Neuro-electrophysiological tools have been widely used in heroin addiction studies. Previous studies indicated that chronic heroin abuse would result in abnormal functional organization of the brain, while few heroin addiction studies have applied the effective connectivity tool to analyze the brain functional system (BFS) alterations induced by heroin abuse. The present study aims to identify the abnormality of resting-state heroin abstinent BFS using source decomposition and effective connectivity tools. Approach. The resting-state electroencephalograph (EEG) signals were acquired from 15 male heroin abstinent (HA) subjects and 14 male non-addicted (NA) controls. Multivariate autoregressive modeling combined with independent component analysis (MVARICA) was applied for blind source decomposition. Generalized partial directed coherence (GPDC) was applied for effective brain connectivity analysis. Effective brain networks of both HA and NA groups were constructed. The two groups of effective cortical networks were compared by the bootstrap method. Abnormal causal interactions between decomposed source regions were estimated in the 1-45 Hz frequency domain. Main results. This work suggested: (a) there were clear effective network alterations in heroin abstinent subject groups; (b) the parietal region was a dominant hub of the abnormally weaker causal pathways, and the left occipital region was a dominant hub of the abnormally stronger causal pathways. Significance. These findings provide direct evidence that chronic heroin abuse induces brain functional abnormalities. The potential value of combining effective connectivity analysis and brain source decomposition methods in exploring brain alterations of heroin addicts is also implied.
Shoot litter breakdown and zinc dynamics of an aquatic plant, Schoenoplectus californicus.
Arreghini, Silvana; de Cabo, Laura; Serafini, Roberto José María; Fabrizio de Iorio, Alicia
2018-07-03
Decomposition of plant debris is an important process in determining the structure and function of aquatic ecosystems. The aims were to find a mathematical model that fits the decomposition process of Schoenoplectus californicus shoots containing different Zn concentrations; to compare the decomposition rates; and to assess metal accumulation/mobilization during decomposition. A litterbag technique was applied with shoots containing three levels of Zn: collected from an unpolluted river (RIV) and from experimental populations at low (LoZn) and high (HiZn) Zn supply. The double exponential model explained S. californicus shoot decomposition. Initially, the higher proportion of refractory material in RIV detritus resulted in a lower decay rate, and until day 68 RIV and LoZn detritus acted as a source of metal, releasing soluble/weakly bound zinc into the water; after day 68 they became a sink. HiZn detritus, however, showed rapid release into the water during the first 8 days, changed to the sink condition up to day 68, and then returned to the source condition up to day 369. Knowledge of the role of detritus (sink/source) will allow correct management of the vegetation used for zinc removal and provide a valuable tool for environmental remediation and rehabilitation planning.
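The double exponential litter-decay model can be sketched as a two-pool fit with a labile fraction A decaying at rate k1 and a refractory fraction (1 - A) at rate k2. The sampling days echo those mentioned in the abstract, but the parameter values and noise are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Double exponential decay: labile pool (A, k1) plus refractory pool (1-A, k2).
def double_exp(t, A, k1, k2):
    return A * np.exp(-k1 * t) + (1.0 - A) * np.exp(-k2 * t)

# Hypothetical litterbag retrieval days and synthetic mass-remaining data.
t = np.array([0.0, 8.0, 22.0, 68.0, 132.0, 241.0, 369.0])
true_params = (0.6, 0.05, 0.002)
rng = np.random.default_rng(4)
mass = double_exp(t, *true_params) + rng.normal(0.0, 0.005, t.size)

# Nonlinear least-squares fit of the two-pool model.
popt, _ = curve_fit(double_exp, t, mass, p0=(0.5, 0.02, 0.001),
                    bounds=([0, 0, 0], [1, 1, 1]))
```

The fitted curve separates the fast early mass loss from the slow decay of the refractory fraction, which is what distinguishes the RIV, LoZn, and HiZn decay patterns in the study.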
Spectral Diffusion: An Algorithm for Robust Material Decomposition of Spectral CT Data
Clark, Darin P.; Badea, Cristian T.
2014-01-01
Clinical successes with dual energy CT, aggressive development of energy discriminating x-ray detectors, and novel, target-specific, nanoparticle contrast agents promise to establish spectral CT as a powerful functional imaging modality. Common to all of these applications is the need for a material decomposition algorithm which is robust in the presence of noise. Here, we develop such an algorithm which uses spectrally joint, piece-wise constant kernel regression and the split Bregman method to iteratively solve for a material decomposition which is gradient sparse, quantitatively accurate, and minimally biased. We call this algorithm spectral diffusion because it integrates structural information from multiple spectral channels and their corresponding material decompositions within the framework of diffusion-like denoising algorithms (e.g. anisotropic diffusion, total variation, bilateral filtration). Using a 3D, digital bar phantom and a material sensitivity matrix calibrated for use with a polychromatic x-ray source, we quantify the limits of detectability (CNR = 5) afforded by spectral diffusion in the triple-energy material decomposition of iodine (3.1 mg/mL), gold (0.9 mg/mL), and gadolinium (2.9 mg/mL) concentrations. We then apply spectral diffusion to the in vivo separation of these three materials in the mouse kidneys, liver, and spleen. PMID:25296173
Alles, E. J.; Zhu, Y.; van Dongen, K. W. A.; McGough, R. J.
2013-01-01
The fast nearfield method, when combined with time-space decomposition, is a rapid and accurate approach for calculating transient nearfield pressures generated by ultrasound transducers. However, the standard time-space decomposition approach is only applicable to certain analytical representations of the temporal transducer surface velocity that, when applied to the fast nearfield method, are expressed as a finite sum of products of separate temporal and spatial terms. To extend time-space decomposition such that accelerated transient field simulations are enabled in the nearfield for an arbitrary transducer surface velocity, a new transient simulation method, frequency domain time-space decomposition (FDTSD), is derived. With this method, the temporal transducer surface velocity is transformed into the frequency domain, and then each complex-valued term is processed separately. Further improvements are achieved by spectral clipping, which reduces the number of terms and the computation time. Trade-offs between speed and accuracy are established for FDTSD calculations, and pressure fields obtained with the FDTSD method for a circular transducer are compared to those obtained with Field II and the impulse response method. The FDTSD approach, when combined with the fast nearfield method and spectral clipping, consistently achieves smaller errors in less time and requires less memory than Field II or the impulse response method. PMID:23160476
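The frequency-domain decomposition with spectral clipping can be sketched on a synthetic tone-burst surface velocity. The sampling rate, burst frequency, and the 1% (-40 dB) clipping threshold are assumptions; each retained frequency term would feed one time-harmonic fast nearfield evaluation, which is omitted here.

```python
import numpy as np

# Hypothetical transducer surface velocity: a Hann-windowed 2 MHz tone burst.
fs = 20e6                                   # sampling rate [Hz]
t = np.arange(256) / fs
v = np.sin(2.0 * np.pi * 2e6 * t) * np.hanning(t.size)

# Transform to the frequency domain and clip weak spectral terms.
V = np.fft.rfft(v)
keep = np.abs(V) > 0.01 * np.abs(V).max()   # spectral clipping at -40 dB
V_clipped = np.where(keep, V, 0.0)

# Superposing the surviving per-frequency terms reconstructs the waveform;
# the number of retained terms sets the number of harmonic field solves.
v_rec = np.fft.irfft(V_clipped, n=t.size)
n_terms = int(keep.sum())
```

Only a small fraction of the 129 spectral bins survives the clipping, so the transient field is assembled from a short list of time-harmonic contributions rather than one per frequency sample, which is where the reported speed and memory savings come from.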
Meguerdichian, Andrew G; Jafari, Tahereh; Shakil, Md R; Miao, Ran; Achola, Laura A; Macharia, John; Shirazi-Amin, Alireza; Suib, Steven L
2018-02-19
Electrocatalytic decomposition of urea to produce hydrogen (H2) for clean-energy applications such as fuel cells offers several potential advantages, including reduced carbon emissions in the energy sector and environmental applications that remove urea from animal and human waste facilities. The study and development of new catalyst materials containing nickel metal, the active site for urea decomposition, is a critical aspect of research in inorganic and materials chemistry. We report the synthesis and application of [NH4]NiPO4·6H2O and β-Ni2P2O7 using in situ prepared [NH4]2HPO4. The [NH4]NiPO4·6H2O is calcined at varying temperatures and tested for electrocatalytic decomposition of urea. Our results indicate that [NH4]NiPO4·6H2O calcined at 300 °C, which has an amorphous structure and is applied here for the first time to urea electrocatalysis, had the greatest reported electroactive surface area (ESA) of 142 cm²/mg, an onset potential of 0.33 V (SCE), and was stable over a 24-h test period.
Shahid, Muhammad; Xue, Xinkai; Fan, Chao; Ninham, Barry W; Pashley, Richard M
2015-06-25
An enhanced thermal decomposition of chemical compounds in aqueous solution has been achieved at reduced solution temperatures. The technique exploits hitherto unrecognized properties of a bubble column evaporator (BCE). It offers better heat transfer efficiency than conventional heat transfer equipment. This is obtained via a continuous flow of hot, dry air bubbles of optimal (1-3 mm) size. Optimal bubble size is maintained by using the bubble coalescence inhibition property of some salts. This novel method is illustrated by a study of thermal decomposition of ammonium bicarbonate (NH4HCO3) and potassium persulfate (K2S2O8) in aqueous solutions. The decomposition occurs at significantly lower temperatures than those needed in bulk solution. The process appears to work via the continuous production of hot (e.g., 150 °C) dry air bubbles, which do not heat the solution significantly but produce a transient hot surface layer around each rising bubble. This causes the thermal decomposition of the solute. The decomposition occurs due to the effective collision of the solute with the surface of the hot bubbles. The new process could, for example, be applied to the regeneration of the ammonium bicarbonate draw solution used in forward osmosis.
Pascual, Javier; von Hoermann, Christian; Rottler-Hoermann, Ann-Marie; Nevo, Omer; Geppert, Alicia; Sikorski, Johannes; Huber, Katharina J; Steiger, Sandra; Ayasse, Manfred; Overmann, Jörg
2017-08-01
The decomposition of dead mammalian tissue involves a complex temporal succession of epinecrotic bacteria. Microbial activity may release different cadaveric volatile organic compounds which in turn attract other key players of carcass decomposition such as scavenger insects. To elucidate the dynamics and potential functions of epinecrotic bacteria on carcasses, we monitored bacterial communities developing on still-born piglets incubated in different forest ecosystems by combining high-throughput Illumina 16S rRNA sequencing with gas chromatography-mass spectrometry of volatiles. Our results show that the community structure of epinecrotic bacteria and the types of cadaveric volatile compounds released over the time course of decomposition are driven by deterministic rather than stochastic processes. Individual cadaveric volatile organic compounds were correlated with specific taxa during the first stages of decomposition which are dominated by bacteria. Through best-fitting multiple linear regression models, the synthesis of acetic acid, indole and phenol could be linked to the activity of Enterobacteriaceae, Tissierellaceae and Xanthomonadaceae, respectively. These conclusions are also commensurate with the metabolism described for the dominant taxa identified for these families. The predictable nature of in situ synthesis of cadaveric volatile organic compounds by epinecrotic bacteria provides a new basis for future chemical ecology and forensic studies. © 2017 Society for Applied Microbiology and John Wiley & Sons Ltd.
Decomposition of timed automata for solving scheduling problems
NASA Astrophysics Data System (ADS)
Nishi, Tatsushi; Wakatake, Masato
2014-03-01
A decomposition algorithm for scheduling problems based on a timed automata (TA) model is proposed. The problem is represented as an optimal state transition problem for TA. The model comprises the parallel composition of submodels such as jobs and resources. The procedure of the proposed methodology can be divided into two steps. The first step is to decompose the TA model into several submodels by using a decomposability condition. The second step is to combine the individual solutions of the subproblems for the decomposed submodels by the penalty function method. A feasible solution for the entire model is derived through iterated solution of the subproblem for each submodel. The proposed methodology is applied to solve flowshop and jobshop scheduling problems. Computational experiments demonstrate the effectiveness of the proposed algorithm compared with a conventional TA scheduling algorithm without decomposition.
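The two-step idea, solve each submodel separately and coordinate the solutions with a penalty on coupling violations, can be illustrated with a deliberately tiny toy (two jobs sharing one machine; not the paper's timed-automata formulation):

```python
# Two jobs of duration 2 share one machine. Each subproblem picks its own
# start time to finish early; a growing penalty on machine-time overlap
# coordinates the two independent solutions until the schedule is feasible.
DUR = 2
HORIZON = range(6)

def overlap(s1, s2):
    """Machine-time overlap between two jobs of duration DUR."""
    return max(0, min(s1 + DUR, s2 + DUR) - max(s1, s2))

def solve_subproblem(other_start, w):
    # Each job minimizes its own start time plus the weighted conflict.
    return min(HORIZON, key=lambda s: s + w * overlap(s, other_start))

s_a, s_b, w = 0, 0, 0.0
for _ in range(10):
    w += 1.0                      # strengthen the penalty each iteration
    s_a = solve_subproblem(s_b, w)
    s_b = solve_subproblem(s_a, w)
    if overlap(s_a, s_b) == 0:    # feasible solution for the entire model
        break
print(s_a, s_b, overlap(s_a, s_b))
```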
Dynamic correlations at different time-scales with empirical mode decomposition
NASA Astrophysics Data System (ADS)
Nava, Noemi; Di Matteo, T.; Aste, Tomaso
2018-07-01
We introduce a simple approach which combines Empirical Mode Decomposition (EMD) and Pearson's cross-correlations over rolling windows to quantify dynamic dependency at different time scales. The EMD is a tool to separate time series into implicit components which oscillate at different time-scales. We apply this decomposition to intraday time series of the following three financial indices: the S&P 500 (USA), the IPC (Mexico) and the VIX (volatility index USA), obtaining time-varying multidimensional cross-correlations at different time-scales. The correlations computed over a rolling window are compared across the three indices, across the components at different time-scales and across different time lags. We uncover a rich heterogeneity of interactions, which depends on the time-scale and has important lead-lag relations that could have practical use for portfolio management, risk estimation and investment decisions.
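The rolling-window correlation half of the method is easy to sketch; the EMD half would come from a separate sifting implementation, so this example applies rolling Pearson correlation directly to two synthetic oscillations standing in for EMD components:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def rolling_corr(x, y, window):
    """One correlation value per rolling-window position."""
    return [pearson(x[i:i + window], y[i:i + window])
            for i in range(len(x) - window + 1)]

# Two synthetic oscillations with a fixed phase offset; the rolling
# correlation varies with the window position.
t = [i / 10 for i in range(100)]
a = [math.sin(v) for v in t]
b = [math.sin(v + 1.5) for v in t]
corrs = rolling_corr(a, b, window=20)
print(len(corrs), round(min(corrs), 2), round(max(corrs), 2))
```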
Three dimensional empirical mode decomposition analysis apparatus, method and article of manufacture
NASA Technical Reports Server (NTRS)
Gloersen, Per (Inventor)
2004-01-01
An apparatus and method of analysis for three-dimensional (3D) physical phenomena. The physical phenomena may include any varying 3D phenomena such as time-varying polar ice flows. A representation of the 3D phenomena is passed through a Hilbert transform to convert the data into complex form. A spatial variable is separated from the complex representation by producing a time-based covariance matrix. The temporal parts of the principal components are produced by applying Singular Value Decomposition (SVD). Based on the rapidity with which the eigenvalues decay, the first 3-10 complex principal components (CPC) are selected for Empirical Mode Decomposition into intrinsic modes. The intrinsic modes produced are filtered in order to reconstruct the spatial part of the CPC. Finally, a filtered time series may be reconstructed from the first 3-10 filtered complex principal components.
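The covariance-plus-SVD step can be illustrated with its simplest relative, power iteration, which extracts the leading eigenpair of a small symmetric covariance matrix (the matrix below is a toy, not ice-flow data):

```python
import math

def matvec(m, v):
    """Multiply a square matrix (list of rows) by a vector."""
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

def power_iteration(cov, iters=200):
    """Leading eigenpair of a symmetric covariance matrix; a stand-in for
    the first principal component that a full SVD would produce."""
    v = [1.0] * len(cov)
    for _ in range(iters):
        w = matvec(cov, v)
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    eigval = sum(a * b for a, b in zip(v, matvec(cov, v)))  # Rayleigh quotient
    return eigval, v

cov = [[4.0, 1.0],
       [1.0, 3.0]]
lam, vec = power_iteration(cov)
print(round(lam, 3))  # dominant eigenvalue, i.e. variance of the first PC
```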
Studies on seasonal arthropod succession on carrion in the southeastern Iberian Peninsula.
Arnaldos, M I; Romera, E; Presa, J J; Luna, A; García, M D
2004-08-01
A global study of the sarcosaprophagous community that occurs in the southeastern Iberian Peninsula during all four seasons is made for the first time, and its diversity is described with reference to biological indices. A total of 18,179 adults and, additionally, a number of preimaginal states were collected. The results for the main arthropod groups, and their diversity are discussed in relation to the season and decompositional stages. The results provide an extensive inventory of carrion-associated arthropods. An association between decomposition stages and more representative arthropod groups is established. With respect to the biological indices applied, Margalef's index shows that the diversity of the community increases as the state of decomposition advances, while Sorenson's quantitative index shows that the greatest similarities are between spring and summer on the one hand, and fall and winter, on the other.
An examination of the concept of driving point receptance
NASA Astrophysics Data System (ADS)
Sheng, X.; He, Y.; Zhong, T.
2018-04-01
In the field of vibration, driving point receptance is a well-established and widely applied concept. However, as demonstrated in this paper, when a driving point receptance is calculated using the finite element (FE) method with solid elements, it does not converge as the FE mesh becomes finer, suggesting that there is a singularity. Hence, the concept of driving point receptance deserves a rigorous examination. In this paper, it is first shown that, for a point harmonic force applied on the surface of an elastic half-space, the Boussinesq formula can be applied to calculate the displacement amplitude of the surface if the response point is sufficiently close to the load. Second, by applying the Betti reciprocal theorem, it is shown that the displacement of an elastic body near a point harmonic force can be decomposed into two parts, with the first one being the displacement of an elastic half-space. This decomposition is useful, since it provides a solid basis for the introduction of a contact spring between a wheel and a rail in interaction. However, according to the Boussinesq formula, this decomposition also leads to the conclusion that a driving point receptance is infinite (singular), and would be undefinable. Nevertheless, driving point receptances have been calculated using different methods. Since the singularity identified in this paper was not appreciated, no account was given to the singularity in these calculations. Thus, the validity of these calculation methods must be examined. This constitutes the third part of the paper. As the final development of the paper, the above decomposition is utilised to define and determine driving point receptances required for dealing with wheel/rail interactions.
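The singularity the paper identifies can be seen in the static Boussinesq formula for the surface displacement of an elastic half-space under a point load, u_z = P(1 - ν²)/(π E r), which diverges as the response point approaches the load (the material constants below are generic steel-like values, not the paper's):

```python
import math

def boussinesq_uz(P, E, nu, r):
    """Static Boussinesq surface displacement at distance r from a point
    load P on an elastic half-space: u_z = P (1 - nu^2) / (pi E r)."""
    return P * (1.0 - nu ** 2) / (math.pi * E * r)

P, E, nu = 1.0, 210e9, 0.3  # 1 N load on a steel-like half-space
for r in (1e-2, 1e-4, 1e-6):
    # Displacement grows without bound as r -> 0: the driving point
    # response (r = 0) is singular.
    print(r, boussinesq_uz(P, E, nu, r))
```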
s-core network decomposition: A generalization of k-core analysis to weighted networks
NASA Astrophysics Data System (ADS)
Eidsaa, Marius; Almaas, Eivind
2013-12-01
A broad range of systems spanning biology, technology, and social phenomena may be represented and analyzed as complex networks. Recent studies of such networks using k-core decomposition have uncovered groups of nodes that play important roles. Here, we present s-core analysis, a generalization of k-core (or k-shell) analysis to complex networks where the links have different strengths or weights. We demonstrate the s-core decomposition approach on two random networks (ER and configuration model with scale-free degree distribution) where the link weights are (i) random, (ii) correlated, and (iii) anticorrelated with the node degrees. Finally, we apply the s-core decomposition approach to the protein-interaction network of the yeast Saccharomyces cerevisiae in the context of two gene-expression experiments: oxidative stress in response to cumene hydroperoxide (CHP), and fermentation stress response (FSR). We find that the innermost s-cores are (i) different from innermost k-cores, (ii) different for the two stress conditions CHP and FSR, and (iii) enriched with proteins whose biological functions give insight into how yeast manages these specific stresses.
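The s-core construction itself is short: repeatedly delete nodes whose strength (sum of incident link weights) falls below s, and what survives is the s-core. A sketch on a small hypothetical weighted graph:

```python
def s_core(weights, s):
    """Return the s-core node set of an undirected weighted graph given as
    {(u, v): weight}. Nodes with strength < s are removed iteratively."""
    nodes = set()
    for (u, v) in weights:
        nodes.update((u, v))
    while True:
        strength = {n: 0.0 for n in nodes}
        for (u, v), w in weights.items():
            if u in nodes and v in nodes:  # only links inside the core count
                strength[u] += w
                strength[v] += w
        weak = {n for n in nodes if strength[n] < s}
        if not weak:
            return nodes
        nodes -= weak

# A strongly linked triangle plus one weakly attached leaf node.
g = {("a", "b"): 2.0, ("b", "c"): 2.0, ("a", "c"): 2.0, ("c", "d"): 0.5}
print(sorted(s_core(g, s=3.0)))  # the leaf "d" drops out of the 3.0-core
```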
Catalytic properties of mesoporous Al–La–Mn oxides prepared via spray pyrolysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Goun; Jung, Kyeong Youl; Lee, Choul-Ho
Highlights: • Al–La–Mn oxides were prepared using spray pyrolysis. • Al–La–Mn oxides exhibit large and uniform pore sizes. • Mesoporous Al–La–Mn oxides were compared with those prepared by conventional precipitation. • Mesoporous Al–La–Mn oxides show superior activity in decomposition of hydrogen peroxide. - Abstract: Mesoporous Al–La–Mn oxides are prepared via spray pyrolysis and are applied to the catalytic decomposition of hydrogen peroxide. The characteristics of the mesoporous Al–La–Mn oxides are examined using N2 adsorption, X-ray diffraction, and X-ray fluorescence measurements. The surface area and pore size of the Al–La–Mn oxides prepared via spray pyrolysis are larger than those of the Al–La–Mn oxides prepared using a precipitation method. The catalytic performance of the materials during the decomposition of hydrogen peroxide is examined in a pulse-injection reactor. It is confirmed that the mesoporous Al–La–Mn oxides prepared via spray pyrolysis exhibit higher catalytic activity and stability in the decomposition of hydrogen peroxide than Al–La–Mn oxides prepared using a conventional precipitation method.
High-purity Cu nanocrystal synthesis by a dynamic decomposition method.
Jian, Xian; Cao, Yu; Chen, Guozhang; Wang, Chao; Tang, Hui; Yin, Liangjun; Luan, Chunhong; Liang, Yinglin; Jiang, Jing; Wu, Sixin; Zeng, Qing; Wang, Fei; Zhang, Chengui
2014-12-01
Cu nanocrystals are applied extensively in several fields, particularly in microelectronics, sensors, and catalysis. The catalytic behavior of Cu nanocrystals depends mainly on their structure and particle size. In this work, the formation of high-purity Cu nanocrystals is studied using a common chemical vapor deposition precursor, cupric tartrate. This process is investigated through a combined experimental and computational approach. The decomposition kinetics are investigated via differential scanning calorimetry and thermogravimetric analysis using the Flynn-Wall-Ozawa, Kissinger, and Starink methods. The growth was found to be influenced by reaction temperature, protective gas, and time. Microstructural and thermal characterizations were performed by X-ray diffraction, scanning electron microscopy, transmission electron microscopy, and differential scanning calorimetry. Decomposition of cupric tartrate at different temperatures was simulated by density functional theory calculations under the generalized gradient approximation. Highly crystalline Cu nanocrystals without floccules were obtained from thermal decomposition of cupric tartrate at 271°C for 8 h under Ar. This general approach paves the way to controllable synthesis of Cu nanocrystals with high purity.
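Of the kinetics methods named above, the Kissinger method is the easiest to sketch: plot ln(β/Tp²) against 1/Tp for several heating rates β and DSC peak temperatures Tp; the slope of the fitted line is -Ea/R. The data below are synthetic, generated from an assumed activation energy rather than taken from the paper:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def kissinger_ea(betas, peak_temps):
    """Least-squares fit of ln(beta/Tp^2) vs 1/Tp; slope = -Ea/R."""
    xs = [1.0 / tp for tp in peak_temps]
    ys = [math.log(b / tp ** 2) for b, tp in zip(betas, peak_temps)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope * R

# Synthetic DSC peaks generated from an assumed Ea of 120 kJ/mol.
ea_true, c = 120e3, 10.0
tps = [520.0, 530.0, 540.0, 550.0]                       # peak temps, K
betas = [tp ** 2 * math.exp(c - ea_true / (R * tp)) for tp in tps]
print(round(kissinger_ea(betas, tps) / 1e3, 1), "kJ/mol")  # recovers 120.0
```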
Devi, B Pushpa; Singh, Kh Manglem; Roy, Sudipta
2016-01-01
This paper proposes a new watermarking algorithm based on the shuffled singular value decomposition and the visual cryptography for copyright protection of digital images. It generates the ownership and identification shares of the image based on visual cryptography. It decomposes the image into low and high frequency sub-bands. The low frequency sub-band is further divided into blocks of same size after shuffling it and then the singular value decomposition is applied to each randomly selected block. Shares are generated by comparing one of the elements in the first column of the left orthogonal matrix with its corresponding element in the right orthogonal matrix of the singular value decomposition of the block of the low frequency sub-band. The experimental results show that the proposed scheme clearly verifies the copyright of the digital images, and is robust to withstand several image processing attacks. Comparison with the other related visual cryptography-based algorithms reveals that the proposed method gives better performance. The proposed method is especially resilient against the rotation attack.
Gibbsian Stationary Non-equilibrium States
NASA Astrophysics Data System (ADS)
De Carlo, Leonardo; Gabrielli, Davide
2017-09-01
We study the structure of stationary non-equilibrium states for interacting particle systems from a microscopic viewpoint. In particular we discuss two different discrete geometric constructions. We apply both of them to determine non-reversible transition rates corresponding to a fixed invariant measure. The first one uses the equivalence of this problem with the construction of divergence-free flows on the transition graph. Since divergence-free flows are characterized by cyclic decompositions we can generate families of models from elementary cycles on the configuration space. The second construction is a functional discrete Hodge decomposition for translational covariant discrete vector fields. According to this, for example, the instantaneous current of any interacting particle system on a finite torus can be canonically decomposed into a gradient part, a circulation term and a harmonic component. All three components are associated with functions on the configuration space. This decomposition is unique and constructive. The stationary condition can be interpreted as an orthogonality condition with respect to a harmonic discrete vector field and we use this decomposition to construct models having a fixed invariant measure.
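The flavor of the discrete Hodge decomposition can be shown in the simplest possible setting, a single cycle, where any edge flow splits uniquely into a gradient part (zero net circulation, generated by a potential) plus a constant harmonic/circulation part; general transition graphs would require a least-squares projection instead:

```python
def hodge_on_cycle(flow):
    """Decompose an edge flow around one directed cycle into a gradient part
    (sums to zero around the loop) plus a constant circulation term."""
    circulation = sum(flow) / len(flow)          # harmonic component
    gradient = [f - circulation for f in flow]   # gradient part
    # Potential phi whose discrete gradient phi[i+1] - phi[i] (cyclically)
    # reproduces the gradient part.
    phi = [0.0]
    for g in gradient[:-1]:
        phi.append(phi[-1] + g)
    return gradient, circulation, phi

flow = [3.0, 1.0, 2.0, 2.0]   # flows on edges 0->1, 1->2, 2->3, 3->0
grad, circ, phi = hodge_on_cycle(flow)
print(circ, grad)  # constant circulation plus a zero-sum gradient part
```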
NASA Astrophysics Data System (ADS)
Shao, X. H.; Zheng, S. J.; Chen, D.; Jin, Q. Q.; Peng, Z. Z.; Ma, X. L.
2016-07-01
The high hardness or yield strength of an alloy is known to benefit from the presence of small-scale precipitates, whose hardening effect is exploited extensively in engineering materials. Stability of the precipitates is of critical importance in maintaining the high performance of a material under mechanical loading. Long-period stacking ordered (LPSO) structures play an important role in tuning the mechanical properties of Mg alloys. Here, we report that deformation twinning induces the decomposition of lamellar LPSO structures and their re-precipitation in an Mg-Zn-Y alloy. Using atomic-resolution scanning transmission electron microscopy (STEM), we directly show that misfit dislocations at the interface between the lamellar LPSO structure and the deformation twin correspond to the decomposition and re-precipitation of the LPSO structure, owing to dislocation effects on the redistribution of Zn/Y atoms. This finding demonstrates that deformation twinning can destabilize complex precipitates. The occurrence of decomposition and re-precipitation, which alters the spatial distribution of the precipitates under plastic loading, may significantly affect precipitation strengthening.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Staschus, K.
1985-01-01
In this dissertation, efficient algorithms for electric-utility capacity expansion planning with renewable energy are developed. The algorithms include a deterministic phase that quickly finds a near-optimal expansion plan using derating and a linearized approximation to the time-dependent availability of nondispatchable energy sources. A probabilistic second phase needs comparatively few computer-time-consuming probabilistic simulation iterations to modify this solution towards the optimal expansion plan. For the deterministic first phase, two algorithms, based on a Lagrangian Dual decomposition and a Generalized Benders Decomposition, are developed. The probabilistic second phase uses a Generalized Benders Decomposition approach. Extensive computational tests of the algorithms are reported. Among the deterministic algorithms, the one based on Lagrangian Duality proves fastest. The two-phase approach is shown to save up to 80% in computing time as compared to a purely probabilistic algorithm. The algorithms are applied to determine the optimal expansion plan for the Tijuana-Mexicali subsystem of the Mexican electric utility system. A strong recommendation to push conservation programs in the desert city of Mexicali results from this implementation.
NASA Technical Reports Server (NTRS)
John, Bonnie; Vera, Alonso; Matessa, Michael; Freed, Michael; Remington, Roger
2002-01-01
CPM-GOMS is a modeling method that combines the task decomposition of a GOMS analysis with a model of human resource usage at the level of cognitive, perceptual, and motor operations. CPM-GOMS models have made accurate predictions about skilled user behavior in routine tasks, but developing such models is tedious and error-prone. We describe a process for automatically generating CPM-GOMS models from a hierarchical task decomposition expressed in a cognitive modeling tool called Apex. Resource scheduling in Apex automates the difficult task of interleaving the cognitive, perceptual, and motor resources underlying common task operators (e.g. mouse move-and-click). Apex's UI automatically generates PERT charts, which allow modelers to visualize a model's complex parallel behavior. Because interleaving and visualization are now automated, it is feasible to construct arbitrarily long sequences of behavior. To demonstrate the process, we present a model of automated teller interactions in Apex and discuss implications for user modeling. Of the approaches available to model human users, the Goals, Operators, Methods, and Selection (GOMS) method [6, 21] has been the most widely used, providing accurate, often zero-parameter, predictions of the routine performance of skilled users in a wide range of procedural tasks [6, 13, 15, 27, 28]. GOMS is meant to model routine behavior. The user is assumed to have methods that apply sequences of operators to achieve a goal. Selection rules are applied when there is more than one method to achieve a goal. Many routine tasks lend themselves well to such decomposition. Decomposition produces a representation of the task as a set of nested goal states that include an initial state and a final state. The iterative decomposition into goals and nested subgoals can terminate in primitives of any desired granularity, the choice of level of detail dependent on the predictions required.
Although GOMS has proven useful in HCI, tools to support the construction of GOMS models have not yet come into general use.
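The PERT-chart view of a CPM-GOMS model reduces to a critical-path computation over a DAG of operators: the predicted task time is the longest dependency chain, not the serial sum. A sketch with hypothetical operator durations and dependencies (not values from the paper):

```python
def critical_path(durations, deps):
    """Earliest-finish schedule over a DAG of operators; the predicted task
    time is the longest (critical) path, as read off a PERT chart."""
    finish = {}
    def ef(op):
        if op not in finish:
            start = max((ef(d) for d in deps.get(op, [])), default=0.0)
            finish[op] = start + durations[op]
        return finish[op]
    return max(ef(op) for op in durations)

# Hypothetical operator times (ms): the motor operator interleaves with
# cognition, so the total is less than the serial sum of all operators.
durations = {"perceive": 100, "cognize": 50, "move-mouse": 300, "click": 100}
deps = {"cognize": ["perceive"], "move-mouse": ["perceive"],
        "click": ["move-mouse", "cognize"]}
print(critical_path(durations, deps))  # 500, vs a 550 ms serial sum
```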
Teodoro, Douglas; Lovis, Christian
2013-01-01
Background Antibiotic resistance is a major worldwide public health concern. In clinical settings, timely antibiotic resistance information is key for care providers as it allows appropriate targeted treatment or improved empirical treatment when the specific results of the patient are not yet available. Objective To improve antibiotic resistance trend analysis algorithms by building a novel, fully data-driven forecasting method from the combination of trend extraction and machine learning models for enhanced biosurveillance systems. Methods We investigate a robust model for extraction and forecasting of antibiotic resistance trends using a decade of microbiology data. Our method consists of breaking down the resistance time series into independent oscillatory components via the empirical mode decomposition technique. The resulting waveforms describing intrinsic resistance trends serve as the input for the forecasting algorithm. The algorithm applies the delay coordinate embedding theorem together with the k-nearest neighbor framework to project mappings from past events into the future dimension and estimate the resistance levels. Results The algorithms that decompose the resistance time series and filter out high frequency components showed statistically significant performance improvements in comparison with a benchmark random walk model. We present further qualitative use-cases of antibiotic resistance trend extraction, where empirical mode decomposition was applied to highlight the specificities of the resistance trends. Conclusion The decomposition of the raw signal was found not only to yield valuable insight into the resistance evolution, but also to produce novel models of resistance forecasters with boosted prediction performance, which could be utilized as a complementary method in the analysis of antibiotic resistance trends. PMID:23637796
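The forecasting step, delay-coordinate embedding plus k-nearest-neighbour projection, can be sketched directly; the input here is a toy periodic series standing in for an EMD-extracted resistance trend:

```python
def knn_forecast(series, dim, k, horizon=1):
    """Delay-coordinate embedding plus k-nearest-neighbour projection:
    find the past windows most similar to the latest one and average the
    values that followed them `horizon` steps later."""
    target = series[-dim:]
    candidates = []
    for i in range(len(series) - dim - horizon + 1):
        window = series[i:i + dim]
        dist = sum((a - b) ** 2 for a, b in zip(window, target)) ** 0.5
        candidates.append((dist, series[i + dim + horizon - 1]))
    candidates.sort(key=lambda p: p[0])
    neighbours = candidates[:k]
    return sum(v for _, v in neighbours) / k

# A strictly periodic toy 'resistance level' series: the nearest past
# windows match the latest one exactly, so the forecast continues the cycle.
series = [10, 20, 30, 40] * 6
print(knn_forecast(series, dim=3, k=2))
```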
Kumar, Nitin; Radin, Maxwell D.; Wood, Brandon C.; ...
2015-04-13
A viable Li/O2 battery will require the development of stable electrolytes that do not continuously decompose during cell operation. Recent experiments suggest that reactions occurring at the interface between the liquid electrolyte and the solid lithium peroxide (Li2O2) discharge phase are a major contributor to these instabilities. To clarify the mechanisms associated with these reactions, a variety of atomistic simulation techniques, classical Monte Carlo, van der Waals-augmented density functional theory, ab initio molecular dynamics, and various solvation models, are used to study the initial decomposition of the common electrolyte solvent, dimethoxyethane (DME), on surfaces of Li2O2. Comparisons are made between the two predominant Li2O2 surface charge states by calculating decomposition pathways on peroxide-terminated (O2^2-) and superoxide-terminated (O2^1-) facets. For both terminations, DME decomposition proceeds exothermically via a two-step process comprised of hydrogen abstraction (H-abstraction) followed by nucleophilic attack. In the first step, the abstracted H dissociates a surface O2 dimer and combines with a dissociated oxygen to form a hydroxide ion (OH-). The remaining surface oxygen then attacks the DME, resulting in a DME fragment that is strongly bound to the Li2O2 surface. DME decomposition is predicted to be more exothermic on the peroxide facet; nevertheless, the rate of DME decomposition is faster on the superoxide termination. The impact of solvation (explicit vs implicit) and an applied electric field on the reaction energetics are investigated. Finally, our calculations suggest that surface-mediated electrolyte decomposition should out-pace liquid-phase processes such as solvent auto-oxidation by dissolved O2.
Plants Regulate Soil Organic Matter Decomposition in Response to Sea Level Rise
NASA Astrophysics Data System (ADS)
Megonigal, P.; Mueller, P.; Jensen, K.
2014-12-01
Tidal wetlands have a large capacity for producing and storing organic matter, making their role in the global carbon budget disproportionate to their land area. Most of the organic matter stored in these systems is in soils where it contributes 2-5 times more to surface accretion than an equal mass of minerals. Soil organic matter (SOM) sequestration is the primary process by which tidal wetlands become perched high in the tidal frame, decreasing their vulnerability to accelerated sea level rise. Plant growth responses to sea level rise are well understood and represented in century-scale forecast models of soil surface elevation change. We understand far less about the response of soil organic matter decomposition to rapid sea level rise. Here we quantified the effects of sea level on SOM decomposition rates by exposing planted and unplanted tidal marsh monoliths to experimentally manipulated flood duration. The study was performed in a field-based mesocosm facility at the Smithsonian's Global Change Research Wetland. SOM decomposition rate was quantified as CO2 efflux, with plant- and SOM-derived CO2 separated with a two end-member δ13C-CO2 model. Despite the dogma that decomposition rates are inversely related to flooding, SOM mineralization was not sensitive to flood duration over a 35 cm range in soil surface elevation. However, decomposition rates were strongly and positively related to aboveground biomass (R2≥0.59, p≤0.01). We conclude that soil carbon loss through decomposition is driven by plant responses to sea level in this intensively studied tidal marsh. If this result applies more generally to tidal wetlands, it has important implications for modeling soil organic matter and surface elevation change in response to accelerated sea level rise.
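The two end-member δ13C partitioning used above to separate plant-derived from SOM-derived CO2 is a linear mixing calculation. A sketch with hypothetical isotope signatures, not the study's measured values:

```python
def plant_fraction(delta_sample, delta_plant, delta_som):
    """Two end-member mixing: fraction of CO2 efflux derived from the plant
    source, given the d13C signatures (permil) of the two end members."""
    return (delta_sample - delta_som) / (delta_plant - delta_som)

# Hypothetical signatures: isotopically distinct plant CO2 vs. soil organic
# matter CO2, with the measured efflux falling between them.
f_plant = plant_fraction(delta_sample=-20.0, delta_plant=-13.0, delta_som=-27.0)
print(round(f_plant, 2), round(1 - f_plant, 2))  # plant vs. SOM share
```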
Possibility of H2O2 decomposition in thin liquid films on Mars
NASA Astrophysics Data System (ADS)
Kereszturi, Akos; Gobi, Sandor
2014-11-01
In this work the pathways and possibilities of H2O2 decomposition on Mars in microscopic liquid interfacial water were analyzed by kinetic calculations. Thermally and photochemically driven decomposition, like processes catalyzed by various metal oxides, is too slow, relative to the annual period during which such microscopic liquid layers exist on Mars today, to produce substantial decomposition. The most effective process analyzed is catalyzed by Fe ions, which could decompose H2O2 under pH<4.5 with a half-life of 1-2 days. This process might be important during volcanically influenced periods, when sulfur release produces acidic pH, and during changes in rotational axis tilt, whose climatic effects influence the volatile circulation and spatial occurrence as well as the duration of the thin liquid layer. Under current conditions, using a value of 200 K for the temperature of interfacial water (in the southern hemisphere) and applying the Phoenix lander's wet chemistry laboratory results, the pH is not favorable for Fe mobility and this kind of decomposition. Although current conditions (especially pH) are unfavorable for H2O2 decomposition, microscopic-scale interfacial liquid water still might support the process. Through heterogeneous catalysis, without acidic pH and mobile Fe but with Fe-bearing mineral surfaces, decomposition of H2O2 with a half-life of 20 days can happen. This duration is longer than, though not by several orders of magnitude, the existence of springtime interfacial liquid water on Mars today. This estimation is relevant for activation-energy-controlled reaction rates. The other main parameter that may influence the reaction rate is the diffusion speed. Although the available tests and theoretical calculations do not provide firm values for the diffusion speed in such a “2-dimensional” environment, relevant estimations indicate that this parameter is smaller in the interfacial liquid layer than in bulk water.
But the 20 days' duration mentioned above is still relevant, as the activation energy driven reaction rate is the main limiting factor in the decomposition and not the diffusion speed. The duration of dozen(s) days is still longer but not with orders of magnitude than the expected duration for the existence of springtime interfacial liquid water on Mars today. The results suggest such decomposition may happen today, however, because of our limited knowledge on chemical processes in thin interfacial liquid layers, this possibility waits for confirmation - and also points to the importance of conducting laboratory tests to validate the possible process. Although some tests were already realized for diffusion in an almost 2-dimensional liquid, the same is not true for activation energy, where only the value from the “normal” measurements was applied. Even if H2O2 decomposition is too slow today, the analysis of such a process is important, as under volcanic influence more effective decomposition might take place in thin interfacial liquids close to the climate of today if released sulfur produces pH<4.5. Large quantity and widespread occurrence of bulk liquid phase are not expected in the Amazonian period, but interfacial liquid water probably appeared regularly, and its locations, especially during volcanically active periods, might make certain sites than others more interesting for astrobiology with the lower concentration of oxidizing H2O2.
The Natural Helmholtz-Hodge Decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhatia, H.
nHHD is a C++ library to decompose a flow field into three components exhibiting specific types of behaviors. These components allow more targeted analysis of flow behavior and can be applied to a variety of application areas.
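The library itself is C++, but the decomposition it implements can be sketched compactly. The following Python sketch performs a spectral Helmholtz-Hodge split of a periodic 2-D vector field; this is a simplification, since the natural HHD is specifically designed to avoid such boundary/periodicity assumptions, and the function name and domain convention here are illustrative:

```python
import numpy as np

def hodge_split_2d(u, v):
    """Split a periodic 2-D vector field into curl-free and divergence-free
    parts via an FFT projection (assumes a [0, 2*pi) x [0, 2*pi) domain)."""
    n, m = u.shape
    kx = 1j * np.fft.fftfreq(m) * m           # integer wavenumbers, times i
    ky = 1j * np.fft.fftfreq(n) * n
    KX, KY = np.meshgrid(kx, ky)
    U, V = np.fft.fft2(u), np.fft.fft2(v)
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                            # mean mode has no gradient part
    div = KX * U + KY * V                     # spectral divergence
    Ug, Vg = KX * div / k2, KY * div / k2     # projection onto gradient fields
    ug = np.real(np.fft.ifft2(Ug))
    vg = np.real(np.fft.ifft2(Vg))
    return (ug, vg), (u - ug, v - vg)         # (curl-free, divergence-free)
```

Applied to a field built as a gradient plus a rotational part, the projection recovers each component.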
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niu, T; Dong, X; Petrongolo, M
Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its material decomposition capability. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical value. Existing de-noising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. We propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. It includes the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Performance is evaluated using an evaluation phantom (Catphan 600) and an anthropomorphic head phantom. Results are compared to those generated using direct matrix inversion with no noise suppression, a de-noising method applied on the decomposed images, and an existing algorithm with similar formulation but with an edge-preserving regularization term. Results: On the Catphan phantom, our method retains the same spatial resolution as the CT images before decomposition while reducing the noise standard deviation of decomposed images by over 98%. The other methods either degrade spatial resolution or achieve lower low-contrast detectability. Also, our method yields lower electron density measurement error than direct matrix inversion and reduces error variation by over 97%. On the head phantom, it reduces the noise standard deviation of decomposed images by over 97% without blurring the sinus structures. Conclusion: We propose an iterative image-domain decomposition method for DECT. 
The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. The proposed algorithm shows superior performance on noise suppression with high image spatial resolution and low-contrast detectability. This work is supported by a Varian MRA grant.
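The noise amplification of direct matrix inversion, and the benefit of a penalized least-squares inversion, can be illustrated with a toy per-pixel version. The attenuation matrix and noise levels below are hypothetical, and a scalar ridge penalty stands in for the paper's covariance-weighted smoothness regularization:

```python
import numpy as np

# Hypothetical 2x2 material-attenuation matrix mapping two material
# densities to (low-kVp, high-kVp) measurements; not calibrated values.
A = np.array([[0.80, 0.35],
              [0.45, 0.70]])
Sigma = np.diag([0.01, 0.01])        # measurement noise covariance

# Direct decomposition x = A^{-1} y: noise covariance A^{-1} Sigma A^{-T}
Ainv = np.linalg.inv(A)
cov_direct = Ainv @ Sigma @ Ainv.T

# Ridge-penalized decomposition x = (A^T A + lam*I)^{-1} A^T y
lam = 0.05
M = np.linalg.inv(A.T @ A + lam * np.eye(2)) @ A.T
cov_ridge = M @ Sigma @ M.T

print(np.trace(cov_direct), np.trace(cov_ridge))
```

The ridge estimator's noise variance is strictly smaller for any positive penalty, at the price of a small bias; the paper's formulation additionally adapts the penalty to the estimated covariance of the decomposed images.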
Litter decay controlled by temperature, not soil properties, affecting future soil carbon.
Gregorich, Edward G; Janzen, Henry; Ellert, Benjamin H; Helgason, Bobbi L; Qian, Budong; Zebarth, Bernie J; Angers, Denis A; Beyaert, Ronald P; Drury, Craig F; Duguid, Scott D; May, William E; McConkey, Brian G; Dyck, Miles F
2017-04-01
Widespread global changes, including rising atmospheric CO2 concentrations, climate warming and loss of biodiversity, are predicted for this century; all of these will affect terrestrial ecosystem processes like plant litter decomposition. Conversely, increased plant litter decomposition can have potential carbon-cycle feedbacks on atmospheric CO2 levels, climate warming and biodiversity. But predicting litter decomposition is difficult because of many interacting factors related to the chemical, physical and biological properties of soil, as well as to climate and agricultural management practices. We applied 13C-labelled plant litter to soil at ten sites spanning a 3500-km transect across the agricultural regions of Canada and measured its decomposition over five years. Despite large differences in soil type and climatic conditions, we found that the kinetics of litter decomposition were similar once the effect of temperature had been removed, indicating no measurable effect of soil properties. A two-pool exponential decay model expressing undecomposed carbon simply as a function of thermal time accurately described the kinetics of decomposition (R2 = 0.94; RMSE = 0.0508). Soil properties such as texture, cation exchange capacity, pH and moisture, although very different among sites, had minimal discernible influence on decomposition kinetics. Using this kinetic model under different climate change scenarios, we projected that the time required to decompose 50% of the litter (i.e. the labile fractions) would be reduced by 1-4 months, whereas the time required to decompose 90% of the litter (including recalcitrant fractions) would be reduced by 1 year in cooler sites to as much as 2 years in warmer sites. These findings confirm quantitatively the sensitivity of litter decomposition to temperature increases and demonstrate how climate change may constrain future soil carbon storage, an effect apparently not influenced by soil properties. 
© 2016 Her Majesty the Queen in Right of Canada. Global Change Biology published by John Wiley & Sons Ltd.
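A minimal sketch of fitting such a two-pool exponential decay model to thermal-time data; all parameter values below are illustrative, not those of the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_pool(thermal_time, f_labile, k_fast, k_slow):
    """Fraction of litter C remaining versus thermal time (degree-days):
    a fast labile pool plus a slow recalcitrant pool."""
    return (f_labile * np.exp(-k_fast * thermal_time)
            + (1.0 - f_labile) * np.exp(-k_slow * thermal_time))

# Synthetic 'observations' generated from hypothetical parameters
tt = np.linspace(0, 5000, 40)            # thermal time in degree-days
true = (0.6, 3e-3, 1e-4)                 # labile fraction, fast rate, slow rate
obs = two_pool(tt, *true)

popt, _ = curve_fit(two_pool, tt, obs, p0=(0.5, 1e-3, 5e-5))
```

Expressing the independent variable as thermal time rather than calendar time is what lets a single parameter set describe sites with very different climates.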
Cockle, Diane Lyn; Bell, Lynne S
2017-03-01
Little is known about the nature and trajectory of human decomposition in Canada. This study involved the examination of 96 retrospective police death investigation cases selected using the Canadian ViCLAS (Violent Crime Linkage Analysis System) and sudden death police databases. A classification system was designed and applied based on the latest visible stages of autolysis (stages 1-2), putrefaction (3-5) and skeletonisation (6-8) observed. The analysis of the progression of decomposition using time (the post mortem interval, PMI, in days) and temperature (accumulated degree-days, ADD) found considerable variability during the putrefaction and skeletonisation phases, with poor predictability noted after stage 5 (post bloat). The visible progression of decomposition outdoors was characterized by a brown to black discolouration at stage 5 and remnant desiccated black tissue at stage 7. No bodies were totally skeletonised in under one year. Mummification of tissue was rare, with earlier onset in winter as opposed to summer, likely due to lower seasonal humidity. It was found that neither ADD nor the PMI was a significant dependent variable for the decomposition score, with correlations of 53% for temperature and 41% for time. It took almost twice as much time and 1.5 times more temperature (ADD) for the set of cases exposed to cold and freezing temperatures (4°C or less) to reach putrefaction compared to the warm group. The amount of precipitation and/or clothing had a negligible impact on the advancement of decomposition, whereas the lack of sun exposure (full shade) had a small positive effect. This study found that the poor predictability of onset and duration of late-stage decomposition, combined with our limited understanding of the full range of variables which influence the speed of decomposition, makes PMI estimations for exposed terrestrial cases in Canada unreliable, and also calls into question PMI estimations elsewhere. 
Copyright © 2016 The Chartered Society of Forensic Sciences. Published by Elsevier B.V. All rights reserved.
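Accumulated degree-days, the temperature metric used above, is simply the running sum of daily mean temperatures above a base threshold; a minimal version, with a base temperature of 0 °C assumed:

```python
def accumulated_degree_days(daily_mean_temps, base=0.0):
    """Sum of daily mean temperatures above a base threshold (deg C)."""
    return sum(max(t - base, 0.0) for t in daily_mean_temps)

# Ten spring days: only the temperatures above 0 deg C contribute
add = accumulated_degree_days([-3, -1, 0, 2, 5, 7, 4, -2, 6, 8])
print(add)  # 32.0
```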
An investigation on the modelling of kinetics of thermal decomposition of hazardous mercury wastes.
Busto, Yailen; M G Tack, Filip; Peralta, Luis M; Cabrera, Xiomara; Arteaga-Pérez, Luis E
2013-09-15
The kinetics of mercury removal from solid wastes generated by chlor-alkali plants were studied. The reaction order and model-free method with an isoconversional approach were used to estimate the kinetic parameters and reaction mechanism that apply to the thermal decomposition of hazardous mercury wastes. As a first approach to the understanding of thermal decomposition for this type of system (poly-disperse and multi-component), a novel scheme of six reactions was proposed to represent the behaviour of mercury compounds in the solid matrix during the treatment. An integration-optimization algorithm was used in the screening of nine mechanistic models to develop kinetic expressions that best describe the process. The kinetic parameters were calculated by fitting each of these models to the experimental data. It was demonstrated that the D₁-diffusion mechanism appeared to govern the process at 250°C and high residence times, whereas at 450°C a combination of the diffusion mechanism (D₁) and the third order reaction mechanism (F3) fitted the kinetics of the conversions. The developed models can be applied in engineering calculations to dimension the installations and determine the optimal conditions to treat a mercury-containing sludge. Copyright © 2013 Elsevier B.V. All rights reserved.
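The model-free (isoconversional) step can be illustrated with the Ozawa-Flynn-Wall approximation, in which, at a fixed conversion, ln β is linear in 1/T with slope −1.052 Ea/R; the activation energy and intercept below are invented for the demonstration:

```python
import numpy as np

R = 8.314          # gas constant, J/(mol K)
Ea_true = 120e3    # assumed activation energy, J/mol

# Temperatures at a fixed conversion for several heating rates, generated
# from the OFW relation ln(beta) = C - 1.052 * Ea / (R * T)
betas = np.array([2.0, 5.0, 10.0, 20.0])   # heating rates, K/min
C = 25.0                                   # arbitrary intercept
T_alpha = 1.052 * Ea_true / (R * (C - np.log(betas)))

# Recover Ea from the slope of ln(beta) versus 1/T
slope, _ = np.polyfit(1.0 / T_alpha, np.log(betas), 1)
Ea_est = -slope * R / 1.052
```

With real thermogravimetric data this fit is repeated at many conversion levels, giving an activation energy profile without committing to a mechanistic model.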
Path planning of decentralized multi-quadrotor based on fuzzy-cell decomposition algorithm
NASA Astrophysics Data System (ADS)
Iswanto, Wahyunggoro, Oyas; Cahyadi, Adha Imam
2017-04-01
The paper aims to present a path-planning algorithm that enables multiple quadrotors to move towards the goal quickly and avoid obstacles in a cluttered area. There are several problems in path planning, including how to reach the goal position quickly and how to avoid static and dynamic obstacles. To overcome these problems, the paper presents a fuzzy logic algorithm and a fuzzy cell decomposition algorithm. The fuzzy logic algorithm is an artificial intelligence algorithm which can be applied to robot path planning and is able to detect static and dynamic obstacles. The cell decomposition algorithm is a graph-theoretic algorithm used to build a robot path map. By using the two algorithms the robot is able to reach the goal position and avoid obstacles, but it takes considerable time because these algorithms cannot find the shortest path. Therefore, this paper describes a modification of the algorithms by adding a potential field algorithm used to assign weight values to the map applied for each quadrotor under decentralized control, so that each quadrotor is able to move to the goal position quickly by finding the shortest path. The simulations conducted have shown that the multi-quadrotor system can avoid various obstacles and find the shortest path by using the proposed algorithms.
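The combination described above, a cell grid with potential-field weights searched for a shortest path, can be sketched as follows. The 4-connected grid, the repulsive weight, and Dijkstra as the search routine are illustrative choices, not the paper's exact formulation:

```python
import heapq

def plan(grid, start, goal):
    """Dijkstra on a 4-connected cell grid; cells adjacent to obstacles get
    an extra repulsive weight, approximating a potential field."""
    rows, cols = len(grid), len(grid[0])
    def cost(r, c):
        near = any(0 <= r + dr < rows and 0 <= c + dc < cols
                   and grid[r + dr][c + dc]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1))
        return 1.0 + (4.0 if near else 0.0)      # repulsive penalty
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[(r, c)]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc]:
                nd = d + cost(nr, nc)
                if nd < dist.get((nr, nc), float('inf')):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    return [start] + path[::-1]
```

Each quadrotor would run this search on its own weighted copy of the map, which is the decentralized aspect described in the abstract.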
Multiwavelet grading of prostate pathological images
NASA Astrophysics Data System (ADS)
Soltanian-Zadeh, Hamid; Jafari-Khouzani, Kourosh
2002-05-01
We have developed image analysis methods to automatically grade pathological images of the prostate. The proposed method assigns a Gleason grade between 1 and 5 to each image. This is done using features extracted from multiwavelet transformations. We extract energy and entropy features from the submatrices obtained in the decomposition. Next, we apply a k-NN classifier to grade the image. To find the optimal multiwavelet basis, preprocessing, and classifier, we use features extracted by different multiwavelets with either critically sampled preprocessing or repeated row preprocessing and different k-NN classifiers, and compare their performances, evaluated by the total misclassification rate (TMR). To evaluate sensitivity to noise, we add white Gaussian noise to the images and compare the results (TMRs). We applied the proposed methods to 100 images. We evaluated the first and second levels of decomposition using the Geronimo, Hardin, and Massopust (GHM), Chui and Lian (CL), and Shen (SA4) multiwavelets. We also evaluated the k-NN classifier for k=1,2,3,4,5. Experimental results illustrate that the first level of decomposition is quite noisy. They also show that critically sampled preprocessing outperforms repeated row preprocessing and is less sensitive to noise. Finally, comparison studies indicate that the SA4 multiwavelet and the k-NN classifier (k=1) generate optimal results (with the smallest TMR of 3%).
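The feature-extraction step can be sketched with a single-level 2-D Haar transform standing in for the GHM/CL/SA4 multiwavelets; this is a simplification, since multiwavelets use matrix-valued filters and the preprocessing steps discussed above:

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar decomposition into LL, LH, HL, HH subbands."""
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0      # rows: averages
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0      # rows: details
    LL = (a[0::2] + a[1::2]) / 2.0
    LH = (a[0::2] - a[1::2]) / 2.0
    HL = (d[0::2] + d[1::2]) / 2.0
    HH = (d[0::2] - d[1::2]) / 2.0
    return LL, LH, HL, HH

def subband_features(img):
    """Energy and entropy of each subband, as used for texture grading."""
    feats = []
    for band in haar2d(img):
        energy = np.mean(band ** 2)
        p = band.ravel() ** 2
        p = p / (p.sum() + 1e-12)                 # normalized coefficient power
        entropy = -np.sum(p * np.log2(p + 1e-12))
        feats.extend([energy, entropy])
    return np.array(feats)
```

The resulting 8-element feature vector (energy and entropy per subband) would then be fed to the k-NN classifier.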
Research on Multi-Temporal PolInSAR Modeling and Applications
NASA Astrophysics Data System (ADS)
Hong, Wen; Pottier, Eric; Chen, Erxue
2014-11-01
In the study of theory and processing methodology, we apply accurate topographic phase to the Freeman-Durden decomposition for PolInSAR data. On the other hand, we present a TomoSAR imaging method based on convex optimization regularization theory. The target decomposition and reconstruction performance will be evaluated by multi-temporal L- and P-band fully polarimetric images acquired in BioSAR campaigns. In the study of hybrid Quad-Pol system performance, we analyse the expression of the range ambiguity to signal ratio (RASR) in this architecture. Simulations are used to verify its advantage in the improvement of range ambiguities.
Mueller matrix imaging and analysis of cancerous cells
NASA Astrophysics Data System (ADS)
Fernández, A.; Fernández-Luna, J. L.; Moreno, F.; Saiz, J. M.
2017-08-01
Imaging polarimetry is a focus of increasing interest in diagnostic medicine because of its non-invasive nature and its potential for recognizing abnormal tissues. However, handling polarimetric images is not an easy task, and different intermediate steps have been proposed to introduce physical parameters that may be helpful to interpret results. In this work, transmission Mueller matrices (MM) corresponding to cancer cell samples have been experimentally obtained, and three different transformations have been applied: MM-Polar Decomposition, MM-Transformation and MM-Differential Decomposition. Special attention has been paid to diattenuation as a sensitive parameter to identify apoptosis processes induced by cisplatin and etoposide.
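As an example of one such physical parameter, in the Lu-Chipman polar decomposition the diattenuation can be read directly from the first row of the Mueller matrix; a minimal sketch:

```python
import numpy as np

def diattenuation(M):
    """Diattenuation of a 4x4 Mueller matrix: |(m01, m02, m03)| / m00."""
    M = np.asarray(M, dtype=float)
    return np.linalg.norm(M[0, 1:]) / M[0, 0]

# Ideal horizontal linear polarizer: complete diattenuation (D = 1)
polarizer = 0.5 * np.array([[1, 1, 0, 0],
                            [1, 1, 0, 0],
                            [0, 0, 0, 0],
                            [0, 0, 0, 0]])
print(diattenuation(polarizer))  # 1.0
```

Applied pixel-wise to a measured Mueller matrix image, this yields the diattenuation maps the authors use to track the cisplatin- and etoposide-induced apoptosis.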
Theoretical study of gas hydrate decomposition kinetics--model development.
Windmeier, Christoph; Oellrich, Lothar R
2013-10-10
In order to provide an estimate of the order of magnitude of intrinsic gas hydrate dissolution and dissociation kinetics, the "Consecutive Desorption and Melting Model" (CDM) is developed by applying only theoretical considerations. The process of gas hydrate decomposition is assumed to comprise two consecutive and repetitive quasi-chemical reaction steps. These are desorption of the guest molecule followed by local solid-body melting. The individual kinetic steps are modeled according to the "Statistical Rate Theory of Interfacial Transport" and the Wilson-Frenkel approach. All remaining required model parameters are directly linked to geometric considerations and a thermodynamic gas hydrate equilibrium model.
Signal evaluations using singular value decomposition for Thomson scattering diagnostics.
Tojo, H; Yamada, I; Yasuhara, R; Yatsuka, E; Funaba, H; Hatae, T; Hayashi, H; Itami, K
2014-11-01
This paper provides a novel method for evaluating signal intensities in incoherent Thomson scattering diagnostics. A double-pass Thomson scattering system, where a laser passes through the plasma twice, generates two scattering pulses from the plasma. Evaluations of the signal intensities in the spectrometer are sometimes difficult due to noise and stray light. We apply the singular value decomposition method to Thomson scattering data with strong noise components. Results show that the average accuracy of the measured electron temperature (Te) is superior to that of temperature obtained using a low-pass filter (<20 MHz) or without any filters.
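The core of the SVD approach, stacking repeated noisy acquisitions and keeping the dominant singular component, can be sketched as follows; the waveform, noise level, and number of shots are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
pulse = np.exp(-((t - 0.5) ** 2) / 0.005)        # clean scattering pulse

# 30 repeated noisy acquisitions stacked as rows of a matrix
shots = pulse + 0.3 * rng.standard_normal((30, t.size))

# Rank-1 truncation: the leading singular component carries the pulse shape
U, s, Vt = np.linalg.svd(shots, full_matrices=False)
denoised = s[0] * np.outer(U[:, 0], Vt[0])

err_noisy = np.mean((shots - pulse) ** 2)
err_clean = np.mean((denoised - pulse) ** 2)
```

Because the signal is coherent across shots while the noise is not, truncating to the leading singular component suppresses the noise far more effectively than a fixed low-pass filter that also distorts the pulse.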
Linear stability analysis of detonations via numerical computation and dynamic mode decomposition
NASA Astrophysics Data System (ADS)
Kabanov, Dmitry I.; Kasimov, Aslan R.
2018-03-01
We introduce a new method to investigate linear stability of gaseous detonations that is based on an accurate shock-fitting numerical integration of the linearized reactive Euler equations with a subsequent analysis of the computed solution via the dynamic mode decomposition. The method is applied to the detonation models based on both the standard one-step Arrhenius kinetics and two-step exothermic-endothermic reaction kinetics. Stability spectra for all cases are computed and analyzed. The new approach is shown to be a viable alternative to the traditional normal-mode analysis used in detonation theory.
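A minimal DMD computation on snapshots of a known linear system, recovering its eigenvalues, gives the flavor of the second stage; the 2x2 system below is a toy stand-in for the linearized reactive Euler snapshots:

```python
import numpy as np

A = np.array([[0.9, 0.2],
              [0.0, 0.8]])             # known dynamics, eigenvalues 0.9 and 0.8

# Snapshot matrices from one trajectory x_{k+1} = A x_k
x = np.array([1.0, 1.0])
snaps = [x]
for _ in range(10):
    x = A @ x
    snaps.append(x)
X = np.column_stack(snaps[:-1])
Xp = np.column_stack(snaps[1:])

# Exact DMD: project the one-step map onto the POD modes of X
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Atilde = U.T @ Xp @ Vt.T @ np.diag(1.0 / s)
eigs = np.sort(np.linalg.eigvals(Atilde).real)
print(eigs)  # ~ [0.8, 0.9]
```

In the stability analysis, the recovered DMD eigenvalues play the role of the growth rates and frequencies that the traditional normal-mode analysis would deliver.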
Ma, Hong-Wu; Zhao, Xue-Ming; Yuan, Ying-Jin; Zeng, An-Ping
2004-08-12
Metabolic networks are organized in a modular, hierarchical manner. Methods for a rational decomposition of the metabolic network into relatively independent functional subsets are essential to better understand the modularity and organization principles of a large-scale, genome-wide network. Network decomposition is also necessary for functional analysis of metabolism by pathway analysis methods, which are often hampered by the problem of combinatorial explosion due to the complexity of metabolic networks. Decomposition methods proposed in the literature are mainly based on the connection degree of metabolites. To obtain a more reasonable decomposition, the global connectivity structure of metabolic networks should be taken into account. In this work, we use a reaction graph representation of a metabolic network for the identification of its global connectivity structure and for decomposition. A bow-tie connectivity structure similar to that previously discovered for the metabolite graph is found to exist in the reaction graph as well. Based on this bow-tie structure, a new decomposition method is proposed, which uses a distance definition derived from the path length between two reactions. A hierarchical classification tree is first constructed from the distance matrix among the reactions in the giant strong component of the bow-tie structure. These reactions are then grouped into different subsets based on the hierarchical tree. Reactions in the IN and OUT subsets of the bow-tie structure are subsequently placed in the corresponding subsets according to a 'majority rule'. Compared with the decomposition methods proposed in the literature, ours is based on combined properties of the global network structure and local reaction connectivity rather than, primarily, on the connection degree of metabolites. The method is applied to decompose the metabolic network of Escherichia coli. Eleven subsets are obtained. 
More detailed investigations of the subsets show that reactions in the same subset are indeed functionally related. The rational decomposition of metabolic networks, and subsequent studies of the subsets, make it easier to understand the inherent organization and functionality of metabolic networks at the modular level. http://genome.gbf.de/bioinformatics/
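Identifying the giant strong component of a reaction graph, the starting point of the decomposition above, can be sketched with Kosaraju's algorithm on a toy bow-tie graph; the graph and node names are invented:

```python
def sccs(graph):
    """Kosaraju's algorithm: strongly connected components of a digraph
    given as {node: [successors]}."""
    order, seen = [], set()
    def dfs1(u):
        seen.add(u)
        for v in graph.get(u, []):
            if v not in seen:
                dfs1(v)
        order.append(u)
    for u in graph:
        if u not in seen:
            dfs1(u)
    rev = {}
    for u, vs in graph.items():
        for v in vs:
            rev.setdefault(v, []).append(u)
    comps, assigned = [], set()
    for u in reversed(order):                 # second pass on reversed graph
        if u in assigned:
            continue
        comp, stack = set(), [u]
        while stack:
            w = stack.pop()
            if w in assigned:
                continue
            assigned.add(w)
            comp.add(w)
            stack.extend(x for x in rev.get(w, []) if x not in assigned)
        comps.append(comp)
    return comps

# Toy reaction graph with a bow-tie shape: IN -> core cycle -> OUT
g = {'in1': ['a'], 'a': ['b'], 'b': ['c'], 'c': ['a', 'out1'], 'out1': []}
giant = max(sccs(g), key=len)
print(sorted(giant))  # ['a', 'b', 'c']
```

The reactions in the giant component would then be clustered by path-length distance, with the IN and OUT nodes attached afterwards by the majority rule.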
Peng, Cong; Chai, Liyuan; Tang, Chongjian; Min, Xiaobo; Song, Yuxia; Duan, Chengshan; Yu, Cheng
2017-01-01
Heavy metals and ammonia are difficult to remove from wastewater, as they easily combine into refractory complexes. The struvite formation method (SFM) was applied for the complex decomposition and simultaneous removal of heavy metal and ammonia. The results indicated that ammonia deprivation by SFM was the key factor leading to the decomposition of the copper-ammonia complex ion. Ammonia was separated from solution as crystalline struvite, and the copper mainly co-precipitated as copper hydroxide together with struvite. Hydrogen bonding and electrostatic attraction were considered to be the main surface interactions between struvite and copper hydroxide. Hydrogen bonding was concluded to be the key factor leading to the co-precipitation. In addition, incorporation of copper ions into the struvite crystal also occurred during the treatment process. Copyright © 2016. Published by Elsevier B.V.
Air trichloroethylene oxidation in a corona plasma-catalytic reactor
NASA Astrophysics Data System (ADS)
Masoomi-Godarzi, S.; Ranji-Burachaloo, H.; Khodadadi, A. A.; Vesali-Naseh, M.; Mortazavi, Y.
2014-08-01
The oxidative decomposition of trichloroethylene (TCE; 300 ppm) by non-thermal corona plasma was investigated in dry air at atmospheric pressure and room temperature, both in the absence and presence of catalysts including MnOx and CoOx. The catalysts were synthesized by a co-precipitation method. The morphology and structure of the catalysts were characterized by BET surface area measurement and Fourier transform infrared (FTIR) methods. Decomposition of TCE and the distribution of products were evaluated by a gas chromatograph (GC) and an FTIR. In the absence of the catalyst, TCE removal increases with the applied voltage and current intensity. Higher TCE removal and CO2 selectivity are observed in the presence of the corona and catalysts, as compared to those with the plasma alone. The results show that MnOx and CoOx catalysts can dissociate the in-plasma produced ozone into oxygen radicals, which enhances the TCE decomposition.
Tissue artifact removal from respiratory signals based on empirical mode decomposition.
Liu, Shaopeng; Gao, Robert X; John, Dinesh; Staudenmayer, John; Freedson, Patty
2013-05-01
On-line measurement of respiration plays an important role in monitoring human physical activities. Such measurement commonly employs sensing belts secured around the rib cage and abdomen of the test subject. Affected by the movement of body tissues, respiratory signals typically have a low signal-to-noise ratio. Removing tissue artifacts is therefore critical to ensuring effective respiration analysis. This paper presents a signal decomposition technique for tissue artifact removal from respiratory signals, based on the empirical mode decomposition (EMD). An algorithm based on mutual information and power criteria was devised to automatically select appropriate intrinsic mode functions for tissue artifact removal and respiratory signal reconstruction. Performance of the EMD-based algorithm was evaluated through simulations and real-life experiments (N = 105). Comparison with the low-pass filtering that has conventionally been applied confirmed the effectiveness of the technique in tissue artifact removal.
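The IMF-selection step can be sketched as follows, assuming the IMFs have already been computed by an EMD routine (e.g. an EMD library); a simple power criterion stands in for the paper's combined mutual-information and power criteria:

```python
import numpy as np

def reconstruct(imfs, signal, power_frac=0.05):
    """Rebuild the respiratory signal from IMFs that carry at least
    `power_frac` of the raw signal's power; a simplified stand-in for
    the mutual-information / power selection criteria."""
    total = np.mean(signal ** 2)
    keep = [imf for imf in imfs if np.mean(imf ** 2) >= power_frac * total]
    return np.sum(keep, axis=0)

# Toy 'IMFs': a respiratory-band sinusoid plus a weak tissue artifact
t = np.linspace(0, 10, 500)
resp = np.sin(2 * np.pi * 0.3 * t)              # ~0.3 Hz breathing component
artifact = 0.05 * np.sin(2 * np.pi * 5.0 * t)   # low-power motion artifact
rebuilt = reconstruct([resp, artifact], resp + artifact)
```

Selecting whole IMFs rather than applying a fixed cutoff frequency is what lets the method adapt to breathing rates that drift during activity.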
Violent societies: an application of orbital decomposition to the problem of human violence.
Spohn, M
2008-01-01
This study uses orbital decomposition to analyze the patterns of how governments lose their monopolies on violence, therefore allowing those societies to descend into violent states from which it is difficult to recover. The nonlinear progression by which the governing body loses its monopoly is based on the work of criminologist Lonnie Athens and applied from the individual to the societal scale. Four different kinds of societies are considered: Those where the governing body is both unwilling and unable to assert its monopoly on violence (former Yugoslavia); where it is unwilling (Peru); where it is unable (South Africa); and a smaller pocket of violent society within a larger, more stable one (Gujarat). In each instance, orbital decomposition turns up insights not apparent in the qualitative data or through linear statistical analysis, both about the nature of the descent into violence and about the progression itself.
NASA Astrophysics Data System (ADS)
Lahmiri, Salim
2016-08-01
The main purpose of this work is to explore the usefulness of fractal descriptors estimated in multi-resolution domains to characterize biomedical digital image texture. In this regard, three multi-resolution techniques are considered: the well-known discrete wavelet transform (DWT), the empirical mode decomposition (EMD), and the newly introduced variational mode decomposition (VMD). The original image is decomposed by the DWT, EMD, and VMD into different scales. Then, Fourier-spectrum-based fractal descriptors are estimated at specific scales and directions to characterize the image. The support vector machine (SVM) was used to perform supervised classification. The empirical study was applied to the problem of distinguishing between normal brain magnetic resonance images (MRI) and abnormal ones affected by Alzheimer's disease (AD). Our results demonstrate that fractal descriptors estimated in the VMD domain outperform those estimated in the DWT and EMD domains, and also those directly estimated from the original image.
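A Fourier-spectrum fractal descriptor of the kind mentioned above can be estimated as the log-log slope of the radially averaged power spectrum; the synthetic 1/f-type test field and the β = 3 target below are invented for the demonstration:

```python
import numpy as np

def spectrum_slope(img):
    """Slope of the radially averaged power spectrum on log-log axes."""
    F = np.fft.fftshift(np.fft.fft2(img))
    P = np.abs(F) ** 2
    n = img.shape[0]
    y, x = np.indices(P.shape)
    r = np.hypot(x - n // 2, y - n // 2).astype(int)
    radial = np.bincount(r.ravel(), P.ravel()) / np.bincount(r.ravel())
    k = np.arange(1, n // 2)                  # skip DC, stay below Nyquist
    slope, _ = np.polyfit(np.log(k), np.log(radial[k]), 1)
    return slope

# Synthetic field with power spectrum ~ k^-3 (random phases, fixed seed)
rng = np.random.default_rng(1)
n = 128
ky, kx = np.indices((n, n)) - n // 2
k = np.hypot(kx, ky)
k[n // 2, n // 2] = 1.0                       # avoid division by zero at DC
amp = k ** (-3.0 / 2.0)                       # amplitude ~ k^{-beta/2}
phase = np.exp(2j * np.pi * rng.random((n, n)))
img = np.real(np.fft.ifft2(np.fft.ifftshift(amp * phase)))
beta = -spectrum_slope(img)
```

For fBm-like surfaces the slope β maps to a fractal dimension, e.g. via D ≈ (8 − β)/2 under one common convention; the descriptor here would be computed per scale and direction of the DWT/EMD/VMD decomposition.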
A Graph Based Backtracking Algorithm for Solving General CSPs
NASA Technical Reports Server (NTRS)
Pang, Wanlin; Goodwin, Scott D.
2003-01-01
Many AI tasks can be formalized as constraint satisfaction problems (CSPs), which involve finding values for variables subject to constraints. While solving a CSP is an NP-complete task in general, tractable classes of CSPs have been identified based on the structure of the underlying constraint graphs. Much effort has been spent on exploiting structural properties of the constraint graph to improve the efficiency of finding a solution. These efforts contributed to the development of a class of CSP solving algorithms called decomposition algorithms. The strength of CSP decomposition is that its worst-case complexity depends on the structural properties of the constraint graph and is usually better than the worst-case complexity of search methods. Its practical application is limited, however, since it cannot be applied if the CSP is not decomposable. In this paper, we propose a graph-based backtracking algorithm called omega-CDBT, which shares the merits and overcomes the weaknesses of both decomposition and search approaches.
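For reference, plain backtracking search on a binary CSP, the baseline that decomposition and hybrid methods like the one proposed aim to improve upon, fits in a few lines; graph coloring is used as the example constraint:

```python
def backtrack(variables, domains, neighbors, assignment=None):
    """Plain backtracking search for a binary CSP where the constraint
    between adjacent variables is inequality (graph coloring)."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if all(assignment.get(nb) != value for nb in neighbors[var]):
            assignment[var] = value
            result = backtrack(variables, domains, neighbors, assignment)
            if result is not None:
                return result
            del assignment[var]               # undo and try the next value
    return None

# 3-colour a 4-cycle: adjacent variables must take different colours
nbrs = {'a': ['b', 'd'], 'b': ['a', 'c'], 'c': ['b', 'd'], 'd': ['c', 'a']}
doms = {v: ['r', 'g', 'b'] for v in nbrs}
sol = backtrack(list(nbrs), doms, nbrs)
```

Structure-aware methods exploit the shape of the `neighbors` graph (e.g. low tree-width) to bound the search that this naive version performs blindly.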
NASA Astrophysics Data System (ADS)
Iwabuchi, Masashi; Takahashi, Katsuyuki; Takaki, Koichi; Satta, Naoya
2016-07-01
The influence of sodium carbonate on the decomposition of formic acid by discharge inside bubbles in water was investigated experimentally. Oxygen or argon gas was injected into the water through a vertically positioned glass tube, in which a high-voltage wire electrode was placed to generate plasmas at low applied voltage. The concentration of formic acid was determined by ion chromatography. In the case of the sodium carbonate additive, the pH increased owing to the decomposition of the formic acid. In the case of oxygen injection, the percentage of conversion of formic acid increased with increasing pH because the reaction rate of ozone with formic acid increases with increasing pH. In the case of argon injection, the percentage of conversion was not affected by the pH owing to the high loss rate of hydroxyl radicals.
Automatic single-image-based rain streaks removal via image decomposition.
Kang, Li-Wei; Lin, Chia-Wen; Fu, Yu-Hsiang
2012-04-01
Rain removal from a video is a challenging problem and has been recently investigated extensively. Nevertheless, the problem of rain removal from a single image was rarely studied in the literature, where no temporal information among successive images can be exploited, making the problem very challenging. In this paper, we propose a single-image-based rain removal framework via properly formulating rain removal as an image decomposition problem based on morphological component analysis. Instead of directly applying a conventional image decomposition technique, the proposed method first decomposes an image into the low- and high-frequency (HF) parts using a bilateral filter. The HF part is then decomposed into a "rain component" and a "nonrain component" by performing dictionary learning and sparse coding. As a result, the rain component can be successfully removed from the image while preserving most original image details. Experimental results demonstrate the efficacy of the proposed algorithm.
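The first stage of the framework, splitting the image into low- and high-frequency parts, can be sketched as follows; a simple box filter stands in here for the bilateral filter actually used, which additionally preserves edges:

```python
import numpy as np

def split_frequencies(img, k=5):
    """Split an image into a low-frequency (smoothed) part and a
    high-frequency (residual) part, where rain streaks concentrate."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    low = np.zeros_like(img, dtype=float)
    for dy in range(k):                       # k x k box average
        for dx in range(k):
            low += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    low /= k * k
    return low, img - low

rng = np.random.default_rng(2)
img = rng.random((32, 32))
low, high = split_frequencies(img)
```

In the full method, only the high-frequency part is passed to dictionary learning and sparse coding, where rain atoms are separated from genuine texture atoms.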
A Bayesian hierarchical diffusion model decomposition of performance in Approach–Avoidance Tasks
Krypotos, Angelos-Miltiadis; Beckers, Tom; Kindt, Merel; Wagenmakers, Eric-Jan
2015-01-01
Common methods for analysing response time (RT) tasks, frequently used across different disciplines of psychology, suffer from a number of limitations such as the failure to directly measure the underlying latent processes of interest and the inability to take into account the uncertainty associated with each individual's point estimate of performance. Here, we discuss a Bayesian hierarchical diffusion model and apply it to RT data. This model allows researchers to decompose performance into meaningful psychological processes and to account optimally for individual differences and commonalities, even with relatively sparse data. We highlight the advantages of the Bayesian hierarchical diffusion model decomposition by applying it to performance on Approach–Avoidance Tasks, widely used in the emotion and psychopathology literature. Model fits for two experimental data sets demonstrate that the model performs well. The Bayesian hierarchical diffusion model overcomes important limitations of current analysis procedures and provides deeper insight into the latent psychological processes of interest. PMID:25491372
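The generative core of the diffusion model is a noisy accumulation process between two boundaries; a minimal Euler simulation, with parameter names and values chosen for illustration only:

```python
import numpy as np

def simulate_ddm(drift, n_trials=500, threshold=1.0, ndt=0.3,
                 dt=0.005, noise=1.0, seed=0):
    """Euler simulation of the two-boundary drift-diffusion model.
    Returns (choices, rts): choice 1 = upper boundary, 0 = lower."""
    rng = np.random.default_rng(seed)
    choices, rts = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < threshold:             # accumulate evidence to a bound
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        choices.append(1 if x > 0 else 0)
        rts.append(t + ndt)                   # add non-decision time
    return np.array(choices), np.array(rts)

# Stronger drift (easier stimulus) yields more upper-boundary responses
c_weak, _ = simulate_ddm(drift=0.5)
c_strong, _ = simulate_ddm(drift=2.0, seed=1)
```

Fitting the hierarchical Bayesian version inverts this generative process, estimating drift, threshold, and non-decision time per participant while sharing information across the group.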
Hydrogen production from alcohol reforming in a microwave ‘tornado’-type plasma
NASA Astrophysics Data System (ADS)
Tatarova, E.; Bundaleska, N.; Dias, F. M.; Tsyganov, D.; Saavedra, R.; Ferreira, C. M.
2013-12-01
In this work, an experimental investigation of microwave plasma-assisted reforming of different alcohols is presented. A microwave (2.45 GHz) ‘tornado’-type plasma with a high-speed tangential gas injection (swirl) at atmospheric pressure is applied to decompose alcohol molecules, namely methanol, ethanol and propanol, and to produce hydrogen-rich gas. The reforming efficiency is investigated both in Ar and Ar+ water vapor plasma environments. The hydrogen yield dependence on the partial alcohol flux is analyzed. Mass spectrometry and Fourier transform infrared spectroscopy are used to detect the outlet gas products from the decomposition process. Hydrogen, carbon monoxide, carbon dioxide and solid carbon are the main decomposition by-products. A significant increase in the hydrogen production rate is observed with the addition of a small amount of water. Furthermore, optical emission spectroscopy is applied to detect the radiation emitted by the plasma and to estimate the gas temperature and electron density.
Image compression using singular value decomposition
NASA Astrophysics Data System (ADS)
Swathi, H. R.; Sohini, Shah; Surbhi; Gopichand, G.
2017-11-01
We often need to transmit and store images in many applications. The smaller the image, the lower the cost associated with transmission and storage, so we often need to apply data compression techniques to reduce the storage space consumed by an image. One approach is to apply Singular Value Decomposition (SVD) to the image matrix. SVD refactors the given digital image into three matrices, and the singular values are used to reconstruct the image; at the end of this process, the image is represented with a smaller set of values, reducing the required storage space. The goal is to achieve compression while preserving the important features that describe the original image. SVD can be applied to any arbitrary m × n matrix, square or rectangular, invertible or not. Compression ratio and Mean Squared Error (MSE) are used as performance metrics.
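The method above admits a very compact sketch (matrix sizes and the rank values are illustrative):

```python
import numpy as np

def svd_compress(img, k):
    """Approximate the image matrix by its k largest singular values."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    approx = U[:, :k] * s[:k] @ Vt[:k, :]
    m, n = img.shape
    ratio = (m * n) / (k * (m + n + 1))   # stored values: k*(m + n + 1) vs m*n
    mse = np.mean((img - approx) ** 2)
    return approx, ratio, mse

rng = np.random.default_rng(1)
img = rng.random((64, 48))
_, ratio, mse_k5 = svd_compress(img, 5)
_, _, mse_k30 = svd_compress(img, 30)
```

Keeping more singular values always lowers the MSE but shrinks the compression ratio; the trade-off is chosen per application.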
Grau, L; Laulagnet, B
2015-05-01
An analytical approach is investigated to model ground-plate interaction based on modal decomposition and the two-dimensional Fourier transform. A finite rectangular plate subjected to flexural vibration is coupled with the ground and modeled under the Kirchhoff hypothesis. A Navier equation represents the stratified ground, assumed infinite in the x- and y-directions and free at the top surface. To obtain an analytical solution, modal decomposition is applied to the structure and a Fourier transform is applied to the ground. The result is a new tool for analyzing ground-plate interaction: the ground cross-modal impedance. It allows quantifying the added stiffness, added mass, and added damping from the ground to the structure. Similarity with the parallel acoustic problem is highlighted. A comparison between theory and experiment shows good matching. Finally, specific cases are investigated, notably the influence of layer depth on plate vibration.
Li, Yuxing; Li, Yaan; Chen, Xiao; Yu, Jing
2017-12-26
As the sound signal of a ship obtained by sensors contains many significant characteristics of the ship and is called ship-radiated noise (SN), research into denoising algorithms and their application is of great significance. Exploiting the advantages of variational mode decomposition (VMD) combined with the correlation coefficient (CC) for denoising, a hybrid secondary denoising algorithm is proposed using secondary VMD combined with the CC. First, different kinds of simulation signals are decomposed into several bandwidth-limited intrinsic mode functions (IMFs) using VMD, where the number of modes obtained by VMD is set equal to that obtained by empirical mode decomposition (EMD); then, the CCs between the IMFs and the simulation signal are calculated. The noise IMFs are identified by a CC threshold and the remaining IMFs are reconstructed to realize the first denoising process. Finally, secondary denoising of the simulation signal is accomplished by repeating the above steps of decomposition, screening and reconstruction. The final denoising result is determined according to the CC threshold. The denoising effect is compared under different signal-to-noise ratios and numbers of VMD decompositions. Experimental results show the validity of the proposed denoising algorithm using secondary VMD (2VMD) combined with CC compared to EMD denoising, ensemble EMD (EEMD) denoising, VMD denoising and cubic VMD (3VMD) denoising, as well as two recently presented denoising algorithms. The proposed denoising algorithm is applied to feature extraction and classification for SN signals, and can effectively improve the recognition rate of different kinds of ships.
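The CC-based screening step can be sketched as follows; for brevity the modes are hand-made stand-ins rather than true VMD outputs, so the signal construction is purely illustrative:

```python
import numpy as np

def screen_modes(signal, modes, threshold=0.4):
    """Keep modes whose correlation coefficient (CC) with the raw
    signal exceeds the threshold; discard the rest as noise, then
    reconstruct the denoised signal from the kept modes."""
    kept = [m for m in modes
            if abs(np.corrcoef(signal, m)[0, 1]) > threshold]
    return np.sum(kept, axis=0)

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 2000)
clean = np.sin(2 * np.pi * 5 * t)
noise_modes = [0.2 * rng.standard_normal(t.size) for _ in range(3)]
signal = clean + np.sum(noise_modes, axis=0)
# stand-in mode set: in practice the IMFs would come from VMD
denoised = screen_modes(signal, [clean] + noise_modes)
```

The tonal component correlates strongly with the raw signal while each noise mode correlates weakly, so thresholding the CC separates them.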
Augmented neural networks and problem structure-based heuristics for the bin-packing problem
NASA Astrophysics Data System (ADS)
Kasap, Nihat; Agarwal, Anurag
2012-08-01
In this article, we report on a research project in which we applied the augmented-neural-networks (AugNN) approach to solving the classical bin-packing problem (BPP). AugNN is a metaheuristic that combines a priority-rule heuristic with the iterative search approach of neural networks to generate good solutions fast. This is the first time this approach has been applied to the BPP. We also propose a decomposition approach for solving harder instances of the BPP, in which subproblems are solved using a combination of the AugNN approach and heuristics that exploit the problem structure. We discuss the characteristics of problems on which such problem structure-based heuristics can be applied. We empirically show the effectiveness of the AugNN and decomposition approaches on many benchmark problems in the literature. Of the 1210 benchmark problems tested, 917 were solved to optimality; the average gap between the obtained solution and the upper bound for all problems was reduced to under 0.66%, and computation time averaged below 33 s per problem. We also discuss the computational complexity of our approach.
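As a concrete example of a priority-rule heuristic of the kind AugNN builds on, here is first-fit-decreasing (shown for illustration; the paper's own priority rules may differ):

```python
def first_fit_decreasing(items, capacity):
    """Priority-rule heuristic for bin packing: consider items
    largest-first and place each into the first bin that still has
    room, opening a new bin when none fits."""
    bins = []
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:
            bins.append([item])
    return bins

bins = first_fit_decreasing([7, 5, 4, 3, 2, 2, 1], capacity=10)
```

On this instance the heuristic happens to reach the optimum of three bins; AugNN iteratively perturbs the priorities to escape instances where a single greedy pass falls short.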
Daily water level forecasting using wavelet decomposition and artificial intelligence techniques
NASA Astrophysics Data System (ADS)
Seo, Youngmin; Kim, Sungwon; Kisi, Ozgur; Singh, Vijay P.
2015-01-01
Reliable water level forecasting for reservoir inflow is essential for reservoir operation. The objective of this paper is to develop and apply two hybrid models for daily water level forecasting and investigate their accuracy. These two hybrid models are the wavelet-based artificial neural network (WANN) and the wavelet-based adaptive neuro-fuzzy inference system (WANFIS). Wavelet decomposition is employed to decompose an input time series into approximation and detail components. The decomposed time series are used as inputs to artificial neural networks (ANN) and an adaptive neuro-fuzzy inference system (ANFIS) for the WANN and WANFIS models, respectively. Based on statistical performance indexes, the WANN and WANFIS models are found to be more efficient than the ANN and ANFIS models, and WANFIS7-sym10 yields the best performance among all models. It is found that wavelet decomposition improves the accuracy of ANN and ANFIS. This study evaluates the accuracy of the WANN and WANFIS models for different mother wavelets, including Daubechies, Symmlet and Coiflet wavelets. It is found that model performance depends on the input sets and mother wavelets, and that wavelet decomposition using the mother wavelet db10 can further improve the efficiency of the ANN and ANFIS models. Results obtained from this study indicate that the conjunction of wavelet decomposition and artificial intelligence models can be a useful tool for accurately forecasting daily water levels and can yield better efficiency than conventional forecasting models.
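A one-level wavelet split into approximation and detail components, of the kind fed to the ANN/ANFIS models, can be sketched with the Haar wavelet (chosen for brevity; the study itself uses Daubechies, Symmlet and Coiflet families):

```python
import numpy as np

def haar_step(x):
    """One level of the Haar wavelet transform: split a series into a
    low-frequency approximation and a high-frequency detail component."""
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2)
    detail = (even - odd) / np.sqrt(2)
    return approx, detail

def haar_inverse(approx, detail):
    """Perfect reconstruction of the original series."""
    even = (approx + detail) / np.sqrt(2)
    odd = (approx - detail) / np.sqrt(2)
    out = np.empty(even.size + odd.size)
    out[0::2], out[1::2] = even, odd
    return out

# toy daily water level series: seasonal cycle plus trend
levels = np.sin(np.linspace(0, 8 * np.pi, 256)) + np.linspace(0, 1, 256)
approx, detail = haar_step(levels)   # candidate inputs for ANN / ANFIS
```

Deeper decompositions repeat `haar_step` on the approximation, yielding the multi-resolution input sets compared in the study.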
NASA Astrophysics Data System (ADS)
Wang, Lei; Liu, Zhiwen; Miao, Qiang; Zhang, Xin
2018-06-01
Mode mixing resulting from intermittent signals is an annoying problem associated with the local mean decomposition (LMD) method. Based on a noise-assisted approach, the ensemble local mean decomposition (ELMD) method alleviates the mode mixing issue of LMD to some degree. However, the product functions (PFs) produced by ELMD often contain considerable residual noise, so a relatively large number of ensemble trials are required to eliminate it. Furthermore, since different realizations of Gaussian white noise are added to the original signal, different trials may generate different numbers of PFs, making it difficult to take the ensemble mean. In this paper, a novel method called complete ensemble local mean decomposition with adaptive noise (CELMDAN) is proposed to solve these two problems. The method adds a particular, adaptive noise at every decomposition stage for each trial. Moreover, a unique residue is obtained after separating each PF, and this residue is used as the input for the next stage. Two simulated signals are analyzed to illustrate the advantages of CELMDAN in comparison with ELMD and CEEMDAN. To further demonstrate the efficiency of CELMDAN, the method is applied to diagnose faults in rolling bearings in an experimental case and an engineering case. The diagnosis results indicate that CELMDAN can extract more fault characteristic information with less interference than ELMD.
Adserias-Garriga, Joe; Hernández, Marta; Quijada, Narciso M; Rodríguez Lázaro, David; Steadman, Dawnie; Garcia-Gil, Jesús
2017-09-01
Understanding human decomposition is critical for its use in postmortem interval (PMI) estimation, having a significant impact on forensic investigations. In recognition of the need to establish the scientific basis for PMI estimation, several studies on decomposition have been carried out in recent years. The aims of the present study were: (i) to identify soil microbial communities involved in human decomposition through high-throughput sequencing (HTS) of DNA sequences from the different bacteria, (ii) to monitor quantitatively and qualitatively the decay of such signature species, and (iii) to describe successional changes in bacterial populations from the early putrefaction state until skeletonization. Three individuals donated to the University of Tennessee FAC were studied. Soil samples around the body were taken from the placement of the donor until the advanced decay/dry remains stage. Bacterial DNA extracts were obtained from the samples, HTS techniques were applied and bioinformatic data analysis was performed. The three cadavers showed similar overall successional changes. At the beginning of the decomposition process the soil microbiome consisted of diverse indigenous soil bacterial communities. As decomposition advanced, Firmicutes community abundance increased in the soil during the bloat stage. The growth curve of Firmicutes from human remains can be used to estimate time since death under Tennessee summer conditions. Copyright © 2017 Elsevier B.V. All rights reserved.
Carlton, Connor D; Mitchell, Samantha; Lewis, Patrick
2018-01-01
Over the past decade, Structure from Motion (SfM) has increasingly been used as a means of digital preservation and for documenting archaeological excavations, architecture, and cultural material. However, few studies have tapped the potential of using SfM to document and analyze taphonomic processes affecting burials for forensic science purposes. This project utilizes SfM models to elucidate specific post-depositional events that affected a series of three human cadavers deposited at the South East Texas Applied Forensic Science Facility (STAFS). The aim of this research was to test the ability of untrained researchers to employ spatial software and photogrammetry for data collection purposes. Over a period of three months, a single lens reflex (SLR) camera was used to capture a series of overlapping images at periodic stages in the decomposition process of each cadaver. These images were processed with photogrammetric software that creates a 3D model that can be measured, manipulated, and viewed. This project used photogrammetric and geospatial software to map changes in decomposition and movement of the body from original deposition points. Project results indicate that SfM and GIS are useful tools for documenting decomposition and taphonomic processes, and that photogrammetry is an efficient, relatively simple, and affordable tool for the documentation of decomposition. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Elbeih, Ahmed; Abd-Elghany, Mohamed; Elshenawy, Tamer
2017-03-01
The vacuum stability test (VST) is mainly used to study the compatibility and stability of energetic materials. In this work, VST has been used to study the thermal decomposition kinetics of four cyclic nitramines, 1,3,5-trinitro-1,3,5-triazinane (RDX), 1,3,5,7-tetranitro-1,3,5,7-tetrazocane (HMX), cis-1,3,4,6-tetranitrooctahydroimidazo-[4,5-d]imidazole (BCHMX) and 2,4,6,8,10,12-hexanitro-2,4,6,8,10,12-hexaazaisowurtzitane (ε-HNIW, CL-20), each bonded by a polyurethane matrix based on hydroxyl-terminated polybutadiene (HTPB). Model-fitting and model-free (isoconversional) methods have been applied to determine the decomposition kinetics from the VST results. For comparison, the decomposition kinetics were determined isothermally by the ignition delay technique and non-isothermally by the Advanced Kinetics and Technology Solution (AKTS) software. The activation energies for thermolysis obtained by the isoconversional method based on the VST technique were 157.1, 203.1, 190.0 and 176.8 kJ mol-1 for RDX/HTPB, HMX/HTPB, BCHMX/HTPB and CL20/HTPB, respectively. The model-fitting method showed that the mechanism of thermal decomposition of BCHMX/HTPB is controlled by a nucleation model, while all the other studied PBXs are controlled by diffusion models. A linear relationship between the ignition temperatures and the activation energies was observed. BCHMX/HTPB is an interesting new PBX at the research stage.
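The core of an isoconversional estimate is an Arrhenius plot at fixed conversion; the sketch below recovers an activation energy from synthetic rate constants (the pre-exponential factor and temperatures are illustrative, not data from the study):

```python
import numpy as np

R = 8.314                      # gas constant, J mol^-1 K^-1
Ea_true = 157.1e3              # J mol^-1, e.g. the value reported for RDX/HTPB
A = 1e15                       # illustrative pre-exponential factor, s^-1
T = np.array([450.0, 460.0, 470.0, 480.0, 490.0])   # K, illustrative

k = A * np.exp(-Ea_true / (R * T))      # Arrhenius rate constants

# Isoconversional idea: at fixed conversion, the slope of
# ln k versus 1/T equals -Ea/R.
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea_est = -slope * R
```

Repeating the fit at a grid of conversion values yields the activation energy as a function of conversion, which is how model-free kinetics flags a change of mechanism.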
A study of photothermal laser ablation of various polymers on microsecond time scales.
Kappes, Ralf S; Schönfeld, Friedhelm; Li, Chen; Golriz, Ali A; Nagel, Matthias; Lippert, Thomas; Butt, Hans-Jürgen; Gutmann, Jochen S
2014-01-01
To analyze the photothermal ablation of polymers, we designed a temperature measurement setup based on spectral pyrometry. The setup allows the acquisition of 2D temperature distributions with 1 μm spatial and 1 μs time resolution, and therefore the determination of the center temperature of a laser heating process. Finite element simulations were used to verify and understand the heat conversion and heat flow in the process. With this setup, the photothermal ablation of polystyrene, poly(α-methylstyrene), a polyimide and a triazene polymer was investigated. The thermal stability, the glass transition temperature Tg and the viscosity above Tg govern the ablation process. Thermal decomposition for the applied laser pulse of about 10 μs started at temperatures similar to the onset of decomposition in thermogravimetry. Furthermore, for polystyrene and poly(α-methylstyrene), both with a Tg between room temperature and the decomposition temperature, ablation already occurred at temperatures well below the decomposition temperature, only 30-40 K above Tg. The mechanism was photomechanical, i.e. stress due to the thermal expansion of the polymer was responsible for ablation. Low molecular weight polymers showed differences in photomechanical ablation, corresponding to their lower Tg and lower viscosity above the glass transition. However, the difference in ablated volume was only significant at higher temperatures, in the temperature regime of thermal decomposition at quasi-equilibrium time scales.
The relativistic theory of the chemical shift
NASA Astrophysics Data System (ADS)
Pyper, N. C.
1983-04-01
A relativistic theory of the NMR chemical shift for a closed-shell system is presented. The final expression for the shielding, derived by applying two Gordon decompositions to the Dirac current operator, closely parallels the Ramsey non-relativistic result.
Grouping individual independent BOLD effects: a new way to ICA group analysis
NASA Astrophysics Data System (ADS)
Duann, Jeng-Ren; Jung, Tzyy-Ping; Sejnowski, Terrence J.; Makeig, Scott
2009-04-01
A new group analysis method to summarize task-related BOLD responses based on independent component analysis (ICA) is presented. In contrast to the previously proposed group ICA (gICA) method, which first combines multi-subject fMRI data in either the temporal or spatial domain and applies ICA decomposition only once to the combined fMRI data to extract the task-related BOLD effects, the method presented here applies ICA decomposition to each individual subject's fMRI data to first find the independent BOLD effects specific to that subject. The task-related independent BOLD component is then selected from the resulting independent components of the single-subject ICA decomposition and grouped across subjects to derive the group inference. In this new ICA group analysis (ICAga) method, one does not need to assume that the task-related BOLD time courses are identical across brain areas and subjects, as in the grand ICA decomposition of spatially concatenated fMRI data. Neither does one need to assume that, after spatial normalization, voxels at the same coordinates represent exactly the same functional or structural brain anatomy across different subjects. Both assumptions are problematic given recent BOLD activation evidence. Further, since the independent BOLD effects are obtained from each individual subject, the ICAga method can better account for individual differences in the task-related BOLD effects, unlike the gICA approach, in which the task-related BOLD effects can only be accounted for by a single unified BOLD model across multiple subjects. As a result, the newly proposed ICAga method is able to better fit the task-related BOLD effects at the individual level and thus allows grouping more appropriate multisubject BOLD effects in the group analysis.
SDSS-IV MaNGA: bulge-disc decomposition of IFU data cubes (BUDDI)
NASA Astrophysics Data System (ADS)
Johnston, Evelyn J.; Häußler, Boris; Aragón-Salamanca, Alfonso; Merrifield, Michael R.; Bamford, Steven; Bershady, Matthew A.; Bundy, Kevin; Drory, Niv; Fu, Hai; Law, David; Nitschelm, Christian; Thomas, Daniel; Roman Lopes, Alexandre; Wake, David; Yan, Renbin
2017-02-01
With the availability of large integral field unit (IFU) spectral surveys of nearby galaxies, there is now the potential to extract spectral information from across the bulges and discs of galaxies in a systematic way. This information can address questions such as how these components built up with time, how galaxies evolve and whether their evolution depends on other properties of the galaxy such as its mass or environment. We present bulge-disc decomposition of IFU data cubes (BUDDI), a new approach to fit the two-dimensional light profiles of galaxies as a function of wavelength to extract the spectral properties of these galaxies' discs and bulges. The fitting is carried out using GALFITM, a modified form of GALFIT which can fit multiwaveband images simultaneously. The benefit of this technique over traditional multiwaveband fits is that the stellar populations of each component can be constrained using knowledge over the whole image and spectrum available. The decomposition has been developed using commissioning data from the Sloan Digital Sky Survey-IV Mapping Nearby Galaxies at APO (MaNGA) survey with redshifts z < 0.14 and coverage of at least 1.5 effective radii for a spatial resolution of 2.5 arcsec full width at half-maximum and field of view of > 22 arcsec, but can be applied to any IFU data of a nearby galaxy with similar or better spatial resolution and coverage. We present an overview of the fitting process, the results from our tests, and we finish with example stellar population analyses of early-type galaxies from the MaNGA survey to give an indication of the scientific potential of applying bulge-disc decomposition to IFU data.
The potential phototoxicity of nano-scale ZnO induced by visible light on freshwater ecosystems.
Du, Jingjing; Qv, Mingxiang; Zhang, Yuyan; Yin, Xiaoyun; Wan, Ning; Zhang, Baozhong; Zhang, Hongzhong
2018-06-06
With the development of nanotechnology, nanomaterials have been widely applied in anti-bacterial coatings, electronic devices, and personal care products. NanoZnO is one of the most widely used nanomaterials and its ecotoxicity has been extensively studied. To explore the potential phototoxicity of nanoZnO induced by visible light, we conducted a long-term experiment on litter decomposition of Typha angustifolia leaves with assessment of multiple facets of the fungal community. After 158 d of exposure, the decomposition rate of leaf litter was decreased by nanoZnO, with no additional effect from visible light. However, visible light enhanced the inhibitory effect of nanoZnO on the fungal sporulation rate due to light-induced dissolution of nanoZnO. In contrast, enzymes such as β-glucosidase, cellobiohydrolase, and leucine-aminopeptidase were significantly increased by the interaction of nanoZnO and visible light, which led to high efficiency of leaf carbon decomposition. Furthermore, different treatments and exposure times separated the fungal communities associated with litter decomposition. The study therefore provides evidence of the contribution of visible light to nanoparticle phototoxicity at the ecosystem level. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Wu, Qiujie; Tan, Liu; Xu, Sen; Liu, Dabin; Min, Li
2018-04-01
Numerous accidents involving emulsion explosives (EE) are attributed to uncontrolled thermal decomposition of ammonium nitrate emulsion (ANE, the intermediate of EE) and EE at large scale. In order to study the thermal decomposition characteristics of ANE and EE at different scales, a large-scale modified vented pipe test (MVPT) and two laboratory-scale tests, differential scanning calorimetry (DSC) and accelerating rate calorimetry (ARC), were applied in the present study. Both the scale effect and the water effect play an important role in the thermal stability of ANE and EE. The measured decomposition temperatures of ANE and EE in the MVPT are 146°C and 144°C, respectively, much lower than those in DSC and ARC. As the size of the same sample in DSC, ARC, and MVPT successively increases, the onset temperatures decrease. In the same test, the measured onset temperature of ANE is higher than that of EE, and the water content of the sample stabilizes it. The large-scale MVPT can provide information relevant to real-life operations, which carry more risk; continuous overheating should be avoided.
Kocurek, P; Kolomazník, K; Bařinová, M; Hendrych, J
2017-04-01
This paper deals with the problem of chromium recovery from chrome-tanned waste and thus with reducing the environmental impact of the leather industry. Chrome-tanned waste was transformed by alkaline enzymatic hydrolysis promoted by magnesium oxide into practically chromium-free, commercially applicable collagen hydrolysate and a filtration cake containing a high portion of chromium. The crude and magnesium-deprived chromium cakes were subjected to thermal decomposition at 650°C under oxygen-free conditions to reduce the amount of this waste and to study the effect of magnesium removal on the resulting products. Oxygen-free conditions were applied in order to prevent the oxidation of trivalent chromium into the hazardous hexavalent form. Thermal decomposition products from both crude and magnesium-deprived chrome cakes were characterized by a high chromium content, over 50%, which occurred as eskolaite (Cr2O3) and magnesiochromite (MgCr2O4) crystal phases, respectively. Thermal decomposition decreased the amount of chrome cake dry feed by 90%. Based on the performed experiments, a scheme for the total control of chromium in the leather industry was designed.
Zhu, Yizhou; He, Xingfeng; Mo, Yifei
2015-10-06
First-principles calculations were performed to investigate the electrochemical stability of lithium solid electrolyte materials in all-solid-state Li-ion batteries. The common solid electrolytes were found to have a limited electrochemical window. Our results suggest that the outstanding stability of the solid electrolyte materials is not thermodynamically intrinsic but originates from kinetic stabilization. The sluggish kinetics of the decomposition reactions cause a high overpotential, leading to the nominally wide electrochemical window observed in many experiments. The decomposition products, similar to solid-electrolyte interphases, mitigate the extreme chemical potential from the electrodes and protect the solid electrolyte from further decomposition. With the aid of the first-principles calculations, we revealed the passivation mechanism of these decomposition interphases and quantified the extensions of the electrochemical window from the interphases. We also found that the artificial coating layers applied at the solid electrolyte and electrode interfaces have a similar effect of passivating the solid electrolyte. Our newly gained understanding provides general principles for developing solid electrolyte materials with enhanced stability and for engineering interfaces in all-solid-state Li-ion batteries.
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Hara, Matthew J.; Kellogg, Cyndi M.; Parker, Cyrena M.
Ammonium bifluoride (ABF, NH4F·HF) is a well-known reagent for converting metal oxides to fluorides and for breaking down minerals and ores in order to extract useful components. It has more recently been applied to the decomposition of inorganic matrices prior to elemental analysis. Herein, a sample decomposition method that employs molten ABF treatment in the initial step is systematically evaluated across a range of inorganic sample types: glass, quartz, zircon, soil, and pitchblende ore. Method performance is evaluated across two variables: the duration of molten ABF treatment and the ratio of ABF reagent mass to sample mass. The degree of solubilization of these sample classes is compared to the fluoride stoichiometry that is theoretically necessary for complete fluorination of each sample type. Finally, the sample decomposition method is performed on several soil and pitchblende ore standard reference materials, after which elemental constituent analysis is performed by ICP-OES and ICP-MS. Elemental recoveries are compared to the certified values; the results indicate good to excellent recoveries across a range of alkaline earth, rare earth, transition metal, and actinide elements.
Caffo, Brian S.; Crainiceanu, Ciprian M.; Verduzco, Guillermo; Joel, Suresh; Mostofsky, Stewart H.; Bassett, Susan Spear; Pekar, James J.
2010-01-01
Functional connectivity is the study of correlations in measured neurophysiological signals. Altered functional connectivity has been shown to be associated with a variety of cognitive and memory impairments and dysfunction, including Alzheimer’s disease. In this manuscript we use a two-stage application of the singular value decomposition to obtain data-driven population-level measures of functional connectivity in functional magnetic resonance imaging (fMRI). The method is computationally simple and amenable to high dimensional fMRI data with large numbers of subjects. Simulation studies suggest the ability of the decomposition methods to recover population brain networks and their associated loadings. We further demonstrate the utility of these decompositions in a functional logistic regression model. The method is applied to a novel fMRI study of Alzheimer’s disease risk under a verbal paired associates task. We found an indication of alternative connectivity in clinically asymptomatic at-risk subjects when compared to controls, which was not significant in the light of multiple comparisons adjustment. The relevant brain network loads primarily on the temporal lobe and overlaps significantly with the olfactory areas and temporal poles. PMID:20227508
Caffo, Brian S; Crainiceanu, Ciprian M; Verduzco, Guillermo; Joel, Suresh; Mostofsky, Stewart H; Bassett, Susan Spear; Pekar, James J
2010-07-01
Functional connectivity is the study of correlations in measured neurophysiological signals. Altered functional connectivity has been shown to be associated with a variety of cognitive and memory impairments and dysfunction, including Alzheimer's disease. In this manuscript we use a two-stage application of the singular value decomposition to obtain data driven population-level measures of functional connectivity in functional magnetic resonance imaging (fMRI). The method is computationally simple and amenable to high dimensional fMRI data with large numbers of subjects. Simulation studies suggest the ability of the decomposition methods to recover population brain networks and their associated loadings. We further demonstrate the utility of these decompositions in a functional logistic regression model. The method is applied to a novel fMRI study of Alzheimer's disease risk under a verbal paired associates task. We found an indication of alternative connectivity in clinically asymptomatic at-risk subjects when compared to controls, which was not significant in the light of multiple comparisons adjustment. The relevant brain network loads primarily on the temporal lobe and overlaps significantly with the olfactory areas and temporal poles. Copyright (c) 2010 Elsevier Inc. All rights reserved.
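A toy version of the two-stage singular value decomposition can be sketched as follows; the data shapes, component count and noise level are all illustrative:

```python
import numpy as np

def group_connectivity(subject_data, k=3):
    """Stage 1: reduce each subject's time-by-voxel matrix to its top-k
    right singular vectors (subject-level spatial components).
    Stage 2: SVD of the stacked components yields population-level
    spatial patterns."""
    stage1 = []
    for X in subject_data:                 # X has shape (time, voxels)
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        stage1.append(Vt[:k])
    stacked = np.vstack(stage1)            # (subjects * k, voxels)
    _, _, Vt = np.linalg.svd(stacked, full_matrices=False)
    return Vt[:k]

rng = np.random.default_rng(3)
shared = rng.standard_normal((3, 50))      # population "networks"
subjects = [rng.standard_normal((100, 3)) @ shared
            + 0.1 * rng.standard_normal((100, 50))
            for _ in range(5)]
patterns = group_connectivity(subjects)
```

Because each subject is reduced before pooling, the second-stage SVD stays small even for large cohorts, which is the computational appeal noted in the abstract.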
NASA Astrophysics Data System (ADS)
Qin, Xinqiang; Hu, Gang; Hu, Kai
2018-01-01
The decomposition of multiple source images using bidimensional empirical mode decomposition (BEMD) often produces mismatched bidimensional intrinsic mode functions, either by their number or their frequency, making image fusion difficult. A solution to this problem is proposed using a fixed number of iterations and a union operation in the sifting process. By combining the local regional features of the images, an image fusion method has been developed. First, the source images are decomposed using the proposed BEMD to produce the first intrinsic mode function (IMF) and residue component. Second, for the IMF component, a selection and weighted average strategy based on local area energy is used to obtain a high-frequency fusion component. Third, for the residue component, a selection and weighted average strategy based on local average gray difference is used to obtain a low-frequency fusion component. Finally, the fused image is obtained by applying the inverse BEMD transform. Experimental results show that the proposed algorithm provides superior performance over methods based on wavelet transform, line and column-based EMD, and complex empirical mode decomposition, both in terms of visual quality and objective evaluation criteria.
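The local-area-energy selection strategy for the high-frequency component can be sketched as follows (window size and inputs are illustrative):

```python
import numpy as np

def local_energy(img, radius=1):
    """Sum of squared intensities over a (2r+1) x (2r+1) neighbourhood."""
    sq = img ** 2
    padded = np.pad(sq, radius, mode="edge")
    out = np.zeros_like(sq)
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            out += padded[dy:dy + sq.shape[0], dx:dx + sq.shape[1]]
    return out

def fuse_high_frequency(c1, c2):
    """Selection strategy: at each pixel keep the source component
    whose local area energy is larger."""
    return np.where(local_energy(c1) >= local_energy(c2), c1, c2)

rng = np.random.default_rng(4)
a, b = rng.standard_normal((2, 32, 32))   # stand-ins for two IMF components
fused = fuse_high_frequency(a, b)
```

The paper combines this selection rule with weighted averaging; the pure selection shown here is the simpler half of that strategy.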
Mason, H. E.; Uribe, E. C.; Shusterman, J. A.
2018-01-01
Tensor-rank decomposition methods have been applied to variable contact time 29Si{1H} CP/CPMG NMR data sets to extract NMR dynamics information and dramatically decrease conventional NMR acquisition times.
ADM For Solving Linear Second-Order Fredholm Integro-Differential Equations
NASA Astrophysics Data System (ADS)
Karim, Mohd F.; Mohamad, Mahathir; Saifullah Rusiman, Mohd; Che-Him, Norziha; Roslan, Rozaini; Khalid, Kamil
2018-04-01
In this paper, we apply the Adomian Decomposition Method (ADM) to numerically analyse linear second-order Fredholm integro-differential equations. The approximate solutions of the problems are calculated with the Maple package. Some numerical examples are considered to illustrate the ADM for solving this equation. The results are compared with the existing exact solutions. The Adomian decomposition method can thus be a good alternative method for solving linear second-order Fredholm integro-differential equations: it converges to the exact solution quickly and at the same time reduces the computational work. The results obtained by ADM demonstrate its ability and efficiency for solving these equations.
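The ADM recursion can be illustrated with a first-order toy version of a Fredholm integro-differential equation (the paper treats second-order problems, in Maple): each new series term integrates the kernel applied to the previous term. The grid, quadrature rule, and test equation below are illustrative assumptions.

```python
import numpy as np

def adm_fredholm(g, K, y0, x, n_terms=8):
    """Adomian series for y'(x) = g(x) + ∫₀¹ K(x,t) y(t) dt,  y(0) = y0.

    y_0(x)     = y0 + ∫₀ˣ g(s) ds
    y_{n+1}(x) = ∫₀ˣ ( ∫₀¹ K(s,t) y_n(t) dt ) ds
    The solution is the sum of the terms; all integrals use the
    trapezoidal rule on the grid x (assumed to span [0, 1]).
    """
    h = np.diff(x)

    def cumtrapz(f):                      # running integral ∫₀ˣ f
        return np.concatenate(([0.0], np.cumsum(h * (f[1:] + f[:-1]) / 2)))

    def trapz_rows(rows):                 # ∫₀¹ along the last axis
        return np.sum(h * (rows[:, 1:] + rows[:, :-1]) / 2, axis=1)

    term = y0 + cumtrapz(g(x))            # y_0
    y = term.copy()
    Kmat = K(x[:, None], x[None, :])
    for _ in range(n_terms):
        inner = trapz_rows(Kmat * term[None, :])
        term = cumtrapz(inner)            # y_{n+1}
        y = y + term
    return y
```

For K(s,t) = s*t, g(s) = e^s - s and y(0) = 1 the exact solution is y = e^x, and the series converges geometrically, matching the abstract's point about fast convergence with modest computational work.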
Decomposability and scalability in space-based observatory scheduling
NASA Technical Reports Server (NTRS)
Muscettola, Nicola; Smith, Stephen F.
1992-01-01
In this paper, we discuss issues of problem and model decomposition within the HSTS scheduling framework. HSTS was developed and originally applied in the context of the Hubble Space Telescope (HST) scheduling problem, motivated by the limitations of the current solution and, more generally, the insufficiency of classical planning and scheduling approaches in this problem context. We first summarize the salient architectural characteristics of HSTS and their relationship to previous scheduling and AI planning research. Then, we describe some key problem decomposition techniques supported by HSTS and underlying our integrated planning and scheduling approach, and we discuss the leverage they provide in solving space-based observatory scheduling problems.
Identification of particle-laden flow features from wavelet decomposition
NASA Astrophysics Data System (ADS)
Jackson, A.; Turnbull, B.
2017-12-01
A wavelet decomposition based technique is applied to air pressure data obtained from laboratory-scale powder snow avalanches. This technique is shown to be a powerful tool for identifying both repeatable and chaotic features at any frequency within the signal. Additionally, this technique is demonstrated to be a robust method for the removal of noise from the signal as well as being capable of removing other contaminants from the signal. Whilst powder snow avalanches are the focus of the experiments analysed here, the features identified can provide insight to other particle-laden gravity currents and the technique described is applicable to a wide variety of experimental signals.
Factor Analytic Approach to Transitive Text Mining using Medline Descriptors
NASA Astrophysics Data System (ADS)
Stegmann, J.; Grohmann, G.
Matrix decomposition methods were applied to examples of noninteractive literature sets sharing implicit relations. Document-by-term matrices were created from downloaded PubMed literature sets, the terms being the Medical Subject Headings (MeSH descriptors) assigned to the documents. The loadings of the factors derived from singular value or eigenvalue matrix decomposition were sorted according to absolute values and subsequently inspected for positions of terms relevant to the discovery of hidden connections. It was found that only a small number of factors had to be screened to find key terms in close neighbourhood, being separated by a small number of terms only.
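The loading-inspection step can be sketched with a singular value decomposition of a tiny document-by-term matrix; the matrix and term list below are invented for illustration (the study used real PubMed/MeSH data).

```python
import numpy as np

# Hypothetical toy document-by-term matrix: rows are documents,
# columns are MeSH-like terms, entries are assignment counts.
terms = ["magnesium", "migraine", "stress", "vessel", "calcium"]
docs_by_terms = np.array([
    [1, 0, 2, 1, 1],
    [0, 2, 1, 1, 0],
    [1, 1, 0, 0, 1],
    [2, 0, 1, 1, 0],
], dtype=float)

# Singular value decomposition; the rows of Vt are term loadings per factor.
U, s, Vt = np.linalg.svd(docs_by_terms, full_matrices=False)

# Sort the absolute loadings of the first factor; terms relevant to a hidden
# connection would be inspected for close neighbourhood in this ranking.
order = np.argsort(-np.abs(Vt[0]))
ranked = [terms[i] for i in order]   # terms with the largest loadings first
```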
A technique for plasma velocity-space cross-correlation
NASA Astrophysics Data System (ADS)
Mattingly, Sean; Skiff, Fred
2018-05-01
An advance in experimental plasma diagnostics is presented and used to make the first measurement of a plasma velocity-space cross-correlation matrix. The velocity space correlation function can detect collective fluctuations of plasmas through a localized measurement. An empirical decomposition, singular value decomposition, is applied to this Hermitian matrix in order to obtain the plasma fluctuation eigenmode structure on the ion distribution function. A basic theory is introduced and compared to the modes obtained by the experiment. A full characterization of these modes is left for future work, but an outline of this endeavor is provided. Finally, the requirements for this experimental technique in other plasma regimes are discussed.
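The mode-extraction step can be sketched under synthetic-data assumptions (two coherent velocity-space modes buried in noise stand in for the actual distribution-function measurements):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic fluctuation data: n_v velocity channels, n_t time samples,
# two coherent velocity-space modes plus measurement noise.
n_v, n_t = 16, 4096
k = np.arange(n_v)[:, None]
modes = np.exp(2j * np.pi * k * np.array([[1, 2]]) / n_v)          # (n_v, 2)
amps = rng.standard_normal((2, n_t)) + 1j * rng.standard_normal((2, n_t))
noise = 0.1 * (rng.standard_normal((n_v, n_t)) + 1j * rng.standard_normal((n_v, n_t)))
f = modes @ amps + noise

# Velocity-space cross-correlation matrix C(v, v') = <f(v) f*(v')>: Hermitian.
C = f @ f.conj().T / n_t

# For a Hermitian positive semi-definite C, the SVD coincides with the
# eigendecomposition; the leading singular vectors give the mode structures.
U, s, Vh = np.linalg.svd(C)
```

The two planted modes appear as two dominant singular values well separated from the noise floor.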
Analysis of typical fault-tolerant architectures using HARP
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.; Bechta Dugan, Joanne; Trivedi, Kishor S.; Rothmann, Elizabeth M.; Smith, W. Earl
1987-01-01
Difficulties encountered in the modeling of fault-tolerant systems are discussed. The Hybrid Automated Reliability Predictor (HARP) approach to modeling fault-tolerant systems is described. The HARP is written in FORTRAN, consists of nearly 30,000 lines of codes and comments, and is based on behavioral decomposition. Using the behavioral decomposition, the dependability model is divided into fault-occurrence/repair and fault/error-handling models; the characteristics and combining of these two models are examined. Examples in which the HARP is applied to the modeling of some typical fault-tolerant systems, including a local-area network, two fault-tolerant computer systems, and a flight control system, are presented.
Benhammouda, Brahim
2016-01-01
Since 1980, the Adomian decomposition method (ADM) has been extensively used as a simple, powerful tool that applies directly to solve different kinds of nonlinear equations, including functional, differential, integro-differential and algebraic equations. However, for differential-algebraic equations (DAEs) the ADM has been applied in only four earlier works. There, the DAEs are first pre-processed by transformations such as index reduction before the ADM is applied. The drawback of such transformations is that they can involve complex algorithms, can be computationally expensive and may lead to non-physical solutions. The purpose of this paper is to propose a novel technique that applies the ADM directly to solve a class of nonlinear higher-index Hessenberg DAE systems efficiently. The main advantage of this technique is that, firstly, it avoids complex transformations like index reduction and leads to a simple general algorithm. Secondly, it reduces the computational work by solving only linear algebraic systems with a constant coefficient matrix at each iteration, except for the first iteration, where the algebraic system is nonlinear (if the DAE is nonlinear with respect to the algebraic variable). To demonstrate the effectiveness of the proposed technique, we apply it to a nonlinear index-three Hessenberg DAE system with nonlinear algebraic constraints. This technique is straightforward and can be programmed in Maple or Mathematica to simulate real application problems.
Integrated structure/control design - Present methodology and future opportunities
NASA Technical Reports Server (NTRS)
Weisshaar, T. A.; Newsom, J. R.; Zeiler, T. A.; Gilbert, M. G.
1986-01-01
Attention is given to current methodology applied to the integration of the optimal design process for structures and controls. Multilevel linear decomposition techniques proved to be most effective in organizing the computational efforts necessary for ISCD (integrated structures and control design) tasks. With the development of large orbiting space structures and actively controlled, high performance aircraft, there will be more situations in which this concept can be applied.
In Silico Analyses of Substrate Interactions with Human Serum Paraoxonase 1
2008-01-01
The molecular basis of the substrate interactions of HuPON1 remains elusive. In this study, we apply homology modeling, docking, and molecular dynamics (MD) simulations to probe the binding interactions of HuPON1 with representative substrates.
O'Dwyer, Jean; Walshe, Dylan; Byrne, Kenneth A
2018-03-01
Large quantities of wood products have historically been disposed of in landfills. The fate of this vast pool of carbon plays an important role in national carbon balances and accurate emission reporting. The Republic of Ireland, like many EU countries, utilises the 2006 Intergovernmental Panel on Climate Change (IPCC) guidelines for greenhouse gas reporting in the waste sector, which provide default factors for emissions estimation. For wood products, the release of carbon is directly proportional to the decomposition of the degradable organic carbon fraction of the product, for which the IPCC provides a value of 0.5 (50%). However, in situ analytical results on the decomposition rates of carbon in landfilled wood do not corroborate this figure, suggesting that carbon emissions are likely overestimated. To assess the impact of this overestimation on emission reporting, carbon decomposition values obtained from the literature and the IPCC default factor were applied to the Irish wood fraction of landfilled waste for the years 1957-2016 and compared. Univariate analysis found a statistically significant difference between carbon (methane) emissions calculated using the IPCC default factor and decomposition factors from direct measurements for softwoods (F = 45.362, p < .001), hardwoods (F = 20.691, p < .001) and engineered wood products (U = 4.726, p < .001). However, there was no significant difference between emissions calculated using only the in situ analytical decomposition factors, regardless of time in landfill, location or, consequently, climate. This suggests that methane emissions from the wood fraction of landfilled waste in Ireland could be drastically overestimated, potentially by a factor of 56. The results of this study highlight the implications of emission reporting at a lower tier and prompt further research into the decomposition of wood products in landfills at a national level. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Murat, M.
2017-12-01
Color-blended frequency decomposition is a seismic attribute that can be used to draw out and visualize geomorphological features, enabling a better understanding of reservoir architecture and connectivity for both exploration and field development planning. Color-blended frequency decomposition was applied to seismic data in several areas of interest in the Deepwater Gulf of Mexico. The objective was stratigraphic characterization to better define reservoir extent, highlight depositional features, identify thicker reservoir zones and examine potential connectivity issues due to stratigraphic variability. Frequency decomposition is a technique to analyze changes in seismic frequency caused by changes in the reservoir thickness, lithology and fluid content. This technique decomposes or separates the seismic frequency spectra into discrete bands of frequency-limited seismic data using digital filters. The workflow consists of frequency (spectral) decomposition, RGB color blending of three frequency slices, and horizon or stratal slicing of the color-blended frequency data for interpretation. Patterns were visualized and identified in the data that were not obvious on standard stacked seismic sections. These seismic patterns were interpreted and compared to known geomorphological patterns and their environment of deposition. From this we inferred the distribution of potential reservoir sand versus non-reservoir shale and even finer scale details such as the overall direction of the sediment transport and relative thickness. In exploratory areas, stratigraphic characterization from spectral decomposition is used for prospect risking and well planning. Where well control exists, we can validate the seismic observations and our interpretation and use the stratigraphic/geomorphological information to better inform decisions on the need for and placement of development wells.
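The filter-and-blend part of the workflow can be sketched as below; the band edges, sample interval, and amplitude proxy are illustrative assumptions, not the interpreters' actual parameters.

```python
import numpy as np

def bandpass(section, lo, hi, dt):
    """Zero-phase FFT band-pass of each trace (axis 0 = time)."""
    n = section.shape[0]
    freqs = np.fft.rfftfreq(n, d=dt)
    mask = (freqs >= lo) & (freqs <= hi)
    spec = np.fft.rfft(section, axis=0)
    return np.fft.irfft(spec * mask[:, None], n=n, axis=0)

def rgb_blend(section, dt, bands=((10, 20), (20, 35), (35, 60))):
    """Decompose into three frequency bands and stack them as RGB channels.

    Each band's rectified amplitude (a crude envelope proxy) is normalized
    to [0, 1] so the three bands can be blended as red, green, and blue.
    """
    channels = []
    for lo, hi in bands:
        env = np.abs(bandpass(section, lo, hi, dt))
        channels.append(env / (env.max() + 1e-12))
    return np.stack(channels, axis=-1)     # shape (n_time, n_traces, 3)
```

A horizon or stratal slice through the resulting array would then be displayed directly as a color image for interpretation.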
Lumley decomposition of turbulent boundary layer at high Reynolds numbers
NASA Astrophysics Data System (ADS)
Tutkun, Murat; George, William K.
2017-02-01
The decomposition proposed by Lumley in 1966 is applied to a high Reynolds number turbulent boundary layer. The experimental database was created by a hot-wire rake of 143 probes in the Laboratoire de Mécanique de Lille wind tunnel. The Reynolds numbers based on momentum thickness (Reθ) are 9800 and 19 100. Three-dimensional decomposition is performed, namely, proper orthogonal decomposition (POD) in the inhomogeneous and bounded wall-normal direction, Fourier decomposition in the homogeneous spanwise direction, and Fourier decomposition in time. The first POD modes in both cases carry nearly 50% of turbulence kinetic energy when the energy is integrated over Fourier dimensions. The eigenspectra always peak near zero frequency and most of the large scale, energy carrying features are found at the low end of the spectra. The spanwise Fourier mode which has the largest amount of energy is the first spanwise mode and its symmetrical pair. Pre-multiplied eigenspectra have only one distinct peak and it matches the secondary peak observed in the log-layer of pre-multiplied velocity spectra. Energy carrying modes obtained from the POD scale with outer scaling parameters. Full or partial reconstruction of turbulent velocity signal based only on energetic modes or non-energetic modes revealed the behaviour of urms in distinct regions across the boundary layer. When urms is based on energetic reconstruction, there exists (a) an exponential decay from near wall to log-layer, (b) a constant layer through the log-layer, and (c) another exponential decay in the outer region. The non-energetic reconstruction reveals that urms has (a) an exponential decay from the near-wall to the end of log-layer and (b) a constant layer in the outer region. Scaling of urms using the outer parameters is best when both energetic and non-energetic profiles are combined.
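The wall-normal POD step can be illustrated with a toy correlation-matrix eigenproblem; the synthetic one-mode velocity field below stands in for the 143-probe rake measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic fluctuating velocity u(y, t): n_y wall-normal points, n_t
# snapshots, one energetic large-scale mode plus small-scale noise.
n_y, n_t = 64, 2000
y = np.linspace(0.0, 1.0, n_y)
phi = np.sin(np.pi * y)                                  # dominant mode shape
u = np.outer(phi, rng.standard_normal(n_t)) + 0.2 * rng.standard_normal((n_y, n_t))

# POD in the inhomogeneous wall-normal direction: eigenmodes of the
# two-point correlation matrix R(y, y') = <u(y, t) u(y', t)>.
R = u @ u.T / n_t
lam, modes = np.linalg.eigh(R)                           # ascending order
lam, modes = lam[::-1], modes[:, ::-1]                   # sort by energy
energy_fraction = lam / lam.sum()                        # energy per mode
```

In the full three-dimensional decomposition, this eigenproblem is solved per spanwise/temporal Fourier mode rather than on the raw signal.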
Ribéreau-Gayon, Agathe; Rando, Carolyn; Morgan, Ruth M; Carter, David O
2018-05-01
In the context of increased scrutiny of the methods in forensic sciences, it is essential to ensure that the approaches used in forensic taphonomy to measure decomposition and estimate the postmortem interval are underpinned by robust evidence-based data. Digital photographs are an important source of documentation in forensic taphonomic investigations, but the suitability of the current approaches for photographs, rather than real-time remains, is poorly studied, which can undermine accurate forensic conclusions. The present study aimed to investigate the suitability of 2D colour digital photographs for evaluating decomposition of exposed human analogues (Sus scrofa domesticus) in a tropical savanna environment (Hawaii), using two published scoring methods: Megyesi et al. (2005) and Keough et al. (2017). It was found that there were significant differences between the real-time and photograph decomposition scores when the Megyesi et al. method was used. However, the Keough et al. method applied to photographs reflected real-time decomposition more closely and thus appears more suitable for evaluating pig decomposition from 2D photographs. The findings indicate that the type of scoring method used has a significant impact on the ability to accurately evaluate the decomposition of exposed pig carcasses from photographs. It was further identified that photographic taphonomic analysis can reach high inter-observer reproducibility. These novel findings are of significant importance for the forensic sciences as they highlight the potential for high-quality photograph coverage to provide useful complementary information for the forensic taphonomic investigation. New recommendations for developing robust, transparent approaches adapted to photographs in forensic taphonomy are suggested based on these findings. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Degradation of sulfamethazine by gamma irradiation in the presence of hydrogen peroxide.
Liu, Yuankun; Wang, Jianlong
2013-04-15
The gamma-irradiation-induced degradation of sulfamethazine (SMT) in aqueous solution in the presence of hydrogen peroxide (H2O2) was investigated. The initial SMT concentration was 20 mg/L, and it was irradiated in the presence of extra H2O2 at initial concentrations of 0, 10 and 30 mg/L. The results showed that gamma irradiation was effective for removing SMT from aqueous solution and its degradation conformed to pseudo-first-order kinetics under the applied conditions. When the initial H2O2 concentration was in the range of 0-30 mg/L, a higher concentration of H2O2 was more effective for the decomposition and mineralization of SMT. However, the removal of total organic carbon (TOC) was not as effective as that of SMT. Total nitrogen (TN) was not removed even at an absorbed dose of 5 kGy, the highest dose applied in this study. Major decomposition products of SMT, including degradation intermediates, organic acids and some inorganic ions, were detected by high performance liquid chromatography (HPLC) and ion chromatography (IC). Sulfate (SO4(2-)), formic acid (HCOOH), acetic acid (CH3COOH), 4-aminophenol and 4-nitrophenol were identified in the irradiated solutions. Possible pathways for SMT decomposition by gamma irradiation in aqueous solution were proposed. Copyright © 2013 Elsevier B.V. All rights reserved.
Banaschik, Robert; Jablonowski, Helena; Bednarski, Patrick J; Kolb, Juergen F
2018-01-15
Seven recalcitrant pharmaceutical residues (diclofenac, 17α-ethinylestradiol, carbamazepine, ibuprofen, trimethoprim, diazepam, diatrizoate) were decomposed by pulsed corona plasma generated directly in water. The detailed degradation pathway was investigated for diclofenac, and 21 intermediates could be identified in the degradation cascade. Hydroxyl radicals were found to be primarily responsible for the decomposition steps. By spin-trap-enhanced electron paramagnetic resonance spectroscopy (EPR), OH adducts and superoxide anion radical adducts were detected and could be distinguished by applying BMPO as a spin trap. The increase in adduct concentrations qualitatively follows the increase in hydrogen peroxide concentrations. Hydrogen peroxide is eventually consumed in Fenton-like processes, but its concentration increases continuously to about 2 mM for a plasma treatment of 70 min. Degradation of diclofenac inversely follows the hydrogen peroxide concentrations. No qualitative differences were observed between byproducts formed during plasma treatment and those formed via Fenton-induced degradation processes. Findings on the degradation kinetics of diclofenac provide an instructive understanding of decomposition rates for recalcitrant pharmaceuticals with respect to their chemical structure. Accordingly, conclusions can be drawn for further development and a first risk assessment of the method, which can also be applied to other AOPs that rely on the generation of hydroxyl radicals. Copyright © 2017 Elsevier B.V. All rights reserved.
Development of WRF-ROI system by incorporating eigen-decomposition
NASA Astrophysics Data System (ADS)
Kim, S.; Noh, N.; Song, H.; Lim, G.
2011-12-01
This study presents the development of the WRF-ROI system, an implementation of Retrospective Optimal Interpolation (ROI) in the Weather Research and Forecasting model (WRF). ROI is a data assimilation algorithm introduced by Song et al. (2009) and Song and Lim (2009). The formulation of ROI is similar to that of Optimal Interpolation (OI), but ROI iteratively assimilates an observation set at a post-analysis time into a prior analysis, potentially providing high-quality reanalysis data. The ROI method assimilates data at a post-analysis time using the perturbation method (Errico and Raeder, 1999) without an adjoint model. In a previous study, the ROI method was applied to the Lorenz 40-variable model (Lorenz, 1996) to validate the algorithm and investigate its capability. It is therefore necessary to apply the ROI method in a more realistic and complicated model framework such as WRF. In this research, the reduced-rank formulation of ROI is used instead of a reduced-resolution method. The computational cost can be reduced thanks to the eigen-decomposition of the background error covariance in the reduced-rank method. When a single profile of observations is assimilated in the WRF-ROI system incorporating eigen-decomposition, the analysis error tends to be reduced compared with the background error. The difference between forecast errors with and without assimilation clearly increases with time, indicating that assimilation improves the forecast.
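The reduced-rank idea, eigen-decomposing the background error covariance B and keeping only the leading modes before the OI-style update, can be sketched as below; the matrix sizes and the linear observation operator are toy assumptions, not the WRF configuration.

```python
import numpy as np

def reduced_rank_oi(xb, B, H, R, y, rank):
    """OI analysis step with B replaced by its rank-k eigen-approximation.

    Truncating the eigendecomposition of B is what cuts the cost of the
    iterated ROI updates relative to carrying the full covariance.
    """
    lam, E = np.linalg.eigh(B)
    idx = np.argsort(lam)[::-1][:rank]             # leading eigenpairs
    Bk = (E[:, idx] * lam[idx]) @ E[:, idx].T      # rank-k approximation of B
    S = H @ Bk @ H.T + R                           # innovation covariance
    K = Bk @ H.T @ np.linalg.inv(S)                # gain matrix
    return xb + K @ (y - H @ xb)                   # analysis state
```

With `rank` equal to the full state dimension this reduces to the standard OI update, which makes the truncation error easy to monitor on small test systems.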
Relaxations to Sparse Optimization Problems and Applications
NASA Astrophysics Data System (ADS)
Skau, Erik West
Parsimony is a fundamental property that is applied to many characteristics in a variety of fields. Of particular interest are optimization problems that apply rank, dimensionality, or support in a parsimonious manner. In this thesis we study some optimization problems and their relaxations, and focus on properties and qualities of the solutions of these problems. The Gramian tensor decomposition problem attempts to decompose a symmetric tensor as a sum of rank one tensors. We approach the Gramian tensor decomposition problem with a relaxation to a semidefinite program. We study conditions which ensure that the solution of the relaxed semidefinite problem gives the minimal Gramian rank decomposition. Sparse representations with learned dictionaries are one of the leading image modeling techniques for image restoration. When learning these dictionaries from a set of training images, the sparsity parameter of the dictionary learning algorithm strongly influences the content of the dictionary atoms. We describe geometrically the content of trained dictionaries and how it changes with the sparsity parameter. We use statistical analysis to characterize how the different content is used in sparse representations. Finally, a method to control the structure of the dictionaries is demonstrated, allowing us to learn a dictionary which can later be tailored for specific applications. Variations of dictionary learning can be broadly applied to a variety of applications. We explore a pansharpening problem with a triple factorization variant of coupled dictionary learning. Another application of dictionary learning is computer vision. Computer vision relies heavily on object detection, which we explore with a hierarchical convolutional dictionary learning model. Data fusion of disparate modalities is a growing topic of interest. We do a case study to demonstrate the benefit of using social media data with satellite imagery to estimate hazard extents.
In this case study analysis we apply a maximum entropy model, guided by the social media data, to estimate the flooded regions during a 2013 flood in Boulder, CO and show that the results are comparable to those obtained using expert information.
Environmental fate of emamectin benzoate after tree micro injection of horse chestnut trees.
Burkhard, Rene; Binz, Heinz; Roux, Christian A; Brunner, Matthias; Ruesch, Othmar; Wyss, Peter
2015-02-01
Emamectin benzoate, an insecticide derived from the avermectin family of natural products, has a unique translocation behavior in trees when applied by tree micro injection (TMI), which can result in protection from insect pests (foliar and borers) for several years. The active ingredient imported into leaves was measured at the end of the season in the fallen leaves of treated horse chestnut (Aesculus hippocastanum) trees. The dissipation of emamectin benzoate in these leaves appears to be biphasic and depends on the decomposition of the leaf. In compost piles, where decomposition of leaves was fastest, a cumulative emamectin benzoate degradation half-life of 20 d was measured. In leaves immersed in water, where decomposition was much slower, the degradation half-life was 94 d, and in leaves left on the ground in contact with soil, where decomposition was slowest, the degradation half-life was 212 d. The biphasic decline and the correlation with leaf decomposition might be attributed to extensive sorption of emamectin benzoate residues to leaf macromolecules. This may also explain why earthworms ingesting leaves from injected trees take up very little emamectin benzoate and excrete it with their feces. Furthermore, no emamectin benzoate was found in water containing decomposing leaves from injected trees. It is concluded that emamectin benzoate present in abscised leaves from horse chestnut trees injected with the insecticide is not available to nontarget organisms present in soil or water bodies. Published 2014 SETAC.
Kreutzweiser, D P; Gringorten, J L; Thomas, D R; Butcher, J T
1996-04-01
Epilithic microbial communities were colonized on leaf disks and exposed to commercial preparations of Bacillus thuringiensis var. kurstaki (Btk) in aquatic microcosms. Responses in terms of microbial respiration, bacterial cell density, protozoan density, and microbial decomposition activity were measured. Test concentrations for treatments with Dipel 64AF and Dipel 8AF in microcosms were the expected environmental concentration (EEC) of 20 IU/ml, 100x the EEC, and 1000x the EEC. Bacterial cell density in the biofilm of leaf disks was significantly increased at concentrations as low as the EEC. There were no concomitant alterations in protozoan density. Microbial respiration was significantly increased, and decomposition activity was significantly decreased, but only at the artificially high concentration of 1000x the EEC. This effect was attributed to the spore-crystal component rather than formulation ingredients. Microbial decomposition of leaf material was also determined in outdoor stream channels treated at concentrations ranging from the EEC to 100x the EEC. Although there tended to be reduced decomposition activity in treated channels, there were no significant differences in mass loss of leaf material between treated and control channels. Various regression, classification, and ordination procedures were applied to the experimental data, and none indicated significant treatment effects. These results from laboratory and controlled field experiments indicate that contamination of watercourses with Btk is unlikely to result in significant adverse effects on microbial community function in terms of detrital decomposition.
Liu, Zhengyan; Mao, Xianqiang; Song, Peng
2017-01-01
Temporal index decomposition analysis and spatial index decomposition analysis were applied to understand the driving forces of the emissions embodied in China's exports and net exports during 2002-2011, respectively. The accumulated emissions embodied in exports accounted for approximately 30% of the total emissions in China; although the contribution of the sectoral total emissions intensity (technique effect) declined, the scale effect was largely responsible for the mounting emissions associated with export, and the composition effect played a largely insignificant role. Calculations of the emissions embodied in net exports suggest that China is generally in an environmentally inferior position compared with its major trade partners. The differences in the economy-wide emission intensities between China and its major trade partners were the biggest contribution to this reality, and the trade balance effect played a less important role. However, a lower degree of specialization in pollution-intensive products in exports than in imports helped to slightly reduce the emissions embodied in net exports. The temporal index decomposition analysis results suggest that China should take effective measures to optimize export and supply-side structure and reduce the total emissions intensity. According to the spatial index decomposition analysis, a more aggressive import policy was useful for curbing domestic and global emissions, and the transfer of advanced production technologies and emission control technologies from developed to developing countries should be a compulsory global environmental policy option to mitigate the possible leakage of pollution emissions caused by international trade.
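The scale/composition/technique split used in such analyses can be illustrated with an additive LMDI-I sketch; the three multiplicative factors and their values below are invented for illustration, not taken from the study.

```python
import numpy as np

def logmean(a, b):
    """Logarithmic mean L(a, b) = (a - b) / (ln a - ln b), with L(a, a) = a."""
    return np.where(np.isclose(a, b), a, (a - b) / (np.log(a) - np.log(b)))

def lmdi_effects(E0, E1, factors0, factors1):
    """Additive LMDI-I: split the change in E = Π_k x_k into one effect
    per factor, effect_k = L(E1, E0) * ln(x1_k / x0_k). The effects sum
    exactly to E1 - E0, which is what makes the decomposition additive.
    """
    L = logmean(E1, E0)
    return [float(L * np.log(x1 / x0)) for x0, x1 in zip(factors0, factors1)]

# Hypothetical example: emissions = activity (scale) x structure x intensity.
scale0, struct0, inten0 = 100.0, 0.30, 2.0    # base year
scale1, struct1, inten1 = 160.0, 0.28, 1.5    # end year
E0, E1 = scale0 * struct0 * inten0, scale1 * struct1 * inten1
effects = lmdi_effects(E0, E1, (scale0, struct0, inten0),
                       (scale1, struct1, inten1))
```

In this toy case the scale effect is positive while the intensity (technique) effect is negative, mirroring the pattern the abstract reports for China's exports.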
Niang, Oumar; Thioune, Abdoulaye; El Gueirea, Mouhamed Cheikh; Deléchelle, Eric; Lemoine, Jacques
2012-09-01
The major problem with the empirical mode decomposition (EMD) algorithm is its lack of a theoretical framework. So, it is difficult to characterize and evaluate this approach. In this paper, we propose, in the 2-D case, the use of an alternative implementation to the algorithmic definition of the so-called "sifting process" used in the original Huang's EMD method. This approach, especially based on partial differential equations (PDEs), was presented by Niang in previous works, in 2005 and 2007, and relies on a nonlinear diffusion-based filtering process to solve the mean envelope estimation problem. In the 1-D case, the efficiency of the PDE-based method, compared to the original EMD algorithmic version, was also illustrated in a recent paper. Recently, several 2-D extensions of the EMD method have been proposed. Despite some effort, 2-D versions for EMD appear poorly performing and are very time consuming. So in this paper, an extension to the 2-D space of the PDE-based approach is extensively described. This approach has been applied in cases of both signal and image decomposition. The obtained results confirm the usefulness of the new PDE-based sifting process for the decomposition of various kinds of data. Some results have been provided in the case of image decomposition. The effectiveness of the approach encourages its use in a number of signal and image applications such as denoising, detrending, or texture analysis.
Hybridization of decomposition and local search for multiobjective optimization.
Ke, Liangjun; Zhang, Qingfu; Battiti, Roberto
2014-10-01
Combining ideas from evolutionary algorithms, decomposition approaches, and Pareto local search, this paper suggests a simple yet efficient memetic algorithm for combinatorial multiobjective optimization problems: memetic algorithm based on decomposition (MOMAD). It decomposes a combinatorial multiobjective problem into a number of single objective optimization problems using an aggregation method. MOMAD evolves three populations: 1) population P(L) for recording the current solution to each subproblem; 2) population P(P) for storing starting solutions for Pareto local search; and 3) an external population P(E) for maintaining all the nondominated solutions found so far during the search. A problem-specific single objective heuristic can be applied to these subproblems to initialize the three populations. At each generation, a Pareto local search method is first applied to search a neighborhood of each solution in P(P) to update P(L) and P(E). Then a single objective local search is applied to each perturbed solution in P(L) for improving P(L) and P(E), and reinitializing P(P). The procedure is repeated until a stopping condition is met. MOMAD provides a generic hybrid multiobjective algorithmic framework in which problem-specific knowledge, well-developed single-objective local search heuristics and Pareto local search methods can be hybridized. It is a population-based iterative method and thus an anytime algorithm. Extensive experiments have been conducted in this paper to study MOMAD and compare it with some other state-of-the-art algorithms on the multiobjective traveling salesman problem and the multiobjective knapsack problem. The experimental results show that our proposed algorithm outperforms or performs similarly to the best-so-far heuristics on these two problems.
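Two of the building blocks named above, the weighted-sum aggregation that defines the subproblems and the nondominated external population P(E), can be sketched as follows (a toy minimization setting; the paper's full MOMAD loop with P(L), P(P) and Pareto local search is not reproduced here).

```python
import numpy as np

def aggregate(weights, objs):
    """Weighted-sum scalarization: scalar fitness of each solution (rows of
    objs are objective vectors) for the subproblem defined by `weights`."""
    return objs @ weights

def update_archive(archive, cand):
    """External population P(E): keep only mutually nondominated objective
    vectors (minimization). Returns the updated list of archive members."""
    if any(np.all(a <= cand) and np.any(a < cand) for a in archive):
        return archive                                # cand is dominated
    kept = [a for a in archive if not (np.all(cand <= a) and np.any(cand < a))]
    return kept + [cand]                              # cand replaces dominated
```

A set of evenly spread weight vectors then yields one subproblem per member of P(L), and every solution visited by local search is offered to the archive.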
Ozerov, Ivan V; Lezhnina, Ksenia V; Izumchenko, Evgeny; Artemov, Artem V; Medintsev, Sergey; Vanhaelen, Quentin; Aliper, Alexander; Vijg, Jan; Osipov, Andreyan N; Labat, Ivan; West, Michael D; Buzdin, Anton; Cantor, Charles R; Nikolsky, Yuri; Borisov, Nikolay; Irincheeva, Irina; Khokhlovich, Edward; Sidransky, David; Camargo, Miguel Luiz; Zhavoronkov, Alex
2016-11-16
Signalling pathway activation analysis is a powerful approach for extracting biologically relevant features from large-scale transcriptomic and proteomic data. However, modern pathway-based methods often fail to provide stable pathway signatures of a specific phenotype or reliable disease biomarkers. In the present study, we introduce the in silico Pathway Activation Network Decomposition Analysis (iPANDA) as a scalable robust method for biomarker identification using gene expression data. The iPANDA method combines precalculated gene coexpression data with gene importance factors based on the degree of differential gene expression and pathway topology decomposition for obtaining pathway activation scores. Using Microarray Analysis Quality Control (MAQC) data sets and pretreatment data on Taxol-based neoadjuvant breast cancer therapy from multiple sources, we demonstrate that iPANDA provides significant noise reduction in transcriptomic data and identifies highly robust sets of biologically relevant pathway signatures. We successfully apply iPANDA for stratifying breast cancer patients according to their sensitivity to neoadjuvant therapy.
Energy decomposition analysis for exciplexes using absolutely localized molecular orbitals
NASA Astrophysics Data System (ADS)
Ge, Qinghui; Mao, Yuezhi; Head-Gordon, Martin
2018-02-01
An energy decomposition analysis (EDA) scheme is developed for understanding the intermolecular interaction involving molecules in their excited states. The EDA utilizes absolutely localized molecular orbitals to define intermediate states and is compatible with excited state methods based on linear response theory such as configuration interaction singles and time-dependent density functional theory. The shift in excitation energy when an excited molecule interacts with the environment is decomposed into frozen, polarization, and charge transfer contributions, and the frozen term can be further separated into Pauli repulsion and electrostatics. These terms can be added to their counterparts obtained from the ground state EDA to form a decomposition of the total interaction energy. The EDA scheme is applied to study a variety of systems, including some model systems to demonstrate the correct behavior of all the proposed energy components as well as more realistic systems such as hydrogen-bonding complexes (e.g., formamide-water, pyridine/pyrimidine-water) and halide (F⁻, Cl⁻)-water clusters that involve charge-transfer-to-solvent excitations.
Thermal stability of tungsten sub-nitride thin film prepared by reactive magnetron sputtering
NASA Astrophysics Data System (ADS)
Zhang, X. X.; Wu, Y. Z.; Mu, B.; Qiao, L.; Li, W. X.; Li, J. J.; Wang, P.
2017-03-01
Tungsten sub-nitride thin films deposited on silicon samples by reactive magnetron sputtering were used as a model system to study the phase stability and microstructural evolution during thermal treatments. XRD, SEM/FIB, XPS, RBS, and TDS were applied to investigate the stability of tungsten nitride films after heating up to 1473 K in vacuum. At the given experimental parameters, a 920 nm thick crystalline film with a tungsten-to-nitrogen stoichiometry of 2:1 was achieved. The results showed that no phase or microstructural change occurred upon annealing the W2N film in vacuum up to 973 K. Heating to 1073 K led to partial decomposition of the W2N phase and the formation of a W-enriched layer at the surface. With increasing annealing time at the same temperature, further decomposition of the W2N phase was negligible. Complete decomposition of the W2N film occurred when the temperature reached 1473 K.
Phase decomposition and ordering in Ni-11.3 at.% Ti studied with atom probe tomography.
Al-Kassab, T; Kompatscher, M; Kirchheim, R; Kostorz, G; Schönfeld, B
2014-09-01
The decomposition behavior of Ni-rich Ni-Ti was reassessed using the tomographic atom probe (TAP) and the laser-assisted wide-angle tomographic atom probe. Single-crystalline specimens of Ni-11.3 at.% Ti were investigated; the states selected from the decomposition path were the metastable γ″ and γ′ states, introduced on the basis of small-angle neutron scattering (SANS) and the two-phase model used for its evaluation. The composition values of the precipitates in these states could not be confirmed by the APT data, as the interface of the ordered precipitates may not be neglected. The present results rather suggest applying a three-phase model for the interpretation of SANS measurements, in which the width of the interface remains nearly unchanged and the L12 structure close to 3:1 stoichiometry is maintained in the core of the precipitates from the γ″ to the γ′ state. Copyright © 2014 Elsevier Ltd. All rights reserved.
Palm vein recognition based on directional empirical mode decomposition
NASA Astrophysics Data System (ADS)
Lee, Jen-Chun; Chang, Chien-Ping; Chen, Wei-Kuei
2014-04-01
Directional empirical mode decomposition (DEMD) has recently been proposed to make empirical mode decomposition suitable for texture analysis. Using DEMD, samples are decomposed into a series of images, referred to as two-dimensional intrinsic mode functions (2-D IMFs), from fine to coarse scales. A DEMD-based two-dimensional linear discriminant analysis (2DLDA) for palm vein recognition is proposed. The proposed method progresses through three steps: (i) a set of 2-D IMF features of various scales and orientations is extracted using DEMD, (ii) the 2DLDA method is then applied to reduce the dimensionality of the feature space in both the row and column directions, and (iii) the nearest-neighbor classifier is used for classification. We also propose two strategies for using the set of 2-D IMF features: ensemble DEMD vein representation (EDVR) and multichannel DEMD vein representation (MDVR). In experiments using palm vein databases, the proposed MDVR-based 2DLDA method achieved a recognition accuracy of 99.73%, demonstrating its feasibility for palm vein recognition.
Thermal desorption of dimethyl methylphosphonate from MoO3
Head, Ashley R.; Tang, Xin; Hicks, Zachary; ...
2017-03-03
Organophosphonates are used as chemical warfare agents, pesticides, and corrosion inhibitors. New materials for the sorption, detection, and decomposition of these compounds are urgently needed. To facilitate materials and application innovation, a better understanding of the interactions between organophosphonates and surfaces is required. To this end, we have used diffuse reflectance infrared Fourier transform spectroscopy to investigate the adsorption geometry of dimethyl methylphosphonate (DMMP) on MoO3, a material used in chemical warfare agent filtration devices. We further applied ambient-pressure X-ray photoelectron spectroscopy and temperature-programmed desorption to study the adsorption and desorption of DMMP. While DMMP adsorbs intact on MoO3, desorption depends on coverage and partial pressure. At low coverages under UHV conditions, the intact adsorption is reversible. Decomposition occurs at higher coverages, as evidenced by PCHx and POx decomposition products on the MoO3 surface. Heating under mTorr partial pressures of DMMP results in product accumulation.
An intelligent decomposition approach for efficient design of non-hierarchic systems
NASA Technical Reports Server (NTRS)
Bloebaum, Christina L.
1992-01-01
The design process associated with large engineering systems requires an initial decomposition of the complex systems into subsystem modules which are coupled through transference of output data. The implementation of such a decomposition approach assumes the ability exists to determine what subsystems and interactions exist and what order of execution will be imposed during the analysis process. Unfortunately, this is quite often an extremely complex task which may be beyond human ability to efficiently achieve. Further, in optimizing such a coupled system, it is essential to be able to determine which interactions figure prominently enough to significantly affect the accuracy of the optimal solution. The ability to determine 'weak' versus 'strong' coupling strengths would aid the designer in deciding which couplings could be permanently removed from consideration or which could be temporarily suspended so as to achieve computational savings with minimal loss in solution accuracy. An approach that uses normalized sensitivities to quantify coupling strengths is presented. The approach is applied to a coupled system composed of analysis equations for verification purposes.
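The coupling-strength idea above can be illustrated numerically. The sketch below (with hypothetical subsystem equations, not those of the paper) estimates a dimensionless normalized sensitivity, (df/dx_i)(x_i/f), by finite differences; couplings with small values can then be flagged as "weak" candidates for temporary suspension or removal.

```python
def normalized_sensitivity(f, x, i, h=1e-6):
    # Dimensionless coupling strength: (df/dx_i) * x_i / f(x), forward difference.
    y0 = f(x)
    xp = list(x)
    xp[i] += h
    return (f(xp) - y0) / h * x[i] / y0

# Two hypothetical coupled analysis subsystems; index 1 is the coupling variable.
y1 = lambda x: 2.0 * x[0] + 10.0 * x[1]   # y1 depends strongly on y2's output
y2 = lambda x: 5.0 * x[0] + 0.01 * x[1]   # y2 depends only weakly on y1's output

s12 = normalized_sensitivity(y1, [1.0, 1.0], 1)   # strong coupling, ~0.83
s21 = normalized_sensitivity(y2, [1.0, 1.0], 1)   # weak coupling, ~0.002
```

Here s21 is two orders of magnitude smaller than s12, so the y1-to-y2 coupling would be the natural candidate to suspend for computational savings with minimal loss in solution accuracy.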
Multiple-component Decomposition from Millimeter Single-channel Data
NASA Astrophysics Data System (ADS)
Rodríguez-Montoya, Iván; Sánchez-Argüelles, David; Aretxaga, Itziar; Bertone, Emanuele; Chávez-Dagostino, Miguel; Hughes, David H.; Montaña, Alfredo; Wilson, Grant W.; Zeballos, Milagros
2018-03-01
We present an implementation of a blind source separation algorithm to remove foregrounds from millimeter surveys made by single-channel instruments. In order to make such a decomposition possible over single-wavelength data, we generate levels of artificial redundancy, then perform a blind decomposition, calibrate the resulting maps, and lastly measure physical information. We simulate the reduction pipeline using mock data: atmospheric fluctuations, extended astrophysical foregrounds, and point-like sources, but we apply the same methodology to the AzTEC (Astronomical Thermal Emission Camera)/ASTE survey of the Great Observatories Origins Deep Survey–South (GOODS-S). In both applications, our technique robustly decomposes redundant maps into their underlying components, reducing flux bias, improving signal-to-noise ratio, and minimizing information loss. In particular, GOODS-S is decomposed into four independent physical components: one of them is the already-known map of point sources, two are atmospheric and systematic foregrounds, and the fourth component is an extended emission that can be interpreted as the confusion background of faint sources.
Characteristic eddy decomposition of turbulence in a channel
NASA Technical Reports Server (NTRS)
Moin, Parviz; Moser, Robert D.
1991-01-01
The proper orthogonal decomposition technique (Lumley's decomposition) is applied to the turbulent flow in a channel to extract coherent structures by decomposing the velocity field into characteristic eddies with random coefficients. In the homogeneous spatial directions, a generalization of the shot-noise expansion is used to determine the characteristic eddies. In this expansion, the Fourier coefficients of the characteristic eddy cannot be obtained from the second-order statistics. Three different techniques are used to determine the phases of these coefficients. They are based on: (1) the bispectrum, (2) a spatial compactness requirement, and (3) a functional continuity argument. Results from these three techniques are found to be similar in most respects. The implications of these techniques and the shot-noise expansion are discussed. The dominant eddy is found to contribute as much as 76 percent to the turbulent kinetic energy. In both 2D and 3D, the characteristic eddies consist of an ejection region straddled by streamwise vortices that leave the wall in the very short streamwise distance of about 100 wall units.
FCDECOMP: decomposition of metabolic networks based on flux coupling relations.
Rezvan, Abolfazl; Marashi, Sayed-Amir; Eslahchi, Changiz
2014-10-01
A metabolic network model provides a computational framework to study the metabolism of a cell at the system level. Due to their large sizes and complexity, rational decomposition of these networks into subsystems is a strategy to obtain better insight into the metabolic functions. Additionally, decomposing metabolic networks paves the way to use computational methods that would otherwise be very slow when run on the original genome-scale network. In the present study, we propose FCDECOMP, a decomposition method based on flux coupling relations (FCRs) between pairs of reaction fluxes. This approach utilizes a genetic algorithm (GA) to obtain subsystems that can be analyzed in isolation, i.e. without considering the reactions of the original network in the analysis. Therefore, we propose that our method is useful for discovering biologically meaningful modules in metabolic networks. As a case study, we show that when this method is applied to the metabolic networks of barley seeds and yeast, the modules are in good agreement with the biological compartments of these networks.
Application of FTIR spectroscopy to study the thermal stability of magnesium aspartate-arginine
NASA Astrophysics Data System (ADS)
Hacura, Andrzej; Marcoin, Wacława; Pasterny, Karol
2012-03-01
FTIR spectroscopy has been applied to study the thermal stability of magnesium aspartate-arginine. An attempt has been made, using theoretically predicted IR spectra, to relate the changes in the experimental spectra to the decomposition process of the studied magnesium complex.
Bond Order Conservation Strategies in Catalysis Applied to the NH3 Decomposition Reaction
Yu, Liang; Abild-Pedersen, Frank
2016-12-14
On the basis of an extensive set of density functional theory calculations, it is shown that a simple scheme provides a fundamental understanding of variations in the transition state energies and structures of reaction intermediates on transition metal surfaces across the periodic table. The scheme is built on the bond order conservation principle and requires a limited set of input data, still achieving transition state energies as a function of simple descriptors with an error smaller than those of approaches based on linear fits to a set of calculated transition state energies. Here, we have applied this approach together with linear scaling of adsorption energies to obtain the energetics of the NH3 decomposition reaction on a series of stepped fcc(211) transition metal surfaces. Moreover, this information is used to establish a microkinetic model for the formation of N2 and H2, thus providing insight into the components of the reaction that determine the activity.
NASA Astrophysics Data System (ADS)
Raghupathy, Arun; Ghia, Karman; Ghia, Urmila
2008-11-01
Compact thermal models (CTMs) representing IC packages have traditionally been developed using the DELPHI-based (DEvelopment of Libraries of PHysical models for an Integrated design) methodology. The drawbacks of this method are presented, and an alternative method is proposed. A reduced-order model that provides the complete thermal information accurately with fewer computational resources can be effectively used in system-level simulations. Proper orthogonal decomposition (POD), a statistical method, can be used to reduce the number of degrees of freedom or variables in the computations for such a problem. POD along with the Galerkin projection allows us to create reduced-order models that reproduce the characteristics of the system with a considerable reduction in computational resources while maintaining a high level of accuracy. The goal of this work is to show that this method can be applied to obtain a boundary-condition-independent reduced-order thermal model for complex components. The methodology is applied to the 1D transient heat equation.
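The snapshot-POD step can be sketched as follows (a stand-in 1-D heat problem with made-up parameters, not the IC-package model): collect solution snapshots, take an SVD of the snapshot matrix, and keep the few left singular vectors that carry almost all of the energy.

```python
import numpy as np

# Snapshots of a 1-D transient heat equation (explicit finite differences).
nx, nt, alpha = 50, 200, 0.01
dx, dt = 1.0 / (nx - 1), 2e-4
u = np.sin(np.pi * np.linspace(0.0, 1.0, nx))       # initial temperature profile
snaps = [u.copy()]
for _ in range(nt):
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    snaps.append(u.copy())
X = np.array(snaps).T                                # columns are snapshots

# POD: left singular vectors are the energy-ranked spatial modes.
U, s, _ = np.linalg.svd(X, full_matrices=False)
r = 3
energy = (s[:r]**2).sum() / (s**2).sum()             # fraction of energy captured
Xr = U[:, :r] @ (U[:, :r].T @ X)                     # rank-r reconstruction
```

Galerkin projection of the governing equations onto the retained modes U[:, :r] would then yield the r-dimensional reduced-order model; here only the basis extraction and its reconstruction error are shown.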
Turbulent Statistics From Time-Resolved PIV Measurements of a Jet Using Empirical Mode Decomposition
NASA Technical Reports Server (NTRS)
Dahl, Milo D.
2013-01-01
Empirical mode decomposition is an adaptive signal processing method that when applied to a broadband signal, such as that generated by turbulence, acts as a set of band-pass filters. This process was applied to data from time-resolved, particle image velocimetry measurements of subsonic jets prior to computing the second-order, two-point, space-time correlations from which turbulent phase velocities and length and time scales could be determined. The application of this method to large sets of simultaneous time histories is new. In this initial study, the results are relevant to acoustic analogy source models for jet noise prediction. The high frequency portion of the results could provide the turbulent values for subgrid scale models for noise that is missed in large-eddy simulations. The results are also used to infer that the cross-correlations between different components of the decomposed signals at two points in space, neglected in this initial study, are important.
MARS-MD: rejection based image domain material decomposition
NASA Astrophysics Data System (ADS)
Bateman, C. J.; Knight, D.; Brandwacht, B.; McMahon, J.; Healy, J.; Panta, R.; Aamir, R.; Rajendran, K.; Moghiseh, M.; Ramyar, M.; Rundle, D.; Bennett, J.; de Ruiter, N.; Smithies, D.; Bell, S. T.; Doesburg, R.; Chernoglazov, A.; Mandalika, V. B. H.; Walsh, M.; Shamshad, M.; Anjomrouz, M.; Atharifard, A.; Vanden Broeke, L.; Bheesette, S.; Kirkbride, T.; Anderson, N. G.; Gieseg, S. P.; Woodfield, T.; Renaud, P. F.; Butler, A. P. H.; Butler, P. H.
2018-05-01
This paper outlines image domain material decomposition algorithms that have been routinely used in MARS spectral CT systems. These algorithms (known collectively as MARS-MD) are based on a pragmatic heuristic for solving the under-determined problem where there are more materials than energy bins. This heuristic contains three parts: (1) splitting the problem into a number of possible sub-problems, each containing fewer materials; (2) solving each sub-problem; and (3) applying rejection criteria to eliminate all but one sub-problem's solution. An advantage of this process is that different constraints can be applied to each sub-problem if necessary. In addition, the result of this process is that solutions will be sparse in the material domain, which reduces crossover of signal between material images. Two algorithms based on this process are presented: the Segmentation variant, which uses segmented material classes to define each sub-problem; and the Angular Rejection variant, which defines the rejection criteria using the angle between reconstructed attenuation vectors.
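A toy numerical sketch of the three-part heuristic follows (with a made-up attenuation matrix, not MARS calibration data): enumerate sub-problems containing fewer materials, solve each by least squares, reject nonphysical solutions with negative concentrations, and keep the surviving solution with the smallest residual.

```python
import itertools
import numpy as np

def rejection_decompose(A, y, k):
    """Split the under-determined problem into size-k material sub-problems,
    solve each, apply a rejection criterion, and keep the best survivor."""
    best = None
    for cols in itertools.combinations(range(A.shape[1]), k):
        x, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
        if (x < -1e-12).any():                 # rejection: negative concentration
            continue
        resid = np.linalg.norm(A[:, cols] @ x - y)
        if best is None or resid < best[2]:
            best = (cols, x, resid)
    return best

# Three energy bins, four candidate materials: columns of A are basis spectra.
A = np.array([[1., 0., 2., 1.],
              [0., 1., 1., 2.],
              [1., 1., 0., 1.]])
y = A[:, [0, 2]] @ np.array([1.0, 2.0])   # voxel truly contains materials 0 and 2
cols, x, resid = rejection_decompose(A, y, 2)
```

Because only one sub-problem's solution survives per voxel, the material images come out sparse, which is the crossover-reduction property noted in the abstract; per-sub-problem constraints would slot in where the rejection test is applied.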
Torres, Ana M; Lopez, Jose J; Pueo, Basilio; Cobos, Maximo
2013-04-01
Plane-wave decomposition (PWD) methods using microphone arrays have been shown to be a very useful tool within the applied acoustics community for their multiple applications in room acoustics analysis and synthesis. While many theoretical aspects of PWD have been previously addressed in the literature, the practical advantages of the PWD method to assess the acoustic behavior of real rooms have been barely explored so far. In this paper, the PWD method is employed to analyze the sound field inside a selected set of real rooms having a well-defined purpose. To this end, a circular microphone array is used to capture and process a number of impulse responses at different spatial positions, providing angle-dependent data for both direct and reflected wavefronts. The detection of reflected plane waves is performed by means of image processing techniques applied over the raw array response data and over the PWD data, showing the usefulness of image-processing-based methods for room acoustics analysis.
NASA Astrophysics Data System (ADS)
Yuasa, T.; Akiba, M.; Takeda, T.; Kazama, M.; Hoshino, A.; Watanabe, Y.; Hyodo, K.; Dilmanian, F. A.; Akatsuka, T.; Itai, Y.
1997-02-01
We describe a new attenuation correction method for fluorescent X-ray computed tomography (FXCT) applied to image nonradioactive contrast materials in vivo. The principle of FXCT imaging is that of first-generation computed tomography. Using monochromatized synchrotron radiation from the BLNE-5A bending-magnet beam line of the Tristan Accumulation Ring at KEK, Japan, we studied phantoms with the FXCT method, and we succeeded in delineating a 4-mm-diameter channel filled with a 500 μg I/ml iodine solution in a 20-mm-diameter acrylic cylindrical phantom. However, to detect smaller iodine concentrations, attenuation correction is needed. We present a correction method based on the equation representing the measurement process. The discretized equation system is solved by the least-squares method using the singular value decomposition. The attenuation correction method is applied to the projections by Monte Carlo simulation and by experiment to confirm its effectiveness.
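The SVD-based least-squares step mentioned above can be sketched generically (this is a standard truncated-SVD solver on a toy system, not the authors' full measurement-process model): singular values below a relative tolerance are discarded, which stabilizes the solution of an ill-conditioned discretized system.

```python
import numpy as np

def svd_least_squares(A, b, rtol=1e-10):
    # Minimum-norm least-squares solution via the SVD pseudoinverse,
    # truncating singular values below rtol * s_max for stability.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(s > rtol * s[0], 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ b))

# Overdetermined toy system: 3 projection equations, 2 unknowns.
A = np.array([[2., 0.],
              [0., 3.],
              [1., 1.]])
b = A @ np.array([2.0, 3.0])          # consistent right-hand side
x = svd_least_squares(A, b)           # recovers the original unknowns
```

In the FXCT setting, A would encode the discretized attenuation along each projection path and b the measured fluorescent counts; the truncation tolerance trades noise amplification against resolution.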
NASA Astrophysics Data System (ADS)
Chong, Song-Ho; Ham, Sihyun
2011-07-01
We report the development of an atomic decomposition method of the protein solvation free energy in water, which ascribes global change in the solvation free energy to local changes in protein conformation as well as in hydration structure. So far, empirical decomposition analyses based on simple continuum solvation models have prevailed in the study of protein-protein interactions, protein-ligand interactions, as well as in developing scoring functions for computer-aided drug design. However, the use of continuum solvation model suffers serious drawbacks since it yields the protein free energy landscape which is quite different from that of the explicit solvent model and since it does not properly account for the non-polar hydrophobic effects which play a crucial role in biological processes in water. Herein, we develop an exact and general decomposition method of the solvation free energy that overcomes these hindrances. We then apply this method to elucidate the molecular origin for the solvation free energy change upon the conformational transitions of 42-residue amyloid-beta protein (Aβ42) in water, whose aggregation has been implicated as a primary cause of Alzheimer's disease. We address why Aβ42 protein exhibits a great propensity to aggregate when transferred from organic phase to aqueous phase.
NASA Astrophysics Data System (ADS)
Rai, Prashant; Sargsyan, Khachik; Najm, Habib; Hermes, Matthew R.; Hirata, So
2017-09-01
A new method is proposed for a fast evaluation of high-dimensional integrals of potential energy surfaces (PES) that arise in many areas of quantum dynamics. It decomposes a PES into a canonical low-rank tensor format, reducing its integral into a relatively short sum of products of low-dimensional integrals. The decomposition is achieved by the alternating least squares (ALS) algorithm, requiring only a small number of single-point energy evaluations. Therefore, it eradicates a force-constant evaluation as the hotspot of many quantum dynamics simulations and also possibly lifts the curse of dimensionality. This general method is applied to the anharmonic vibrational zero-point and transition energy calculations of molecules using the second-order diagrammatic vibrational many-body Green's function (XVH2) theory with a harmonic-approximation reference. In this application, high-dimensional PES and Green's functions are both subjected to a low-rank decomposition. Evaluating the molecular integrals over a low-rank PES and Green's functions as sums of low-dimensional integrals using the Gauss-Hermite quadrature, this canonical-tensor-decomposition-based XVH2 (CT-XVH2) achieves an accuracy of 0.1 cm⁻¹ or better and nearly an order of magnitude speedup as compared with the original algorithm using force constants for water and formaldehyde.
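The cost saving from a canonical low-rank form can be seen in a toy 2-D case (a made-up rank-2 polynomial surrogate, not a real PES): each rank-1 term factorizes, so a multidimensional Gauss-Hermite integral collapses into products of one-dimensional quadratures.

```python
import numpy as np

# Gauss-Hermite rule: integrates f(x) * exp(-x**2) over the real line.
x, w = np.polynomial.hermite.hermgauss(10)

# Rank-2 canonical form of V(q1, q2) = q1**2 * q2**2 + q1 * q2:
# each term is a product of one-dimensional factors evaluated on the grid.
factors = [(x**2, x**2), (x, x)]

# A full tensor-product quadrature costs O(n**2) evaluations; the low-rank
# sum-of-products form costs only O(R * n).
integral = sum((w * g).sum() * (w * h).sum() for g, h in factors)
# exact value: (sqrt(pi)/2)**2 + 0**2 = pi/4
```

In d dimensions the same factorization turns an O(n**d) grid sum into O(R * d * n) one-dimensional sums, which is the mechanism by which the canonical tensor format "possibly lifts the curse of dimensionality."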
Degradation of organic pollutants in Mediterranean forest soils amended with sewage sludge.
Francisca Gomez-Rico, M; Font, Rafael; Vera, Jose; Fuentes, David; Disante, Karen; Cortina, Jordi
2008-05-01
The degradation of two groups of organic pollutants in three different Mediterranean forest soils amended with sewage sludge was studied for nine months. The sewage sludge produced by a domestic water treatment plant was applied to soils developed from limestone, marl and sandstone, showing contrasting alkalinity and texture. The compounds analysed were: linear alkylbenzene sulphonates (LAS) with a 10-13 carbon alkylic chain, and nonylphenolic compounds, including nonylphenol (NP) and nonylphenol ethoxylates with one and two ethoxy groups (NP1EO+NP2EO). These compounds were studied because they frequently exceed the limits proposed for sludge application to land in Europe. After nine months, LAS decomposition was 86-96%, and NP+NP1EO+NP2EO decomposition was 61-84%, which can be considered high. Temporal trends in LAS and NP+NP1EO+NP2EO decomposition were similar, and the concentrations of both types of compounds were highly correlated. The decomposition rates were higher in the period of 6-9 months (summer period) than in the period 0-6 months (winter+spring period) for total LAS and NP+NP1EO+NP2EO. Differences in decay rates with regard to soil type were not significant. The average values of decay rates found are similar to those observed in agricultural soils.
Quantitative analysis of microbial biomass yield in aerobic bioreactor.
Watanabe, Osamu; Isoda, Satoru
2013-12-01
We have studied the integrated model of reaction rate equations with thermal energy balance in aerobic bioreactor for food waste decomposition and showed that the integrated model has the capability both of monitoring microbial activity in real time and of analyzing biodegradation kinetics and thermal-hydrodynamic properties. On the other hand, concerning microbial metabolism, it was known that balancing catabolic reactions with anabolic reactions in terms of energy and electron flow provides stoichiometric metabolic reactions and enables the estimation of microbial biomass yield (stoichiometric reaction model). We have studied a method for estimating real-time microbial biomass yield in the bioreactor during food waste decomposition by combining the integrated model with the stoichiometric reaction model. As a result, it was found that the time course of microbial biomass yield in the bioreactor during decomposition can be evaluated using the operational data of the bioreactor (weight of input food waste and bed temperature) by the combined model. The combined model can be applied to manage a food waste decomposition not only for controlling system operation to keep microbial activity stable, but also for producing value-added products such as compost on optimum condition. Copyright © 2013 The Research Centre for Eco-Environmental Sciences, Chinese Academy of Sciences. Published by Elsevier B.V. All rights reserved.
Metallo-organic decomposition films
NASA Technical Reports Server (NTRS)
Gallagher, B. D.
1985-01-01
A summary of metallo-organic deposition (MOD) films for solar cells was presented. The MOD materials are metal ions compounded with organic radicals. The technology is evolving quickly for solar cell metallization. Silver compounds, especially silver neodecanoate, were developed which can be applied by thick-film screening, ink-jet printing, spin-on, spray, or dip methods. Some of the advantages of MOD are: high uniform metal content, lower firing temperatures, decomposition without leaving a carbon deposit or toxic materials, and a film that is stable under ambient conditions. Molecular design criteria were explained along with compounds formulated to date, and the accompanying reactions for these compounds. Phase stability and the other experimental and analytic results of MOD films were presented.
Multi-level basis selection of wavelet packet decomposition tree for heart sound classification.
Safara, Fatemeh; Doraisamy, Shyamala; Azman, Azreen; Jantan, Azrul; Abdullah Ramaiah, Asri Ranga
2013-10-01
Wavelet packet transform decomposes a signal into a set of orthonormal bases (nodes) and provides opportunities to select an appropriate set of these bases for feature extraction. In this paper, multi-level basis selection (MLBS) is proposed to preserve the most informative bases of a wavelet packet decomposition tree by removing less informative bases according to three exclusion criteria: frequency range, noise frequency, and energy threshold. MLBS achieved an accuracy of 97.56% for classifying normal heart sound, aortic stenosis, mitral regurgitation, and aortic regurgitation. MLBS is a promising basis selection to be suggested for signals with a small range of frequencies. Copyright © 2013 The Authors. Published by Elsevier Ltd. All rights reserved.
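A minimal Haar-only sketch of the underlying idea follows (MLBS itself applies three criteria; only the energy threshold is shown here, with a hypothetical threshold and test signal):

```python
import numpy as np

def haar_packet_leaves(signal, depth):
    # Full wavelet packet tree with the Haar filter pair: every node splits
    # into a low-pass (average) and a high-pass (difference) child.
    nodes = [np.asarray(signal, dtype=float)]
    for _ in range(depth):
        nxt = []
        for node in nodes:
            a, b = node[0::2], node[1::2]
            nxt += [(a + b) / np.sqrt(2.0), (a - b) / np.sqrt(2.0)]
        nodes = nxt
    return nodes                      # the 2**depth leaf bases

def select_bases(leaves, energy_frac=0.01):
    # Energy-threshold criterion: keep bases holding at least energy_frac of
    # the total energy; less informative bases are excluded.
    energies = np.array([(leaf**2).sum() for leaf in leaves])
    return [i for i, e in enumerate(energies) if e >= energy_frac * energies.sum()]

leaves = haar_packet_leaves(np.ones(8), depth=2)   # a flat test "signal"
kept = select_bases(leaves)                        # only the low-pass base survives
```

Because the Haar packet transform is orthonormal, the leaf energies sum to the signal energy, so the threshold is a true fraction of total energy; the frequency-range and noise-frequency criteria would further prune the kept indices.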
NASA Technical Reports Server (NTRS)
Ukeiley, L.; Varghese, M.; Glauser, M.; Valentine, D.
1991-01-01
A 'lobed mixer' device that enhances mixing through secondary flows and streamwise vorticity is presently studied within the framework of multifractal-measures theory, in order to deepen understanding of velocity time trace data gathered on its operation. Proper orthogonal decomposition-based knowledge of coherent structures has been applied to obtain the generalized fractal dimensions and multifractal spectrum of several proper eigenmodes for data samples of the velocity time traces; this constitutes a marked departure from previous multifractal theory applications to self-similar cascades. In certain cases, a single dimension may suffice to capture the entire spectrum of scaling exponents for the velocity time trace.
Decomposition of algebraic sets and applications to weak centers of cubic systems
NASA Astrophysics Data System (ADS)
Chen, Xingwu; Zhang, Weinian
2009-10-01
There are many methods such as Gröbner basis, characteristic set and resultant, in computing an algebraic set of a system of multivariate polynomials. The common difficulties come from the complexity of computation, singularity of the corresponding matrices and some unnecessary factors in successive computation. In this paper, we decompose algebraic sets, stratum by stratum, into a union of constructible sets with Sylvester resultants, so as to simplify the procedure of elimination. Applying this decomposition to systems of multivariate polynomials resulted from period constants of reversible cubic differential systems which possess a quadratic isochronous center, we determine the order of weak centers and discuss the bifurcation of critical periods.
Coating for components requiring hydrogen peroxide compatibility
NASA Technical Reports Server (NTRS)
Yousefiani, Ali (Inventor)
2010-01-01
The present invention provides a heretofore-unknown use for zirconium nitride as a hydrogen peroxide compatible protective coating that was discovered to be useful to protect components that catalyze the decomposition of hydrogen peroxide or corrode when exposed to hydrogen peroxide. A zirconium nitride coating of the invention may be applied to a variety of substrates (e.g., metals) using art-recognized techniques, such as plasma vapor deposition. The present invention further provides components and articles of manufacture having hydrogen peroxide compatibility, particularly components for use in aerospace and industrial manufacturing applications. The zirconium nitride barrier coating of the invention provides protection from corrosion by reaction with hydrogen peroxide, as well as prevention of hydrogen peroxide decomposition.
Domain decomposition methods for nonconforming finite element spaces of Lagrange-type
NASA Technical Reports Server (NTRS)
Cowsar, Lawrence C.
1993-01-01
In this article, we consider the application of three popular domain decomposition methods to Lagrange-type nonconforming finite element discretizations of scalar, self-adjoint, second order elliptic equations. The additive Schwarz method of Dryja and Widlund, the vertex space method of Smith, and the balancing method of Mandel applied to nonconforming elements are shown to converge at a rate no worse than their applications to the standard conforming piecewise linear Galerkin discretization. Essentially, the theory for the nonconforming elements is inherited from the existing theory for the conforming elements with only modest modification by constructing an isomorphism between the nonconforming finite element space and a space of continuous piecewise linear functions.
Basic research in evolution and ecology enhances forensics.
Tomberlin, Jeffery K; Benbow, M Eric; Tarone, Aaron M; Mohr, Rachel M
2011-02-01
In 2009, the National Research Council recommended that the forensic sciences strengthen their grounding in basic empirical research to mitigate against criticism and improve accuracy and reliability. For DNA-based identification, this goal was achieved under the guidance of the population genetics community. This effort resulted in DNA analysis becoming the 'gold standard' of the forensic sciences. Elsewhere, we proposed a framework for streamlining research in decomposition ecology, which promotes quantitative approaches to collecting and applying data to forensic investigations involving decomposing human remains. To extend the ecological aspects of this approach, this review focuses on forensic entomology, although the framework can be extended to other areas of decomposition. Published by Elsevier Ltd.
Recursive inverse factorization.
Rubensson, Emanuel H; Bock, Nicolas; Holmström, Erik; Niklasson, Anders M N
2008-03-14
A recursive algorithm for the inverse factorization S^(-1) = ZZ^* of Hermitian positive definite matrices S is proposed. The inverse factorization is based on iterative refinement [A.M.N. Niklasson, Phys. Rev. B 70, 193102 (2004)] combined with a recursive decomposition of S. As the computational kernel is matrix-matrix multiplication, the algorithm can be parallelized, and the computational effort increases linearly with system size for systems with sufficiently sparse matrices. Recent advances in network theory are used to find appropriate recursive decompositions. We show that optimization of the so-called network modularity results in an improved partitioning compared to other approaches, in particular when the recursive inverse factorization is applied to overlap matrices of irregularly structured three-dimensional molecules.
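The iterative-refinement kernel alone, without the recursive decomposition of S, can be sketched as below. The first-order update and the scaled-identity starting guess are one simple choice that satisfies the convergence condition; they are assumptions for illustration, not necessarily the exact variant of the cited scheme.

```python
import numpy as np

def inverse_factor(S, tol=1e-12, max_iter=100):
    # Iterative refinement toward Z with Z^T S Z = I, i.e. S^(-1) = Z Z^T.
    # The scaled-identity start makes the initial residual norm < 1 for
    # any symmetric positive definite S, which guarantees convergence.
    n = S.shape[0]
    Z = np.eye(n) / np.sqrt(np.linalg.norm(S, 2))
    for _ in range(max_iter):
        delta = np.eye(n) - Z.T @ S @ Z
        if np.linalg.norm(delta, 2) < tol:
            break
        Z = Z @ (np.eye(n) + 0.5 * delta)   # first-order refinement step
    return Z

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
S = A @ A.T + 6 * np.eye(6)                 # symmetric positive definite test matrix
Z = inverse_factor(S)
print(np.allclose(Z @ Z.T, np.linalg.inv(S)))  # True
```

Because each step uses only matrix-matrix multiplications, the same kernel parallelizes and can exploit sparsity, as the abstract notes.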
Analyzing Transient Turbulence in a Stenosed Carotid Artery by Proper Orthogonal Decomposition
NASA Astrophysics Data System (ADS)
Grinberg, Leopold; Yakhot, Alexander; Karniadakis, George
2009-11-01
High-resolution 3D simulations (involving 100M degrees of freedom) were employed to study transient turbulent flow in a carotid arterial bifurcation with a stenosed internal carotid artery (ICA). In the performed simulation an intermittent (in space and time) laminar-turbulent-laminar regime was observed. The simulation reveals the mechanism of the onset of turbulent flow in the stenosed ICA, where the narrowing in the artery generates a strong jet flow. Time- and space-window Proper Orthogonal Decomposition (POD) was applied to quantify the different flow regimes in the occluded artery. A simplified version of the POD analysis that utilizes 2D slices only - more appropriate in the clinical setting - was also investigated.
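Plain snapshot POD reduces to an SVD of the mean-subtracted data matrix. The sketch below applies it to an invented two-structure "flow"; the time- and space-windowing used in the paper is omitted, and the field and its dimensions are illustrative assumptions.

```python
import numpy as np

# Build a synthetic space-time data matrix: each column is one snapshot
# of a 1-D "flow" made of two oscillating spatial structures.
t = np.linspace(0, 4 * np.pi, 200)               # time samples
xs = np.linspace(0.0, 1.0, 64)                   # "space"
X = (np.outer(np.sin(2 * np.pi * xs), np.sin(t))
     + 0.3 * np.outer(np.sin(4 * np.pi * xs), np.cos(2 * t)))
X = X - X.mean(axis=1, keepdims=True)            # subtract the temporal mean

# Snapshot POD = thin SVD of the data matrix: columns of U are the POD
# modes, rows of Vt their temporal coefficients, s**2 the modal energies.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
energy = s ** 2 / np.sum(s ** 2)
print(energy[:3])    # the two planted structures dominate the energy
```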
Analysis and Prediction of Sea Ice Evolution using Koopman Mode Decomposition Techniques
Koopman Mode Analysis was newly applied to southern hemisphere sea ice concentration data. The resulting Koopman modes from analysis of both the...southern and northern hemisphere sea ice concentration data show geographical regions where sea ice coverage has decreased over multiyear time scales.
Decompositions of Multiattribute Utility Functions Based on Convex Dependence.
1982-03-01
Environmental Fate of Emamectin Benzoate After Tree Micro Injection of Horse Chestnut Trees
Burkhard, Rene; Binz, Heinz; Roux, Christian A; Brunner, Matthias; Ruesch, Othmar; Wyss, Peter
2015-01-01
Emamectin benzoate, an insecticide derived from the avermectin family of natural products, has a unique translocation behavior in trees when applied by tree micro injection (TMI), which can result in protection from insect pests (foliar and borers) for several years. Active ingredient imported into leaves was measured at the end of the season in the fallen leaves of treated horse chestnut (Aesculus hippocastanum) trees. The dissipation of emamectin benzoate in these leaves seems to be biphasic and depends on the decomposition of the leaf. In compost piles, where decomposition of leaves was fastest, a cumulative emamectin benzoate degradation half-life time of 20 d was measured. In leaves immersed in water, where decomposition was much slower, the degradation half-life time was 94 d, and in leaves left on the ground in contact with soil, where decomposition was slowest, the degradation half-life time was 212 d. The biphasic decline and the correlation with leaf decomposition might be attributed to an extensive sorption of emamectin benzoate residues to leaf macromolecules. This may also explain why earthworms ingesting leaves from injected trees take up very little emamectin benzoate and excrete it with the feces. Furthermore, no emamectin benzoate was found in water containing decomposing leaves from injected trees. It is concluded that emamectin benzoate present in abscised leaves from horse chestnut trees injected with the insecticide is not available to nontarget organisms present in soil or water bodies. Environ Toxicol Chem 2014;9999:1–6. © 2014 The Authors. Published 2014 SETAC PMID:25363584
Wang, Deyun; Wei, Shuai; Luo, Hongyuan; Yue, Chenqiang; Grunder, Olivier
2017-02-15
The randomness, non-stationarity and irregularity of air quality index (AQI) series make AQI forecasting difficult. To enhance forecast accuracy, a novel hybrid forecasting model combining a two-phase decomposition technique and an extreme learning machine (ELM) optimized by the differential evolution (DE) algorithm is developed for AQI forecasting in this paper. In phase I, complementary ensemble empirical mode decomposition (CEEMD) is utilized to decompose the AQI series into a set of intrinsic mode functions (IMFs) with different frequencies; in phase II, in order to further handle the high-frequency IMFs, which would otherwise increase the forecast difficulty, variational mode decomposition (VMD) is employed to decompose the high-frequency IMFs into a number of variational modes (VMs). Then, the ELM model optimized by the DE algorithm is applied to forecast all the IMFs and VMs. Finally, the forecast value of each high-frequency IMF is obtained by adding up the forecast results of all corresponding VMs, and the forecast series of the AQI is obtained by aggregating the forecast results of all IMFs. To verify and validate the proposed model, two daily AQI series from July 1, 2014 to June 30, 2016, collected from Beijing and Shanghai, China, are taken as test cases for the empirical study. The experimental results show that the proposed hybrid model based on the two-phase decomposition technique is remarkably superior to all other considered models in forecast accuracy. Copyright © 2016 Elsevier B.V. All rights reserved.
Liu, Zhengyan; Mao, Xianqiang; Song, Peng
2017-01-01
Temporal index decomposition analysis and spatial index decomposition analysis were applied to understand the driving forces of the emissions embodied in China’s exports and net exports during 2002–2011, respectively. The accumulated emissions embodied in exports accounted for approximately 30% of the total emissions in China; although the contribution of the sectoral total emissions intensity (technique effect) declined, the scale effect was largely responsible for the mounting emissions associated with export, and the composition effect played a largely insignificant role. Calculations of the emissions embodied in net exports suggest that China is generally in an environmentally inferior position compared with its major trade partners. The differences in the economy-wide emission intensities between China and its major trade partners were the biggest contribution to this reality, and the trade balance effect played a less important role. However, a lower degree of specialization in pollution intensive products in exports than in imports helped to reduce slightly the emissions embodied in net exports. The temporal index decomposition analysis results suggest that China should take effective measures to optimize export and supply-side structure and reduce the total emissions intensity. According to spatial index decomposition analysis, it is suggested that a more aggressive import policy was useful for curbing domestic and global emissions, and the transfer of advanced production technologies and emission control technologies from developed to developing countries should be a compulsory global environmental policy option to mitigate the possible leakage of pollution emissions caused by international trade. PMID:28441399
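The scale/composition/technique split described above can be computed with a log-mean Divisia index (LMDI) decomposition, a standard choice for this kind of analysis. The abstract does not state the exact index formula used, so the additive LMDI-I form and the two-sector numbers below are illustrative assumptions, not the paper's Chinese export data.

```python
import numpy as np

def logmean(a, b):
    # Log-mean weight L(a, b) = (a - b) / (ln a - ln b), with L(a, a) = a.
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.where(np.isclose(a, b), a, (a - b) / np.log(a / b))

def lmdi(Q0, share0, inten0, Q1, share1, inten1):
    # Additive LMDI-I decomposition of the change in E = sum_i Q * s_i * e_i
    # into scale (Q), composition (s_i) and technique (e_i) effects.
    E0 = Q0 * share0 * inten0
    E1 = Q1 * share1 * inten1
    w = logmean(E1, E0)
    scale = float(np.sum(w * np.log(Q1 / Q0)))
    composition = float(np.sum(w * np.log(share1 / share0)))
    technique = float(np.sum(w * np.log(inten1 / inten0)))
    return scale, composition, technique

# Two-sector toy economy: output grows, shifts toward the cleaner sector,
# and both sectors become less emission-intensive.
Q0, Q1 = 100.0, 150.0
share0, share1 = np.array([0.6, 0.4]), np.array([0.5, 0.5])
inten0, inten1 = np.array([2.0, 0.5]), np.array([1.5, 0.45])
s_, c_, t_ = lmdi(Q0, share0, inten0, Q1, share1, inten1)
dE = float(np.sum(Q1 * share1 * inten1) - np.sum(Q0 * share0 * inten0))
print(round(s_ + c_ + t_, 6), round(dE, 6))  # 6.25 6.25
```

A useful property of LMDI-I is that the three effects sum to the total emission change with no residual term.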
Frelat, Romain; Lindegren, Martin; Denker, Tim Spaanheden; Floeter, Jens; Fock, Heino O; Sguotti, Camilla; Stäbler, Moritz; Otto, Saskia A; Möllmann, Christian
2017-01-01
Understanding spatio-temporal dynamics of biotic communities containing large numbers of species is crucial to guide ecosystem management and conservation efforts. However, traditional approaches usually focus on studying community dynamics either in space or in time, often failing to fully account for interlinked spatio-temporal changes. In this study, we demonstrate and promote the use of tensor decomposition for disentangling spatio-temporal community dynamics in long-term monitoring data. Tensor decomposition builds on traditional multivariate statistics (e.g. Principal Component Analysis) but extends it to multiple dimensions. This extension allows for the synchronized study of multiple ecological variables measured repeatedly in time and space. We applied this comprehensive approach to explore the spatio-temporal dynamics of 65 demersal fish species in the North Sea, a marine ecosystem strongly altered by human activities and climate change. Our case study demonstrates how tensor decomposition can successfully (i) characterize the main spatio-temporal patterns and trends in species abundances, (ii) identify sub-communities of species that share similar spatial distribution and temporal dynamics, and (iii) reveal external drivers of change. Our results revealed a strong spatial structure in fish assemblages persistent over time and linked to differences in depth, primary production and seasonality. Furthermore, we simultaneously characterized important temporal distribution changes related to the low frequency temperature variability inherent in the Atlantic Multidecadal Oscillation. Finally, we identified six major sub-communities composed of species sharing similar spatial distribution patterns and temporal dynamics. Our case study demonstrates the application and benefits of using tensor decomposition for studying complex community data sets usually derived from large-scale monitoring programs.
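A CANDECOMP/PARAFAC (CP) decomposition, one common form of the tensor decomposition promoted here, can be sketched with a small alternating-least-squares loop. The synthetic species-by-station-by-year tensor and the rank are invented, and a dedicated library (e.g. TensorLy) would be preferable on real monitoring data.

```python
import numpy as np

def unfold(T, mode):
    # Mode-n matricization of a 3-way tensor (row-major column ordering).
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    # Column-wise Kronecker product: row (i*J + j) holds A[i] * B[j].
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def cp_als(T, rank, n_iter=500, seed=0):
    # Rank-R CANDECOMP/PARAFAC by alternating least squares:
    #   T[i, j, k] ~ sum_r A[i, r] * B[j, r] * C[k, r]
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((s, rank)) for s in T.shape)
    for _ in range(n_iter):
        A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C)).T
        B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C)).T
        C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C

# Recover an exact rank-2 tensor (think: species x station x year).
rng = np.random.default_rng(2)
A0, B0, C0 = (rng.standard_normal((s, 2)) for s in (6, 5, 4))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(T, rank=2)
T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
err = np.linalg.norm(T - T_hat) / np.linalg.norm(T)
print("relative error:", err)   # typically very small for exact low-rank data
```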
Signal processing techniques were applied to high-resolution time series data obtained from conductivity loggers placed upstream and downstream of a wastewater treatment facility along a river. Data were collected over 14-60 days and across several seasons. The power spectral densit...
Soil amendments yield persisting effects on the microbial communities--a 7-year study
USDA-ARS?s Scientific Manuscript database
Soil microbial communities are sensitive to carbon amendments and largely control the decomposition and accumulation of soil organic matter. In this study, we evaluated whether the type of carbon amendment applied to wheat-cropped or fallow soil imparted lasting effects on the microbial community w...
Task-discriminative space-by-time factorization of muscle activity
Delis, Ioannis; Panzeri, Stefano; Pozzo, Thierry; Berret, Bastien
2015-01-01
Movement generation has been hypothesized to rely on a modular organization of muscle activity. Crucial to this hypothesis is the ability to perform reliably a variety of motor tasks by recruiting a limited set of modules and combining them in a task-dependent manner. Thus far, existing algorithms that extract putative modules of muscle activations, such as Non-negative Matrix Factorization (NMF), identify modular decompositions that maximize the reconstruction of the recorded EMG data. Typically, the functional role of the decompositions, i.e., task accomplishment, is only assessed a posteriori. However, as motor actions are defined in task space, we suggest that motor modules should be computed in task space too. In this study, we propose a new module extraction algorithm, named DsNM3F, that uses task information during the module identification process. DsNM3F extends our previous space-by-time decomposition method (the so-called sNM3F algorithm, which could assess task performance only after having computed modules) to identify modules gauging between two complementary objectives: reconstruction of the original data and reliable discrimination of the performed tasks. We show that DsNM3F recovers the task dependence of module activations more accurately than sNM3F. We also apply it to electromyographic signals recorded during performance of a variety of arm pointing tasks and identify spatial and temporal modules of muscle activity that are highly consistent with previous studies. DsNM3F achieves perfect task categorization without significant loss in data approximation when task information is available and generalizes as well as sNM3F when applied to new data. These findings suggest that the space-by-time decomposition of muscle activity finds robust task-discriminating modular representations of muscle activity and that the insertion of task discrimination objectives is useful for describing the task modulation of module recruitment. PMID:26217213
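Plain NMF, the baseline that DsNM3F extends, can be sketched with Lee-Seung multiplicative updates. The toy muscle-by-time matrix and the rank are invented, and the sketch deliberately omits the task-discrimination objective that distinguishes DsNM3F and sNM3F.

```python
import numpy as np

def nmf(V, rank, n_iter=500, seed=0):
    # Lee-Seung multiplicative updates minimizing ||V - W H||_F^2;
    # nonnegativity is preserved because each update multiplies by a
    # nonnegative ratio.
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], rank)) + 0.1
    H = rng.random((rank, V.shape[1])) + 0.1
    eps = 1e-12
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy "EMG" matrix (muscles x time) built from 3 nonnegative modules.
rng = np.random.default_rng(3)
V = rng.random((10, 3)) @ rng.random((3, 50))
W, H = nmf(V, rank=3)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print("relative reconstruction error:", round(err, 4))
```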
Low-dimensional modelling of a transient cylinder wake using double proper orthogonal decomposition
NASA Astrophysics Data System (ADS)
Siegel, Stefan G.; Seidel, Jürgen; Fagley, Casey; Luchtenburg, D. M.; Cohen, Kelly; McLaughlin, Thomas
For the systematic development of feedback flow controllers, a numerical model that captures the dynamic behaviour of the flow field to be controlled is required. This poses a particular challenge for flow fields where the dynamic behaviour is nonlinear, and the governing equations cannot easily be solved in closed form. This has led to many versions of low-dimensional modelling techniques, which we extend in this work to represent better the impact of actuation on the flow. For the benchmark problem of a circular cylinder wake in the laminar regime, we introduce a novel extension to the proper orthogonal decomposition (POD) procedure that facilitates mode construction from transient data sets. We demonstrate the performance of this new decomposition by applying it to a data set from the development of the limit cycle oscillation of a circular cylinder wake simulation as well as an ensemble of transient forced simulation results. The modes obtained from this decomposition, which we refer to as the double POD (DPOD) method, correctly track the changes of the spatial modes both during the evolution of the limit cycle and when forcing is applied by transverse translation of the cylinder. The mode amplitudes, which are obtained by projecting the original data sets onto the truncated DPOD modes, can be used to construct a dynamic mathematical model of the wake that accurately predicts the wake flow dynamics within the lock-in region at low forcing amplitudes. This low-dimensional model, derived using nonlinear artificial neural network based system identification methods, is robust and accurate and can be used to simulate the dynamic behaviour of the wake flow. We demonstrate this ability not just for unforced and open-loop forced data, but also for a feedback-controlled simulation that leads to a 90% reduction in lift fluctuations. 
This indicates the possibility of constructing accurate dynamic low-dimensional models for feedback control by using unforced and transient forced data only.
Fire affects root decomposition, soil food web structure, and carbon flow in tallgrass prairie
NASA Astrophysics Data System (ADS)
Shaw, E. Ashley; Denef, Karolien; Milano de Tomasel, Cecilia; Cotrufo, M. Francesca; Wall, Diana H.
2016-05-01
Root litter decomposition is a major component of carbon (C) cycling in grasslands, where it provides energy and nutrients for soil microbes and fauna. This is especially important in grasslands where fire is common and removes aboveground litter accumulation. In this study, we investigated whether fire affects root decomposition and C flow through the belowground food web. In a greenhouse experiment, we applied 13C-enriched big bluestem (Andropogon gerardii) root litter to intact tallgrass prairie soil cores collected from annually burned (AB) and infrequently burned (IB) treatments at the Konza Prairie Long Term Ecological Research (LTER) site. Incorporation of 13C into microbial phospholipid fatty acids and nematode trophic groups was measured on six occasions during a 180-day decomposition study to determine how C was translocated through the soil food web. Results showed significantly different soil communities between treatments and higher microbial abundance for IB. Root decomposition occurred rapidly and was significantly greater for AB. Microbes and their nematode consumers immediately assimilated root litter C in both treatments. Root litter C was preferentially incorporated in a few groups of microbes and nematodes, but depended on burn treatment: fungi, Gram-negative bacteria, Gram-positive bacteria, and fungivore nematodes for AB and only omnivore nematodes for IB. The overall microbial pool of root-litter-derived C significantly increased over time but was not significantly different between burn treatments. The nematode pool of root-litter-derived C also significantly increased over time, and was significantly higher for the AB treatment at 35 and 90 days after litter addition. In conclusion, the C flow from root litter to microbes to nematodes is not only measurable but also significant, indicating that higher nematode trophic levels are critical components of C flow during root decomposition, which, in turn, is significantly affected by fire. 
Not only does fire affect the soil community and root decomposition, but the lower microbial abundance, greater root turnover, and the increased incorporation of root litter C by microbes and nematodes for AB suggests that annual burning increases root-litter-derived C flow through the soil food web of the tallgrass prairie.
Molnár, Sándor; López, Inmaculada; Gámez, Manuel; Garay, József
2016-03-01
The paper is aimed at a methodological development in biological pest control. The considered one-pest, two-agent system is modelled as a verticum-type system. Originally, linear verticum-type systems were introduced by one of the authors for modelling certain industrial systems. These systems are hierarchically composed of linear subsystems such that a part of the state variables of each subsystem affects the dynamics of the next subsystem. Recently, verticum-type system models have been applied to population ecology as well, which required the extension of the concept of a verticum-type system to the nonlinear case. In the present paper the general concepts and techniques of nonlinear verticum-type control systems are used to obtain biological control strategies in a two-agent system. For the illustration of this verticum-type control, these tools of mathematical systems theory are applied to a dynamic model of interactions between the egg and larvae populations of the sugarcane borer (Diatraea saccharalis) and its parasitoids: the egg parasitoid Trichogramma galloi and the larvae parasitoid Cotesia flavipes. In this application a key role is played by the concept of controllability, which means that it is possible to steer the system to an equilibrium in a given time. In addition to a usual linearization, the basic idea is a decomposition of the control of the whole system into the control of the subsystems, making use of the verticum structure of the population system. The main aim of this study is to show several advantages of the verticum (or decomposition) approach over the classical control-theoretic model (without decomposition). For example, in the case of verticum control the pest larval density decreases below the critical threshold value much more quickly than without decomposition. Furthermore, it is also shown that the verticum approach may be better even in terms of cost-effectiveness.
The presented optimal control methodology also turned out to be an efficient tool for the "in silico" analysis of the cost-effectiveness of different biocontrol strategies, e.g. by answering the question how far it is cost-effective to speed up the reduction of the pest larvae density, or along which trajectory this reduction should be carried out. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
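The controllability property invoked above has a standard finite-dimensional check, the Kalman rank condition. The sketch below applies it to an invented 3-state linearization; the matrices are purely illustrative and are not the sugarcane-borer model.

```python
import numpy as np

def controllable(A, B):
    # Kalman rank condition: (A, B) is controllable iff the block matrix
    # [B, AB, ..., A^(n-1) B] has full row rank n.
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.linalg.matrix_rank(np.hstack(blocks)) == n

# Hypothetical 3-state linearization (pest eggs, pest larvae, parasitoid),
# with the control entering only through parasitoid releases.
A = np.array([[-0.5, 0.0, -0.2],
              [0.8, -0.3, -0.4],
              [0.0, 0.6, -0.1]])
B = np.array([[0.0], [0.0], [1.0]])
print(controllable(A, B))  # True: the release input reaches every state
```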
Pan, Kuan Lun; Chen, Mei Chung; Yu, Sheng Jen; Yan, Shaw Yi; Chang, Moo Been
2016-06-01
Direct decompositions of nitric oxide (NO) by La0.7Ce0.3SrNiO4, La0.4Ba0.4Ce0.2SrNiO4, and Pr0.4Ba0.4Ce0.2SrNiO4 are experimentally investigated, and the catalysts are tested with different operating parameters to evaluate their activities. Experimental results indicate that the physical and chemical properties of La0.7Ce0.3SrNiO4 are significantly improved by doping with Ba and partial substitution with Pr. NO decomposition efficiencies achieved with La0.4Ba0.4Ce0.2SrNiO4 and Pr0.4Ba0.4Ce0.2SrNiO4 are 32% and 68%, respectively, at 400 °C with He as carrier gas. As the temperature is increased to 600 °C, NO decomposition efficiencies achieved with La0.4Ba0.4Ce0.2SrNiO4 and Pr0.4Ba0.4Ce0.2SrNiO4 reach 100% with an inlet NO concentration of 1000 ppm and a space velocity fixed at 8000 hr(-1). Effects of O2, H2O(g), and CO2 contents and space velocity on NO decomposition are also explored. The results indicate that NO decomposition efficiencies achieved with La0.4Ba0.4Ce0.2SrNiO4 and Pr0.4Ba0.4Ce0.2SrNiO4 are slightly reduced as space velocity is increased from 8000 to 20,000 hr(-1) at 500 °C. In addition, the activities of both catalysts (La0.4Ba0.4Ce0.2SrNiO4 and Pr0.4Ba0.4Ce0.2SrNiO4) for NO decomposition are slightly reduced in the presence of 5% O2, 5% CO2, or 5% H2O(g). For the durability test, with a space velocity of 8000 hr(-1) and an operating temperature of 600 °C, a high N2 yield is maintained throughout the 60-hr test, revealing the long-term stability of Pr0.4Ba0.4Ce0.2SrNiO4 for NO decomposition. Overall, Pr0.4Ba0.4Ce0.2SrNiO4 shows good catalytic activity for NO decomposition. Nitric oxide (NO) not only causes adverse environmental effects such as acid rain, photochemical smog, and deterioration of visibility and water quality, but also harms the human lungs and respiratory system.
Perovskite-type catalysts, including La0.7Ce0.3SrNiO4, La0.4Ba0.4Ce0.2SrNiO4, and Pr0.4Ba0.4Ce0.2SrNiO4, are applied for direct NO decomposition. The results show that NO decomposition is enhanced when La0.7Ce0.3SrNiO4 is substituted with Ba and/or Pr. At 600 °C, NO decomposition efficiencies achieved with La0.4Ba0.4Ce0.2SrNiO4 and Pr0.4Ba0.4Ce0.2SrNiO4 reach 100%, demonstrating high activity and good potential for direct NO decomposition. Effects of O2, H2O(g), and CO2 contents on catalytic activities are also evaluated and discussed.
Coarse-to-fine markerless gait analysis based on PCA and Gauss-Laguerre decomposition
NASA Astrophysics Data System (ADS)
Goffredo, Michela; Schmid, Maurizio; Conforto, Silvia; Carli, Marco; Neri, Alessandro; D'Alessio, Tommaso
2005-04-01
Human movement analysis is generally performed with marker-based systems, which allow reconstructing, with high levels of accuracy, the trajectories of markers placed at specific points of the human body. Marker-based systems, however, show some drawbacks that can be overcome by the use of video systems applying markerless techniques. In this paper, a specifically designed computer vision technique for the detection and tracking of relevant body points is presented. It is based on the Gauss-Laguerre Decomposition, and a Principal Component Analysis (PCA) technique is used to circumscribe the region of interest. Results obtained on both synthetic and experimental tests show a significant reduction in computational cost, with no significant loss of tracking accuracy.
Automated Decomposition of Model-based Learning Problems
NASA Technical Reports Server (NTRS)
Williams, Brian C.; Millar, Bill
1996-01-01
A new generation of sensor-rich, massively distributed autonomous systems is being developed that has the potential for unprecedented performance, such as smart buildings, reconfigurable factories, adaptive traffic systems and remote earth ecosystem monitoring. To achieve high performance, these massive systems will need to accurately model themselves and their environment from sensor information. Accomplishing this on a grand scale requires automating the art of large-scale modeling. This paper presents a formalization of decompositional model-based learning (DML), a method developed by observing a modeler's expertise at decomposing large-scale model estimation tasks. The method exploits a striking analogy between learning and consistency-based diagnosis. Moriarty, an implementation of DML, has been applied to thermal modeling of a smart building, demonstrating a significant improvement in learning rate.
A robust watermarking scheme using lifting wavelet transform and singular value decomposition
NASA Astrophysics Data System (ADS)
Bhardwaj, Anuj; Verma, Deval; Verma, Vivek Singh
2017-01-01
The present paper proposes a robust image watermarking scheme using lifting wavelet transform (LWT) and singular value decomposition (SVD). Second level LWT is applied on host/cover image to decompose into different subbands. SVD is used to obtain singular values of watermark image and then these singular values are updated with the singular values of LH2 subband. The algorithm is tested on a number of benchmark images and it is found that the present algorithm is robust against different geometric and image processing operations. A comparison of the proposed scheme is performed with other existing schemes and observed that the present scheme is better not only in terms of robustness but also in terms of imperceptibility.
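The SVD half of the scheme can be sketched in a few lines. This toy version embeds into a random stand-in for the LH2 subband and assumes the original host is available at extraction; the matrix sizes and the embedding strength are illustrative assumptions, and the LWT stage is omitted.

```python
import numpy as np

# Toy SVD-domain embedding: the watermark's singular values are blended
# into those of the host band and recovered at extraction with knowledge
# of the original host.
rng = np.random.default_rng(4)
host = rng.random((64, 64))        # stands in for the LH2 subband
wm = rng.random((64, 64))          # watermark image
alpha = 0.05                       # embedding strength (illustrative)

Uh, sh, Vh = np.linalg.svd(host)
sw = np.linalg.svd(wm, compute_uv=False)
marked = Uh @ np.diag(sh + alpha * sw) @ Vh   # watermarked band

# Extraction: subtract the host's singular values and rescale.
s_rec = (np.linalg.svd(marked, compute_uv=False) - sh) / alpha
print(np.allclose(s_rec, sw, atol=1e-6))  # True
```

Blending only singular values keeps the perturbation small (good imperceptibility), while their stability under common image processing operations is what gives such schemes their robustness.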
Multi-variants synthesis of Petri nets for FPGA devices
NASA Astrophysics Data System (ADS)
Bukowiec, Arkadiusz; Doligalski, Michał
2015-09-01
A new method for the synthesis of application-specific logic controllers for FPGA devices is presented. The control algorithm is specified with a control-interpreted Petri net (PT type), which makes it easy to specify parallel processes. The Petri net is decomposed into state-machine-type subnets, each representing one parallel process; algorithms for the coloring of Petri nets are applied for this purpose. Two decomposition approaches are presented: with doublers of macroplaces, or with one global wait place. Next, the subnets are implemented as a two-level logic circuit of the controller, whose levels are obtained as a result of its architectural decomposition. The first-level combinational circuit is responsible for generating the next places, and the second-level decoder is responsible for generating the output symbols. Two variants of such circuits are worked out: with one shared operational memory, or with many flexible distributed memories acting as the decoder. The variants of Petri net decomposition and the structures of the logic circuits can be combined without restriction, which leads to four variants of multi-variant synthesis.
Bernaldo de Quirós, Yara; Seewald, Jeffrey S.; Sylva, Sean P.; Greer, Bill; Niemeyer, Misty; Bogomolni, Andrea L.; Moore, Michael J.
2013-01-01
Gas bubbles in marine mammals entangled and drowned in gillnets have been previously described by computed tomography, gross examination and histopathology. The absence of bacteria or autolytic changes in the tissues of those animals suggested that the gas was produced peri- or post-mortem by a fast decompression, probably caused by quickly hauling animals entangled in the net at depth to the surface. Gas composition analysis and gas scoring are two new diagnostic tools available to distinguish gas embolisms from putrefaction gases. To this end, these methods have been successfully applied to pathological studies of marine mammals. In this study, we characterized the flux and composition of the gas bubbles from bycaught marine mammals in anchored sink gillnets and bottom otter trawls. We compared these data with marine mammals stranded on Cape Cod, MA, USA. Fresh animals and animals with moderate decomposition (decomposition scores of 2 and 3) were prioritized. Results showed that bycaught animals presented with significantly higher gas scores than stranded animals. Gas composition analyses indicate that the gas was formed by decompression, confirming the decompression hypothesis. PMID:24367623
Production of furfural from palm oil empty fruit bunches: kinetic model comparation
NASA Astrophysics Data System (ADS)
Panjaitan, J. R. H.; Monica, S.; Gozan, M.
2017-05-01
Furfural is a chemical compound used in pharmaceuticals, cosmetics, resins and cleaning compounds, and it can be produced by acid hydrolysis of biomass. Indonesia's demand for furfural reached 790 tons in 2010, most of which (about 72%) was imported from China. In this study, reaction kinetic models for furfural production from oil palm empty fruit bunches, with the acid catalyst added at the beginning of the experiment, are determined. Kinetic data were obtained from hydrolysis of empty oil palm bunches using a 3% sulfuric acid catalyst at 170 °C, 180 °C and 190 °C for 20 minutes. The kinetic model that best describes furfural production is one in which the acid-catalysed hydrolysis of hemicellulose and the degradation of furfural both yield the same decomposition product, formic acid, via different reaction pathways. The activation energies obtained for the formation of furfural, the formation of decomposition products from furfural and the formation of decomposition products from hemicellulose are 8.240 kJ/mol, 19.912 kJ/mol and -39.267 kJ/mol, respectively.
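Activation energies of this kind come from fitting rate constants at the three temperatures to the Arrhenius law. A minimal sketch of such a linearised fit, using synthetic rate constants generated from the reported furfural-formation value (the pre-exponential factor A is an arbitrary assumption):

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)
T = np.array([170.0, 180.0, 190.0]) + 273.15   # the three hydrolysis temperatures, K

# hypothetical first-order rate constants generated from the furfural-formation
# activation energy quoted in the abstract (8.240 kJ/mol); A_true is assumed
Ea_true, A_true = 8.240e3, 50.0
k = A_true * np.exp(-Ea_true / (R * T))

# linearised Arrhenius fit: ln k = ln A - (Ea/R) * (1/T)
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea_fit = -slope * R          # J/mol
A_fit = np.exp(intercept)
```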
Signal decomposition for surrogate modeling of a constrained ultrasonic design space
NASA Astrophysics Data System (ADS)
Homa, Laura; Sparkman, Daniel; Wertz, John; Welter, John; Aldrin, John C.
2018-04-01
The U.S. Air Force seeks to improve the methods and measures by which the lifecycle of composite structures is managed. Nondestructive evaluation of damage - particularly internal damage resulting from impact - represents a significant input to that improvement. Conventional ultrasound can detect this damage; however, full 3D characterization has not been demonstrated. A proposed approach for robust characterization uses model-based inversion through fitting of simulated results to experimental data. One challenge with this approach is the high computational expense of the forward model used to simulate the ultrasonic B-scans for each damage scenario. A potential solution is to construct a surrogate model from a subset of simulated ultrasonic scans built using a highly accurate, computationally expensive forward model. However, the dimensionality of these simulated B-scans makes interpolating between them a difficult and potentially infeasible problem. Thus, we propose using the chirplet decomposition to reduce the dimensionality of the data and allow for interpolation in the chirplet parameter space. By applying the chirplet decomposition, we are able to extract the salient features in the data and construct a surrogate forward model.
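A toy illustration of the dimensionality-reduction idea: a simulated trace that is exactly one chirplet atom is reduced to a handful of parameters by a coarse grid search (a crude stand-in for a real chirplet decomposition; all parameter values here are invented):

```python
import numpy as np

def chirplet(t, tc, fc, c, s):
    """Gaussian-windowed linear chirp: one real chirplet atom."""
    tau = t - tc
    return np.exp(-0.5 * (tau / s) ** 2) * np.cos(2 * np.pi * (fc * tau + 0.5 * c * tau ** 2))

t = np.linspace(0.0, 1.0, 500)
# noise-free synthetic "echo": a single chirplet with known parameters
sig = 0.8 * chirplet(t, 0.4, 30.0, 40.0, 0.05)

# coarse grid search: represent the trace by the best-fitting atom's few parameters
best = None
for tc in (0.3, 0.4, 0.5):
    for fc in (20.0, 30.0, 40.0):
        for c in (0.0, 40.0):
            atom = chirplet(t, tc, fc, c, 0.05)
            coef = (sig @ atom) / (atom @ atom)        # least-squares coefficient
            resid = np.linalg.norm(sig - coef * atom)  # residual after removing the atom
            if best is None or resid < best[0]:
                best = (resid, tc, fc, c, coef)
```

Interpolation then happens in the small (tc, fc, c, coef) parameter space rather than over full B-scans.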
DOE Office of Scientific and Technical Information (OSTI.GOV)
Futatani, S.; Bos, W.J.T.; Del-Castillo-Negrete, Diego B
2011-01-01
We assess two techniques for extracting coherent vortices out of turbulent flows: the wavelet-based Coherent Vorticity Extraction (CVE) and the Proper Orthogonal Decomposition (POD). The former decomposes the flow field into an orthogonal wavelet representation, and subsequent thresholding of the coefficients allows one to split the flow into organized coherent vortices with non-Gaussian statistics and a structureless, incoherent random part. POD is based on the singular value decomposition and decomposes the flow into basis functions which are optimal with respect to the retained energy for the ensemble average. Both techniques are applied to direct numerical simulation data of two-dimensional drift-wave turbulence governed by the Hasegawa-Wakatani equation, considering two limit cases: the quasi-hydrodynamic and the quasi-adiabatic regimes. The results are compared in terms of compression rate, retained energy, retained enstrophy and retained radial flux, together with the enstrophy spectrum and higher-order statistics. (c) 2010 Published by Elsevier Masson SAS on behalf of Academie des sciences.
Ferroelectric based catalysis: Switchable surface chemistry
NASA Astrophysics Data System (ADS)
Kakekhani, Arvin; Ismail-Beigi, Sohrab
2015-03-01
We describe a new class of catalysts that uses an epitaxial monolayer of a transition metal oxide on a ferroelectric substrate. The ferroelectric polarization switches the surface chemistry between strongly adsorptive and strongly desorptive regimes, circumventing difficulties encountered on non-switchable catalytic surfaces, where the Sabatier principle dictates a moderate surface-molecule interaction strength. This method is general and can, in principle, be applied to many reactions, and for each case the choice of the transition metal oxide monolayer can be optimized. Here, as a specific example, we show how simultaneous direct NOx decomposition (into N2 and O2) and CO oxidation can be achieved efficiently on CrO2-terminated PbTiO3, while circumventing oxygen (and sulfur) poisoning issues. Direct NOx decomposition has been an open challenge in the automotive emission-control industry. Our method can expand the range of catalytically active elements to those which are not conventionally considered for catalysis and which are more economical, e.g., Cr (for direct NOx decomposition and CO oxidation) instead of canonical precious-metal catalysts. Primary support from Toyota Motor Engineering and Manufacturing, North America, Inc.
Biney, Paul O; Gyamerah, Michael; Shen, Jiacheng; Menezes, Bruna
2015-03-01
A new multi-stage kinetic model has been developed for TGA pyrolysis of arundo, corn stover, sawdust and switch grass that accounts for the initial biomass weight (W0). The biomass samples were decomposed in a nitrogen atmosphere from 23 °C to 900 °C in a TGA at a single 20 °C/min ramp rate, in contrast with the isoconversion technique. The decomposition was divided into multiple stages based on the absolute local minimum values of the conversion derivative, (dx/dT), obtained from DTG curves. This resulted in three decomposition stages for arundo, corn stover and sawdust, and four stages for switch grass. A linearized multi-stage model was applied to the TGA data for each stage to determine the pre-exponential factor, activation energy, and reaction order. The activation energies ranged from 54.7 to 60.9 kJ/mol, 62.9 to 108.7 kJ/mol, and 18.4 to 257.9 kJ/mol for the first, second and third decomposition stages, respectively. Copyright © 2014 Elsevier Ltd. All rights reserved.
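The stage-splitting rule described above, with boundaries at local minima of dx/dT, can be sketched on a synthetic two-peak DTG curve (peak positions and widths are invented):

```python
import numpy as np

# synthetic DTG-like curve: two overlapping decomposition peaks vs. temperature
T = np.linspace(150.0, 600.0, 901)
dxdT = np.exp(-((T - 300.0) / 30.0) ** 2) + 0.6 * np.exp(-((T - 450.0) / 40.0) ** 2)

# stage boundaries at interior local minima of dx/dT, as in the multi-stage model
i = np.arange(1, len(T) - 1)
is_min = (dxdT[i] < dxdT[i - 1]) & (dxdT[i] < dxdT[i + 1])
stage_bounds = T[i[is_min]]          # one boundary here -> two decomposition stages
```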
Yang, Haixuan; Seoighe, Cathal
2016-01-01
Nonnegative Matrix Factorization (NMF) has proved to be an effective method for unsupervised clustering analysis of gene expression data. By the nonnegativity constraint, NMF provides a decomposition of the data matrix into two matrices that have been used for clustering analysis. However, the decomposition is not unique. This allows different clustering results to be obtained, resulting in different interpretations of the decomposition. To alleviate this problem, some existing methods directly enforce uniqueness to some extent by adding regularization terms in the NMF objective function. Alternatively, various normalization methods have been applied to the factor matrices; however, the effects of the choice of normalization have not been carefully investigated. Here we investigate the performance of NMF for the task of cancer class discovery, under a wide range of normalization choices. After extensive evaluations, we observe that the maximum norm showed the best performance, although the maximum norm has not previously been used for NMF. Matlab codes are freely available from: http://maths.nuigalway.ie/~haixuanyang/pNMF/pNMF.htm.
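A small sketch of the normalization question studied here, using scikit-learn's NMF on a synthetic two-cluster matrix and applying maximum-norm normalization to the factor columns (the data and factorization settings are illustrative, not those of the paper):

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# synthetic "expression" matrix: 40 samples in two clear clusters, 100 genes
W_true = np.vstack([np.tile([5.0, 0.1], (20, 1)), np.tile([0.1, 5.0], (20, 1))])
X = W_true @ rng.random((2, 100))

model = NMF(n_components=2, init="nndsvda", random_state=0, max_iter=500)
W = model.fit_transform(X)
H = model.components_

# max-norm normalisation of the factor columns (the choice favoured in the study),
# with the inverse scale pushed into H so the product W_n @ H_n is unchanged
scale = W.max(axis=0)
W_n = W / scale
H_n = H * scale[:, None]

labels = W_n.argmax(axis=1)   # cluster assignment per sample
```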
DeMAID/GA an Enhanced Design Manager's Aid for Intelligent Decomposition
NASA Technical Reports Server (NTRS)
Rogers, J. L.
1996-01-01
Many companies are looking for new tools and techniques to aid a design manager in making decisions that can reduce the time and cost of a design cycle. One tool is the Design Manager's Aid for Intelligent Decomposition (DeMAID). Since the initial public release of DeMAID in 1989, much research has been done in the areas of decomposition, concurrent engineering, parallel processing, and process management; many new tools and techniques have emerged. Based on these recent research and development efforts, numerous enhancements have been added to DeMAID to further aid the design manager in saving both cost and time in a design cycle. The key enhancement, a genetic algorithm (GA), will be available in the next public release called DeMAID/GA. The GA sequences the design processes to minimize the cost and time in converging to a solution. The major enhancements in the upgrade of DeMAID to DeMAID/GA are discussed in this paper. A sample conceptual design project is used to show how these enhancements can be applied to improve the design cycle.
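The GA's job, sequencing processes to minimize feedback couplings, can be sketched with a small permutation GA on a hypothetical design structure matrix (the couplings, operators and GA settings are invented for illustration, not DeMAID/GA's):

```python
import random

random.seed(0)
N = 8
# hypothetical design-structure matrix: coupling[i][j] = 1 means task j feeds task i
coupling = [[0] * N for _ in range(N)]
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (2, 0)]:
    coupling[i][j] = 1   # (2, 0) closes a feedback cycle 0 -> 2 -> 1 -> 0

def feedbacks(order):
    """Cost: couplings whose source task comes after its consumer in the sequence."""
    pos = {task: k for k, task in enumerate(order)}
    return sum(1 for i in range(N) for j in range(N)
               if coupling[i][j] and pos[j] > pos[i])

pop = [random.sample(range(N), N) for _ in range(40)]
for gen in range(200):
    pop.sort(key=feedbacks)
    nxt = pop[:10]                                    # elitism
    while len(nxt) < 40:
        a, b = random.sample(pop[:20], 2)
        cut = random.randrange(1, N)                  # one-point permutation crossover
        child = a[:cut] + [t for t in b if t not in a[:cut]]
        if random.random() < 0.3:                     # swap mutation
            p, q = random.sample(range(N), 2)
            child[p], child[q] = child[q], child[p]
        nxt.append(child)
    pop = nxt
best = min(pop, key=feedbacks)
```

Because of the built-in cycle, at least one feedback always remains; the GA's task is to push the count down to that floor.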
NASA Astrophysics Data System (ADS)
Shao, Feng; Evanschitzky, Peter; Fühner, Tim; Erdmann, Andreas
2009-10-01
This paper employs the Waveguide decomposition method as an efficient rigorous electromagnetic field (EMF) solver to investigate three-dimensional mask-induced imaging artifacts in EUV lithography. The major mask-diffraction-induced imaging artifacts are first identified by applying Zernike analysis to the mask near-field spectrum of 2D lines/spaces. Three-dimensional mask features such as 22nm semidense/dense contacts/posts, isolated elbows and line-ends are then investigated in terms of lithographic results. After that, the 3D mask-induced imaging artifacts, such as feature-orientation-dependent best focus shift, process window asymmetries, and other aberration-like phenomena, are explored for the studied mask features. The simulation results can help lithographers understand the causes of EUV-specific imaging artifacts and devise illumination- and feature-dependent strategies for their compensation in the optical proximity correction (OPC) for EUV masks. Finally, an efficient approach combining the Zernike analysis with the Waveguide decomposition technique is proposed to characterize the impact of mask properties for the future OPC process.
Yang, Li; Sun, Rui; Hase, William L
2011-11-08
In a previous study (J. Chem. Phys. 2008, 129, 094701) it was shown that for a large molecule, with a total energy much greater than its barrier for decomposition and whose vibrational modes are harmonic oscillators, the expressions for the classical Rice-Ramsperger-Kassel-Marcus (RRKM) (i.e., RRK) and classical transition-state theory (TST) rate constants become equivalent. Using this relationship, a molecule's unimolecular rate constants versus temperature may be determined from chemical dynamics simulations of microcanonical ensembles for the molecule at different total energies. The simulation identifies the molecule's unimolecular pathways and their Arrhenius parameters. In the work presented here, this approach is used to study the thermal decomposition of CH3-NH-CH=CH-CH3, an important constituent in the polymer of cross-linked epoxy resins. Direct dynamics simulations, at the MP2/6-31+G* level of theory, were used to investigate the decomposition of microcanonical ensembles for this molecule. The Arrhenius A and Ea parameters determined from the direct dynamics simulation are in very good agreement with the TST Arrhenius parameters for the MP2/6-31+G* potential energy surface. The simulation method applied here may be particularly useful for large molecules with a multitude of decomposition pathways and whose transition states may be difficult to determine and have structures that are not readily obvious.
Szlavik, Robert B
2016-02-01
The characterization of peripheral nerve fiber distributions, in terms of diameter or velocity, is of clinical significance because information associated with these distributions can be utilized in the differential diagnosis of peripheral neuropathies. Electro-diagnostic techniques can be applied to the investigation of peripheral neuropathies and can yield valuable diagnostic information while being minimally invasive. Nerve conduction velocity studies are single parameter tests that yield no detailed information regarding the characteristics of the population of nerve fibers that contribute to the compound-evoked potential. Decomposition of the compound-evoked potential, such that the velocity or diameter distribution of the contributing nerve fibers may be determined, is necessary if information regarding the population of contributing nerve fibers is to be ascertained from the electro-diagnostic study. In this work, a perturbation-based decomposition of compound-evoked potentials is proposed that facilitates determination of the fiber diameter distribution associated with the compound-evoked potential. The decomposition is based on representing the single fiber-evoked potential, associated with each diameter class, as being perturbed by contributions, of varying degree, from all the other diameter class single fiber-evoked potentials. The resultant estimator of the contributing nerve fiber diameter distribution is valid for relatively large separations in diameter classes. It is also useful in situations where the separation between diameter classes is small and the concomitant single fiber-evoked potentials are not orthogonal.
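The idea of recovering a fiber-class distribution from a compound potential can be sketched as a linear least-squares problem, using toy single-fiber waveforms (the waveform shape, conduction distance and class counts are invented; the paper's perturbation-based estimator is more elaborate):

```python
import numpy as np

t = np.linspace(0.0, 20.0, 400)          # time, ms

def sfap(t, v):
    """Toy single-fibre potential for conduction velocity v; arrives near d/v."""
    d = 100.0                            # mm, assumed conduction distance
    tau = t - d / v
    return tau * np.exp(-0.5 * (tau / 0.8) ** 2)

velocities = np.array([10.0, 20.0, 40.0])      # three velocity/diameter classes
n_true = np.array([50.0, 120.0, 80.0])         # fibres per class (hypothetical)
cep = sum(n * sfap(t, v) for n, v in zip(n_true, velocities))  # compound potential

# decomposition: solve  cep = A @ n  for the class counts by least squares
A = np.column_stack([sfap(t, v) for v in velocities])
n_est, *_ = np.linalg.lstsq(A, cep, rcond=None)
```

With well-separated classes the columns of A are nearly orthogonal and the fit is well conditioned; the paper's estimator addresses exactly the regime where they are not.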
A Novel Framework Based on FastICA for High Density Surface EMG Decomposition
Chen, Maoqi; Zhou, Ping
2015-01-01
This study presents a progressive FastICA peel-off (PFP) framework for high density surface electromyogram (EMG) decomposition. The novel framework is based on a shift-invariant model for describing surface EMG. The decomposition process can be viewed as progressively expanding the set of motor unit spike trains, which is primarily based on FastICA. To overcome the local convergence of FastICA, a “peel off” strategy (i.e. removal of the estimated motor unit action potential (MUAP) trains from the previous step) is used to mitigate the effects of the already identified motor units, so more motor units can be extracted. Moreover, a constrained FastICA is applied to assess the extracted spike trains and correct possible erroneous or missed spikes. These procedures work together to improve the decomposition performance. The proposed framework was validated using simulated surface EMG signals with different motor unit numbers (30, 70, 91) and signal to noise ratios (SNRs) (20, 10, 0 dB). The results demonstrated relatively large numbers of extracted motor units and high accuracies (high F1-scores). The framework was also tested with 111 trials of 64-channel electrode array experimental surface EMG signals during the first dorsal interosseous (FDI) muscle contraction at different intensities. On average 14.1 ± 5.0 motor units were identified from each trial of experimental surface EMG signals. PMID:25775496
Carbon dioxide emissions from the electricity sector in major countries: a decomposition analysis.
Li, Xiangzheng; Liao, Hua; Du, Yun-Fei; Wang, Ce; Wang, Jin-Wei; Liu, Yanan
2018-03-01
The electric power sector is one of the primary sources of CO2 emissions. Analyzing the influential factors behind CO2 emissions from the power sector provides valuable information for reducing the world's CO2 emissions. Herein, we applied the Divisia decomposition method to analyze the influential factors for CO2 emissions from the power sectors of 11 countries, which together account for 67% of the world's emissions, from 1990 to 2013. We decompose the influential factors for CO2 emissions into seven areas: the emission coefficient, energy intensity, the share of electricity generation, the share of thermal power generation, electricity intensity, economic activity, and population. The decomposition results show that economic activity, population, and the emission coefficient play positive roles in increasing CO2 emissions, with contribution rates of 119, 23.9, and 0.5%, respectively. Energy intensity, electricity intensity, the share of electricity generation, and the share of thermal power generation curb CO2 emissions, with contribution rates of 17.2, 15.7, 7.7, and 2.8%, respectively. The country-level decomposition shows that economic activity and population are the major factors responsible for increasing CO2 emissions from the power sector; in developed countries, however, the other factors can offset the growth in CO2 emissions due to economic activity.
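For a multiplicative identity such as C = (C/E) x (E/G) x G, an LMDI-type Divisia decomposition assigns each factor an exactly additive contribution. A minimal sketch with invented two-period numbers for a single hypothetical country:

```python
import math

# two-period data (all values hypothetical):
#   C = (C/E) * (E/G) * G   (emission coefficient x energy intensity x activity)
C0, C1 = 100.0, 130.0     # CO2 emissions
E0, E1 = 50.0, 55.0       # energy use
G0, G1 = 200.0, 300.0     # economic activity

def logmean(a, b):
    """Logarithmic mean, the weight used in the LMDI (Divisia) decomposition."""
    return a if a == b else (a - b) / (math.log(a) - math.log(b))

w = logmean(C1, C0)
effect_coefficient = w * math.log((C1 / E1) / (C0 / E0))   # emission coefficient
effect_intensity   = w * math.log((E1 / G1) / (E0 / G0))   # energy intensity
effect_activity    = w * math.log(G1 / G0)                 # economic activity
```

The three effects sum exactly to the total change C1 - C0, which is why this form of decomposition leaves no residual term.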
Rai, Prashant; Sargsyan, Khachik; Najm, Habib; ...
2017-03-07
Here, a new method is proposed for a fast evaluation of high-dimensional integrals of potential energy surfaces (PES) that arise in many areas of quantum dynamics. It decomposes a PES into a canonical low-rank tensor format, reducing its integral into a relatively short sum of products of low-dimensional integrals. The decomposition is achieved by the alternating least squares (ALS) algorithm, requiring only a small number of single-point energy evaluations. Therefore, it eradicates a force-constant evaluation as the hotspot of many quantum dynamics simulations and also possibly lifts the curse of dimensionality. This general method is applied to the anharmonic vibrational zero-point and transition energy calculations of molecules using the second-order diagrammatic vibrational many-body Green's function (XVH2) theory with a harmonic-approximation reference. In this application, high-dimensional PES and Green's functions are both subjected to a low-rank decomposition. Evaluating the molecular integrals over a low-rank PES and Green's functions as sums of low-dimensional integrals using the Gauss-Hermite quadrature, this canonical-tensor-decomposition-based XVH2 (CT-XVH2) achieves an accuracy of 0.1 cm-1 or better and nearly an order of magnitude speedup as compared with the original algorithm using force constants for water and formaldehyde.
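The canonical (CP) decomposition by ALS can be sketched in a few lines of numpy on a small, exactly low-rank three-way tensor (the real application factorizes a PES; the dimensions and rank here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
I, J, K, R = 8, 9, 10, 2
# build an exactly rank-2 three-way tensor (a stand-in for a low-rank PES grid)
A0, B0, C0 = rng.random((I, R)), rng.random((J, R)), rng.random((K, R))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)

def khatri_rao(U, V):
    """Column-wise Kronecker product, shape (|U|*|V|, R)."""
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, U.shape[1])

A, B, C = rng.random((I, R)), rng.random((J, R)), rng.random((K, R))
for _ in range(100):   # alternating least squares over the three factor matrices
    A = T.reshape(I, -1) @ np.linalg.pinv(khatri_rao(B, C).T)
    B = np.transpose(T, (1, 0, 2)).reshape(J, -1) @ np.linalg.pinv(khatri_rao(A, C).T)
    C = np.transpose(T, (2, 0, 1)).reshape(K, -1) @ np.linalg.pinv(khatri_rao(A, B).T)

T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
rel_err = np.linalg.norm(T - T_hat) / np.linalg.norm(T)
```

Once the factors are known, an integral over the full tensor separates into sums of products of one-dimensional sums, which is the source of the speedup described above.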
Signal processing techniques were applied to high-resolution time series data obtained from conductivity loggers placed upstream and downstream of an oil and gas wastewater treatment facility along a river. Data was collected over 14-60 days. The power spectral density was us...
USDA-ARS?s Scientific Manuscript database
Fast pyrolysis is rapid heating in the absence of oxygen resulting in decomposition of organic material. When applied to biomass, it produces bio-oil, bio-char and gas. The Agricultural Research Service (ARS) of the USDA has studied fluidized-bed fast pyrolysis of several biomass including perenni...
Measurement System for Energetic Materials Decomposition
2015-01-05
Shpotyuk, O; Bujňáková, Z; Baláž, P; Ingram, A; Shpotyuk, Y
2016-01-05
Positron annihilation lifetime spectroscopy was applied to characterize the free-volume structure of polyvinylpyrrolidone used as a nonionic stabilizer in the production of many nanocomposite pharmaceuticals. The polymer samples, with an average molecular weight of 40,000 g mol(-1), were pelletized in a single-punch tableting machine under an applied pressure of 0.7 GPa. Strong mixing of positron and positronium trapping channels was revealed in the polyvinylpyrrolidone pellets. The positron lifetime spectra accumulated under normal measuring statistics were analysed in terms of unconstrained three- and four-term decomposition, the latter also being realized under a fixed 0.125 ns lifetime proper to para-positronium self-annihilation in a vacuum. It was shown that the average positron lifetime extracted from each decomposition was primarily defined by the long-lived ortho-positronium component. The positron lifetime spectra treated within the unconstrained three-term fitting were clearly preferable, giving a third positron lifetime dominated by ortho-positronium pick-off annihilation in the polymer matrix. This fitting procedure was most meaningful when analysing expected positron trapping sites in polyvinylpyrrolidone-stabilized nanocomposite pharmaceuticals. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Feng, Tao
2013-04-01
Climate change is reflected not only in changes in the annual means of climate variables but also in changes in their annual cycles (seasonality), especially in regions outside the tropics. Changes in the timing of seasons, especially the wind season, have gained much attention worldwide over the recent decade or so. We introduce long-range correlated surrogate data into the Ensemble Empirical Mode Decomposition method; such surrogates represent the statistical characteristics of the data better than white noise. We name the new method Ensemble Empirical Mode Decomposition with Long-Range Correlated noise (EEMD-LRC) and apply it to wind speed records from 600 stations, investigating the trend in the amplitude of the annual cycle of China's daily mean surface wind speed for the period 1971-2005. The amplitude of the seasonal variation decreased significantly over China during this period, which is well captured by the annual-cycle component from EEMD-LRC. Furthermore, the phase change of the annual cycle led to a pronounced shortening of the spring wind season, corresponding to changes in the frequency of strongly windy days over northern China.
NASA Astrophysics Data System (ADS)
Herrera, I.; Herrera, G. S.
2015-12-01
Most geophysical systems are macroscopic physical systems, and predictions of their behavior are carried out by means of computational models whose basic mathematical models are partial differential equations (PDEs) [1]. Due to the enormous size of the discretized versions of such PDEs, it is necessary to apply highly parallelized supercomputers, for which, at present, the most efficient software is based on non-overlapping domain decomposition methods (DDM). A limiting feature of the present state-of-the-art techniques, however, stems from the kind of discretizations used in them. Recently, I. Herrera and co-workers, using 'non-overlapping discretizations', have produced the DVS software, which overcomes this limitation [2]. The DVS software can be applied to a great variety of geophysical problems and achieves very high parallel efficiencies (90% or so [3]). It is therefore very suitable for effectively exploiting the most advanced parallel supercomputers available at present. In a parallel talk at this AGU Fall Meeting, Graciela Herrera Z. will present how this software is being applied to advance MODFLOW. Key Words: Parallel Software for Geophysics, High Performance Computing, HPC, Parallel Computing, Domain Decomposition Methods (DDM). REFERENCES [1] Herrera, Ismael and George F. Pinder, "Mathematical Modelling in Science and Engineering: An Axiomatic Approach", John Wiley, 243 p., 2012. [2] Herrera, I., de la Cruz, L.M. and Rosas-Medina, A., "Non-Overlapping Discretization Methods for Partial Differential Equations", Numer. Meth. Part. D. E., 30: 1427-1454, 2014, DOI 10.1002/num.21852. (Open source) [3] Herrera, I., & Contreras, Iván, "An Innovative Tool for Effectively Applying Highly Parallelized Software to Problems of Elasticity", Geofísica Internacional, 2015 (in press).
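The basic nonoverlapping-DDM idea, independent subdomain solves coupled through an interface (Schur complement) system, can be sketched on a 1-D Poisson problem (this is the textbook construction, not the DVS algorithm itself):

```python
import numpy as np

# 1-D Poisson -u'' = f on (0,1), u(0)=u(1)=0, standard 3-point stencil
n = 21                      # interior unknowns (odd, so one interface node)
h = 1.0 / (n + 1)
f = np.ones(n)
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h ** 2

# nonoverlapping split: subdomain 1, subdomain 2, and a single interface node
m = n // 2
i1, i2, ig = np.arange(m), np.arange(m + 1, n), np.array([m])
A11, A22 = A[np.ix_(i1, i1)], A[np.ix_(i2, i2)]
A1g, A2g = A[np.ix_(i1, ig)], A[np.ix_(i2, ig)]
Agg = A[np.ix_(ig, ig)]

# Schur complement system on the interface: S u_g = g
S = Agg - A1g.T @ np.linalg.solve(A11, A1g) - A2g.T @ np.linalg.solve(A22, A2g)
g = f[ig] - A1g.T @ np.linalg.solve(A11, f[i1]) - A2g.T @ np.linalg.solve(A22, f[i2])
ug = np.linalg.solve(S, g)

# back-substitution: the two subdomain solves are independent (parallelisable)
u = np.empty(n)
u[ig] = ug
u[i1] = np.linalg.solve(A11, f[i1] - A1g @ ug)
u[i2] = np.linalg.solve(A22, f[i2] - A2g @ ug)
```

Only the (small) interface system couples the subdomains, which is what makes this family of methods scale well on parallel machines.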
DFT investigations of hydrogen storage materials
NASA Astrophysics Data System (ADS)
Wang, Gang
Hydrogen is a promising new energy source: it is non-polluting and abundant on Earth. The most difficult problem in applying hydrogen, however, is storing it effectively and safely, which can be addressed by keeping hydrogen in metal hydrides to reach a high hydrogen density in a safe way. There are several promising metal hydrides, whose thermodynamic and chemical properties are investigated in this dissertation. Sodium alanate (NaAlH4) is one of the promising metal hydrides, with a high hydrogen storage capacity of around 7.4 wt.% and a relatively low decomposition temperature of around 100 °C with a proper catalyst. Sodium hydride is a product of the decomposition of NaAlH4 that may affect the dynamics of NaAlH4. In both materials, oxygen contamination such as OH- may influence the kinetics of the dehydriding/rehydriding processes; thus the solid solubility of OH- groups (NaOH) in NaAlH4 and NaH is studied theoretically by DFT calculations. Magnesium borohydride [Mg(BH4)2] has a higher hydrogen capacity of about 14.9 wt.% and a decomposition temperature of around 250 °C. However, one flaw restraining its application is that polyboron compounds such as MgB12H12 form and prevent further release of hydrogen. Adding transition metals that form a magnesium transition metal ternary borohydride [MgaTMb(BH4)c] may simplify the decomposition process, releasing hydrogen with ternary borides (MgaTMbBc). The search for the probable ternary borides, the corresponding pseudo phase diagrams, and the decomposition thermodynamics is performed using DFT calculations and the GCLP method to identify some possible candidates.
Lindegren, Martin; Denker, Tim Spaanheden; Floeter, Jens; Fock, Heino O.; Sguotti, Camilla; Stäbler, Moritz; Otto, Saskia A.; Möllmann, Christian
2017-01-01
Understanding spatio-temporal dynamics of biotic communities containing large numbers of species is crucial to guide ecosystem management and conservation efforts. However, traditional approaches usually focus on studying community dynamics either in space or in time, often failing to fully account for interlinked spatio-temporal changes. In this study, we demonstrate and promote the use of tensor decomposition for disentangling spatio-temporal community dynamics in long-term monitoring data. Tensor decomposition builds on traditional multivariate statistics (e.g. Principal Component Analysis) but extends it to multiple dimensions. This extension allows for the synchronized study of multiple ecological variables measured repeatedly in time and space. We applied this comprehensive approach to explore the spatio-temporal dynamics of 65 demersal fish species in the North Sea, a marine ecosystem strongly altered by human activities and climate change. Our case study demonstrates how tensor decomposition can successfully (i) characterize the main spatio-temporal patterns and trends in species abundances, (ii) identify sub-communities of species that share similar spatial distribution and temporal dynamics, and (iii) reveal external drivers of change. Our results revealed a strong spatial structure in fish assemblages persistent over time and linked to differences in depth, primary production and seasonality. Furthermore, we simultaneously characterized important temporal distribution changes related to the low frequency temperature variability inherent in the Atlantic Multidecadal Oscillation. Finally, we identified six major sub-communities composed of species sharing similar spatial distribution patterns and temporal dynamics. Our case study demonstrates the application and benefits of using tensor decomposition for studying complex community data sets usually derived from large-scale monitoring programs. PMID:29136658
NASA Astrophysics Data System (ADS)
Chen, Yi-Feng; Atal, Kiran; Xie, Sheng-Quan; Liu, Quan
2017-08-01
Objective. Accurate and efficient detection of steady-state visual evoked potentials (SSVEP) in the electroencephalogram (EEG) is essential for related brain-computer interface (BCI) applications. Approach. Although canonical correlation analysis (CCA) has been applied extensively and successfully to SSVEP recognition, the spontaneous EEG activities and artifacts that often occur during data recording can deteriorate recognition performance. Therefore, it is meaningful to extract a few frequency sub-bands of interest to avoid or reduce the influence of unrelated brain activity and artifacts. This paper presents an improved method to detect the frequency component associated with SSVEP using multivariate empirical mode decomposition (MEMD) and CCA (MEMD-CCA). EEG signals from nine healthy volunteers were recorded to evaluate the performance of the proposed method for SSVEP recognition. Main results. We compared our method with CCA and the temporally local multivariate synchronization index (TMSI). The results suggest that MEMD-CCA achieved significantly higher accuracy than standard CCA and TMSI. It gave improvements of 1.34%, 3.11%, 3.33%, 10.45%, 15.78%, 18.45%, 15.00% and 14.22% on average over CCA at time windows from 0.5 s to 5 s, and 0.55%, 1.56%, 7.78%, 14.67%, 13.67%, 7.33% and 7.78% over TMSI from 0.75 s to 5 s. The method outperformed filter-based decomposition (FB), empirical mode decomposition (EMD) and wavelet decomposition (WT) based CCA for SSVEP recognition. Significance. The results demonstrate the ability of the proposed MEMD-CCA to improve the performance of SSVEP-based BCI.
NASA Astrophysics Data System (ADS)
Svintsitskiy, Dmitry A.; Kardash, Tatyana Yu.; Slavinskaya, Elena M.; Stonkus, Olga A.; Koscheev, Sergei V.; Boronin, Andrei I.
2018-01-01
The mixed silver-copper oxide Ag2Cu2O3 with a paramelaconite crystal structure is a promising material for catalytic applications. The as-prepared sample of Ag2Cu2O3 consisted of brick-like particles extended along the [001] direction. A combination of physicochemical techniques such as TEM, XPS and XRD was applied to investigate the structural features of this mixed silver-copper oxide. The thermal stability of Ag2Cu2O3 was investigated using in situ XRD under different reaction conditions, including a catalytic CO + O2 mixture. The first step of Ag2Cu2O3 decomposition was accompanied by the appearance of ensembles consisting of silver nanoparticles with sizes of 5-15 nm. Silver nanoparticles were strongly oriented to each other and to the surface of the initial Ag2Cu2O3 bricks. Based on the XRD data, it was shown that the release of silver occurred along the a and b axes of the paramelaconite structure. Partial decomposition of Ag2Cu2O3 accompanied by the formation of silver nanoparticles was observed during prolonged air storage under ambient conditions. The high reactivity is discussed as a reason for spontaneous decomposition during Ag2Cu2O3 storage. The full decomposition of the mixed oxide into metallic silver and copper (II) oxide took place at temperatures higher than 300 °C regardless of the nature of the reaction medium (helium, air, CO + O2). Catalytic properties of partially and fully decomposed samples of mixed silver-copper oxide were measured in low-temperature CO oxidation and C2H4 epoxidation reactions.
Decomposition of 2,4,6-trinitrotoluene (TNT) by gamma irradiation.
Lee, Byungjin; Lee, Myunjoo
2005-12-01
The purpose of this study was to evaluate the potential of gamma irradiation to decompose 2,4,6-trinitrotoluene (TNT) in an aqueous solution; the concentration range of the TNT solution was 0.11-0.44 mmol/L. The decomposition of TNT by gamma irradiation followed pseudo-first-order kinetics over the applied initial concentrations. The dose constant was strongly dependent on the initial concentration of TNT. Increasing the concentration of dissolved oxygen in the solution enhanced both the decomposition of TNT and its mineralization. The required irradiation dose to remove 90% of the initial TNT (0.44 mmol/L) was 58, 41, 32, 28, and 25 kGy at dissolved oxygen concentrations of 0.025, 0.149, 0.3, 0.538, and 0.822 mmol/L, respectively. However, 30% of the initial TOC (3.19 mmol/L) still remained when a 200 kGy irradiation dose was applied to the TNT solution (0.44 mmol/L) containing dissolved oxygen at 0.822 mmol/L. The removal of the TNT was more efficient at a pH below 3 and at a pH above 11 than at neutral pH (pH 5-9). The required irradiation dose to remove over 99% of the initial TNT (0.44 mmol/L) was 39, 76, and 10 kGy at pH 2, 7, and 13, respectively. The dose constant was increased 1.6-fold and over 15.6-fold at pH 2 and 13, respectively, compared to that at pH 7. When an irradiation dose of 200 kGy was applied, the removal efficiencies of the TOC (initial concentration 3.19 mmol/L) were 91, 46, and 53% at pH 2, 7, and 13, respectively. Ammonia and nitrate were detected as the main nitrogen byproducts of TNT, and glyoxalic acid and oxalic acid were detected as organic byproducts.
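The pseudo-first-order kinetics reported above imply C(D) = C0 exp(-kD) with dose D, so the dose constant k can be estimated by a log-linear fit and converted into the dose needed for a target removal fraction. A minimal sketch using synthetic values, not the paper's data:

```python
import numpy as np

def dose_constant(doses, conc):
    """Fit C(D) = C0 * exp(-k * D); return the dose constant k (per kGy).
    Assumes doses[0] corresponds to the unirradiated concentration C0."""
    y = np.log(conc / conc[0])
    slope, _ = np.polyfit(doses, y, 1)  # y = -k * D
    return -slope

def dose_for_removal(k, frac):
    """Dose required to remove the given fraction of the initial TNT."""
    return -np.log(1.0 - frac) / k
```

For example, the paper's "dose to remove 90%" values correspond to ln(10)/k for each measured dose constant.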
Decomposition of fresh and anaerobically digested plant biomass in soil
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moorhead, K.K.; Graetz, D.A.; Reddy, K.R.
Using water hyacinth (Eichhornia crassipes (Mart.) Solms) for waste water renovation produces biomass that must be disposed of. This biomass may be anaerobically digested to produce CH4 or added to soil directly as an amendment. In this study, fresh and anaerobically digested water hyacinth biomass, with either low or high N tissue content, were added to soil to evaluate C and N mineralization characteristics. The plant biomass was labeled with 15N before digestion. The fresh plant biomass and digested biomass sludge were freeze-dried and ground to pass a 0.84-mm sieve. The materials were thoroughly mixed with a Kindrick fine sand at a rate of 5 g/kg soil and incubated for 90 d at 27°C at a moisture content adjusted to 0.01 MPa. Decomposition was evaluated by CO2 evolution and 15N mineralization. After 90 d, approximately 20% of the added C of the digested sludges had evolved as CO2 compared to 39 and 50% of the added C of the fresh plant biomass with a low and high N content, respectively. First-order kinetics were used to describe decomposition stages. Mineralization of organic 15N to 15NO3-N accounted for 8% of applied N for both digested sludges at 90 d. Nitrogen mineralization accounted for 3 and 33% of the applied organic N for fresh plant biomass with a low and high N content, respectively.
NASA Astrophysics Data System (ADS)
Carpentier, Pierre-Luc
In this thesis, we consider the midterm production planning problem (MTPP) of hydroelectricity generation under uncertainty. The aim of this problem is to manage a set of interconnected hydroelectric reservoirs over several months. We are particularly interested in high-dimensional reservoir systems operated by large hydroelectricity producers such as Hydro-Quebec. The aim of this thesis is to develop and evaluate different decomposition methods for solving the MTPP under uncertainty. This thesis is divided into three articles. The first article demonstrates the applicability of the progressive hedging algorithm (PHA), a scenario decomposition method, for managing hydroelectric reservoirs with multiannual storage capacity under highly variable operating conditions in Canada. The PHA is a classical stochastic optimization method designed to solve general multistage stochastic programs defined on a scenario tree. This method works by applying an augmented Lagrangian relaxation to the non-anticipativity constraints (NACs) of the stochastic program. At each iteration of the PHA, a sequence of subproblems must be solved. Each subproblem corresponds to a deterministic version of the original stochastic program for a particular scenario in the scenario tree. Linear and quadratic terms must be included in the subproblems' objective functions to penalize any violation of the NACs. An important limitation of the PHA is that the number of subproblems to be solved and the number of penalty terms increase exponentially with the branching level in the tree. This phenomenon can make the application of the PHA particularly difficult when the scenario tree covers several tens of time periods. Another important limitation of the PHA is that the difficulty of satisfying the NACs generally increases as the variability of the scenarios increases. 
Consequently, applying the PHA becomes particularly challenging in hydroclimatic regions characterized by a high level of seasonal and interannual variability. These two types of limitations can slow down the algorithm's convergence rate and increase the running time per iteration. In this study, we apply the PHA to Hydro-Quebec's power system over a 92-week planning horizon. Hydrologic uncertainty is represented by a scenario tree containing 6 branching stages and 1,635 nodes. The PHA is especially well-suited for this particular application given that the company already possesses a deterministic optimization model to solve the MTPP. The second article presents a new approach that enhances the performance of the PHA for solving general multistage stochastic programs. The proposed method works by applying a multiscenario decomposition scheme to the stochastic program. Our heuristic method aims at constructing an optimal partition of the scenario set by minimizing the number of NACs on which an augmented Lagrangian relaxation must be applied. Each subproblem is a stochastic program defined on a group of scenarios. NACs linking scenarios that share a common group are represented implicitly in the subproblems by using a group-node index system instead of the traditional scenario-time index system. Only the NACs that link different scenario groups are represented explicitly and relaxed. The proposed method is evaluated numerically on a hydroelectric reservoir management problem in Quebec. The results of this experiment show that our method has several advantages. Firstly, it reduces the number of penalty terms included in the objective function and the amount of duplicated constraints and variables, which in turn reduces the running time per iteration of the PHA. 
Secondly, it increases the algorithm's convergence rate by reducing the variability of intermediary solutions at duplicated tree nodes. Thirdly, our approach reduces the amount of random-access memory (RAM) required for storing the Lagrange multipliers associated with the relaxed NACs. The third article presents an extension of the L-Shaped method designed specifically for managing hydroelectric reservoir systems with a high storage capacity. The method proposed in this paper makes it possible to consider a higher branching level than conventional decomposition methods allow. To achieve this, we assume that the stochastic process driving the random parameters has a memory loss at time period t = tau. Because of this assumption, the scenario tree possesses a special symmetrical structure at the second stage (t > tau). We exploit this feature using a two-stage Benders decomposition method. Each decomposition stage covers several consecutive time periods. The proposed method works by constructing a convex and piecewise linear recourse function that represents the expected cost of the second stage in the master problem. The subproblem and the master problem are stochastic programs defined on scenario subtrees and can be solved using a conventional decomposition method or directly. We test the proposed method on a hydroelectric power system in Quebec over a 104-week planning horizon. (Abstract shortened by UMI.).
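The mechanics of the PHA described in the first article can be illustrated on a deliberately tiny problem: minimize the expectation of (x - d_s)^2 over equiprobable scenarios d_s, where each penalized scenario subproblem has a closed-form minimizer. This is a toy sketch of the algorithm's update loop under assumed data, not the thesis's hydroelectric model:

```python
import numpy as np

def progressive_hedging(d, rho=1.0, n_iter=100):
    """PHA for min_x E[(x - d_s)^2] over equiprobable scenarios d_s.
    The optimum is mean(d); the NAC x_s = xbar is enforced by an
    augmented Lagrangian with multipliers w_s and penalty rho."""
    d = np.asarray(d, float)
    w = np.zeros_like(d)          # Lagrange multipliers for the NACs
    xbar = d.mean()               # implementable (non-anticipative) solution
    for _ in range(n_iter):
        # scenario subproblem: min (x - d_s)^2 + w_s x + (rho/2)(x - xbar)^2
        # first-order condition gives the closed-form minimizer:
        x = (2 * d - w + rho * xbar) / (2 + rho)
        xbar = x.mean()           # aggregate scenario solutions
        w += rho * (x - xbar)     # multiplier update on NAC violations
    return xbar
```

The exponential growth discussed above shows up here as the length of `d`: one subproblem per scenario, plus one multiplier and one penalty term per relaxed NAC.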
Image-based spectral distortion correction for photon-counting x-ray detectors
Ding, Huanjun; Molloi, Sabee
2012-01-01
Purpose: To investigate the feasibility of using an image-based method to correct for distortions induced by various artifacts in the x-ray spectrum recorded with photon-counting detectors for their application in breast computed tomography (CT). Methods: The polyenergetic incident spectrum was simulated with the tungsten anode spectral model using the interpolating polynomials (TASMIP) code and carefully calibrated to match the x-ray tube in this study. Experiments were performed on a Cadmium-Zinc-Telluride (CZT) photon-counting detector with five energy thresholds. Energy bins were adjusted to evenly distribute the recorded counts above the noise floor. BR12 phantoms of various thicknesses were used for calibration. A nonlinear function was selected to fit the count correlation between the simulated and the measured spectra in the calibration process. To evaluate the proposed spectral distortion correction method, an empirical fitting derived from the calibration process was applied on the raw images recorded for polymethyl methacrylate (PMMA) phantoms of 8.7, 48.8, and 100.0 mm. Both the corrected counts and the effective attenuation coefficient were compared to the simulated values for each of the five energy bins. The feasibility of applying the proposed method to quantitative material decomposition was tested using a dual-energy imaging technique with a three-material phantom that consisted of water, lipid, and protein. The performance of the spectral distortion correction method was quantified using the relative root-mean-square (RMS) error with respect to the expected values from simulations or areal analysis of the decomposition phantom. Results: The implementation of the proposed method reduced the relative RMS error of the output counts in the five energy bins with respect to the simulated incident counts from 23.0%, 33.0%, and 54.0% to 1.2%, 1.8%, and 7.7% for 8.7, 48.8, and 100.0 mm PMMA phantoms, respectively. 
The accuracy of the estimated effective attenuation coefficient of PMMA was also improved with the proposed spectral distortion correction. Finally, the relative RMS error of water, lipid, and protein decompositions in dual-energy imaging was significantly reduced from 53.4% to 6.8% after the correction was applied. Conclusions: The study demonstrated that dramatic distortions in the raw image recorded by a photon-counting detector can be expected, which presents great challenges for applying quantitative material decomposition methods in spectral CT. The proposed semi-empirical correction method can effectively reduce the errors caused by various artifacts, including pulse pileup and charge sharing effects. Furthermore, rather than requiring detector-specific simulation packages, the method needs only a relatively simple calibration process and knowledge of the incident spectrum. Therefore, it may be used as a generalized procedure for the spectral distortion correction of different photon-counting detectors in clinical breast CT systems. PMID:22482608
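The calibration step described above, fitting a nonlinear function between measured and simulated counts in each energy bin over a set of phantom thicknesses, might be sketched as follows. The paper does not specify the functional form; the per-bin log-log polynomial used here is a hypothetical stand-in:

```python
import numpy as np

def fit_bin_correction(measured, expected, deg=2):
    """Per-energy-bin polynomial mapping log(measured) -> log(expected),
    fit over calibration phantom thicknesses.
    measured, expected: arrays of shape (n_thicknesses, n_bins)."""
    return [np.polyfit(np.log(m), np.log(e), deg)
            for m, e in zip(measured.T, expected.T)]

def apply_correction(coeffs, raw):
    """Apply the fitted per-bin correction to one raw measurement
    (a length-n_bins vector of counts)."""
    return np.array([np.exp(np.polyval(c, np.log(r)))
                     for c, r in zip(coeffs, raw)])
```

The corrected counts can then feed the dual-energy material decomposition, as in the water/lipid/protein experiment described above.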
Low-Rank Correction Methods for Algebraic Domain Decomposition Preconditioners
Li, Ruipeng; Saad, Yousef
2017-08-01
This study presents a parallel preconditioning method for distributed sparse linear systems, based on an approximate inverse of the original matrix, that adopts a general framework of distributed sparse matrices and exploits domain decomposition (DD) and low-rank corrections. The DD approach decouples the matrix and, once inverted, a low-rank approximation is applied by exploiting the Sherman--Morrison--Woodbury formula, which yields two variants of the preconditioning methods. The low-rank expansion is computed by the Lanczos procedure with reorthogonalizations. Numerical experiments indicate that, when combined with Krylov subspace accelerators, this preconditioner can be efficient and robust for solving symmetric sparse linear systems. Comparisons with pARMS, a DD-based parallel incomplete LU (ILU) preconditioning method, are presented for solving Poisson's equation and linear elasticity problems.
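The low-rank correction at the heart of this preconditioner rests on the Sherman-Morrison-Woodbury identity, which reduces a solve with B + UV^T to solves with B plus one small k x k system. A dense NumPy sketch of the identity itself (the actual method applies it inside a distributed DD framework with Lanczos-computed factors):

```python
import numpy as np

def smw_solve(solve_B, U, V, b):
    """Solve (B + U V^T) x = b using only solves with B and a small
    k x k 'capacitance' system (Sherman-Morrison-Woodbury):
    (B + UV^T)^-1 = B^-1 - B^-1 U (I + V^T B^-1 U)^-1 V^T B^-1."""
    Binv_b = solve_B(b)
    Binv_U = solve_B(U)
    k = U.shape[1]
    S = np.eye(k) + V.T @ Binv_U          # k x k capacitance matrix
    return Binv_b - Binv_U @ np.linalg.solve(S, V.T @ Binv_b)
```

In the preconditioner, `solve_B` would be the cheap decoupled DD solve and UV^T the low-rank coupling correction.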
Egri-Nagy, Attila; Nehaniv, Chrystopher L
2008-01-01
Beyond complexity measures, sometimes it is worthwhile in addition to investigate how complexity changes structurally, especially in artificial systems where we have complete knowledge about the evolutionary process. Hierarchical decomposition is a useful way of assessing structural complexity changes of organisms modeled as automata, and we show how recently developed computational tools can be used for this purpose, by computing holonomy decompositions and holonomy complexity. To gain insight into the evolution of complexity, we investigate the smoothness of the landscape structure of complexity under minimal transitions. As a proof of concept, we illustrate how the hierarchical complexity analysis reveals symmetries and irreversible structure in biological networks by applying the methods to the lac operon mechanism in the genetic regulatory network of Escherichia coli.
A Wavelet Polarization Decomposition Net Model for Polarimetric SAR Image Classification
NASA Astrophysics Data System (ADS)
He, Chu; Ou, Dan; Yang, Teng; Wu, Kun; Liao, Mingsheng; Chen, Erxue
2014-11-01
In this paper, a deep model based on wavelet texture is proposed for Polarimetric Synthetic Aperture Radar (PolSAR) image classification, inspired by recent successful deep learning methods. The model is designed to learn powerful and informative representations that improve generalization in complex scene classification tasks. Given the influence of speckle noise in PolSAR images, wavelet polarization decomposition is applied first to obtain basic and discriminative texture features, which are then embedded into a Deep Neural Network (DNN) to compose multi-layer higher representations. We demonstrate that the model produces a powerful representation that captures otherwise untraceable information in PolSAR images and achieves promising results in comparison with traditional SAR image classification methods on the SAR image dataset.
Decompositions of injection patterns for nodal flow allocation in renewable electricity networks
NASA Astrophysics Data System (ADS)
Schäfer, Mirko; Tranberg, Bo; Hempel, Sabrina; Schramm, Stefan; Greiner, Martin
2017-08-01
The large-scale integration of fluctuating renewable power generation represents a challenge to the technical and economical design of a sustainable future electricity system. In this context, the increasing significance of long-range power transmission calls for innovative methods to understand the emerging complex flow patterns and to integrate price signals about the respective infrastructure needs into the energy market design. We introduce a decomposition method of injection patterns. Contrary to standard flow tracing approaches, it provides nodal allocations of link flows and costs in electricity networks by decomposing the network injection pattern into market-inspired elementary import/export building blocks. We apply the new approach to a simplified data-driven model of a European electricity grid with a high share of renewable wind and solar power generation.
NASA Astrophysics Data System (ADS)
Cochard, Étienne; Prada, Claire; Aubry, Jean-François; Fink, Mathias
2010-03-01
Thermal ablation induced by high intensity focused ultrasound has produced promising clinical results to treat hepatocarcinoma and other liver tumors. However skin burns have been reported due to the high absorption of ultrasonic energy by the ribs. This study proposes a method to produce an acoustic field focusing on a chosen target while sparing the ribs, using the decomposition of the time-reversal operator (DORT method). The idea is to apply an excitation weight vector to the transducers array which is orthogonal to the subspace of emissions focusing on the ribs. The ratio of the energies absorbed at the focal point and on the ribs has been enhanced up to 100-fold as demonstrated by the measured specific absorption rates.
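The projection step of the DORT-based approach can be sketched with plain linear algebra: the dominant right singular vectors of the array's transfer matrix correspond to emissions focusing on the strongest scatterers (the ribs), and the desired excitation is projected onto their orthogonal complement. A noise-free toy example with an assumed transfer matrix:

```python
import numpy as np

def spare_obstacles(e_focus, K, n_bright=1):
    """Project an array excitation vector orthogonal to the subspace
    spanned by the n_bright dominant singular vectors of the transfer
    matrix K; these dominate the time-reversal operator K^H K and
    correspond to emissions focusing on the strongest scatterers."""
    U, s, Vh = np.linalg.svd(K)
    V = Vh.conj().T[:, :n_bright]         # transmit patterns to avoid
    e = e_focus - V @ (V.conj().T @ e_focus)
    return e / np.linalg.norm(e)
```

Driving the array with the projected vector deposits (ideally) no energy on the bright scatterers while retaining most of the focusing at the target.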
Parto Dezfouli, Mohammad Ali; Dezfouli, Mohsen Parto; Rad, Hamidreza Saligheh
2014-01-01
Proton magnetic resonance spectroscopy ((1)H-MRS) is a non-invasive diagnostic tool for measuring biochemical changes in the human body. Acquired (1)H-MRS signals may be corrupted by a wideband baseline signal generated by macromolecules. Recently, several methods have been developed for the correction of such baseline signals; however, most of them are unable to estimate the baseline in complex overlapped signals. In this study, a novel automatic baseline correction method is proposed for (1)H-MRS spectra based on ensemble empirical mode decomposition (EEMD). The method was applied to both simulated data and in-vivo (1)H-MRS signals of the human brain. The results justify the efficiency of the proposed method in removing the baseline from (1)H-MRS signals.
Improving Distributed Diagnosis Through Structural Model Decomposition
NASA Technical Reports Server (NTRS)
Bregon, Anibal; Daigle, Matthew John; Roychoudhury, Indranil; Biswas, Gautam; Koutsoukos, Xenofon; Pulido, Belarmino
2011-01-01
Complex engineering systems require efficient fault diagnosis methodologies, but centralized approaches do not scale well, and this motivates the development of distributed solutions. This work presents an event-based approach for distributed diagnosis of abrupt parametric faults in continuous systems, by using the structural model decomposition capabilities provided by Possible Conflicts. We develop a distributed diagnosis algorithm that uses residuals computed by extending Possible Conflicts to build local event-based diagnosers based on global diagnosability analysis. The proposed approach is applied to a multitank system, and results demonstrate an improvement in the design of local diagnosers. Since local diagnosers use only a subset of the residuals, and use subsystem models to compute residuals (instead of the global system model), the local diagnosers are more efficient than previously developed distributed approaches.
Sanz, J M; Saiz, J M; González, F; Moreno, F
2011-07-20
In this research, the polar decomposition (PD) method is applied to experimental Mueller matrices (MMs) measured on two-dimensional microstructured surfaces. Polarization information is expressed through a set of parameters of easier physical interpretation. It is shown that evaluating the first derivative of the retardation parameter, δ, a clear indication of the presence of defects either built on or dug in the scattering flat surface (a silicon wafer in our case) can be obtained. Although the rule of thumb thus obtained is established through PD, it can be easily implemented on conventional surface polarimetry. These results constitute an example of the capabilities of the PD approach to MM analysis, and show a direct application in surface characterization. © 2011 Optical Society of America
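The matrix polar decomposition underlying the PD method factors a matrix as M = UP with U orthogonal (rotation/retarder-like) and P symmetric positive semidefinite (diattenuator-like). The full Lu-Chipman decomposition of a Mueller matrix into depolarizer, retarder, and diattenuator is more involved, but the basic factorization can be computed from an SVD:

```python
import numpy as np

def polar_decompose(M):
    """Polar decomposition M = U P via the SVD: if M = Us S Vh,
    then U = Us Vh is orthogonal and P = Vh^T S Vh is symmetric PSD."""
    Us, s, Vh = np.linalg.svd(M)
    U = Us @ Vh
    P = Vh.T @ np.diag(s) @ Vh
    return U, P
```

Retardation-type parameters such as the δ studied above are then extracted from the orthogonal (retarder) factor.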
Temporal structure of neuronal population oscillations with empirical model decomposition
NASA Astrophysics Data System (ADS)
Li, Xiaoli
2006-08-01
Frequency analysis of neuronal oscillations is very important for understanding neural information processing and the mechanisms of disorders in the brain. This Letter addresses a new method to analyze neuronal population oscillations with empirical mode decomposition (EMD). Following EMD of a neuronal oscillation, a series of intrinsic mode functions (IMFs) is obtained; the Hilbert transform of the IMFs can then be used to extract the instantaneous time-frequency structure of the oscillation. The method is applied to analyze neuronal oscillations in the hippocampus of epileptic rats in vivo. The results show that the neuronal oscillations exhibit different time-frequency structure during the pre-ictal, seizure-onset and ictal periods of the epileptic EEG at different frequency bands. This new method provides a helpful view of the temporal structure of neural oscillations.
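The Hilbert step described above (extracting instantaneous frequency from an IMF) can be sketched as follows; the EMD sifting itself is omitted and an IMF is assumed to be given:

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(imf, fs):
    """Instantaneous frequency (Hz) of one IMF via the Hilbert transform:
    form the analytic signal, unwrap its phase, and differentiate."""
    analytic = hilbert(imf)
    phase = np.unwrap(np.angle(analytic))
    return np.diff(phase) * fs / (2 * np.pi)
```

Applied to each IMF in turn, this yields the time-frequency structure of the oscillation band by band.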
Human decomposition and the reliability of a 'Universal' model for post mortem interval estimations.
Cockle, Diane L; Bell, Lynne S
2015-08-01
Human decomposition is a complex biological process driven by an array of variables which are not clearly understood. The medico-legal community has long been searching for a reliable method to establish the post-mortem interval (PMI) for those whose deaths have either been hidden or gone unnoticed. To date, attempts to develop a PMI estimation method based on the state of the body either at the scene or at autopsy have been unsuccessful. One recent study has proposed that two simple formulae, based on the level of decomposition, humidity and temperature, could be used to accurately calculate the PMI for bodies outside, on or under the surface worldwide. This study attempted to validate 'Formula I' [1] (for bodies on the surface) using 42 Canadian cases with known PMIs. The results indicated that, for bodies exposed to warm temperatures, Formula I consistently overestimated the known PMI by a large and inconsistent margin. For bodies exposed to cold and freezing temperatures (less than 4°C), the PMI was dramatically underestimated. The ability of 'Formula II' to estimate the PMI for buried bodies was also examined using a set of 22 known Canadian burial cases. As the cases used in this study are retrospective, some of the data needed for Formula II were not available. The value of 4.6 used in Formula II to represent the standard factor by which burial decelerates decomposition was also examined. The average time taken to achieve each stage of decomposition both on and under the surface was compared for the 118 known cases. It was found that the rate of decomposition was not consistent throughout all stages of decomposition. The rates of autolysis above and below the ground were equivalent, with the buried cases staying in a state of putrefaction for a prolonged period of time. 
It is suggested that differences in temperature extremes and humidity levels between geographic regions may make it impractical to apply formulas developed in one region to any other region. These results also suggest that there are other variables, apart from temperature and humidity that may impact the rate of human decomposition. These variables, or complex of variables, are considered regionally specific. Neither of the Universal Formulae performed well, and our results do not support the proposition of Universality for PMI estimation. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
USDA-ARS?s Scientific Manuscript database
Soil microorganisms are considered the most effective decomposers of applied crop residues, but it is poorly understood which communities are primarily responsible for decomposition under different conditions. A pot experiment was conducted in a greenhouse to follow the cycling of C and N derived fr...
Yongqiang Liu
2003-01-01
The relations between monthly-seasonal soil moisture and precipitation variability are investigated by identifying the coupled patterns of the two hydrological fields using singular value decomposition (SVD). SVD is a technique of principal component analysis similar to empirical orthogonal functions (EOF). However, it is applied to two variables simultaneously and is...
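The SVD technique described here (often called maximum covariance analysis) applies an SVD to the cross-covariance matrix of two anomaly fields to extract coupled patterns. A minimal sketch with synthetic fields standing in for soil moisture and precipitation; the field sizes and shared signal are illustrative assumptions:

```python
import numpy as np

def mca(X, Y, k=2):
    """Maximum covariance analysis (SVD analysis) of two anomaly fields
    X (time x p) and Y (time x q): coupled spatial patterns are the
    singular vectors of the cross-covariance matrix, and expansion
    coefficients are the projections of the fields onto them."""
    Xa = X - X.mean(0)
    Ya = Y - Y.mean(0)
    C = Xa.T @ Ya / (len(Xa) - 1)
    U, s, Vh = np.linalg.svd(C, full_matrices=False)
    scf = s**2 / np.sum(s**2)            # squared covariance fraction
    pcs_x = Xa @ U[:, :k]
    pcs_y = Ya @ Vh.T[:, :k]
    return U[:, :k], Vh.T[:, :k], scf[:k], pcs_x, pcs_y
```

A strongly coupled mode shows up as a high squared covariance fraction and highly correlated expansion coefficients.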
Application of time series analysis for assessing reservoir trophic status
Paris Honglay Chen; Ka-Chu Leung
2000-01-01
This study is to develop and apply a practical procedure for the time series analysis of reservoir eutrophication conditions. A multiplicative decomposition method is used to determine the trophic variations including seasonal, circular, long-term and irregular changes. The results indicate that (1) there is a long high peak for seven months from April to October...
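A classical multiplicative decomposition of the kind described, separating a monthly series into trend, seasonal, and irregular components, can be sketched with a centered moving average. The period handling and normalization below are standard textbook choices, not details taken from the study:

```python
import numpy as np

def multiplicative_decompose(y, period):
    """Classical multiplicative decomposition y = trend * seasonal * irregular."""
    y = np.asarray(y, float)
    n = len(y)
    # centered moving average for the trend (even period -> 2 x m MA)
    if period % 2 == 0:
        w = np.r_[0.5, np.ones(period - 1), 0.5] / period
    else:
        w = np.ones(period) / period
    trend = np.convolve(y, w, mode='same')
    half = len(w) // 2
    trend[:half] = np.nan                 # edges have no full window
    trend[-half:] = np.nan
    ratio = y / trend
    # seasonal index: average detrended ratio per season, normalized to mean 1
    seasonal_idx = np.array([np.nanmean(ratio[i::period]) for i in range(period)])
    seasonal_idx /= seasonal_idx.mean()
    seasonal = np.tile(seasonal_idx, n // period + 1)[:n]
    irregular = y / (trend * seasonal)
    return trend, seasonal, irregular
```

For a reservoir trophic-state series, the seasonal index would capture the recurring within-year peak (e.g. the April-October high described above), while the trend tracks long-term eutrophication.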
Influence of endrin on soil microbial populations and their activity.
W.B. Bollen; C.M. Tu
1971-01-01
Endrin applied to soil at rates of more than three times the maximum that might be expected from application of endrin-treated tree seed exerted no appreciable effect on numbers of soil microbes or on ammonification, nitrification, or sulfur oxidation. The decomposition of soil organic matter, as indicated by the production of CO2, was increased...
USDA-ARS?s Scientific Manuscript database
The chemical composition of forages consumed by ruminants affects forage intake, digestion, meat and milk production, as well as manure chemistry and manure impacts on the environment. The digestion of forages by ruminants and the decomposition of organic materials applied to soils are governed by s...
NASA Astrophysics Data System (ADS)
Shaw, E. A.; Denef, K.; Milano de Tomasel, C.; Cotrufo, M. F.; Wall, D. H.
2015-09-01
Root litter decomposition is a major component of carbon (C) cycling in grasslands, where it provides energy and nutrients for soil microbes and fauna. This is especially important in grasslands where fire is a common management practice and removes aboveground litter accumulation. In this study, we investigated whether fire affects root decomposition and C flow through the belowground food web. In a greenhouse experiment, we applied 13C-enriched big bluestem (Andropogon gerardii) root litter to intact tallgrass prairie soil cores collected from annually burned (AB) and infrequently burned (IB) treatments at the Konza Prairie Long Term Ecological Research (LTER) site. Incorporation of 13C into microbial phospholipid fatty acids and nematode trophic groups was measured on six occasions during a 180-day decomposition study to determine how C was translocated through the soil food web. Results showed significantly different soil communities between treatments and higher microbial abundance for IB. Root decomposition occurred rapidly and was significantly greater for AB. Microbes and their nematode consumers immediately assimilated root litter C in both treatments. Root litter C was preferentially incorporated in a few groups of microbes and nematodes, but depended on burn treatment: fungi, Gram-negative bacteria, Gram-positive bacteria, and fungivore nematodes for AB and only omnivore nematodes for IB. The overall microbial pool of root litter-derived C significantly increased over time but was not significantly different between burn treatments. The nematode pool of root litter-derived C also significantly increased over time, and was significantly higher for the AB treatment at 35 and 90 days after litter addition. 
In conclusion, the C flow from root litter to microbes to nematodes is not only measurable, but significant, indicating that higher nematode trophic levels are critical components of C flow during root decomposition which, in turn, is significantly affected by fire management practices. Not only does fire affect the soil community and root decomposition for Konza Prairie LTER soils, but the lower microbial abundance, greater root turnover, and the increased incorporation of root litter C by microbes and nematodes for AB suggests that tallgrass prairie management through annual burning increases root litter-derived C flow through the soil food web.
Modeling diffusion control on organic matter decomposition in unsaturated soil pore space
NASA Astrophysics Data System (ADS)
Vogel, Laure; Pot, Valérie; Garnier, Patricia; Vieublé-Gonod, Laure; Nunan, Naoise; Raynaud, Xavier; Chenu, Claire
2014-05-01
Soil organic matter decomposition is affected by soil structure and water content, but field and laboratory studies of this issue reach highly variable outcomes. This variability could be explained by the discrepancy between the scale at which key processes occur and the measurement scale. We think that the physical and biological interactions driving carbon transformation dynamics can be best understood at the pore scale. Because of the spatial disconnection between carbon sources and decomposers, the latter rely on nutrient transport unless they can actively move. In the hydrostatic case, diffusion in the soil pore space is thus thought to regulate biological activity. In unsaturated conditions, the heterogeneous distribution of water modifies diffusion pathways and rates, and thus affects diffusion control on decomposition. Innovative imaging and modeling tools offer new means to address these effects. We have developed a new model based on the association of a 3D Lattice-Boltzmann Model with a nondimensional decomposition module. We designed scenarios to study the impact of physical properties (geometry, saturation, decomposer position) and biological properties on decomposition. The model was applied to porous media with various morphologies. We selected three cubic images, 100 voxels on a side, from µCT-scanned images of an undisturbed soil sample at 68 µm resolution. We used LBM to perform phase separation and obtained water phase distributions at equilibrium for different saturation indices. We then simulated the diffusion of a simple soluble substrate (glucose) and its consumption by bacteria. The same mass of glucose was added as a pulse at the beginning of all simulations. Bacteria were placed in a few voxels, either regularly spaced or concentrated close to or far from the glucose source. We modulated the physiological features of the decomposers in order to weight them against abiotic conditions. 
We identified several effects that create unequal substrate access conditions for decomposers and hence induce contrasting decomposition kinetics: the position of bacteria relative to the substrate diffusion pathways, the diffusion rate and hydraulic connectivity between bacteria and the substrate source, and local substrate enrichment due to restricted mass transfer. Physiological characteristics had a strong impact on decomposition only when glucose diffused easily, not when diffusion limitation prevailed. This suggests that carbon dynamics should not be considered to derive from decomposers' physiology alone but rather from the interactions of biological and physical processes at the microscale.
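The diffusion-limitation mechanism described above can be illustrated with a far simpler toy model than the authors' 3D Lattice-Boltzmann framework: a 1-D sketch (all parameters hypothetical) in which a glucose pulse diffuses along a pore and a decomposer placed some distance away consumes it with Monod kinetics.

```python
import numpy as np

n, dt, dx = 100, 0.01, 1.0
D = 5.0                          # diffusion coefficient (D*dt/dx**2 < 0.5)
vmax, km = 2.0, 0.5              # Monod uptake parameters
c = np.zeros(n)
c[10] = 100.0                    # glucose pulse near one end of the pore
bac = 30                         # decomposer position, some distance away
consumed = 0.0

for _ in range(20000):           # integrate to t = 200
    lap = np.roll(c, 1) + np.roll(c, -1) - 2 * c
    lap[0] = c[1] - c[0]         # no-flux walls at both ends
    lap[-1] = c[-2] - c[-1]
    c += D * dt / dx**2 * lap
    uptake = min(vmax * c[bac] / (km + c[bac]) * dt, c[bac])
    c[bac] -= uptake
    consumed += uptake
```

Moving `bac` farther from the pulse, or lowering `D` (as a crude stand-in for desaturation), delays and reduces the cumulative consumption, which is the qualitative effect the abstract reports.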
Nonstationary Dynamics Data Analysis with Wavelet-SVD Filtering
NASA Technical Reports Server (NTRS)
Brenner, Marty; Groutage, Dale; Bessette, Denis (Technical Monitor)
2001-01-01
Nonstationary time-frequency analysis is used for identification and classification of aeroelastic and aeroservoelastic dynamics. Time-frequency multiscale wavelet processing generates discrete energy density distributions. The distributions are processed using the singular value decomposition (SVD). Discrete density functions derived from the SVD generate moments that detect the principal features in the data. The SVD standard basis vectors are applied and then compared with a transformed-SVD, or TSVD, which reduces the number of features into more compact energy density concentrations. Finally, from the feature extraction, wavelet-based modal parameter estimation is applied.
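A minimal sketch of the SVD stage follows, with short-window FFTs standing in for the wavelet multiscale processing and a made-up two-tone signal; only the idea of decomposing a time-frequency energy matrix is taken from the abstract.

```python
import numpy as np

fs = 1000
t = np.arange(0, 2.0, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

# time-frequency energy matrix: frequency rows, time-frame columns
win = 128
frames = [x[i:i + win] * np.hanning(win) for i in range(0, len(x) - win, win // 2)]
E = np.abs(np.fft.rfft(np.array(frames), axis=1)).T ** 2

# SVD of the energy distribution; singular values act as compact features
U, s, Vt = np.linalg.svd(E, full_matrices=False)
energy_fraction = s[0] ** 2 / np.sum(s ** 2)
```

For a stationary signal the columns of `E` are nearly identical, so one singular value dominates (`energy_fraction` near 1); departures from this rank-one structure are what flag nonstationary dynamics for the moment-based feature detection the abstract describes.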
2006-04-21
C. M., and Prendergast, J. P., 2002, "Thermal Analysis of Hypersonic Inlet Flow with Exergy-Based Design Methods," International Journal of Applied...parametric study of the PS and its components is first presented in order to show the type of detailed information on internal system losses which an exergy ...Thermoeconomic Isolation Applied to the Optimal Synthesis/Design of an Advanced Fighter Aircraft System," International Journal of Thermodynamics, ICAT
Solving periodic block tridiagonal systems using the Sherman-Morrison-Woodbury formula
NASA Technical Reports Server (NTRS)
Yarrow, Maurice
1989-01-01
Many algorithms for solving the Navier-Stokes equations require the solution of periodic block tridiagonal systems of equations. By applying a splitting to the matrix representing this system of equations, it may first be reduced to a block tridiagonal matrix plus an outer product of two block vectors. The Sherman-Morrison-Woodbury formula is then applied. The algorithm thus reduces a periodic banded system to a non-periodic banded system with additional right-hand sides and is of higher efficiency than standard Thomas algorithm/LU decompositions.
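The reduction can be sketched compactly in code. The scalar used for the rank-one splitting is an arbitrary (but convenient) choice, and SciPy's banded solver stands in for the Thomas algorithm / banded LU step the abstract mentions.

```python
import numpy as np
from scipy.linalg import solve_banded

def solve_periodic_tridiag(sub, diag, sup, alpha, beta, d):
    """Solve A x = d, where A is tridiagonal (sub/diag/sup) plus periodic
    corner entries A[0, -1] = beta and A[-1, 0] = alpha, via the
    Sherman-Morrison-Woodbury rank-one update."""
    n = len(diag)
    gamma = -diag[0]                  # any convenient nonzero scalar
    u = np.zeros(n); u[0] = gamma; u[-1] = alpha
    v = np.zeros(n); v[0] = 1.0;  v[-1] = beta / gamma
    dmod = diag.astype(float).copy()  # A' = A - u v^T is purely tridiagonal
    dmod[0] -= gamma
    dmod[-1] -= alpha * beta / gamma
    ab = np.zeros((3, n))             # banded storage for solve_banded
    ab[0, 1:] = sup
    ab[1] = dmod
    ab[2, :-1] = sub
    y = solve_banded((1, 1), ab, d)   # two non-periodic banded solves ...
    q = solve_banded((1, 1), ab, u)
    return y - q * (v @ y) / (1.0 + v @ q)  # ... plus the SMW correction

# demo on a small diagonally dominant periodic system
rng = np.random.default_rng(2)
n = 8
diag = 4.0 + rng.random(n)
sub, sup = rng.random(n - 1), rng.random(n - 1)
alpha, beta, d = 0.7, 1.3, rng.random(n)
x = solve_periodic_tridiag(sub, diag, sup, alpha, beta, d)
```

The "additional right-hand sides" of the abstract appear here as the extra solve for `q`; both solves reuse the same banded factorization in a production implementation.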
Wavelet-bounded empirical mode decomposition for measured time series analysis
NASA Astrophysics Data System (ADS)
Moore, Keegan J.; Kurt, Mehmet; Eriten, Melih; McFarland, D. Michael; Bergman, Lawrence A.; Vakakis, Alexander F.
2018-01-01
Empirical mode decomposition (EMD) is a powerful technique for separating the transient responses of nonlinear and nonstationary systems into finite sets of nearly orthogonal components, called intrinsic mode functions (IMFs), which represent the dynamics on different characteristic time scales. However, a deficiency of EMD is the mixing of two or more components in a single IMF, which can drastically affect the physical meaning of the empirical decomposition results. In this paper, we present a new approach based on EMD, designated wavelet-bounded empirical mode decomposition (WBEMD), which is a closed-loop, optimization-based solution to the problem of mode mixing. The optimization routine relies on maximizing the isolation of an IMF around a characteristic frequency. This isolation is measured by fitting a bounding function around the IMF in the frequency domain and computing the area under this function; a large (small) area corresponds to a poorly (well) separated IMF. An optimization routine is developed based on this result, with the objective of minimizing the bounding-function area and with the masking-signal parameters serving as free parameters, such that a well-separated IMF is extracted. As examples of the application of WBEMD, we apply the proposed method first to a stationary, two-component signal and then to the numerically simulated response of a cantilever beam with an essentially nonlinear end attachment. We find that WBEMD vastly improves upon EMD and that the extracted sets of IMFs provide insight into the underlying physics of the response of each system.
Fusion of infrared and visible images based on BEMD and NSDFB
NASA Astrophysics Data System (ADS)
Zhu, Pan; Huang, Zhanhua; Lei, Hai
2016-07-01
This paper presents a new fusion method for visible-infrared images based on the adaptive multi-scale decomposition of bidimensional empirical mode decomposition (BEMD) and the flexible directional expansion of nonsubsampled directional filter banks (NSDFB). Compared with conventional multi-scale fusion methods, BEMD is non-parametric and completely data-driven, which makes it better suited to non-linear signal decomposition and fusion. NSDFB provides directional filtering at each decomposition level to capture more of the geometrical structure of the source images effectively. In our fusion framework, the entropies of the two source images are first calculated, and the residue of the image with the larger entropy is extracted to make it highly correlated with the other source image. Then, the residue and the other source image are decomposed into low-frequency sub-bands and a sequence of high-frequency directional sub-bands at different scales using BEMD and NSDFB. In this fusion scheme, two corresponding fusion rules are used for the low-frequency sub-bands and the high-frequency directional sub-bands, respectively. Finally, the fused image is obtained by applying the corresponding inverse transform. Experimental results indicate that the proposed fusion algorithm achieves state-of-the-art performance for visible-infrared image fusion in both objective assessment and subjective visual quality, even for source images acquired under different conditions. Furthermore, the fused results have high contrast, prominent target information, and rich detail, making them well suited to human visual perception and machine perception.
Jalaleddini, Kian; Tehrani, Ehsan Sobhani; Kearney, Robert E
2017-06-01
The purpose of this paper is to present a structural decomposition subspace (SDSS) method for decomposition of the joint torque into intrinsic, reflexive, and voluntary torques and for identification of joint dynamic stiffness. First, it formulates a novel state-space representation of joint dynamic stiffness, modeled by a parallel-cascade structure with a concise parameter set that provides a direct link between the state-space matrices and the parallel-cascade parameters. Second, it presents a subspace method for the identification of the new state-space model that involves two steps: 1) decomposition of the intrinsic and reflex pathways and 2) identification of an impulse response model of the intrinsic pathway and a Hammerstein model of the reflex pathway. Extensive simulation studies demonstrate that SDSS has significant performance advantages over other methods: it was more robust under high-noise conditions, converging where others failed, and it was more accurate, giving estimates with lower bias and random error. The method also worked well in practice, yielding high-quality estimates of intrinsic and reflex stiffness when applied to experimental data at three muscle activation levels. The simulation and experimental results demonstrate that SDSS accurately decomposes the intrinsic and reflex torques and provides accurate estimates of physiologically meaningful parameters. SDSS will be a valuable tool for studying joint stiffness under functionally important conditions, with clinical implications for the diagnosis, assessment, objective quantification, and monitoring of neuromuscular diseases that alter muscle tone.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wieder, William R.; Allison, Steven D.; Davidson, Eric A.
Microbes influence soil organic matter (SOM) decomposition and the long-term stabilization of carbon (C) in soils. We contend that by revising the representation of microbial processes and their interactions with the physicochemical soil environment, Earth system models (ESMs) may make more realistic global C cycle projections. Explicit representation of microbial processes presents considerable challenges due to the scale at which these processes occur. Thus, applying microbial theory in ESMs requires a framework to link micro-scale process-level understanding and measurements to the macro-scale models used to make decadal- to century-long projections. Here, we review the diversity, advantages, and pitfalls of simulating soil biogeochemical cycles using microbial-explicit modeling approaches. We present a roadmap for how to begin building, applying, and evaluating reliable microbial-explicit model formulations that can be applied in ESMs. Drawing from experience with traditional decomposition models, we suggest: (1) guidelines for common model parameters and output that can facilitate future model intercomparisons; (2) development of benchmarking and model-data integration frameworks that can be used to effectively guide, inform, and evaluate model parameterizations with data from well-curated repositories; and (3) the application of scaling methods to integrate microbial-explicit soil biogeochemistry modules within ESMs. With contributions across scientific disciplines, we feel this roadmap can advance our fundamental understanding of soil biogeochemical dynamics and more realistically project likely soil C responses to environmental change at global scales.
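A minimal two-pool, microbial-explicit formulation of the kind the roadmap discusses can be sketched as follows; the rate law (decomposition proportional to microbial biomass, with reverse Michaelis-Menten kinetics) is generic, and every parameter value is illustrative rather than taken from any particular ESM.

```python
dt, years = 0.01, 200
inputs = 1.0            # litter C input per year (arbitrary units)
vmax, km = 8.0, 50.0    # max decomposition rate, half-saturation constant
cue = 0.4               # microbial carbon use efficiency
kb = 0.2                # microbial turnover rate (1/yr)

C, B = 60.0, 1.0        # soil organic C and microbial biomass pools
for _ in range(int(years / dt)):
    decomp = vmax * B * C / (km + C)        # biomass-dependent decomposition
    C += dt * (inputs - decomp + kb * B)    # dead microbes recycle to SOC
    B += dt * (cue * decomp - kb * B)
```

The feedback of biomass on decomposition gives such models their characteristic (and sometimes problematic) behavior, e.g. damped oscillations after perturbation; here the pools spiral into the analytical steady state C* = B* = cue·inputs / (kb·(1 − cue)) = 10/3.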
Concurrent airline fleet allocation and aircraft design with profit modeling for multiple airlines
NASA Astrophysics Data System (ADS)
Govindaraju, Parithi
A "System of Systems" (SoS) approach is particularly beneficial in analyzing complex large scale systems comprised of numerous independent systems -- each capable of independent operations in their own right -- that when brought in conjunction offer capabilities and performance beyond the constituents of the individual systems. The variable resource allocation problem is a type of SoS problem, which includes the allocation of "yet-to-be-designed" systems in addition to existing resources and systems. The methodology presented here expands upon earlier work that demonstrated a decomposition approach that sought to simultaneously design a new aircraft and allocate this new aircraft along with existing aircraft in an effort to meet passenger demand at minimum fleet-level operating cost for a single airline. The result of this process describes important characteristics of the new aircraft. The ticket price model developed and implemented here enables analysis of the system using profit maximization studies instead of cost minimization. A multiobjective problem formulation has been implemented to determine characteristics of a new aircraft that maximizes the profit of multiple airlines, recognizing that aircraft manufacturers sell their aircraft to multiple customers and seldom design aircraft customized to a single airline's operations. The route network characteristics of two simple airlines serve as the example problem for the initial studies. The resulting problem formulation is a mixed-integer nonlinear programming problem, which is typically difficult to solve. A sequential decomposition strategy is applied as a solution methodology by segregating the allocation (integer programming) and aircraft design (non-linear programming) subspaces. After solving a simple problem considering two airlines, the decomposition approach is then applied to two larger airline route networks representing actual airline operations in the year 2005.
The decomposition strategy serves as a promising technique for future detailed analyses. Results from the profit maximization studies favor a smaller aircraft in terms of passenger capacity due to its higher yield generation capability on shorter routes while results from the cost minimization studies favor a larger aircraft due to its lower direct operating cost per seat mile.
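The alternating structure of the sequential decomposition can be illustrated with a deliberately tiny stand-in problem (this is not the dissertation's MINLP): the "design" variable is a single seat capacity, the allocation subproblem assigns an integer number of flights per route, and the demand figures and cost model are made up.

```python
import math

demand = [120, 300, 450]             # passengers per route (hypothetical)

def cost_per_flight(seats):
    return 10.0 + 0.08 * seats       # made-up operating-cost model

def allocate(seats):
    # integer subproblem: fewest flights that cover demand on each route
    return [math.ceil(d / seats) for d in demand]

def best_seats(flights):
    # "design" subproblem (1-D here): scan capacities for minimum total cost
    feasible = [s for s in range(50, 501)
                if all(f * s >= d for f, d in zip(flights, demand))]
    return min(feasible, key=lambda s: sum(f * cost_per_flight(s)
                                           for f in flights))

seats = 200
for _ in range(10):                  # alternate subspaces to a fixed point
    flights = allocate(seats)
    seats = best_seats(flights)
```

Segregating the integer and continuous subspaces this way sidesteps solving the coupled problem directly, at the cost of converging only to a fixed point of the alternation rather than a guaranteed global optimum, which mirrors the trade-off accepted in the dissertation's strategy.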
Koch, Ina; Nöthen, Joachim; Schleiff, Enrico
2017-01-01
Motivation: Arabidopsis thaliana is a well-established model system for the analysis of the basic physiological and metabolic pathways of plants. Nevertheless, the system is not yet fully understood, although many mechanisms are described and information for many processes exists. However, the combination and interpretation of the large amount of biological data remain a big challenge, not only because data sets for metabolic paths are still incomplete. They are also often inconsistent, because they come from different experiments of various scales regarding, for example, accuracy and/or significance. Here, theoretical modeling is powerful for formulating hypotheses about pathways and the dynamics of the metabolism, even if the biological data are incomplete. To develop reliable mathematical models, they have to be checked for consistency. This is still a challenging task because many verification techniques fail already for middle-sized models. Consequently, new methods, such as decomposition or reduction approaches, are being developed to circumvent this problem. Methods: We present a new semi-quantitative mathematical model of the metabolism of Arabidopsis thaliana. We used the Petri net formalism to express the complex reaction system in a mathematically unique manner. To verify the model for correctness and consistency, we applied concepts of network decomposition and network reduction such as transition invariants, common transition pairs, and invariant transition pairs. Results: We formulated the core metabolism of Arabidopsis thaliana based on recent knowledge from the literature, including the Calvin cycle, glycolysis and the citric acid cycle, the glyoxylate cycle, the urea cycle, sucrose synthesis, and starch metabolism. By applying network decomposition and reduction techniques at steady-state conditions, we suggest a straightforward mathematical modeling process.
We demonstrate that potential steady-state pathways exist, which provide the fixed carbon to nearly all parts of the network, especially to the citric acid cycle. There is a close cooperation of important metabolic pathways, e.g., the de novo synthesis of uridine-5-monophosphate, the γ-aminobutyric acid shunt, and the urea cycle. The presented approach extends the established methods for a feasible interpretation of biological network models, in particular of large and complex models.
Moison, Ralf M W; Rijnkels, Jolanda M; Podda, Elena; Righele, Francesca; Tomasello, Federica; Caffieri, Sergio; Beijersbergen van Henegouwen, Gerard M J
2003-04-01
Exposure of the nonsteroidal anti-inflammatory drug suprofen (SUP) to UV-radiation results in the formation of radicals, reactive oxygen species (ROS), photodecarboxylated products and photoadducts with biomacromolecules. Using an ex vivo pigskin explant model, we investigated whether topical coapplication of the water-soluble antioxidants vitamin C (L-ascorbic acid, ASC), N-acetyl-L-cysteine (NAC) or L-cysteine ethylester (CYSET) with SUP reduced ultraviolet A (UVA)-induced decomposition of SUP. UVA-induced changes in antioxidant bioavailability in the stratum corneum and epidermis were also studied. Epidermal bioavailability of SUP in sham-irradiated pigskin increased 2.2- to 4.1-fold after the lowest antioxidant doses (P < 0.05). As compared with no applied antioxidant, increasing doses of all tested antioxidants resulted in increased levels of SUP and decreased levels of photoproducts (P < 0.05). A maximal protection against SUP photodegradation of 70% was found after an ASC dose of 1 micromol/cm2; these values were 60% for a NAC dose of 10 micromol/cm2 and 50% for a CYSET dose of 5 micromol/cm2. Skin antioxidant levels increased with increasing applied dose (P < 0.05); the bioavailability of CYSET was approximately three-fold lower than that of ASC and NAC. UVA exposure resulted in 30-50% consumption of the topically applied ASC or NAC in the stratum corneum, whereas CYSET was not consumed. In conclusion, the topically applied water-soluble antioxidants ASC, NAC and CYSET protect against UVA-induced decomposition of SUP by scavenging radicals and ROS. Coapplication of these antioxidants may therefore be an effective way to reduce or prevent the phototoxic effects of SUP in vivo.
Flash nano-precipitation of polymer blends: a role for fluid flow?
NASA Astrophysics Data System (ADS)
Grundy, Lorena; Mason, Lachlan; Chergui, Jalel; Juric, Damir; Craster, Richard V.; Lee, Victoria; Prudhomme, Robert; Priestley, Rodney; Matar, Omar K.
2017-11-01
Porous structures can be formed by the controlled precipitation of polymer blends; ranging from porous matrices, with applications in membrane filtration, to porous nano-particles, with applications in catalysis, targeted drug delivery and emulsion stabilisation. Under a diffusive exchange of solvent for non-solvent, prevailing conditions favour the decomposition of polymer blends into multiple phases. Interestingly, dynamic structures can be 'trapped' via vitrification prior to thermodynamic equilibrium. A promising mechanism for large-scale polymer processing is flash nano-precipitation (FNP). FNP particle formation has recently been modelled using spinodal decomposition theory, however the influence of fluid flow on structure formation is yet to be clarified. In this study, we couple a Navier-Stokes equation to a Cahn-Hilliard model of spinodal decomposition. The framework is implemented using Code BLUE, a massively scalable fluid dynamics solver, and applied to flows within confined impinging jet mixers. The present method is valid for a wide range of mixing timescales spanning FNP and conventional immersion precipitation processes. Results aid in the fabrication of nano-scale polymer particles with tuneable internal porosities. EPSRC, UK, MEMPHIS program Grant (EP/K003976/1), RAEng Research Chair (OKM), PETRONAS.
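The spinodal-decomposition ingredient of the model can be sketched with a bare-bones 2-D Cahn-Hilliard simulation (diffusion only; the study's novelty is coupling this to Navier-Stokes flow, which is omitted here, and all parameters are illustrative).

```python
import numpy as np

n, dt, dx = 64, 0.005, 1.0
kappa, M = 1.0, 1.0                       # gradient-energy coefficient, mobility
rng = np.random.default_rng(1)
phi = 0.1 * rng.standard_normal((n, n))   # near-critical blend composition
mean0 = phi.mean()                        # conserved by the dynamics

def lap(f):
    """5-point periodic Laplacian."""
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f) / dx**2

for _ in range(4000):
    mu = phi**3 - phi - kappa * lap(phi)  # chemical potential of phi^4 free energy
    phi += dt * M * lap(mu)               # conserved Cahn-Hilliard dynamics
```

Starting from small random fluctuations, the field separates into domains near phi = ±1 that slowly coarsen; 'trapping' a structure by vitrification corresponds to freezing this evolution at a chosen time, and the flow coupling in the study advects `phi` during the same process.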
NASA Astrophysics Data System (ADS)
Li, Ning; Yang, Jianguo; Zhou, Rui; Liang, Caiping
2016-04-01
Knock is one of the major constraints to improve the performance and thermal efficiency of spark ignition (SI) engines. It can also result in severe permanent engine damage under certain operating conditions. Based on the ensemble empirical mode decomposition (EEMD), this paper proposes a new approach to determine the knock characteristics in SI engines. By adding a uniformly distributed and finite white Gaussian noise, the EEMD can preserve signal continuity in different scales and therefore alleviates the mode-mixing problem occurring in the classic empirical mode decomposition (EMD). The feasibility of applying the EEMD to detect the knock signatures of a test SI engine via the pressure signal measured from the combustion chamber and the vibration signal measured from the cylinder head is investigated. Experimental results show that the EEMD-based method is able to detect the knock signatures from both the pressure signal and the vibration signal, even in the initial stage of knock. Finally, by comparing the application results with those obtained by short-time Fourier transform (STFT), Wigner-Ville distribution (WVD) and discrete wavelet transform (DWT), the superiority of the EEMD method in determining knock characteristics is demonstrated.
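The sifting step that EEMD ensembles over noise realizations can be sketched as follows. This is a deliberately bare-bones, fixed-iteration sift applied to a clean synthetic two-tone signal; a real EEMD implementation adds white noise to each ensemble member, extracts all IMFs with proper stopping criteria and end-point handling, and averages corresponding IMFs across the ensemble.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def first_imf(x, t, n_sifts=8):
    """Crude first-IMF extraction: repeatedly subtract the mean of the
    cubic-spline envelopes through the local maxima and minima."""
    h = x.copy()
    for _ in range(n_sifts):
        mx = argrelextrema(h, np.greater)[0]
        mn = argrelextrema(h, np.less)[0]
        if len(mx) < 4 or len(mn) < 4:
            break
        mean_env = (CubicSpline(t[mx], h[mx])(t) +
                    CubicSpline(t[mn], h[mn])(t)) / 2
        h = h - mean_env
    return h

fs = 1000
t = np.arange(0, 1.0, 1 / fs)
hi = np.sin(2 * np.pi * 60 * t)      # fast component (knock-like oscillation)
lo = np.sin(2 * np.pi * 8 * t)       # slow component
imf1 = first_imf(hi + lo, t)         # should track the 60 Hz component
```

When two components are close in frequency or intermittent, this plain sift mixes them into one IMF; EEMD's added noise populates the extrema field uniformly so that each component lands consistently in its own scale before the ensemble average cancels the noise.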
Kreutzweiser, David P; Good, Kevin P; Chartrand, Derek T; Scarr, Taylor A; Thompson, Dean G
2008-01-01
The systemic insecticide imidacloprid may be applied to deciduous trees for control of the Asian longhorned beetle, an invasive wood-boring insect. Senescent leaves falling from systemically treated trees contain imidacloprid concentrations that could pose a risk to natural decomposer organisms. We examined the effects of foliar imidacloprid concentrations on decomposer organisms by adding leaves from imidacloprid-treated sugar maple trees to aquatic and terrestrial microcosms under controlled laboratory conditions. Imidacloprid in maple leaves at realistic field concentrations (3-11 mg kg⁻¹) did not affect survival of aquatic leaf-shredding insects or litter-dwelling earthworms. However, adverse sublethal effects at these concentrations were detected. Feeding rates by aquatic insects and earthworms were reduced, leaf decomposition (mass loss) was decreased, measurable weight losses occurred among earthworms, and aquatic and terrestrial microbial decomposition activity was significantly inhibited. Results of this study suggest that sugar maple trees systemically treated with imidacloprid to control Asian longhorned beetles may yield senescent leaves with residue levels sufficient to reduce natural decomposition processes in aquatic and terrestrial environments through adverse effects on non-target decomposer organisms.
Graphical Methods for Quantifying Macromolecules through Bright Field Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Hang; DeFilippis, Rosa Anna; Tlsty, Thea D.
Bright field imaging of biological samples stained with antibodies and/or special stains provides a rapid protocol for visualizing various macromolecules. However, this method of sample staining and imaging is rarely employed for direct quantitative analysis due to variations in sample fixation, ambiguities introduced by color composition, and the limited dynamic range of imaging instruments. We demonstrate that, through the decomposition of color signals, staining can be scored on a cell-by-cell basis. We have applied our method to fibroblasts grown from histologically normal breast tissue biopsies obtained from two distinct populations. Initially, nuclear regions are segmented through conversion of color images into gray scale and detection of dark elliptic features. Subsequently, the strength of staining is quantified by a color decomposition model that is optimized by a graph cut algorithm. In rare cases where the nuclear signal is significantly altered as a result of sample preparation, nuclear segmentation can be validated and corrected. Finally, segmented stained patterns are associated with each nuclear region following region-based tessellation. Compared to classical non-negative matrix factorization, the proposed method (i) improves color decomposition, (ii) has better noise immunity, (iii) is more invariant to initial conditions, and (iv) has superior computing performance.
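The baseline the authors compare against, non-negative matrix factorization, can be sketched in a few lines on a synthetic rank-2 "stain mixing" example using the classic Lee-Seung multiplicative updates; the basis colors and per-pixel mixing weights below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical mixing: each pixel's RGB value is a non-negative
# combination of two "stain" basis colors
W_true = np.array([[0.7, 0.1],
                   [0.2, 0.8],
                   [0.1, 0.1]])          # RGB x stains
H_true = rng.random((2, 500))            # stain strength per pixel
V = W_true @ H_true                      # observed pixel colors

# Lee-Seung multiplicative updates for V ~ W H with W, H >= 0
k, eps = 2, 1e-9
W = rng.random((3, k)) + 0.1
H = rng.random((k, 500)) + 0.1
for _ in range(500):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)   # relative fit error
```

The sensitivity of this update to the random initialization of `W` and `H` is exactly the weakness, point (iii) above, that the graph-cut-optimized decomposition is claimed to avoid.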
Hou, Yanbei; Hu, Weizhao; Gui, Zhou; Hu, Yuan
2017-07-15
Cuprous oxide (Cu2O) is an effective catalyst that has been applied to enhance the fire safety of unsaturated polyester resin (UPR), but the influence of particle size on combustion behavior has not been previously reported. Herein, UPR/Cu2O composites (with metal oxide particles of average size 10, 100, and 200 nm) were synthesized by a thermosetting process. The effects of Cu2O of different sizes on the thermostability and combustion behavior of UPR were characterized by TGA, MCC, TG-IR, FTIR, and SSTF. The results reveal that the addition of Cu2O contributes to sufficient decomposition of oxygen-containing compounds, which favors the release of nontoxic compounds. The smallest-sized Cu2O exhibits an excellent catalytic decomposition effect and promotes the complete combustion of UPR, which benefits the enhancement of fire safety. The larger additives, in contrast, retard the pyrolysis process and yield more char residue, thereby improving the flame retardancy of the UPR composites. Therefore, catalysis plays the major role for smaller-sized particles during thermal decomposition of the matrix, while the flame-retardant effect becomes distinctly more pronounced for the larger-sized additives.
Fast flux module detection using matroid theory.
Reimers, Arne C; Bruggeman, Frank J; Olivier, Brett G; Stougie, Leen
2015-05-01
Flux balance analysis (FBA) is one of the most frequently applied methods for genome-scale metabolic networks. Although FBA uniquely determines the optimal yield, the pathway that achieves this is usually not unique. The analysis of the optimal-yield flux space has been an open challenge. Flux variability analysis captures only some properties of the flux space, while elementary mode analysis is intractable due to the enormous number of elementary modes. However, it has been found by Kelk et al. (2012) that the space of optimal-yield fluxes decomposes into flux modules. These decompositions allow a much easier but still comprehensive analysis of the optimal-yield flux space. Using the mathematical definition of module introduced by Müller and Bockmayr (2013b), we discovered useful connections to matroid theory, through which efficient algorithms enable us to compute the decomposition into modules in a few seconds for genome-scale networks. Because every module can be represented by a single reaction that captures its function, we also present in this article a method that uses this decomposition to visualize the interplay of modules. We expect the new method to replace flux variability analysis in the pipelines for metabolic networks.
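The optimal-yield non-uniqueness that motivates flux modules already shows up in a toy FBA problem. The sketch below uses a hypothetical 3-metabolite network and SciPy's general-purpose LP solver rather than a dedicated FBA package.

```python
import numpy as np
from scipy.optimize import linprog

# Metabolites A, B, C; reactions: v1 uptake->A, v2 A->B, v3 A->C,
# v4 B->biomass, v5 C->biomass (all made up for illustration)
S = np.array([
    [1, -1, -1,  0,  0],   # A
    [0,  1,  0, -1,  0],   # B
    [0,  0,  1,  0, -1],   # C
])
c = np.array([0, 0, 0, -1.0, -1.0])   # maximize v4 + v5 (linprog minimizes)
bounds = [(0, 10), (0, 6), (0, 6), (0, None), (0, None)]
res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds)
optimal_yield = -res.fun

# The yield (10, set by the uptake bound) is unique, but the split between
# v2 and v3 is not: any v2 in [4, 6] is optimal. That free interior split is
# the kind of degree of freedom a flux module isolates.
```

Flux variability analysis would report the ranges of v2 and v3 separately; the module decomposition instead groups {v2, v3, v4, v5} as one unit whose internal fluxes vary together while its net function is fixed.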
Stendahl, Johan; Berg, Björn; Lindahl, Björn D
2017-11-14
Carbon sequestration below ground depends on organic matter input and decomposition, but regulatory bottlenecks remain unclear. The relative importance of plant production, climate and edaphic factors has to be elucidated to better predict carbon storage in forests. In Swedish forest soil inventory data from across the entire boreal latitudinal range (n = 2378), the concentration of exchangeable manganese was singled out as the strongest predictor (R² = 0.26) of carbon storage in the extensive organic horizon (mor layer), which accounts for one third of the total below-ground carbon. In comparison, established ecosystem models applied to the same data have failed to predict carbon stocks (R² < 0.05), and in our study manganese availability overshadowed both litter production and climatic factors. We also identified exchangeable potassium as an additional strong predictor, although it is strongly correlated with manganese. The negative correlation between manganese and carbon highlights the importance of Mn-peroxidases in oxidative decomposition of recalcitrant organic matter. The results support the idea that fungus-driven decomposition could be a critical factor regulating humus carbon accumulation in boreal forests, as Mn-peroxidases are specifically produced by basidiomycetes.
Han, Te; Jiang, Dongxiang; Zhang, Xiaochen; Sun, Yankui
2017-03-27
Rotating machinery is widely used in industrial applications. With the trend towards more precise and more critical operating conditions, mechanical failures may easily occur. Condition monitoring and fault diagnosis (CMFD) technology is an effective tool to enhance the reliability and security of rotating machinery. In this paper, an intelligent fault diagnosis method based on dictionary learning and singular value decomposition (SVD) is proposed. First, the dictionary learning scheme is capable of generating an adaptive dictionary whose atoms reveal the underlying structure of raw signals. Essentially, dictionary learning is employed as an adaptive feature extraction method that requires no prior knowledge. Second, the singular value sequence of the learned dictionary matrix is used to extract the feature vector. Since this vector is of high dimensionality, simple and practical principal component analysis (PCA) is applied to reduce its dimensionality. Finally, the K-nearest neighbor (KNN) algorithm is adopted for automatic identification and classification of fault patterns. Two experimental case studies are investigated to corroborate the effectiveness of the proposed method in intelligent diagnosis of rotating machinery faults. The comparison analysis validates that the dictionary learning-based matrix construction approach outperforms the mode decomposition-based methods in terms of capacity and adaptability for feature extraction.
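The SVD-feature, PCA, and nearest-neighbor stages of such a pipeline can be sketched with numpy alone. Note the hedge: the paper's dictionary-learning step is replaced here by a plain lag-stacked (Hankel-style) matrix, and the three "fault classes" are synthetic sinusoids standing in for vibration data.

```python
import numpy as np

rng = np.random.default_rng(0)

def svd_features(sig, rows=20):
    # lag-stacked matrix; its singular-value spectrum is the feature vector
    H = np.array([sig[i:i + rows] for i in range(len(sig) - rows)]).T
    return np.linalg.svd(H, compute_uv=False)

def make_signal(fault_freq):
    t = np.arange(0, 1, 1 / 2000)
    return np.sin(2 * np.pi * fault_freq * t) + 0.3 * rng.standard_normal(len(t))

# three hypothetical fault classes, five noisy training signals each
train = [(svd_features(make_signal(f)), label)
         for label, f in enumerate([30, 90, 150]) for _ in range(5)]
X = np.array([f for f, _ in train])
y = np.array([l for _, l in train])

# PCA via SVD of the centered feature matrix; keep 3 principal components
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:3].T

# 1-nearest-neighbor classification of a new 90 Hz ("class 1") signal
query = (svd_features(make_signal(90)) - X.mean(axis=0)) @ Vt[:3].T
pred = y[np.argmin(np.linalg.norm(Z - query, axis=1))]
```

Swapping `svd_features` for singular values of a learned dictionary matrix, as the paper does, leaves the PCA and KNN stages unchanged, which is what makes the feature-construction step the interesting point of comparison.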
Liu, Yu; Hu, Xiao-Fei; Chen, Fu-Sheng; Yuan, Ping-Cheng
2013-06-01
Rhizospheric and non-rhizospheric soils, and absorption, transition, and storage roots, were sampled from mid-subtropical Pinus massoniana and Castanopsis sclerophylla forests to study the CO2 fluxes from soil mineralization and root decomposition in the forests. The samples were incubated in closed jars at 15, 25, 35, and 45 degrees C, and the alkali absorption method was applied to measure the CO2 fluxes during 53 days of incubation. For the two forests, the rhizospheric effect (the ratio of rhizospheric to non-rhizospheric soil) on the CO2 flux from soil mineralization across all incubation temperatures ranged from 1.12 to 3.09, with a decreasing trend over the incubation period. There was no significant difference in the CO2 flux from soil mineralization between the two forests at 15 degrees C, but the flux was significantly higher in the P. massoniana forest than in the C. sclerophylla forest at 25 and 35 degrees C, with the opposite pattern at 45 degrees C. At all incubation temperatures, the CO2 release from decomposition of absorption roots was higher than that from transition and storage roots, and for all root functional types it was smaller in the P. massoniana than in the C. sclerophylla forest. The Q10 values of the CO2 fluxes from the two forests were higher for soils (1.21-1.83) than for roots (0.96-1.36). No significant differences were observed in the Q10 values of the CO2 flux from soil mineralization between the two forests, but the Q10 value of the CO2 flux from root decomposition was significantly higher in the P. massoniana than in the C. sclerophylla forest. These results suggest that the increase in CO2 flux from soil mineralization under global warming would be far larger than that from root decomposition, and larger for the P. massoniana than for the C. sclerophylla forest. In the subtropics of China, the adaptability of the zonal climax community to global warming would thus be stronger than that of the pioneer community.
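The Q10 values quoted above follow from the standard van 't Hoff form, which is easy to state in code (the example rates are made-up numbers, not the study's data):

```python
def q10(rate_low, rate_high, t_low, t_high):
    """Q10 temperature sensitivity from rates measured at two temperatures:
    Q10 = (rate_high / rate_low) ** (10 / (t_high - t_low))."""
    return (rate_high / rate_low) ** (10.0 / (t_high - t_low))

# hypothetical fluxes: soil CO2 release rising from 1.2 to 1.9 units
# between 15 and 25 degrees C gives Q10 ~ 1.58, within the study's
# reported soil range of 1.21-1.83
soil_q10 = q10(1.2, 1.9, 15.0, 25.0)
```

A Q10 of 1 means no temperature sensitivity, which is why the root values near or below 1 (0.96-1.36) imply a much weaker warming response than the soil values.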
NASA Astrophysics Data System (ADS)
Zhong, Jun
Density functional theory (DFT) is employed to study lubricant adsorption and decomposition pathways, and adhesive metal transfer on clean aluminum surfaces. In this dissertation, density functional theory (DFT-GGA) is used to investigate the optimal adsorption geometries and binding energies of vinyl-phosphonic and ethanoic acids on an Al(111) surface. Tri-bridged, bi-bridged and uni-dentate coordinations for adsorbates are examined to determine the optimal binding sites on the surface. An analysis of the charge density and density of states (DOS) of oxygen involved in reacting with aluminum ions reveals changes in the atomic bonding configuration. For these acid molecules, the favorable decomposition pathways lead to fragments of vinyl- and alkyl-chains bonding to the Al(111) surface through phosphorus and carbon ions. Final optimal decomposition geometries and binding energies for various decomposition stages are also discussed. In addition, ab-initio molecular dynamics (AIMD) is carried out to explore collisions of aliphatic lubricants such as butanol and butanoic acid with the Al(111) surface. Simulation results indicate that functional oxygen groups on these molecules could react with the "islands of nascent aluminum" and oxidize the surface. Favorable decomposition fragments on the surface, corroborated by experiment and DFT calculations, are found to contribute to the effectiveness of a particular molecule for boundary thin-film lubrication to reduce the wear of aluminum. Finally, ab-initio molecular dynamics is also applied to investigations of the interaction between aluminum and hematite surfaces with and without a vinyl-phosphonic acid (VPA) lubricant. Without the lubricant, hematite is found to react with Al strongly (thermite reaction).
This removes relatively large fragments from the surface of the aluminum substrate when this substrate is rubbed with a harder steel roller under an external shock contact-load exceeding the ability of the substrate to support the aluminum-oxide film. Adhesive wear is found to significantly raise the temperature of the system. Addition of the VPA lubricant is found to retard the reaction of hematite with aluminum by forming an effective barrier between the two surfaces.
ADVANCED OXIDATION: OXALATE DECOMPOSITION TESTING WITH OZONE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ketusky, E.; Subramanian, K.
At the Savannah River Site (SRS), oxalic acid is currently considered the preferred agent for chemically cleaning the large underground Liquid Radioactive Waste Tanks. It is applied only in the final stages of emptying a tank, when generally less than 5,000 kg of waste solids remain and slurrying-based removal methods are no longer effective. The use of oxalic acid is preferred because of its combined dissolution and chelating properties, as well as the fact that corrosion of the carbon steel tank walls can be controlled. Although oxalic acid is the preferred agent, there are significant potential downstream impacts, including: (1) degraded evaporator operation; (2) resultant oxalate precipitates taking away critically needed operating volume; and (3) eventual creation of significant volumes of additional feed to salt processing. As an alternative to dealing with the downstream impacts, oxalate decomposition using variations of the ozone-based Advanced Oxidation Process (AOP) was investigated. In general, AOPs use ozone or peroxide and a catalyst to create hydroxyl radicals. Hydroxyl radicals have among the highest oxidation potentials, and are commonly used to decompose organics. Although oxalate is considered among the most difficult organics to decompose, the ability of hydroxyl radicals to decompose oxalate is considered to be well demonstrated. In addition, as AOPs are considered 'green', their use enables any net chemical additions to the waste to be minimized. In order to test the ability to decompose the oxalate and determine the decomposition rates, a test rig was designed in which 10 vol% ozone would be educted into a spent oxalic acid decomposition loop, with the loop maintained at 70 C and recirculated at 40 L/min.
Each of the spent oxalic acid streams would be created from three oxalic acid strikes of an F-area simulant (i.e., Purex = high Fe/Al concentration) and an H-area simulant (i.e., H-area modified Purex = high Al/Fe concentration) after nearing dissolution equilibrium, and then decomposed to ≤ 100 parts per million (ppm) oxalate. Since AOP technology largely originated on the use of ultraviolet (UV) light as a primary catalyst, decomposition of the spent oxalic acid, well exposed to a medium-pressure mercury vapor light, was considered the benchmark. However, with multi-valent metals already contained in the feed and maintenance of the UV light a concern, testing was conducted to evaluate the impact of removing the UV light. Using current AOP terminology, the test without the UV light would likely be considered an ozone-based, dark, ferrioxalate-type decomposition process. Specifically, as part of the testing, the impacts of the following were investigated: (1) the importance of the UV light on the decomposition rates when decomposing 1 wt% spent oxalic acid; (2) the impact of increasing the oxalic acid strength from 1 to 2.5 wt% on the decomposition rates; and (3) for F-area testing, the advantage of increasing the spent oxalic acid flowrate from 40 L/min (liters/minute) to 50 L/min during decomposition of the 2.5 wt% spent oxalic acid. The results showed that removal of the UV light (from 1 wt% testing) slowed the decomposition rates in both the F and H testing. Specifically, for F-Area Strike 1, the time increased from about 6 hours to 8 hours. In H-Area, the impact was not as significant, with the time required for Strike 1 to be decomposed to less than 100 ppm increasing slightly, from 5.4 to 6.4 hours. For the spent 2.5 wt% oxalic acid decomposition tests (all) without the UV light, the F-area decompositions required approx. 10 to 13 hours, while the corresponding H-Area decomposition times ranged from 10 to 21 hours.
For the 2.5 wt% F-Area sludge, the increased availability of iron likely caused the increased decomposition rates compared to the 1 wt% oxalic acid based tests. In addition, for the F-testing, increasing the recirculation flow rate from 40 liters/minute to 50 liters/minute resulted in an increased decomposition rate, suggesting a better use of ozone.
NASA Astrophysics Data System (ADS)
Akhbardeh, Alireza; Junnila, Sakari; Koivuluoma, Mikko; Koivistoinen, Teemu; Värri, Alpo
2006-12-01
Singular value decomposition (SVD) is designed for computing the singular values (SVs) of a matrix. If it is used to find the SVs of an n-by-1 or 1-by-n array whose elements represent samples of a signal, it returns only one singular value, which is not enough to express the whole signal. To overcome this problem, we designed a new kind of feature extraction method which we call "time-frequency moments singular value decomposition (TFM-SVD)." In this new method, we use statistical features of the time series as well as the frequency series (Fourier transform of the signal). This information is extracted into a matrix with a fixed structure and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results indicate that the performance of a combined system including this transform and classifiers is comparable with that of other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs) to ballistocardiogram (BCG) data clustering to look for probable heart disease in six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph developed in our project. This kind of device, combined with automated recording and analysis, would be suitable for use in many places, such as the home or office. The results show that the method has high performance and is almost insensitive to BCG waveform latency or nonlinear disturbance.
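The construction can be sketched as follows. The abstract does not specify which statistical moments or which matrix layout the authors use, so the 2x4 moment matrix below (mean, standard deviation, skewness, kurtosis of the time series and of the FFT magnitude) is an illustrative assumption:

```python
import numpy as np

def tfm_svd_features(x):
    """Sketch of TFM-SVD: build a fixed-size matrix from time- and
    frequency-domain statistical moments of a 1-D signal, then return
    its singular values as a feature vector. The 2x4 layout is an
    assumption; the paper's exact structure may differ."""
    x = np.asarray(x, dtype=float)
    X = np.abs(np.fft.rfft(x))                 # frequency-domain series

    def moments(s):
        m, sd = s.mean(), s.std()
        skew = ((s - m) ** 3).mean() / sd ** 3 if sd > 0 else 0.0
        kurt = ((s - m) ** 4).mean() / sd ** 4 if sd > 0 else 0.0
        return [m, sd, skew, kurt]

    M = np.array([moments(x), moments(X)])      # 2 x 4 moment matrix
    return np.linalg.svd(M, compute_uv=False)   # two SVs instead of one

rng = np.random.default_rng(0)
sig = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.1 * rng.standard_normal(256)
f = tfm_svd_features(sig)
```

Unlike the SVD of the raw 1-by-n array, this yields a short feature vector (here two singular values) that a downstream classifier can consume.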
Saez-Rodriguez, Julio; Gayer, Stefan; Ginkel, Martin; Gilles, Ernst Dieter
2008-08-15
The modularity of biochemical networks in general, and signaling networks in particular, has been extensively studied over the past few years. It has been proposed as a useful property for analyzing signaling networks: by decomposing the network into subsystems, more manageable units are obtained that are easier to analyze. While many powerful algorithms are available to identify modules in protein interaction networks, less attention has been paid to signaling networks defined as chemical systems. Such a decomposition would be very useful as most quantitative models are defined using the latter, more detailed formalism. Here, we introduce a novel method to decompose biochemical networks into modules so that the bidirectional (retroactive) couplings among the modules are minimized. Our approach adapts a method to detect community structures, and applies it to the so-called retroactivity matrix that characterizes the couplings of the network. Only the structure of the network, e.g. in SBML format, is required. Furthermore, the modularized models can be loaded into ProMoT, a modeling tool which supports modular modeling. This allows visualization of the models, exploiting their modularity, and easy generation of models of one or several modules for further analysis. The method is applied to several relevant cases, including an entangled model of the EGF-induced MAPK cascade and a comprehensive model of EGF signaling, demonstrating its ability to uncover meaningful modules. Our approach can thus help to analyze large networks, especially when little a priori knowledge on the structure of the network is available. The decomposition algorithms implemented in MATLAB (Mathworks, Inc.) are freely available upon request. ProMoT is freely available at http://www.mpi-magdeburg.mpg.de/projects/promot. Supplementary data are available at Bioinformatics online.
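The abstract does not give the community-detection algorithm itself; as a rough stand-in, the idea of cutting a retroactivity-style coupling matrix where bidirectional couplings are weakest can be illustrated with a spectral bipartition (Fiedler vector of the weighted graph Laplacian) on a hypothetical six-species network:

```python
import numpy as np

# Hypothetical 6-species coupling ("retroactivity") matrix: entry (i, j)
# weighs the bidirectional coupling between species i and j. Two weakly
# linked blocks {0,1,2} and {3,4,5} are built in by construction.
R = np.array([
    [0, 5, 4, 0, 0, 1],
    [5, 0, 6, 0, 0, 0],
    [4, 6, 0, 1, 0, 0],
    [0, 0, 1, 0, 7, 5],
    [0, 0, 0, 7, 0, 6],
    [1, 0, 0, 5, 6, 0],
], dtype=float)

# Spectral bipartition: split nodes by the sign of the Fiedler vector,
# the eigenvector of the second-smallest Laplacian eigenvalue.
L = np.diag(R.sum(axis=1)) - R
eigvals, eigvecs = np.linalg.eigh(L)       # ascending eigenvalues
fiedler = eigvecs[:, 1]
module_a = set(np.where(fiedler >= 0)[0])
module_b = set(np.where(fiedler < 0)[0])
```

The cut separates the two blocks while severing only the two weight-1 cross-couplings, which is the flavor of "minimize retroactive coupling between modules" described above.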
NASA Astrophysics Data System (ADS)
Kawabe, Yutaka; Yoshikawa, Toshio; Chida, Toshifumi; Tada, Kazuhiro; Kawamoto, Masuki; Fujihara, Takashi; Sassa, Takafumi; Tsutsumi, Naoto
2015-10-01
In order to analyze the spectra of inseparable chemical mixtures, many mathematical methods have been developed to decompose them into components relevant to individual species from a series of spectral data obtained under different conditions. We formulated a method based on the singular value decomposition (SVD) of linear algebra and applied it to two example systems of organic dyes, successfully reproducing absorption spectra assignable to cis/trans azocarbazole dyes from spectral data after photoisomerization, and to the monomer/dimer of cyanine dyes from data recorded during a photodegradation process. For the photoisomerization example, polymer films containing the azocarbazole dyes were prepared, which have shown updatable holographic stereograms of real images with high performance. We continuously monitored the absorption spectrum after optical excitation and found that its shape varied slightly after the excitation and during the recovery process, which suggested a contribution from a generated photoisomer. Application of the method successfully identified two spectral components due to the trans and cis forms of the azocarbazoles. The temporal evolution of their weight factors suggested important roles of long-lived cis states in azocarbazole derivatives. We also applied the method to the photodegradation of cyanine dyes doped in DNA-lipid complexes, which have shown efficient and durable optical amplification and/or lasing under optical pumping. The same SVD method successfully extracted two spectral components presumably due to the monomer and an H-type dimer. During the photodegradation process, the absorption magnitude gradually decreased owing to decomposition of the molecules, and the decay rates strongly depended on the spectral component, suggesting that the long persistence of the dyes in the DNA complex is related to a weak tendency toward aggregate formation.
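The core linear-algebra step can be sketched on synthetic data (Gaussian bands and linear concentration profiles are assumptions, not the paper's dyes): a series of mixture spectra is stacked into a matrix, and its SVD reveals how many species contribute and spans their spectral subspace.

```python
import numpy as np

# Synthetic two-species mixture series: columns of A are spectra recorded
# at successive times as one species converts into the other.
wl = np.linspace(400.0, 700.0, 301)                 # wavelength grid, nm
s_trans = np.exp(-((wl - 480.0) / 30.0) ** 2)       # hypothetical trans band
s_cis = np.exp(-((wl - 560.0) / 25.0) ** 2)         # hypothetical cis band
t = np.linspace(0.0, 1.0, 20)
C = np.vstack([1.0 - 0.6 * t, 0.6 * t])             # concentration profiles
A = np.column_stack([s_trans, s_cis]) @ C           # wavelength x time matrix

U, s, Vt = np.linalg.svd(A, full_matrices=False)
rank = int(np.sum(s > 1e-10 * s[0]))                # number of species present
basis = U[:, :rank]                                 # spectral subspace they span
```

With only two species, exactly two singular values stand above numerical noise; recovering the physically meaningful pure-component spectra then amounts to rotating `basis` under non-negativity-type constraints.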
Microbial Abundances in Salt Marsh Soils: A Molecular Approach for Small Spatial Scales
NASA Astrophysics Data System (ADS)
Granse, Dirk; Mueller, Peter; Weingartner, Magdalena; Hoth, Stefan; Jensen, Kai
2016-04-01
The rate of biological decomposition greatly determines the carbon sequestration capacity of salt marshes. Microorganisms are involved in the decomposition of biomass, and the rate of decomposition is supposed to be related to microbial abundance. Recent studies quantified microbial abundance by means of quantitative polymerase chain reaction (QPCR), a method that also allows determining the microbial community structure by applying specific primers. The main microbial community structure can be determined by using primers specific for 16S rRNA (Bacteria) and 18S rRNA (Fungi) of the microbial DNA. However, the investigation of microbial abundance patterns at small spatial scales, such as locally varying abiotic conditions within a salt-marsh system, requires high accuracy in DNA extraction and QPCR methods. Furthermore, there is evidence that a single extraction may not be sufficient to reliably quantify rRNA gene copies. The aim of this study was to establish a suitable DNA extraction method and stable QPCR conditions for the measurement of microbial abundances in semi-terrestrial environments. DNA was extracted from two soil samples (top 5 cm) by using the PowerSoil DNA Extraction Kit (Mo Bio Laboratories, Inc., Carlsbad, CA) and applying a modified extraction protocol. The DNA extraction was conducted in four consecutive DNA extraction loops from three biological replicates per soil sample by reusing the PowerSoil bead tube. The number of Fungi and Bacteria rRNA gene copies of each DNA extraction loop and a pooled DNA solution (extraction loops 1-4) was measured by using the QPCR method with taxa-specific primer pairs (Bacteria: B341F, B805R; Fungi: FR1, FF390). The DNA yield of the replicates varied at DNA extraction loop 1 between 25 and 85 ng
The Dynamics of the Evolution of the Black-White Test Score Gap
ERIC Educational Resources Information Center
Sohn, Kitae
2012-01-01
We apply a quantile version of the Oaxaca-Blinder decomposition to estimate the counterfactual distribution of the test scores of Black students. In the Early Childhood Longitudinal Study, Kindergarten Class of 1998-1999 (ECLS-K), we find that the gap initially appears only at the top of the distribution of test scores. As children age, however,…
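The quantile decomposition used in the study builds on the standard mean Oaxaca-Blinder identity, which splits a group gap into an "explained" part (covariate differences) and an "unexplained" part (coefficient differences). A minimal mean-level sketch on synthetic data (not the ECLS-K sample, and not the quantile version) looks like this:

```python
import numpy as np

# Synthetic two-group data: outcomes depend linearly on one covariate,
# with both covariate means and coefficients differing across groups.
rng = np.random.default_rng(1)
n = 2000
Xa = np.column_stack([np.ones(n), rng.normal(1.0, 1.0, n)])  # group A design
Xb = np.column_stack([np.ones(n), rng.normal(0.5, 1.0, n)])  # group B design
beta_a, beta_b = np.array([0.2, 1.0]), np.array([0.0, 0.8])
ya = Xa @ beta_a + 0.1 * rng.standard_normal(n)
yb = Xb @ beta_b + 0.1 * rng.standard_normal(n)

# Group-specific OLS fits, then the Oaxaca-Blinder split of the mean gap.
ba, *_ = np.linalg.lstsq(Xa, ya, rcond=None)
bb, *_ = np.linalg.lstsq(Xb, yb, rcond=None)
gap = ya.mean() - yb.mean()
explained = (Xa.mean(0) - Xb.mean(0)) @ bb    # endowment (covariate) effect
unexplained = Xa.mean(0) @ (ba - bb)          # coefficient effect
```

Because OLS residuals with an intercept average to zero, `explained + unexplained` reproduces the mean gap exactly; the quantile version replaces the means with counterfactual distributions evaluated at each quantile.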
NASA Astrophysics Data System (ADS)
Zhang, Dashan; Guo, Jie; Jin, Yi; Zhu, Chang'an
2017-09-01
High-speed cameras provide full field measurement of structure motions and have been applied in nondestructive testing and noncontact structure monitoring. Recently, a phase-based method has been proposed to extract sound-induced vibrations from phase variations in videos, and this method provides insights into the study of remote sound surveillance and material analysis. An efficient singular value decomposition (SVD)-based approach is introduced to detect sound-induced subtle motions from pixel intensities in silent high-speed videos. A high-speed camera is initially applied to capture a video of the vibrating objects stimulated by sound fluctuations. Then, subimages collected from a small region on the captured video are reshaped into vectors and reconstructed to form a matrix. Orthonormal image bases (OIBs) are obtained from the SVD of the matrix; available vibration signal can then be obtained by projecting subsequent subimages onto specific OIBs. A simulation test is initiated to validate the effectiveness and efficiency of the proposed method. Two experiments are conducted to demonstrate the potential applications in sound recovery and material analysis. Results show that the proposed method efficiently detects subtle motions from the video.
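The pipeline can be sketched as follows on synthetic frames (the frame rate, region size, and brightness-modulation model are assumptions): vectorized subimages form the columns of a matrix, its SVD yields orthonormal image bases (OIBs), and projecting frames onto the leading basis recovers a 1-D vibration signal.

```python
import numpy as np

# Synthetic "video": a static texture whose brightness is subtly
# modulated by a 110 Hz vibration, sampled at an assumed 2000 fps.
rng = np.random.default_rng(2)
h, w, n_frames = 8, 8, 400
fs = 2000.0
t = np.arange(n_frames) / fs
pattern = rng.standard_normal((h, w))            # static texture
vib = 0.05 * np.sin(2 * np.pi * 110.0 * t)       # subtle modulation
frames = pattern[:, :, None] * (1.0 + vib)       # (h, w, n_frames)

# Reshape subimages into columns, center, and take the SVD.
M = frames.reshape(h * w, n_frames)
U, s, Vt = np.linalg.svd(M - M.mean(axis=1, keepdims=True), full_matrices=False)

# Projection onto the first orthonormal image basis gives the signal.
signal = Vt[0]
freq = np.fft.rfftfreq(n_frames, 1 / fs)
peak_hz = freq[np.argmax(np.abs(np.fft.rfft(signal)))]
```

On this noiseless toy the spectrum of the recovered signal peaks exactly at the driving frequency; with real video, the useful motion typically sits in a few leading OIBs above the noise floor.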
Kreutzweiser, David; Good, Kevin; Chartrand, Derek; Scarr, Taylor; Thompson, Dean
2007-11-01
Imidacloprid is effective against emerald ash borer when applied as a systemic insecticide. Following stem or soil injections to trees in riparian areas, imidacloprid residues could be indirectly introduced to aquatic systems via leaf fall or leaching. Either route of exposure may affect non-target aquatic decomposer organisms. Leaves from ash trees treated with imidacloprid at two field rates and an intentionally high concentration were added to aquatic microcosms. Leaves from trees treated at the two field rates contained imidacloprid concentrations of 0.8-1.3 ppm, and did not significantly affect leaf-shredding insect survival, microbial respiration or microbial decomposition rates. Insect feeding rates were significantly inhibited at foliar concentrations of 1.3 ppm but not at 0.8 ppm. Leaves from intentionally high-dose trees contained concentrations of about 80 ppm, and resulted in 89-91% mortality of leaf-shredding insects, but no adverse effects on microbial respiration and decomposition rates. Imidacloprid applied directly to aquatic microcosms to simulate leaching from soils was at least 10 times more toxic to aquatic insects than the foliar concentrations, with high mortality at 0.13 ppm and significant feeding inhibition at 0.012 ppm.
On some Aitken-like acceleration of the Schwarz method
NASA Astrophysics Data System (ADS)
Garbey, M.; Tromeur-Dervout, D.
2002-12-01
In this paper we present a family of domain decomposition methods based on Aitken-like acceleration of the Schwarz method, seen as an iterative procedure with a linear rate of convergence. We first present the so-called Aitken-Schwarz procedure for linear differential operators. The solver can be a direct solver when applied to the Helmholtz problem with a five-point finite difference scheme on regular grids. We then introduce the Steffensen-Schwarz variant, an iterative domain decomposition solver that can be applied to linear and nonlinear problems. We show that these solvers have reasonable numerical efficiency compared to classical fast solvers for the Poisson problem or to multigrid for more general linear and nonlinear elliptic problems. However, the salient feature of our method is its high tolerance to slow networks in the context of distributed parallel computing, which makes it attractive, generally speaking, for computer architectures whose performance is limited by memory bandwidth rather than by the flop performance of the CPU. This is nowadays the case for most parallel computers using the RISC processor architecture. We illustrate this highly desirable property of our algorithm with large-scale computing experiments.
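The scalar prototype of the acceleration is Aitken's Delta-squared formula: for a sequence converging linearly, three consecutive iterates determine the limit. A minimal sketch on a model fixed-point iteration (the Schwarz method applies the same idea to interface values):

```python
# Aitken's Delta-squared extrapolation: estimate the limit of a linearly
# converging sequence from three consecutive iterates x0, x1, x2.
def aitken(x0, x1, x2):
    denom = x2 - 2.0 * x1 + x0
    return x2 - (x2 - x1) ** 2 / denom

# Model iteration x <- 0.5*x + 1, which converges linearly to 2.
x0 = 0.0
x1 = 0.5 * x0 + 1.0     # 1.0
x2 = 0.5 * x1 + 1.0     # 1.5
x_acc = aitken(x0, x1, x2)
```

Because the model iteration is exactly linear, the extrapolation lands on the fixed point in one shot; for a Schwarz iteration with a linear convergence rate, the same identity applied to interface traces removes the slow geometric decay.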
JPEG2000-coded image error concealment exploiting convex sets projections.
Atzori, Luigi; Ginesu, Giaime; Raccis, Alessio
2005-04-01
Transmission errors in JPEG2000 can be grouped into three main classes, depending on the affected area: LL, high frequencies at the lower decomposition levels, and high frequencies at the higher decomposition levels. The first type of error is the most annoying but can be concealed by exploiting the spatial correlation of the signal, as in a number of techniques proposed in the past; the second is less annoying but more difficult to address; the last is often imperceptible. In this paper, we address the problem of concealing the second class of errors when high bit-planes are damaged, by proposing a new approach based on the theory of projections onto convex sets. Accordingly, the error effects are masked by iteratively applying two procedures: low-pass (LP) filtering in the spatial domain and restoration of the uncorrupted wavelet coefficients in the transform domain. It was observed that uniform LP filtering brought some undesired side effects that negatively compensated the advantages. This problem has been overcome by applying an adaptive solution, which exploits an edge map to choose the optimal filter mask size. Simulation results demonstrate the efficiency of the proposed approach.
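The two alternating projections can be sketched with heavy simplifications (a one-level 2-D Haar transform and a uniform 3x3 mean filter stand in for JPEG2000's deeper biorthogonal decomposition and the paper's adaptive filter): a damaged HH subband is re-estimated by smoothing in the spatial domain while repeatedly restoring the uncorrupted subbands in the transform domain.

```python
import numpy as np

def haar2(a):
    """One-level 2-D Haar analysis into LL, LH, HL, HH subbands."""
    tl, tr = a[0::2, 0::2], a[0::2, 1::2]
    bl, br = a[1::2, 0::2], a[1::2, 1::2]
    return ((tl + tr + bl + br) / 4, (tl + tr - bl - br) / 4,
            (tl - tr + bl - br) / 4, (tl - tr - bl + br) / 4)

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    a = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    a[0::2, 0::2] = ll + lh + hl + hh
    a[0::2, 1::2] = ll + lh - hl - hh
    a[1::2, 0::2] = ll - lh + hl - hh
    a[1::2, 1::2] = ll - lh - hl + hh
    return a

def lowpass(a):
    """3x3 mean filter with replicated edges (spatial-domain projection)."""
    p = np.pad(a, 1, mode="edge")
    return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

rng = np.random.default_rng(3)
img = lowpass(lowpass(rng.standard_normal((32, 32))))  # smooth test image
ll, lh, hl, hh = haar2(img)

rec = ihaar2(ll, lh, hl, np.zeros_like(hh))  # HH subband lost in transit
for _ in range(10):                          # alternate the two projections
    rec = lowpass(rec)                       # spatial-domain smoothness set
    _, _, _, d = haar2(rec)
    rec = ihaar2(ll, lh, hl, d)              # restore uncorrupted coefficients
```

Each pass keeps the received LL/LH/HL coefficients exact while letting the smoothing step propose plausible values for the missing HH band, which is the POCS mechanism the abstract describes.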
Liu, Zhiwen; He, Zhengjia; Guo, Wei; Tang, Zhangchun
2016-03-01
In order to extract fault features of large-scale power equipment from strong background noise, a hybrid fault diagnosis method based on second generation wavelet de-noising (SGWD) and the local mean decomposition (LMD) is proposed in this paper. In this method, a de-noising algorithm of the second generation wavelet transform (SGWT) using neighboring coefficients is employed as a pretreatment to remove noise from rotating machinery vibration signals, by virtue of its good effect in enhancing the signal-to-noise ratio (SNR). Then, the LMD method is used to decompose the de-noised signals into several product functions (PFs). The PF corresponding to the faulty feature signal is selected according to the correlation coefficient criterion. Finally, the frequency spectrum is analyzed by applying the FFT to the selected PF. The proposed method is applied to analyze the vibration signals collected from an experimental gearbox and a real locomotive rolling bearing. The results demonstrate that the proposed method has better performance, such as higher SNR and faster convergence speed, than the normal LMD method.
Convergence issues in domain decomposition parallel computation of hovering rotor
NASA Astrophysics Data System (ADS)
Xiao, Zhongyun; Liu, Gang; Mou, Bin; Jiang, Xiong
2018-05-01
The implicit LU-SGS time integration algorithm has been widely used in parallel computation in spite of its lack of information from adjacent domains. When applied to parallel computation of hovering rotor flows in a rotating frame, it brings about convergence issues. To remedy the problem, three LU factorization-based implicit schemes (LU-SGS, DP-LUR and HLU-SGS) are investigated comparatively. A test case of pure grid rotation is designed to verify these algorithms, which shows that the LU-SGS algorithm introduces errors on boundary cells. When partition boundaries are circumferential, errors arise in proportion to grid speed, accumulate along with the rotation, and lead to computational failure in the end. Meanwhile, the DP-LUR and HLU-SGS methods show good convergence owing to their boundary treatment, which is desirable in domain decomposition parallel computations.
NASA Astrophysics Data System (ADS)
Jain, Shobhit; Tiso, Paolo; Haller, George
2018-06-01
We apply two recently formulated mathematical techniques, Slow-Fast Decomposition (SFD) and Spectral Submanifold (SSM) reduction, to a von Kármán beam with geometric nonlinearities and viscoelastic damping. SFD identifies a global slow manifold in the full system which attracts solutions at rates faster than typical rates within the manifold. An SSM, the smoothest nonlinear continuation of a linear modal subspace, is then used to further reduce the beam equations within the slow manifold. This two-stage, mathematically exact procedure results in a drastic reduction of the finite-element beam model to a one-degree-of-freedom nonlinear oscillator. We also introduce the technique of spectral quotient analysis, which gives the number of modes relevant for reduction as output rather than input to the reduction process.
Small nickel nanoparticle arrays from long chain imidazolium ionic liquids
Yang, Mei; Campbell, Paul S.; Santini, Catherine C.; ...
2013-11-08
A series of six long-chain alkyl mono- and bi-cationic imidazolium-based salts with bis(trifluoromethylsulfonyl)imide (NTf2−) as the anion were synthesized and characterized. The single-crystal structure of 1-methyl-3-octadecylimidazolium bis(trifluoromethylsulfonyl)imide was obtained by X-ray analysis. All these long-chain alkyl imidazolium-based ILs were applied in the synthesis of nickel nanoparticles via chemical decomposition of an organometallic precursor of nickel. In these media, spontaneous decomposition of Ni(COD)2 (COD = 1,5-cyclooctadiene) in the absence of H2 occurred, giving small NPs (≤4 nm) with narrow size distributions. Interestingly, formation of regularly interspaced NP arrays was also observed in long-chain ILs. Lastly, such array formation could be interesting for potential applications such as carbon nanotube growth.
NASA Astrophysics Data System (ADS)
Ampatzidis, Dimitrios; König, Rolf; Glaser, Susanne; Heinkelmann, Robert; Schuh, Harald; Flechtner, Frank; Nilsson, Tobias
2016-04-01
The aim of our study is to assess the classical Helmert similarity transformation using Velocity Decomposition Analysis (VEDA). VEDA is a new methodology, developed by GFZ for the assessment of the temporal variation of reference frames, and it is based on the separation of the velocities into two specified parts: the first is related to the reference system choice (the so-called datum effect), and the second refers to the real deformation of the terrestrial points. The advantage of VEDA is its ability to detect the relative biases and reference system effects between two different frames or two different realizations of the same frame, respectively. We apply VEDA to the assessment of several modern tectonic plate models against the recent global terrestrial reference frames.
NASA Astrophysics Data System (ADS)
Ikeda, Hayato; Nagaoka, Ryo; Lafond, Maxime; Yoshizawa, Shin; Iwasaki, Ryosuke; Maeda, Moe; Umemura, Shin-ichiro; Saijo, Yoshifumi
2018-07-01
High-intensity focused ultrasound is a noninvasive treatment in which ultrasound irradiated from outside the body thermally coagulates the target tissue. Recently, it has been proposed as a noninvasive treatment for vascular occlusion to replace conventional invasive treatments. Cavitation bubbles generated by the focused ultrasound can accelerate the thermal coagulation effect. However, the tissues surrounding the target may be damaged by cavitation bubbles generated outside the treatment area. Conventional methods based on Doppler analysis only in the time domain are not suitable for monitoring blood flow in the presence of cavitation. In this study, we propose a novel filtering method, based on differences in spatiotemporal characteristics, to separate tissue, blood flow, and cavitation by employing singular value decomposition. Signals from cavitation and blood flow were extracted automatically using spatial and temporal covariance matrices.
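The SVD separation rests on a familiar ultrasound idea that can be sketched on synthetic data (the signal models and the rank-1 tissue threshold below are assumptions, not the authors' procedure): slow-time frames stacked into a Casorati matrix put strong, slowly varying tissue into the largest singular components, so removing them leaves the faster, weaker components.

```python
import numpy as np

# Synthetic Casorati matrix (pixels x slow-time frames): strong, slowly
# drifting "tissue" plus a weak, rapidly oscillating "blood" component.
rng = np.random.default_rng(4)
n_pix, n_frames = 64, 100
t = np.arange(n_frames, dtype=float)
tissue = 10.0 * np.outer(rng.standard_normal(n_pix), 1.0 + 0.01 * t)
blood = 0.5 * np.outer(rng.standard_normal(n_pix), np.sin(0.8 * t))
X = tissue + blood

# SVD clutter filter: discard the largest singular component(s).
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 1                                           # assumed tissue rank
X_filt = U[:, k:] @ np.diag(s[k:]) @ Vt[k:]     # tissue component removed
```

In practice the cut-off `k` (and a second cut-off isolating cavitation from noise) is chosen adaptively, e.g. from the spatial and temporal covariance structure mentioned in the abstract, rather than fixed in advance.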
Turbulent fluid motion IV-averages, Reynolds decomposition, and the closure problem
NASA Technical Reports Server (NTRS)
Deissler, Robert G.
1992-01-01
Ensemble, time, and space averages as applied to turbulent quantities are discussed, and pertinent properties of the averages are obtained. Those properties, together with Reynolds decomposition, are used to derive the averaged equations of motion and the one- and two-point moment or correlation equations. The terms in the various equations are interpreted. The closure problem of the averaged equations is discussed, and possible closure schemes are considered. Those schemes usually require an input of supplemental information unless the averaged equations are closed by calculating their terms by a numerical solution of the original unaveraged equations. The law of the wall for velocities and temperatures, the velocity- and temperature-defect laws, and the logarithmic laws for velocities and temperatures are derived. Various notions of randomness and their relation to turbulence are considered in light of ergodic theory.
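The decomposition referred to above, and the closure problem it produces, can be written compactly as:

```latex
% Reynolds decomposition: mean plus zero-mean fluctuation
u_i(\mathbf{x},t) = \overline{u}_i(\mathbf{x}) + u_i'(\mathbf{x},t),
\qquad \overline{u_i'} = 0.
% Averaging the nonlinear convective term leaves the unclosed
% Reynolds-stress correlations \overline{u_i' u_j'}:
\overline{u_i u_j} = \overline{u}_i\,\overline{u}_j + \overline{u_i' u_j'}.
```

The second line is the crux of the closure problem: the averaged equations for the means contain the new unknowns \(\overline{u_i' u_j'}\), whose own equations in turn contain higher-order correlations, so supplemental modeling information is required to close the hierarchy.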
A study of stationarity in time series by using wavelet transform
NASA Astrophysics Data System (ADS)
Dghais, Amel Abdoullah Ahmed; Ismail, Mohd Tahir
2014-07-01
In this work the core objective is to apply discrete wavelet transform (DWT) functions, namely the Haar, Daubechies, Symmlet, Coiflet and discrete approximation of the Meyer wavelets, to non-stationary financial time series data from the US stock market (DJIA30). The data consist of 2048 daily closing index values from December 17, 2004 until October 23, 2012. The unit root test shows that the data are non-stationary in the level. In order to study the stationarity of a time series, the autocorrelation function (ACF) is used. Results indicate that the Haar function yields the least noisy series compared to the Daubechies, Symmlet, Coiflet and discrete approximation of the Meyer wavelets. In addition, the original data after decomposition by DWT form a less noisy series than the return time series after decomposition by DWT.
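The ACF-based stationarity check described above can be illustrated on a synthetic series (a random walk stands in for the DJIA30 level data): a unit-root level series shows a slowly decaying autocorrelation, while its first difference, the return series, decorrelates immediately.

```python
import numpy as np

def acf(x, lag):
    """Sample autocorrelation of a 1-D series at a given lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

rng = np.random.default_rng(5)
level = np.cumsum(rng.standard_normal(2048))   # unit-root (non-stationary) level
returns = np.diff(level)                       # stationary first difference

acf_level_20 = acf(level, 20)     # stays close to 1 (slow decay)
acf_returns_20 = acf(returns, 20) # fluctuates near 0
```

The same diagnostic applied to wavelet-decomposed series is what the study uses to compare how noisy each wavelet family's output is.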
Adaptive multigrid domain decomposition solutions for viscous interacting flows
NASA Technical Reports Server (NTRS)
Rubin, Stanley G.; Srinivasan, Kumar
1992-01-01
Several viscous incompressible flows with strong pressure interaction and/or axial flow reversal are considered with an adaptive multigrid domain decomposition procedure. Specific examples include the triple deck structure surrounding the trailing edge of a flat plate, the flow recirculation in a trough geometry, and the flow in a rearward facing step channel. For the latter case, there are multiple recirculation zones, of different character, for laminar and turbulent flow conditions. A pressure-based form of flux-vector splitting is applied to the Navier-Stokes equations, which are represented by an implicit lowest-order reduced Navier-Stokes (RNS) system and a purely diffusive, higher-order, deferred-corrector. A trapezoidal or box-like form of discretization insures that all mass conservation properties are satisfied at interfacial and outflow boundaries, even for this primitive-variable, non-staggered grid computation.
NASA Astrophysics Data System (ADS)
Yaqub, Asim; Isa, Mohamed Hasnain; Ajab, Huma; Kutty, S. R. M.; Ezechi, Ezerie H.; Farooq, Robina
2018-04-01
In this study, IrO2 (iridium oxide) was coated onto a titanium plate anode from a dilute (50 mg/10 ml) IrCl3·xH2O salt solution. Coating was done at high temperature (550 °C) using thermal decomposition. The surface morphology and characteristics of the coated surface of the Ti/IrO2 anode were examined by FESEM and XRD. The coated anode was applied for electrochemical removal of organic pollutants from synthetic water samples in the 100 mL compartment of a batch electrochemical cell. About 50% COD removal was obtained with the anode prepared from the low-Ir-content solution, while 72% COD removal was obtained with the anode prepared at high Ir content. Maximum COD removal was obtained at a current density of 10 mA/cm2.
Trading strategy based on dynamic mode decomposition: Tested in Chinese stock market
NASA Astrophysics Data System (ADS)
Cui, Ling-xiao; Long, Wen
2016-11-01
Dynamic mode decomposition (DMD) is an effective method to capture the intrinsic dynamical modes of a complex system. In this work, we adopt the DMD method to discover evolutionary patterns in the stock market and apply it to the Chinese A-share stock market. We design two strategies based on the DMD algorithm. The strategy that considers only the timing problem can make reliable profits in a choppy market with no prominent trend, while it fails to beat the benchmark moving-average strategy in a bull market. After incorporating the spatial information from the spatial-temporal coherent structure of the DMD modes, we improved the trading strategy remarkably. The profitability of the DMD strategies is then quantitatively evaluated by performing the SPA test to correct for the data-snooping effect. The results further prove that the DMD algorithm can model market patterns well in a sideways market.
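The underlying algorithm is the standard exact DMD, sketched here on synthetic data (a known rotation rather than stock prices, and none of the authors' trading rules): snapshot pairs are related through an SVD-projected linear operator whose eigenvalues are the dynamic modes' growth rates and frequencies.

```python
import numpy as np

# Generate snapshots of a known linear system: a pure rotation, whose
# eigenvalues lie on the unit circle (neutral oscillation).
theta = 0.1
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
rng = np.random.default_rng(6)
snaps = [rng.standard_normal(2)]
for _ in range(50):
    snaps.append(A @ snaps[-1])
D = np.array(snaps).T                     # 2 x 51 snapshot matrix
X, Xp = D[:, :-1], D[:, 1:]               # paired snapshots: Xp = A @ X

# Exact DMD: project the propagator onto the leading SVD subspace of X.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = 2                                     # truncation rank
Atilde = U[:, :r].T @ Xp @ Vt[:r].T @ np.diag(1.0 / s[:r])
dmd_eigs = np.linalg.eigvals(Atilde)      # recovered dynamic eigenvalues
```

Because the toy data are exactly linear, `Atilde` is similar to `A` and the DMD eigenvalues reproduce cos(theta) ± i sin(theta); on market data, the magnitudes and phases of these eigenvalues are what encode the growing, decaying, and oscillating patterns the strategies trade on.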
Mass decomposition of galaxies using DECA software package
NASA Astrophysics Data System (ADS)
Mosenkov, A. V.
2014-01-01
The new DECA software package, which is designed to perform photometric analysis of the images of disk and elliptical galaxies having a regular structure, is presented. DECA is written in Python interpreted language and combines the capabilities of several widely used packages for astronomical data processing such as IRAF, SExtractor, and the GALFIT code used to perform two-dimensional decomposition of galaxy images into several photometric components (bulge+disk). DECA has the advantage that it can be applied to large samples of galaxies with different orientations with respect to the line of sight (including edge-on galaxies) and requires minimum human intervention. Examples of using the package to study a sample of simulated galaxy images and a sample of real objects are shown to demonstrate that DECA can be a reliable tool for the study of the structure of galaxies.
Wahba, Maram A; Ashour, Amira S; Napoleon, Sameh A; Abd Elnaby, Mustafa M; Guo, Yanhui
2017-12-01
Basal cell carcinoma is one of the most common malignant skin lesions. Automated lesion identification and classification using image processing techniques are needed to reduce diagnosis errors. In this study, a novel technique is applied to classify skin lesion images into two classes: malignant basal cell carcinoma and benign nevus. A hybrid combination of bi-dimensional empirical mode decomposition and gray-level difference method features is proposed after hair removal. The combined features are further classified using a quadratic support vector machine (Q-SVM). The proposed system achieved outstanding performance of 100% accuracy, sensitivity and specificity compared to other support vector machine procedures as well as to different extracted features. Basal cell carcinoma is effectively classified using Q-SVM with the proposed combined features.
Hou, Keyong; Li, Jinxu; Qu, Tuanshuai; Tang, Bin; Zhu, Liping; Huang, Yunguang; Li, Haiyang
2016-08-01
Sulfur hexafluoride (SF6) gas-insulated switchgear (GIS) is an essential piece of electrical equipment in a substation, and the concentration of SF6 decomposition products is directly relevant to the security and reliability of the substation. The detection of SF6 decomposition products can therefore be used to diagnose the condition of the GIS. The decomposition products SO2, SO2F2, and SOF2 were selected as indicators for the diagnosis. A suitcase time-of-flight mass spectrometer (TOFMS) was designed to perform online GIS failure analysis. An RF VUV lamp was used as the photoelectron ion source; the sampling inlet, ion einzel lens, and vacuum system were carefully designed to improve the performance. The limit of detection (LOD) of SO2 and SO2F2 within 200 s was 1 ppm, and the sensitivity was estimated to be at least 10-fold higher than that of the previous design. SO2 and SO2F2 respond linearly over the range of 5-100 ppm, with correlation coefficients (R²) of 0.9951 and 0.9889, respectively. The suitcase TOFMS uses an orthogonal-acceleration reflectron mass analyzer; it measures 663 × 496 × 338 mm, weighs 34 kg including the battery, and consumes only 70 W. The suitcase TOFMS was applied to analyze real decomposition products of SF6 inside a GIS and succeeded in detecting hidden faults. It has wide application prospects for establishing early warning of GIS failure. Copyright © 2016 John Wiley & Sons, Ltd.
Enhanced development of a catalyst chamber for the decomposition of up to 1.0 kg/s hydrogen peroxide
NASA Astrophysics Data System (ADS)
Božić, Ognjan; Porrmann, Dennis; Lancelle, Daniel; May, Stefan
2016-06-01
A new innovative hybrid rocket engine concept is being developed within the AHRES program of the German Aerospace Center (DLR). This rocket engine is based on hydroxyl-terminated polybutadiene (HTPB) with metallic additives as solid fuel and high test peroxide (HTP) as liquid oxidizer. Instead of a conventional ignition system, a catalyst chamber with a silver mesh catalyst is designed to decompose the HTP. The newly modified catalyst chamber is able to decompose up to 1.0 kg/s of 87.5 wt% HTP. Used as a monopropellant thruster, this equals an average thrust of 1600 N. The catalyst chamber is designed using the self-developed software tool SHAKIRA. The applied kinetic law, which governs the catalytic decomposition of HTP within the catalyst chamber, is given and discussed. Several calculations are carried out to determine the appropriate geometry for complete decomposition with a minimum of catalyst material. A number of tests under steady-state conditions are carried out, using 87.5 wt% HTP with different flow rates and a constant amount of catalyst material. To verify the decomposition, the temperature is measured and compared with the theoretical prediction. The experimental results show good agreement with the results generated by the design tool. The developed catalyst chamber provides a simple, reliable ignition system for hybrid rocket propulsion systems based on hydrogen peroxide as oxidizer, and it is capable of multiple reignitions. The developed hardware and software can be used to design full-scale monopropellant thrusters based on HTP and catalyst chambers for hybrid rocket engines.
NASA Astrophysics Data System (ADS)
Lafare, Antoine E. A.; Peach, Denis W.; Hughes, Andrew G.
2016-02-01
The daily groundwater level (GWL) response in the Permo-Triassic Sandstone aquifers of the Eden Valley, England (UK), has been studied using the seasonal-trend decomposition by LOESS (STL) technique. The hydrographs from 18 boreholes in the Permo-Triassic Sandstone were decomposed into three components: seasonality, general trend and remainder. The decomposition was analysed first visually, then using tools involving a variance ratio, time-series hierarchical clustering and correlation analysis. Differences and similarities in decomposition pattern were explained using the physical and hydrogeological information associated with each borehole. The Penrith Sandstone exhibits vertical and horizontal heterogeneity, whereas groundwater hydrographs in the more homogeneous St Bees Sandstone are characterized by a well-identified seasonality, although exceptions can be identified. A stronger trend component is obtained in the silicified parts of the northern Penrith Sandstone, while the southern Penrith, containing the Brockram (breccias) Formation, shows a greater relative variability of the seasonal component. Other boreholes drilled as shallow/deep pairs show differences in responses, revealing potential vertical heterogeneities within the Penrith Sandstone. The differences in bedrock characteristics between and within the Penrith and St Bees Sandstone formations appear to influence the GWL response. The de-seasonalized and de-trended GWL time series were then used to characterize the response, for example in terms of memory effect (autocorrelation analysis). By applying the STL method, it is possible to analyse GWL hydrographs, leading to a better conceptual understanding of the groundwater flow. Thus, variation in groundwater response can be used to gain insight into the aquifer's physical properties and to understand differences in groundwater behaviour.
NASA Astrophysics Data System (ADS)
Lehtola, Susi; Tubman, Norm M.; Whaley, K. Birgitta; Head-Gordon, Martin
2017-10-01
Approximate full configuration interaction (FCI) calculations have recently become tractable for systems of unforeseen size, thanks to stochastic and adaptive approximations to the exponentially scaling FCI problem. The result of an FCI calculation is a weighted set of electronic configurations, which can also be expressed in terms of excitations from a reference configuration. The excitation amplitudes contain information on the complexity of the electronic wave function, but this information is contaminated by contributions from disconnected excitations, i.e., those excitations that are just products of independent lower-level excitations. The unwanted contributions can be removed via a cluster decomposition procedure, making it possible to examine the importance of connected excitations in complicated multireference molecules which are outside the reach of conventional algorithms. We present an implementation of the cluster decomposition analysis and apply it to both true FCI wave functions and wave functions generated from the adaptive sampling CI algorithm. The cluster decomposition is useful for interpreting calculations in chemical studies, as a diagnostic for the convergence of various excitation manifolds, and as a guidepost for polynomially scaling electronic structure models. Applications are presented for (i) the double dissociation of water, (ii) the carbon dimer, (iii) the π space of polyacenes, and (iv) the chromium dimer. While the cluster amplitudes exhibit rapid decay with increasing rank for the first three systems, even connected octuple excitations still appear important in Cr2, suggesting that spin-restricted single-reference coupled-cluster approaches may not be tractable for some problems in transition metal chemistry.
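For double excitations, the cluster decomposition step amounts to subtracting the antisymmetrized product of singles from the CI doubles (T2 = C2 − C1∧C1, under intermediate normalization). The toy amplitudes below are random illustrative arrays, not wave functions from the paper:

```python
import numpy as np

def connected_doubles(c1, c2):
    """Cluster decomposition for doubles: T2 = C2 - (C1 ^ C1).

    c1[i, a]       : CI singles amplitudes (intermediate normalization, c0 = 1)
    c2[i, j, a, b] : antisymmetrized CI doubles amplitudes
    Returns the connected double-excitation amplitudes.
    """
    disc = (np.einsum('ia,jb->ijab', c1, c1)
            - np.einsum('ib,ja->ijab', c1, c1))   # disconnected singles product
    return c2 - disc

# sanity check: doubles built purely from singles have no connected part
rng = np.random.default_rng(1)
no, nv = 3, 4                                     # occupied / virtual counts
t1 = rng.standard_normal((no, nv))
c2_disc = (np.einsum('ia,jb->ijab', t1, t1)
           - np.einsum('ib,ja->ijab', t1, t1))
t2 = connected_doubles(t1, c2_disc)

# while a genuinely connected piece passes through unchanged
c2_conn = rng.standard_normal((no, no, nv, nv))
t2_mixed = connected_doubles(t1, c2_disc + c2_conn)
```

This is exactly the sense in which the procedure strips "products of independent lower-level excitations" from the amplitudes; higher ranks subtract products of all lower-rank connected amplitudes analogously.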
Alum treatment of poultry litter: decomposition and nitrogen dynamics.
Gilmour, J T; Koehler, M A; Cabrera, M L; Szajdak, L; Moore, P A
2004-01-01
While the poultry industry is a major economic benefit to several areas in the USA, land application of poultry litter to recycle nutrients can lead to impaired surface and ground water quality. Amending poultry litter with alum [Al2(SO4)3·14H2O] has received considerable attention as a method of economically reducing ammonia volatilization in the poultry house and soluble phosphorus in runoff waters. The objective of this study was to characterize the effect of alum on broiler litter decomposition and N dynamics under laboratory conditions. Litter that had been amended with alum in the poultry house after each of the first four of five flock cycles (Experiment I) and litter that had been amended with alum after removal from a poultry house after the third flock cycle (Experiment II) were compared with unamended litter in separate studies. The litters in Experiment I were surface-applied to simulate application to grasslands, while the litters in Experiment II were incorporated to simulate application to conventionally tilled crops. The only statistically significant differences in decomposition due to alum occurred early in Experiment II, and the differences were small. The only statistically significant difference in net N mineralization, soil inorganic N, and soil NH4+-N in either experiment was found in Experiment I after 70 d of incubation, where soil inorganic N was significantly greater for the alum treatment. Thus, alum had little effect on decomposition or N dynamics. Results of many of the studies on litter not amended with alum should be applicable to litters amended with alum to reduce P availability.
Evaluating the performance of distributed approaches for modal identification
NASA Astrophysics Data System (ADS)
Krishnan, Sriram S.; Sun, Zhuoxiong; Irfanoglu, Ayhan; Dyke, Shirley J.; Yan, Guirong
2011-04-01
In this paper, two modal identification approaches appropriate for use in a distributed computing environment are applied to a full-scale, complex structure. The natural excitation technique (NExT) used in conjunction with a condensed eigensystem realization algorithm (ERA), and the frequency domain decomposition with peak-picking (FDD-PP), are both applied to sensor data acquired from a 57.5 ft, 10-bay highway sign truss structure. Monte Carlo simulations are performed on a numerical example to investigate the statistical properties and sensitivity to noise of the two distributed algorithms. Experimental results are provided and discussed.
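A bare-bones FDD-PP pass (cross-spectral density matrix, SVD per frequency line, peak-picking on the first singular value) can be sketched as follows. The two-channel synthetic response, its modal frequencies, and the mode-shape coefficients are invented for illustration and do not come from the truss data:

```python
import numpy as np
from scipy.signal import csd, find_peaks

# two-channel synthetic response of a structure with modes near 3 Hz and 8 Hz
fs = 100.0
t = np.arange(0, 60.0, 1/fs)
rng = np.random.default_rng(2)
mode1 = np.sin(2*np.pi*3.0*t + rng.uniform(0, 2*np.pi))
mode2 = np.sin(2*np.pi*8.0*t + rng.uniform(0, 2*np.pi))
y = np.stack([mode1 + 0.5*mode2,
              0.8*mode1 - 0.6*mode2])
y += 0.1*rng.standard_normal(y.shape)

# cross-spectral density matrix G(f), then an SVD at every frequency line
nch = y.shape[0]
freqs, _ = csd(y[0], y[0], fs=fs, nperseg=512)
G = np.zeros((freqs.size, nch, nch), dtype=complex)
for i in range(nch):
    for j in range(nch):
        _, G[:, i, j] = csd(y[i], y[j], fs=fs, nperseg=512)

s1 = np.linalg.svd(G, compute_uv=False)[:, 0]   # first singular value spectrum
peaks, _ = find_peaks(s1)
top2 = peaks[np.argsort(s1[peaks])[-2:]]        # peak-picking: two largest peaks
peak_freqs = np.sort(freqs[top2])
```

The first singular value peaks at the natural frequencies, and the associated singular vectors (not extracted here) approximate the mode shapes, which is what makes FDD attractive for distributed processing of multi-channel data.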
Elastic and acoustic wavefield decompositions and application to reverse time migrations
NASA Astrophysics Data System (ADS)
Wang, Wenlong
P- and S-waves coexist in elastic wavefields, and separation between them is an essential step in elastic reverse-time migrations (RTMs). Unlike the traditional separation methods that use curl and divergence operators, which do not preserve the wavefield vector component information, we propose and compare two vector decomposition methods, which preserve the same vector components that exist in the input elastic wavefield. The amplitude and phase information is automatically preserved, so no amplitude or phase corrections are required. The decoupled propagation method is extended from elastic to viscoelastic wavefields. To use the decomposed P and S vector wavefields and generate PP and PS images, we create a new 2D migration context for isotropic, elastic RTM which includes P/S vector decomposition; the propagation directions of both incident and reflected P- and S-waves are calculated directly from the stress and particle-velocity definitions of the decomposed P- and S-wave Poynting vectors. Then an excitation-amplitude imaging condition that scales the receiver wavelet by the source vector magnitude produces angle-dependent images of PP and PS reflection coefficients with the correct polarities, polarization, and amplitudes. It thus simplifies the process of obtaining PP and PS angle-domain common-image gathers (ADCIGs); it is less effort to generate ADCIGs from vector data than from scalar data. Besides P- and S-wave decomposition, separation of up- and down-going waves is also part of the processing of multi-component recorded data and propagating wavefields. A complex-trace-based up/down separation approach is extended from acoustic to elastic wavefields and combined with P- and S-wave decomposition by decoupled propagation. This eliminates the need for a Fourier transform over time, thereby significantly reducing the storage cost and improving computational efficiency.
Wavefield decomposition is applied to both synthetic elastic VSP data and propagating wavefield snapshots. Poynting vectors obtained from the particle-velocity and stress fields after P/S and up/down decompositions are much more accurate than those without. The up/down separation algorithm is also applicable in acoustic RTMs, where both (forward-time extrapolated) source and (reverse-time extrapolated) receiver wavefields are decomposed into up-going and down-going parts. Together with the crosscorrelation imaging condition, four images (down-up, up-down, up-up and down-down) are generated, which facilitate the analysis of artifacts and the imaging ability of the four images. Artifacts may exist in all the decomposed images, but their positions and types are different. The causes of artifacts in different images are explained and illustrated with sketches and numerical tests.
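One common way to realize a component-preserving P/S split is a Helmholtz projection in the wavenumber domain; this generic sketch is not the decoupled-propagation method of the thesis, and the plane-wave test field below is an illustrative assumption:

```python
import numpy as np

def helmholtz_decompose(ux, uz):
    """Split a 2-D vector wavefield into curl-free (P) and div-free (S) parts.

    The projection is done in the wavenumber domain and returns full vector
    components for both parts, unlike plain divergence/curl separation.
    """
    nz, nx = ux.shape
    kx = 2*np.pi*np.fft.fftfreq(nx)
    kz = 2*np.pi*np.fft.fftfreq(nz)
    KZ, KX = np.meshgrid(kz, kx, indexing='ij')
    k2 = KX**2 + KZ**2
    k2[0, 0] = 1.0                               # avoid 0/0 at the DC term
    Ux, Uz = np.fft.fft2(ux), np.fft.fft2(uz)
    div = KX*Ux + KZ*Uz                          # k . u_hat
    Px, Pz = KX*div/k2, KZ*div/k2                # projection onto k: P part
    px, pz = np.fft.ifft2(Px).real, np.fft.ifft2(Pz).real
    return (px, pz), (ux - px, uz - pz)

# a plane P wave (particle motion parallel to k) should land entirely in P
n = 64
idx = np.arange(n)
Z, X = np.meshgrid(idx, idx, indexing='ij')
kx0, kz0 = 2*np.pi*4/n, 2*np.pi*6/n
knorm = np.hypot(kx0, kz0)
phase = kx0*X + kz0*Z
ux = (kx0/knorm)*np.sin(phase)
uz = (kz0/knorm)*np.sin(phase)
(px, pz), (sx, sz) = helmholtz_decompose(ux, uz)
```

Because the outputs are vector fields with the original components, Poynting-vector directions can still be computed from them afterwards, which is the property the thesis exploits.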
Zhang, Shuangyue; Han, Dong; Politte, David G; Williamson, Jeffrey F; O'Sullivan, Joseph A
2018-05-01
The purpose of this study was to assess the performance of a novel dual-energy CT (DECT) approach for proton stopping power ratio (SPR) mapping that integrates image reconstruction and material characterization using a joint statistical image reconstruction (JSIR) method based on a linear basis vector model (BVM). A systematic comparison between the JSIR-BVM method and previously described DECT image- and sinogram-domain decomposition approaches is also carried out on synthetic data. The JSIR-BVM method was implemented to estimate the electron densities and mean excitation energies (I-values) required by the Bethe equation for SPR mapping. In addition, image- and sinogram-domain DECT methods based on three available SPR models, including BVM, were implemented for comparison. The intrinsic SPR modeling accuracy of the three models was first validated. Synthetic DECT transmission sinograms of two 330 mm diameter phantoms, each containing 17 soft and bony tissues (for a total of 34) of known composition, were then generated with spectra of 90 and 140 kVp. The estimation accuracy of the reconstructed SPR images was evaluated for the seven investigated methods. The impact of phantom size and insert location on SPR estimation accuracy was also investigated. All three selected DECT-SPR models predict the SPR of all tissue types with less than 0.2% RMS errors under idealized conditions with no reconstruction uncertainties. When applied to synthetic sinograms, the JSIR-BVM method achieves the best performance, with mean and RMS-average errors of less than 0.05% and 0.3%, respectively, for all noise levels, while the image- and sinogram-domain decomposition methods show increasing mean and RMS-average errors with increasing noise level. The JSIR-BVM method also reduces statistical SPR variation by sixfold compared to other methods.
A 25% phantom diameter change causes up to 4% SPR differences for the image-domain decomposition approach, while the JSIR-BVM method and sinogram-domain decomposition methods are insensitive to size change. Among all the investigated methods, the JSIR-BVM method achieves the best performance for SPR estimation in our simulation phantom study. This novel method is robust with respect to sinogram noise and residual beam-hardening effects, yielding SPR estimation errors comparable to intrinsic BVM modeling error. In contrast, the achievable SPR estimation accuracy of the image- and sinogram-domain decomposition methods is dominated by the CT image intensity uncertainties introduced by the reconstruction and decomposition processes. © 2018 American Association of Physicists in Medicine.
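The final SPR mapping step applies the Bethe equation to the estimated electron density and I-value. A minimal sketch follows; the proton energy, the water I-value, and the "bone-like" inputs are assumed round numbers for illustration, not the paper's calibration:

```python
import numpy as np

ME_C2 = 0.511e6     # electron rest energy, eV
I_WATER = 75.0      # assumed mean excitation energy of water, eV

def spr(rho_e_rel, I_ev, kinetic_mev=150.0):
    """Proton stopping-power ratio to water from the Bethe equation.

    rho_e_rel : electron density relative to water
    I_ev      : mean excitation energy of the material, eV
    """
    m_p = 938.272e6                      # proton rest energy, eV
    gamma = 1.0 + kinetic_mev*1e6/m_p
    beta2 = 1.0 - 1.0/gamma**2
    def L(I):                            # stopping number per electron
        return np.log(2*ME_C2*beta2/(I*(1.0 - beta2))) - beta2
    return rho_e_rel*L(I_ev)/L(I_WATER)

spr_water = spr(1.0, I_WATER)            # exactly 1 by construction
spr_bone = spr(1.78, 112.0)              # cortical-bone-like round numbers
```

The ratio form shows why accurate electron densities matter most: the SPR scales linearly with relative electron density, while the I-value enters only logarithmically.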
NASA Astrophysics Data System (ADS)
Williams, E. K.; Rosenheim, B. E.
2011-12-01
Ramped pyrolysis methodology, such as that used in the programmed-temperature pyrolysis/combustion system (PTP/CS), improves radiocarbon analysis of geologic materials devoid of authigenic carbonate compounds and with low concentrations of extractable autochthonous organic molecules. The approach has improved sediment chronology in organic-rich sediments proximal to Antarctic ice shelves (Rosenheim et al., 2008) and constrained the carbon sequestration potential of suspended sediments in the lower Mississippi River (Roe et al., in review). Although ramped pyrolysis allows for separation of sedimentary organic material based upon relative reactivity, chemical information (i.e., the chemical composition of pyrolysis products) is lost during the in-line combustion of pyrolysis products. A first-order approximation of ramped pyrolysis/combustion system CO2 evolution, employing a simple Gaussian decomposition routine, has been useful (Rosenheim et al., 2008), but improvements may be possible. First, without prior compound-specific extractions, the molecular composition of sedimentary organic matter is unknown and/or unidentifiable. Second, even if determined as constituents of sedimentary organic material, many organic compounds have unknown or variable decomposition temperatures. Third, mixtures of organic compounds may result in significant chemistry within the pyrolysis reactor, prior to introduction of oxygen along the flow path. Gaussian decomposition of the reaction rate may be too simple to fully explain the combination of these factors. To relate both the radiocarbon age over different temperature intervals and the pyrolysis reaction thermograph (temperature (°C) vs. CO2 evolved (μmol)) obtained from PTP/CS to the chemical composition of sedimentary organic material, we present a modeling framework based upon the ramped pyrolysis decomposition of simple mixtures of organic compounds (e.g., cellulose, lignin, plant fatty acids) often found in sedimentary organic material, to account for changes in thermograph shape. The decompositions will be compositionally verified by 13C NMR analysis of pyrolysis residues from interrupted reactions. This will allow for constraint of the decomposition temperatures of individual compounds as well as of chemical reactions between volatilized moieties in mixtures of these compounds. We will apply this framework with 13C NMR analysis of interrupted pyrolysis residues and radiocarbon data from PTP/CS analysis of sedimentary organic material from a freshwater marsh wetland in Barataria Bay, Louisiana. We expect to characterize the bulk chemical composition during pyrolysis as well as diagenetic changes with depth. Most importantly, we expect to constrain the potential and the limitations of this modeling framework for application to other depositional environments.
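The "simple Gaussian decomposition routine" referred to above can be reproduced as a nonlinear least-squares fit of a sum of Gaussians to the thermogram. The two-component synthetic CO2 curve, its amplitudes, centers, and widths are illustrative assumptions, not measured PTP/CS data:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussians(T, *p):
    """Sum of Gaussians; p packs (amplitude, center, width) per component."""
    y = np.zeros_like(T)
    for a, mu, sig in zip(p[0::3], p[1::3], p[2::3]):
        y = y + a*np.exp(-0.5*((T - mu)/sig)**2)
    return y

# synthetic thermogram: CO2 evolution vs temperature, two components + noise
T = np.linspace(100.0, 800.0, 350)
truth = [3.0, 330.0, 40.0, 1.5, 520.0, 60.0]
co2 = gaussians(T, *truth) + 0.02*np.random.default_rng(3).standard_normal(T.size)

popt, _ = curve_fit(gaussians, T, co2, p0=[2, 300, 50, 1, 550, 50])
centers = np.sort(popt[1::3])
```

The fitted centers recover the component decomposition temperatures; the abstract's point is precisely that for real mixtures, where components react with each other in the reactor, such a sum-of-Gaussians picture may be too simple.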
NASA Astrophysics Data System (ADS)
Li, Hongguang; Li, Ming; Li, Cheng; Li, Fucai; Meng, Guang
2017-09-01
This paper addresses multi-fault decoupling of a turbo-expander rotor system using Differential-based Ensemble Empirical Mode Decomposition (DEEMD). DEEMD is an improved version of DEMD that resolves the problem of mode mixing. The nonlinear behaviors of the turbo-expander considering a temperature gradient with crack, rub-impact and pedestal looseness faults are investigated respectively, so that a baseline for multi-fault decoupling can be established. DEEMD is then applied to the vibration signals of the rotor system with coupling faults acquired by numerical simulation, and the results indicate that DEEMD can successfully decouple the coupling faults and is more efficient than EEMD. DEEMD is also applied to the vibration signal of misalignment coupled with a rub-impact fault obtained during adjustment of the experimental system. The results show that DEEMD can decompose practical multi-fault signals, and its industrial prospects are verified as well.
Linear dynamical modes as new variables for data-driven ENSO forecast
NASA Astrophysics Data System (ADS)
Gavrilov, Andrey; Seleznev, Aleksei; Mukhin, Dmitry; Loskutov, Evgeny; Feigin, Alexander; Kurths, Juergen
2018-05-01
A new data-driven model for analysis and prediction of spatially distributed time series is proposed. The model is based on a linear dynamical mode (LDM) decomposition of the observed data which is derived from a recently developed nonlinear dimensionality reduction approach. The key point of this approach is its ability to take into account simple dynamical properties of the observed system by means of revealing the system's dominant time scales. The LDMs are used as new variables for empirical construction of a nonlinear stochastic evolution operator. The method is applied to the sea surface temperature anomaly field in the tropical belt, where the El Niño Southern Oscillation (ENSO) is the main mode of variability. The advantage of LDMs versus the traditionally used empirical orthogonal function decomposition is demonstrated for these data. Specifically, it is shown that the new model has a competitive ENSO forecast skill in comparison with other existing ENSO models.
A direct method for unfolding the resolution function from measurements of neutron induced reactions
NASA Astrophysics Data System (ADS)
Žugec, P.; Colonna, N.; Sabate-Gilarte, M.; Vlachoudis, V.; Massimi, C.; Lerendegui-Marco, J.; Stamatopoulos, A.; Bacak, M.; Warren, S. G.; n TOF Collaboration
2017-12-01
The paper explores the numerical stability and the computational efficiency of a direct method for unfolding the resolution function from measurements of neutron-induced reactions. A detailed resolution function formalism is laid out, followed by an overview of challenges present in a practical implementation of the method. A special matrix storage scheme is developed in order both to facilitate the memory management of the resolution function matrix and to increase the computational efficiency of the matrix multiplication and decomposition procedures. Due to its admirable computational properties, a Cholesky decomposition is at the heart of the unfolding procedure. With the smallest necessary modification of the matrix to be decomposed, the method is successfully applied to a system of size 10⁵ × 10⁵. However, the amplification of uncertainties during the direct inversion procedure limits the applicability of the method to high-precision measurements of neutron-induced reactions.
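The heart of such an unfolding, a Cholesky solve of a slightly regularized normal-equation system, can be sketched at small scale. The Gaussian resolution matrix, its width, and the ridge size below are illustrative assumptions, not the n_TOF resolution function:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def unfold(R, b, ridge=1e-10):
    """Unfold b = R @ y by Cholesky-solving the regularized normal equations.

    The small ridge plays the role of the 'smallest necessary modification'
    that keeps the matrix positive definite and hence decomposable.
    """
    M = R.T @ R + ridge*np.eye(R.shape[1])
    c, low = cho_factor(M)
    return cho_solve((c, low), R.T @ b)

# toy resolution function: Gaussian smearing applied to a reaction yield
n = 200
i = np.arange(n)
R = np.exp(-0.5*((i[:, None] - i[None, :])/3.0)**2)
R /= R.sum(axis=1, keepdims=True)                 # rows normalized
y_true = np.exp(-0.5*((i - 80)/10.0)**2)          # true yield curve
b = R @ y_true                                    # smeared "measurement"
y_rec = unfold(R, b)
```

On noise-free data the smooth yield is recovered almost exactly; the uncertainty amplification noted in the abstract appears as soon as statistical noise is added to b, because the inverse strongly magnifies high-frequency components.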
Thermokinetic analysis and product characterization of Medium Density Fiberboard pyrolysis.
Aslan, Dilan Irmak; Özoğul, Buğçe; Ceylan, Selim; Geyikçi, Feza
2018-06-01
This study investigates the pyrolysis of Medium Density Fiberboard (MDF) as a potential waste management solution. The thermal behaviour of MDF was analysed via TG/DSC. The primary decomposition step occurred between 190 °C and 425 °C. Evolved gaseous products over this step were evaluated by an FTIR spectrometer coupled with the TGA. Peaks for phenolics, alcohols and aldehydes were detected at the maximum decomposition temperature. Py-GC/MS analysis revealed phenols, ketones and cyclic compounds as the primary non-condensable pyrolysis products. The kinetics of pyrolysis were investigated with the widely applied Distributed Activation Energy Model, resulting in an average activation energy of 127.40 kJ mol⁻¹ and a pre-exponential factor of 8.4 × 10¹¹. The results of this study suggest that pyrolyzing MDF could potentially provide renewable fuels and prevent environmental problems related to MDF disposal. Copyright © 2018 Elsevier Ltd. All rights reserved.
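A DAEM conversion curve can be computed numerically from the fitted kinetic parameters. The sketch below uses the abstract's activation energy and pre-exponential factor, but the Gaussian energy-distribution width and the heating rate (10 K/min) are assumed values for illustration:

```python
import numpy as np
from scipy.integrate import trapezoid

R = 8.314  # gas constant, J mol^-1 K^-1

def daem_conversion(T_K, A, E0, sigma, beta):
    """DAEM conversion alpha(T) with a Gaussian activation-energy distribution.

    A     : pre-exponential factor (1/s)
    E0    : mean activation energy (J/mol); sigma its width (J/mol)
    beta  : constant heating rate (K/s)
    """
    E = np.linspace(E0 - 4*sigma, E0 + 4*sigma, 200)
    f = np.exp(-0.5*((E - E0)/sigma)**2) / (sigma*np.sqrt(2*np.pi))
    alpha = np.empty(T_K.size)
    for n, T in enumerate(T_K):
        Tg = np.linspace(300.0, T, 400)                 # inner T' grid
        k = np.exp(-E[:, None] / (R*Tg[None, :]))       # exp(-E/RT') for each E
        inner = trapezoid(k, Tg, axis=1)
        surv = np.exp(-(A/beta)*inner)                  # unreacted fraction at E
        alpha[n] = 1.0 - trapezoid(surv*f, E)
    return alpha

T = np.linspace(400.0, 900.0, 26)                       # temperatures, K
alpha = daem_conversion(T, A=8.4e11, E0=127.4e3, sigma=10e3, beta=10/60.0)
```

The conversion rises monotonically over roughly the temperature window reported for the primary decomposition step, which is the consistency check a DAEM fit is expected to pass.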
Data-driven sensor placement from coherent fluid structures
NASA Astrophysics Data System (ADS)
Manohar, Krithika; Kaiser, Eurika; Brunton, Bingni W.; Kutz, J. Nathan; Brunton, Steven L.
2017-11-01
Optimal sensor placement is a central challenge in the prediction, estimation and control of fluid flows. We reinterpret sensor placement as optimizing discrete samples of coherent fluid structures for full state reconstruction. This permits a drastic reduction in the number of sensors required for faithful reconstruction, since complex fluid interactions can often be described by a small number of coherent structures. Our work optimizes point sensors using the pivoted matrix QR factorization to sample coherent structures directly computed from flow data. We apply this sampling technique in conjunction with various data-driven modal identification methods, including the proper orthogonal decomposition (POD) and dynamic mode decomposition (DMD). In contrast to POD-based sensors, DMD demonstrably enables the optimization of sensors for prediction in systems exhibiting multiple scales of dynamics. Finally, reconstruction accuracy from pivot sensors is shown to be competitive with sensors obtained using traditional computationally prohibitive optimization methods.
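The pivoted-QR sensor selection described above reduces to a single library call on the mode matrix. The 1-D sine "modes" below are illustrative stand-ins for POD or DMD modes of real flow data:

```python
import numpy as np
from scipy.linalg import qr

def qr_sensors(modes, n_sensors):
    """Greedy point-sensor placement via column-pivoted QR of the mode matrix.

    modes : (n_states, r) matrix of coherent structures (e.g. POD/DMD modes)
    Returns the indices of the chosen sensor locations.
    """
    _, _, piv = qr(modes.T, mode='economic', pivoting=True)
    return piv[:n_sensors]

# toy "flow": three spatial structures on a 1-D domain
x = np.linspace(0, 1, 100)
Psi = np.stack([np.sin(np.pi*x), np.sin(2*np.pi*x), np.sin(3*np.pi*x)], axis=1)
sensors = qr_sensors(Psi, 3)

# reconstruct a field lying in span(Psi) from its three point samples alone
a_true = np.array([1.0, -0.5, 0.25])
field = Psi @ a_true
a_est = np.linalg.solve(Psi[sensors], field[sensors])
```

The QR pivots pick rows that keep the sampled mode matrix well conditioned, which is why as few sensors as modes suffice for exact reconstruction of any field in the modal subspace.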
Data-driven Analysis and Prediction of Arctic Sea Ice
NASA Astrophysics Data System (ADS)
Kondrashov, D. A.; Chekroun, M.; Ghil, M.; Yuan, X.; Ting, M.
2015-12-01
We present results of data-driven predictive analyses of sea ice over the main Arctic regions. Our approach relies on the Multilayer Stochastic Modeling (MSM) framework of Kondrashov, Chekroun and Ghil [Physica D, 2015] and leads to prognostic models of sea ice concentration (SIC) anomalies on seasonal time scales. This approach is applied to monthly time series of leading principal components from the multivariate Empirical Orthogonal Function decomposition of SIC and selected climate variables over the Arctic. We evaluate the predictive skill of MSM models by performing retrospective forecasts with "no look-ahead" for up to 6 months ahead. It will be shown in particular that the memory effects included in our non-Markovian linear MSM models improve predictions of large-amplitude SIC anomalies in certain Arctic regions. Further improvements allowed by the MSM framework will adopt a nonlinear formulation, as well as alternative data-adaptive decompositions.
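The first stage of such a pipeline, an EOF decomposition of the anomaly field into principal components, can be sketched via the SVD. The synthetic "anomaly" field below (one seasonal spatial pattern plus noise) is invented for illustration, not Arctic SIC data:

```python
import numpy as np

def eof_decomposition(field, n_modes):
    """EOFs and principal components of a (time, space) anomaly field via SVD."""
    anom = field - field.mean(axis=0)              # remove the time-mean state
    U, s, Vh = np.linalg.svd(anom, full_matrices=False)
    pcs = U[:, :n_modes] * s[:n_modes]             # principal-component series
    eofs = Vh[:n_modes]                            # spatial patterns
    var_frac = s[:n_modes]**2 / np.sum(s**2)       # explained-variance fraction
    return pcs, eofs, var_frac

# synthetic anomaly field dominated by one seasonal spatial pattern
rng = np.random.default_rng(5)
nt, nx = 120, 50
pattern = np.sin(np.linspace(0, np.pi, nx))
signal = np.outer(np.sin(2*np.pi*np.arange(nt)/12), pattern)
field = signal + 0.05*rng.standard_normal((nt, nx))
pcs, eofs, var_frac = eof_decomposition(field, n_modes=3)
```

The leading principal components obtained this way are the low-dimensional time series on which a stochastic evolution model such as MSM is then trained.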
Characterising laser beams with liquid crystal displays
NASA Astrophysics Data System (ADS)
Dudley, Angela; Naidoo, Darryl; Forbes, Andrew
2016-02-01
We show how one can determine the various properties of light, from the modal content of laser beams to decoding the information stored in optical fields carrying orbital angular momentum, by performing a modal decomposition. Although the modal decomposition of light has been known for a long time, applied mostly to pattern recognition, we illustrate how this technique can be implemented with the use of liquid-crystal displays. We show experimentally how liquid crystal displays can be used to infer the intensity, phase, wavefront, Poynting vector, and orbital angular momentum density of unknown optical fields. This measurement technique makes use of a single spatial light modulator (liquid crystal display), a Fourier transforming lens and detector (CCD or photo-diode). Such a diagnostic tool is extremely relevant to the real-time analysis of solid-state and fibre laser systems as well as mode division multiplexing as an emerging technology in optical communication.
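Numerically, the modal decomposition measured with the SLM amounts to overlap integrals of the unknown field with each basis mode. A 1-D sketch with assumed Hermite-Gauss-like modes (the field, modes, and grid are illustrative, not a measured beam):

```python
import numpy as np

def modal_powers(field, modes, dx):
    """Modal powers |c_n|^2 from overlap integrals on a sampled 1-D grid."""
    c = np.array([np.sum(np.conj(m)*field)*dx for m in modes])
    return np.abs(c)**2

x = np.linspace(-8, 8, 2000)
dx = x[1] - x[0]
g0 = np.exp(-x**2/2)                       # Gaussian (fundamental) mode
g1 = x*np.exp(-x**2/2)                     # first higher-order mode
g0 /= np.sqrt(np.sum(g0**2)*dx)            # numerical normalization
g1 /= np.sqrt(np.sum(g1**2)*dx)

# known superposition: 70% power in g0, 30% in g1, with a relative phase
field = np.sqrt(0.7)*g0 + np.sqrt(0.3)*np.exp(1j*0.5)*g1
powers = modal_powers(field, [g0, g1], dx)
```

Experimentally, the SLM displays the conjugate mode as a phase mask and the Fourier lens performs the integral optically, with the on-axis detector intensity giving |c_n|².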
Soil Organic Matter recovery on eroding alluvial surfaces on Iceland
NASA Astrophysics Data System (ADS)
Kuhn, N. J.; Würsch, M.; Hunziker, M.; Þórsson, J.
2012-04-01
Soil erosion has been assessed to have no significant effect on greenhouse gas releases, owing to the balance between decomposition, burial, and uptake from the atmosphere through photosynthesis by vegetation and subsequent litter decomposition. The validity of this "zero-emission" balance of soil erosion is limited to sites where vegetation growth is not limited by soil degradation. In this study, the recovery of soil organic matter on sites subject to severe erosion and subsequent soil reclamation by the introduction of Lupinus nootkatensis is studied. Preliminary results indicate that the recovery is extremely slow (on the scale of decades). In particular, incipient soil development, including the limited availability of nitrogen, appears to limit the establishment of a closed vegetation cover. These results therefore indicate that in situations where land degradation leads to a complete destruction of the fertile soil layer, the assumption of dynamic replacement of eroded soil carbon stocks cannot be applied.
NASA Technical Reports Server (NTRS)
Shen, Bo-Wen; Cheung, Samson; Li, Jui-Lin F.; Wu, Yu-ling
2013-01-01
In this study, we discuss the performance of the parallel ensemble empirical mode decomposition (EMD) in the analysis of tropical waves that are associated with tropical cyclone (TC) formation. To efficiently analyze high-resolution, global, multidimensional data sets, we first implement multilevel parallelism in the ensemble EMD (EEMD) and obtain a parallel speedup of 720 using 200 eight-core processors. We then apply the parallel EEMD (PEEMD) to extract the intrinsic mode functions (IMFs) from preselected data sets that represent (1) idealized tropical waves and (2) large-scale environmental flows associated with Hurricane Sandy (2012). Results indicate that the PEEMD is efficient and effective in revealing the major wave characteristics of the data, such as wavelengths and periods, by sifting out the dominant (wave) components. This approach has potential for hurricane climate study by examining the statistical relationship between tropical waves and TC formation.
Scattering amplitudes from multivariate polynomial division
NASA Astrophysics Data System (ADS)
Mastrolia, Pierpaolo; Mirabella, Edoardo; Ossola, Giovanni; Peraro, Tiziano
2012-11-01
We show that the evaluation of scattering amplitudes can be formulated as a problem of multivariate polynomial division, with the components of the integration-momenta as indeterminates. We present a recurrence relation which, independently of the number of loops, leads to the multi-particle pole decomposition of the integrands of the scattering amplitudes. The recursive algorithm is based on the weak Nullstellensatz theorem and on the division modulo the Gröbner basis associated to all possible multi-particle cuts. We apply it to dimensionally regulated one-loop amplitudes, recovering the well-known integrand-decomposition formula. Finally, we focus on the maximum-cut, defined as a system of on-shell conditions constraining the components of all the integration-momenta. By means of the Finiteness Theorem and of the Shape Lemma, we prove that the residue at the maximum-cut is parametrized by a number of coefficients equal to the number of solutions of the cut itself.
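The underlying operation, division of an integrand modulo the Gröbner basis of the cut conditions, can be tried directly in sympy. The polynomials below are toy stand-ins for actual loop integrands and on-shell constraints:

```python
from sympy import symbols, groebner, reduced, expand

x, y = symbols('x y')

# toy polynomial "cut" conditions standing in for on-shell constraints
cuts = [x**2 - y, y**2 - 1]
G = groebner(cuts, x, y, order='lex')

# divide a toy "integrand" modulo the Groebner basis of the cut ideal
f = x**2 + y**2
quotients, remainder = reduced(f, list(G.exprs), x, y, order='lex')

# the multivariate division identity f = sum(q_i g_i) + r holds exactly
check = expand(sum(q*g for q, g in zip(quotients, G.exprs)) + remainder - f)
```

The remainder is the part of the integrand not generated by the cut ideal; in the amplitude context, the quotients carry the contributions that vanish on the cuts, which is how the integrand-decomposition formula emerges.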
Entanglement branching operator
NASA Astrophysics Data System (ADS)
Harada, Kenji
2018-01-01
We introduce an entanglement branching operator to split a composite entanglement flow in a tensor network which is a promising theoretical tool for many-body systems. We can optimize an entanglement branching operator by solving a minimization problem based on squeezing operators. The entanglement branching is a new useful operation to manipulate a tensor network. For example, finding a particular entanglement structure by an entanglement branching operator, we can improve a higher-order tensor renormalization group method to catch a proper renormalization flow in a tensor network space. This new method yields a new type of tensor network states. The second example is a many-body decomposition of a tensor by using an entanglement branching operator. We can use it for a perfect disentangling among tensors. Applying a many-body decomposition recursively, we conceptually derive projected entangled pair states from quantum states that satisfy the area law of entanglement entropy.
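The elementary operation that entanglement branching generalizes, splitting a tensor across a new bond via the SVD, looks like this; the leg grouping, bond truncation, and random test tensor are generic illustrations, not the paper's branching operator:

```python
import numpy as np

def split_tensor(T, row_legs, chi=None):
    """Split tensor T into two tensors joined by a new bond, via SVD.

    row_legs : legs grouped on the left of the matricization
    chi      : bond dimension to keep (None keeps full rank)
    """
    perm = list(row_legs) + [l for l in range(T.ndim) if l not in row_legs]
    Tp = np.transpose(T, perm)
    lshape = Tp.shape[:len(row_legs)]
    rshape = Tp.shape[len(row_legs):]
    M = Tp.reshape(int(np.prod(lshape)), -1)
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    if chi is not None:
        U, s, Vh = U[:, :chi], s[:chi], Vh[:chi]
    A = U.reshape(*lshape, -1)                  # left tensor, new bond last
    B = (s[:, None]*Vh).reshape(-1, *rshape)    # right tensor, new bond first
    return A, B

rng = np.random.default_rng(4)
T = rng.standard_normal((2, 3, 4, 5))
A, B = split_tensor(T, row_legs=(0, 1))         # split legs (0,1) | (2,3)
T_rec = np.einsum('ijk,klm->ijlm', A, B)        # contract over the new bond
```

A branching operator goes beyond this binary split by routing entanglement into more than two directions at once, but the SVD split above is the primitive each branch locally resembles.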
NASA Astrophysics Data System (ADS)
Emge, Darren K.; Adalı, Tülay
2014-06-01
As the availability and use of imaging methodologies continue to increase, there is a fundamental need to jointly analyze data collected from multiple modalities. This analysis is further complicated when the size or resolution of the images differ, meaning that the observation lengths of the modalities can vary widely. To address this expanding landscape, we introduce the multiset singular value decomposition (MSVD), which can perform a joint analysis on any number of modalities regardless of their individual observation lengths. Through simulations, we show the intermodal relationships across the different modalities that are revealed by the MSVD. We apply the MSVD to forensic fingerprint analysis, showing that MSVD joint analysis successfully identifies relevant similarities for further analysis, significantly reducing the processing time required. This reduction takes the technique from a laboratory method to a useful forensic tool with applications across the law enforcement and security regimes.
Segmented Domain Decomposition Multigrid For 3-D Turbomachinery Flows
NASA Technical Reports Server (NTRS)
Celestina, M. L.; Adamczyk, J. J.; Rubin, S. G.
2001-01-01
A Segmented Domain Decomposition Multigrid (SDDMG) procedure was developed for three-dimensional viscous flow problems as they apply to turbomachinery flows. The procedure divides the computational domain into a coarse mesh comprised of uniformly spaced cells. To resolve smaller length scales such as the viscous layer near a surface, segments of the coarse mesh are subdivided into a finer mesh. This is repeated until adequate resolution of the smallest relevant length scale is obtained. Multigrid is used to communicate information between the different grid levels. To test the procedure, simulation results will be presented for a compressor and turbine cascade. These simulations are intended to show the ability of the present method to generate grid independent solutions. Comparisons with data will also be presented. These comparisons will further demonstrate the usefulness of the present work for they allow an estimate of the accuracy of the flow modeling equations independent of error attributed to numerical discretization.
NASA Astrophysics Data System (ADS)
Fu, Yao; Song, Jeong-Hoon
2014-08-01
The Hardy stress definition has been restricted to pair potentials and embedded-atom method potentials because of the basic assumptions in the derivation of a symmetric microscopic stress tensor: the force decomposition required in the Hardy stress expression becomes obscure for multi-body potentials. In this work, we demonstrate the invariance of the Hardy stress expression for a polymer system modeled with multi-body interatomic potentials, including up to four-atom interactions, by applying central force decomposition of the atomic force. The balance of momentum is shown to be valid theoretically and is tested under various numerical simulation conditions. The validity of momentum conservation justifies the extension of the Hardy stress expression to multi-body potential systems. The computed Hardy stress is observed to converge to the virial stress of the system with increasing spatial averaging volume. This work provides a feasible and reliable link between the atomistic and continuum scales for multi-body potential systems.
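The virial stress that the Hardy stress converges to can be sketched for a toy pair-potential system as follows (the Lennard-Jones force law and the three-atom configuration are illustrative assumptions, not the paper's polymer model):

```python
import numpy as np

def lj_force(r_vec, eps=1.0, sigma=1.0):
    """Force on atom i due to atom j for a Lennard-Jones pair potential."""
    r = np.linalg.norm(r_vec)
    mag = 24.0 * eps * (2.0 * (sigma / r) ** 12 - (sigma / r) ** 6) / r
    return mag * r_vec / r

def virial_stress(positions, volume):
    """Configurational (potential) part of the virial stress tensor."""
    stress = np.zeros((3, 3))
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            r_ij = positions[i] - positions[j]
            f_ij = lj_force(r_ij)            # central force: f_ij is parallel to r_ij
            stress += np.outer(r_ij, f_ij)
    return -stress / volume

pos = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0], [0.0, 1.1, 0.0]])
s = virial_stress(pos, volume=8.0)
print(np.allclose(s, s.T))  # True: central forces give a symmetric tensor
```

The symmetry by construction is exactly what central force decomposition buys for multi-body potentials.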
A Framework for Parallel Unstructured Grid Generation for Complex Aerodynamic Simulations
NASA Technical Reports Server (NTRS)
Zagaris, George; Pirzadeh, Shahyar Z.; Chrisochoides, Nikos
2009-01-01
A framework for parallel unstructured grid generation targeting both shared memory multi-processors and distributed memory architectures is presented. The two fundamental building-blocks of the framework consist of: (1) the Advancing-Partition (AP) method used for domain decomposition and (2) the Advancing Front (AF) method used for mesh generation. Starting from the surface mesh of the computational domain, the AP method is applied recursively to generate a set of sub-domains. Next, the sub-domains are meshed in parallel using the AF method. The recursive nature of domain decomposition naturally maps to a divide-and-conquer algorithm which exhibits inherent parallelism. For the parallel implementation, the Master/Worker pattern is employed to dynamically balance the varying workloads of each task on the set of available CPUs. Performance results by this approach are presented and discussed in detail as well as future work and improvements.
Divergence-free approach for obtaining decompositions of quantum-optical processes
NASA Astrophysics Data System (ADS)
Sabapathy, K. K.; Ivan, J. S.; García-Patrón, R.; Simon, R.
2018-02-01
Operator-sum representations of quantum channels can be obtained by applying the channel to one subsystem of a maximally entangled state and deploying the channel-state isomorphism. However, for continuous-variable systems, such schemes contain natural divergences since the maximally entangled state is ill defined. We introduce a method that avoids such divergences by utilizing finitely entangled (squeezed) states and then taking the limit of arbitrarily large squeezing. Using this method, we derive an operator-sum representation for all single-mode bosonic Gaussian channels, where a unique feature is that both quantum-limited and noisy channels are treated on an equal footing. This technique facilitates a proof that the rank-1 Kraus decomposition for Gaussian channels at their respective entanglement-breaking thresholds, obtained in the overcomplete coherent-state basis, is unique. The methods could have applications to the simulation of continuous-variable channels.
Forecasting stochastic neural network based on financial empirical mode decomposition.
Wang, Jie; Wang, Jun
2017-06-01
In an attempt to improve the forecasting accuracy of stock price fluctuations, a new one-step-ahead model is developed in this paper which combines empirical mode decomposition (EMD) with a stochastic time strength neural network (STNN). EMD is a processing technique introduced to extract all the oscillatory modes embedded in a series, and the STNN model is established to account for the weight of the occurrence time of the historical data. Linear regression is used to assess the predictive ability of the proposed model, and the effectiveness of EMD-STNN is revealed clearly by comparing the predicted results with those of traditional models. Moreover, a new evaluation method (q-order multiscale complexity invariant distance) is applied to measure the predicted results of real stock index series, and the empirical results show that the proposed model indeed displays good performance in forecasting stock market fluctuations. Copyright © 2017 Elsevier Ltd. All rights reserved.
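The EMD preprocessing stage can be sketched with a minimal sifting loop (a bare-bones illustration with crude stopping criteria and boundary handling, not a production EMD implementation):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sift_once(x):
    """One sifting pass: subtract the mean of the extrema envelopes."""
    t = np.arange(len(x))
    maxima = [i for i in range(1, len(x) - 1) if x[i - 1] < x[i] > x[i + 1]]
    minima = [i for i in range(1, len(x) - 1) if x[i - 1] > x[i] < x[i + 1]]
    if len(maxima) < 4 or len(minima) < 4:
        return None                       # too few extrema: residue reached
    upper = CubicSpline(maxima, x[maxima])(t)
    lower = CubicSpline(minima, x[minima])(t)
    return x - 0.5 * (upper + lower)

def emd(x, max_imfs=4, n_sift=10):
    """Peel off intrinsic mode functions until only a smooth residue is left."""
    imfs, residue = [], x.astype(float)
    for _ in range(max_imfs):
        h, extracted = residue, False
        for _ in range(n_sift):
            s = sift_once(h)
            if s is None:
                break
            h, extracted = s, True
        if not extracted:
            break
        imfs.append(h)
        residue = residue - h
    return imfs, residue

t = np.linspace(0.0, 1.0, 400)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
imfs, residue = emd(x)
print(len(imfs), np.allclose(sum(imfs) + residue, x))
```

By construction, summing the extracted modes and the residue reconstructs the original series exactly; the individual modes then feed the forecasting stage.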
NASA Technical Reports Server (NTRS)
Siljak, D. D.; Weissenberger, S.; Cuk, S. M.
1973-01-01
This report presents the development and description of the decomposition aggregation approach to stability investigations of high-dimension mathematical models of dynamic systems. The high-dimension vector differential equation describing a large dynamic system is decomposed into a number of lower-dimension vector differential equations which represent interconnected subsystems. Then a method is described by which the stability properties of each subsystem are aggregated into a single vector Liapunov function, representing the aggregate system model, consisting of subsystem Liapunov functions as components. A linear vector differential inequality is then formed in terms of the vector Liapunov function. The matrix of the model, which reflects the stability properties of the subsystems and the nature of their interconnections, is analyzed to determine overall system stability characteristics. The technique is applied in detail to investigate the stability characteristics of a dynamic model of a hypothetical spinning Skylab.
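The aggregation step can be sketched numerically: subsystem Liapunov estimates supply self-decay rates, interconnection bounds fill the off-diagonal entries, and overall stability follows if the aggregate matrix is stable. The numbers below are illustrative assumptions, not the Skylab model:

```python
import numpy as np

# Differential inequality v' <= W v, with v the vector of subsystem
# Liapunov functions; stability of W implies stability of the whole system.
decay = np.array([4.0, 3.0, 5.0])        # subsystem self-decay rates
coupling = np.array([[0.0, 0.5, 0.2],    # interconnection strength bounds
                     [0.3, 0.0, 0.4],
                     [0.1, 0.6, 0.0]])
W = coupling - np.diag(decay)

eigs = np.linalg.eigvals(W)
stable = np.all(eigs.real < 0)
print(stable)  # True: weak coupling cannot destabilise the aggregate here
```

The appeal of the approach is that W has the number of subsystems as its dimension, not the full state dimension.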
Thermodynamic properties of water in confined environments: a Monte Carlo study
NASA Astrophysics Data System (ADS)
Gladovic, Martin; Bren, Urban; Urbic, Tomaž
2018-05-01
Monte Carlo simulations of Mercedes-Benz water in a crowded environment were performed. The simulated systems are representative of both composite, porous or sintered materials and living cells with typical matrix packings. We studied the influence of overall temperature as well as the density and size of matrix particles on water density, particle distributions, hydrogen bond formation and thermodynamic quantities. Interestingly, temperature and space occupancy of matrix exhibit a similar effect on water properties following the competition between the kinetic and the potential energy of the system, whereby temperature increases the kinetic and matrix packing decreases the potential contribution. A novel thermodynamic decomposition approach was applied to gain insight into individual contributions of different types of inter-particle interactions. This decomposition proved to be useful and in good agreement with the total thermodynamic quantities especially at higher temperatures and matrix packings, where higher-order potential-energy mixing terms lose their importance.
Performance of the Wavelet Decomposition on Massively Parallel Architectures
NASA Technical Reports Server (NTRS)
El-Ghazawi, Tarek A.; LeMoigne, Jacqueline; Zukor, Dorothy (Technical Monitor)
2001-01-01
Traditionally, Fourier transforms have been utilized for performing signal analysis and representation. Although it is straightforward to reconstruct a signal from its Fourier transform, no local description of the signal is included in its Fourier representation. To alleviate this problem, windowed Fourier transforms and then wavelet transforms have been introduced, and it has been proven that wavelets give a better localization than traditional Fourier transforms, as well as a better division of the time- or space-frequency plane than windowed Fourier transforms. Because of these properties, and after the development of several fast algorithms for computing the wavelet representation of any signal, in particular the Multi-Resolution Analysis (MRA) developed by Mallat, wavelet transforms have increasingly been applied to signal analysis problems, especially real-life problems, in which speed is critical. In this paper we present and compare efficient wavelet decomposition algorithms on different parallel architectures. We report and analyze experimental measurements, using NASA remotely sensed images. Results show that our algorithms achieve significant performance gains on current high performance parallel systems, and meet the requirements of scientific and multimedia applications. The extensive performance measurements collected over a number of high-performance computer systems have revealed important architectural characteristics of these systems, in relation to the processing demands of the wavelet decomposition of digital images.
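The basic building block that such parallel algorithms distribute across processors is a single-level wavelet decomposition; a minimal Haar version (an illustrative sketch, not the paper's implementation) is:

```python
import numpy as np

def haar_decompose(x):
    """Split a length-2n signal into approximation and detail coefficients."""
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2.0)
    detail = (even - odd) / np.sqrt(2.0)
    return approx, detail

def haar_reconstruct(approx, detail):
    """Invert one Haar decomposition level exactly."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_decompose(x)
print(np.allclose(haar_reconstruct(a, d), x))  # True: perfect reconstruction
```

Repeating the decomposition on the approximation band yields the Mallat MRA pyramid; for images, the same pair of filters is applied along rows and then columns.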
Intrinsic Decomposition of The Stretch Tensor for Fibrous Media
NASA Astrophysics Data System (ADS)
Kellermann, David C.
2010-05-01
This paper presents a novel mechanism for the description of fibre reorientation based on the decomposition of the stretch tensor according to a given material's intrinsic constitutive properties. This approach avoids the necessity for fibre directors, structural tensors or specialised models such as the ideal fibre-reinforced model, which are commonly applied to the analysis of fibre kinematics in the finite deformation of fibrous media for biomechanical problems. The proposed approach uses Intrinsic-Field Tensors (IFTs) that build upon the linear orthotropic theory presented in a previous paper entitled Strongly orthotropic continuum mechanics and finite element treatment. The intrinsic decomposition of the stretch tensor therein provides superior capacity to represent the intermediary kinematics driven by finite orthotropic ratios, where the benefits are predominantly expressed in cases of large deformation, as is typical in biomechanical studies. Satisfaction of requirements such as Material Frame-Indifference (MFI) and Euclidean objectivity is demonstrated here; these factors are necessary for the proposed IFTs to be valid tensorial quantities. The resultant tensors, initially for the simplest case of linear elasticity, are able to describe the same fibre reorientation as would contemporary approaches such as the use of structural tensors, while additionally being capable of showing results intermediary between classical isotropy and the infinitely orthotropic representations. This intermediary case is previously unreported.
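The stretch tensor in question enters through the classical polar decomposition F = RU; a minimal numerical extraction of U via the SVD (the standard construction the paper builds on, not the intrinsic-field decomposition itself) is:

```python
import numpy as np

def polar_decomposition(F):
    """Return rotation R and right stretch tensor U with F = R @ U."""
    W, s, Vt = np.linalg.svd(F)
    R = W @ Vt                           # orthogonal part (proper if det F > 0)
    U = Vt.T @ np.diag(s) @ Vt           # symmetric positive-definite stretch
    return R, U

# A hypothetical deformation gradient with positive determinant.
F = np.array([[1.2, 0.3, 0.0],
              [0.0, 0.9, 0.1],
              [0.0, 0.0, 1.1]])
R, U = polar_decomposition(F)
print(np.allclose(R @ U, F))             # True
print(np.allclose(R @ R.T, np.eye(3)))   # True: R is orthogonal
```

Any further decomposition of U, such as the intrinsic constitutive splitting proposed in the paper, starts from this symmetric tensor.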
Sharma, Jitendra Kumar; Srivastava, Pratibha; Ameen, Sadia; Akhtar, M Shaheer; Singh, Gurdip; Yadava, Sudha
2016-06-15
The leaf extract of the Azadirachta indica (Neem) plant was utilized as a reducing agent for the green synthesis of Mn3O4 nanoparticles (NPs). Crystalline analysis demonstrated the typical tetragonal hausmannite crystal structure of Mn3O4, which confirmed the formation of Mn3O4 NPs without the existence of other oxides. The green-synthesized Mn3O4 NPs were applied for the catalytic thermal decomposition of ammonium perchlorate (AP) and as the working electrode for fabricating a chemical sensor. An excellent catalytic effect on the thermal decomposition of AP was observed: the decomposition temperature decreased by 175 °C, with a single decomposition step. The fabricated chemical sensor based on green-synthesized Mn3O4 NPs displayed a high, reliable and reproducible sensitivity of ∼569.2 μA mM(-1) cm(-2), with a reasonable limit of detection (LOD) of ∼22.1 μM and a response time of ∼10 s toward the detection of 2-butanone. Relatively good linearity in the range of ∼20 to 160 μM was observed for the Mn3O4 NP electrode-based 2-butanone chemical sensor. Copyright © 2016 Elsevier Inc. All rights reserved.
Planetary Gears Feature Extraction and Fault Diagnosis Method Based on VMD and CNN.
Liu, Chang; Cheng, Gang; Chen, Xihui; Pang, Yusong
2018-05-11
Given local weak feature information, a novel feature extraction and fault diagnosis method for planetary gears based on variational mode decomposition (VMD), singular value decomposition (SVD), and convolutional neural network (CNN) is proposed. VMD was used to decompose the original vibration signal to mode components. The mode matrix was partitioned into a number of submatrices and local feature information contained in each submatrix was extracted as a singular value vector using SVD. The singular value vector matrix corresponding to the current fault state was constructed according to the location of each submatrix. Finally, by training a CNN using singular value vector matrices as inputs, planetary gear fault state identification and classification was achieved. The experimental results confirm that the proposed method can successfully extract local weak feature information and accurately identify different faults. The singular value vector matrices of different fault states have a distinct difference in element size and waveform. The VMD-based partition extraction method is better than ensemble empirical mode decomposition (EEMD), resulting in a higher CNN total recognition rate of 100% with fewer training times (14 times). Further analysis demonstrated that the method can also be applied to the degradation recognition of planetary gears. Thus, the proposed method is an effective feature extraction and fault diagnosis technique for planetary gears.
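The partition-and-SVD feature step can be sketched as follows, with placeholder random signals standing in for the VMD mode components (shapes and partition counts are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_modes, n_samples = 4, 1024
mode_matrix = rng.standard_normal((n_modes, n_samples))  # stand-in for VMD output

# Partition the mode matrix into submatrices along the time axis.
n_parts = 8
submatrices = np.split(mode_matrix, n_parts, axis=1)     # eight (4, 128) blocks

# Each submatrix is summarised by its singular values: a local feature vector.
feature_matrix = np.array([np.linalg.svd(sub, compute_uv=False)
                           for sub in submatrices])
print(feature_matrix.shape)  # (8, 4): one singular value vector per block
```

The resulting singular value vector matrix, one per fault state, is what the CNN consumes as its input image.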
NASA Astrophysics Data System (ADS)
Poggi, Valerio; Ermert, Laura; Burjanek, Jan; Michel, Clotaire; Fäh, Donat
2015-01-01
Frequency domain decomposition (FDD) is a well-established spectral technique used in civil engineering to analyse and monitor the modal response of buildings and structures. The method is based on singular value decomposition of the cross-power spectral density matrix from simultaneous array recordings of ambient vibrations. This method is advantageous to retrieve not only the resonance frequencies of the investigated structure, but also the corresponding modal shapes without the need for an absolute reference. This is an important piece of information, which can be used to validate the consistency of numerical models and analytical solutions. We apply this approach using advanced signal processing to evaluate the resonance characteristics of 2-D Alpine sedimentary valleys. In this study, we present the results obtained at Martigny, in the Rhône valley (Switzerland). For the analysis, we use 2 hr of ambient vibration recordings from a linear seismic array deployed perpendicularly to the valley axis. Only the horizontal-axial direction (SH) of the ground motion is considered. Using the FDD method, six separate resonant frequencies are retrieved together with their corresponding modal shapes. We compare the mode shapes with results from classical standard spectral ratios and numerical simulations of ambient vibration recordings.
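The core FDD step, SVD of the cross-power spectral density matrix with the leading singular vector taken as the operating mode shape, can be sketched on synthetic array data (the real analysis uses Welch-type CSD estimates over 2 h of recordings; the sensor count and noise level below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_snapshots = 6, 500
mode_shape = np.sin(np.pi * (np.arange(n_sensors) + 1) / (n_sensors + 1))

# Narrow-band response: a common random amplitude times the mode shape, plus noise.
amp = rng.standard_normal(n_snapshots) + 1j * rng.standard_normal(n_snapshots)
noise = 0.1 * (rng.standard_normal((n_sensors, n_snapshots))
               + 1j * rng.standard_normal((n_sensors, n_snapshots)))
X = np.outer(mode_shape, amp) + noise

csd = X @ X.conj().T / n_snapshots       # cross-power spectral density matrix
U, s, _ = np.linalg.svd(csd)
estimate = U[:, 0]                        # leading singular vector

# The estimate recovers the mode shape up to phase and scale.
match = np.abs(np.vdot(estimate, mode_shape)) / np.linalg.norm(mode_shape)
print(match)
```

At each resonance frequency the same operation yields one modal shape, with no absolute reference required.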
Ehn, S; Sellerer, T; Mechlem, K; Fehringer, A; Epple, M; Herzen, J; Pfeiffer, F; Noël, P B
2017-01-07
Following the development of energy-sensitive photon-counting detectors using high-Z sensor materials, application of spectral x-ray imaging methods to clinical practice comes into reach. However, these detectors require extensive calibration efforts in order to perform spectral imaging tasks like basis material decomposition. In this paper, we report a novel approach to basis material decomposition that utilizes a semi-empirical estimator for the number of photons registered in distinct energy bins in the presence of beam-hardening effects which can be termed as a polychromatic Beer-Lambert model. A maximum-likelihood estimator is applied to the model in order to obtain estimates of the underlying sample composition. Using a Monte-Carlo simulation of a typical clinical CT acquisition, the performance of the proposed estimator was evaluated. The estimator is shown to be unbiased and efficient according to the Cramér-Rao lower bound. In particular, the estimator is capable of operating with a minimum number of calibration measurements. Good results were obtained after calibration using less than 10 samples of known composition in a two-material attenuation basis. This opens up the possibility for fast re-calibration in the clinical routine which is considered an advantage of the proposed method over other implementations reported in the literature.
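A toy version of the estimator is sketched below: a polychromatic Beer-Lambert forward model with two energy bins and two basis materials, and the Poisson maximum-likelihood estimate found by brute-force grid search (spectra and attenuation coefficients are made-up stand-ins for a calibrated system):

```python
import numpy as np

energies = np.array([40.0, 60.0, 80.0, 100.0])        # keV grid
spectrum = np.array([0.3, 0.4, 0.2, 0.1]) * 1e6       # photons per energy
bins = [energies < 70, energies >= 70]                # two detector energy bins
mu = np.array([[0.8, 0.5, 0.35, 0.25],                # material 1, 1/cm
               [0.4, 0.3, 0.25, 0.20]])               # material 2, 1/cm

def expected_counts(t):
    """Polychromatic Beer-Lambert model: counts per bin for thicknesses t."""
    att = spectrum * np.exp(-(mu.T @ t))
    return np.array([att[b].sum() for b in bins])

truth = np.array([1.0, 2.0])
measured = expected_counts(truth)        # noiseless "measurement"

# Poisson negative log-likelihood (up to constants), minimised on a grid.
grid = np.linspace(0.0, 3.0, 61)
best, best_nll = None, np.inf
for t1 in grid:
    for t2 in grid:
        lam = expected_counts(np.array([t1, t2]))
        nll = np.sum(lam - measured * np.log(lam))
        if nll < best_nll:
            best, best_nll = np.array([t1, t2]), nll
print(best)  # [1. 2.] for the noiseless measurement
```

In practice the spectrum and bin responses come from the small set of calibration measurements, and the grid search is replaced by a proper optimiser.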
NASA Astrophysics Data System (ADS)
Cheng, Junsheng; Peng, Yanfeng; Yang, Yu; Wu, Zhantao
2017-02-01
Motivated by the ASTFA method, an adaptive sparsest narrow-band decomposition (ASNBD) method is proposed in this paper. In the ASNBD method, an optimized filter is established first; the parameters of the filter are determined by solving a nonlinear optimization problem. A regularized differential operator is used as the objective function so that each component is constrained to be a local narrow-band signal. The signal is then filtered by the optimized filter to generate an intrinsic narrow-band component (INBC). ASNBD aims to solve problems present in ASTFA: the Gauss-Newton-type method applied to solve the optimization problem in ASTFA is hard to replace and very sensitive to initial values, whereas a more appropriate optimization method, such as a genetic algorithm (GA), can be utilized to solve the optimization problem in ASNBD. Meanwhile, compared with ASTFA, the decomposition results generated by ASNBD have better physical meaning because the components are constrained to be local narrow-band signals. Comparisons are made between ASNBD, ASTFA and EMD by analyzing simulated and experimental signals. The results indicate that the ASNBD method is superior to the other two methods in generating more accurate components from noisy signals, restraining the boundary effect, possessing better orthogonality and diagnosing rolling element bearing faults.
NASA Astrophysics Data System (ADS)
Sellers, Michael; Lisal, Martin; Schweigert, Igor; Larentzos, James; Brennan, John
2015-06-01
In discrete particle simulations, when an atomistic model is coarse-grained, a trade-off is made: a boost in computational speed for a reduction in accuracy. Dissipative Particle Dynamics (DPD) methods help to recover accuracy in viscous and thermal properties, while giving back a small amount of computational speed. One of the most notable extensions of DPD has been the introduction of chemical reactivity, called DPD-RX. Today, pairing the current evolution of DPD-RX with a coarse-grained potential and its chemical decomposition reactions allows for the simulation of the shock behavior of energetic materials at a timescale faster than an atomistic counterpart. In 2007, Maillet et al. introduced implicit chemical reactivity in DPD through the concept of particle reactors and simulated the decomposition of liquid nitromethane. We have recently extended the DPD-RX method and have applied it to solid hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX) under shock conditions using a recently developed single-site coarse-grain model and a reduced RDX decomposition mechanism. A description of the methods used to simulate RDX and its transition to hot product gases within DPD-RX will be presented. Additionally, examples of the effect of microstructure on shock behavior will be shown. Approved for public release. Distribution is unlimited.
Xu, Kai; Wei, Dong-Qing; Chen, Xiang-Rong; Ji, Guang-Fu
2014-10-01
Car-Parrinello molecular dynamics simulations were applied to study the thermal decomposition of solid-phase nitromethane under gradual heating and fast annealing conditions. In the gradual heating simulations, we found that intermolecular proton transfer, rather than C-N bond cleavage, is more likely to be the first reaction in the decomposition process. At high temperature, the first reaction in the fast annealing simulations is intermolecular proton transfer leading to CH3NOOH and CH2NO2, whereas the initial chemical event at low temperature tends to be unimolecular C-N bond cleavage, producing CH3 and NO2 fragments. To our knowledge, this is the first report of direct C-N bond rupture as the first reaction in solid-phase nitromethane. In addition, fast annealing simulations on a supercell at different temperatures were conducted to validate the effect of simulation cell size on the initial reaction mechanisms; the results are in qualitative agreement with the simulations on a unit cell. By analyzing the time evolution of selected molecules, we also found that the time of first water molecule formation is clearly sensitive to heating rates and target temperatures when the first reaction is an intermolecular proton transfer.
NASA Astrophysics Data System (ADS)
Kim, Jonghoon; Cho, B. H.
2014-08-01
This paper introduces an innovative approach to analyzing the electrochemical characteristics and diagnosing the state of health (SOH) of a Li-ion cell based on the discrete wavelet transform (DWT). In this approach, the DWT is applied as a powerful tool for the analysis of the discharging/charging voltage signal (DCVS), with its non-stationary and transient phenomena, for a Li-ion cell. Specifically, DWT-based multi-resolution analysis (MRA) is used to extract information on the electrochemical characteristics in the time and frequency domains simultaneously. Through the MRA with implementation of the wavelet decomposition, information on the electrochemical characteristics of a Li-ion cell can be extracted from the DCVS over a wide frequency range. Wavelet decomposition based on the selection of the order-3 Daubechies wavelet (dB3) and scale 5 as the best wavelet function and the optimal decomposition scale is implemented. In particular, the present approach takes these investigations one step further by showing low- and high-frequency components (approximation component An and detail component Dn, respectively) extracted from various Li-ion cells with different electrochemical characteristics caused by aging effects. Experimental results show the effectiveness of the DWT-based approach for reliable diagnosis of the SOH of a Li-ion cell.
Adaptive bearing estimation and tracking of multiple targets in a realistic passive sonar scenario
NASA Astrophysics Data System (ADS)
Rajagopal, R.; Challa, Subhash; Faruqi, Farhan A.; Rao, P. R.
1997-06-01
In a realistic passive sonar environment, the received signal consists of multipath arrivals from closely separated moving targets, and the signals are contaminated by spatially correlated noise. Differential MUSIC has been proposed to estimate the DOAs in such a scenario. This method estimates the 'noise subspace' in order to estimate the DOAs; however, the 'noise subspace' estimate has to be updated as and when new data become available. In order to save computational costs, a new adaptive noise subspace estimation algorithm is proposed in this paper. The salient features of the proposed algorithm are: (1) noise subspace estimation is done by QR decomposition of the difference matrix formed from the data covariance matrix, so that, compared to standard eigendecomposition-based methods which require O(N3) computations, the proposed method requires only O(N2) computations; (2) the noise subspace is updated by updating the QR decomposition; (3) the proposed algorithm works in a realistic sonar environment. In the second part of the paper, the estimated bearing values are used to track multiple targets. To achieve this, a nonlinear-system/linear-measurement extended Kalman filter is applied. Computer simulation results are presented to support the theory.
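The noise-subspace idea behind the estimator can be sketched with a standard eigendecomposition-based MUSIC spectrum (the paper's contribution is the cheaper O(N^2) QR-based estimate and update of this same subspace; the uniform linear array and single-source scene below are illustrative assumptions):

```python
import numpy as np

n_sensors, n_snapshots, doa_deg = 8, 400, 20.0
rng = np.random.default_rng(2)

def steering(theta_deg):
    """Half-wavelength-spaced uniform linear array steering vector."""
    k = np.pi * np.sin(np.radians(theta_deg))
    return np.exp(1j * k * np.arange(n_sensors))

# Synthetic snapshots: one source at 20 degrees plus white noise.
a = steering(doa_deg)
sig = rng.standard_normal(n_snapshots) + 1j * rng.standard_normal(n_snapshots)
noise = 0.1 * (rng.standard_normal((n_sensors, n_snapshots))
               + 1j * rng.standard_normal((n_sensors, n_snapshots)))
X = np.outer(a, sig) + noise

R = X @ X.conj().T / n_snapshots
_, vecs = np.linalg.eigh(R)              # eigenvalues in ascending order
En = vecs[:, :-1]                         # noise subspace (one source assumed)

# MUSIC spectrum: peaks where the steering vector is orthogonal to En.
angles = np.arange(-90.0, 90.5, 0.5)
spectrum = [1.0 / np.linalg.norm(En.conj().T @ steering(th)) ** 2 for th in angles]
print(angles[int(np.argmax(spectrum))])  # close to 20.0
```

The QR-based scheme in the paper produces the same subspace En from the difference matrix, at lower cost and with cheap updates as snapshots arrive.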
Wavelet domain textual coding of Ottoman script images
NASA Astrophysics Data System (ADS)
Gerek, Oemer N.; Cetin, Enis A.; Tewfik, Ahmed H.
1996-02-01
Image coding using the wavelet transform, the DCT and similar transform techniques is well established. On the other hand, these coding methods neither take into account the special characteristics of the images in a database nor are they suitable for fast database search. In this paper, the digital archiving of Ottoman printings is considered. Ottoman documents are printed in Arabic letters. Witten et al. describe a scheme based on finding the characters in binary document images and encoding the positions of the repeated characters. This method efficiently compresses document images and is suitable for database search, but it cannot be applied to Ottoman or Arabic documents, as the concept of a character is different in Ottoman and Arabic: typically, one has to deal with compound structures consisting of a group of letters. Therefore, the matching criterion is defined on those compound structures. Furthermore, the text images are gray-tone or color images for Ottoman scripts, for reasons described in the paper. In our method, the compound-structure matching is carried out in the wavelet domain, which reduces the search space and increases the compression ratio. In addition to the wavelet transformation, which corresponds to a linear subband decomposition, we also use a nonlinear subband decomposition. The filters in the nonlinear subband decomposition have the property of preserving edges in the low-resolution subband image.
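A minimal nonlinear subband decomposition can be built from a median-predictor lifting step, which preserves edges better than a linear filter while retaining perfect reconstruction by construction (an illustrative sketch; the paper's actual filters are not specified here):

```python
import numpy as np

def median_split(x):
    """Predict odd samples with a 3-point median of even-sample neighbours."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    pred = np.median(np.stack([np.roll(even, 1), even, np.roll(even, -1)]), axis=0)
    return even, odd - pred              # low band, nonlinear detail band

def median_merge(even, detail):
    """Invert the lifting step: recompute the same prediction from the low band."""
    pred = np.median(np.stack([np.roll(even, 1), even, np.roll(even, -1)]), axis=0)
    x = np.empty(2 * len(even))
    x[0::2], x[1::2] = even, detail + pred
    return x

x = np.array([2.0, 2.0, 2.0, 9.0, 9.0, 9.0, 9.0, 2.0])  # signal with sharp edges
even, detail = median_split(x)
print(np.allclose(median_merge(even, detail), x))  # True: perfect reconstruction
```

Because the synthesis side recomputes the identical nonlinear prediction, invertibility does not depend on the predictor being linear, which is what makes median-type filters usable in a subband coder.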
NASA Astrophysics Data System (ADS)
Naseer, Muhammad Tayyab; Asim, Shazia
2017-10-01
Unconventional resource shales can play a critical role in economic growth throughout the world. The hydrocarbon potential of faulted/fractured shales is the most significant challenge for unconventional prospect generation. The continuous wavelet transform (CWT) spectral decomposition (SD) technique is applied for shale gas prospects on high-resolution 3D seismic data from the Miano area in the Indus platform, SW Pakistan. Schmoker's technique reveals high-quality shales with a total organic carbon (TOC) of 9.2% distributed in the western regions. The seismic amplitude, root-mean-square (RMS) and most-positive-curvature attributes show limited ability to resolve the prospective fractured shale components. The CWT is used to identify the hydrocarbon-bearing faulted/fractured compartments encased within the non-hydrocarbon-bearing shale units. The hydrocarbon-bearing shales exhibit higher amplitudes (4694 dB and 3439 dB) than the non-reservoir shales (3290 dB). Cross plots between sweetness, 22 Hz spectral decomposition and the seismic amplitudes are found to be more effective tools than conventional seismic attribute mapping for discriminating the seal and reservoir elements within the incised-valley petroleum system. Rock physics distinguishes the productive sediments from the non-productive sediments, suggesting potential for future shale play exploration.
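The CWT at the heart of the spectral decomposition can be sketched in one dimension (the seismic workflow applies this trace by trace to the 3-D volume; the 30 Hz test signal, Morlet parameters and scale range are illustrative assumptions):

```python
import numpy as np

def morlet_cwt(x, scales, dt, w0=6.0):
    """CWT by direct correlation with L2-normalised Morlet wavelets."""
    out = np.empty((len(scales), len(x)), dtype=complex)
    for i, s in enumerate(scales):
        t = np.arange(-4 * s, 4 * s + dt, dt)
        psi = np.exp(1j * w0 * t / s - 0.5 * (t / s) ** 2) / np.sqrt(s)
        out[i] = np.convolve(x, np.conj(psi)[::-1], mode='same')
    return out

dt = 0.004                                  # 4 ms sample interval
t = np.arange(0.0, 1.0, dt)
trace = np.sin(2 * np.pi * 30.0 * t)        # 30 Hz "reflection" energy

scales = np.linspace(0.01, 0.1, 50)         # wavelet scales in seconds
coeffs = morlet_cwt(trace, scales, dt)
energy = np.abs(coeffs).mean(axis=1)
f_peak = 6.0 / (2 * np.pi * scales[np.argmax(energy)])
print(f_peak)  # close to the 30 Hz input frequency
```

Extracting the coefficient slice at a chosen frequency (e.g. 22 Hz) across all traces yields the iso-frequency amplitude maps used in the cross plots above.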
A systematic investigation into the extraction of aluminum from coal spoil through kaolinite.
Qiao, X C; Si, P; Yu, J G
2008-11-15
This research applied kaolin and activated carbon (AC) to investigate the recovery of aluminum from coal spoil (CS). The kaolin, an AC-containing kaolin mixture, and CS were calcined at 500, 600, 700, 800, and 900 degrees C for 15, 30, 60, and 120 min. The transformation of kaolinite and the aluminum extraction in each calcined sample were characterized using XRD, TG, IR, and hydrochloric acid leaching methods. The dehydroxylation of kaolinite and the decomposition of metakaolin were influenced by thermal treatment temperature and time. The metakaolin retained a portion of OH- in its structure until it was calcined at 800 degrees C. Under 60 min treatment, a new SiO2 phase formed at 500 degrees C, kaolinite was totally converted to metakaolin at 600 degrees C, and the SiO2 rejoined the reaction at 800 degrees C. The decomposition of CS was similar to that of the kaolin mixture containing 20 wt % AC (MKC). The combustion of combustible matter accelerated the decomposition of kaolinite in the CS and MKC. Higher AC content led to lower aluminum extraction. Treatment at 600 degrees C was optimal for both CS and MKC.
Wang, Deyun; Liu, Yanling; Luo, Hongyuan; Yue, Chenqiang; Cheng, Sheng
2017-01-01
Accurate PM2.5 concentration forecasting is crucial for protecting public health and the atmospheric environment. However, the intermittent and unstable nature of PM2.5 concentration series makes forecasting a very difficult task. In order to improve the forecast accuracy of PM2.5 concentration, this paper proposes a hybrid model based on wavelet transform (WT), variational mode decomposition (VMD) and a back propagation (BP) neural network optimized by the differential evolution (DE) algorithm. Firstly, WT is employed to decompose the PM2.5 concentration series into a number of subsets with different frequencies. Secondly, VMD is applied to decompose each subset into a set of variational modes (VMs). Thirdly, the DE-BP model is utilized to forecast all the VMs. Fourthly, the forecast value of each subset is obtained by aggregating the forecast results of all the VMs obtained from the VMD decomposition of that subset. Finally, the final forecast series of PM2.5 concentration is obtained by adding up the forecast values of all subsets. Two PM2.5 concentration series, collected from Wuhan and Tianjin, China, respectively, are used to test the effectiveness of the proposed model. The results demonstrate that the proposed model outperforms all the other models considered in this paper. PMID:28704955
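The wavelet-decomposition stage of such hybrid forecasters can be illustrated with a hand-rolled one-level Haar transform. This is a minimal numpy sketch with a synthetic series; the paper's actual mother wavelet and decomposition depth are not specified here, and a real implementation would typically use a wavelet library.

```python
import numpy as np

def haar_split(x):
    """One-level Haar wavelet transform: low-frequency approximation
    and high-frequency detail coefficients (orthonormal form)."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation subset
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail subset
    return a, d

def haar_merge(a, d):
    """Inverse one-level Haar transform."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

# synthetic stand-in for a PM2.5 series (illustrative only)
series = np.sin(np.linspace(0, 8 * np.pi, 256)) \
    + 0.1 * np.random.default_rng(1).standard_normal(256)
approx, detail = haar_split(series)
rebuilt = haar_merge(approx, detail)
print(np.allclose(rebuilt, series))  # True: the decomposition is lossless
```

Because the transform is lossless, forecasting each subset separately and summing the forecasts, as the hybrid model does, loses no information at the decomposition stage itself.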
Ming, Zhu; Feng, Shicheng; Yilihamu, Ailimire; Ma, Qiang; Yang, Shengnan
2018-01-01
Fullerenes are widely produced and applied carbon nanomaterials that require a thorough investigation into their environmental hazards and risks. In this study, we compared the toxicity of pristine fullerene (C60) and carboxylated fullerene (C60-COOH) to the white rot fungus Phanerochaete chrysosporium. The influence of fullerene on the weight increase, fibrous structure, ultrastructure, enzyme activity, and decomposition capability of P. chrysosporium was investigated to reflect the potential toxicity of fullerene. C60 did not change the fresh and dry weights of P. chrysosporium, but C60-COOH inhibited weight gain at high concentrations. Both C60 and C60-COOH destroyed the fibrous structure of the mycelia. The ultrastructure of P. chrysosporium was changed by C60-COOH. Pristine C60 did not affect the enzyme activity of the P. chrysosporium culture system, while C60-COOH completely blocked the enzyme activity. Consequently, in the liquid culture, P. chrysosporium lost its decomposition activity at high C60-COOH concentrations. A decreased capability to degrade wood was observed for P. chrysosporium exposed to C60-COOH. Our results collectively indicate that chemical functionalization enhanced the toxicity of fullerene to white rot fungi and induced the loss of decomposition activity. The environmental risks of fullerene and its disturbance to the carbon cycle are discussed. PMID:29470407
Martín-Lara, María Ángeles; Iáñez-Rodríguez, Irene; Blázquez, Gabriel; Quesada, Lucía; Pérez, Antonio; Calero, Mónica
2017-12-01
The thermal behavior of some types of raw and lead-polluted biomass typical of southern Spain was studied by non-isothermal thermogravimetry. Experiments were carried out in a nitrogen atmosphere at three heating rates (5, 10 and 20°C/min). The thermogravimetric tests proved that the presence of lead did not change the main degradation pathways of the selected biomasses (almond shell (AS) and olive pomace (OP)). However, from the point of view of mass loss, lead-polluted samples showed higher decomposition temperatures and decomposed at higher rates. Activation energies were determined by the isoconversional methods of Flynn-Wall-Ozawa (FWO), Kissinger-Akahira-Sunose (KAS) and Friedman (FR). In general, lead-polluted samples showed lower activation energies than raw ones. The Coats-Redfern method was then applied to determine the kinetic function. The kinetic function that seems to determine the mechanism of thermal degradation of the main components of all samples was an nth-order reaction. Finally, a model based on three parallel nth-order reactions (for three pseudocomponents) was evaluated. This model was appropriate to predict the pyrolysis behavior of the raw and lead-polluted samples under all pyrolysis conditions studied. Copyright © 2017 Elsevier Ltd. All rights reserved.
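The Flynn-Wall-Ozawa method mentioned above reduces, at each fixed conversion, to a linear fit of ln(beta) against 1/T whose slope is -1.052*Ea/R. A minimal numpy sketch follows, with peak temperatures synthesized from an assumed activation energy; all numbers are illustrative and not taken from the study.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def fwo_activation_energy(betas, T_alpha):
    """Flynn-Wall-Ozawa: at fixed conversion, ln(beta) vs 1/T is linear
    with slope -1.052*Ea/R. Returns Ea in kJ/mol."""
    slope, _ = np.polyfit(1.0 / np.asarray(T_alpha), np.log(betas), 1)
    return -slope * R / 1.052 / 1000.0

# synthetic temperatures for heating rates 5, 10, 20 C/min, generated
# from an assumed Ea of 150 kJ/mol (hypothetical, for illustration)
betas = np.array([5.0, 10.0, 20.0])
Ea_true, intercept = 150e3, 30.0
T = -1.052 * Ea_true / R / (np.log(betas) - intercept)
print(round(fwo_activation_energy(betas, T), 1))  # recovers 150.0 kJ/mol
```

The same fit is repeated at each conversion level in practice, giving an activation energy profile rather than a single value.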
Song, Jiekun; Song, Qing; Zhang, Dong; Lu, Youyou; Luan, Long
2014-01-01
Carbon emissions from energy consumption in Shandong province from 1995 to 2012 are calculated. Three zero-residual decomposition models (LMDI, MRCI and Shapley value models) are introduced for decomposing carbon emissions. Based on the results, the Kendall coefficient of concordance is employed to test their compatibility, and an optimal weighted combination decomposition model is constructed to improve the objectivity of the decomposition. The STIRPAT model is applied to evaluate the impact of each factor on carbon emissions. The results show that, using 1995 as the base year, the cumulative effects of population, per capita GDP, energy consumption intensity, and energy consumption structure of Shandong province in 2012 are positive, while the cumulative effect of industrial structure is negative. Per capita GDP is the largest driver of the increasing carbon emissions and has a great impact on them; energy consumption intensity is a weak driver with a certain impact; population plays a weak driving role but has the most significant impact; energy consumption structure is a weak driver with a weak impact; industrial structure has played a weak inhibitory role, and its impact on carbon emissions is great. PMID:24977216
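The additive LMDI model, one of the three zero-residual decompositions mentioned, attributes the change in emissions to each factor via the logarithmic mean, with the defining property that the factor effects sum exactly to the total change. A minimal sketch with hypothetical Kaya-style factors; the numbers are illustrative, not Shandong data.

```python
import numpy as np

def logmean(a, b):
    """Logarithmic mean L(a, b); equals a when a == b."""
    return a if a == b else (a - b) / (np.log(a) - np.log(b))

def lmdi_effects(factors_0, factors_T):
    """Additive LMDI-I for a single region: C = product of factors.
    Returns one effect per factor; effects sum exactly to C_T - C_0."""
    C0, CT = np.prod(factors_0), np.prod(factors_T)
    L = logmean(CT, C0)
    return [L * np.log(fT / f0) for f0, fT in zip(factors_0, factors_T)]

# hypothetical factors: population, GDP per capita,
# energy intensity, carbon intensity of energy
f0 = [90.0, 2.0, 0.8, 2.5]   # base year
fT = [97.0, 3.5, 0.6, 2.4]   # final year
effects = lmdi_effects(f0, fT)
print(abs(sum(effects) - (np.prod(fT) - np.prod(f0))) < 1e-9)  # zero residual
```

The zero-residual property follows because the logs of the factor ratios sum to ln(C_T/C_0), which the logarithmic mean converts back to C_T - C_0 exactly.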
Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei
2017-09-01
The laser-induced breakdown spectroscopy (LIBS) technique is an effective method for detecting material composition by obtaining the plasma emission spectrum. Overlapping peaks in the spectrum are a fundamental problem in the qualitative and quantitative analysis of LIBS. Based on a curve fitting method, this paper studies an error compensation method to achieve the decomposition and correction of overlapping peaks. The key step is that the fitting residual is fed back into the overlapping peaks and multiple curve fitting passes are performed to obtain a lower-residual result. For the quantitative experiments on Cu, the Cu-Fe overlapping peaks in the range of 321-327 nm obtained from the LIBS spectra of five different concentrations of CuSO4·5H2O solution were decomposed and corrected using the curve fitting and error compensation methods. Compared with the curve fitting method alone, the error compensation reduced the fitting residual by about 18.12-32.64% and improved the correlation by about 0.86-1.82%. The calibration curve between the intensity and concentration of Cu was then established. The error compensation method exhibits a higher linear correlation between the intensity and concentration of Cu, and it can be applied to the decomposition and correction of overlapping peaks in LIBS spectra.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tavakoli, Rouhollah, E-mail: rtavakoli@sharif.ir
An unconditionally energy stable time stepping scheme is introduced to solve Cahn–Morral-like equations in the present study. It is constructed by combining David Eyre's time stepping scheme with a Schur complement approach. Although the presented method is general and independent of the choice of the homogeneous free energy density function term, logarithmic and polynomial energy functions are specifically considered in this paper. The method is applied to study spinodal decomposition in multi-component systems and optimal space tiling problems. A penalization strategy is developed, in the case of the latter problem, to avoid trivial solutions. Extensive numerical experiments demonstrate the success and performance of the presented method. According to the numerical results, the method is convergent and energy stable, independent of the choice of time stepsize. Its MATLAB implementation is included in the appendix for the numerical evaluation of the algorithm and reproduction of the presented results. -- Highlights: •Extension of Eyre's convex–concave splitting scheme to multiphase systems. •Efficient solution of spinodal decomposition in multi-component systems. •Efficient solution of the least perimeter periodic space partitioning problem. •Development of a penalization strategy to avoid trivial solutions. •Presentation of a MATLAB implementation of the introduced algorithm.
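For a single-component analogue, an Eyre-type (linearly stabilized) convex-concave splitting of the 1D periodic Cahn-Hilliard equation can be sketched spectrally as below. This is an illustrative numpy sketch, not the paper's multi-component MATLAB code; the parameter values and stabilization constant S are arbitrary choices.

```python
import numpy as np

def cahn_hilliard_eyre(u0, eps=0.02, dt=0.1, steps=500, S=2.0):
    """Linearly stabilized (Eyre-type) convex-concave splitting for the
    1D periodic Cahn-Hilliard equation u_t = Lap(u^3 - u - eps^2 Lap u),
    solved spectrally on [0, 1). The contractive part is implicit, the
    expansive part explicit, plus an S-weighted stabilization term."""
    n = len(u0)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=1.0 / n)
    k2, k4 = k**2, k**4
    denom = 1.0 + dt * S * k2 + dt * eps**2 * k4
    u = u0.copy()
    for _ in range(steps):
        nl = np.fft.fft(u**3 - u)   # expansive part, treated explicitly
        u_hat = (np.fft.fft(u) * (1.0 + dt * S * k2) - dt * k2 * nl) / denom
        u = np.real(np.fft.ifft(u_hat))
    return u

rng = np.random.default_rng(2)
u0 = 0.05 * rng.standard_normal(128)   # small perturbation of the mixed state
u = cahn_hilliard_eyre(u0)
print(abs(u.mean() - u0.mean()) < 1e-10)   # mass conserved via the k=0 mode
```

Because the k=0 Fourier mode is untouched by every term, total mass is conserved to round-off, and the implicit treatment of the stiff fourth-order term is what permits large time steps.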
Han, Te; Jiang, Dongxiang; Zhang, Xiaochen; Sun, Yankui
2017-01-01
Rotating machinery is widely used in industrial applications. With the trend towards more precise and more critical operating conditions, mechanical failures may easily occur. Condition monitoring and fault diagnosis (CMFD) technology is an effective tool to enhance the reliability and security of rotating machinery. In this paper, an intelligent fault diagnosis method based on dictionary learning and singular value decomposition (SVD) is proposed. First, the dictionary learning scheme generates an adaptive dictionary whose atoms reveal the underlying structure of the raw signals; essentially, dictionary learning is employed as an adaptive feature extraction method requiring no prior knowledge. Second, the singular value sequence of the learned dictionary matrix serves as the feature vector. Since this vector is generally of high dimensionality, a simple and practical principal component analysis (PCA) is applied to reduce the dimensionality. Finally, the K-nearest neighbor (KNN) algorithm is adopted for automatic identification and classification of fault patterns. Two experimental case studies are investigated to corroborate the effectiveness of the proposed method in the intelligent diagnosis of rotating machinery faults. The comparison analysis validates that the dictionary learning-based matrix construction approach outperforms mode decomposition-based methods in terms of capacity and adaptability for feature extraction. PMID:28346385
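The feature-extraction-plus-classification pipeline can be caricatured in plain numpy: singular values of a signal-embedding matrix as features, and a 1-nearest-neighbour rule standing in for KNN. These are toy signals, not the paper's dictionary learning; class separation here comes only from the broadband noise level.

```python
import numpy as np

rng = np.random.default_rng(3)

def sv_feature(signal, rows=8):
    """Embed a 1-D signal in a (rows x cols) matrix and take its
    singular value sequence as a compact feature vector."""
    cols = len(signal) // rows
    M = signal[: rows * cols].reshape(rows, cols)
    return np.linalg.svd(M, compute_uv=False)

def nn_classify(feat, train_feats, train_labels):
    """Minimal 1-nearest-neighbour classifier (stand-in for KNN)."""
    d = np.linalg.norm(train_feats - feat, axis=1)
    return train_labels[int(np.argmin(d))]

t = np.linspace(0, 1, 800)
healthy = lambda: np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
faulty = lambda: np.sin(2 * np.pi * 10 * t) + 1.0 * rng.standard_normal(t.size)
train = np.array([sv_feature(s()) for s in (healthy, healthy, faulty, faulty)])
labels = ["healthy", "healthy", "faulty", "faulty"]
print(nn_classify(sv_feature(faulty()), train, labels))  # faulty
```

The broadband "fault" energy inflates the tail of the singular value sequence, which is what makes the nearest-neighbour rule separate the two classes here.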
NASA Astrophysics Data System (ADS)
Teal, Paul D.; Eccles, Craig
2015-04-01
The two most successful methods of estimating the distribution of nuclear magnetic resonance relaxation times from two-dimensional data are data compression followed by application of the Butler-Reeds-Dawson algorithm, and a primal-dual interior point method using preconditioned conjugate gradient. Both of these methods have previously been presented using a truncated singular value decomposition of matrices representing the exponential kernel. In this paper it is shown that other matrix factorizations are applicable to each of these algorithms, and that these illustrate the different fundamental principles behind the operation of the algorithms. These are the rank-revealing QR (RRQR) factorization and the LDL factorization with diagonal pivoting, also known as the Bunch-Kaufman-Parlett factorization. It is shown that both algorithms can be improved by adapting the truncation as the optimization process progresses, improving the accuracy as the optimal value is approached. A variation on the interior point method, viz. the use of a barrier function instead of the primal-dual approach, is found to offer considerable improvement in terms of speed and reliability. A third type of algorithm, related to the fast iterative shrinkage-thresholding algorithm (FISTA), is applied to the problem. This method can be efficiently formulated without the use of a matrix decomposition.
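The truncation that both algorithms rely on is justified by the rapid singular value decay of the discretized exponential kernel; a small numpy check follows (the grid choices are illustrative, not the paper's):

```python
import numpy as np

# Discretized exponential kernel K[i, j] = exp(-t_i / T_j), as used in
# 1-D relaxation inversion; its singular values decay very fast, which
# is what makes truncated factorizations effective.
t = np.linspace(0.001, 1.0, 200)     # acquisition times (s)
T = np.logspace(-3, 0, 100)          # candidate relaxation times (s)
K = np.exp(-np.outer(t, 1.0 / T))
s = np.linalg.svd(K, compute_uv=False)
print(bool(s[30] / s[0] < 1e-4))     # only a few dozen values matter
```

The same collapse of the spectrum is what makes the inversion severely ill-posed, so adaptive truncation trades conditioning against accuracy as the abstract describes.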
Cao, Buwen; Deng, Shuguang; Qin, Hua; Ding, Pingjian; Chen, Shaopeng; Li, Guanghui
2018-06-15
High-throughput technology has generated large-scale protein interaction data, which is crucial to our understanding of biological organisms. Many complex identification algorithms have been developed to determine protein complexes. However, these methods are only suitable for dense protein interaction networks, because their capabilities decrease rapidly when applied to sparse protein-protein interaction (PPI) networks. In this study, based on penalized matrix decomposition (PMD), a novel method for the identification of protein complexes (PMDpc) was developed to detect protein complexes in the human protein interaction network. This method mainly consists of three steps. First, the adjacency matrix of the protein interaction network is normalized. Second, the normalized matrix is decomposed into three factor matrices. The PMDpc method can detect protein complexes in sparse PPI networks by imposing appropriate constraints on the factor matrices. Finally, the results of our method are compared with those of other methods on the human PPI network. Experimental results show that our method can not only outperform classical algorithms, such as CFinder, ClusterONE, RRW, HC-PIN, and PCE-FR, but can also achieve an ideal overall performance in terms of a composite score consisting of F-measure, accuracy (ACC), and the maximum matching ratio (MMR).
Approaches for Subgrid Parameterization: Does Scaling Help?
NASA Astrophysics Data System (ADS)
Yano, Jun-Ichi
2016-04-01
Arguably, scaling behavior is a well-established fact in many geophysical systems, and there are already many theoretical studies elucidating this issue. However, scaling laws have been slow to enter "operational" geophysical modelling, notably weather forecast and climate projection models. The main purpose of this presentation is to ask why, and to try to answer this question. As a reference point, the presentation reviews the three major approaches to traditional subgrid parameterization: moment, PDF (probability density function), and mode decomposition. The moment expansion is a standard method for describing subgrid-scale turbulent flows both in the atmosphere and the oceans. The PDF approach is intuitively appealing, as it deals with the distribution of variables at subgrid scale in a more direct manner. The third category, originally proposed by Aubry et al. (1988) in the context of wall boundary-layer turbulence, is specifically designed to represent coherencies in a compact manner by a low-dimensional dynamical system. Their original proposal adopts the proper orthogonal decomposition (POD, or empirical orthogonal functions, EOF) as the mode-decomposition basis. However, the methodology can easily be generalized to any decomposition basis. The mass-flux formulation currently adopted in the majority of atmospheric models for parameterizing convection can also be considered a special case of mode decomposition, adopting segmentally-constant modes as the expansion basis. The mode decomposition can, furthermore, be re-interpreted as a type of Galerkin approach to numerically modelling subgrid-scale processes. Simple extrapolation of this re-interpretation further suggests that the subgrid parameterization problem may be re-interpreted as a type of mesh-refinement problem in numerical modelling. We furthermore see a link between the subgrid parameterization and downscaling problems along this line.
The mode decomposition approach would also be the best framework for linking the traditional parameterizations with the scaling perspectives. However, by seeing the link more clearly, we also see the strengths and weaknesses of introducing scaling perspectives into parameterizations. Any diagnosis under a mode decomposition would immediately reveal the power-law nature of the spectrum. However, exploiting this knowledge in an operational parameterization would be a different story. It is symbolic that POD studies have focused on representing the largest-scale coherency within a grid box under a high truncation; this problem is already hard enough. Looked at differently, the scaling law is a very concise means of characterizing many subgrid-scale variabilities in systems. We may even argue that the scaling law can provide almost complete subgrid-scale information with which to construct a parameterization, but with a major missing link: its amplitude must be specified by an additional condition. This condition is called the "closure" in the parameterization problem, and is known to be a tough problem. We should also realize that studies of scaling behavior tend to be statistical, in the sense that they hardly provide complete information for constructing a parameterization: can we specify the coefficients of all the decomposition modes perfectly by a scaling law when the first few leading modes are specified? Arguably, the renormalization group (RNG) is a very powerful tool for reducing a system with scaling behavior to a low dimension, say, under an appropriate mode decomposition procedure. However, RNG is an analytical tool: it is extremely hard to apply to real, complex geophysical systems. It appears that we still have a long way to go before we can begin to exploit scaling laws to construct operational subgrid parameterizations in an effective manner.
NASA Astrophysics Data System (ADS)
Oyama, Youichi; Matsushita, Bunkei; Fukushima, Takehiko; Matsushige, Kazuo; Imai, Akio
The remote sensing of Case 2 water has been far less successful than that of Case 1 water, due mainly to the complex interactions among optically active substances (e.g., phytoplankton, suspended sediments, colored dissolved organic matter, and water) in the former. To address this problem, we developed a spectral decomposition algorithm (SDA) based on a spectral linear mixture modeling approach. Through a tank experiment, we found that the SDA-based models were superior to conventional empirical models (e.g., single-band, band-ratio, or band-arithmetic models) for accurate estimates of water quality parameters. In this paper, we develop a method for applying the SDA to Landsat-5 TM data on Lake Kasumigaura, a eutrophic lake in Japan characterized by high concentrations of suspended sediment, for mapping chlorophyll-a (Chl-a) and non-phytoplankton suspended sediment (NPSS) distributions. The results show that the SDA-based estimation model can be obtained from a tank experiment. Moreover, by combining this estimation model with satellite-SRSs (standard reflectance spectra, i.e., spectral end-members) derived from bio-optical modeling, we can apply the model directly to a satellite image. The same SDA-based estimation model for Chl-a concentration was applied to two Landsat-5 TM images, one acquired in April 1994 and the other in February 2006. The average Chl-a estimation error between the two was 9.9%, a result that indicates the potential robustness of the SDA-based estimation model. The average estimation error of NPSS concentration from the 2006 Landsat-5 TM image was 15.9%. The key to successfully applying the SDA-based estimation model to satellite data is the method used to obtain a suitable satellite-SRS for each end-member.
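The spectral linear mixture idea at the core of the SDA can be sketched as ordinary least-squares unmixing against end-member spectra. The spectra and fractions below are made up for illustration; the actual SRSs come from tank experiments and bio-optical modeling.

```python
import numpy as np

def unmix(spectrum, endmembers):
    """Spectral decomposition by linear unmixing: solve least squares
    for the fractional contribution of each end-member spectrum."""
    coeffs, *_ = np.linalg.lstsq(endmembers.T, spectrum, rcond=None)
    return coeffs

# hypothetical standard reflectance spectra (rows) over 6 bands
srs = np.array([
    [0.02, 0.05, 0.30, 0.04, 0.02, 0.01],   # phytoplankton-like
    [0.10, 0.12, 0.14, 0.15, 0.16, 0.15],   # sediment-like
    [0.03, 0.02, 0.01, 0.01, 0.00, 0.00],   # water-like
])
true_frac = np.array([0.2, 0.5, 0.3])
mixed = true_frac @ srs                      # synthesize a mixed pixel
print(np.allclose(unmix(mixed, srs), true_frac))  # True
```

With noise-free synthetic data the fractions are recovered exactly; the estimated coefficients are then what an SDA-style model maps to water quality parameters.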
Dossa, Gbadamassi G. O.; Paudel, Ekananda; Cao, Kunfang; Schaefer, Douglas; Harrison, Rhett D.
2016-01-01
Organic matter decomposition represents a vital ecosystem process by which nutrients are made available for plant uptake and is a major flux in the global carbon cycle. Previous studies have investigated decomposition of different plant parts, but few considered bark decomposition or its role in decomposition of wood. However, bark can comprise a large fraction of tree biomass. We used a common litter-bed approach to investigate factors affecting bark decomposition and its role in wood decomposition for five tree species in a secondary seasonal tropical rain forest in SW China. For bark, we implemented a litter bag experiment over 12 mo, using different mesh sizes to investigate effects of litter meso- and macro-fauna. For wood, we compared the decomposition of branches with and without bark over 24 mo. Bark in coarse mesh bags decomposed 1.11–1.76 times faster than bark in fine mesh bags. For wood decomposition, responses to bark removal were species dependent. Three species with slow wood decomposition rates showed significant negative effects of bark-removal, but there was no significant effect in the other two species. Future research should also separately examine bark and wood decomposition, and consider bark-removal experiments to better understand roles of bark in wood decomposition. PMID:27698461
Zhang, Luzheng; Zybin, Sergey V; van Duin, Adri C T; Dasgupta, Siddharth; Goddard, William A; Kober, Edward M
2009-10-08
We report molecular dynamics (MD) simulations using the first-principles-based ReaxFF reactive force field to study the thermal decomposition of 1,3,5-triamino-2,4,6-trinitrobenzene (TATB) and octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine (HMX) at various densities and temperatures. TATB is known to produce a large amount (15-30%) of high-molecular-weight carbon clusters, whereas detonation of nitramines such as HMX and RDX (1,3,5-trinitroperhydro-1,3,5-triazine) generate predominantly low-molecular-weight products. In agreement with experimental observation, these simulations predict that TATB decomposition quickly (by 30 ps) initiates the formation of large carbonaceous clusters (more than 4000 amu, or approximately 15-30% of the total system mass), and HMX decomposition leads almost exclusively to small-molecule products. We find that HMX decomposes readily on this time scale at lower temperatures, for which the decomposition rate of TATB is about an order of magnitude slower. Analyzing the ReaxFF MD results leads to the detailed atomistic structure of this carbon-rich phase of TATB and allows characterization of the kinetics and chemistry related to this phase and their dependence on system density and temperature. The carbon-rich phase formed from TATB contains mainly polyaromatic rings with large oxygen content, leading to graphitic regions. We use these results to describe the initial reaction steps of thermal decomposition of HMX and TATB in terms of the rates for forming primary and secondary products, allowing comparison to experimentally derived models. These studies show that MD using the ReaxFF reactive force field provides detailed atomistic information that explains such macroscopic observations as the dramatic difference in carbon cluster formation between TATB and HMX. 
This shows that ReaxFF MD captures the fundamental differences in the mechanisms of such systems and illustrates how ReaxFF may be applied to model complex chemical phenomena in energetic materials. The studies here illustrate this for modestly sized systems and modest periods; however, ReaxFF calculations of reactive processes have already been reported on systems with approximately 10^6 atoms. Thus, with suitable computational facilities, one can study the atomistic-level chemical processes in complex systems under extreme conditions.
Exploration of Data Fusion between Polarimetric Radar and Multispectral Image Data
2012-09-01
NASA Technical Reports Server (NTRS)
Storaasli, Olaf O. (Editor); Housner, Jerrold M. (Editor)
1993-01-01
Computing speed is leaping forward by several orders of magnitude each decade. Engineers and scientists gathered at a NASA Langley symposium to discuss these exciting trends as they apply to parallel computational methods for large-scale structural analysis and design. Among the topics discussed were: large-scale static analysis; dynamic, transient, and thermal analysis; domain decomposition (substructuring); and nonlinear and numerical methods.
ERIC Educational Resources Information Center
Gillet, Louis
1971-01-01
Psychological and educational measurement is carried out according to the type of model used and the data collected. The H entropy, which shows the dispersion of the data, can be divided into intragroup and intergroup entropy. Choice of colors, sociometric choice, and communications are three situations where this decomposition can be applied. (MF)
Cai, C; Rodet, T; Legoupil, S; Mohammad-Djafari, A
2013-11-01
Dual-energy computed tomography (DECT) makes it possible to obtain two basis-material fractions without segmentation: the soft-tissue-equivalent water fraction and the hard-matter-equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models that do not account for beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). Existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models accounting for beam polychromaticity show great potential for giving accurate fraction images. This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high-quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections, without taking the negative log. Following Bayesian inference, the decomposition fractions and the observation variance are estimated using joint maximum a posteriori (MAP) estimation. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is simplified into a single estimation problem, transforming the joint MAP estimation into a minimization problem with a nonquadratic cost function. To solve it, a monotone conjugate gradient algorithm with suboptimal descent steps is proposed. The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials.
Accurate spectrum information about the source-detector system is also necessary; when dealing with experimental data, the spectrum can be predicted by a Monte Carlo simulator. For the materials between water and bone, separation errors of less than 5% are observed on the estimated decomposition fractions. The proposed approach is a statistical reconstruction approach based on a nonlinear forward model accounting for the full beam polychromaticity, applied directly to the projections without taking the negative log. Compared with approaches based on linear forward models and with BHA correction approaches, it has advantages in noise robustness and reconstruction accuracy.
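The role of the polychromatic forward model can be seen in a toy Beer-Lambert computation: with an energy-dependent attenuation coefficient, the negative-log projection grows sub-linearly with thickness, which is exactly the beam hardening that a linear model misattributes. The spectrum weights and mu values below are illustrative only, not calibrated.

```python
import numpy as np

E_weights = np.array([0.2, 0.5, 0.3])   # normalized source spectrum (toy)
mu = np.array([1.0, 0.6, 0.3])          # attenuation per cm at each energy

def measured_logatt(thickness):
    """Negative-log projection through a polychromatic beam."""
    I = np.sum(E_weights * np.exp(-mu * thickness))
    return -np.log(I)

p1 = measured_logatt(1.0)
p2 = measured_logatt(2.0)
print(p2 < 2 * p1)   # True: sub-linear growth, the beam has "hardened"
```

A monochromatic beam would give p2 = 2*p1 exactly; the shortfall is the nonlinearity the full-spectral model builds into the forward operator instead of correcting afterwards.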
Restrepo-Agudelo, Sebastian; Roldan-Vasco, Sebastian; Ramirez-Arbelaez, Lina; Cadavid-Arboleda, Santiago; Perez-Giraldo, Estefania; Orozco-Duque, Andres
2017-08-01
Visual inspection is a widely used method for evaluating the surface electromyographic (sEMG) signal during deglutition, a process highly dependent on the examiner's expertise. It is desirable to have a less subjective, automated technique to improve onset detection in swallowing-related muscles, which have a low signal-to-noise ratio. In this work, we acquired sEMG with high baseline noise, measured in the infrahyoid muscles of ten healthy adults during water swallowing tasks. Two methods were applied to find the combination of cutoff frequencies that achieves the most accurate onset detection: a discrete wavelet decomposition based method, and fixed-step variations of the low and high cutoff frequencies of a digital bandpass filter. The Teager-Kaiser energy operator, root mean square and a simple threshold method were applied for both techniques. Results show a narrowing of the effective bandwidth compared with the parameters recommended in the literature for sEMG acquisition. Both a level-3 decomposition with mother wavelet db4 and a bandpass filter with cutoff frequencies between 130 and 180 Hz were optimal for onset detection in the infrahyoid muscles. The proposed methodologies recognized the onset time with predictive power above 0.95, similar to previous findings in larger and more superficial limb muscles. Copyright © 2017 Elsevier Ltd. All rights reserved.
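The Teager-Kaiser-plus-threshold onset detection described above can be sketched as follows. The smoothing window, threshold multiplier and baseline length are illustrative choices, and the burst is synthetic rather than real infrahyoid sEMG.

```python
import numpy as np

def tkeo(x):
    """Teager-Kaiser energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def onset_index(x, fs, k=6.0, baseline=200):
    """Threshold onset detector: mean + k*std of a noise-only baseline
    segment of the smoothed TKEO profile."""
    e = np.convolve(np.abs(tkeo(x)), np.ones(25) / 25, mode="same")
    thr = e[:baseline].mean() + k * e[:baseline].std()
    return np.argmax(e > thr) / fs   # first threshold crossing, in seconds

fs = 1000.0
rng = np.random.default_rng(4)
noise = 0.05 * rng.standard_normal(2000)
burst = np.zeros(2000)
burst[800:1400] = np.sin(2 * np.pi * 150 * np.arange(600) / fs)  # "muscle" burst
t_on = onset_index(noise + burst, fs)
print(abs(t_on - 0.8) < 0.05)   # onset detected near 0.8 s
```

The TKEO amplifies the amplitude-frequency product of the burst relative to the baseline noise, which is why it is favoured for low signal-to-noise onset detection.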
Light-induced protein nitration and degradation with HONO emission
NASA Astrophysics Data System (ADS)
Meusel, Hannah; Elshorbany, Yasin; Kuhn, Uwe; Bartels-Rausch, Thorsten; Reinmuth-Selzle, Kathrin; Kampf, Christopher J.; Li, Guo; Wang, Xiaoxiang; Lelieveld, Jos; Pöschl, Ulrich; Hoffmann, Thorsten; Su, Hang; Ammann, Markus; Cheng, Yafang
2017-10-01
Proteins can be nitrated by air pollutants (NO2), enhancing their allergenic potential. This work provides insight into protein nitration and subsequent decomposition in the presence of solar radiation. We also investigated light-induced formation of nitrous acid (HONO) from protein surfaces that were nitrated either online, with instantaneous gas-phase exposure to NO2, or offline, by an efficient nitration agent (tetranitromethane, TNM). Bovine serum albumin (BSA) and ovalbumin (OVA) were used as model proteins. Nitration degrees of about 1% were obtained when applying NO2 concentrations of 100 ppb under VIS/UV-illuminated conditions, while simultaneous decomposition of (nitrated) proteins was also found during long-term (20 h) irradiation exposure. Measurements of gas exchange on TNM-nitrated proteins revealed that HONO can be formed and released even without the contribution of instantaneous heterogeneous NO2 conversion. NO2 exposure was found to increase HONO emissions substantially. In particular, a strong dependence of HONO emissions on light intensity, relative humidity, NO2 concentration and the applied coating thickness was found. The 20 h long-term studies revealed sustained HONO formation, even when concentrations of the intact (nitrated) proteins were too low to be detected after the gas exchange measurements. A reaction mechanism for the NO2 conversion based on Langmuir-Hinshelwood kinetics is proposed.
Subband/transform functions for image processing
NASA Technical Reports Server (NTRS)
Glover, Daniel
1993-01-01
Functions for image data processing written for use with the MATLAB(TM) software package are presented. These functions provide the capability to transform image data with block transformations (such as the Walsh-Hadamard) and to produce spatial frequency subbands of the transformed data. Block transforms are equivalent to simple subband systems. The transform coefficients are reordered using a simple permutation to give subbands. The low frequency subband is a low resolution version of the original image, while the higher frequency subbands contain edge information. The transform functions can be cascaded to provide further decomposition into more subbands. If the cascade is applied to all four of the first stage subbands (in the case of a four band decomposition), then a uniform structure of sixteen bands is obtained. If the cascade is applied only to the low frequency subband, an octave structure of seven bands results. Functions for the inverse transforms are also given. These functions can be used for image data compression systems. The transforms do not in themselves produce data compression, but prepare the data for quantization and compression. Sample quantization functions for subbands are also given. A typical compression approach is to subband the image data, quantize it, then use statistical coding (e.g., run-length coding followed by Huffman coding) for compression. Contour plots of image data and subbanded data are shown.
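The block-transform/subband scheme described above can be illustrated with the 2-point Walsh-Hadamard kernel (a Python sketch of the general idea, not the MATLAB functions themselves; `subband2x2`/`inverse2x2` are illustrative names):

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)  # 2-point Walsh-Hadamard, orthogonal

def subband2x2(img):
    """Apply a 2x2 block Walsh-Hadamard transform, then permute the
    coefficients into four subbands: LL (low-res image), LH, HL, HH (edges)."""
    r, c = img.shape
    blocks = img.reshape(r // 2, 2, c // 2, 2).transpose(0, 2, 1, 3)  # (R, C, 2, 2)
    coef = H @ blocks @ H.T            # transform every block
    return {"LL": coef[:, :, 0, 0], "LH": coef[:, :, 0, 1],
            "HL": coef[:, :, 1, 0], "HH": coef[:, :, 1, 1]}

def inverse2x2(bands):
    """Invert the permutation and the block transform (H is orthogonal)."""
    coef = np.stack([np.stack([bands["LL"], bands["LH"]], -1),
                     np.stack([bands["HL"], bands["HH"]], -1)], -2)
    blocks = H.T @ coef @ H
    R, C = blocks.shape[:2]
    return blocks.transpose(0, 2, 1, 3).reshape(2 * R, 2 * C)

img = np.arange(16.0).reshape(4, 4)
bands = subband2x2(img)
assert np.allclose(inverse2x2(bands), img)   # perfect reconstruction
```

Cascading `subband2x2` on the LL band (or on all four bands) gives the octave and uniform structures mentioned in the abstract.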
Scattering property based contextual PolSAR speckle filter
NASA Astrophysics Data System (ADS)
Mullissa, Adugna G.; Tolpekin, Valentyn; Stein, Alfred
2017-12-01
Reliability of the scattering model based polarimetric SAR (PolSAR) speckle filter depends upon the accurate decomposition and classification of the scattering mechanisms. This paper presents an improved scattering property based contextual speckle filter based upon an iterative classification of the scattering mechanisms. It applies a Cloude-Pottier eigenvalue-eigenvector decomposition and a fuzzy H/α classification to determine the scattering mechanisms on a pre-estimate of the coherency matrix. The H/α classification identifies pixels with homogeneous scattering properties. A coarse pixel selection rule groups pixels that are either single bounce, double bounce or volume scatterers. A fine pixel selection rule is applied to pixels within each canonical scattering mechanism. We filter the PolSAR data and depending on the type of image scene (urban or rural) use either the coarse or fine pixel selection rule. Iterative refinement of the Wishart H/α classification reduces the speckle in the PolSAR data. Effectiveness of this new filter is demonstrated by using both simulated and real PolSAR data. It is compared with the refined Lee filter, the scattering model based filter and the non-local means filter. The study concludes that the proposed filter compares favorably with other polarimetric speckle filters in preserving polarimetric information, point scatterers and subtle features in PolSAR data.
NASA Astrophysics Data System (ADS)
Clayton, J. D.
2017-02-01
A theory of deformation of continuous media based on concepts from Finsler differential geometry is presented. The general theory accounts for finite deformations, nonlinear elasticity, and changes in internal state of the material, the latter represented by elements of a state vector of generalized Finsler space whose entries consist of one or more order parameter(s). Two descriptive representations of the deformation gradient are considered. The first invokes an additive decomposition and is applied to problems involving localized inelastic deformation mechanisms such as fracture. The second invokes a multiplicative decomposition and is applied to problems involving distributed deformation mechanisms such as phase transformations or twinning. Appropriate free energy functions are posited for each case, and Euler-Lagrange equations of equilibrium are derived. Solutions are obtained for specific problems of tensile fracture of an elastic cylinder and for amorphization of a crystal under spherical and uniaxial compression. The Finsler-based approach is demonstrated to be more general and potentially more physically descriptive than existing hyperelasticity models couched in Riemannian geometry or Euclidean space, without incorporation of supplementary ad hoc equations or spurious fitting parameters. Predictions for single crystals of boron carbide ceramic agree qualitatively, and in many instances quantitatively, with results from physical experiments and atomic simulations involving structural collapse and failure of the crystal along its c-axis.
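The two representations of the deformation gradient mentioned above are, in their generic form (a sketch with elastic part F^E and inelastic/internal part F^I; the paper's Finsler-space versions carry additional state-vector dependence and are richer than this):

```latex
F = F^{E} + F^{I} \qquad \text{(additive split: localized mechanisms, e.g. fracture)}
```
```latex
F = F^{E} F^{I}, \qquad \det F = \det F^{E}\,\det F^{I} \qquad \text{(multiplicative split: distributed mechanisms, e.g. phase transformation or twinning)}
```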
Efficient anisotropic quasi-P wavefield extrapolation using an isotropic low-rank approximation
NASA Astrophysics Data System (ADS)
Zhang, Zhen-dong; Liu, Yike; Alkhalifah, Tariq; Wu, Zedong
2018-04-01
The computational cost of quasi-P wave extrapolation depends on the complexity of the medium, and specifically the anisotropy. Our effective-model method splits the anisotropic dispersion relation into an isotropic background and a correction factor to handle this dependency. The correction term depends on the slope (measured using the gradient) of current wavefields and the anisotropy. As a result, the computational cost is independent of the nature of anisotropy, which makes the extrapolation efficient. A dynamic implementation of this approach decomposes the original pseudo-differential operator into a Laplacian, handled using the low-rank approximation of the spectral operator, plus an angular dependent correction factor applied in the space domain to correct for anisotropy. We analyse the role played by the correction factor and propose a new spherical decomposition of the dispersion relation. The proposed method provides accurate wavefields in phase and more balanced amplitudes than a previous spherical decomposition. Also, it is free of SV-wave artefacts. Applications to a simple homogeneous transverse isotropic medium with a vertical symmetry axis (VTI) and a modified Hess VTI model demonstrate the effectiveness of the approach. The Reverse Time Migration applied to a modified BP VTI model reveals that the anisotropic migration using the proposed modelling engine performs better than an isotropic migration.
Component separation for cosmic microwave background radiation
NASA Astrophysics Data System (ADS)
Fernández-Cobos, R.; Vielva, P.; Barreiro, R. B.; Martínez-González, E.
2011-11-01
Cosmic microwave background (CMB) radiation data obtained by different experiments contain, besides the desired signal, a superposition of microwave sky contributions: on the one hand, synchrotron radiation, free-free emission and re-emission from dust clouds in our galaxy; on the other, extragalactic sources. We present an analytical method, using a wavelet decomposition on the sphere, to recover the CMB signal from microwave maps. Applied to both temperature and polarization data, it proves to be a powerful tool, particularly in heavily contaminated regions of the sky. The applied wavelet has the advantages of requiring little computing time, being adapted to the HEALPix pixelization scheme (the format in which the community reports CMB data) and offering the possibility of multi-resolution analysis. The decomposition is implemented as part of a template fitting method that minimizes the variance of the resulting map. The method was tested on simulations of WMAP data with positive results: improvements of up to 12% in the variance of the resulting full-sky map and about 3% in less contaminated regions. Finally, we also present preliminary results with WMAP data in the form of an angular cross power spectrum C_ℓ^{TE}, consistent with the spectrum reported by the WMAP team.
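The variance-minimizing template fit reduces, in its simplest one-template scalar form, to a least-squares coefficient (a toy sketch; the paper fits in wavelet space with several foreground templates):

```python
import numpy as np

rng = np.random.default_rng(1)
npix = 10_000
cmb = rng.standard_normal(npix)        # stand-in for the CMB signal
template = rng.standard_normal(npix)   # foreground template (e.g. dust)
observed = cmb + 2.5 * template        # sky map = CMB + a * template

# The variance of (observed - a*template) is minimized by the least-squares
# coefficient a = <observed, template> / <template, template>.
a = observed @ template / (template @ template)
cleaned = observed - a * template      # foreground-subtracted map
```

With uncorrelated signal and template, `a` recovers the injected coefficient 2.5 up to a sampling error of order 1/sqrt(npix), and the cleaned map's variance drops toward that of the CMB alone.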
NASA Astrophysics Data System (ADS)
Lassoued, R.; Lecheheb, M.; Bonnet, G.
2012-08-01
This paper describes an analytical method for the wave field induced by a moving load on a periodically supported beam. The Green's function for an Euler beam without supports is evaluated by direct integration. The supports are then introduced into the model using the superposition principle, which states that the responses from all the sleeper points and from the external point force add up linearly to give the total response. The periodicity of the supports is described by Bloch's theorem. The homogeneous system thus obtained is a linear differential equation governing the rail response. It is first solved in the homogeneous case, where it admits a non-trivial solution only if its determinant vanishes; this yields the dispersion equation for Bloch waves and the wave bands. The Bloch waves and dispersion curves contain all the physics of the dynamic problem, and the wave field induced by a dynamic load applied to the system is finally obtained by decomposition into Bloch waves, analogous to the usual decomposition into dynamic modes of a finite structure. The method is applied to obtain the field induced by a load moving at constant velocity on a thin beam resting on periodic elastic supports.
Martinez, Sara; Marchamalo, Miguel; Alvarez, Sergio
2018-03-15
Wood has been presented as a carbon-neutral material capable of contributing significantly to climate change mitigation and has become an appealing option for the building sector. This paper presents the quantification of the organizational environmental footprint of a wood parquet company. The multi-regional input-output (MRIO) database EXIOBASE was used, with a further structural path analysis decomposition. The application of the proposed method quantifies 14 environmental impacts. Highly influential sectors and regions responsible for these impacts are assessed in order to propose efficient measures. For the parquet company studied, the highest impact category once normalized was ozone depletion, and the dominant sectors responsible for this impact were the chemical industries of Spain and China. The structural path decomposition related to ozone depletion revealed that the indirect impacts embedded in the supply chain are higher than the direct impacts. It can be concluded that the assessment of the organizational environmental footprint can be carried out by applying this well-structured and robust method. Its implementation will enable tracking of environmental burdens through a company's supply chain at a global scale and provide information for the adoption of environmental strategies. Copyright © 2017 Elsevier B.V. All rights reserved.
Spatial, temporal, and hybrid decompositions for large-scale vehicle routing with time windows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bent, Russell W
This paper studies the use of decomposition techniques to quickly find high-quality solutions to large-scale vehicle routing problems with time windows. It considers an adaptive decomposition scheme which iteratively decouples a routing problem based on the current solution. Earlier work considered vehicle-based decompositions that partition the vehicles across the subproblems. The subproblems can then be optimized independently and merged easily. This paper argues that vehicle-based decompositions, although very effective on various problem classes, also have limitations. In particular, they do not accommodate temporal decompositions and may produce spatial decompositions that are not focused enough. This paper then proposes customer-based decompositions, which generalize vehicle-based decouplings and allow for focused spatial and temporal decompositions. Experimental results on class R2 of the extended Solomon benchmarks demonstrate the benefits of the customer-based adaptive decomposition scheme and its spatial, temporal, and hybrid instantiations. In particular, they show that customer-based decompositions bring significant benefits over large neighborhood search, in contrast to vehicle-based decompositions.
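A customer-based spatial decoupling can be sketched as a partition of customers into angular sectors around the depot (a minimal stand-in for the paper's adaptive scheme; `spatial_decompose` and the sector rule are illustrative, not the authors' algorithm):

```python
import math

def spatial_decompose(customers, depot, n_sectors=4):
    """Customer-based spatial decomposition: assign each customer to an
    angular sector around the depot, so each sector can be routed
    independently as a smaller subproblem and the routes merged afterwards."""
    sectors = [[] for _ in range(n_sectors)]
    width = 2 * math.pi / n_sectors
    for cid, (x, y) in customers.items():
        theta = math.atan2(y - depot[1], x - depot[0]) % (2 * math.pi)
        sectors[int(theta / width)].append(cid)
    return sectors

depot = (0.0, 0.0)
customers = {1: (1, 1), 2: (-1, 1), 3: (-1, -1), 4: (1, -1), 5: (2, 0.5)}
parts = spatial_decompose(customers, depot)
# every customer lands in exactly one subproblem
assert sorted(c for s in parts for c in s) == [1, 2, 3, 4, 5]
```

A temporal analogue would bucket customers by time-window start instead of angle; the paper's hybrid scheme adapts the partition to the current solution rather than fixing it a priori.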
Standing wave contributions to the linear interference effect in stratosphere-troposphere coupling
NASA Astrophysics Data System (ADS)
Watt-Meyer, Oliver; Kushner, Paul
2014-05-01
A body of literature by Hayashi and others [Hayashi 1973, 1977, 1979; Pratt, 1976] developed a decomposition of the wavenumber-frequency spectrum into standing and travelling waves. These techniques directly decompose the power spectrum (that is, the squared amplitudes) into standing and travelling parts and therefore, incorrectly, do not allow for a term representing the covariance between these waves. We propose a simple decomposition based on the 2D Fourier transform which allows one to directly compute the variance of the standing and travelling waves, as well as the covariance between them. Applying this decomposition to geopotential height anomalies in the Northern Hemisphere winter, we show the dominance of standing waves for planetary wavenumbers 1 through 3, especially in the stratosphere, and that wave-1 anomalies have a significant westward-travelling component in the high-latitude (60N to 80N) troposphere. Variations in the relative zonal phasing between a wave anomaly and the background climatological wave pattern (the "linear interference" effect) are known to explain a large part of the planetary wave driving of the polar stratosphere in both hemispheres. While the linear interference effect is robust across observations, models of varying degrees of complexity, and responses to various types of perturbations, it is not well understood dynamically. We use the above-described decomposition into standing and travelling waves to investigate the drivers of linear interference. We find that the linear part of the wave activity flux is primarily driven by the standing waves at all vertical levels. This can be understood by noting that the longitudinal positions of the antinodes of the standing waves are typically close to alignment with the maximum and minimum of the background climatology. We discuss implications for the predictability of the wave activity flux, and hence of polar vortex strength variability.
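The 2D-Fourier view of standing versus travelling waves can be checked numerically: a pure standing wave splits into equal eastward- and westward-travelling amplitudes, while a travelling wave has only one (a minimal sketch; the index conventions and the `min` pairing are assumptions of this illustration, not the paper's code):

```python
import numpy as np

nx, nt = 64, 128
x = np.arange(nx) * 2 * np.pi / nx
t = np.arange(nt) * 2 * np.pi / nt
X, T = np.meshgrid(x, t, indexing="ij")

def travelling_amplitudes(field, k=1, w=1):
    """2D FFT over (x, t); the coefficients at (k, -w) and (k, +w) give the
    eastward- and westward-travelling components of zonal wavenumber k.
    The co-travelling pair of equal amplitude forms the standing part."""
    F = np.fft.fft2(field) / field.size
    east = 2 * abs(F[k, -w % nt])   # phase kx - wt
    west = 2 * abs(F[k, w])         # phase kx + wt
    standing = 2 * min(east, west)
    return east, west, standing

# cos(x)cos(t) = 0.5 cos(x - t) + 0.5 cos(x + t): equal travelling parts
east, west, _ = travelling_amplitudes(np.cos(X) * np.cos(T))
assert abs(east - 0.5) < 1e-9 and abs(west - 0.5) < 1e-9

# A purely eastward wave has no westward component
east, west, _ = travelling_amplitudes(np.cos(X - T))
assert abs(east - 1.0) < 1e-9 and west < 1e-9
```

Because the split is done on complex coefficients rather than on the power spectrum, the covariance between the standing and travelling parts remains accessible, which is the point made in the abstract.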
NASA Astrophysics Data System (ADS)
Lohmann, Timo
Electric sector models are powerful tools that guide policy makers and stakeholders. Long-term power generation expansion planning models are a prominent example and determine a capacity expansion for an existing power system over a long planning horizon. With the changes in the power industry away from monopolies and regulation, the focus of these models has shifted to competing electric companies maximizing their profit in a deregulated electricity market. In recent years, consumers have started to participate in demand response programs, actively influencing electricity load and price in the power system. We introduce a model that features investment and retirement decisions over a long planning horizon of more than 20 years, as well as an hourly representation of day-ahead electricity markets in which sellers of electricity face buyers. This combination makes our model both unique and challenging to solve. Decomposition algorithms, and especially Benders decomposition, can exploit the model structure. We present a novel method that can be seen as an alternative to generalized Benders decomposition and relies on dynamic linear overestimation. We prove its finite convergence and present computational results, demonstrating its superiority over traditional approaches. In certain special cases of our model, all necessary solution values in the decomposition algorithms can be calculated directly, and solving mathematical programming problems becomes entirely obsolete. This leads to highly efficient algorithms that drastically outperform their programming-problem-based counterparts. Furthermore, we discuss the implementation of all tailored algorithms and the challenges from a modeling software developer's standpoint, providing an insider's look into the modeling language GAMS. Finally, we apply our model to the Texas power system and design two electricity policies motivated by the U.S. Environmental Protection Agency's recently proposed CO2 emissions targets for the power sector.
Decomposition of blackberry and broomsedge bluestem as influenced by ozone
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, J.S.; Chappelka, A.H.; Miller-Goodman, M.S.
Many researchers have reported on individual plant responses to O3, but few have investigated the effects of this pollutant on ecosystem function. This investigation examined the influence of O3 on short-term (Phase 1) litter decomposition of blackberry (Rubus cuneifolus Pursh.) and broomsedge bluestem (Andropogon virginicus L.), two plant species native to early successional forest communities in the southern US. Mixed blackberry/broomsedge litter (1:1) collected from plants exposed to different levels of O3 for one growing season was placed in open-top chambers and exposed to different O3 treatment levels for 24 weeks. Litter was also incubated in microcosms in the laboratory at 25 or 30 °C to determine the effects of climate change on O3-treated litter. Initial C and N concentrations of the collected foliage did not differ significantly among treatments for either species. Blackberry litter had approximately twice as much N as broomsedge and, when collected from 2X O3 chambers, had significantly greater permanganate lignin than the other treatments. Initial permanganate lignin concentration of blackberry, over all O3 treatments, correlated significantly with the remaining mass of the litter mixture after 24 weeks of exposure. Litter decomposed more slowly in the 2X chambers than in the other treatment chambers, regardless of litter source. Elevated O3-exposed litter (2X) decomposed the slowest regardless of the treatment applied. Significant temperature and time effects on litter decomposition were observed: litter incubated at 30 °C decomposed faster than at 25 °C. The data suggest O3 may influence substrate quality and microbial activity, thus reducing the rate of litter decomposition in early successional forest communities.
The decomposition of deformation: New metrics to enhance shape analysis in medical imaging.
Varano, Valerio; Piras, Paolo; Gabriele, Stefano; Teresi, Luciano; Nardinocchi, Paola; Dryden, Ian L; Torromeo, Concetta; Puddu, Paolo E
2018-05-01
In landmark-based shape analysis, size is measured, in most cases, with centroid size. Changes in shape are decomposed into affine and non-affine components, and the non-affine component can in turn be decomposed into a series of local deformations (partial warps). If the extent of deformation between two shapes is small, the difference between centroid size and m-volume increment is barely appreciable. In medical imaging applied to soft tissues, however, bodies can undergo very large deformations involving large changes in size; in the cardiac example analyzed in the present paper, changes in m-volume can reach 60%. We show here that standard geometric morphometrics tools (landmarks, the thin plate spline, and the related decomposition of the deformation) can be generalized to better describe the very large deformations of biological tissues without losing a synthetic description. In particular, the classical decomposition of the space tangent to the shape space into affine and non-affine components is enriched to also include the change in size, in order to give a complete description of the tangent space to the size-and-shape space. The proposed generalization is formulated by means of a new Riemannian metric describing the change in size as a change in m-volume rather than a change in centroid size. This leads to a redefinition of some aspects of Kendall's size-and-shape space without losing Kendall's original formulation. The new formulation is discussed by means of simulated examples using 2D and 3D Platonic shapes as well as a real example from clinical 3D echocardiographic data. We demonstrate that our decomposition-based approaches discriminate very effectively between healthy subjects and patients affected by hypertrophic cardiomyopathy. Copyright © 2018 Elsevier B.V. All rights reserved.
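The divergence between centroid size and m-volume under large deformations is easy to see numerically (a 2D toy example; `centroid_size` and `m_volume_2d` are illustrative helpers, not the paper's implementation):

```python
import numpy as np

def centroid_size(L):
    """Kendall's centroid size: sqrt of summed squared distances to the centroid."""
    return np.sqrt(((L - L.mean(axis=0)) ** 2).sum())

def m_volume_2d(L):
    """Shoelace area of an ordered 2D landmark polygon (the m-volume for m = 2)."""
    x, y = L[:, 0], L[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
tall = square * [1.0, 1.6]   # anisotropic stretch, a stand-in for a large deformation

# A 60% increase in m-volume...
assert np.isclose(m_volume_2d(tall) / m_volume_2d(square), 1.6)
# ...is not a 60% increase in centroid size: the two size measures diverge.
ratio = centroid_size(tall) / centroid_size(square)
```

For small deformations the two ratios nearly coincide, which is why the distinction only matters in the large-deformation regime discussed in the abstract.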
Fan, Xing; Li, Jian; Qiu, Danqi; Zhu, Tianle
2018-04-01
Effects of carrier gas composition (N2/air) on NH3 production, energy efficiency regarding NH3 production and byproduct formation from plasma-catalytic decomposition of urea were systematically investigated using an Al2O3-packed dielectric barrier discharge (DBD) reactor at room temperature. Results show that the presence of O2 in the carrier gas accelerates the conversion of urea but leads to less generation of NH3. The final yield of NH3 in the gas phase decreased from 70.5%, 78.7%, 66.6% and 67.2% to 54.1%, 51.7%, 49.6% and 53.4% for applied voltages of 17, 19, 21 and 23 kV, respectively, when air was used as the carrier gas instead of N2. From the viewpoint of energy savings, however, air carrier gas is better than N2 due to reduced energy consumption and increased energy efficiency for decomposition of a fixed amount of urea. Carrier gas composition has little influence on the major decomposition pathways of urea under the synergetic effects of plasma and the Al2O3 catalyst, which give NH3 and CO2 as the main products. Compared to the small amount of N2O formed with N2 as the carrier gas, however, more byproducts, including N2O and NO2 in the gas phase and NH4NO3 in solid deposits, were produced with air as the carrier gas, probably due to the unproductive consumption of NH3, the possible intermediate HNCO and even urea by the abundant active oxygen species and nitrogen oxides generated in the air-DBD plasma. Copyright © 2017. Published by Elsevier B.V.
Quantitative evaluation of muscle synergy models: a single-trial task decoding approach
Delis, Ioannis; Berret, Bastien; Pozzo, Thierry; Panzeri, Stefano
2013-01-01
Muscle synergies, i.e., invariant coordinated activations of groups of muscles, have been proposed as building blocks that the central nervous system (CNS) uses to construct the patterns of muscle activity utilized for executing movements. Several efficient dimensionality reduction algorithms that extract putative synergies from electromyographic (EMG) signals have been developed. Typically, the quality of synergy decompositions is assessed by computing the Variance Accounted For (VAF). Yet, little is known about the extent to which the combination of those synergies encodes task-discriminating variations of muscle activity in individual trials. To address this question, here we develop a novel computational framework to evaluate muscle synergy decompositions in task space. Unlike previous methods considering the total variance of muscle patterns (VAF-based metrics), our approach focuses on the variance discriminating the execution of different tasks. The procedure is based on single-trial task decoding from muscle synergy activation features. The task-decoding metric quantitatively evaluates the mapping between synergy recruitment and task identification and automatically determines the minimal number of synergies that captures all the task-discriminating variability in the synergy activations. In this paper, we first validate the method on plausibly simulated EMG datasets. We then show that it can be applied to different types of muscle synergy decomposition and illustrate its applicability to real data by using it for the analysis of EMG recordings during an arm pointing task. We find that time-varying and synchronous synergies with a similar number of parameters are equally efficient in task decoding, suggesting that in this experimental paradigm they are equally valid representations of muscle synergies. Overall, these findings stress the effectiveness of the decoding metric in systematically assessing muscle synergy decompositions in task space.
PMID:23471195
Yin, Xiao-Li; Gu, Hui-Wen; Liu, Xiao-Lu; Zhang, Shan-Hui; Wu, Hai-Long
2018-03-05
Multiway calibration in combination with spectroscopic techniques is an attractive tool for online or real-time monitoring of target analyte(s) in complex samples. However, choosing a suitable multiway calibration method for the resolution of spectroscopic-kinetic data is a challenging problem in practical applications. In this work, for the first time, three-way and four-way fluorescence-kinetic data arrays were generated during the real-time monitoring of the hydrolysis of irinotecan (CPT-11) in human plasma by excitation-emission matrix fluorescence. Alternating normalization-weighted error (ANWE) and alternating penalty trilinear decomposition (APTLD) were used as three-way calibration for the decomposition of the three-way kinetic data array, whereas alternating weighted residual constraint quadrilinear decomposition (AWRCQLD) and alternating penalty quadrilinear decomposition (APQLD) were applied as four-way calibration to the four-way kinetic data array. The quantitative results of the two kinds of calibration models were fully compared in terms of predicted real-time concentrations, spiked recoveries of initial concentration, and analytical figures of merit. The comparison demonstrated that both three-way and four-way calibration models can achieve real-time quantitative analysis of the hydrolysis of CPT-11 in human plasma under certain conditions, though each has critical advantages and shortcomings during dynamic analysis. The conclusions obtained in this paper provide helpful guidance for the reasonable selection of multiway calibration models for the real-time quantitative analysis of target analyte(s) in complex dynamic systems. Copyright © 2017 Elsevier B.V. All rights reserved.
Li, Xiaoyan; Holobar, Ales; Gazzoni, Marco; Merletti, Roberto; Rymer, William Zev; Zhou, Ping
2015-05-01
Recent advances in high-density surface electromyogram (EMG) decomposition have made it a feasible task to discriminate single motor unit activity from surface EMG interference patterns, thus providing a noninvasive approach for examination of motor unit control properties. In the current study, we applied high-density surface EMG recording and decomposition techniques to assess motor unit firing behavior alterations poststroke. Surface EMG signals were collected using a 64-channel 2-D electrode array from the paretic and contralateral first dorsal interosseous (FDI) muscles of nine hemiparetic stroke subjects at different isometric discrete contraction levels from 2 to 10 N with a 2 N increment step. Motor unit firing rates were extracted through decomposition of the high-density surface EMG signals and compared between paretic and contralateral muscles. Across the nine tested subjects, paretic FDI muscles showed decreased motor unit firing rates compared with contralateral muscles at different contraction levels. Regression analysis indicated a linear relation between the mean motor unit firing rate and the muscle contraction level for both paretic and contralateral muscles (p < 0.001), with the former demonstrating a lower increment rate (0.32 pulses per second (pps)/N) compared with the latter (0.67 pps/N). The coefficient of variation (averaged over the contraction levels) of the motor unit firing rates for the paretic muscles (0.21 ± 0.012) was significantly higher than for the contralateral muscles (0.17 ± 0.014) (p < 0.05). This study provides direct evidence of motor unit firing behavior alterations poststroke using surface EMG, which can be an important factor contributing to hemiparetic muscle weakness.
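The reported slope and variability measures can be reproduced on synthetic numbers (illustrative data consistent with the abstract's 0.32 and 0.67 pps/N slopes, not the study's recordings; the intercept and noise level are assumptions):

```python
import numpy as np

force = np.array([2.0, 4.0, 6.0, 8.0, 10.0])   # contraction levels (N)
rng = np.random.default_rng(2)
# Mean firing rates: linear in force, as found by the regression analysis
paretic = 8.0 + 0.32 * force + 0.05 * rng.standard_normal(5)
contra = 8.0 + 0.67 * force + 0.05 * rng.standard_normal(5)

slope_p = np.polyfit(force, paretic, 1)[0]   # ~0.32 pps/N
slope_c = np.polyfit(force, contra, 1)[0]    # ~0.67 pps/N

def cov(x):
    """Coefficient of variation: std / mean. (In the study, the CoV is
    computed on motor unit firing rates within each contraction level.)"""
    return x.std() / x.mean()
```

The fitted slopes recover the generating values, mirroring how the abstract's increment rates were obtained from regression over contraction levels.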
Cui, Lingli; Wu, Na; Wang, Wenjing; Kang, Chenhui
2014-01-01
This paper presents a new method for a composite dictionary matching pursuit algorithm, which is applied to vibration sensor signal feature extraction and fault diagnosis of a gearbox. Three advantages are highlighted in the new method. First, the composite dictionary in the algorithm has been changed from multi-atom matching to single-atom matching. Compared to non-composite dictionary single-atom matching, the original composite dictionary multi-atom matching pursuit (CD-MaMP) algorithm can achieve noise reduction in the reconstruction stage, but it cannot dramatically reduce the computational cost and improve the efficiency in the decomposition stage. Therefore, the optimized composite dictionary single-atom matching algorithm (CD-SaMP) is proposed. Second, the termination condition of iteration based on the attenuation coefficient is put forward to improve the sparsity and efficiency of the algorithm, which adjusts the parameters of the termination condition constantly in the process of decomposition to avoid noise. Third, composite dictionaries are enriched with the modulation dictionary, which is one of the important structural characteristics of gear fault signals. Meanwhile, the termination condition of iteration settings, sub-feature dictionary selections and operation efficiency between CD-MaMP and CD-SaMP are discussed, aiming at gear simulation vibration signals with noise. The simulation sensor-based vibration signal results show that the termination condition of iteration based on the attenuation coefficient enhances decomposition sparsity greatly and achieves a good effect of noise reduction. Furthermore, the modulation dictionary achieves a better matching effect compared to the Fourier dictionary, and CD-SaMP has a great advantage of sparsity and efficiency compared with the CD-MaMP. 
The sensor-based vibration signals measured from practical engineering gearbox analyses have further shown that the CD-SaMP decomposition and reconstruction algorithm is feasible and effective. PMID:25207870
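A single-atom matching pursuit with an attenuation-based stopping rule can be sketched as follows (a generic illustration; the dictionary contents, `alpha` and atom choices are assumptions, not the CD-SaMP implementation):

```python
import numpy as np

def matching_pursuit(x, D, alpha=0.05, max_iter=50):
    """Greedy single-atom matching pursuit over dictionary D (unit-norm atoms
    as columns). Iteration stops when the attenuation coefficient, i.e. the
    relative drop in residual energy, falls below `alpha`."""
    r = x.copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(max_iter):
        e_before = r @ r
        if e_before < 1e-20:
            break
        corr = D.T @ r
        k = np.argmax(np.abs(corr))        # best-matching single atom
        r = r - corr[k] * D[:, k]
        coeffs[k] += corr[k]
        if (e_before - r @ r) / e_before < alpha:   # attenuation-based stop
            break
    return coeffs, r

# Fourier-style atoms plus amplitude-modulated atoms (a stand-in for the
# modulation dictionary used for gear-fault signatures).
n = 256
t = np.arange(n) / n
atoms = [np.cos(2 * np.pi * f * t) for f in range(1, 20)]
atoms += [np.cos(2 * np.pi * 40 * t) * np.cos(2 * np.pi * f * t) for f in range(1, 5)]
D = np.stack(atoms, axis=1)
D /= np.linalg.norm(D, axis=0)

signal = 3.0 * D[:, 2] + 1.5 * D[:, 20]    # sparse mix: one tone + one modulated atom
coeffs, resid = matching_pursuit(signal, D)
```

Because each pass selects and subtracts one atom at a time, per-iteration cost is a single matrix-vector product, which is the efficiency argument made for single-atom matching in the abstract.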
Li, Xiaoyan; Holobar, Aleš; Gazzoni, Marco; Merletti, Roberto; Rymer, William Z.; Zhou, Ping
2014-01-01
Recent advances in high density surface electromyogram (EMG) decomposition have made it feasible to discriminate single motor unit activity from surface EMG interference patterns, thus providing a noninvasive approach for examining motor unit control properties. In the current study we applied high density surface EMG recording and decomposition techniques to assess motor unit firing behavior alterations post-stroke. Surface EMG signals were collected using a 64-channel 2-dimensional electrode array from the paretic and contralateral first dorsal interosseous (FDI) muscles of nine hemiparetic stroke subjects at discrete isometric contraction levels from 2 N to 10 N in 2 N increments. Motor unit firing rates were extracted through decomposition of the high density surface EMG signals and compared between paretic and contralateral muscles. Across the nine tested subjects, paretic FDI muscles showed decreased motor unit firing rates compared with contralateral muscles at different contraction levels. Regression analysis indicated a linear relation between the mean motor unit firing rate and the muscle contraction level for both paretic and contralateral muscles (p < 0.001), with the former demonstrating a lower increment rate (0.32 pulses per second (pps)/N) compared with the latter (0.67 pps/N). The coefficient of variation (CoV, averaged over the contraction levels) of the motor unit firing rates for the paretic muscles (0.21 ± 0.012) was significantly higher than for the contralateral muscles (0.17 ± 0.014) (p < 0.05). This study provides direct evidence of motor unit firing behavior alterations post-stroke using surface EMG, which can be an important factor contributing to hemiparetic muscle weakness. PMID:25389239
Application of response surface techniques to helicopter rotor blade optimization procedure
NASA Technical Reports Server (NTRS)
Henderson, Joseph Lynn; Walsh, Joanne L.; Young, Katherine C.
1995-01-01
In multidisciplinary optimization problems, response surface techniques can be used to replace the complex analyses that define the objective function and/or constraints with simple functions, typically polynomials. In this work a response surface is applied to the design optimization of a helicopter rotor blade. In previous work, this problem has been formulated with a multilevel approach. Here, the response surface takes advantage of this decomposition and is used to replace the lower level, a structural optimization of the blade. Problems that were encountered and important considerations in applying the response surface are discussed. Preliminary results are also presented that illustrate the benefits of using the response surface.
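The idea of replacing an expensive lower-level analysis with a polynomial response surface can be sketched as follows. The quadratic objective standing in for the structural analysis, the sample counts, and the grid search are all hypothetical illustration, not the rotor-blade procedure itself:

```python
import numpy as np
from itertools import combinations_with_replacement

def quadratic_design_matrix(X):
    """Columns: 1, x_i, and x_i*x_j (full quadratic basis in d variables)."""
    n, d = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j] for i, j in combinations_with_replacement(range(d), 2)]
    return np.column_stack(cols)

# Stand-in for an expensive lower-level analysis (hypothetical objective).
def expensive_analysis(x):
    return (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2 + 3.0

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(30, 2))            # design points
y = np.array([expensive_analysis(x) for x in X])

# Fit the polynomial response surface by least squares.
beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)

def surrogate(x):
    return quadratic_design_matrix(np.atleast_2d(x)) @ beta

# Optimize the cheap surrogate on a grid instead of the true analysis.
grid = np.stack(np.meshgrid(np.linspace(-2, 2, 81),
                            np.linspace(-2, 2, 81)), -1).reshape(-1, 2)
x_best = grid[np.argmin([surrogate(x) for x in grid])]
```

Since the toy objective is itself quadratic, the fitted surface is exact and the surrogate optimum coincides with the true minimum near (1, -0.5).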
Joly, François-Xavier; Milcu, Alexandru; Scherer-Lorenzen, Michael; Jean, Loreline-Katia; Bussotti, Filippo; Dawud, Seid Muhie; Müller, Sandra; Pollastrini, Martina; Raulund-Rasmussen, Karsten; Vesterdal, Lars; Hättenschwiler, Stephan
2017-05-01
Different tree species influence litter decomposition directly through species-specific litter traits, and indirectly through distinct modifications of the local decomposition environment. Whether these indirect effects on decomposition are influenced by tree species diversity is presently not clear. We addressed this question by studying the decomposition of two common substrates, cellulose paper and wood sticks, in a total of 209 forest stands of varying tree species diversity across six major forest types at the scale of Europe. Tree species richness showed a weak but positive correlation with the decomposition of cellulose but not with that of wood. Surprisingly, macroclimate had only a minor effect on cellulose decomposition and no effect on wood decomposition despite the wide range in climatic conditions among sites from Mediterranean to boreal forests. Instead, forest canopy density and stand-specific litter traits affected the decomposition of both substrates, with a particularly clear negative effect of the proportion of evergreen tree litter. Our study suggests that species richness and composition of tree canopies modify decomposition indirectly through changes in microenvironmental conditions. These canopy-induced differences in the local decomposition environment control decomposition to a greater extent than continental-scale differences in macroclimatic conditions. © 2017 The Authors. New Phytologist © 2017 New Phytologist Trust.
Reactivity continuum modeling of leaf, root, and wood decomposition across biomes
NASA Astrophysics Data System (ADS)
Koehler, Birgit; Tranvik, Lars J.
2015-07-01
Large carbon dioxide amounts are released to the atmosphere during organic matter decomposition. Yet the large-scale and long-term regulation of this critical process in global carbon cycling by litter chemistry and climate remains poorly understood. We used reactivity continuum (RC) modeling to analyze the decadal data set of the "Long-term Intersite Decomposition Experiment," in which fine litter and wood decomposition was studied in eight biome types (224 time series). In 32 and 46% of all sites the litter content of the acid-unhydrolyzable residue (AUR, formerly referred to as lignin) and the AUR/nitrogen ratio, respectively, retarded initial decomposition rates. This initial rate-retarding effect generally disappeared within the first year of decomposition, and rate-stimulating effects of nutrients and a rate-retarding effect of the carbon/nitrogen ratio became more prevalent. For needles and leaves/grasses, the influence of climate on decomposition decreased over time. For fine roots, the climatic influence was initially smaller but increased toward later-stage decomposition. The climate decomposition index was the strongest climatic predictor of decomposition. The similar variability in initial decomposition rates across litter categories as across biome types suggested that future changes in decomposition may be dominated by warming-induced changes in plant community composition. In general, the RC model parameters successfully predicted independent decomposition data for the different litter-biome combinations (196 time series). We argue that parameterization of large-scale decomposition models with RC model parameters, as opposed to the currently common discrete multiexponential models, could significantly improve their mechanistic foundation and predictive accuracy across climate zones and litter categories.
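Reactivity continuum models of this kind are commonly parameterized with a gamma distribution of first-order rate constants, which yields a closed-form fraction of litter remaining. The sketch below assumes that standard form (the α and β values are illustrative, not fitted LIDET parameters) and cross-checks the closed form by Monte Carlo integration over the rate distribution:

```python
import numpy as np

def rc_fraction_remaining(t, alpha, beta):
    """Gamma-based reactivity continuum model: m(t)/m(0) = (beta/(beta+t))**alpha."""
    return (beta / (beta + t)) ** alpha

# Cross-check against direct averaging of exp(-k t) over the gamma
# distribution of first-order rate constants k ~ Gamma(alpha, scale=1/beta).
rng = np.random.default_rng(2)
alpha, beta = 0.8, 50.0                        # hypothetical shape/scale (days)
k = rng.gamma(alpha, 1.0 / beta, size=200_000)
t = np.array([0.0, 30.0, 365.0, 3650.0])       # days since litter deposition
mc = np.exp(-np.outer(t, k)).mean(axis=1)      # Monte Carlo mean of exp(-k t)
closed = rc_fraction_remaining(t, alpha, beta)
```

The closed form shows the characteristic RC behavior: apparent decay slows over time as the most reactive fractions are consumed, without needing discrete pools.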
Direct and Indirect Effects of UV-B Exposure on Litter Decomposition: A Meta-Analysis
Song, Xinzhang; Peng, Changhui; Jiang, Hong; Zhu, Qiuan; Wang, Weifeng
2013-01-01
Ultraviolet-B (UV-B) exposure in the course of litter decomposition may have a direct effect on decomposition rates by altering photodegradation or the decomposer community in litter, while UV-B exposure during growth periods may alter chemical compositions and physical properties of plants. Consequently, these changes will indirectly affect subsequent litter decomposition processes in soil. Although studies are available on both the positive and negative effects (including no observable effects) of UV-B exposure on litter decomposition, a comprehensive analysis providing an adequate understanding has been lacking. Using data from 93 studies across six biomes, this meta-analysis found that elevated UV-B directly increased litter decomposition rates by 7% and indirectly by 12%, while attenuated UV-B directly decreased litter decomposition rates by 23% and indirectly increased litter decomposition rates by 7%. However, neither positive nor negative effects were statistically significant. Woody plant litter decomposition seemed more sensitive to UV-B than herbaceous plant litter except under conditions of indirect effects of elevated UV-B. Furthermore, levels of UV-B intensity significantly affected litter decomposition response to UV-B (P<0.05). UV-B effects on litter decomposition were to a large degree compounded by climatic factors (e.g., MAP and MAT) (P<0.05) and litter chemistry (e.g., lignin content) (P<0.01). Results suggest these factors likely mask the important role of UV-B in litter decomposition. No significant differences in UV-B effects on litter decomposition were found between study types (field experiment vs. laboratory incubation), litter forms (leaf vs. needle), and decay duration. Indirect effects of elevated UV-B on litter decomposition significantly increased with decay duration (P<0.001). 
Additionally, relatively small changes in UV-B exposure intensity (30%) had significant direct effects on litter decomposition (P<0.05). The intent of this meta-analysis was to improve our understanding of the overall effects of UV-B on litter decomposition. PMID:23818993
Rank-based decompositions of morphological templates.
Sussner, P; Ritter, G X
2000-01-01
Methods for matrix decomposition have found numerous applications in image processing, in particular for the problem of template decomposition. Since existing matrix decomposition techniques are mainly concerned with the linear domain, we consider it timely to investigate matrix decomposition techniques in the nonlinear domain with applications in image processing. The mathematical basis for these investigations is the new theory of rank within minimax algebra. Thus far, only minimax decompositions of rank 1 and rank 2 matrices into outer product expansions are known to the image processing community. We derive a heuristic algorithm for the decomposition of matrices having arbitrary rank.
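A rank-1 decomposition in minimax algebra can be made concrete: a template whose entries are a max-plus outer product, t[i,j] = r[i] + s[j], lets a gray-scale dilation be computed separably as two 1-D dilations. The sketch below is a generic illustration of this property, not the authors' heuristic algorithm for arbitrary rank:

```python
import numpy as np

def gray_dilate(image, template):
    """Gray-scale dilation (max-plus correlation) with a full template."""
    th, tw = template.shape
    ph, pw = th // 2, tw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), constant_values=-np.inf)
    out = np.full(image.shape, -np.inf)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            window = padded[i:i + th, j:j + tw]
            out[i, j] = np.max(window + template)   # max-plus inner step
    return out

# A rank-1 template in minimax algebra is a max-plus outer product:
# t[i, j] = r[i] + s[j], so dilation by t separates into two 1-D dilations.
r = np.array([0.0, 2.0, 1.0])
s = np.array([1.0, 0.0, 3.0])
t = r[:, None] + s[None, :]

img = np.arange(25.0).reshape(5, 5)
full = gray_dilate(img, t)                              # one 2-D pass
separated = gray_dilate(gray_dilate(img, s[None, :]),   # row pass, then
                        r[:, None])                     # column pass
```

The separable form trades one pass with th*tw additions per pixel for two passes with th+tw additions, which is the computational payoff of such decompositions.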
The effect of body size on the rate of decomposition in a temperate region of South Africa.
Sutherland, A; Myburgh, J; Steyn, M; Becker, P J
2013-09-10
Forensic anthropologists rely on the state of decomposition of a body to estimate the post-mortem interval (PMI), which provides information about the natural events and environmental forces that could have affected the remains after death. Various factors are known to influence the rate of decomposition, among them temperature, rainfall and exposure of the body. However, conflicting reports appear in the literature on the effect of body size on the rate of decay. The aim of this project was to compare decomposition rates of large pigs (Sus scrofa; 60-90 kg) with those of small pigs (<35 kg), to assess the influence of body size on decomposition rates. For the decomposition rates of small pigs, 15 piglets were assessed three times per week over a period of three months during spring and early summer. Data collection was conducted until complete skeletonization occurred. Stages of decomposition were scored according to separate categories for each anatomical region, and the point values for each region were added to determine the total body score (TBS), which represents the overall stage of decomposition for each pig. For the large pigs, data of 15 pigs were used. Scatter plots illustrating the relationships between TBS and PMI as well as TBS and accumulated degree days (ADD) were used to assess the pattern of decomposition and to compare decomposition rates between small and large pigs. Results indicated that rapid decomposition occurs during the early stages of decomposition for both samples. Large pigs showed a plateau phase in the course of advanced stages of decomposition, during which decomposition was minimal. A similar, but much shorter plateau was reached by small pigs of >20 kg at a PMI of 20-25 days, after which decomposition commenced swiftly. This was in contrast to the small pigs of <20 kg, which showed no plateau phase and whose decomposition rates were swift throughout the duration of the study. 
Overall, small pigs decomposed 2.82 times faster than large pigs, indicating that body size does have an effect on the rate of decomposition. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Principal component analysis of the nonlinear coupling of harmonic modes in heavy-ion collisions
NASA Astrophysics Data System (ADS)
Bożek, Piotr
2018-03-01
The principal component analysis of flow correlations in heavy-ion collisions is studied. The correlation matrix of harmonic flow is generalized to correlations involving several different flow vectors. The method can be applied to study the nonlinear coupling between different harmonic modes in a double differential way in transverse momentum or pseudorapidity. The procedure is illustrated with results from the hydrodynamic model applied to Pb+Pb collisions at √s_NN = 2760 GeV. Three examples of generalized correlation matrices in transverse momentum are constructed, corresponding to the coupling of v₂² and v₄, of v₂v₃ and v₅, or of v₂³, v₃³, and v₆. The principal component decomposition is applied to the correlation matrices and the dominant modes are calculated.
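The principal component step can be sketched on a toy correlation matrix: event-by-event flow values in several momentum bins with one dominant collective mode are assembled into a bin-by-bin covariance matrix, whose leading eigenvector recovers the mode. The event model and all numbers below are illustrative assumptions, not a hydrodynamic calculation:

```python
import numpy as np

rng = np.random.default_rng(3)
n_events, n_bins = 5000, 6

# Toy model: a common event-by-event amplitude (global flow fluctuation)
# times a fixed momentum profile, plus independent bin noise.
mode = np.linspace(0.5, 1.5, n_bins)             # hypothetical p_T dependence
amp = rng.normal(1.0, 0.3, size=n_events)        # event-by-event amplitude
V = np.outer(amp, mode) + 0.05 * rng.normal(size=(n_events, n_bins))

cov = np.cov(V, rowvar=False)                    # bin-by-bin correlation matrix
eigval, eigvec = np.linalg.eigh(cov)
order = np.argsort(eigval)[::-1]                 # sort modes by variance
eigval, eigvec = eigval[order], eigvec[:, order]

leading_fraction = eigval[0] / eigval.sum()      # dominance of the first PC
pc1_shape = eigvec[:, 0] * np.sign(eigvec[:, 0].sum())
```

With a single injected mode, the first principal component carries nearly all the variance and its shape tracks the assumed momentum profile; subleading components pick up only the bin noise.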
Assessing the effect of different treatments on decomposition rate of dairy manure.
Khalil, Tariq M; Higgins, Stewart S; Ndegwa, Pius M; Frear, Craig S; Stöckle, Claudio O
2016-11-01
Confined animal feeding operations (CAFOs) contribute to greenhouse gas emission, but the magnitude of these emissions as a function of operation size, infrastructure, and manure management is difficult to assess. Modeling is a viable option to estimate gaseous emission and nutrient flows from CAFOs. These models use a decomposition rate constant for carbon mineralization. However, this constant is usually determined assuming a homogeneous mix of manure, ignoring the effects of emerging manure treatments. The aim of this study was to measure and compare the decomposition rate constants of dairy manure in single and three-pool decomposition models, and to develop an empirical model, based on the chemical composition of manure, for prediction of a decomposition rate constant. Decomposition rate constants of manure before and after an anaerobic digester (AD), following coarse fiber separation, and after fine solids removal were determined under anaerobic conditions for single and three-pool decomposition models. The decomposition rates of treated manure effluents differed significantly from untreated manure for both single and three-pool decomposition models. In the single-pool decomposition model, AD effluent containing only suspended solids had a relatively high decomposition rate of 0.060 d⁻¹, while liquid with coarse fiber and fine solids removed had the lowest rate of 0.013 d⁻¹. In the three-pool decomposition model, the fast and slow decomposition rate constants (0.25 d⁻¹ and 0.016 d⁻¹, respectively) of untreated AD influent were also significantly different from those of treated manure fractions. A regression model to predict the decomposition rate of treated dairy manure fitted well (R² = 0.83) to observed data. Copyright © 2016 Elsevier Ltd. All rights reserved.
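A single-pool rate constant of the kind reported here is typically recovered by fitting C(t) = C0·exp(−k·t) to incubation data. The sketch below does this by log-linear least squares on synthetic data; the noise model and the value 0.060 d⁻¹ are used purely for illustration, not taken from the study's measurements:

```python
import numpy as np

def fit_first_order_k(t, c_remaining):
    """Estimate the single-pool decomposition rate constant k (d^-1)
    from C(t) = C0 * exp(-k t) via log-linear least squares."""
    slope, _intercept = np.polyfit(t, np.log(c_remaining), 1)
    return -slope

t = np.arange(0, 61, 5, dtype=float)            # incubation days
k_true = 0.060                                  # d^-1, illustrative value
rng = np.random.default_rng(4)
# Fraction of degradable carbon remaining, with multiplicative noise.
c = np.exp(-k_true * t) * np.exp(rng.normal(0.0, 0.01, size=t.size))
k_hat = fit_first_order_k(t, c)
```

A three-pool model would instead fit a sum of three such exponentials (fast, slow, recalcitrant), which requires nonlinear least squares rather than this log-linear shortcut.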
Data-based adjoint and H2 optimal control of the Ginzburg-Landau equation
NASA Astrophysics Data System (ADS)
Banks, Michael; Bodony, Daniel
2017-11-01
Equation-free, reduced-order methods of control are desirable when the governing system of interest is of very high dimension or the control is to be applied to a physical experiment. Two-phase flow optimal control problems, our target application, fit these criteria. Dynamic Mode Decomposition (DMD) is a data-driven method for model reduction that can be used to resolve the dynamics of very high dimensional systems and project the dynamics onto a smaller, more manageable basis. We evaluate the effectiveness of DMD-based forward and adjoint operator estimation when used in H2 optimal control of the linear and nonlinear Ginzburg-Landau equation. Perspectives on applying the data-driven adjoint to two-phase flow control will be given. Supported by the Office of Naval Research (ONR) as part of the Multidisciplinary University Research Initiatives (MURI) Program, under Grant Number N00014-16-1-2617.
ORCHIMIC (v1.0), a microbe-mediated model for soil organic matter decomposition
NASA Astrophysics Data System (ADS)
Huang, Ye; Guenet, Bertrand; Ciais, Philippe; Janssens, Ivan A.; Soong, Jennifer L.; Wang, Yilong; Goll, Daniel; Blagodatskaya, Evgenia; Huang, Yuanyuan
2018-06-01
The role of soil microorganisms in regulating soil organic matter (SOM) decomposition is of primary importance in the carbon cycle, in particular in the context of global change. Modeling soil microbial community dynamics to simulate its impact on soil gaseous carbon (C) emissions and nitrogen (N) mineralization at large spatial scales is a recent research field with the potential to improve predictions of SOM responses to global climate change. In this study we present a SOM model called ORCHIMIC, which utilizes input data that are consistent with those of global vegetation models. ORCHIMIC simulates the decomposition of SOM by explicitly accounting for enzyme production and distinguishing three different microbial functional groups: fresh organic matter (FOM) specialists, SOM specialists, and generalists, while also implicitly accounting for microbes that do not produce extracellular enzymes, i.e., cheaters. ORCHIMIC and two other organic matter decomposition models, CENTURY (based on first-order kinetics and representative of the structure of most current global soil carbon models) and PRIM (with FOM accelerating the decomposition rate of SOM), were calibrated to reproduce the observed respiration fluxes of FOM and SOM from the incubation experiments of Blagodatskaya et al. (2014). Among the three models, ORCHIMIC was the only one that effectively captured both the temporal dynamics of the respiratory fluxes and the magnitude of the priming effect observed during the incubation experiment. ORCHIMIC also effectively reproduced the temporal dynamics of microbial biomass. We then applied different idealized changes to the model input data, i.e., a 5 K stepwise increase of temperature and/or a doubling of plant litter inputs. Under 5 K warming conditions, ORCHIMIC predicted a 0.002 K-1 decrease in the C use efficiency (defined as the ratio of C allocated to microbial growth to the sum of C allocated to growth and respiration) and a 3 % loss of SOC. 
Under the double litter input scenario, ORCHIMIC predicted a doubling of microbial biomass, while SOC stock increased by less than 1 % due to the priming effect. This limited increase in SOC stock contrasted with the proportional increase modeled by the conventional SOC decomposition model (CENTURY), which cannot reproduce the priming effect. If temperature increased by 5 K and litter input was doubled, ORCHIMIC predicted almost the same loss of SOC as when only temperature was increased. These tests suggest that the responses of SOC stock to warming and increasing input may differ considerably from those simulated by conventional SOC decomposition models when microbial dynamics are included. The next step is to incorporate the ORCHIMIC model into a global vegetation model to perform simulations for representative sites and future scenarios.
Exploring Ultrafast Structural Dynamics for Energetic Enhancement or Disruption
2016-03-01
In a pump-push/dump-probe experiment, a secondary laser pulse (push/dump) is used after the initial perturbation due to the pump pulse. The pump-push/dump-probe technique is a difficult experiment that requires a highly stable laser source; ultrafast pump-probe experiments have been applied to the decomposition of solids (Journal of Applied Physics 2001;89:4156-4166) and to femtosecond pump-push-probe and pump-dump-probe spectroscopy (Kee TW).
Optimal Control for Quantum Driving of Two-Level Systems
NASA Astrophysics Data System (ADS)
Qi, Xiao-Qiu
2018-01-01
In this paper, the optimal quantum control of two-level systems is studied via decompositions of SU(2). Using the Pontryagin maximum principle, the minimum time of quantum control is analyzed in detail. A solution scheme for the optimal control function is given in the general case. Finally, two specific cases, which can be applied in many quantum systems, are used to illustrate the scheme, and the corresponding optimal control functions are obtained.
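One standard SU(2) decomposition, the ZYZ Euler factorization U = Rz(α)·Ry(β)·Rz(γ), can be sketched as follows. The paper's particular decomposition scheme is not given in the abstract, so this is a generic illustration of factoring a two-level unitary into elementary rotations:

```python
import numpy as np

def rz(a):
    """Rotation about z: diag(e^{-ia/2}, e^{ia/2})."""
    return np.array([[np.exp(-1j * a / 2), 0.0],
                     [0.0, np.exp(1j * a / 2)]])

def ry(b):
    """Rotation about y by angle b."""
    return np.array([[np.cos(b / 2), -np.sin(b / 2)],
                     [np.sin(b / 2),  np.cos(b / 2)]])

def zyz_angles(U):
    """Euler angles (alpha, beta, gamma) with U = Rz(alpha) Ry(beta) Rz(gamma),
    for U in SU(2), i.e. det U = 1."""
    beta = 2.0 * np.arctan2(abs(U[1, 0]), abs(U[0, 0]))
    alpha = np.angle(U[1, 1]) + np.angle(U[1, 0])
    gamma = np.angle(U[1, 1]) - np.angle(U[1, 0])
    return alpha, beta, gamma

# Random SU(2) element from a unit quaternion (a, b, c, d).
q = np.random.default_rng(8).normal(size=4)
q /= np.linalg.norm(q)
U = np.array([[q[0] + 1j * q[1],  q[2] + 1j * q[3]],
              [-q[2] + 1j * q[3], q[0] - 1j * q[1]]])

alpha, beta, gamma = zyz_angles(U)
U_rebuilt = rz(alpha) @ ry(beta) @ rz(gamma)
```

Such a factorization reduces any two-level unitary target to three elementary pulses, which is why SU(2) decompositions appear naturally in time-optimal control schemes.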
On a family of nonoscillatory equations y double prime = phi(x)y
NASA Technical Reports Server (NTRS)
Gingold, H.
1988-01-01
The oscillation or nonoscillation of a class of second-order linear differential equations is investigated analytically, with a focus on cases in which the functions phi(x) and y are complex-valued. Two linear transformations are introduced, and an asymptotic-decomposition procedure involving Schur triangularization is applied. The relationship of the present analysis to the nonoscillation criterion of Kneser (1896) and other more recent results is explored in two examples.
Nanomodified Carbon/Carbon Composites for Intermediate Temperature
2007-08-31
Carbon nanofibers (CNF) are manufactured by Applied Sciences Inc./Pyrograf® Products by pyrolytic decomposition of methane in the presence of an iron-based catalyst.
Rostami, Javad; Chen, Jingming; Tse, Peter W.
2017-01-01
Ultrasonic guided waves have been extensively applied to non-destructive testing of plate-like structures, particularly pipes, in the past two decades. If a structure has a simple geometry, the obtained guided wave signals are easy to interpret. However, any small degree of complexity in the geometry, such as contact with other materials, may considerably complicate the interpretation of guided wave signals. The problem deepens if defects have irregular shapes, as natural corrosion does. Signal processing techniques that have been proposed for guided wave analysis are generally suited to simple signals obtained in a highly controlled experimental environment. In a real situation, such as natural corrosion in wall-covered pipes, guided wave signals are much more complicated. Considering pipes in residential buildings that pass through concrete walls, in this paper we introduce Smooth Empirical Mode Decomposition (SEMD) to efficiently separate overlapped guided waves. Because empirical mode decomposition (EMD), although a good candidate for analyzing non-stationary signals, suffers from some shortcomings, a wavelet transform was adopted in the sifting stage of EMD to improve its outcome in SEMD. However, selecting the mother wavelet best suited to this purpose plays an important role. Since in guided wave inspection the incident waves are well known and are usually tone-burst signals, we tailored a complex tone-burst signal to be used as our mother wavelet. In the sifting stage of EMD, wavelet de-noising was applied to eliminate unwanted frequency components from each IMF. SEMD greatly enhances the performance of EMD in guided wave analysis for highly contaminated signals. 
In our experiment on concrete-covered pipes with natural corrosion, this method not only clearly separated the concrete wall indication in the time-domain signal, but also exposed a natural corrosion defect with complex geometry that was hidden inside the concrete section. PMID:28178220
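The role of a tailored complex tone-burst "mother wavelet" can be illustrated with a matched-filter sketch: correlating a noisy trace containing two overlapped echoes with an analytic tone burst yields an envelope whose peaks locate the arrivals. All signal parameters below are assumptions for illustration; this is not the authors' full SEMD pipeline:

```python
import numpy as np

fs = 1_000_000.0                 # sampling rate, Hz (assumed)
fc = 50_000.0                    # tone-burst centre frequency, Hz (assumed)
cycles = 5                       # cycles per burst
dur = cycles / fc                # burst duration: 100 microseconds

def tone_burst(n_samples, delay_s, amp=1.0):
    """Real Hann-windowed tone burst starting at delay_s."""
    t = np.arange(n_samples) / fs - delay_s
    env = np.where((t >= 0) & (t < dur),
                   0.5 - 0.5 * np.cos(2 * np.pi * t / dur), 0.0)
    return amp * env * np.sin(2 * np.pi * fc * t)

# Noisy trace with two overlapping echoes (e.g. wall entry and a defect).
n = 2048
rng = np.random.default_rng(6)
signal = (tone_burst(n, 200e-6) + 0.6 * tone_burst(n, 280e-6)
          + 0.1 * rng.normal(size=n))

# Complex tone-burst "mother wavelet": same envelope, analytic carrier.
tw = np.arange(int(dur * fs)) / fs
wavelet = (0.5 - 0.5 * np.cos(2 * np.pi * tw / dur)) * np.exp(1j * 2 * np.pi * fc * tw)

# Correlation magnitude acts as an envelope detector for each echo.
mag = np.abs(np.correlate(signal, wavelet, mode="valid"))
first = int(np.argmax(mag))
rest = np.abs(np.arange(mag.size) - first) > 60        # mask out the first peak
second = int(np.flatnonzero(rest)[np.argmax(mag[rest])])
arrivals_us = sorted([first, second])                  # 1 sample = 1 microsecond here
```

Even though the two bursts overlap in time, the complex correlation separates their envelope peaks cleanly, which is the intuition behind using a tone-burst-shaped wavelet in the sifting stage.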
Analysis and visualization of single-trial event-related potentials
NASA Technical Reports Server (NTRS)
Jung, T. P.; Makeig, S.; Westerfield, M.; Townsend, J.; Courchesne, E.; Sejnowski, T. J.
2001-01-01
In this study, a linear decomposition technique, independent component analysis (ICA), is applied to single-trial multichannel EEG data from event-related potential (ERP) experiments. Spatial filters derived by ICA blindly separate the input data into a sum of temporally independent and spatially fixed components arising from distinct or overlapping brain or extra-brain sources. Both the data and their decomposition are displayed using a new visualization tool, the "ERP image," that can clearly characterize single-trial variations in the amplitudes and latencies of evoked responses, particularly when sorted by a relevant behavioral or physiological variable. These tools were used to analyze data from a visual selective attention experiment on 28 control subjects plus 22 neurological patients whose EEG records were heavily contaminated with blink and other eye-movement artifacts. Results show that ICA can separate artifactual, stimulus-locked, response-locked, and non-event-related background EEG activities into separate components, a taxonomy not obtained from conventional signal averaging approaches. This method allows: (1) removal of pervasive artifacts of all types from single-trial EEG records, (2) identification and segregation of stimulus- and response-locked EEG components, (3) examination of differences in single-trial responses, and (4) separation of temporally distinct but spatially overlapping EEG oscillatory activities with distinct relationships to task events. The proposed methods also allow the interaction between ERPs and the ongoing EEG to be investigated directly. We studied the between-subject component stability of ICA decomposition of single-trial EEG epochs by clustering components with similar scalp maps and activation power spectra. Components accounting for blinks, eye movements, temporal muscle activity, event-related potentials, and event-modulated alpha activities were largely replicated across subjects. 
Applying ICA and ERP image visualization to the analysis of sets of single trials from event-related EEG (or MEG) experiments can increase the information available from ERP (or ERF) data. Copyright 2001 Wiley-Liss, Inc.
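The blind separation step can be sketched with a minimal NumPy FastICA (tanh contrast, symmetric decorrelation). This generic implementation and its toy sources are illustrative only; EEG studies such as this one typically use Infomax/runica variants on high-channel-count data:

```python
import numpy as np

def fast_ica(X, n_iter=200, seed=0):
    """Minimal symmetric FastICA (tanh contrast) on data X of shape
    (n_sources, n_samples). Returns estimated source signals."""
    n, m = X.shape
    Xc = X - X.mean(axis=1, keepdims=True)
    # Whitening via eigendecomposition of the covariance matrix.
    d, E = np.linalg.eigh(np.cov(Xc))
    Z = (E @ np.diag(d ** -0.5) @ E.T) @ Xc
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(n, n))
    for _ in range(n_iter):
        G = np.tanh(W @ Z)
        W_new = G @ Z.T / m - np.diag((1 - G ** 2).mean(axis=1)) @ W
        # Symmetric decorrelation: W <- (W W^T)^(-1/2) W
        d2, E2 = np.linalg.eigh(W_new @ W_new.T)
        W = E2 @ np.diag(d2 ** -0.5) @ E2.T @ W_new
    return W @ Z

t = np.linspace(0, 8, 4000)
S = np.vstack([np.sin(7 * t), np.sign(np.sin(11 * t))])   # independent sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])                    # unknown mixing matrix
X = A @ S                                                 # observed "channels"
S_hat = fast_ica(X)                                       # recovered sources
```

Recovery is up to the usual ICA ambiguities (ordering and sign), so quality is judged by the absolute correlation between each true source and its best-matching estimate.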
Dabbs, Gretchen R
2010-10-10
An increasing number of anthropological decomposition studies are utilizing accumulated degree days (ADD) to quantify and estimate the post-mortem interval (PMI) at given decompositional stages, or the number of ADD required for certain events, such as tooth exfoliation, to occur. This study addresses the utility of retroactively applying temperature data from the closest National Weather Service (NWS) station to these calculations, as prescribed in the past. Hourly temperature readings were collected for 154 days at a research site in Farmington, AR between June 30 and December 25, 2008. These were converted to average daily temperatures by calculating the mean of the 24 hourly values, following the NWS reporting procedure. These data were compared to comparable data from the Owl Creek and Drake Field NWS stations, the two closest to the research site, located 5.7 and 9.9 km away, respectively. Paired samples t-tests between the research site and each of the NWS stations show significant differences between the average daily temperature data collected at the research station and both Owl Creek (2.0°C, p<0.001) and Drake Field (0.6°C, p<0.001). When applied to a simulated recovery effort, the farther NWS station also proved to be the better model for the recovery site. Using a published equation for estimating post-mortem interval using ADD and total body decomposition scores (Megyesi et al., 2005 [1]), the Drake Field data produced estimates of PMI more closely mirroring those of the research site than did Owl Creek. This demonstrates that instead of automatically choosing the nearest NWS station, care must be taken when choosing an NWS station for retroactively gathering temperature data for PMI estimation using accumulated degree days, to ensure the station adequately reflects temperature conditions at the recovery site. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
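Accumulated degree days as used above can be computed directly from daily mean temperatures. The sketch below follows the common convention of summing daily means above a 0 °C base; the base temperature and the synthetic hourly readings are assumptions, not the study's Farmington data:

```python
import numpy as np

def accumulated_degree_days(daily_temps_c, base_temp_c=0.0):
    """Accumulated degree days: sum of daily mean temperatures above a
    base temperature (commonly 0 degrees C in decomposition studies)."""
    t = np.asarray(daily_temps_c, dtype=float)
    return np.maximum(t - base_temp_c, 0.0).sum()

# Hourly readings -> daily means (NWS-style 24-value average), then ADD.
rng = np.random.default_rng(7)
n_days = 10
hourly = (15.0 + 8.0 * np.sin(2 * np.pi * np.arange(n_days * 24) / 24)
          + rng.normal(0.0, 1.0, n_days * 24))
daily_means = hourly.reshape(n_days, 24).mean(axis=1)
add = accumulated_degree_days(daily_means)
```

Because ADD is a running sum, a systematic station bias like the 2.0 °C reported above compounds day after day, which is exactly why station choice shifts the resulting PMI estimates.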
DOE Office of Scientific and Technical Information (OSTI.GOV)
Westraadt, J.E., E-mail: johan.westraadt@nmmu.ac.za; Olivier, E.J.; Neethling, J.H.
2015-11-15
Spinodal decomposition (SD) is an important phenomenon in materials science and engineering. For example, it is considered to be responsible for the 475 °C embrittlement of stainless steels comprising the bcc (ferrite) or bct (martensite) phases. Structural characterization of the evolving minute nano-scale concentration fluctuations during SD in the Fe–Cr system is, however, a notable challenge, and has mainly been considered accessible via atom probe tomography (APT) and small-angle neutron scattering. The standard tool for nanostructure characterization, viz. transmission electron microscopy (TEM), has only been successfully applied to late stages of SD when embrittlement is already severe. However, we here demonstrate that the structural evolution in the early stages of SD in binary Fe–Cr, and alloys based on the binary, is accessible via analytical scanning TEM. An Fe–36 wt% Cr alloy aged at 500 °C for 1, 10 and 100 h is investigated using an aberration-corrected microscope, and it is found that highly coherent and interconnected Cr-rich regions develop. The wavelength of decomposition is rather insensitive to the sample thickness and is quantified as 2, 3 and 6 nm after ageing for 1, 10 and 100 h, which is in reasonable agreement with prior APT analysis. The concentration amplitude is more sensitive to the sample thickness and acquisition parameters, but the TEM analysis is in good agreement with APT analysis for the longest ageing time. These findings open up combinatorial TEM studies where both local crystallography and chemistry are required. Highlights: • STEM-EELS analysis was successfully applied to resolve early stage SD in Fe–Cr. • Compositional wavelength measured with STEM-EELS compares well to previous APT studies. • Compositional amplitude measured with STEM-EELS is a function of experimental parameters. • STEM-EELS allows for combinatorial studies of SD using complementary techniques.
A benders decomposition approach to multiarea stochastic distributed utility planning
NASA Astrophysics Data System (ADS)
McCusker, Susan Ann
Until recently, small, modular generation and storage options---distributed resources (DRs)---have been installed principally in areas too remote for economic power grid connection and in sensitive applications requiring backup capacity. Recent regulatory changes and DR advances, however, have led utilities to reconsider the role of DRs. To a utility facing distribution capacity bottlenecks or uncertain load growth, DRs can be particularly valuable since they can be dispersed throughout the system and constructed relatively quickly. DR value is determined by comparing its costs to avoided central generation expenses (i.e., marginal costs) and distribution investments. This requires a comprehensive central and local planning and production model, since central system marginal costs result from system interactions over space and time. This dissertation develops and applies an iterative generalized Benders decomposition approach to coordinate models for optimal DR evaluation. Three coordinated models exchange investment, net power demand, and avoided cost information to minimize overall expansion costs. Local investment and production decisions are made by a local mixed integer linear program. Central system investment decisions are made by an LP, and production costs are estimated by a stochastic multi-area production costing model with Kirchhoff's Voltage and Current Law constraints. The nested decomposition is a new and unique method for distributed utility planning that partitions the variables twice to separate local and central investment and production variables, and provides upper and lower bounds on expected expansion costs. Kirchhoff's Voltage Law imposes nonlinear, nonconvex constraints that preclude use of LP if transmission capacity is available in a looped transmission system.
This dissertation develops KVL constraint approximations that permit the nested decomposition to consider new transmission resources, while maintaining linearity in the three individual models. These constraints are presented as a heuristic for the given examples; future research will investigate conditions for convergence. A ten-year multi-area example demonstrates the decomposition approach and suggests the ability of DRs and new transmission to modify capacity additions and production costs by changing demand and power flows. Results demonstrate that DR and new transmission options may lead to greater capacity additions, but resulting production cost savings more than offset extra capacity costs.
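The generalized Benders loop described above can be sketched on a toy two-stage LP. The instance, the variable names, and the use of SciPy's `linprog` are illustrative assumptions, not the dissertation's model:

```python
import numpy as np
from scipy.optimize import linprog

# Toy two-stage LP (illustrative only):
#   min  x + 2y   s.t.  x + y >= 3,  x in [0, 10],  y >= 0,
# where stage 1 fixes x ("central investment") and stage 2 chooses y
# ("local production") given x.

def subproblem(x):
    """Stage-2 LP: min 2y s.t. y >= 3 - x, y >= 0.
    Returns its optimal value and the dual price pi of the linking constraint."""
    res = linprog(c=[2.0], A_ub=[[-1.0]], b_ub=[x - 3.0], bounds=[(0, None)])
    return res.fun, -res.ineqlin.marginals[0]

A_cuts, b_cuts = [], []                 # optimality cuts: theta >= pi * (3 - x)
lb, ub, x_best = -np.inf, np.inf, None
while ub - lb > 1e-8:
    # master over (x, theta): min x + theta subject to the accumulated cuts
    res = linprog(c=[1.0, 1.0], A_ub=A_cuts or None, b_ub=b_cuts or None,
                  bounds=[(0, 10), (0, None)])
    x_m, theta = res.x
    lb = res.fun                        # lower bound from the relaxed master
    v, pi = subproblem(x_m)
    ub = min(ub, x_m + v)               # upper bound from a feasible pair
    if ub - lb <= 1e-8:
        x_best = x_m
        break
    A_cuts.append([-pi, -1.0])          # theta >= pi*(3-x)  <=>  -pi*x - theta <= -3*pi
    b_cuts.append(-3.0 * pi)
```

Each iteration tightens the master's lower bound with a dual-based cut until it meets the best feasible cost; on this instance the loop converges to x = 3 with total cost 3.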
The Roadmaker's algorithm for the discrete pulse transform.
Laurie, Dirk P
2011-02-01
The discrete pulse transform (DPT) is a decomposition of an observed signal into a sum of pulses, i.e., signals that are constant on a connected set and zero elsewhere. Originally developed for 1-D signal processing, the DPT has recently been generalized to more dimensions. Applications in image processing are currently being investigated. The time required to compute the DPT as originally defined via the successive application of LULU operators (members of a class of minimax filters studied by Rohwer) has been a severe drawback to its applicability. This paper introduces a fast method for obtaining such a decomposition, called the Roadmaker's algorithm because it involves filling pits and razing bumps. It acts selectively only on those features actually present in the signal, flattening them in order of increasing size by subtracting an appropriate positive or negative pulse, which is then appended to the decomposition. The implementation described here covers 1-D signal processing as well as 2-D and 3-D image processing in a single framework. This is achieved by considering the signal or image as a function defined on a graph, with the geometry specified by the edges of the graph. Whenever a feature is flattened, nodes in the graph are merged, until eventually only one node remains. At that stage, a new set of edges on the same nodes, forming a tree structure, defines the obtained decomposition. The Roadmaker's algorithm is shown to be equivalent to the DPT in the sense of obtaining the same decomposition. However, its simpler operators are not in general equivalent to the LULU operators in situations where those operators are not applied successively. A by-product of the Roadmaker's algorithm is that it yields a proof of the so-called Highlight Conjecture, stated as an open problem in 2006.
We pay particular attention to algorithmic details and complexity, including a demonstration that in the 1-D case, and also in the case of a complete graph, the Roadmaker's algorithm has optimal complexity: it runs in time O(m), where m is the number of arcs in the graph.
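A minimal 1-D sketch of the pit-filling and bump-razing idea follows. This is a naive greedy over constant runs, not the paper's optimal O(m) graph implementation; the run representation and tie-breaking are assumptions:

```python
def dpt_1d(signal):
    """Greedy sketch of the 1-D discrete pulse transform: repeatedly flatten
    the smallest extremal run (pit or bump) onto its nearest neighbour value,
    recording each removed pulse as (start, length, height)."""
    runs = []                                # maximal constant runs [value, length]
    for v in signal:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    pulses = []
    while len(runs) > 1:
        # find the smallest run that is a local extremum of the run-value sequence
        best = None
        for i, (v, n) in enumerate(runs):
            nb = [runs[j][0] for j in (i - 1, i + 1) if 0 <= j < len(runs)]
            if all(v > u for u in nb) or all(v < u for u in nb):
                if best is None or n < runs[best][1]:
                    best = i
        v, n = runs[best]
        nb = [runs[j][0] for j in (best - 1, best + 1) if 0 <= j < len(runs)]
        target = max(nb) if v > nb[0] else min(nb)   # raze bump / fill pit
        start = sum(r[1] for r in runs[:best])
        pulses.append((start, n, v - target))
        runs[best][0] = target
        merged = []                          # re-merge equal-valued neighbour runs
        for r in runs:
            if merged and merged[-1][0] == r[0]:
                merged[-1][1] += r[1]
            else:
                merged.append(r)
        runs = merged
    if runs and runs[0][0] != 0:             # residual constant as one wide pulse
        pulses.append((0, runs[0][1], runs[0][0]))
    return pulses

signal = [0, 1, 3, 3, 1, 0, 2, 0]
pulses = dpt_1d(signal)
# the recorded pulses sum back to the original signal at every sample
recon = [sum(h for (s, n, h) in pulses if s <= i < s + n)
         for i in range(len(signal))]
```

Each pulse is constant on a connected interval, and their sum reconstructs the input exactly, mirroring the decomposition property stated in the abstract.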
Importance of vegetation dynamics for future terrestrial carbon cycling
NASA Astrophysics Data System (ADS)
Ahlström, Anders; Xia, Jianyang; Arneth, Almut; Luo, Yiqi; Smith, Benjamin
2015-05-01
Terrestrial ecosystems currently sequester about one third of anthropogenic CO2 emissions each year, an important ecosystem service that dampens climate change. The future fate of this net uptake of CO2 by land-based ecosystems is highly uncertain. Most ecosystem models used to predict the future terrestrial carbon cycle share a common architecture, whereby carbon that enters the system as net primary production (NPP) is distributed to plant compartments, transferred to litter and soil through vegetation turnover, and then re-emitted to the atmosphere in conjunction with soil decomposition. However, while all models represent the processes of NPP and soil decomposition, they vary greatly in their representations of vegetation turnover and the associated processes governing mortality, disturbance and biome shifts. Here we used a detailed second-generation dynamic global vegetation model with advanced representation of vegetation growth and mortality, and the associated turnover. We apply an emulator that describes the carbon flows and pools exactly as in simulations with the full model. The emulator simulates ecosystem dynamics in response to 13 different climate or Earth system model simulations from the Coupled Model Intercomparison Project Phase 5 ensemble under RCP8.5 radiative forcing. By exchanging carbon cycle processes between these 13 simulations we quantified the relative roles of three main driving processes of the carbon cycle: (I) NPP, (II) vegetation dynamics and turnover, and (III) soil decomposition, in terms of their contribution to future carbon (C) uptake uncertainties among the ensemble of climate change scenarios. We found that NPP, vegetation turnover (including structural shifts, wild fires and mortality) and soil decomposition rates explained 49%, 17% and 33%, respectively, of uncertainties in modelled global C-uptake.
Uncertainty due to vegetation turnover was further partitioned into stand-clearing disturbances (16%), wild fires (0%), stand dynamics (7%), reproduction (10%) and biome shifts (67%) globally. We conclude that while NPP and soil decomposition rates jointly account for 83% of future climate induced C-uptake uncertainties, vegetation turnover and structure, dominated by biome shifts, represent a significant fraction globally and regionally (tropical forests: 40%), strongly motivating their representation and analysis in future C-cycle studies.
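The process-exchange attribution idea can be illustrated with a toy additive model. All numbers below are synthetic; the actual study exchanges full carbon-cycle process responses between emulator runs rather than scalar summaries:

```python
import numpy as np

# synthetic per-member process responses (GtC), one value per climate model run
rng = np.random.default_rng(4)
n = 13                                   # ensemble size, as in the study
npp = rng.normal(300, 40, n)             # net primary production response
turnover = rng.normal(-50, 15, n)        # vegetation turnover response
soil = rng.normal(-120, 30, n)           # soil decomposition response
uptake = npp + turnover + soil           # additive toy model of net C uptake

# attribute uncertainty shares by letting one process at a time vary across
# members while holding the others at their ensemble means
contrib = np.array([np.var(npp), np.var(turnover), np.var(soil)])
fractions = contrib / contrib.sum()      # shares of explained uptake variance
```

The normalized shares play the role of the 49% / 17% / 33% split reported above, here computed for invented data.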
McLaren, Jennie R; Buckeridge, Kate M; van de Weg, Martine J; Shaver, Gaius R; Schimel, Joshua P; Gough, Laura
2017-05-01
Rapid arctic vegetation change as a result of global warming includes an increase in the cover and biomass of deciduous shrubs. Increases in shrub abundance will result in a proportional increase of shrub litter in the litter community, potentially affecting carbon turnover rates in arctic ecosystems. We investigated the effects of leaf and root litter of a deciduous shrub, Betula nana, on decomposition, by examining species-specific decomposition patterns, as well as effects of Betula litter on the decomposition of other species. We conducted a 2-yr decomposition experiment in moist acidic tundra in northern Alaska, where we decomposed three tundra species (Vaccinium vitis-idaea, Rhododendron palustre, and Eriophorum vaginatum) alone and in combination with Betula litter. Decomposition patterns for leaf and root litter were determined using three different measures of decomposition (mass loss, respiration, extracellular enzyme activity). We report faster decomposition of Betula leaf litter compared to other species, with support for species differences coming from all three measures of decomposition. Mixing effects were less consistent among the measures, with negative mixing effects shown only for mass loss. In contrast, there were few species differences or mixing effects for root decomposition. Overall, we attribute longer-term litter mass loss patterns to patterns created by early decomposition processes in the first winter. We note numerous differences for species patterns between leaf and root decomposition, indicating that conclusions from leaf litter experiments should not be extrapolated to below-ground decomposition. 
The high decomposition rates of Betula leaf litter aboveground, and relatively similar decomposition rates of multiple species below, suggest a potential for increases in turnover in the fast-decomposing carbon pool of leaves and fine roots as the dominance of deciduous shrubs in the Arctic increases, but this outcome may be tempered by negative litter mixing effects during the early stages of encroachment. © 2017 by the Ecological Society of America.
Unraveling the physical meaning of the Jaffe-Manohar decomposition of the nucleon spin
NASA Astrophysics Data System (ADS)
Wakamatsu, M.
2016-09-01
A general consensus now is that there are two physically inequivalent complete decompositions of the nucleon spin, i.e. the decomposition of the canonical type and that of the mechanical type. The well-known Jaffe-Manohar decomposition is of the former type. Unfortunately, there is a widespread misbelief that this decomposition matches the partonic picture, which states that the motion of quarks in the nucleon is approximately free. In the present monograph, we show that this understanding is not necessarily correct and that the Jaffe-Manohar decomposition is not a decomposition that faithfully reflects the intrinsic (or static) orbital angular momentum structure of the nucleon.
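For reference, the two types of complete decomposition contrasted above are commonly written schematically as follows (conventional notation, which may differ from the paper's):

```latex
% canonical (Jaffe-Manohar) type vs mechanical (Ji) type, schematically
\frac{1}{2}
  = \frac{1}{2}\,\Delta\Sigma + L_q^{\mathrm{can}} + \Delta G + L_g^{\mathrm{can}}
  = \frac{1}{2}\,\Delta\Sigma + L_q^{\mathrm{mech}} + J_g .
```

The two orbital angular momentum terms differ by a potential angular momentum contribution, which is what makes the decompositions physically inequivalent.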
NASA Astrophysics Data System (ADS)
Steiner, S. M.; Wood, J. H.
2015-12-01
As decomposition rates are affected by climate change, understanding the soil interactions that affect plant growth and decomposition becomes a vital part of the students' knowledge base. The Global Decomposition Project (GDP) is designed to introduce and educate students about soil organic matter and decomposition through a standardized protocol for collecting, reporting, and sharing data. The Interactive Model of Leaf Decomposition (IMOLD) utilizes animations and modeling to teach about the carbon cycle, leaf anatomy, and the role of microbes in decomposition. Paired together, IMOLD teaches the background information and allows simulation of numerous scenarios, and the GDP is a data collection protocol that allows students to gather usable measurements of decomposition in the field. Our presentation will detail how the GDP protocol works, how to obtain or make the materials needed, and how results will be shared. We will also highlight learning objectives from the three animations of IMOLD, and demonstrate how students can experiment with different climates and litter types using the interactive model to explore a variety of decomposition scenarios. The GDP demonstrates how scientific methods can be extended to educate broader audiences, and data collected by students can provide new insight into global patterns of soil decomposition. Using IMOLD, students will gain a better understanding of carbon cycling in the context of litter decomposition, as well as learn to pose questions they can answer with an authentic computer model. Together, the GDP protocols and IMOLD provide a pathway for scientists and educators to interact and reach meaningful education and research goals.
A data-driven decomposition approach to model aerodynamic forces on flapping airfoils
NASA Astrophysics Data System (ADS)
Raiola, Marco; Discetti, Stefano; Ianiro, Andrea
2017-11-01
In this work, we exploit a data-driven decomposition of experimental data from a flapping airfoil experiment with the aim of isolating the main contributions to the aerodynamic force and obtaining a phenomenological model. Experiments are carried out on a NACA 0012 airfoil in forward flight with both heaving and pitching motion. Velocity measurements of the near field are carried out with planar particle image velocimetry (PIV), while force measurements are performed with a load cell. The phase-averaged velocity fields are transformed into the wing-fixed reference frame, allowing for a description of the field in a domain with fixed boundaries. The decomposition of the flow field is performed by means of proper orthogonal decomposition (POD) applied to the velocity fluctuations and then extended to the phase-averaged force data by means of the extended POD approach. This choice is justified by the simple consideration that aerodynamic forces determine the largest contributions to the energetic balance in the flow field. Only the first 6 modes have a relevant contribution to the force. A clear relationship can be drawn between the force and the flow field modes. Moreover, the force modes are closely related to (yet slightly different from) the contributions of the classic potential models in the literature, allowing for their correction. This work has been supported by the Spanish MINECO under Grant TRA2013-41103-P.
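The POD/extended-POD projection described above can be sketched on synthetic data. The flow field, force signal, and mode count here are illustrative assumptions, not the experimental data:

```python
import numpy as np

rng = np.random.default_rng(0)
nt, npts = 200, 64
t = np.linspace(0, 2 * np.pi, nt, endpoint=False)

# synthetic "phase-averaged velocity" snapshots: two coherent structures + noise
phi1 = np.sin(np.linspace(0, np.pi, npts))
phi2 = np.sin(2 * np.linspace(0, np.pi, npts))
U = (np.outer(np.sin(t), phi1) + 0.5 * np.outer(np.sin(2 * t), phi2)
     + 0.01 * rng.standard_normal((nt, npts)))
U -= U.mean(axis=0)                      # fluctuations about the mean field

# POD via SVD of the snapshot matrix: rows of Vt are spatial modes,
# columns of A*s are the temporal coefficients a_k(t)
A, s, Vt = np.linalg.svd(U, full_matrices=False)
a = A * s

# synthetic "force" signal correlated with the first coherent structure
F = 2.0 * np.sin(t) + 0.05 * rng.standard_normal(nt)

# extended POD: project the force onto the POD temporal coefficients,
# then rebuild the force from the first few modes only
xi = a.T @ F / s**2                      # force contribution per POD mode
F_recon = a[:, :2] @ xi[:2]
```

Because the force is correlated with the leading flow structure, a handful of modes reconstructs it well, which is the mechanism behind the "first 6 modes" observation in the abstract.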
Decomposition-Based Failure Mode Identification Method for Risk-Free Design of Large Systems
NASA Technical Reports Server (NTRS)
Tumer, Irem Y.; Stone, Robert B.; Roberts, Rory A.; Clancy, Daniel (Technical Monitor)
2002-01-01
When designing products, it is crucial to assure failure- and risk-free operation in the intended operating environment. Failures are typically studied and eliminated as much as possible during the early stages of design. The few failures that go undetected result in unacceptable damage and losses in high-risk applications where public safety is of concern. Published NASA and NTSB accident reports point to a variety of components identified as sources of failures in the reported cases. In previous work, data from these reports were processed and placed in matrix form for all the system components and failure modes encountered, and then manipulated using matrix methods to determine similarities between the different components and failure modes. In this paper, these matrices are represented in the form of a linear combination of failure modes, mathematically formed using Principal Components Analysis (PCA) decomposition. The PCA decomposition results in a low-dimensionality representation of all failure modes and components of interest, represented in a transformed coordinate system. Such a representation opens the way for efficient pattern analysis and prediction of the failure modes with the highest potential risks in the final product, rather than making decisions based on the large space of component and failure mode data. The mathematics of the proposed method are explained first using a simple example problem. The method is then applied to component failure data gathered from helicopter accident reports to demonstrate its potential.
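A minimal sketch of the PCA step on a hypothetical component-by-failure-mode matrix follows. The components, counts, and similarity measure are invented for illustration, not taken from the accident-report data:

```python
import numpy as np

# hypothetical component-by-failure-mode incidence counts (rows: components)
X = np.array([
    [5, 0, 2, 0],   # rotor
    [4, 1, 2, 0],   # gearbox
    [0, 6, 0, 3],   # hydraulic pump
    [1, 5, 0, 4],   # actuator
], dtype=float)

Xc = X - X.mean(axis=0)                   # centre each failure-mode column
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                            # components in principal coordinates
explained = s**2 / np.sum(s**2)           # variance captured per principal axis

# low-dimensional similarity: distance between components in the first 2 axes
d_rotor_gearbox = np.linalg.norm(scores[0, :2] - scores[1, :2])
d_rotor_pump = np.linalg.norm(scores[0, :2] - scores[2, :2])
```

Components with similar failure-mode profiles (rotor and gearbox here) land close together in the transformed coordinate system, which is the pattern-analysis property the abstract exploits.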
Dai, Wei; Chen, Xiaolin; Wang, Xuewen; Xu, Zimu; Gao, Xueyan; Jiang, Chaosheng; Deng, Ruining; Han, Guomin
2018-01-01
The molecular mechanism underlying the elimination of algal cells by fungal mycelia is not fully understood. Here, we applied transcriptomic analysis to investigate gene expression and regulation over a time course in Trametes versicolor F21a during the algicidal process. A total of 193, 332, 545, and 742 differentially expressed genes were identified at 0, 6, 12, and 30 h during the algicidal process, respectively. The enriched gene ontology terms included glucan 1,4-α-glucosidase activity, hydrolase activity, lipase activity, and endopeptidase activity. The KEGG pathways were enriched in degradation and metabolism pathways including glycolysis/gluconeogenesis, pyruvate metabolism, the biosynthesis of amino acids, etc. The total expression level of all carbohydrate-active enzyme (CAZyme) genes for saccharide metabolism increased twofold relative to the control. AA5, GH18, GH5, GH79, GH128, and PL8 were the top six significantly up-regulated modules among the 43 detected CAZyme modules. Four available homologous decomposition enzymes from other species could partially inhibit the growth of algal cells. These facts suggest that the algicidal mode of T. versicolor F21a might be associated with decomposition enzymes and several metabolic pathways. The results provide a new candidate approach to controlling algal blooms through the application of decomposition enzymes in the future.
Non-equilibrium theory of arrested spinodal decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olais-Govea, José Manuel; López-Flores, Leticia; Medina-Noyola, Magdaleno
The non-equilibrium self-consistent generalized Langevin equation theory of irreversible relaxation [P. E. Ramŕez-González and M. Medina-Noyola, Phys. Rev. E 82, 061503 (2010); 82, 061504 (2010)] is applied to the description of the non-equilibrium processes involved in the spinodal decomposition of suddenly and deeply quenched simple liquids. For model liquids with hard-sphere plus attractive (Yukawa or square well) pair potential, the theory predicts that the spinodal curve, besides being the threshold of the thermodynamic stability of homogeneous states, is also the borderline between the regions of ergodic and non-ergodic homogeneous states. It also predicts that the high-density liquid-glass transition line, whosemore » high-temperature limit corresponds to the well-known hard-sphere glass transition, at lower temperature intersects the spinodal curve and continues inside the spinodal region as a glass-glass transition line. Within the region bounded from below by this low-temperature glass-glass transition and from above by the spinodal dynamic arrest line, we can recognize two distinct domains with qualitatively different temperature dependence of various physical properties. We interpret these two domains as corresponding to full gas-liquid phase separation conditions and to the formation of physical gels by arrested spinodal decomposition. The resulting theoretical scenario is consistent with the corresponding experimental observations in a specific colloidal model system.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lopes, E.S.N.; Cremasco, A.; Afonso, C.R.M.
Aging heat treatment can be a good way to optimize mechanical properties, changing the microstructure and hence the mechanical behavior of Ti alloys. The effects of aging heat treatments on β-type Ti-30Nb alloy were investigated to evaluate the kinetics of the α″ → α + β transformation. The results obtained from differential scanning calorimetry and high-temperature X-ray diffraction experiments indicated the complete decomposition of the orthorhombic α″ phase at close to 300 °C, followed by α phase precipitation at 470 °C. The aging heat treatments also enabled us to observe a transformation sequence α″ → β + ω → β + ω + α, indicating martensite decomposition and ω phase precipitation at 260 °C after 2 h, followed by α phase nucleation after heating at 400 °C for 1 h. The elastic modulus and Vickers hardness of the Ti-30Nb alloy were found to be very sensitive to the microstructural changes caused by heat treatment. - Highlights: • DSC and XRD shed light on the α″ decomposition and nucleation of the ω and α phases. • Aging allows for the α″ → β transformation and nucleation of ω dispersed in the β matrix. • During aging, α″ interplanar distances are reduced to enable β phase nucleation. • Mechanical behavior is dependent on the microstructure and the phases in the alloy. • It is not possible to obtain high strength and low elastic modulus by applying aging.
Meyer, Jan M; Baskaran, Praveen; Quast, Christian; Susoy, Vladislav; Rödelsperger, Christian; Glöckner, Frank O; Sommer, Ralf J
2017-04-01
Insects and nematodes represent the most species-rich animal taxa, and they occur together in a variety of associations. Necromenic nematodes of the genus Pristionchus are found on scarab beetles, with more than 30 species known from worldwide samplings. However, little is known about the dynamics and succession of nematodes and bacteria during the decomposition of beetle carcasses. Here, we study nematode and bacterial succession on the decomposing rhinoceros beetle Oryctes borbonicus on La Réunion Island. We show that Pristionchus pacificus exits the arrested dauer stage seven days after the beetles' deaths. Surprisingly, new dauers are seen after 11 days, suggesting that some worms return to the dauer stage after one reproductive cycle. We used high-throughput sequencing of 16S rRNA genes from decaying beetles, beetle guts and nematodes to study bacterial communities in comparison to soil. We find that soil environments have the most diverse bacterial communities. The bacterial communities of living and decaying beetles are more stable, but a single bacterial family dominates the microbiome of decaying beetles. In contrast, the microbiome of nematodes is relatively similar even across different families. This study represents the first characterization of the dynamics of nematode-bacterial interactions during the decomposition of insects. © 2017 Society for Applied Microbiology and John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Zhang, Shangbin; Lu, Siliang; He, Qingbo; Kong, Fanrang
2016-09-01
For rotating machines, bearing defects generally manifest as periodic transient impulses in the acquired signals. The extraction of transient features from signals has been a key issue for fault diagnosis. In practice, however, background noise degrades the identification of periodic faults. This paper proposes a time-varying singular value decomposition (TSVD) method to enhance the identification of periodic faults. The proposed method is inspired by the sliding window method. By applying singular value decomposition (SVD) to the signal under a sliding window, we can obtain a time-varying singular value matrix (TSVM). Each column in the TSVM is occupied by the singular values of the corresponding sliding window, and each row represents the intrinsic structure of the raw signal, namely a time-singular-value sequence (TSVS). Theoretical and experimental analyses show that the frequency of the TSVS is exactly twice that of the corresponding intrinsic structure. Moreover, the signal-to-noise ratio (SNR) of the TSVS is improved significantly in comparison with the raw signal. The proposed method takes advantage of the TSVS in noise suppression and feature extraction to enhance the fault frequency for diagnosis. The effectiveness of the TSVD is verified by means of simulation studies and applications to the diagnosis of bearing faults. Results indicate that the proposed method is superior to traditional methods for bearing fault diagnosis.
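The sliding-window SVD construction can be sketched as follows. The window length, embedding order, and Hankel-style embedding are assumptions; the paper's exact TSVM construction may differ in these details:

```python
import numpy as np

def tsvm(signal, win=64, order=8, step=1):
    """Time-varying singular value matrix: column j holds the singular values
    of a Hankel-style embedding of the window starting at sample j*step."""
    cols = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        # embedding matrix: rows are shifted copies of the window
        H = np.lib.stride_tricks.sliding_window_view(w, win - order + 1)
        cols.append(np.linalg.svd(H, compute_uv=False))
    return np.array(cols).T              # row k is the k-th TSVS

fs = 1000
t = np.arange(0, 1, 1 / fs)
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * rng.standard_normal(t.size)
M = tsvm(x, win=64, order=8)
tsvs1 = M[0]                             # leading time-singular-value sequence
```

The leading TSVS tracks the dominant oscillatory structure of the signal while the noise energy is spread over the smaller singular values, which is the noise-suppression property the method exploits.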
LP and NLP decomposition without a master problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fuller, D.; Lan, B.
We describe a new algorithm for the decomposition of linear programs and a class of convex nonlinear programs, together with theoretical properties and some test results. Its most striking feature is the absence of a master problem; the subproblems pass primal and dual proposals directly to one another. The algorithm is defined for multi-stage LPs or NLPs, in which the constraints link the current stage's variables to earlier stages' variables. This problem class is general enough to include many problem structures that do not immediately suggest stages, such as block diagonal problems. The basic algorithm is derived for two-stage problems and extended to more than two stages through nested decomposition. The main theoretical result assures convergence, to within any preset tolerance of the optimal value, in a finite number of iterations. This asymptotic convergence result contrasts with the results of limited tests on LPs, in which the optimal solution is apparently found exactly, i.e., to machine accuracy, in a small number of iterations. The tests further suggest that for LPs, the new algorithm is faster than the simplex method applied to the whole problem, as long as the stages are linked loosely; that the speedup over the simplex method improves as the number of stages increases; and that the algorithm is more reliable than nested Dantzig-Wolfe or Benders' methods in its improvement over the simplex method.
Taki, Hirofumi; Nagatani, Yoshiki; Matsukawa, Mami; Kanai, Hiroshi; Izumi, Shin-Ichi
2017-10-01
Ultrasound signals that pass through cancellous bone may be considered to consist of two longitudinal waves, which are called fast and slow waves. Accurate decomposition of these fast and slow waves is considered to be highly beneficial in determining the characteristics of cancellous bone. In the present study, a fast decomposition method using a wave transfer function with a phase rotation parameter was applied to received signals that had passed through bovine bone specimens with various bone volume to total volume (BV/TV) ratios in a simulation study, in which the elastic finite-difference time-domain method was used and the ultrasound wave propagated parallel to the bone axis. The proposed method succeeded in decomposing both fast and slow waves accurately; the normalized residual intensity was less than -19.5 dB when the specimen thickness ranged from 4 to 7 mm and the BV/TV value ranged from 0.144 to 0.226. There was a strong relationship between the phase rotation value and the BV/TV value. The ratio of the peak envelope amplitude of the decomposed fast wave to that of the slow wave increased monotonically with increasing BV/TV ratio, indicating the high performance of the proposed method in estimating the BV/TV value in cancellous bone.
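A crude stand-in for the two-wave decomposition is shown below: two Gaussian tone bursts fitted by nonlinear least squares. The paper instead uses a wave transfer function with a phase rotation parameter, so the model, carrier, and arrival times here are purely illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

fs = 10e6
t = np.arange(0, 20e-6, 1 / fs)
F0, SIGMA = 0.5e6, 2e-6                  # assumed shared carrier and envelope width

def burst(t, amp, t0):
    """Gaussian-envelope tone burst: a crude stand-in for one wave component."""
    return amp * np.exp(-((t - t0) / SIGMA) ** 2) * np.cos(2 * np.pi * F0 * (t - t0))

def two_waves(t, a1, t1, a2, t2):
    return burst(t, a1, t1) + burst(t, a2, t2)

# synthetic through-bone signal: overlapping fast and slow arrivals plus noise
rng = np.random.default_rng(2)
y = two_waves(t, 0.4, 5e-6, 1.0, 10e-6) + 0.01 * rng.standard_normal(t.size)

p, _ = curve_fit(two_waves, t, y, p0=[0.5, 4.8e-6, 0.9, 9.8e-6])
fast, slow = burst(t, p[0], p[1]), burst(t, p[2], p[3])
residual = y - fast - slow               # should fall close to the noise floor
```

As in the abstract, the quality of the decomposition is judged by the smallness of the residual after subtracting the two recovered components.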
The decomposition of fine and coarse roots: their global patterns and controlling factors
Zhang, Xinyue; Wang, Wei
2015-01-01
Fine root decomposition represents a large carbon (C) cost to plants, and serves as a potential soil C source, as well as a substantial proportion of net primary productivity. Coarse roots differ markedly from fine roots in morphology, nutrient concentrations, functions, and decomposition mechanisms. Still poorly understood is whether a consistent global pattern exists between the decomposition of fine (<2 mm root diameter) and coarse (≥2 mm) roots. A comprehensive terrestrial root decomposition dataset, including 530 observations from 71 sampling sites, was thus used to compare global patterns of decomposition of fine and coarse roots. Fine roots decomposed significantly faster than coarse roots in middle latitude areas, but their decomposition in low latitude regions was not significantly different from that of coarse roots. Coarse root decomposition showed more dependence on climate, especially mean annual temperature (MAT), than did fine roots. Initial litter lignin content was the most important predictor of fine root decomposition, while lignin to nitrogen ratios, MAT, and mean annual precipitation were the most important predictors of coarse root decomposition. Our study emphasizes the necessity of separating fine roots and coarse roots when predicting the response of belowground C release to future climate changes. PMID:25942391
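Decomposition rates of the kind compared above are commonly summarized by a single-exponential decay constant k. A sketch with invented mass-remaining data (the values and the fine/coarse contrast are illustrative, not the study's dataset):

```python
import numpy as np

# invented mass-remaining fractions for fine and coarse roots over time (years)
t_yr = np.array([0.25, 0.5, 1.0, 2.0, 3.0])
fine = np.array([0.88, 0.78, 0.60, 0.37, 0.22])
coarse = np.array([0.95, 0.90, 0.81, 0.66, 0.53])

def decay_k(t, m):
    """Single-exponential litter decay m(t) = exp(-k t):
    k is minus the least-squares slope of log(mass remaining) vs time."""
    slope, _ = np.polyfit(t, np.log(m), 1)
    return -slope

k_fine, k_coarse = decay_k(t_yr, fine), decay_k(t_yr, coarse)
# fine roots decompose faster here: k_fine (~0.5 / yr) > k_coarse (~0.2 / yr)
```

Fitting k separately for fine and coarse roots is the standard way to compare decomposition rates across the litter classes the abstract distinguishes.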
Ficken, Cari D; Wright, Justin P
2017-01-01
Litter quality and soil environmental conditions are well-studied drivers influencing decomposition rates, but the role played by disturbance legacy, such as fire history, in mediating these drivers is not well understood. Fire history may impact decomposition directly, through changes in soil conditions that impact microbial function, or indirectly, through shifts in plant community composition and litter chemistry. Here, we compared early-stage decomposition rates across longleaf pine forest blocks managed with varying fire frequencies (annual burns, triennial burns, fire-suppression). Using a reciprocal transplant design, we examined how litter chemistry and soil characteristics independently and jointly influenced litter decomposition. We found that both litter chemistry and soil environmental conditions influenced decomposition rates, but only the former was affected by historical fire frequency. Litter from annually burned sites had higher nitrogen content than litter from triennially burned and fire suppression sites, but this was correlated with only a modest increase in decomposition rates. Soil environmental conditions had a larger impact on decomposition than litter chemistry. Across the landscape, decomposition differed more along soil moisture gradients than across fire management regimes. These findings suggest that fire frequency has a limited effect on litter decomposition in this ecosystem, and encourage extending current decomposition frameworks into disturbed systems. However, litter from different species lost different masses due to fire, suggesting that fire may impact decomposition through the preferential combustion of some litter types. Overall, our findings also emphasize the important role of spatial variability in soil environmental conditions, which may be tied to fire frequency across large spatial scales, in driving decomposition rates in this system.
A Novel Two-Component Decomposition for Co-Polar Channels of GF-3 Quad-Pol Data
NASA Astrophysics Data System (ADS)
Kwok, E.; Li, C. H.; Zhao, Q. H.; Li, Y.
2018-04-01
Polarimetric target decomposition theory is the most dynamic and exploratory research area in the field of PolSAR. However, most target decomposition methods are based on fully polarimetric (quad-pol) data and seldom utilize dual-polarization data. Given this, we propose a novel two-component decomposition method for the co-polar channels of GF-3 quad-pol data. This method decomposes the data into two scattering contributions, surface and double-bounce, in the dual co-polar channels. To solve this underdetermined problem, a criterion for determining the model is proposed. The criterion, named the second-order averaged scattering angle, originates from the H/α decomposition, and we also put forward an alternative parameter for it. To validate the effectiveness of the proposed decomposition, Liaodong Bay is selected as the research area. The area is located in northeastern China, where it hosts various wetland resources and exhibits sea ice in winter. We use GF-3 quad-pol data, acquired by China's first C-band polarimetric synthetic aperture radar (PolSAR) satellite, as the study data. The dependencies between the features of the proposed algorithm and comparison decompositions (Pauli decomposition, An & Yang decomposition, Yamaguchi S4R decomposition) were investigated in the study. Through several aspects of the experimental discussion, we can draw the following conclusions: the proposed algorithm may be suitable for special scenes with low vegetation coverage or low vegetation in the non-growing season, and the proposed decomposition features, derived only from co-polar data, are highly correlated with the corresponding comparison decomposition features computed from quad-polarization data. Moreover, they could become inputs to subsequent classification or parameter inversion.
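For orientation, the Pauli-basis split of the two co-polar channels (the baseline against which such two-component methods are compared) can be computed as below; the array values are synthetic stand-ins for GF-3 pixels:

```python
import numpy as np

# synthetic complex co-polar scattering elements (stand-ins for real pixels)
rng = np.random.default_rng(3)
S_hh = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
S_vv = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# Pauli-basis split of the co-polar channels into odd-bounce (surface)
# and even-bounce (double-bounce) powers
P_surface = 0.5 * np.abs(S_hh + S_vv) ** 2
P_double = 0.5 * np.abs(S_hh - S_vv) ** 2

# the change of basis conserves the co-polar span |S_hh|^2 + |S_vv|^2
span = np.abs(S_hh) ** 2 + np.abs(S_vv) ** 2
```

The proposed method goes beyond this fixed split by using a model-selection criterion, but the power-conservation property of the co-polar basis change holds in both cases.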
A density functional theory study of the decomposition mechanism of nitroglycerin.
Pei, Liguan; Dong, Kehai; Tang, Yanhui; Zhang, Bo; Yu, Chang; Li, Wenzuo
2017-08-21
The detailed decomposition mechanism of nitroglycerin (NG) in the gas phase was studied by examining reaction pathways using density functional theory (DFT) and canonical variational transition state theory combined with a small-curvature tunneling correction (CVT/SCT). The mechanism of NG autocatalytic decomposition was investigated at the B3LYP/6-31G(d,p) level of theory. Five possible decomposition pathways involving NG were identified and the rate constants for the pathways at temperatures ranging from 200 to 1000 K were calculated using CVT/SCT. There was found to be a lower energy barrier to the β-H abstraction reaction than to the α-H abstraction reaction during the initial step in the autocatalytic decomposition of NG. The decomposition pathways for CHOCOCHONO2 (a product obtained following the abstraction of three H atoms from NG by NO2) include O-NO2 cleavage or isomer production, meaning that the autocatalytic decomposition of NG has two reaction pathways, both of which are exothermic. The rate constants for these two reaction pathways are greater than the rate constants for the three pathways corresponding to unimolecular NG decomposition. The overall process of NG decomposition can be divided into two stages based on the NO2 concentration, which affects the decomposition products and reactions. In the first stage, the reaction pathway corresponding to O-NO2 cleavage is the main pathway, but the rates of the two autocatalytic decomposition pathways increase with increasing NO2 concentration. However, when a threshold NO2 concentration is reached, the NG decomposition process enters its second stage, with the two pathways for NG autocatalytic decomposition becoming the main and secondary reaction pathways.
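The CVT/SCT rate constants reported here require variational optimization of the dividing surface along the reaction path plus a tunneling correction, which is beyond a short sketch; the conventional transition-state-theory expression they refine can, however, be written down directly. A minimal sketch of that simpler expression (the 150 kJ/mol barrier is an illustrative value, not one from the paper):

```python
import math

KB = 1.380649e-23    # Boltzmann constant, J/K
H  = 6.62607015e-34  # Planck constant, J*s
R  = 8.314462618     # gas constant, J/(mol*K)

def tst_rate_constant(delta_g_kj_mol, T):
    """Conventional transition-state-theory rate constant (s^-1) for a
    unimolecular step: k(T) = (kB*T/h) * exp(-DeltaG^/(R*T)).
    CVT/SCT, as used in the paper, additionally optimizes the dividing
    surface variationally and applies a small-curvature tunneling
    correction; this sketch omits both refinements."""
    return (KB * T / H) * math.exp(-delta_g_kj_mol * 1e3 / (R * T))

# Rate rises steeply with temperature for a 150 kJ/mol free-energy barrier
k_300 = tst_rate_constant(150.0, 300.0)
k_1000 = tst_rate_constant(150.0, 1000.0)
```

This exponential temperature dependence is why rate constants are typically tabulated over a range such as the 200 to 1000 K used in the study.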
Wright, Justin P.
2017-01-01
Litter quality and soil environmental conditions are well-studied drivers influencing decomposition rates, but the role played by disturbance legacy, such as fire history, in mediating these drivers is not well understood. Fire history may impact decomposition directly, through changes in soil conditions that impact microbial function, or indirectly, through shifts in plant community composition and litter chemistry. Here, we compared early-stage decomposition rates across longleaf pine forest blocks managed with varying fire frequencies (annual burns, triennial burns, fire-suppression). Using a reciprocal transplant design, we examined how litter chemistry and soil characteristics independently and jointly influenced litter decomposition. We found that both litter chemistry and soil environmental conditions influenced decomposition rates, but only the former was affected by historical fire frequency. Litter from annually burned sites had higher nitrogen content than litter from triennially burned and fire suppression sites, but this was correlated with only a modest increase in decomposition rates. Soil environmental conditions had a larger impact on decomposition than litter chemistry. Across the landscape, decomposition differed more along soil moisture gradients than across fire management regimes. These findings suggest that fire frequency has a limited effect on litter decomposition in this ecosystem, and encourage extending current decomposition frameworks into disturbed systems. However, litter from different species lost different masses due to fire, suggesting that fire may impact decomposition through the preferential combustion of some litter types. Overall, our findings also emphasize the important role of spatial variability in soil environmental conditions, which may be tied to fire frequency across large spatial scales, in driving decomposition rates in this system. PMID:29023560
Dictionary-Based Tensor Canonical Polyadic Decomposition
NASA Astrophysics Data System (ADS)
Cohen, Jeremy Emile; Gillis, Nicolas
2018-04-01
To ensure interpretability of the sources extracted in tensor decomposition, we introduce in this paper a dictionary-based tensor canonical polyadic decomposition which enforces one factor to belong exactly to a known dictionary. A new formulation of sparse coding is proposed which enables dictionary-based canonical polyadic decomposition of high-dimensional tensors. The benefits of using a dictionary in tensor decomposition models are explored both in terms of parameter identifiability and estimation accuracy. The performance of the proposed algorithms is evaluated on the decomposition of simulated data and the unmixing of hyperspectral images.
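The defining constraint, one factor belonging exactly to a known dictionary, can be illustrated by the matching step alone: replace each estimated factor column with its best-correlated dictionary atom. A minimal sketch under that reading (the full algorithm alternates such a step with least-squares updates of the other factors; all names here are illustrative, not from the paper):

```python
import numpy as np

def match_factor_to_dictionary(A, D):
    """Replace each column of factor A with its best-matching dictionary
    atom (highest absolute normalized correlation). This enforces the
    dictionary-based CPD constraint that one factor belongs exactly to
    the known dictionary D (atoms stored as columns)."""
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
    An = A / np.linalg.norm(A, axis=0, keepdims=True)
    corr = np.abs(Dn.T @ An)          # atoms x factor-columns
    best = np.argmax(corr, axis=0)    # index of the best atom per column
    return D[:, best], best

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 8))                         # dictionary of 8 atoms
A = D[:, [2, 5]] + 0.01 * rng.standard_normal((20, 2))   # noisy copies of atoms 2 and 5
A_proj, idx = match_factor_to_dictionary(A, D)
```

With low noise, the matching step recovers the generating atoms, which is the identifiability benefit the abstract refers to.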
Comparison of decomposition rates between autopsied and non-autopsied human remains.
Bates, Lennon N; Wescott, Daniel J
2016-04-01
Penetrating trauma has been cited as a significant factor in the rate of decomposition. Therefore, penetrating trauma may have an effect on estimations of time-since-death in medicolegal investigations and on research examining decomposition rates and processes when autopsied human bodies are used. The goal of this study was to determine if there are differences in the rate of decomposition between autopsied and non-autopsied human remains in the same environment. The purpose is to shed light on how large incisions, such as those from a thoracoabdominal autopsy, affect time-since-death estimations and research on the rate of decomposition that uses both autopsied and non-autopsied human remains. In this study, 59 non-autopsied and 24 autopsied bodies were studied. The number of accumulated degree days required to reach each decomposition stage was compared between autopsied and non-autopsied remains. Additionally, both types of bodies were examined for seasonal differences in decomposition rates. As temperature affects the rate of decomposition, this study also compared the internal body temperatures of autopsied and non-autopsied remains to see if differences between the two might lead to differential decomposition. For this portion of the study, eight non-autopsied and five autopsied bodies were investigated. Internal temperature was collected once a day for two weeks. The results showed that the differences in decomposition rate between autopsied and non-autopsied remains were not statistically significant, though the average ADD needed to reach each stage of decomposition was slightly lower for autopsied bodies than for non-autopsied bodies. There was also no significant difference between autopsied and non-autopsied bodies in the rate of decomposition by season or in internal temperature. Therefore, this study suggests that it is unnecessary to separate autopsied and non-autopsied remains when studying gross stages of human decomposition in Central Texas and that penetrating trauma may not be a significant factor in the overall rate of decomposition. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
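The accumulated-degree-day (ADD) scale on which the decomposition stages above are compared is a simple thermal sum. A minimal sketch (a base temperature of 0 °C is a common convention in forensic taphonomy, not necessarily the one used in this study):

```python
def accumulated_degree_days(daily_mean_temps_c, base_temp_c=0.0):
    """Accumulated degree days: the sum of daily mean temperatures
    above a base temperature. ADD is the thermal-time scale on which
    decomposition stages are compared across bodies and seasons.
    The base temperature and averaging protocol vary between studies."""
    return sum(max(t - base_temp_c, 0.0) for t in daily_mean_temps_c)

# Five warm days accumulate thermal units quickly
temps = [22.5, 25.0, 19.0, 28.5, 30.0]
add = accumulated_degree_days(temps)
```

Because ADD normalizes for temperature, comparing the ADD required to reach each stage (rather than elapsed days) lets the study pool bodies placed in different seasons.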
Multidimensional Compressed Sensing MRI Using Tensor Decomposition-Based Sparsifying Transform
Yu, Yeyang; Jin, Jin; Liu, Feng; Crozier, Stuart
2014-01-01
Compressed Sensing (CS) has been applied in dynamic Magnetic Resonance Imaging (MRI) to accelerate data acquisition without noticeably degrading the spatial-temporal resolution. A suitable sparsity basis is one of the key components of successful CS applications. Conventionally, a multidimensional dataset in dynamic MRI is treated as a series of two-dimensional matrices, and various matrix/vector transforms are then used to explore the image sparsity. Traditional methods typically sparsify the spatial and temporal information independently. In this work, we propose a novel concept of tensor sparsity for the application of CS in dynamic MRI, and present the Higher-order Singular Value Decomposition (HOSVD) as a practical example. Applications to three- and four-dimensional MRI data demonstrate that HOSVD simultaneously exploits the correlations within the spatial and temporal dimensions. Validations based on cardiac datasets indicate that the proposed method achieved reconstruction accuracy comparable to low-rank matrix recovery methods and outperformed the conventional sparse recovery methods. PMID:24901331
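The HOSVD itself reduces to ordinary SVDs of the tensor's unfoldings: each factor matrix holds the left singular vectors of one mode's unfolding, and the core is the tensor contracted with the transposed factors. A minimal numpy sketch of the generic transform (not the paper's CS reconstruction pipeline):

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: the chosen mode becomes the rows,
    all remaining modes are flattened into the columns."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T):
    """Higher-order SVD: factor matrices are the left singular vectors
    of each mode's unfolding; the core is T contracted with U_n^T."""
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0]
         for n in range(T.ndim)]
    core = T
    for n, Un in enumerate(U):
        # mode-n product with Un^T
        core = np.moveaxis(np.tensordot(Un.T, np.moveaxis(core, n, 0), axes=1), 0, n)
    return core, U

rng = np.random.default_rng(1)
T = rng.standard_normal((4, 5, 6))
core, U = hosvd(T)

# Reconstruct by contracting the core with each factor matrix
R = core
for n, Un in enumerate(U):
    R = np.moveaxis(np.tensordot(Un, np.moveaxis(R, n, 0), axes=1), 0, n)
```

The full-rank HOSVD reconstruction is exact; sparsity (as exploited for CS) comes from the energy of the core concentrating in a few entries, so small core coefficients can be discarded.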
A Wavelet-Based Methodology for Grinding Wheel Condition Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, T. W.; Ting, C.F.; Qu, Jun
2007-01-01
Grinding wheel surface condition changes as more material is removed. This paper presents a wavelet-based methodology for grinding wheel condition monitoring based on acoustic emission (AE) signals. Grinding experiments in creep feed mode were conducted to grind alumina specimens with a resinoid-bonded diamond wheel using two different conditions. During the experiments, AE signals were collected when the wheel was 'sharp' and when the wheel was 'dull'. Discriminant features were then extracted from each raw AE signal segment using the discrete wavelet decomposition procedure. An adaptive genetic clustering algorithm was finally applied to the extracted features in order to distinguish different states of grinding wheel condition. The test results indicate that the proposed methodology can achieve 97% clustering accuracy for the high material removal rate condition, 86.7% for the low material removal rate condition, and 76.7% for the combined grinding conditions if the base wavelet, the decomposition level, and the GA parameters are properly selected.
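The feature-extraction step, energies of the wavelet decomposition bands, can be sketched with a plain Haar transform. The paper stresses that the base wavelet and decomposition level must be chosen carefully, so Haar here is only an illustrative stand-in:

```python
import numpy as np

def haar_dwt(signal, levels):
    """Plain Haar discrete wavelet decomposition. Returns the detail
    coefficients of each level plus the final approximation.
    The signal length must be divisible by 2**levels."""
    details, approx = [], np.asarray(signal, dtype=float)
    for _ in range(levels):
        even, odd = approx[0::2], approx[1::2]
        details.append((even - odd) / np.sqrt(2))  # high-pass band
        approx = (even + odd) / np.sqrt(2)         # low-pass band
    return details, approx

def band_energies(signal, levels):
    """Energy per decomposition band: a common discriminant feature
    for AE-signal condition monitoring."""
    details, approx = haar_dwt(signal, levels)
    return [float(np.sum(d**2)) for d in details] + [float(np.sum(approx**2))]

x = np.sin(np.linspace(0, 8 * np.pi, 64))
feats = band_energies(x, 3)
```

Because the Haar transform is orthogonal, the band energies sum to the total signal energy, so the feature vector is a partition of the signal's energy across frequency bands.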
NASA Astrophysics Data System (ADS)
Shahzad, Syed Jawad Hussain; Kumar, Ronald Ravinesh; Ali, Sajid; Ameer, Saba
2016-09-01
The interdependence of Greece and other European stock markets and the subsequent portfolio implications are examined in the wavelet and variational mode decomposition domains. In applying the decomposition techniques, we analyze the structural properties of the data and distinguish between the short- and long-term dynamics of stock market returns. First, GARCH-type models are fitted to obtain the standardized residuals. Next, different copula functions are evaluated, and based on the conventional information criteria and the time-varying parameter, the Joe-Clayton copula is chosen to model the tail dependence between the stock markets. The short-run lower tail dependence time paths show a sudden increase in comovement during the global financial crisis. The results of the long-run dependence suggest that European stock markets have higher interdependence with the Greek stock market. Individual countries' Value at Risk (VaR) separates the countries into two distinct groups. Finally, the two-asset portfolio VaR measures identify potential markets for diversification of Greek stock market investments.
On the Processing of Martensitic Steels in Continuous Galvanizing Lines: Part II
NASA Astrophysics Data System (ADS)
Song, Taejin; Kwak, Jaihyun; de Cooman, B. C.
2012-01-01
The conventional continuous hot-dip galvanizing (GI) and galvannealing (GA) processes can be applied to untransformed austenite to produce Zn and Zn-alloy coated low-carbon ultra-high-strength martensitic steel provided specific alloying additions are made. The most suitable austenite decomposition behavior results from the combined addition of boron, Cr, and Mo, which results in a pronounced transformation bay during isothermal transformation. The occurrence of this transformation bay implies a considerable retardation of the austenite decomposition in the temperature range below the bay, which is close to the stages in the continuous galvanizing line (CGL) thermal cycle related to the GI and GA processes. After the GI and GA processes, a small amount of granular bainite, which consists of bainitic ferrite and discrete islands of martensite/austenite (M/A) constituents embedded in martensite matrix, is present in the microstructure. The ultimate tensile strength (UTS) of the steel after the GI and GA cycle was over 1300 MPa, and the stress-strain curve was continuous without any yielding phenomena.
Normal forms of Hopf-zero singularity
NASA Astrophysics Data System (ADS)
Gazor, Majid; Mokhtari, Fahimeh
2015-01-01
The Lie algebra generated by Hopf-zero classical normal forms is decomposed into two versal Lie subalgebras. Some dynamical properties for each subalgebra are described; one is the set of all volume-preserving conservative systems while the other is the maximal Lie algebra of nonconservative systems. This introduces a unique conservative-nonconservative decomposition for the normal form systems. There exists a Lie-subalgebra that is Lie-isomorphic to a large family of vector fields with Bogdanov-Takens singularity. This gives rise to a conclusion that the local dynamics of formal Hopf-zero singularities is well-understood by the study of Bogdanov-Takens singularities. Despite this, the normal form computations of Bogdanov-Takens and Hopf-zero singularities are independent. Thus, by assuming a quadratic nonzero condition, complete results on the simplest Hopf-zero normal forms are obtained in terms of the conservative-nonconservative decomposition. Some practical formulas are derived and the results implemented using Maple. The method has been applied on the Rössler and Kuramoto-Sivashinsky equations to demonstrate the applicability of our results.
ADM Analysis of gravity models within the framework of bimetric variational formalism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Golovnev, Alexey; Karčiauskas, Mindaugas; Nyrhinen, Hannu J., E-mail: agolovnev@yandex.ru, E-mail: mindaugas.karciauskas@helsinki.fi, E-mail: hannu.nyrhinen@helsinki.fi
2015-05-01
Bimetric variational formalism was recently employed to construct novel bimetric gravity models. In these models an affine connection is generated by an additional tensor field which is independent of the physical metric. In this work we demonstrate how the ADM decomposition can be applied to study such models and provide some technical intermediate details. Using ADM decomposition we are able to prove that a linear model is unstable as has previously been indicated by perturbative analysis. Moreover, we show that it is also very difficult if not impossible to construct a non-linear model which is ghost-free within the framework of bimetric variational formalism. However, we demonstrate that viable models are possible along similar lines of thought. To this end, we consider a set up in which the affine connection is a variation of the Levi-Civita one. As a proof of principle we construct a gravity model with a massless scalar field obtained this way.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fu, Yao, E-mail: fu5@mailbox.sc.edu, E-mail: jhsong@cec.sc.edu; Song, Jeong-Hoon, E-mail: fu5@mailbox.sc.edu, E-mail: jhsong@cec.sc.edu
2014-08-07
Hardy stress definition has been restricted to pair potentials and embedded-atom method potentials due to the basic assumptions in the derivation of a symmetric microscopic stress tensor. Force decomposition required in the Hardy stress expression becomes obscure for multi-body potentials. In this work, we demonstrate the invariance of the Hardy stress expression for a polymer system modeled with multi-body interatomic potentials including up to four atoms interaction, by applying central force decomposition of the atomic force. The balance of momentum has been demonstrated to be valid theoretically and tested under various numerical simulation conditions. The validity of momentum conservation justifies the extension of Hardy stress expression to multi-body potential systems. Computed Hardy stress has been observed to converge to the virial stress of the system with increasing spatial averaging volume. This work provides a feasible and reliable linkage between the atomistic and continuum scales for multi-body potential systems.
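The virial stress that the Hardy estimate converges to has a simple pairwise form once the atomic forces are decomposed into central pair terms. A minimal sketch of that pairwise virial (sign and naming conventions vary across the literature; this computes the pairwise virial contribution to the pressure tensor, with the kinetic term omitted, and is not the spatially averaged Hardy formulation itself):

```python
import numpy as np

def pair_virial_pressure(positions, pair_forces, volume):
    """Pairwise virial contribution to the pressure tensor:
    P = (1/V) * sum_{i<j} outer(r_ij, f_ij), with r_ij = r_i - r_j and
    f_ij the force on atom i due to atom j. In this convention a
    repulsive pair (r_ij . f_ij > 0) contributes positive pressure.
    pair_forces is a list of (i, j, f_ij) tuples."""
    P = np.zeros((3, 3))
    for i, j, f_ij in pair_forces:
        r_ij = positions[i] - positions[j]
        P += np.outer(r_ij, f_ij)
    return P / volume

# Two atoms repelling along x inside a box of volume 8:
pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
f_01 = np.array([-1.0, 0.0, 0.0])   # repulsion pushes atom 0 toward -x
sigma = pair_virial_pressure(pos, [(0, 1, f_01)], volume=8.0)
```

For a multi-body potential, the paper's point is that a central force decomposition rewrites the total atomic forces as sums of such pairwise central terms, so the same expression applies.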
Achieving composition-controlled Cu2ZnSnS4 films by sulfur-free annealing process
NASA Astrophysics Data System (ADS)
Jiang, Hailong; Wei, Xiaoqing; Huang, Yongliang; Wang, Xian; Han, Anjun; Liu, Xiaohui; Liu, Zhengxin; Meng, Fanying
2017-06-01
Cu2ZnSnS4 (CZTS) films were first prepared by a nonvacuum spin-coating method and then annealed at 550 °C in an N2 atmosphere. A graphite box was used to inhibit the volatilization of gaseous SnS and S2, suppressing CZTS decomposition and the generation of MoS2 during annealing. The sulfur supplementation carried out in a conventional annealing process was not applied in this work. It was found that Sn loss was overcome and the compositions of the post-annealed films were close to that of the precursor solution. Thus, by this method, the compositions of CZTS films can be controlled by adjusting the elemental ratios of the precursor solution. In addition, increasing the inert atmosphere pressure could further minimize the Sn loss and improve the crystallinity of the CZTS films. Furthermore, the resistive MoS2 layer between the CZTS film and the Mo layer was suppressed because sulfur was not used and CZTS decomposition was suppressed.
Domain decomposition and matching for time-domain analysis of motions of ships advancing in head sea
NASA Astrophysics Data System (ADS)
Tang, Kai; Zhu, Ren-chuan; Miao, Guo-ping; Fan, Ju
2014-08-01
A domain decomposition and matching method in the time domain is outlined for simulating the motions of ships advancing in waves. The flow field is decomposed into inner and outer domains by an imaginary control surface; the Rankine source method is applied in the inner domain while the transient Green function method is used in the outer domain. The two initial boundary value problems are matched on the control surface. The corresponding numerical codes are developed, and the added masses, wave exciting forces and motions in head sea are presented and verified for the Series 60 ship and the S175 containership. Good agreement is obtained when the numerical results are compared with experimental data and other references. The present method is more efficient because panel discretization is required only in the inner domain, and it exhibits good numerical stability, avoiding the divergence problems encountered for ships with flare.
Evolutionary and Developmental Modules
Lacquaniti, Francesco; Ivanenko, Yuri P.; d’Avella, Andrea; Zelik, Karl E.; Zago, Myrka
2013-01-01
The identification of biological modules at the systems level often follows top-down decomposition of a task goal, or bottom-up decomposition of multidimensional data arrays into basic elements or patterns representing shared features. These approaches traditionally have been applied to mature, fully developed systems. Here we review some results from two other perspectives on modularity, namely the developmental and evolutionary perspective. There is growing evidence that modular units of development were highly preserved and recombined during evolution. We first consider a few examples of modules well identifiable from morphology. Next we consider the more difficult issue of identifying functional developmental modules. We dwell especially on modular control of locomotion to argue that the building blocks used to construct different locomotor behaviors are similar across several animal species, presumably related to ancestral neural networks of command. A recurrent theme from comparative studies is that the developmental addition of new premotor modules underlies the postnatal acquisition and refinement of several different motor behaviors in vertebrates. PMID:23730285
NASA Astrophysics Data System (ADS)
Jiang, Fan; Zhu, Zhencai; Li, Wei; Zhou, Gongbo; Chen, Guoan
2014-07-01
Accurately identifying faults in rotor-bearing systems by analyzing vibration signals, which are nonlinear and nonstationary, is challenging. To address this issue, a new approach based on ensemble empirical mode decomposition (EEMD) and self-zero space projection analysis is proposed in this paper. This method seeks to identify faults appearing in a rotor-bearing system using simple algebraic calculations and projection analyses. First, EEMD is applied to decompose the collected vibration signals into a set of intrinsic mode functions (IMFs) for features. Second, these extracted features under various mechanical health conditions are used to design a self-zero space matrix according to space projection analysis. Finally, the so-called projection indicators are calculated to identify the rotor-bearing system's faults with simple decision logic. Experiments are implemented to test the reliability and effectiveness of the proposed approach. The results show that this approach can accurately identify faults in rotor-bearing systems.
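The "self-zero space" construction is specific to the paper, but the underlying idea of classification by projection, assigning a feature vector to the class whose subspace leaves the smallest out-of-subspace component, can be sketched generically. All names below are illustrative; this is not the paper's exact projection indicator:

```python
import numpy as np

def projection_residual(x, basis):
    """Distance from feature vector x to the span of the columns of
    `basis`: the norm of the component of x outside the subspace."""
    Q, _ = np.linalg.qr(basis)
    return float(np.linalg.norm(x - Q @ (Q.T @ x)))

def classify_by_projection(x, class_bases):
    """Pick the class whose training subspace leaves the smallest
    projection residual, using simple decision logic."""
    residuals = [projection_residual(x, B) for B in class_bases]
    return int(np.argmin(residuals)), residuals

rng = np.random.default_rng(2)
B_healthy = rng.standard_normal((10, 3))  # subspace from healthy-condition features
B_faulty  = rng.standard_normal((10, 3))  # subspace from faulty-condition features
x = B_faulty @ np.array([1.0, -0.5, 2.0])  # test vector lying in the 'faulty' subspace
label, res = classify_by_projection(x, [B_healthy, B_faulty])
```

In the paper's pipeline, the feature vectors would be statistics of the EEMD-extracted IMFs rather than raw signals, but the decision step is this kind of simple algebraic projection.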
Study on the relevance of some of the description methods for plateau-honed surfaces
NASA Astrophysics Data System (ADS)
Yousfi, M.; Mezghani, S.; Demirci, I.; El Mansori, M.
2014-01-01
Much work has been undertaken in recent years on a complete parametric description of plateau-honed surfaces, with the intention of linking the process conditions, the surface topography and the required functional performances. Different advanced techniques (plateau/valley decomposition using the normalized Abbott-Firestone curve or morphological operators, multiscale decomposition using the continuous wavelet transform, etc.) have been proposed and applied in different studies. This paper re-examines the current state of developments and discusses the relevance of the different proposed parameters and characterization methods for plateau-honed surfaces by considering the manufacturing-characterization-function control loop. The relevance of appropriate characterization is demonstrated through two experimental studies. They consider the effect of the main plateau-honing process variables (the abrasive grit size and abrasive indentation velocity in finish-honing, and the plateau-honing stage duration and pressure) on cylinder liner surface textures and hydrodynamic friction of the ring-pack system.
NASA Astrophysics Data System (ADS)
Bagherzadeh, Seyed Amin; Asadi, Davood
2017-05-01
In search of a precise method for analyzing the nonlinear and non-stationary flight data of an aircraft in icing conditions, an Empirical Mode Decomposition (EMD) algorithm enhanced by multi-objective optimization is introduced. In the proposed method, dissimilar IMF definitions are considered by the Genetic Algorithm (GA) in order to find the best decision parameters of the signal trend. To resolve the disadvantages of the classical algorithm caused by the envelope concept, the signal trend is estimated directly in the proposed method. Furthermore, in order to simplify the performance and understanding of the EMD algorithm, the proposed method obviates the need for a repeated sifting process. The proposed enhanced EMD algorithm is verified on some benchmark signals. Afterwards, the enhanced algorithm is applied to simulated flight data in icing conditions in order to detect ice accretion on the aircraft. The results demonstrate the effectiveness of the proposed EMD algorithm in aircraft ice detection by providing a figure of merit for the icing severity.
NASA Astrophysics Data System (ADS)
Xiong, Hui; Shang, Pengjian; Bian, Songhan
2017-05-01
In this paper, we apply the empirical mode decomposition (EMD) method to the recurrence plot (RP) and recurrence quantification analysis (RQA) to evaluate the frequency- and time-evolving dynamics of traffic flow. Based on the cumulative intrinsic mode functions extracted by the EMD, the frequency-evolving RP over different modes of oscillation suggests that the apparent dynamics of the data are mainly dominated by its medium- and low-frequency components while being severely affected by fast-oscillating noise contained in the signal. The noise is then eliminated to analyze the intrinsic dynamics; consequently, the denoised time-evolving RQA characterizes the properties of the signal in diverse ways and marks more accurately the crucial points where white bands occur in the RP, whereas all the non-denoised RQA measures are in strong qualitative agreement with one another. Overall, EMD combined with recurrence analysis sheds more reliable and abundant light on the intrinsic dynamics of traffic flow, which is meaningful for the empirical analysis of complex systems.
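A recurrence plot of the kind analyzed here is straightforward to compute from a scalar series: time-delay embed, then threshold the pairwise distances between embedded points. A minimal sketch (the embedding dimension, delay, and 10%-of-maximum threshold are common illustrative choices, not the paper's settings):

```python
import numpy as np

def recurrence_plot(x, dim=3, tau=1, eps=None):
    """Recurrence plot of a scalar series: time-delay embedding
    followed by marking pairs of embedded points closer than eps.
    If eps is None, use 10% of the maximum pairwise distance (one
    common heuristic; threshold selection varies across studies)."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * tau
    # delay embedding: row t is (x[t], x[t+tau], ..., x[t+(dim-1)*tau])
    emb = np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    if eps is None:
        eps = 0.1 * dists.max()
    return (dists <= eps).astype(int)

t = np.linspace(0, 6 * np.pi, 200)
RP = recurrence_plot(np.sin(t))
```

RQA measures (recurrence rate, determinism, laminarity, and so on) are then statistics of the diagonal and vertical line structures of this binary matrix; the white bands mentioned above are columns with few recurrence points.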