Sample records for random decomposable problems

  1. A spectral analysis of the domain decomposed Monte Carlo method for linear systems

    DOE PAGES

    Slattery, Stuart R.; Evans, Thomas M.; Wilson, Paul P. H.

    2015-09-08

    The domain decomposed behavior of the adjoint Neumann-Ulam Monte Carlo method for solving linear systems is analyzed using the spectral properties of the linear operator. Relationships for the average length of the adjoint random walks, a measure of convergence speed and serial performance, are made with respect to the eigenvalues of the linear operator. In addition, relationships for the effective optical thickness of a domain in the decomposition are presented based on the spectral analysis and diffusion theory. Using the effective optical thickness, the Wigner rational approximation and the mean chord approximation are applied to estimate the leakage fraction of random walks from a domain in the decomposition as a measure of parallel performance and potential communication costs. The one-speed, two-dimensional neutron diffusion equation is used as a model problem in numerical experiments to test the models for symmetric operators with spectral qualities similar to light water reactor problems. We find, in general, the derived approximations show good agreement with random walk lengths and leakage fractions computed by the numerical experiments.
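
    The central object in this record, the adjoint Neumann-Ulam random walk, can be made concrete with a short sketch. The Python snippet below is a minimal, generic adjoint estimator for a linear system written in the fixed-point form x = Hx + b, assuming the Neumann series converges; it is not the domain-decomposed solver analyzed in the paper, but it shows where the walk lengths characterized by the spectral analysis come from.

      import numpy as np

      def adjoint_mc_solve(H, b, n_walks=20000, w_cut=1e-6, max_steps=1000, seed=None):
          """Sketch of an adjoint (Neumann-Ulam) Monte Carlo solver for x = H x + b.
          Assumes the Neumann series sum_k H^k b converges (spectral radius of H < 1).
          Each walk starts at a source index sampled from |b| and tallies its running
          weight into every state it visits, so one walk contributes to all orders k."""
          rng = np.random.default_rng(seed)
          n = len(b)
          p0 = np.abs(b) / np.abs(b).sum()              # source distribution
          col_norm = np.abs(H).sum(axis=0)              # c_j = sum_i |H_ij|; adjoint walks follow columns
          x, lengths = np.zeros(n), []
          for _ in range(n_walks):
              j = rng.choice(n, p=p0)
              w = b[j] / p0[j]
              x[j] += w                                 # k = 0 term of the Neumann series
              steps = 0
              while abs(w) > w_cut and steps < max_steps and col_norm[j] > 0:
                  probs = np.abs(H[:, j]) / col_norm[j]
                  i = rng.choice(n, p=probs)
                  w *= H[i, j] / probs[i]               # = sign(H_ij) * c_j
                  x[i] += w
                  j = i
                  steps += 1
              lengths.append(steps)
          return x / n_walks, np.mean(lengths)          # solution estimate, average walk length

      # Small symmetric test operator with spectral radius 0.4.
      rng = np.random.default_rng(0)
      A = rng.uniform(size=(8, 8))
      H = 0.4 * (A + A.T) / np.abs(np.linalg.eigvalsh(A + A.T)).max()
      b = rng.uniform(size=8)
      x_mc, avg_len = adjoint_mc_solve(H, b, seed=1)
      print(avg_len, np.max(np.abs(x_mc - np.linalg.solve(np.eye(8) - H, b))))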

  2. An Optimization-based Framework to Learn Conditional Random Fields for Multi-label Classification

    PubMed Central

    Naeini, Mahdi Pakdaman; Batal, Iyad; Liu, Zitao; Hong, CharmGil; Hauskrecht, Milos

    2015-01-01

    This paper studies the multi-label classification problem, in which data instances are associated with multiple, possibly high-dimensional, label vectors. This problem is especially challenging when labels are dependent and one cannot decompose the problem into a set of independent classification problems. To address the problem and properly represent label dependencies, we propose and study a pairwise conditional random field (CRF) model. We develop a new approach for learning the structure and parameters of the CRF from data. The approach maximizes the pseudo-likelihood of observed labels and relies on fast proximal gradient descent for learning the structure and limited-memory BFGS for learning the parameters of the model. Empirical results on several datasets show that our approach outperforms several multi-label classification baselines, including recently published state-of-the-art methods. PMID:25927015

  3. A connectionist model for diagnostic problem solving

    NASA Technical Reports Server (NTRS)

    Peng, Yun; Reggia, James A.

    1989-01-01

    A competition-based connectionist model for solving diagnostic problems is described. The problems considered are computationally difficult in that (1) multiple disorders may occur simultaneously and (2) a global optimum is sought in a space exponential in the total number of possible disorders. The diagnostic problem is treated as a nonlinear optimization problem, and global optimization criteria are decomposed into local criteria governing node activation updating in the connectionist model. Nodes representing disorders compete with each other to account for each individual manifestation, yet complement each other to account for all manifestations through parallel node interactions. When equilibrium is reached, the network settles into a locally optimal state. Three randomly generated examples of diagnostic problems, each of which has 1024 cases, were tested, and the decomposition plus competition plus resettling approach yielded very high accuracy.

  4. Effect of randomness on multi-frequency aeroelastic responses resolved by Unsteady Adaptive Stochastic Finite Elements

    NASA Astrophysics Data System (ADS)

    Witteveen, Jeroen A. S.; Bijl, Hester

    2009-10-01

    The Unsteady Adaptive Stochastic Finite Elements (UASFE) method resolves the effect of randomness in numerical simulations of single-mode aeroelastic responses with a constant accuracy in time for a constant number of samples. In this paper, the UASFE framework is extended to multi-frequency responses and continuous structures by employing a wavelet decomposition pre-processing step to decompose the sampled multi-frequency signals into single-frequency components. The effect of the randomness on the multi-frequency response is then obtained by summing the results of the UASFE interpolation at constant phase for the different frequency components. Results for multi-frequency responses and continuous structures show a three orders of magnitude reduction of computational costs compared to crude Monte Carlo simulations in a harmonically forced oscillator, a flutter panel problem, and the three-dimensional transonic AGARD 445.6 wing aeroelastic benchmark subject to random fields and random parameters with various probability distributions.

  5. Optimal image alignment with random projections of manifolds: algorithm and geometric analysis.

    PubMed

    Kokiopoulou, Effrosyni; Kressner, Daniel; Frossard, Pascal

    2011-06-01

    This paper addresses the problem of image alignment based on random measurements. Image alignment consists of estimating the relative transformation between a query image and a reference image. We consider the specific problem where the query image is provided in compressed form in terms of linear measurements captured by a vision sensor. We cast the alignment problem as a manifold distance minimization problem in the linear subspace defined by the measurements. The transformation manifold that represents synthesis of shift, rotation, and isotropic scaling of the reference image can be given in closed form when the reference pattern is sparsely represented over a parametric dictionary. We show that the objective function can then be decomposed as the difference of two convex functions (DC) in the particular case where the dictionary is built on Gaussian functions. Thus, the optimization problem becomes a DC program, which in turn can be solved globally by a cutting plane method. The quality of the solution is typically affected by the number of random measurements and the condition number of the manifold that describes the transformations of the reference image. We show that the curvature, which is closely related to the condition number, remains bounded in our image alignment problem, which means that the relative transformation between two images can be determined optimally in a reduced subspace.
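
    The pivotal construction in this record is the difference-of-convex (DC) decomposition of the objective. The sketch below is a toy illustration of that idea on an invented one-dimensional double well, using the simple DCA linearisation iteration rather than the cutting plane method the authors use for global optimisation: with f(x) = g(x) - h(x) and g, h convex, each step minimises g after linearising h at the current iterate.

      import numpy as np

      # Toy difference-of-convex (DC) decomposition: f(x) = g(x) - h(x) with
      # g(x) = x**4 and h(x) = 2*x**2, so f(x) = x**4 - 2*x**2 (a double well).
      # Each DCA step linearises h at the current iterate and minimises
      # g(x) - h'(x_k) * x exactly (here in closed form via a cube root).
      def dca_double_well(x0, n_iter=30):
          x = x0
          for _ in range(n_iter):
              c = 4.0 * x                      # h'(x_k) for h(x) = 2 x^2
              x = float(np.cbrt(c / 4.0))      # argmin_x x^4 - c x  <=>  4 x^3 = c
          return x

      for x0 in (0.3, -2.0, 5.0):
          x_star = dca_double_well(x0)
          print(x0, "->", round(x_star, 4), "f =", round(x_star ** 4 - 2 * x_star ** 2, 4))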

  6. Decomposing of Socioeconomic Inequality in Mental Health: A Cross-Sectional Study into Female-Headed Households.

    PubMed

    Veisani, Yousef; Delpisheh, Ali

    2015-01-01

    A connection between socioeconomic status and mental health has already been reported. Mental health is distributed asymmetrically in society, so people in disadvantaged conditions suffer a disproportionate burden of mental disorders. In this study, we aimed to identify the determinants of socioeconomic inequality in mental health among female-headed households and to decompose the contributions of socioeconomic determinants to that inequality. In this cross-sectional study, 787 female-headed households were enrolled using systematic random sampling in 2014. Data were taken from a household assets survey and the self-administered 28-item General Health Questionnaire (GHQ-28), used as a screening tool for detecting possible cases of mental disorders. Inequality was measured by the concentration index (CI) and decomposed into the contributions of its determinants. All analyses were performed with the standard statistical software Stata 11.2. The overall CI for mental health in the female-headed households was -0.049 (95% CI: -0.072, 0.025). The largest positive contributors to inequality in mental health in the female-headed households were age (34%) and poor household economic status (22%). Socioeconomic inequalities in mental health exist among female-headed households, and mental health problems are more prevalent in women with lower socioeconomic status.

  7. Decomposition of timed automata for solving scheduling problems

    NASA Astrophysics Data System (ADS)

    Nishi, Tatsushi; Wakatake, Masato

    2014-03-01

    A decomposition algorithm for scheduling problems based on a timed automata (TA) model is proposed. The problem is represented as an optimal state transition problem for TA. The model comprises the parallel composition of submodels such as jobs and resources. The procedure of the proposed methodology can be divided into two steps. The first step is to decompose the TA model into several submodels by using a decomposable condition. The second step is to combine the individual solutions of the subproblems for the decomposed submodels by the penalty function method. A feasible solution for the entire model is derived through the iterated computation of solving the subproblem for each submodel. The proposed methodology is applied to solve flowshop and jobshop scheduling problems. Computational experiments demonstrate the effectiveness of the proposed algorithm compared with a conventional TA scheduling algorithm without decomposition.

  8. Solution of the determinantal assignment problem using the Grassmann matrices

    NASA Astrophysics Data System (ADS)

    Karcanias, Nicos; Leventides, John

    2016-02-01

    The paper provides a direct solution to the determinantal assignment problem (DAP), which unifies all frequency assignment problems of the linear control theory. The current approach is based on the solvability of the exterior equation ? where ? is an n-dimensional vector space over ?, which is an integral part of the solution of DAP. New criteria for the existence of solutions and their computation are developed based on the properties of structured matrices referred to as Grassmann matrices. The solvability of this exterior equation is referred to as decomposability of ?, and it is in turn characterised by the set of quadratic Plücker relations (QPRs) describing the Grassmann variety of the corresponding projective space. Alternative new tests for decomposability of the multi-vector ? are given in terms of the rank properties of the Grassmann matrix, ?, of the vector ?, which is constructed by the coordinates of ?. It is shown that the exterior equation is solvable (? is decomposable) if and only if ?, where ?; the solution space for a decomposable ? is the space ?. This provides an alternative linear algebra characterisation of the decomposability problem and of the Grassmann variety to that defined by the QPRs. Further properties of the Grassmann matrices are explored by defining the Hodge-Grassmann matrix as the dual of the Grassmann matrix. The connections of the Hodge-Grassmann matrix to the solution of exterior equations are examined, and an alternative new characterisation of decomposability is given in terms of the dimension of its image space. The framework based on the Grassmann matrices provides the means for the development of a new computational method for the solutions of the exact DAP (when such solutions exist), as well as computing approximate solutions, when exact solutions do not exist.

  9. An algorithm of adaptive scale object tracking in occlusion

    NASA Astrophysics Data System (ADS)

    Zhao, Congmei

    2017-05-01

    Although correlation filter-based trackers achieve competitive results in both accuracy and robustness, they still have problems in handling scale variation, object occlusion, fast motion, and so on. In this paper, a multi-scale kernel correlation filter algorithm based on a random fern detector was proposed. The tracking task was decomposed into target scale estimation and translation estimation. At the same time, the Color Names features and HOG features were fused at the response level to further improve the overall tracking performance of the algorithm. In addition, an online random fern classifier was trained to re-acquire the target after it was lost. By comparing with algorithms such as KCF, DSST, TLD, MIL, CT and CSK, experimental results show that the proposed approach can estimate the object state accurately and handle object occlusion effectively.

  10. The complexity of divisibility.

    PubMed

    Bausch, Johannes; Cubitt, Toby

    2016-09-01

    We address two sets of long-standing open questions in linear algebra and probability theory, from a computational complexity perspective: stochastic matrix divisibility, and divisibility and decomposability of probability distributions. We prove that finite divisibility of stochastic matrices is an NP-complete problem, and extend this result to nonnegative matrices, and completely-positive trace-preserving maps, i.e. the quantum analogue of stochastic matrices. We further prove a complexity hierarchy for the divisibility and decomposability of probability distributions, showing that finite distribution divisibility is in P, but decomposability is NP-hard. For the former, we give an explicit polynomial-time algorithm. All results on distributions extend to weak-membership formulations, proving that the complexity of these problems is robust to perturbations.
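
    As a concrete handle on the stochastic-matrix divisibility question, the sketch below checks whether the principal matrix square root of a given stochastic matrix is itself stochastic. It examines only one candidate root, so it is an illustration of the problem rather than a decision procedure; the NP-completeness result in the paper rules out any simple complete test.

      import numpy as np
      from scipy.linalg import sqrtm

      def principal_root_is_stochastic(P, tol=1e-8):
          """Check whether the principal square root of a stochastic matrix P is itself
          stochastic (entrywise nonnegative, rows summing to 1). This examines only one
          candidate root, so a False answer does not prove that P is indivisible; the
          NP-completeness result rules out any cheap complete test."""
          Q = np.real_if_close(sqrtm(P))
          return bool(np.all(Q >= -tol) and np.allclose(Q.sum(axis=1), 1.0, atol=1e-6)), Q

      # A 2x2 example: P is the square of a stochastic matrix, hence 2-divisible.
      Q_true = np.array([[0.9, 0.1],
                         [0.2, 0.8]])
      P = Q_true @ Q_true
      ok, Q = principal_root_is_stochastic(P)
      print(ok)     # True: the recovered root matches Q_true up to numerical error
      print(Q)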

  11. Integrated control/structure optimization by multilevel decomposition

    NASA Technical Reports Server (NTRS)

    Zeiler, Thomas A.; Gilbert, Michael G.

    1990-01-01

    A method for integrated control/structure optimization by multilevel decomposition is presented. It is shown that several previously reported methods were actually partial decompositions wherein only the control was decomposed into a subsystem design. One of these partially decomposed problems was selected as a benchmark example for comparison. The system is fully decomposed into structural and control subsystem designs and an improved design is produced. Theory, implementation, and results for the method are presented and compared with the benchmark example.

  12. Integrated control/structure optimization by multilevel decomposition

    NASA Technical Reports Server (NTRS)

    Zeiler, Thomas A.; Gilbert, Michael G.

    1990-01-01

    A method for integrated control/structure optimization by multilevel decomposition is presented. It is shown that several previously reported methods were actually partial decompositions wherein only the control was decomposed into a subsystem design. One of these partially decomposed problems was selected as a benchmark example for comparison. The present paper fully decomposes the system into structural and control subsystem designs and produces an improved design. Theory, implementation, and results for the method are presented and compared with the benchmark example.

  13. A Kohonen-like decomposition method for the Euclidean traveling salesman problem-KNIES_DECOMPOSE.

    PubMed

    Aras, N; Altinel, I K; Oommen, J

    2003-01-01

    In addition to the classical heuristic algorithms of operations research, there have also been several approaches based on artificial neural networks for solving the traveling salesman problem. Their efficiency, however, decreases as the problem size (number of cities) increases. A technique to reduce the complexity of a large-scale traveling salesman problem (TSP) instance is to decompose or partition it into smaller subproblems. We introduce an all-neural decomposition heuristic that is based on a recent self-organizing map called KNIES, which has been successfully implemented for solving both the Euclidean traveling salesman problem and the Euclidean Hamiltonian path problem. Our solution for the Euclidean TSP proceeds by solving the Euclidean HPP for the subproblems, and then patching these solutions together. No such all-neural solution has ever been reported.

  14. Decomposing intuitive components in a conceptual problem solving task.

    PubMed

    Reber, Rolf; Ruch-Monachon, Marie-Antoinette; Perrig, Walter J

    2007-06-01

    Research into intuitive problem solving has shown that participants' hypotheses were objectively closer to the accurate solution than their subjective ratings of closeness indicated. After separating conceptually intuitive problem solving from the solutions of rational incremental tasks and of sudden insight tasks, we replicated this finding by using more precise measures in a conceptual problem-solving task. In a second study, we distinguished performance level, processing style, implicit knowledge and subjective feeling of closeness to the solution within the problem-solving task and examined the relationships of these different components with measures of intelligence and personality. Verbal intelligence correlated with performance level in problem solving, but not with processing style and implicit knowledge. Faith in intuition, openness to experience, and conscientiousness correlated with processing style, but not with implicit knowledge. These findings suggest that one needs to decompose processing style and intuitive components in problem solving to make predictions about the effects of intelligence and personality measures.

  15. Microgrid energy dispatching for industrial zones with renewable generations and electric vehicles via stochastic optimization and learning

    NASA Astrophysics Data System (ADS)

    Zhang, Kai; Li, Jingzhi; He, Zhubin; Yan, Wanfeng

    2018-07-01

    In this paper, a stochastic optimization framework is proposed to address the microgrid energy dispatching problem with random renewable generation and vehicle activity patterns, which is closer to practical applications. The patterns of energy generation, consumption and storage availability are all random and unknown at the beginning, and the microgrid controller design (MCD) is formulated as a Markov decision process (MDP). Hence, an online learning-based control algorithm is proposed for the microgrid, which adapts the control policy with increasing knowledge of the system dynamics and converges to the optimal algorithm. We adopt the linear approximation idea to decompose the original value function as a summation of per-battery value functions. As a consequence, the computational complexity is significantly reduced from exponential growth to linear growth with respect to the size of the battery states. Monte Carlo simulation of different scenarios demonstrates the effectiveness and efficiency of our algorithm.

  16. Layout decomposition of self-aligned double patterning for 2D random logic patterning

    NASA Astrophysics Data System (ADS)

    Ban, Yongchan; Miloslavsky, Alex; Lucas, Kevin; Choi, Soo-Han; Park, Chul-Hong; Pan, David Z.

    2011-04-01

    Self-aligned double patterning (SADP) has been adopted as a promising solution for sub-30nm technology nodes due to its lower overlay problem and better process tolerance. SADP is in production use for 1D dense patterns with good pitch control, such as NAND Flash memory applications, but it is still challenging to apply SADP to 2D random logic patterns. The favored type of SADP for complex logic interconnects is a two-mask approach using a core mask and a trim mask. In this paper, we first describe layout decomposition methods for spacer-type double patterning lithography, then report a type of SADP-compliant layout, and finally report SADP applications on a Samsung 22nm SRAM layout. For SADP decomposition, we propose several SADP-aware layout coloring algorithms and a method of generating lithography-friendly core mask patterns. Experimental results on 22nm node designs show that our proposed layout decomposition for SADP effectively decomposes any given layout.

  17. Multi-element least square HDMR methods and their applications for stochastic multiscale model reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Lijian, E-mail: ljjiang@hnu.edu.cn; Li, Xinping, E-mail: exping@126.com

    Stochastic multiscale modeling has become a necessary approach to quantify uncertainty and characterize multiscale phenomena for many practical problems such as flows in stochastic porous media. The numerical treatment of stochastic multiscale models can be very challenging because of the complex uncertainty and multiple physical scales in the models. To efficiently address this difficulty, we construct a computational reduced model. To this end, we propose a multi-element least square high-dimensional model representation (HDMR) method, through which the random domain is adaptively decomposed into a few subdomains, and a local least square HDMR is constructed in each subdomain. These local HDMRs are represented by a finite number of orthogonal basis functions defined in low-dimensional random spaces. The coefficients in the local HDMRs are determined using least square methods. We paste all the local HDMR approximations together to form a global HDMR approximation. To further reduce computational cost, we present a multi-element reduced least-square HDMR, which improves both efficiency and approximation accuracy in certain conditions. To effectively treat heterogeneity properties and multiscale features in the models, we integrate multiscale finite element methods with multi-element least-square HDMR for stochastic multiscale model reduction. This approach significantly reduces the original model's complexity in both the resolution of the physical space and the high-dimensional stochastic space. We analyze the proposed approach, and provide a set of numerical experiments to demonstrate the performance of the presented model reduction techniques. - Highlights: • Multi-element least square HDMR is proposed to treat stochastic models. • Random domain is adaptively decomposed into some subdomains to obtain adaptive multi-element HDMR. • Least-square reduced HDMR is proposed to enhance computation efficiency and approximation accuracy in certain conditions. • Integrating MsFEM and multi-element least square HDMR can significantly reduce computation complexity.
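
    To make the HDMR building block concrete, the sketch below fits a plain first-order least-square HDMR surrogate, f(x) ≈ f0 + Σ_i f_i(x_i), with Legendre bases on a single element of [-1, 1]^d; the adaptive multi-element splitting of the random domain and the multiscale finite element coupling described in the record are not reproduced, and the test function is an invented example.

      import numpy as np
      from numpy.polynomial.legendre import legvander

      def ls_hdmr_first_order(f, d, deg=4, n_samples=400, seed=0):
          """Sketch of a first-order least-square HDMR surrogate on [-1, 1]^d:
              f(x) ~ f0 + sum_i f_i(x_i),
          with each f_i expanded in Legendre polynomials and all coefficients fitted
          jointly by least squares from random samples (no adaptive domain splitting)."""
          rng = np.random.default_rng(seed)
          X = rng.uniform(-1.0, 1.0, size=(n_samples, d))
          y = f(X)
          # Design matrix: one intercept plus deg Legendre terms (orders 1..deg) per input.
          cols = [np.ones((n_samples, 1))]
          cols += [legvander(X[:, i], deg)[:, 1:] for i in range(d)]
          coef, *_ = np.linalg.lstsq(np.hstack(cols), y, rcond=None)

          def surrogate(Xnew):
              blocks = [np.ones((len(Xnew), 1))]
              blocks += [legvander(Xnew[:, i], deg)[:, 1:] for i in range(d)]
              return np.hstack(blocks) @ coef

          return surrogate

      # Test on an additive-plus-weak-interaction function of 4 random inputs.
      f = lambda X: np.sin(np.pi * X[:, 0]) + X[:, 1] ** 2 + 0.1 * X[:, 2] * X[:, 3]
      surrogate = ls_hdmr_first_order(f, d=4)
      Xt = np.random.default_rng(1).uniform(-1, 1, size=(2000, 4))
      print("rms error:", np.sqrt(np.mean((surrogate(Xt) - f(Xt)) ** 2)))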

  18. Sparse time-frequency decomposition based on dictionary adaptation.

    PubMed

    Hou, Thomas Y; Shi, Zuoqiang

    2016-04-13

    In this paper, we propose a time-frequency analysis method to obtain instantaneous frequencies and the corresponding decomposition by solving an optimization problem. In this optimization problem, the basis that is used to decompose the signal is not known a priori. Instead, it is adapted to the signal and is determined as part of the optimization problem. In this sense, this optimization problem can be seen as a dictionary adaptation problem, in which the dictionary is adapted to one signal rather than to a training set as in dictionary learning. This dictionary adaptation problem is solved iteratively using the augmented Lagrangian multiplier (ALM) method. We further accelerate the ALM method in each iteration by using the fast wavelet transform. We apply our method to decompose several signals, including signals with poor scale separation, signals with outliers or polluted by noise, and a real signal. The results show that this method can give accurate recovery of both the instantaneous frequencies and the intrinsic mode functions. © 2016 The Author(s).

  19. Layout compliance for triple patterning lithography: an iterative approach

    NASA Astrophysics Data System (ADS)

    Yu, Bei; Garreton, Gilda; Pan, David Z.

    2014-10-01

    As the semiconductor process further scales down, the industry encounters many lithography-related issues. In the 14nm logic node and beyond, triple patterning lithography (TPL) is one of the most promising techniques for the Metal1 layer and possibly the Via0 layer. As one of the most challenging problems in TPL, layout decomposition has recently received more attention from both industry and academia. Ideally, the decomposer should point out locations in the layout that are not triple-patterning decomposable and therefore require manual intervention by designers. A traditional decomposition flow would be an iterative process, where each iteration consists of an automatic layout decomposition step and a manual layout modification task. However, due to the NP-hardness of triple patterning layout decomposition, automatic full-chip layout decomposition requires long computational time, and design closure issues therefore continue to linger in the traditional flow. Challenged by this issue, we present a novel incremental layout decomposition framework to facilitate accelerated iterative decomposition. In the first iteration, our decomposer not only points out all conflicts, but also provides suggestions to fix them. After the layout modification, instead of solving the full-chip problem from scratch, our decomposer can provide a quick solution for a selected portion of the layout. We believe this framework is efficient in terms of performance and is designer friendly.

  20. Automatic single-image-based rain streaks removal via image decomposition.

    PubMed

    Kang, Li-Wei; Lin, Chia-Wen; Fu, Yu-Hsiang

    2012-04-01

    Rain removal from a video is a challenging problem and has been recently investigated extensively. Nevertheless, the problem of rain removal from a single image was rarely studied in the literature, where no temporal information among successive images can be exploited, making the problem very challenging. In this paper, we propose a single-image-based rain removal framework via properly formulating rain removal as an image decomposition problem based on morphological component analysis. Instead of directly applying a conventional image decomposition technique, the proposed method first decomposes an image into the low- and high-frequency (HF) parts using a bilateral filter. The HF part is then decomposed into a "rain component" and a "nonrain component" by performing dictionary learning and sparse coding. As a result, the rain component can be successfully removed from the image while preserving most original image details. Experimental results demonstrate the efficacy of the proposed algorithm.
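
    The first stage described in the abstract, splitting the image into low- and high-frequency parts before sparse coding, can be sketched as follows. The paper uses an edge-preserving bilateral filter for this step; the snippet below substitutes a plain Gaussian filter purely as a stand-in and stops before the dictionary learning stage that separates the rain and non-rain components.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def split_low_high(image, sigma=3.0):
          """Split an image into low- and high-frequency parts. The paper uses an
          edge-preserving bilateral filter here; a Gaussian filter is a simple
          stand-in. The HF part is what would then be decomposed into rain and
          non-rain components via dictionary learning and sparse coding."""
          low = gaussian_filter(image.astype(float), sigma=sigma)
          return low, image.astype(float) - low

      # Synthetic example: a smooth ramp plus thin diagonal "rain" streaks.
      img = np.tile(np.linspace(0.0, 1.0, 128), (128, 1))
      rows = np.arange(100)
      for k in range(0, 128, 16):
          img[(rows + k) % 128, rows] += 0.5        # streak energy ends up in the HF part
      low, high = split_low_high(img)
      print(np.allclose(low + high, img))           # the split is exactly invertible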

  1. H∞ filtering for discrete-time systems subject to stochastic missing measurements: a decomposition approach

    NASA Astrophysics Data System (ADS)

    Gu, Zhou; Fei, Shumin; Yue, Dong; Tian, Engang

    2014-07-01

    This paper deals with the problem of H∞ filtering for discrete-time systems with stochastic missing measurements. A new missing measurement model is developed by decomposing the interval of the missing rate into several segments. The probability of the missing rate in each subsegment is governed by its corresponding random variables. We aim to design a linear full-order filter such that the estimation error converges to zero exponentially in the mean square with less conservatism, while the disturbance rejection attenuation is constrained to a given level by means of an H∞ performance index. Based on Lyapunov theory, the reliable filter parameters are characterised in terms of the feasibility of a set of linear matrix inequalities. Finally, a numerical example is provided to demonstrate the effectiveness and applicability of the proposed design approach.

  2. Decision-problem state analysis methodology

    NASA Technical Reports Server (NTRS)

    Dieterly, D. L.

    1980-01-01

    A methodology for analyzing a decision-problem state is presented. The methodology is based on the analysis of an incident in terms of the set of decision-problem conditions encountered. By decomposing the events that preceded an unwanted outcome, such as an accident, into the set of decision-problem conditions that were resolved, a more comprehensive understanding is possible. Not all human-error accidents are caused by faulty decision-problem resolutions, but this appears to be one of the major categories of accidents cited in the literature. A three-phase methodology is presented which accommodates a wide spectrum of events. It allows for a systems content analysis of the available data to establish: (1) the resolutions made, (2) alternatives not considered, (3) resolutions missed, and (4) possible conditions not considered. The product is a map of the decision-problem conditions that were encountered as well as a projected, assumed set of conditions that should have been considered. The application of this methodology introduces a systematic approach to decomposing the events that transpired prior to the accident. The initial emphasis is on decision and problem resolution. The technique allows for a standardized method of decomposing an accident into a scenario which may be used for review or for the development of a training simulation.

  3. Hidden Statistics Approach to Quantum Simulations

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2010-01-01

    Recent advances in quantum information theory have inspired an explosion of interest in new quantum algorithms for solving hard computational (quantum and non-quantum) problems. The basic principle of quantum computation is that quantum properties can be used to represent structured data, and that quantum mechanisms can be devised and built to perform operations on these data. Three basic non-classical properties of quantum mechanics (superposition, entanglement, and direct-product decomposability) were the main reasons for optimism about the capabilities of quantum computers, which promised simultaneous processing of large massifs of highly correlated data. Unfortunately, these advantages of quantum mechanics came with a high price. One major problem is keeping the components of the computer in a coherent state, as the slightest interaction with the external world would cause the system to decohere. That is why the hardware implementation of a quantum computer is still unsolved. The basic idea of this work is to create a new kind of dynamical system that would preserve the three main properties of quantum physics (superposition, entanglement, and direct-product decomposability) while allowing one to measure its state variables using classical methods. In other words, such a system would reinforce the advantages and minimize the limitations of both the quantum and classical aspects. Based upon a concept of hidden statistics, a new kind of dynamical system for simulation of the Schroedinger equation is proposed. The system represents a modified Madelung version of the Schroedinger equation. It preserves superposition, entanglement, and direct-product decomposability while allowing one to measure its state variables using classical methods. Such an optimal combination of characteristics is a perfect match for simulating quantum systems. The model includes a transitional component of the quantum potential (which has been overlooked in previous treatments of the Madelung equation). The role of the transitional potential is to provide a jump from a deterministic state to a random state with a prescribed probability density. This jump is triggered by blowup instability due to violation of the Lipschitz condition generated by the quantum potential. As a result, the dynamics attains quantum properties on a classical scale. The model can be implemented physically as an analog VLSI-based (very-large-scale integration-based) computer, or numerically on a digital computer. This work opens a way of developing fundamentally new algorithms for quantum simulations of exponentially complex problems that expand NASA capabilities in conducting space activities. It has been illustrated that the complexity of simulations of particle interaction can be reduced from exponential to polynomial.

  4. Randomized subspace-based robust principal component analysis for hyperspectral anomaly detection

    NASA Astrophysics Data System (ADS)

    Sun, Weiwei; Yang, Gang; Li, Jialin; Zhang, Dianfa

    2018-01-01

    A randomized subspace-based robust principal component analysis (RSRPCA) method for anomaly detection in hyperspectral imagery (HSI) is proposed. The RSRPCA combines the advantages of a randomized column subspace and robust principal component analysis (RPCA). It assumes that the background has low-rank properties, and that the anomalies are sparse and do not lie in the column subspace of the background. First, RSRPCA implements random sampling to sketch the original HSI dataset from columns and to construct a randomized column subspace of the background. Structured random projections are also adopted to sketch the HSI dataset from rows. Sketching from columns and rows greatly reduces the computational requirements of RSRPCA. Second, RSRPCA adopts the columnwise RPCA (CWRPCA) to eliminate the negative effects of sampled anomaly pixels, which purifies the previous randomized column subspace by removing sampled anomaly columns. The CWRPCA decomposes the submatrix of the HSI data into a low-rank matrix (i.e., background component), a noisy matrix (i.e., noise component), and a sparse anomaly matrix (i.e., anomaly component) with only a small proportion of nonzero columns. The algorithm of the inexact augmented Lagrange multiplier is utilized to optimize the CWRPCA problem and estimate the sparse matrix. Nonzero columns of the sparse anomaly matrix point to sampled anomaly columns in the submatrix. Third, all the pixels are projected onto the complemental subspace of the purified randomized column subspace of the background, and the anomaly pixels in the original HSI data are finally exactly located. Several experiments on three real hyperspectral images are carefully designed to investigate the detection performance of RSRPCA, and the results are compared with four state-of-the-art methods. Experimental results show that the proposed RSRPCA outperforms the four comparison methods both in detection performance and in computational time.
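
    The core geometric step, scoring pixels by how far they fall outside a randomly sampled column subspace of the background, can be sketched as below. The sketch omits the RPCA-based purification of the sampled columns and the structured row projections described in the record, and the synthetic data are invented for illustration.

      import numpy as np

      def randomized_subspace_anomaly_scores(X, n_cols=30, rank=3, seed=0):
          """Sample columns (pixels) of the bands-by-pixels matrix X, build a low-rank
          background basis from them, and score every pixel by its residual outside
          that subspace. The RPCA-based purification of the sampled columns and the
          structured row projections described in the paper are omitted."""
          rng = np.random.default_rng(seed)
          cols = rng.choice(X.shape[1], size=n_cols, replace=False)
          U, _, _ = np.linalg.svd(X[:, cols], full_matrices=False)
          B = U[:, :rank]                                # randomized column subspace basis
          residual = X - B @ (B.T @ X)                   # projection onto the complement
          return np.linalg.norm(residual, axis=0)        # large score => likely anomaly

      # Synthetic cube: a rank-3 background plus a few spectrally distinct anomaly pixels.
      rng = np.random.default_rng(1)
      bands, pixels = 60, 2000
      X = rng.normal(size=(bands, 3)) @ rng.normal(size=(3, pixels)) + 0.05 * rng.normal(size=(bands, pixels))
      anomaly_idx = [10, 500, 1500]
      X[:, anomaly_idx] += rng.normal(scale=2.0, size=(bands, len(anomaly_idx)))
      scores = randomized_subspace_anomaly_scores(X)
      print(np.sort(np.argsort(scores)[-3:]))            # typically recovers the injected anomaly indices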

  5. The intelligence of dual simplex method to solve linear fractional fuzzy transportation problem.

    PubMed

    Narayanamoorthy, S; Kalyani, S

    2015-01-01

    An approach is presented to solve a fuzzy transportation problem with a linear fractional fuzzy objective function. In this proposed approach, the fractional fuzzy transportation problem is decomposed into two linear fuzzy transportation problems. The optimal solutions of the two linear fuzzy transportation problems are found by the dual simplex method, and from these the optimal solution of the fractional fuzzy transportation problem is obtained. The proposed method is explained in detail with an example.

  6. Mixed H2/H∞ distributed robust model predictive control for polytopic uncertain systems subject to actuator saturation and missing measurements

    NASA Astrophysics Data System (ADS)

    Song, Yan; Fang, Xiaosheng; Diao, Qingda

    2016-03-01

    In this paper, we discuss the mixed H2/H∞ distributed robust model predictive control problem for polytopic uncertain systems subject to randomly occurring actuator saturation and packet loss. The global system is decomposed into several subsystems, and all the subsystems are connected by a fixed topology network, which defines the packet loss among the subsystems. To better use the information successfully transmitted via the Internet, both the phenomena of actuator saturation and packet loss resulting from the limitation of the communication bandwidth are taken into consideration. A novel distributed controller model is established to account for actuator saturation and packet loss in a unified representation by using two sets of Bernoulli distributed white sequences with known conditional probabilities. With the nonlinear feedback control law represented by the convex hull of a group of linear feedback laws, the distributed controllers for the subsystems are obtained by solving a linear matrix inequality (LMI) optimisation problem. Finally, numerical studies demonstrate the effectiveness of the proposed techniques.

  7. Combined iterative reconstruction and image-domain decomposition for dual energy CT using total-variation regularization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, Xue; Niu, Tianye; Zhu, Lei, E-mail: leizhu@gatech.edu

    2014-05-15

    Purpose: Dual-energy CT (DECT) is being increasingly used for its capability of material decomposition and energy-selective imaging. A generic problem of DECT, however, is that the decomposition process is unstable in the sense that the relative magnitude of decomposed signals is reduced due to signal cancellation while the image noise is accumulating from the two CT images of independent scans. Direct image decomposition, therefore, leads to severe degradation of signal-to-noise ratio on the resultant images. Existing noise suppression techniques are typically implemented in DECT with the procedures of reconstruction and decomposition performed independently, which do not explore the statistical properties of decomposed images during the reconstruction for noise reduction. In this work, the authors propose an iterative approach that combines the reconstruction and the signal decomposition procedures to minimize the DECT image noise without noticeable loss of resolution. Methods: The proposed algorithm is formulated as an optimization problem, which balances the data fidelity and total variation of decomposed images in one framework, and the decomposition step is carried out iteratively together with reconstruction. The noise in the CT images from the proposed algorithm becomes well correlated even though the noise of the raw projections is independent on the two CT scans. Due to this feature, the proposed algorithm avoids noise accumulation during the decomposition process. The authors evaluate the method performance on noise suppression and spatial resolution using phantom studies and compare the algorithm with conventional denoising approaches as well as combined iterative reconstruction methods with different forms of regularization. Results: On the Catphan©600 phantom, the proposed method outperforms the existing denoising methods on preserving spatial resolution at the same level of noise suppression, i.e., a reduction of noise standard deviation by one order of magnitude. This improvement is mainly attributed to the high noise correlation in the CT images reconstructed by the proposed algorithm. Iterative reconstruction using different regularization, including quadratic or q-generalized Gaussian Markov random field regularization, achieves similar noise suppression from high noise correlation. However, the proposed TV regularization obtains a better edge preserving performance. Studies of electron density measurement also show that our method reduces the average estimation error from 9.5% to 7.1%. On the anthropomorphic head phantom, the proposed method suppresses the noise standard deviation of the decomposed images by a factor of ∼14 without blurring the fine structures in the sinus area. Conclusions: The authors propose a practical method for DECT imaging reconstruction, which combines the image reconstruction and material decomposition into one optimization framework. Compared to the existing approaches, our method achieves a superior performance on DECT imaging with respect to decomposition accuracy, noise reduction, and spatial resolution.
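
    The instability the abstract describes for direct image-domain decomposition is easy to reproduce: a per-pixel 2x2 solve with an ill-conditioned mixing matrix amplifies noise. The sketch below demonstrates the effect on synthetic images; the attenuation coefficients in A are invented for illustration, and the paper's joint iterative TV-regularized reconstruction is not implemented.

      import numpy as np

      # Direct image-domain decomposition of dual-energy CT images into two basis
      # materials, solved pixel by pixel as a 2x2 linear system. The noise blow-up of
      # this direct step (A is ill-conditioned) is what the paper's joint iterative
      # reconstruction with total-variation regularization is designed to avoid. The
      # attenuation values in A are made-up numbers for illustration only.
      A = np.array([[0.28, 0.48],      # material 1 / material 2 attenuation at low energy
                    [0.20, 0.30]])     # ... at high energy

      rng = np.random.default_rng(0)
      true1 = np.zeros((64, 64)); true1[16:48, 16:48] = 1.0     # material-1 insert
      true2 = np.full((64, 64), 0.5)                            # uniform material-2 background
      mu_low  = A[0, 0] * true1 + A[0, 1] * true2 + 0.002 * rng.normal(size=(64, 64))
      mu_high = A[1, 0] * true1 + A[1, 1] * true2 + 0.002 * rng.normal(size=(64, 64))

      pixels = np.stack([mu_low.ravel(), mu_high.ravel()])      # shape (2, n_pixels)
      x1, x2 = np.linalg.solve(A, pixels).reshape(2, 64, 64)    # per-pixel 2x2 solve
      print("condition number of A:", round(float(np.linalg.cond(A)), 1))
      print("material-2 noise std:", round(float(x2.std()), 4), "(input image noise was 0.002)")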

  8. PGA/MOEAD: a preference-guided evolutionary algorithm for multi-objective decision-making problems with interval-valued fuzzy preferences

    NASA Astrophysics Data System (ADS)

    Luo, Bin; Lin, Lin; Zhong, ShiSheng

    2018-02-01

    In this research, we propose a preference-guided optimisation algorithm for multi-criteria decision-making (MCDM) problems with interval-valued fuzzy preferences. The interval-valued fuzzy preferences are first decomposed into a series of precise and evenly distributed preference-vectors (reference directions) for the objectives to be optimised, on the basis of a uniform design strategy. Then the preference information is further incorporated into the preference-vectors based on the boundary intersection approach; meanwhile, the MCDM problem with interval-valued fuzzy preferences is reformulated into a series of single-objective optimisation sub-problems (each sub-problem corresponds to a decomposed preference-vector). Finally, a preference-guided optimisation algorithm based on MOEA/D (multi-objective evolutionary algorithm based on decomposition) is proposed to solve the sub-problems in a single run. The proposed algorithm incorporates the preference-vectors within the optimisation process to guide the search towards a more promising subset of the efficient solutions matching the interval-valued fuzzy preferences. Numerous test instances and an engineering application are employed to validate the performance of the proposed algorithm, and the results demonstrate its effectiveness and feasibility.
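
    The decomposition machinery that MOEA/D builds on can be illustrated with a small sketch: a set of evenly distributed weight vectors turns a bi-objective problem into single-objective subproblems through the Tchebycheff scalarising function. The toy problem below is invented, and the preference-guided restriction of the reference directions to an interval-valued preference region, which is the contribution of this record, is not implemented.

      import numpy as np

      def tchebycheff(f, weight, z_star):
          """Tchebycheff scalarising function used in decomposition-based MOEA/D: each
          weight vector defines one single-objective subproblem min max_i w_i |f_i - z*_i|."""
          return np.max(weight * np.abs(f - z_star))

      # Evenly distributed weight vectors for a 2-objective problem; the preference-guided
      # variant in the paper would instead concentrate these reference directions inside
      # the decision maker's interval-valued preference region.
      n_sub = 11
      w1 = np.linspace(0.0, 1.0, n_sub)
      weights = np.stack([w1, 1.0 - w1], axis=1)

      # Toy bi-objective problem on x in [0, 1]:  f1 = x,  f2 = (1 - x)^2.
      xs = np.linspace(0.0, 1.0, 1001)
      F = np.stack([xs, (1.0 - xs) ** 2], axis=1)
      z_star = F.min(axis=0)                              # ideal point
      for w in weights:
          best = xs[np.argmin([tchebycheff(f, w, z_star) for f in F])]
          print(np.round(w, 2), "->", round(float(best), 3))  # one Pareto-optimal solution per subproblem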

  9. A Multiple Period Problem in Distributed Energy Management Systems Considering CO2 Emissions

    NASA Astrophysics Data System (ADS)

    Muroda, Yuki; Miyamoto, Toshiyuki; Mori, Kazuyuki; Kitamura, Shoichi; Yamamoto, Takaya

    Consider a special district (group) composed of multiple companies (agents), where each agent must meet an energy demand and has a CO2 emission allowance imposed on it. A distributed energy management system (DEMS) optimizes the energy consumption of the group through energy trading within the group. In this paper, we extended the energy distribution decision and optimal planning problem in DEMSs from a single-period problem to a multiple-period one. The extension enabled us to consider more realistic constraints such as demand patterns, start-up costs, and minimum running/outage times of equipment. First, we extended the market-oriented programming (MOP) method for deciding energy distribution to the multiple-period problem. The bidding strategy of each agent is formulated as a 0-1 mixed non-linear programming problem. Second, we proposed decomposing the problem into a set of single-period problems in order to solve it faster. To decompose the problem, we proposed a CO2 emission allowance distribution method, called the EP method. Computational experiments confirmed that the proposed method produces solutions whose group costs are close to lower-bound group costs. In addition, we verified that the EP method reduces computational time without degrading solution quality.

  10. Composting oily sludges: Characterizing microflora using randomly amplified polymorphic DNA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Persson, A.; Quednau, M.; Ahrne, S.

    1995-12-31

    Laboratory-scale composts in which oily sludge was composted under mesophilic conditions with amendments such as peat, bark, and fresh or decomposed horse manure, were studied with respect to basic parameters such as oil degradation, respirometry, and bacterial numbers. Further, an attempt was made to characterize a part of the bacterial flora using randomly amplified polymorphic DNA (RAPD). The compost based on decomposed horse manure showed the greatest reduction of oil (85%). Comparison with a killed control indicated that microbial degradation actually had occurred. However, a substantial part of the oil was stabilized rather than totally broken down. Volatiles, on the contrary, accounted for a rather small percentage (5%) of the observed reduction. RAPD indicated that a selection had taken place and that the dominating microbial flora during the active degradation of oil were not the same as the ones dominating the different basic materials. The stabilized compost, on the other hand, had bacterial flora with similarities to the ones found in peat and bark.

  11. Reinforcement Learning for Weakly-Coupled MDPs and an Application to Planetary Rover Control

    NASA Technical Reports Server (NTRS)

    Bernstein, Daniel S.; Zilberstein, Shlomo

    2003-01-01

    Weakly-coupled Markov decision processes can be decomposed into subprocesses that interact only through a small set of bottleneck states. We study a hierarchical reinforcement learning algorithm designed to take advantage of this particular type of decomposability. To test our algorithm, we use a decision-making problem faced by autonomous planetary rovers. In this problem, a Mars rover must decide which activities to perform and when to traverse between science sites in order to make the best use of its limited resources. In our experiments, the hierarchical algorithm performs better than Q-learning in the early stages of learning, but unlike Q-learning it converges to a suboptimal policy. This suggests that it may be advantageous to use the hierarchical algorithm when training time is limited.

  12. The Intelligence of Dual Simplex Method to Solve Linear Fractional Fuzzy Transportation Problem

    PubMed Central

    Narayanamoorthy, S.; Kalyani, S.

    2015-01-01

    An approach is presented to solve a fuzzy transportation problem with a linear fractional fuzzy objective function. In this proposed approach, the fractional fuzzy transportation problem is decomposed into two linear fuzzy transportation problems. The optimal solutions of the two linear fuzzy transportation problems are found by the dual simplex method, and from these the optimal solution of the fractional fuzzy transportation problem is obtained. The proposed method is explained in detail with an example. PMID:25810713

  13. Evolution in fluctuating environments: decomposing selection into additive components of the Robertson-Price equation.

    PubMed

    Engen, Steinar; Saether, Bernt-Erik

    2014-03-01

    We analyze the stochastic components of the Robertson-Price equation for the evolution of quantitative characters that enables decomposition of the selection differential into components due to demographic and environmental stochasticity. We show how these two types of stochasticity affect the evolution of multivariate quantitative characters by defining demographic and environmental variances as components of individual fitness. The exact covariance formula for selection is decomposed into three components, the deterministic mean value, as well as stochastic demographic and environmental components. We show that demographic and environmental stochasticity generate random genetic drift and fluctuating selection, respectively. This provides a common theoretical framework for linking ecological and evolutionary processes. Demographic stochasticity can cause random variation in selection differentials independent of fluctuating selection caused by environmental variation. We use this model of selection to illustrate that the effect on the expected selection differential of random variation in individual fitness is dependent on population size, and that the strength of fluctuating selection is affected by how environmental variation affects the covariance in Malthusian fitness between individuals with different phenotypes. Thus, our approach enables us to partition out the effects of fluctuating selection from the effects of selection due to random variation in individual fitness caused by demographic stochasticity. © 2013 The Author(s). Evolution © 2013 The Society for the Study of Evolution.

  14. Tracking Time Evolution of Collective Attention Clusters in Twitter: Time Evolving Nonnegative Matrix Factorisation.

    PubMed

    Saito, Shota; Hirata, Yoshito; Sasahara, Kazutoshi; Suzuki, Hideyuki

    2015-01-01

    Micro-blogging services, such as Twitter, offer opportunities to analyse user behaviour. Discovering and distinguishing behavioural patterns in micro-blogging services is valuable. However, it is difficult and challenging to distinguish users, and to track the temporal development of collective attention within distinct user groups in Twitter. In this paper, we formulate this problem as tracking matrices decomposed by Nonnegative Matrix Factorisation for time-sequential matrix data, and propose a novel extension of Nonnegative Matrix Factorisation, which we refer to as Time Evolving Nonnegative Matrix Factorisation (TENMF). In our method, we describe users and words posted in some time interval by a matrix, and use several matrices as time-sequential data. Subsequently, we apply Time Evolving Nonnegative Matrix Factorisation to these time-sequential matrices. TENMF can decompose time-sequential matrices and track the connections among the decomposed matrices, whereas previous NMF decomposes a matrix into two lower-dimensional matrices arbitrarily, which might lose the time-sequential connection. Our proposed method performs adequately well on artificial data. Moreover, we present several results and insights from experiments using real data from Twitter.
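
    For readers unfamiliar with the underlying factorisation, the sketch below runs standard multiplicative-update NMF on a sequence of matrices and warm-starts each slice from the previous factors so that the decomposed factors stay aligned over time. This is a generic stand-in for the idea of tracking decompositions across time, not a reimplementation of the TENMF update in the paper, and the synthetic slices are invented.

      import numpy as np

      def nmf(V, rank, W0=None, H0=None, n_iter=300, eps=1e-9, seed=0):
          """Standard multiplicative-update NMF, V ~ W H, optionally warm-started."""
          rng = np.random.default_rng(seed)
          n, m = V.shape
          W = rng.uniform(size=(n, rank)) if W0 is None else W0.copy()
          H = rng.uniform(size=(rank, m)) if H0 is None else H0.copy()
          for _ in range(n_iter):
              H *= (W.T @ V) / (W.T @ W @ H + eps)
              W *= (V @ H.T) / (W @ H @ H.T + eps)
          return W, H

      # Time-sequential user-by-word matrices (synthetic). Warm-starting each slice
      # from the previous factors is a simple way of keeping the decomposed factors
      # aligned over time; it stands in for, but is not, the TENMF update in the paper.
      rng = np.random.default_rng(1)
      slices = [np.abs(rng.normal(size=(50, 30))) + 0.1 * t for t in range(3)]
      W = H = None
      for t, V in enumerate(slices):
          W, H = nmf(V, rank=5, W0=W, H0=H)
          print("slice", t, "reconstruction error:", round(float(np.linalg.norm(V - W @ H)), 3))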

  15. Unsupervised Metric Fusion Over Multiview Data by Graph Random Walk-Based Cross-View Diffusion.

    PubMed

    Wang, Yang; Zhang, Wenjie; Wu, Lin; Lin, Xuemin; Zhao, Xiang

    2017-01-01

    Learning an ideal metric is crucial to many tasks in computer vision. Diverse feature representations may combat this problem from different aspects, as visual data objects described by multiple features can be decomposed into multiple views, which often provide complementary information. In this paper, we propose a cross-view fusion algorithm that leads to a similarity metric for multiview data by systematically fusing multiple similarity measures. Unlike existing paradigms, we focus on learning a distance measure by exploiting a graph structure of data samples, where an input similarity matrix can be improved through a propagation of graph random walk. In particular, we construct multiple graphs, with each one corresponding to an individual view, and a cross-view fusion approach based on graph random walk is presented to derive an optimal distance measure by fusing multiple metrics. Our method is scalable to a large amount of data by enforcing sparsity through an anchor graph representation. To adaptively control the effects of different views, we dynamically learn view-specific coefficients, which are leveraged into the graph random walk to balance the multiple views. However, such a strategy may lead to an over-smooth similarity metric where affinities between dissimilar samples are enlarged by excessively conducting cross-view fusion. Thus, we devise a heuristic approach to controlling the number of iterations in the fusion process in order to avoid over-smoothness. Extensive experiments conducted on real-world data sets validate the effectiveness and efficiency of our approach.

  16. Stochastic uncertainty analysis for unconfined flow systems

    USGS Publications Warehouse

    Liu, Gaisheng; Zhang, Dongxiao; Lu, Zhiming

    2006-01-01

    A new stochastic approach proposed by Zhang and Lu (2004), called the Karhunen-Loeve decomposition-based moment equation (KLME), has been extended to solving nonlinear, unconfined flow problems in randomly heterogeneous aquifers. This approach is based on an innovative combination of Karhunen-Loeve decomposition, polynomial expansion, and perturbation methods. The random log-transformed hydraulic conductivity field (ln KS) is first expanded into a series in terms of orthogonal Gaussian standard random variables, with the coefficients obtained from the eigenvalues and eigenfunctions of the covariance function of ln KS. Next, the head h is decomposed as a perturbation expansion series h = Σ_m h^(m), where h^(m) represents the mth-order head term with respect to the standard deviation of ln KS. Then h^(m) is further expanded into a polynomial series of m products of orthogonal Gaussian standard random variables whose coefficients h^(m)_{i1,i2,...,im} are deterministic and solved sequentially from low to high expansion orders using MODFLOW-2000. Finally, the statistics of head and flux are computed using simple algebraic operations on the h^(m)_{i1,i2,...,im}. A series of numerical test results in 2-D and 3-D unconfined flow systems indicated that the KLME approach is effective in estimating the mean and (co)variance of both heads and fluxes and requires much less computational effort as compared to the traditional Monte Carlo simulation technique.
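
    The first building block of the KLME approach, the Karhunen-Loeve expansion of the log-conductivity field, can be sketched numerically as below for a 1-D field with an assumed exponential covariance; the grid size, variance and correlation length are arbitrary illustration values, and the perturbation expansion of the head that follows in the paper is not reproduced.

      import numpy as np

      # Discrete Karhunen-Loeve expansion of a 1-D Gaussian log-conductivity field ln K
      # with an assumed exponential covariance; grid size, variance and correlation
      # length are arbitrary illustration values. The subsequent perturbation expansion
      # of the head (solved with MODFLOW-2000 in the paper) is not shown.
      n, var, corr_len = 200, 1.0, 0.2
      x = np.linspace(0.0, 1.0, n)
      C = var * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)   # covariance matrix
      vals, vecs = np.linalg.eigh(C)
      order = np.argsort(vals)[::-1]
      vals, vecs = vals[order], vecs[:, order]

      n_kl = 20                                     # keep the dominant KL modes
      print("variance captured:", round(float(vals[:n_kl].sum() / vals.sum()), 3))

      # One realisation of ln K from independent standard normal variables xi_i.
      xi = np.random.default_rng(0).standard_normal(n_kl)
      lnK = vecs[:, :n_kl] @ (np.sqrt(vals[:n_kl]) * xi)
      print(lnK.shape, round(float(lnK.std()), 3))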

  17. Income Transfers and Assets of the Poor. Revised. Discussion Paper.

    ERIC Educational Resources Information Center

    Ziliak, James P.

    Contrary to the predictions of the standard life-cycle model, many low lifetime-income households accumulate little wealth relative to their incomes compared to households with high lifetime income. This paper uses data from the Panel Study of Income Dynamics and a correlated random-effects generalized method of moments estimator to decompose the…

  18. Program Helps Decompose Complicated Design Problems

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.

    1993-01-01

    Time saved by intelligent decomposition into smaller, interrelated problems. DeMAID is knowledge-based software system for ordering sequence of modules and identifying possible multilevel structure for design problem. Displays modules in N x N matrix format. Though it requires investment of time to generate and refine list of modules for input, it saves considerable amount of money and time in total design process, particularly for new design problems in which ordering of modules has not been defined. Program also implemented to examine assembly-line process or ordering of tasks and milestones.

  19. Domain decomposition in time for PDE-constrained optimization

    DOE PAGES

    Barker, Andrew T.; Stoll, Martin

    2015-08-28

    Here, PDE-constrained optimization problems have a wide range of applications, but they lead to very large and ill-conditioned linear systems, especially if the problems are time dependent. In this paper we outline an approach for dealing with such problems by decomposing them in time and applying an additive Schwarz preconditioner in time, so that we can take advantage of parallel computers to deal with the very large linear systems. We then illustrate the performance of our method on a variety of problems.

  20. Localized motion in random matrix decomposition of complex financial systems

    NASA Astrophysics Data System (ADS)

    Jiang, Xiong-Fei; Zheng, Bo; Ren, Fei; Qiu, Tian

    2017-04-01

    With the random matrix theory, we decompose the multi-dimensional time series of complex financial systems into a set of orthogonal eigenmode functions, which are classified into the market mode, sector mode, and random mode. In particular, the localized motion generated by the business sectors plays an important role in financial systems. Both the business sectors and their impact on the stock market are identified from the localized motion. We clarify that the localized motion induces different characteristics of the time correlations for the stock-market index and individual stocks. With a variation of a two-factor model, we reproduce the return-volatility correlations of the eigenmodes.
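
    The eigenmode decomposition described here follows a standard random-matrix recipe, sketched below on an invented one-factor-plus-sectors return model: eigenvalues of the correlation matrix above the Marchenko-Pastur edge are treated as market or sector modes, while the bulk is treated as the random mode.

      import numpy as np

      # Random-matrix decomposition of a stock return correlation matrix: eigenvalues
      # above the Marchenko-Pastur edge correspond to market / sector modes, the bulk
      # to the random mode. The synthetic returns below use a hypothetical one-factor
      # market plus two sector factors.
      rng = np.random.default_rng(0)
      N, T = 100, 1000                              # stocks, time points
      market = rng.standard_normal(T)
      sectors = rng.standard_normal((2, T))
      membership = rng.integers(0, 2, size=N)
      returns = 0.4 * market + 0.3 * sectors[membership] + rng.standard_normal((N, T))

      R = np.corrcoef(returns)                      # N x N correlation matrix
      eigvals, eigvecs = np.linalg.eigh(R)
      lambda_plus = (1 + np.sqrt(N / T)) ** 2       # Marchenko-Pastur upper edge
      print("largest eigenvalues:", np.round(eigvals[-4:], 2))
      print("Marchenko-Pastur edge:", round(float(lambda_plus), 2))
      print("modes above the edge:", int(np.sum(eigvals > lambda_plus)))   # market + sector modes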

  1. Low order models for uncertainty quantification in acoustic propagation problems

    NASA Astrophysics Data System (ADS)

    Millet, Christophe

    2016-11-01

    Long-range sound propagation problems are characterized by both a large number of length scales and a large number of normal modes. In the atmosphere, these modes are confined within waveguides, causing the sound to propagate through multiple paths to the receiver. For uncertain atmospheres, the modes are described as random variables. Concise mathematical models and analysis reveal fundamental limitations of classical projection techniques, due to different manifestations of the fact that modes carrying small variance can have important effects on the large-variance modes. In the present study, we propose a systematic strategy for obtaining statistically accurate low-order models. The normal modes are sorted in decreasing order of their Sobol indices using asymptotic expansions, and the relevant modes are extracted using a modified iterative Krylov-based method. The statistics of acoustic signals are computed by decomposing the original pulse into a truncated sum of modal pulses that can be described by a stationary phase method. As the low-order acoustic model preserves the overall structure of waveforms under perturbations of the atmosphere, it can be applied to uncertainty quantification. The result of this study is a new algorithm which applies to the entire phase space of acoustic fields.

  2. Modeling for Ultrasonic Health Monitoring of Foams with Embedded Sensors

    NASA Technical Reports Server (NTRS)

    Wang, L.; Rokhlin, S. I.

    2005-01-01

    In this report analytical and numerical methods are proposed to estimate the effective elastic properties of regular and random open-cell foams. The methods are based on the principle of minimum energy and on structural beam models. The analytical solutions are obtained using symbolic processing software. The microstructure of the random foam is simulated using Voronoi tessellation together with a rate-dependent random close-packing algorithm. The statistics of the geometrical properties of random foams corresponding to different packing fractions have been studied. The effects of the packing fraction on elastic properties of the foams have been investigated by decomposing the compliance into bending and axial compliance components. It is shown that the bending compliance increases and the axial compliance decreases when the packing fraction increases. Keywords: Foam; Elastic properties; Finite element; Randomness

  3. Optimization by nonhierarchical asynchronous decomposition

    NASA Technical Reports Server (NTRS)

    Shankar, Jayashree; Ribbens, Calvin J.; Haftka, Raphael T.; Watson, Layne T.

    1992-01-01

    Large scale optimization problems are tractable only if they are somehow decomposed. Hierarchical decompositions are inappropriate for some types of problems and do not parallelize well. Sobieszczanski-Sobieski has proposed a nonhierarchical decomposition strategy for nonlinear constrained optimization that is naturally parallel. Despite some successes on engineering problems, the algorithm as originally proposed fails on simple two dimensional quadratic programs. The algorithm is carefully analyzed for quadratic programs, and a number of modifications are suggested to improve its robustness.

  4. Fully Decomposable Split Graphs

    NASA Astrophysics Data System (ADS)

    Broersma, Hajo; Kratsch, Dieter; Woeginger, Gerhard J.

    We discuss various questions around partitioning a split graph into connected parts. Our main result is a polynomial time algorithm that decides whether a given split graph is fully decomposable, i.e., whether it can be partitioned into connected parts of order α1, α2, ..., αk for every α1, α2, ..., αk summing up to the order of the graph. In contrast, we show that the decision problem whether a given split graph can be partitioned into connected parts of order α1, α2, ..., αk for a given partition α1, α2, ..., αk of the order of the graph, is NP-hard.

  5. Program Helps Decompose Complex Design Systems

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Hall, Laura E.

    1994-01-01

    DeMAID (A Design Manager's Aid for Intelligent Decomposition) computer program is knowledge-based software system for ordering sequence of modules and identifying possible multilevel structure for design problem. Groups modular subsystems on basis of interactions among them. Saves considerable money and time in total design process, particularly in new design problem in which order of modules has not been defined. Available in two machine versions: Macintosh and Sun.

  6. Fast, Nonlinear, Fully Probabilistic Inversion of Large Geophysical Problems

    NASA Astrophysics Data System (ADS)

    Curtis, A.; Shahraeeni, M.; Trampert, J.; Meier, U.; Cho, G.

    2010-12-01

    Almost all Geophysical inverse problems are in reality nonlinear. Fully nonlinear inversion including non-approximated physics, and solving for probability distribution functions (pdf’s) that describe the solution uncertainty, generally requires sampling-based Monte-Carlo style methods that are computationally intractable in most large problems. In order to solve such problems, physical relationships are usually linearized leading to efficiently-solved, (possibly iterated) linear inverse problems. However, it is well known that linearization can lead to erroneous solutions, and in particular to overly optimistic uncertainty estimates. What is needed across many Geophysical disciplines is a method to invert large inverse problems (or potentially tens of thousands of small inverse problems) fully probabilistically and without linearization. This talk shows how very large nonlinear inverse problems can be solved fully probabilistically and incorporating any available prior information using mixture density networks (driven by neural network banks), provided the problem can be decomposed into many small inverse problems. In this talk I will explain the methodology, compare multi-dimensional pdf inversion results to full Monte Carlo solutions, and illustrate the method with two applications: first, inverting surface wave group and phase velocities for a fully-probabilistic global tomography model of the Earth’s crust and mantle, and second inverting industrial 3D seismic data for petrophysical properties throughout and around a subsurface hydrocarbon reservoir. The latter problem is typically decomposed into 10^4 to 10^5 individual inverse problems, each solved fully probabilistically and without linearization. The results in both cases are sufficiently close to the Monte Carlo solution to exhibit realistic uncertainty, multimodality and bias. This provides far greater confidence in the results, and in decisions made on their basis.

  7. From analytical solutions of solute transport equations to multidimensional time-domain random walk (TDRW) algorithms

    NASA Astrophysics Data System (ADS)

    Bodin, Jacques

    2015-03-01

    In this study, new multi-dimensional time-domain random walk (TDRW) algorithms are derived from approximate one-dimensional (1-D), two-dimensional (2-D), and three-dimensional (3-D) analytical solutions of the advection-dispersion equation and from exact 1-D, 2-D, and 3-D analytical solutions of the pure-diffusion equation. These algorithms enable the calculation of both the time required for a particle to travel a specified distance in a homogeneous medium and the mass recovery at the observation point, which may be incomplete due to 2-D or 3-D transverse dispersion or diffusion. The method is extended to heterogeneous media, represented as a piecewise collection of homogeneous media. The particle motion is then decomposed along a series of intermediate checkpoints located on the medium interface boundaries. The accuracy of the multi-dimensional TDRW method is verified against (i) exact analytical solutions of solute transport in homogeneous media and (ii) finite-difference simulations in a synthetic 2-D heterogeneous medium of simple geometry. The results demonstrate that the method is ideally suited to purely diffusive transport and to advection-dispersion transport problems dominated by advection. Conversely, the method is not recommended for highly dispersive transport problems because the accuracy of the advection-dispersion TDRW algorithms degrades rapidly for a low Péclet number, consistent with the accuracy limit of the approximate analytical solutions. The proposed approach provides a unified methodology for deriving multi-dimensional time-domain particle equations and may be applicable to other mathematical transport models, provided that appropriate analytical solutions are available.
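
    As an illustration of the time-domain idea, the sketch below samples first-passage times for the simplest case named above, 1-D pure diffusion, assuming the standard result that the first-passage time of a particle to a plane at distance L is Lévy distributed with scale L^2/(2D); the advection-dispersion and multi-dimensional variants discussed in the paper are not reproduced, and the parameter values are illustrative.

```python
import numpy as np

def tdrw_pure_diffusion_1d(L, D, n_particles=100_000, rng=None):
    """Sample first-passage times of diffusing particles to a distance L.

    For 1-D pure diffusion with coefficient D, the first-passage time to a
    plane at distance L is Levy-distributed with scale c = L**2 / (2*D),
    which can be sampled as c / Z**2 with Z a standard normal variate.
    """
    rng = rng or np.random.default_rng()
    z = rng.standard_normal(n_particles)
    return (L ** 2) / (2.0 * D * z ** 2)

# Median travel time across a 1 m cell with D = 1e-9 m^2/s (illustrative value).
times = tdrw_pure_diffusion_1d(L=1.0, D=1e-9)
print(f"median first-passage time: {np.median(times):.3e} s")
```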

  8. How selection structures species abundance distributions

    PubMed Central

    Magurran, Anne E.; Henderson, Peter A.

    2012-01-01

    How do species divide resources to produce the characteristic species abundance distributions seen in nature? One way to resolve this problem is to examine how the biomass (or capacity) of the spatial guilds that combine to produce an abundance distribution is allocated among species. Here we argue that selection on body size varies across guilds occupying spatially distinct habitats. Using an exceptionally well-characterized estuarine fish community, we show that biomass is concentrated in large bodied species in guilds where habitat structure provides protection from predators, but not in those guilds associated with open habitats and where safety in numbers is a mechanism for reducing predation risk. We further demonstrate that while there is temporal turnover in the abundances and identities of species that comprise these guilds, guild rank order is conserved across our 30-year time series. These results demonstrate that ecological communities are not randomly assembled but can be decomposed into guilds where capacity is predictably allocated among species. PMID:22787020

  9. A stochastic approach to noise modeling for barometric altimeters.

    PubMed

    Sabatini, Angelo Maria; Genovese, Vincenzo

    2013-11-18

    The question of whether barometric altimeters can be used to accurately track human motion is still debated, since their measurement performance is rather poor due to either coarse resolution or drift. As a step toward accurate short-time tracking of changes in height (up to a few minutes), we develop a stochastic model that attempts to capture the statistical properties of barometric altimeter noise. The noise is decomposed into three components with different physical origins and properties: a deterministic time-varying mean, mainly correlated with global environment changes; a first-order Gauss-Markov (GM) random process, mainly accounting for short-term, local environment changes (the effects of these two components are prominent, respectively, for long-time and short-time motion tracking); and an uncorrelated random process, mainly due to wideband electronic noise, including quantization noise. Autoregressive-moving average (ARMA) system identification techniques are used to capture the correlation structure of the piecewise stationary GM component and to estimate its standard deviation, together with the standard deviation of the uncorrelated component. M-point moving average filters, used alone or in combination with whitening filters learnt from the ARMA model parameters, are further tested in a few dynamic motion experiments and discussed for their capability of short-time tracking of small-amplitude, low-frequency motions.
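
    A minimal simulation of the three-component noise model described above helps make the decomposition concrete. The sketch assumes a linear drift as a stand-in for the deterministic time-varying mean and uses illustrative values for the correlation time and standard deviations; none of these numbers come from the paper.

```python
import numpy as np

def simulate_altimeter_noise(n, dt, tau=60.0, sigma_gm=0.3, sigma_wn=0.1, drift=1e-3, rng=None):
    """Simulate height noise as: slow deterministic mean (linear drift here)
    + first-order Gauss-Markov process with correlation time tau
    + uncorrelated wideband (electronic and quantization) noise.
    All parameter values are illustrative placeholders.
    """
    rng = rng or np.random.default_rng()
    t = np.arange(n) * dt

    # 1) deterministic time-varying mean: a slow linear drift as a stand-in
    mean = drift * t

    # 2) first-order Gauss-Markov process: x[k] = phi * x[k-1] + w[k]
    phi = np.exp(-dt / tau)
    q = sigma_gm * np.sqrt(1.0 - phi ** 2)      # keeps the stationary std at sigma_gm
    gm = np.zeros(n)
    w = rng.standard_normal(n)
    for k in range(1, n):
        gm[k] = phi * gm[k - 1] + q * w[k]

    # 3) uncorrelated wideband noise
    white = sigma_wn * rng.standard_normal(n)

    return t, mean + gm + white

t, noise = simulate_altimeter_noise(n=6000, dt=0.1)   # 10 minutes at 10 Hz
print(f"noise std over the record: {noise.std():.3f} m")
```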

  10. A Volunteer Computing Project for Solving Geoacoustic Inversion Problems

    NASA Astrophysics Data System (ADS)

    Zaikin, Oleg; Petrov, Pavel; Posypkin, Mikhail; Bulavintsev, Vadim; Kurochkin, Ilya

    2017-12-01

    A volunteer computing project aimed at solving computationally hard inverse problems in underwater acoustics is described. This project was used to study the possibilities of sound speed profile reconstruction in a shallow-water waveguide using a dispersion-based geoacoustic inversion scheme. The computational capabilities provided by the project allowed us to investigate the accuracy of the inversion for different mesh sizes of the sound speed profile discretization grid. This problem is well suited to volunteer computing because it can be easily decomposed into independent, simpler subproblems.

  11. Teaching Analytical Thinking

    ERIC Educational Resources Information Center

    Behn, Robert D.; Vaupel, James W.

    1976-01-01

    Description of the philosophy and general nature of a course at Drake University that emphasizes basic concepts of analytical thinking, including think, decompose, simplify, specify, and rethink problems. Some sample homework exercises are included. The journal is available from University of California Press, Berkeley, California 94720.…

  12. Decomposing Large Inverse Problems with an Augmented Lagrangian Approach: Application to Joint Inversion of Body-Wave Travel Times and Surface-Wave Dispersion Measurements

    NASA Astrophysics Data System (ADS)

    Reiter, D. T.; Rodi, W. L.

    2015-12-01

    Constructing 3D Earth models through the joint inversion of large geophysical data sets presents numerous theoretical and practical challenges, especially when diverse types of data and model parameters are involved. Among the challenges are the computational complexity associated with large data and model vectors and the need to unify differing model parameterizations, forward modeling methods and regularization schemes within a common inversion framework. The challenges can be addressed in part by decomposing the inverse problem into smaller, simpler inverse problems that can be solved separately, providing one knows how to merge the separate inversion results into an optimal solution of the full problem. We have formulated an approach to the decomposition of large inverse problems based on the augmented Lagrangian technique from optimization theory. As commonly done, we define a solution to the full inverse problem as the Earth model minimizing an objective function motivated, for example, by a Bayesian inference formulation. Our decomposition approach recasts the minimization problem equivalently as the minimization of component objective functions, corresponding to specified data subsets, subject to the constraints that the minimizing models be equal. A standard optimization algorithm solves the resulting constrained minimization problems by alternating between the separate solution of the component problems and the updating of Lagrange multipliers that serve to steer the individual solution models toward a common model solving the full problem. We are applying our inversion method to the reconstruction of the crust and upper-mantle seismic velocity structure across Eurasia. Data for the inversion comprise a large set of P and S body-wave travel times and fundamental and first-higher mode Rayleigh-wave group velocities.
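
    The decomposition idea, minimizing component objectives for separate data subsets subject to the constraint that their models agree, can be sketched with a generic consensus-style augmented Lagrangian (ADMM-like) loop on small least-squares subproblems. This is an illustration of the general technique, not the authors' algorithm or data; all names and sizes below are assumptions.

```python
import numpy as np

def consensus_augmented_lagrangian(subproblems, n, rho=1.0, iters=200):
    """Solve min_m sum_i 0.5*||A_i m - d_i||^2 by splitting it into per-data-subset
    models m_i constrained to agree, via an augmented Lagrangian consensus loop.

    subproblems: list of (A_i, d_i) pairs, one per data subset.
    """
    K = len(subproblems)
    z = np.zeros(n)                      # consensus (full-problem) model
    m = [np.zeros(n) for _ in range(K)]  # component models
    u = [np.zeros(n) for _ in range(K)]  # scaled Lagrange multipliers

    # Pre-assemble the per-subproblem normal equations.
    lhs = [A.T @ A + rho * np.eye(n) for A, _ in subproblems]
    rhs0 = [A.T @ d for A, d in subproblems]

    for _ in range(iters):
        for i in range(K):               # separate solution of the component problems
            m[i] = np.linalg.solve(lhs[i], rhs0[i] + rho * (z - u[i]))
        z = np.mean([m[i] + u[i] for i in range(K)], axis=0)   # merge step
        for i in range(K):               # multiplier update steers m_i toward z
            u[i] += m[i] - z
    return z

# Tiny example: two data subsets observing the same 3-parameter model.
rng = np.random.default_rng(1)
m_true = np.array([1.0, -2.0, 0.5])
mats = [rng.standard_normal((10, 3)) for _ in range(2)]
subsets = [(A, A @ m_true + 0.01 * rng.standard_normal(10)) for A in mats]
print(consensus_augmented_lagrangian(subsets, n=3))
```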

  13. Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Qifeng, E-mail: liaoqf@shanghaitech.edu.cn; Lin, Guang, E-mail: guanglin@purdue.edu

    2016-07-15

    In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.

  14. Autonomous Information Unit: Why Making Data Smart Can also Make Data Secured?

    NASA Technical Reports Server (NTRS)

    Chow, Edward T.

    2006-01-01

    In this paper, we introduce a new fine-grain distributed information protection mechanism which can self-protect, self-discover, self-organize, and self-manage. In our approach, we decompose data into smaller pieces and provide individualized protection. We also provide a policy control mechanism to allow 'smart' access control and context based re-assembly of the decomposed data. By combining smart policy with individually protected data, we are able to provide better protection of sensitive information and achieve more flexible access during emergency conditions. As a result, this new fine-grain protection mechanism can enable us to achieve better solutions for problems such as distributed information protection and identity theft.

  15. Bi-Level Integrated System Synthesis (BLISS) for Concurrent and Distributed Processing

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Altus, Troy D.; Phillips, Matthew; Sandusky, Robert

    2002-01-01

    The paper introduces a new version of the Bi-Level Integrated System Synthesis (BLISS) methods intended for optimization of engineering systems conducted by distributed specialty groups working concurrently and using a multiprocessor computing environment. The method decomposes the overall optimization task into subtasks associated with disciplines or subsystems, in which the local design variables are numerous, and a single system-level optimization whose design variables are relatively few. The subtasks are fully autonomous in their inner operations and decision making. Their purpose is to eliminate the local design variables and generate a wide spectrum of feasible designs whose behavior is represented by Response Surfaces to be accessed by the system-level optimization. It is shown that, if the problem is convex, the solution of the decomposed problem is the same as that obtained without decomposition. A simplified example of an aircraft design shows the method working as intended. The paper includes a discussion of the method's merits and demerits and recommendations for further research.

  16. Decomposability and scalability in space-based observatory scheduling

    NASA Technical Reports Server (NTRS)

    Muscettola, Nicola; Smith, Stephen F.

    1992-01-01

    In this paper, we discuss issues of problem and model decomposition within the HSTS scheduling framework. HSTS was developed and originally applied in the context of the Hubble Space Telescope (HST) scheduling problem, motivated by the limitations of the current solution and, more generally, the insufficiency of classical planning and scheduling approaches in this problem context. We first summarize the salient architectural characteristics of HSTS and their relationship to previous scheduling and AI planning research. Then, we describe some key problem decomposition techniques supported by HSTS and underlying our integrated planning and scheduling approach, and we discuss the leverage they provide in solving space-based observatory scheduling problems.

  17. Objective assessment of image quality. IV. Application to adaptive optics

    PubMed Central

    Barrett, Harrison H.; Myers, Kyle J.; Devaney, Nicholas; Dainty, Christopher

    2008-01-01

    The methodology of objective assessment, which defines image quality in terms of the performance of specific observers on specific tasks of interest, is extended to temporal sequences of images with random point spread functions and applied to adaptive imaging in astronomy. The tasks considered include both detection and estimation, and the observers are the optimal linear discriminant (Hotelling observer) and the optimal linear estimator (Wiener). A general theory of first- and second-order spatiotemporal statistics in adaptive optics is developed. It is shown that the covariance matrix can be rigorously decomposed into three terms representing the effect of measurement noise, random point spread function, and random nature of the astronomical scene. Figures of merit are developed, and computational methods are discussed. PMID:17106464

  18. Hidden Statistics of Schroedinger Equation

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2011-01-01

    Work was carried out to determine the mathematical origin of randomness in quantum mechanics and to create a hidden statistics of the Schrödinger equation; i.e., to expose the transitional stochastic process as a "bridge" to the quantum world. The governing equations of the hidden statistics would preserve such properties of quantum physics as superposition, entanglement, and direct-product decomposability while allowing one to measure its state variables using classical methods.

  19. The Processes Involved in Designing Software.

    DTIC Science & Technology

    1980-08-01

    repeats itself at the next level, terminating with a plan whose individual steps can be executed to solve the initial problem. Hayes-Roth and Hayes-Roth...that the original design problem is decomposed into a collection of well-structured subproblems under the control of some type of executive process...given element to refine further, the schema is assumed to execute to completion, developing a solution model for that element and refining it into a

  20. Intercellular Variability in Protein Levels from Stochastic Expression and Noisy Cell Cycle Processes

    PubMed Central

    Soltani, Mohammad; Vargas-Garcia, Cesar A.; Antunes, Duarte; Singh, Abhyudai

    2016-01-01

    Inside individual cells, expression of genes is inherently stochastic and manifests as cell-to-cell variability or noise in protein copy numbers. Since protein half-lives can be comparable to the cell-cycle length, randomness in cell-division times generates additional intercellular variability in protein levels. Moreover, as many mRNA/protein species are expressed at low copy numbers, errors incurred in partitioning of molecules between two daughter cells are significant. We derive analytical formulas for the total noise in protein levels when the cell-cycle duration follows a general class of probability distributions. Using a novel hybrid approach, the total noise is decomposed into components arising from (i) stochastic expression; (ii) partitioning errors at the time of cell division; and (iii) random cell-division events. These formulas reveal that random cell-division times not only generate additional extrinsic noise, but also critically affect the mean protein copy numbers and intrinsic noise components. Counterintuitively, in some parameter regimes, noise in protein levels can decrease as cell-division times become more stochastic. Computations are extended to consider genome duplication, where the transcription rate is increased at a random point in the cell cycle. We systematically investigate how the timing of genome duplication influences different protein noise components. Intriguingly, results show that the noise contribution from stochastic expression is minimized at an optimal genome-duplication time. Our theoretical results motivate new experimental methods for decomposing protein noise levels from synchronized and asynchronized single-cell expression data. Characterizing the contributions of individual noise mechanisms will lead to precise estimates of gene expression parameters and techniques for altering stochasticity to change the phenotype of individual cells. PMID:27536771

  1. A Kullback-Leibler approach for 3D reconstruction of spectral CT data corrupted by Poisson noise

    NASA Astrophysics Data System (ADS)

    Hohweiller, Tom; Ducros, Nicolas; Peyrin, Françoise; Sixou, Bruno

    2017-09-01

    While standard computed tomography (CT) data do not depend on energy, spectral computed tomography (SPCT) acquires energy-resolved data, which allows material decomposition of the object of interest. Decompositions in the projection domain produce a projection mass density (PMD) per material. From the decomposed projections, a tomographic reconstruction creates a 3D material density volume. The decomposition is made possible by minimizing a cost function. The variational approach is preferred since this is an ill-posed non-linear inverse problem. Moreover, noise plays a critical role when decomposing the data. That is why, in this paper, a new data fidelity term is used to take the photonic noise into account. In this work two data fidelity terms were investigated: a weighted least squares (WLS) term, adapted to Gaussian noise, and the Kullback-Leibler distance (KL), adapted to Poisson noise. A regularized Gauss-Newton algorithm minimizes the cost function iteratively. Both methods were used to decompose materials from a numerical phantom of a mouse. Soft tissues and bones are decomposed in the projection domain; then a tomographic reconstruction creates a 3D material density volume for each material. Comparing relative errors, KL is shown to outperform WLS for low photon counts, in 2D and 3D. This new method could be of particular interest when low-dose acquisitions are performed.
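
    The two data fidelity terms named above can be written down directly: a weighted least-squares term suited to Gaussian noise and a Kullback-Leibler (Poisson log-likelihood) term suited to photon-counting noise. The sketch below shows generic forms of these terms on a toy low-count example; it does not reproduce the paper's full regularized Gauss-Newton decomposition, and the weighting choice is an assumption.

```python
import numpy as np

def wls_fidelity(y, f, weights):
    """Weighted least-squares data term, suited to Gaussian noise."""
    return 0.5 * np.sum(weights * (y - f) ** 2)

def kl_fidelity(y, f, eps=1e-12):
    """Kullback-Leibler (Poisson) data term between measured counts y and
    forward-modelled counts f; the y*log(y) part, constant in f, is dropped."""
    f = np.maximum(f, eps)
    return np.sum(f - y * np.log(f))

# Toy photon-count comparison: low counts are where the Poisson-matched KL term matters.
rng = np.random.default_rng(2)
true_counts = np.full(100, 5.0)                  # low-dose regime
measured = rng.poisson(true_counts).astype(float)
print(wls_fidelity(measured, true_counts, weights=1.0 / np.maximum(measured, 1.0)))
print(kl_fidelity(measured, true_counts))
```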

  2. Pauses and Intonational Phrasing: ERP Studies in 5-Month-Old German Infants and Adults

    ERIC Educational Resources Information Center

    Mannel, Claudia; Friederici, Angela D.

    2009-01-01

    In language learning, infants are faced with the challenge of decomposing continuous speech into relevant units, such as syntactic clauses and words. Within the framework of prosodic bootstrapping, behavioral studies suggest infants approach this segmentation problem by relying on prosodic information, especially on acoustically marked…

  3. Decomposability and convex structure of thermal processes

    NASA Astrophysics Data System (ADS)

    Mazurek, Paweł; Horodecki, Michał

    2018-05-01

    We present an example of a thermal process (TP) for a system of d energy levels that cannot be performed without instant access to the whole energy space. This TP is uniquely connected with a transition between certain states of the system that cannot be performed without access to the whole energy space, even when approximate transitions are allowed. Pursuing the question of the decomposability of TPs into convex combinations of compositions of processes acting non-trivially on smaller subspaces, we investigate transitions within the subspace of states diagonal in the energy basis. For three-level systems, we determine the set of extremal points of these operations, as well as the minimal set of operations needed to perform an arbitrary TP, and connect the set of TPs with the thermomajorization criterion. We show that the structure of the set depends on temperature, which is associated with the fact that TPs cannot deterministically increase the extractable work from a state; this conclusion holds for an arbitrary d-level system. We also connect the decomposability problem with the detailed balance symmetry of extremal TPs.

  4. Working Papers in Speech Recognition. IV. The Hearsay II System

    DTIC Science & Technology

    1976-02-01

    implementation of this model (Reddy, Erman, and Neely [73]; Reddy, Erman, Fennell, and Neely [73]; Neely [73]; Erman [74]). This system, which was the... Fennell, Erman, and Reddy [74]). Hearsay II is also based on the Hearsay model: it generalizes and extends many of the concepts which exist in a...difficulty of decomposing large problems for such machines. Erman, Fennell, Lesser, and Reddy [73] describe this problem and outline some early solutions

  5. Multicriteria hierarchical iterative interactive algorithm for organizing operational modes of large heat supply systems

    NASA Astrophysics Data System (ADS)

    Korotkova, T. I.; Popova, V. I.

    2017-11-01

    A generalized mathematical model of decision-making for planning and selecting operational modes that provide the required heat loads in a large heat supply system is considered. The system is multilevel, decomposed into the levels of main and distribution heating networks with intermediate control stages. The effectiveness, reliability and safety of such a complex system are evaluated against several indicators at once, in particular pressure, flow and temperature. This global multicriteria optimization problem with constraints is decomposed into a number of local optimization problems and a coordination problem. An agreed solution of the local problems provides a solution to the global multicriteria decision-making problem for the complex system. The optimal operational mode of the complex heat supply system is chosen on the basis of an iterative coordination process that converges to the coordinated solution of the local optimization tasks. The interactive principle of multicriteria decision-making includes, in particular, periodic adjustments, where necessary, guaranteeing optimal safety, reliability and efficiency of the system as a whole during operation. The required accuracy of the solution, for example the allowed deviation of the internal air temperature from the required value, can also be changed interactively. This allows adjustment activities to be carried out in the best way and the quality of heat supply to consumers to be improved. At the same time, an energy-saving task is solved to determine the minimum required heads at the sources and pumping stations.

  6. The hypergraph regularity method and its applications

    PubMed Central

    Rödl, V.; Nagle, B.; Skokan, J.; Schacht, M.; Kohayakawa, Y.

    2005-01-01

    Szemerédi's regularity lemma asserts that every graph can be decomposed into relatively few random-like subgraphs. This random-like behavior enables one to find and enumerate subgraphs of a given isomorphism type, yielding the so-called counting lemma for graphs. The combined application of these two lemmas is known as the regularity method for graphs and has proved useful in graph theory, combinatorial geometry, combinatorial number theory, and theoretical computer science. Here, we report on recent advances in the regularity method for k-uniform hypergraphs, for arbitrary k ≥ 2. This method, purely combinatorial in nature, gives alternative proofs of density theorems originally due to E. Szemerédi, H. Furstenberg, and Y. Katznelson. Further results in extremal combinatorics also have been obtained with this approach. The two main components of the regularity method for k-uniform hypergraphs, the regularity lemma and the counting lemma, have been obtained recently: Rödl and Skokan (based on earlier work of Frankl and Rödl) generalized Szemerédi's regularity lemma to k-uniform hypergraphs, and Nagle, Rödl, and Schacht succeeded in proving a counting lemma accompanying the Rödl–Skokan hypergraph regularity lemma. The counting lemma is proved by reducing the counting problem to a simpler one previously investigated by Kohayakawa, Rödl, and Skokan. Similar results were obtained independently by W. T. Gowers, following a different approach. PMID:15919821

  7. Segmentation of Large Unstructured Point Clouds Using Octree-Based Region Growing and Conditional Random Fields

    NASA Astrophysics Data System (ADS)

    Bassier, M.; Bonduel, M.; Van Genechten, B.; Vergauwen, M.

    2017-11-01

    Point cloud segmentation is a crucial step in scene understanding and interpretation. The goal is to decompose the initial data into sets of workable clusters with similar properties. Additionally, it is a key aspect in the automated procedure from point cloud data to BIM. Current approaches typically only segment a single type of primitive such as planes or cylinders. Also, current algorithms suffer from oversegmenting the data and are often sensor or scene dependent. In this work, a method is presented to automatically segment large unstructured point clouds of buildings. More specifically, the segmentation is formulated as a graph optimisation problem. First, the data is oversegmented with a greedy octree-based region growing method. The growing is conditioned on the segmentation of planes as well as smooth surfaces. Next, the candidate clusters are represented by a Conditional Random Field after which the most likely configuration of candidate clusters is computed given a set of local and contextual features. The experiments prove that the used method is a fast and reliable framework for unstructured point cloud segmentation. Processing speeds up to 40,000 points per second are recorded for the region growing. Additionally, the recall and precision of the graph clustering is approximately 80%. Overall, nearly 22% of oversegmentation is reduced by clustering the data. These clusters will be classified and used as a basis for the reconstruction of BIM models.

  8. Program Helps Decompose Complex Design Systems

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Hall, Laura E.

    1995-01-01

    DeMAID (Design Manager's Aid for Intelligent Decomposition) computer program is knowledge-based software system for ordering sequence of modules and identifying possible multilevel structure for design problems such as large platforms in outer space. Groups modular subsystems on basis of interactions among them. Saves considerable amount of money and time in total design process, particularly in new design problem in which order of modules has not been defined. Originally written for design problems, also applicable to problems containing modules (processes) that take inputs and generate outputs. Available in three machine versions: Macintosh written in Symantec's Think C 3.01, Sun, and SGI IRIS in C language.

  9. Investigating the Conceptual Variation of Major Physics Textbooks

    NASA Astrophysics Data System (ADS)

    Stewart, John; Campbell, Richard; Clanton, Jessica

    2008-04-01

    The conceptual problem content of the electricity and magnetism chapters of seven major physics textbooks was investigated. The textbooks presented a total of 1600 conceptual electricity and magnetism problems. The solution to each problem was decomposed into its fundamental reasoning steps. These fundamental steps are, then, used to quantify the distribution of conceptual content among the set of topics common to the texts. The variation of the distribution of conceptual coverage within each text is studied. The variation between the major groupings of the textbooks (conceptual, algebra-based, and calculus-based) is also studied. A measure of the conceptual complexity of the problems in each text is presented.

  10. Linear decomposition approach for a class of nonconvex programming problems.

    PubMed

    Shen, Peiping; Wang, Chunfeng

    2017-01-01

    This paper presents a linear decomposition approach for a class of nonconvex programming problems by dividing the input space into polynomially many grids. It shows that under certain assumptions the original problem can be transformed and decomposed into a polynomial number of equivalent linear programming subproblems. By solving a series of linear programming subproblems corresponding to those grid points, we can obtain a near-optimal solution of the original problem. Compared to existing results in the literature, the proposed algorithm does not require the assumptions of quasi-concavity and differentiability of the objective function, and it differs significantly from them, offering an interesting approach to solving the problem with a reduced running time.

  11. Application of lifting wavelet and random forest in compound fault diagnosis of gearbox

    NASA Astrophysics Data System (ADS)

    Chen, Tang; Cui, Yulian; Feng, Fuzhou; Wu, Chunzhi

    2018-03-01

    Because the compound-fault characteristic signals of an armored vehicle gearbox are weak and the fault types are difficult to identify, a fault diagnosis method based on the lifting wavelet and random forest is proposed. First, the method uses the lifting wavelet transform to decompose the original vibration signal into multiple layers and reconstructs the low-frequency and high-frequency components obtained by the decomposition to get multiple component signals. Time-domain feature parameters are then computed for each component signal to form feature vectors, which are input into a random forest pattern recognition classifier to determine the compound fault type. Finally, the method is verified on a variety of compound-fault data from a gearbox fault simulation test platform; the results show that the recognition accuracy of the fault diagnosis method combining the lifting wavelet and the random forest reaches 99.99%.
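
    The pipeline described above (wavelet decomposition, per-component time-domain features, random forest classification) can be sketched generically as below. A standard discrete wavelet transform from PyWavelets stands in for the lifting wavelet, and the signals, labels and feature choices are placeholders rather than the paper's data.

```python
import numpy as np
import pywt                                   # PyWavelets, assumed available
from scipy.stats import kurtosis, skew
from sklearn.ensemble import RandomForestClassifier

def component_signals(signal, wavelet="db4", level=3):
    """Decompose a vibration signal and reconstruct one component per sub-band.
    A standard DWT stands in here for the lifting wavelet used in the paper."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    comps = []
    for i in range(len(coeffs)):
        kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        comps.append(pywt.waverec(kept, wavelet)[: len(signal)])
    return comps

def time_domain_features(signal):
    """RMS, peak, kurtosis and skewness of each reconstructed component."""
    feats = []
    for comp in component_signals(signal):
        rms = np.sqrt(np.mean(comp ** 2))
        feats += [rms, np.max(np.abs(comp)), kurtosis(comp), skew(comp)]
    return feats

# Placeholder vibration records and compound-fault labels.
rng = np.random.default_rng(0)
X_raw = rng.standard_normal((40, 1024))
y = rng.integers(0, 3, size=40)
X = np.array([time_domain_features(s) for s in X_raw])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(f"training accuracy on placeholder data: {clf.score(X, y):.2f}")
```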

  12. Randomized Dynamic Mode Decomposition

    NASA Astrophysics Data System (ADS)

    Erichson, N. Benjamin; Brunton, Steven L.; Kutz, J. Nathan

    2017-11-01

    The dynamic mode decomposition (DMD) is an equation-free, data-driven matrix decomposition that is capable of providing accurate reconstructions of spatio-temporal coherent structures arising in dynamical systems. We present randomized algorithms to compute the near-optimal low-rank dynamic mode decomposition for massive datasets. Randomized algorithms are simple, accurate and able to ease the computational challenges arising with `big data'. Moreover, randomized algorithms are amenable to modern parallel and distributed computing. The idea is to derive a smaller matrix from the high-dimensional input data matrix using randomness as a computational strategy. Then, the dynamic modes and eigenvalues are accurately learned from this smaller representation of the data, whereby the approximation quality can be controlled via oversampling and power iterations. Here, we present randomized DMD algorithms that are categorized by how many passes the algorithm takes through the data. Specifically, the single-pass randomized DMD does not require data to be stored for subsequent passes. Thus, it is possible to approximately decompose massive fluid flows (stored out of core memory, or not stored at all) using single-pass algorithms, which is infeasible with traditional DMD algorithms.
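
    A compact version of the randomized DMD idea, sketch the snapshot matrix with a random projection, run exact DMD on the small projected matrices, then lift the modes back, is shown below. This follows the generic single-sketch recipe with oversampling and power iterations; it is an illustration, not the authors' released implementation.

```python
import numpy as np

def randomized_dmd(X, Y, rank, oversample=10, n_power=2, rng=None):
    """Randomized DMD sketch: X holds snapshots x_1..x_{m-1} as columns,
    Y the time-shifted snapshots x_2..x_m. Returns DMD eigenvalues and modes."""
    rng = rng or np.random.default_rng()
    n, m = X.shape
    k = rank + oversample

    # 1) Randomized range finder on X (with a few power iterations).
    Omega = rng.standard_normal((m, k))
    Z = X @ Omega
    for _ in range(n_power):
        Z = X @ (X.T @ Z)
    Q, _ = np.linalg.qr(Z)                      # n x k orthonormal basis

    # 2) Project the data onto the low-dimensional basis.
    Xs, Ys = Q.T @ X, Q.T @ Y                   # k x m small matrices

    # 3) Standard exact DMD on the small matrices.
    U, s, Vh = np.linalg.svd(Xs, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    Atilde = U.T @ Ys @ Vh.T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(Atilde)

    # 4) Lift the modes back to the full space.
    modes = Q @ (Ys @ Vh.T @ np.diag(1.0 / s) @ W)
    return eigvals, modes

# Toy usage: snapshots of a stable linear system x_{t+1} = A x_t.
rng = np.random.default_rng(3)
A = 0.95 * np.linalg.qr(rng.standard_normal((100, 100)))[0]
snaps = np.empty((100, 60))
snaps[:, 0] = rng.standard_normal(100)
for t in range(1, 60):
    snaps[:, t] = A @ snaps[:, t - 1]
lam, phi = randomized_dmd(snaps[:, :-1], snaps[:, 1:], rank=10)
print(np.sort(np.abs(lam))[-3:])               # eigenvalue magnitudes near 0.95
```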

  13. A structural model decomposition framework for systems health management

    NASA Astrophysics Data System (ADS)

    Roychoudhury, I.; Daigle, M.; Bregon, A.; Pulido, B.

    Systems health management (SHM) is an important set of technologies aimed at increasing system safety and reliability by detecting, isolating, and identifying faults; and predicting when the system reaches end of life (EOL), so that appropriate fault mitigation and recovery actions can be taken. Model-based SHM approaches typically make use of global, monolithic system models for online analysis, which results in a loss of scalability and efficiency for large-scale systems. Improvement in scalability and efficiency can be achieved by decomposing the system model into smaller local submodels and operating on these submodels instead. In this paper, the global system model is analyzed offline and structurally decomposed into local submodels. We define a common model decomposition framework for extracting submodels from the global model. This framework is then used to develop algorithms for solving model decomposition problems for the design of three separate SHM technologies, namely, estimation (which is useful for fault detection and identification), fault isolation, and EOL prediction. We solve these model decomposition problems using a three-tank system as a case study.

  14. A Structural Model Decomposition Framework for Systems Health Management

    NASA Technical Reports Server (NTRS)

    Roychoudhury, Indranil; Daigle, Matthew J.; Bregon, Anibal; Pulido, Belamino

    2013-01-01

    Systems health management (SHM) is an important set of technologies aimed at increasing system safety and reliability by detecting, isolating, and identifying faults; and predicting when the system reaches end of life (EOL), so that appropriate fault mitigation and recovery actions can be taken. Model-based SHM approaches typically make use of global, monolithic system models for online analysis, which results in a loss of scalability and efficiency for large-scale systems. Improvement in scalability and efficiency can be achieved by decomposing the system model into smaller local submodels and operating on these submodels instead. In this paper, the global system model is analyzed offline and structurally decomposed into local submodels. We define a common model decomposition framework for extracting submodels from the global model. This framework is then used to develop algorithms for solving model decomposition problems for the design of three separate SHM technologies, namely, estimation (which is useful for fault detection and identification), fault isolation, and EOL prediction. We solve these model decomposition problems using a three-tank system as a case study.

  15. Better Decomposition Heuristics for the Maximum-Weight Connected Graph Problem Using Betweenness Centrality

    NASA Astrophysics Data System (ADS)

    Yamamoto, Takanori; Bannai, Hideo; Nagasaki, Masao; Miyano, Satoru

    We present new decomposition heuristics for finding the optimal solution for the maximum-weight connected graph problem, which is known to be NP-hard. Previous optimal algorithms for solving the problem decompose the input graph into subgraphs using heuristics based on node degree. We propose new heuristics based on betweenness centrality measures, and show through computational experiments that our new heuristics tend to reduce the number of subgraphs in the decomposition, and therefore could lead to the reduction in computational time for finding the optimal solution. The method is further applied to analysis of biological pathway data.
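
    The heuristic idea, repeatedly removing the vertex of highest betweenness centrality so that the graph breaks into smaller subgraphs, can be illustrated with a short recursive sketch. It is a generic simplification (the paper's method also handles node weights and the bookkeeping needed for the optimal solution), and the size threshold is an assumed parameter.

```python
import networkx as nx

def decompose_by_betweenness(graph, max_size):
    """Recursively split a graph into subgraphs no larger than max_size by
    removing the vertex of highest betweenness centrality."""
    if graph.number_of_nodes() <= max_size:
        return [graph]
    centrality = nx.betweenness_centrality(graph)
    cut_node = max(centrality, key=centrality.get)   # most "between" vertex
    reduced = graph.copy()
    reduced.remove_node(cut_node)
    pieces = []
    for comp in nx.connected_components(reduced):
        pieces += decompose_by_betweenness(graph.subgraph(comp).copy(), max_size)
    return pieces

# Toy usage: a barbell graph splits at its bridge.
g = nx.barbell_graph(10, 2)
parts = decompose_by_betweenness(g, max_size=12)
print([p.number_of_nodes() for p in parts])
```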

  16. Innovating Method of Existing Mechanical Product Based on TRIZ Theory

    NASA Astrophysics Data System (ADS)

    Zhao, Cunyou; Shi, Dongyan; Wu, Han

    The main way of developing products is adaptive design and variant design based on existing products. In this paper, a conceptual design framework and its flow model for innovating products are put forward by combining conceptual design methods with TRIZ theory. A process system model of innovative design is constructed that includes requirement analysis, total function analysis and decomposition, engineering problem analysis, finding solutions to the engineering problems, and preliminary design; this establishes the basis for the innovative design of existing products.

  17. Recovering Long-wavelength Velocity Models using Spectrogram Inversion with Single- and Multi-frequency Components

    NASA Astrophysics Data System (ADS)

    Ha, J.; Chung, W.; Shin, S.

    2015-12-01

    Many waveform inversion algorithms have been proposed in order to construct subsurface velocity structures from seismic data sets. These algorithms have suffered from computational burden, local minima problems, and the lack of low-frequency components. Computational efficiency can be improved by the application of back-propagation techniques and advances in computing hardware. In addition, waveform inversion algorithms for obtaining long-wavelength velocity models could avoid both the local minima problem and the effect of the lack of low-frequency components in seismic data. In this study, we propose spectrogram inversion as a technique for recovering long-wavelength velocity models. In spectrogram inversion, frequency components decomposed from spectrograms of traces, in the observed and calculated data, are utilized to generate traces with reproduced low-frequency components. Moreover, since each decomposed component can reveal different characteristics of a subsurface structure, several frequency components were utilized to analyze the velocity features in the subsurface. We performed the spectrogram inversion using a modified SEG/EAGE salt A-A' line. Numerical results demonstrate that spectrogram inversion can recover the long-wavelength velocity features. However, inversion results varied according to the frequency components utilized. Based on the results of inversion using a decomposed single-frequency component, we noticed that robust inversion results are obtained when a dominant frequency component of the spectrogram is utilized. In addition, detailed information on the recovered long-wavelength velocity models was obtained using a multi-frequency component combined with single-frequency components. Numerical examples indicate that various detailed analyses of long-wavelength velocity models can be carried out utilizing several frequency components.

  18. Dynamic Flow Management Problems in Air Transportation

    NASA Technical Reports Server (NTRS)

    Patterson, Sarah Stock

    1997-01-01

    In 1995, over six hundred thousand licensed pilots flew nearly thirty-five million flights into over eighteen thousand U.S. airports, logging more than 519 billion passenger miles. Since demand for air travel has increased by more than 50% in the last decade while capacity has stagnated, congestion is a problem of undeniable practical significance. In this thesis, we will develop optimization techniques that reduce the impact of congestion on the national airspace. We start by determining the optimal release times for flights into the airspace and the optimal speed adjustment while airborne, taking into account the capacitated airspace. This is called the Air Traffic Flow Management Problem (TFMP). We address the complexity, showing that it is NP-hard. We build an integer programming formulation that is quite strong, as some of the proposed inequalities are facet defining for the convex hull of solutions. For practical problems, the solutions of the LP relaxation of the TFMP are very often integral. In essence, we reduce the problem to efficiently solving large scale linear programming problems. Thus, the computation times are reasonably small for large scale, practical problems involving thousands of flights. Next, we address the problem of determining how to reroute aircraft in the airspace system when faced with dynamically changing weather conditions. This is called the Air Traffic Flow Management Rerouting Problem (TFMRP). We present an integrated mathematical programming approach for the TFMRP, which utilizes several methodologies, in order to minimize delay costs. In order to address the high dimensionality, we present an aggregate model, in which we formulate the TFMRP as a multicommodity, integer, dynamic network flow problem with certain side constraints. Using Lagrangian relaxation, we generate aggregate flows that are decomposed into a collection of flight paths using a randomized rounding heuristic. This collection of paths is used in a packing integer programming formulation, the solution of which generates feasible and near-optimal routes for individual flights. The algorithm, termed the Lagrangian Generation Algorithm, is used to solve practical problems in the southwestern portion of the United States in which the solutions are within 1% of the corresponding lower bounds.

  19. Decomposition Algorithm for Global Reachability on a Time-Varying Graph

    NASA Technical Reports Server (NTRS)

    Kuwata, Yoshiaki

    2010-01-01

    A decomposition algorithm has been developed for global reachability analysis on a space-time grid. By exploiting the upper block-triangular structure, the planning problem is decomposed into smaller subproblems, which is much more scalable than the original approach. Recent studies have proposed the use of a hot-air (Montgolfier) balloon for possible exploration of Titan and Venus because these bodies have thick haze or cloud layers that limit the science return from an orbiter, and the atmospheres would provide enough buoyancy for balloons. One of the important questions that needs to be addressed is what surface locations the balloon can reach from an initial location, and how long it would take. This is referred to as the global reachability problem, where the paths from starting locations to all possible target locations must be computed. The balloon could be driven with its own actuation, but its actuation capability is fairly limited. It would be more efficient to take advantage of the wind field and ride the wind that is much stronger than what the actuator could produce. It is possible to pose the path planning problem as a graph search problem on a directed graph by discretizing the spacetime world and the vehicle actuation. The decomposition algorithm provides reachability analysis of a time-varying graph. Because the balloon only moves in the positive direction in time, the adjacency matrix of the graph can be represented with an upper block-triangular matrix, and this upper block-triangular structure can be exploited to decompose a large graph search problem. The new approach consumes a much smaller amount of memory, which also helps speed up the overall computation when the computing resource has a limited physical memory compared to the problem size.
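
    The upper block-triangular structure can be exploited with a simple layer-by-layer sweep: reachability at time t+1 is obtained from reachability at time t through that step's transition matrix alone, so the full space-time graph never has to be searched at once. The sketch below illustrates this on a toy wind-drift example; the transition matrices and grid are assumptions, not mission data.

```python
import numpy as np

def reachable_sets(transition, start_nodes):
    """Forward reachability on a time-expanded graph.

    transition[t] is a boolean adjacency matrix A_t with A_t[i, j] = True when
    the balloon can move from location i at time t to location j at time t+1
    (wind drift plus limited actuation). Because motion only goes forward in
    time, the space-time adjacency matrix is upper block-triangular and
    reachability can be swept one time layer at a time.
    """
    n = transition[0].shape[0]
    reach = np.zeros(n, dtype=bool)
    reach[list(start_nodes)] = True
    layers = [reach.copy()]
    for A_t in transition:                 # one small propagation per time layer
        reach = (A_t.T.astype(int) @ reach.astype(int)) > 0
        layers.append(reach.copy())
    return layers

# Toy usage: 5 locations, 3 time steps, wind pushing everything one cell "east".
n, T = 5, 3
drift = [np.roll(np.eye(n, dtype=bool), 1, axis=1) for _ in range(T)]
print([layer.nonzero()[0].tolist() for layer in reachable_sets(drift, start_nodes=[0])])
```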

  20. Discrete-time entropy formulation of optimal and adaptive control problems

    NASA Technical Reports Server (NTRS)

    Tsai, Yweting A.; Casiello, Francisco A.; Loparo, Kenneth A.

    1992-01-01

    The discrete-time version of the entropy formulation of optimal control problems developed by G. N. Saridis (1988) is discussed. Given a dynamical system, the uncertainty in the selection of the control is characterized by the probability distribution (density) function which maximizes the total entropy. The equivalence between the optimal control problem and the optimal entropy problem is established, and the total entropy is decomposed into a term associated with the certainty-equivalent control law, the entropy of estimation, and the so-called equivocation of the active transmission of information from the controller to the estimator. This provides a useful framework for studying certainty-equivalent and adaptive control laws.

  1. Vectorization and parallelization of the finite strip method for dynamic Mindlin plate problems

    NASA Technical Reports Server (NTRS)

    Chen, Hsin-Chu; He, Ai-Fang

    1993-01-01

    The finite strip method is a semi-analytical finite element process which allows for a discrete analysis of certain types of physical problems by discretizing the domain of the problem into finite strips. This method decomposes a single large problem into m smaller independent subproblems when m harmonic functions are employed, thus yielding natural parallelism at a very high level. In this paper we address vectorization and parallelization strategies for the dynamic analysis of simply-supported Mindlin plate bending problems and show how to prevent potential conflicts in memory access during the assemblage process. The vector and parallel implementations of this method and the performance results of a test problem under scalar, vector, and vector-concurrent execution modes on the Alliant FX/80 are also presented.

  2. Decomposing Slavic Aspect: The Role of Aspectual Morphology in Polish and Other Slavic Languages

    ERIC Educational Resources Information Center

    Lazorczyk, Agnieszka Agata

    2010-01-01

    This dissertation considers the problem of the semantic function of verbal aspectual morphology in Polish and other Slavic languages in the framework of generative syntax and semantics. Three kinds of such morphology are examined: (i) prefixes attaching directly to the root, (ii) "secondary imperfective" suffixes, and (iii) three prefixes that…

  3. Parallel Logic Programming Architecture

    DTIC Science & Technology

    1990-04-01

    Section 3.1. A STATIC ALLOCATION SCHEME (SAS) Methods that have been used for decomposing distributed problems in artificial intelligence...multiple agents, knowledge organization and allocation, and cooperative parallel execution. These difficulties are common to distributed artificial...for the following reasons. First, intelligent backtracking requires much more bookkeeping and is therefore more costly during consult-time and during

  4. The Second Conference on the Environmental Chemistry of Hydrazine Fuels; 15 February 1979.

    DTIC Science & Technology

    1982-04-01

    tank by a moving piston in the tank. The hydrazine travels to a gas generator where it decomposes on an iridium/alumina catalyst. The gas is used to...possibility of nitrogen trichloride formation and presented control instrument problems since commercially available instruments required pH of about 5

  5. Construct DTPB Model by Using DEMATEL: A Study of a University Library Website

    ERIC Educational Resources Information Center

    Lee, Yu-Cheng; Hsieh, Yi-Fang; Guo, Yau-Bin

    2013-01-01

    Purpose: Traditional studies on a decomposed theory of planned behavior (DTPB) analyze the relationship of variables through a structural equation model. If certain variables do not fully comply with the independence assumption, it is not possible to conduct proper analysis, which leads to false conclusions. To solve these problems, the aim of this…

  6. Structural optimization by multilevel decomposition

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; James, B.; Dovi, A.

    1983-01-01

    A method is described for decomposing an optimization problem into a set of subproblems and a coordination problem which preserves coupling between the subproblems. The method is introduced as a special case of multilevel, multidisciplinary system optimization and its algorithm is fully described for two-level optimization for structures assembled of finite elements of arbitrary type. Numerical results are given for an example of a framework to show that the decomposition method converges and yields results comparable to those obtained without decomposition. It is pointed out that optimization by decomposition should reduce the design time by allowing groups of engineers, using different computers, to work concurrently on the same large problem.

  7. Multiscale structure of time series revealed by the monotony spectrum.

    PubMed

    Vamoş, Călin

    2017-03-01

    Observation of complex systems produces time series with specific dynamics at different time scales. The majority of the existing numerical methods for multiscale analysis first decompose the time series into several simpler components and the multiscale structure is given by the properties of their components. We present a numerical method which describes the multiscale structure of arbitrary time series without decomposing them. It is based on the monotony spectrum defined as the variation of the mean amplitude of the monotonic segments with respect to the mean local time scale during successive averagings of the time series, the local time scales being the durations of the monotonic segments. The maxima of the monotony spectrum indicate the time scales which dominate the variations of the time series. We show that the monotony spectrum can correctly analyze a diversity of artificial time series and can discriminate the existence of deterministic variations at large time scales from the random fluctuations. As an application we analyze the multifractal structure of some hydrological time series.
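
    The basic building block, splitting a series into maximal monotonic segments and measuring their durations and amplitudes, can be sketched as below; the successive-averaging loop that produces the full monotony spectrum is omitted, and the example series is synthetic.

```python
import numpy as np

def monotonic_segments(x):
    """Split a time series into maximal monotonic segments and return, for each
    segment, its duration (local time scale) and absolute amplitude variation."""
    dx = np.diff(x)
    sign = np.sign(dx)
    sign[sign == 0] = 1                            # treat flat steps as continuing the trend
    # Indices where the direction of variation changes mark segment boundaries.
    breaks = np.flatnonzero(sign[1:] != sign[:-1]) + 1
    bounds = np.concatenate(([0], breaks, [len(x) - 1]))
    durations = np.diff(bounds)
    amplitudes = np.abs(x[bounds[1:]] - x[bounds[:-1]])
    return durations, amplitudes

# Toy usage: a slow oscillation plus fast noise has two well-separated scales.
t = np.linspace(0, 10, 2000)
series = np.sin(2 * np.pi * 0.2 * t) + 0.1 * np.random.default_rng(4).standard_normal(t.size)
dur, amp = monotonic_segments(series)
print(f"mean local time scale: {dur.mean():.1f} samples, mean amplitude: {amp.mean():.3f}")
```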

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, X; Petrongolo, M; Wang, T

    Purpose: A general problem of dual-energy CT (DECT) is that the decomposition is sensitive to noise in the two sets of dual-energy projection data, resulting in severely degraded qualities of decomposed images. We have previously proposed an iterative denoising method for DECT. Using a linear decomposition function, the method does not gain the full benefits of DECT on beam-hardening correction. In this work, we expand the framework of our iterative method to include non-linear decomposition models for noise suppression in DECT. Methods: We first obtain decomposed projections, which are free of beam-hardening artifacts, using a lookup table pre-measured on a calibration phantom. First-pass material images with high noise are reconstructed from the decomposed projections using standard filtered backprojection reconstruction. Noise on the decomposed images is then suppressed by an iterative method, which is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, we include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Analytical formulae are derived to compute the variance-covariance matrix from the measured decomposition lookup table. Results: We have evaluated the proposed method via phantom studies. Using non-linear decomposition, our method effectively suppresses the streaking artifacts of beam-hardening and obtains more uniform images than our previous approach based on a linear model. The proposed method reduces the average noise standard deviation of two basis materials by one order of magnitude without sacrificing the spatial resolution. Conclusion: We propose a general framework of iterative denoising for material decomposition of DECT. Preliminary phantom studies have shown the proposed method improves the image uniformity and reduces noise level without resolution loss. In the future, we will perform more phantom studies to further validate the performance of the proposed method. This work is supported by a Varian MRA grant.

  9. A random approach of test macro generation for early detection of hotspots

    NASA Astrophysics Data System (ADS)

    Lee, Jong-hyun; Kim, Chin; Kang, Minsoo; Hwang, Sungwook; Yang, Jae-seok; Harb, Mohammed; Al-Imam, Mohamed; Madkour, Kareem; ElManhawy, Wael; Kwan, Joe

    2016-03-01

    Multiple-Patterning Technology (MPT) is still the preferred choice over EUV for advanced technology nodes, starting from the 20nm node. On the way down to the 7nm and 5nm nodes, Self-Aligned Multiple Patterning (SAMP) appears to be an effective multiple patterning technique in terms of achieving a small pitch of printed lines on wafer, yet its yield is in question. Predicting and enhancing the yield in the early stages of technology development are some of the main objectives for creating test macros on test masks. While conventional yield ramp techniques for a new technology node have relied on using designs from previous technology nodes as a starting point to identify patterns for Design of Experiment (DoE) creation, these techniques are challenging to apply when introducing an MPT technique like SAMP that did not exist in previous nodes. This paper presents a new strategy for generating test structures based on random placement of unit patterns that can construct larger, more meaningful patterns. Specifications governing the relationships between those unit patterns can be adjusted to generate layout clips that look like realistic SAMP designs. A via chain can be constructed to connect the random DoE of SAMP structures through a routing layer to external pads for electrical measurement. These clips are decomposed according to the decomposition rules of the technology into the appropriate mandrel and cut masks. The decomposed clips can be tested through simulations, or electrically on silicon, to discover hotspots. The hotspots can be used in optimizing the fabrication process and models to fix them. They can also be used as learning patterns for DFM deck development. By expanding the size of the randomly generated test structures, more hotspots can be detected. This should provide a faster way to enhance the yield of a new technology node.

  10. Correlated Noise: How it Breaks NMF, and What to Do About It.

    PubMed

    Plis, Sergey M; Potluru, Vamsi K; Lane, Terran; Calhoun, Vince D

    2011-01-12

    Non-negative matrix factorization (NMF) is the problem of decomposing multivariate data into a set of features and their corresponding activations. When applied to experimental data, NMF has to cope with noise, which is often highly correlated. We show that correlated noise can break the Donoho and Stodden separability conditions of a dataset, so that a regular NMF algorithm will fail to decompose it, even when given the freedom to represent the noise as a separate feature. To cope with this issue, we present an algorithm for NMF with a generalized least squares objective function (glsNMF), derive multiplicative updates for the method, and prove their convergence. The new algorithm successfully recovers the true representation from the noisy data. Its robust performance can make glsNMF a valuable tool for analyzing empirical data.
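
    For context, the multiplicative-update structure that glsNMF builds on can be sketched with the standard (unweighted) Lee-Seung updates for the Frobenius objective. The weighted updates that account for the noise covariance are derived in the paper and are not reproduced here, so the code below is only an illustrative baseline, with toy data.

    ```python
    import numpy as np

    def nmf_multiplicative(X, rank, n_iter=500, eps=1e-9, seed=0):
        """Standard Lee-Seung multiplicative updates for ||X - W @ H||_F^2.
        glsNMF, as described above, instead minimizes a generalized least-squares
        objective weighted by the inverse noise covariance; this plain version only
        illustrates the alternating multiplicative-update structure."""
        rng = np.random.default_rng(seed)
        n, m = X.shape
        W = rng.random((n, rank)) + eps
        H = rng.random((rank, m)) + eps
        for _ in range(n_iter):
            H *= (W.T @ X) / (W.T @ W @ H + eps)   # update activations
            W *= (X @ H.T) / (W @ H @ H.T + eps)   # update features
        return W, H

    # Toy usage on non-negative synthetic data
    X = np.abs(np.random.default_rng(1).random((30, 40)))
    W, H = nmf_multiplicative(X, rank=5)
    print("relative reconstruction error:", np.linalg.norm(X - W @ H) / np.linalg.norm(X))
    ```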

  11. Correlated Noise: How it Breaks NMF, and What to Do About It

    PubMed Central

    Plis, Sergey M.; Potluru, Vamsi K.; Lane, Terran; Calhoun, Vince D.

    2010-01-01

    Non-negative matrix factorization (NMF) is the problem of decomposing multivariate data into a set of features and their corresponding activations. When applied to experimental data, NMF has to cope with noise, which is often highly correlated. We show that correlated noise can break the Donoho and Stodden separability conditions of a dataset, so that a regular NMF algorithm will fail to decompose it, even when given the freedom to represent the noise as a separate feature. To cope with this issue, we present an algorithm for NMF with a generalized least squares objective function (glsNMF), derive multiplicative updates for the method, and prove their convergence. The new algorithm successfully recovers the true representation from the noisy data. Its robust performance can make glsNMF a valuable tool for analyzing empirical data. PMID:23750288

  12. Finite element analysis of periodic transonic flow problems

    NASA Technical Reports Server (NTRS)

    Fix, G. J.

    1978-01-01

    Flow about an oscillating thin airfoil in a transonic stream was considered. It was assumed that the flow field can be decomposed into a mean flow plus a periodic perturbation. On the surface of the airfoil the usual Neumann conditions are imposed. Two computer programs were written, both using linear basis functions over triangles for the finite element space. The first program uses a banded Gaussian elimination solver to solve the matrix problem, while the second uses an iterative technique, namely SOR. The only results obtained are for an oscillating flat plate.

  13. Denoising, deconvolving, and decomposing photon observations. Derivation of the D3PO algorithm

    NASA Astrophysics Data System (ADS)

    Selig, Marco; Enßlin, Torsten A.

    2015-02-01

    The analysis of astronomical images is a non-trivial task. The D3PO algorithm addresses the inference problem of denoising, deconvolving, and decomposing photon observations. Its primary goal is the simultaneous but individual reconstruction of the diffuse and point-like photon flux given a single photon count image, where the fluxes are superimposed. In order to discriminate between these morphologically different signal components, a probabilistic algorithm is derived in the language of information field theory based on a hierarchical Bayesian parameter model. The signal inference exploits prior information on the spatial correlation structure of the diffuse component and the brightness distribution of the spatially uncorrelated point-like sources. A maximum a posteriori solution and a solution minimizing the Gibbs free energy of the inference problem using variational Bayesian methods are discussed. Since the derivation of the solution is not dependent on the underlying position space, the implementation of the D3PO algorithm uses the nifty package to ensure applicability to various spatial grids and at any resolution. The fidelity of the algorithm is validated by the analysis of simulated data, including a realistic high energy photon count image showing a 32 × 32 arcmin2 observation with a spatial resolution of 0.1 arcmin. In all tests the D3PO algorithm successfully denoised, deconvolved, and decomposed the data into a diffuse and a point-like signal estimate for the respective photon flux components. A copy of the code is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/574/A74

  14. Flexible configuration-interaction shell-model many-body solver

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Calvin W.; Ormand, W. Erich; McElvain, Kenneth S.

    BIGSTICK is a flexible, open-source configuration-interaction shell-model code for the many-fermion problem in a shell-model (occupation representation) framework. BIGSTICK can generate energy spectra, static and transition one-body densities, and expectation values of scalar operators. Using the built-in Lanczos algorithm one can compute transition probability distributions and decompose wave functions into components defined by group theory.
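
    The Lanczos step mentioned above can be illustrated with a small sparse-matrix example. The random symmetric "Hamiltonian" below and the use of SciPy's eigsh (a Lanczos-type solver) are illustrative stand-ins, not BIGSTICK's implementation or an actual shell-model matrix.

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import eigsh

    # Illustrative stand-in for a configuration-interaction Hamiltonian:
    # a large, sparse, real-symmetric matrix with a random diagonal.
    rng = np.random.default_rng(0)
    n = 2000
    H = sp.random(n, n, density=1e-3, random_state=0, format="csr")
    H = 0.5 * (H + H.T) - sp.diags(rng.random(n))   # symmetrize and add a diagonal

    # Lanczos-based solver for the few lowest eigenpairs, as a shell-model code would use.
    energies, vectors = eigsh(H, k=5, which="SA")
    print("lowest 'energies':", np.sort(energies))
    ```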

  15. Improving engineering system design by formal decomposition, sensitivity analysis, and optimization

    NASA Technical Reports Server (NTRS)

    Sobieski, J.; Barthelemy, J. F. M.

    1985-01-01

    A method for use in the design of a complex engineering system by decomposing the problem into a set of smaller subproblems is presented. Coupling of the subproblems is preserved by means of the sensitivity derivatives of the subproblem solution to the inputs received from the system. The method allows for the division of work among many people and computers.

  16. Learning and diagnosing faults using neural networks

    NASA Technical Reports Server (NTRS)

    Whitehead, Bruce A.; Kiech, Earl L.; Ali, Moonis

    1990-01-01

    Neural networks have been employed for learning fault behavior from rocket engine simulator parameters and for diagnosing faults on the basis of the learned behavior. Two problems in applying neural networks to learning and diagnosing faults are (1) the complexity of the sensor data to fault mapping to be modeled by the neural network, which implies difficult and lengthy training procedures; and (2) the lack of sufficient training data to adequately represent the very large number of different types of faults which might occur. Methods are derived and tested in an architecture which addresses these two problems. First, the sensor data to fault mapping is decomposed into three simpler mappings which perform sensor data compression, hypothesis generation, and sensor fusion. Efficient training is performed for each mapping separately. Secondly, the neural network which performs sensor fusion is structured to detect new unknown faults for which training examples were not presented during training. These methods were tested on a task of fault diagnosis by employing rocket engine simulator data. Results indicate that the decomposed neural network architecture can be trained efficiently, can identify faults for which it has been trained, and can detect the occurrence of faults for which it has not been trained.

  17. Requirements Analysis and Modeling with Problem Frames and SysML: A Case Study

    NASA Astrophysics Data System (ADS)

    Colombo, Pietro; Khendek, Ferhat; Lavazza, Luigi

    Requirements analysis based on Problem Frames is receiving increasing attention in the academic community and has the potential to become relevant for industry as well. However, the approach lacks adequate notational support and methodological guidelines, and case studies that demonstrate its applicability to problems of realistic complexity are still rare. These weaknesses may hinder its adoption. This paper aims to contribute towards the elimination of these weaknesses. We report on an experience in analyzing and specifying the requirements of a controller for the traffic lights of an intersection using Problem Frames in combination with SysML. The analysis was performed by decomposing the problem, addressing the identified sub-problems, and recomposing them while resolving the identified interferences. The experience allowed us to identify guidelines for decomposition and re-composition patterns.

  18. An efficient variable projection formulation for separable nonlinear least squares problems.

    PubMed

    Gan, Min; Li, Han-Xiong

    2014-05-01

    We consider in this paper a class of nonlinear least squares problems in which the model can be represented as a linear combination of nonlinear functions. The variable projection algorithm projects the linear parameters out of the problem, leaving a nonlinear least squares problem that involves only the nonlinear parameters. To implement the variable projection algorithm more efficiently, we propose a new variable projection functional based on matrix decomposition. The advantage of the proposed formulation is that the size of the decomposed matrix may be much smaller than in previous formulations. The Levenberg-Marquardt algorithm with finite-difference derivatives is then applied to minimize the new criterion. Numerical results show that the proposed approach achieves a significant reduction in computing time.
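
    The core variable projection idea can be sketched as follows: for a separable model y ≈ Phi(alpha) @ c, the linear coefficients c are solved by linear least squares at every trial value of the nonlinear parameters alpha, so the outer optimizer only searches over alpha. The two-exponential model, parameter values, and use of SciPy's least_squares below are illustrative assumptions, not the reduced functional proposed in the paper.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # Variable projection sketch for a separable model y ≈ Phi(alpha) @ c
    # (sum of two decaying exponentials). The linear amplitudes c are projected
    # out at each step, so only the nonlinear decay rates alpha are optimized.
    t = np.linspace(0.0, 4.0, 100)
    rng = np.random.default_rng(0)
    y = 2.0 * np.exp(-1.3 * t) + 0.5 * np.exp(-0.2 * t) + 0.01 * rng.standard_normal(t.size)

    def phi(alpha):
        return np.exp(-np.outer(t, alpha))          # design matrix of nonlinear basis functions

    def vp_residual(alpha):
        A = phi(alpha)
        c, *_ = np.linalg.lstsq(A, y, rcond=None)   # linear parameters solved exactly
        return A @ c - y                            # residual now depends on alpha only

    sol = least_squares(vp_residual, x0=[1.0, 0.1], method="lm")  # Levenberg-Marquardt on alpha alone
    c, *_ = np.linalg.lstsq(phi(sol.x), y, rcond=None)
    print("estimated rates:", sol.x, "estimated amplitudes:", c)
    ```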

  19. Using Temporal Correlations and Full Distributions to Separate Intrinsic and Extrinsic Fluctuations in Biological Systems

    NASA Astrophysics Data System (ADS)

    Hilfinger, Andreas; Chen, Mark; Paulsson, Johan

    2012-12-01

    Studies of stochastic biological dynamics typically compare observed fluctuations to theoretically predicted variances, sometimes after separating the intrinsic randomness of the system from the enslaving influence of changing environments. But variances have been shown to discriminate surprisingly poorly between alternative mechanisms, while for other system properties no approaches exist that rigorously disentangle environmental influences from intrinsic effects. Here, we apply the theory of generalized random walks in random environments to derive exact rules for decomposing time series and higher statistics, rather than just variances. We show for which properties and for which classes of systems intrinsic fluctuations can be analyzed without accounting for extrinsic stochasticity and vice versa. We derive two independent experimental methods to measure the separate noise contributions and show how to use the additional information in temporal correlations to detect multiplicative effects in dynamical systems.

  20. A Coral Reef Algorithm Based on Learning Automata for the Coverage Control Problem of Heterogeneous Directional Sensor Networks

    PubMed Central

    Li, Ming; Miao, Chunyan; Leung, Cyril

    2015-01-01

    Coverage control is one of the most fundamental issues in directional sensor networks. In this paper, the coverage optimization problem in a directional sensor network is formulated as a multi-objective optimization problem. It takes into account the coverage rate of the network, the number of working sensor nodes and the connectivity of the network. The coverage problem considered in this paper is characterized by the geographical irregularity of the sensed events and heterogeneity of the sensor nodes in terms of sensing radius, field of angle and communication radius. To solve this multi-objective problem, we introduce a learning automata-based coral reef algorithm for adaptive parameter selection and use a novel Tchebycheff decomposition method to decompose the multi-objective problem into a single-objective problem. Simulation results show the consistent superiority of the proposed algorithm over alternative approaches. PMID:26690162
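
    The Tchebycheff decomposition mentioned above turns a multi-objective problem into a family of single-objective subproblems, one per weight vector. The toy bi-objective function, weights, and reference point in the sketch below are illustrative stand-ins, not the coverage objectives of the paper.

    ```python
    import numpy as np

    def tchebycheff(f_vals, weights, z_star):
        """Tchebycheff scalarization g(x | w, z*) = max_i w_i * |f_i(x) - z_i*|."""
        return np.max(weights * np.abs(f_vals - z_star))

    # Toy bi-objective problem on one decision variable: f1 = x^2, f2 = (x - 2)^2.
    xs = np.linspace(-1.0, 3.0, 401)
    F = np.stack([xs**2, (xs - 2.0)**2], axis=1)
    z_star = F.min(axis=0)                          # ideal point (per-objective minima)

    # Each weight vector defines one single-objective subproblem; its minimizer
    # contributes one point to the approximated Pareto front.
    for w1 in (0.1, 0.3, 0.5, 0.7, 0.9):
        w = np.array([w1, 1.0 - w1])
        g = np.array([tchebycheff(f, w, z_star) for f in F])
        i = int(np.argmin(g))
        print(f"weights {w} -> x* = {xs[i]:.3f}, objectives = {np.round(F[i], 3)}")
    ```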

  1. A Coral Reef Algorithm Based on Learning Automata for the Coverage Control Problem of Heterogeneous Directional Sensor Networks.

    PubMed

    Li, Ming; Miao, Chunyan; Leung, Cyril

    2015-12-04

    Coverage control is one of the most fundamental issues in directional sensor networks. In this paper, the coverage optimization problem in a directional sensor network is formulated as a multi-objective optimization problem. It takes into account the coverage rate of the network, the number of working sensor nodes and the connectivity of the network. The coverage problem considered in this paper is characterized by the geographical irregularity of the sensed events and heterogeneity of the sensor nodes in terms of sensing radius, field of angle and communication radius. To solve this multi-objective problem, we introduce a learning automata-based coral reef algorithm for adaptive parameter selection and use a novel Tchebycheff decomposition method to decompose the multi-objective problem into a single-objective problem. Simulation results show the consistent superiority of the proposed algorithm over alternative approaches.

  2. Application of supercritical water to decompose brominated epoxy resin and environmental friendly recovery of metals from waste memory module.

    PubMed

    Li, Kuo; Xu, Zhenming

    2015-02-03

    Waste Memory Modules (WMMs), a particular kind of waste printed circuit board (WPCB), contain a high amount of brominated epoxy resin (BER), which may cause a series of environmental and health problems. On the other hand, metals like gold and copper are very valuable and important to recover from WMMs. In the present study, an effective and environmentally friendly method using supercritical water (SCW) was developed to decompose BER and recover metals from WMMs simultaneously, instead of using hydrometallurgy or pyrometallurgy. Experiments were conducted under external-catalyst-free conditions with temperatures ranging from 350 to 550 °C, pressures from 25 to 40 MPa, and reaction times from 120 to 360 min in a semibatch-type reactor. The results showed that BER could be quickly and efficiently decomposed under SCW conditions, and the mechanism was possibly a free-radical reaction. After the SCW treatments, the glass fibers and metal foils in the solid residue could be easily liberated and recovered. The metal recovery rate reached 99.80%. The optimal parameters were determined to be 495 °C, 33 MPa, and 305 min on the basis of response surface methodology (RSM). This study provides an efficient and environmentally friendly approach for WMM recycling compared with electrolysis, pyrometallurgy, and hydrometallurgy.

  3. Performance optimization of the power user electric energy data acquire system based on MOEA/D evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Ding, Zhongan; Gao, Chen; Yan, Shengteng; Yang, Canrong

    2017-10-01

    The power user electric energy data acquire system (PUEEDAS) is an important part of the smart grid. This paper builds a multi-objective optimization model for the performance of the PUEEDAS from the point of view of combining comprehensive benefits and cost. Meanwhile, the Chebyshev decomposition approach is used to decompose the multi-objective optimization problem. We design an MOEA/D evolutionary algorithm to solve the problem. By analyzing the Pareto optimal solution set of the multi-objective optimization problem and comparing it with the monitoring value, the direction for optimizing the performance of the PUEEDAS can be determined. Finally, an example is designed for specific analysis.

  4. Distributed Task Offloading in Heterogeneous Vehicular Crowd Sensing

    PubMed Central

    Liu, Yazhi; Wang, Wendong; Ma, Yuekun; Yang, Zhigang; Yu, Fuxing

    2016-01-01

    The ability of road vehicles to efficiently execute different sensing tasks varies because of the heterogeneity in their sensing abilities and trajectories. Therefore, the data collection sensing task, which requires spatio-temporal sensing data, becomes a serious problem in vehicular sensing systems, particularly those with limited sensing capabilities. A utility-based sensing task decomposition and offloading algorithm is proposed in this paper. The utility function for a task executed by a certain vehicle is built according to the mobility traces and sensing interfaces of the vehicle, as well as the sensing data type and spatio-temporal coverage requirements of the sensing task. Then, the sensing tasks are decomposed and offloaded to neighboring vehicles according to the utilities of the neighboring vehicles for the decomposed sensing tasks. Real trace-driven simulation shows that the proposed task offloading is able to collect much more comprehensive and uniformly distributed sensing data than other algorithms. PMID:27428967

  5. A unified model for transfer alignment at random misalignment angles based on second-order EKF

    NASA Astrophysics Data System (ADS)

    Cui, Xiao; Mei, Chunbo; Qin, Yongyuan; Yan, Gongmin; Liu, Zhenbo

    2017-04-01

    In the transfer alignment process of inertial navigation systems (INSs), the conventional linear error model based on the small misalignment angle assumption cannot be applied to large misalignment situations. Furthermore, the nonlinear model based on large misalignment angles suffers from redundant computation with nonlinear filters. This paper presents a unified model for transfer alignment suitable for arbitrary misalignment angles. The alignment problem is transformed into an estimation of the relative attitude between the master INS (MINS) and the slave INS (SINS) by decomposing the attitude matrix of the latter. Based on the Rodriguez parameters, a unified alignment model in the inertial frame, with a linear state-space equation and a second-order nonlinear measurement equation, is established without making any assumptions about the misalignment angles. Furthermore, we apply a Taylor series expansion to the second-order nonlinear measurement equation to implement the second-order extended Kalman filter (EKF2). Monte-Carlo simulations demonstrate that the initial alignment can be fulfilled within 10 s, with higher accuracy and much smaller computational cost compared with the traditional unscented Kalman filter (UKF) at large misalignment angles.

  6. American option pricing in Gauss-Markov interest rate models

    NASA Astrophysics Data System (ADS)

    Galluccio, Stefano

    1999-07-01

    In the context of Gaussian non-homogeneous interest-rate models, we study the problem of American bond option pricing. In particular, we show how to efficiently compute the exercise boundary in these models in order to decompose the price as a sum of a European option and an American premium. Generalizations to coupon-bearing bonds and jump-diffusion processes for the interest rates are also discussed.

  7. MAUD: An Interactive Computer Program for the Structuring, Decomposition, and Recomposition of Preferences between Multiattributed Alternatives. Final Report. Technical Report 543.

    ERIC Educational Resources Information Center

    Humphreys, Patrick; Wisudha, Ayleen

    As a demonstration of the application of heuristic devices to decision-theoretical techniques, an interactive computer program known as MAUD (Multiattribute Utility Decomposition) has been designed to support decision or choice problems that can be decomposed into component factors, or to act as a tool for investigating the microstructure of a…

  8. Large-scale structural optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.

    1983-01-01

    Problems encountered by aerospace designers in attempting to optimize whole aircraft are discussed, along with possible solutions. Large-scale optimization, as opposed to component-by-component optimization, is hindered by computational costs, software inflexibility, concentration on a single design methodology rather than trade-offs, and the incompatibility of large-scale optimization with single-program, single-computer methods. The software problem can be approached by placing the full analysis outside of the optimization loop. Full analysis is then performed only periodically. Problem-dependent software can be removed from the generic code using a systems programming technique, and made to embody the definitions of design variables, objective function, and design constraints. Trade-off algorithms can be used at the design points to obtain quantitative answers. Finally, decomposing the large-scale problem into independent subproblems allows systematic optimization of the problems by an organization of people and machines.

  9. Mediation and spillover effects in group-randomized trials: a case study of the 4Rs educational intervention

    PubMed Central

    VanderWeele, Tyler J.; Hong, Guanglei; Jones, Stephanie M.; Brown, Joshua L.

    2013-01-01

    Peer influence and social interactions can give rise to spillover effects in which the exposure of one individual may affect outcomes of other individuals. Even if the intervention under study occurs at the group or cluster level as in group-randomized trials, spillover effects can occur when the mediator of interest is measured at a lower level than the treatment. Evaluators who choose groups rather than individuals as experimental units in a randomized trial often anticipate that the desirable changes in targeted social behaviors will be reinforced through interference among individuals in a group exposed to the same treatment. In an empirical evaluation of the effect of a school-wide intervention on reducing individual students’ depressive symptoms, schools in matched pairs were randomly assigned to the 4Rs intervention or the control condition. Class quality was hypothesized as an important mediator assessed at the classroom level. We reason that the quality of one classroom may affect outcomes of children in another classroom because children interact not simply with their classmates but also with those from other classes in the hallways or on the playground. In investigating the role of class quality as a mediator, failure to account for such spillover effects of one classroom on the outcomes of children in other classrooms can potentially result in bias and problems with interpretation. Using a counterfactual conceptualization of direct, indirect and spillover effects, we provide a framework that can accommodate issues of mediation and spillover effects in group randomized trials. We show that the total effect can be decomposed into a natural direct effect, a within-classroom mediated effect and a spillover mediated effect. We give identification conditions for each of the causal effects of interest and provide results on the consequences of ignoring “interference” or “spillover effects” when they are in fact present. Our modeling approach disentangles these effects. The analysis examines whether the 4Rs intervention has an effect on children's depressive symptoms through changing the quality of other classes as well as through changing the quality of a child's own class. PMID:23997375

  10. Cognitive mechanisms of insight: the role of heuristics and representational change in solving the eight-coin problem.

    PubMed

    Öllinger, Michael; Jones, Gary; Faber, Amory H; Knoblich, Günther

    2013-05-01

    The 8-coin insight problem requires the problem solver to move 2 coins so that each coin touches exactly 3 others. Ormerod, MacGregor, and Chronicle (2002) explained differences in task performance across different versions of the 8-coin problem using the availability of particular moves in a 2-dimensional search space. We explored 2 further explanations by developing 6 new versions of the 8-coin problem in order to investigate the influence of grouping and self-imposed constraints on solutions. The results identified 2 sources of problem difficulty: first, the necessity to overcome the constraint that a solution can be found in 2-dimensional space and, second, the necessity to decompose perceptual groupings. A detailed move analysis suggested that the selection of moves was driven by the established representation rather than the application of the appropriate heuristics. Both results support the assumptions of representational change theory (Ohlsson, 1992).

  11. Object Scene Flow

    NASA Astrophysics Data System (ADS)

    Menze, Moritz; Heipke, Christian; Geiger, Andreas

    2018-06-01

    This work investigates the estimation of dense three-dimensional motion fields, commonly referred to as scene flow. While great progress has been made in recent years, large displacements and adverse imaging conditions as observed in natural outdoor environments are still very challenging for current approaches to reconstruction and motion estimation. In this paper, we propose a unified random field model which reasons jointly about 3D scene flow as well as the location, shape and motion of vehicles in the observed scene. We formulate the problem as the task of decomposing the scene into a small number of rigidly moving objects sharing the same motion parameters. Thus, our formulation effectively introduces long-range spatial dependencies which commonly employed local rigidity priors are lacking. Our inference algorithm then estimates the association of image segments and object hypotheses together with their three-dimensional shape and motion. We demonstrate the potential of the proposed approach by introducing a novel challenging scene flow benchmark which allows for a thorough comparison of the proposed scene flow approach with respect to various baseline models. In contrast to previous benchmarks, our evaluation is the first to provide stereo and optical flow ground truth for dynamic real-world urban scenes at large scale. Our experiments reveal that rigid motion segmentation can be utilized as an effective regularizer for the scene flow problem, improving upon existing two-frame scene flow methods. At the same time, our method yields plausible object segmentations without requiring an explicitly trained recognition model for a specific object class.

  12. An efficient hybrid approach for multiobjective optimization of water distribution systems

    NASA Astrophysics Data System (ADS)

    Zheng, Feifei; Simpson, Angus R.; Zecchin, Aaron C.

    2014-05-01

    An efficient hybrid approach for the design of water distribution systems (WDSs) with multiple objectives is described in this paper. The objectives are the minimization of the network cost and maximization of the network resilience. A self-adaptive multiobjective differential evolution (SAMODE) algorithm has been developed, in which control parameters are automatically adapted by means of evolution instead of the presetting of fine-tuned parameter values. In the proposed method, a graph algorithm is first used to decompose a looped WDS into a shortest-distance tree (T) or forest, and chords (Ω). The original two-objective optimization problem is then approximated by a series of single-objective optimization problems of the T to be solved by nonlinear programming (NLP), thereby providing an approximate Pareto optimal front for the original whole network. Finally, the solutions at the approximate front are used to seed the SAMODE algorithm to find an improved front for the original entire network. The proposed approach is compared with two other conventional full-search optimization methods (the SAMODE algorithm and the NSGA-II) that seed the initial population with purely random solutions based on three case studies: a benchmark network and two real-world networks with multiple demand loading cases. Results show that (i) the proposed NLP-SAMODE method consistently generates better-quality Pareto fronts than the full-search methods with significantly improved efficiency; and (ii) the proposed SAMODE algorithm (no parameter tuning) exhibits better performance than the NSGA-II with calibrated parameter values in efficiently offering optimal fronts.

  13. An Integrated Constraint Programming Approach to Scheduling Sports Leagues with Divisional and Round-robin Tournaments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlsson, Mats; Johansson, Mikael; Larson, Jeffrey

    Previous approaches for scheduling a league with round-robin and divisional tournaments involved decomposing the problem into easier subproblems. This approach, used to schedule the top Swedish handball league Elitserien, reduces the problem complexity but can result in suboptimal schedules. This paper presents an integrated constraint programming model that allows the scheduling to be performed in a single step. Particular attention is given to identifying implied and symmetry-breaking constraints that reduce the computational complexity significantly. The experimental evaluation shows that the integrated approach takes considerably less computational effort than the previous approach.

  14. Simulating and Synthesizing Substructures Using Neural Network and Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Liu, Youhua; Kapania, Rakesh K.; VanLandingham, Hugh F.

    1997-01-01

    The feasibility of simulating and synthesizing substructures with computational neural network models is illustrated by investigating a statically indeterminate beam, using both 1-D and 2-D plane-stress models. The beam can be decomposed into two cantilevers with free-end loads. By training neural networks to simulate the cantilever responses to different loads, the original beam problem can be solved as a match-up between two subsystems under compatible interface conditions. Genetic algorithms are successfully used to solve the match-up problem. The simulated results are found to be in good agreement with the analytical or FEM solutions.

  15. Model reduction for Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Williams, Trevor

    1992-01-01

    Model reduction is an important practical problem in the control of flexible spacecraft, and a considerable amount of work has been carried out on this topic. Two of the best known methods developed are modal truncation and internal balancing. Modal truncation is simple to implement but can give poor results when the structure possesses clustered natural frequencies, as often occurs in practice. Balancing avoids this problem but has the disadvantages of high computational cost, possible numerical sensitivity problems, and no physical interpretation for the resulting balanced 'modes'. The purpose of this work is to examine the performance of the subsystem balancing technique developed by the investigator when tested on a realistic flexible space structure, in this case a model of the Permanently Manned Configuration (PMC) of Space Station Freedom. This method retains the desirable properties of standard balancing while overcoming the three difficulties listed above. It achieves this by first decomposing the structural model into subsystems of highly correlated modes. Each subsystem is approximately uncorrelated from all others, so balancing them separately and then combining yields results comparable to balancing the entire structure directly. The operation count reduction obtained by the new technique is considerable: a factor of roughly r^2 if the system decomposes into r equal subsystems. Numerical accuracy is also improved significantly, as the matrices being operated on are of reduced dimension, and the modes of the reduced-order model now have a clear physical interpretation; they are, to first order, linear combinations of repeated-frequency modes.

  16. Coherent vorticity extraction in resistive drift-wave turbulence: Comparison of orthogonal wavelets versus proper orthogonal decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Futatani, S.; Bos, W.J.T.; Del-Castillo-Negrete, Diego B

    2011-01-01

    We assess two techniques for extracting coherent vortices out of turbulent flows: the wavelet-based Coherent Vorticity Extraction (CVE) and the Proper Orthogonal Decomposition (POD). The former decomposes the flow field into an orthogonal wavelet representation, and subsequent thresholding of the coefficients allows one to split the flow into organized coherent vortices with non-Gaussian statistics and an incoherent random part which is structureless. POD is based on the singular value decomposition and decomposes the flow into basis functions which are optimal with respect to the retained energy for the ensemble average. Both techniques are applied to direct numerical simulation data of two-dimensional drift-wave turbulence governed by the Hasegawa-Wakatani equation, considering two limit cases: the quasi-hydrodynamic and the quasi-adiabatic regimes. The results are compared in terms of compression rate, retained energy, retained enstrophy and retained radial flux, together with the enstrophy spectrum and higher-order statistics. (c) 2010 Published by Elsevier Masson SAS on behalf of Academie des sciences.
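
    A minimal sketch of the POD step via the singular value decomposition of a snapshot matrix is shown below; the synthetic two-mode field, noise level, and 99% energy cutoff are illustrative assumptions unrelated to the drift-wave data analyzed in the paper.

    ```python
    import numpy as np

    # Proper Orthogonal Decomposition sketch: SVD of a snapshot matrix.
    # Rows = spatial points, columns = time snapshots (synthetic data here).
    rng = np.random.default_rng(0)
    x = np.linspace(0, 2 * np.pi, 128)
    t = np.linspace(0, 10, 200)
    # Two coherent "modes" plus incoherent noise
    snapshots = (np.outer(np.sin(x), np.cos(2 * t))
                 + 0.5 * np.outer(np.sin(3 * x), np.sin(5 * t))
                 + 0.05 * rng.standard_normal((x.size, t.size)))

    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(energy, 0.99)) + 1       # retain 99% of the energy
    reconstruction = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]

    print("modes retained:", k)
    print("relative reconstruction error:",
          np.linalg.norm(snapshots - reconstruction) / np.linalg.norm(snapshots))
    ```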

  17. Infrared small target detection in heavy sky scene clutter based on sparse representation

    NASA Astrophysics Data System (ADS)

    Liu, Depeng; Li, Zhengzhou; Liu, Bing; Chen, Wenhao; Liu, Tianmei; Cao, Lei

    2017-09-01

    A novel infrared small target detection method based on sparse representation of sky clutter and target is proposed in this paper to cope with the uncertainty in representing clutter and target. The sky scene background clutter is described by a fractal random field, and it is perceived and eliminated via sparse representation on a fractal background over-complete dictionary (FBOD). The infrared small target signal is simulated by a generalized Gaussian intensity model, and it is expressed by the generalized Gaussian target over-complete dictionary (GGTOD), which can describe small targets more efficiently than traditional structured dictionaries. The infrared image is decomposed on the union of the FBOD and the GGTOD, and the sparse representation energies of the target signal and the background clutter decomposed on the GGTOD differ so distinctly that they are used to distinguish target from clutter. Experiments are conducted and the results show that the proposed approach improves small target detection performance, especially under heavy clutter, because background clutter can be efficiently perceived and suppressed by the FBOD and the changing target can be represented accurately by the GGTOD.

  18. Group Decision Support System to Aid the Process of Design and Maintenance of Large Scale Systems

    DTIC Science & Technology

    1992-03-23

    from a fuzzy set of user requirements. The overall objective of the project is to develop a system combining the characteristics of a compact computer... AHP ) for hierarchical prioritization. 4) Individual Evaluation and Selection of Alternatives - Allows the decision maker to individually evaluate...its concept of outranking relations. The AHP method supports complex decision problems by successively decomposing and synthesizing various elements

  19. High-quality compressive ghost imaging

    NASA Astrophysics Data System (ADS)

    Huang, Heyan; Zhou, Cheng; Tian, Tian; Liu, Dongqi; Song, Lijun

    2018-04-01

    We propose a high-quality compressive ghost imaging method based on projected Landweber regularization and a guided filter, which effectively reduces the undersampling noise and improves the resolution. In our scheme, the original object is reconstructed by decomposing the compressive reconstruction process into regularization and denoising steps instead of solving a single minimization problem. The simulation and experimental results show that our method can obtain high ghost imaging quality in terms of PSNR and visual observation.

  20. Processing method for superconducting ceramics

    DOEpatents

    Bloom, Ira D.; Poeppel, Roger B.; Flandermeyer, Brian K.

    1993-01-01

    A process for preparing a superconducting ceramic, particularly YBa₂Cu₃O₇₋δ where δ is of the order of about 0.1-0.4, is carried out using a polymeric binder, which decomposes below its ignition point to reduce carbon residue between the grains of the sintered ceramic, and a nonhydroxylic organic solvent, to limit the problems caused by water or certain alcohols on the ceramic composition.

  1. Processing method for superconducting ceramics

    DOEpatents

    Bloom, Ira D.; Poeppel, Roger B.; Flandermeyer, Brian K.

    1993-02-02

    A process for preparing a superconducting ceramic, particularly YBa₂Cu₃O₇₋δ where δ is of the order of about 0.1-0.4, is carried out using a polymeric binder, which decomposes below its ignition point to reduce carbon residue between the grains of the sintered ceramic, and a nonhydroxylic organic solvent, to limit the problems caused by water or certain alcohols on the ceramic composition.

  2. A Sparsity-Promoted Decomposition for Compressed Fault Diagnosis of Roller Bearings

    PubMed Central

    Wang, Huaqing; Ke, Yanliang; Song, Liuyang; Tang, Gang; Chen, Peng

    2016-01-01

    The traditional approaches for condition monitoring of roller bearings are almost always achieved under Shannon sampling theorem conditions, leading to a big-data problem. The compressed sensing (CS) theory provides a new solution to the big-data problem. However, the vibration signals are insufficiently sparse and it is difficult to achieve sparsity using the conventional techniques, which impedes the application of CS theory. Therefore, it is of great significance to promote the sparsity when applying the CS theory to fault diagnosis of roller bearings. To increase the sparsity of vibration signals, a sparsity-promoted method called the tunable Q-factor wavelet transform based on decomposing the analyzed signals into transient impact components and high oscillation components is utilized in this work. The former become sparser than the raw signals with noise eliminated, whereas the latter include noise. Thus, the decomposed transient impact components replace the original signals for analysis. The CS theory is applied to extract the fault features without complete reconstruction, which means that the reconstruction can be completed when the components with interested frequencies are detected and the fault diagnosis can be achieved during the reconstruction procedure. The application cases prove that the CS theory assisted by the tunable Q-factor wavelet transform can successfully extract the fault features from the compressed samples. PMID:27657063

  3. MO-FG-204-06: A New Algorithm for Gold Nano-Particle Concentration Identification in Dual Energy CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, L; Shen, C; Ng, M

    Purpose: Gold nano-particle (GNP) has recently attracted a lot of attention due to its potential as an imaging contrast agent and radiotherapy sensitiser. Imaging GNP at low concentration is a challenging problem. We propose a new algorithm to improve the identification of GNP based on dual energy CT (DECT). Methods: We consider three base materials: water, bone, and gold. Determining three density images from two images in DECT is an under-determined problem. We propose to solve this problem by exploring image-domain sparsity via an optimization approach. The objective function contains four terms. A data-fidelity term ensures the fidelity between the identified material densities and the DECT images, while the other three terms enforce sparsity in the gradient domain of the three images corresponding to the densities of the base materials by using total variation (TV) regularization. A primal-dual algorithm is applied to solve the proposed optimization problem. We have performed simulation studies to test this model. Results: Our digital phantom in the tests contains water, bone regions and gold inserts of different sizes and densities. The gold inserts contain a mixed material consisting of water at 1 g/cm3 and gold at a certain density. At a low gold density of 0.0008 g/cm3, the insert is hardly visible in DECT images, especially for small insert sizes. Our algorithm is able to decompose the DECT images into three density images. The gold inserts at low density can be clearly visualized in the density image. Conclusion: We have developed a new algorithm to decompose DECT images into three different material density images and, in particular, to retrieve the density of gold. Numerical studies showed promising results.

  4. Problem decomposition by mutual information and force-based clustering

    NASA Astrophysics Data System (ADS)

    Otero, Richard Edward

    The scale of engineering problems has sharply increased over the last twenty years. Larger coupled systems, increasing complexity, and limited resources create a need for methods that automatically decompose problems into manageable sub-problems by discovering and leveraging problem structure. The ability to learn the coupling (inter-dependence) structure and reorganize the original problem could lead to large reductions in the time to analyze complex problems. Such decomposition methods could also provide engineering insight on the fundamental physics driving problem solution. This work advances the current state of the art in engineering decomposition through the application of techniques originally developed within computer science and information theory. The work describes the current state of automatic problem decomposition in engineering and utilizes several promising ideas to advance the state of the practice. Mutual information is a novel metric for data dependence and works on both continuous and discrete data. Mutual information can measure both the linear and non-linear dependence between variables without the limitations of linear dependence measured through covariance. Mutual information is also able to handle data that does not have derivative information, unlike other metrics that require it. The value of mutual information to engineering design work is demonstrated on a planetary entry problem. This study utilizes a novel tool developed in this work for planetary entry system synthesis. A graphical method, force-based clustering, is used to discover related sub-graph structure as a function of problem structure and links ranked by their mutual information. This method does not require the stochastic use of neural networks and could be used with any link ranking method currently utilized in the field. Application of this method is demonstrated on a large, coupled low-thrust trajectory problem. Mutual information also serves as the basis for an alternative global optimizer, called MIMIC, which is unrelated to Genetic Algorithms. This work advances current practice by demonstrating the use of MIMIC as a global method that explicitly models problem structure with mutual information, providing an alternative method for globally searching multi-modal domains. By leveraging discovered problem inter-dependencies, MIMIC may be appropriate for highly coupled problems or those with large function evaluation cost. This work introduces a useful addition to the MIMIC algorithm that enables its use on continuous input variables. By leveraging automatic decision tree generation methods from Machine Learning and a set of randomly generated test problems, decision trees for which method to apply are also created, quantifying decomposition performance over a large region of the design space.
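
    The dependency-discovery step described above rests on a pairwise mutual information matrix between design variables. The sketch below uses a simple histogram estimator and toy data with one nonlinear coupling; the binning choice and the data are illustrative assumptions, and no force-based clustering or MIMIC step is reproduced here.

    ```python
    import numpy as np

    def mutual_information(x, y, bins=16):
        """Histogram estimate of MI(x, y) in nats; captures nonlinear dependence."""
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = pxy / pxy.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    # Toy data: variable 2 depends nonlinearly on variable 0; 1 and 3 are independent noise.
    rng = np.random.default_rng(0)
    n = 5000
    x0 = rng.standard_normal(n)
    data = np.column_stack([x0,
                            rng.standard_normal(n),
                            np.sin(3 * x0) + 0.1 * rng.standard_normal(n),
                            rng.standard_normal(n)])

    d = data.shape[1]
    mi = np.zeros((d, d))
    for i in range(d):
        for j in range(i + 1, d):
            mi[i, j] = mi[j, i] = mutual_information(data[:, i], data[:, j])
    print(np.round(mi, 3))   # the (0, 2) entry stands out, revealing the nonlinear coupling
    ```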

  5. Influence diagnostics for count data under AB-BA crossover trials.

    PubMed

    Hao, Chengcheng; von Rosen, Dietrich; von Rosen, Tatjana

    2017-12-01

    This paper aims to develop diagnostic measures to assess the influence of data perturbations on estimates in AB-BA crossover studies with a Poisson distributed response. Generalised mixed linear models with normally distributed random effects are utilised. We show that in this special case, the model can be decomposed into two independent sub-models which allow to derive closed-form expressions to evaluate the changes in the maximum likelihood estimates under several perturbation schemes. The performance of the new influence measures is illustrated by simulation studies and the analysis of a real dataset.

  6. Vortex-Density Fluctuations, Energy Spectra, and Vortical Regions in Superfluid Turbulence

    NASA Astrophysics Data System (ADS)

    Baggaley, Andrew W.; Laurie, Jason; Barenghi, Carlo F.

    2012-11-01

    Measurements of the energy spectrum and of the vortex-density fluctuation spectrum in superfluid turbulence seem to contradict each other. Using a numerical model, we show that at each instance of time the total vortex line density can be decomposed into two parts: one formed by metastable bundles of coherent vortices, and one in which the vortices are randomly oriented. We show that the former is responsible for the observed Kolmogorov energy spectrum, and the latter for the spectrum of the vortex line density fluctuations.

  7. Characteristic-eddy decomposition of turbulence in a channel

    NASA Technical Reports Server (NTRS)

    Moin, Parviz; Moser, Robert D.

    1989-01-01

    Lumley's proper orthogonal decomposition technique is applied to the turbulent flow in a channel. Coherent structures are extracted by decomposing the velocity field into characteristic eddies with random coefficients. A generalization of the shot-noise expansion is used to determine the characteristic eddies in homogeneous spatial directions. Three different techniques are used to determine the phases of the Fourier coefficients in the expansion: (1) one based on the bispectrum, (2) a spatial compactness requirement, and (3) a functional continuity argument. Similar results are found from each of these techniques.

  8. A Semi-Analytical Method for the PDFs of A Ship Rolling in Random Oblique Waves

    NASA Astrophysics Data System (ADS)

    Liu, Li-qin; Liu, Ya-liu; Xu, Wan-hai; Li, Yan; Tang, You-gang

    2018-03-01

    The PDFs (probability density functions) and probability of a ship rolling under the random parametric and forced excitations were studied by a semi-analytical method. The rolling motion equation of the ship in random oblique waves was established. The righting arm obtained by the numerical simulation was approximately fitted by an analytical function. The irregular waves were decomposed into two Gauss stationary random processes, and the CARMA (2, 1) model was used to fit the spectral density function of parametric and forced excitations. The stochastic energy envelope averaging method was used to solve the PDFs and the probability. The validity of the semi-analytical method was verified by the Monte Carlo method. The C11 ship was taken as an example, and the influences of the system parameters on the PDFs and probability were analyzed. The results show that the probability of ship rolling is affected by the characteristic wave height, wave length, and the heading angle. In order to provide proper advice for the ship's manoeuvring, the parametric excitations should be considered appropriately when the ship navigates in the oblique seas.

  9. Proposals for Solutions to Problems Related to the Use of F-34 (SFP) and High Sulphur Diesel on Ground Equipment Using Advanced Reduction Emission Technologies (Propositions de solutions aux problemes lies a l’utilisation de F-34 (SFP) et de diesel a haute teneur en soufre pour le materiel terrestre disposant de technologies avancees de reduction des emissions)

    DTIC Science & Technology

    2008-09-01

    In a two-stage process the urea decomposes to ammonia (NH3) which then reacts with the nitrogen oxides (NOx) and leads to formation of nitrogen and...Sulphur Fuel (HSF) is a potential problem to NATO forces when vehicles and equipment are fitted with advanced emission reduction devices that require Low...worldwide available, standard fuel (F-34) and equipment capable of using such high sulphur fuels (HSF). Recommendations • Future equipment fitted with

  10. Enabling fast, stable and accurate peridynamic computations using multi-time-step integration

    DOE PAGES

    Lindsay, P.; Parks, M. L.; Prakash, A.

    2016-04-13

    Peridynamics is a nonlocal extension of classical continuum mechanics that is well-suited for solving problems with discontinuities such as cracks. This paper extends the peridynamic formulation to decompose a problem domain into a number of smaller overlapping subdomains and to enable the use of different time steps in different subdomains. This approach allows regions of interest to be isolated and solved at a small time step for increased accuracy while the rest of the problem domain can be solved at a larger time step for greater computational efficiency. Lastly, the performance of the proposed method in terms of stability, accuracy, and computational cost is examined and several numerical examples are presented to corroborate the findings.

  11. Exploiting Quantum Resonance to Solve Combinatorial Problems

    NASA Technical Reports Server (NTRS)

    Zak, Michail; Fijany, Amir

    2006-01-01

    Quantum resonance would be exploited in a proposed quantum-computing approach to the solution of combinatorial optimization problems. In quantum computing in general, one takes advantage of the fact that an algorithm cannot be decoupled from the physical effects available to implement it. Prior approaches to quantum computing have involved exploitation of only a subset of known quantum physical effects, notably including parallelism and entanglement, but not including resonance. In the proposed approach, one would utilize the combinatorial properties of tensor-product decomposability of unitary evolution of many-particle quantum systems for physically simulating solutions to NP-complete problems (a class of problems that are intractable with respect to classical methods of computation). In this approach, reinforcement and selection of a desired solution would be executed by means of quantum resonance. Classes of NP-complete problems that are important in practice and could be solved by the proposed approach include planning, scheduling, search, and optimal design.

  12. Removal of methylmercury and tributyltin (TBT) using marine microorganisms.

    PubMed

    Lee, Seong Eon; Chung, Jin Wook; Won, Ho Shik; Lee, Dong Sup; Lee, Yong-Woo

    2012-02-01

    Two marine species of bacteria were isolated that are capable of degrading organometallic contaminants: Pseudomonas balearica, which decomposes methylmercury; and Shewanella putrefaciens, which decomposes tributyltin. P. balearica decomposed 97% of methylmercury (20.0 μg/L) into inorganic mercury after 3 h, while S. putrefaciens decomposed 88% of tributyltin (55.3 μg Sn/L) in real wastewater after 36 h. These data indicate that the two bacteria efficiently decomposed the targeted substances and may be applied to real wastewater.

  13. Sulfate minerals: a problem for the detection of organic compounds on Mars?

    PubMed

    Lewis, James M T; Watson, Jonathan S; Najorka, Jens; Luong, Duy; Sephton, Mark A

    2015-03-01

    The search for in situ organic matter on Mars involves encounters with minerals and requires an understanding of their influence on lander and rover experiments. Inorganic host materials can be helpful by aiding the preservation of organic compounds or unhelpful by causing the destruction of organic matter during thermal extraction steps. Perchlorates are recognized as confounding minerals for thermal degradation studies. On heating, perchlorates can decompose to produce oxygen, which then oxidizes organic matter. Other common minerals on Mars, such as sulfates, may also produce oxygen upon thermal decay, presenting an additional complication. Different sulfate species decompose within a large range of temperatures. We performed a series of experiments on a sample containing the ferric sulfate jarosite. The sulfate ions within jarosite break down from 500 °C. Carbon dioxide detected during heating of the sample was attributed to oxidation of organic matter. A laboratory standard of ferric sulfate hydrate released sulfur dioxide from 550 °C, and an oxygen peak was detected in the products. Calcium sulfate did not decompose below 1000 °C. Oxygen released from sulfate minerals may have already affected organic compound detection during in situ thermal experiments on Mars missions. A combination of preliminary mineralogical analyses and suitably selected pyrolysis temperatures may increase future success in the search for past or present life on Mars.

  14. A methodology to find the elementary landscape decomposition of combinatorial optimization problems.

    PubMed

    Chicano, Francisco; Whitley, L Darrell; Alba, Enrique

    2011-01-01

    A small number of combinatorial optimization problems have search spaces that correspond to elementary landscapes, where the objective function f is an eigenfunction of the Laplacian that describes the neighborhood structure of the search space. Many problems are not elementary; however, the objective function of a combinatorial optimization problem can always be expressed as a superposition of multiple elementary landscapes if the underlying neighborhood used is symmetric. This paper presents theoretical results that provide the foundation for algebraic methods that can be used to decompose the objective function of an arbitrary combinatorial optimization problem into a sum of subfunctions, where each subfunction is an elementary landscape. Many steps of this process can be automated, and indeed a software tool could be developed that assists the researcher in finding a landscape decomposition. This methodology is then used to show that the subset sum problem is a superposition of two elementary landscapes, and to show that the quadratic assignment problem is a superposition of three elementary landscapes.
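
    The defining property of an elementary landscape, that the centered objective f - f̄ is an eigenvector of the graph Laplacian of the neighborhood structure, can be checked numerically on a tiny instance. The sketch below uses the ONEMAX function under the bit-flip neighborhood purely as an illustrative toy problem; it is not the subset sum or quadratic assignment decomposition derived in the paper.

    ```python
    import numpy as np
    from itertools import product

    # Check the elementary-landscape condition L (f - f_bar) = lam * (f - f_bar)
    # for ONEMAX (count of ones) under the single bit-flip neighborhood of the n-cube.
    n = 6
    states = np.array(list(product([0, 1], repeat=n)))
    f = states.sum(axis=1).astype(float)

    # Adjacency of the bit-flip neighborhood: two states are neighbors if they
    # differ in exactly one bit (Hamming distance 1).
    ham = (states[:, None, :] != states[None, :, :]).sum(axis=2)
    A = (ham == 1).astype(float)
    L = np.diag(A.sum(axis=1)) - A                  # graph Laplacian of the search space

    g = f - f.mean()
    Lg = L @ g
    idx = int(np.argmax(np.abs(g)))
    lam = Lg[idx] / g[idx]
    print("eigenvalue estimate:", lam)              # 2.0 for ONEMAX
    print("is elementary:", np.allclose(Lg, lam * g))
    ```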

  15. Macroscopic damping model for structural dynamics with random polycrystalline configurations

    NASA Astrophysics Data System (ADS)

    Yang, Yantao; Cui, Junzhi; Yu, Yifan; Xiang, Meizhen

    2018-06-01

    In this paper the macroscopic damping model for dynamical behavior of the structures with random polycrystalline configurations at micro-nano scales is established. First, the global motion equation of a crystal is decomposed into a set of motion equations with independent single degree of freedom (SDOF) along normal discrete modes, and then damping behavior is introduced into each SDOF motion. Through the interpolation of discrete modes, the continuous representation of damping effects for the crystal is obtained. Second, from energy conservation law the expression of the damping coefficient is derived, and the approximate formula of damping coefficient is given. Next, the continuous damping coefficient for polycrystalline cluster is expressed, the continuous dynamical equation with damping term is obtained, and then the concrete damping coefficients for a polycrystalline Cu sample are shown. Finally, by using statistical two-scale homogenization method, the macroscopic homogenized dynamical equation containing damping term for the structures with random polycrystalline configurations at micro-nano scales is set up.

  16. Decomposed bodies--still an unrewarding autopsy?

    PubMed

    Ambade, Vipul Namdeorao; Keoliya, Ajay Narmadaprasad; Deokar, Ravindra Baliram; Dixit, Pradip Gangadhar

    2011-04-01

    One of the classic mistakes in forensic pathology is to regard the autopsy of a decomposed body as unrewarding. The present study was undertaken with a view to debunking this myth and determining the characteristic pattern in decomposed bodies brought for medicolegal autopsy. From a total of 4997 medicolegal deaths reported at an Apex Medical Centre, Yeotmal, a rural district of Maharashtra, over a seven-year study period, only 180 cases were decomposed, representing 3.6% of the total medicolegal autopsies, a rate of 1.5 decomposed bodies per 100,000 population per year. Male predominance (79.4%) was seen in decomposed bodies, with a male to female ratio of 3.9:1. Most of the victims were between the ages of 31 and 60 years, with a peak at 31-40 years (26.7%) followed by 41-50 years (19.4%). Age above 60 years was found in 8.6% of cases. Married victims (64.4%) outnumbered unmarried ones. Most of the decomposed bodies were complete (83.9%) and identified (75%); when the body was incomplete/mutilated or skeletonised, 57.7% of the deceased remained unidentified. The cause and manner of death were ascertained in 85.6% and 81.1% of cases, respectively. Drowning (35.6%) was the commonest cause of death in decomposed bodies, with suicide (52.8%) as the commonest manner of death. Decomposed bodies were most commonly recovered from open places (43.9%), followed by water sources (43.3%) and enclosed places (12.2%). Most of the decomposed bodies were retrieved from wells (49 cases), followed by barren land (27 cases) and forest (17 cases). 83.8% of the decomposed bodies were recovered within 72 h, and only in 16.2% of cases was the time since death more than 72 h, these being mostly recovered from barren land, forest and river. Most of the decomposed bodies were found in the summer season (42.8%), with a peak in the month of May. Despite technical difficulties in handling the body and artefactual alteration of the tissue, the decomposed body may still reveal the cause and manner of death in a significant number of cases. Copyright © 2011 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  17. Quantitative Diagnosis of Continuous-Valued, Steady-State Systems

    NASA Technical Reports Server (NTRS)

    Rouquette, N.

    1995-01-01

    Quantitative diagnosis involves numerically estimating the values of unobservable parameters that best explain the observed parameter values. We consider quantitative diagnosis for continuous, lumped-parameter, steady-state physical systems because such models are easy to construct and the diagnosis problem is considerably simpler than that for corresponding dynamic models. To further tackle the difficulties of numerically inverting a simulation model to compute a diagnosis, we propose to decompose a physical system model in terms of feedback loops. This decomposition reduces the dimension of the problem and consequently decreases the diagnosis search space. We illustrate this approach on a model of a thermal control system studied in earlier research.

  18. The random fractional matching problem

    NASA Astrophysics Data System (ADS)

    Lucibello, Carlo; Malatesta, Enrico M.; Parisi, Giorgio; Sicuro, Gabriele

    2018-05-01

    We consider two formulations of the random-link fractional matching problem, a relaxed version of the more standard random-link (integer) matching problem. In one formulation, we allow each node to be linked to itself in the optimal matching configuration. In the other one, on the contrary, such a link is forbidden. Both problems have the same asymptotic average optimal cost as the random-link matching problem on the complete graph. Using a replica approach and previous results of Wästlund (2010 Acta Mathematica 204 91–150), we analytically derive the finite-size corrections to the asymptotic optimal cost. We compare our results with numerical simulations and we discuss the main differences between random-link fractional matching problems and the random-link matching problem.

  19. Understanding disordered systems through numerical simulation and algorithm development

    NASA Astrophysics Data System (ADS)

    Sweeney, Sean Michael

    Disordered systems arise in many physical contexts. Not all matter is uniform, and impurities or heterogeneities can be modeled by fixed random disorder. Numerous complex networks also possess fixed disorder, leading to applications in transportation systems, telecommunications, social networks, and epidemic modeling, to name a few. Due to their random nature and power law critical behavior, disordered systems are difficult to study analytically. Numerical simulation can help overcome this hurdle by allowing for the rapid computation of system states. In order to get precise statistics and extrapolate to the thermodynamic limit, large systems must be studied over many realizations. Thus, innovative algorithm development is essential in order to reduce memory or running time requirements of simulations. This thesis presents a review of disordered systems, as well as a thorough study of two particular systems through numerical simulation, algorithm development and optimization, and careful statistical analysis of scaling properties. Chapter 1 provides a thorough overview of disordered systems, the history of their study in the physics community, and the development of techniques used to study them. Topics of quenched disorder, phase transitions, the renormalization group, criticality, and scale invariance are discussed. Several prominent models of disordered systems are also explained. Lastly, analysis techniques used in studying disordered systems are covered. In Chapter 2, minimal spanning trees on critical percolation clusters are studied, motivated in part by an analytic perturbation expansion by Jackson and Read that I check against numerical calculations. This system has a direct mapping to the ground state of the strongly disordered spin glass. We compute the path length fractal dimension of these trees in dimensions d = {2, 3, 4, 5} and find our results to be compatible with the analytic results suggested by Jackson and Read. In Chapter 3, the random bond Ising ferromagnet is studied, which is especially useful since it serves as a prototype for more complicated disordered systems such as the random field Ising model and spin glasses. We investigate the effect that changing boundary spins has on the locations of domain walls in the interior of the random ferromagnet system. We provide an analytic proof that ground state domain walls in the two dimensional system are decomposable, and we map these domain walls to a shortest paths problem. By implementing a multiple-source shortest paths algorithm developed by Philip Klein, we are able to efficiently probe domain wall locations for all possible configurations of boundary spins. We consider lattices with uncorrelated disorder, as well as disorder that is spatially correlated according to a power law. We present numerical results for the scaling exponent governing the probability that a domain wall can be induced that passes through a particular location in the system's interior, and we compare these results to previous results on the directed polymer problem.
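
    As a rough illustration of the kind of numerical experiment described in Chapter 2, the Python sketch below builds a small 2D lattice with random bond weights, extracts its minimum spanning tree with networkx, and measures the tree path length between opposite corners. The lattice size and the uniform weight distribution are placeholder assumptions; a faithful replication would restrict the tree to critical percolation clusters, use strongly disordered weights, and average over many realizations.

      import networkx as nx
      import random

      # Illustrative setup: a 2D grid with i.i.d. random bond weights.  The thesis
      # works on critical percolation clusters and strongly disordered weights;
      # here a plain uniform distribution keeps the sketch short.
      L = 32
      G = nx.grid_2d_graph(L, L)
      rng = random.Random(0)
      for u, v in G.edges():
          G[u][v]["weight"] = rng.random()

      # Minimal spanning tree of the disordered lattice.
      T = nx.minimum_spanning_tree(G, weight="weight")

      # Chemical (path) length along the tree between two opposite corners; the
      # scaling of this length with L gives the path-length fractal dimension.
      path = nx.shortest_path(T, source=(0, 0), target=(L - 1, L - 1))
      print("tree path length between corners:", len(path) - 1)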

  20. Cooperative Coevolution with Formula-Based Variable Grouping for Large-Scale Global Optimization.

    PubMed

    Wang, Yuping; Liu, Haiyan; Wei, Fei; Zong, Tingting; Li, Xiaodong

    2017-08-09

    For a large-scale global optimization (LSGO) problem, divide-and-conquer is usually considered an effective strategy to decompose the problem into smaller subproblems, each of which can then be solved individually. Among these decomposition methods, variable grouping has been shown to be promising in recent years. Existing variable grouping methods usually assume the problem to be black-box (i.e., assuming that an analytical model of the objective function is unknown), and they attempt to learn appropriate variable grouping that would allow for a better decomposition of the problem. In such cases, these variable grouping methods do not make a direct use of the formula of the objective function. However, it can be argued that many real-world problems are white-box problems, that is, the formulas of objective functions are often known a priori. These formulas of the objective functions provide rich information which can then be used to design an effective variable grouping method. In this article, a formula-based grouping strategy (FBG) for white-box problems is first proposed. It groups variables directly via the formula of an objective function, which usually consists of a finite number of operations (i.e., the four arithmetic operations "+", "-", "×", "÷" and composite operations of basic elementary functions). In FBG, the operations are classified into two classes: one resulting in nonseparable variables, and the other resulting in separable variables. In FBG, variables can be automatically grouped into a suitable number of non-interacting subcomponents, with variables in each subcomponent being interdependent. FBG can easily be applied to any white-box problem and can be integrated into a cooperative coevolution framework. Based on FBG, a novel cooperative coevolution algorithm with formula-based variable grouping (so-called CCF) is proposed in this article for decomposing a large-scale white-box problem into several smaller subproblems and optimizing them separately. To further enhance the efficiency of CCF, a new local search scheme is designed to improve the solution quality. To verify the efficiency of CCF, experiments are conducted on the standard LSGO benchmark suites of CEC'2008, CEC'2010, CEC'2013, and a real-world problem. Our results suggest that the performance of CCF is very competitive when compared with those of the state-of-the-art LSGO algorithms.
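
    A much-simplified sketch of the formula-based grouping idea is shown below: an explicit (white-box) objective is split into additive terms with sympy, and variables that co-occur inside the same term are merged into one group with a small union-find. The example objective is made up, and the real FBG strategy classifies the individual operations in far more detail than this additive-separability shortcut.

      import sympy as sp

      x1, x2, x3, x4, x5 = sp.symbols("x1:6")
      # Hypothetical white-box objective: additively separable into blocks.
      f = (x1 * x2 + sp.sin(x2 + x3)) + x4**2 + sp.exp(x5)

      # Variables that appear inside the same additive term interact; merge the
      # variable sets of all terms with a simple union-find.
      parent = {}
      def find(a):
          parent.setdefault(a, a)
          while parent[a] != a:
              parent[a] = parent[parent[a]]
              a = parent[a]
          return a
      def union(a, b):
          parent[find(a)] = find(b)

      for term in sp.Add.make_args(sp.expand(f)):
          syms = sorted(term.free_symbols, key=str)
          for s in syms:
              union(s, syms[0])

      groups = {}
      for s in f.free_symbols:
          groups.setdefault(find(s), set()).add(s)
      print([sorted(map(str, g)) for g in groups.values()])
      # e.g. [['x1', 'x2', 'x3'], ['x4'], ['x5']]  -> three independent subproblems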

  1. A hierarchy of generalized Jaulent-Miodek equations and their explicit solutions

    NASA Astrophysics Data System (ADS)

    Geng, Xianguo; Guan, Liang; Xue, Bo

    A hierarchy of generalized Jaulent-Miodek (JM) equations related to a new spectral problem with energy-dependent potentials is proposed. With the help of the Lax matrix and elliptic variables, the generalized JM hierarchy is decomposed into two systems of solvable ordinary differential equations. Explicit theta function representations of the meromorphic function and the Baker-Akhiezer function are constructed, and the solutions of the hierarchy are obtained based on the theory of algebraic curves.

  2. Quality improvement of diagnosis of the electromyography data based on statistical characteristics of the measured signals

    NASA Astrophysics Data System (ADS)

    Selivanova, Karina G.; Avrunin, Oleg G.; Zlepko, Sergii M.; Romanyuk, Sergii O.; Zabolotna, Natalia I.; Kotyra, Andrzej; Komada, Paweł; Smailova, Saule

    2016-09-01

    Research and systematization of motor disorders, taking into account clinical and neurophysiologic phenomena, is an important and current problem in neurology. The article describes a technique for decomposing surface electromyography (EMG) signals using Principal Component Analysis. The decomposition is achieved by a set of algorithms developed specifically for EMG analysis. The accuracy was verified by calculation of the Mahalanobis distance and the probability of error.
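
    A generic Python sketch of a PCA decomposition of multichannel surface EMG is given below using scikit-learn. The synthetic signal, channel count, and number of retained components are placeholders; the paper's actual pipeline and its Mahalanobis-distance validation are not reproduced here.

      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(0)

      # Synthetic stand-in for surface EMG: 8 channels, 5 s at 1 kHz, built from
      # two latent sources mixed into the channels plus measurement noise.
      fs, t = 1000, np.arange(0, 5, 1 / 1000)
      sources = np.vstack([np.sin(2 * np.pi * 30 * t) * (t > 2),
                           rng.standard_normal(t.size) * (t < 3)])
      mixing = rng.standard_normal((8, 2))
      emg = mixing @ sources + 0.1 * rng.standard_normal((8, t.size))

      # PCA decomposition: rows = observations (time samples), columns = channels.
      pca = PCA(n_components=3)
      components = pca.fit_transform(emg.T)        # principal-component time courses
      print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))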

  3. A system decomposition approach to the design of functional observers

    NASA Astrophysics Data System (ADS)

    Fernando, Tyrone; Trinh, Hieu

    2014-09-01

    This paper reports a system decomposition that allows the construction of a minimum-order functional observer using a state observer design approach. The system decomposition translates the functional observer design problem to that of a state observer for a smaller decomposed subsystem. Functional observability indices are introduced, and a closed-form expression for the minimum order required for a functional observer is derived in terms of those functional observability indices.

  4. Random deflections of a string on an elastic foundation.

    NASA Technical Reports Server (NTRS)

    Sanders, J. L., Jr.

    1972-01-01

    The paper is concerned with the problem of a taut string on a random elastic foundation subjected to random loads. The boundary value problem is transformed into an initial value problem by the method of invariant imbedding. Fokker-Planck equations for the random initial value problem are formulated and solved in some special cases. The analysis leads to a complete characterization of the random deflection function.

  5. Modeling Women's Menstrual Cycles using PICI Gates in Bayesian Network.

    PubMed

    Zagorecki, Adam; Łupińska-Dubicka, Anna; Voortman, Mark; Druzdzel, Marek J

    2016-03-01

    A major difficulty in building Bayesian network (BN) models is the size of conditional probability tables, which grow exponentially in the number of parents. One way of dealing with this problem is through parametric conditional probability distributions that usually require only a number of parameters that is linear in the number of parents. In this paper, we introduce a new class of parametric models, the Probabilistic Independence of Causal Influences (PICI) models, that aim at lowering the number of parameters required to specify local probability distributions, but are still capable of efficiently modeling a variety of interactions. A subset of PICI models is decomposable and this leads to significantly faster inference as compared to models that cannot be decomposed. We present an application of the proposed method to learning dynamic BNs for modeling a woman's menstrual cycle. We show that PICI models are especially useful for parameter learning from small data sets and lead to higher parameter accuracy than when learning CPTs.
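
    The noisy-OR gate is the classic example of an independence-of-causal-influences model of the kind PICI generalizes; the sketch below shows how such a model specifies a conditional distribution over n binary parents with only n parameters instead of a 2^n-row table. The parameter values are arbitrary and the code only illustrates the parameter-count argument, not the PICI gates themselves.

      import itertools

      # Noisy-OR, the classic independence-of-causal-influences model: each parent
      # Xi, when present, independently fails to cause Y with probability q[i].
      q = [0.3, 0.2, 0.4, 0.1]          # one "inhibitor" parameter per parent

      def p_y_given_parents(parent_states):
          """P(Y=1 | X1..Xn) from n parameters instead of a 2^n-row table."""
          prob_all_inhibited = 1.0
          for qi, xi in zip(q, parent_states):
              if xi:
                  prob_all_inhibited *= qi
          return 1.0 - prob_all_inhibited

      # The full conditional probability table can still be expanded on demand.
      for states in itertools.product([0, 1], repeat=len(q)):
          print(states, round(p_y_given_parents(states), 4))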

  6. A facile self-assembly approach to prepare palladium/carbon nanotubes catalyst for the electro-oxidation of ethanol

    NASA Astrophysics Data System (ADS)

    Wen, Cuilian; Zhang, Xinyuan; Wei, Ying; Zhang, Teng; Chen, Changxin

    2018-02-01

    A facile self-assembly approach is reported to prepare a palladium/carbon nanotubes (Pd/CNTs) catalyst for the electro-oxidation of ethanol. In this method, the Pd-oleate/CNTs was decomposed into the Pd/CNTs at an optimal temperature of 195 °C in air, in which no inert gas is needed for the thermal decomposition process due to the low temperature used, and the decomposed products are also environmentally friendly. The prepared Pd/CNTs catalyst has a high metallic Pd0 content, and the Pd particles in the catalyst are well dispersed, uniform in size with an average size of ~2.1 nm, and evenly distributed on the CNTs. By employing our strategy, the problems of exfoliation of the metal particles from the CNTs and aggregation of the metal particles can be solved. Compared with the commercial Pd/C catalyst, the prepared Pd/CNTs catalyst exhibits a much higher electrochemical activity and stability for the electro-oxidation of ethanol in direct ethanol fuel cells.

  7. Using "big data" to optimally model hydrology and water quality across expansive regions

    USGS Publications Warehouse

    Roehl, E.A.; Cook, J.B.; Conrads, P.A.

    2009-01-01

    This paper describes a new divide and conquer approach that leverages big environmental data, utilizing all available categorical and time-series data without subjectivity, to empirically model hydrologic and water-quality behaviors across expansive regions. The approach decomposes large, intractable problems into smaller ones that are optimally solved; decomposes complex signals into behavioral components that are easier to model with "sub-models"; and employs a sequence of numerically optimizing algorithms that include time-series clustering, nonlinear, multivariate sensitivity analysis and predictive modeling using multi-layer perceptron artificial neural networks, and classification for selecting the best sub-models to make predictions at new sites. This approach has many advantages over traditional modeling approaches, including being faster and less expensive, more comprehensive in its use of available data, and more accurate in representing a system's physical processes. This paper describes the application of the approach to model groundwater levels in Florida, stream temperatures across Western Oregon and Wisconsin, and water depths in the Florida Everglades. © 2009 ASCE.
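
    The overall divide-and-conquer pattern (cluster the data, then fit a sub-model per cluster) can be sketched in a few lines of Python with scikit-learn; the synthetic data, cluster count, and MLP settings below are placeholders and this is not the authors' production pipeline.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(1)

      # Synthetic stand-in: 300 "sites", 4 explanatory features, one response that
      # follows different regimes in different parts of feature space.
      X = rng.uniform(-1, 1, size=(300, 4))
      y = np.where(X[:, 0] > 0, np.sin(3 * X[:, 1]), X[:, 2] ** 2) + 0.05 * rng.standard_normal(300)

      # Step 1: decompose the large problem into smaller ones by clustering.
      labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

      # Step 2: fit one multilayer-perceptron sub-model per cluster.
      sub_models = {}
      for c in np.unique(labels):
          idx = labels == c
          sub_models[c] = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                                       random_state=0).fit(X[idx], y[idx])
          print(f"cluster {c}: {idx.sum()} samples, R^2 = {sub_models[c].score(X[idx], y[idx]):.2f}")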

  8. New evidence favoring multilevel decomposition and optimization

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.; Polignone, Debra A.

    1990-01-01

    The issue of the utility of multilevel decomposition and optimization remains controversial. To date, only the structural optimization community has actively developed and promoted multilevel optimization techniques. However, even this community acknowledges that multilevel optimization is ideally suited for a rather limited set of problems. It is warned that decomposition typically requires eliminating local variables by using global variables and that this in turn causes ill-conditioning of the multilevel optimization by adding equality constraints. The purpose is to suggest a new multilevel optimization technique. This technique uses behavior variables, in addition to design variables and constraints, to decompose the problem. The new technique removes the need for equality constraints, simplifies the decomposition of the design problem, simplifies the programming task, and improves the convergence speed of multilevel optimization compared to conventional optimization.

  9. Fractional-order TV-L2 model for image denoising

    NASA Astrophysics Data System (ADS)

    Chen, Dali; Sun, Shenshen; Zhang, Congrong; Chen, YangQuan; Xue, Dingyu

    2013-10-01

    This paper proposes a new fractional-order total variation (TV) denoising method, which provides a much more elegant and effective way of treating problems of algorithm implementation, ill-posed inversion, regularization parameter selection and the blocky effect. Two fractional-order TV-L2 models are constructed for image denoising. The majorization-minimization (MM) algorithm is used to decompose these two complex fractional TV optimization problems into a set of linear optimization problems, which can be solved by the conjugate gradient algorithm. The final adaptive numerical procedure is given. Finally, we report experimental results which show that the proposed methodology avoids the blocky effect and achieves state-of-the-art performance. In addition, two medical image processing experiments are presented to demonstrate the validity of the proposed methodology.
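
    A hedged 1D sketch of the majorization-minimization idea is given below for ordinary (first-order) TV-L2 denoising: each MM step majorizes the TV term by a weighted quadratic, so the update reduces to a linear system solved by conjugate gradients. The fractional-order difference operator of the paper is replaced by a plain first difference for brevity, and the signal and regularization weight are arbitrary.

      import numpy as np
      from scipy.sparse import eye, diags
      from scipy.sparse.linalg import cg

      rng = np.random.default_rng(0)

      # Noisy piecewise-constant 1D signal.
      n = 200
      clean = np.where(np.arange(n) < n // 2, 0.0, 1.0)
      y = clean + 0.15 * rng.standard_normal(n)

      # First-difference operator (the paper uses a fractional-order difference).
      D = diags([-np.ones(n - 1), np.ones(n - 1)], offsets=[0, 1], shape=(n - 1, n))
      lam, eps = 1.0, 1e-6
      u = y.copy()

      # Majorization-minimization: |Du| is majorized by a weighted quadratic, so
      # each iteration reduces to the linear system (I + lam * D^T W D) u = y,
      # which is solved by conjugate gradients.
      for _ in range(30):
          w = 1.0 / np.maximum(np.abs(D @ u), eps)
          A = eye(n) + lam * D.T @ diags(w) @ D
          u, _ = cg(A, y, x0=u)

      print("RMSE vs clean signal:", np.sqrt(np.mean((u - clean) ** 2)).round(4))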

  10. ParaExp Using Leapfrog as Integrator for High-Frequency Electromagnetic Simulations

    NASA Astrophysics Data System (ADS)

    Merkel, M.; Niyonzima, I.; Schöps, S.

    2017-12-01

    Recently, ParaExp was proposed for the time integration of linear hyperbolic problems. It splits the time interval of interest into subintervals and computes the solution on each subinterval in parallel. The overall solution is decomposed into a particular solution defined on each subinterval with zero initial conditions and a homogeneous solution propagated by the matrix exponential applied to the initial conditions. The efficiency of the method depends on fast approximations of this matrix exponential based on recent results from numerical linear algebra. This paper deals with the application of ParaExp in combination with Leapfrog to electromagnetic wave problems in time domain. Numerical tests are carried out for a simple toy problem and a realistic spiral inductor model discretized by the Finite Integration Technique.
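
    The ParaExp splitting can be illustrated on a small linear system y' = A y + g(t): particular solutions with zero initial conditions are computed independently on each subinterval (the parallelizable part), and the homogeneous pieces are propagated to the final time with the matrix exponential. The matrix A, source g, and partition below are arbitrary, and the Leapfrog/Finite Integration Technique discretization of the paper is not included.

      import numpy as np
      from scipy.linalg import expm
      from scipy.integrate import solve_ivp

      # Small linear test system  y' = A y + g(t),  y(0) = y0.
      A = np.array([[0.0, 1.0], [-4.0, -0.1]])
      g = lambda t: np.array([0.0, np.sin(2.0 * t)])
      y0 = np.array([1.0, 0.0])
      T, K = 4.0, 4                                  # time horizon, subintervals
      edges = np.linspace(0.0, T, K + 1)

      # Particular solutions: zero initial condition on each subinterval (these are
      # the pieces that could be integrated in parallel).
      v_end = []
      for a, b in zip(edges[:-1], edges[1:]):
          sol = solve_ivp(lambda t, y: A @ y + g(t), (a, b), np.zeros(2),
                          rtol=1e-10, atol=1e-12)
          v_end.append(sol.y[:, -1])

      # Homogeneous part: propagate y0 and each particular endpoint to t = T with
      # the matrix exponential.
      yT = expm(A * T) @ y0 + v_end[-1]
      for j in range(K - 1):
          yT += expm(A * (T - edges[j + 1])) @ v_end[j]

      # Reference: direct integration over [0, T].
      ref = solve_ivp(lambda t, y: A @ y + g(t), (0.0, T), y0,
                      rtol=1e-10, atol=1e-12).y[:, -1]
      print("ParaExp vs direct:", yT, ref)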

  11. Structural factoring approach for analyzing stochastic networks

    NASA Technical Reports Server (NTRS)

    Hayhurst, Kelly J.; Shier, Douglas R.

    1991-01-01

    The problem of finding the distribution of the shortest path length through a stochastic network is investigated. A general algorithm for determining the exact distribution of the shortest path length is developed based on the concept of conditional factoring, in which a directed, stochastic network is decomposed into an equivalent set of smaller, generally less complex subnetworks. Several network constructs are identified and exploited to reduce significantly the computational effort required to solve a network problem relative to complete enumeration. This algorithm can be applied to two important classes of stochastic path problems: determining the critical path distribution for acyclic networks and the exact two-terminal reliability for probabilistic networks. Computational experience with the algorithm was encouraging and allowed the exact solution of networks that have been previously analyzed only by approximation techniques.
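
    For scale, the complete-enumeration baseline that the factoring algorithm is designed to beat can be written in a few lines; the toy network and discrete arc-length distributions below are made up.

      import itertools
      from collections import Counter

      # Toy stochastic network: each arc takes one of a few lengths with given
      # probabilities.  Paths from source 's' to sink 't': s-a-t and s-b-t.
      arcs = {
          ("s", "a"): [(1, 0.5), (3, 0.5)],
          ("a", "t"): [(2, 0.7), (5, 0.3)],
          ("s", "b"): [(2, 0.6), (4, 0.4)],
          ("b", "t"): [(1, 0.5), (6, 0.5)],
      }
      paths = [[("s", "a"), ("a", "t")], [("s", "b"), ("b", "t")]]

      # Complete enumeration of all joint arc-length realizations.
      dist = Counter()
      names = list(arcs)
      for combo in itertools.product(*(arcs[e] for e in names)):
          length = {e: v for e, (v, _) in zip(names, combo)}
          prob = 1.0
          for _, p in combo:
              prob *= p
          shortest = min(sum(length[e] for e in path) for path in paths)
          dist[shortest] += prob

      for value in sorted(dist):
          print(f"P(shortest path = {value}) = {dist[value]:.4f}")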

  12. Modular representation of layered neural networks.

    PubMed

    Watanabe, Chihiro; Hiramatsu, Kaoru; Kashino, Kunio

    2018-01-01

    Layered neural networks have greatly improved the performance of various applications including image processing, speech recognition, natural language processing, and bioinformatics. However, it is still difficult to discover or interpret knowledge from the inference provided by a layered neural network, since its internal representation has many nonlinear and complex parameters embedded in hierarchical layers. Therefore, it becomes important to establish a new methodology by which layered neural networks can be understood. In this paper, we propose a new method for extracting a global and simplified structure from a layered neural network. Based on network analysis, the proposed method detects communities or clusters of units with similar connection patterns. We show its effectiveness by applying it to three use cases. (1) Network decomposition: it can decompose a trained neural network into multiple small independent networks, thus dividing the problem and reducing the computation time. (2) Training assessment: the appropriateness of a trained result with a given hyperparameter or randomly chosen initial parameters can be evaluated by using a modularity index. (3) Data analysis: in practical data, it reveals the community structure in the input, hidden, and output layers, which serves as a clue for discovering knowledge from a trained neural network. Copyright © 2017 Elsevier Ltd. All rights reserved.
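
    A minimal sketch of the general idea (treat the trained weights as a graph between units and detect communities of similarly connected units) is given below with networkx. The random weight matrix, the magnitude threshold, and the greedy-modularity detector are stand-ins; the paper defines its own decomposition method for layered networks.

      import numpy as np
      import networkx as nx
      from networkx.algorithms import community

      rng = np.random.default_rng(0)

      # Stand-in for trained weights between a 10-unit and an 8-unit layer; weak
      # connections are thresholded away before building the unit graph.
      W = rng.standard_normal((10, 8))
      G = nx.Graph()
      for i in range(W.shape[0]):
          for j in range(W.shape[1]):
              if abs(W[i, j]) > 1.0:
                  G.add_edge(f"in{i}", f"out{j}", weight=abs(W[i, j]))

      # Communities of units with similar (strong) connection patterns.
      communities = community.greedy_modularity_communities(G, weight="weight")
      for k, com in enumerate(communities):
          print(f"community {k}: {sorted(com)}")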

  13. Modelling uncertainty in incompressible flow simulation using Galerkin based generalized ANOVA

    NASA Astrophysics Data System (ADS)

    Chakraborty, Souvik; Chowdhury, Rajib

    2016-11-01

    This paper presents a new algorithm, referred to here as Galerkin based generalized analysis of variance decomposition (GG-ANOVA), for modelling input uncertainties and their propagation in incompressible fluid flow. The proposed approach utilizes ANOVA to represent the unknown stochastic response. Further, the unknown component functions of ANOVA are represented using the generalized polynomial chaos expansion (PCE). The resulting functional form obtained by coupling the ANOVA and PCE is substituted into the stochastic Navier-Stokes equation (NSE), and Galerkin projection is employed to decompose it into a set of coupled deterministic 'Navier-Stokes-like' equations. Temporal discretization of the set of coupled deterministic equations is performed by employing the Adams-Bashforth scheme for the convective term and the Crank-Nicolson scheme for the diffusion term. Spatial discretization is performed by employing a finite difference scheme. Implementation of the proposed approach has been illustrated by two examples. In the first example, a stochastic ordinary differential equation has been considered. This example illustrates the performance of the proposed approach with a change in the nature of the random variable. Furthermore, the convergence characteristics of GG-ANOVA have also been demonstrated. The second example investigates flow through a micro channel. Two case studies, namely the stochastic Kelvin-Helmholtz instability and the stochastic vortex dipole, have been investigated. For all the problems, results obtained using GG-ANOVA are in excellent agreement with benchmark solutions.

  14. Reduced Toxicity Fuel Satellite Propulsion System

    NASA Technical Reports Server (NTRS)

    Schneider, Steven J. (Inventor)

    2001-01-01

    A reduced toxicity fuel satellite propulsion system including a reduced toxicity propellant supply for consumption in an axial class thruster and an ACS class thruster. The system includes suitable valves and conduits for supplying the reduced toxicity propellant to the ACS decomposing element of an ACS thruster. The ACS decomposing element is operative to decompose the reduced toxicity propellant into hot propulsive gases. In addition the system includes suitable valves and conduits for supplying the reduced toxicity propellant to an axial decomposing element of the axial thruster. The axial decomposing element is operative to decompose the reduced toxicity propellant into hot gases. The system further includes suitable valves and conduits for supplying a second propellant to a combustion chamber of the axial thruster, whereby the hot gases and the second propellant auto-ignite and begin the combustion process for producing thrust.

  15. Reduced Toxicity Fuel Satellite Propulsion System Including Plasmatron

    NASA Technical Reports Server (NTRS)

    Schneider, Steven J. (Inventor)

    2003-01-01

    A reduced toxicity fuel satellite propulsion system including a reduced toxicity propellant supply for consumption in an axial class thruster and an ACS class thruster. The system includes suitable valves and conduits for supplying the reduced toxicity propellant to the ACS decomposing element of an ACS thruster. The ACS decomposing element is operative to decompose the reduced toxicity propellant into hot propulsive gases. In addition the system includes suitable valves and conduits for supplying the reduced toxicity propellant to an axial decomposing element of the axial thruster. The axial decomposing element is operative to decompose the reduced toxicity propellant into hot gases. The system further includes suitable valves and conduits for supplying a second propellant to a combustion chamber of the axial thruster, whereby the hot gases and the second propellant auto-ignite and begin the combustion process for producing thrust.

  16. Allelopathy effect of rice straw on the germination and growth of Echinochloa crus-galli (L.) P. Beauv

    NASA Astrophysics Data System (ADS)

    Anuar, Fitryana Dewi Khairul; Ismail B., S.; Ahmad, Wan Juliana Wan

    2015-09-01

    A study on the effect of extracts and decomposing rice straw of MR220 CL2, MR253 and MR263 on the germination and seedling growth of Echinochloa crus-galli was conducted in the laboratory and greenhouse of Universiti Kebangsaan Malaysia. Three concentrations of aqueous extract (25, 50 and 100 g L-1) and decomposing rice straw (5, 10 and 15 g 500g-1) were used in the experiment. The experimental design used was the Complete Randomized Design (CRD) to evaluate the allelopathic effect of various concentrations of rice straw on various growth parameters of the test plants. All the experiments were carried out in three replications and conducted twice. Results showed that the rice straw extracts of all the varieties had significant effects on the germination and seedling growth of E. crus-galli. The aqueous extract of MR263 showed the greatest reduction in the germination of E. crus-galli compared to the other varieties at 100 g L-1 concentration (26% as compared to the control). As the extract concentration of rice straw increased, the radicle length of E. crus-galli was significantly reduced. The radicle and hypocotyl lengths of E. crus-galli were significantly inhibited, by 82.28% and 41.13% respectively, at 100 g L-1 concentration of the aqueous extract of MR263. Decomposing rice straw of all rice varieties inhibited germination and all the growth parameters of the test plants. As the concentration of rice debris increased, the radicle length of the test plant decreased for all treatments. Decomposing rice straw of MR220 CL2 showed the greatest inhibitory effect on the growth of E. crus-galli compared to the other varieties. It inhibited the radicle length, hypocotyl length, fresh weight and dry weight of the test plants by 63.29%, 62.61%, 83.68% and 82.49% respectively, as compared to the control. Therefore, rice straw of MR220 CL2, MR253 and MR263 showed allelopathic characteristics as they inhibited the germination and various growth parameters of E. crus-galli. However, further studies need to be conducted to determine the mode of action of the allelochemicals involved in rice allelopathy.

  17. Experimental evidence that the Ornstein-Uhlenbeck model best describes the evolution of leaf litter decomposability.

    PubMed

    Pan, Xu; Cornelissen, Johannes H C; Zhao, Wei-Wei; Liu, Guo-Fang; Hu, Yu-Kun; Prinzing, Andreas; Dong, Ming; Cornwell, William K

    2014-09-01

    Leaf litter decomposability is an important effect trait for ecosystem functioning. However, it is unknown how this effect trait evolved through plant history as a leaf 'afterlife' integrator of the evolution of multiple underlying traits upon which adaptive selection must have acted. Did decomposability evolve in a Brownian fashion without any constraints? Was evolution rapid at first and then slowed? Or was there an underlying mean-reverting process that makes the evolution of extreme trait values unlikely? Here, we test the hypothesis that the evolution of decomposability has undergone certain mean-reverting forces due to strong constraints and trade-offs in the leaf traits that have afterlife effects on litter quality to decomposers. In order to test this, we examined the leaf litter decomposability and seven key leaf traits of 48 tree species in the temperate area of China and fitted them to three evolutionary models: Brownian motion model (BM), Early burst model (EB), and Ornstein-Uhlenbeck model (OU). The OU model, which does not allow unlimited trait divergence through time, was the best fit model for leaf litter decomposability and all seven leaf traits. These results support the hypothesis that neither decomposability nor the underlying traits have been able to diverge toward progressively extreme values through evolutionary time. These results have reinforced our understanding of the relationships between leaf litter decomposability and leaf traits in an evolutionary perspective and may be a helpful step toward reconstructing deep-time carbon cycling based on taxonomic composition with more confidence.

  18. Using Volunteer Computing to Study Some Features of Diagonal Latin Squares

    NASA Astrophysics Data System (ADS)

    Vatutin, Eduard; Zaikin, Oleg; Kochemazov, Stepan; Valyaev, Sergey

    2017-12-01

    In this research, the study concerns several features of diagonal Latin squares (DLSs) of small order. The authors suggest an algorithm for computing the minimal and maximal numbers of transversals of DLSs. According to this algorithm, all DLSs of a particular order are generated, and for each square all its transversals and diagonal transversals are constructed. The algorithm was implemented and applied to DLSs of order at most 7 on a personal computer. The experiment for order 8 was performed in the volunteer computing project Gerasim@home. In addition, the problem of finding pairs of orthogonal DLSs of order 10 was considered and reduced to the Boolean satisfiability problem. The obtained problem turned out to be very hard; therefore, it was decomposed into a family of subproblems. In order to solve the problem, the volunteer computing project SAT@home was used. As a result, several dozen pairs of the described kind were found.
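
    For intuition, a transversal of an order-n Latin square is a set of n cells, one per row and one per column, containing all n symbols; the brute-force Python sketch below counts them for a hardcoded 4x4 square. The project's actual generation and transversal-enumeration machinery for orders 7-8 is, of course, far more elaborate.

      from itertools import permutations

      # A 4x4 Latin square (rows are permutations of 0..3, as are the columns).
      square = [
          [0, 1, 2, 3],
          [1, 0, 3, 2],
          [2, 3, 0, 1],
          [3, 2, 1, 0],
      ]
      n = len(square)

      # A transversal picks one cell per row and per column such that all n
      # symbols are distinct; brute force over column permutations suffices here.
      count = 0
      for cols in permutations(range(n)):
          symbols = {square[r][cols[r]] for r in range(n)}
          if len(symbols) == n:
              count += 1
      print("number of transversals:", count)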

  19. Characterizing the Fundamental Intellectual Steps Required in the Solution of Conceptual Problems

    NASA Astrophysics Data System (ADS)

    Stewart, John

    2010-02-01

    At some level, the performance of a science class must depend on what is taught, the information content of the materials and assignments of the course. The introductory calculus-based electricity and magnetism class at the University of Arkansas is examined using a catalog of the basic reasoning steps involved in the solution of problems assigned in the class. This catalog was developed by sampling popular physics textbooks for conceptual problems. The solution to each conceptual problem was decomposed into its fundamental reasoning steps. These fundamental steps are then used to quantify the distribution of conceptual content within the course. Using this characterization technique, an exceptionally detailed picture of the information flow and structure of the class can be produced. The intellectual structure of published conceptual inventories is compared with the information presented in the class, and the dependence of conceptual performance on the details of coverage is extracted.

  20. Parallel Algorithms for Monte Carlo Particle Transport Simulation on Exascale Computing Architectures

    NASA Astrophysics Data System (ADS)

    Romano, Paul Kollath

    Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there are a number of algorithmic shortcomings that would prevent their immediate adoption for full-core analyses. In this thesis, algorithms are proposed both to ameliorate the degradation in parallel efficiency typically observed for large numbers of processors and to offer a means of decomposing large tally data that will be needed for reactor analysis. A nearest-neighbor fission bank algorithm was proposed and subsequently implemented in the OpenMC Monte Carlo code. A theoretical analysis of the communication pattern shows that the expected cost is O(√N), whereas traditional fission bank algorithms are O(N) at best. The algorithm was tested on two supercomputers, the Intrepid Blue Gene/P and the Titan Cray XK7, and demonstrated nearly linear parallel scaling up to 163,840 processor cores on a full-core benchmark problem. An algorithm for reducing network communication arising from tally reduction was analyzed and implemented in OpenMC. The proposed algorithm groups only particle histories on a single processor into batches for tally purposes; in doing so, it prevents all network communication for tallies until the very end of the simulation. The algorithm was tested, again on a full-core benchmark, and shown to reduce network communication substantially. A model was developed to predict the impact of load imbalances on the performance of domain decomposed simulations. The analysis demonstrated that load imbalances in domain decomposed simulations arise from two distinct phenomena: non-uniform particle densities and non-uniform spatial leakage. The dominant performance penalty for domain decomposition was shown to come from these physical effects rather than insufficient network bandwidth or high latency. The model predictions were verified with measured data from simulations in OpenMC on a full-core benchmark problem. Finally, a novel algorithm for decomposing large tally data was proposed, analyzed, and implemented/tested in OpenMC. The algorithm relies on disjoint sets of compute processes and tally servers. The analysis showed that for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead. Tests were performed on Intrepid and Titan and demonstrated that the algorithm did indeed perform well over a wide range of parameters. (Copies available exclusively from MIT Libraries, libraries.mit.edu/docs - docs@mit.edu)

  1. The reactive bed plasma system for contamination control

    NASA Technical Reports Server (NTRS)

    Birmingham, Joseph G.; Moore, Robert R.; Perry, Tony R.

    1990-01-01

    The contamination control capabilities of the Reactive Bed Plasma (RBP) system are described by delineating the results of toxic chemical composition studies, aerosol filtration work, and other testing. The RBP system has demonstrated its capabilities to decompose toxic materials and process hazardous aerosols. The post-treatment requirements for the reaction products have possible solutions. Although additional work is required to meet NASA requirements, the RBP may be able to meet contamination control problems aboard the Space Station.

  2. Identification of the Radiative and Nonradiative Parts of a Wave Field

    NASA Astrophysics Data System (ADS)

    Hoenders, B. J.; Ferwerda, H. A.

    2001-08-01

    We present a method for decomposing a wave field, described by a second-order ordinary differential equation, into a radiative component and a nonradiative one, using a biorthonormal system related to the problem under consideration. We show that it is possible to select a special system such that the wave field is purely radiating. We discuss the differences and analogies with approaches which, unlike our approach, start from the corresponding sources of the field.

  3. Massively Parallel Dantzig-Wolfe Decomposition Applied to Traffic Flow Scheduling

    NASA Technical Reports Server (NTRS)

    Rios, Joseph Lucio; Ross, Kevin

    2009-01-01

    Optimal scheduling of air traffic over the entire National Airspace System is a computationally difficult task. To speed computation, Dantzig-Wolfe decomposition is applied to a known linear integer programming approach for assigning delays to flights. The optimization model is proven to have the block-angular structure necessary for Dantzig-Wolfe decomposition. The subproblems for this decomposition are solved in parallel via independent computation threads. Experimental evidence suggests that as the number of subproblems/threads increases (and their respective sizes decrease), the solution quality, convergence, and runtime improve. A demonstration of this is provided by using one flight per subproblem, which is the finest possible decomposition. This results in thousands of subproblems and associated computation threads. This massively parallel approach is compared to one with few threads and to standard (non-decomposed) approaches in terms of solution quality and runtime. Since this method generally provides a non-integral (relaxed) solution to the original optimization problem, two heuristics are developed to generate an integral solution. Dantzig-Wolfe followed by these heuristics can provide a near-optimal (sometimes optimal) solution to the original problem hundreds of times faster than standard (non-decomposed) approaches. In addition, when massive decomposition is employed, the solution is shown to be more likely integral, which obviates the need for an integerization step. These results indicate that nationwide, real-time, high fidelity, optimal traffic flow scheduling is achievable for (at least) 3 hour planning horizons.

  4. A TV-constrained decomposition method for spectral CT

    NASA Astrophysics Data System (ADS)

    Guo, Xiaoyue; Zhang, Li; Xing, Yuxiang

    2017-03-01

    Spectral CT is attracting more and more attention in medicine, industrial nondestructive testing and the security inspection field. Material decomposition is an important issue in spectral CT for discriminating materials. Because of the spectrum overlap of energy channels, as well as the correlation of basis functions, it is well acknowledged that the decomposition step in spectral CT imaging causes noise amplification and artifacts in the component coefficient images. In this work, we propose material decomposition via an optimization method to improve the quality of the decomposed coefficient images. On the basis of a general optimization problem, total variation minimization is imposed as a constraint on the coefficient images in our overall objective function with adjustable weights. We solve this constrained optimization problem under the framework of ADMM. Validation on both a numerical dental phantom in simulation and a real phantom of a pig leg on a practical CT system using dual-energy imaging is executed. Both numerical and physical experiments give visually better reconstructions than a general direct inversion method. SNR and SSIM are adopted to quantitatively evaluate the image quality of the decomposed component coefficients. All results demonstrate that the TV-constrained decomposition method performs well in reducing noise without losing spatial resolution, thereby improving the image quality. The method can be easily incorporated into different types of spectral imaging modalities, as well as for cases with more than two energy channels.

  5. Human reinforcement learning subdivides structured action spaces by learning effector-specific values

    PubMed Central

    Gershman, Samuel J.; Pesaran, Bijan; Daw, Nathaniel D.

    2009-01-01

    Humans and animals are endowed with a large number of effectors. Although this enables great behavioral flexibility, it presents an equally formidable reinforcement learning problem of discovering which actions are most valuable, due to the high dimensionality of the action space. An unresolved question is how neural systems for reinforcement learning – such as prediction error signals for action valuation associated with dopamine and the striatum – can cope with this “curse of dimensionality.” We propose a reinforcement learning framework that allows for learned action valuations to be decomposed into effector-specific components when appropriate to a task, and test it by studying to what extent human behavior and BOLD activity can exploit such a decomposition in a multieffector choice task. Subjects made simultaneous decisions with their left and right hands and received separate reward feedback for each hand movement. We found that choice behavior was better described by a learning model that decomposed the values of bimanual movements into separate values for each effector, rather than a traditional model that treated the bimanual actions as unitary with a single value. A decomposition of value into effector-specific components was also observed in value-related BOLD signaling, in the form of lateralized biases in striatal correlates of prediction error and anticipatory value correlates in the intraparietal sulcus. These results suggest that the human brain can use decomposed value representations to “divide and conquer” reinforcement learning over high-dimensional action spaces. PMID:19864565
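
    A toy sketch of the two model classes compared in the study is given below: a factored learner that keeps a separate value per effector versus a unitary learner with one value per joint (left, right) action, both driven by simple prediction-error updates. The reward probabilities, learning rate, and random action selection are made-up assumptions, not the authors' fitted model.

      import numpy as np

      rng = np.random.default_rng(0)
      alpha, n_trials = 0.1, 2000

      # Hypothetical task: each hand chooses between 2 targets and gets its own
      # reward; reward probabilities are independent across effectors.
      p_left, p_right = np.array([0.8, 0.2]), np.array([0.3, 0.7])

      q_left, q_right = np.zeros(2), np.zeros(2)       # factored, effector-specific values
      q_joint = np.zeros((2, 2))                       # unitary value per bimanual action

      for _ in range(n_trials):
          a_l, a_r = rng.integers(2), rng.integers(2)  # random exploration
          r_l = float(rng.random() < p_left[a_l])
          r_r = float(rng.random() < p_right[a_r])
          # Effector-specific prediction errors update the decomposed values.
          q_left[a_l] += alpha * (r_l - q_left[a_l])
          q_right[a_r] += alpha * (r_r - q_right[a_r])
          # The unitary model lumps both rewards into one value per joint action.
          q_joint[a_l, a_r] += alpha * (r_l + r_r - q_joint[a_l, a_r])

      print("effector-specific values:", q_left.round(2), q_right.round(2))
      print("joint-action values:\n", q_joint.round(2))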

  6. Sulfate Minerals: A Problem for the Detection of Organic Compounds on Mars?

    PubMed Central

    Watson, Jonathan S.; Najorka, Jens; Luong, Duy; Sephton, Mark A.

    2015-01-01

    The search for in situ organic matter on Mars involves encounters with minerals and requires an understanding of their influence on lander and rover experiments. Inorganic host materials can be helpful by aiding the preservation of organic compounds or unhelpful by causing the destruction of organic matter during thermal extraction steps. Perchlorates are recognized as confounding minerals for thermal degradation studies. On heating, perchlorates can decompose to produce oxygen, which then oxidizes organic matter. Other common minerals on Mars, such as sulfates, may also produce oxygen upon thermal decay, presenting an additional complication. Different sulfate species decompose within a large range of temperatures. We performed a series of experiments on a sample containing the ferric sulfate jarosite. The sulfate ions within jarosite break down from 500°C. Carbon dioxide detected during heating of the sample was attributed to oxidation of organic matter. A laboratory standard of ferric sulfate hydrate released sulfur dioxide from 550°C, and an oxygen peak was detected in the products. Calcium sulfate did not decompose below 1000°C. Oxygen released from sulfate minerals may have already affected organic compound detection during in situ thermal experiments on Mars missions. A combination of preliminary mineralogical analyses and suitably selected pyrolysis temperatures may increase future success in the search for past or present life on Mars. Key Words: Mars—Life detection—Geochemistry—Organic matter—Jarosite. Astrobiology 15, 247–258. PMID:25695727

  7. Human reinforcement learning subdivides structured action spaces by learning effector-specific values.

    PubMed

    Gershman, Samuel J; Pesaran, Bijan; Daw, Nathaniel D

    2009-10-28

    Humans and animals are endowed with a large number of effectors. Although this enables great behavioral flexibility, it presents an equally formidable reinforcement learning problem of discovering which actions are most valuable because of the high dimensionality of the action space. An unresolved question is how neural systems for reinforcement learning, such as prediction error signals for action valuation associated with dopamine and the striatum, can cope with this "curse of dimensionality." We propose a reinforcement learning framework that allows for learned action valuations to be decomposed into effector-specific components when appropriate to a task, and test it by studying to what extent human behavior and blood oxygen level-dependent (BOLD) activity can exploit such a decomposition in a multieffector choice task. Subjects made simultaneous decisions with their left and right hands and received separate reward feedback for each hand movement. We found that choice behavior was better described by a learning model that decomposed the values of bimanual movements into separate values for each effector, rather than a traditional model that treated the bimanual actions as unitary with a single value. A decomposition of value into effector-specific components was also observed in value-related BOLD signaling, in the form of lateralized biases in striatal correlates of prediction error and anticipatory value correlates in the intraparietal sulcus. These results suggest that the human brain can use decomposed value representations to "divide and conquer" reinforcement learning over high-dimensional action spaces.

  8. Hodge Decomposition of Information Flow on Small-World Networks.

    PubMed

    Haruna, Taichi; Fujiki, Yuuya

    2016-01-01

    We investigate the influence of the small-world topology on the composition of information flow on networks. By appealing to the combinatorial Hodge theory, we decompose information flow generated by random threshold networks on the Watts-Strogatz model into three components: gradient, harmonic and curl flows. The harmonic and curl flows represent globally circular and locally circular components, respectively. The Watts-Strogatz model bridges the two extreme network topologies, a lattice network and a random network, by a single parameter that is the probability of random rewiring. The small-world topology is realized within a certain range between them. By numerical simulation we found that as networks become more random the ratio of harmonic flow to the total magnitude of information flow increases whereas the ratio of curl flow decreases. Furthermore, both quantities are significantly enhanced from the level when only network structure is considered for the network close to a random network and a lattice network, respectively. Finally, the sum of these two ratios takes its maximum value within the small-world region. These findings suggest that the dynamical information counterpart of global integration and that of local segregation are the harmonic flow and the curl flow, respectively, and that a part of the small-world region is dominated by internal circulation of information flow.
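
    A small numerical sketch of the combinatorial Hodge decomposition used above is given below: an edge flow on a toy graph is split into gradient, curl (locally circular), and harmonic (globally circular) parts by orthogonal projection onto the images of the boundary operators. The graph and the random flow are arbitrary, and the random-threshold-network dynamics of the paper are not simulated.

      import numpy as np

      # Small graph: a 4-cycle (0-1-2-3-0) glued to a triangle (3-4-5-3).
      edges = [(0, 1), (1, 2), (2, 3), (3, 0), (3, 4), (4, 5), (5, 3)]
      triangles = [(3, 4, 5)]
      n_nodes, n_edges = 6, len(edges)
      edge_index = {e: i for i, e in enumerate(edges)}

      # Boundary operators: d0 maps node potentials to edge flows (gradient),
      # d1 maps edge flows to triangle circulations (curl).
      d0 = np.zeros((n_edges, n_nodes))
      for i, (u, v) in enumerate(edges):
          d0[i, u], d0[i, v] = -1.0, 1.0

      d1 = np.zeros((len(triangles), n_edges))
      for t, (a, b, c) in enumerate(triangles):
          for u, v in [(a, b), (b, c), (c, a)]:
              if (u, v) in edge_index:
                  d1[t, edge_index[(u, v)]] = 1.0
              else:
                  d1[t, edge_index[(v, u)]] = -1.0

      # An arbitrary edge flow to decompose.
      rng = np.random.default_rng(0)
      f = rng.standard_normal(n_edges)

      # Orthogonal projections: gradient part lies in im(d0), curl part in im(d1^T),
      # and the harmonic part is whatever remains (globally circular flow).
      f_grad = d0 @ np.linalg.lstsq(d0, f, rcond=None)[0]
      f_curl = d1.T @ np.linalg.lstsq(d1.T, f, rcond=None)[0]
      f_harm = f - f_grad - f_curl

      print("gradient :", f_grad.round(3))
      print("curl     :", f_curl.round(3))
      print("harmonic :", f_harm.round(3))
      print("check d1 @ d0 = 0:", np.allclose(d1 @ d0, 0))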

  9. Reduced Toxicity Fuel Satellite Propulsion System Including Fuel Cell Reformer with Alcohols Such as Methanol

    NASA Technical Reports Server (NTRS)

    Schneider, Steven J. (Inventor)

    2001-01-01

    A reduced toxicity fuel satellite propulsion system including a reduced toxicity propellant supply for consumption in an axial class thruster and an ACS class thruster. The system includes suitable valves and conduits for supplying the reduced toxicity propellant to the ACS decomposing element of an ACS thruster. The ACS decomposing element is operative to decompose the reduced toxicity propellant into hot propulsive gases. In addition the system includes suitable valves and conduits for supplying the reduced toxicity propellant to an axial decomposing element of the axial thruster. The axial decomposing element is operative to decompose the reduced toxicity propellant into hot gases. The system further includes suitable valves and conduits for supplying a second propellant to a combustion chamber of the axial thruster, whereby the hot gases and the second propellant auto-ignite and begin the combustion process for producing thrust.

  10. X-ray EM simulation tool for ptychography dataset construction

    NASA Astrophysics Data System (ADS)

    Stoevelaar, L. Pjotr; Gerini, Giampiero

    2018-03-01

    In this paper, we present an electromagnetic full-wave modeling framework as a supporting EM tool providing data sets for X-ray ptychographic imaging. Modeling the entire scattering problem with Finite Element Method (FEM) tools is, in fact, a prohibitive task, because of the large area illuminated by the beam (due to the poor focusing power at these wavelengths) and the very small features to be imaged. To overcome this problem, the spectrum of the illumination beam is decomposed into a discrete set of plane waves. This allows reducing the electromagnetic modeling volume to the one enclosing the area to be imaged. The total scattered field is reconstructed by superimposing the solutions for each plane-wave illumination.
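
    The plane-wave (angular-spectrum) decomposition of the illumination can be sketched in one dimension with an FFT, as below; each retained spatial frequency corresponds to one plane-wave excitation to be fed to a separate full-wave solution. The Gaussian beam profile, wavelength, sampling, and truncation threshold are placeholder assumptions.

      import numpy as np

      # 1D aperture field: a Gaussian beam profile sampled on the entrance plane.
      wavelength = 1.0e-9                       # placeholder "X-ray" wavelength [m]
      n, dx = 1024, 5.0e-9                      # samples and pixel pitch [m]
      x = (np.arange(n) - n // 2) * dx
      field = np.exp(-(x / (200e-9)) ** 2)

      # FFT gives the discrete plane-wave (angular) spectrum of the beam; each
      # spatial frequency fx corresponds to a plane wave with kx = 2*pi*fx.
      spectrum = np.fft.fftshift(np.fft.fft(field))
      fx = np.fft.fftshift(np.fft.fftfreq(n, d=dx))
      kx = 2 * np.pi * fx

      # Keep only the plane waves that carry non-negligible amplitude; these are
      # the incident fields one would feed to the per-plane-wave FEM solutions.
      keep = np.abs(spectrum) > 1e-3 * np.abs(spectrum).max()
      print(f"{keep.sum()} plane-wave components retained out of {n}")
      print("max |kx| retained [rad/m]:", np.abs(kx[keep]).max())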

  11. Experimental evidence that the Ornstein-Uhlenbeck model best describes the evolution of leaf litter decomposability

    PubMed Central

    Pan, Xu; Cornelissen, Johannes H C; Zhao, Wei-Wei; Liu, Guo-Fang; Hu, Yu-Kun; Prinzing, Andreas; Dong, Ming; Cornwell, William K

    2014-01-01

    Leaf litter decomposability is an important effect trait for ecosystem functioning. However, it is unknown how this effect trait evolved through plant history as a leaf ‘afterlife’ integrator of the evolution of multiple underlying traits upon which adaptive selection must have acted. Did decomposability evolve in a Brownian fashion without any constraints? Was evolution rapid at first and then slowed? Or was there an underlying mean-reverting process that makes the evolution of extreme trait values unlikely? Here, we test the hypothesis that the evolution of decomposability has undergone certain mean-reverting forces due to strong constraints and trade-offs in the leaf traits that have afterlife effects on litter quality to decomposers. In order to test this, we examined the leaf litter decomposability and seven key leaf traits of 48 tree species in the temperate area of China and fitted them to three evolutionary models: Brownian motion model (BM), Early burst model (EB), and Ornstein-Uhlenbeck model (OU). The OU model, which does not allow unlimited trait divergence through time, was the best fit model for leaf litter decomposability and all seven leaf traits. These results support the hypothesis that neither decomposability nor the underlying traits have been able to diverge toward progressively extreme values through evolutionary time. These results have reinforced our understanding of the relationships between leaf litter decomposability and leaf traits in an evolutionary perspective and may be a helpful step toward reconstructing deep-time carbon cycling based on taxonomic composition with more confidence. PMID:25535551

  12. Decomposition-based transfer distance metric learning for image classification.

    PubMed

    Luo, Yong; Liu, Tongliang; Tao, Dacheng; Xu, Chao

    2014-09-01

    Distance metric learning (DML) is a critical factor for image analysis and pattern recognition. To learn a robust distance metric for a target task, we need abundant side information (i.e., the similarity/dissimilarity pairwise constraints over the labeled data), which is usually unavailable in practice due to the high labeling cost. This paper considers the transfer learning setting by exploiting the large quantity of side information from certain related, but different source tasks to help with target metric learning (with only a little side information). The state-of-the-art metric learning algorithms usually fail in this setting because the data distributions of the source task and target task are often quite different. We address this problem by assuming that the target distance metric lies in the space spanned by the eigenvectors of the source metrics (or other randomly generated bases). The target metric is represented as a combination of the base metrics, which are computed using the decomposed components of the source metrics (or simply a set of random bases); we call the proposed method decomposition-based transfer DML (DTDML). In particular, DTDML learns a sparse combination of the base metrics to construct the target metric by forcing the target metric to be close to an integration of the source metrics. The main advantage of the proposed method compared with existing transfer metric learning approaches is that we directly learn the base metric coefficients instead of the target metric. To this end, far fewer variables need to be learned. We therefore obtain more reliable solutions given the limited side information and the optimization tends to be faster. Experiments on the popular handwritten image (digit, letter) classification and challenging natural image annotation tasks demonstrate the effectiveness of the proposed method.

  13. A new approach for solving seismic tomography problems and assessing the uncertainty through the use of graph theory and direct methods

    NASA Astrophysics Data System (ADS)

    Bogiatzis, P.; Ishii, M.; Davis, T. A.

    2016-12-01

    Seismic tomography inverse problems are among the largest high-dimensional parameter estimation tasks in Earth science. We show how combinatorics and graph theory can be used to analyze the structure of such problems, and to effectively decompose them into smaller ones that can be solved efficiently by means of the least squares method. In combination with recent high performance direct sparse algorithms, this reduction in dimensionality allows for an efficient computation of the model resolution and covariance matrices using limited resources. Furthermore, we show that a new sparse singular value decomposition method can be used to obtain the complete spectrum of the singular values. This procedure provides the means for more objective regularization and further dimensionality reduction of the problem. We apply this methodology to a moderate size, non-linear seismic tomography problem to image the structure of the crust and the upper mantle beneath Japan using local deep earthquakes recorded by the High Sensitivity Seismograph Network stations.
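
    Two of the ingredients mentioned above can be sketched with SciPy: independent parameter blocks of a sparse sensitivity matrix found as connected components of its column-interaction graph, and a truncated sparse SVD of the same matrix. The random block-diagonal matrix below stands in for a real tomography matrix, and the paper's own sparse SVD method recovers the complete spectrum rather than the truncated set that svds returns.

      import numpy as np
      from scipy import sparse
      from scipy.sparse.csgraph import connected_components
      from scipy.sparse.linalg import svds

      # Stand-in sensitivity matrix G (rays x model cells) with two independent
      # blocks, mimicking a tomography problem that decomposes into subproblems.
      blockA = sparse.random(60, 40, density=0.3, random_state=0)
      blockB = sparse.random(50, 30, density=0.3, random_state=1)
      G = sparse.block_diag([blockA, blockB]).tocsr()

      # Columns i and j interact when some row hits both, i.e. (G^T G)_ij != 0;
      # connected components of that graph reveal the independent subproblems.
      interaction = (G.T @ G).tocsr()
      n_blocks, labels = connected_components(interaction, directed=False)
      print("independent parameter blocks found:", n_blocks)

      # A few singular values of the full matrix (the paper computes the complete
      # spectrum with a dedicated sparse SVD; svds gives only a truncated set).
      u, s, vt = svds(G, k=6)
      print("6 largest singular values:", np.sort(s)[::-1].round(3))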

  14. Analysis on Vertical Scattering Signatures in Forestry with PolInSAR

    NASA Astrophysics Data System (ADS)

    Guo, Shenglong; Li, Yang; Zhang, Jingjing; Hong, Wen

    2014-11-01

    We apply an accurate topographic phase to the Freeman-Durden decomposition for polarimetric SAR interferometry (PolInSAR) data. The cross-correlation matrix obtained from PolInSAR observations can be decomposed into three scattering mechanism matrices accounting for the odd-bounce, double-bounce and volume scattering. We estimate the phase based on the Random Volume over Ground (RVoG) model and use it as the initial input parameter of the numerical method that solves for the decomposition parameters. In addition, the modified volume scattering model introduced by Y. Yamaguchi is applied to the PolInSAR target decomposition in forest areas, rather than the pure random volume scattering proposed by Freeman-Durden, to better fit the actual measured data. This method can accurately retrieve the magnitude associated with each mechanism and its location along the vertical dimension. We test the algorithms with L- and P-band simulated data.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kılıç, Emre, E-mail: emre.kilic@tum.de; Eibert, Thomas F.

    An approach combining boundary integral and finite element methods is introduced for the solution of three-dimensional inverse electromagnetic medium scattering problems. Based on the equivalence principle, unknown equivalent electric and magnetic surface current densities on a closed surface are utilized to decompose the inverse medium problem into two parts: a linear radiation problem and a nonlinear cavity problem. The first problem is formulated by a boundary integral equation, the computational burden of which is reduced by employing the multilevel fast multipole method (MLFMM). Reconstructed Cauchy data on the surface allows the utilization of the Lorentz reciprocity and Poynting theorems. Exploiting these theorems, the noise level and an initial guess are estimated for the cavity problem. Moreover, it is possible to determine whether the material is lossy or not. In the second problem, the estimated surface currents form inhomogeneous boundary conditions of the cavity problem. The cavity problem is formulated by the finite element technique and solved iteratively by the Gauss–Newton method to reconstruct the properties of the object. Regularization for both the first and the second problems is achieved by a Krylov subspace method. The proposed method is tested against both synthetic and experimental data and promising reconstruction results are obtained.

  16. Decomposing Multifractal Crossovers

    PubMed Central

    Nagy, Zoltan; Mukli, Peter; Herman, Peter; Eke, Andras

    2017-01-01

    Physiological processes—such as the brain's resting-state electrical activity or hemodynamic fluctuations—exhibit scale-free temporal structuring. However, impacts common in biological systems, such as noise, multiple signal generators, or filtering by a transport function, result in multimodal scaling that cannot be reliably assessed by standard analytical tools that assume unimodal scaling. Here, we present two methods to identify breakpoints or crossovers in multimodal multifractal scaling functions. These methods incorporate the robust iterative fitting approach of the focus-based multifractal formalism (FMF). The first approach (moment-wise scaling range adaptivity) allows for a breakpoint-based adaptive treatment that analyzes segregated scale-invariant ranges. The second method (scaling function decomposition method, SFD) is a crossover-based design aimed at decomposing signal constituents from multimodal scaling functions resulting from signal addition or co-sampling, such as contamination by uncorrelated fractals. We demonstrated that these methods could handle multimodal, mono- or multifractal, and exact or empirical signals alike. Their precision was numerically characterized on ideal signals, and a robust performance was demonstrated on exemplary empirical signals capturing resting-state brain dynamics by near infrared spectroscopy (NIRS), electroencephalography (EEG), and blood oxygen level-dependent functional magnetic resonance imaging (fMRI-BOLD). The NIRS and fMRI-BOLD low-frequency fluctuations were dominated by a multifractal component over an underlying biologically relevant random noise, thus forming a bimodal signal. The crossover between the EEG signal components was found at the boundary between the δ and θ bands, suggesting an independent generator for the multifractal δ rhythm. The robust implementation of the SFD method should be regarded as essential in the seamless processing of large volumes of bimodal fMRI-BOLD imaging data for the topology of multifractal metrics free of the masking effect of the underlying random noise. PMID:28798694

  17. Motion Planning and Synthesis of Human-Like Characters in Constrained Environments

    NASA Astrophysics Data System (ADS)

    Zhang, Liangjun; Pan, Jia; Manocha, Dinesh

    We give an overview of our recent work on generating natural-looking human motion in constrained environments with multiple obstacles. This includes a whole-body motion planning algorithm for high-DOF human-like characters. The planning problem is decomposed into a sequence of low-dimensional sub-problems. We use a constrained coordination scheme to solve the sub-problems in an incremental manner, and a local path refinement algorithm to compute collision-free paths in tight spaces and satisfy the static-stability constraint on the center of mass (CoM). We also present a hybrid algorithm to generate plausible motion by combining the motion computed by our planner with mocap data. We demonstrate the performance of our algorithm on a 40-DOF human-like character and generate efficient motion strategies for object placement, bending, walking, and lifting in complex environments.

  18. An Automatic Orthonormalization Method for Solving Stiff Boundary-Value Problems

    NASA Astrophysics Data System (ADS)

    Davey, A.

    1983-08-01

    A new initial-value method is described, based on a remark by Drury, for solving stiff linear two-point differential eigenvalue and boundary-value problems. The method is extremely reliable, it is especially suitable for high-order differential systems, and it is capable of accommodating realms of stiffness which other methods cannot reach. The key idea behind the method is to decompose the stiff differential operator into two non-stiff operators, one of which is nonlinear. The nonlinear operator is specially chosen so that it advances an orthonormal frame; indeed, the method is essentially a kind of automatic orthonormalization. The second operator is auxiliary, but it is needed to determine the required function. The usefulness of the method is demonstrated by calculating some eigenfunctions for an Orr-Sommerfeld problem when the Reynolds number is as large as 10^6.

  19. Lac Qui Parle Flood Control Project Master Plan for Public Use Development and Resource Management.

    DTIC Science & Technology

    1980-08-01

    the project area is the disposal of dead carp. Minnesota fishing regulations prohibit fishermen from returning rough fish to lakes or rivers after...in trash cans. Unless the dead fish are removed virtually daily, they begin to decompose and smell. Due to current work- force constraints, the Corps...is unable to remove the dead fish as often as it would like. No easy solution to this problem is apparent. 6.25 Potential for Future Development The

  20. A Heuristic Fast Method to Solve the Nonlinear Schroedinger Equation in Fiber Bragg Gratings with Arbitrary Shape Input Pulse

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Emami, F.; Hatami, M.; Keshavarz, A. R.

    2009-08-13

    Using a combination of the Runge-Kutta and Jacobi iterative methods, we solve the nonlinear Schroedinger equation describing pulse propagation in fiber Bragg gratings (FBGs). By decomposing the electric field into forward and backward components in the fiber Bragg grating and utilizing the Fourier series analysis technique, the boundary-value problem for the set of coupled equations governing pulse propagation in the FBG is converted into an initial-value problem for coupled equations, which can be solved by a simple Runge-Kutta method.
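    To make the final step concrete, here is a minimal sketch of classical Runge-Kutta marching for two coupled field components. The right-hand side below is a purely illustrative linear coupling (the coupling constant `kappa`, step size, and initial condition are assumptions); it is not the paper's actual FBG coupled-mode system, and the Jacobi iteration over Fourier coefficients is not shown.

```python
import numpy as np

def rk4_step(f, z, y, h):
    """One classical fourth-order Runge-Kutta step for the vector ODE dy/dz = f(z, y)."""
    k1 = f(z, y)
    k2 = f(z + h / 2, y + h * k1 / 2)
    k3 = f(z + h / 2, y + h * k2 / 2)
    k4 = f(z + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Illustrative linear coupling between a forward field A and a backward field B;
# kappa stands in for a grating coupling coefficient (hypothetical value).
kappa = 2.0
def coupled_fields(z, y):
    A, B = y
    return np.array([1j * kappa * B, -1j * kappa * A])

y = np.array([1.0 + 0j, 0.0 + 0j])   # initial condition after the BVP-to-IVP conversion
z, h = 0.0, 1e-3
for _ in range(1000):                 # march along the grating
    y = rk4_step(coupled_fields, z, y, h)
    z += h
print(np.abs(y))                      # magnitudes of the forward/backward components
```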

  1. Decomposition of the linking number of a closed ribbon: A problem from molecular biology

    PubMed Central

    Fuller, F. Brock

    1978-01-01

    A closed duplex DNA molecule relaxed and containing nucleosomes has a different linking number from the same molecule relaxed and without nucleosomes. What does this say about the structure of the nucleosome? A mathematical study of this question is made, representing the DNA molecule by a ribbon. It is shown that the linking number of a closed ribbon can be decomposed into the linking number of a reference ribbon plus a sum of locally determined “linking differences.” PMID:16592550

  2. Distributed Multi-Cell Resource Allocation with Price Based ICI Coordination in Downlink OFDMA Networks

    NASA Astrophysics Data System (ADS)

    Lv, Gangming; Zhu, Shihua; Hui, Hui

    Multi-cell resource allocation under minimum-rate requirements for each user in OFDMA networks is addressed in this paper. Based on Lagrange dual decomposition theory, the joint multi-cell resource allocation problem is decomposed and modeled as a limited-cooperative game, and a distributed multi-cell resource allocation algorithm is thus proposed. Analysis and simulation results show that, compared with the non-cooperative iterative water-filling algorithm, the proposed algorithm remarkably reduces the inter-cell interference (ICI) level and improves overall system performance.
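    As a reminder of the machinery involved (not the paper's exact formulation), Lagrange dual decomposition attaches multipliers to the coupling constraints, here the minimum-rate and interference-pricing conditions, and updates them with a projected subgradient step while each cell solves its own subproblem for fixed multipliers:

    \[
    \boldsymbol{\lambda}^{(t+1)} \;=\; \Bigl[\boldsymbol{\lambda}^{(t)} + \alpha_t\, g\bigl(\mathbf{x}^{(t)}\bigr)\Bigr]_{+},
    \]

    where \(\mathbf{x}^{(t)}\) collects the per-cell allocations obtained from the decoupled subproblems at multipliers \(\boldsymbol{\lambda}^{(t)}\), \(g(\cdot)\) is the corresponding constraint violation (a subgradient of the dual function), \(\alpha_t\) is a step size, and \([\cdot]_{+}\) projects onto the nonnegative orthant.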

  3. Northeast Artificial Intelligence Consortium (NAIC). Volume 15. Strategies for Coupling Symbolic and Numerical Computation in Knowledge Base Systems

    DTIC Science & Technology

    1990-12-01

    Implementation of Coupled System 18 15.4. CASE STUDIES & IMPLEMENTATION EXAMPLES 24 15.4.1. The Case Studies of Coupled System 24 15.4.2. Example: Coupled System...occurs during specific phases of the problem-solving process. By decomposing the coupling process into its component layers we effectively study the nature...by the qualitative model, appropriate mathematical model is invoked. 5) The results are verified. If successful, stop. Else go to (2) and use an

  4. Single-image super-resolution based on Markov random field and contourlet transform

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Liu, Zheng; Gueaieb, Wail; He, Xiaohai

    2011-04-01

    Learning-based methods are widely adopted in image super-resolution. In this paper, we propose a new learning-based approach using the contourlet transform and a Markov random field. The proposed algorithm employs the contourlet transform rather than the conventional wavelet to represent image features and takes into account the correlation between adjacent pixels or image patches through the Markov random field (MRF) model. The input low-resolution (LR) image is decomposed with the contourlet transform and fed to the MRF model together with the contourlet transform coefficients from the low- and high-resolution image pairs in the training set. The unknown high-frequency components/coefficients for the input low-resolution image are inferred by a belief propagation algorithm. Finally, the inverse contourlet transform converts the LR input and the inferred high-frequency coefficients into the super-resolved image. The effectiveness of the proposed method is demonstrated with experiments on facial, vehicle plate, and real scene images. Better visual quality is achieved in terms of peak signal-to-noise ratio and the image structural similarity measurement.

  5. PET-CT image fusion using random forest and à-trous wavelet transform.

    PubMed

    Seal, Ayan; Bhattacharjee, Debotosh; Nasipuri, Mita; Rodríguez-Esparragón, Dionisio; Menasalvas, Ernestina; Gonzalo-Martin, Consuelo

    2018-03-01

    New image fusion rules for multimodal medical images are proposed in this work. The image fusion rules are defined by a random forest learning algorithm and a translation-invariant à-trous wavelet transform (AWT). The proposed method has three steps. First, the source images are decomposed into approximation and detail coefficients using the AWT. Second, a random forest is used to choose pixels from the approximation and detail coefficients to form the approximation and detail coefficients of the fused image. Lastly, the inverse AWT is applied to reconstruct the fused image. All experiments have been performed on 198 slices of both computed tomography and positron emission tomography images of a patient. A traditional fusion method based on the Mallat wavelet transform has also been implemented on these slices. A new image fusion performance measure, along with four existing measures, has been presented, which helps to compare the performance of the two pixel-level fusion methods. The experimental results clearly indicate that the proposed method outperforms the traditional method in terms of visual and quantitative quality, and that the new measure is meaningful. Copyright © 2017 John Wiley & Sons, Ltd.

  6. Correlative weighted stacking for seismic data in the wavelet domain

    USGS Publications Warehouse

    Zhang, S.; Xu, Y.; Xia, J.; ,

    2004-01-01

    Horizontal stacking plays a crucial role in modern seismic data processing, for it not only suppresses random noise and multiple reflections but also provides foundational data for subsequent migration and inversion. However, a number of examples have shown that random noise in adjacent traces exhibits correlation and coherence. Average stacking and weighted stacking based on the conventional correlation function both produce false events, which are caused by noise. The wavelet transform and higher-order statistics are very useful tools in modern signal processing: multiresolution analysis in wavelet theory can decompose a signal on different scales, and higher-order correlation functions can suppress correlated noise against which the conventional correlation function is of no use. Based on the theory of the wavelet transform and higher-order statistics, a high-order correlative weighted stacking (HOCWS) technique is presented in this paper. Its essence is to stack common-midpoint gathers after normal moveout correction with weights calculated from high-order correlative statistics in the wavelet domain. Synthetic examples demonstrate its advantages in improving the signal-to-noise (S/N) ratio and suppressing correlated random noise.
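    The following is a stripped-down illustration of correlation-weighted stacking in the time domain: ordinary weighted stacking with weights given by each trace's correlation with a pilot stack. The high-order statistics and the wavelet-domain implementation of HOCWS are not reproduced here, and the toy gather is synthetic.

```python
import numpy as np

def correlative_weighted_stack(gather):
    """Stack NMO-corrected traces (rows of `gather`) with weights given by each
    trace's normalized zero-lag correlation with a pilot (plain average) stack."""
    pilot = gather.mean(axis=0)                        # reference stack
    pilot_norm = np.linalg.norm(pilot) + 1e-12
    weights = np.empty(gather.shape[0])
    for i, trace in enumerate(gather):
        trace_norm = np.linalg.norm(trace) + 1e-12
        weights[i] = np.dot(trace, pilot) / (trace_norm * pilot_norm)
    weights = np.clip(weights, 0.0, None)              # discard anti-correlated traces
    weights /= weights.sum() + 1e-12
    return weights @ gather                            # weighted stack

# toy CMP gather: a common signal plus trace-dependent random noise
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
signal = np.sin(2 * np.pi * 30 * t) * np.exp(-20 * (t - 0.5) ** 2)
gather = signal + 0.8 * rng.standard_normal((24, t.size))
stacked = correlative_weighted_stack(gather)
```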

  7. Forensic entomology of decomposing humans and their decomposing pets.

    PubMed

    Sanford, Michelle R

    2015-02-01

    Domestic pets are commonly found in the homes of decedents whose deaths are investigated by a medical examiner or coroner. When these pets become trapped with a decomposing decedent they may resort to feeding on the body or succumb to starvation and/or dehydration and begin to decompose as well. In this case report, photographic documentation of cases involving pets and decedents was examined from 2009 through the beginning of 2014. This photo review indicated that in many cases the pets were cats and dogs that were trapped with the decedent, died, and were discovered in a moderate (bloat to active decay) state of decomposition. In addition, three cases involving decomposing humans and their decomposing pets are described as they were processed for time of insect colonization by a forensic entomological approach. Differences in the timing and species colonizing the human and animal bodies were noted, as was the potential for the human- or animal-derived specimens to contaminate one another at the scene. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  8. Limited Effects of Variable-Retention Harvesting on Fungal Communities Decomposing Fine Roots in Coastal Temperate Rainforests.

    PubMed

    Philpott, Timothy J; Barker, Jason S; Prescott, Cindy E; Grayston, Sue J

    2018-02-01

    Fine root litter is the principal source of carbon stored in forest soils and a dominant source of carbon for fungal decomposers. Differences in decomposer capacity between fungal species may be important determinants of fine-root decomposition rates. Variable-retention harvesting (VRH) provides refuge for ectomycorrhizal fungi, but its influence on fine-root decomposers is unknown, as are the effects of functional shifts in these fungal communities on carbon cycling. We compared fungal communities decomposing fine roots (in litter bags) under VRH, clear-cut, and uncut stands at two sites (6 and 13 years postharvest) and two decay stages (43 days and 1 year after burial) in Douglas fir forests in coastal British Columbia, Canada. Fungal species and guilds were identified from decomposed fine roots using high-throughput sequencing. Variable retention had short-term effects on β-diversity; harvest treatment modified the fungal community composition at the 6-year-postharvest site, but not at the 13-year-postharvest site. Ericoid and ectomycorrhizal guilds were not more abundant under VRH, but stand age significantly structured species composition. Guild composition varied by decay stage, with ruderal species later replaced by saprotrophs and ectomycorrhizae. Ectomycorrhizal abundance on decomposing fine roots may partially explain why fine roots typically decompose more slowly than surface litter. Our results indicate that stand age structures fine-root decomposers but that decay stage is more important in structuring the fungal community than shifts caused by harvesting. The rapid postharvest recovery of fungal communities decomposing fine roots suggests resiliency within this community, at least in these young regenerating stands in coastal British Columbia. IMPORTANCE Globally, fine roots are a dominant source of carbon in forest soils, yet the fungi that decompose this material and that drive the sequestration or respiration of this carbon remain largely uncharacterized. Fungi vary in their capacity to decompose plant litter, suggesting that fungal community composition is an important determinant of decomposition rates. Variable-retention harvesting is a forestry practice that modifies fungal communities by providing refuge for ectomycorrhizal fungi. We evaluated the effects of variable retention and clear-cut harvesting on fungal communities decomposing fine roots at two sites (6 and 13 years postharvest), at two decay stages (43 days and 1 year), and in uncut stands in temperate rainforests. Harvesting impacts on fungal community composition were detected only after 6 years after harvest. We suggest that fungal community composition may be an important factor that reduces fine-root decomposition rates relative to those of above-ground plant litter, which has important consequences for forest carbon cycling. Copyright © 2018 American Society for Microbiology.

  9. Hydrogen production by the decomposition of water

    DOEpatents

    Hollabaugh, Charles M.; Bowman, Melvin G.

    1981-01-01

    How to produce hydrogen from water was a problem addressed by this invention. The solution employs a combined electrolytic-thermochemical sulfuric acid process. Additionally, high-purity sulfuric acid can be produced in the process. Water and SO2 react in an electrolyzer (12) so that hydrogen is produced at the cathode and sulfuric acid is produced at the anode. Then the sulfuric acid is reacted with a particular compound MrXs so as to form at least one water-insoluble sulfate and at least one water-insoluble oxide of molybdenum, tungsten, or boron. Water is removed by filtration, and the sulfate is decomposed in the presence of the oxide in the sulfate decomposition zone (21), thus forming SO3 and reforming MrXs. The MrXs is recycled to the sulfate formation zone (16). If desired, the SO3 can be decomposed to SO2 and O2, and the SO2 can be recycled to the electrolyzer (12) to provide a cycle for producing hydrogen.
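    Written schematically, the cycle described above can be summarized as follows. The intermediate compound is left generic as MrXs, and the stoichiometry of the sulfate-forming step is intentionally omitted because it depends on the particular compound and is not given in the abstract; the first and last reactions are the standard SO2-depolarized electrolysis and SO3 decomposition steps.

    \[
    \begin{aligned}
    \mathrm{SO_2 + 2\,H_2O} \;&\xrightarrow{\text{electrolysis}}\; \mathrm{H_2SO_4 + H_2} \\
    \mathrm{H_2SO_4 + M_rX_s} \;&\longrightarrow\; \text{insoluble sulfate(s)} + \text{insoluble oxide(s)} \\
    \text{sulfate(s)} \;&\xrightarrow{\;\Delta,\ \text{oxide}\;}\; \mathrm{SO_3} + \mathrm{M_rX_s}\ \text{(regenerated)} \\
    \mathrm{SO_3} \;&\xrightarrow{\;\Delta\;}\; \mathrm{SO_2} + \tfrac{1}{2}\,\mathrm{O_2}
    \end{aligned}
    \]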

  10. Scheduling double round-robin tournaments with divisional play using constraint programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlsson, Mats; Johansson, Mikael; Larson, Jeffrey

    We study a tournament format that extends a traditional double round-robin format with divisional single round-robin tournaments. Elitserien, the top Swedish handball league, uses such a format for its league schedule. We present a constraint programming model that characterizes the general double round-robin plus divisional single round-robin format. This integrated model allows scheduling to be performed in a single step, as opposed to common multistep approaches that decompose scheduling into smaller problems and possibly miss optimal solutions. In addition to general constraints, we introduce Elitserien-specific requirements for its tournament. These general and league-specific constraints allow us to identify implicit and symmetry-breaking properties that reduce the time to solution from hours to seconds. A scalability study of the number of teams shows that our approach is reasonably fast for even larger league sizes. The experimental evaluation of the integrated approach takes considerably less computational effort to schedule Elitserien than does the previous decomposed approach. (C) 2016 Elsevier B.V. All rights reserved.

  11. "Going to town": Large-scale norming and statistical analysis of 870 American English idioms.

    PubMed

    Bulkes, Nyssa Z; Tanner, Darren

    2017-04-01

    An idiom is classically defined as a formulaic sequence whose meaning is comprised of more than the sum of its parts. For this reason, idioms pose a unique problem for models of sentence processing, as researchers must take into account how idioms vary and along what dimensions, as these factors can modulate the ease with which an idiomatic interpretation can be activated. In order to help ensure external validity and comparability across studies, idiom research benefits from the availability of publicly available resources reporting ratings from a large number of native speakers. Resources such as the one outlined in the current paper facilitate opportunities for consensus across studies on idiom processing and help to further our goals as a research community. To this end, descriptive norms were obtained for 870 American English idioms from 2,100 participants along five dimensions: familiarity, meaningfulness, literal plausibility, global decomposability, and predictability. Idiom familiarity and meaningfulness strongly correlated with one another, whereas familiarity and meaningfulness were positively correlated with both global decomposability and predictability. Correlations with previous norming studies are also discussed.

  12. Assume-Guarantee Verification of Source Code with Design-Level Assumptions

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Pasareanu, Corina S.; Cobleigh, Jamieson M.

    2004-01-01

    Model checking is an automated technique that can be used to determine whether a system satisfies certain required properties. To address the 'state explosion' problem associated with this technique, we propose to integrate assume-guarantee verification at different phases of system development. During design, developers build abstract behavioral models of the system components and use them to establish key properties of the system. To increase the scalability of model checking at this level, we have developed techniques that automatically decompose the verification task by generating component assumptions for the properties to hold. The design-level artifacts are subsequently used to guide the implementation of the system, but also to enable more efficient reasoning at the source-code level. In particular, we propose to use design-level assumptions to similarly decompose the verification of the actual system implementation. We demonstrate our approach on a significant NASA application, where design-level models were used to identify and correct a safety property violation, and design-level assumptions allowed us to check successfully that the property was preserved by the implementation.

  13. Natural 13C abundance reveals trophic status of fungi and host-origin of carbon in mycorrhizal fungi in mixed forests

    PubMed Central

    Högberg, Peter; Plamboeck, Agneta H.; Taylor, Andrew F. S.; Fransson, Petra M. A.

    1999-01-01

    Fungi play crucial roles in the biogeochemistry of terrestrial ecosystems, most notably as saprophytes decomposing organic matter and as mycorrhizal fungi enhancing plant nutrient uptake. However, a recurrent problem in fungal ecology is to establish the trophic status of species in the field. Our interpretations and conclusions are too often based on extrapolations from laboratory microcosm experiments or on anecdotal field evidence. Here, we used natural variations in stable carbon isotope ratios (δ13C) as an approach to distinguish between fungal decomposers and symbiotic mycorrhizal fungal species in the rich sporocarp flora (our sample contains 135 species) of temperate forests. We also demonstrated that host-specific mycorrhizal fungi that receive C from overstorey or understorey tree species differ in their δ13C. The many promiscuous mycorrhizal fungi, associated with and connecting several tree hosts, were calculated to receive 57–100% of their C from overstorey trees. Thus, overstorey trees also support, partly or wholly, the nutrient-absorbing mycelia of their alleged competitors, the understorey trees. PMID:10411910

  14. Our World without Decomposers: How Scary!

    ERIC Educational Resources Information Center

    Spring, Patty; Harr, Natalie

    2014-01-01

    Bugs, slugs, bacteria, and fungi are decomposers at the heart of every ecosystem. Fifth graders at Dodge Intermediate School in Twinsburg, Ohio, ventured outdoors to learn about the necessity of these amazing organisms. With the help of a naturalist, students explored their local park and discovered the wonder of decomposers and their…

  15. Model-Free Primitive-Based Iterative Learning Control Approach to Trajectory Tracking of MIMO Systems With Experimental Validation.

    PubMed

    Radac, Mircea-Bogdan; Precup, Radu-Emil; Petriu, Emil M

    2015-11-01

    This paper proposes a novel model-free approach to trajectory tracking of multiple-input multiple-output (MIMO) systems by the combination of iterative learning control (ILC) and primitives. The optimal trajectory tracking solution is obtained in terms of previously learned solutions to simple tasks called primitives. The library of primitives that are stored in memory consists of pairs of reference input/controlled output signals. The reference input primitives are optimized in a model-free ILC framework without using knowledge of the controlled process. The guaranteed convergence of the learning scheme is built upon a model-free virtual reference feedback tuning design of the feedback decoupling controller. Each new complex trajectory to be tracked is decomposed into the output primitives regarded as basis functions. The optimal reference input for the control system to track the desired trajectory is next recomposed from the reference input primitives. This is advantageous because the optimal reference input is computed straightforwardly, without the need to learn from repeated executions of the tracking task. In addition, the optimization problem specific to trajectory tracking of square MIMO systems is decomposed into a set of optimization problems assigned to each separate single-input single-output control channel, which ensures a convenient model-free decoupling. The new model-free primitive-based ILC approach is capable of planning, reasoning, and learning. A case study dealing with the model-free control tuning for a nonlinear aerodynamic system is included to validate the new approach. The experimental results are given.
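    The decomposition/recomposition step lends itself to a compact numerical sketch: the desired output trajectory is projected onto the stored output primitives by least squares, and the same coefficients recombine the paired reference-input primitives. The primitive library and trajectory below are made-up toy data, and the ILC/VRFT learning of the primitives themselves is not shown.

```python
import numpy as np

def recompose_reference(output_primitives, input_primitives, desired_output):
    """output_primitives, input_primitives: arrays of shape (n_samples, n_primitives),
    whose k-th columns form a stored controlled-output / reference-input primitive pair.
    Returns the recomposed reference input for a new desired output trajectory."""
    # coefficients of the desired trajectory in the output-primitive basis
    coeffs, *_ = np.linalg.lstsq(output_primitives, desired_output, rcond=None)
    # the same coefficients recompose the reference input from the input primitives
    return input_primitives @ coeffs

# toy library of three primitive pairs on a 200-sample horizon (purely illustrative)
t = np.linspace(0, 1, 200)
Y = np.stack([np.sin(np.pi * t), t, t ** 2], axis=1)                 # output primitives
U = np.stack([np.cos(np.pi * t), np.ones_like(t), 2 * t], axis=1)    # paired input primitives
y_desired = 0.5 * np.sin(np.pi * t) + 0.3 * t ** 2                   # new trajectory to track
u_reference = recompose_reference(Y, U, y_desired)
```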

  16. Incipient fault feature extraction of rolling bearings based on the MVMD and Teager energy operator.

    PubMed

    Ma, Jun; Wu, Jiande; Wang, Xiaodong

    2018-06-04

    Incipient faults of rolling bearings are difficult to recognize, and the number of intrinsic mode functions (IMFs) produced by variational mode decomposition (VMD) must be set in advance and cannot be selected adaptively. To address these problems, and taking full advantage of adaptive scale-spectrum segmentation and Teager energy operator (TEO) demodulation, a new method for early fault feature extraction of rolling bearings based on the modified VMD and Teager energy operator (MVMD-TEO) is proposed. Firstly, the vibration signal of the rolling bearing is analyzed by adaptive scale-space spectrum segmentation to obtain the spectrum segmentation support boundary, and the number K of IMFs to be decomposed by VMD is then determined adaptively. Secondly, the original vibration signal is adaptively decomposed into K IMFs, and the effective IMF components are extracted based on the correlation coefficient criterion. Finally, the Teager energy spectrum of the reconstructed signal of the effective IMF components is calculated by the TEO, and the early fault features of the rolling bearing are then extracted to realize fault identification and location. Comparative experiments between the proposed method and the existing fault feature extraction method based on Local Mean Decomposition and the Teager energy operator (LMD-TEO) have been implemented using experimental datasets and a measured dataset. The results of comparative experiments in three application cases show that the presented method achieves fairly or slightly better performance than the LMD-TEO method, and the validity and feasibility of the proposed method are proved. Copyright © 2018. Published by Elsevier Ltd.
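    For reference, the discrete Teager energy operator used in the demodulation step is simple to state and implement; a minimal sketch follows (the MVMD stage that selects and reconstructs the effective IMFs is not shown, and the end-point handling and spectrum normalization are assumptions).

```python
import numpy as np

def teager_energy(x):
    """Discrete Teager energy operator: psi[n] = x[n]**2 - x[n-1] * x[n+1]."""
    x = np.asarray(x, dtype=float)
    psi = np.empty_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    psi[0], psi[-1] = psi[1], psi[-2]            # simple end-point handling
    return psi

def teager_energy_spectrum(x, fs):
    """Magnitude spectrum of the Teager energy sequence of a (reconstructed) signal."""
    psi = teager_energy(x)
    spectrum = np.abs(np.fft.rfft(psi - psi.mean()))
    freqs = np.fft.rfftfreq(len(psi), d=1.0 / fs)
    return freqs, spectrum
```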

  17. Discretized energy minimization in a wave guide with point sources

    NASA Technical Reports Server (NTRS)

    Propst, G.

    1994-01-01

    An anti-noise problem on a finite time interval is solved by minimization of a quadratic functional on the Hilbert space of square integrable controls. To this end, the one-dimensional wave equation with point sources and pointwise reflecting boundary conditions is decomposed into a system for the two propagating components of waves. Wellposedness of this system is proved for a class of data that includes piecewise linear initial conditions and piecewise constant forcing functions. It is shown that for such data the optimal piecewise constant control is the solution of a sparse linear system. Methods for its computational treatment are presented as well as examples of their applicability. The convergence of discrete approximations to the general optimization problem is demonstrated by finite element methods.

  18. Hierarchical coarse-graining transform.

    PubMed

    Pancaldi, Vera; King, Peter R; Christensen, Kim

    2009-03-01

    We present a hierarchical transform that can be applied to Laplace-like differential equations such as Darcy's equation for single-phase flow in a porous medium. A finite-difference discretization scheme is used to set the equation in the form of an eigenvalue problem. Within the formalism suggested, the pressure field is decomposed into an average value and fluctuations of different kinds and at different scales. The application of the transform to the equation allows us to calculate the unknown pressure with a varying level of detail. A procedure is suggested to localize important features in the pressure field based only on the fine-scale permeability, and hence we develop a form of adaptive coarse graining. The formalism and method are described and demonstrated using two synthetic toy problems.
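    As a toy illustration of the first step only (a finite-difference discretization of a Darcy-type operator cast as an eigenvalue problem), the sketch below assembles a 1D operator with a variable permeability field and eigendecomposes it. The hierarchical coarse-graining transform itself, and the 2D/3D setting, are not reproduced; the grid, boundary treatment, and permeability field are assumptions.

```python
import numpy as np

def darcy_matrix_1d(k):
    """Assemble a finite-difference matrix for -d/dx( k(x) d/dx ) on a unit-spaced grid
    with Dirichlet-type ends, using harmonic averaging of cell permeabilities k."""
    n = len(k)
    k_face = 2.0 * k[:-1] * k[1:] / (k[:-1] + k[1:])   # interface permeabilities
    A = np.zeros((n, n))
    for i in range(n):
        left = k_face[i - 1] if i > 0 else k[0]
        right = k_face[i] if i < n - 1 else k[-1]
        A[i, i] = left + right
        if i > 0:
            A[i, i - 1] = -left
        if i < n - 1:
            A[i, i + 1] = -right
    return A

rng = np.random.default_rng(1)
k = np.exp(rng.standard_normal(64))      # synthetic log-normal permeability field
A = darcy_matrix_1d(k)
eigvals, eigvecs = np.linalg.eigh(A)     # modes ordered from smooth (coarse) to oscillatory (fine)
```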

  19. Task decomposition for a multilimbed robot to work in reachable but unorientable space

    NASA Technical Reports Server (NTRS)

    Su, Chau; Zheng, Yuan F.

    1991-01-01

    Robot manipulators installed on legged mobile platforms are suggested for enlarging robot workspace. To plan the motion of such a system, the arm-platform motion coordination problem is raised, and a task decomposition is proposed to solve the problem. A given task described by the destination position and orientation of the end effector is decomposed into subtasks for arm manipulation and for platform configuration, respectively. The former is defined as the end-effector position and orientation with respect to the platform, and the latter as the platform position and orientation in the base coordinates. Three approaches are proposed for the task decomposition. The approaches are also evaluated in terms of the displacements, from which an optimal approach can be selected.

  20. Base Station Placement Algorithm for Large-Scale LTE Heterogeneous Networks.

    PubMed

    Lee, Seungseob; Lee, SuKyoung; Kim, Kyungsoo; Kim, Yoon Hyuk

    2015-01-01

    Data traffic demands in cellular networks today are increasing at an exponential rate, giving rise to the development of heterogeneous networks (HetNets), in which small cells complement traditional macro cells by extending coverage to indoor areas. However, the deployment of small cells as part of HetNets creates a key challenge for operators' network planning. In particular, massive and unplanned deployment of base stations can cause high interference, resulting in severely degraded network performance. Although different mathematical modeling and optimization methods have been used to approach various problems related to this issue, most traditional network planning models are ill-equipped to deal with HetNet-specific characteristics due to their focus on classical cellular network designs. Furthermore, increased wireless data demands have driven mobile operators to roll out large-scale networks of small long-term evolution (LTE) cells. Therefore, in this paper, we aim to derive an optimum network planning algorithm for large-scale LTE HetNets. Recently, attempts have been made to apply evolutionary algorithms (EAs) to the field of radio network planning, since they are characterized as global optimization methods. Yet, EA performance often deteriorates rapidly with the growth of search space dimensionality. To overcome this limitation when designing optimum network deployments for large-scale LTE HetNets, we attempt to decompose the problem and tackle its subcomponents individually. Particularly noting that some HetNet cells have strong correlations due to inter-cell interference, we propose a correlation grouping approach in which cells are grouped together according to their mutual interference. Both the simulation and analytical results indicate that the proposed solution outperforms the random-grouping-based EA, as well as an EA that detects interacting variables by monitoring changes in the objective function, in terms of system throughput performance.
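    A minimal sketch of the correlation-grouping idea follows: cells whose pairwise interference exceeds a threshold are merged into the same group (connected components via a small union-find), and each group would then be handled by its own EA subcomponent. The interference matrix, threshold, and grouping rule here are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def correlation_groups(interference, threshold):
    """Group cells whose pairwise interference exceeds `threshold` into connected components."""
    n = interference.shape[0]
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]      # path compression
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if interference[i, j] >= threshold:
                parent[find(i)] = find(j)      # union the two groups

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# toy symmetric interference matrix for six cells (values are made up)
I = np.array([[0.0, 0.9, 0.1, 0.0, 0.0, 0.0],
              [0.9, 0.0, 0.2, 0.0, 0.0, 0.0],
              [0.1, 0.2, 0.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0, 0.8, 0.7],
              [0.0, 0.0, 0.0, 0.8, 0.0, 0.6],
              [0.0, 0.0, 0.0, 0.7, 0.6, 0.0]])
print(correlation_groups(I, 0.5))   # -> [[0, 1], [2], [3, 4, 5]]
```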

  1. Traits determining the digestibility-decomposability relationships in species from Mediterranean rangelands.

    PubMed

    Bumb, Iris; Garnier, Eric; Coq, Sylvain; Nahmani, Johanne; Del Rey Granado, Maria; Gimenez, Olivier; Kazakou, Elena

    2018-03-05

    Forage quality for herbivores and litter quality for decomposers are two key plant properties affecting ecosystem carbon and nutrient cycling. Although there is a positive relationship between palatability and decomposition, very few studies have focused on larger vertebrate herbivores while considering links between the digestibility of living leaves and stems and the decomposability of litter and associated traits. The hypothesis tested is that some defences of living organs would reduce their digestibility and, as a consequence, their litter decomposability, through 'afterlife' effects. Additionally in high-fertility conditions the presence of intense herbivory would select for communities dominated by fast-growing plants, which are able to compensate for tissue loss by herbivory, producing both highly digestible organs and easily decomposable litter. Relationships between dry matter digestibility and decomposability were quantified in 16 dominant species from Mediterranean rangelands, which are subject to management regimes that differ in grazing intensity and fertilization. The digestibility and decomposability of leaves and stems were estimated at peak standing biomass, in plots that were either fertilized and intensively grazed or unfertilized and moderately grazed. Several traits were measured on living and senesced organs: fibre content, dry matter content and nitrogen, phosphorus and tannin concentrations. Digestibility was positively related to decomposability, both properties being influenced in the same direction by management regime, organ and growth forms. Digestibility of leaves and stems was negatively related to their fibre concentrations, and positively related to their nitrogen concentration. Decomposability was more strongly related to traits measured on living organs than on litter. Digestibility and decomposition were governed by similar structural traits, in particular fibre concentration, affecting both herbivores and micro-organisms through the afterlife effects. This study contributes to a better understanding of the interspecific relationships between forage quality and litter decomposition in leaves and stems and demonstrates the key role these traits play in the link between plant and soil via herbivory and decomposition. Fibre concentration and dry matter content can be considered as good predictors of both digestibility and decomposability. © The Author(s) 2018. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  2. Integrated boiler, superheater, and decomposer for sulfuric acid decomposition

    DOEpatents

    Moore, Robert [Edgewood, NM; Pickard, Paul S [Albuquerque, NM; Parma, Jr., Edward J.; Vernon, Milton E [Albuquerque, NM; Gelbard, Fred [Albuquerque, NM; Lenard, Roger X [Edgewood, NM

    2010-01-12

    A method and apparatus, constructed of ceramics and other corrosion resistant materials, for decomposing sulfuric acid into sulfur dioxide, oxygen and water using an integrated boiler, superheater, and decomposer unit comprising a bayonet-type, dual-tube, counter-flow heat exchanger with a catalytic insert and a central baffle to increase recuperation efficiency.

  3. Procedures for Decomposing a Redox Reaction into Half-Reaction

    ERIC Educational Resources Information Center

    Fishtik, Ilie; Berka, Ladislav H.

    2005-01-01

    A simple algorithm for a complete enumeration of the possible ways a redox reaction (RR) might be uniquely decomposed into half-reactions (HRs) using the response reactions (RERs) formalism is presented. A complete enumeration of the possible ways a RR may be decomposed into HRs is equivalent to a complete enumeration of stoichiometrically…
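    For a concrete, textbook-level illustration of what one such decomposition into half-reactions looks like (this example is generic and is not drawn from the RERs formalism itself):

    \[
    \begin{aligned}
    \text{overall RR:}\quad & \mathrm{Zn + Cu^{2+} \longrightarrow Zn^{2+} + Cu} \\
    \text{oxidation HR:}\quad & \mathrm{Zn \longrightarrow Zn^{2+} + 2\,e^-} \\
    \text{reduction HR:}\quad & \mathrm{Cu^{2+} + 2\,e^- \longrightarrow Cu}
    \end{aligned}
    \]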

  4. Evaluation of Methods for Multidisciplinary Design Optimization (MDO). Part 2

    NASA Technical Reports Server (NTRS)

    Kodiyalam, Srinivas; Yuan, Charles; Sobieski, Jaroslaw (Technical Monitor)

    2000-01-01

    A new MDO method, BLISS, and two variants of the method, BLISS/RS and BLISS/S, have been implemented using iSIGHT's scripting language and evaluated in this report on multidisciplinary problems. All of these methods are based on decomposing a modular system optimization into several subtask optimizations, which may be executed concurrently, and a system-level optimization that coordinates the subtask optimizations. The BLISS method and its variants are well suited for exploiting the concurrent processing capabilities of a multiprocessor machine. Several steps, including the local sensitivity analysis, local optimization, and response surface construction and updating, are all ideally suited for concurrent processing. Needless to say, algorithms that can effectively exploit the concurrent processing capabilities of compute servers will be a key requirement for solving large-scale industrial design problems, such as the automotive vehicle problem detailed in Section 3.4.

  5. A mesh gradient technique for numerical optimization

    NASA Technical Reports Server (NTRS)

    Willis, E. A., Jr.

    1973-01-01

    A class of successive-improvement optimization methods in which directions of descent are defined in the state space along each trial trajectory is considered. The given problem is first decomposed into two discrete levels by imposing mesh points. Level 1 consists of running optimal subarcs between each successive pair of mesh points. For normal systems, these optimal two-point boundary value problems can be solved by following a routine prescription if the mesh spacing is sufficiently close. A spacing criterion is given. Under appropriate conditions, the criterion value depends only on the coordinates of the mesh points, and its gradient with respect to those coordinates may be defined by interpreting the adjoint variables as partial derivatives of the criterion value function. In level 2, the gradient data is used to generate improvement steps or search directions in the state space which satisfy the boundary values and constraints of the given problem.

  6. Competitive Facility Location with Fuzzy Random Demands

    NASA Astrophysics Data System (ADS)

    Uno, Takeshi; Katagiri, Hideki; Kato, Kosuke

    2010-10-01

    This paper proposes a new location problem for competitive facilities, e.g. shops, with uncertainty and vagueness in the demands for the facilities in a plane. By representing the demands for facilities as fuzzy random variables, the location problem can be formulated as a fuzzy random programming problem. To solve the fuzzy random programming problem, the α-level sets of the fuzzy numbers are first used to transform it into a stochastic programming problem; secondly, by using their expectations and variances, it can be reformulated as a deterministic programming problem. After showing that one of the optimal solutions can be found by solving 0-1 programming problems, a solution method is proposed by improving the tabu search algorithm with strategic oscillation. The efficiency of the proposed method is shown by applying it to numerical examples of facility location problems.

  7. Hybrid Multiscale Finite Volume method for multiresolution simulations of flow and reactive transport in porous media

    NASA Astrophysics Data System (ADS)

    Barajas-Solano, D. A.; Tartakovsky, A. M.

    2017-12-01

    We present a multiresolution method for the numerical simulation of flow and reactive transport in porous, heterogeneous media, based on the hybrid Multiscale Finite Volume (h-MsFV) algorithm. The h-MsFV algorithm allows us to couple high-resolution (fine scale) flow and transport models with lower resolution (coarse) models to locally refine both spatial resolution and transport models. The fine scale problem is decomposed into various "local" problems solved independently in parallel and coordinated via a "global" problem. This global problem is then coupled with the coarse model to strictly ensure domain-wide coarse-scale mass conservation. The proposed method provides an alternative to adaptive mesh refinement (AMR), due to its capacity to rapidly refine spatial resolution beyond what's possible with state-of-the-art AMR techniques, and the capability to locally swap transport models. We illustrate our method by applying it to groundwater flow and reactive transport of multiple species.

  8. A linear decomposition method for large optimization problems. Blueprint for development

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.

    1982-01-01

    A method is proposed for decomposing large optimization problems encountered in the design of engineering systems such as an aircraft into a number of smaller subproblems. The decomposition is achieved by organizing the problem and the subordinated subproblems in a tree hierarchy and optimizing each subsystem separately. Coupling of the subproblems is accounted for by subsequent optimization of the entire system based on sensitivities of the suboptimization problem solutions at each level of the tree to variables of the next higher level. A formalization of the procedure suitable for computer implementation is developed and the state of readiness of the implementation building blocks is reviewed showing that the ingredients for the development are on the shelf. The decomposition method is also shown to be compatible with the natural human organization of the design process of engineering systems. The method is also examined with respect to the trends in computer hardware and software progress to point out that its efficiency can be amplified by network computing using parallel processors.

  9. Coordinated Platoon Routing in a Metropolitan Network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larson, Jeffrey; Munson, Todd; Sokolov, Vadim

    2016-10-10

    Platooning vehicles—connected and automated vehicles traveling with small intervehicle distances—use less fuel because of reduced aerodynamic drag. Given a network defined by vertex and edge sets and a set of vehicles with origin/destination nodes/times, we model and solve the combinatorial optimization problem of coordinated routing of vehicles in a manner that routes them to their destination on time while using the least amount of fuel. Common approaches decompose the platoon coordination and vehicle routing into separate problems. Our model addresses both problems simultaneously to obtain the best solution. We use modern modeling techniques and constraints implied from analyzing the platoon routing problem to address larger numbers of vehicles and larger networks than previously considered. While the numerical method used is unable to certify optimality for candidate solutions to all networks and parameters considered, we obtain excellent solutions in approximately one minute for much larger networks and vehicle sets than previously considered in the literature.

  10. System for thermochemical hydrogen production

    DOEpatents

    Werner, R.W.; Galloway, T.R.; Krikorian, O.H.

    1981-05-22

    Method and apparatus are described for Joule boosting an SO3 decomposer, using electrical instead of thermal energy to heat the reactants of the high-temperature SO3 decomposition step of a thermochemical hydrogen production process driven by a tandem mirror reactor. Joule boosting the decomposer to a sufficiently high temperature from a lower temperature heat source eliminates the need for expensive catalysts and reduces the temperature and consequent materials requirements for the reactor blanket. A particular decomposer design utilizes electrically heated silicon carbide rods, at a temperature of 1250 K, to decompose a cross flow of SO3 gas.

  11. Numerical Error Estimation with UQ

    NASA Astrophysics Data System (ADS)

    Ackmann, Jan; Korn, Peter; Marotzke, Jochem

    2014-05-01

    Ocean models are still in need of means to quantify model errors, which are inevitably made when running numerical experiments. The total model error can formally be decomposed into two parts, the formulation error and the discretization error. The formulation error arises from the continuous formulation of the model not fully describing the studied physical process. The discretization error arises from having to solve a discretized model instead of the continuously formulated model. Our work on error estimation is concerned with the discretization error. Given a solution of a discretized model, our general problem statement is to find a way to quantify the uncertainties due to discretization in physical quantities of interest (diagnostics), which are frequently used in Geophysical Fluid Dynamics. The approach we use to tackle this problem is called the "Goal Error Ensemble method". The basic idea of the Goal Error Ensemble method is that errors in diagnostics can be translated into a weighted sum of local model errors, which makes it conceptually based on the Dual Weighted Residual method from Computational Fluid Dynamics. In contrast to the Dual Weighted Residual method, these local model errors are not considered deterministically but are interpreted as local model uncertainty and described stochastically by a random process. The parameters for the random process are tuned with high-resolution near-initial model information. However, the original Goal Error Ensemble method, introduced in [1], was successfully evaluated only in the case of inviscid flows without lateral boundaries in a shallow-water framework and is hence only of limited use in a numerical ocean model. Our work consists of extending the method to bounded, viscous flows in a shallow-water framework. As our numerical model, we use the ICON-Shallow-Water model. In viscous flows our high-resolution information is dependent on the viscosity parameter, making our uncertainty measures viscosity-dependent. We will show that we can choose a sensible parameter by using the Reynolds number as a criterion. Another topic we will discuss is the choice of the underlying distribution of the random process, which is especially important in the presence of lateral boundaries. We will present resulting error estimates for different height- and velocity-based diagnostics applied to the Munk gyre experiment. References: [1] F. Rauser, Error Estimation in Geophysical Fluid Dynamics through Learning, PhD thesis, IMPRS-ESM, Hamburg, 2010. [2] F. Rauser, J. Marotzke, and P. Korn, Ensemble-type numerical uncertainty quantification from single model integrations, SIAM/ASA Journal on Uncertainty Quantification, submitted.

  12. Bi-spectrum based-EMD applied to the non-stationary vibration signals for bearing faults diagnosis.

    PubMed

    Saidi, Lotfi; Ali, Jaouher Ben; Fnaiech, Farhat

    2014-09-01

    Empirical mode decomposition (EMD) has been widely applied to analyze vibration signal behavior for bearing failure detection. Vibration signals are almost always non-stationary, since bearings are inherently dynamic (e.g., speed and load conditions change over time). By using EMD, the complicated non-stationary vibration signal is decomposed into a number of stationary intrinsic mode functions (IMFs) based on the local characteristic time scale of the signal. The bi-spectrum, a third-order statistic, helps to identify phase-coupling effects; it is theoretically zero for Gaussian noise and flat for non-Gaussian white noise, so bi-spectrum analysis is insensitive to random noise, which is useful for detecting faults in induction machines. Utilizing the advantages of EMD and the bi-spectrum, this article proposes a joint method for detecting such faults, called bi-spectrum based EMD (BSEMD). First, original vibration signals collected from accelerometers are decomposed by EMD and a set of IMFs is produced. Then, the IMF signals are analyzed via the bi-spectrum to detect outer-race bearing defects. The procedure is illustrated with experimental bearing vibration data. The experimental results show that BSEMD techniques can effectively diagnose bearing failures. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
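    A compact sketch of the two ingredients of such a pipeline follows: EMD via the third-party PyEMD package (an assumption; pip name EMD-signal) and a crude direct bispectrum estimate averaged over segments. The synthetic vibration record, FFT length, and IMF selection are illustrative and do not reproduce the paper's experimental setup.

```python
import numpy as np
from PyEMD import EMD   # third-party package (pip install EMD-signal); assumed available

def bispectrum(x, nfft=128):
    """Crude direct bispectrum estimate: average X(f1) X(f2) conj(X(f1+f2)) over segments."""
    x = np.asarray(x, dtype=float)
    nseg = len(x) // nfft
    B = np.zeros((nfft // 2, nfft // 2), dtype=complex)
    for s in range(nseg):
        seg = x[s * nfft:(s + 1) * nfft]
        X = np.fft.fft(seg - seg.mean())
        for f1 in range(nfft // 2):
            for f2 in range(nfft // 2):
                B[f1, f2] += X[f1] * X[f2] * np.conj(X[f1 + f2])
    return np.abs(B) / max(nseg, 1)

# decompose a (synthetic) vibration record into IMFs, then inspect an IMF's bispectrum
fs = 12000
t = np.arange(0, 1, 1 / fs)
vibration = np.sin(2 * np.pi * 157 * t) + 0.5 * np.random.randn(len(t))  # stand-in record
imfs = EMD().emd(vibration)
B = bispectrum(imfs[0])    # phase-coupling signature of the first IMF
```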

  13. A Distributed Approach to System-Level Prognostics

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Bregon, Anibal; Roychoudhury, Indranil

    2012-01-01

    Prognostics, which deals with predicting the remaining useful life of components, subsystems, and systems, is a key technology for systems health management that leads to improved safety and reliability with reduced costs. The prognostics problem is often approached from a component-centric view. However, in most cases, it is not specifically component lifetimes that are important, but, rather, the lifetimes of the systems in which these components reside. The system-level prognostics problem can be quite difficult due to the increased scale and scope of the prognostics problem and the relative lack of scalability and efficiency of typical prognostics approaches. In order to address these issues, we develop a distributed solution to the system-level prognostics problem, based on the concept of structural model decomposition. The system model is decomposed into independent submodels. Independent local prognostics subproblems are then formed based on these local submodels, resulting in a scalable, efficient, and flexible distributed approach to the system-level prognostics problem. We provide a formulation of the system-level prognostics problem and demonstrate the approach on a four-wheeled rover simulation testbed. The results show that the system-level prognostics problem can be accurately and efficiently solved in a distributed fashion.

  14. Automatic detection of apical roots in oral radiographs

    NASA Astrophysics Data System (ADS)

    Wu, Yi; Xie, Fangfang; Yang, Jie; Cheng, Erkang; Megalooikonomou, Vasileios; Ling, Haibin

    2012-03-01

    The apical root regions play an important role in the analysis and diagnosis of many oral diseases. Automatic detection of such regions is consequently the first step toward computer-aided diagnosis of these diseases. In this paper we propose an automatic method for periapical root region detection using state-of-the-art machine learning approaches. Specifically, we have adapted the AdaBoost classifier for apical root detection. One challenge in the task is the lack of training cases, especially diseased ones. To handle this problem, we boost the training set by including more root regions that are close to the annotated ones and decompose the original images to randomly generate negative samples. Based on these training samples, the AdaBoost algorithm in combination with Haar wavelets is utilized in this task to train an apical root detector. The learned detector usually generates a large number of true and false positives. In order to reduce the number of false positives, a confidence score for each candidate detection result is calculated for further purification. We first merge the detected regions by combining tightly overlapping candidate regions, and then we use the confidence scores from the AdaBoost detector to eliminate the false positives. The proposed method is evaluated on a dataset containing 39 annotated digitized oral X-ray images from 21 patients. The experimental results show that our approach can achieve promising detection accuracy.
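    A schematic of the training and sliding-window detection loop is sketched below, using sklearn's AdaBoostClassifier on flattened patch intensities as a stand-in for the Haar-wavelet features described above; the patch size, stride, confidence threshold, and training arrays are hypothetical.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

PATCH = 24  # hypothetical window size in pixels

def train_detector(pos_patches, neg_patches):
    """pos_patches / neg_patches: arrays of shape (n, PATCH, PATCH) with root / non-root patches."""
    X = np.vstack([pos_patches.reshape(len(pos_patches), -1),
                   neg_patches.reshape(len(neg_patches), -1)])
    y = np.concatenate([np.ones(len(pos_patches)), np.zeros(len(neg_patches))])
    return AdaBoostClassifier(n_estimators=200).fit(X, y)

def detect(image, clf, stride=8, min_conf=0.7):
    """Slide a window over the radiograph; keep high-confidence candidates for later merging."""
    candidates = []
    h, w = image.shape
    for r in range(0, h - PATCH, stride):
        for c in range(0, w - PATCH, stride):
            patch = image[r:r + PATCH, c:c + PATCH].reshape(1, -1)
            conf = clf.predict_proba(patch)[0, 1]   # confidence score used for purification
            if conf >= min_conf:
                candidates.append((r, c, conf))
    return candidates
```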

  15. SNR enhancement for downhole microseismic data based on scale classification shearlet transform

    NASA Astrophysics Data System (ADS)

    Li, Juan; Ji, Shuo; Li, Yue; Qian, Zhihong; Lu, Weili

    2018-06-01

    The shearlet transform (ST) can be effective in 2D signal processing due to its parabolic scaling, high directional sensitivity, and optimal sparsity. The ST combined with thresholding has been successfully applied to suppress random noise. However, because of the low magnitude and high frequency of a downhole microseismic signal, the coefficient values of valid signals and noise are similar in the shearlet domain. As a result, it is difficult to use for denoising. In this paper, we present a scale-classification ST to solve this problem. The ST is used to decompose noisy microseismic data into several scales. By analyzing the spectrum and energy distribution of the shearlet coefficients of microseismic data, we divide the scales into two types: low-frequency scales, which contain less useful signal, and high-frequency scales, which contain more useful signal. After classification, we use two different methods to deal with the coefficients on the different scales. For the low-frequency scales, the noise is attenuated using a thresholding method. As for the high-frequency scales, we propose to use a non-local means filter based on a generalized Gaussian distribution model, which takes advantage of the temporal and spatial similarity of microseismic data. The experimental results on both synthetic records and field data illustrate that our proposed method preserves the useful components and attenuates the noise well.

  16. From nonlinear Schrödinger hierarchy to some (2+1)-dimensional nonlinear pseudodifferential equations

    NASA Astrophysics Data System (ADS)

    Yang, Xiao; Du, Dianlou

    2010-08-01

    The Poisson structure on C^N×R^N is introduced to give the Hamiltonian system associated with a spectral problem which yields the nonlinear Schrödinger (NLS) hierarchy. The Hamiltonian system is proven to be Liouville integrable. Some (2+1)-dimensional equations, including the NLS equation, the Kadomtsev-Petviashvili I (KPI) equation, the coupled KPI equation, and the modified Kadomtsev-Petviashvili (mKP) equation, are decomposed into Hamiltonian flows via the NLS hierarchy. The algebraic curve, Abel-Jacobi coordinates, and Riemann-Jacobi inversion are used to obtain the algebro-geometric solutions of these equations.

  17. Earth observations taken by the Expedition Seven crew

    NASA Image and Video Library

    2003-10-11

    ISS007-E-17038 (11 October 2003) --- This view featuring a close-up of the Salton Sea was taken by an Expedition 7 crewmember onboard the International Space Station (ISS). The image provides detail of the structure of the algal bloom. These blooms continue to be a problem for the Salton Sea. They are caused by high concentrations of nutrients, especially nitrogen and phosphorus, which drain into the basin from agricultural run-off. As the algae die and decompose, oxygen levels in the sea drop, causing fish kills and hazardous conditions for other wildlife.

  18. Crossing symmetry in alpha space

    NASA Astrophysics Data System (ADS)

    Hogervorst, Matthijs; van Rees, Balt C.

    2017-11-01

    We initiate the study of the conformal bootstrap using Sturm-Liouville theory, specializing to four-point functions in one-dimensional CFTs. We do so by decomposing conformal correlators using a basis of eigenfunctions of the Casimir which are labeled by a complex number α. This leads to a systematic method for computing conformal block decompositions. Analyzing bootstrap equations in alpha space turns crossing symmetry into an eigenvalue problem for an integral operator K. The operator K is closely related to the Wilson transform, and some of its eigenfunctions can be found in closed form.

  19. Research on Computer Aided Innovation Model of Weapon Equipment Requirement Demonstration

    NASA Astrophysics Data System (ADS)

    Li, Yong; Guo, Qisheng; Wang, Rui; Li, Liang

    Firstly, in order to overcome the shortcomings of using AD or TRIZ alone and to solve the problems currently existing in weapon equipment requirement demonstration, the paper constructs a method system for weapon equipment requirement demonstration combining QFD, AD, TRIZ, and FA. Then, we construct a CAI model framework for weapon equipment requirement demonstration, which includes a requirement decomposition model, a requirement mapping model, and a requirement plan optimization model. Finally, we construct the computer-aided innovation model of weapon equipment requirement demonstration and develop CAI software for equipment requirement demonstration.

  20. Process for converting magnesium fluoride to calcium fluoride

    DOEpatents

    Kreuzmann, A.B.; Palmer, D.A.

    1984-12-21

    This invention is a process for the conversion of magnesium fluoride to calcium fluoride whereby magnesium fluoride is decomposed by heating in the presence of calcium carbonate, calcium oxide or calcium hydroxide. Magnesium fluoride is a by-product of the reduction of uranium tetrafluoride to form uranium metal and has no known commercial use, thus its production creates a significant storage problem. The advantage of this invention is that the quality of calcium fluoride produced is sufficient to be used in the industrial manufacture of anhydrous hydrogen fluoride, steel mill flux or ceramic applications.
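    The abstract does not give explicit reaction equations; the balanced reactions below are a plausible reading of the described conversion, shown for the carbonate and oxide cases (the slaked-lime case is analogous, releasing water instead). Treat them as an illustrative interpretation, not a quotation from the patent.

    \[
    \mathrm{MgF_2 + CaCO_3 \;\xrightarrow{\;\Delta\;}\; CaF_2 + MgO + CO_2}
    \qquad\text{and}\qquad
    \mathrm{MgF_2 + CaO \;\xrightarrow{\;\Delta\;}\; CaF_2 + MgO}.
    \]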

  1. Do Nonnative Language Speakers "Chew the Fat" and "Spill the Beans" with Different Brain Hemispheres? Investigating Idiom Decomposability with the Divided Visual Field Paradigm

    ERIC Educational Resources Information Center

    Cieslicka, Anna B.

    2013-01-01

    The purpose of this study was to explore possible cerebral asymmetries in the processing of decomposable and nondecomposable idioms by fluent nonnative speakers of English. In the study, native language (Polish) and foreign language (English) decomposable and nondecomposable idioms were embedded in ambiguous (neutral) and unambiguous (biasing…

  2. Expected Fitness Gains of Randomized Search Heuristics for the Traveling Salesperson Problem.

    PubMed

    Nallaperuma, Samadhi; Neumann, Frank; Sudholt, Dirk

    2017-01-01

    Randomized search heuristics are frequently applied to NP-hard combinatorial optimization problems. The runtime analysis of randomized search heuristics has contributed tremendously to our theoretical understanding. Recently, randomized search heuristics have been examined regarding their achievable progress within a fixed-time budget. We follow this approach and present a fixed-budget analysis for an NP-hard combinatorial optimization problem. We consider the well-known Traveling Salesperson Problem (TSP) and analyze the fitness increase that randomized search heuristics are able to achieve within a given fixed-time budget. In particular, we analyze Manhattan and Euclidean TSP instances and Randomized Local Search (RLS), (1+1) EA and (1+[Formula: see text]) EA algorithms for the TSP in a smoothed complexity setting, and derive the lower bounds of the expected fitness gain for a specified number of generations.

  3. Decomposing potassium peroxychromate produces hydroxyl radical (.OH) that can peroxidize the unsaturated fatty acids of phospholipid dispersions.

    PubMed

    Edwards, J C; Quinn, P J

    1982-09-01

    The unsaturated fatty acyl residues of egg yolk lecithin are selectively removed when bilayer dispersions of the lipid are exposed to decomposing peroxychromate at pH 7.6 or pH 9.0. Mannitol (50 mM or 100 mM) partially prevents the oxidation of the phospholipid due to decomposing peroxychromate at pH 7.6, and the amount of lipid lost is inversely proportional to the concentration of mannitol. N,N-Dimethyl-p-nitrosoaniline, mixed with the lipid in a molar ratio of 1.3:1, completely prevents the oxidation of lipid due to decomposing peroxychromate at pH 9.0, but some linoleic acid is lost if the incubation is done at pH 7.6. If the concentration of this quench reagent is reduced tenfold, oxidation of linoleic acid by decomposing peroxychromate at pH 9.0 is observed. Hydrogen peroxide is capable of oxidizing the unsaturated fatty acids of lecithin dispersions. Catalase or boiled catalase (2 mg/ml) protects the lipid from oxidation due to decomposing peroxychromate at pH 7.6 to approximately the same extent, but their protective effect is believed to be due to the non-specific removal of .OH. It is concluded that .OH is the species responsible for the lipid oxidation caused by decomposing peroxychromate. This is consistent with the observed bleaching of N,N-dimethyl-p-nitrosoaniline and the formation of a characteristic paramagnetic .OH adduct of the spin trap, 5,5-dimethylpyrroline-1-oxide.

  4. A low-order model for wave propagation in random waveguides

    NASA Astrophysics Data System (ADS)

    Millet, Christophe; Bertin, Michael; Bouche, Daniel

    2014-11-01

    In numerical modeling of infrasound propagation in the atmosphere, the wind and temperature profiles are usually obtained by matching atmospheric models to empirical data and thus inevitably involve some random errors. In the present approach, the sound speed profiles are considered as random functions and the wave equation is solved using a reduced-order model, starting from the classical normal mode technique. We focus on the asymptotic behavior of the transmitted waves in the weakly heterogeneous regime (the coupling between the wave and the medium is weak), with a fixed number of propagating modes that can be obtained by rearranging the eigenvalues by decreasing Sobol indices. The most important feature of the stochastic approach lies in the fact that the model order can be computed to satisfy a given statistical accuracy whatever the frequency. The statistics of a transmitted broadband pulse are computed by decomposing the original pulse into a sum of modal pulses that can be described by a front pulse stabilization theory. The method is illustrated on two large-scale infrasound calibration experiments that were conducted at the Sayarim Military Range, Israel, in 2009 and 2011.

  5. Non-stationary least-squares complex decomposition for microseismic noise attenuation

    NASA Astrophysics Data System (ADS)

    Chen, Yangkang

    2018-06-01

    Microseismic data processing and imaging are crucial for subsurface real-time monitoring during the hydraulic fracturing process. Unlike active-source seismic events or large-scale earthquake events, a microseismic event is usually of very small magnitude, which makes its detection challenging. The main difficulty with microseismic data is its low signal-to-noise ratio. Because of the small energy difference between the effective microseismic signal and the ambient noise, the effective signals are usually buried in strong random noise. I propose a useful microseismic denoising algorithm that is based on decomposing a microseismic trace into an ensemble of components using least-squares inversion. Based on the predictive property of the useful microseismic event along the time direction, the random noise can be filtered out via least-squares fitting of multiple damping exponential components. The method is flexible and almost automated, since the only parameter that needs to be defined is the decomposition number. I use synthetic and real data examples to demonstrate the potential of the algorithm in processing complicated microseismic data sets.
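
    A minimal sketch of this least-squares idea, assuming NumPy and an illustrative dictionary of damped cosine/sine components (the frequencies, damping rates, and absence of regularization are assumptions for illustration, not the paper's exact parameterization):

```python
import numpy as np

def fit_damped_components(trace, dt, freqs, dampings):
    """Fit a 1-D trace with damped oscillatory components by linear least squares;
    the fitted part is taken as signal and the residual as random noise."""
    t = np.arange(len(trace)) * dt
    cols = []
    for f in freqs:
        for a in dampings:
            cols.append(np.exp(-a * t) * np.cos(2 * np.pi * f * t))
            cols.append(np.exp(-a * t) * np.sin(2 * np.pi * f * t))
    G = np.column_stack(cols)                     # dictionary of damped exponential components
    coef, *_ = np.linalg.lstsq(G, trace, rcond=None)
    denoised = G @ coef
    return denoised, trace - denoised             # (signal estimate, residual noise)
```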

  6. Contingency and statistical laws in replicate microbial closed ecosystems.

    PubMed

    Hekstra, Doeke R; Leibler, Stanislas

    2012-05-25

    Contingency, the persistent influence of past random events, pervades biology. To what extent, then, is each course of ecological or evolutionary dynamics unique, and to what extent are these dynamics subject to a common statistical structure? Addressing this question requires replicate measurements to search for emergent statistical laws. We establish a readily replicated microbial closed ecosystem (CES), sustaining its three species for years. We precisely measure the local population density of each species in many CES replicates, started from the same initial conditions and kept under constant light and temperature. The covariation among replicates of the three species densities acquires a stable structure, which could be decomposed into discrete eigenvectors, or "ecomodes." The largest ecomode dominates population density fluctuations around the replicate-average dynamics. These fluctuations follow simple power laws consistent with a geometric random walk. Thus, variability in ecological dynamics can be studied with CES replicates and described by simple statistical laws. Copyright © 2012 Elsevier Inc. All rights reserved.
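
    The "ecomode" construction described above is essentially an eigen-decomposition of the replicate-to-replicate covariance of species densities; a minimal sketch, assuming a NumPy array with replicates as rows and species as columns, is:

```python
import numpy as np

def ecomodes(densities):
    """Eigen-decompose the covariance of species densities across CES replicates.
    densities: array of shape (n_replicates, n_species), e.g. fluctuations around
    the replicate-average dynamics at one time point."""
    cov = np.cov(densities, rowvar=False)          # covariation among the species densities
    eigval, eigvec = np.linalg.eigh(cov)
    order = np.argsort(eigval)[::-1]               # largest 'ecomode' first
    return eigval[order], eigvec[:, order]
```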

  7. Robust distributed model predictive control of linear systems with structured time-varying uncertainties

    NASA Astrophysics Data System (ADS)

    Zhang, Langwen; Xie, Wei; Wang, Jingcheng

    2017-11-01

    In this work, synthesis of robust distributed model predictive control (MPC) is presented for a class of linear systems subject to structured time-varying uncertainties. By decomposing a global system into smaller dimensional subsystems, a set of distributed MPC controllers, instead of a centralised controller, are designed. To ensure the robust stability of the closed-loop system with respect to model uncertainties, distributed state feedback laws are obtained by solving a min-max optimisation problem. The design of robust distributed MPC is then transformed into solving a minimisation optimisation problem with linear matrix inequality constraints. An iterative online algorithm with adjustable maximum iteration is proposed to coordinate the distributed controllers to achieve a global performance. The simulation results show the effectiveness of the proposed robust distributed MPC algorithm.

  8. A look at scalable dense linear algebra libraries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dongarra, J.J.; Van de Geijn, R.A.; Walker, D.W.

    1992-01-01

    We discuss the essential design features of a library of scalable software for performing dense linear algebra computations on distributed memory concurrent computers. The square block scattered decomposition is proposed as a flexible and general-purpose way of decomposing most, if not all, dense matrix problems. An object-oriented interface to the library permits more portable applications to be written, and is easy to learn and use, since details of the parallel implementation are hidden from the user. Experiments on the Intel Touchstone Delta system with a prototype code that uses the square block scattered decomposition to perform LU factorization are presented and analyzed. It was found that the code was both scalable and efficient, performing at about 14 GFLOPS (double precision) for the largest problem considered.

  9. A look at scalable dense linear algebra libraries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dongarra, J.J.; Van de Geijn, R.A.; Walker, D.W.

    1992-08-01

    We discuss the essential design features of a library of scalable software for performing dense linear algebra computations on distributed memory concurrent computers. The square block scattered decomposition is proposed as a flexible and general-purpose way of decomposing most, if not all, dense matrix problems. An object-oriented interface to the library permits more portable applications to be written, and is easy to learn and use, since details of the parallel implementation are hidden from the user. Experiments on the Intel Touchstone Delta system with a prototype code that uses the square block scattered decomposition to perform LU factorization are presented and analyzed. It was found that the code was both scalable and efficient, performing at about 14 GFLOPS (double precision) for the largest problem considered.

  10. A Global Approach to the Optimal Trajectory Based on an Improved Ant Colony Algorithm for Cold Spray

    NASA Astrophysics Data System (ADS)

    Cai, Zhenhua; Chen, Tingyang; Zeng, Chunnian; Guo, Xueping; Lian, Huijuan; Zheng, You; Wei, Xiaoxu

    2016-12-01

    This paper is concerned with finding a global approach to obtain the shortest complete coverage trajectory on complex surfaces for cold spray applications. A slicing algorithm is employed to decompose the free-form complex surface into several small pieces of simple topological type. The problem of finding the optimal arrangement of the pieces is translated into a generalized traveling salesman problem (GTSP). Owing to its high searching capability and convergence performance, an improved ant colony algorithm is then used to solve the GTSP. Through off-line simulation, a robot trajectory is generated based on the optimized result. The approach is applied to coat real components with a complex surface by using the cold spray system with copper as the spraying material.
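
    For orientation, the sketch below is a bare-bones ant colony optimization for a plain symmetric TSP given a distance matrix; the generalized (GTSP) bookkeeping, the slicing of the free-form surface, and the paper's specific algorithmic improvements are omitted, and all parameter values are illustrative.

```python
import numpy as np

def ant_colony_tsp(dist, n_ants=20, n_iter=200, alpha=1.0, beta=3.0, rho=0.5, q=1.0, seed=0):
    """Basic ant colony optimization for a symmetric TSP distance matrix `dist`."""
    rng = np.random.default_rng(seed)
    n = len(dist)
    tau = np.ones((n, n))                          # pheromone levels
    eta = 1.0 / (dist + np.eye(n))                 # heuristic visibility (diagonal never used)
    best_tour, best_len = None, np.inf
    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            tour = [int(rng.integers(n))]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i = tour[-1]
                cand = np.array(sorted(unvisited))
                w = (tau[i, cand] ** alpha) * (eta[i, cand] ** beta)
                nxt = int(rng.choice(cand, p=w / w.sum()))
                tour.append(nxt)
                unvisited.discard(nxt)
            tours.append(tour)
        tau *= (1.0 - rho)                         # pheromone evaporation
        for tour in tours:
            length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
            if length < best_len:
                best_tour, best_len = tour, length
            for k in range(n):                     # deposit pheromone along the tour
                tau[tour[k], tour[(k + 1) % n]] += q / length
    return best_tour, best_len
```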

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ide, Toshiki; Hofmann, Holger F.; JST-CREST, Graduate School of Advanced Sciences of Matter, Hiroshima University, Kagamiyama 1-3-1, Higashi Hiroshima 739-8530

    The information encoded in the polarization of a single photon can be transferred to a remote location by two-channel continuous-variable quantum teleportation. However, the finite entanglement used in the teleportation causes random changes in photon number. If more than one photon appears in the output, the continuous-variable teleportation accidentally produces clones of the original input photon. In this paper, we derive the polarization statistics of the N-photon output components and show that they can be decomposed into an optimal cloning term and completely unpolarized noise. We find that the accidental cloning of the input photon is nearly optimal at experimentally feasible squeezing levels, indicating that the loss of polarization information is partially compensated by the availability of clones.

  12. Signal enhancement based on complex curvelet transform and complementary ensemble empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Dong, Lieqian; Wang, Deying; Zhang, Yimeng; Zhou, Datong

    2017-09-01

    Signal enhancement is a necessary step in seismic data processing. In this paper we utilize the complementary ensemble empirical mode decomposition (CEEMD) and complex curvelet transform (CCT) methods to separate the signal from random noise and thereby improve the signal-to-noise (S/N) ratio. Firstly, the original noisy data are decomposed into a series of intrinsic mode function (IMF) profiles with the aid of CEEMD. Then the noisy IMFs are transformed into the CCT domain. By choosing different thresholds based on the different noise levels of the IMF profiles, the noise in the original data can be suppressed. Finally, we illustrate the effectiveness of the approach on simulated and field datasets.

  13. Mechanisms of jamming in the Nagel-Schreckenberg model for traffic flow.

    PubMed

    Bette, Henrik M; Habel, Lars; Emig, Thorsten; Schreckenberg, Michael

    2017-01-01

    We study the Nagel-Schreckenberg cellular automata model for traffic flow by both simulations and analytical techniques. To better understand the nature of the jamming transition, we analyze the fraction of stopped cars P(v=0) as a function of the mean car density. We present a simple argument that yields an estimate for the free density where jamming occurs, and show satisfying agreement with simulation results. We demonstrate that the fraction of jammed cars P(v∈{0,1}) can be decomposed into the three factors (jamming rate, jam lifetime, and jam size) for which we derive, from random walk arguments, exponents that control their scaling close to the critical density.
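
    A compact simulation of the four Nagel-Schreckenberg update rules (acceleration, braking, random slowdown, movement) on a ring road, estimating the fraction of stopped cars P(v=0) at a given density, might look as follows; the parameter values and the transient cut are illustrative choices, not those of the paper.

```python
import numpy as np

def nagel_schreckenberg_p0(length=1000, density=0.2, vmax=5, p=0.25, steps=2000, seed=0):
    """Return the time-averaged fraction of stopped cars P(v=0) on a ring road."""
    rng = np.random.default_rng(seed)
    n_cars = int(density * length)
    pos = np.sort(rng.choice(length, size=n_cars, replace=False))
    vel = np.zeros(n_cars, dtype=int)
    stopped = []
    for t in range(steps):
        gaps = (np.roll(pos, -1) - pos - 1) % length         # empty cells to the car ahead
        vel = np.minimum(vel + 1, vmax)                      # 1. acceleration
        vel = np.minimum(vel, gaps)                          # 2. braking to avoid collisions
        vel = np.maximum(vel - (rng.random(n_cars) < p), 0)  # 3. random slowdown
        pos = (pos + vel) % length                           # 4. movement
        if t > steps // 2:                                   # discard the transient
            stopped.append(np.mean(vel == 0))
    return float(np.mean(stopped))
```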

  14. Mechanisms of jamming in the Nagel-Schreckenberg model for traffic flow

    NASA Astrophysics Data System (ADS)

    Bette, Henrik M.; Habel, Lars; Emig, Thorsten; Schreckenberg, Michael

    2017-01-01

    We study the Nagel-Schreckenberg cellular automata model for traffic flow by both simulations and analytical techniques. To better understand the nature of the jamming transition, we analyze the fraction of stopped cars P(v=0) as a function of the mean car density. We present a simple argument that yields an estimate for the free density where jamming occurs, and show satisfying agreement with simulation results. We demonstrate that the fraction of jammed cars P(v∈{0,1}) can be decomposed into the three factors (jamming rate, jam lifetime, and jam size) for which we derive, from random walk arguments, exponents that control their scaling close to the critical density.

  15. Three-dimensional scene encryption and display based on computer-generated holograms.

    PubMed

    Kong, Dezhao; Cao, Liangcai; Jin, Guofan; Javidi, Bahram

    2016-10-10

    An optical encryption and display method for a three-dimensional (3D) scene is proposed based on computer-generated holograms (CGHs) using a single phase-only spatial light modulator. The 3D scene is encoded as one complex Fourier CGH. The Fourier CGH is then decomposed into two phase-only CGHs with random distributions by the vector stochastic decomposition algorithm. The two CGHs are interleaved as one final phase-only CGH for optical encryption and reconstruction. The proposed method can support high-level nonlinear optical 3D scene security and complex amplitude modulation of the optical field. The exclusive phase key offers strong resistance to decryption attacks. Experimental results demonstrate the validity of the proposed method.

  16. Evaluating clustering methods within the Artificial Ecosystem Algorithm and their application to bike redistribution in London.

    PubMed

    Adham, Manal T; Bentley, Peter J

    2016-08-01

    This paper proposes and evaluates a solution to the truck redistribution problem prominent in London's Santander Cycle scheme. Due to the complexity of this NP-hard combinatorial optimisation problem, no efficient optimisation techniques are known to solve the problem exactly. This motivates our use of the heuristic Artificial Ecosystem Algorithm (AEA) to find good solutions in a reasonable amount of time. The AEA is designed to take advantage of highly distributed computer architectures and adapt to changing problems. In the AEA a problem is first decomposed into its relative sub-components; they then evolve solution building blocks that fit together to form a single optimal solution. Three variants of the AEA centred on evaluating clustering methods are presented: the baseline AEA, the community-based AEA which groups stations according to journey flows, and the Adaptive AEA which actively modifies clusters to cater for changes in demand. We applied these AEA variants to the redistribution problem prominent in bike share schemes (BSS). The AEA variants are empirically evaluated using historical data from Santander Cycles to validate the proposed approach and prove its potential effectiveness. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  17. Parameter identification using a creeping-random-search algorithm

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.

    1971-01-01

    A creeping-random-search algorithm is applied to different types of problems in the field of parameter identification. The studies are intended to demonstrate that a random-search algorithm can be applied successfully to these various problems, which often cannot be handled by conventional deterministic methods, and, also, to introduce methods that speed convergence to an extremal of the problem under investigation. Six two-parameter identification problems with analytic solutions are solved, and two application problems are discussed in some detail. Results of the study show that a modified version of the basic creeping-random-search algorithm chosen does speed convergence in comparison with the unmodified version. The results also show that the algorithm can successfully solve problems that contain limits on state or control variables, inequality constraints (both independent and dependent, and linear and nonlinear), or stochastic models.
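
    A generic creeping-random-search step, in the spirit described above, is sketched below; the Gaussian perturbation and the particular step-size adaptation rule are assumptions of this sketch, not necessarily the exact variant studied in the report.

```python
import numpy as np

def creeping_random_search(cost, x0, sigma0=1.0, grow=1.2, shrink=0.9,
                           max_evals=5000, seed=0):
    """Creeping (local) random search: perturb the current parameter estimate with a
    small Gaussian step, keep the step only if the cost improves, and adapt the step size."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = cost(x)
    sigma = sigma0
    for _ in range(max_evals):
        cand = x + sigma * rng.standard_normal(x.shape)
        fc = cost(cand)
        if fc < fx:
            x, fx = cand, fc
            sigma *= grow                       # successful step: creep a little faster
        else:
            sigma = max(sigma * shrink, 1e-12)  # failed step: tighten the search
    return x, fx

# example (illustrative): identify two model parameters from noisy observations
# cost = lambda theta: np.sum((observed - model(theta)) ** 2)
```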

  18. A complexity theory model in science education problem solving: random walks for working memory and mental capacity.

    PubMed

    Stamovlasis, Dimitrios; Tsaparlis, Georgios

    2003-07-01

    The present study examines the role of limited human channel capacity from a science education perspective. A model of science problem solving has been previously validated by applying concepts and tools of complexity theory (the working memory, random walk method). The method correlated the subjects' rank-order achievement scores in organic-synthesis chemistry problems with the subjects' working memory capacity. In this work, we apply the same nonlinear approach to a different data set, taken from chemical-equilibrium problem solving. In contrast to the organic-synthesis problems, these problems are algorithmic, require numerical calculations, and have a complex logical structure. As a result, these problems cause deviations from the model, and affect the pattern observed with the nonlinear method. In addition to Baddeley's working memory capacity, the Pascual-Leone's mental (M-) capacity is examined by the same random-walk method. As the complexity of the problem increases, the fractal dimension of the working memory random walk demonstrates a sudden drop, while the fractal dimension of the M-capacity random walk decreases in a linear fashion. A review of the basic features of the two capacities and their relation is included. The method and findings have consequences for problem solving not only in chemistry and science education, but also in other disciplines.
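
    One plausible, purely illustrative reading of the random-walk construction is to map a rank-ordered sequence of achievement scores onto ±1 steps and estimate the fractal dimension of the resulting walk, for example with Higuchi's estimator; the thresholding, the mapping, and the choice of estimator are all assumptions here rather than the authors' exact procedure.

```python
import numpy as np

def score_walk(ranked_scores, threshold):
    """Map a rank-ordered sequence of achievement scores onto a 1-D random walk:
    step +1 when the score exceeds the threshold, -1 otherwise."""
    steps = np.where(np.asarray(ranked_scores) > threshold, 1, -1)
    return np.cumsum(steps)

def higuchi_fd(walk, k_max=10):
    """Higuchi estimate of the fractal dimension of the walk curve."""
    walk = np.asarray(walk, dtype=float)
    N = len(walk)
    ks, Ls = [], []
    for k in range(1, k_max + 1):
        Lk = []
        for m in range(k):
            idx = np.arange(m, N, k)
            if len(idx) < 2:
                continue
            diff = np.abs(np.diff(walk[idx])).sum()
            norm = (N - 1) / ((len(idx) - 1) * k)   # Higuchi normalization factor
            Lk.append(diff * norm / k)
        ks.append(np.log(1.0 / k))
        Ls.append(np.log(np.mean(Lk)))
    slope, _ = np.polyfit(ks, Ls, 1)                # L(k) ~ k^(-D)  =>  slope = D
    return slope
```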

  19. Acute toxicity of live and decomposing green alga Ulva ( Enteromorpha) prolifera to abalone Haliotis discus hannai

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Yu, Rencheng; Zhou, Mingjiang

    2011-05-01

    From 2007 to 2009, large-scale blooms of green algae (the so-called "green tides") occurred every summer in the Yellow Sea, China. In June 2008, huge amounts of floating green algae accumulated along the coast of Qingdao and led to mass mortality of cultured abalone and sea cucumber. However, the mechanism for the mass mortality of cultured animals remains undetermined. This study examined the toxic effects of Ulva (Enteromorpha) prolifera, the causative species of the green tides in the Yellow Sea during the last three years. The acute toxicity of fresh culture medium and decomposing algal effluent of U. prolifera to the cultured abalone Haliotis discus hannai was tested. It was found that both fresh culture medium and decomposing algal effluent had toxic effects on abalone, and the decomposing algal effluent was more toxic than the fresh culture medium. The acute toxicity of the decomposing algal effluent could be attributed to the ammonia and sulfide present in the effluent, as well as to the hypoxia caused by the decomposition process.

  20. Plant–herbivore–decomposer stoichiometric mismatches and nutrient cycling in ecosystems

    PubMed Central

    Cherif, Mehdi; Loreau, Michel

    2013-01-01

    Plant stoichiometry is thought to have a major influence on how herbivores affect nutrient availability in ecosystems. Most conceptual models predict that plants with high nutrient contents increase nutrient excretion by herbivores, in turn raising nutrient availability. To test this hypothesis, we built a stoichiometrically explicit model that includes a simple but thorough description of the processes of herbivory and decomposition. Our results challenge traditional views of herbivore impacts on nutrient availability in many ways. They show that the relationship between plant nutrient content and the impact of herbivores predicted by conceptual models holds only at high plant nutrient contents. At low plant nutrient contents, the impact of herbivores is mediated by the mineralization/immobilization of nutrients by decomposers and by the type of resource limiting the growth of decomposers. Both parameters are functions of the mismatch between plant and decomposer stoichiometries. Our work provides new predictions about the impacts of herbivores on ecosystem fertility that depend on critical interactions between plant, herbivore and decomposer stoichiometries in ecosystems. PMID:23303537

  1. Gas Sensitivity and Sensing Mechanism Studies on Au-Doped TiO2 Nanotube Arrays for Detecting SF6 Decomposed Components

    PubMed Central

    Zhang, Xiaoxing; Yu, Lei; Tie, Jing; Dong, Xingchen

    2014-01-01

    The analysis of SF6 decomposed component gases is an efficient diagnostic approach for detecting partial discharge in gas-insulated switchgear (GIS) and thus assessing the operating state of power equipment. This paper applied an Au-doped TiO2 nanotube array sensor (Au-TiO2 NTAs) to detect SF6 decomposed components. The electrochemical constant-potential method was adopted in the fabrication of the Au-TiO2 NTAs, and a series of experiments was conducted on the characteristic SF6 decomposed gases for a thorough investigation of the sensing performance. The sensing characteristic curves of intrinsic and Au-doped TiO2 NTAs were compared to study the mechanism of the gas sensing response. The results indicated that the doped Au could change the gas sensing selectivity of the TiO2 nanotube arrays toward SF6 decomposed components, as well as reduce the working temperature of the TiO2 NTAs. PMID:25330053

  2. Parallel algorithms for placement and routing in VLSI design. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Brouwer, Randall Jay

    1991-01-01

    The computational requirements for high quality synthesis, analysis, and verification of very large scale integration (VLSI) designs have rapidly increased with the fast growing complexity of these designs. Research in the past has focused on the development of heuristic algorithms, special purpose hardware accelerators, or parallel algorithms for the numerous design tasks to decrease the time required for solution. Two new parallel algorithms are proposed for two VLSI synthesis tasks, standard cell placement and global routing. The first algorithm, a parallel algorithm for global routing, uses hierarchical techniques to decompose the routing problem into independent routing subproblems that are solved in parallel. Results are then presented which compare the routing quality to the results of other published global routers and which evaluate the speedups attained. The second algorithm, a parallel algorithm for cell placement and global routing, hierarchically integrates a quadrisection placement algorithm, a bisection placement algorithm, and the previous global routing algorithm. Unique partitioning techniques are used to decompose the various stages of the algorithm into independent tasks which can be evaluated in parallel. Finally, results are presented which evaluate the various algorithm alternatives and compare the algorithm performance to other placement programs. Measurements are presented on the parallel speedups available.

  3. Coordinated platooning with multiple speeds

    DOE PAGES

    Luo, Fengqiao; Larson, Jeffrey; Munson, Todd

    2018-03-22

    In a platoon, vehicles travel one after another with small intervehicle distances; trailing vehicles in a platoon save fuel because they experience less aerodynamic drag. This work presents a coordinated platooning model with multiple speed options that integrates scheduling, routing, speed selection, and platoon formation/dissolution in a mixed-integer linear program that minimizes the total fuel consumed by a set of vehicles while traveling between their respective origins and destinations. The performance of this model is numerically tested on a grid network and the Chicago-area highway network. We find that the fuel-savings factor of a multivehicle system significantly depends on the time each vehicle is allowed to stay in the network; this time affects vehicles’ available speed choices, possible routes, and the amount of time for coordinating platoon formation. For problem instances with a large number of vehicles, we propose and test a heuristic decomposed approach that applies a clustering algorithm to partition the set of vehicles and then routes each group separately. When the set of vehicles is large and the available computational time is small, the decomposed approach finds significantly better solutions than does the full model.

  4. Coordinated platooning with multiple speeds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Fengqiao; Larson, Jeffrey; Munson, Todd

    In a platoon, vehicles travel one after another with small intervehicle distances; trailing vehicles in a platoon save fuel because they experience less aerodynamic drag. This work presents a coordinated platooning model with multiple speed options that integrates scheduling, routing, speed selection, and platoon formation/dissolution in a mixed-integer linear program that minimizes the total fuel consumed by a set of vehicles while traveling between their respective origins and destinations. The performance of this model is numerically tested on a grid network and the Chicago-area highway network. We find that the fuel-savings factor of a multivehicle system significantly depends on the time each vehicle is allowed to stay in the network; this time affects vehicles’ available speed choices, possible routes, and the amount of time for coordinating platoon formation. For problem instances with a large number of vehicles, we propose and test a heuristic decomposed approach that applies a clustering algorithm to partition the set of vehicles and then routes each group separately. When the set of vehicles is large and the available computational time is small, the decomposed approach finds significantly better solutions than does the full model.

  5. Using Computer-Generated Random Numbers to Calculate the Lifetime of a Comet.

    ERIC Educational Resources Information Center

    Danesh, Iraj

    1991-01-01

    An educational technique to calculate the lifetime of a comet using software-generated random numbers is introduced to undergraduate physics and astronomy students. Discussed are the generation and eligibility of the required random numbers, background literature related to the problem, and the solution to the problem using random numbers.…
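
    A toy version of this classic exercise is sketched below, under the common assumption that the comet's inverse semi-major axis z = 1/a performs a symmetric random walk at each perihelion passage and that the comet is lost once z ≤ 0 (ejection onto a hyperbolic orbit); the step size, starting value, and use of the median are illustrative choices only, not the article's prescription.

```python
import numpy as np

def comet_lifetime(z0=0.01, dz=0.005, max_passes=100_000, seed=0):
    """Number of perihelion passages before the orbital 'energy' variable z = 1/a
    random-walks below zero (i.e., the comet is ejected)."""
    rng = np.random.default_rng(seed)
    z = z0
    for n in range(1, max_passes + 1):
        z += dz * rng.choice([-1.0, 1.0])      # random energy kick from planetary perturbations
        if z <= 0:
            return n
    return max_passes                          # survived the whole simulation

# the mean hitting time of a symmetric walk diverges, so summarize with the median
lifetimes = [comet_lifetime(seed=s) for s in range(500)]
print("median lifetime (passages):", int(np.median(lifetimes)))
```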

  6. A new neural network model for solving random interval linear programming problems.

    PubMed

    Arjmandzadeh, Ziba; Safi, Mohammadreza; Nazemi, Alireza

    2017-05-01

    This paper presents a neural network model for solving random interval linear programming problems. The original problem involving random interval variable coefficients is first transformed into an equivalent convex second-order cone programming problem. A neural network model is then constructed for solving the obtained convex second-order cone problem. Employing a Lyapunov function approach, it is also shown that the proposed neural network model is stable in the sense of Lyapunov and is globally convergent to an exact satisfactory solution of the original problem. Several illustrative examples are solved in support of this technique. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Understanding and comparisons of different sampling approaches for the Fourier Amplitudes Sensitivity Test (FAST)

    PubMed Central

    Xu, Chonggang; Gertner, George

    2013-01-01

    Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, the FAST analysis is mainly confined to the estimation of partial variances contributed by the main effects of model parameters, but does not allow for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements. PMID:24143037
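
    For reference, a bare-bones version of the classic search-curve (traditional) FAST estimate of first-order sensitivity indices is sketched below; the choice of interference-free integer frequencies, the number of harmonics, and the model interface are left to the user and are assumptions of this sketch rather than prescriptions from the paper.

```python
import numpy as np

def fast_first_order(model, omegas, n_samples=10001, harmonics=4):
    """Traditional search-curve FAST: drive each parameter with its own integer
    frequency, Fourier-analyze the model output, and attribute the spectral power
    at each parameter's harmonics to that parameter's main effect."""
    omegas = np.asarray(omegas, dtype=int)
    s = np.pi * (2.0 * np.arange(n_samples) + 1.0 - n_samples) / n_samples  # s in (-pi, pi)
    X = 0.5 + np.arcsin(np.sin(np.outer(omegas, s))) / np.pi                # parameters in [0, 1]
    y = np.array([model(x) for x in X.T])
    k_max = harmonics * int(omegas.max())
    A = np.array([np.mean(y * np.cos(k * s)) for k in range(1, k_max + 1)])
    B = np.array([np.mean(y * np.sin(k * s)) for k in range(1, k_max + 1)])
    power = 2.0 * (A ** 2 + B ** 2)              # spectral power at frequencies 1..k_max
    total_var = y.var()
    S = [power[[w * p - 1 for p in range(1, harmonics + 1)]].sum() / total_var
         for w in omegas]
    return np.array(S)

# example (illustrative): S = fast_first_order(lambda x: x[0] + 0.2 * x[1], omegas=[11, 35])
```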

  8. Understanding and comparisons of different sampling approaches for the Fourier Amplitudes Sensitivity Test (FAST).

    PubMed

    Xu, Chonggang; Gertner, George

    2011-01-01

    Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, the FAST analysis is mainly confined to the estimation of partial variances contributed by the main effects of model parameters, but does not allow for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements.

  9. An experimental study of postmortem decomposition of methomyl in blood.

    PubMed

    Kawakami, Yuka; Fuke, Chiaki; Fukasawa, Maki; Ninomiya, Kenji; Ihama, Yoko; Miyazaki, Tetsuji

    2017-03-01

    Methomyl (S-methyl-1-N-[(methylcarbamoyl)oxy]thioacetimidate) is a carbamate pesticide. It has been noted that in some cases of methomyl poisoning, methomyl is either not detected or detected only in low concentrations in the blood of the victims. However, in such cases, methomyl is detected at higher concentrations in the vitreous humor than in the blood. This indicates that methomyl in the blood is possibly decomposed after death. However, the reasons for this phenomenon have been unclear. We have previously reported that methomyl is decomposed to dimethyl disulfide (DMDS) in the livers and kidneys of pigs but not in their blood. In addition, in the field of forensic toxicology, it is known that some compounds are decomposed or produced by internal bacteria in biological samples after death. This indicates that there is a possibility that methomyl in blood may be decomposed by bacteria after death. The aim of this study was therefore to investigate whether methomyl in blood is decomposed by bacteria isolated from human stool. Our findings demonstrated that methomyl was decomposed in human stool homogenates, resulting in the generation of DMDS. In addition, it was observed that three bacterial species isolated from the stool homogenates, Bacillus cereus, Pseudomonas aeruginosa, and Bacillus sp., showed methomyl-decomposing activity. The results therefore indicated that one reason for the difficulty in detecting methomyl in postmortem blood from methomyl-poisoning victims is the decomposition of methomyl by internal bacteria such as B. cereus, P. aeruginosa, and Bacillus sp. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Vertebrate Decomposition Is Accelerated by Soil Microbes

    PubMed Central

    Lauber, Christian L.; Metcalf, Jessica L.; Keepers, Kyle; Ackermann, Gail; Carter, David O.

    2014-01-01

    Carrion decomposition is an ecologically important natural phenomenon influenced by a complex set of factors, including temperature, moisture, and the activity of microorganisms, invertebrates, and scavengers. The role of soil microbes as decomposers in this process is essential but not well understood and represents a knowledge gap in carrion ecology. To better define the role and sources of microbes in carrion decomposition, lab-reared mice were decomposed on either (i) soil with an intact microbial community or (ii) soil that was sterilized. We characterized the microbial community (16S rRNA gene for bacteria and archaea, and the 18S rRNA gene for fungi and microbial eukaryotes) for three body sites along with the underlying soil (i.e., gravesoils) at time intervals coinciding with visible changes in carrion morphology. Our results indicate that mice placed on soil with intact microbial communities reach advanced stages of decomposition 2 to 3 times faster than those placed on sterile soil. Microbial communities associated with skin and gravesoils of carrion in stages of active and advanced decay were significantly different between soil types (sterile versus untreated), suggesting that substrates on which carrion decompose may partially determine the microbial decomposer community. However, the source of the decomposer community (soil- versus carcass-associated microbes) was not clear in our data set, suggesting that greater sequencing depth needs to be employed to identify the origin of the decomposer communities in carrion decomposition. Overall, our data show that soil microbial communities have a significant impact on the rate at which carrion decomposes and have important implications for understanding carrion ecology. PMID:24907317

  11. Development of Solution Algorithm and Sensitivity Analysis for Random Fuzzy Portfolio Selection Model

    NASA Astrophysics Data System (ADS)

    Hasuike, Takashi; Katagiri, Hideki

    2010-10-01

    This paper proposes a portfolio selection problem that considers an investor's subjectivity, together with a sensitivity analysis for changes in that subjectivity. Since the proposed problem is formulated as a random fuzzy programming problem, owing to both randomness and subjectivity represented by fuzzy numbers, it is not well defined. Therefore, by introducing the Sharpe ratio, one of the important performance measures of portfolio models, the main problem is transformed into a standard fuzzy programming problem. Furthermore, using the sensitivity analysis for fuzziness, the analytical optimal portfolio with the sensitivity factor is obtained.

  12. The Prevention Program for Externalizing Problem Behavior (PEP) Improves Child Behavior by Reducing Negative Parenting: Analysis of Mediating Processes in a Randomized Controlled Trial

    ERIC Educational Resources Information Center

    Hanisch, Charlotte; Hautmann, Christopher; Plück, Julia; Eichelberger, Ilka; Döpfner, Manfred

    2014-01-01

    Background: Our indicated Prevention program for preschool children with Externalizing Problem behavior (PEP) demonstrated improved parenting and child problem behavior in a randomized controlled efficacy trial and in a study with an effectiveness design. The aim of the present analysis of data from the randomized controlled trial was to identify…

  13. Stochastic reduced order models for inverse problems under uncertainty

    PubMed Central

    Warner, James E.; Aquino, Wilkins; Grigoriu, Mircea D.

    2014-01-01

    This work presents a novel methodology for solving inverse problems under uncertainty using stochastic reduced order models (SROMs). Given statistical information about an observed state variable in a system, unknown parameters are estimated probabilistically through the solution of a model-constrained, stochastic optimization problem. The point of departure and crux of the proposed framework is the representation of a random quantity using a SROM - a low dimensional, discrete approximation to a continuous random element that permits efficient and non-intrusive stochastic computations. Characterizing the uncertainties with SROMs transforms the stochastic optimization problem into a deterministic one. The non-intrusive nature of SROMs facilitates efficient gradient computations for random vector unknowns and relies entirely on calls to existing deterministic solvers. Furthermore, the method is naturally extended to handle multiple sources of uncertainty in cases where state variable data, system parameters, and boundary conditions are all considered random. The new and widely-applicable SROM framework is formulated for a general stochastic optimization problem in terms of an abstract objective function and constraining model. For demonstration purposes, however, we study its performance in the specific case of inverse identification of random material parameters in elastodynamics. We demonstrate the ability to efficiently recover random shear moduli given material displacement statistics as input data. We also show that the approach remains effective for the case where the loading in the problem is random as well. PMID:25558115

  14. Group identification in Indonesian stock market

    NASA Astrophysics Data System (ADS)

    Nurriyadi Suparno, Ervano; Jo, Sung Kyun; Lim, Kyuseong; Purqon, Acep; Kim, Soo Yong

    2016-08-01

    The characteristics of the Indonesian stock market are interesting, especially because it is representative of developing countries. We investigate its dynamics and structure by using Random Matrix Theory (RMT). Here, we analyze the cross-correlation of the fluctuations of the daily closing prices of stocks from the Indonesian Stock Exchange (IDX) between January 1, 2007, and October 28, 2014. The eigenvalue distribution of the correlation matrix contains noise, which is filtered out using the random matrix as a control. The bulk of the eigenvalue distribution conforms to the random matrix, allowing the random noise to be separated from the informative part of the data, the deviating eigenvalues. From the deviating eigenvalues and the corresponding eigenvectors, we identify the intrinsic normal modes of the system and interpret their meaning based on qualitative and quantitative approaches. The results show that the largest eigenvector represents the market-wide effect, which has a predominantly common influence on all stocks. The other eigenvectors represent highly correlated groups within the system. Furthermore, identification of the largest components of the eigenvectors shows the sector or background of the correlated groups. Interestingly, the results show that there are mainly two clusters within the IDX: natural resource and non-natural resource companies. We then decompose the correlation matrix to investigate the contribution of the correlated groups to the total correlation, and we find that the IDX is still driven mainly by the market-wide effect.
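
    The filtering step described above can be sketched with NumPy as follows: eigenvalues of the return-correlation matrix that exceed the Marchenko-Pastur upper edge are treated as genuine (market and group) modes, and the largest components of their eigenvectors indicate which stocks form each group; the data layout (days by stocks) and the number of dominant components reported are assumptions of this sketch.

```python
import numpy as np

def deviating_modes(returns, top=10):
    """Separate RMT 'noise' modes from deviating (market/group) modes of the
    correlation matrix of returns (rows = days, columns = stocks)."""
    T, N = returns.shape
    C = np.corrcoef(returns, rowvar=False)
    lam, vecs = np.linalg.eigh(C)
    lam_plus = (1.0 + np.sqrt(N / T)) ** 2        # Marchenko-Pastur upper bound
    keep = lam > lam_plus
    groups = [np.argsort(np.abs(vecs[:, i]))[::-1][:top]   # dominant stocks of each mode
              for i in np.where(keep)[0]]
    return lam[keep], vecs[:, keep], groups
```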

  15. Into the decomposed body-forensic digital autopsy using multislice-computed tomography.

    PubMed

    Thali, M J; Yen, K; Schweitzer, W; Vock, P; Ozdoba, C; Dirnhofer, R

    2003-07-08

    It is impossible to obtain a representative anatomical documentation of an entire body using classical X-ray methods, since they project three-dimensional bodies onto a two-dimensional plane. We used the novel multislice-computed tomography (MSCT) technique in order to evaluate a case of homicide with putrefaction of the corpse before performing a classical forensic autopsy. This non-invasive method showed in detail the gaseous distension of the decomposing organs and tissues, as well as a complex fracture of the calvarium. MSCT also proved useful in screening for foreign matter in decomposing bodies, and full-body scanning took only a few minutes. In conclusion, we believe postmortem MSCT imaging is an excellent visualisation tool with great potential for forensic documentation and evaluation of decomposed bodies.

  16. Model and algorithm for container ship stowage planning based on bin-packing problem

    NASA Astrophysics Data System (ADS)

    Zhang, Wei-Ying; Lin, Yan; Ji, Zhuo-Shang

    2005-09-01

    In the general case, a container ship serves many different ports on each voyage. A stowage plan made at one port must take account of its influence on subsequent ports, so the complexity of the stowage planning problem increases due to its multi-port nature. The problem is NP-hard. In order to reduce the computational complexity, the problem is decomposed into two sub-problems in this paper. First, the container ship stowage problem (CSSP) is regarded as a "packing problem": ship-bays on board the vessel are regarded as bins, the number of slots in each bay is taken as the bin capacity, and containers with different characteristics (homogeneous container groups) are treated as the items to be packed. At this stage, there are two objective functions: one is to minimize the number of bays occupied by containers and the other is to minimize the number of overstows. Secondly, the containers assigned to each bay at the first stage are allocated to specific slots; the objective functions are to minimize the metacentric height, heel, and overstows. A tabu search heuristic is used to solve the sub-problems. The main focus of this paper is on the first sub-problem. A case study confirms the feasibility of the model and algorithm.
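
    The first-stage "packing" view can be illustrated with a generic first-fit-decreasing heuristic that assigns homogeneous container groups to bays of equal slot capacity; this is a textbook bin-packing sketch with an assumed data layout, not the paper's model or its tabu-search refinement.

```python
def first_fit_decreasing(groups, bay_capacity):
    """Assign homogeneous container groups (name, n_containers) to ship bays,
    each bay having the same number of free slots."""
    bays = []                             # each bay: {"free": slots left, "groups": [...]}
    for name, size in sorted(groups, key=lambda g: g[1], reverse=True):
        for bay in bays:
            if bay["free"] >= size:       # first bay that still fits the whole group
                bay["groups"].append(name)
                bay["free"] -= size
                break
        else:                             # no existing bay fits: open a new one
            bays.append({"free": bay_capacity - size, "groups": [name]})
    return bays

# example: first_fit_decreasing([("A", 40), ("B", 25), ("C", 30)], bay_capacity=50)
```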

  17. Portfolio optimization and the random magnet problem

    NASA Astrophysics Data System (ADS)

    Rosenow, B.; Plerou, V.; Gopikrishnan, P.; Stanley, H. E.

    2002-08-01

    Diversification of an investment into independently fluctuating assets reduces its risk. In reality, movements of assets are mutually correlated and therefore knowledge of cross-correlations among asset price movements are of great importance. Our results support the possibility that the problem of finding an investment in stocks which exposes invested funds to a minimum level of risk is analogous to the problem of finding the magnetization of a random magnet. The interactions for this "random magnet problem" are given by the cross-correlation matrix C of stock returns. We find that random matrix theory allows us to make an estimate for C which outperforms the standard estimate in terms of constructing an investment which carries a minimum level of risk.
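
    A minimal sketch of the construction implied above, assuming NumPy and a returns matrix with days as rows: the correlation matrix is cleaned using the Marchenko-Pastur bound from random matrix theory and then used in the standard minimum-variance ("minimum-risk") portfolio formula, with weights proportional to the inverse covariance applied to a vector of ones; all choices here are illustrative, not the paper's exact estimator.

```python
import numpy as np

def min_risk_weights(returns):
    """Minimum-variance portfolio weights from an RMT-filtered correlation matrix
    of stock returns (rows = days, columns = stocks)."""
    T, N = returns.shape
    std = returns.std(axis=0)
    C = np.corrcoef(returns, rowvar=False)
    lam, V = np.linalg.eigh(C)
    lam_plus = (1.0 + np.sqrt(N / T)) ** 2                 # Marchenko-Pastur upper edge
    noise = lam < lam_plus
    lam_clean = np.where(noise, lam[noise].mean(), lam)    # flatten the noise band, keep trace
    C_clean = (V * lam_clean) @ V.T
    np.fill_diagonal(C_clean, 1.0)
    cov = C_clean * np.outer(std, std)
    w = np.linalg.solve(cov, np.ones(N))                   # unnormalized minimum-variance solution
    return w / w.sum()                                     # weights summing to one
```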

  18. Centralized Multi-Sensor Square Root Cubature Joint Probabilistic Data Association

    PubMed Central

    Liu, Jun; Li, Gang; Qi, Lin; Li, Yaowen; He, You

    2017-01-01

    This paper focuses on the tracking problem of multiple targets with multiple sensors in a nonlinear cluttered environment. To avoid Jacobian matrix computation and scaling parameter adjustment, improve numerical stability, and acquire more accurate estimated results for centralized nonlinear tracking, a novel centralized multi-sensor square root cubature joint probabilistic data association algorithm (CMSCJPDA) is proposed. Firstly, the multi-sensor tracking problem is decomposed into several single-sensor multi-target tracking problems, which are sequentially processed during the estimation. Then, in each sensor, the assignment of its measurements to target tracks is accomplished on the basis of joint probabilistic data association (JPDA), and a weighted probability fusion method with a square root version of a cubature Kalman filter (SRCKF) is utilized to estimate the targets’ state. With the measurements from all sensors processed, CMSCJPDA is derived and the global estimated state is achieved. Experimental results show that CMSCJPDA is superior to the state-of-the-art algorithms in the aspects of tracking accuracy, numerical stability, and computational cost, which provides a new idea to solve multi-sensor tracking problems. PMID:29113085

  19. Centralized Multi-Sensor Square Root Cubature Joint Probabilistic Data Association.

    PubMed

    Liu, Yu; Liu, Jun; Li, Gang; Qi, Lin; Li, Yaowen; He, You

    2017-11-05

    This paper focuses on the tracking problem of multiple targets with multiple sensors in a nonlinear cluttered environment. To avoid Jacobian matrix computation and scaling parameter adjustment, improve numerical stability, and acquire more accurate estimated results for centralized nonlinear tracking, a novel centralized multi-sensor square root cubature joint probabilistic data association algorithm (CMSCJPDA) is proposed. Firstly, the multi-sensor tracking problem is decomposed into several single-sensor multi-target tracking problems, which are sequentially processed during the estimation. Then, in each sensor, the assignment of its measurements to target tracks is accomplished on the basis of joint probabilistic data association (JPDA), and a weighted probability fusion method with a square root version of a cubature Kalman filter (SRCKF) is utilized to estimate the targets' state. With the measurements from all sensors processed, CMSCJPDA is derived and the global estimated state is achieved. Experimental results show that CMSCJPDA is superior to the state-of-the-art algorithms in the aspects of tracking accuracy, numerical stability, and computational cost, which provides a new idea to solve multi-sensor tracking problems.

  20. Design of concrete waste basin in Integrated Temporarily Sanitary Landfill (ITSL) in Siosar, Karo Regency, Indonesia on supporting clean environment and sustainable fertilizers for farmers

    NASA Astrophysics Data System (ADS)

    Ginting, N.; Siahaan, J.; Tarigan, A. P.

    2018-03-01

    A new settlement in Siosar village of Karo Regency has been developed for people whose villages were completely destroyed by the prolonged eruptions of Sinabung. An integrated temporarily sanitary landfill (ITSL) was built there to support the new living environment. The objective of this study is to investigate organic waste decomposition in order to improve the design of the conventional concrete waste basin installed in the ITSL. The study lasted from May until August 2016. The design used was a Completely Randomized Design (CRD) in which organic waste was treated with a decomposer, with five replications in three composter bins. The decomposing process lasted three weeks. Research parameters were pH, temperature, waste reduction in weight, C/N, and organic fertilizer production (%). The results for the waste compost were as follows: pH was 9.45, the ultimate temperature was 31.6°C, C/N was in the range 10.5-12.4, waste reduction was 53%, and organic fertilizer production was 47%. Based on the decomposing process and the analysis, it is recommended that the conventional concrete waste basin be divided into three columns, with each column filled with waste only once the previous column is full. It is predicted that by the time the third column is fully occupied, the waste in the first column will already have become a sustainable fertilizer.

  1. Microwave Absorption Characteristics of Tire

    NASA Astrophysics Data System (ADS)

    Zhang, Yuzhe; Hwang, Jiann-Yang; Peng, Zhiwei; Andriese, Matthew; Li, Bowen; Huang, Xiaodi; Wang, Xinli

    The recycling of waste tires has been a big environmental problem. About 280 million waste tires are produced annually in the United States and more than 2 billion tires are stockpiled, creating fire hazards and health issues. Tire rubbers are insoluble, elastic, high-polymer materials. They are not biodegradable and may take hundreds of years to decompose in the natural environment. Microwave irradiation can be a thermal processing method for the decomposition of tire rubbers. In this study, the microwave absorption properties of waste tires at various temperatures are characterized to determine the conditions favorable for the microwave heating of waste tires.

  2. Nuclide Depletion Capabilities in the Shift Monte Carlo Code

    DOE PAGES

    Davidson, Gregory G.; Pandya, Tara M.; Johnson, Seth R.; ...

    2017-12-21

    A new depletion capability has been developed in the Exnihilo radiation transport code suite. This capability enables massively parallel domain-decomposed coupling between the Shift continuous-energy Monte Carlo solver and the nuclide depletion solvers in ORIGEN to perform high-performance Monte Carlo depletion calculations. This paper describes this new depletion capability and discusses its various features, including a multi-level parallel decomposition, high-order transport-depletion coupling, and energy-integrated power renormalization. Several test problems are presented to validate the new capability against other Monte Carlo depletion codes, and the parallel performance of the new capability is analyzed.

  3. Self-reduction of a copper complex MOD ink for inkjet printing conductive patterns on plastics.

    PubMed

    Farraj, Yousef; Grouchko, Michael; Magdassi, Shlomo

    2015-01-31

    Highly conductive copper patterns on low-cost flexible substrates are obtained by inkjet printing a metal complex based ink. Upon heating the ink, the soluble complex, which is composed of copper formate and 2-amino-2-methyl-1-propanol, decomposes under nitrogen at 140 °C and is converted to pure metallic copper. The decomposition process of the complex is investigated and a suggested mechanism is presented. The ink is stable in air for prolonged periods, with no sedimentation or oxidation problems, which are usually encountered in copper nanoparticle based inks.

  4. Joint terminals and relay optimization for two-way power line information exchange systems with QoS constraints

    NASA Astrophysics Data System (ADS)

    Wu, Xiaolin; Rong, Yue

    2015-12-01

    The quality-of-service (QoS) criteria (measured in terms of the minimum capacity requirement in this paper) are very important to practical indoor power line communication (PLC) applications as they greatly affect the user experience. With a two-way multicarrier relay configuration, in this paper we investigate the joint terminals and relay power optimization for the indoor broadband PLC environment, where the relay node works in the amplify-and-forward (AF) mode. As the QoS-constrained power allocation problem is highly non-convex, the globally optimal solution is computationally intractable to obtain. To overcome this challenge, we propose an alternating optimization (AO) method to decompose this problem into three convex/quasi-convex sub-problems. Simulation results demonstrate the fast convergence of the proposed algorithm under practical PLC channel conditions. Compared with the conventional bidirectional direct transmission (BDT) system, the relay-assisted two-way information exchange (R2WX) scheme can meet the same QoS requirement with less total power consumption.

  5. A design for an intelligent monitor and controller for space station electrical power using parallel distributed problem solving

    NASA Technical Reports Server (NTRS)

    Morris, Robert A.

    1990-01-01

    The emphasis is on defining a set of communicating processes for intelligent spacecraft secondary power distribution and control. The computer hardware and software implementation platform for this work is that of the ADEPTS project at the Johnson Space Center (JSC). The electrical power system design which was used as the basis for this research is that of Space Station Freedom, although the functionality of the processes defined here generalizes to any permanent manned space power control application. First, the Space Station Electrical Power Subsystem (EPS) hardware to be monitored is described, followed by a set of scenarios describing typical monitor and control activity. Then, the parallel distributed problem solving approach to knowledge engineering is introduced. There follows a two-step presentation of the intelligent software design for secondary power control. The first step decomposes the problem of monitoring and control into three primary functions. Each of the primary functions is described in detail. Suggestions for refinements and embellishments in the design specifications are given.

  6. Use of the Collaborative Optimization Architecture for Launch Vehicle Design

    NASA Technical Reports Server (NTRS)

    Braun, R. D.; Moore, A. A.; Kroo, I. M.

    1996-01-01

    Collaborative optimization is a new design architecture specifically created for large-scale distributed-analysis applications. In this approach, the problem is decomposed into a user-defined number of subspace optimization problems that are driven towards interdisciplinary compatibility and the appropriate solution by a system-level coordination process. This decentralized design strategy allows domain-specific issues to be accommodated by disciplinary analysts, while requiring interdisciplinary decisions to be reached by consensus. The present investigation focuses on application of the collaborative optimization architecture to the multidisciplinary design of a single-stage-to-orbit launch vehicle. Vehicle design, trajectory, and cost issues are directly modeled. Posed to suit the collaborative architecture, the design problem is characterized by 5 design variables and 16 constraints. Numerous collaborative solutions are obtained. Comparison of these solutions demonstrates the influence which an a priori ascent-abort criterion has on development cost. Similarly, objective-function selection is discussed, demonstrating the difference between minimum-weight and minimum-cost concepts. The operational advantages of the collaborative optimization

  7. Influence of neurobehavioral incentive valence and magnitude on alcohol drinking behavior

    PubMed Central

    Joseph, Jane E.; Zhu, Xun; Corbly, Christine R.; DeSantis, Stacia; Lee, Dustin C.; Baik, Grace; Kiser, Seth; Jiang, Yang; Lynam, Donald R.; Kelly, Thomas H.

    2014-01-01

    The monetary incentive delay (MID) task is a widely used probe for isolating neural circuitry in the human brain associated with incentive motivation. In the present functional magnetic resonance imaging (fMRI) study, 82 young adults, characterized along dimensions of impulsive sensation seeking, completed a MID task. fMRI and behavioral incentive functions were decomposed into incentive valence and magnitude parameters, which were used as predictors in linear regression to determine whether mesolimbic response is associated with problem drinking and recent alcohol use. Alcohol use was best explained by higher fMRI response to anticipation of losses and feedback on high gains in the thalamus. In contrast, problem drinking was best explained by reduced sensitivity to large incentive values in meso-limbic regions in the anticipation phase and increased sensitivity to small incentive values in the dorsal caudate nucleus in the feedback phase. Altered fMRI responses to monetary incentives in mesolimbic circuitry, particularly those alterations associated with problem drinking, may serve as potential early indicators of substance abuse trajectories. PMID:25261001

  8. Evaluation of the SSRCT engine with hydrazine as a fuel, phase 1

    NASA Technical Reports Server (NTRS)

    Minton, S. J.

    1978-01-01

    The performance parameters for the space shuttle reaction control thruster (SSRCT) when the fuel is changed from monomethylhydrazine to hydrazine were predicted. Potential problems are higher chamber wall temperature during steady state operation and explosive events during pulse mode operation. Solutions to the problems are suggested. To conduct the analysis, a more realistic film cooling model was devised which considers that hydrazine based fuels are reactive when used as a film coolant on the walls of the combustion chamber. Hydrazine based fuels can decompose exothermally as a monopropellant and also enter into bipropellant reactions with any excess oxidizer in the combustion chamber. It is concluded that the conversion of the thruster from MMH to hydrazine fuel is feasible but that a number of changes would be required to achieve the same safety margins as the monomethylhydrazine-fueled thruster.

  9. Using parallel banded linear system solvers in generalized eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Zhang, Hong; Moss, William F.

    1993-01-01

    Subspace iteration is a reliable and cost effective method for solving positive definite banded symmetric generalized eigenproblems, especially in the case of large scale problems. This paper discusses an algorithm that makes use of two parallel banded solvers in subspace iteration. A shift is introduced to decompose the banded linear systems into relatively independent subsystems and to accelerate the iterations. With this shift, an eigenproblem is mapped efficiently into the memories of a multiprocessor and a high speed-up is obtained for parallel implementations. An optimal shift is a shift that balances total computation and communication costs. Under certain conditions, we show how to estimate an optimal shift analytically using the decay rate for the inverse of a banded matrix, and how to improve this estimate. Computational results on iPSC/2 and iPSC/860 multiprocessors are presented.
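
    The sketch below illustrates basic shift-accelerated subspace iteration for K x = λ M x with dense matrices and a direct factorization; the banded parallel solvers, shift-selection analysis, and multiprocessor mapping of the paper are not reproduced, and all matrices are synthetic.

```python
# Minimal dense sketch of shift-accelerated subspace iteration for the
# generalized eigenproblem K x = lambda M x (the banded parallel solvers and
# the optimal-shift analysis of the paper are not reproduced here).
import numpy as np
from scipy.linalg import eigh, lu_factor, lu_solve

def subspace_iteration(K, M, p=4, sigma=0.0, iters=50):
    n = K.shape[0]
    lu = lu_factor(K - sigma * M)          # shifted operator, factored once
    X = np.random.rand(n, p)
    for _ in range(iters):
        Y = lu_solve(lu, M @ X)            # inverse (shifted) iteration step
        # Rayleigh-Ritz projection onto span(Y)
        Kr, Mr = Y.T @ K @ Y, Y.T @ M @ Y
        vals, vecs = eigh(Kr, Mr)
        X = Y @ vecs
    return vals, X

# Tiny example: random symmetric positive definite K and M.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 20))
K = A @ A.T + 20 * np.eye(20)
B = rng.standard_normal((20, 20))
M = B @ B.T + 20 * np.eye(20)
vals, _ = subspace_iteration(K, M, p=3, sigma=0.1)
print(vals)   # approximations to the eigenvalues nearest the shift
```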

  10. Stringy Toda cosmologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaloper, N.

    We discuss a particular stringy modular cosmology with two axion fields in seven space-time dimensions, decomposable as a time and two flat three-spaces. The effective equations of motion for the problem are those of the SU(3) Toda molecule and, hence, are integrable. We write down the solutions, and show that all of them are singular. They can be thought of as a generalization of the pre-big-bang cosmology with excited internal degrees of freedom, and still suffering from the graceful exit problem. Some of the solutions, however, show a rather unexpected property: some of their spatial sections shrink to a point in spite of winding modes wrapped around them. We also comment on how more general, anisotropic solutions, with fewer Killing symmetries, can be obtained with the help of STU dualities.

  11. Wave propagation problem for a micropolar elastic waveguide

    NASA Astrophysics Data System (ADS)

    Kovalev, V. A.; Murashkin, E. V.; Radayev, Y. N.

    2018-04-01

    A propagation problem for coupled harmonic waves of translational displacements and microrotations along the axis of a long cylindrical waveguide is discussed in the present study. Microrotations are modeled within the framework of linear micropolar elasticity. The mathematical model of the linear (or even nonlinear) micropolar elasticity is also extended to a field-theoretic model by means of a variational least-action integral and the least action principle. The governing coupled vector differential equations of linear micropolar elasticity are given. The translational displacements and microrotations in the coupled harmonic wave are decomposed into potential and vortex parts. Calibrating equations providing simplification of the equations for the wave potentials are proposed. The coupled differential equations are then reduced to uncoupled ones and finally to Helmholtz wave equations. The wave equation solutions for the translational and microrotational wave potentials are obtained for a high-frequency range.
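
    For reference, a standard Helmholtz (potential/vortex) decomposition of the translational displacement u and microrotation φ fields has the form shown below; the potentials Φ, Σ, Ψ, H and the gauge (calibrating) conditions are written in generic notation and are not necessarily those chosen in the paper.

```latex
% Generic potential/vortex decomposition; notation assumed, not taken from the paper.
\mathbf{u} = \nabla\Phi + \nabla\times\boldsymbol{\Psi}, \qquad
\boldsymbol{\phi} = \nabla\Sigma + \nabla\times\mathbf{H}, \qquad
\nabla\cdot\boldsymbol{\Psi} = 0, \quad \nabla\cdot\mathbf{H} = 0 .
```

    Substituting such representations into the coupled equations of motion is what reduces them to uncoupled equations and, ultimately, to Helmholtz wave equations for the potentials.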

  12. The influence of body position and microclimate on ketamine and metabolite distribution in decomposed skeletal remains.

    PubMed

    Cornthwaite, H M; Watterson, J H

    2014-10-01

    The influence of body position and microclimate on ketamine (KET) and metabolite distribution in decomposed bone tissue was examined. Rats received 75 mg/kg (i.p.) KET (n = 30) or remained drug-free (controls, n = 4). Following euthanasia, rats were divided into two groups and placed outdoors to decompose in one of three positions: supine (SUP), prone (PRO), or upright (UPR). One group decomposed in a shaded, wooded microclimate (Site 1) while the other decomposed in an exposed, sunlit microclimate with gravel substrate (Site 2), roughly 500 m from Site 1. Following decomposition, bones (lumbar vertebrae, thoracic vertebra, cervical vertebrae, rib, pelvis, femora, tibiae, humeri and scapulae) were collected and sorted for analysis. Clean, ground bones underwent microwave-assisted extraction using an acetone:hexane mixture (1:1, v/v), followed by solid-phase extraction and analysis by GC-MS. Drug levels, expressed as mass-normalized response ratios, were compared across all bone types between body positions and microclimates. Bone type was a main effect (P < 0.05) for drug level and drug/metabolite level ratio for all body positions and microclimates examined. Microclimate and body position significantly influenced observed drug levels: higher levels were observed in carcasses decomposing in direct sunlight, where reduced entomological activity led to slowed decomposition.

  13. A solution quality assessment method for swarm intelligence optimization algorithms.

    PubMed

    Zhang, Zhaojun; Wang, Gai-Ge; Zou, Kuansheng; Zhang, Jianhua

    2014-01-01

    Nowadays, swarm intelligence optimization has become an important optimization tool and is widely used in many fields of application. In contrast to its many successful applications, its theoretical foundation is rather weak, and many problems remain to be solved. One problem is how to quantify the performance of an algorithm in finite time, that is, how to evaluate the solution quality obtained by an algorithm for practical problems; this greatly limits application to practical problems. A solution quality assessment method for intelligent optimization is proposed in this paper. It is an experimental analysis method based on analysis of the search space and the characteristics of the algorithm itself. Instead of "value performance," "ordinal performance" is used as the evaluation criterion in this method. The feasible solutions are clustered according to distance in order to divide the solution samples into several parts. Then, the solution space and the "good enough" set can be decomposed based on the clustering results. Finally, using standard results from statistics, the evaluation result is obtained. To validate the proposed method, several intelligent algorithms, such as ant colony optimization (ACO), particle swarm optimization (PSO), and the artificial fish swarm algorithm (AFS), were used to solve the traveling salesman problem. Computational results indicate the feasibility of the proposed method.
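
    The toy sketch below illustrates the idea of "ordinal performance" on a small traveling salesman instance: a solution is judged by its rank within a random sample of the solution space rather than by its raw objective value. The clustering-based decomposition of the "good enough" set used in the paper is not reproduced, and a nearest-neighbour tour stands in for the output of a swarm algorithm.

```python
# Toy illustration of "ordinal performance": rank an algorithm's TSP tour
# against a random sample of the solution space (not the paper's clustering-
# based decomposition of the good-enough set).
import numpy as np

rng = np.random.default_rng(1)
cities = rng.random((30, 2))

def tour_length(perm):
    pts = cities[perm]
    return np.sum(np.linalg.norm(pts - np.roll(pts, -1, axis=0), axis=1))

# Sample the solution space with random tours.
sample = np.array([tour_length(rng.permutation(30)) for _ in range(5000)])
good_enough = np.quantile(sample, 0.01)     # best 1% defines the good-enough set

# A (hypothetical) solution returned by some swarm algorithm: nearest-neighbour tour.
tour, left = [0], set(range(1, 30))
while left:
    nxt = min(left, key=lambda j: np.linalg.norm(cities[tour[-1]] - cities[j]))
    tour.append(nxt); left.remove(nxt)
length = tour_length(np.array(tour))

print("ordinal performance (fraction of sample beaten):", np.mean(sample > length))
print("inside good-enough set:", length <= good_enough)
```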

  14. On decoupling of volatility smile and term structure in inverse option pricing

    NASA Astrophysics Data System (ADS)

    Egger, Herbert; Hein, Torsten; Hofmann, Bernd

    2006-08-01

    Correct pricing of options and other financial derivatives is of great importance to financial markets and one of the key subjects of mathematical finance. Usually, parameters specifying the underlying stochastic model are not directly observable, but have to be determined indirectly from observable quantities. The identification of local volatility surfaces from market data of European vanilla options is one very important example of this type. As with many other parameter identification problems, the reconstruction of local volatility surfaces is ill-posed, and reasonable results can only be achieved via regularization methods. Moreover, due to the sparsity of data, the local volatility is not uniquely determined, but depends strongly on the kind of regularization norm used and a good a priori guess for the parameter. By assuming a multiplicative structure for the local volatility, which is motivated by the specific data situation, the inverse problem can be decomposed into two separate sub-problems. This removes part of the non-uniqueness and allows us to establish convergence and convergence rates under weak assumptions. Additionally, a numerical solution of the two sub-problems is much cheaper than that of the overall identification problem. The theoretical results are illustrated by numerical tests.
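
    One plausible way to write the multiplicative structure mentioned above is shown below, with notation chosen here for illustration (σ₁ capturing the strike/asset dependence, i.e., the smile, and σ₂ the term structure); the paper's exact parametrization may differ.

```latex
% Assumed notation for the multiplicative ansatz; not taken verbatim from the paper.
\sigma(S,t) \;=\; \sigma_{1}(S)\,\sigma_{2}(t).
```

    With such an ansatz, the smile factor and the term-structure factor can be identified in separate sub-problems, which is the decoupling referred to above.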

  15. Toxicity to woodlice of zinc and lead oxides added to soil litter

    USGS Publications Warehouse

    Beyer, W.N.; Anderson, A.

    1985-01-01

    Previous studies have shown that high concentrations of metals in soil are associated with reductions in decomposer populations. We have here determined the relation between the concentrations of lead and zinc added as oxides to soil litter and the survival and reproduction of a decomposer population under controlled conditions. Laboratory populations of woodlice (Porcellio scaber Latr) were fed soil litter treated with lead or zinc at concentrations that ranged from 100 to 12,800 ppm. The survival of the adults, the maximum number of young alive, and the average number of young alive, were recorded over 64 weeks. Lead at 12,800 ppm and zinc at 1,600 ppm or more had statistically significant (p < 0.05) negative effects on the populations. These results agree with field observations suggesting that lead and zinc have reduced populations of decomposers in contaminated forest soil litter, and concentrations are similar to those reported to be associated with reductions in natural populations of decomposers. Poisoning of decomposers may disrupt nutrient cycling, reduce the numbers of invertebrates available to other wildlife for food, and contribute to the contamination of food chains.

  16. Cat got your tongue? Using the tip-of-the-tongue state to investigate fixed expressions.

    PubMed

    Nordmann, Emily; Cleland, Alexandra A; Bull, Rebecca

    2013-01-01

    Despite the fact that they play a prominent role in everyday speech, the representation and processing of fixed expressions during language production is poorly understood. Here, we report a study investigating the processes underlying fixed expression production. "Tip-of-the-tongue" (TOT) states were elicited for well-known idioms (e.g., hit the nail on the head) and participants were asked to report any information they could regarding the content of the phrase. Participants were able to correctly report individual words for idioms that they could not produce. In addition, participants produced both figurative (e.g., pretty for easy on the eye) and literal errors (e.g., hammer for hit the nail on the head) when in a TOT state, suggesting that both figurative and literal meanings are active during production. There was no effect of semantic decomposability on overall TOT incidence; however, participants recalled a greater proportion of words for decomposable rather than non-decomposable idioms. This finding suggests there may be differences in how decomposable and non-decomposable idioms are retrieved during production. Copyright © 2013 Cognitive Science Society, Inc.

  17. Random Matrix Approach for Primal-Dual Portfolio Optimization Problems

    NASA Astrophysics Data System (ADS)

    Tada, Daichi; Yamamoto, Hisashi; Shinzato, Takashi

    2017-12-01

    In this paper, we revisit the portfolio optimization problems of the minimization/maximization of investment risk under constraints of budget and investment concentration (primal problem) and the maximization/minimization of investment concentration under constraints of budget and investment risk (dual problem) for the case that the variances of the return rates of the assets are identical. We analyze both optimization problems by the Lagrange multiplier method and the random matrix approach. Thereafter, we compare the results obtained from our proposed approach with the results obtained in previous work. Moreover, we use numerical experiments to validate the results obtained from the replica approach and the random matrix approach as methods for analyzing both the primal and dual portfolio optimization problems.
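
    A small numerical illustration of the primal problem is sketched below: minimizing the investment risk w'Cw subject only to the budget constraint, solved via the Lagrange multiplier stationarity condition. The concentration constraint, the replica analysis, and the random matrix results of the paper are not reproduced, and the return data are synthetic.

```python
# Small numerical illustration of the primal problem: minimize the investment
# risk w' C w subject to the budget constraint sum(w) = N, solved with a
# Lagrange multiplier (the concentration constraint and the replica/random-
# matrix analysis of the paper are omitted).
import numpy as np

rng = np.random.default_rng(0)
N, T = 50, 200
R = rng.standard_normal((T, N))            # synthetic return rates, unit variance
C = np.cov(R, rowvar=False)                # sample covariance (Wishart-like)

ones = np.ones(N)
Cinv1 = np.linalg.solve(C, ones)
w = N * Cinv1 / (ones @ Cinv1)             # stationary point of the Lagrangian
risk = 0.5 * w @ C @ w

print("budget check:", w.sum())            # ~= N
print("minimal risk per asset:", risk / N)
```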

  18. Stochastic stability in three-player games.

    PubMed

    Kamiński, Dominik; Miekisz, Jacek; Zaborowski, Marcin

    2005-11-01

    Animal behavior and evolution can often be described by game-theoretic models. Although in many situations the number of players is very large, their strategic interactions are usually decomposed into a sum of two-player games. Only recently were evolutionarily stable strategies defined for multi-player games and their properties analyzed [Broom, M., Cannings, C., Vickers, G.T., 1997. Multi-player matrix games. Bull. Math. Biol. 59, 931-952]. Here we study the long-run behavior of stochastic dynamics of populations of randomly matched individuals playing symmetric three-player games. We analyze the stochastic stability of equilibria in games with multiple evolutionarily stable strategies. We also show that, in some games, a population may not evolve in the long run to an evolutionarily stable equilibrium.

  19. Spatial Epidemic Modelling in Social Networks

    NASA Astrophysics Data System (ADS)

    Simoes, Joana Margarida

    2005-06-01

    The spread of infectious diseases is highly influenced by the structure of the underlying social network. The target of this study is not the network of acquaintances, but the social mobility network: the daily movement of people between locations in regions. It has already been shown that this kind of network exhibits small-world characteristics. The model developed is agent-based (ABM) and comprises a movement model and an infection model. In the movement model, some assumptions are made about its structure, and the daily movement is decomposed into four types: neighborhood, intra-region, inter-region, and random. The model is Geographical Information Systems (GIS) based and uses real data to define its geometry. Because it is a vector model, some optimization techniques were used to increase its efficiency.

  20. An optimality framework to predict decomposer carbon-use efficiency trends along stoichiometric gradients

    NASA Astrophysics Data System (ADS)

    Manzoni, S.; Capek, P.; Mooshammer, M.; Lindahl, B.; Richter, A.; Santruckova, H.

    2016-12-01

    Litter and soil organic matter decomposers feed on substrates with much wider C:N and C:P ratios than their own cellular composition, raising the question of how they can adapt their metabolism to such a chronic stoichiometric imbalance. Here we propose an optimality framework to address this question, based on the hypothesis that carbon-use efficiency (CUE) can be optimally adjusted to maximize the decomposer growth rate. When nutrients are abundant, increasing CUE improves decomposer growth rate, at the expense of higher nutrient demand. However, when nutrients are scarce, increased nutrient demand driven by high CUE can trigger nutrient limitation and inhibit growth. An intermediate, 'optimal' CUE ensures balanced growth at the verge of nutrient limitation. We derive a simple analytical equation that links this optimal CUE to organic substrate and decomposer biomass C:N and C:P ratios, and to the rate of inorganic nutrient supply (e.g., fertilization). This equation leads to two specific hypotheses: (i) decomposer CUE should decrease with widening organic substrate C:N and C:P ratios, with a scaling exponent between 0 (abundant inorganic nutrients) and -1 (scarce inorganic nutrients), and (ii) CUE should increase with increasing inorganic nutrient supply for a given organic substrate stoichiometry. These hypotheses are tested using a new database encompassing nearly 2000 estimates of CUE from about 160 studies, spanning aquatic and terrestrial decomposers of litter and more stabilized organic matter. The theoretical predictions are largely confirmed by our data analysis, except for the lack of fertilization effects on terrestrial decomposer CUE. While stoichiometric drivers constrain the general trends in CUE, the relatively large variability in CUE estimates suggests that other factors could be at play as well. For example, temperature is often cited as a potential driver of CUE, but we found only limited evidence of temperature effects, although in some subsets of the data, temperature and substrate stoichiometry appeared to interact. Based on our results, the optimality principle can provide a solid (but still incomplete) framework for developing CUE models for large-scale applications.

  1. Efficient operator splitting algorithm for joint sparsity-regularized SPIRiT-based parallel MR imaging reconstruction.

    PubMed

    Duan, Jizhong; Liu, Yu; Jing, Peiguang

    2018-02-01

    Self-consistent parallel imaging (SPIRiT) is an auto-calibrating model for the reconstruction of parallel magnetic resonance imaging data, which can be formulated as a regularized SPIRiT problem. The Projection Onto Convex Sets (POCS) method has been used to solve the formulated regularized SPIRiT problem, but the quality of the reconstructed image still needs to be improved. Methods such as NonLinear Conjugate Gradients (NLCG) can achieve higher spatial resolution, but they demand complex computation and converge slowly. In this paper, we propose a new algorithm to solve the formulated Cartesian SPIRiT problem with joint total variation (JTV) and joint L1 (JL1) regularization terms. The proposed algorithm uses the operator splitting (OS) technique to decompose the problem into a gradient problem and a denoising problem with two regularization terms, solves the latter with our proposed split Bregman based denoising algorithm, and adopts the Barzilai-Borwein method to update the step size. Simulation experiments on two in vivo data sets demonstrate that the proposed algorithm is 1.3 times faster than ADMM for datasets with 8 channels; in particular, it is 2 times faster than ADMM for the dataset with 32 channels.
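
    The snippet below sketches only the Barzilai-Borwein step-size rule on a generic least-squares data-fidelity term; the SPIRiT operator, the operator-splitting decomposition, and the split Bregman denoising stage are not reproduced, and the matrices are synthetic stand-ins.

```python
# Minimal sketch of the Barzilai-Borwein (BB) step-size rule inside a gradient
# step on a least-squares data-fidelity term ||A x - b||^2 (the SPIRiT operator,
# operator splitting, and split Bregman denoising stages are not reproduced).
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 40))
b = rng.standard_normal(100)

def grad(x):
    return A.T @ (A @ x - b)

x = np.zeros(40)
g = grad(x)
alpha = 1e-3                                  # initial step size
for _ in range(200):
    x_new = x - alpha * g                     # gradient step
    g_new = grad(x_new)
    s, y = x_new - x, g_new - g               # differences used by BB
    alpha = (s @ s) / (s @ y + 1e-12)         # BB1 step size for the next step
    x, g = x_new, g_new

print("residual:", np.linalg.norm(A @ x - b))
```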

  2. Decomposed multidimensional control grid interpolation for common consumer electronic image processing applications

    NASA Astrophysics Data System (ADS)

    Zwart, Christine M.; Venkatesan, Ragav; Frakes, David H.

    2012-10-01

    Interpolation is an essential and broadly employed function of signal processing. Accordingly, considerable development has focused on advancing interpolation algorithms toward optimal accuracy. Such development has motivated a clear shift in the state of the art from classical interpolation to more intelligent and resourceful approaches, registration-based interpolation for example. As a natural result, many of the most accurate current algorithms are highly complex, specific, and computationally demanding. However, the diverse hardware destinations for interpolation algorithms present unique constraints that often preclude use of the most accurate available options. For example, while computationally demanding interpolators may be suitable for highly equipped image processing platforms (e.g., computer workstations and clusters), only more efficient interpolators may be practical for less well equipped platforms (e.g., smartphones and tablet computers). The latter examples of consumer electronics present a design tradeoff in this regard: high-accuracy interpolation benefits the consumer experience, but computing capabilities are limited. It follows that interpolators with favorable combinations of accuracy and efficiency are of great practical value to the consumer electronics industry. We address multidimensional interpolation-based image processing problems that are common to consumer electronic devices through a decomposition approach. The multidimensional problems are first broken down into multiple, independent, one-dimensional (1-D) interpolation steps that are then executed with a newly modified registration-based 1-D control grid interpolator. The proposed approach, decomposed multidimensional control grid interpolation (DMCGI), combines the accuracy of registration-based interpolation with the simplicity, flexibility, and computational efficiency of a 1-D interpolation framework. Results demonstrate that DMCGI provides improved interpolation accuracy (and other benefits) in image resizing, color sample demosaicing, and video deinterlacing applications, at a computational cost that is manageable or reduced in comparison to popular alternatives.
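
    The decomposition idea itself can be shown in a few lines: a 2-D resize performed as two independent passes of 1-D interpolation, first along rows and then along columns. Plain linear interpolation (np.interp) stands in here for the registration-based 1-D control grid interpolator of the paper.

```python
# Sketch of the decomposition idea only: a 2-D image resize performed as two
# independent passes of 1-D interpolation (rows, then columns). Plain linear
# interpolation stands in for the registration-based control grid interpolator.
import numpy as np

def resize_1d(signal, new_len):
    old = np.linspace(0.0, 1.0, len(signal))
    new = np.linspace(0.0, 1.0, new_len)
    return np.interp(new, old, signal)

def resize_2d(image, new_rows, new_cols):
    # Pass 1: interpolate each row to the new width.
    tmp = np.array([resize_1d(row, new_cols) for row in image])
    # Pass 2: interpolate each column of the intermediate image to the new height.
    return np.array([resize_1d(col, new_rows) for col in tmp.T]).T

img = np.arange(16, dtype=float).reshape(4, 4)
print(resize_2d(img, 8, 6).shape)   # (8, 6)
```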

  3. Using Adaptive Mesh Refinement to Simulate Storm Surge

    NASA Astrophysics Data System (ADS)

    Mandli, K. T.; Dawson, C.

    2012-12-01

    Coastal hazards related to strong storms such as hurricanes and typhoons are one of the most frequently recurring and widespread hazards to coastal communities. Storm surges are among the most devastating effects of these storms, and their prediction and mitigation through numerical simulation is of great interest to coastal communities that need to plan for the subsequent rise in sea level during these storms. Unfortunately these simulations require a large amount of resolution in regions of interest to capture relevant effects, resulting in a computational cost that may be intractable. This problem is exacerbated in situations where a large number of similar runs is needed, such as in the design of infrastructure or in forecasting with ensembles of probable storms. One way to address the problem of computational cost is to employ adaptive mesh refinement (AMR) algorithms. AMR functions by decomposing the computational domain into regions which may vary in resolution as time proceeds. Decomposing the domain as the flow evolves makes this class of methods effective at ensuring that computational effort is spent only where it is needed. AMR also allows for placement of computational resolution independent of user interaction and expectations about the dynamics of the flow, as well as particular regions of interest such as harbors. Simulations of many different applications have only been made possible by using AMR-type algorithms, which have allowed otherwise impractical simulations to be performed for much less computational expense. Our work involves studying how storm surge simulations can be improved with AMR algorithms. We have implemented relevant storm surge physics in the GeoClaw package and tested how Hurricane Ike's surge into Galveston Bay and up the Houston Ship Channel compares to available tide gauge data. We will also discuss issues dealing with refinement criteria, optimal resolution and refinement ratios, and inundation.

  4. The benefits of paid employment among persons with common mental health problems: evidence for the selection and causation mechanism.

    PubMed

    Schuring, Merel; Robroek, Suzan Jw; Burdorf, Alex

    2017-11-01

    Objectives The aims of this study were to (i) investigate the impact of paid employment on self-rated health, self-esteem, mastery, and happiness among previously unemployed persons with common mental health problems, and (ii) determine whether there are educational inequalities in these effects. Methods A quasi-experimental study was performed with a two-year follow-up period among unemployed persons with mental health problems. Eligible participants were identified at the social services departments of five cities in The Netherlands when diagnosed with a common mental disorder, primarily depression and anxiety disorders, in the past 12 months by a physician (N=749). Employment status (defined as paid employment for ≥12 hours/week), mental health [Short Form 12 (SF-12)], physical health (SF-12), self-esteem, mastery, and happiness were measured at baseline, after 12 months, and after 24 months. The repeated-measurement longitudinal data were analyzed using a hybrid method combining fixed and random effects, in which each regression coefficient is decomposed into between- and within-individual associations. Results The between-individual associations showed that persons working ≥12 hours per week reported better mental health (b=26.7, SE 5.1), mastery (b=2.7, SE 0.6), self-esteem (b=5.7, SE 1.1), physical health (b=14.6, SE 5.6), and happiness (OR 7.7, 95% CI 2.3-26.4). The within-individual associations showed that entering paid employment for ≥12 hours per week resulted in better mental health (b=16.3, SE 3.4), mastery (b=1.7, SE 0.4), self-esteem (b=3.4, SE 0.7), physical health (b=9.8, SE 2.9), and happiness (OR 3.1, 95% CI 1.4-6.9). Among intermediate- and high-educated persons, entering paid employment had a significantly larger effect on mental health than among low-educated persons. Conclusions This study provides evidence that entering paid employment has a positive impact on self-reported health; thus work should be considered an important part of health promotion programs for unemployed persons.
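
    The between/within decomposition used by the hybrid method amounts to person-mean centering of the time-varying predictor, as sketched below on made-up data; the variable names and the simple OLS fit (rather than the full mixed model with a random intercept) are illustrative only.

```python
# Sketch of the between/within (hybrid, person-mean centering) decomposition
# on made-up longitudinal data; column names are illustrative, not from the study.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_person, n_wave = 200, 3
df = pd.DataFrame({
    "person": np.repeat(np.arange(n_person), n_wave),
    "employed": rng.integers(0, 2, n_person * n_wave),   # 0/1 paid employment
})
df["mental_health"] = 40 + 10 * df["employed"] + rng.normal(0, 5, len(df))

# Decompose the predictor into a between-person mean and a within-person deviation.
df["employed_between"] = df.groupby("person")["employed"].transform("mean")
df["employed_within"] = df["employed"] - df["employed_between"]

# Hybrid model: both components enter the same regression (the random intercept
# is omitted here for brevity; statsmodels' mixedlm could add it).
model = smf.ols("mental_health ~ employed_between + employed_within", data=df).fit()
print(model.params)
```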

  5. Social Problem Solving and Depressive Symptoms over Time: A Randomized Clinical Trial of Cognitive-Behavioral Analysis System of Psychotherapy, Brief Supportive Psychotherapy, and Pharmacotherapy

    ERIC Educational Resources Information Center

    Klein, Daniel N.; Leon, Andrew C.; Li, Chunshan; D'Zurilla, Thomas J.; Black, Sarah R.; Vivian, Dina; Dowling, Frank; Arnow, Bruce A.; Manber, Rachel; Markowitz, John C.; Kocsis, James H.

    2011-01-01

    Objective: Depression is associated with poor social problem solving, and psychotherapies that focus on problem-solving skills are efficacious in treating depression. We examined the associations between treatment, social problem solving, and depression in a randomized clinical trial testing the efficacy of psychotherapy augmentation for…

  6. Seminar on Understanding Digital Control and Analysis in Vibration Test Systems, part 2

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A number of techniques for dealing with important technical aspects of the random vibration control problem are described. These include the generation of pseudo-random and true random noise, the control spectrum estimation problem, the accuracy/speed tradeoff, and control correction strategies. System hardware, the operator-system interface, safety features, and operational capabilities of sophisticated digital random vibration control systems are also discussed.

  7. Improving Language Comprehension in Preschool Children with Language Difficulties: A Cluster Randomized Trial

    ERIC Educational Resources Information Center

    Hagen, Åste M.; Melby-Lervåg, Monica; Lervåg, Arne

    2017-01-01

    Background: Children with language comprehension difficulties are at risk of educational and social problems, which in turn impede employment prospects in adulthood. However, few randomized trials have examined how such problems can be ameliorated during the preschool years. Methods: We conducted a cluster randomized trial in 148 preschool…

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore, J. A. M.; Jiang, J.; Post, W. M.

    Carbon cycle models often lack explicit belowground organism activity, yet belowground organisms regulate carbon storage and release in soil. Ectomycorrhizal fungi are important players in the carbon cycle because they are a conduit into soil for carbon assimilated by the plant. It is hypothesized that ectomycorrhizal fungi can also be active decomposers when plant carbon allocation to fungi is low. Here, we reviewed the literature on ectomycorrhizal decomposition and we developed a simulation model of the plant-mycorrhizae interaction where a reduction in plant productivity stimulates ectomycorrhizal fungi to decompose soil organic matter. Our review highlights evidence demonstrating the potential for ectomycorrhizal fungi to decompose soil organic matter. Our model output suggests that ectomycorrhizal activity accounts for a portion of carbon decomposed in soil, but this portion varied with plant productivity and the mycorrhizal carbon uptake strategy simulated. Lower organic matter inputs to soil were largely responsible for reduced soil carbon storage. Using mathematical theory, we demonstrated that biotic interactions affect predictions of ecosystem functions. Specifically, we developed a simple function to model the mycorrhizal switch in function from plant symbiont to decomposer. In conclusion, we show that including mycorrhizal fungi with the flexibility of mutualistic and saprotrophic lifestyles alters predictions of ecosystem function.

  9. Steinhaus’ Geometric Location Problem for Random Samples in the Plane.

    DTIC Science & Technology

    1982-05-11

    STEINHAUS' GEOMETRIC LOCATION PROBLEM FOR RANDOM SAMPLES IN THE PLANE. By Dorit Hochbaum and J. Michael Steele. Technical Report, Department of Statistics, Stanford University, Stanford, California. I. Introduction. The work of H. Steinhaus was apparently the first explicit

  10. Linkages between below and aboveground communities: Decomposer responses to simulated tree species loss are largely additive.

    Treesearch

    Becky A. Ball; Mark A. Bradford; Dave C. Coleman; Mark D. Hunter

    2009-01-01

    Inputs of aboveground plant litter influence the abundance and activities of belowground decomposer biota. Litter-mixing studies have examined whether the diversity and heterogeneity of litter inputs...

  11. A domain-decomposed multi-model plasma simulation of collisionless magnetic reconnection

    NASA Astrophysics Data System (ADS)

    Datta, I. A. M.; Shumlak, U.; Ho, A.; Miller, S. T.

    2017-10-01

    Collisionless magnetic reconnection is a process relevant to many areas of plasma physics in which energy stored in magnetic fields within highly conductive plasmas is rapidly converted into kinetic and thermal energy. Both in natural phenomena such as solar flares and terrestrial aurora as well as in magnetic confinement fusion experiments, the reconnection process is observed on timescales much shorter than those predicted by a resistive MHD model. As a result, this topic is an active area of research in which plasma models with varying fidelity have been tested in order to understand the proper physics explaining the reconnection process. In this research, a hybrid multi-model simulation employing the Hall-MHD and two-fluid plasma models on a decomposed domain is used to study this problem. The simulation is set up using the WARPXM code developed at the University of Washington, which uses a discontinuous Galerkin Runge-Kutta finite element algorithm and implements boundary conditions between models in the domain to couple their variable sets. The goal of the current work is to determine the parameter regimes most appropriate for each model to maintain sufficient physical fidelity over the whole domain while minimizing computational expense. This work is supported by a Grant from US AFOSR.

  12. Method for preparing a thick film conductor

    DOEpatents

    Nagesh, Voddarahalli K.; Fulrath, deceased, Richard M.

    1978-01-01

    A method for preparing a thick film conductor which comprises providing surface active glass particles, mixing the surface active glass particles with a thermally decomposable organometallic compound, for example, a silver resinate, and then decomposing the organometallic compound by heating, thereby chemically depositing metal on the glass particles. The glass particle mixture is applied to a suitable substrate either before or after the organometallic compound is thermally decomposed. The resulting system is then fired in an oxidizing atmosphere, providing a microstructure of glass particles substantially uniformly coated with metal.

  13. Atom economy and green elimination of nitric oxide using ZrN powders.

    PubMed

    Chen, Ning; Wang, Jigang; Yin, Wenyan; Li, Zhen; Li, Peishen; Guo, Ming; Wang, Qiang; Li, Chunlei; Wang, Changzheng; Chen, Shaowei

    2018-05-01

    Nitric oxide (NO) may cause serious environmental problems, such as acid rain, haze, global warming, and even death. Herein, a new low-cost, highly efficient and green method for the elimination of NO using zirconium nitride (ZrN) is reported for the first time, which does not produce any waste or by-product. Relevant experimental parameters, such as reaction temperature and gas concentration, were investigated to explore the reaction mechanism. Interestingly, NO can be easily decomposed into nitrogen (N2) by ZrN powders at 600°C, with ZrN simultaneously and gradually transformed into zirconium dioxide (ZrO2). The time for the complete conversion of NO into N2 was approximately 14 h over 0.5 g of ZrN at a NO concentration of 500 ppm. This green elimination process of NO demonstrates good atom economy and practical significance in mitigating environmental problems.

  14. Decomposition Technique for Remaining Useful Life Prediction

    NASA Technical Reports Server (NTRS)

    Saha, Bhaskar (Inventor); Goebel, Kai F. (Inventor); Saxena, Abhinav (Inventor); Celaya, Jose R. (Inventor)

    2014-01-01

    The prognostic tool disclosed here decomposes the problem of estimating the remaining useful life (RUL) of a component or sub-system into two separate regression problems: the feature-to-damage mapping and the operational-conditions-to-damage-rate mapping. These maps are initially generated in off-line mode. One or more regression algorithms are used to generate each of these maps from measurements (and features derived from these), operational conditions, and ground truth information. This decomposition technique allows for the explicit quantification and management of the different sources of uncertainty present in the process. Next, the maps are used in an on-line mode where run-time data (sensor measurements and operational conditions) are used in conjunction with the maps generated in off-line mode to estimate both the current damage state and future damage accumulation. Remaining life is computed by subtracting the instant when the prediction is made from the instant when the extrapolated damage reaches the failure threshold.
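
    A toy version of the two-map decomposition is sketched below on synthetic data: one regression maps features to damage, a second maps operating conditions to damage rate, and the remaining useful life follows by extrapolating damage to a failure threshold. The model forms, thresholds, and variable names are illustrative, not those of the patented tool.

```python
# Sketch of the two-map decomposition on synthetic data: (1) a feature-to-damage
# regression, (2) an operating-conditions-to-damage-rate regression, then on-line
# extrapolation of damage to a failure threshold to get RUL. All values are made up.
import numpy as np

rng = np.random.default_rng(0)

# --- Off-line: learn the two maps from (synthetic) ground-truth histories ---
feature = np.linspace(0, 1, 200) + 0.01 * rng.standard_normal(200)
damage = 0.8 * np.linspace(0, 1, 200)                    # ground-truth damage
feat_to_damage = np.poly1d(np.polyfit(feature, damage, deg=2))

load = rng.uniform(0.5, 1.5, 200)                        # operating condition
damage_rate = 0.004 * load + 0.0005 * rng.standard_normal(200)
cond_to_rate = np.poly1d(np.polyfit(load, damage_rate, deg=1))

# --- On-line: estimate current damage, then extrapolate to the threshold ---
current_feature, expected_load, threshold = 0.55, 1.2, 0.8
d_now = feat_to_damage(current_feature)
rate = cond_to_rate(expected_load)
rul_cycles = (threshold - d_now) / rate
print(f"estimated damage {d_now:.3f}, RUL ~ {rul_cycles:.0f} cycles")
```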

  15. The Caltech Concurrent Computation Program - Project description

    NASA Technical Reports Server (NTRS)

    Fox, G.; Otto, S.; Lyzenga, G.; Rogstad, D.

    1985-01-01

    The Caltech Concurrent Computation Program, which studies basic issues in computational science, is described. The research builds on initial work in which novel concurrent hardware, the necessary systems software to use it, and twenty significant scientific implementations running on the initial 32-, 64-, and 128-node hypercube machines were constructed. A major goal of the program will be to extend this work into new disciplines and more complex algorithms, including general packages that decompose arbitrary problems in major application areas. New high-performance concurrent processors with up to 1024 nodes, over a gigabyte of memory, and multigigaflop performance are being constructed. The implementations cover a wide range of problems in areas such as high-energy physics and astrophysics, condensed matter, chemical reactions, plasma physics, applied mathematics, geophysics, simulation, CAD for VLSI, graphics, and image processing. The products of the research program include the concurrent algorithms, hardware, systems software, and complete program implementations.

  16. Angular velocity of gravitational radiation from precessing binaries and the corotating frame

    NASA Astrophysics Data System (ADS)

    Boyle, Michael

    2013-05-01

    This paper defines an angular velocity for time-dependent functions on the sphere and applies it to gravitational waveforms from compact binaries. Because it is geometrically meaningful and has a clear physical motivation, the angular velocity is uniquely useful in helping to solve an important—and largely ignored—problem in models of compact binaries: the inverse problem of deducing the physical parameters of a system from the gravitational waves alone. It is also used to define the corotating frame of the waveform. When decomposed in this frame, the waveform has no rotational dynamics and is therefore as slowly evolving as possible. The resulting simplifications lead to straightforward methods for accurately comparing waveforms and constructing hybrids. As formulated in this paper, the methods can be applied robustly to both precessing and nonprecessing waveforms, providing a clear, comprehensive, and consistent framework for waveform analysis. Explicit implementations of all these methods are provided in accompanying computer code.

  17. An Ensemble Multilabel Classification for Disease Risk Prediction

    PubMed Central

    Liu, Wei; Zhao, Hongling; Zhang, Chaoyang

    2017-01-01

    It is important to identify and prevent disease risk as early as possible through regular physical examinations. We formulate disease risk prediction as a multilabel classification problem. A novel Ensemble Label Power-set Pruned datasets Joint Decomposition (ELPPJD) method is proposed in this work. First, we transform the multilabel classification into a multiclass classification. Then, we propose pruned datasets and joint decomposition methods to deal with the imbalanced learning problem. Two strategies, size balanced (SB) and label similarity (LS), are designed to decompose the training dataset. In the experiments, the dataset comes from real physical examination records. We contrast the performance of the ELPPJD method under the two different decomposition strategies. Moreover, a comparison between ELPPJD and the classic multilabel classification methods RAkEL and HOMER is carried out. The experimental results show that the ELPPJD method with the label similarity strategy has outstanding performance. PMID:29065647
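
    The label power-set transformation that the ELPPJD method builds on can be illustrated in a few lines: each distinct label combination becomes one class of a multiclass problem, and predictions are mapped back to label vectors. The pruning and joint decomposition steps of the paper are not reproduced.

```python
# Minimal sketch of the label power-set transformation underlying the ELPPJD
# idea: each distinct label vector becomes one class of a multiclass problem
# (pruning and joint decomposition steps are not reproduced here).
import numpy as np

Y = np.array([[1, 0, 1],          # example multilabel targets (3 labels)
              [1, 0, 1],
              [0, 1, 0],
              [1, 1, 0]])

# Map each distinct label combination to a class id.
combos, class_ids = np.unique(Y, axis=0, return_inverse=True)
print("classes:", class_ids)               # e.g. [1 1 0 2]

# Recover the label vector from a predicted class id.
predicted_class = 2
print("labels:", combos[predicted_class])  # -> [1 1 0]
```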

  18. Decentralized control of large flexible structures by joint decoupling

    NASA Technical Reports Server (NTRS)

    Su, Tzu-Jeng; Juang, Jer-Nan

    1994-01-01

    This paper presents a novel method to design decentralized controllers for large complex flexible structures by using the idea of joint decoupling. Decoupling of joint degrees of freedom from the interior degrees of freedom is achieved by setting the joint actuator commands to cancel the internal forces exerting on the joint degrees of freedom. By doing so, the interactions between substructures are eliminated. The global structure control design problem is then decomposed into several substructure control design problems. Control commands for interior actuators are set to be localized state feedback using decentralized observers for state estimation. The proposed decentralized controllers can operate successfully at the individual substructure level as well as at the global structure level. Not only control design but also control implementation is decentralized. A two-component mass-spring-damper system is used as an example to demonstrate the proposed method.

  19. A decomposition approach to the design of a multiferroic memory bit

    NASA Astrophysics Data System (ADS)

    Acevedo, Ruben; Liang, Cheng-Yen; Carman, Gregory P.; Sepulveda, Abdon E.

    2017-06-01

    The objective of this paper is to present a methodology for the design of a memory bit that minimizes the energy required to write data at the bit level. When a ferromagnetic nickel nano-dot is strained by means of a piezoelectric substrate, its magnetization vector rotates between two stable states, defined as the 1 and 0 of a digital memory. The memory bit geometry, actuation mechanism, and voltage control law were used as design variables. The approach used was to decompose the overall design process into simpler sub-problems whose structure can be exploited for a more efficient solution. This method minimizes the number of fully dynamic coupled finite element analyses required to converge to a near-optimal design, thus decreasing the computational time for the design process. An in-plane sample design problem is presented to illustrate the advantages and flexibility of the procedure.

  20. Comparison of dual and single exposure techniques in dual-energy chest radiography.

    PubMed

    Ho, J T; Kruger, R A; Sorenson, J A

    1989-01-01

    Conventional chest radiography is the most effective tool for lung cancer detection and diagnosis; nevertheless, a high percentage of lung cancer tumors are missed because of the overlap of lung nodule image contrast with bone image contrast in a chest radiograph. Two different energy subtraction strategies, the dual exposure and single exposure techniques, were studied for decomposing a radiograph into bone-free and soft-tissue-free images to address this problem. To compare the efficiency of these two techniques for lung nodule detection, their performance was evaluated on the basis of residual tissue contrast, energy separation, and signal-to-noise ratio. The evaluation was based on both computer simulation and experimental verification. The dual exposure technique was found to be better than the single exposure technique because of its higher signal-to-noise ratio and greater residual tissue contrast. However, x-ray tube loading and patient motion are problems.
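
    For orientation, a generic weighted log-subtraction used in dual-energy decomposition is shown below, where I_H and I_L are the high- and low-energy intensities and w is a weighting factor chosen to cancel bone contrast; this is a textbook form, not the specific calibration used in this study.

```latex
% Generic dual-energy weighted log-subtraction (textbook form, notation assumed):
I_{\text{soft}} \;=\; \exp\!\big(\ln I_{H} \;-\; w\,\ln I_{L}\big),
% a different choice of w cancels soft tissue instead, yielding the bone image.
```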

  1. Three-Component Decomposition of Polarimetric SAR Data Integrating Eigen-Decomposition Results

    NASA Astrophysics Data System (ADS)

    Lu, Da; He, Zhihua; Zhang, Huan

    2018-01-01

    This paper presents a novel three-component scattering power decomposition of polarimetric SAR data. There are two problems in the three-component decomposition method: overestimation of the volume scattering component in urban areas, and a model parameter that is artificially set to a fixed value. Though the volume scattering overestimation can be partly solved by a deorientation process, volume scattering still dominates some oriented urban areas. The speckle-like decomposition results introduced by the artificially set value are not conducive to further image interpretation. This paper integrates the results of eigen-decomposition to solve the aforementioned problems. Two principal eigenvectors are used to substitute for the surface scattering and double-bounce scattering models. The decomposed scattering powers are obtained using a constrained linear least-squares method. The proposed method has been verified using an ESAR PolSAR image, and the results show that it performs better in urban areas.
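
    The final fitting step, recovering non-negative scattering powers by constrained linear least squares, can be sketched as below; the three column vectors stand in for the surface, double-bounce, and volume scattering models and are random placeholders rather than a real polarimetric covariance model.

```python
# Sketch of the final fitting step only: given three scattering-mechanism
# "model" vectors (random stand-ins here, not a real PolSAR covariance model),
# recover non-negative scattering powers by constrained linear least squares.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
# Columns: surface, double-bounce, volume mechanisms (illustrative vectors).
A = np.abs(rng.standard_normal((6, 3)))
true_powers = np.array([2.0, 0.5, 1.0])
observed = A @ true_powers + 0.01 * rng.standard_normal(6)

powers, residual = nnls(A, observed)       # enforces powers >= 0
print("recovered powers:", powers)
```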

  2. Investigation of automated task learning, decomposition and scheduling

    NASA Technical Reports Server (NTRS)

    Livingston, David L.; Serpen, Gursel; Masti, Chandrashekar L.

    1990-01-01

    The details and results of research on the application of neural networks to task planning and decomposition are presented. Task planning and decomposition are operations that humans perform in a reasonably efficient manner. Without good heuristics and usually much human interaction, automatic planners and decomposers generally do not perform well, due to the intractable nature of the problems under consideration. The human-like performance of neural networks has shown promise for generating acceptable solutions to intractable problems such as planning and decomposition, and this was the primary motivation for the study. The basis for the work is the use of state machines to model tasks. State machine models provide a useful means for examining the structure of tasks, since many formal techniques have been developed for their analysis and synthesis. The approach taken is to integrate the strong algebraic foundations of state machines with the heretofore trial-and-error approach to neural network synthesis.

  3. Network exploitation using WAMI tracks

    NASA Astrophysics Data System (ADS)

    Rimey, Ray; Record, Jim; Keefe, Dan; Kennedy, Levi; Cramer, Chris

    2011-06-01

    Creating and exploiting network models from wide area motion imagery (WAMI) is an important task for intelligence analysis. Tracks of entities observed moving in the WAMI sensor data are extracted, then large numbers of tracks are studied over long time intervals to determine specific locations that are visited (e.g., buildings in an urban environment), what locations are related to other locations, and the function of each location. This paper describes several parts of the network detection/exploitation problem, and summarizes a solution technique for each: (a) Detecting nodes; (b) Detecting links between known nodes; (c) Node attributes to characterize a node; (d) Link attributes to characterize each link; (e) Link structure inferred from node attributes and vice versa; and (f) Decomposing a detected network into smaller networks. Experimental results are presented for each solution technique, and those are used to discuss issues for each problem part and its solution technique.

  4. Extension of the frequency-domain pFFT method for wave structure interaction in finite depth

    NASA Astrophysics Data System (ADS)

    Teng, Bin; Song, Zhi-jie

    2017-06-01

    To analyze wave interaction with a large-scale body in the frequency domain, a precorrected Fast Fourier Transform (pFFT) method has previously been proposed for infinite-depth problems with the deep-water Green function, as it can form a matrix with Toeplitz and Hankel properties. In this paper, a method is proposed to decompose the finite-depth Green function into two terms, which form matrices with Toeplitz and Hankel properties, respectively. Then, a pFFT method for finite-depth problems is developed. Based on the pFFT method, a numerical code, pFFT-HOBEM, is developed using high-order element discretization. The model is validated, and its computing efficiency and memory requirements are examined. The results show that the new method has the same advantages as that for infinite depth.
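
    The reason the Toeplitz/Hankel structure matters is that a Toeplitz matrix-vector product can be evaluated in O(n log n) by circulant embedding and the FFT, as in the generic sketch below (this is not the pFFT-HOBEM implementation itself).

```python
# Generic illustration: a Toeplitz matrix-vector product done in O(n log n) by
# embedding the matrix in a circulant one and using the FFT.
import numpy as np

def toeplitz_matvec(first_col, first_row, x):
    n = len(x)
    # First column of the 2n-by-2n circulant embedding (the extra entry is arbitrary).
    c = np.concatenate([first_col, [0.0], first_row[:0:-1]])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x, 2 * n))
    return y[:n].real

n = 5
col = np.arange(1.0, n + 1)                               # first column of T
row = np.concatenate([[col[0]], 10 + np.arange(1.0, n)])  # first row of T
x = np.ones(n)

# Check against an explicitly assembled Toeplitz matrix.
T = np.array([[col[i - j] if i >= j else row[j - i] for j in range(n)] for i in range(n)])
print(np.allclose(T @ x, toeplitz_matvec(col, row, x)))   # True
```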

  5. Diversity of Riparian Plants among and within Species Shapes River Communities

    PubMed Central

    Jackrel, Sara L.; Wootton, J. Timothy

    2015-01-01

    Organismal diversity among and within species may affect ecosystem function with effects transmitting across ecosystem boundaries. Whether recipient communities adjust their composition, in turn, to maximize their function in response to changes in donor composition at these two scales of diversity is unknown. We use small stream communities that rely on riparian subsidies as a model system. We used leaf pack experiments to ask how variation in plants growing beside streams in the Olympic Peninsula of Washington State, USA affects stream communities via leaf subsidies. Leaves from red alder (Alnus rubra), vine maple (Acer cinereus), bigleaf maple (Acer macrophyllum) and western hemlock (Tsuga heterophylla) were assembled in leaf packs to contrast low versus high diversity, and deployed in streams to compare local versus non-local leaf sources at the among and within species scales. Leaves from individuals within species decomposed at varying rates; most notably thin leaves decomposed rapidly. Among deciduous species, vine maple decomposed most rapidly, harbored the least algal abundance, and supported the greatest diversity of aquatic invertebrates, while bigleaf maple was at the opposite extreme for these three metrics. Recipient communities decomposed leaves from local species rapidly: leaves from early successional plants decomposed rapidly in stream reaches surrounded by early successional forest and leaves from later successional plants decomposed rapidly adjacent to later successional forest. The species diversity of leaves inconsistently affected decomposition, algal abundance and invertebrate metrics. Intraspecific diversity of leaf packs also did not affect decomposition or invertebrate diversity. However, locally sourced alder leaves decomposed more rapidly and harbored greater levels of algae than leaves sourced from conspecifics growing in other areas on the Olympic Peninsula, but did not harbor greater aquatic invertebrate diversity. In contrast to alder, local intraspecific differences via decomposition, algal or invertebrate metrics were not observed consistently among maples. These results emphasize that biodiversity of riparian subsidies at the within and across species scale have the potential to affect aquatic ecosystems, although there are complex species-specific effects. PMID:26539714

  7. Asymmetrically dominated choice problems, the isolation hypothesis and random incentive mechanisms.

    PubMed

    Cox, James C; Sadiraj, Vjollca; Schmidt, Ulrich

    2014-01-01

    This paper presents an experimental study of the random incentive mechanisms which are a standard procedure in economic and psychological experiments. Random incentive mechanisms have several advantages but are incentive-compatible only if responses to the single tasks are independent. This is true if either the independence axiom of expected utility theory or the isolation hypothesis of prospect theory holds. We present a simple test of this in the context of choice under risk. In the baseline (one-task) treatment we observe risk behavior in a given choice problem. We show that by integrating a second, asymmetrically dominated choice problem into a random incentive mechanism, risk behavior can be manipulated systematically. This implies that the isolation hypothesis is violated and that the random incentive mechanism does not elicit true preferences in our example.

  8. Conic Sampling: An Efficient Method for Solving Linear and Quadratic Programming by Randomly Linking Constraints within the Interior

    PubMed Central

    Serang, Oliver

    2012-01-01

    Linear programming (LP) problems are commonly used in analysis and resource allocation, frequently surfacing as approximations to more difficult problems. Existing approaches to LP have been dominated by a small group of methods, and randomized algorithms have not enjoyed popularity in practice. This paper introduces a novel randomized method of solving LP problems by moving along the facets and within the interior of the polytope along rays randomly sampled from the polyhedral cones defined by the bounding constraints. This conic sampling method is then applied to randomly sampled LPs, and its runtime performance is shown to compare favorably to the simplex and primal affine-scaling algorithms, especially on polytopes with certain characteristics. The conic sampling method is then adapted and applied to solve a certain quadratic program, which computes a projection onto a polytope; the proposed method is shown to outperform the proprietary software Mathematica on large, sparse QP problems constructed from mass spectrometry-based proteomics. PMID:22952741
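
    The sketch below is a greatly simplified random-direction search inside an LP's feasible polytope, meant only to illustrate the idea of moving along randomly sampled rays toward the boundary; it is not the paper's conic sampling algorithm, and the two-variable LP is invented for the example.

```python
# Greatly simplified illustration of moving through an LP's feasible polytope
# along randomly sampled directions (a toy random search, not the paper's
# conic sampling algorithm). Problem: maximize c'x subject to A x <= b.
import numpy as np

rng = np.random.default_rng(0)
c = np.array([1.0, 2.0])
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 1.0, 1.5, 0.0, 0.0])     # box [0,1]^2 plus x1 + x2 <= 1.5

x = np.array([0.1, 0.1])                     # strictly interior starting point
for _ in range(2000):
    d = rng.standard_normal(2)
    if c @ d < 0:
        d = -d                               # only consider improving rays
    Ad = A @ d
    slack = b - A @ x
    with np.errstate(divide="ignore", invalid="ignore"):
        steps = np.where(Ad > 1e-12, slack / Ad, np.inf)
    t_max = steps.min()                      # step to the nearest constraint
    x = x + 0.95 * t_max * d                 # stay (just) inside the polytope

print("approximate optimum:", x, "objective:", c @ x)   # approaches (0.5, 1.0)
```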

  9. Thermodynamic method for generating random stress distributions on an earthquake fault

    USGS Publications Warehouse

    Barall, Michael; Harris, Ruth A.

    2012-01-01

    This report presents a new method for generating random stress distributions on an earthquake fault, suitable for use as initial conditions in a dynamic rupture simulation. The method employs concepts from thermodynamics and statistical mechanics. A pattern of fault slip is considered to be analogous to a micro-state of a thermodynamic system. The energy of the micro-state is taken to be the elastic energy stored in the surrounding medium. Then, the Boltzmann distribution gives the probability of a given pattern of fault slip and stress. We show how to decompose the system into independent degrees of freedom, which makes it computationally feasible to select a random state. However, due to the equipartition theorem, straightforward application of the Boltzmann distribution leads to a divergence which predicts infinite stress. To avoid equipartition, we show that the finite strength of the fault acts to restrict the possible states of the system. By analyzing a set of earthquake scaling relations, we derive a new formula for the expected power spectral density of the stress distribution, which allows us to construct a computer algorithm free of infinities. We then present a new technique for controlling the extent of the rupture by generating a random stress distribution thousands of times larger than the fault surface, and selecting a portion which, by chance, has a positive stress perturbation of the desired size. Finally, we present a new two-stage nucleation method that combines a small zone of forced rupture with a larger zone of reduced fracture energy.
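
    One ingredient of such a generator, drawing a random field with a prescribed power spectral density by assigning random phases in the Fourier domain, is sketched below in one dimension; the power-law exponent, the fault-strength truncation, and the nucleation procedure of the report are not reproduced.

```python
# Generic sketch of one ingredient: generating a 1-D random field with a
# prescribed power spectral density via random phases (the fault-strength
# truncation and rupture-nucleation steps of the report are not reproduced).
import numpy as np

rng = np.random.default_rng(0)
n, L = 4096, 100.0                       # samples and fault length (arbitrary units)
k = np.fft.rfftfreq(n, d=L / n)          # spatial wavenumbers

psd = np.zeros_like(k)
psd[1:] = k[1:] ** (-2.0)                # assumed power-law spectral decay

amplitude = np.sqrt(psd)
phases = np.exp(2j * np.pi * rng.random(len(k)))
spectrum = amplitude * phases
spectrum[0] = 0.0                        # zero-mean field

stress = np.fft.irfft(spectrum, n=n)
stress /= stress.std()                   # normalize to unit standard deviation
print(stress[:5])
```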

  10. Ozone decomposing filter

    DOEpatents

    Simandl, Ronald F.; Brown, John D.; Whinnery, Jr., LeRoy L.

    1999-01-01

    In an improved ozone decomposing air filter carbon fibers are held together with a carbonized binder in a perforated structure. The structure is made by combining rayon fibers with gelatin, forming the mixture in a mold, freeze-drying, and vacuum baking.

  11. Reactive codoping of GaAlInP compound semiconductors

    DOEpatents

    Hanna, Mark Cooper [Boulder, CO; Reedy, Robert [Golden, CO

    2008-02-12

    A GaAlInP compound semiconductor and a method of producing a GaAlInP compound semiconductor are provided. The apparatus and method comprise a GaAs crystal substrate in a metal organic vapor deposition reactor. Al, Ga, and In vapors are prepared by thermally decomposing organometallic compounds. P vapors are prepared by thermally decomposing phosphine gas, group II vapors are prepared by thermally decomposing an organometallic group IIA or IIB compound, and group VIB vapors are prepared by thermally decomposing a gaseous compound of group VIB. The Al, Ga, In, P, group II, and group VIB vapors grow a GaAlInP crystal doped with group IIA or IIB and group VIB elements on the substrate, wherein the group IIA or IIB and group VIB vapors produce a codoped GaAlInP compound semiconductor with the group IIA or IIB element serving as a p-type dopant having low group II atomic diffusion.

  12. sdg interacting-boson model in the SU(3) scheme and its application to 168Er

    NASA Astrophysics Data System (ADS)

    Yoshinaga, N.; Akiyama, Y.; Arima, A.

    1988-07-01

    The sdg interacting-boson model is presented in the SU(3) tensor formalism. The interactions are decomposed according to their SU(3) tensor character. The existence of the SU(3)-seniority preserving operator is found to be important. The model is applied to 168Er. Energy levels and electromagnetic transitions are calculated. This model is shown to solve the problem of anharmonicity regarding the excitation energy of the first Kπ=4+ band relative to that of the first Kπ=2+ one. E4 transitions are calculated to give different predictions from those by the quasiparticle-phonon nuclear model.

  13. Magnetic resonance imaging as a tool for extravehicular activity analysis

    NASA Technical Reports Server (NTRS)

    Dickenson, R.; Lorenz, C.; Peterson, S.; Strauss, A.; Main, J.

    1992-01-01

    The purpose of this research is to examine the value of magnetic resonance imaging (MRI) as a means of conducting kinematic studies of the hand for the purpose of EVA capability enhancement. After imaging the subject hand using a magnetic resonance scanner, the resulting 2D slices were reconstructed into a 3D model of the proximal phalanx of the left hand. Using the coordinates of several landmark positions, one is then able to decompose the motion of the rigid body. MRI offers highly accurate measurements due to its tomographic nature without the problems associated with other imaging modalities for in vivo studies.
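
    A common way to decompose rigid-body motion from landmark coordinates, offered here only as a hedged sketch and not necessarily the procedure used in the study, is the SVD-based Kabsch fit, which recovers the rotation and translation carrying one landmark set onto another:

      import numpy as np

      def rigid_body_fit(P, Q):
          # Least-squares rotation R and translation t with Q ≈ P @ R.T + t,
          # where P and Q are (n_landmarks, 3) arrays of corresponding coordinates.
          cP, cQ = P.mean(axis=0), Q.mean(axis=0)
          H = (P - cP).T @ (Q - cQ)                     # cross-covariance matrix
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
          R = Vt.T @ D @ U.T                            # proper rotation, det(R) = +1
          t = cQ - R @ cP
          return R, t

      # Toy check with a known rotation about the z-axis plus a translation.
      rng = np.random.default_rng(2)
      P = rng.random((5, 3))
      th = 0.3
      R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                         [np.sin(th),  np.cos(th), 0.0],
                         [0.0, 0.0, 1.0]])
      Q = P @ R_true.T + np.array([1.0, 2.0, 3.0])
      R, t = rigid_body_fit(P, Q)
      assert np.allclose(R, R_true) and np.allclose(t, [1.0, 2.0, 3.0])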

  14. A Systematic Methodology for Verifying Superscalar Microprocessors

    NASA Technical Reports Server (NTRS)

    Srivas, Mandayam; Hosabettu, Ravi; Gopalakrishnan, Ganesh

    1999-01-01

    We present a systematic approach to decompose and incrementally build the proof of correctness of pipelined microprocessors. The central idea is to construct the abstraction function by using completion functions, one per unfinished instruction, each of which specifies the effect (on the observables) of completing the instruction. In addition to avoiding the term size and case explosion problem that limits the pure flushing approach, our method helps localize errors, and also handles stages with interactive loops. The technique is illustrated on pipelined and superscalar pipelined implementations of a subset of the DLX architecture. It has also been applied to a processor with out-of-order execution.

  15. On the strain energy of laminated composite plates

    NASA Technical Reports Server (NTRS)

    Atilgan, Ali R.; Hodges, Dewey H.

    1991-01-01

    The present effort to obtain the asymptotically correct form of the strain energy in inhomogeneous laminated composite plates proceeds from the geometrically nonlinear elastic theory-based three-dimensional strain energy by decomposing the nonlinear three-dimensional problem into a linear, through-the-thickness analysis and a nonlinear, two-dimensional plate analysis. Attention is given to the case in which each lamina exhibits material symmetry about its middle surface, deriving closed-form analytical expressions for the plate elastic constants and the displacement and strain distributions through the plate's thickness. Despite the simplicity of the plate strain energy's form, there are no restrictions on the magnitudes of displacement and rotation measures.

  16. Inverse random source scattering for the Helmholtz equation in inhomogeneous media

    NASA Astrophysics Data System (ADS)

    Li, Ming; Chen, Chuchu; Li, Peijun

    2018-01-01

    This paper is concerned with an inverse random source scattering problem in an inhomogeneous background medium. The wave propagation is modeled by the stochastic Helmholtz equation with the source driven by additive white noise. The goal is to reconstruct the statistical properties of the random source such as the mean and variance from the boundary measurement of the radiated random wave field at multiple frequencies. Both the direct and inverse problems are considered. We show that the direct problem has a unique mild solution by a constructive proof. For the inverse problem, we derive Fredholm integral equations, which connect the boundary measurement of the radiated wave field with the unknown source function. A regularized block Kaczmarz method is developed to solve the ill-posed integral equations. Numerical experiments are included to demonstrate the effectiveness of the proposed method.
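
    The paper's regularized block Kaczmarz solver is not spelled out in the abstract; as a hedged sketch of the underlying Kaczmarz idea only, the snippet below applies plain randomized Kaczmarz sweeps to a generic discretized linear system A x ≈ b, with early stopping standing in for regularization. The toy system and names are made up.

      import numpy as np

      def randomized_kaczmarz(A, b, n_iters=20000, rng=None):
          # Plain randomized Kaczmarz for A x ≈ b: repeatedly project the iterate
          # onto the hyperplane of a row sampled with probability ∝ its squared norm.
          if rng is None:
              rng = np.random.default_rng()
          m, n = A.shape
          x = np.zeros(n)
          row_norms2 = np.einsum("ij,ij->i", A, A)
          probs = row_norms2 / row_norms2.sum()
          for _ in range(n_iters):
              i = rng.choice(m, p=probs)
              x += (b[i] - A[i] @ x) / row_norms2[i] * A[i]
          return x

      # Toy usage on a mildly ill-conditioned dense system.
      rng = np.random.default_rng(3)
      A = rng.standard_normal((200, 50)) @ np.diag(np.logspace(0, -3, 50))
      x_true = rng.standard_normal(50)
      b = A @ x_true
      x_hat = randomized_kaczmarz(A, b, rng=rng)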

  17. What Does a Random Line Look Like: An Experimental Study

    ERIC Educational Resources Information Center

    Turner, Nigel E.; Liu, Eleanor; Toneatto, Tony

    2011-01-01

    The study examined the perception of random lines by people with gambling problems compared to people without gambling problems. The sample consisted of 67 probable pathological gamblers and 46 people without gambling problems. Participants completed a number of questionnaires about their gambling and were then presented with a series of random…

  18. Competitive Facility Location with Random Demands

    NASA Astrophysics Data System (ADS)

    Uno, Takeshi; Katagiri, Hideki; Kato, Kosuke

    2009-10-01

    This paper proposes a new location problem of competitive facilities, e.g. shops and stores, with uncertain demands in the plane. By representing the demands for facilities as random variables, the location problem is formulated as a stochastic programming problem, and to find its solution, three deterministic programming problems are considered: an expectation-maximizing problem, a probability-maximizing problem, and a satisfying-level-maximizing problem. After showing that an optimal solution of each can be found by solving 0-1 programming problems, a solution method is proposed that improves the tabu search algorithm with strategic vibration. The efficiency of the solution method is shown by applying it to numerical examples of the facility location problems.

  19. Estimate of fine root production including the impact of decomposed roots in a Bornean tropical rainforest

    NASA Astrophysics Data System (ADS)

    Katayama, Ayumi; Khoon Koh, Lip; Kume, Tomonori; Makita, Naoki; Matsumoto, Kazuho; Ohashi, Mizue

    2016-04-01

    Considerable carbon is allocated belowground and used for respiration and the production of roots. Approximately 40% of GPP is reported to be allocated belowground in a Bornean tropical rainforest, much higher than in Neotropical rainforests. This may be caused by high root production in this forest. The ingrowth core is a popular method for estimating fine root production, but a recent study by Osawa et al. (2012) showed that this method may underestimate production because it does not account for roots that decompose during the incubation. It is therefore important to estimate fine root production with consideration of decomposed roots, especially in the tropics, where decomposition rates are higher than in other regions. The objective of this study was to estimate fine root production, accounting for decomposed roots, using ingrowth cores and root litter bags in a tropical rainforest. The study was conducted in Lambir Hills National Park in Borneo. Ingrowth cores and litter bags for fine roots were buried in March 2013. Eighteen ingrowth cores and 27 litter bags were collected in May and September 2013, March 2014, and March 2015. Fine root production was comparable to the aboveground biomass increment and litterfall amount, and accounted for only 10% of GPP at this study site, suggesting that most of the carbon allocated belowground might be used for other purposes. Fine root production was comparable to that in the Neotropics. Decomposed roots accounted for 18% of fine root production. This result suggests that ignoring decomposed fine roots may lead to an underestimate of fine root production.

  20. Cascading effects of induced terrestrial plant defences on aquatic and terrestrial ecosystem function

    PubMed Central

    Jackrel, Sara L.; Wootton, J. Timothy

    2015-01-01

    Herbivores induce plants to undergo diverse processes that minimize costs to the plant, such as producing defences to deter herbivory or reallocating limited resources to inaccessible portions of the plant. Yet most plant tissue is consumed by decomposers, not herbivores, and these defensive processes aimed to deter herbivores may alter plant tissue even after detachment from the plant. All consumers value nutrients, but plants also require these nutrients for primary functions and defensive processes. We experimentally simulated herbivory with and without nutrient additions on red alder (Alnus rubra), which supplies the majority of leaf litter for many rivers in western North America. Simulated herbivory induced a defence response with cascading effects: terrestrial herbivores and aquatic decomposers fed less on leaves from stressed trees. This effect was context dependent: leaves from fertilized-only trees decomposed most rapidly while leaves from fertilized trees receiving the herbivory treatment decomposed least, suggesting plants funnelled a nutritionally valuable resource into enhanced defence. One component of the defence response was a decrease in leaf nitrogen leading to elevated carbon : nitrogen. Aquatic decomposers prefer leaves naturally low in C : N and this altered nutrient profile largely explains the lower rate of aquatic decomposition. Furthermore, terrestrial soil decomposers were unaffected by either treatment but did show a preference for local and nitrogen-rich leaves. Our study illustrates the ecological implications of terrestrial herbivory and these findings demonstrate that the effects of selection caused by terrestrial herbivory in one ecosystem can indirectly shape the structure of other ecosystems through ecological fluxes across boundaries. PMID:25788602

  1. A catalog of polychromatic bulge-disc decompositions of ˜17.600 galaxies in CANDELS

    NASA Astrophysics Data System (ADS)

    Dimauro, Paola; Huertas-Company, Marc; Daddi, Emanuele; Pérez-González, Pablo G.; Bernardi, Mariangela; Barro, Guillermo; Buitrago, Fernando; Caro, Fernando; Cattaneo, Andrea; Dominguez-Sánchez, Helena; Faber, Sandra M.; Häußler, Boris; Kocevski, Dale D.; Koekemoer, Anton M.; Koo, David C.; Lee, Christoph T.; Mei, Simona; Margalef-Bentabol, Berta; Primack, Joel; Rodriguez-Puebla, Aldo; Salvato, Mara; Shankar, Francesco; Tuccillo, Diego

    2018-05-01

    Understanding how bulges grow in galaxies is a critical step towards unveiling the link between galaxy morphology and star formation. To do so, it is necessary to decompose large samples of galaxies at different epochs into their main components (bulges and discs). This is particularly challenging at high redshifts, where galaxies are poorly resolved. This work presents a catalog of bulge-disc decompositions of the surface brightness profiles of ˜17.600 H-band selected galaxies in the CANDELS fields (F160W < 23, 0 < z < 2) in 4 to 7 filters covering a spectral range of 430-1600 nm. This is the largest available catalog of this kind up to z = 2. By using a novel approach based on deep learning to select the best model to fit, we manage to control systematics arising from wrong model selection and obtain less contaminated samples than previous works. We show that the derived structural properties are within ˜10-20% of random uncertainties. We then fit stellar population models to the decomposed SEDs (spectral energy distributions) of bulges and discs and derive stellar masses (and stellar-mass bulge-to-total ratios) as well as rest-frame (U, V, J) colors for bulges and discs separately. All data products are publicly released with this paper and through the web page https://lerma.obspm.fr/huertas/form_CANDELS and will be used for scientific analysis in forthcoming works.

  2. The capital-asset-pricing model and arbitrage pricing theory: A unification

    PubMed Central

    Khan, M. Ali; Sun, Yeneng

    1997-01-01

    We present a model of a financial market in which naive diversification, based simply on portfolio size and obtained as a consequence of the law of large numbers, is distinguished from efficient diversification, based on mean-variance analysis. This distinction yields a valuation formula involving only the essential risk embodied in an asset’s return, where the overall risk can be decomposed into a systematic and an unsystematic part, as in the arbitrage pricing theory; and the systematic component further decomposed into an essential and an inessential part, as in the capital-asset-pricing model. The two theories are thus unified, and their individual asset-pricing formulas shown to be equivalent to the pervasive economic principle of no arbitrage. The factors in the model are endogenously chosen by a procedure analogous to the Karhunen–Loéve expansion of continuous time stochastic processes; it has an optimality property justifying the use of a relatively small number of them to describe the underlying correlational structures. Our idealized limit model is based on a continuum of assets indexed by a hyperfinite Loeb measure space, and it is asymptotically implementable in a setting with a large but finite number of assets. Because the difficulties in the formulation of the law of large numbers with a standard continuum of random variables are well known, the model uncovers some basic phenomena not amenable to classical methods, and whose approximate counterparts are not already, or even readily, apparent in the asymptotic setting. PMID:11038614
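
    The systematic/unsystematic split can be made concrete with a single-factor regression, a far simpler, hedged sketch than the hyperfinite-asset model of the paper: an asset's return variance decomposes into beta² · Var(market) plus the residual variance, and with the OLS beta the two parts sum exactly to the total sample variance. The simulated returns below are made up.

      import numpy as np

      rng = np.random.default_rng(4)

      # Simulated monthly returns: a market factor plus idiosyncratic noise.
      market = rng.normal(0.01, 0.04, size=240)
      asset = 0.002 + 1.3 * market + rng.normal(0.0, 0.02, size=240)

      # OLS slope of the asset on the market factor.
      beta = np.cov(asset, market, ddof=1)[0, 1] / np.var(market, ddof=1)
      systematic = beta**2 * np.var(market, ddof=1)          # systematic variance
      unsystematic = np.var(asset - beta * market, ddof=1)   # residual variance
      total = np.var(asset, ddof=1)

      # With the OLS beta the two components sum exactly to the total sample variance.
      print(beta, systematic, unsystematic, systematic + unsystematic, total)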

  3. The capital-asset-pricing model and arbitrage pricing theory: a unification.

    PubMed

    Ali Khan, M; Sun, Y

    1997-04-15

    We present a model of a financial market in which naive diversification, based simply on portfolio size and obtained as a consequence of the law of large numbers, is distinguished from efficient diversification, based on mean-variance analysis. This distinction yields a valuation formula involving only the essential risk embodied in an asset's return, where the overall risk can be decomposed into a systematic and an unsystematic part, as in the arbitrage pricing theory; and the systematic component further decomposed into an essential and an inessential part, as in the capital-asset-pricing model. The two theories are thus unified, and their individual asset-pricing formulas shown to be equivalent to the pervasive economic principle of no arbitrage. The factors in the model are endogenously chosen by a procedure analogous to the Karhunen-Loéve expansion of continuous time stochastic processes; it has an optimality property justifying the use of a relatively small number of them to describe the underlying correlational structures. Our idealized limit model is based on a continuum of assets indexed by a hyperfinite Loeb measure space, and it is asymptotically implementable in a setting with a large but finite number of assets. Because the difficulties in the formulation of the law of large numbers with a standard continuum of random variables are well known, the model uncovers some basic phenomena not amenable to classical methods, and whose approximate counterparts are not already, or even readily, apparent in the asymptotic setting.

  4. Flawed foundations of associationism? Comments on Machado and Silva (2007).

    PubMed

    Gallistel, C R

    2007-10-01

    A. Machado and F. J. Silva have spotted an important conceptual problem in scalar expectancy theory's account of the 2-standard-interval time-left experiment. C. R. Gallistel and J. Gibbon (2000) were aware of it but did not discuss it for historical and sociological reasons, owned up to in this article. A problem of broader significance for psychology, cognitive science, neuroscience, and the philosophy of mind concerns the closely related concepts of a trial and of temporal pairing, which are foundational in associative theories of learning and memory. Association formation is assumed to depend on the temporal pairing of the to-be-associated events. In modeling it, theorists have assumed continuous time to be decomposable into trials. But life is not composed of trials, and attempts to specify the conditions under which two events may be regarded as temporally paired have never succeeded. Thus, associative theories of learning and memory are built on conceptual sand. Undeterred, neuroscientists have defined the neurobiology-of-memory problem as the problem of determining the cellular and molecular mechanism of association formation, and connectionist modelers have made it a cornerstone of their efforts. More conceptual analysis is indeed needed. Copyright 2007 APA, all rights reserved.

  5. Guaranteed Discrete Energy Optimization on Large Protein Design Problems.

    PubMed

    Simoncini, David; Allouche, David; de Givry, Simon; Delmas, Céline; Barbe, Sophie; Schiex, Thomas

    2015-12-08

    In Computational Protein Design (CPD), assuming a rigid backbone and an amino-acid rotamer library, the problem of finding a sequence with an optimal conformation is NP-hard. In this paper, using Dunbrack's rotamer library and the Talaris2014 decomposable energy function, we use an exact deterministic method combining branch and bound, arc consistency, and tree decomposition to provably identify the global minimum-energy sequence-conformation on full-redesign problems, defining search spaces of size up to 10^234. This is achieved on a single core of a standard computing server, requiring a maximum of 66 GB of RAM. A variant of the algorithm is able to exhaustively enumerate all sequence-conformations within an energy threshold of the optimum. These proven optimal solutions are then used to evaluate the frequencies and amplitudes, in energy and sequence, at which an existing CPD-dedicated simulated annealing implementation may miss the optimum on these full-redesign problems. The probability of finding an optimum drops close to 0 very quickly. In the worst case, despite 1,000 repeats, the annealing algorithm remained more than 1 Rosetta unit away from the optimum, leading to design sequences that could differ from the optimal sequence by more than 30% of their amino acids.

  6. Knowledge-based approach to system integration

    NASA Technical Reports Server (NTRS)

    Blokland, W.; Krishnamurthy, C.; Biegl, C.; Sztipanovits, J.

    1988-01-01

    To solve complex problems one can often use the decomposition principle. However, a problem is seldom decomposable into completely independent subproblems. System integration deals with the problem of resolving the interdependencies and integrating the subsolutions. A natural method of decomposition is the hierarchical one: high-level specifications are broken down into lower level specifications until they can be transformed into solutions relatively easily. By automating the hierarchical decomposition and solution generation, an integrated system is obtained in which the declaration of high-level specifications is enough to solve the problem. We offer a knowledge-based approach to integrate the development and building of control systems. The process modeling is supported by graphic editors: the user selects and connects icons that represent subprocesses and may refer to prewritten programs. The graphical editor assists the user in selecting parameters for each subprocess and allows the testing of a specific configuration. Next, from the definitions created by the graphical editor, the actual control program is built. Fault-diagnosis routines are generated automatically as well. Since the user is not required to write program code and knowledge about the process is present in the development system, the user is not required to have expertise in many fields.

  7. Fast inverse scattering solutions using the distorted Born iterative method and the multilevel fast multipole algorithm

    PubMed Central

    Hesford, Andrew J.; Chew, Weng C.

    2010-01-01

    The distorted Born iterative method (DBIM) computes iterative solutions to nonlinear inverse scattering problems through successive linear approximations. By decomposing the scattered field into a superposition of scattering by an inhomogeneous background and by a material perturbation, large or high-contrast variations in medium properties can be imaged through iterations that are each subject to the distorted Born approximation. However, the need to repeatedly compute forward solutions still imposes a very heavy computational burden. To ameliorate this problem, the multilevel fast multipole algorithm (MLFMA) has been applied as a forward solver within the DBIM. The MLFMA computes forward solutions in linear time for volumetric scatterers. The typically regular distribution and shape of scattering elements in the inverse scattering problem allow the method to take advantage of data redundancy and reduce the computational demands of the normally expensive MLFMA setup. Additional benefits are gained by employing Kaczmarz-like iterations, where partial measurements are used to accelerate convergence. Numerical results demonstrate both the efficiency of the forward solver and the successful application of the inverse method to imaging problems with dimensions in the neighborhood of ten wavelengths. PMID:20707438

  8. PEROXIDE DESTRUCTION TESTING FOR THE 200 AREA EFFLUENT TREATMENT FACILITY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HALGREN DL

    2010-03-12

    The hydrogen peroxide decomposer columns at the 200 Area Effluent Treatment Facility (ETF) have been taken out of service due to ongoing problems with particulate fines and poor destruction performance from the granular activated carbon (GAC) used in the columns. A search for alternatives was initiated and led to bench-scale testing and then pilot-scale testing. Based on the bench-scale testing, three manganese dioxide based catalysts were evaluated in the peroxide destruction pilot column installed at the 300 Area Treated Effluent Disposal Facility. The ten-inch-diameter, nine-foot-tall, clear polyvinyl chloride (PVC) column allowed for the same six-foot catalyst bed depth as in the existing ETF system. The flow rate to the column was controlled to evaluate the performance at the same superficial velocity (gpm/ft²) as the full-scale design flow and normal process flow. Each catalyst was evaluated on peroxide destruction performance and on particulate fines capacity and carryover. Peroxide destruction was measured by hydrogen peroxide concentration analysis of samples taken before and after the column. The presence of fines in the column headspace and in the discharge from carryover was generally assessed by visual observation. All three catalysts met the peroxide destruction criteria by achieving hydrogen peroxide discharge concentrations of less than 0.5 mg/L at the design flow with inlet peroxide concentrations greater than 100 mg/L. The Sud-Chemie T-2525 catalyst was markedly better in the minimization of fines and particle carryover. It is anticipated that the T-2525 can be installed as a direct replacement for the GAC in the peroxide decomposer columns. Based on the results of the peroxide method development work, the recommendation is to purchase the T-2525 catalyst and initially load one of the ETF decomposer columns for full-scale testing.

  9. Vulnerability assessment of urban ecosystems driven by water resources, human health and atmospheric environment

    NASA Astrophysics Data System (ADS)

    Shen, Jing; Lu, Hongwei; Zhang, Yang; Song, Xinshuang; He, Li

    2016-05-01

    Ecosystem management is an urgent topic in the face of increasing population growth and resource depletion. This paper develops an urban ecosystem vulnerability assessment method representing a new vulnerability paradigm for decision makers and environmental managers; it acts as an early warning system to identify and prioritize undesirable environmental changes in terms of natural, human, economic and social elements. The overall idea is to decompose a complex problem into sub-problems, analyze each sub-problem, and then aggregate the sub-problem results to solve the original problem. The method integrates the spatial context of a Geographic Information System (GIS) tool, a multi-criteria decision analysis (MCDA) method, ordered weighted averaging (OWA) operators, and socio-economic elements. Decision makers can obtain urban ecosystem vulnerability assessment results corresponding to different attitudes toward vulnerability. To test the potential of the vulnerability methodology, it was applied to a case study area in Beijing, China, where it proved to be reliable and consistent with the Beijing City Master Plan. The results of the urban ecosystem vulnerability assessment can support decision makers in evaluating the necessity of taking specific measures to preserve human health and address environmental stressors for a city or multiple cities, while identifying the implications and consequences of their decisions.

  10. Model reduction method using variable-separation for stochastic saddle point problems

    NASA Astrophysics Data System (ADS)

    Jiang, Lijian; Li, Qiuqi

    2018-02-01

    In this paper, we consider a variable-separation (VS) method to solve stochastic saddle point (SSP) problems. The VS method is applied to obtain the solution in tensor product structure for stochastic partial differential equations (SPDEs) in a mixed formulation. The aim of such a technique is to construct a reduced basis approximation of the solution of the SSP problems. The VS method attempts to get a low-rank separated representation of the solution for SSP in a systematic enrichment manner. No iteration is performed at each enrichment step. In order to satisfy the inf-sup condition in the mixed formulation, we enrich the separated terms for the primal system variable at each enrichment step. For SSP problems treated by regularization or penalty, we propose a more efficient variable-separation method, i.e., variable-separation by penalty. This avoids further enrichment of the separated terms in the original mixed formulation. The computation of the variable-separation method decomposes into an offline phase and an online phase. A sparse low-rank tensor approximation method is used to significantly improve the online computation efficiency when the number of separated terms is large. For applications to SSP problems, we present three numerical examples to illustrate the performance of the proposed methods.

  11. Post-Optimality Analysis In Aerospace Vehicle Design

    NASA Technical Reports Server (NTRS)

    Braun, Robert D.; Kroo, Ilan M.; Gage, Peter J.

    1993-01-01

    This analysis pertains to the applicability of optimal sensitivity information to aerospace vehicle design. An optimal sensitivity (or post-optimality) analysis refers to computations performed once the initial optimization problem is solved. These computations may be used to characterize the design space about the present solution and infer changes in this solution as a result of constraint or parameter variations, without reoptimizing the entire system. The present analysis demonstrates that post-optimality information generated through first-order computations can be used to accurately predict the effect of constraint and parameter perturbations on the optimal solution. This assessment is based on the solution of an aircraft design problem in which the post-optimality estimates are shown to be within a few percent of the true solution over the practical range of constraint and parameter variations. Through solution of a reusable, single-stage-to-orbit, launch vehicle design problem, this optimal sensitivity information is also shown to improve the efficiency of the design process. For a hierarchically decomposed problem, this computational efficiency is realized by estimating the main-problem objective gradient through optimal sensitivity calculations. By reducing the need for finite differentiation of a re-optimized subproblem, a significant decrease in the number of objective function evaluations required to reach the optimal solution is obtained.
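
    The first-order post-optimality idea can be illustrated with LP duality (a hedged sketch using SciPy's linprog, not the aerospace design code of the paper; the toy LP and perturbation are made up, and the ineqlin.marginals attribute assumes the HiGHS-based solvers of SciPy 1.7 or later): after one solve, the constraint marginals predict the change in the optimal objective under a small right-hand-side perturbation without re-optimizing.

      import numpy as np
      from scipy.optimize import linprog

      # Toy resource-allocation LP (hypothetical numbers):
      #   minimize -3*x1 - 5*x2  subject to  x1 + 2*x2 <= 14,  3*x1 - x2 <= 0,  x >= 0
      c = np.array([-3.0, -5.0])
      A_ub = np.array([[1.0, 2.0], [3.0, -1.0]])
      b_ub = np.array([14.0, 0.0])
      res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")

      # First-order post-optimality estimate: the constraint marginals (dual values)
      # predict f*(b + db) ≈ f*(b) + marginals @ db without re-solving the LP.
      db = np.array([0.5, 0.0])
      estimate = res.fun + res.ineqlin.marginals @ db

      res2 = linprog(c, A_ub=A_ub, b_ub=b_ub + db, bounds=[(0, None)] * 2, method="highs")
      print(estimate, res2.fun)   # the estimate matches the re-optimized value here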

  12. Scalable Domain Decomposed Monte Carlo Particle Transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Brien, Matthew Joseph

    2013-12-05

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.

  13. Decomposers and the fire cycle in a phryganic (East Mediterranean) ecosystem.

    PubMed

    Arianoutsou-Faraggitaki, M; Margaris, N S

    1982-06-01

    Dehydrogenase activity, cellulose decomposition, nitrification, and CO2 release were measured for 2 years to estimate the effects of a wildfire on a phryganic ecosystem. In the decomposers' subsystem, we found that fire mainly affected the nitrification process during the whole period, and soil respiration during the second post-fire year, when compared with the control site. Our data indicate that after 3-4 months the activity of microbial decomposers is almost the same at the two sites, suggesting that fire is not a catastrophic event but a simple perturbation common to Mediterranean-type ecosystems.

  14. Numerical analysis on effect of aspect ratio of planar solid oxide fuel cell fueled with decomposed ammonia

    NASA Astrophysics Data System (ADS)

    Tan, Wee Choon; Iwai, Hiroshi; Kishimoto, Masashi; Brus, Grzegorz; Szmyd, Janusz S.; Yoshida, Hideo

    2018-04-01

    Planar solid oxide fuel cells (SOFCs) with decomposed ammonia are numerically studied to investigate the effect of the cell aspect ratio. The ammonia decomposer is assumed to be located next to the SOFCs, and the heat required for the endothermic decomposition reaction is supplied by the thermal radiation from the SOFCs. Cells with aspect ratios (ratios of the streamwise length to the spanwise width) between 0.130 and 7.68 are provided with the reactants at a constant mass flow rate. A parametric study is conducted by varying the cell temperature and fuel utility factor to investigate their effects on the cell performance in terms of the voltage efficiency. The effect of the heat supply to the ammonia decomposer is also studied. The developed model shows good agreement, in terms of the current-voltage curve, with the experimental data obtained from a short stack without parameter tuning. The simulation study reveals that the cell with the highest aspect ratio achieves the highest performance under furnace operation. On the other hand, the 0.750 aspect ratio cell with the highest voltage efficiency of 0.67 is capable of thermally sustaining the ammonia decomposers at a fuel utility of 0.80 using the thermal radiation from both sidewalls.

  15. C, N and P fertilization in an Amazonian rainforest supports stoichiometric dissimilarity as a driver of litter diversity effects on decomposition

    PubMed Central

    Barantal, Sandra; Schimann, Heidy; Fromin, Nathalie; Hättenschwiler, Stephan

    2014-01-01

    Plant leaf litter generally decomposes faster as a group of different species than when individual species decompose alone, but underlying mechanisms of these diversity effects remain poorly understood. Because resource C : N : P stoichiometry (i.e. the ratios of these key elements) exhibits strong control on consumers, we supposed that stoichiometric dissimilarity of litter mixtures (i.e. the divergence in C : N : P ratios among species) improves resource complementarity to decomposers leading to faster mixture decomposition. We tested this hypothesis with: (i) a wide range of leaf litter mixtures of neotropical tree species varying in C : N : P dissimilarity, and (ii) a nutrient addition experiment (C, N and P) to create stoichiometric similarity. Litter mixtures decomposed in the field using two different types of litterbags allowing or preventing access to soil fauna. Litter mixture mass loss was higher than expected from species decomposing singly, especially in presence of soil fauna. With fauna, synergistic litter mixture effects increased with increasing stoichiometric dissimilarity of litter mixtures and this positive relationship disappeared with fertilizer addition. Our results indicate that litter stoichiometric dissimilarity drives mixture effects via the nutritional requirements of soil fauna. Incorporating ecological stoichiometry in biodiversity research allows refinement of the underlying mechanisms of how changing biodiversity affects ecosystem functioning. PMID:25320173

  16. Decomposition by ectomycorrhizal fungi alters soil carbon storage in a simulation model

    DOE PAGES

    Moore, J. A. M.; Jiang, J.; Post, W. M.; ...

    2015-03-06

    Carbon cycle models often lack explicit belowground organism activity, yet belowground organisms regulate carbon storage and release in soil. Ectomycorrhizal fungi are important players in the carbon cycle because they are a conduit into soil for carbon assimilated by the plant. It is hypothesized that ectomycorrhizal fungi can also be active decomposers when plant carbon allocation to fungi is low. Here, we reviewed the literature on ectomycorrhizal decomposition and we developed a simulation model of the plant-mycorrhizae interaction where a reduction in plant productivity stimulates ectomycorrhizal fungi to decompose soil organic matter. Our review highlights evidence demonstrating the potential for ectomycorrhizal fungi to decompose soil organic matter. Our model output suggests that ectomycorrhizal activity accounts for a portion of carbon decomposed in soil, but this portion varied with plant productivity and the mycorrhizal carbon uptake strategy simulated. Lower organic matter inputs to soil were largely responsible for reduced soil carbon storage. Using mathematical theory, we demonstrated that biotic interactions affect predictions of ecosystem functions. Specifically, we developed a simple function to model the mycorrhizal switch in function from plant symbiont to decomposer. In conclusion, we show that including mycorrhizal fungi with the flexibility of mutualistic and saprotrophic lifestyles alters predictions of ecosystem function.

  17. A Randomized Trial of Brief Interventions for Problem and Pathological Gamblers

    ERIC Educational Resources Information Center

    Petry, Nancy M.; Weinstock, Jeremiah; Ledgerwood, David M.; Morasco, Benjamin

    2008-01-01

    Limited research exists regarding methods for reducing problem gambling. Problem gamblers (N = 180) were randomly assigned to assessment only control, 10 min of brief advice, 1 session of motivational enhancement therapy (MET), or 1 session of MET plus 3 sessions of cognitive-behavioral therapy. Gambling was assessed at baseline, at 6 weeks, and…

  18. A Model for Predicting Behavioural Sleep Problems in a Random Sample of Australian Pre-Schoolers

    ERIC Educational Resources Information Center

    Hall, Wendy A.; Zubrick, Stephen R.; Silburn, Sven R.; Parsons, Deborah E.; Kurinczuk, Jennifer J.

    2007-01-01

    Behavioural sleep problems (childhood insomnias) can cause distress for both parents and children. This paper reports a model describing predictors of high sleep problem scores in a representative population-based random sample survey of non-Aboriginal singleton children born in 1995 and 1996 (1085 girls and 1129 boys) in Western Australia.…

  19. Random Walk Method for Potential Problems

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, T.; Raju, I. S.

    2002-01-01

    A local Random Walk Method (RWM) for potential problems governed by Laplace's and Poisson's equations is developed for two- and three-dimensional problems. The RWM is implemented and demonstrated in a multiprocessor parallel environment on a Beowulf cluster of computers. A speed gain of 16 is achieved as the number of processors is increased from 1 to 23.
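
    For readers unfamiliar with random-walk solvers, the hedged sketch below estimates the solution of Laplace's equation at a single interior point of the unit square by releasing lattice random walks and averaging the boundary values where they exit; it illustrates the general idea only, not the local RWM or its Beowulf parallelization described in the report.

      import numpy as np

      def laplace_random_walk(x, y, boundary, h=0.05, n_walks=5000, rng=None):
          # Estimate u(x, y) for Laplace's equation on the unit square by lattice
          # random walks: u(x, y) ≈ average boundary value at the walks' exit points.
          if rng is None:
              rng = np.random.default_rng()
          steps = [(h, 0.0), (-h, 0.0), (0.0, h), (0.0, -h)]
          total = 0.0
          for _ in range(n_walks):
              px, py = x, y
              while 0.0 < px < 1.0 and 0.0 < py < 1.0:
                  dx, dy = steps[rng.integers(4)]
                  px, py = px + dx, py + dy
              total += boundary(px, py)
          return total / n_walks

      # Dirichlet data: u = 1 on the top edge (y >= 1), u = 0 on the other edges.
      boundary = lambda px, py: 1.0 if py >= 1.0 else 0.0
      print(laplace_random_walk(0.5, 0.5, boundary))   # exact value at the centre is 0.25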

  20. Life skills, mathematical reasoning and critical thinking: a curriculum for the prevention of problem gambling.

    PubMed

    Turner, Nigel E; Macdonald, John; Somerset, Matthew

    2008-09-01

    Previous studies have shown that youth are two to three times more likely than adults to report gambling-related problems. This paper reports on the development and pilot evaluation of a school-based problem gambling prevention curriculum. The prevention program focused on problem gambling awareness and self-monitoring skills, coping skills, and knowledge of the nature of random events. The results of a controlled experiment evaluating the students' learning from the program are reported. We found significant improvement in the students' knowledge of random events, knowledge of problem gambling awareness and self-monitoring, and knowledge of coping skills. The results suggest that knowledge-based material on random events, problem gambling awareness and self-monitoring skills, and coping skills can be taught. Future development of the curriculum will focus on content to expand the students' coping skill options.

  1. An adaptive evolutionary multi-objective approach based on simulated annealing.

    PubMed

    Li, H; Landa-Silva, D

    2011-01-01

    In some multi-objective metaheuristic algorithms, a multi-objective optimization problem is solved by decomposing it into one or more single-objective subproblems. Each subproblem corresponds to one weighted aggregation function. For example, MOEA/D is an evolutionary multi-objective optimization (EMO) algorithm that attempts to optimize multiple subproblems simultaneously by evolving a population of solutions. However, the performance of MOEA/D highly depends on the initial setting and diversity of the weight vectors. In this paper, we present an improved version of MOEA/D, called EMOSA, which incorporates an advanced local search technique (simulated annealing) and adapts the search directions (weight vectors) corresponding to the various subproblems. In EMOSA, the weight vector of each subproblem is adaptively modified at the lowest temperature in order to diversify the search toward the unexplored parts of the Pareto-optimal front. Our computational results show that EMOSA outperforms six other well-established multi-objective metaheuristic algorithms on both the (constrained) multi-objective knapsack problem and the (unconstrained) multi-objective traveling salesman problem. Moreover, the effects of the main algorithmic components and parameter sensitivities on the search performance of EMOSA are experimentally investigated.
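
    The decomposition into single-objective subproblems that MOEA/D and EMOSA rely on can be shown with a weighted Tchebycheff scalarization; the sketch below covers only this decomposition step (not EMOSA's annealing or weight adaptation), and the toy objective values are made up.

      import numpy as np

      def tchebycheff(f, w, ideal):
          # Weighted Tchebycheff scalarization: one single-objective subproblem per weight vector.
          return np.max(w * np.abs(f - ideal))

      # Toy bi-objective population (hypothetical objective values to be minimized).
      rng = np.random.default_rng(5)
      population_objs = rng.random((50, 2))
      ideal = population_objs.min(axis=0)              # componentwise best values seen so far

      # Evenly spread weight vectors define the subproblems; each subproblem keeps the
      # population member that minimizes its scalarized value.
      w1 = np.linspace(0.01, 0.99, 10)
      weights = np.column_stack([w1, 1.0 - w1])
      best_per_subproblem = [int(np.argmin([tchebycheff(f, w, ideal) for f in population_objs]))
                             for w in weights]
      print(best_per_subproblem)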

  2. Economic Inequality in Presenting Vision in Shahroud, Iran: Two Decomposition Methods

    PubMed Central

    Mansouri, Asieh; Emamian, Mohammad Hassan; Zeraati, Hojjat; Hashemi, Hasan; Fotouhi, Akbar

    2018-01-01

    Background: Visual acuity, like many other health-related problems, does not have an equal distribution in terms of socio-economic factors. We conducted this study to estimate and decompose economic inequality in presenting visual acuity using two methods and to compare their results in a population aged 40-64 years in Shahroud, Iran. Methods: The data of 5188 participants in the first phase of the Shahroud Cohort Eye Study, performed in 2009, were used for this study. Our outcome variable was presenting visual acuity (PVA), measured using LogMAR (logarithm of the minimum angle of resolution). The living-standard variable used for the estimation of inequality was economic status, constructed by principal component analysis on home assets. The inequality indices were the concentration index and the gap between the low and high economic groups. We decomposed these indices by the concentration index and Blinder-Oaxaca decomposition approaches, respectively, and compared the results. Results: The concentration index of PVA was -0.245 (95% CI: -0.278, -0.212). The PVA gap between the groups with high and low economic status was 0.0705, in favor of the high economic group. Education, economic status, and age were the most important contributors to inequality in both the concentration index and the Blinder-Oaxaca decomposition. The percent contributions of these three factors in the concentration index and the Blinder-Oaxaca decomposition were 41.1% vs. 43.4%, 25.4% vs. 19.1%, and 15.2% vs. 16.2%, respectively. Other factors, including gender, marital status, employment status and diabetes, had minor contributions. Conclusion: This study showed that individuals with poorer visual acuity were more concentrated among people with a lower economic status. The main contributors to this inequality were similar for the concentration index and the Blinder-Oaxaca decomposition. Setting appropriate interventions to promote literacy and income levels in people with low economic status, formulating policies to address economic problems in the elderly, and paying more attention to their vision problems can therefore help to alleviate economic inequality in visual acuity. PMID:29325403
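
    The concentration index has a standard covariance formula, C = 2·cov(y, r)/mean(y), with r the fractional rank by economic status; the hedged sketch below applies it to synthetic data (not the Shahroud cohort), so the numbers are illustrative only.

      import numpy as np

      def concentration_index(y, economic_status):
          # C = 2 * cov(y, r) / mean(y), with r the fractional rank by economic status;
          # a negative C means the outcome y is concentrated among the poor.
          n = len(y)
          order = np.argsort(economic_status)
          r = np.empty(n)
          r[order] = (np.arange(1, n + 1) - 0.5) / n    # fractional ranks
          return 2.0 * np.cov(y, r, ddof=0)[0, 1] / np.mean(y)

      # Synthetic example: worse (higher) visual-acuity scores among poorer people.
      rng = np.random.default_rng(6)
      wealth = rng.normal(size=2000)
      logmar = 0.3 - 0.1 * wealth + rng.normal(scale=0.1, size=2000)
      print(concentration_index(logmar, wealth))        # negative, as in the study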

  3. Slow decomposition of lower order roots: a key mechanism of root carbon and nutrient retention in the soil.

    PubMed

    Fan, Pingping; Guo, Dali

    2010-06-01

    Among tree fine roots, the distal small-diameter lateral branches comprising first- and second-order roots lack secondary (wood) development. Therefore, these roots are expected to decompose more rapidly than higher order woody roots. But this prediction has not been tested and may not be correct. Current evidence suggests that lower order roots may decompose more slowly than higher order roots in tree species associated with ectomycorrhizal (EM) fungi because they are preferentially colonized by fungi and encased by a fungal sheath rich in chitin (a recalcitrant compound). In trees associated with arbuscular mycorrhizal (AM) fungi, lower order roots do not form fungal sheaths, but they may have poorer C quality, e.g. lower concentrations of soluble carbohydrates and higher concentrations of acid-insolubles than higher order roots, and thus may decompose more slowly. In addition, litter with high concentrations of acid insolubles decomposes more slowly under higher N concentrations (such as in lower order roots). Therefore, we propose that in both AM and EM trees, lower order roots decompose more slowly than higher order roots due to the combination of poor C quality and high N concentrations. To test this hypothesis, we examined decomposition of the first six root orders in Fraxinus mandshurica (an AM species) and Larix gmelinii (an EM species) using the litterbag method in northeastern China. We found that lower order roots of both species decomposed more slowly than higher order roots, and this pattern appears to be associated mainly with initial C quality and N concentrations. Because these lower order roots have short life spans and thus dominate root mortality, their slow decomposition implies that a substantial fraction of the stable soil organic matter pool is derived from these lower order roots, at least in the two species we studied.

  4. Local polynomial chaos expansion for linear differential equations with high dimensional random inputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yi; Jakeman, John; Gittelson, Claude

    2015-01-08

    In this paper we present a localized polynomial chaos expansion for partial differential equations (PDE) with random inputs. In particular, we focus on time independent linear stochastic problems with high dimensional random inputs, where the traditional polynomial chaos methods, and most of the existing methods, incur prohibitively high simulation cost. Furthermore, the local polynomial chaos method employs a domain decomposition technique to approximate the stochastic solution locally. In each subdomain, a subdomain problem is solved independently and, more importantly, in a much lower dimensional random space. In a postprocessing stage, accurate samples of the original stochastic problems are obtained from the samples of the local solutions by enforcing the correct stochastic structure of the random inputs and the coupling conditions at the interfaces of the subdomains. Overall, the method is able to solve stochastic PDEs in very large dimensions by solving a collection of low dimensional local problems and can be highly efficient. In our paper we present the general mathematical framework of the methodology and use numerical examples to demonstrate the properties of the method.

  5. The median problems on linear multichromosomal genomes: graph representation and fast exact solutions.

    PubMed

    Xu, Andrew Wei

    2010-09-01

    In genome rearrangement, given a set of genomes G and a distance measure d, the median problem asks for another genome q that minimizes the total distance ∑_{g∈G} d(q, g). This is a key problem in genome rearrangement based phylogenetic analysis. Although this problem is known to be NP-hard, we have shown in a previous article, on circular genomes and under the DCJ distance measure, that a family of patterns in the given genomes--represented by adequate subgraphs--allows us to rapidly find exact solutions to the median problem in a decomposition approach. In this article, we extend this result to the case of linear multichromosomal genomes, in order to solve more interesting problems on eukaryotic nuclear genomes. A multi-way capping problem in the linear multichromosomal case imposes an extra computational challenge on top of the difficulty in the circular case; this difficulty was underestimated in our previous study and is addressed in this article. We represent the median problem by the capped multiple breakpoint graph, extend the adequate subgraphs into the capped adequate subgraphs, and prove optimality-preserving decomposition theorems, which give us the tools to solve the median problem and the multi-way capping optimization problem together. We also develop an exact algorithm ASMedian-linear, which iteratively detects instances of (capped) adequate subgraphs and decomposes problems into subproblems. Tested on simulated data, ASMedian-linear can rapidly solve most problems with up to several thousand genes, and it also can provide optimal or near-optimal solutions to the median problem under the reversal/HP distance measures. ASMedian-linear is available at http://sites.google.com/site/andrewweixu.

  6. Randomized Controlled Trial of Problem-Solving Therapy for Minor Depression in Home Care

    ERIC Educational Resources Information Center

    Gellis, Zvi D.; McGinty, Jean; Tierney, Lynda; Jordan, Cindy; Burton, Jean; Misener, Elizabeth

    2008-01-01

    Objective: Data are presented from a pilot research program initiated to develop, refine, and test the outcomes of problem-solving therapy that targets the needs of older adults with minor depression in home care settings. Method: A pilot randomized clinical trial compares the impact of problem-solving therapy for home care to treatment as usual…

  7. Reducing Conduct Problems among Children Exposed to Intimate Partner Violence: A Randomized Clinical Trial Examining Effects of Project Support

    ERIC Educational Resources Information Center

    Jouriles, Ernest N.; McDonald, Renee; Rosenfield, David; Stephens, Nanette; Corbitt-Shindler, Deborah; Miller, Pamela C.

    2009-01-01

    This study was a randomized clinical trial of Project Support, an intervention designed to reduce conduct problems among children exposed to intimate partner violence. Participants were 66 families (mothers and children) with at least 1 child exhibiting clinical levels of conduct problems. Families were recruited from domestic violence shelters.…

  8. Network problem threshold

    NASA Technical Reports Server (NTRS)

    Gejji, Raghvendra, R.

    1992-01-01

    Network transmission errors such as collisions, CRC errors, misalignment, etc. are statistical in nature. Although errors can vary randomly, a high level of errors does indicate specific network problems, e.g. equipment failure. In this project, we have studied the random nature of collisions theoretically as well as by gathering statistics, and established a numerical threshold above which a network problem is indicated with high probability.
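
    A hedged sketch of how such a threshold might be set (not the project's actual statistics or model): if collisions in a healthy monitoring window are roughly Poisson with a known baseline rate, the alarm count can be chosen so that the false-alarm probability stays below a target.

      from scipy.stats import poisson

      def collision_alarm_threshold(baseline_rate, window_s, false_alarm_prob=1e-3):
          # Smallest count k such that P[Poisson(rate * window) >= k] <= false_alarm_prob;
          # counts at or above k in a window then indicate a problem with high probability.
          mean_count = baseline_rate * window_s
          k = 0
          while poisson.sf(k - 1, mean_count) > false_alarm_prob:   # sf(k-1) = P[X >= k]
              k += 1
          return k

      # Example: a healthy segment averaging 2 collisions/second, monitored in 10 s windows.
      print(collision_alarm_threshold(2.0, 10.0))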

  9. Randomized Trial of Treatment for Children with Sexual Behavior Problems: Ten-Year Follow-Up

    ERIC Educational Resources Information Center

    Carpentier, Melissa Y.; Silovsky, Jane F.; Chaffin, Mark

    2006-01-01

    This study prospectively follows 135 children 5-12 years of age with sexual behavior problems from a randomized trial comparing a 12-session group cognitive-behavioral therapy (CBT) with group play therapy and follows 156 general clinic children with nonsexual behavior problems. Ten-year follow-up data on future juvenile and adult arrests and…

  10. The FPase properties and morphology changes of a cellulolytic bacterium, Sporocytophaga sp. JL-01, on decomposing filter paper cellulose.

    PubMed

    Wang, Xiuran; Peng, Zhongqi; Sun, Xiaoling; Liu, Dongbo; Chen, Shan; Li, Fan; Xia, Hongmei; Lu, Tiancheng

    2012-01-01

    Sporocytophaga sp. JL-01 is a gliding, cellulose-degrading bacterium that can decompose filter paper (FP), carboxymethyl cellulose (CMC) and cellulose CF11. In this paper, the morphological characteristics of S. sp. JL-01 growing in FP liquid medium were studied by Scanning Electron Microscope (SEM), and one of the FPase components of this bacterium was analyzed. The results showed that the cell shapes were variable during the process of filter paper cellulose decomposition and that the rod shape might be connected with filter paper decomposition. After incubating for 120 h, the filter paper was decomposed significantly, and it was degraded completely within 144 h. FPase1 was purified from the supernatant and its characteristics were analyzed. The molecular weight of FPase1 was 55 kDa. The optimum pH was 7.2 and the optimum temperature was 50°C under the experimental conditions. Zn²⁺ and Co²⁺ enhanced the enzyme activity, but Fe³⁺ inhibited it.

  11. Recourse-based facility-location problems in hybrid uncertain environment.

    PubMed

    Wang, Shuming; Watada, Junzo; Pedrycz, Witold

    2010-08-01

    The objective of this paper is to study facility-location problems in the presence of a hybrid uncertain environment involving both randomness and fuzziness. A two-stage fuzzy-random facility-location model with recourse (FR-FLMR) is developed in which both the demands and costs are assumed to be fuzzy-random variables. The bounds of the optimal objective value of the two-stage FR-FLMR are derived. As, in general, the fuzzy-random parameters of the FR-FLMR can be regarded as continuous fuzzy-random variables with an infinite number of realizations, the computation of the recourse requires solving infinitely many second-stage programming problems. Owing to this requirement, the recourse function cannot be determined analytically, and, hence, the model cannot benefit from the techniques of classical mathematical programming. In order to solve location problems of this nature, we first develop a fuzzy-random simulation technique to compute the recourse function. The convergence of such simulation scenarios is discussed. In the sequel, we propose a hybrid mutation-based binary ant-colony optimization (MBACO) approach to the two-stage FR-FLMR, which comprises the fuzzy-random simulation and the simplex algorithm. A numerical experiment illustrates the application of the hybrid MBACO algorithm. The comparison shows that the hybrid MBACO finds better solutions than those obtained with other discrete metaheuristic algorithms, such as binary particle-swarm optimization, the genetic algorithm, and tabu search.

  12. A random utility based estimation framework for the household activity pattern problem.

    DOT National Transportation Integrated Search

    2016-06-01

    This paper develops a random utility based estimation framework for the Household Activity Pattern Problem (HAPP). Based on the realization that the output of complex activity-travel decisions forms a continuous pattern in the space-time dimension, the es...

  13. Efficient co-conversion process of chicken manure into protein feed and organic fertilizer by Hermetia illucens L. (Diptera: Stratiomyidae) larvae and functional bacteria.

    PubMed

    Xiao, Xiaopeng; Mazza, Lorenzo; Yu, Yongqiang; Cai, Minmin; Zheng, Longyu; Tomberlin, Jeffery K; Yu, Jeffrey; van Huis, Arnold; Yu, Ziniu; Fasulo, Salvatore; Zhang, Jibin

    2018-07-01

    A chicken manure management process was carried out through co-conversion by Hermetia illucens L. larvae (BSFL) with functional bacteria to produce larvae as feedstuff and organic fertilizer. Thirteen days of co-conversion of 1000 kg of chicken manure inoculated with one million 6-day-old BSFL and 10^9 CFU Bacillus subtilis BSF-CL produced aging larvae, followed by eleven days of aerobic fermentation inoculated with the decomposing agent to reach maturity. 93.2 kg of fresh larvae were harvested from the B. subtilis BSF-CL-inoculated group, while the control group yielded only 80.4 kg of fresh larvae. The chicken manure reduction rate of the B. subtilis BSF-CL-inoculated group was 40.5%, while that of the control group was 35.8%. The weight of BSFL increased by 15.9%, the BSFL conversion rate increased by 12.7%, and the chicken manure reduction rate increased by 13.4% compared to the control (no B. subtilis BSF-CL). The residue inoculated with the decomposing agent had higher maturity (germination index >92%) than the no-decomposing-agent group (germination index ∼86%). The activity patterns of different enzymes further indicated that its product was more mature and stable than that of the no-decomposing-agent group. Physical and chemical production parameters showed that the residue inoculated with the decomposing agent was more suitable for organic fertilizer than that of the no-decomposing-agent group. Together, the co-conversion of chicken manure by BSFL with its synergistic bacteria and the aerobic fermentation with the decomposing agent required only 24 days. The results demonstrate that this co-conversion process can shorten the processing time of chicken manure compared to the traditional compost process. Gut bacteria can enhance manure conversion and manure reduction. We established an efficient manure co-conversion process using black soldier fly larvae and bacteria and harvested high value-added larval mass and biofertilizer. Copyright © 2018 Elsevier Ltd. All rights reserved.

  14. Community structure and estimated contribution of primary consumers (Nematodes and Copepods) of decomposing plant litter (Juncus roemerianus and Rhizophora mangle) in South Florida

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fell, J.W.; Cefalu, R.

    1984-01-01

    The paper discusses the meiofauna associated with decomposing leaf litter from two species of coastal marshland plants: the black needle rush, Juncus roemerianus and the red mangrove, Rhizophora mangle. The following aspects were investigated: (1) types of meiofauna present, especially nematodes; (2) changes in meiofaunal community structures with regard to season, station location, and type of plant litter; (3) amount of nematode and copepod biomass present on the decomposing plant litter; and (4) an estimation of the possible role of the nematodes in the decomposition process. 28 references, 5 figures, 9 tables. (ACR)

  15. Catalytic cartridge SO₃ decomposer

    DOEpatents

    Galloway, Terry R.

    1982-01-01

    A catalytic cartridge internally heated is utilized as an SO₃ decomposer for thermochemical hydrogen production. The cartridge has two embodiments, a cross-flow cartridge and an axial flow cartridge. In the cross-flow cartridge, SO₃ gas is flowed through a chamber and incident normally to a catalyst coated tube extending through the chamber, the catalyst coated tube being internally heated. In the axial-flow cartridge, SO₃ gas is flowed through the annular space between concentric inner and outer cylindrical walls, the inner cylindrical wall being coated by a catalyst and being internally heated. The modular cartridge decomposer provides high thermal efficiency, high conversion efficiency, and increased safety.

  16. A discrimination-association model for decomposing component processes of the implicit association test.

    PubMed

    Stefanutti, Luca; Robusto, Egidio; Vianello, Michelangelo; Anselmi, Pasquale

    2013-06-01

    A formal model is proposed that decomposes the implicit association test (IAT) effect into three process components: stimuli discrimination, automatic association, and termination criterion. Both response accuracy and reaction time are considered. Four independent and parallel Poisson processes, one for each of the four label categories of the IAT, are assumed. The model parameters are the rate at which information accrues on the counter of each process and the amount of information that is needed before a response is given. The aim of this study is to present the model and an illustrative application in which the process components of a Coca-Pepsi IAT are decomposed.
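
    A minimal simulation sketch of such a race between independent Poisson counters follows (the rates and thresholds are hypothetical, and the accuracy component of the full model is omitted): the time for a counter to reach its criterion is Erlang distributed, and the first counter to finish yields the response and the reaction time.

        import numpy as np

        def simulate_trial(rates, thresholds, rng):
            # Time for counter i to accumulate thresholds[i] events at rate rates[i]
            # is Gamma(thresholds[i], 1/rates[i]); the fastest counter responds.
            finish = rng.gamma(shape=np.asarray(thresholds, dtype=float),
                               scale=1.0 / np.asarray(rates, dtype=float))
            winner = int(np.argmin(finish))
            return winner, float(finish[winner])

        rng = np.random.default_rng(0)
        rates = [3.0, 2.0, 2.5, 1.5]   # hypothetical accrual rates, one per label category
        thresholds = [5, 5, 5, 5]      # hypothetical termination criteria (counts needed)
        trials = [simulate_trial(rates, thresholds, rng) for _ in range(1000)]
        responses, rts = zip(*trials)
        print(np.bincount(np.asarray(responses), minlength=4) / 1000, np.mean(rts))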

  17. Thermally Regenerative Battery with Intercalatable Electrodes and Selective Heating Means

    NASA Technical Reports Server (NTRS)

    Sharma, Pramod K. (Inventor); Narayanan, Sekharipuram R. (Inventor); Hickey, Gregory S. (Inventor)

    2000-01-01

    The battery contains at least one electrode, such as graphite, that intercalates a first species, such as bromine, from the electrolyte disposed in a first compartment to form a thermally decomposable complex during discharge. The other electrode can also be graphite, which supplies another species, such as lithium, to the electrolyte in a second electrode compartment. The thermally decomposable complex is stable at room temperature but decomposes at elevated temperatures, such as 50 °C to 150 °C. The electrode compartments are separated by a selective ion-permeable membrane that is impermeable to the first species. Charging is effected by selectively heating the first electrode.

  18. Draft Genome Sequence of the Lignocellulose Decomposer Thermobifida fusca Strain TM51.

    PubMed

    Tóth, Akos; Barna, Terézia; Nagy, István; Horváth, Balázs; Nagy, István; Táncsics, András; Kriszt, Balázs; Baka, Erzsébet; Fekete, Csaba; Kukolya, József

    2013-07-11

    Here, we present the complete genome sequence of Thermobifida fusca strain TM51, which was isolated from the hot upper layer of a compost pile in Hungary. T. fusca TM51 is a thermotolerant, aerobic actinomycete with outstanding lignocellulose-decomposing activity.

  19. Ecosystem and decomposer effects on litter dynamics along an old field to old-growth forest successional gradient

    EPA Science Inventory

    Identifying the biotic (e.g. decomposers, vegetation) and abiotic (e.g. temperature, moisture) mechanisms controlling litter decomposition is key to understanding ecosystem function, especially where variation in ecosystem structure due to successional processes may alter the str...

  20. [Water-holding characteristics and accumulation amount of the litters under main forest types in Xinglong Mountain of Gansu, Northwest China].

    PubMed

    Wei, Qiang; Ling, Lei; Zhang, Guang-zhong; Yan, Pei-bin; Tao, Ji-xin; Chai, Chun-shan; Xue, Rui

    2011-10-01

    By the methods of field survey and laboratory soaking extraction, an investigation was conducted on the accumulation amount, water-holding capacity, water-holding rate, and water-absorption rate of the litters under six main forest types (Picea wilsonii forest, P. wilsonii - Betula platyphylla forest, Populus davidiana - B. platyphylla forest, Cotoneaster multiflorus - Rosa xanthina shrubs, Pinus tabulaeformis forest, and Larix principis-rupprechtii forest) in Xinglong Mountain of Gansu. The accumulation amount of the litters under the forests was 13.40-46.32 t hm(-2), in the order P. tabulaeformis forest > P. wilsonii - B. platyphylla forest > L. principis-rupprechtii forest > P. wilsonii forest > C. multiflorus - R. xanthina shrubs > P. davidiana - B. platyphylla forest. The litter storage of coniferous forests was greater than that of broadleaved forests, and in all forests the storage percentage of semi-decomposed litter was higher than that of un-decomposed litter. The maximum water-holding rate of the litters was 185.5%-303.6%, being highest for the L. principis-rupprechtii forest and lowest for the P. tabulaeformis forest. The litters' water-holding capacity changed logarithmically with soaking time. For coniferous forests, un-decomposed litter had a lower water-holding rate than semi-decomposed litter, whereas for broadleaved forests the opposite was true. The maximum water-holding capacity of the litters varied from 3.94 mm to 8.59 mm, in the order P. tabulaeformis forest > L. principis-rupprechtii forest > P. wilsonii - B. platyphylla forest > P. wilsonii forest > C. multiflorus - R. xanthina shrubs > P. davidiana - B. platyphylla forest. The water-holding capacity also changed logarithmically with immersion time, and semi-decomposed litter had a larger water-holding capacity than un-decomposed litter. The water-absorption rate of the litters followed a power function of immersion time: within the first hour of immersion the absorption rate declined approximately linearly, and thereafter it became smaller and changed only slowly across the later immersion stages. Semi-decomposed litter had a higher water-absorption rate than un-decomposed litter. The effective retaining amount (depth) of the litters was in the order P. wilsonii - B. platyphylla forest (5.97 mm) > P. tabulaeformis forest (5.59 mm) > L. principis-rupprechtii forest (5.46 mm) > P. wilsonii forest (4.30 mm) > C. multiflorus - R. xanthina shrubs (3.03 mm) > P. davidiana - B. platyphylla forest (2.13 mm).
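
    The logarithmic and power-law relationships reported above can be recovered by simple curve fitting; a minimal sketch, using hypothetical soaking-time measurements rather than the study's data:

        import numpy as np
        from scipy.optimize import curve_fit

        # Hypothetical soaking times (h) and cumulative water uptake (mm); the
        # study's actual measurements are not reproduced in the abstract.
        t = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 24.0])
        W = np.array([2.1, 2.9, 3.6, 4.3, 4.9, 5.4, 6.1])

        log_model = lambda t, a, b: a * np.log(t) + b   # water-holding capacity vs. time
        pow_model = lambda t, k, m: k * t ** m          # water-absorption rate vs. time

        (a, b), _ = curve_fit(log_model, t, W)
        rate = np.gradient(W, t)                        # crude absorption-rate estimate
        (k, m), _ = curve_fit(pow_model, t, rate, p0=(1.0, -0.5))
        print(f"capacity ~ {a:.2f}*ln(t) + {b:.2f};  rate ~ {k:.2f}*t^{m:.2f}")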

  1. C, N and P fertilization in an Amazonian rainforest supports stoichiometric dissimilarity as a driver of litter diversity effects on decomposition.

    PubMed

    Barantal, Sandra; Schimann, Heidy; Fromin, Nathalie; Hättenschwiler, Stephan

    2014-12-07

    Plant leaf litter generally decomposes faster as a group of different species than when individual species decompose alone, but the underlying mechanisms of these diversity effects remain poorly understood. Because resource C : N : P stoichiometry (i.e. the ratios of these key elements) exerts strong control on consumers, we hypothesized that the stoichiometric dissimilarity of litter mixtures (i.e. the divergence in C : N : P ratios among species) improves resource complementarity to decomposers, leading to faster mixture decomposition. We tested this hypothesis with: (i) a wide range of leaf litter mixtures of neotropical tree species varying in C : N : P dissimilarity, and (ii) a nutrient addition experiment (C, N and P) to create stoichiometric similarity. Litter mixtures were decomposed in the field using two different types of litterbags, allowing or preventing access to soil fauna. Litter mixture mass loss was higher than expected from the species decomposing singly, especially in the presence of soil fauna. With fauna, synergistic litter mixture effects increased with increasing stoichiometric dissimilarity of the litter mixtures, and this positive relationship disappeared with fertilizer addition. Our results indicate that litter stoichiometric dissimilarity drives mixture effects via the nutritional requirements of soil fauna. Incorporating ecological stoichiometry in biodiversity research allows refinement of the underlying mechanisms of how changing biodiversity affects ecosystem functioning. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  2. First-passage problems: A probabilistic dynamic analysis for degraded structures

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Chamis, Christos C.

    1990-01-01

    Structures subjected to random excitations, with uncertain system parameters that are degraded by the surrounding environment (a random time history), are studied. Methods are developed to determine the statistics of dynamic responses, such as the time-varying mean, the standard deviation, the autocorrelation functions, and the joint probability density function of any response and its derivative. Moreover, first-passage problems with deterministic and stationary/evolutionary random barriers are evaluated. The time-varying (joint) mean crossing rate and the probability density function of the first-passage time for various random barriers are derived.

  3. ℓ(p)-Norm multikernel learning approach for stock market price forecasting.

    PubMed

    Shao, Xigao; Wu, Kun; Liao, Bifeng

    2012-01-01

    A linear multiple kernel learning model has been used for predicting financial time series. However, ℓ(1)-norm multiple support vector regression is rarely observed to outperform trivial baselines in practical applications. To allow for robust kernel mixtures that generalize well, we adopt ℓ(p)-norm multiple kernel support vector regression (1 ≤ p < ∞) as a stock price prediction model. The optimization problem is decomposed into smaller subproblems, and an interleaved optimization strategy is employed to solve the regression model. The model is evaluated on forecasting the daily closing prices of the Shanghai Stock Index in China. Experimental results show that our proposed model performs better than the ℓ(1)-norm multiple support vector regression model.
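
    A generic sketch of the interleaved strategy behind ℓ(p)-norm multiple kernel support vector regression (not the authors' exact solver; the base kernels, hyperparameters, and the standard closed-form weight update are assumptions):

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.metrics.pairwise import linear_kernel, rbf_kernel, polynomial_kernel

        def lp_mkl_svr(X, y, p=2.0, n_iter=10, C=1.0):
            # Base kernels (an assumed choice for illustration).
            kernels = [linear_kernel(X), rbf_kernel(X, gamma=0.1), polynomial_kernel(X, degree=2)]
            beta = np.full(len(kernels), 1.0 / len(kernels))     # kernel weights, start uniform
            for _ in range(n_iter):
                K = sum(b * Km for b, Km in zip(beta, kernels))  # mixed kernel
                svr = SVR(kernel="precomputed", C=C).fit(K, y)
                a = np.zeros(len(y))
                a[svr.support_] = svr.dual_coef_.ravel()
                # squared block norms ||w_m||^2 = beta_m^2 * a^T K_m a
                w2 = np.array([(b ** 2) * (a @ Km @ a) for b, Km in zip(beta, kernels)])
                # closed-form lp-norm weight update, then lp-normalization
                beta = w2 ** (1.0 / (p + 1))
                beta /= np.linalg.norm(beta, ord=p)
            return beta, svr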

  4. Shock-wave flow regimes at entry into the diffuser of a hypersonic ramjet engine: Influence of physical properties of the gas medium

    NASA Astrophysics Data System (ADS)

    Tarnavskii, G. A.

    2006-07-01

    The physical aspects of the effective-adiabatic-exponent model, which makes it possible to decompose the total problem of modeling high-velocity gas flows into individual subproblems (“physicochemical processes” and “aeromechanics”) and thus ensures the creation of a universal and efficient computer complex divided into a number of independent units, have been analyzed. Shock-wave structures appearing at entry into the duct of a hypersonic aircraft have been investigated on the basis of this methodology, and the influence of the physical properties of the gas medium over a wide range of variation of the effective adiabatic exponent has been studied.

  5. Registering Cortical Surfaces Based on Whole-Brain Structural Connectivity and Continuous Connectivity Analysis

    PubMed Central

    Gutman, Boris; Leonardo, Cassandra; Jahanshad, Neda; Hibar, Derrek; Eschenburg, Kristian; Nir, Talia; Villalon, Julio; Thompson, Paul

    2014-01-01

    We present a framework for registering cortical surfaces based on tractography-informed structural connectivity. We define connectivity as a continuous kernel on the product space of the cortex, and develop a method for estimating this kernel from tractography fiber models. Next, we formulate the kernel registration problem, and present a means to non-linearly register two brains’ continuous connectivity profiles. We apply theoretical results from operator theory to develop an algorithm for decomposing the connectome into its shared and individual components. Lastly, we extend two discrete connectivity measures to the continuous case, and apply our framework to 98 Alzheimer’s patients and controls. Our measures show significant differences between the two groups. PMID:25320795

  6. Iris recognition based on robust principal component analysis

    NASA Astrophysics Data System (ADS)

    Karn, Pradeep; He, Xiao Hai; Yang, Shuai; Wu, Xiao Hong

    2014-11-01

    Iris images acquired under different conditions often suffer from blur, occlusion due to eyelids and eyelashes, specular reflection, and other artifacts. Existing iris recognition systems do not perform well on these types of images. To overcome these problems, we propose an iris recognition method based on robust principal component analysis. The proposed method decomposes all training images into a low-rank matrix and a sparse error matrix, where the low-rank matrix is used for feature extraction. The sparsity concentration index approach is then applied to validate the recognition result. Experimental results using the CASIA V4 and IIT Delhi V1 iris image databases showed that the proposed method achieved competitive performance in both recognition accuracy and computational efficiency.
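
    A minimal principal component pursuit sketch of the low-rank plus sparse decomposition step (a standard inexact augmented Lagrangian scheme, not necessarily the authors' solver; feature extraction and the sparsity concentration index validation are omitted):

        import numpy as np

        def robust_pca(D, lam=None, mu=None, n_iter=200, tol=1e-7):
            # Decompose D into low-rank L and sparse S by principal component pursuit.
            m, n = D.shape
            if lam is None:
                lam = 1.0 / np.sqrt(max(m, n))
            if mu is None:
                mu = m * n / (4.0 * np.abs(D).sum() + 1e-12)
            shrink = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0.0)
            L = np.zeros_like(D); S = np.zeros_like(D); Y = np.zeros_like(D)
            norm_D = np.linalg.norm(D, "fro")
            for _ in range(n_iter):
                U, s, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
                L = (U * shrink(s, 1.0 / mu)) @ Vt          # singular value thresholding
                S = shrink(D - L + Y / mu, lam / mu)        # elementwise soft threshold
                R = D - L - S
                Y += mu * R                                 # dual update
                if np.linalg.norm(R, "fro") / norm_D < tol:
                    break
            return L, S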

  7. Attitude control of the space construction base: A modular approach

    NASA Technical Reports Server (NTRS)

    Oconnor, D. A.

    1982-01-01

    A planar model of a space base and one module is considered. For this simplified system, a feedback controller which is compatible with the modular construction method is described. The system dynamics are decomposed into two parts corresponding to the base and the module. The information structure of the problem is non-classical in that not all system information is supplied to each controller. The base controller is designed to accommodate structural changes that occur as the module is added, and the module controller is designed to regulate its own states and follow commands from the base. Overall stability of the system is checked by Liapunov analysis, and controller effectiveness is verified by computer simulation.

  8. The Design Manager's Aid for Intelligent Decomposition (DeMAID)

    NASA Technical Reports Server (NTRS)

    Rogers, James L.

    1994-01-01

    Before the design of new complex systems such as large space platforms can begin, the possible interactions among subsystems and their parts must be determined. Once this is completed, the proposed system can be decomposed to identify its hierarchical structure. The design manager's aid for intelligent decomposition (DeMAID) is a knowledge based system for ordering the sequence of modules and identifying a possible multilevel structure for design. Although DeMAID requires an investment of time to generate and refine the list of modules for input, it could save considerable money and time in the total design process, particularly in new design problems where the ordering of the modules has not been defined.

  9. Discussion on Boiler Efficiency Correction Method with Low Temperature Economizer-Air Heater System

    NASA Astrophysics Data System (ADS)

    Ke, Liu; Xing-sen, Yang; Fan-jun, Hou; Zhi-hong, Hu

    2017-05-01

    This paper points out that it is wrong to take the outlet flue gas temperature of the low temperature economizer as the exhaust gas temperature in the boiler efficiency calculation based on GB10184-1988. Moreover, it proposes a new correction method, which decomposes the low temperature economizer-air heater system into two hypothetical parts, an air preheater and a pre-condensed water heater, and takes the equivalent outlet gas temperature of the air preheater as the exhaust gas temperature in the boiler efficiency calculation. This method makes the boiler efficiency calculation more concise, with no air heater correction, and provides a useful reference for handling this kind of problem correctly.

  10. Pigmented skin lesion detection using random forest and wavelet-based texture

    NASA Astrophysics Data System (ADS)

    Hu, Ping; Yang, Tie-jun

    2016-10-01

    The incidence of cutaneous malignant melanoma, a disease of worldwide distribution and the deadliest form of skin cancer, has been rapidly increasing over the last few decades. Because advanced cutaneous melanoma is still incurable, early detection is an important step toward a reduction in mortality. Dermoscopy photographs are commonly used in melanoma diagnosis and can capture detailed features of a lesion. Great variability exists in the visual appearance of pigmented skin lesions. Therefore, in order to minimize the diagnostic errors that result from the difficulty and subjectivity of visual interpretation, an automatic detection approach is required. The objectives of this paper were to propose a hybrid method using random forests and the Gabor wavelet transformation to accurately separate the lesion area from the rest of a dermoscopy photograph, and to analyze segmentation accuracy. A random forest classifier consisting of a set of decision trees was used for classification. Gabor wavelets model the receptive fields of visual cortical cells in the mammalian brain, and an image can be decomposed into multiple scales and multiple orientations by using them. The Gabor function has been recognized as a very useful tool in texture analysis, due to its optimal localization properties in both the spatial and frequency domains. Texture features based on the Gabor wavelet transformation are computed from the Gabor-filtered image. Experiment results indicate the following: (1) the proposed algorithm based on random forests outperformed the state of the art in pigmented skin lesion detection, and (2) the inclusion of Gabor-wavelet-based texture features improved segmentation accuracy significantly.
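
    A sketch of this kind of pipeline, pairing a small Gabor filter bank with a random forest for per-pixel lesion classification (the filter frequencies, orientations, and the image/mask variables are hypothetical, not the paper's settings):

        import numpy as np
        from skimage.filters import gabor
        from sklearn.ensemble import RandomForestClassifier

        def gabor_features(img, frequencies=(0.1, 0.2, 0.4), n_orient=4):
            # Per-pixel texture features: magnitude response of a Gabor filter bank
            # over several scales (frequencies) and orientations.
            feats = []
            for f in frequencies:
                for k in range(n_orient):
                    real, imag = gabor(img, frequency=f, theta=k * np.pi / n_orient)
                    feats.append(np.hypot(real, imag))
            return np.stack(feats, axis=-1).reshape(-1, len(feats))

        # Training on a grayscale image img_train with a binary lesion mask mask_train
        # (both hypothetical), then predicting a segmentation for img_test:
        #   clf = RandomForestClassifier(n_estimators=200, n_jobs=-1)
        #   clf.fit(gabor_features(img_train), mask_train.ravel())
        #   seg = clf.predict(gabor_features(img_test)).reshape(img_test.shape)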

  11. A comparison of algorithms for inference and learning in probabilistic graphical models.

    PubMed

    Frey, Brendan J; Jojic, Nebojsa

    2005-09-01

    Research into methods for reasoning under uncertainty is currently one of the most exciting areas of artificial intelligence, largely because it has recently become possible to record, store, and process large amounts of data. While impressive achievements have been made in pattern classification problems such as handwritten character recognition, face detection, speaker identification, and prediction of gene function, it is even more exciting that researchers are on the verge of introducing systems that can perform large-scale combinatorial analyses of data, decomposing the data into interacting components. For example, computational methods for automatic scene analysis are now emerging in the computer vision community. These methods decompose an input image into its constituent objects, lighting conditions, motion patterns, etc. Two of the main challenges are finding effective representations and models in specific applications and finding efficient algorithms for inference and learning in these models. In this paper, we advocate the use of graph-based probability models and their associated inference and learning algorithms. We review exact techniques and various approximate, computationally efficient techniques, including iterated conditional modes, the expectation maximization (EM) algorithm, Gibbs sampling, the mean field method, variational techniques, structured variational techniques and the sum-product algorithm ("loopy" belief propagation). We describe how each technique can be applied in a vision model of multiple, occluding objects and contrast the behaviors and performances of the techniques using a unifying cost function, free energy.

  12. Comparison study of image quality and effective dose in dual energy chest digital tomosynthesis

    NASA Astrophysics Data System (ADS)

    Lee, Donghoon; Choi, Sunghoon; Lee, Haenghwa; Kim, Dohyeon; Choi, Seungyeon; Kim, Hee-Joung

    2018-07-01

    The present study aimed to introduce a recently developed digital tomosynthesis system for the chest and to describe the procedure for acquiring dual energy bone-decomposed tomosynthesis images. Various beam qualities and reconstruction algorithms were evaluated for acquiring dual energy chest digital tomosynthesis (CDT) images, and the effective dose was calculated with an ion chamber and Monte Carlo simulations. The results demonstrated that dual energy CDT improved visualization of the lung field by eliminating the bony structures. In addition, the qualitative and quantitative image quality of dual energy CDT using iterative reconstruction was better than that with the filtered backprojection (FBP) algorithm. The contrast-to-noise ratio and figure of merit values of dual energy CDT acquired with iterative reconstruction were three times better than those acquired with FBP reconstruction. The difference in image quality according to the acquisition conditions was not noticeable, but the effective dose was significantly affected by the acquisition condition. The high energy acquisition condition using 130 kVp recorded a relatively high effective dose. We conclude that dual energy CDT has the potential to compensate for a major problem in CDT, the significant artifacts induced by bony structures, by decomposing them out of the image. Although there are many variables in clinical practice, our results regarding reconstruction algorithms and acquisition conditions may serve as a basis for the clinical use of dual energy CDT imaging.
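
    The bone-elimination idea can be illustrated with a generic weighted log subtraction of the low- and high-energy projections (a textbook dual-energy step, not the authors' specific decomposition or reconstruction chain; the weight w is an assumed tuning parameter):

        import numpy as np

        def bone_suppressed(low_kvp, high_kvp, w=0.5, eps=1e-6):
            # Weighted log subtraction: w is tuned so the bone signal cancels,
            # leaving a soft-tissue projection that can then be reconstructed.
            lo = np.log(np.clip(low_kvp, eps, None))
            hi = np.log(np.clip(high_kvp, eps, None))
            return np.exp(hi - w * lo)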

  13. Draft Genome Sequence of the Lignocellulose Decomposer Thermobifida fusca Strain TM51

    PubMed Central

    Tóth, Ákos; Barna, Terézia; Nagy, István; Horváth, Balázs; Nagy, István; Táncsics, András; Kriszt, Balázs; Baka, Erzsébet; Fekete, Csaba

    2013-01-01

    Here, we present the complete genome sequence of Thermobifida fusca strain TM51, which was isolated from the hot upper layer of a compost pile in Hungary. T. fusca TM51 is a thermotolerant, aerobic actinomycete with outstanding lignocellulose-decomposing activity. PMID:23846276

  14. Decomposing University Grades: A Longitudinal Study of Students and Their Instructors

    ERIC Educational Resources Information Center

    Beenstock, Michael; Feldman, Dan

    2018-01-01

    First-degree course grades for a cohort of social science students are matched to their instructors, and are statistically decomposed into departmental, course, instructor, and student components. Student ability is measured alternatively by university acceptance scores, or by fixed effects estimated using panel data methods. After controlling for…

  15. Gaze Fluctuations Are Not Additively Decomposable: Reply to Bogartz and Staub

    ERIC Educational Resources Information Center

    Kelty-Stephen, Damian G.; Mirman, Daniel

    2013-01-01

    Our previous work interpreted single-lognormal fits to inter-gaze distance (i.e., "gaze steps") histograms as evidence of multiplicativity and hence interactions across scales in visual cognition. Bogartz and Staub (2012) proposed that gaze steps are additively decomposable into fixations and saccades, matching the histograms better and…

  16. Decomposing Achievement Gaps among OECD Countries

    ERIC Educational Resources Information Center

    Zhang, Liang; Lee, Kristen A.

    2011-01-01

    In this study, we use decomposition methods on PISA 2006 data to compare student academic performance across OECD countries. We first establish an empirical model to explain the variation in academic performance across individuals, and then use the Oaxaca-Blinder decomposition method to decompose the achievement gap between each of the OECD…
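
    The Oaxaca-Blinder step itself fits in a few lines; this is a generic twofold decomposition with group B as the reference group, not the paper's exact specification (the X matrices are assumed to include an intercept column):

        import numpy as np

        def oaxaca_blinder(XA, yA, XB, yB):
            # Decompose the mean outcome gap between groups A and B into a part
            # explained by differences in characteristics and an unexplained part
            # due to differences in coefficients (reference coefficients: group B).
            bA, *_ = np.linalg.lstsq(XA, yA, rcond=None)
            bB, *_ = np.linalg.lstsq(XB, yB, rcond=None)
            xA, xB = XA.mean(axis=0), XB.mean(axis=0)
            explained = (xA - xB) @ bB
            unexplained = xA @ (bA - bB)
            return yA.mean() - yB.mean(), explained, unexplained  # gap = explained + unexplained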

  17. Color image encryption by using Yang-Gu mixture amplitude-phase retrieval algorithm in gyrator transform domain and two-dimensional Sine logistic modulation map

    NASA Astrophysics Data System (ADS)

    Sui, Liansheng; Liu, Benqing; Wang, Qiang; Li, Ye; Liang, Junli

    2015-12-01

    A color image encryption scheme is proposed based on the Yang-Gu mixture amplitude-phase retrieval algorithm and a two-coupled logistic map in the gyrator transform domain. First, the color plaintext image is decomposed into red, green and blue components, which are scrambled individually by three random sequences generated using the two-dimensional Sine logistic modulation map. Second, each scrambled component is encrypted into a real-valued function with a stationary white noise distribution in the iterative amplitude-phase retrieval process in the gyrator transform domain, and the three resulting functions are taken as the red, green and blue channels of the color ciphertext image. The ciphertext image is a real-valued function and is therefore more convenient to store and transmit. In the encryption and decryption processes, the chaotic random phase mask generated from the logistic map is employed as the phase key, which means that only the initial values are used as the private key and the cryptosystem is highly convenient for key management. Meanwhile, the security of the cryptosystem is greatly enhanced because of the high sensitivity of the private keys. Simulation results are presented to prove the security and robustness of the proposed scheme.

  18. Periodic orbit spectrum in terms of Ruelle-Pollicott resonances

    NASA Astrophysics Data System (ADS)

    Leboeuf, P.

    2004-02-01

    Fully chaotic Hamiltonian systems possess an infinite number of classical solutions which are periodic, e.g., a trajectory “p” returns to its initial conditions after some fixed time τ_p. Our aim is to investigate the spectrum {τ_1, τ_2, …} of periods of the periodic orbits. An explicit formula for the density ρ(τ) = ∑_p δ(τ − τ_p) is derived in terms of the eigenvalues of the classical evolution operator. The density is naturally decomposed into a smooth part plus an interferent sum over oscillatory terms. The frequencies of the oscillatory terms are given by the imaginary part of the complex eigenvalues (Ruelle-Pollicott resonances). For large periods, corrections to the well-known exponential growth of the smooth part of the density are obtained. An alternative formula for ρ(τ) in terms of the zeros and poles of the Ruelle ζ function is also discussed. The results are illustrated with the geodesic motion in billiards of constant negative curvature. Connections with the statistical properties of the corresponding quantum eigenvalues, random-matrix theory, and discrete maps are also considered. In particular, a random-matrix conjecture is proposed for the eigenvalues of the classical evolution operator of chaotic billiards.

  19. A diffusion approximation for ocean wave scatterings by randomly distributed ice floes

    NASA Astrophysics Data System (ADS)

    Zhao, Xin; Shen, Hayley

    2016-11-01

    This study presents a continuum approach using a diffusion approximation method to solve the scattering of ocean waves by randomly distributed ice floes. In order to model both strong and weak scattering, the proposed method decomposes the wave action density function into two parts: the transmitted part and the scattered part. For a given wave direction, the transmitted part of the wave action density is defined as the part of wave action density in the same direction before the scattering; and the scattered part is a first order Fourier series approximation for the directional spreading caused by scattering. An additional approximation is also adopted for simplification, in which the net directional redistribution of wave action by a single scatterer is assumed to be the reflected wave action of a normally incident wave into a semi-infinite ice cover. Other required input includes the mean shear modulus, diameter and thickness of ice floes, and the ice concentration. The directional spreading of wave energy from the diffusion approximation is found to be in reasonable agreement with the previous solution using the Boltzmann equation. The diffusion model provides an alternative method to implement wave scattering into an operational wave model.

  20. Pattern formations and optimal packing.

    PubMed

    Mityushev, Vladimir

    2016-04-01

    Patterns of different symmetries may arise after solution to reaction-diffusion equations. Hexagonal arrays, layers and their perturbations are observed in different models after numerical solution to the corresponding initial-boundary value problems. We demonstrate an intimate connection between pattern formation and optimal random packing on the plane. The main study is based on the following two points. First, the diffusive flux in reaction-diffusion systems is approximated by piecewise linear functions in the framework of structural approximations. This leads to a discrete network approximation of the considered continuous problem. Second, the discrete energy minimization yields optimal random packing of the domains (disks) in the representative cell. Therefore, the general problem of pattern formation based on the reaction-diffusion equations is reduced to the geometric problem of random packing. It is demonstrated that all random packings can be divided into classes associated with classes of isomorphic graphs obtained from the Delaunay triangulation. The unique optimal solution is constructed in each class of the random packings. If the number of disks per representative cell is finite, the number of classes of isomorphic graphs, and hence the number of optimal packings, is also finite. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. From Weakly Chaotic Dynamics to Deterministic Subdiffusion via Copula Modeling

    NASA Astrophysics Data System (ADS)

    Nazé, Pierre

    2018-03-01

    Copula modeling consists in finding a probabilistic distribution, called a copula, whose coupling with the marginal distributions of a set of random variables produces their joint distribution. The present work aims to use this technique to connect the statistical distributions of weakly chaotic dynamics and deterministic subdiffusion. More precisely, we decompose the jump distribution of the Geisel-Thomae map into a bivariate one and determine the marginal and copula distributions by infinite ergodic theory and statistical inference techniques, respectively. We thereby verify that the characteristic tail distribution of subdiffusion is an extreme value copula coupling Mittag-Leffler distributions. We also present a method to calculate the exact copula and joint distributions in the case where the statistical distributions of the weakly chaotic dynamics and the deterministic subdiffusion are already known. Numerical simulations and consistency with the dynamical aspects of the map support our results.

  2. Coupled flow and deformations in granular systems beyond the pendular regime

    NASA Astrophysics Data System (ADS)

    Yuan, Chao; Chareyre, Bruno; Darve, Felix

    2017-06-01

    A pore-scale numerical model is proposed for simulating quasi-static primary drainage and hydro-mechanical couplings in multiphase granular systems. The solid skeleton is idealized as a dense random packing of polydisperse spheres using DEM. The fluid (nonwetting and wetting phase) space is decomposed into a network of tetrahedral pores based on the Regular Triangulation method. The local drainage rules and invasion logic are defined, and the fluid forces acting on solid grains are formulated. The model can simulate the hydraulic evolution from a fully saturated state to a low level of saturation, but beyond the pendular regime. The features of wetting-phase entrapment and capillary fingering can also be reproduced. Finally, a primary drainage test is performed on a sample of 40,000 spheres, and the water retention curve is obtained. The solid skeleton first shrinks and then swells.

  3. Certifying an Irreducible 1024-Dimensional Photonic State Using Refined Dimension Witnesses.

    PubMed

    Aguilar, Edgar A; Farkas, Máté; Martínez, Daniel; Alvarado, Matías; Cariñe, Jaime; Xavier, Guilherme B; Barra, Johanna F; Cañas, Gustavo; Pawłowski, Marcin; Lima, Gustavo

    2018-06-08

    We report on a new class of dimension witnesses, based on quantum random access codes, which are a function of the recorded statistics and have different bounds for all possible decompositions of a high-dimensional physical system. A witness of this class therefore certifies the dimension of the system and has the new distinct feature of identifying whether the high-dimensional system is decomposable in terms of lower dimensional subsystems. To demonstrate the practicability of this technique, we used it to experimentally certify the generation of an irreducible 1024-dimensional photonic quantum state, thereby certifying that the state is not multipartite or encoded using noncoupled different degrees of freedom of a single photon. Our protocol should find applications in a broad class of modern quantum information experiments addressing the generation of high-dimensional quantum systems, where quantum tomography may become intractable.

  4. Classifying bent radio galaxies from a mixture of point-like/extended images with Machine Learning.

    NASA Astrophysics Data System (ADS)

    Bastien, David; Oozeer, Nadeem; Somanah, Radhakrishna

    2017-05-01

    The hypothesis that bent radio sources are found in rich, massive galaxy clusters and the availability of huge amounts of data from radio surveys have fueled our motivation to use Machine Learning (ML) to identify bent radio sources and use them as tracers for galaxy clusters. Shapelet analysis allowed us to decompose radio images into 256 features that could be fed into the ML algorithm. Additionally, ideas from the field of neuropsychology helped us consider training the machine to identify bent galaxies at different orientations. From our analysis, we found that the Random Forest algorithm was the most effective, with an accuracy of 92% for the classification of point-like and extended sources, as well as an accuracy of 80% for bent versus unbent classification.

  5. Wave theory of turbulence in compressible media (acoustic theory of turbulence)

    NASA Technical Reports Server (NTRS)

    Kentzer, C. P.

    1975-01-01

    The generation and the transmission of sound in turbulent flows are treated as one of the several aspects of wave propagation in turbulence. Fluid fluctuations are decomposed into orthogonal Fourier components, with five interacting modes of wave propagation: two vorticity modes, one entropy mode, and two acoustic modes. Wave interactions, governed by the inhomogeneous and nonlinear terms of the perturbed Navier-Stokes equations, are modeled by random functions which give the rates of change of wave amplitudes equal to the averaged interaction terms. The statistical framework adopted is a quantum-like formulation in terms of complex distribution functions. The spatial probability distributions are given by the squares of the absolute values of the complex characteristic functions. This formulation results in nonlinear diffusion-type transport equations for the probability densities of the five modes of wave propagation.

  6. Certifying an Irreducible 1024-Dimensional Photonic State Using Refined Dimension Witnesses

    NASA Astrophysics Data System (ADS)

    Aguilar, Edgar A.; Farkas, Máté; Martínez, Daniel; Alvarado, Matías; Cariñe, Jaime; Xavier, Guilherme B.; Barra, Johanna F.; Cañas, Gustavo; Pawłowski, Marcin; Lima, Gustavo

    2018-06-01

    We report on a new class of dimension witnesses, based on quantum random access codes, which are a function of the recorded statistics and have different bounds for all possible decompositions of a high-dimensional physical system. A witness of this class therefore certifies the dimension of the system and has the new distinct feature of identifying whether the high-dimensional system is decomposable in terms of lower dimensional subsystems. To demonstrate the practicability of this technique, we used it to experimentally certify the generation of an irreducible 1024-dimensional photonic quantum state, thereby certifying that the state is not multipartite or encoded using noncoupled different degrees of freedom of a single photon. Our protocol should find applications in a broad class of modern quantum information experiments addressing the generation of high-dimensional quantum systems, where quantum tomography may become intractable.

  7. [Computer-assisted education in problem-solving in neurology; a randomized educational study].

    PubMed

    Weverling, G J; Stam, J; ten Cate, T J; van Crevel, H

    1996-02-24

    To determine the effect of computer-based medical teaching (CBMT) as a supplementary method for teaching clinical problem-solving during the clerkship in neurology. Randomized controlled blinded study. Academic Medical Centre, Amsterdam, the Netherlands. A total of 103 students were assigned at random to a group with access to CBMT or to a control group. CBMT consisted of 20 computer-simulated patients with neurological diseases and was permanently available to students in the CBMT group during five weeks. The ability to recognize and solve neurological problems was assessed with two free-response tests, scored by two blinded observers. The CBMT students scored significantly better on the test related to the CBMT cases (mean score 7.5 on a zero to 10 point scale; control group 6.2; p < 0.001). There was no significant difference on the control test, which was not related to the problems practised with CBMT. CBMT can be an effective method for teaching clinical problem-solving when used as a supplementary teaching facility during a clinical clerkship. The increased ability to solve the problems learned with CBMT had no demonstrable effect on performance with other neurological problems.

  8. Total variation regularization of the 3-D gravity inverse problem using a randomized generalized singular value decomposition

    NASA Astrophysics Data System (ADS)

    Vatankhah, Saeed; Renaut, Rosemary A.; Ardestani, Vahid E.

    2018-04-01

    We present a fast algorithm for the total variation regularization of the 3-D gravity inverse problem. Through imposition of the total variation regularization, subsurface structures presenting with sharp discontinuities are preserved better than when using a conventional minimum-structure inversion. The associated problem formulation for the regularization is nonlinear but can be solved using an iteratively reweighted least-squares algorithm. For small-scale problems the regularized least-squares problem at each iteration can be solved using the generalized singular value decomposition. This is not feasible for large-scale, or even moderate-scale, problems. Instead we introduce the use of a randomized generalized singular value decomposition in order to reduce the dimensions of the problem and provide an effective and efficient solution technique. For further efficiency an alternating direction algorithm is used to implement the total variation weighting operator within the iteratively reweighted least-squares algorithm. Presented results for synthetic examples demonstrate that the novel randomized decomposition provides good accuracy for reduced computational and memory demands as compared to use of classical approaches.
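
    A schematic of the main ingredients (iteratively reweighted least squares for the total variation term, with each inner subproblem solved through a randomized low-rank factorization), assuming dense matrices G (forward operator) and D (difference operator); an illustrative sketch, not the authors' randomized GSVD implementation:

        import numpy as np
        from sklearn.utils.extmath import randomized_svd

        def tv_irls(G, d, D, alpha=1.0, n_irls=10, rank=50, eps=1e-6):
            # Minimize ||G m - d||^2 + alpha * TV(m) with TV approximated by
            # sum sqrt((D m)^2 + eps), via iteratively reweighted least squares.
            m = np.zeros(G.shape[1])
            for _ in range(n_irls):
                w = 1.0 / np.sqrt((D @ m) ** 2 + eps)            # TV reweighting
                A = np.vstack([G, np.sqrt(alpha * w)[:, None] * D])
                b = np.concatenate([d, np.zeros(D.shape[0])])
                U, s, Vt = randomized_svd(A, n_components=rank)  # randomized factorization
                m = Vt.T @ ((U.T @ b) / s)                       # truncated pseudo-inverse solve
            return m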

  9. On the utility of the multi-level algorithm for the solution of nearly completely decomposable Markov chains

    NASA Technical Reports Server (NTRS)

    Leutenegger, Scott T.; Horton, Graham

    1994-01-01

    Recently the Multi-Level algorithm was introduced as a general purpose solver for the solution of steady state Markov chains. In this paper, we consider the performance of the Multi-Level algorithm for solving Nearly Completely Decomposable (NCD) Markov chains, for which special-purpose iterative aggregation/disaggregation algorithms such as the Koury-McAllister-Stewart (KMS) method have been developed that can exploit the decomposability of the Markov chain. We present experimental results indicating that the general-purpose Multi-Level algorithm is competitive, and can be significantly faster than the special-purpose KMS algorithm when Gauss-Seidel and Gaussian Elimination are used for solving the individual blocks.
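
    A simplified iterative aggregation/disaggregation sketch for an NCD chain, using a power-iteration step as the smoother instead of the KMS block Gauss-Seidel solves (the block partition, e.g. blocks = [[0, 1, 2], [3, 4], [5, 6, 7]], is assumed to be given):

        import numpy as np

        def iad_steady_state(P, blocks, n_iter=500, tol=1e-12):
            # Stationary vector pi of a row-stochastic P (pi P = pi) for a chain
            # whose states are partitioned into nearly uncoupled blocks.
            n = P.shape[0]
            pi = np.full(n, 1.0 / n)
            K = len(blocks)
            for _ in range(n_iter):
                # Aggregation: coupling matrix of block-to-block transition probabilities.
                C = np.zeros((K, K))
                for k, Bk in enumerate(blocks):
                    w = pi[Bk] / pi[Bk].sum()
                    for l, Bl in enumerate(blocks):
                        C[k, l] = w @ P[np.ix_(Bk, Bl)].sum(axis=1)
                vals, vecs = np.linalg.eig(C.T)                 # stationary vector of C
                xi = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
                xi /= xi.sum()
                # Disaggregation plus one power-iteration smoothing step.
                z = np.zeros(n)
                for k, Bk in enumerate(blocks):
                    z[Bk] = xi[k] * pi[Bk] / pi[Bk].sum()
                pi_new = z @ P
                pi_new /= pi_new.sum()
                if np.abs(pi_new - pi).sum() < tol:
                    return pi_new
                pi = pi_new
            return pi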

  10. Methods for assessing the impact of avermectins on the decomposer community of sheep pastures.

    PubMed

    King, K L

    1993-06-01

    This paper outlines methods which can be used in the field assessment of potentially toxic chemicals such as the avermectins. The procedures focus on measuring the effects of the drug on decomposer organisms and the nutrient cycling process in pastures grazed by sheep. Measurements of decomposer activity are described along with methods for determining dry and organic matter loss and mineral loss from dung to the underlying soil. Sampling methods for both micro- and macro-invertebrates are discussed along with determination of the percentage infection of plant roots with vesicular-arbuscular mycorrhizal fungi. An integrated sampling unit for assessing the ecotoxicity of ivermectin in pastures grazed by sheep is presented.

  11. Fluidized bed silicon deposition from silane

    NASA Technical Reports Server (NTRS)

    Hsu, George C. (Inventor); Levin, Harry (Inventor); Hogle, Richard A. (Inventor); Praturi, Ananda (Inventor); Lutwack, Ralph (Inventor)

    1982-01-01

    A process and apparatus for thermally decomposing silicon containing gas for deposition on fluidized nucleating silicon seed particles is disclosed. Silicon seed particles are produced in a secondary fluidized reactor by thermal decomposition of a silicon containing gas. The thermally produced silicon seed particles are then introduced into a primary fluidized bed reactor to form a fluidized bed. Silicon containing gas is introduced into the primary reactor where it is thermally decomposed and deposited on the fluidized silicon seed particles. Silicon seed particles having the desired amount of thermally decomposed silicon product thereon are removed from the primary fluidized reactor as ultra pure silicon product. An apparatus for carrying out this process is also disclosed.

  12. Fluidized bed silicon deposition from silane

    NASA Technical Reports Server (NTRS)

    Hsu, George (Inventor); Levin, Harry (Inventor); Hogle, Richard A. (Inventor); Praturi, Ananda (Inventor); Lutwack, Ralph (Inventor)

    1984-01-01

    A process and apparatus for thermally decomposing silicon containing gas for deposition on fluidized nucleating silicon seed particles is disclosed. Silicon seed particles are produced in a secondary fluidized reactor by thermal decomposition of a silicon containing gas. The thermally produced silicon seed particles are then introduced into a primary fluidized bed reactor to form a fluidized bed. Silicon containing gas is introduced into the primary reactor where it is thermally decomposed and deposited on the fluidized silicon seed particles. Silicon seed particles having the desired amount of thermally decomposed silicon product thereon are removed from the primary fluidized reactor as ultra pure silicon product. An apparatus for carrying out this process is also disclosed.

  13. A novel hybrid model for air quality index forecasting based on two-phase decomposition technique and modified extreme learning machine.

    PubMed

    Wang, Deyun; Wei, Shuai; Luo, Hongyuan; Yue, Chenqiang; Grunder, Olivier

    2017-02-15

    The randomness, non-stationarity and irregularity of air quality index (AQI) series make AQI forecasting difficult. To enhance forecast accuracy, a novel hybrid forecasting model combining a two-phase decomposition technique and an extreme learning machine (ELM) optimized by the differential evolution (DE) algorithm is developed for AQI forecasting in this paper. In phase I, complementary ensemble empirical mode decomposition (CEEMD) is utilized to decompose the AQI series into a set of intrinsic mode functions (IMFs) with different frequencies; in phase II, in order to further handle the high frequency IMFs, which would otherwise increase the forecast difficulty, variational mode decomposition (VMD) is employed to decompose the high frequency IMFs into a number of variational modes (VMs). Then, the ELM model optimized by the DE algorithm is applied to forecast all the IMFs and VMs. Finally, the forecast value of each high frequency IMF is obtained by adding up the forecast results of all corresponding VMs, and the forecast series of the AQI is obtained by aggregating the forecast results of all IMFs. To verify and validate the proposed model, two daily AQI series from July 1, 2014 to June 30, 2016, collected from Beijing and Shanghai in China, are taken as test cases for the empirical study. The experimental results show that the proposed hybrid model based on the two-phase decomposition technique is remarkably superior to all other considered models in forecast accuracy. Copyright © 2016 Elsevier B.V. All rights reserved.
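
    Only the extreme learning machine component is sketched below (random hidden layer, analytic least-squares output weights); the CEEMD/VMD two-phase decomposition and the differential evolution search over the random weights are omitted, and all sizes are assumptions:

        import numpy as np

        class ELMRegressor:
            # Single-hidden-layer ELM: fixed random input weights, output weights
            # obtained in closed form by least squares.
            def __init__(self, n_hidden=50, seed=0):
                self.n_hidden = n_hidden
                self.rng = np.random.default_rng(seed)

            def _hidden(self, X):
                return np.tanh(X @ self.W + self.b)

            def fit(self, X, y):
                self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
                self.b = self.rng.normal(size=self.n_hidden)
                H = self._hidden(X)
                self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)
                return self

            def predict(self, X):
                return self._hidden(X) @ self.beta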

  14. A hybrid wavelet analysis-cloud model data-extending approach for meteorologic and hydrologic time series

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Ding, Hao; Singh, Vijay P.; Shang, Xiaosan; Liu, Dengfeng; Wang, Yuankun; Zeng, Xiankui; Wu, Jichun; Wang, Lachun; Zou, Xinqing

    2015-05-01

    For scientific and sustainable management of water resources, hydrologic and meteorologic data series need to be often extended. This paper proposes a hybrid approach, named WA-CM (wavelet analysis-cloud model), for data series extension. Wavelet analysis has time-frequency localization features, known as "mathematics microscope," that can decompose and reconstruct hydrologic and meteorologic series by wavelet transform. The cloud model is a mathematical representation of fuzziness and randomness and has strong robustness for uncertain data. The WA-CM approach first employs the wavelet transform to decompose the measured nonstationary series and then uses the cloud model to develop an extension model for each decomposition layer series. The final extension is obtained by summing the results of extension of each layer. Two kinds of meteorologic and hydrologic data sets with different characteristics and different influence of human activity from six (three pairs) representative stations are used to illustrate the WA-CM approach. The approach is also compared with four other methods, which are conventional correlation extension method, Kendall-Theil robust line method, artificial neural network method (back propagation, multilayer perceptron, and radial basis function), and single cloud model method. To evaluate the model performance completely and thoroughly, five measures are used, which are relative error, mean relative error, standard deviation of relative error, root mean square error, and Thiel inequality coefficient. Results show that the WA-CM approach is effective, feasible, and accurate and is found to be better than other four methods compared. The theory employed and the approach developed here can be applied to extension of data in other areas as well.

  15. Irritable bowel syndrome is concentrated in people with higher educations in Iran: an inequality analysis.

    PubMed

    Mansouri, Asieh; Rarani, Mostafa Amini; Fallahi, Mosayeb; Alvandi, Iman

    2017-01-01

    Like any other health-related disorder, irritable bowel syndrome (IBS) has a differential distribution with respect to socioeconomic factors. This study aimed to estimate and decompose educational inequalities in the prevalence of IBS. Sampling was performed using a multi-stage random cluster sampling approach. The data of 1,850 residents of Kish Island aged 15 years or older were included, and the determinants of IBS were identified using a generalized estimating equation regression model. The concentration index of educational inequality in cases of IBS was estimated and decomposed as the specific inequality index. The prevalence of IBS in this study was 21.57% (95% confidence interval [CI], 19.69 to 23.44%). The concentration index of IBS was 0.20 (95% CI, 0.14 to 0.26). A multivariable regression model revealed that age, sex, level of education, marital status, anxiety, and poor general health were significant determinants of IBS. In the decomposition analysis, level of education (89.91%), age (-11.99%), and marital status (9.11%) were the three main contributors to IBS inequality. Anxiety and poor general health were the next two contributors to IBS inequality, and were responsible for more than 12% of the total observed inequality. The main contributors of IBS inequality were education level, age, and marital status. Given the high percentage of anxious individuals among highly educated, young, single, and divorced people, we can conclude that all contributors to IBS inequality may be partially influenced by psychological factors. Therefore, programs that promote the development of mental health to alleviate the abovementioned inequality in this population are highly warranted.
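
    The concentration index reported above can be computed from individual-level data with the standard covariance formula; a minimal sketch (variable names are hypothetical):

        import numpy as np

        def concentration_index(health, ses):
            # Twice the covariance between the health variable (e.g. an IBS indicator)
            # and the fractional socioeconomic rank (e.g. by education), divided by
            # the mean of the health variable.
            order = np.argsort(ses, kind="stable")
            h = np.asarray(health, dtype=float)[order]
            n = h.size
            rank = (np.arange(1, n + 1) - 0.5) / n
            return 2.0 * np.cov(h, rank, bias=True)[0, 1] / h.mean()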

  16. Node-Based Learning of Multiple Gaussian Graphical Models

    PubMed Central

    Mohan, Karthik; London, Palma; Fazel, Maryam; Witten, Daniela; Lee, Su-In

    2014-01-01

    We consider the problem of estimating high-dimensional Gaussian graphical models corresponding to a single set of variables under several distinct conditions. This problem is motivated by the task of recovering transcriptional regulatory networks on the basis of gene expression data containing heterogeneous samples, such as different disease states, multiple species, or different developmental stages. We assume that most aspects of the conditional dependence networks are shared, but that there are some structured differences between them. Rather than assuming that similarities and differences between networks are driven by individual edges, we take a node-based approach, which in many cases provides a more intuitive interpretation of the network differences. We consider estimation under two distinct assumptions: (1) differences between the K networks are due to individual nodes that are perturbed across conditions, or (2) similarities among the K networks are due to the presence of common hub nodes that are shared across all K networks. Using a row-column overlap norm penalty function, we formulate two convex optimization problems that correspond to these two assumptions. We solve these problems using an alternating direction method of multipliers algorithm, and we derive a set of necessary and sufficient conditions that allows us to decompose the problem into independent subproblems so that our algorithm can be scaled to high-dimensional settings. Our proposal is illustrated on synthetic data, a webpage data set, and a brain cancer gene expression data set. PMID:25309137

  17. DOMAIN DECOMPOSITION METHOD APPLIED TO A FLOW PROBLEM Norberto C. Vera Guzmán Institute of Geophysics, UNAM

    NASA Astrophysics Data System (ADS)

    Vera, N. C.; GMMC

    2013-05-01

    In this paper we present results for macrohybrid mixed Darcian flow in porous media in a general three-dimensional domain. The global problem is solved as a set of local subproblems posed using a domain decomposition method. The unknown fields of the local problems, velocity and pressure, are approximated using mixed finite elements. For this application, a general three-dimensional domain is considered and discretized using tetrahedra. The discrete domain is decomposed into subdomains, and the original problem is reformulated as a set of subproblems that communicate through their interfaces. To solve this set of subproblems, we use mixed finite elements and parallel computing. Parallelizing a problem with this methodology can, in principle, fully exploit the computing equipment and also provides results in less time, two very important elements in modeling. References: G. Alduncin and N. Vera-Guzmán, Parallel proximal-point algorithms for mixed finite element models of flow in the subsurface, Commun. Numer. Meth. Engng 2004; 20:83-104 (DOI: 10.1002/cnm.647). Z. Chen, G. Huan and Y. Ma, Computational Methods for Multiphase Flows in Porous Media, SIAM, Philadelphia, 2006. A. Quarteroni and A. Valli, Numerical Approximation of Partial Differential Equations, Springer-Verlag, Berlin, 1994. F. Brezzi and M. Fortin, Mixed and Hybrid Finite Element Methods, Springer, New York, 1991.

  18. The behavior of plasma with an arbitrary degree of degeneracy of electron gas in the conductive layer

    NASA Astrophysics Data System (ADS)

    Latyshev, A. V.; Gordeeva, N. M.

    2017-09-01

    We obtain an analytic solution of the boundary problem for the behavior (fluctuations) of an electron plasma with an arbitrary degree of degeneracy of the electron gas in the conductive layer in an external electric field. We use the kinetic Vlasov-Boltzmann equation with the Bhatnagar-Gross-Krook collision integral and the Maxwell equation for the electric field. We use the mirror boundary conditions for the reflections of electrons from the layer boundary. The boundary problem reduces to a one-dimensional problem with a single velocity. For this, we use the method of consecutive approximations, linearization of the equations with respect to the absolute distribution of the Fermi-Dirac electrons, and the conservation law for the number of particles. Separation of variables then helps reduce the problem equations to a characteristic system of equations. In the space of generalized functions, we find the eigensolutions of the initial system, which correspond to the continuous spectrum (Van Kampen mode). Solving the dispersion equation, we then find the eigensolutions corresponding to the adjoint and discrete spectra (Drude and Debye modes). We then construct the general solution of the boundary problem by decomposing it into the eigensolutions. The coefficients of the decomposition are given by the boundary conditions. This allows obtaining the decompositions of the distribution function and the electric field in explicit form.

  19. High Penetration of Electrical Vehicles in Microgrids: Threats and Opportunities

    NASA Astrophysics Data System (ADS)

    Khederzadeh, Mojtaba; Khalili, Mohammad

    2014-10-01

    Given that the microgrid concept is the building block of future electric distribution systems and electric vehicles (EVs) are the future of the transportation market, in this paper the impact of EVs on the performance of microgrids is investigated. Demand-side participation is used to cope with the increasing demand for EV charging. The problem of coordinating EV charging and discharging (with vehicle-to-grid (V2G) functionality) and demand response is formulated as a market-clearing mechanism that accepts bids from the demand and supply sides and takes into account the constraints put forward by the different parties. Therefore, a day-ahead market with detailed bids and offers within the microgrid is designed, whose objective is to maximize the social welfare, i.e. the difference between the value that consumers attach to the electrical energy they buy, plus the benefit to EV owners participating in the V2G functionality, and the cost of producing/purchasing this energy. As the optimization problem is a mixed integer nonlinear programming problem, it is decomposed into one master problem for energy scheduling and one subproblem for power flow computation. The two problems are solved iteratively by interfacing MATLAB with GAMS. Simulation results on a sample microgrid with different residential, commercial and industrial consumers, with associated demand-side bidding and different penetration levels of EVs, support the proposed formulation of the problem and the applied methods.

  20. Thermal energy storage to minimize cost and improve efficiency of a polygeneration district energy system in a real-time electricity market

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Powell, Kody M.; Kim, Jong Suk; Cole, Wesley J.

    2016-10-01

    District energy systems can produce low-cost utilities for large energy networks, but can also be a resource for the electric grid by their ability to ramp production or to store thermal energy by responding to real-time market signals. In this work, dynamic optimization exploits the flexibility of thermal energy storage by determining optimal times to store and extract excess energy. This concept is applied to a polygeneration distributed energy system with combined heat and power, district heating, district cooling, and chilled water thermal energy storage. The system is a university campus responsible for meeting the energy needs of tens of thousands of people. The objective for the dynamic optimization problem is to minimize cost over a 24-h period while meeting multiple loads in real time. The paper presents a novel algorithm to solve this dynamic optimization problem with energy storage by decomposing the problem into multiple static mixed-integer nonlinear programming (MINLP) problems. Another innovative feature of this work is the study of a large, complex energy network which includes the interrelations of a wide variety of energy technologies. Results indicate that a cost savings of 16.5% is realized when the system can participate in the wholesale electricity market.

  1. A Tale of Three Classes: Case Studies in Course Complexity

    ERIC Educational Resources Information Center

    Gill, T. Grandon; Jones, Joni

    2010-01-01

    This paper examines the question of decomposability versus complexity of teaching situations by presenting three case studies of MIS courses. Because all three courses were highly successful in their observed outcomes, the paper hypothesizes that if the attributes of effective course design are decomposable, one would expect to see a large number…

  2. Potassium cuprate (3)

    NASA Technical Reports Server (NTRS)

    Wahl, Kurt; Klemm, Wilhelm

    1988-01-01

    The reaction of KO2 and CuO in an O2 atmosphere at 400 to 450 C results in KCuO2, a steel-blue, nonmagnetic compound. This substance exhibits a characteristic X-ray diagram; it decomposes in dilute acids to form O2 and Cu(II) salts. It decomposes thermally above 500 C.

  3. An improved triple collocation algorithm for decomposing autocorrelated and white soil moisture retrieval errors

    USDA-ARS?s Scientific Manuscript database

    If not properly accounted for, auto-correlated errors in observations can lead to inaccurate results in soil moisture data analysis and reanalysis. Here, we propose a more generalized form of the triple collocation algorithm (GTC) capable of decomposing the total error variance of remotely-sensed surf...

  4. Kill the Song--Steal the Show: What Does Distinguish Predicative Metaphors from Decomposable Idioms?

    ERIC Educational Resources Information Center

    Caillies, Stephanie; Declercq, Christelle

    2011-01-01

    This study examined the semantic processing difference between decomposable idioms and novel predicative metaphors. It was hypothesized that idiom comprehension results from the retrieval of a figurative meaning stored in memory, that metaphor comprehension requires a sense creation process and that this process difference affects the processing…

  5. A review of bacterial interactions with blow flies (Diptera: Calliphoridae) of medical, veterinary, and forensic importance

    USDA-ARS?s Scientific Manuscript database

    Blow flies are commonly associated with decomposing material. In most cases, the larvae are found feeding on decomposing vertebrate remains. However, some species have specialized to feed on living tissue or can survive on other alternate resources like feces. Because of their affiliation with su...

  6. When microbes and consumers determine the limiting nutrient of autotrophs: a theoretical analysis

    PubMed Central

    Cherif, Mehdi; Loreau, Michel

    2008-01-01

    Ecological stoichiometry postulates that differential nutrient recycling of elements such as nitrogen and phosphorus by consumers can shift the element that limits plant growth. However, this hypothesis has so far considered the effect of consumers, mostly herbivores, out of their food-web context. Microbial decomposers are important components of food webs, and might prove as important as consumers in changing the availability of elements for plants. In this theoretical study, we investigate how decomposers determine the nutrient that limits plants, both by feeding on nutrients and organic carbon released by plants and consumers, and by being fed upon by omnivorous consumers. We show that decomposers can greatly alter the relative availability of nutrients for plants. The type of limiting nutrient promoted by decomposers depends on their own elemental composition and, when applicable, on their ingestion by consumers. Our results highlight the limitations of previous stoichiometric theories of plant nutrient limitation control, which often ignored trophic levels other than plants and herbivores. They also suggest that detrital chains play an important role in determining plant nutrient limitation in many ecosystems. PMID:18854301

  7. Online interventions for problem gamblers with and without co-occurring problem drinking: study protocol of a randomized controlled trial.

    PubMed

    Cunningham, John A; Hodgins, David C; Keough, Matthew; Hendershot, Christian S; Bennett, Kylie; Bennett, Anthony; Godinho, Alexandra

    2018-05-25

    The current randomized controlled trial seeks to evaluate whether providing access to an Internet intervention for problem drinking in addition to an Internet intervention for problem gambling is beneficial for participants with gambling problems who do or do not have co-occurring problem drinking. Potential participants will be recruited online via a comprehensive advertisement strategy, if they meet the criteria for problem gambling. As part of the baseline measures, problem drinking will also be assessed. Eligible participants (N = 280) who agree to partake in the study and to be followed up for 6 months will be randomized into one of two versions of an Internet intervention for gamblers: an intervention that targets only gambling issues (G-only) and one that combines a gambling intervention with an intervention for problem drinking (G + A). For problem gamblers who exhibit co-occurring problem drinking, it is predicted that participants who are provided access to the G + A intervention will demonstrate a significantly greater level of reduction in gambling outcomes at 6 months compared to those provided access to the G-only intervention. This trial will expand upon the current research on Internet interventions for addictions and inform the development of treatments for those with co-occurring problem drinking and gambling. ClinicalTrials.gov, NCT03323606 . Registered on 24 October 2017.

  8. The fast algorithm of spark in compressive sensing

    NASA Astrophysics Data System (ADS)

    Xie, Meihua; Yan, Fengxia

    2017-01-01

    Compressed Sensing (CS) is an advanced theory of signal sampling and reconstruction. In CS theory, the reconstruction condition of a signal is an important theoretical problem, and the spark is a good index for studying it. But computing the spark is NP-hard. In this paper, we study the problem of computing the spark. For some special matrices, for example the Gaussian random matrix and the 0-1 random matrix, we obtain some conclusions. Furthermore, for a Gaussian random matrix with fewer rows than columns, we prove that its spark equals the number of its rows plus one with probability 1. For general matrices, two methods are given to compute the spark: direct searching and dual-tree searching. By simulating 24 Gaussian random matrices and 18 0-1 random matrices, we tested the computation time of these two methods. Numerical results showed that the dual-tree searching method is more efficient than direct searching, especially for matrices with nearly as many rows as columns.
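
    As a concrete illustration of the direct-search approach, the sketch below computes the spark as the size of the smallest linearly dependent column subset. It is the naive brute-force variant only and does not reproduce the dual-tree search studied in the paper.

```python
# Brute-force ("directly searching") computation of spark(A): the size of
# the smallest linearly dependent subset of columns.
import itertools
import numpy as np

def spark(A, tol=1e-10):
    m, n = A.shape
    for k in range(1, min(m, n) + 1):
        for cols in itertools.combinations(range(n), k):
            # k columns are linearly dependent iff their rank is below k
            if np.linalg.matrix_rank(A[:, cols], tol=tol) < k:
                return k
    return min(m, n) + 1      # no dependent subset of size <= min(m, n)

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 8))      # Gaussian random matrix with m < n
print(spark(A))                      # 5 = rows + 1, with probability 1
```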

  9. Domain decomposition and matching for time-domain analysis of motions of ships advancing in head sea

    NASA Astrophysics Data System (ADS)

    Tang, Kai; Zhu, Ren-chuan; Miao, Guo-ping; Fan, Ju

    2014-08-01

    A domain decomposition and matching method in the time domain is outlined for simulating the motions of ships advancing in waves. The flow field is decomposed into inner and outer domains by an imaginary control surface, and the Rankine source method is applied to the inner domain while the transient Green function method is used in the outer domain. The two initial boundary value problems are matched on the control surface. The corresponding numerical codes are developed, and the added masses, wave exciting forces and motions of ships advancing in head sea are presented and verified for the Series 60 ship and the S175 containership. Good agreement is obtained when the numerical results are compared with the experimental data and other references. It is shown that the present method is more efficient because panel discretization is required only in the inner domain during the numerical calculation, and it exhibits good numerical stability, avoiding the divergence problems encountered for ships with flare.

  10. Robust model predictive control of nonlinear systems with unmodeled dynamics and bounded uncertainties based on neural networks.

    PubMed

    Yan, Zheng; Wang, Jun

    2014-03-01

    This paper presents a neural network approach to robust model predictive control (MPC) for constrained discrete-time nonlinear systems with unmodeled dynamics affected by bounded uncertainties. The exact nonlinear model of the underlying process is not precisely known, but a partially known nominal model is available. This partially known nonlinear model is first decomposed into an affine term plus an unknown high-order term via Jacobian linearization. The linearization residue, combined with the unmodeled dynamics, is then modeled using an extreme learning machine via supervised learning. The minimax methodology is exploited to deal with the bounded uncertainties. The minimax optimization problem is reformulated as a convex minimization problem and is iteratively solved by a two-layer recurrent neural network. The proposed neurodynamic approach to nonlinear MPC improves computational efficiency and sheds light on the real-time implementability of MPC technology. Simulation results are provided to substantiate the effectiveness and characteristics of the proposed approach.

  11. Analysis of Drop Oscillations Excited by an Electrical Point Force in AC EWOD

    NASA Astrophysics Data System (ADS)

    Oh, Jung Min; Ko, Sung Hee; Kang, Kwan Hyoung

    2008-03-01

    Recently, a few researchers have reported the oscillation of a sessile drop in AC EWOD (electrowetting on dielectrics) and some of its consequences. The drop oscillation problem in AC EWOD is associated with various applications based on electrowetting, such as LOC (lab-on-a-chip) devices, liquid lenses, and electronic displays. However, no theoretical analysis of the problem has been attempted yet. In the present paper, we propose a theoretical model to analyze the oscillation by applying the conventional approach for analyzing drop oscillations. The domain perturbation method is used to derive the shape mode equations under the assumptions of weak viscous flow and small deformation. The Maxwell stress is exerted on the three-phase contact line of the droplet like a point force. The force is regarded as a delta function and is decomposed into the driving forces of each shape mode. The theoretical results on the shape and the frequency responses are compared with experiments, showing qualitative agreement.

  12. Wave chaos in the elastic disk.

    PubMed

    Sondergaard, Niels; Tanner, Gregor

    2002-12-01

    The relation between the elastic wave equation for plane, isotropic bodies and an underlying classical ray dynamics is investigated. We study, in particular, the eigenfrequencies of an elastic disk with free boundaries and their connection to periodic rays inside the circular domain. Even though the problem is separable, wave mixing between the shear and pressure component of the wave field at the boundary leads to an effective stochastic part in the ray dynamics. This introduces phenomena typically associated with classical chaos as, for example, an exponential increase in the number of periodic orbits. Classically, the problem can be decomposed into an integrable part and a simple binary Markov process. Similarly, the wave equation can, in the high-frequency limit, be mapped onto a quantum graph. Implications of this result for the level statistics are discussed. Furthermore, a periodic trace formula is derived from the scattering matrix based on the inside-outside duality between eigenmodes and scattering solutions and periodic orbits are identified by Fourier transforming the spectral density.

  13. Designing and optimizing a healthcare kiosk for the community.

    PubMed

    Lyu, Yongqiang; Vincent, Christopher James; Chen, Yu; Shi, Yuanchun; Tang, Yida; Wang, Wenyao; Liu, Wei; Zhang, Shuangshuang; Fang, Ke; Ding, Ji

    2015-03-01

    Investigating new ways to deliver care, such as the use of self-service kiosks to collect and monitor signs of wellness, supports healthcare efficiency and inclusivity. Self-service kiosks offer this potential, but there is a need for solutions to meet acceptable standards, e.g. provision of accurate measurements. This study investigates the design and optimization of a prototype healthcare kiosk to collect vital signs measures. The design problem was decomposed, formalized, focused and used to generate multiple solutions. Systematic implementation and evaluation allowed for the optimization of measurement accuracy, first for individuals and then for a population. The optimized solution was tested independently to check the suitability of the methods, and quality of the solution. The process resulted in a reduction of measurement noise and an optimal fit, in terms of the positioning of measurement devices. This guaranteed the accuracy of the solution and provides a general methodology for similar design problems. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  14. Kernel Methods for Mining Instance Data in Ontologies

    NASA Astrophysics Data System (ADS)

    Bloehdorn, Stephan; Sure, York

    The amount of ontologies and metadata available on the Web is constantly growing. The successful application of machine learning techniques for learning ontologies from textual data, i.e. mining for the Semantic Web, contributes to this trend. However, no principled approaches exist so far for mining from the Semantic Web. We investigate how machine learning algorithms can be made amenable to directly taking advantage of the rich knowledge expressed in ontologies and associated instance data. Kernel methods have been successfully employed in various learning tasks and provide a clean framework for interfacing between non-vectorial data and machine learning algorithms. In this spirit, we express the problem of mining instances in ontologies as the problem of defining valid corresponding kernels. We present a principled framework for designing such kernels by decomposing the kernel computation into specialized kernels for selected characteristics of an ontology, which can be flexibly assembled and tuned. Initial experiments on real-world Semantic Web data show promising results and demonstrate the usefulness of our approach.
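
    A minimal sketch of the kernel-composition idea follows: specialized kernels for selected characteristics of an ontology instance (here, a set kernel on asserted classes and an RBF kernel on a numeric data property) are assembled into a single valid kernel by a weighted sum. The instance encoding, property names and weights are illustrative assumptions, not the framework proposed in the paper.

```python
# Compose an instance kernel from specialized sub-kernels
# (class-membership intersection kernel + numeric-property RBF kernel).
import numpy as np

instances = [
    {"classes": {"Person", "Researcher"}, "age": 34.0},
    {"classes": {"Person", "Student"},    "age": 22.0},
    {"classes": {"Organization"},         "age": 50.0},   # e.g. years since founding
]

def k_classes(a, b):
    """Set-intersection kernel on asserted class memberships."""
    return float(len(a["classes"] & b["classes"]))

def k_property(a, b, gamma=0.01):
    """RBF kernel on a numeric data property."""
    return float(np.exp(-gamma * (a["age"] - b["age"]) ** 2))

def k_combined(a, b, w=(0.7, 0.3)):
    """A weighted sum (non-negative weights) of valid kernels is again valid."""
    return w[0] * k_classes(a, b) + w[1] * k_property(a, b)

gram = np.array([[k_combined(a, b) for b in instances] for a in instances])
print(gram)      # symmetric PSD Gram matrix, usable by any kernel method
```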

  15. High Performance Computing of Meshless Time Domain Method on Multi-GPU Cluster

    NASA Astrophysics Data System (ADS)

    Ikuno, Soichiro; Nakata, Susumu; Hirokawa, Yuta; Itoh, Taku

    2015-01-01

    High performance computing of the Meshless Time Domain Method (MTDM) on a multi-GPU cluster, using the supercomputer HA-PACS (Highly Accelerated Parallel Advanced system for Computational Sciences) at the University of Tsukuba, is investigated. Generally, the finite difference time domain (FDTD) method is adopted for numerical simulation of electromagnetic wave propagation phenomena. However, the numerical domain must be divided into rectangular meshes, and it is difficult to apply the method to problems posed on complex domains. On the other hand, MTDM can easily handle such problems because it does not require meshes. In the present study, we implement MTDM on a multi-GPU cluster to speed up the method, and numerically investigate its performance. To reduce the computation time, the communication between the decomposed domains is hidden behind the perfectly matched layer (PML) calculation. The results show that MTDM on 128 GPUs is 173 times faster than a single-CPU calculation.
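
    The communication-hiding strategy can be sketched generically with non-blocking MPI (mpi4py): start the halo exchange, do local work that does not need the halos (standing in for the PML computation used in the paper), then finish the exchange and update the boundary. The 1-D layout and the update formulas below are placeholders; the actual implementation runs on GPUs.

```python
# Generic sketch of hiding halo communication behind local work with
# non-blocking MPI (mpi4py). The interior/boundary updates and the 1-D
# neighbor layout are placeholders, not the actual MTDM/PML kernels.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left, right = (rank - 1) % size, (rank + 1) % size

field = np.random.rand(1024)             # local sub-domain data
send_l, send_r = field[:1].copy(), field[-1:].copy()
recv_l, recv_r = np.empty(1), np.empty(1)

# 1) start the non-blocking halo exchange
reqs = [comm.Isend(send_l, dest=left),   comm.Isend(send_r, dest=right),
        comm.Irecv(recv_l, source=left), comm.Irecv(recv_r, source=right)]

# 2) overlap: update the interior while the messages are in flight
field[1:-1] = 0.5 * (field[:-2] + field[2:])

# 3) finish communication, then update the boundary points that needed halos
MPI.Request.Waitall(reqs)
field[0]  = 0.5 * (recv_l[0] + field[1])
field[-1] = 0.5 * (field[-2] + recv_r[0])
```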

  16. A Combined Adaptive Neural Network and Nonlinear Model Predictive Control for Multirate Networked Industrial Process Control.

    PubMed

    Wang, Tong; Gao, Huijun; Qiu, Jianbin

    2016-02-01

    This paper investigates the multirate networked industrial process control problem in a double-layer architecture. First, the output tracking problem for the sampled-data nonlinear plant at the device layer with sampling period T(d) is investigated using adaptive neural network (NN) control, and it is shown that the outputs of the subsystems at the device layer can track the decomposed setpoints. Then, the outputs and inputs of the device layer subsystems are sampled with sampling period T(u) at the operation layer to form the index prediction, which is used to predict the overall performance index at a lower frequency. A radial basis function NN is utilized as the prediction function due to its approximation ability. Then, considering the dynamics of the overall closed-loop system, a nonlinear model predictive control method is proposed to guarantee system stability and compensate for the network-induced delays and packet dropouts. Finally, a continuous stirred tank reactor system is given in the simulation part to demonstrate the effectiveness of the proposed method.

  17. Nonlinear zero-sum differential game analysis by singular perturbation methods

    NASA Technical Reports Server (NTRS)

    Sinar, J.; Farber, N.

    1982-01-01

    A class of nonlinear, zero-sum differential games exhibiting time-scale separation properties can be analyzed by singular-perturbation techniques. The merits of such an analysis, leading to an approximate game solution, as well as the 'well-posedness' of the formulation, are discussed. This approach is shown to be attractive for investigating pursuit-evasion problems; the original multidimensional differential game is decomposed into a 'simple pursuit' (free-stream) game and two independent (boundary-layer) optimal-control problems. Using multiple time-scale boundary-layer models results in a pair of uniformly valid zero-order composite feedback strategies. The dependence of suboptimal strategies on relative geometry and own-state measurements is demonstrated by a three-dimensional, constant-speed example. For game analysis with realistic vehicle dynamics, the technique of forced singular perturbations and a variable modeling approach are proposed. The accuracy of the analysis is evaluated by comparison with the numerical solution of a time-optimal, variable-speed 'game of two cars' in the horizontal plane.

  18. An integral equation formulation for the diffraction from convex plates and polyhedra.

    PubMed

    Asheim, Andreas; Svensson, U Peter

    2013-06-01

    A formulation of the problem of scattering from obstacles with edges is presented. The formulation is based on decomposing the field into geometrical acoustics, first-order, and multiple-order edge diffraction components. An existing secondary-source model for edge diffraction from finite edges is extended to handle multiple diffraction of all orders. It is shown that the multiple-order diffraction component can be found via the solution to an integral equation formulated on pairs of edge points. This gives what can be called an edge source signal. In a subsequent step, this edge source signal is propagated to yield a multiple-order diffracted field, taking all diffraction orders into account. Numerical experiments demonstrate accurate response for frequencies down to 0 for thin plates and a cube. No problems with irregular frequencies, as happen with the Kirchhoff-Helmholtz integral equation, are observed for this formulation. For the axisymmetric scattering from a circular disc, a highly effective symmetric formulation results, and results agree with reference solutions across the entire frequency range.

  19. A quantification method for heat-decomposable methylglyoxal oligomers and its application on 1,3,5-trimethylbenzene SOA

    NASA Astrophysics Data System (ADS)

    Rodigast, Maria; Mutzel, Anke; Herrmann, Hartmut

    2017-03-01

    Methylglyoxal forms oligomeric compounds in the atmospheric aqueous particle phase, which could contribute significantly to the formation of aqueous secondary organic aerosol (aqSOA). Thus far, no suitable method for the quantification of methylglyoxal oligomers is available despite the great effort spent on structure elucidation. In the present study a simplified method was developed to quantify heat-decomposable methylglyoxal oligomers as a sum parameter. The method is based on the thermal decomposition of oligomers into methylglyoxal monomers. Formed methylglyoxal monomers were detected using PFBHA (o-(2,3,4,5,6-pentafluorobenzyl)hydroxylamine hydrochloride) derivatisation and gas chromatography-mass spectrometry (GC/MS) analysis. The method development focused on the heating time (varied between 15 and 48 h), the pH during the heating process (pH = 1-7), and the heating temperature (50, 100 °C). The optimised values of these method parameters are presented. The developed method was applied to quantify heat-decomposable methylglyoxal oligomers formed during the OH-radical oxidation of 1,3,5-trimethylbenzene (TMB) in the Leipzig aerosol chamber (LEipziger AerosolKammer, LEAK). Oligomer formation was investigated as a function of seed particle acidity and relative humidity. A fraction of heat-decomposable methylglyoxal oligomers of up to 8 % of the produced organic particle mass was found, highlighting the importance of those oligomers, formed solely from methylglyoxal, for SOA formation. Overall, the present study provides a new and suitable method for quantification of heat-decomposable methylglyoxal oligomers in the aqueous particle phase.

  20. Particle agglomeration and fuel decomposition in burning slurry droplets

    NASA Astrophysics Data System (ADS)

    Choudhury, P. Roy; Gerstein, Melvin

    In a burning slurry droplet the particles tend to agglomerate and produce large clusters which are difficult to burn. As a consequence, the combustion efficiency is drastically reduced. For such a droplet, the nonlinear D2-t behavior associated with the formation of hard-to-burn agglomerates can be explained if the fuel decomposes on the surface of the particles. This paper deals with analysis and experiments with JP-10 and Diesel #2 slurries prepared with inert SiC and Al2O3 particles. It provides direct evidence of decomposed fuel residue on the surface of particles heated by flame radiation. These decomposed fuel residues act as bonding agents and appear to be responsible for the observed agglomeration of particles in a slurry. Chemical analysis, scanning electron microscope photographs and micro-analysis by electron scattering clearly show the presence of decomposed fuel residue on the surface of the particles. Diesel #2 decomposes relatively easily and therefore leaves a thicker deposit on SiC and forms larger agglomerates than the more stable JP-10. A surface reaction model with particles heated by flame radiation is able to describe the observed trend of the diameter history of the slurry fuel. Additional experiments with particles of lower emissivity (Al2O3) and radiation-absorbing dye validate the theoretical model of the role of flame radiation in fuel decomposition and the formation of agglomerates in burning slurry droplets.

  1. Anomaly detection for medical images based on a one-class classification

    NASA Astrophysics Data System (ADS)

    Wei, Qi; Ren, Yinhao; Hou, Rui; Shi, Bibo; Lo, Joseph Y.; Carin, Lawrence

    2018-02-01

    Detecting an anomaly such as a malignant tumor or a nodule from medical images, including mammogram, CT or PET images, is still an ongoing research problem drawing a lot of attention, with applications in medical diagnosis. A conventional way to address this is to learn a discriminative model using training datasets of negative and positive samples. The learned model can then be used to classify a testing sample into a positive or negative class. However, in medical applications, the high imbalance between negative and positive samples poses a difficulty for learning algorithms, as they will be biased towards the majority group, i.e., the negative one. To address this imbalanced data issue as well as leverage the huge amount of negative samples, i.e., normal medical images, we propose to learn an unsupervised model to characterize the negative class. To make the learned model more flexible and extendable for medical images of different scales, we have designed an autoencoder based on a deep neural network to characterize the negative patches decomposed from large medical images. A testing image is decomposed into patches and then fed into the learned autoencoder to reconstruct these patches. The reconstruction error of each patch is used to classify the patch into a binary class, i.e., positive or negative, leading to a one-class classifier. The positive patches highlight the suspicious areas containing anomalies in a large medical image. The proposed method has been tested on the InBreast dataset and achieves an AUC of 0.84. The main contributions of our work can be summarized as follows. 1) The proposed one-class learning requires only data from one class, i.e., the negative data; 2) the patch-based learning makes the proposed method scalable to images of different sizes and helps avoid the large-scale problem for medical images; 3) the training of the proposed deep convolutional neural network (DCNN)-based autoencoder is fast and stable.
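
    A hedged PyTorch sketch of the patch-reconstruction idea follows: a small autoencoder is trained on negative (normal) patches only, and at test time the per-patch reconstruction error serves as the anomaly score. The tiny architecture, the 32x32 patch size and the random stand-in data are illustrative assumptions, not the DCNN used in the paper.

```python
# One-class anomaly scoring via patch reconstruction error (PyTorch).
# The architecture, patch size and threshold are illustrative only.
import torch
import torch.nn as nn

class PatchAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),    # 32 -> 16
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU())   # 16 -> 8
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(16, 8, 2, stride=2), nn.ReLU(),     # 8 -> 16
            nn.ConvTranspose2d(8, 1, 2, stride=2), nn.Sigmoid())   # 16 -> 32

    def forward(self, x):
        return self.dec(self.enc(x))

def train(model, normal_patches, epochs=5):
    opt, loss_fn = torch.optim.Adam(model.parameters(), 1e-3), nn.MSELoss()
    for _ in range(epochs):
        loss = loss_fn(model(normal_patches), normal_patches)
        opt.zero_grad(); loss.backward(); opt.step()
    return model

def anomaly_scores(model, patches):
    with torch.no_grad():
        return ((model(patches) - patches) ** 2).mean(dim=(1, 2, 3))

normal = torch.rand(64, 1, 32, 32)     # stand-in for negative training patches
model = train(PatchAE(), normal)
print(anomaly_scores(model, torch.rand(4, 1, 32, 32)))   # threshold -> label
```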

  2. Unified commutation-pruning technique for efficient computation of composite DFTs

    NASA Astrophysics Data System (ADS)

    Castro-Palazuelos, David E.; Medina-Melendrez, Modesto Gpe.; Torres-Roman, Deni L.; Shkvarko, Yuriy V.

    2015-12-01

    An efficient computation of a composite-length discrete Fourier transform (DFT), as well as a fast Fourier transform (FFT), of both time and space data sequences in uncertain (non-sparse or sparse) computational scenarios requires specific processing algorithms. Traditional algorithms typically employ some pruning method without any commutations, which prevents them from attaining the potential computational efficiency. In this paper, we propose an alternative unified approach with automatic commutations between three computational modalities aimed at efficient computation of pruned DFTs adapted for variable composite lengths of the non-sparse input-output data. The first modality is an implementation of the direct computation of a composite-length DFT, the second employs the second-order recursive filtering method, and the third performs the new pruned decomposed transform. The pruned decomposed transform algorithm performs decimation in time or space (DIT) in the data acquisition domain and then decimation in frequency (DIF). The unified combination of these three algorithms is referred to as the DFTCOMM technique. Based on the treatment of a combinational hypothesis-testing optimization problem over the preferable allocations between all feasible commuting-pruning modalities, we find the globally optimal solution to the pruning problem, which always requires fewer or, at most, the same number of arithmetic operations as the other feasible modalities. The DFTCOMM method therefore outperforms the existing competing pruning techniques reported in the literature in the attainable savings in the number of required arithmetic operations. Finally, we provide a comparison of DFTCOMM with the recently developed sparse fast Fourier transform (SFFT) algorithmic family. We note that, in sensing scenarios with sparse or non-sparse data Fourier spectra, the DFTCOMM technique is robust against such model uncertainties, in the sense of insensitivity to sparsity/non-sparsity restrictions and to variability of the operating parameters.
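
    Output pruning in its simplest form can be illustrated as follows: when only a few DFT bins are required, evaluating them directly avoids the full N-point FFT. This generic sketch shows only the pruning idea; the commutation logic and the three DFTCOMM modalities are not reproduced.

```python
# Compute only the K needed DFT bins directly instead of a full N-point FFT.
import numpy as np

def pruned_dft(x, bins):
    """Direct evaluation of selected DFT bins: X[k] = sum_n x[n] e^{-2*pi*i*k*n/N}."""
    x = np.asarray(x, dtype=complex)
    n = np.arange(x.size)
    return np.array([np.sum(x * np.exp(-2j * np.pi * k * n / x.size))
                     for k in bins])

rng = np.random.default_rng(1)
x = rng.standard_normal(4096)
wanted = [0, 7, 100]                      # only three outputs are needed
np.testing.assert_allclose(pruned_dft(x, wanted),
                           np.fft.fft(x)[wanted], atol=1e-6)
```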

  3. The Effects of Schema-Broadening Instruction on Second Graders’ Word-Problem Performance and Their Ability to Represent Word Problems with Algebraic Equations: A Randomized Control Study

    PubMed Central

    Fuchs, Lynn S.; Zumeta, Rebecca O.; Schumacher, Robin Finelli; Powell, Sarah R.; Seethaler, Pamela M.; Hamlett, Carol L.; Fuchs, Douglas

    2010-01-01

    The purpose of this study was to assess the effects of schema-broadening instruction (SBI) on second graders’ word-problem-solving skills and their ability to represent the structure of word problems using algebraic equations. Teachers (n = 18) were randomly assigned to conventional word-problem instruction or SBI word-problem instruction, which taught students to represent the structural, defining features of word problems with overarching equations. Intervention lasted 16 weeks. We pretested and posttested 270 students on measures of word-problem skill; analyses that accounted for the nested structure of the data indicated superior word-problem learning for SBI students. Descriptive analyses of students’ word-problem work indicated that SBI helped students represent the structure of word problems with algebraic equations, suggesting that SBI promoted this aspect of students’ emerging algebraic reasoning. PMID:20539822

  4. Management intensity alters decomposition via biological pathways

    USGS Publications Warehouse

    Wickings, Kyle; Grandy, A. Stuart; Reed, Sasha; Cleveland, Cory

    2011-01-01

    Current conceptual models predict that changes in plant litter chemistry during decomposition are primarily regulated by both initial litter chemistry and the stage (or extent) of mass loss. Far less is known about how variations in decomposer community structure (e.g., resulting from different ecosystem management types) could influence litter chemistry during decomposition. Given the recent agricultural intensification occurring globally and the importance of litter chemistry in regulating soil organic matter storage, our objectives were to determine the potential effects of agricultural management on plant litter chemistry and decomposition rates, and to investigate possible links between ecosystem management, litter chemistry and decomposition, and decomposer community composition and activity. We measured decomposition rates, changes in litter chemistry, extracellular enzyme activity, microarthropod communities, and bacterial versus fungal relative abundance in replicated conventional-till, no-till, and old field agricultural sites for both corn and grass litter. After one growing season, litter decomposition under conventional-till was 20% greater than in old field communities. However, decomposition rates in no-till were not significantly different from those in old field or conventional-till sites. After decomposition, grass residue in both conventional- and no-till systems was enriched in total polysaccharides relative to initial litter, while grass litter decomposed in old fields was enriched in nitrogen-bearing compounds and lipids. These differences corresponded with differences in decomposer communities, which also exhibited strong responses to both litter and management type. Overall, our results indicate that agricultural intensification can increase litter decomposition rates, alter decomposer communities, and influence litter chemistry in ways that could have important and long-term effects on soil organic matter dynamics. We suggest that future efforts to more accurately predict soil carbon dynamics under different management regimes may need to explicitly consider how changes in litter chemistry during decomposition are influenced by the specific metabolic capabilities of the extant decomposer communities.

  5. Influence of Litter Diversity on Dissolved Organic Matter Release and Soil Carbon Formation in a Mixed Beech Forest

    PubMed Central

    Scheibe, Andrea; Gleixner, Gerd

    2014-01-01

    We investigated the effect of leaf litter on below-ground carbon export and soil carbon formation in order to understand how litter diversity affects carbon cycling in forest ecosystems. 13C-labeled and unlabeled leaf litter of beech (Fagus sylvatica) and ash (Fraxinus excelsior), characterized by low and high decomposability, were used in a litter exchange experiment in the Hainich National Park (Thuringia, Germany). Litter was added in pure and mixed treatments with either beech or ash labeled with 13C. We collected soil water at 5 cm mineral soil depth below each treatment biweekly and determined dissolved organic carbon (DOC), δ13C values and anion contents. In addition, we measured carbon concentrations and δ13C values in the organic and mineral soil (collected in 1 cm increments) down to 5 cm soil depth at the end of the experiment. Litter-derived C contributes less than 1% to dissolved organic matter (DOM) collected at 5 cm mineral soil depth. The more readily decomposable ash litter released significantly more (0.50±0.17%) litter carbon than beech litter (0.17±0.07%). All soil layers held in total around 30% of litter-derived carbon, indicating the large retention potential for litter-derived C in the top soil. Interestingly, in mixed (ash and beech litter) treatments we did not find a higher contribution of the more readily decomposable ash-derived carbon in DOM, the O horizon or the mineral soil. This suggests that the known selective decomposition of better decomposable litter by soil fauna has no or only minor effects on the release and formation of litter-derived DOM and soil organic matter. Overall our experiment showed that 1) litter-derived carbon is of low importance for dissolved organic carbon release and 2) litter of higher decomposability is decomposed faster, but litter diversity does not influence the carbon flow. PMID:25486628

  7. Synthesis, Characterization, and Processing of Copper, Indium, and Gallium Dithiocarbamates for Energy Conversion Applications

    NASA Technical Reports Server (NTRS)

    Duraj, S. A.; Duffy, N. V.; Hepp, A. F.; Cowen, J. E.; Hoops, M. D.; Brothrs, S. M.; Baird, M. J.; Fanwick, P. E.; Harris, J. D.; Jin, M. H.-C.

    2009-01-01

    Ten dithiocarbamate complexes of indium(III) and gallium(III) have been prepared and characterized by elemental analysis, infrared spectra and melting point. Each complex was decomposed thermally and its decomposition products separated and identified by combined gas chromatography/mass spectrometry. Their potential utility as photovoltaic materials precursors was assessed. Bis(dibenzyldithiocarbamato)- and bis(diethyldithiocarbamato)copper(II), Cu(S2CN(CH2C6H5)2)2 and Cu(S2CN(C2H5)2)2 respectively, have also been examined for their suitability as precursors for copper sulfides for the fabrication of photovoltaic materials. Each complex was decomposed thermally and the products analyzed by GC/MS, TGA and FTIR. The dibenzyl derivative complex decomposed at a lower temperature (225-320 C) to yield CuS as the product. The diethyl derivative complex decomposed at a higher temperature (260-325 C) to yield Cu2S. No Cu-containing fragments were noted in the mass spectra. Unusual recombination fragments were observed in the mass spectra of the diethyl derivative. Tris(bis(phenylmethyl)carbamodithioato-S,S'), commonly referred to as tris(N,N-dibenzyldithiocarbamato)indium(III), In(S2CNBz2)3, was synthesized and characterized by single crystal X-ray crystallography. The compound crystallizes in the triclinic space group P1(bar) with two molecules per unit cell. The material was further characterized using a novel analytical system employing the combined powers of thermogravimetric analysis, gas chromatography/mass spectrometry, and Fourier transform infrared (FT-IR) spectroscopy to investigate its potential use as a precursor for the chemical vapor deposition (CVD) of thin film materials for photovoltaic applications. Upon heating, the material thermally decomposes to release CS2 and benzyl moieties into the gas phase, resulting in bulk In2S3. Preliminary spray CVD experiments indicate that In(S2CNBz2)3 decomposed on a Cu substrate reacts to produce stoichiometric CuInS2 films.

  8. Advanced Numerical Methods for Computing Statistical Quantities of Interest from Solutions of SPDES

    DTIC Science & Technology

    2012-01-19

    and related optimization problems; developing numerical methods for option pricing problems in the presence of random arbitrage return. 1. Novel...equations (BSDEs) are connected to nonlinear partial differential equations and non-linear semigroups, to the theory of hedging and pricing of contingent...the presence of random arbitrage return [3] We consider option pricing problems when we relax the condition of no arbitrage in the Black-Scholes

  9. Seismic random noise attenuation method based on empirical mode decomposition of Hausdorff dimension

    NASA Astrophysics Data System (ADS)

    Yan, Z.; Luan, X.

    2017-12-01

    Empirical mode decomposition (EMD) is a noise-suppression approach based on wave-field separation, exploiting the scale differences between effective signal and noise. However, since the complexity of the real seismic wave field results in serious mode aliasing, denoising with this method alone is not ideal. Based on the multi-scale decomposition characteristics of the EMD algorithm, combined with Hausdorff dimension constraints, we propose a new method for seismic random noise attenuation. First, we apply the EMD algorithm to adaptively decompose seismic data and obtain a series of intrinsic mode functions (IMFs) with different scales. Based on the difference in Hausdorff dimension between effective signals and random noise, we identify the IMF components mixed with random noise. Then we use threshold correlation filtering to separate the valid signal and the random noise effectively. Compared with the traditional EMD method, the results show that the new method has a better suppression effect. Implementation: the EMD algorithm is used to decompose seismic signals into IMF sets, whose spectra are analyzed. Since most of the random noise is high-frequency noise, the IMF sets can be divided into three categories: the first category is the effective-wave components at the larger scales; the second category is the noise part at the smaller scales; the third category is the IMF components containing both signal and random noise. The third kind of IMF component is then processed by the Hausdorff dimension algorithm: an appropriate time-window size, initial step and increment are selected to calculate the instantaneous Hausdorff dimension of each component. The dimension of the random noise is between 1.0 and 1.05, while the dimension of the effective wave is between 1.05 and 2.0. On this basis, according to the dimension difference between random noise and effective signal, we extract the sample points whose fractal dimension is less than or equal to 1.05 from each IMF component to separate the residual noise. Using the IMF components after this dimension filtering, together with the effective-wave IMF components from the first selection, reconstruction yields the de-noised result.
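
    The dimension-threshold step can be sketched as follows, assuming the IMFs are already available from any EMD implementation (for example the PyEMD package). Higuchi's estimator is used here as a stand-in for the Hausdorff dimension, and whole IMFs are filtered against the 1.05 cutoff quoted above instead of the paper's windowed, point-wise selection.

```python
# Filter IMFs by a fractal-dimension threshold and reconstruct the signal.
import numpy as np

def higuchi_fd(x, kmax=10):
    """Higuchi (1988) fractal-dimension estimate of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    N = x.size
    log_len, log_inv_k = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, N, k)
            if idx.size < 2:
                continue
            curve = np.abs(np.diff(x[idx])).sum()
            lengths.append(curve * (N - 1) / ((idx.size - 1) * k) / k)
        log_len.append(np.log(np.mean(lengths)))
        log_inv_k.append(np.log(1.0 / k))
    slope, _ = np.polyfit(log_inv_k, log_len, 1)   # FD = slope of log L vs log(1/k)
    return slope

def reconstruct_signal(imfs, noise_dim=1.05):
    """Sum only the IMFs whose estimated dimension exceeds the noise cutoff."""
    keep = [imf for imf in imfs if higuchi_fd(imf) > noise_dim]
    return np.sum(keep, axis=0) if keep else np.zeros_like(imfs[0])

# usage with IMFs from an external decomposition, e.g. PyEMD:
#   imfs = PyEMD.EMD().emd(trace)
#   clean = reconstruct_signal(imfs)
```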

  10. Fatigue crack growth model RANDOM2 user manual, appendix 1

    NASA Technical Reports Server (NTRS)

    Boyce, Lola; Lovelace, Thomas B.

    1989-01-01

    The FORTRAN program RANDOM2 is documented. RANDOM2 is based on fracture mechanics using a probabilistic fatigue crack growth model. It predicts the random lifetime of an engine component to reach a given crack size. Included in this user manual are details regarding the theoretical background of RANDOM2, input data, instructions and a sample problem illustrating the use of RANDOM2. Appendix A gives information on the physical quantities, their symbols, FORTRAN names, and both SI and U.S. Customary units. Appendix B includes photocopies of the actual computer printout corresponding to the sample problem. Appendices C and D detail the IMSL, Ver. 10(1), subroutines and functions called by RANDOM2 and a SAS/GRAPH(2) program that can be used to plot both the probability density function (p.d.f.) and the cumulative distribution function (c.d.f.).

  11. The random energy model in a magnetic field and joint source channel coding

    NASA Astrophysics Data System (ADS)

    Merhav, Neri

    2008-09-01

    We demonstrate that there is an intimate relationship between the magnetic properties of Derrida’s random energy model (REM) of spin glasses and the problem of joint source-channel coding in Information Theory. In particular, typical patterns of erroneously decoded messages in the coding problem have “magnetization” properties that are analogous to those of the REM in certain phases, where the non-uniformity of the distribution of the source in the coding problem plays the role of an external magnetic field applied to the REM. We also relate the ensemble performance (random coding exponents) of joint source-channel codes to the free energy of the REM in its different phases.

  12. Using Mid Infrared Spectroscopy to Predict the Decomposability of Soil Organic Matter Stored in Arctic Tundra Soils

    USDA-ARS?s Scientific Manuscript database

    The large amounts of organic matter stored in permafrost-region soils are preserved in a relatively undecomposed state by the cold and wet environmental conditions limiting decomposer activity. With pending climate changes and the potential for warming of Arctic soils, there is a need to better unde...

  13. Draft genome sequence of the white-rot fungus Obba rivulosa 3A-2

    Treesearch

    Otto Miettinen; Robert Riley; Kerrie Barry; Daniel Cullen; Ronald P. de Vries; Matthieu Hainaut; Annele Hatakka; Bernard Henrissat; Kristiina Hilden; Rita Kuo; Kurt LaButti; Anna Lipzen; Miia R. Makela; Laura Sandor; Joseph W. Spatafora; Igor V. Grigoriev; David S. Hibbett

    2016-01-01

    We report here the first genome sequence of the white-rot fungus Obba rivulosa (Polyporales, Basidiomycota), a polypore known for its lignin-decomposing ability. The genome is based on the homokaryon 3A-2 originating in Finland. The genome is typical in size and carbohydrate-active enzyme (CAZy) content for wood-decomposing basidiomycetes.

  14. Environmental Influences on Well-Being: A Dyadic Latent Panel Analysis of Spousal Similarity

    ERIC Educational Resources Information Center

    Schimmack, Ulrich; Lucas, Richard E.

    2010-01-01

    This article uses dyadic latent panel analysis (DLPA) to examine environmental influences on well-being. DLPA requires longitudinal dyadic data. It decomposes the observed variance of both members of a dyad into a trait, state, and an error component. Furthermore, state variance is decomposed into initial and new state variance. Total observed…

  15. Understanding E-Learning Adoption among Brazilian Universities: An Application of the Decomposed Theory of Planned Behavior

    ERIC Educational Resources Information Center

    Dos Santos, Luiz Miguel Renda; Okazaki, Shintaro

    2013-01-01

    This study sheds light on the organizational dimensions underlying e-learning adoption among Brazilian universities. We propose an organizational e-learning adoption model based on the decomposed theory of planned behavior (TPB). A series of hypotheses are posited with regard to the relationships among the proposed constructs. The model is…

  16. Dust to dust - How a human corpse decomposes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vass, Arpad Alexander

    2010-01-01

    After death, the human body decomposes through four stages. The final, skeleton stage may be reached as quickly as two weeks or as slowly as two years, depending on temperature, humidity and other environmental conditions where the body lies. Dead bodies emit a surprising array of chemicals, from benzene to freon, which can help forensic scientists find clandestine graves.

  17. Chemical vapor deposition of group IIIB metals

    DOEpatents

    Erbil, A.

    1989-11-21

    Coatings of Group IIIB metals and compounds thereof are formed by chemical vapor deposition, in which a heat-decomposable organometallic compound of the formula given in the patent, where M is a Group IIIB metal such as lanthanum or yttrium and R is a lower alkyl or alkenyl radical containing from 2 to about 6 carbon atoms, is contacted with a heated substrate which is above the decomposition temperature of the organometallic compound. The pure metal is obtained when the compound of formula 1 is the sole heat-decomposable compound present and deposition is carried out under nonoxidizing conditions. Intermetallic compounds such as lanthanum telluride can be deposited from a lanthanum compound of formula 1 and a heat-decomposable tellurium compound under nonoxidizing conditions.

  18. WELDING PROCESS

    DOEpatents

    Zambrow, J.; Hausner, H.

    1957-09-24

    A method of joining metal parts for the preparation of relatively long, thin fuel element cores of uranium or alloys thereof for nuclear reactors is described. The process includes the steps of cleaning the surfaces to be joined, placing the surfaces together, and providing between and in contact with them a layer of a compound, in finely divided form, that is decomposable to metal by heat. The fuel element members are then heated at the contact zone and maintained under pressure during the heating to decompose the compound to metal and to sinter the members and the reduced metal together, producing a weld. The preferred class of decomposable compounds is the metal hydrides, such as uranium hydride, which release hydrogen, thus providing a reducing atmosphere in the vicinity of the welding operation.

  19. Catalytic cartridge SO.sub.3 decomposer

    DOEpatents

    Galloway, Terry R.

    1982-01-01

    A catalytic cartridge surrounding a heat pipe driven by a heat source is utilized as a SO.sub.3 decomposer for thermochemical hydrogen production. The cartridge has two embodiments, a cross-flow cartridge and an axial flow cartridge. In the cross-flow cartridge, SO.sub.3 gas is flowed through a chamber and incident normally to a catalyst coated tube extending through the chamber, the catalyst coated tube surrounding the heat pipe. In the axial-flow cartridge, SO.sub.3 gas is flowed through the annular space between concentric inner and outer cylindrical walls, the inner cylindrical wall being coated by a catalyst and surrounding the heat pipe. The modular cartridge decomposer provides high thermal efficiency, high conversion efficiency, and increased safety.

  20. Chemical vapor deposition of group IIIB metals

    DOEpatents

    Erbil, Ahmet

    1989-01-01

    Coatings of Group IIIB metals and compounds thereof are formed by chemical vapor deposition, in which a heat-decomposable organometallic compound of the formula (I), where M is a Group IIIB metal such as lanthanum or yttrium and R is a lower alkyl or alkenyl radical containing from 2 to about 6 carbon atoms, is contacted with a heated substrate which is above the decomposition temperature of the organometallic compound. The pure metal is obtained when the compound of the formula I is the sole heat-decomposable compound present and deposition is carried out under nonoxidizing conditions. Intermetallic compounds such as lanthanum telluride can be deposited from a lanthanum compound of formula I and a heat-decomposable tellurium compound under nonoxidizing conditions.

  1. Method for forming hermetic seals

    NASA Technical Reports Server (NTRS)

    Gallagher, Brian D.

    1987-01-01

    A firmly adherent film of a bondable metal, such as silver, is applied to the surface of glass or another substrate by decomposing a layer of a solution of a thermally decomposable metallo-organic deposition (MOD) compound, such as silver neodecanoate in xylene. The MOD compound thermally decomposes into metal and gaseous by-products. Sealing is accomplished by depositing a layer of bonding metal, such as solder or a brazing alloy, on the metal film and then forming an assembly with another high-melting-point metal surface, such as a layer of Kovar. When the assembly is heated above the melting temperature of the solder, the solder flows and wets the adjacent surfaces, forming a hermetic seal between the metal film and the metal surface when the assembly cools.

  2. Artificial Epigenetic Networks: Automatic Decomposition of Dynamical Control Tasks Using Topological Self-Modification.

    PubMed

    Turner, Alexander P; Caves, Leo S D; Stepney, Susan; Tyrrell, Andy M; Lones, Michael A

    2017-01-01

    This paper describes the artificial epigenetic network, a recurrent connectionist architecture that is able to dynamically modify its topology in order to automatically decompose and solve dynamical problems. The approach is motivated by the behavior of gene regulatory networks, particularly the epigenetic process of chromatin remodeling that leads to topological change and which underlies the differentiation of cells within complex biological organisms. We expected this approach to be useful in situations where there is a need to switch between different dynamical behaviors, and do so in a sensitive and robust manner in the absence of a priori information about problem structure. This hypothesis was tested using a series of dynamical control tasks, each requiring solutions that could express different dynamical behaviors at different stages within the task. In each case, the addition of topological self-modification was shown to improve the performance and robustness of controllers. We believe this is due to the ability of topological changes to stabilize attractors, promoting stability within a dynamical regime while allowing rapid switching between different regimes. Post hoc analysis of the controllers also demonstrated how the partitioning of the networks could provide new insights into problem structure.

  3. Quantum Metropolis sampling.

    PubMed

    Temme, K; Osborne, T J; Vollbrecht, K G; Poulin, D; Verstraete, F

    2011-03-03

    The original motivation to build a quantum computer came from Feynman, who imagined a machine capable of simulating generic quantum mechanical systems--a task that is believed to be intractable for classical computers. Such a machine could have far-reaching applications in the simulation of many-body quantum physics in condensed-matter, chemical and high-energy systems. Part of Feynman's challenge was met by Lloyd, who showed how to approximately decompose the time evolution operator of interacting quantum particles into a short sequence of elementary gates, suitable for operation on a quantum computer. However, this left open the problem of how to simulate the equilibrium and static properties of quantum systems. This requires the preparation of ground and Gibbs states on a quantum computer. For classical systems, this problem is solved by the ubiquitous Metropolis algorithm, a method that has basically acquired a monopoly on the simulation of interacting particles. Here we demonstrate how to implement a quantum version of the Metropolis algorithm. This algorithm permits sampling directly from the eigenstates of the Hamiltonian, and thus evades the sign problem present in classical simulations. A small-scale implementation of this algorithm should be achievable with today's technology.

  4. Image wavelet decomposition and applications

    NASA Technical Reports Server (NTRS)

    Treil, N.; Mallat, S.; Bajcsy, R.

    1989-01-01

    The general problem of computer vision has been investigated for more than 20 years and is still one of the most challenging fields in artificial intelligence. Indeed, a look at the human visual system gives an idea of the complexity of any solution to the problem of visual recognition. This general task can be decomposed into a whole hierarchy of problems ranging from pixel processing to high-level segmentation and complex object recognition. Contrasting an image at different representations provides useful information such as edges. An example of low-level signal and image processing using the theory of wavelets is introduced, which provides the basis for multiresolution representation. Like the human brain, we use a multiorientation process which detects features independently in different orientation sectors. Thus, images of the same orientation but of different resolutions are contrasted to gather information about an image. An interesting image representation using energy zero crossings is developed. This representation is shown experimentally to be complete and leads to some higher-level applications such as edge and corner finding, which in turn provide two basic steps toward image segmentation. The possibilities of feedback between different levels of processing are also discussed.
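
    A minimal multiresolution decomposition of an image can be obtained with PyWavelets, giving a coarse approximation plus oriented (horizontal, vertical, diagonal) detail bands per level; large detail coefficients mark edge-like structure. The Haar wavelet, the two-level depth and the threshold below are arbitrary choices, not the representation developed in the paper.

```python
# Two-level 2-D wavelet decomposition and a crude detail-based edge map.
import numpy as np
import pywt

rng = np.random.default_rng(0)
image = rng.random((256, 256))                 # stand-in for a grayscale image

coeffs = pywt.wavedec2(image, wavelet="haar", level=2)
approx = coeffs[0]                             # low-pass residue at level 2
for i, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
    print(f"detail set {i} (coarse to fine): {cH.shape} (H, V, D)")

# edge-like structure shows up as large detail coefficients; threshold the
# finest horizontal/vertical bands for a crude edge map
cH1, cV1, _ = coeffs[-1]
edges = (np.abs(cH1) + np.abs(cV1)) > 0.5
print("fraction of 'edge' coefficients:", edges.mean())
```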

  5. Analytical Solutions for Rumor Spreading Dynamical Model in a Social Network

    NASA Astrophysics Data System (ADS)

    Fallahpour, R.; Chakouvari, S.; Askari, H.

    2015-03-01

    In this paper, the Laplace Adomian decomposition method (LADM) is utilized to evaluate a rumor-spreading model. First, a succinct review is given of the use of analytical methods such as the Adomian decomposition method, the variational iteration method and the homotopy analysis method for epidemic models and biomathematics. A rumor-spreading model incorporating a forgetting mechanism is then considered, and LADM is applied to solve it. By means of this method, a general solution is obtained that can readily be employed to assess the rumor model without any computer program. The resulting solutions are discussed for different cases and parameters. Furthermore, the method is shown to be straightforward and fruitful for analyzing equations with complicated terms, such as the rumor model. Comparison with numerical methods reveals that LADM is powerful and accurate for eliciting solutions of this model. It is concluded that the method is well suited to this problem and can provide researchers with a powerful vehicle for scrutinizing rumor models in diverse kinds of social networks such as Facebook, YouTube, Flickr, LinkedIn and Twitter.

  6. Behavioral Family Intervention for Children with Developmental Disabilities and Behavioral Problems

    ERIC Educational Resources Information Center

    Roberts, Clare; Mazzucchelli, Trevor; Studman, Lisa; Sanders, Matthew R.

    2006-01-01

    The outcomes of a randomized clinical trial of a new behavioral family intervention, Stepping Stones Triple P, for preschoolers with developmental and behavior problems are presented. Forty-eight children with developmental disabilities participated, 27 randomly allocated to an intervention group and 20 to a wait-list control group. Parents…

  7. General stochastic variational formulation for the oligopolistic market equilibrium problem with excesses

    NASA Astrophysics Data System (ADS)

    Barbagallo, Annamaria; Di Meglio, Guglielmo; Mauro, Paolo

    2017-07-01

    The aim of the paper is to study, in a Hilbert space setting, a general random oligopolistic market equilibrium problem in the presence of both production and demand excesses, and to characterize the random Cournot-Nash equilibrium principle by means of a stochastic variational inequality. Some existence results are presented.
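
    Schematically, and without reproducing the paper's specific operator or constraint set, such a characterization takes the generic form of a variational inequality over the set of feasible random strategies:

```latex
% Generic (schematic) form of a variational inequality characterization:
% find $u^\ast \in \mathbb{K}$ such that
\[
  \langle A(u^\ast),\, v - u^\ast \rangle \;\ge\; 0
  \qquad \text{for all } v \in \mathbb{K},
\]
% where $\mathbb{K}$ is the closed convex set of feasible random strategies
% (production, demand and excess variables) in the underlying Hilbert space
% and $A$ collects the cost and price operators.
```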

  8. Effectiveness of a Parent Training Program in (Pre)Adolescence: Evidence from a Randomized Controlled Trial

    ERIC Educational Resources Information Center

    Leijten, Patty; Overbeek, Geertjan; Janssens, Jan M. A. M.

    2012-01-01

    The present randomized controlled trial examined the effectiveness of the parent training program Parents and Children Talking Together (PCTT) for parents with children in the preadolescent period who experience parenting difficulties. The program is focused on reducing child problem behavior by improving parents' communication and problem solving…

  9. The Development of an Internet-Based Treatment for Problem Gamblers and Concerned Significant Others: A Pilot Randomized Controlled Trial.

    PubMed

    Nilsson, Anders; Magnusson, Kristoffer; Carlbring, Per; Andersson, Gerhard; Gumpert, Clara Hellner

    2018-06-01

    Problem gambling creates significant harm for the gambler and for concerned significant others (CSOs). While several studies have investigated the effects of individual cognitive behavioral therapy (CBT) for problem gambling, less is known about the effects of involving CSOs in treatment. Behavioral couples therapy (BCT) has shown promising results when working with substance use disorders by involving both the user and a CSO. This pilot study investigated BCT for problem gambling, as well as the feasibility of performing a larger scale randomized controlled trial. 36 participants, 18 gamblers and 18 CSOs, were randomized to either BCT or individual CBT for the gambler. Both interventions were Internet-delivered self-help interventions with therapist support. Both groups of gamblers improved on all outcome measures, but there were no differences between the groups. The CSOs in the BCT group lowered their scores on anxiety and depression more than the CSOs of those randomized to the individual CBT group did. The implications of the results and the feasibility of the trial are discussed.

  10. Soil fauna and leaf species, but not species diversity, affect initial soil erosion in a subtropical forest plantation

    NASA Astrophysics Data System (ADS)

    Seitz, Steffen; Goebes, Philipp; Assmann, Thorsten; Schuldt, Andreas; Scholten, Thomas

    2017-04-01

    In subtropical parts of China, high rainfall intensities cause continuous soil losses and thereby severe harm to ecosystems. In woodlands, it is not the tree canopy, but mostly an intact forest floor that provides protection from soil erosion. Although the protective role of leaf litter covers against soil losses has been known for a long time, little research has been conducted on the processes involved. For instance, the role of different leaf species and leaf species diversity has been widely disregarded. Furthermore, the impact of soil meso- and macrofauna within the litter layer on soil losses remains unclear. To investigate how leaf litter species and diversity as well as soil meso- and macrofauna affect sediment discharge in a subtropical forest ecosystem, a field experiment was carried out in Xingangshan, Jiangxi Province, PR China (BEF China). A full-factorial random design with 96 micro-scale runoff plots and seven domestic leaf species in three diversity levels, plus a bare ground treatment, was established. Erosion was initiated with a rainfall simulator. This study confirms that leaf litter cover generally protects forest soils from water erosion (-82 % sediment discharge on leaf-covered plots compared to bare plots) and that this protection is gradually removed as the litter layer decomposes. Different leaf species showed variable impacts on sediment discharge and thus erosion control. This effect can be related to differences in leaf habitus, leaf decomposition rates and the food preferences of litter-decomposing meso- and macrofauna. In our experiment, runoff plots with leaf litter from Machilus thunbergii in monoculture showed the highest sediment discharge (68.0 g m-2), whereas plots with Cyclobalanopsis glauca in monoculture showed the smallest rates (7.9 g m-2). At the same time, neither leaf species diversity nor functional diversity showed any significant influence; only a negative trend could be observed. Nevertheless, the protective effect of the leaf litter layer was influenced by the presence (or absence) of soil meso- and macrofauna: fauna presence increased soil erosion rates significantly, by 58 %. It is assumed that this faunal effect arises from arthropods loosening and processing the soil surface as well as fragmenting and decomposing the protective leaf litter cover. Thus, the effects of this fauna group on sediment discharge have to be considered in soil erosion experiments.

  11. Multicasting for all-optical multifiber networks

    NASA Astrophysics Data System (ADS)

    Köksal, Fatih; Ersoy, Cem

    2007-02-01

    All-optical wavelength-routed WDM WANs can support the high bandwidth and long session duration requirements of application scenarios such as interactive distance learning or on-line diagnosis of patients simultaneously in different hospitals. However, multifiber and limited sparse light splitting and wavelength conversion capabilities of switches result in a difficult optimization problem. We attack this problem using a layered graph model. The problem is defined as a k-edge-disjoint degree-constrained Steiner tree problem for routing and fiber and wavelength assignment of k multicasts. A mixed integer linear programming formulation for the problem is given, and a solution using CPLEX is provided. However, the complexity of the problem grows quickly with respect to the number of edges in the layered graph, which depends on the number of nodes, fibers, wavelengths, and multicast sessions. Hence, we propose two heuristics [the layered all-optical multicast algorithm (LAMA) and conservative fiber and wavelength assignment (C-FWA)] to compare with CPLEX, existing work, and unicasting. Extensive computational experiments show that LAMA's performance is very close to CPLEX, and it is significantly better than existing work and C-FWA for nearly all metrics, since LAMA jointly optimizes the routing and fiber-wavelength assignment phases, whereas the other candidates attack the problem by decomposing it into two phases. Experiments also show that important metrics (e.g., session and group blocking probability, transmitter wavelength, and fiber conversion resources) are adversely affected by the separation of the two phases. Finally, the fiber-wavelength assignment strategy of C-FWA (Ex-Fit) uses wavelength and fiber conversion resources more effectively than the First Fit.

  12. Grain refinement of a nickel and manganese free austenitic stainless steel produced by pressurized solution nitriding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohammadzadeh, Roghayeh, E-mail: r_mohammadzadeh@sut.ac.ir; Akbari, Alireza, E-mail: akbari@sut.ac.ir

    2014-07-01

    Prolonged exposure at high temperatures during solution nitriding induces grain coarsening which deteriorates the mechanical properties of high nitrogen austenitic stainless steels. In this study, grain refinement of nickel and manganese free Fe–22.75Cr–2.42Mo–1.17N high nitrogen austenitic stainless steel plates was investigated via a two-stage heat treatment procedure. Initially, the coarse-grained austenitic stainless steel samples were subjected to isothermal heating at 700 °C to be decomposed into the ferrite + Cr2N eutectoid structure and then re-austenitized at 1200 °C followed by water quenching. Microstructure and hardness of the samples were characterized using X-ray diffraction, optical and scanning electron microscopy, and micro-hardness testing. The results showed that the as-solution-nitrided steel decomposes non-uniformly into colonies of ferrite and Cr2N nitrides with strip-like morphology after isothermal heat treatment at 700 °C. Additionally, the complete dissolution of the Cr2N precipitates located at the sample edges during re-austenitizing requires longer times than 1 h. In order to avoid this problem, an intermediate nitrogen homogenizing heat treatment cycle at 1200 °C for 10 h was applied before the grain refinement process. As a result, the initial austenite was uniformly decomposed during the first stage, and a fine-grained austenitic structure with an average grain size of about 20 μm was successfully obtained by re-austenitizing for 10 min. - Highlights: • Successful grain refinement of Fe–22.75Cr–2.42Mo–1.17N steel by heat treatment • Using the γ → α + Cr2N reaction for grain refinement of a Ni and Mn free HNASS • Obtaining a single phase austenitic structure with average grain size of ∼ 20 μm • Incomplete dissolution of Cr2N during re-austenitizing at 1200 °C for long times • Reducing re-austenitizing time by homogenizing treatment before grain refinement.

  13. Markov and semi-Markov switching linear mixed models used to identify forest tree growth components.

    PubMed

    Chaubert-Pereira, Florence; Guédon, Yann; Lavergne, Christian; Trottier, Catherine

    2010-09-01

    Tree growth is assumed to be mainly the result of three components: (i) an endogenous component assumed to be structured as a succession of roughly stationary phases separated by marked change points that are asynchronous among individuals, (ii) a time-varying environmental component assumed to take the form of synchronous fluctuations among individuals, and (iii) an individual component corresponding mainly to the local environment of each tree. To identify and characterize these three components, we propose to use semi-Markov switching linear mixed models, i.e., models that combine linear mixed models in a semi-Markovian manner. The underlying semi-Markov chain represents the succession of growth phases and their lengths (endogenous component) whereas the linear mixed models attached to each state of the underlying semi-Markov chain represent, in the corresponding growth phase, both the influence of time-varying climatic covariates (environmental component) as fixed effects, and interindividual heterogeneity (individual component) as random effects. In this article, we address the estimation of Markov and semi-Markov switching linear mixed models in a general framework. We propose a Monte Carlo expectation-maximization-like algorithm whose iterations decompose into three steps: (i) sampling of state sequences given random effects, (ii) prediction of random effects given state sequences, and (iii) maximization. The proposed statistical modeling approach is illustrated by the analysis of successive annual shoots along Corsican pine trunks influenced by climatic covariates. © 2009, The International Biometric Society.

  14. Functional renormalization group and Kohn-Sham scheme in density functional theory

    NASA Astrophysics Data System (ADS)

    Liang, Haozhao; Niu, Yifei; Hatsuda, Tetsuo

    2018-04-01

    Deriving an accurate energy density functional is one of the central problems in condensed matter physics, nuclear physics, and quantum chemistry. We propose a novel method to deduce the energy density functional by combining the idea of the functional renormalization group with the Kohn-Sham scheme in density functional theory. The key idea is to solve the renormalization group flow for the effective action decomposed into a mean-field part and a correlation part. We also propose a simple practical method to quantify the uncertainty associated with the truncation of the correlation part. Taking the φ4 theory in zero dimensions as a benchmark, we demonstrate that our method converges extremely fast to the exact result even in the very strong coupling regime.
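
    Because the zero-dimensional φ4 theory is used as the benchmark, its exact answer reduces to an ordinary integral. The hedged sketch below computes the exact second moment by quadrature, the kind of reference value a truncated flow-equation scheme would be compared against; the action convention and coupling values are assumptions.

      import numpy as np
      from scipy.integrate import quad

      # Zero-dimensional "phi^4 theory": the path integral collapses to an
      # ordinary integral, so exact expectation values are available by
      # quadrature.  The action S = m2*phi^2/2 + lam*phi^4/24 and the coupling
      # values below are illustrative assumptions.
      def exact_phi2(m2, lam):
          S = lambda phi: 0.5 * m2 * phi**2 + lam * phi**4 / 24.0
          Z, _ = quad(lambda p: np.exp(-S(p)), -np.inf, np.inf)
          moment, _ = quad(lambda p: p**2 * np.exp(-S(p)), -np.inf, np.inf)
          return moment / Z

      # Strong-coupling reference point of the kind an approximate functional
      # would be benchmarked against.
      print(exact_phi2(m2=1.0, lam=10.0))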

  15. Shatter cones - An outstanding problem in shock mechanics. [geological impact fracture surface in cratering

    NASA Technical Reports Server (NTRS)

    Milton, D. J.

    1977-01-01

    Shatter cone characteristics are surveyed. Shatter cones, a form of rock fracture in impact structures, apparently form as a shock front interacts with inhomogeneities or discontinuities in the rock. Topics discussed include morphology, conditions of formation, shock pressure of formation, and theories of formation. It is thought that shatter cones are produced within a limited range of shock pressures extending from about 20 to perhaps 250 kbar. Apical angles range from less than 70 deg to over 120 deg. Tentative hypotheses concerning the physical process of shock coning are considered. The range in shock pressures which produce shatter cones might correspond to the range in which shock waves decompose into elastic and deformational fronts.

  16. Vibration energy harvesting with polyphase AC transducers

    NASA Astrophysics Data System (ADS)

    McCullagh, James J.; Scruggs, Jeffrey T.; Asai, Takehiko

    2016-04-01

    Three-phase transduction affords certain advantages in the efficient electromechanical conversion of energy, especially at higher power scales. This paper considers the use of a three-phase electric machine for harvesting energy from vibrations. We consider the use of vector control techniques, which are common in the area of industrial electronics, for optimizing the feedback loops in a stochastically-excited energy harvesting system. To do this, we decompose the problem into two separate feedback loops for direct and quadrature current components, and illustrate how each might be separately optimized to maximize power output. In a simple analytical example, we illustrate how these techniques might be used to gain insight into the tradeoffs in the design of the electronic hardware and the choice of bus voltage.
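
    To make the direct/quadrature decomposition mentioned above concrete, here is a minimal sketch of the amplitude-invariant Clarke/Park transform used in standard vector control; it is not the authors' controller, and the 50 Hz balanced test signal is an assumption.

      import numpy as np

      def abc_to_dq(i_a, i_b, i_c, theta):
          """Amplitude-invariant Clarke/Park transform: map three-phase currents
          onto the rotating direct (d) and quadrature (q) axes at electrical
          angle `theta`, so each feedback loop can act on i_d and i_q separately."""
          # Clarke transform to the stationary alpha-beta frame
          i_alpha = (2.0 / 3.0) * (i_a - 0.5 * i_b - 0.5 * i_c)
          i_beta = (2.0 / 3.0) * (np.sqrt(3) / 2.0) * (i_b - i_c)
          # Park rotation into the synchronous frame
          i_d = i_alpha * np.cos(theta) + i_beta * np.sin(theta)
          i_q = -i_alpha * np.sin(theta) + i_beta * np.cos(theta)
          return i_d, i_q

      # Balanced sinusoidal currents should map to (nearly) constant i_d, i_q.
      t = np.linspace(0.0, 0.1, 1000)
      theta = 2 * np.pi * 50 * t
      ia = np.cos(theta)
      ib = np.cos(theta - 2 * np.pi / 3)
      ic = np.cos(theta + 2 * np.pi / 3)
      print(abc_to_dq(ia, ib, ic, theta)[0].mean())  # ~1.0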

  17. Segmental Refinement: A Multigrid Technique for Data Locality

    DOE PAGES

    Adams, Mark F.; Brown, Jed; Knepley, Matt; ...

    2016-08-04

    In this paper, we investigate a domain decomposed multigrid technique, termed segmental refinement, for solving general nonlinear elliptic boundary value problems. We extend the method first proposed in 1994 by analytically and experimentally investigating its complexity. We confirm that communication of traditional parallel multigrid is eliminated on fine grids, with modest amounts of extra work and storage, while maintaining the asymptotic exactness of full multigrid. We observe an accuracy dependence on the segmental refinement subdomain size, which was not considered in the original analysis. Finally, we present a communication complexity analysis that quantifies the communication costs ameliorated by segmental refinement and report performance results with up to 64K cores on a Cray XC30.

  18. Primary decomposition of zero-dimensional ideals over finite fields

    NASA Astrophysics Data System (ADS)

    Gao, Shuhong; Wan, Daqing; Wang, Mingsheng

    2009-03-01

    A new algorithm is presented for computing primary decomposition of zero-dimensional ideals over finite fields. Like Berlekamp's algorithm for univariate polynomials, the new method is based on the invariant subspace of the Frobenius map acting on the quotient algebra. The dimension of the invariant subspace equals the number of primary components, and a basis of the invariant subspace yields a complete decomposition. Unlike previous approaches for decomposing multivariate polynomial systems, the new method does not need primality testing nor any generic projection, instead it reduces the general decomposition problem directly to root finding of univariate polynomials over the ground field. Also, it is shown how Groebner basis structure can be used to get partial primary decomposition without any root finding.

  19. Cluster synchronization induced by one-node clusters in networks with asymmetric negative couplings

    NASA Astrophysics Data System (ADS)

    Zhang, Jianbao; Ma, Zhongjun; Zhang, Gang

    2013-12-01

    This paper deals with the problem of cluster synchronization in networks with asymmetric negative couplings. By decomposing the coupling matrix into three matrices, and employing Lyapunov function method, sufficient conditions are derived for cluster synchronization. The conditions show that the couplings of multi-node clusters from one-node clusters have beneficial effects on cluster synchronization. Based on the effects of the one-node clusters, an effective and universal control scheme is put forward for the first time. The obtained results may help us better understand the relation between cluster synchronization and cluster structures of the networks. The validity of the control scheme is confirmed through two numerical simulations, in a network with no cluster structure and in a scale-free network.

  20. On distributed wavefront reconstruction for large-scale adaptive optics systems.

    PubMed

    de Visser, Cornelis C; Brunner, Elisabeth; Verhaegen, Michel

    2016-05-01

    The distributed-spline-based aberration reconstruction (D-SABRE) method is proposed for distributed wavefront reconstruction with applications to large-scale adaptive optics systems. D-SABRE decomposes the wavefront sensor domain into any number of partitions and solves a local wavefront reconstruction problem on each partition using multivariate splines. D-SABRE accuracy is within 1% of a global approach with a speedup that scales quadratically with the number of partitions. The D-SABRE is compared to the distributed cumulative reconstruction (CuRe-D) method in open-loop and closed-loop simulations using the YAO adaptive optics simulation tool. D-SABRE accuracy exceeds CuRe-D for low levels of decomposition, and D-SABRE proved to be more robust to variations in the loop gain.

  1. ℓp-Norm Multikernel Learning Approach for Stock Market Price Forecasting

    PubMed Central

    Shao, Xigao; Wu, Kun; Liao, Bifeng

    2012-01-01

    Linear multiple kernel learning models have been used for predicting financial time series. However, ℓ1-norm multiple support vector regression rarely outperforms trivial baselines in practical applications. To allow for robust kernel mixtures that generalize well, we adopt ℓp-norm multiple kernel support vector regression (1 ≤ p < ∞) as a stock price prediction model. The optimization problem is decomposed into smaller subproblems, and an interleaved optimization strategy is employed to solve the regression model. The model is evaluated on forecasting the daily closing prices of the Shanghai Stock Index in China. Experimental results show that the proposed model performs better than the ℓ1-norm multiple support vector regression model. PMID:23365561
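
    The following sketch only illustrates what an ℓp-normalized kernel mixture fed to a support vector regressor looks like; the paper's interleaved optimization of the kernel weights is not reproduced, and the synthetic data, kernel bandwidths, and fixed weights are assumptions.

      import numpy as np
      from sklearn.metrics.pairwise import rbf_kernel
      from sklearn.svm import SVR

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 5))                # e.g. lagged returns (assumed)
      y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

      # Base kernels at several bandwidths.
      gammas = [0.1, 1.0, 10.0]
      kernels = [rbf_kernel(X, X, gamma=g) for g in gammas]

      # Fixed kernel weights normalized on the l_p ball (p = 2 here); the paper
      # learns these weights jointly with the regressor, which this sketch skips.
      p = 2.0
      beta = np.array([1.0, 1.0, 1.0])
      beta /= np.linalg.norm(beta, ord=p)
      K = sum(b * Km for b, Km in zip(beta, kernels))

      model = SVR(kernel="precomputed", C=10.0).fit(K, y)
      print(model.predict(K)[:5])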

  2. Simulation of blood flow through an artificial heart

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Chang, I-Dee; Rogers, Stuart E.; Kwak, Dochan

    1991-01-01

    A numerical simulation of the incompressible viscous flow through a prosthetic tilting disk heart valve is presented in order to demonstrate the current capability to model unsteady flows with moving boundaries. Both steady state and unsteady flow calculations are done by solving the incompressible Navier-Stokes equations in 3-D generalized curvilinear coordinates. In order to handle the moving boundary problems, the chimera grid embedding scheme which decomposes a complex computational domain into several simple subdomains is used. An algebraic turbulence model for internal flows is incorporated to reach the physiological values of Reynolds number. Good agreement is obtained between the numerical results and experimental measurements. It is found that the tilting disk valve causes large regions of separated flow, and regions of high shear.

  3. Automated Decomposition of Model-based Learning Problems

    NASA Technical Reports Server (NTRS)

    Williams, Brian C.; Millar, Bill

    1996-01-01

    A new generation of sensor rich, massively distributed autonomous systems is being developed that has the potential for unprecedented performance, such as smart buildings, reconfigurable factories, adaptive traffic systems and remote earth ecosystem monitoring. To achieve high performance these massive systems will need to accurately model themselves and their environment from sensor information. Accomplishing this on a grand scale requires automating the art of large-scale modeling. This paper presents a formalization of decompositional model-based learning (DML), a method developed by observing a modeler's expertise at decomposing large scale model estimation tasks. The method exploits a striking analogy between learning and consistency-based diagnosis. Moriarty, an implementation of DML, has been applied to thermal modeling of a smart building, demonstrating a significant improvement in learning rate.

  4. Newton–Hooke-type symmetry of anisotropic oscillators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, P.M., E-mail: zhpm@impcas.ac.cn; Horvathy, P.A., E-mail: horvathy@lmpt.univ-tours.fr; Laboratoire de Mathématiques et de Physique Théorique, Université de Tours

    2013-06-15

    Rotation-less Newton–Hooke-type symmetry, found recently in the Hill problem, and instrumental for explaining the center-of-mass decomposition, is generalized to an arbitrary anisotropic oscillator in the plane. Conversely, the latter system is shown, by the orbit method, to be the most general one with such a symmetry. Full Newton–Hooke symmetry is recovered in the isotropic case. Star escape from a galaxy is studied as an application. -- Highlights: ► Rotation-less Newton–Hooke (NH) symmetry is generalized to an arbitrary anisotropic oscillator. ► The orbit method is used to find the most general case for rotation-less NH symmetry. ► The NH symmetry is decomposed into Heisenberg algebras based on chiral decomposition.

  5. Temporal behavior of the effective diffusion coefficients for transport in heterogeneous saturated aquifers

    NASA Astrophysics Data System (ADS)

    Suciu, N.; Vamos, C.; Vereecken, H.; Vanderborght, J.; Hardelauf, H.

    2003-04-01

    When the small-scale transport is modeled by a Wiener process and the large-scale heterogeneity by a random velocity field, the effective coefficients, Deff, can be decomposed in terms of the local coefficient, D, a contribution of the random advection, Dadv, and a contribution of the randomness of the trajectory of the plume center of mass, Dcm: Deff = D + Dadv - Dcm. The coefficient Dadv is similar to that introduced by Taylor in 1921, and more recent works associate it with thermodynamic equilibrium. The "ergodic hypothesis" says that over large time intervals Dcm vanishes and the effect of the heterogeneity is described by Dadv = Deff - D. In this work we investigate numerically the long-time behavior of the effective coefficients as well as the validity of the ergodic hypothesis. The transport in every realization of the velocity field is modeled with the Global Random Walk algorithm, which is able to track as many particles as necessary to achieve a statistically reliable simulation of the process. Averages over realizations are then used to estimate mean coefficients and standard deviations. In order to remain within the frame of most theoretical approaches, the velocity field was generated in a linear approximation and the logarithm of the hydraulic conductivity was taken to have an exponentially decaying correlation with variance equal to 0.1. Our results show that even under these idealized conditions, the effective coefficients tend to asymptotic constant values only when the plume travels thousands of correlation lengths (while first-order theories usually predict Fickian behavior after tens of correlation lengths) and that the ergodicity conditions are still far from being met.
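
    As a toy illustration of the decomposition Deff = D + Dadv - Dcm quoted above, the sketch below estimates plume-spread and center-of-mass coefficients from an ensemble of particle trajectories; the array layout and finite-difference estimators are assumptions, not the Global Random Walk implementation.

      import numpy as np

      # x has shape (n_realizations, n_particles, n_times): one coordinate of
      # each particle in each realization of the velocity field.
      def dispersion_coefficients(x, times, D_local):
          plume_var = x.var(axis=1).mean(axis=0)   # mean within-realization spread
          cm_var = x.mean(axis=1).var(axis=0)      # spread of plume centers of mass
          # Half the slope of a variance-versus-time curve is a diffusion coefficient.
          D_eff = 0.5 * np.gradient(plume_var, times)
          D_cm = 0.5 * np.gradient(cm_var, times)
          D_adv = D_eff - D_local + D_cm           # rearranged from Deff = D + Dadv - Dcm
          return D_eff, D_adv, D_cm

      # Synthetic check: pure Brownian motion (no velocity field) should give
      # D_eff close to D_local and D_cm close to zero for many particles.
      rng = np.random.default_rng(1)
      dt, D_local = 0.1, 0.5
      steps = rng.normal(scale=np.sqrt(2 * D_local * dt), size=(10, 1000, 200))
      x = np.cumsum(steps, axis=2)
      times = dt * np.arange(1, 201)
      D_eff, D_adv, D_cm = dispersion_coefficients(x, times, D_local)
      print(D_eff[-1], D_cm[-1])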

  6. Toward a Better Understanding of the Relationship between Belief in the Paranormal and Statistical Bias: The Potential Role of Schizotypy

    PubMed Central

    Dagnall, Neil; Denovan, Andrew; Drinkwater, Kenneth; Parker, Andrew; Clough, Peter

    2016-01-01

    The present paper examined relationships between schizotypy (measured by the Oxford-Liverpool Inventory of Feelings and Experience; O-LIFE scale brief), belief in the paranormal (assessed via the Revised Paranormal Belief Scale; RPBS) and proneness to statistical bias (i.e., perception of randomness and susceptibility to conjunction fallacy). Participants were 254 volunteers recruited via convenience sampling. Probabilistic reasoning problems appeared framed within both standard and paranormal contexts. Analysis revealed positive correlations between the Unusual Experience (UnExp) subscale of O-LIFE and paranormal belief measures [RPBS full scale, traditional paranormal beliefs (TPB) and new age philosophy]. Performance on standard problems correlated negatively with UnExp and belief in the paranormal (particularly the TPB dimension of the RPBS). Consideration of specific problem types revealed that perception of randomness associated more strongly with belief in the paranormal than conjunction; both problem types related similarly to UnExp. Structural equation modeling specified that belief in the paranormal mediated the indirect relationship between UnExp and statistical bias. For problems presented in a paranormal context a framing effect occurred. Whilst UnExp correlated positively with conjunction proneness (controlling for perception of randomness), there was no association between UnExp and perception of randomness (controlling for conjunction). PMID:27471481

  7. Toward a Better Understanding of the Relationship between Belief in the Paranormal and Statistical Bias: The Potential Role of Schizotypy.

    PubMed

    Dagnall, Neil; Denovan, Andrew; Drinkwater, Kenneth; Parker, Andrew; Clough, Peter

    2016-01-01

    The present paper examined relationships between schizotypy (measured by the Oxford-Liverpool Inventory of Feelings and Experience; O-LIFE scale brief), belief in the paranormal (assessed via the Revised Paranormal Belief Scale; RPBS) and proneness to statistical bias (i.e., perception of randomness and susceptibility to conjunction fallacy). Participants were 254 volunteers recruited via convenience sampling. Probabilistic reasoning problems appeared framed within both standard and paranormal contexts. Analysis revealed positive correlations between the Unusual Experience (UnExp) subscale of O-LIFE and paranormal belief measures [RPBS full scale, traditional paranormal beliefs (TPB) and new age philosophy]. Performance on standard problems correlated negatively with UnExp and belief in the paranormal (particularly the TPB dimension of the RPBS). Consideration of specific problem types revealed that perception of randomness associated more strongly with belief in the paranormal than conjunction; both problem types related similarly to UnExp. Structural equation modeling specified that belief in the paranormal mediated the indirect relationship between UnExp and statistical bias. For problems presented in a paranormal context a framing effect occurred. Whilst UnExp correlated positively with conjunction proneness (controlling for perception of randomness), there was no association between UnExp and perception of randomness (controlling for conjunction).

  8. Linking search space structure, run-time dynamics, and problem difficulty : a step toward demystifying tabu search.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitley, L. Darrell; Howe, Adele E.; Watson, Jean-Paul

    2004-09-01

    Tabu search is one of the most effective heuristics for locating high-quality solutions to a diverse array of NP-hard combinatorial optimization problems. Despite the widespread success of tabu search, researchers have a poor understanding of many key theoretical aspects of this algorithm, including models of the high-level run-time dynamics and identification of those search space features that influence problem difficulty. We consider these questions in the context of the job-shop scheduling problem (JSP), a domain where tabu search algorithms have been shown to be remarkably effective. Previously, we demonstrated that the mean distance between random local optima and the nearest optimal solution is highly correlated with problem difficulty for a well-known tabu search algorithm for the JSP introduced by Taillard. In this paper, we discuss various shortcomings of this measure and develop a new model of problem difficulty that corrects these deficiencies. We show that Taillard's algorithm can be modeled with high fidelity as a simple variant of a straightforward random walk. The random walk model accounts for nearly all of the variability in the cost required to locate both optimal and sub-optimal solutions to random JSPs, and provides an explanation for differences in the difficulty of random versus structured JSPs. Finally, we discuss and empirically substantiate two novel predictions regarding tabu search algorithm behavior. First, the method for constructing the initial solution is highly unlikely to impact the performance of tabu search. Second, tabu tenure should be selected to be as small as possible while simultaneously avoiding search stagnation; values larger than necessary lead to significant degradations in performance.

  9. Social problem solving in carers of young people with a first episode of psychosis: a randomized controlled trial.

    PubMed

    McCann, Terence V; Cotton, Sue M; Lubman, Dan I

    2017-08-01

    Caring for young people with first-episode psychosis is difficult and demanding, and has detrimental effects on carers' well-being, with few evidence-based resources available to assist carers to deal with the problems they are confronted with in this situation. We aimed to examine if completion of a self-directed problem-solving bibliotherapy by first-time carers of young people with first-episode psychosis improved their social problem solving compared with carers who only received treatment as usual. A randomized controlled trial was carried out through two early intervention psychosis services in Melbourne, Australia. A sample of 124 carers were randomized to problem-solving bibliotherapy or treatment as usual. Participants were assessed at baseline, 6- and 16-week follow-up. Intent-to-treat analyses were used and showed that recipients of bibliotherapy had greater social problem-solving abilities than those receiving treatment as usual, and these effects were maintained at both follow-up time points. Our findings affirm that bibliotherapy, as a low-cost complement to treatment as usual for carers, had some effects in improving their problem-solving skills when addressing problems related to the care and support of young people with first-episode psychosis. © 2015 The Authors. Early Intervention in Psychiatry published by Wiley Publishing Asia Pty Ltd.

  10. Slow-cycle effects of foliar herbivory alter the nitrogen acquisition and population size of Collembola

    Treesearch

    Mark A. Bradford; Tara Gancos; Christopher J. Frost

    2008-01-01

    In terrestrial systems there is a close relationship between litter quality and the activity and abundance of decomposers. Therefore, the potential exists for aboveground, herbivore-induced changes in foliar chemistry to affect soil decomposer fauna. These herbivore-induced changes in chemistry may persist across growing seasons. While the impacts of such slow-cycle...

  11. A test of the hierarchical model of litter decomposition.

    PubMed

    Bradford, Mark A; Veen, G F Ciska; Bonis, Anne; Bradford, Ella M; Classen, Aimee T; Cornelissen, J Hans C; Crowther, Thomas W; De Long, Jonathan R; Freschet, Gregoire T; Kardol, Paul; Manrubia-Freixa, Marta; Maynard, Daniel S; Newman, Gregory S; Logtestijn, Richard S P; Viketoft, Maria; Wardle, David A; Wieder, William R; Wood, Stephen A; van der Putten, Wim H

    2017-12-01

    Our basic understanding of plant litter decomposition informs the assumptions underlying widely applied soil biogeochemical models, including those embedded in Earth system models. Confidence in projected carbon cycle-climate feedbacks therefore depends on accurate knowledge about the controls regulating the rate at which plant biomass is decomposed into products such as CO 2 . Here we test underlying assumptions of the dominant conceptual model of litter decomposition. The model posits that a primary control on the rate of decomposition at regional to global scales is climate (temperature and moisture), with the controlling effects of decomposers negligible at such broad spatial scales. Using a regional-scale litter decomposition experiment at six sites spanning from northern Sweden to southern France-and capturing both within and among site variation in putative controls-we find that contrary to predictions from the hierarchical model, decomposer (microbial) biomass strongly regulates decomposition at regional scales. Furthermore, the size of the microbial biomass dictates the absolute change in decomposition rates with changing climate variables. Our findings suggest the need for revision of the hierarchical model, with decomposers acting as both local- and broad-scale controls on litter decomposition rates, necessitating their explicit consideration in global biogeochemical models.

  12. Are leaves that fall from imidacloprid-treated maple trees to control Asian longhorned beetles toxic to non-target decomposer organisms?

    PubMed

    Kreutzweiser, David P; Good, Kevin P; Chartrand, Derek T; Scarr, Taylor A; Thompson, Dean G

    2008-01-01

    The systemic insecticide imidacloprid may be applied to deciduous trees for control of the Asian longhorned beetle, an invasive wood-boring insect. Senescent leaves falling from systemically treated trees contain imidacloprid concentrations that could pose a risk to natural decomposer organisms. We examined the effects of foliar imidacloprid concentrations on decomposer organisms by adding leaves from imidacloprid-treated sugar maple trees to aquatic and terrestrial microcosms under controlled laboratory conditions. Imidacloprid in maple leaves at realistic field concentrations (3-11 mg kg(-1)) did not affect survival of aquatic leaf-shredding insects or litter-dwelling earthworms. However, adverse sublethal effects at these concentrations were detected. Feeding rates by aquatic insects and earthworms were reduced, leaf decomposition (mass loss) was decreased, measurable weight losses occurred among earthworms, and aquatic and terrestrial microbial decomposition activity was significantly inhibited. Results of this study suggest that sugar maple trees systemically treated with imidacloprid to control Asian longhorned beetles may yield senescent leaves with residue levels sufficient to reduce natural decomposition processes in aquatic and terrestrial environments through adverse effects on non-target decomposer organisms.

  13. Screening on oil-decomposing microorganisms and application in organic waste treatment machine.

    PubMed

    Lu, Yi-Tong; Chen, Xiao-Bin; Zhou, Pei; Li, Zhen-Hong

    2005-01-01

    Y3, an oil-decomposing mixture of two bacterial strains (Bacillus sp. and Pseudomonas sp.), was isolated after 50 d of domestication with oil as the limiting carbon source. The decomposition rate achieved by Y3 was higher than that of each individual strain, indicating a synergistic effect between the two bacteria. Under the conditions T = 25-40 degrees C, pH = 6-8, HRT (hydraulic retention time) = 36 h and an oil concentration of 0.1%, Y3 yielded its highest decomposition rate of 95.7%. Y3 was also applied in an organic waste treatment machine, with a certain proportion of activated bacteria added to the stuffing. A series of tests covering humidity, pH, temperature, C/N ratio and oil percentage of the stuffing were carried out to check the efficacy of oil decomposition. Results showed that the oil content of the inoculated stuffing was only half that of the control. Furthermore, the bacteria were also beneficial for maintaining stable operation of the machine. Therefore, the bacterial mixture as well as the machines in this study could be very useful for waste treatment.

  14. Natural image statistics and low-complexity feature selection.

    PubMed

    Vasconcelos, Manuela; Vasconcelos, Nuno

    2009-02-01

    Low-complexity feature selection is analyzed in the context of visual recognition. It is hypothesized that high-order dependences of bandpass features contain little information for discrimination of natural images. This hypothesis is characterized formally by the introduction of the concepts of conjunctive interference and decomposability order of a feature set. Necessary and sufficient conditions for the feasibility of low-complexity feature selection are then derived in terms of these concepts. It is shown that the intrinsic complexity of feature selection is determined by the decomposability order of the feature set and not its dimension. Feature selection algorithms are then derived for all levels of complexity and are shown to be approximated by existing information-theoretic methods, which they consistently outperform. The new algorithms are also used to objectively test the hypothesis of low decomposability order through comparison of classification performance. It is shown that, for image classification, the gain of modeling feature dependencies has strongly diminishing returns: best results are obtained under the assumption of decomposability order 1. This suggests a generic law for bandpass features extracted from natural images: that the effect, on the dependence of any two features, of observing any other feature is constant across image classes.

  15. Adsorption mechanism of SF6 decomposed species on pyridine-like PtN3 embedded CNT: A DFT study

    NASA Astrophysics Data System (ADS)

    Cui, Hao; Zhang, Xiaoxing; Chen, Dachang; Tang, Ju

    2018-07-01

    Metal-Nx embedded CNTs have attracted considerable attention in the field of gas interaction due to their strong catalytic behavior, which makes them promising for gas adsorption and sensing. Detecting SF6 decomposed species in certain devices is essential to guarantee their safe operation. In this work, we performed DFT calculations to simulate the adsorption of three SF6 decomposed gases (SO2, SOF2 and SO2F2) onto the PtN3 embedded CNT surface, in order to shed light on its adsorption ability and sensing mechanism. Results suggest that the CNT embedded with a PtN3 center interacts strongly with these gas molecules, leading to pronounced hybridization between the Pt dopant and the active atoms within the gas molecules. These interactions are assumed to be chemisorption because of the remarkable Ead and QT, and they result in dramatic deformations of the electronic structure of PtN3-CNT near the Fermi level. Furthermore, based on frontier molecular orbital theory, the electronic redistribution increases the conductivity of the proposed material in all three systems. Our calculations suggest a novel sensing material that could potentially be employed in the detection of SF6 decomposed components.

  16. CALL FOR PAPERS: Special issue on the random search problem: trends and perspectives

    NASA Astrophysics Data System (ADS)

    da Luz, Marcos G. E.; Grosberg, Alexander Y.; Raposo, Ernesto P.; Viswanathan, Gandhi M.

    2008-11-01

    This is a call for contributions to a special issue of Journal of Physics A: Mathematical and Theoretical dedicated to the subject of the random search problem. The motivation behind this special issue is to summarize in a single comprehensive publication, the main aspects (past and present), latest developments, different viewpoints and the directions being followed in this multidisciplinary field. We hope that such a special issue could become a particularly valuable reference for the broad scientific community working with the general random search problem. The Editorial Board has invited Marcos G E da Luz, Alexander Y Grosberg, Ernesto P Raposo and Gandhi M Viswanathan to serve as Guest Editors for the special issue. The general question of how to optimize the search for specific target objects in either continuous or discrete environments when the information available is limited is of significant importance in a broad range of fields. Representative examples include ecology (animal foraging, dispersion of populations), geology (oil recovery from mature reservoirs), information theory (automated researchers of registers in high-capacity database), molecular biology (proteins searching for their sites, e.g., on DNA ), etc. One reason underlying the richness of the random search problem relates to the `ignorance' of the locations of the randomly located `targets'. A statistical approach to the search problem can deal adequately with incomplete information and so stochastic strategies become advantageous. The general problem of how to search efficiently for randomly located target sites can thus be quantitatively described using the concepts and methods of statistical physics and stochastic processes. Scope Thus far, to the best of our knowledge, no recent textbook or review article in a physics journal has appeared on this topic. This makes a special issue with review and research articles attractive to those interested in acquiring a general introduction to the field. The subject can be approached from the perspective of different fields: ecology, networks, transport problems, molecular biology, etc. The study of the problem is particularly suited to the concepts and methods of statistical physics and stochastic processes; for example, fractals, random walks, anomalous diffusion. Discrete landscapes can be approached via graph theory, random lattices and complex networks. Such topics are regularly discussed in Journal of Physics A: Mathematical and Theoretical. All such aspects of the problem fall within the scope and focus of this special issue on the random search problem: trends and perspectives. Editorial policy All contributions to the special issue will be refereed in accordance with the refereeing policy of the journal. In particular, all research papers will be expected to be original work reporting substantial new results. The issue will also contain a number of review articles by invitation only. The Guest Editors reserve the right to judge whether a contribution fits the scope of the special issue. Guidelines for preparation of contributions We aim to publish the special issue in August 2009. To realize this, the DEADLINE for contributed papers is 15 January 2009. There is a page limit of 15 printed pages (approximately 9000 words) per contribution. For papers exceeding this limit, the Guest Editors reserve the right to request a reduction in length. Further advice on document preparation can be found at www.iop.org/Journals/jphysa. 
Contributions to the special issue should if possible be submitted electronically by web upload at www.iop.org/Journals/jphysa, or by email to jphysa@iop.org, quoting 'J. Phys. A Special Issue— Random Search Problem'. Please state whether the paper has been invited or is contributed. Submissions should ideally be in standard LaTeX form. Please see the website for further information on electronic submissions. Authors unable to submit electronically may send hard-copy contributions to: Publishing Administrators, Journal of Physics A, Institute of Physics Publishing, Dirac House, Temple Back, Bristol BS1 6BE, UK, enclosing electronic code on CD if available and quoting 'J. Phys. A Special Issue—Random Search Problem'. All contributions should be accompanied by a read-me file or covering letter giving the postal and e-mail addresses for correspondence. The Publishing Office should be notified of any subsequent change of address. This special issue will be published in the paper and online version of the journal. The corresponding author of each contribution will receive a complimentary copy of the issue.

  17. Rational decisions, random matrices and spin glasses

    NASA Astrophysics Data System (ADS)

    Galluccio, Stefano; Bouchaud, Jean-Philippe; Potters, Marc

    We consider the problem of rational decision making in the presence of nonlinear constraints. By using tools borrowed from spin glass and random matrix theory, we focus on the portfolio optimisation problem. We show that the number of optimal solutions is generally exponentially large, and each of them is fragile: rationality is in this case of limited use. In addition, this problem is related to spin glasses with Lévy-like (long-ranged) couplings, for which we show that the ground state is not exponentially degenerate.

  18. Modelling Nonlinear Dynamic Textures using Hybrid DWT-DCT and Kernel PCA with GPU

    NASA Astrophysics Data System (ADS)

    Ghadekar, Premanand Pralhad; Chopade, Nilkanth Bhikaji

    2016-12-01

    Most real-world dynamic textures are nonlinear, non-stationary, and irregular. Nonlinear motion also has some repetition, but it exhibits high variation, stochasticity, and randomness. A hybrid DWT-DCT and Kernel Principal Component Analysis (KPCA) approach with YCbCr/YIQ colour coding using Dynamic Texture Units (DTUs) is proposed to model nonlinear dynamic textures, and it provides better results than state-of-the-art methods in terms of PSNR, compression ratio, model coefficients, and model size. The dynamic texture is decomposed into DTUs as they help to extract temporal self-similarity. The hybrid DWT-DCT is used to extract spatial redundancy, YCbCr/YIQ colour encoding is performed to capture chromatic correlation, and KPCA is applied to capture nonlinear motion. Further, the proposed algorithm is implemented on a Graphics Processing Unit (GPU), which comprises hundreds of small processors, to decrease time complexity and achieve parallelism.

  19. A high speed model-based approach for wavefront sensorless adaptive optics systems

    NASA Astrophysics Data System (ADS)

    Lianghua, Wen; Yang, Ping; Shuai, Wang; Wenjing, Liu; Shanqiu, Chen; Xu, Bing

    2018-02-01

    To improve the temporal-frequency properties of wavefront sensorless adaptive optics (AO) systems, a fast general model-based aberration correction algorithm is presented. The approach is based on the approximately linear relation between the mean square of the aberration gradients and the second moment of the far-field intensity distribution. The presented model-based method can effectively correct a modal aberration by applying only one disturbance to the deformable mirror (one correction per disturbance); the modes are reconstructed by singular value decomposition of the correlation matrix of the Zernike functions' gradients. Numerical simulations of AO corrections under various random and dynamic aberrations are carried out. The simulation results indicate that the equivalent control bandwidth is 2-3 times that of the previous method, which requires N disturbances of the deformable mirror for one aberration correction (one correction per N disturbances).

  20. Fourth-Order Spatial Correlation of Thermal Light

    NASA Astrophysics Data System (ADS)

    Wen, Feng; Zhang, Xun; Xue, Xin-Xin; Sun, Jia; Song, Jian-Ping; Zhang, Yan-Peng

    2014-11-01

    We investigate the fourth-order spatial correlation properties of pseudo-thermal light in the photon counting regime, and apply the Klyshko advanced-wave picture to describe the process of four-photon coincidence counting measurement. We deduce the theory of a proof-of-principle four-photon coincidence counting configuration, and find that if the four randomly radiated photons come from the same radiation area and are indistinguishable in principle, the fourth-order correlation of them is 24 times larger than that when four photons come from different radiation areas. In addition, we also show that the higher-order spatial correlation function can be decomposed into multiple lower-order correlation functions, and the contrast and visibility of low-order correlation peaks are less than those of higher orders, while the resolutions all are identical. This study may be useful for better understanding the four-photon interference and multi-channel correlation imaging.

  1. Fringe-projection profilometry based on two-dimensional empirical mode decomposition.

    PubMed

    Zheng, Suzhen; Cao, Yiping

    2013-11-01

    In 3D shape measurement, because deformed fringes often contain low-frequency information degraded by random noise and background intensity, a new fringe-projection profilometry is proposed based on 2D empirical mode decomposition (2D-EMD). The fringe pattern is first decomposed into a number of intrinsic mode functions by 2D-EMD. Because the method provides partial noise reduction, the background components can be removed to obtain the fundamental components needed to perform the Hilbert transformation and retrieve the phase information. The 2D-EMD can effectively extract the modulation phase of a single-direction fringe and an inclined fringe pattern because it is a fully 2D analysis method and considers the relationship between adjacent lines of a fringe pattern. In addition, as the method does not add noise repeatedly, as ensemble EMD does, the data processing time is shortened. Computer simulations and experiments prove the feasibility of this method.
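
    The sketch below is not the 2D-EMD pipeline itself; it only illustrates the downstream step mentioned above, retrieving the wrapped phase of a fringe pattern with a Hilbert transform after the background is removed. A moving-average high-pass stands in, as an assumption, for the EMD-extracted fundamental component.

      import numpy as np
      from scipy.signal import hilbert
      from scipy.ndimage import uniform_filter1d

      def fringe_phase(fringe_rows, background_window=51):
          """Wrapped phase of a fringe pattern, row by row.  A moving-average
          high-pass stands in for the background component that the paper
          removes with 2D empirical mode decomposition."""
          rows = np.asarray(fringe_rows, dtype=float)
          fundamental = rows - uniform_filter1d(rows, size=background_window, axis=1)
          analytic = hilbert(fundamental, axis=1)
          return np.angle(analytic)          # wrapped phase in (-pi, pi]

      # Synthetic deformed fringe: carrier plus a smooth phase bump plus noise.
      rng = np.random.default_rng(0)
      x = np.arange(512)
      bump = 2.0 * np.exp(-((x - 256) / 80.0) ** 2)
      pattern = 100 + 50 * np.cos(0.2 * x + bump) + rng.normal(0, 2, size=(64, 512))
      phase = fringe_phase(pattern)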

  2. Principal regression analysis and the index leverage effect

    NASA Astrophysics Data System (ADS)

    Reigneron, Pierre-Alain; Allez, Romain; Bouchaud, Jean-Philippe

    2011-09-01

    We revisit the index leverage effect, which can be decomposed into a volatility effect and a correlation effect. We investigate the latter using a matrix regression analysis, which we call 'Principal Regression Analysis' (PRA) and for which we provide some analytical (using Random Matrix Theory) and numerical benchmarks. We find that downward index trends increase the average correlation between stocks (as measured by the most negative eigenvalue of the conditional correlation matrix) and make the market mode more uniform. Upward trends, on the other hand, also increase the average correlation between stocks but rotate the corresponding market mode away from uniformity. There are two time scales associated with these effects, a short one on the order of a month (20 trading days) and a longer one on the order of a year. We also find indications of a leverage effect for sectorial correlations, which reveals itself in the second and third modes of the PRA.
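
    As a much simpler stand-in for the PRA (a hedged illustration of the conditioning idea only), the sketch below splits trading days by the sign of an equal-weight index return and compares the average pairwise correlation in each subset; the synthetic returns and data layout are assumptions.

      import numpy as np

      def conditional_mean_correlation(returns):
          """Average off-diagonal correlation of stock returns, conditioned on
          the sign of the equal-weight index return on the same day.  This is
          only the conditioning idea; the paper's PRA regresses the full
          correlation matrix on the index return."""
          index = returns.mean(axis=1)
          out = {}
          for label, mask in (("down days", index < 0), ("up days", index > 0)):
              C = np.corrcoef(returns[mask].T)
              off_diag = C[~np.eye(C.shape[0], dtype=bool)]
              out[label] = off_diag.mean()
          return out

      # Synthetic returns with a common market factor (assumed data layout:
      # rows are days, columns are stocks).
      rng = np.random.default_rng(2)
      market = rng.normal(scale=0.01, size=2000)
      returns = 0.8 * market[:, None] + rng.normal(scale=0.01, size=(2000, 50))
      print(conditional_mean_correlation(returns))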

  3. Analysis of Decomposition for Structure I Methane Hydrate by Molecular Dynamics Simulation

    NASA Astrophysics Data System (ADS)

    Wei, Na; Sun, Wan-Tong; Meng, Ying-Feng; Liu, An-Qi; Zhou, Shou-Wei; Guo, Ping; Fu, Qiang; Lv, Xin

    2018-05-01

    At multiple temperature and pressure conditions, the microscopic decomposition mechanisms of structure I methane hydrate in contact with bulk water molecules have been studied by molecular dynamics simulation using the LAMMPS software. The simulation system consists of 482 methane molecules in the hydrate and 3027 randomly distributed bulk water molecules. From the simulation results, the number of decomposed hydrate cages, the density of methane molecules, the radial distribution function of oxygen atoms, and the mean square displacement and diffusion coefficient of methane molecules have been analyzed. A significant result is that structure I methane hydrate decomposes from the hydrate-bulk water interface toward the hydrate interior. As temperature rises and pressure drops, the stability of the hydrate weakens, decomposition proceeds deeper, and the mean square displacement and diffusion coefficient of methane molecules increase. These studies provide insight into the microscopic decomposition mechanisms of methane hydrate.
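
    On the analysis side, the quantities named above (mean square displacement and diffusion coefficient) can be computed from an exported trajectory array as in the hedged sketch below; the array layout, time step, and the simple non-windowed MSD estimator are assumptions, not the authors' workflow.

      import numpy as np

      def msd_and_diffusion(positions, dt):
          """Mean square displacement relative to the initial frame and the
          Einstein estimate D = MSD / (6 t) for 3-D diffusion.  `positions` is
          assumed to be an (n_frames, n_molecules, 3) array of unwrapped
          methane coordinates."""
          disp = positions - positions[0]                   # displacement from t = 0
          msd = (disp ** 2).sum(axis=2).mean(axis=1)        # average over molecules
          t = dt * np.arange(len(msd))
          D = msd[1:] / (6.0 * t[1:])                       # skip t = 0
          return msd, D

      # Synthetic random-walk trajectories as a stand-in for an MD dump.
      rng = np.random.default_rng(3)
      steps = rng.normal(scale=0.05, size=(2000, 482, 3))
      positions = np.cumsum(steps, axis=0)
      msd, D = msd_and_diffusion(positions, dt=0.002)
      print(msd[-1], D[-1])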

  4. Lack of Interaction between Sensing-Intuitive Learning Styles and Problem-First versus Information-First Instruction: A Randomized Crossover Trial

    ERIC Educational Resources Information Center

    Cook, David A.; Thompson, Warren G.; Thomas, Kris G.; Thomas, Matthew R.

    2009-01-01

    Background: Adaptation to learning styles has been proposed to enhance learning. Objective: We hypothesized that learners with sensing learning style would perform better using a problem-first instructional method while intuitive learners would do better using an information-first method. Design: Randomized, controlled, crossover trial. Setting:…

  5. Conduct Problems and Peer Rejection in Childhood: A Randomized Trial of the Making Choices and Strong Families Programs

    ERIC Educational Resources Information Center

    Fraser, Mark W.; Day, Steven H.; Galinsky, Maeda J.; Hodges, Vanessa G.; Smokowski, Paul R.

    2004-01-01

    This article discusses the effectiveness of a multicomponent intervention designed to disrupt developmental processes associated with conduct problems and peer rejection in childhood. Compared with 41 children randomized to a wait list control condition, 45 children in an intervention condition received a social skills training program. At the…

  6. Reducing Developmental Risk for Emotional/Behavioral Problems: A Randomized Controlled Trial Examining the Tools for Getting Along Curriculum

    ERIC Educational Resources Information Center

    Daunic, Ann P.; Smith, Stephen W.; Garvan, Cynthia W.; Barber, Brian R.; Becker, Mallory K.; Peters, Christine D.; Taylor, Gregory G.; Van Loan, Christopher L.; Li, Wei; Naranjo, Arlene H.

    2012-01-01

    Researchers have demonstrated that cognitive-behavioral intervention strategies--such as social problem solving--provided in school settings can help ameliorate the developmental risk for emotional and behavioral difficulties. In this study, we report the results of a randomized controlled trial of Tools for Getting Along (TFGA), a social…

  7. Emotion Regulation Enhancement of Cognitive Behavior Therapy for College Student Problem Drinkers: A Pilot Randomized Controlled Trial

    ERIC Educational Resources Information Center

    Ford, Julian D.; Grasso, Damion J.; Levine, Joan; Tennen, Howard

    2018-01-01

    This pilot randomized clinical trial tested an emotion regulation enhancement to cognitive behavior therapy (CBT) with 29 college student problem drinkers with histories of complex trauma and current clinically significant traumatic stress symptoms. Participants received eight face-to-face sessions of manualized Internet-supported CBT for problem…

  8. WWC Review of the Report "Effects of Problem Based Economics on High School Economics Instruction"

    ERIC Educational Resources Information Center

    What Works Clearinghouse, 2012

    2012-01-01

    The study described in this report included 128 high school economics teachers from 106 schools in Arizona and California, half of whom were randomly assigned to the "Problem Based Economics Instruction" condition and half of whom were randomly assigned to the comparison condition. High levels of teacher attrition occurred after…

  9. Adaptive phase extraction: incorporating the Gabor transform in the matching pursuit algorithm.

    PubMed

    Wacker, Matthias; Witte, Herbert

    2011-10-01

    Short-time Fourier transform (STFT), Gabor transform (GT), wavelet transform (WT), and the Wigner-Ville distribution (WVD) are just some examples of time-frequency analysis methods which are frequently applied in biomedical signal analysis. However, all of these methods have their individual drawbacks. The STFT, GT, and WT have a time-frequency resolution that is determined by algorithm parameters and the WVD is contaminated by cross terms. In 1993, Mallat and Zhang introduced the matching pursuit (MP) algorithm that decomposes a signal into a sum of atoms and uses a cross-term free pseudo-WVD to generate a data-adaptive power distribution in the time-frequency space. Thus, it solved some of the problems of the GT and WT but lacks phase information that is crucial e.g., for synchronization analysis. We introduce a new time-frequency analysis method that combines the MP with a pseudo-GT. Therefore, the signal is decomposed into a set of Gabor atoms. Afterward, each atom is analyzed with a Gabor analysis, where the time-domain gaussian window of the analysis matches that of the specific atom envelope. A superposition of the single time-frequency planes gives the final result. This is the first time that a complete analysis of the complex time-frequency plane can be performed in a fully data-adaptive and frequency-selective manner. We demonstrate the capabilities of our approach on a simulation and on real-life magnetoencephalogram data.
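
    A bare-bones sketch of the greedy matching pursuit step described above is given below, using a small discrete Gabor dictionary; the dictionary grid and stopping rule are assumptions, and the paper's pseudo-Gabor phase analysis of each selected atom is not included.

      import numpy as np

      def gabor_atom(n, center, freq, width):
          """Unit-norm Gabor atom: Gaussian envelope times a cosine carrier."""
          t = np.arange(n)
          atom = np.exp(-0.5 * ((t - center) / width) ** 2) * np.cos(2 * np.pi * freq * t)
          return atom / np.linalg.norm(atom)

      def matching_pursuit(signal, dictionary, n_atoms=10):
          """Greedy decomposition: at each step pick the atom with the largest
          inner product with the residual, record its coefficient, subtract."""
          residual = signal.astype(float).copy()
          decomposition = []
          for _ in range(n_atoms):
              coeffs = dictionary @ residual
              k = np.argmax(np.abs(coeffs))
              decomposition.append((k, coeffs[k]))
              residual -= coeffs[k] * dictionary[k]
          return decomposition, residual

      # Small dictionary on a grid of centers, frequencies and widths (assumed).
      n = 256
      params = [(c, f, w) for c in range(0, n, 16)
                          for f in np.linspace(0.02, 0.3, 15)
                          for w in (4.0, 8.0, 16.0)]
      D = np.array([gabor_atom(n, *p) for p in params])

      signal = gabor_atom(n, 64, 0.10, 8.0) + 0.5 * gabor_atom(n, 180, 0.25, 16.0)
      atoms, res = matching_pursuit(signal, D, n_atoms=5)
      print(atoms[:2], np.linalg.norm(res))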

  10. Multiplicative Multitask Feature Learning

    PubMed Central

    Wang, Xin; Bi, Jinbo; Yu, Shipeng; Sun, Jiangwen; Song, Minghu

    2016-01-01

    We investigate a general framework of multiplicative multitask feature learning which decomposes each individual task’s model parameters into a multiplication of two components. One of the components is shared across all tasks, and the other is task-specific. Several previous methods can be proved to be special cases of our framework. We study the theoretical properties of this framework when different regularization conditions are applied to the two decomposed components. We prove that this framework is mathematically equivalent to the widely used multitask feature learning methods that are based on a joint regularization of all model parameters, but with a more general form of regularizers. Further, an analytical formula is derived for the across-task component as related to the task-specific component for all these regularizers, leading to a better understanding of the shrinkage effects of different regularizers. Study of this framework motivates new multitask learning algorithms. We propose two new learning formulations by varying the parameters in the proposed framework. An efficient blockwise coordinate descent algorithm, suitable for solving the entire family of formulations, is developed together with a rigorous convergence analysis. Simulation studies have identified the statistical properties of data that favor the new formulations. Extensive empirical studies on various classification and regression benchmark data sets have revealed the relative advantages of the two new formulations in comparison with the state of the art, which provides instructive insights into the feature learning problem with multiple tasks. PMID:28428735
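
    The core idea is the elementwise decomposition w_t = c * v_t with a shared component c and task-specific components v_t. The sketch below illustrates that decomposition with simple alternating gradient steps under squared loss and L2 penalties; it is my own toy illustration of the idea, not the paper's blockwise coordinate descent algorithm, and all parameter values are assumptions.

        # Toy multiplicative multitask learner: w_t = c * v_t (elementwise).
        import numpy as np

        def multiplicative_mtl(X_list, y_list, n_iter=200, lr=1e-2, lam=0.1):
            d, T = X_list[0].shape[1], len(X_list)
            c = np.ones(d)              # across-task component
            V = np.ones((T, d))         # task-specific components
            for _ in range(n_iter):
                for t in range(T):
                    w = c * V[t]
                    grad_w = X_list[t].T @ (X_list[t] @ w - y_list[t]) / len(y_list[t])
                    # chain rule: dL/dv_t = grad_w * c
                    V[t] -= lr * (grad_w * c + lam * V[t])
                # dL/dc accumulates grad_w * v_t over all tasks
                grad_c = sum(
                    (X_list[t].T @ (X_list[t] @ (c * V[t]) - y_list[t]) / len(y_list[t])) * V[t]
                    for t in range(T)
                )
                c -= lr * (grad_c + lam * c)
            return c, V

        # usage: two toy regression tasks sharing a sparse support
        rng = np.random.default_rng(1)
        X1, X2 = rng.standard_normal((50, 10)), rng.standard_normal((60, 10))
        w_true = np.zeros(10); w_true[:3] = [1.0, -2.0, 0.5]
        y1, y2 = X1 @ w_true, X2 @ (0.8 * w_true)
        c, V = multiplicative_mtl([X1, X2], [y1, y2])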

  11. Numerical benchmarking of a Coarse-Mesh Transport (COMET) Method for medical physics applications

    NASA Astrophysics Data System (ADS)

    Blackburn, Megan Satterfield

    2009-12-01

    Radiation therapy has become a very important method for treating cancer patients. Thus, it is extremely important to accurately determine the location of energy deposition during these treatments, maximizing dose to the tumor region and minimizing it to healthy tissue. A Coarse-Mesh Transport Method (COMET) has been developed at the Georgia Institute of Technology in the Computational Reactor and Medical Physics Group and has been used very successfully with neutron transport to analyze whole-core criticality. COMET works by decomposing a large, heterogeneous system into a set of smaller fixed source problems. For each unique local problem that exists, a solution is obtained that we call a response function. These response functions are pre-computed and stored in a library for future use. The overall solution to the global problem can then be found by a linear superposition of these local solutions. This method has now been extended to the transport of photons and electrons for use in medical physics problems to determine energy deposition from radiation therapy treatments. The main goal of this work was to develop benchmarks for evaluating the COMET code and determining its strengths and weaknesses for these medical physics applications. For response function calculations, Legendre polynomial expansions are necessary for space, angle, polar angle, and azimuthal angle. An initial sensitivity study was done to determine the best orders for future testing. After the expansion orders were found, three simple benchmarks were tested: a water phantom, a simplified lung phantom, and a non-clinical slab phantom. Each of these benchmarks was decomposed into 1 cm x 1 cm and 0.5 cm x 0.5 cm coarse meshes. Three more clinically relevant problems were developed from patient CT scans. These benchmarks modeled a lung patient, a prostate patient, and a beam re-entry situation. As before, the problems were divided into 1 cm x 1 cm, 0.5 cm x 0.5 cm, and 0.25 cm x 0.25 cm coarse mesh cases. Multiple beam energies were also tested for each case. The COMET solutions for each case were compared to a reference solution obtained from pure Monte Carlo calculations with EGSnrc. When comparing the COMET results to the reference cases, a pattern of differences appeared in each phantom case. It was found that better results were obtained for lower-energy incident photon beams as well as for larger mesh sizes. Changes may need to be made to the expansion orders used for energy and angle to better model high-energy secondary electrons. Heterogeneity also did not pose a problem for the COMET methodology. Heterogeneous results were obtained in an amount of time comparable to that for the homogeneous water phantom. The COMET results were typically found in minutes to hours of computational time, whereas the reference cases typically required hundreds or thousands of hours. A second sensitivity study was also performed on a more stringent problem and with smaller coarse meshes. Previously, the same expansion order was used for each incident photon beam energy so that better comparisons could be made. From this second study, it was found that it is optimal to have different expansion orders based on the incident beam energy. Recommendations for future work with this method include more testing on higher expansion orders or possible code modification to better handle secondary electrons. The method also needs to handle more clinically relevant beam descriptions with an energy and angular distribution associated with them.
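
    The response-function idea (precompute how each unique coarse mesh responds to incoming radiation, then couple meshes through their interface currents) can be illustrated with a deliberately simple 1-D toy. This is my own construction to show the coupling-iteration pattern, not the COMET code; the transmission/reflection numbers are invented.

        # 1-D toy: each mesh has precomputed transmission (T), reflection (R),
        # and absorption (A = 1 - T - R) "responses"; the global solution is
        # assembled by iterating the interface partial currents to convergence.
        import numpy as np

        def assemble_global(T, R, source_left, n_iter=200):
            n = len(T)
            A = 1.0 - T - R
            rg = np.zeros(n + 1)   # right-going partial currents at interfaces
            lg = np.zeros(n + 1)   # left-going partial currents at interfaces
            for _ in range(n_iter):
                rg[0] = source_left        # incident beam at the left boundary
                lg[n] = 0.0                # vacuum boundary on the right
                new_rg, new_lg = rg.copy(), lg.copy()
                for i in range(n):
                    new_rg[i + 1] = T[i] * rg[i] + R[i] * lg[i + 1]
                    new_lg[i] = T[i] * lg[i + 1] + R[i] * rg[i]
                rg, lg = new_rg, new_lg
            deposition = A * (rg[:-1] + lg[1:])   # absorbed fraction per mesh
            return deposition

        # three "unique local problems": water-like, lung-like, bone-like meshes
        T = np.array([0.80, 0.90, 0.60])
        R = np.array([0.05, 0.02, 0.10])
        print(assemble_global(T, R, source_left=1.0))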

  12. Microbial community assembly and metabolic function during mammalian corpse decomposition

    USGS Publications Warehouse

    Metcalf, Jessica L; Xu, Zhenjiang Zech; Weiss, Sophie; Lax, Simon; Van Treuren, Will; Hyde, Embriette R.; Song, Se Jin; Amir, Amnon; Larsen, Peter; Sangwan, Naseer; Haarmann, Daniel; Humphrey, Greg C; Ackermann, Gail; Thompson, Luke R; Lauber, Christian; Bibat, Alexander; Nicholas, Catherine; Gebert, Matthew J; Petrosino, Joseph F; Reed, Sasha C.; Gilbert, Jack A; Lynne, Aaron M; Bucheli, Sibyl R; Carter, David O; Knight, Rob

    2016-01-01

    Vertebrate corpse decomposition provides an important stage in nutrient cycling in most terrestrial habitats, yet microbially mediated processes are poorly understood. Here we combine deep microbial community characterization, community-level metabolic reconstruction, and soil biogeochemical assessment to understand the principles governing microbial community assembly during decomposition of mouse and human corpses on different soil substrates. We find a suite of bacterial and fungal groups that contribute to nitrogen cycling and a reproducible network of decomposers that emerge on predictable time scales. Our results show that this decomposer community is derived primarily from bulk soil, but key decomposers are ubiquitous in low abundance. Soil type was not a dominant factor driving community development, and the process of decomposition is sufficiently reproducible to offer new opportunities for forensic investigations.

  13. Microbial community assembly and metabolic function during mammalian corpse decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Metcalf, J. L.; Xu, Z. Z.; Weiss, S.

    2015-12-10

    Vertebrate corpse decomposition provides an important stage in nutrient cycling in most terrestrial habitats, yet microbially mediated processes are poorly understood. Here we combine deep microbial community characterization, community-level metabolic reconstruction, and soil biogeochemical assessment to understand the principles governing microbial community assembly during decomposition of mouse and human corpses on different soil substrates. We find a suite of bacterial and fungal groups that contribute to nitrogen cycling and a reproducible network of decomposers that emerge on predictable time scales. Our results show that this decomposer community is derived primarily from bulk soil, but key decomposers are ubiquitous in low abundance. Soil type was not a dominant factor driving community development, and the process of decomposition is sufficiently reproducible to offer new opportunities for forensic investigations.

  14. Catalytic cartridge SO₃ decomposer

    DOEpatents

    Galloway, T.R.

    1980-11-18

    A catalytic cartridge surrounding a heat pipe driven by a heat source is utilized as a SO₃ decomposer for thermochemical hydrogen production. The cartridge has two embodiments, a cross-flow cartridge and an axial flow cartridge. In the cross-flow cartridge, SO₃ gas is flowed through a chamber and incident normally to a catalyst coated tube extending through the chamber, the catalyst coated tube surrounding the heat pipe. In the axial-flow cartridge, SO₃ gas is flowed through the annular space between concentric inner and outer cylindrical walls, the inner cylindrical wall being coated by a catalyst and surrounding the heat pipe. The modular cartridge decomposer provides high thermal efficiency, high conversion efficiency, and increased safety. A fusion reactor may be used as the heat source.

  15. Alcoa Pressure Calcination Process for Alumina

    NASA Astrophysics Data System (ADS)

    Sucech, S. W.; Misra, C.

    A new alumina calcination process developed at Alcoa Laboratories is described. Alumina is calcined in two stages. In the first stage, alumina hydrate is heated indirectly to 500°C in a decomposer vessel. Released water is recovered as process steam at 110 psig pressure. Partial transformation of gibbsite to boehmite occurs under the hydrothermal conditions of the decomposer. The product from the decomposer, containing about 5% LOI (loss on ignition), is then calcined by direct heating to 850°C to obtain smelting grade alumina. The final product is highly attrition resistant, has a surface area of 50-80 m²/g and an LOI of less than 1%. Accounting for the recovered steam, the effective fuel consumption for the new calcination process is only 1.6 GJ/t Al₂O₃.

  16. Relativistic Causality and Quasi-Orthomodular Algebras

    NASA Astrophysics Data System (ADS)

    Nobili, Renato

    2006-05-01

    The concept of fractionability or decomposability into parts of a physical system has its mathematical counterpart in the lattice-theoretic concept of orthomodularity. Systems with a finite number of degrees of freedom can be decomposed in different ways, corresponding to different groupings of the degrees of freedom. The orthomodular structure of these simple systems is trivially manifest. The question then arises as to whether the same property is shared by physical systems with an infinite number of degrees of freedom, in particular by quantum relativistic ones. The latter case was approached several years ago by Haag and Schroer (1962; Haag, 1992), who started by noting that the causally complete sets of Minkowski spacetime form an orthomodular lattice and asked whether the subalgebras of local observables, with topological supports on such subsets, themselves form a corresponding orthomodular lattice. Were it so, the way would be paved to interpreting spacetime as an intrinsic property of a local quantum field algebra. Surprisingly, however, the hoped-for property does not hold for local algebras of free fields with superselection rules. The possibility seems instead to be open if the local currents that govern the superselection rules are driven by gauge fields. Thus, in the framework of local quantum physics, the requirement of algebraic orthomodularity seems to imply physical interactions! Despite its charm, however, such a requirement appears plagued by ambiguities and critical issues that make it an ill-posed problem. The proposers themselves, indeed, concluded that the orthomodular correspondence hypothesis is too strong to have a chance of being practicable. Thus, the idea was neither taken seriously by its proposers nor investigated further by others to a reasonable degree of clarification. This paper is an attempt to reformulate the problem and pose it well. It will be shown that the idea is viable provided that the algebra of local observables (1) is considered over the whole range of its irreducible representations and (2) is widened with the addition of the elements of a suitable intertwining group of automorphisms, and (3) that the orthomodular correspondence requirement is modified to an extent sufficient to impart a natural topological structure to the intertwined algebra of observables so obtained. A novel scenario then emerges in which local quantum physics appears to provide a general framework for non-perturbative quantum field dynamics.
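
    For context (this gloss is added here and is not part of the original record), the orthomodularity referred to above is the standard lattice-theoretic law, a weakening of distributivity for a lattice equipped with an orthocomplementation $\perp$:

        a \le b \;\Longrightarrow\; b = a \vee \bigl( a^{\perp} \wedge b \bigr)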

  17. Economic Inequality in Presenting Vision in Shahroud, Iran: Two Decomposition Methods.

    PubMed

    Mansouri, Asieh; Emamian, Mohammad Hassan; Zeraati, Hojjat; Hashemi, Hasan; Fotouhi, Akbar

    2017-04-22

    Visual acuity, like many other health-related outcomes, is not equally distributed across socio-economic groups. We conducted this study to estimate and decompose economic inequality in presenting visual acuity using two methods and to compare their results in a population aged 40-64 years in Shahroud, Iran. The data of 5188 participants in the first phase of the Shahroud Cohort Eye Study, performed in 2009, were used for this study. Our outcome variable was presenting visual acuity (PVA), measured on the LogMAR scale (logarithm of the minimum angle of resolution). The living-standard variable used to estimate inequality was economic status, constructed by principal component analysis of household assets. The inequality indices were the concentration index and the gap between the low and high economic groups. We decomposed these indices using the concentration index and Blinder-Oaxaca decomposition approaches, respectively, and compared the results. The concentration index of PVA was -0.245 (95% CI: -0.278, -0.212). The PVA gap between groups with a high and low economic status was 0.0705 and was in favor of the high economic group. Education, economic status, and age were the most important contributors to inequality in both the concentration index and the Blinder-Oaxaca decomposition. The percent contributions of these three factors in the concentration index and Blinder-Oaxaca decomposition were 41.1% vs. 43.4%, 25.4% vs. 19.1%, and 15.2% vs. 16.2%, respectively. Other factors, including gender, marital status, employment status, and diabetes, had minor contributions. This study showed that individuals with poorer visual acuity were more concentrated among people with a lower economic status. The main contributors to this inequality were similar across the concentration index and Blinder-Oaxaca decompositions. It can therefore be concluded that interventions to improve literacy and income among people with low economic status, policies to address economic problems in the elderly, and greater attention to their vision problems can help to alleviate economic inequality in visual acuity.
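
    The two measures used here have simple computational forms: the concentration index C = 2*cov(y, r)/mean(y), with r the fractional rank by economic status, and a two-fold Blinder-Oaxaca split of a group gap into an "explained" (endowments) and "unexplained" (coefficients) part. The sketch below uses invented toy data, not the study's data or code.

        # Concentration index and a two-fold Blinder-Oaxaca decomposition (toy data).
        import numpy as np

        def concentration_index(y, economic_score):
            # fractional rank by economic status, in (0, 1)
            r = (np.argsort(np.argsort(economic_score)) + 0.5) / len(y)
            return 2.0 * np.cov(y, r, bias=True)[0, 1] / np.mean(y)

        def blinder_oaxaca(X_low, y_low, X_high, y_high):
            add = lambda X: np.column_stack([np.ones(len(X)), X])   # add intercept
            b_low, *_ = np.linalg.lstsq(add(X_low), y_low, rcond=None)
            b_high, *_ = np.linalg.lstsq(add(X_high), y_high, rcond=None)
            xbar_low, xbar_high = add(X_low).mean(axis=0), add(X_high).mean(axis=0)
            explained = (xbar_high - xbar_low) @ b_high     # differences in endowments
            unexplained = xbar_low @ (b_high - b_low)       # differences in coefficients
            return explained, unexplained

        # toy data: vision score (higher LogMAR = worse) improves with education,
        # and economic status correlates with education
        rng = np.random.default_rng(2)
        educ = rng.normal(size=500)
        econ = educ + rng.normal(scale=0.5, size=500)
        logmar = 0.3 - 0.05 * educ + 0.05 * rng.normal(size=500)
        print(concentration_index(logmar, econ))   # negative: worse vision among the poor
        low = econ < np.median(econ)
        X = educ.reshape(-1, 1)
        print(blinder_oaxaca(X[low], logmar[low], X[~low], logmar[~low]))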

  18. Color opponent receptive fields self-organize in a biophysical model of visual cortex via spike-timing dependent plasticity

    PubMed Central

    Eguchi, Akihiro; Neymotin, Samuel A.; Stringer, Simon M.

    2014-01-01

    Although many computational models have been proposed to explain orientation maps in primary visual cortex (V1), it is not yet known how similar clusters of color-selective neurons in macaque V1/V2 are connected and develop. In this work, we address the problem of understanding the cortical processing of color information with a possible mechanism of the development of the patchy distribution of color selectivity via computational modeling. Each color input is decomposed into a red, green, and blue representation and transmitted to the visual cortex via a simulated optic nerve in a luminance channel and red–green and blue–yellow opponent color channels. Our model of the early visual system consists of multiple topographically-arranged layers of excitatory and inhibitory neurons, with sparse intra-layer connectivity and feed-forward connectivity between layers. Layers are arranged based on anatomy of early visual pathways, and include a retina, lateral geniculate nucleus, and layered neocortex. Each neuron in the V1 output layer makes synaptic connections to neighboring neurons and receives the three types of signals in the different channels from the corresponding photoreceptor position. Synaptic weights are randomized and learned using spike-timing-dependent plasticity (STDP). After training with natural images, the neurons display heightened sensitivity to specific colors. Information-theoretic analysis reveals mutual information between particular stimuli and responses, and that the information reaches a maximum with fewer neurons in the higher layers, indicating that estimations of the input colors can be done using the output of fewer cells in the later stages of cortical processing. In addition, cells with similar color receptive fields form clusters. Analysis of spiking activity reveals increased firing synchrony between neurons when particular color inputs are presented or removed (ON-cell/OFF-cell). PMID:24659956
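
    For readers unfamiliar with the learning rule, the following is a minimal sketch of pair-based spike-timing-dependent plasticity (potentiation when the presynaptic spike precedes the postsynaptic spike, depression otherwise, with exponential time windows). The parameter values are illustrative and are not taken from the paper's model.

        # Pair-based STDP weight update with exponential windows (toy parameters).
        import numpy as np

        def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
            dt = t_post - t_pre                       # ms; positive = pre before post
            if dt >= 0:
                return a_plus * np.exp(-dt / tau)     # potentiation
            return -a_minus * np.exp(dt / tau)        # depression

        def apply_stdp(w, pre_spikes, post_spikes, w_min=0.0, w_max=1.0):
            # accumulate pairwise updates and clip the weight to its bounds
            for t_pre in pre_spikes:
                for t_post in post_spikes:
                    w += stdp_dw(t_pre, t_post)
            return float(np.clip(w, w_min, w_max))

        # usage: a pre spike shortly before a post spike strengthens the synapse
        print(apply_stdp(0.5, pre_spikes=[10.0], post_spikes=[15.0]))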

  19. NIMEFI: gene regulatory network inference using multiple ensemble feature importance algorithms.

    PubMed

    Ruyssinck, Joeri; Huynh-Thu, Vân Anh; Geurts, Pierre; Dhaene, Tom; Demeester, Piet; Saeys, Yvan

    2014-01-01

    One of the long-standing open challenges in computational systems biology is the topology inference of gene regulatory networks from high-throughput omics data. Recently, two community-wide efforts, DREAM4 and DREAM5, have been established to benchmark network inference techniques using gene expression measurements. In these challenges, the overall top performer was the GENIE3 algorithm. This method decomposes the network inference task into separate regression problems, one for each gene in the network, in which the expression values of a particular target gene are predicted using all other genes as possible predictors. Next, using tree-based ensemble methods, an importance measure for each predictor gene is calculated with respect to the target gene, and a high feature importance is considered putative evidence of a regulatory link between the two genes. The contribution of this work is twofold. First, we generalize the regression decomposition strategy of GENIE3 to other feature importance methods. We compare the performance of support vector regression, the elastic net, random forest regression, symbolic regression, and their ensemble variants in this setting to the original GENIE3 algorithm. To create the ensemble variants, we propose a subsampling approach that allows us to cast any feature selection algorithm that produces a feature ranking into an ensemble feature importance algorithm. We demonstrate that the ensemble setting is key to the network inference task, as only ensemble variants achieve top performance. As a second contribution, we explore the effect of using rankwise averaged predictions of multiple ensemble algorithms as opposed to only one. We name this approach NIMEFI (Network Inference using Multiple Ensemble Feature Importance algorithms) and show that this approach outperforms all individual methods in general, although on a specific network a single method can perform better. An implementation of NIMEFI has been made publicly available.
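
    The per-gene regression decomposition described above can be sketched in a few lines: regress each target gene on all other genes with a tree ensemble and use feature importances as putative edge scores. This sketch uses scikit-learn's RandomForestRegressor with illustrative parameters; it is a minimal GENIE3-style example, not the NIMEFI implementation.

        # GENIE3-style edge scoring: one random-forest regression per target gene.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        def infer_network(expr, n_trees=100, seed=0):
            # expr: (n_samples, n_genes) expression matrix
            n_genes = expr.shape[1]
            scores = np.zeros((n_genes, n_genes))   # scores[i, j]: regulator i -> target j
            for target in range(n_genes):
                predictors = [g for g in range(n_genes) if g != target]
                rf = RandomForestRegressor(n_estimators=n_trees, random_state=seed)
                rf.fit(expr[:, predictors], expr[:, target])
                scores[predictors, target] = rf.feature_importances_
            return scores

        # usage on a small random expression matrix (10 samples x 5 genes)
        rng = np.random.default_rng(3)
        edge_scores = infer_network(rng.standard_normal((10, 5)))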

  20. Locating and parsing bibliographic references in HTML medical articles

    PubMed Central

    Zou, Jie; Le, Daniel; Thoma, George R.

    2010-01-01

    The set of references that typically appear toward the end of journal articles is sometimes, though not always, a field in bibliographic (citation) databases. But even if references do not constitute such a field, they can be useful as a preprocessing step in the automated extraction of other bibliographic data from articles, as well as in computer-assisted indexing of articles. Automation in data extraction and indexing to minimize human labor is key to the affordable creation and maintenance of large bibliographic databases. Extracting the components of references, such as author names, article title, journal name, publication date and other entities, is therefore a valuable and sometimes necessary task. This paper describes a two-step process using statistical machine learning algorithms, to first locate the references in HTML medical articles and then to parse them. Reference locating identifies the reference section in an article and then decomposes it into individual references. We formulate this step as a two-class classification problem based on text and geometric features. An evaluation conducted on 500 articles drawn from 100 medical journals achieves near-perfect precision and recall rates for locating references. Reference parsing identifies the components of each reference. For this second step, we implement and compare two algorithms. One relies on sequence statistics and trains a Conditional Random Field. The other focuses on local feature statistics and trains a Support Vector Machine to classify each individual word, followed by a search algorithm that systematically corrects low confidence labels if the label sequence violates a set of predefined rules. The overall performance of these two reference-parsing algorithms is about the same: above 99% accuracy at the word level, and over 97% accuracy at the chunk level. PMID:20640222
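
    The reference-locating step is formulated above as a two-class classification over text and geometric features. The sketch below shows that formulation with a handful of invented features and toy training lines; it is not the paper's feature set, classifier configuration, or data.

        # Two-class line classification (reference vs. non-reference) with an SVM.
        import numpy as np
        from sklearn.svm import SVC

        def line_features(line, y_position):
            return [
                y_position,                                     # geometric: position in article
                len(line),                                      # text length
                line.count(";") + line.count(","),              # punctuation density
                int(any(ch.isdigit() for ch in line)),          # digits (years, page ranges)
                int(line.strip().startswith(("[", "1", "2"))),  # looks like a numbered item
            ]

        # toy training data: (line text, normalized vertical position, label)
        train = [
            ("Smith J, Jones K. Deep learning. Nature. 2015;518:529-33.", 0.90, 1),
            ("[2] Doe A. A survey of parsing. J Doc Eng. 2010.", 0.95, 1),
            ("The results in Table 2 show a clear improvement.", 0.40, 0),
            ("We thank the reviewers for their helpful comments.", 0.70, 0),
        ]
        X = np.array([line_features(t, y) for t, y, _ in train])
        y = np.array([label for _, _, label in train])
        clf = SVC(kernel="linear").fit(X, y)
        print(clf.predict([line_features("[3] Lee B. Parsing HTML. 2012.", 0.92)]))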
