A connectionist model for diagnostic problem solving
NASA Technical Reports Server (NTRS)
Peng, Yun; Reggia, James A.
1989-01-01
A competition-based connectionist model for solving diagnostic problems is described. The problems considered are computationally difficult in that (1) multiple disorders may occur simultaneously and (2) a global optimum is sought in a solution space whose size is exponential in the total number of possible disorders. The diagnostic problem is treated as a nonlinear optimization problem, and global optimization criteria are decomposed into local criteria governing node activation updating in the connectionist model. Nodes representing disorders compete with each other to account for each individual manifestation, yet complement each other to account for all manifestations through parallel node interactions. When equilibrium is reached, the network settles into a locally optimal state. Three randomly generated examples of diagnostic problems, each of which has 1024 cases, were tested, and the decomposition plus competition plus resettling approach yielded very high accuracy.
NASA Astrophysics Data System (ADS)
Lanen, Theo A.; Watt, David W.
1995-10-01
Singular value decomposition has served as a diagnostic tool in optical computed tomography by using its capability to provide insight into the condition of ill-posed inverse problems. Various tomographic geometries are compared to one another through the singular value spectrum of their weight matrices. The number of significant singular values in the singular value spectrum of a weight matrix is a quantitative measure of the condition of the system of linear equations defined by a tomographic geometry. The analysis involves variation of the following five parameters characterizing a tomographic geometry: 1) the spatial resolution of the reconstruction domain, 2) the number of views, 3) the number of projection rays per view, 4) the total observation angle spanned by the views, and 5) the selected basis function. Five local basis functions are considered: the square pulse, the triangle, the cubic B-spline, the Hanning window, and the Gaussian distribution. The effects of noise in the views, the coding accuracy of the weight matrix, and the accuracy of the singular value decomposition procedure itself are also assessed.
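As a rough illustration of the diagnostic described in this abstract, the following sketch (Python/NumPy; the weight matrix here is random and purely a placeholder for a real ray-tracing weight matrix) computes a singular value spectrum and counts the values above a relative cutoff:

```python
import numpy as np

# Illustrative stand-in for a tomographic weight matrix (rays x basis functions).
# In the study above this matrix would encode a specific geometry and basis function.
rng = np.random.default_rng(0)
W = rng.standard_normal((120, 400))

# Singular value spectrum of the weight matrix.
s = np.linalg.svd(W, compute_uv=False)

# Count "significant" singular values relative to the largest one;
# the 1e-3 cutoff is an arbitrary choice for illustration.
n_significant = int(np.sum(s > 1e-3 * s[0]))
print(f"{n_significant} significant singular values out of {s.size}")
```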
Dynamic mode decomposition for plasma diagnostics and validation.
Taylor, Roy; Kutz, J Nathan; Morgan, Kyle; Nelson, Brian A
2018-05-01
We demonstrate the application of the Dynamic Mode Decomposition (DMD) for the diagnostic analysis of the nonlinear dynamics of a magnetized plasma in resistive magnetohydrodynamics. The DMD method is an ideal spatio-temporal matrix decomposition that correlates spatial features of computational or experimental data while simultaneously associating the spatial activity with periodic temporal behavior. DMD can produce low-rank, reduced order surrogate models that can be used to reconstruct the state of the system with high fidelity. This allows for a reduction in the computational cost and, at the same time, accurate approximations of the problem, even if the data are sparsely sampled. We demonstrate the use of the method on both numerical and experimental data, showing that it is a successful mathematical architecture for characterizing the helicity injected torus with steady inductive (HIT-SI) magnetohydrodynamics. Importantly, the DMD produces interpretable, dominant mode structures, including a stationary mode consistent with our understanding of a HIT-SI spheromak accompanied by a pair of injector-driven modes. In combination, the 3-mode DMD model produces excellent dynamic reconstructions across the domain of analyzed data.
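For readers unfamiliar with the method, a minimal exact-DMD sketch is given below. It is not the authors' HIT-SI analysis code; the snapshot data and rank truncation are illustrative assumptions.

```python
import numpy as np

def dmd(X, r):
    """Exact DMD of a snapshot matrix X (space x time), truncated to rank r."""
    X1, X2 = X[:, :-1], X[:, 1:]                 # time-shifted snapshot pairs
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, V = U[:, :r], s[:r], Vh[:r].conj().T
    Atilde = U.conj().T @ X2 @ V / s             # low-rank linear operator
    eigvals, W = np.linalg.eig(Atilde)
    modes = X2 @ V / s @ W                       # DMD modes
    return eigvals, modes

# Synthetic data: two traveling waves with angular frequencies 2 and 5 rad/s.
x = np.linspace(0, 2 * np.pi, 200)[:, None]
t = np.linspace(0, 8, 400)[None, :]
dt = t[0, 1] - t[0, 0]
X = np.sin(x - 2 * t) + 0.5 * np.sin(3 * x - 5 * t)

eigvals, modes = dmd(X, r=4)
print("recovered frequencies [rad/s]:",
      np.sort(np.log(eigvals.astype(complex)).imag / dt))
```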
NASA Technical Reports Server (NTRS)
Gupta, Hoshin V.; Kling, Harald; Yilmaz, Koray K.; Martinez-Baquero, Guillermo F.
2009-01-01
The mean squared error (MSE) and the related normalization, the Nash-Sutcliffe efficiency (NSE), are the two criteria most widely used for calibration and evaluation of hydrological models with observed data. Here, we present a diagnostically interesting decomposition of NSE (and hence MSE), which facilitates analysis of the relative importance of its different components in the context of hydrological modelling, and show how model calibration problems can arise due to interactions among these components. The analysis is illustrated by calibrating a simple conceptual precipitation-runoff model to daily data for a number of Austrian basins having a broad range of hydro-meteorological characteristics. Evaluation of the results clearly demonstrates the problems that can be associated with any calibration based on the NSE (or MSE) criterion. While we propose and test an alternative criterion that can help to reduce model calibration problems, the primary purpose of this study is not to present an improved measure of model performance. Instead, we seek to show that there are systematic problems inherent with any optimization based on formulations related to the MSE. The analysis and results have implications for the manner in which we calibrate and evaluate environmental models; we discuss these and suggest possible ways forward that may move us towards an improved and diagnostically meaningful approach to model performance evaluation and identification.
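For reference, the decomposition discussed here is commonly written as follows (our rendering of the standard notation, not a quotation from the paper): with $r$ the linear correlation between simulated and observed series, $\alpha = \sigma_s/\sigma_o$ the ratio of their standard deviations, and $\beta_n = (\mu_s - \mu_o)/\sigma_o$ the bias normalized by the observed standard deviation,

```latex
\mathrm{NSE} \;=\; 2\,\alpha\, r \;-\; \alpha^{2} \;-\; \beta_n^{2},
\qquad
\alpha = \frac{\sigma_s}{\sigma_o},
\qquad
\beta_n = \frac{\mu_s - \mu_o}{\sigma_o}.
```

An alternative criterion of the kind alluded to above combines the three components as a distance from the ideal point, e.g. $1 - \sqrt{(r-1)^2 + (\alpha-1)^2 + (\beta-1)^2}$ with $\beta = \mu_s/\mu_o$.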
Functional reasoning in diagnostic problem solving
NASA Technical Reports Server (NTRS)
Sticklen, Jon; Bond, W. E.; Stclair, D. C.
1988-01-01
This work is one facet of an integrated approach to diagnostic problem solving for aircraft and space systems currently under development. The authors are applying a method of modeling and reasoning about deep knowledge based on a functional viewpoint. The approach recognizes a level of device understanding which is intermediate between a compiled level of typical Expert Systems, and a deep level at which large-scale device behavior is derived from known properties of device structure and component behavior. At this intermediate functional level, a device is modeled in three steps. First, a component decomposition of the device is defined. Second, the functionality of each device/subdevice is abstractly identified. Third, the state sequences which implement each function are specified. Given a functional representation and a set of initial conditions, the functional reasoner acts as a consequence finder. The output of the consequence finder can be utilized in diagnostic problem solving. The paper also discusses ways in which this functional approach may find application in the aerospace field.
Model diagnostics in reduced-rank estimation
Chen, Kun
2016-01-01
Reduced-rank methods are very popular in high-dimensional multivariate analysis for conducting simultaneous dimension reduction and model estimation. However, the commonly-used reduced-rank methods are not robust, as the underlying reduced-rank structure can be easily distorted by only a few data outliers. Anomalies are bound to exist in big data problems, and in some applications they themselves could be of primary interest. While naive residual analysis is often inadequate for outlier detection due to potential masking and swamping, robust reduced-rank estimation approaches could be computationally demanding. Under Stein's unbiased risk estimation framework, we propose a set of tools, including leverage score and generalized information score, to perform model diagnostics and outlier detection in large-scale reduced-rank estimation. The leverage scores give an exact decomposition of the so-called model degrees of freedom to the observation level, which leads to an exact decomposition of many commonly-used information criteria; the resulting quantities are thus named information scores of the observations. The proposed information score approach provides a principled way of combining the residuals and leverage scores for anomaly detection. Simulation studies confirm that the proposed diagnostic tools work well. A pattern recognition example with handwritten digit images and a time series analysis example with monthly U.S. macroeconomic data further demonstrate the efficacy of the proposed approaches.
An Efficient Model-based Diagnosis Engine for Hybrid Systems Using Structural Model Decomposition
NASA Technical Reports Server (NTRS)
Bregon, Anibal; Narasimhan, Sriram; Roychoudhury, Indranil; Daigle, Matthew; Pulido, Belarmino
2013-01-01
Complex hybrid systems are present in a large range of engineering applications, like mechanical systems, electrical circuits, or embedded computation systems. The behavior of these systems is made up of continuous and discrete event dynamics that complicate accurate and timely online fault diagnosis. The Hybrid Diagnosis Engine (HyDE) offers flexibility to the diagnosis application designer to choose the modeling paradigm and the reasoning algorithms. The HyDE architecture supports the use of multiple modeling paradigms at the component and system level. However, HyDE faces some problems regarding performance in terms of complexity and time. Our focus in this paper is on developing efficient model-based methodologies for online fault diagnosis in complex hybrid systems. To do this, we propose a diagnosis framework where structural model decomposition is integrated within the HyDE diagnosis framework to reduce the computational complexity associated with the fault diagnosis of hybrid systems. As a case study, we apply our approach to a diagnostic testbed, the Advanced Diagnostics and Prognostics Testbed (ADAPT), using real data.
English and Turkish Pupils' Understanding of Decomposition
ERIC Educational Resources Information Center
Cetin, Gulcan
2007-01-01
This study aimed to describe seventh grade English and Turkish students' levels of understanding of decomposition. Data were analyzed descriptively from the students' written responses to four diagnostic questions about decomposition. Results revealed that the English students had considerably higher sound understanding and lower no understanding…
NASA Astrophysics Data System (ADS)
Lehtola, Susi; Tubman, Norm M.; Whaley, K. Birgitta; Head-Gordon, Martin
2017-10-01
Approximate full configuration interaction (FCI) calculations have recently become tractable for systems of unforeseen size, thanks to stochastic and adaptive approximations to the exponentially scaling FCI problem. The result of an FCI calculation is a weighted set of electronic configurations, which can also be expressed in terms of excitations from a reference configuration. The excitation amplitudes contain information on the complexity of the electronic wave function, but this information is contaminated by contributions from disconnected excitations, i.e., those excitations that are just products of independent lower-level excitations. The unwanted contributions can be removed via a cluster decomposition procedure, making it possible to examine the importance of connected excitations in complicated multireference molecules which are outside the reach of conventional algorithms. We present an implementation of the cluster decomposition analysis and apply it to both true FCI wave functions, as well as wave functions generated from the adaptive sampling CI algorithm. The cluster decomposition is useful for interpreting calculations in chemical studies, as a diagnostic for the convergence of various excitation manifolds, as well as a guidepost for polynomially scaling electronic structure models. Applications are presented for (i) the double dissociation of water, (ii) the carbon dimer, (iii) the π space of polyacenes, and (iv) the chromium dimer. While the cluster amplitudes exhibit rapid decay with an increasing rank for the first three systems, even connected octuple excitations still appear important in Cr2, suggesting that spin-restricted single-reference coupled-cluster approaches may not be tractable for some problems in transition metal chemistry.
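To make the notion of removing disconnected excitations concrete, the standard coupled-cluster relation at the doubles level can be written (our rendering in intermediate normalization, not the paper's equations) as

```latex
C_1 = T_1,
\qquad
C_2 = T_2 + \tfrac{1}{2}T_1^{2}
\quad\Longrightarrow\quad
T_2 = C_2 - \tfrac{1}{2}C_1^{2}
```

so the connected doubles amplitude $T_2$ is obtained by subtracting the product of independent single excitations from the raw CI doubles coefficient $C_2$; analogous subtractions apply at higher excitation ranks.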
Development of a two-wavelength IR laser absorption diagnostic for propene and ethylene
NASA Astrophysics Data System (ADS)
Parise, T. C.; Davidson, D. F.; Hanson, R. K.
2018-05-01
A two-wavelength infrared laser absorption diagnostic for non-intrusive, simultaneous quantitative measurement of propene and ethylene was developed. To this end, measurements of absorption cross sections of propene and potential interfering species at 10.958 µm were acquired at high temperatures. When used in conjunction with existing absorption cross-section measurements of ethylene and other species at 10.532 µm, a two-wavelength diagnostic was developed to simultaneously measure propene and ethylene, the two small alkenes found to generally dominate the final decomposition products of many hydrocarbon fuel pyrolysis systems. Measurements of these two species are demonstrated using this two-wavelength diagnostic scheme for propene decomposition between 1360 and 1710 K.
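The two-wavelength, two-species measurement principle reduces, via the Beer-Lambert relation, to a small linear solve. The sketch below is generic; the cross-section values and absorbances are placeholders, not the data of this work.

```python
import numpy as np

# Beer-Lambert: absorbance(lambda) = L * sum_i sigma_i(lambda) * n_i
# Placeholder cross sections [m^2/molecule] at the two wavelengths
# (rows: 10.532 um, 10.958 um; columns: ethylene, propene) -- illustrative only.
sigma = np.array([[3.0e-23, 0.4e-23],
                  [0.5e-23, 2.0e-23]])
L = 0.10                                  # absorption path length [m]
absorbance = np.array([0.85, 0.62])       # measured -ln(I/I0) at each wavelength

# Solve the 2x2 system for the two species' number densities.
n = np.linalg.solve(L * sigma, absorbance)
print("ethylene, propene number densities [molecules/m^3]:", n)
```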
Scare Tactics: Evaluating Problem Decompositions Using Failure Scenarios
NASA Technical Reports Server (NTRS)
Helm, B. Robert; Fickas, Stephen
1992-01-01
Our interest is in the design of multi-agent problem-solving systems, which we refer to as composite systems. We have proposed an approach to composite system design by decomposition of problem statements. An automated assistant called Critter provides a library of reusable design transformations which allow a human analyst to search the space of decompositions for a problem. In this paper we describe a method for evaluating and critiquing problem decompositions generated by this search process. The method uses knowledge stored in the form of failure decompositions attached to design transformations. We suggest the benefits of our critiquing method by showing how it could re-derive steps of a published development example. We then identify several open issues for the method.
About decomposition approach for solving the classification problem
NASA Astrophysics Data System (ADS)
Andrianova, A. A.
2016-11-01
This article describes the features of applying a decomposition-based algorithm to the binary classification problem of constructing a linear classifier with the Support Vector Machine method. Decomposition reduces the volume of calculations, in particular by opening up possibilities for building parallel versions of the algorithm, which is an important advantage for problems with big data. The results of computational experiments conducted using the decomposition approach are analyzed. The experiments use a well-known data set for the binary classification problem.
Spatial, temporal, and hybrid decompositions for large-scale vehicle routing with time windows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bent, Russell W
This paper studies the use of decomposition techniques to quickly find high-quality solutions to large-scale vehicle routing problems with time windows. It considers an adaptive decomposition scheme which iteratively decouples a routing problem based on the current solution. Earlier work considered vehicle-based decompositions that partition the vehicles across the subproblems. The subproblems can then be optimized independently and merged easily. This paper argues that vehicle-based decompositions, although very effective on various problem classes, also have limitations. In particular, they do not accommodate temporal decompositions and may produce spatial decompositions that are not focused enough. This paper then proposes customer-based decompositions which generalize vehicle-based decouplings and allow for focused spatial and temporal decompositions. Experimental results on class R2 of the extended Solomon benchmarks demonstrate the benefits of the customer-based adaptive decomposition scheme and its spatial, temporal, and hybrid instantiations. In particular, they show that customer-based decompositions bring significant benefits over large neighborhood search, in contrast to vehicle-based decompositions.
Transportation Network Analysis and Decomposition Methods
DOT National Transportation Integrated Search
1978-03-01
The report outlines research in transportation network analysis using decomposition techniques as a basis for problem solutions. Two transportation network problems were considered in detail: a freight network flow problem and a scheduling problem fo...
NASA Astrophysics Data System (ADS)
Kasprzyk, J. R.; Reed, P. M.; Characklis, G. W.; Kirsch, B. R.
2010-12-01
This paper proposes and demonstrates a new interactive framework for sensitivity-informed de Novo programming, in which a learning approach to formulating decision problems can confront the deep uncertainty within water management problems. The framework couples global sensitivity analysis using Sobol’ variance decomposition with multiobjective evolutionary algorithms (MOEAs) to generate planning alternatives and test their robustness to new modeling assumptions and scenarios. We explore these issues within the context of a risk-based water supply management problem, where a city seeks the most efficient use of a water market. The case study examines a single city’s water supply in the Lower Rio Grande Valley (LRGV) in Texas, using both a 10-year planning horizon and an extreme single-year drought scenario. The city’s water supply portfolio comprises a volume of permanent rights to reservoir inflows and use of a water market through anticipatory thresholds for acquiring transfers of water through optioning and spot leases. Diagnostic information from the Sobol’ variance decomposition is used to create a sensitivity-informed problem formulation testing different decision variable configurations, with tradeoffs for the formulation solved using a MOEA. Subsequent analysis uses the drought scenario to expose tradeoffs between long-term and short-term planning and illustrate the impact of deeply uncertain assumptions on water availability in droughts. The results demonstrate water supply portfolios’ efficiency, reliability, and utilization of transfers in the water supply market and show how to adaptively improve the value and robustness of our problem formulations by evolving our definition of optimality to discover key tradeoffs.
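A minimal sketch of this kind of sensitivity-informed screening, using the SALib package on a toy function (the function, sample size, and the 0.05 total-order cutoff are illustrative assumptions, not the LRGV water-market model), might look like:

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Illustrative 4-variable problem; in the study above the model would be
# the water-supply portfolio simulation, not this toy function.
problem = {"num_vars": 4,
           "names": ["x1", "x2", "x3", "x4"],
           "bounds": [[0.0, 1.0]] * 4}

X = saltelli.sample(problem, 1024)
Y = X[:, 0] ** 2 + 0.5 * X[:, 1] + 0.01 * X[:, 2]   # x4 is inert by construction

Si = sobol.analyze(problem, Y)
# Screen out decision variables whose total-order Sobol' index falls below a cutoff.
keep = [name for name, st in zip(problem["names"], Si["ST"]) if st > 0.05]
print("variables retained for the reduced formulation:", keep)
```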
NASA Astrophysics Data System (ADS)
Yusa, Yasunori; Okada, Hiroshi; Yamada, Tomonori; Yoshimura, Shinobu
2018-04-01
A domain decomposition method for large-scale elastic-plastic problems is proposed. The proposed method is based on a quasi-Newton method in conjunction with a balancing domain decomposition preconditioner. The use of a quasi-Newton method overcomes two problems associated with the conventional domain decomposition method based on the Newton-Raphson method: (1) avoidance of a double-loop iteration algorithm, which generally has large computational complexity, and (2) consideration of the local concentration of nonlinear deformation, which is observed in elastic-plastic problems with stress concentration. Moreover, the application of a balancing domain decomposition preconditioner ensures scalability. Using the conventional and proposed domain decomposition methods, several numerical tests, including weak scaling tests, were performed. The convergence performance of the proposed method is comparable to that of the conventional method. In particular, in elastic-plastic analysis, the proposed method exhibits better convergence performance than the conventional method.
A knowledge-based tool for multilevel decomposition of a complex design problem
NASA Technical Reports Server (NTRS)
Rogers, James L.
1989-01-01
Although much work has been done in applying artificial intelligence (AI) tools and techniques to problems in different engineering disciplines, only recently has the application of these tools begun to spread to the decomposition of complex design problems. A new tool based on AI techniques has been developed to implement a decomposition scheme suitable for multilevel optimization and display of data in an N x N matrix format.
Szlavik, Robert B
2016-02-01
The characterization of peripheral nerve fiber distributions, in terms of diameter or velocity, is of clinical significance because information associated with these distributions can be utilized in the differential diagnosis of peripheral neuropathies. Electro-diagnostic techniques can be applied to the investigation of peripheral neuropathies and can yield valuable diagnostic information while being minimally invasive. Nerve conduction velocity studies are single parameter tests that yield no detailed information regarding the characteristics of the population of nerve fibers that contribute to the compound-evoked potential. Decomposition of the compound-evoked potential, such that the velocity or diameter distribution of the contributing nerve fibers may be determined, is necessary if information regarding the population of contributing nerve fibers is to be ascertained from the electro-diagnostic study. In this work, a perturbation-based decomposition of compound-evoked potentials is proposed that facilitates determination of the fiber diameter distribution associated with the compound-evoked potential. The decomposition is based on representing the single fiber-evoked potential, associated with each diameter class, as being perturbed by contributions, of varying degree, from all the other diameter class single fiber-evoked potentials. The resultant estimator of the contributing nerve fiber diameter distribution is valid for relatively large separations in diameter classes. It is also useful in situations where the separation between diameter classes is small and the concomitant single fiber-evoked potentials are not orthogonal.
Signal evaluations using singular value decomposition for Thomson scattering diagnostics.
Tojo, H; Yamada, I; Yasuhara, R; Yatsuka, E; Funaba, H; Hatae, T; Hayashi, H; Itami, K
2014-11-01
This paper provides a novel method for evaluating signal intensities in incoherent Thomson scattering diagnostics. A double-pass Thomson scattering system, where a laser passes through the plasma twice, generates two scattering pulses from the plasma. Evaluations of the signal intensities in the spectrometer are sometimes difficult due to noise and stray light. We apply the singular value decomposition method to Thomson scattering data with strong noise components. Results show that the average accuracy of the measured electron temperature (Te) is superior to that of temperature obtained using a low-pass filter (<20 MHz) or without any filters.
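The SVD-based evaluation described here can be sketched generically as a truncated reconstruction of a matrix of repeated signal traces; the data below are synthetic and the rank choice is an assumption, not the actual diagnostic pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 500)
# Matrix of repeated scattering-pulse traces (shots x samples): two synthetic
# Gaussian pulses per trace, buried in strong additive noise.
clean = np.exp(-((t - 0.3) / 0.02) ** 2) + 0.8 * np.exp(-((t - 0.7) / 0.02) ** 2)
signals = clean + 0.5 * rng.standard_normal((50, t.size))

U, s, Vh = np.linalg.svd(signals, full_matrices=False)
k = 1                                   # keep only the dominant component(s)
denoised = (U[:, :k] * s[:k]) @ Vh[:k]  # rank-k reconstruction
print("signal amplitude estimate (first trace):", denoised[0].max())
```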
Structural optimization by multilevel decomposition
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; James, B.; Dovi, A.
1983-01-01
A method is described for decomposing an optimization problem into a set of subproblems and a coordination problem which preserves coupling between the subproblems. The method is introduced as a special case of multilevel, multidisciplinary system optimization and its algorithm is fully described for two level optimization for structures assembled of finite elements of arbitrary type. Numerical results are given for an example of a framework to show that the decomposition method converges and yields results comparable to those obtained without decomposition. It is pointed out that optimization by decomposition should reduce the design time by allowing groups of engineers, using different computers to work concurrently on the same large problem.
Optimization by nonhierarchical asynchronous decomposition
NASA Technical Reports Server (NTRS)
Shankar, Jayashree; Ribbens, Calvin J.; Haftka, Raphael T.; Watson, Layne T.
1992-01-01
Large scale optimization problems are tractable only if they are somehow decomposed. Hierarchical decompositions are inappropriate for some types of problems and do not parallelize well. Sobieszczanski-Sobieski has proposed a nonhierarchical decomposition strategy for nonlinear constrained optimization that is naturally parallel. Despite some successes on engineering problems, the algorithm as originally proposed fails on simple two dimensional quadratic programs. The algorithm is carefully analyzed for quadratic programs, and a number of modifications are suggested to improve its robustness.
Multilevel decomposition of complete vehicle configuration in a parallel computing environment
NASA Technical Reports Server (NTRS)
Bhatt, Vinay; Ragsdell, K. M.
1989-01-01
This research summarizes various approaches to multilevel decomposition to solve large structural problems. A linear decomposition scheme based on the Sobieski algorithm is selected as a vehicle for automated synthesis of a complete vehicle configuration in a parallel processing environment. The research is in a developmental state. Preliminary numerical results are presented for several example problems.
Application of decomposition techniques to the preliminary design of a transport aircraft
NASA Technical Reports Server (NTRS)
Rogan, J. E.; Kolb, M. A.
1987-01-01
A nonlinear constrained optimization problem describing the preliminary design process for a transport aircraft has been formulated. A multifaceted decomposition of the optimization problem has been made. Flight dynamics, flexible aircraft loads and deformations, and preliminary structural design subproblems appear prominently in the decomposition. The use of design process decomposition for scheduling design projects, a new system integration approach to configuration control, and the application of object-centered programming to a new generation of design tools are discussed.
An Intelligent Pattern Recognition System Based on Neural Network and Wavelet Decomposition for Interpretation of Heart Sounds
Turkoglu, I.
2001-10-25
The system is based on wavelet decomposition of signals and classification using a neural network. Inputs to the system are heart sound signals acquired by a stethoscope.
Mode Analyses of Gyrokinetic Simulations of Plasma Microturbulence
NASA Astrophysics Data System (ADS)
Hatch, David R.
This thesis presents analysis of the excitation and role of damped modes in gyrokinetic simulations of plasma microturbulence. In order to address this question, mode decompositions are used to analyze gyrokinetic simulation data. A mode decomposition can be constructed by projecting a nonlinearly evolved gyrokinetic distribution function onto a set of linear eigenmodes, or alternatively by constructing a proper orthogonal decomposition of the distribution function. POD decompositions are used to examine the role of damped modes in saturating ion temperature gradient driven turbulence. In order to identify the contribution of different modes to the energy sources and sinks, numerical diagnostics for a gyrokinetic energy quantity were developed for the GENE code. The use of these energy diagnostics in conjunction with POD mode decompositions demonstrates that ITG turbulence saturates largely through dissipation by damped modes at the same perpendicular spatial scales as those of the driving instabilities. This defines a picture of turbulent saturation that is very different from both traditional hydrodynamic scenarios and also many common theories for the saturation of plasma turbulence. POD mode decompositions are also used to examine the role of subdominant modes in causing magnetic stochasticity in electromagnetic gyrokinetic simulations. It is shown that the magnetic stochasticity, which appears to be ubiquitous in electromagnetic microturbulence, is caused largely by subdominant modes with tearing parity. The application of higher-order singular value decomposition (HOSVD) to the full distribution function from gyrokinetic simulations is presented. This is an effort to demonstrate the ability to characterize and extract insight from a very large, complex, and high-dimensional data-set - the 5-D (plus time) gyrokinetic distribution function.
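As background for the POD analyses described in this abstract, a snapshot POD reduces to an SVD of the (mean-subtracted) snapshot matrix; the sketch below uses synthetic data rather than GENE output.

```python
import numpy as np

rng = np.random.default_rng(2)
# Snapshot matrix: rows are spatial points, columns are time samples.
x = np.linspace(0, 2 * np.pi, 128)[:, None]
t = np.linspace(0, 20, 400)[None, :]
F = (np.sin(x) * np.cos(t)
     + 0.3 * np.sin(3 * x) * np.cos(4 * t)
     + 0.05 * rng.standard_normal((128, 400)))

# POD modes and their energy fractions from the SVD of the fluctuation field.
U, s, Vh = np.linalg.svd(F - F.mean(axis=1, keepdims=True), full_matrices=False)
energy = s ** 2 / np.sum(s ** 2)
print("energy fraction captured by the first 3 POD modes:", energy[:3].sum())
# U[:, :3] are the spatial POD modes, Vh[:3] their temporal coefficients.
```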
Integrated Network Decompositions and Dynamic Programming for Graph Optimization (INDDGO)
DOE Office of Scientific and Technical Information (OSTI.GOV)
The INDDGO software package offers a set of tools for finding exact solutions to graph optimization problems via tree decompositions and dynamic programming algorithms. Currently the framework offers serial and parallel (distributed memory) algorithms for finding tree decompositions and solving the maximum weighted independent set problem. The parallel dynamic programming algorithm is implemented on top of the MADNESS task-based runtime.
Three geographic decomposition approaches in transportation network analysis
DOT National Transportation Integrated Search
1980-03-01
This document describes the results of research into the application of geographic decomposition techniques to practical transportation network problems. Three approaches are described for the solution of the traffic assignment problem. One approach ...
Application of Decomposition to Transportation Network Analysis
DOT National Transportation Integrated Search
1976-10-01
This document reports preliminary results of five potential applications of the decomposition techniques from mathematical programming to transportation network problems. The five application areas are (1) the traffic assignment problem with fixed de...
NASA Astrophysics Data System (ADS)
Vatankhah, Saeed; Renaut, Rosemary A.; Ardestani, Vahid E.
2018-04-01
We present a fast algorithm for the total variation regularization of the 3-D gravity inverse problem. Through imposition of the total variation regularization, subsurface structures presenting with sharp discontinuities are preserved better than when using a conventional minimum-structure inversion. The associated problem formulation for the regularization is nonlinear but can be solved using an iteratively reweighted least-squares algorithm. For small-scale problems the regularized least-squares problem at each iteration can be solved using the generalized singular value decomposition. This is not feasible for large-scale, or even moderate-scale, problems. Instead we introduce the use of a randomized generalized singular value decomposition in order to reduce the dimensions of the problem and provide an effective and efficient solution technique. For further efficiency an alternating direction algorithm is used to implement the total variation weighting operator within the iteratively reweighted least-squares algorithm. Presented results for synthetic examples demonstrate that the novel randomized decomposition provides good accuracy for reduced computational and memory demands as compared to use of classical approaches.
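In the spirit of the iteratively reweighted least-squares loop described above, the following sketch shows a generic 1-D TV-regularized inversion. It uses a dense direct solve rather than the randomized generalized SVD of the paper, and the operator, sizes, and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 80, 40                      # model and data sizes (illustrative)
G = rng.standard_normal((m, n))    # stand-in for the forward operator
x_true = np.zeros(n)
x_true[30:50] = 1.0                # blocky model with sharp discontinuities
d = G @ x_true + 0.01 * rng.standard_normal(m)

D = np.diff(np.eye(n), axis=0)     # first-difference (TV) operator
lam, eps = 1e-2, 1e-6
x = np.zeros(n)
for _ in range(30):
    w = 1.0 / np.sqrt((D @ x) ** 2 + eps)          # TV reweighting
    A = G.T @ G + lam * D.T @ (w[:, None] * D)     # reweighted normal equations
    x = np.linalg.solve(A, G.T @ d)
print("max reconstruction error:", np.abs(x - x_true).max())
```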
NASA Astrophysics Data System (ADS)
Chu, J. G.; Zhang, C.; Fu, G. T.; Li, Y.; Zhou, H. C.
2015-04-01
This study investigates the effectiveness of a sensitivity-informed method for multi-objective operation of reservoir systems, which uses global sensitivity analysis as a screening tool to reduce the computational demands. Sobol's method is used to screen insensitive decision variables and guide the formulation of the optimization problems with a significantly reduced number of decision variables. This sensitivity-informed problem decomposition dramatically reduces the computational demands required for attaining high quality approximations of optimal tradeoff relationships between conflicting design objectives. The search results obtained from the reduced complexity multi-objective reservoir operation problems are then used to pre-condition the full search of the original optimization problem. In two case studies, the Dahuofang reservoir and the inter-basin multi-reservoir system in Liaoning province, China, sensitivity analysis results show that reservoir performance is strongly controlled by a small proportion of decision variables. Sensitivity-informed problem decomposition and pre-conditioning are evaluated in their ability to improve the efficiency and effectiveness of multi-objective evolutionary optimization. Overall, this study illustrates the efficiency and effectiveness of the sensitivity-informed method and the use of global sensitivity analysis to inform problem decomposition when solving the complex multi-objective reservoir operation problems.
Expert vs. novice: Problem decomposition/recomposition in engineering design
NASA Astrophysics Data System (ADS)
Song, Ting
The purpose of this research was to investigate the differences in the use of problem decomposition and problem recomposition among dyads of engineering experts, dyads of engineering seniors, and dyads of engineering freshmen. Fifty participants took part in this study. Ten were engineering design experts, 20 were engineering seniors, and 20 were engineering freshmen. Participants worked in dyads to complete an engineering design challenge within an hour. The entire design process was video and audio recorded. After the design session, members participated in a group interview. This study used protocol analysis as the methodology. Video and audio data were transcribed, segmented, and coded. Two coding systems, the FBS ontology and "levels of the problem," were used in this study. A series of statistical techniques were used to analyze the data. Interview data and participants' design sketches also served as supplemental data to help answer the research questions. Analysis of the quantitative and qualitative data showed that students used less problem decomposition and problem recomposition than engineering experts in engineering design. This result implies that engineering education should place more importance on teaching problem decomposition and problem recomposition. Students were found to spend less cognitive effort than engineering experts when considering the problem as a whole and the interactions between subsystems, and more cognitive effort when considering the details of subsystems. These results show that students tended to use depth-first decomposition, whereas experts tended to use breadth-first decomposition in engineering design. The use of Function (F), Behavior (B), and Structure (S) among engineering experts, engineering seniors, and engineering freshmen was compared on three levels: Level 1 represents designers considering the problem as an integral whole, Level 2 represents designers considering interactions between subsystems, and Level 3 represents designers considering details of subsystems. The results showed that students used more S on Levels 1 and 3 but less F on Level 1 than engineering experts. These results imply that the engineering curriculum should improve the teaching of problem definition in engineering design, because students need to understand the problem before solving it.
NASA Astrophysics Data System (ADS)
Liu, Peng; Wang, Yanfei
2018-04-01
We study problems associated with seismic data decomposition and migration imaging. We first represent the seismic data utilizing Gaussian beam basis functions, which have nonzero curvature, and then consider the sparse decomposition technique. The sparse decomposition problem is an l0-norm constrained minimization problem. In solving the l0-norm minimization, a polynomial Radon transform is performed to achieve sparsity, and a fast gradient descent method is used to calculate the waveform functions. The waveform functions can subsequently be used for sparse Gaussian beam migration. Compared with traditional sparse Gaussian beam methods, the seismic data can be properly reconstructed employing fewer Gaussian beams with nonzero initial curvature. The migration approach described in this paper is more efficient than the traditional sparse Gaussian beam migration.
New evidence favoring multilevel decomposition and optimization
NASA Technical Reports Server (NTRS)
Padula, Sharon L.; Polignone, Debra A.
1990-01-01
The issue of the utility of multilevel decomposition and optimization remains controversial. To date, only the structural optimization community has actively developed and promoted multilevel optimization techniques. However, even this community acknowledges that multilevel optimization is ideally suited for a rather limited set of problems. It is warned that decomposition typically requires eliminating local variables by using global variables and that this in turn causes ill-conditioning of the multilevel optimization by adding equality constraints. The purpose is to suggest a new multilevel optimization technique. This technique uses behavior variables, in addition to design variables and constraints, to decompose the problem. The new technique removes the need for equality constraints, simplifies the decomposition of the design problem, simplifies the programming task, and improves the convergence speed of multilevel optimization compared to conventional optimization.
Domain decomposition for a mixed finite element method in three dimensions
Cai, Z.; Parashkevov, R.R.; Russell, T.F.; Wilson, J.D.; Ye, X.
2003-01-01
We consider the solution of the discrete linear system resulting from a mixed finite element discretization applied to a second-order elliptic boundary value problem in three dimensions. Based on a decomposition of the velocity space, these equations can be reduced to a discrete elliptic problem by eliminating the pressure through the use of substructures of the domain. The practicality of the reduction relies on a local basis, presented here, for the divergence-free subspace of the velocity space. We consider additive and multiplicative domain decomposition methods for solving the reduced elliptic problem, and their uniform convergence is established.
NASA Astrophysics Data System (ADS)
Yamamoto, Takanori; Bannai, Hideo; Nagasaki, Masao; Miyano, Satoru
We present new decomposition heuristics for finding the optimal solution for the maximum-weight connected graph problem, which is known to be NP-hard. Previous optimal algorithms for solving the problem decompose the input graph into subgraphs using heuristics based on node degree. We propose new heuristics based on betweenness centrality measures, and show through computational experiments that our new heuristics tend to reduce the number of subgraphs in the decomposition, and therefore could lead to the reduction in computational time for finding the optimal solution. The method is further applied to analysis of biological pathway data.
ERIC Educational Resources Information Center
Song, Ting; Becker, Kurt; Gero, John; DeBerard, Scott; DeBerard, Oenardi; Reeve, Edward
2016-01-01
The authors investigated the differences in using problem decomposition and problem recomposition between dyads of engineering experts, engineering seniors, and engineering freshmen. Participants worked in dyads to complete an engineering design challenge within 1 hour. The entire design process was video and audio recorded. After the design…
Ceberio, Josu; Calvo, Borja; Mendiburu, Alexander; Lozano, Jose A
2018-02-15
In the last decade, many works in combinatorial optimisation have shown that, due to the advances in multi-objective optimisation, the algorithms from this field could be used for solving single-objective problems as well. In this sense, a number of papers have proposed multi-objectivising single-objective problems in order to use multi-objective algorithms in their optimisation. In this article, we follow up this idea by presenting a methodology for multi-objectivising combinatorial optimisation problems based on elementary landscape decompositions of their objective function. Under this framework, each of the elementary landscapes obtained from the decomposition is considered as an independent objective function to optimise. In order to illustrate this general methodology, we consider four problems from different domains: the quadratic assignment problem and the linear ordering problem (permutation domain), the 0-1 unconstrained quadratic optimisation problem (binary domain), and the frequency assignment problem (integer domain). We implemented two widely known multi-objective algorithms, NSGA-II and SPEA2, and compared their performance with that of a single-objective GA. The experiments conducted on a large benchmark of instances of the four problems show that the multi-objective algorithms clearly outperform the single-objective approaches. Furthermore, a discussion on the results suggests that the multi-objective space generated by this decomposition enhances the exploration ability, thus permitting NSGA-II and SPEA2 to obtain better results in the majority of the tested instances.
Three-Component Decomposition of Polarimetric SAR Data Integrating Eigen-Decomposition Results
NASA Astrophysics Data System (ADS)
Lu, Da; He, Zhihua; Zhang, Huan
2018-01-01
This paper presents a novel three-component scattering power decomposition of polarimetric SAR data. There are two problems with the three-component decomposition method: overestimation of the volume scattering component in urban areas, and a parameter that is artificially set to a fixed value. Although volume scattering overestimation can be partly addressed by a deorientation process, volume scattering still dominates some oriented urban areas. The speckle-like decomposition results introduced by the artificially set value are not conducive to further image interpretation. This paper integrates the results of eigen-decomposition to solve the aforementioned problems. Two principal eigenvectors are used to substitute for the surface scattering model and the double-bounce scattering model. The decomposed scattering powers are obtained using a constrained linear least-squares method. The proposed method has been verified using an ESAR PolSAR image, and the results show that it has better performance in urban areas.
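The constrained linear least-squares step mentioned in this abstract can be illustrated with a generic non-negative least-squares solve; the model matrix below is a random placeholder rather than the eigenvector-based scattering models of the paper.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(4)
# Columns play the role of three scattering mechanisms; rows are the observed
# polarimetric quantities for one pixel (all values illustrative).
A = np.abs(rng.standard_normal((6, 3)))
powers_true = np.array([0.6, 0.3, 0.1])
b = A @ powers_true + 0.01 * rng.standard_normal(6)

powers, residual = nnls(A, b)     # enforces non-negative scattering powers
print("estimated surface / double-bounce / volume powers:", powers)
```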
Trace Norm Regularized CANDECOMP/PARAFAC Decomposition With Missing Data.
Liu, Yuanyuan; Shang, Fanhua; Jiao, Licheng; Cheng, James; Cheng, Hong
2015-11-01
In recent years, low-rank tensor completion (LRTC) problems have received a significant amount of attention in computer vision, data mining, and signal processing. The existing trace norm minimization algorithms for iteratively solving LRTC problems involve multiple singular value decompositions of very large matrices at each iteration. Therefore, they suffer from high computational cost. In this paper, we propose a novel trace norm regularized CANDECOMP/PARAFAC decomposition (TNCP) method for simultaneous tensor decomposition and completion. We first formulate a factor matrix rank minimization model by deducing the relation between the rank of each factor matrix and the mode-n rank of a tensor. Then, we introduce a tractable relaxation of our rank function, and then achieve a convex combination problem of much smaller-scale matrix trace norm minimization. Finally, we develop an efficient algorithm based on alternating direction method of multipliers to solve our problem. The promising experimental results on synthetic and real-world data validate the effectiveness of our TNCP method. Moreover, TNCP is significantly faster than the state-of-the-art methods and scales to larger problems.
Regular Decompositions for H(div) Spaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kolev, Tzanio; Vassilevski, Panayot
We study regular decompositions for H(div) spaces. In particular, we show that such regular decompositions are closely related to a previously studied “inf-sup” condition for parameter-dependent Stokes problems, for which we provide an alternative, more direct, proof.
A unified material decomposition framework for quantitative dual- and triple-energy CT imaging.
Zhao, Wei; Vernekohl, Don; Han, Fei; Han, Bin; Peng, Hao; Yang, Yong; Xing, Lei; Min, James K
2018-04-21
Many clinical applications depend critically on the accurate differentiation and classification of different types of materials in patient anatomy. This work introduces a unified framework for accurate nonlinear material decomposition and applies it, for the first time, to the concept of triple-energy CT (TECT) for enhanced material differentiation and classification, as well as to dual-energy CT (DECT). We express the polychromatic projection as a linear combination of line integrals of material-selective images. The material decomposition is then turned into a problem of minimizing the least-squares difference between measured and estimated CT projections. The optimization problem is solved iteratively by updating the line integrals. The proposed technique is evaluated by using several numerical phantom measurements under different scanning protocols. The triple-energy data acquisition is implemented at the scales of micro-CT and clinical CT imaging with a commercial "TwinBeam" dual-source DECT configuration and a fast kV switching DECT configuration. Material decomposition and quantitative comparison with a photon counting detector and with the presence of a bow-tie filter are also performed. The proposed method provides quantitative material- and energy-selective images examining realistic configurations for both DECT and TECT measurements. Compared to the polychromatic kV CT images, virtual monochromatic images show superior image quality. For the mouse phantom, quantitative measurements show that the differences between gadodiamide and iodine concentrations obtained using TECT and idealized photon counting CT (PCCT) are smaller than 8 and 1 mg/mL, respectively. TECT outperforms DECT for multicontrast CT imaging and is robust with respect to spectrum estimation. For the thorax phantom, the differences between the concentrations of the contrast map and the corresponding true reference values are smaller than 7 mg/mL for all of the realistic configurations. A unified framework for both DECT and TECT imaging has been established for the accurate extraction of material compositions using currently available commercial DECT configurations. The novel technique is promising to provide an urgently needed solution for several CT-based diagnostic and therapy applications, especially for the diagnosis of cardiovascular and abdominal diseases where multicontrast imaging is involved.
Decomposition of timed automata for solving scheduling problems
NASA Astrophysics Data System (ADS)
Nishi, Tatsushi; Wakatake, Masato
2014-03-01
A decomposition algorithm for scheduling problems based on a timed automata (TA) model is proposed. The problem is represented as an optimal state transition problem for TA. The model comprises the parallel composition of submodels such as jobs and resources. The procedure of the proposed methodology can be divided into two steps. The first step is to decompose the TA model into several submodels by using a decomposable condition. The second step is to combine the individual solutions of the subproblems for the decomposed submodels by the penalty function method. A feasible solution for the entire model is derived through the iterated computation of solving the subproblem for each submodel. The proposed methodology is applied to solve flowshop and jobshop scheduling problems. Computational experiments demonstrate the effectiveness of the proposed algorithm compared with a conventional TA scheduling algorithm without decomposition.
Cerebrospinal fluid PCR analysis and biochemistry in bodies with severe decomposition.
Palmiere, Cristian; Vanhaebost, Jessica; Ventura, Francesco; Bonsignore, Alessandro; Bonetti, Luca Reggiani
2015-02-01
The aim of this study was to assess whether Neisseria meningitidis, Listeria monocytogenes, Streptococcus pneumoniae and Haemophilus influenzae can be identified using the polymerase chain reaction technique in the cerebrospinal fluid of severely decomposed bodies with known, noninfectious causes of death or whether postmortem changes can lead to false positive results and thus erroneous diagnostic information. Biochemical investigations, postmortem bacteriology and real-time polymerase chain reaction analysis in cerebrospinal fluid were performed in a series of medico-legal autopsies that included noninfectious causes of death with decomposition, bacterial meningitis without decomposition, bacterial meningitis with decomposition, low respiratory tract infections with decomposition and abdominal infections with decomposition. In noninfectious causes of death with decomposition, postmortem investigations failed to reveal results consistent with generalized inflammation or bacterial infections at the time of death. Real-time polymerase chain reaction analysis in cerebrospinal fluid did not identify the studied bacteria in any of these cases. The results of this study highlight the usefulness of molecular approaches in bacteriology as well as the use of alternative biological samples in postmortem biochemistry in order to obtain suitable information even in corpses with severe decompositional changes.
Rank-based decompositions of morphological templates.
Sussner, P; Ritter, G X
2000-01-01
Methods for matrix decomposition have found numerous applications in image processing, in particular for the problem of template decomposition. Since existing matrix decomposition techniques are mainly concerned with the linear domain, we consider it timely to investigate matrix decomposition techniques in the nonlinear domain with applications in image processing. The mathematical basis for these investigations is the new theory of rank within minimax algebra. Thus far, only minimax decompositions of rank 1 and rank 2 matrices into outer product expansions are known to the image processing community. We derive a heuristic algorithm for the decomposition of matrices having arbitrary rank.
A Posteriori Error Analysis and Uncertainty Quantification for Adaptive Multiscale Operator Decomposition Methods for Multiphysics Problems
Estep, Donald
2014-04-01
Technical report TR-14-33, grant HDTRA1-09-1-0036; approved for public release, distribution unlimited. Related publications include "Barrier methods for critical exponent problems in geometric analysis and mathematical physics" (J. Erway and M. Holst, submitted for publication).
Non-linear analytic and coanalytic problems ( L_p-theory, Clifford analysis, examples)
NASA Astrophysics Data System (ADS)
Dubinskii, Yu A.; Osipenko, A. S.
2000-02-01
Two new kinds of mathematical models of variational type are put forward: non-linear analytic and coanalytic problems. The formulation of these non-linear boundary-value problems is based on a decomposition of the complete scale of Sobolev spaces into the "orthogonal" sum of analytic and coanalytic subspaces. A similar decomposition is considered in the framework of Clifford analysis. Explicit examples are presented.
Andries, Erik; Hagstrom, Thomas; Atlas, Susan R; Willman, Cheryl
2007-02-01
Linear discrimination, from the point of view of numerical linear algebra, can be treated as solving an ill-posed system of linear equations. In order to generate a solution that is robust in the presence of noise, these problems require regularization. Here, we examine the ill-posedness involved in the linear discrimination of cancer gene expression data with respect to outcome and tumor subclasses. We show that a filter factor representation, based upon Singular Value Decomposition, yields insight into the numerical ill-posedness of the hyperplane-based separation when applied to gene expression data. We also show that this representation yields useful diagnostic tools for guiding the selection of classifier parameters, thus leading to improved performance.
An investigation of the use of temporal decomposition in space mission scheduling
NASA Technical Reports Server (NTRS)
Bullington, Stanley E.; Narayanan, Venkat
1994-01-01
This research involves an examination of techniques for solving scheduling problems in long-duration space missions. The mission timeline is broken up into several time segments, which are then scheduled incrementally. Three methods are presented for identifying the activities that are to be attempted within these segments. The first method is a mathematical model, which is presented primarily to illustrate the structure of the temporal decomposition problem. Since the mathematical model is bound to be computationally prohibitive for realistic problems, two heuristic assignment procedures are also presented. The first heuristic method is based on dispatching rules for activity selection, and the second heuristic assigns performances of a model evenly over timeline segments. These heuristics are tested using a sample Space Station mission and a Spacelab mission. The results are compared with those obtained by scheduling the missions without any problem decomposition. The applicability of this approach to large-scale mission scheduling problems is also discussed.
Yizhao, Chen; Jianyang, Xia; Zhengguo, Sun; Jianlong, Li; Yiqi, Luo; Chengcheng, Gang; Zhaoqi, Wang
2015-11-06
As a key factor that determines carbon storage capacity, residence time (τE) is not well constrained in terrestrial biosphere models. This factor is recognized as an important source of model uncertainty. In this study, to understand how τE influences terrestrial carbon storage prediction in diagnostic models, we introduced a model decomposition scheme in the Boreal Ecosystem Productivity Simulator (BEPS) and then compared it with a prognostic model. The result showed that τE ranged from 32.7 to 158.2 years. The baseline residence time (τ'E) was stable for each biome, ranging from 12 to 53.7 years for forest biomes and 4.2 to 5.3 years for non-forest biomes. The spatiotemporal variations in τE were mainly determined by the environmental scalar (ξ). By comparing models, we found that the BEPS uses a more detailed pool construction but rougher parameterization for carbon allocation and decomposition. With respect to ξ comparison, the global difference in the temperature scalar (ξt) averaged 0.045, whereas the moisture scalar (ξw) had a much larger variation, with an average of 0.312. We propose that further evaluations and improvements in τ'E and ξw predictions are essential to reduce the uncertainties in predicting carbon storage by the BEPS and similar diagnostic models.
A New Domain Decomposition Approach for the Gust Response Problem
NASA Technical Reports Server (NTRS)
Scott, James R.; Atassi, Hafiz M.; Susan-Resiga, Romeo F.
2002-01-01
A domain decomposition method is developed for solving the aerodynamic/aeroacoustic problem of an airfoil in a vortical gust. The computational domain is divided into inner and outer regions wherein the governing equations are cast in different forms suitable for accurate computations in each region. Boundary conditions which ensure continuity of pressure and velocity are imposed along the interface separating the two regions. A numerical study is presented for reduced frequencies ranging from 0.1 to 3.0. It is seen that the domain decomposition approach succeeds in providing robust and grid-independent solutions.
Bailey, E A; Dutton, A W; Mattingly, M; Devasia, S; Roemer, R B
1998-01-01
Reduced-order modelling techniques can make important contributions in the control and state estimation of large systems. In hyperthermia, reduced-order modelling can provide a useful tool by which a large thermal model can be reduced to the most significant subset of its full-order modes, making real-time control and estimation possible. Two such reduction methods, one based on modal decomposition and the other on balanced realization, are compared in the context of simulated hyperthermia heat transfer problems. The results show that the modal decomposition reduction method has three significant advantages over that of balanced realization. First, modal decomposition reduced models result in less error, when compared to the full-order model, than balanced realization reduced models of similar order in problems with low or moderate advective heat transfer. Second, because the balanced realization based methods require a priori knowledge of the sensor and actuator placements, the reduced-order model is not robust to changes in sensor or actuator locations, a limitation not present in modal decomposition. Third, the modal decomposition transformation is less demanding computationally. On the other hand, in thermal problems dominated by advective heat transfer, numerical instabilities make modal decomposition based reduction problematic. Modal decomposition methods are therefore recommended for reduction of models in which advection is not dominant and research continues into methods to render balanced realization based reduction more suitable for real-time clinical hyperthermia control and estimation.
NASA Astrophysics Data System (ADS)
Yahyaei, Mohsen; Bashiri, Mahdi
2017-12-01
The hub location problem arises in a variety of domains such as transportation and telecommunication systems. In many real-world situations, hub facilities are subject to disruption. This paper deals with the multiple allocation hub location problem in the presence of facility failures. To model the problem, a two-stage stochastic formulation is developed. In the proposed model, the number of scenarios grows exponentially with the number of facilities. To alleviate this issue, two approaches are applied simultaneously. The first approach is to apply sample average approximation (SAA) to approximate the two-stage stochastic problem via sampling. Then, by applying a multi-cut Benders decomposition approach, computational performance is enhanced. Numerical studies show the effective performance of the SAA in terms of optimality gap for small problem instances with numerous scenarios. Moreover, the performance of multi-cut Benders decomposition is assessed through comparison with the classic version, and the computational results reveal the superiority of the multi-cut approach in terms of computational time and number of iterations.
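A minimal, hedged sketch of the sample average approximation (SAA) idea referenced above; the toy recourse function and scenario distribution are stand-ins, not the hub-location model:

# Toy sketch of SAA: the expected recourse cost E[Q(x, xi)] is replaced by an
# average over N sampled scenarios. The recourse function is a made-up illustration.
import numpy as np

def recourse_cost(x, xi):
    # hypothetical second-stage cost after observing scenario xi
    return np.maximum(xi - x, 0.0).sum()

rng = np.random.default_rng(1)
scenarios = rng.uniform(0, 10, size=(1000, 5))   # N = 1000 sampled scenarios

def saa_objective(x, first_stage_cost=2.0):
    return first_stage_cost * x.sum() + np.mean([recourse_cost(x, xi) for xi in scenarios])

x_candidate = np.full(5, 4.0)
print("SAA estimate of total cost:", saa_objective(x_candidate))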
Parallel processing for pitch splitting decomposition
NASA Astrophysics Data System (ADS)
Barnes, Levi; Li, Yong; Wadkins, David; Biederman, Steve; Miloslavsky, Alex; Cork, Chris
2009-10-01
Decomposition of an input pattern in preparation for a double patterning process is an inherently global problem in which the influence of a local decomposition decision can be felt across an entire pattern. In spite of this, a large portion of the work can be massively distributed. Here, we discuss the advantages of geometric distribution for polygon operations with limited range of influence. Further, we have found that even the naturally global "coloring" step can, in large part, be handled in a geometrically local manner. In some practical cases, up to 70% of the work can be distributed geometrically. We also describe the methods for partitioning the problem into local pieces and present scaling data up to 100 CPUs. These techniques reduce DPT decomposition runtime by orders of magnitude.
Calibration methods influence quantitative material decomposition in photon-counting spectral CT
NASA Astrophysics Data System (ADS)
Curtis, Tyler E.; Roeder, Ryan K.
2017-03-01
Photon-counting detectors and nanoparticle contrast agents can potentially enable molecular imaging and material decomposition in computed tomography (CT). Material decomposition has been investigated using both simulated and acquired data sets. However, the effect of calibration methods on material decomposition has not been systematically investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on quantitative material decomposition. A commercially available photon-counting spectral micro-CT (MARS Bioimaging) was used to acquire images with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material basis matrix values were determined using multiple linear regression models and material decomposition was performed using a maximum a posteriori estimator. The accuracy of quantitative material decomposition was evaluated by the root mean squared error (RMSE), specificity, sensitivity, and area under the curve (AUC). An increased maximum concentration (range) in the calibration significantly improved RMSE, specificity and AUC. The effects of an increased number of concentrations in the calibration were not statistically significant for the conditions in this study. The overall results demonstrated that the accuracy of quantitative material decomposition in spectral CT is significantly influenced by calibration methods, which must therefore be carefully considered for the intended diagnostic imaging application.
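As a hedged sketch of the calibration step described above, a multiple linear regression estimate of a material basis matrix from synthetic calibration data (the bin count, material count, and noise level are illustrative assumptions):

# Hedged sketch: estimate a material basis matrix by least squares from
# calibration measurements (e.g., 5 energy bins, 3 basis materials, 40 vials).
import numpy as np

rng = np.random.default_rng(6)
true_basis = rng.uniform(0.5, 2.0, size=(5, 3))            # energy bins x materials
concentrations = rng.uniform(0, 10, size=(40, 3))          # known calibration concentrations
signals = concentrations @ true_basis.T + 0.05 * rng.standard_normal((40, 5))

# least-squares fit of the basis matrix: signals ~ concentrations @ basis.T
basis_fit, *_ = np.linalg.lstsq(concentrations, signals, rcond=None)
print("recovered basis matrix (materials x energy bins):\n", basis_fit)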
Wiimote Experiments: 3-D Inclined Plane Problem for Reinforcing the Vector Concept
ERIC Educational Resources Information Center
Kawam, Alae; Kouh, Minjoon
2011-01-01
In an introductory physics course where students first learn about vectors, they oftentimes struggle with the concept of vector addition and decomposition. For example, the classic physics problem involving a mass on an inclined plane requires the decomposition of the force of gravity into two directions that are parallel and perpendicular to the…
Munoz, F. D.; Hobbs, B. F.; Watson, J. -P.
2016-02-01
A novel two-phase bounding and decomposition approach to compute optimal and near-optimal solutions to large-scale mixed-integer investment planning problems is proposed; it considers a large number of operating subproblems, each of which is a convex optimization. Our motivating application is the planning of power transmission and generation in which policy constraints are designed to incentivize high amounts of intermittent generation in electric power systems. The bounding phase exploits Jensen's inequality to define a lower bound, which we extend to stochastic programs that use expected-value constraints to enforce policy objectives. The decomposition phase, in which the bounds are tightened, improves upon the standard Benders' algorithm by accelerating the convergence of the bounds. The lower bound is tightened by using a Jensen's inequality-based approach to introduce an auxiliary lower bound into the Benders master problem. Upper bounds for both phases are computed using a sub-sampling approach executed on a parallel computer system. Numerical results show that only the bounding phase is necessary if loose optimality gaps are acceptable, but the decomposition phase is required to attain tighter optimality gaps. Moreover, use of both phases performs better, in terms of convergence speed, than attempting to solve the problem using just the bounding phase or regular Benders decomposition alone.
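In generic notation (a hedged sketch, not the authors' exact formulation), the Jensen's-inequality lower bound works as follows: for a second-stage (recourse) cost Q(x, ξ) that is convex in the uncertain data ξ,

\[
\mathbb{E}_{\xi}\!\left[Q(x,\xi)\right] \;\ge\; Q\!\left(x,\mathbb{E}[\xi]\right) \quad \text{for every } x,
\]

so the "expected-scenario" planning problem

\[
\min_{x}\; c^{\top}x + Q\!\left(x,\mathbb{E}[\xi]\right)
\]

gives a valid lower bound on the optimal value of the full stochastic program \(\min_x\, c^{\top}x + \mathbb{E}_{\xi}[Q(x,\xi)]\).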
Rotational-path decomposition based recursive planning for spacecraft attitude reorientation
NASA Astrophysics Data System (ADS)
Xu, Rui; Wang, Hui; Xu, Wenming; Cui, Pingyuan; Zhu, Shengying
2018-02-01
Spacecraft reorientation is a common task in many space missions. With multiple pointing constraints, the constrained spacecraft reorientation planning problem is very difficult to solve. To deal with this problem, an efficient rotational-path decomposition based recursive planning (RDRP) method is proposed in this paper. A uniform pointing-constraint-ignored attitude rotation planning process is designed to solve all rotations without considering pointing constraints. Then the whole path is checked node by node. If any pointing constraint is violated, the nearest critical increment approach is used to generate feasible alternative nodes in the process of rotational-path decomposition. As the planning path of each subdivision may still violate pointing constraints, multiple decompositions may be needed, and the reorientation planning is designed in a recursive manner. Simulation results demonstrate the effectiveness of the proposed method. The proposed method has been successfully applied to solve the onboard constrained attitude reorientation planning problem in two SPARK microsatellites, which were developed by the Shanghai Engineering Center for Microsatellites and launched on 22 December 2016.
Vasilyeva, Marina; Laski, Elida V; Shen, Chen
2015-10-01
The present study tested the hypothesis that children's fluency with basic number facts and knowledge of computational strategies, derived from early arithmetic experience, predicts their performance on complex arithmetic problems. First-grade students from the United States and Taiwan (N = 152, mean age: 7.3 years) were presented with problems that differed in difficulty: single-, mixed-, and double-digit addition. Children's strategy use varied as a function of problem difficulty, consistent with Siegler's theory of strategy choice. The use of the decomposition strategy interacted with computational fluency in predicting the accuracy of double-digit addition. Further, the frequency of decomposition and computational fluency fully mediated cross-national differences in accuracy on these complex arithmetic problems. The results indicate the importance of both fluency with basic number facts and the decomposition strategy for later arithmetic performance. (c) 2015 APA, all rights reserved.
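As a concrete illustration of the decomposition strategy (the numbers are ours, not the study's), a double-digit addition can be rewritten in terms of easier known facts:

\[
48 + 27 \;=\; 48 + (20 + 7) \;=\; (48 + 20) + 7 \;=\; 68 + 7 \;=\; 75 .
\]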
Automatic single-image-based rain streaks removal via image decomposition.
Kang, Li-Wei; Lin, Chia-Wen; Fu, Yu-Hsiang
2012-04-01
Rain removal from a video is a challenging problem and has been recently investigated extensively. Nevertheless, the problem of rain removal from a single image was rarely studied in the literature, where no temporal information among successive images can be exploited, making the problem very challenging. In this paper, we propose a single-image-based rain removal framework via properly formulating rain removal as an image decomposition problem based on morphological component analysis. Instead of directly applying a conventional image decomposition technique, the proposed method first decomposes an image into the low- and high-frequency (HF) parts using a bilateral filter. The HF part is then decomposed into a "rain component" and a "nonrain component" by performing dictionary learning and sparse coding. As a result, the rain component can be successfully removed from the image while preserving most original image details. Experimental results demonstrate the efficacy of the proposed algorithm.
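A minimal, hedged sketch of the first decomposition step described above (the "image" is synthetic noise standing in for a rainy photograph, and the filter parameters are illustrative guesses, not the paper's settings):

# Split an image into low-frequency (LF) and high-frequency (HF) parts
# with a bilateral filter; rain streaks are expected to live in the HF part.
import numpy as np
import cv2

rng = np.random.default_rng(7)
img = np.clip(rng.normal(128, 30, size=(128, 128)), 0, 255).astype(np.float32)

lf = cv2.bilateralFilter(img, 9, 75, 75)   # edge-preserving smoothing -> low-frequency part
hf = img - lf                              # high-frequency part

# In the paper's framework, `hf` would next be split into "rain" and "non-rain"
# components via dictionary learning and sparse coding on image patches.
print("LF std:", float(lf.std()), "HF std:", float(hf.std()))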
Approximation, abstraction and decomposition in search and optimization
NASA Technical Reports Server (NTRS)
Ellman, Thomas
1992-01-01
In this paper, I discuss four different areas of my research. One portion of my research has focused on automatic synthesis of search control heuristics for constraint satisfaction problems (CSPs). I have developed techniques for automatically synthesizing two types of heuristics for CSPs: Filtering functions are used to remove portions of a search space from consideration. Another portion of my research is focused on automatic synthesis of hierarchic algorithms for solving constraint satisfaction problems (CSPs). I have developed a technique for constructing hierarchic problem solvers based on numeric interval algebra. Another portion of my research is focused on automatic decomposition of design optimization problems. We are using the design of racing yacht hulls as a testbed domain for this research. Decomposition is especially important in the design of complex physical shapes such as yacht hulls. Another portion of my research is focused on intelligent model selection in design optimization. The model selection problem results from the difficulty of using exact models to analyze the performance of candidate designs.
Application of decomposition techniques to the preliminary design of a transport aircraft
NASA Technical Reports Server (NTRS)
Rogan, J. E.; Mcelveen, R. P.; Kolb, M. A.
1986-01-01
A multifaceted decomposition of a nonlinear constrained optimization problem describing the preliminary design process for a transport aircraft has been made. Flight dynamics, flexible aircraft loads and deformations, and preliminary structural design subproblems appear prominently in the decomposition. The use of design process decomposition for scheduling design projects, a new system integration approach to configuration control, and the application of object-centered programming to a new generation of design tools are discussed.
Regularization of nonlinear decomposition of spectral x-ray projection images.
Ducros, Nicolas; Abascal, Juan Felipe Perez-Juste; Sixou, Bruno; Rit, Simon; Peyrin, Françoise
2017-09-01
Exploiting the x-ray measurements obtained in different energy bins, spectral computed tomography (CT) has the ability to recover the 3-D description of a patient in a material basis. This may be achieved by solving two subproblems, namely the material decomposition and the tomographic reconstruction problems. In this work, we address the material decomposition of spectral x-ray projection images, which is a nonlinear ill-posed problem. Our main contribution is to introduce a material-dependent spatial regularization in the projection domain. The decomposition problem is solved iteratively using a Gauss-Newton algorithm that can benefit from fast linear solvers. A Matlab implementation is available online. The proposed regularized weighted least squares Gauss-Newton algorithm (RWLS-GN) is validated on numerical simulations of a thorax phantom made of up to five materials (soft tissue, bone, lung, adipose tissue, and gadolinium), which is scanned with a 120 kV source and imaged by a 4-bin photon counting detector. To evaluate the performance of our algorithm, different scenarios are created by varying the number of incident photons, the concentration of the marker and the configuration of the phantom. The RWLS-GN method is compared to the reference maximum likelihood Nelder-Mead algorithm (ML-NM). The convergence of the proposed method and its dependence on the regularization parameter are also studied. We show that material decomposition is feasible with the proposed method and that it converges in a few iterations. Material decomposition with ML-NM was very sensitive to noise, leading to decomposed images highly affected by noise and artifacts, even in the best-case scenario. The proposed method was less sensitive to noise and improved the contrast-to-noise ratio of the gadolinium image. Results were superior to those provided by ML-NM in terms of image quality, and decomposition was 70 times faster. For the assessed experiments, material decomposition was possible with the proposed method when the number of incident photons was equal to or larger than 10⁵ and when the marker concentration was equal to or larger than 0.03 g·cm⁻³. The proposed method efficiently solves the nonlinear decomposition problem for spectral CT, which opens up new possibilities such as material-specific regularization in the projection domain and a parallelization framework, in which projections are solved in parallel. © 2017 American Association of Physicists in Medicine.
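A hedged sketch of a regularized Gauss-Newton iteration of the kind described above, applied to a stand-in nonlinear forward model rather than the paper's weighted spectral x-ray model (all names and values are illustrative):

# Damped Gauss-Newton for min 0.5*||forward(a) - measurement||^2 + 0.5*lam*||a||^2,
# with a numerical Jacobian. The forward model is a toy, not the paper's.
import numpy as np

def forward(a):
    # hypothetical nonlinear forward model mapping material thicknesses to measurements
    return np.array([np.exp(-(a[0] + 0.5 * a[1])), np.exp(-(0.3 * a[0] + a[1]))])

measurement = np.array([0.4, 0.3])
lam = 1e-2                      # regularization weight (illustrative)
a = np.array([0.5, 0.5])        # initial guess

def jacobian(f, a, eps=1e-6):
    # simple forward-difference Jacobian
    n, m = len(f(a)), len(a)
    J = np.zeros((n, m))
    for j in range(m):
        da = np.zeros(m); da[j] = eps
        J[:, j] = (f(a + da) - f(a)) / eps
    return J

for _ in range(20):
    r = forward(a) - measurement
    J = jacobian(forward, a)
    # regularized normal equations: (J^T J + lam I) delta = -(J^T r + lam a)
    delta = np.linalg.solve(J.T @ J + lam * np.eye(len(a)), -(J.T @ r + lam * a))
    a = a + delta

print("estimated parameters:", a)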
Decomposability and scalability in space-based observatory scheduling
NASA Technical Reports Server (NTRS)
Muscettola, Nicola; Smith, Stephen F.
1992-01-01
In this paper, we discuss issues of problem and model decomposition within the HSTS scheduling framework. HSTS was developed and originally applied in the context of the Hubble Space Telescope (HST) scheduling problem, motivated by the limitations of the current solution and, more generally, the insufficiency of classical planning and scheduling approaches in this problem context. We first summarize the salient architectural characteristics of HSTS and their relationship to previous scheduling and AI planning research. Then, we describe some key problem decomposition techniques supported by HSTS and underlying our integrated planning and scheduling approach, and we discuss the leverage they provide in solving space-based observatory scheduling problems.
Projection decomposition algorithm for dual-energy computed tomography via deep neural network.
Xu, Yifu; Yan, Bin; Chen, Jian; Zeng, Lei; Li, Lei
2018-03-15
Dual-energy computed tomography (DECT) has been widely used to improve identification of substances from different spectral information. Decomposition of the mixed test samples into two materials relies on a well-calibrated material decomposition function. This work aims to establish and validate a data-driven algorithm for estimation of the decomposition function. A deep neural network (DNN) consisting of two sub-nets is proposed to solve the projection decomposition problem. The compressing sub-net, essentially a stacked auto-encoder (SAE), learns a compact representation of the energy spectrum. The decomposing sub-net, with a two-layer structure, fits the nonlinear transform between the energy projection and the basis material thickness. The proposed DNN not only delivers images with lower standard deviation and higher quality on both simulated and real data, but also yields the best performance in cases with photon noise. Moreover, the DNN takes only 0.4 s to generate a decomposition of size 360 × 512, which is about 200 times faster than the competing algorithms. The DNN model is applicable to decomposition tasks with different dual energies. Experimental results demonstrated the strong function-fitting ability of the DNN. Thus, the deep learning paradigm provides a promising approach to solving the nonlinear problem in DECT.
Layout compliance for triple patterning lithography: an iterative approach
NASA Astrophysics Data System (ADS)
Yu, Bei; Garreton, Gilda; Pan, David Z.
2014-10-01
As the semiconductor process scales further down, the industry encounters many lithography-related issues. At the 14 nm logic node and beyond, triple patterning lithography (TPL) is one of the most promising techniques for the Metal1 layer and possibly the Via0 layer. As one of the most challenging problems in TPL, layout decomposition has recently received more attention from both industry and academia. Ideally, the decomposer should point out locations in the layout that are not triple-patterning decomposable and therefore require manual intervention by designers. A traditional decomposition flow is an iterative process, where each iteration consists of an automatic layout decomposition step and a manual layout modification task. However, due to the NP-hardness of triple patterning layout decomposition, automatic full-chip layout decomposition requires long computational time, and therefore design closure issues continue to linger in the traditional flow. Challenged by this issue, we present a novel incremental layout decomposition framework to facilitate accelerated iterative decomposition. In the first iteration, our decomposer not only points out all conflicts but also provides suggestions to fix them. After the layout modification, instead of solving the full-chip problem from scratch, our decomposer can provide a quick solution for a selected portion of the layout. We believe this framework is both efficient in terms of performance and designer friendly.
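As a hedged illustration of the core coloring subproblem behind TPL layout decomposition (the conflict graph is a made-up toy, not an industrial layout, and real decomposers use far more sophisticated techniques):

# Assign one of three masks (colors) to each layout feature so that no two
# conflicting features share a mask; backtracking over a toy conflict graph.
def three_color(conflicts, n_nodes, colors=3):
    """Backtracking 3-coloring; conflicts is a set of (i, j) pairs."""
    adjacency = {i: set() for i in range(n_nodes)}
    for i, j in conflicts:
        adjacency[i].add(j)
        adjacency[j].add(i)
    assignment = {}

    def assign(node):
        if node == n_nodes:
            return True
        for c in range(colors):
            if all(assignment.get(nb) != c for nb in adjacency[node]):
                assignment[node] = c
                if assign(node + 1):
                    return True
                del assignment[node]
        return False

    return assignment if assign(0) else None

print(three_color({(0, 1), (1, 2), (0, 2), (2, 3)}, 4))   # {0: 0, 1: 1, 2: 2, 3: 0}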
Distributed Prognostics based on Structural Model Decomposition
NASA Technical Reports Server (NTRS)
Daigle, Matthew J.; Bregon, Anibal; Roychoudhury, I.
2014-01-01
Within systems health management, prognostics focuses on predicting the remaining useful life of a system. In the model-based prognostics paradigm, physics-based models are constructed that describe the operation of a system and how it fails. Such approaches consist of an estimation phase, in which the health state of the system is first identified, and a prediction phase, in which the health state is projected forward in time to determine the end of life. Centralized solutions to these problems are often computationally expensive, do not scale well as the size of the system grows, and introduce a single point of failure. In this paper, we propose a novel distributed model-based prognostics scheme that formally describes how to decompose both the estimation and prediction problems into independent local subproblems whose solutions may be easily composed into a global solution. The decomposition of the prognostics problem is achieved through structural decomposition of the underlying models. The decomposition algorithm creates from the global system model a set of local submodels suitable for prognostics. Independent local estimation and prediction problems are formed based on these local submodels, resulting in a scalable distributed prognostics approach that allows the local subproblems to be solved in parallel, thus offering increases in computational efficiency. Using a centrifugal pump as a case study, we perform a number of simulation-based experiments to demonstrate the distributed approach, compare the performance with a centralized approach, and establish its scalability. Index Terms: model-based prognostics, distributed prognostics, structural model decomposition
Problem decomposition by mutual information and force-based clustering
NASA Astrophysics Data System (ADS)
Otero, Richard Edward
The scale of engineering problems has sharply increased over the last twenty years. Larger coupled systems, increasing complexity, and limited resources create a need for methods that automatically decompose problems into manageable sub-problems by discovering and leveraging problem structure. The ability to learn the coupling (inter-dependence) structure and reorganize the original problem could lead to large reductions in the time to analyze complex problems. Such decomposition methods could also provide engineering insight into the fundamental physics driving the problem solution. This work advances the current state of the art in engineering decomposition through the application of techniques originally developed within computer science and information theory. The work describes the current state of automatic problem decomposition in engineering and utilizes several promising ideas to advance the state of the practice. Mutual information is a novel metric for data dependence that works on both continuous and discrete data. Mutual information can measure both the linear and non-linear dependence between variables, without the limitations of linear dependence measured through covariance. Mutual information is also able to handle data that does not have derivative information, unlike other metrics that require it. The value of mutual information to engineering design work is demonstrated on a planetary entry problem, using a novel tool developed in this work for planetary entry system synthesis. A graphical method, force-based clustering, is used to discover related sub-graph structure as a function of problem structure and links ranked by their mutual information. This method does not require the stochastic use of neural networks and could be used with any link-ranking method currently utilized in the field. Application of this method is demonstrated on a large, coupled low-thrust trajectory problem. Mutual information also serves as the basis for an alternative global optimizer, called MIMIC, which is unrelated to genetic algorithms. Advancing the current practice, this work demonstrates the use of MIMIC as a global method that explicitly models problem structure with mutual information, providing an alternate method for globally searching multi-modal domains. By leveraging discovered problem inter-dependencies, MIMIC may be appropriate for highly coupled problems or those with large function evaluation cost. This work introduces a useful addition to the MIMIC algorithm that enables its use on continuous input variables. By leveraging automatic decision tree generation methods from machine learning and a set of randomly generated test problems, decision trees for selecting which method to apply are also created, quantifying decomposition performance over a large region of the design space.
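As a hedged illustration of the mutual-information metric discussed above (the histogram estimator, bin count, and synthetic data are our own choices, not the dissertation's implementation):

# Estimate mutual information between two continuous variables with a 2-D histogram,
# and compare with correlation on a nonlinear (near-zero covariance) dependence.
import numpy as np

def mutual_information(x, y, bins=32):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nonzero = pxy > 0
    return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))

rng = np.random.default_rng(2)
x = rng.standard_normal(5000)
y = x**2 + 0.1 * rng.standard_normal(5000)   # nonlinear dependence, near-zero covariance
print("MI(x, y) ~", mutual_information(x, y))
print("corr(x, y) ~", np.corrcoef(x, y)[0, 1])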
Interface conditions for domain decomposition with radical grid refinement
NASA Technical Reports Server (NTRS)
Scroggs, Jeffrey S.
1991-01-01
Interface conditions for coupling the domains in a physically motivated domain decomposition method are discussed. The domain decomposition is based on an asymptotic-induced method for the numerical solution of hyperbolic conservation laws with small viscosity. The method consists of multiple stages. The first stage is to obtain a first approximation using a first-order method, such as the Godunov scheme. Subsequent stages of the method involve solving internal-layer problems via a domain decomposition. The method is derived and justified via singular perturbation techniques.
Dual energy computed tomography for the head.
Naruto, Norihito; Itoh, Toshihide; Noguchi, Kyo
2018-02-01
Dual energy CT (DECT) is a promising technology that provides better diagnostic accuracy in several brain diseases. DECT can generate various types of CT images from a single acquisition data set at high kV and low kV based on material decomposition algorithms. The two-material decomposition algorithm can separate bone/calcification from iodine accurately. The three-material decomposition algorithm can generate a virtual non-contrast image, which helps to identify conditions such as brain hemorrhage. A virtual monochromatic image has the potential to eliminate metal artifacts by reducing beam-hardening effects. DECT also enables exploration of advanced imaging to make diagnosis easier. One such novel application of DECT is the X-Map, which helps to visualize ischemic stroke in the brain without using iodine contrast medium.
New monitoring by thermogravimetry for radiation degradation of EVA
NASA Astrophysics Data System (ADS)
Boguski, J.; Przybytniak, G.; Łyczko, K.
2014-07-01
Radiation ageing of ethylene vinyl-acetate copolymer (EVA), used as a cable jacket in nuclear power plants, was carried out by gamma-ray irradiation, and the degradation was monitored by thermogravimetric analysis (TGA). The EVA decomposition rate in air, measured isothermally at 400 °C, decreased with increasing dose and also with decreasing dose rate. The behavior of the EVA cable jacket indicated that the decomposition rate at 400 °C was reduced as oxidation increased. The elongation at break measured by tensile testing of the radiation-aged EVA was closely related to the decomposition rate at 400 °C; therefore, TGA might be applied as a diagnostic technique for cable degradation.
Dong, Jianwu; Chen, Feng; Zhou, Dong; Liu, Tian; Yu, Zhaofei; Wang, Yi
2017-03-01
Existence of low SNR regions and rapid-phase variations pose challenges to spatial phase unwrapping algorithms. Global optimization-based phase unwrapping methods are widely used, but are significantly slower than greedy methods. In this paper, dual decomposition acceleration is introduced to speed up a three-dimensional graph cut-based phase unwrapping algorithm. The phase unwrapping problem is formulated as a global discrete energy minimization problem, whereas the technique of dual decomposition is used to increase the computational efficiency by splitting the full problem into overlapping subproblems and enforcing the congruence of overlapping variables. Using three dimensional (3D) multiecho gradient echo images from an agarose phantom and five brain hemorrhage patients, we compared this proposed method with an unaccelerated graph cut-based method. Experimental results show up to 18-fold acceleration in computation time. Dual decomposition significantly improves the computational efficiency of 3D graph cut-based phase unwrapping algorithms. Magn Reson Med 77:1353-1358, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
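A schematic of the dual-decomposition idea described above, written in generic notation rather than the paper's: the energy is split into subproblems whose shared variables are forced to agree through Lagrange multipliers.

\[
\min_{x}\; E_1(x) + E_2(x)
\;\;\equiv\;\;
\min_{x_1,\,x_2}\; E_1(x_1) + E_2(x_2)
\quad \text{s.t.}\quad x_1 = x_2 \ \text{on the overlap},
\]

and the multipliers \(\lambda\) attached to the coupling constraint are updated by subgradient steps,

\[
\lambda \;\leftarrow\; \lambda + \alpha\left(x_1^{\star} - x_2^{\star}\right),
\]

so that between multiplier updates each subproblem can be minimized independently (in the paper's setting, by graph cuts).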
Beyond Low Rank + Sparse: Multi-scale Low Rank Matrix Decomposition
Ong, Frank; Lustig, Michael
2016-01-01
We present a natural generalization of the recent low rank + sparse matrix decomposition and consider the decomposition of matrices into components of multiple scales. Such decomposition is well motivated in practice as data matrices often exhibit local correlations in multiple scales. Concretely, we propose a multi-scale low rank modeling that represents a data matrix as a sum of block-wise low rank matrices with increasing scales of block sizes. We then consider the inverse problem of decomposing the data matrix into its multi-scale low rank components and approach the problem via a convex formulation. Theoretically, we show that under various incoherence conditions, the convex program recovers the multi-scale low rank components either exactly or approximately. Practically, we provide guidance on selecting the regularization parameters and incorporate cycle spinning to reduce blocking artifacts. Experimentally, we show that the multi-scale low rank decomposition provides a more intuitive decomposition than conventional low rank methods and demonstrate its effectiveness in four applications, including illumination normalization for face images, motion separation for surveillance videos, multi-scale modeling of the dynamic contrast enhanced magnetic resonance imaging and collaborative filtering exploiting age information. PMID:28450978
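In generic notation (a hedged sketch, not necessarily the authors' exact objective), the convex multi-scale low rank formulation can be written as

\[
\min_{\{X_i\}}\; \sum_{i} \lambda_i \sum_{b \,\in\, \mathcal{P}_i} \left\| R_b(X_i) \right\|_{*}
\quad\text{s.t.}\quad \sum_{i} X_i = Y,
\]

where \(Y\) is the data matrix, \(\mathcal{P}_i\) is the block partition at scale \(i\), \(R_b\) extracts block \(b\), \(\|\cdot\|_*\) is the nuclear norm, and the \(\lambda_i\) are scale-dependent regularization weights.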
NASA Astrophysics Data System (ADS)
Barbini, L.; Eltabach, M.; Hillis, A. J.; du Bois, J. L.
2018-03-01
In rotating machine diagnosis, different spectral tools are used to analyse vibration signals. Despite their good diagnostic performance, such tools are usually highly refined, computationally complex to implement, and require oversight by an expert user. This paper introduces an intuitive and easy-to-implement method for vibration analysis: amplitude cyclic frequency decomposition. This method first separates vibration signals according to their spectral amplitudes and then uses the squared envelope spectrum to reveal the presence of cyclostationarity at each amplitude level. The intuitive idea is that in a rotating machine different components contribute vibrations at different amplitudes; for instance, defective bearings contribute a very weak signal in contrast to gears. This paper also introduces a new quantity, the decomposition squared envelope spectrum, which enables separation between the components of a rotating machine. The amplitude cyclic frequency decomposition and the decomposition squared envelope spectrum are tested on real-world signals, at both stationary and varying speeds, using data from a wind turbine gearbox and an aircraft engine. In addition, a benchmark comparison to the spectral correlation method is presented.
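As a hedged sketch of the squared envelope spectrum used in the method above (the synthetic test signal, sampling rate, and frequencies are illustrative assumptions, not the paper's data):

# Squared envelope spectrum (SES): square the analytic-signal envelope and
# Fourier transform it to reveal cyclic (e.g., bearing fault) frequencies.
import numpy as np
from scipy.signal import hilbert

fs = 20_000                                   # sampling rate in Hz (illustrative)
t = np.arange(0, 1.0, 1 / fs)
fault, carrier = 37.0, 3_000.0                # hypothetical fault and resonance frequencies
x = (1 + 0.5 * np.cos(2 * np.pi * fault * t)) * np.cos(2 * np.pi * carrier * t)
x += 0.1 * np.random.default_rng(3).standard_normal(len(t))

envelope_sq = np.abs(hilbert(x)) ** 2
ses = np.abs(np.fft.rfft(envelope_sq - envelope_sq.mean())) ** 2
freqs = np.fft.rfftfreq(len(t), 1 / fs)
print("dominant cyclic frequency ~", freqs[np.argmax(ses[1:]) + 1], "Hz")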
NASA Technical Reports Server (NTRS)
Farhat, Charbel; Rixen, Daniel
1996-01-01
We present an optimal preconditioning algorithm that is equally applicable to the dual (FETI) and primal (Balancing) Schur complement domain decomposition methods, and which successfully addresses the problems of subdomain heterogeneities including the effects of large jumps of coefficients. The proposed preconditioner is derived from energy principles and embeds a new coarsening operator that propagates the error globally and accelerates convergence. The resulting iterative solver is illustrated with the solution of highly heterogeneous elasticity problems.
Linear decomposition approach for a class of nonconvex programming problems.
Shen, Peiping; Wang, Chunfeng
2017-01-01
This paper presents a linear decomposition approach for a class of nonconvex programming problems by dividing the input space into polynomially many grids. It shows that under certain assumptions the original problem can be transformed and decomposed into a polynomial number of equivalent linear programming subproblems. Based on solving a series of linear programming subproblems corresponding to those grid points, we can obtain a near-optimal solution of the original problem. Compared to existing results in the literature, the proposed algorithm does not require the assumptions of quasi-concavity and differentiability of the objective function, and it differs significantly from them, giving an interesting approach to solving the problem with a reduced running time.
A linear decomposition method for large optimization problems. Blueprint for development
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.
1982-01-01
A method is proposed for decomposing large optimization problems encountered in the design of engineering systems such as an aircraft into a number of smaller subproblems. The decomposition is achieved by organizing the problem and the subordinated subproblems in a tree hierarchy and optimizing each subsystem separately. Coupling of the subproblems is accounted for by subsequent optimization of the entire system based on sensitivities of the suboptimization problem solutions at each level of the tree to variables of the next higher level. A formalization of the procedure suitable for computer implementation is developed and the state of readiness of the implementation building blocks is reviewed showing that the ingredients for the development are on the shelf. The decomposition method is also shown to be compatible with the natural human organization of the design process of engineering systems. The method is also examined with respect to the trends in computer hardware and software progress to point out that its efficiency can be amplified by network computing using parallel processors.
Investigation of automated task learning, decomposition and scheduling
NASA Technical Reports Server (NTRS)
Livingston, David L.; Serpen, Gursel; Masti, Chandrashekar L.
1990-01-01
The details and results of research conducted on the application of neural networks to task planning and decomposition are presented. Task planning and decomposition are operations that humans perform in a reasonably efficient manner. Without the use of good heuristics and usually much human interaction, automatic planners and decomposers generally do not perform well due to the intractable nature of the problems under consideration. The human-like performance of neural networks has shown promise for generating acceptable solutions to intractable problems such as planning and decomposition. This was the primary reasoning behind attempting the study. The basis for the work is the use of state machines to model tasks. State machine models provide a useful means for examining the structure of tasks since many formal techniques have been developed for their analysis and synthesis. The approach is to integrate the strong algebraic foundations of state machines with the heretofore trial-and-error approach to neural network synthesis.
NASA Astrophysics Data System (ADS)
Kuo, Wen-Shuo; Chen, Shean-Jen
2012-02-01
Light-exposure-mediated temperature increases that markedly accelerate the degradation of indocyanine green (ICG) in aqueous solutions by thermal decomposition have been a serious medical problem. In this work, we present an example of using gold nanorods (Au NRs) simultaneously as photodynamic and photothermal agents to destroy malignant cells. Au NRs were successfully conjugated with the hydrophilic photosensitizer ICG to achieve photodynamic therapy (PDT) and photothermal therapy (PTT). We also demonstrated that Au NRs conjugated with ICG displayed high chemical stability and acted as a promising diagnostic probe. Because the conjugate remained stable even at the higher temperatures mediated by laser irradiation, the combination of PDT and PTT killed cancer cells more efficiently than PDT or PTT treatment alone, enhanced the effectiveness of photodestruction, and improved photostability.
NASA Astrophysics Data System (ADS)
Udomsungworagul, A.; Charnsethikul, P.
2018-03-01
This article introduces a methodology for solving large-scale two-phase linear programming problems, with a case study of multiple-time-period animal diet problems under uncertainty in both raw-material nutrient content and finished-product demand. Assumptions allowing multiple product formulas to be manufactured in the same time period and allowing raw-material and finished-product inventory to be held have been added. Dantzig-Wolfe decomposition, Benders decomposition and column generation techniques have been combined and applied to solve the problem. The proposed procedure was programmed using VBA and the Solver tool in Microsoft Excel. A case study was used to test the procedure in terms of efficiency and effectiveness trade-offs.
Approaches for Subgrid Parameterization: Does Scaling Help?
NASA Astrophysics Data System (ADS)
Yano, Jun-Ichi
2016-04-01
Arguably, scaling behavior is a well-established fact in many geophysical systems, and there are already many theoretical studies elucidating this issue. However, the scaling law has been slow to be introduced into "operational" geophysical modelling, notably in weather forecast and climate projection models. The main purpose of this presentation is to ask why, and to try to answer this question. As a reference point, the presentation reviews the three major approaches to traditional subgrid parameterization: moment, PDF (probability density function), and mode decomposition. The moment expansion is a standard method for describing subgrid-scale turbulent flows in both the atmosphere and the oceans. The PDF approach is intuitively appealing, as it deals with the distribution of subgrid-scale variables in a more direct manner. The third category, originally proposed by Aubry et al (1988) in the context of wall boundary-layer turbulence, is specifically designed to represent coherencies in a compact manner by a low-dimensional dynamical system. Their original proposal adopts the proper orthogonal decomposition (POD, or empirical orthogonal functions, EOF) as the mode-decomposition basis, but the methodology can easily be generalized to any decomposition basis. The mass-flux formulation currently adopted in the majority of atmospheric models for parameterizing convection can also be considered a special case of mode decomposition, adopting segmentally-constant modes as the expansion basis. The mode decomposition can, furthermore, be re-interpreted as a type of Galerkin approach for numerically modelling the subgrid-scale processes. Simple extrapolation of this re-interpretation further suggests that the subgrid parameterization problem may be re-interpreted as a type of mesh-refinement problem in numerical modelling, and we furthermore see a link between the subgrid parameterization and downscaling problems along this line. The mode decomposition approach would also be the best framework for linking the traditional parameterizations with the scaling perspectives. However, by seeing the link more clearly, we also see the strengths and weaknesses of introducing the scaling perspectives into parameterizations. Any diagnosis under a mode decomposition would immediately reveal the power-law nature of the spectrum; however, exploiting this knowledge in operational parameterization is a different story. It is symbolic that POD studies have focused on representing the largest-scale coherency within a grid box under a high truncation, and this problem is already hard enough. Looked at differently, the scaling law is a very concise means of characterizing many subgrid-scale variabilities in systems. We may even argue that the scaling law can provide almost complete subgrid-scale information for constructing a parameterization, but with a major missing link: its amplitude must be specified by an additional condition. This condition is called "closure" in the parameterization problem and is known to be a tough problem. We should also realize that studies of scaling behavior tend to be statistical, in the sense that they hardly provide complete information for constructing a parameterization: can we specify the coefficients of all the decomposition modes perfectly from a scaling law when only the first few leading modes are specified?
Arguably, the renormalization group (RNG) is a very powerful tool for reducing a system with scaling behavior to a low dimension, say, under an appropriate mode-decomposition procedure. However, RNG is an analytical tool, and it is extremely hard to apply it to real, complex geophysical systems. It appears that there is still a long way to go before we can begin to exploit the scaling law to construct operational subgrid parameterizations in an effective manner.
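Because the mode-decomposition viewpoint above centers on POD/EOF bases, a minimal sketch of computing POD modes from a snapshot matrix via the SVD may help fix ideas (the data are synthetic and all names are illustrative):

# Proper orthogonal decomposition (POD) of a snapshot matrix via SVD.
# Columns of `snapshots` are states at successive times; data are synthetic.
import numpy as np

rng = np.random.default_rng(4)
space = np.linspace(0, 1, 200)
times = np.linspace(0, 1, 50)
# two coherent structures plus noise (a stand-in for a subgrid-scale field)
snapshots = (np.outer(np.sin(2 * np.pi * space), np.cos(2 * np.pi * times))
             + 0.3 * np.outer(np.sin(6 * np.pi * space), np.sin(4 * np.pi * times))
             + 0.01 * rng.standard_normal((200, 50)))

mean_field = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean_field, full_matrices=False)   # columns of U are POD modes
energy = s**2 / np.sum(s**2)
print("energy captured by first 2 POD modes:", energy[:2].sum())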
Usability of Immunohistochemistry in Forensic Samples With Varying Decomposition.
Lesnikova, Iana; Schreckenbach, Marc Niclas; Kristensen, Maria Pihlmann; Papanikolaou, Liv Lindegaard; Hamilton-Dutoit, Stephen
2018-05-24
Immunohistochemistry (IHC) is an important diagnostic tool in anatomic and surgical pathology but is used less frequently in forensic pathology. Degradation of tissue because of postmortem decomposition is believed to be a major limiting factor, although it is unclear what impact such degradation actually has on IHC staining validity. This study included 120 forensic autopsy samples of liver, lung, and brain tissues obtained for diagnostic purposes. The time from death to autopsy ranged between 1 and more than 14 days. Samples were prepared using the tissue microarray technique. The antibodies chosen for the study included KL1 (for staining bile duct epithelium), S100 (for staining glial cells and myelin), vimentin (for endothelial cells in cerebral blood vessels), and CD45 (for pulmonary lymphocytes). Slides were evaluated by light microscopy. Immunohistochemistry reactions were scored according to a system based on the extent and intensity of the positive stain. An overall correlation between the postmortem interval and the IHC score for all tissue samples was found. Samples from decedents with a postmortem interval of 1 to 3 days showed positive staining with all antibodies, whereas samples from decedents with a longer postmortem interval showed decreased staining rates. Our results suggest that IHC analysis can be successfully used for postmortem diagnosis in a range of autopsy samples showing lesser degrees of decomposition.
Yi, Cai; Lin, Jianhui; Zhang, Weihua; Ding, Jianming
2015-01-01
As train loads and travel speeds have increased over time, railway axle bearings have become critical elements that require more efficient non-destructive inspection and fault diagnostics methods. This paper presents a novel and adaptive procedure based on ensemble empirical mode decomposition (EEMD) and the Hilbert marginal spectrum for multi-fault diagnostics of axle bearings. EEMD overcomes the limitations, such as assumptions about the data and the computational effort, that often restrict the application of signal processing techniques. The outputs of this adaptive approach are the intrinsic mode functions (IMFs), which are treated with the Hilbert transform in order to obtain the Hilbert instantaneous frequency spectrum and marginal spectrum. However, not all the IMFs obtained from the decomposition should be included in the Hilbert marginal spectrum. The IMF confidence index algorithm proposed in this paper is fully autonomous, overcoming the major limitation of selection by an experienced user, and allows the development of on-line tools. The effectiveness of the improvement is proven by the successful diagnosis of an axle bearing with a single fault or multiple composite faults, e.g., outer ring fault, cage fault and pin roller fault. PMID:25970256
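A hedged sketch of the Hilbert marginal spectrum step described above; the "IMFs" here are simple synthetic tones standing in for actual EEMD output, and all parameters are illustrative:

# Given IMFs, accumulate instantaneous amplitude over instantaneous-frequency
# bins to form a (crude) Hilbert marginal spectrum.
import numpy as np
from scipy.signal import hilbert

fs = 1_000
t = np.arange(0, 2.0, 1 / fs)
imfs = [np.cos(2 * np.pi * 50 * t), 0.5 * np.cos(2 * np.pi * 12 * t)]  # stand-in IMFs

freq_bins = np.arange(0, fs / 2, 1.0)
marginal = np.zeros_like(freq_bins)
for imf in imfs:
    analytic = hilbert(imf)
    amplitude = np.abs(analytic)
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.diff(phase) * fs / (2 * np.pi)          # instantaneous frequency in Hz
    idx = np.clip(np.digitize(inst_freq, freq_bins) - 1, 0, len(freq_bins) - 1)
    np.add.at(marginal, idx, amplitude[:-1])               # accumulate amplitude per frequency bin

print("peak of marginal spectrum at ~", freq_bins[np.argmax(marginal)], "Hz")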
Domain Decomposition with Local Mesh Refinement.
1989-08-01
... smooth coefficients, or non-smooth solutions. We employ from 1 to 1024 tiles on problems containing up to 161K degrees of freedom. Though ... methodology survives such compromises and is even sequentially advantageous in many problems. The domain decomposition algorithms we employ (section 3) ... unit square ... the outward normal. The ... example, from [1, 27], has a smooth solution, but rapidly ...
Task decomposition for a multilimbed robot to work in reachable but unorientable space
NASA Technical Reports Server (NTRS)
Su, Chau; Zheng, Yuan F.
1991-01-01
Robot manipulators installed on legged mobile platforms are suggested for enlarging robot workspace. To plan the motion of such a system, the arm-platform motion coordination problem is raised, and a task decomposition is proposed to solve the problem. A given task described by the destination position and orientation of the end effector is decomposed into subtasks for arm manipulation and for platform configuration, respectively. The former is defined as the end-effector position and orientation with respect to the platform, and the latter as the platform position and orientation in the base coordinates. Three approaches are proposed for the task decomposition. The approaches are also evaluated in terms of the displacements, from which an optimal approach can be selected.
[Progress in Raman spectroscopic measurement of methane hydrate].
Xu, Feng; Zhu, Li-hua; Wu, Qiang; Xu, Long-jun
2009-09-01
Complex thermodynamic and kinetic problems are involved in methane hydrate formation and decomposition, and these problems are crucial to understanding the mechanisms of hydrate formation and decomposition. However, such information has been difficult to obtain accurately because methane hydrate is only stable under low-temperature and high-pressure conditions; only in recent years has methane hydrate been measured in situ using Raman spectroscopy. Raman spectroscopy, a non-destructive and non-invasive technique, is used to study the vibrational modes of molecules. Studies of methane hydrate using Raman spectroscopy have developed over the last decade. The Raman spectra of CH4 in the vapor phase and in the hydrate phase are presented in this paper. Progress in research on methane hydrate formation thermodynamics, formation kinetics, decomposition kinetics and decomposition mechanisms based on Raman spectroscopic measurements in the laboratory and the deep sea is reviewed. Formation thermodynamics studies, including in situ observation of the formation conditions of methane hydrate, analysis of structure, and determination of hydrate cage occupancy and hydration numbers using Raman spectroscopy, are emphasized. With respect to formation kinetics, research using Raman spectroscopy on the variation in the number of hydrate cages and the methane concentration in water during hydrate growth is also introduced. For methane hydrate decomposition, investigations of the decomposition mechanism, the evolution of the cage occupancy ratio, and the formulation of the decomposition rate in porous media are described. Important aspects of future hydrate research based on Raman spectroscopy are discussed.
A system decomposition approach to the design of functional observers
NASA Astrophysics Data System (ADS)
Fernando, Tyrone; Trinh, Hieu
2014-09-01
This paper reports a system decomposition that allows the construction of a minimum-order functional observer using a state observer design approach. The system decomposition translates the functional observer design problem to that of a state observer for a smaller decomposed subsystem. Functional observability indices are introduced, and a closed-form expression for the minimum order required for a functional observer is derived in terms of those functional observability indices.
NASA Astrophysics Data System (ADS)
Royle, Samuel H.; Montgomery, Wren; Kounaves, Samuel P.; Sephton, Mark A.
2017-12-01
Three Mars missions have analyzed the composition of surface samples using thermal extraction techniques. The temperatures of decomposition have been used as diagnostic information for the materials present. One compound of great current interest is perchlorate, a relatively recently discovered component of Mars' surface geochemistry that leads to deleterious effects on organic matter during thermal extraction. Knowledge of the thermal decomposition behavior of perchlorate salts is essential for mineral identification and possible avoidance of confounding interactions with organic matter. We have performed a series of experiments which reveal that the hydration state of magnesium perchlorate has a significant effect on decomposition temperature, with differing temperature releases of oxygen corresponding to different perchlorate hydration states (peak of O2 release shifts from 500 to 600°C as the proportion of the tetrahydrate form in the sample increases). Changes in crystallinity/crystal size may also have a secondary effect on the temperature of decomposition, and although these surface effects appear to be minor for our samples, further investigation may be warranted. A less than full appreciation of the hydration state of perchlorate salts during thermal extraction analyses could lead to misidentification of the number and the nature of perchlorate phases present.
NASA Technical Reports Server (NTRS)
Keyes, David E.; Smooke, Mitchell D.
1987-01-01
A parallelized finite difference code based on the Newton method for systems of nonlinear elliptic boundary value problems in two dimensions is analyzed in terms of computational complexity and parallel efficiency. An approximate cost function depending on 15 dimensionless parameters is derived for algorithms based on stripwise and boxwise decompositions of the domain and a one-to-one assignment of the strip or box subdomains to processors. The sensitivity of the cost functions to the parameters is explored in regions of parameter space corresponding to model small-order systems with inexpensive function evaluations and also a coupled system of nineteen equations with very expensive function evaluations. The algorithm was implemented on the Intel Hypercube, and some experimental results for the model problems with stripwise decompositions are presented and compared with the theory. In the context of computational combustion problems, multiprocessors of either message-passing or shared-memory type may be employed with stripwise decompositions to realize speedup of O(n), where n is mesh resolution in one direction, for reasonable n.
Nunes, F P; Garcia, Q S
2015-05-01
The study of litter decomposition and nutrient cycling is essential to understanding the structure and functioning of native forests. Mathematical models can help in understanding local and temporal variations in litter fall and their relationships with environmental variables. The objective of this study was to test the adequacy of mathematical models for leaf litter decomposition in the Atlantic Forest in southeastern Brazil. We studied four native forest sites in Parque Estadual do Rio Doce, a Biosphere Reserve of the Atlantic Forest, where 200 decomposition litterbags (20 × 20 cm, 2 mm nylon mesh), each containing 10 grams of litter, were installed. Monthly, from 09/2007 to 04/2009, 10 litterbags were removed for determination of mass loss. We compared three nonlinear models: (1) the exponential model of Olson (1963), which considers a constant K; (2) the model proposed by Fountain and Schowalter (2004); and (3) the model proposed by Coelho and Borges (2005), which considers a variable K, evaluated through QMR, SQR, SQTC, DMA and the F test. The Fountain and Schowalter (2004) model was inappropriate for this study because it overestimated the decomposition rate. The decay curve analysis showed that the model with variable K was more appropriate, although the values of QMR and DMA revealed no significant difference (p > 0.05) between the models. The analysis showed a better DMA adjustment using variable K, reinforced by the values of the adjustment coefficient (R²). However, convergence problems were observed in this model when estimating outliers in the study areas, which did not occur with the constant-K model. This problem can be related to the nonlinear fit of the mass/time values generated for variable K. The constant-K model was shown to be adequate for describing the decomposition curve of each area separately, with good adjustability and without convergence problems. The results demonstrated the adequacy of the Olson model for estimating tropical forest litter decomposition. Despite using a reduced number of parameters to represent the stages of the decomposition process, no convergence difficulties were observed with the Olson model. Thus, this model can be used to describe decomposition curves in different types of environments, estimating K appropriately.
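As a hedged illustration of the Olson (1963) single-exponential model discussed above, a minimal curve fit to made-up mass-remaining data (values, units, and names are ours, not the study's):

# Fit X(t) = X0 * exp(-k t) to synthetic mass-remaining data with scipy.
import numpy as np
from scipy.optimize import curve_fit

def olson(t, x0, k):
    return x0 * np.exp(-k * t)

t_months = np.arange(0, 20)                              # hypothetical sampling times
mass = 10.0 * np.exp(-0.08 * t_months) + np.random.default_rng(5).normal(0, 0.2, t_months.size)

(p_x0, p_k), _ = curve_fit(olson, t_months, mass, p0=(10.0, 0.1))
print(f"estimated X0 = {p_x0:.2f} g, k = {p_k:.3f} per month")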
Mueller matrix imaging and analysis of cancerous cells
NASA Astrophysics Data System (ADS)
Fernández, A.; Fernández-Luna, J. L.; Moreno, F.; Saiz, J. M.
2017-08-01
Imaging polarimetry is a focus of increasing interest in diagnostic medicine because of its non-invasive nature and its potential for recognizing abnormal tissues. However, handling polarimetric images is not an easy task, and different intermediate steps have been proposed to introduce physical parameters that may be helpful to interpret results. In this work, transmission Mueller matrices (MM) corresponding to cancer cell samples have been experimentally obtained, and three different transformations have been applied: MM-Polar Decomposition, MM-Transformation and MM-Differential Decomposition. Special attention has been paid to diattenuation as a sensitive parameter to identify apoptosis processes induced by cisplatin and etoposide.
F-wave decomposition for time of arrival profile estimation.
Han, Zhixiu; Kong, Xuan
2007-01-01
F-waves are distally recorded muscle responses that result from "backfiring" of motor neurons following stimulation of peripheral nerves. Each F-wave response is a superposition of several motor unit responses (F-wavelets). The initial deflection of the earliest F-wavelet defines the traditional F-wave latency (FWL), and earlier F-wavelets may mask those traveling along slower (and possibly diseased) fibers. Unmasking the time of arrival (TOA) of late F-wavelets could improve the diagnostic value of the F-waves. An algorithm for F-wavelet decomposition is presented, followed by results of experimental data analysis.
Power System Decomposition for Practical Implementation of Bulk-Grid Voltage Control Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vallem, Mallikarjuna R.; Vyakaranam, Bharat GNVSR; Holzer, Jesse T.
Power system algorithms such as AC optimal power flow and coordinated volt/var control of the bulk power system are computationally intensive and become difficult to solve in operational time frames. The computational time required to run these algorithms increases exponentially as the size of the power system increases. The solution time for multiple subsystems is less than that for solving the entire system simultaneously, and the local nature of the voltage problem lends itself to such decomposition. This paper describes an algorithm that can be used to perform power system decomposition from the point of view of the voltage control problem. Our approach takes advantage of the dominant localized effect of voltage control and is based on clustering buses according to the electrical distances between them. One of the contributions of the paper is to use multidimensional scaling to compute n-dimensional Euclidean coordinates for each bus based on electrical distance to perform algorithms like K-means clustering. A simple coordinated reactive power control of photovoltaic inverters for voltage regulation is used to demonstrate the effectiveness of the proposed decomposition algorithm and its components. The proposed decomposition method is demonstrated on the IEEE 118-bus system.
INDDGO: Integrated Network Decomposition & Dynamic programming for Graph Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Groer, Christopher S; Sullivan, Blair D; Weerapurage, Dinesh P
2012-10-01
It is well-known that dynamic programming algorithms can utilize tree decompositions to provide a way to solve some NP-hard problems on graphs where the complexity is polynomial in the number of nodes and edges in the graph, but exponential in the width of the underlying tree decomposition. However, there has been relatively little computational work done to determine the practical utility of such dynamic programming algorithms. We have developed software to construct tree decompositions using various heuristics and have created a fast, memory-efficient dynamic programming implementation for solving maximum weighted independent set. We describe our software and the algorithms we have implemented, focusing on memory saving techniques for the dynamic programming. We compare the running time and memory usage of our implementation with other techniques for solving maximum weighted independent set, including a commercial integer programming solver and a semi-definite programming solver. Our results indicate that it is possible to solve some instances where the underlying decomposition has width much larger than suggested by the literature. For certain types of problems, our dynamic programming code runs several times faster than these other methods.
TE/TM decomposition of electromagnetic sources
NASA Technical Reports Server (NTRS)
Lindell, Ismo V.
1988-01-01
Three methods are given by which bounded EM sources can be decomposed into two parts radiating transverse electric (TE) and transverse magnetic (TM) fields with respect to a given constant direction in space. The theory applies source equivalence and nonradiating source concepts, which lead to decomposition methods based on a recursive formula or two differential equations for the determination of the TE and TM components of the original source. Decompositions for a dipole in terms of point, line, and plane sources are studied in detail. The planar decomposition is seen to match to an earlier result given by Clemmow (1963). As an application of the point decomposition method, it is demonstrated that the general exact image expression for the Sommerfeld half-space problem, previously derived through heuristic reasoning, can be more straightforwardly obtained through the present decomposition method.
Proceedings for the ICASE Workshop on Heterogeneous Boundary Conditions
NASA Technical Reports Server (NTRS)
Perkins, A. Louise; Scroggs, Jeffrey S.
1991-01-01
Domain decomposition is a complex problem with many interesting aspects. The choice of decomposition can be made based on many different criteria, and the choices of interface and internal boundary conditions are numerous. The various regions under study may have different dynamical balances, indicating that different physical processes dominate the flow in these regions. This conference was called in recognition of the need to define the nature of these complex problems more clearly. These proceedings are a collection of the presentations and the discussion groups.
Domain decomposition algorithms and computational fluid dynamics
NASA Technical Reports Server (NTRS)
Chan, Tony F.
1988-01-01
In the past several years, domain decomposition has been a very popular topic, partly motivated by its potential for parallelization. While a large body of theory and algorithms has been developed for model elliptic problems, these methods are only recently starting to be tested on realistic applications. The application of some of these methods to two model problems in computational fluid dynamics is investigated: two-dimensional convection-diffusion problems and the incompressible driven cavity flow problem. The construction and analysis of efficient preconditioners for the interface operator, to be used in the iterative solution of the interface problem, is described. For the convection-diffusion problems, the effect of the convection term and its discretization on the performance of some of the preconditioners is discussed. For the driven cavity problem, the effectiveness of a class of boundary probe preconditioners is discussed.
Aras, N; Altinel, I K; Oommen, J
2003-01-01
In addition to the classical heuristic algorithms of operations research, there have also been several approaches based on artificial neural networks for solving the traveling salesman problem. Their efficiency, however, decreases as the problem size (number of cities) increases. A technique to reduce the complexity of a large-scale traveling salesman problem (TSP) instance is to decompose or partition it into smaller subproblems. We introduce an all-neural decomposition heuristic that is based on a recent self-organizing map called KNIES, which has been successfully implemented for solving both the Euclidean traveling salesman problem and the Euclidean Hamiltonian path problem. Our solution for the Euclidean TSP proceeds by solving the Euclidean HPP for the subproblems, and then patching these solutions together. No such all-neural solution has ever been reported.
NASA Astrophysics Data System (ADS)
Jia, Zhongxiao; Yang, Yanfei
2018-05-01
In this paper, we propose new randomization based algorithms for large scale linear discrete ill-posed problems with general-form regularization: minimize ||Lx|| subject to a constraint on the residual ||Ax - b||, where L is a regularization matrix. Our algorithms are inspired by the modified truncated singular value decomposition (MTSVD) method, which is suitable only for small to medium scale problems, and randomized SVD (RSVD) algorithms that generate good low rank approximations to A. We use rank-k truncated randomized SVD (TRSVD) approximations to A by truncating the rank-(k + q) RSVD approximations to A, where q is an oversampling parameter. The resulting algorithms are called modified TRSVD (MTRSVD) methods. At every step, we use the LSQR algorithm to solve the resulting inner least squares problem, which is proved to become better conditioned as k increases so that LSQR converges faster. We present sharp bounds for the approximation accuracy of the RSVDs and TRSVDs for severely, moderately and mildly ill-posed problems, and substantially improve a known basic bound for TRSVD approximations. We prove how to choose the stopping tolerance for LSQR in order to guarantee that the computed and exact best regularized solutions have the same accuracy. Numerical experiments illustrate that the best regularized solutions by MTRSVD are as accurate as the ones by the truncated generalized singular value decomposition (TGSVD) algorithm, and at least as accurate as those by some existing truncated randomized generalized singular value decomposition (TRGSVD) algorithms. This work was supported in part by the National Science Foundation of China (Nos. 11771249 and 11371219).
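The RSVD/TRSVD building block referred to above can be sketched in a few lines of NumPy. The sketch below is illustrative only and is not the paper's MTRSVD algorithm: it forms a rank-(k + q) randomized SVD of A with oversampling parameter q and then truncates it to rank k; the regularized inner LSQR solve is omitted.

```python
import numpy as np

def truncated_randomized_svd(A, k, q=10, seed=0):
    """Rank-k truncated randomized SVD (TRSVD) of A with oversampling q.

    Sample the range of A with a Gaussian test matrix, project, take a
    small dense SVD, then keep only the leading k singular triplets.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + q))   # Gaussian test matrix
    Q, _ = np.linalg.qr(A @ Omega)            # orthonormal basis for range(A @ Omega)
    B = Q.T @ A                               # small (k+q) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :k], s[:k], Vt[:k, :]         # truncate the rank-(k+q) RSVD to rank k

# Example: approximate a numerically low-rank matrix
A = np.random.default_rng(1).standard_normal((500, 80)) @ \
    np.random.default_rng(2).standard_normal((80, 400))
U, s, Vt = truncated_randomized_svd(A, k=20)
print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))
```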
Singular value decomposition for the truncated Hilbert transform
NASA Astrophysics Data System (ADS)
Katsevich, A.
2010-11-01
Starting from a breakthrough result by Gelfand and Graev, inversion of the Hilbert transform became a very important tool for image reconstruction in tomography. In particular, their result is useful when the tomographic data are truncated and one deals with an interior problem. As was established recently, the interior problem admits a stable and unique solution when some a priori information about the object being scanned is available. The most common approach to solving the interior problem is based on converting it to the Hilbert transform and performing analytic continuation. Depending on what type of tomographic data are available, one gets different Hilbert inversion problems. In this paper, we consider two such problems and establish singular value decomposition for the operators involved. We also propose algorithms for performing analytic continuation.
Primary decomposition of zero-dimensional ideals over finite fields
NASA Astrophysics Data System (ADS)
Gao, Shuhong; Wan, Daqing; Wang, Mingsheng
2009-03-01
A new algorithm is presented for computing primary decomposition of zero-dimensional ideals over finite fields. Like Berlekamp's algorithm for univariate polynomials, the new method is based on the invariant subspace of the Frobenius map acting on the quotient algebra. The dimension of the invariant subspace equals the number of primary components, and a basis of the invariant subspace yields a complete decomposition. Unlike previous approaches for decomposing multivariate polynomial systems, the new method needs neither primality testing nor any generic projection; instead, it reduces the general decomposition problem directly to root finding of univariate polynomials over the ground field. Also, it is shown how Groebner basis structure can be used to get partial primary decomposition without any root finding.
Yu, Xue; Chen, Wei-Neng; Gu, Tianlong; Zhang, Huaxiang; Yuan, Huaqiang; Kwong, Sam; Zhang, Jun
2018-07-01
This paper studies a specific class of multiobjective combinatorial optimization problems (MOCOPs), namely the permutation-based MOCOPs. Many commonly seen MOCOPs, e.g., multiobjective traveling salesman problem (MOTSP), multiobjective project scheduling problem (MOPSP), belong to this problem class and they can be very different. However, as the permutation-based MOCOPs share the inherent similarity that the structure of their search space is usually in the shape of a permutation tree, this paper proposes a generic multiobjective set-based particle swarm optimization methodology based on decomposition, termed MS-PSO/D. In order to coordinate with the property of permutation-based MOCOPs, MS-PSO/D utilizes an element-based representation and a constructive approach. Through this, feasible solutions under constraints can be generated step by step following the permutation-tree-shaped structure. And problem-related heuristic information is introduced in the constructive approach for efficiency. In order to address the multiobjective optimization issues, the decomposition strategy is employed, in which the problem is converted into multiple single-objective subproblems according to a set of weight vectors. Besides, a flexible mechanism for diversity control is provided in MS-PSO/D. Extensive experiments have been conducted to study MS-PSO/D on two permutation-based MOCOPs, namely the MOTSP and the MOPSP. Experimental results validate that the proposed methodology is promising.
LP and NLP decomposition without a master problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fuller, D.; Lan, B.
We describe a new algorithm for decomposition of linear programs and a class of convex nonlinear programs, together with theoretical properties and some test results. Its most striking feature is the absence of a master problem; the subproblems pass primal and dual proposals directly to one another. The algorithm is defined for multi-stage LPs or NLPs, in which the constraints link the current stage's variables to earlier stages' variables. This problem class is general enough to include many problem structures that do not immediately suggest stages, such as block diagonal problems. The basic algorithm is derived for two-stage problems and extended to more than two stages through nested decomposition. The main theoretical result assures convergence, to within any preset tolerance of the optimal value, in a finite number of iterations. This asymptotic convergence result contrasts with the results of limited tests on LPs, in which the optimal solution is apparently found exactly, i.e., to machine accuracy, in a small number of iterations. The tests further suggest that for LPs, the new algorithm is faster than the simplex method applied to the whole problem, as long as the stages are linked loosely; that the speedup over the simplex method improves as the number of stages increases; and that the algorithm is more reliable than nested Dantzig-Wolfe or Benders' methods in its improvement over the simplex method.
Domain Decomposition: A Bridge between Nature and Parallel Computers
1992-09-01
B., "Domain Decomposition Algorithms for Indefinite Elliptic Problems," S"IAM Journal of S; cientific and Statistical (’omputing, Vol. 13, 1992, pp...AD-A256 575 NASA Contractor Report 189709 ICASE Report No. 92-44 ICASE DOMAIN DECOMPOSITION: A BRIDGE BETWEEN NATURE AND PARALLEL COMPUTERS DTIC dE...effectively implemented on dis- tributed memory multiprocessors. In 1990 (as reported in Ref. 38 using the tile algo- rithm), a 103,201-unknown 2D elliptic
Integrated control/structure optimization by multilevel decomposition
NASA Technical Reports Server (NTRS)
Zeiler, Thomas A.; Gilbert, Michael G.
1990-01-01
A method for integrated control/structure optimization by multilevel decomposition is presented. It is shown that several previously reported methods were actually partial decompositions wherein only the control was decomposed into a subsystem design. One of these partially decomposed problems was selected as a benchmark example for comparison. The system is fully decomposed into structural and control subsystem designs and an improved design is produced. Theory, implementation, and results for the method are presented and compared with the benchmark example.
Massively Parallel Dantzig-Wolfe Decomposition Applied to Traffic Flow Scheduling
NASA Technical Reports Server (NTRS)
Rios, Joseph Lucio; Ross, Kevin
2009-01-01
Optimal scheduling of air traffic over the entire National Airspace System is a computationally difficult task. To speed computation, Dantzig-Wolfe decomposition is applied to a known linear integer programming approach for assigning delays to flights. The optimization model is proven to have the block-angular structure necessary for Dantzig-Wolfe decomposition. The subproblems for this decomposition are solved in parallel via independent computation threads. Experimental evidence suggests that as the number of subproblems/threads increases (and their respective sizes decrease), the solution quality, convergence, and runtime improve. A demonstration of this is provided by using one flight per subproblem, which is the finest possible decomposition. This results in thousands of subproblems and associated computation threads. This massively parallel approach is compared to one with few threads and to standard (non-decomposed) approaches in terms of solution quality and runtime. Since this method generally provides a non-integral (relaxed) solution to the original optimization problem, two heuristics are developed to generate an integral solution. Dantzig-Wolfe followed by these heuristics can provide a near-optimal (sometimes optimal) solution to the original problem hundreds of times faster than standard (non-decomposed) approaches. In addition, when massive decomposition is employed, the solution is shown to be more likely integral, which obviates the need for an integerization step. These results indicate that nationwide, real-time, high fidelity, optimal traffic flow scheduling is achievable for (at least) 3 hour planning horizons.
NASA Astrophysics Data System (ADS)
Zhang, Xin; Liu, Zhiwen; Miao, Qiang; Wang, Lei
2018-03-01
A time varying filtering based empirical mode decomposition (EMD) (TVF-EMD) method was proposed recently to solve the mode mixing problem of EMD method. Compared with the classical EMD, TVF-EMD was proven to improve the frequency separation performance and be robust to noise interference. However, the decomposition parameters (i.e., bandwidth threshold and B-spline order) significantly affect the decomposition results of this method. In original TVF-EMD method, the parameter values are assigned in advance, which makes it difficult to achieve satisfactory analysis results. To solve this problem, this paper develops an optimized TVF-EMD method based on grey wolf optimizer (GWO) algorithm for fault diagnosis of rotating machinery. Firstly, a measurement index termed weighted kurtosis index is constructed by using kurtosis index and correlation coefficient. Subsequently, the optimal TVF-EMD parameters that match with the input signal can be obtained by GWO algorithm using the maximum weighted kurtosis index as objective function. Finally, fault features can be extracted by analyzing the sensitive intrinsic mode function (IMF) owning the maximum weighted kurtosis index. Simulations and comparisons highlight the performance of TVF-EMD method for signal decomposition, and meanwhile verify the fact that bandwidth threshold and B-spline order are critical to the decomposition results. Two case studies on rotating machinery fault diagnosis demonstrate the effectiveness and advantages of the proposed method.
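The weighted kurtosis index used as the GWO objective above is described only verbally; a plausible reading is the kurtosis of a candidate mode weighted by its correlation with the raw signal. The sketch below implements that reading as an assumption, not the paper's exact definition.

```python
import numpy as np
from scipy.stats import kurtosis, pearsonr

def weighted_kurtosis_index(imf, signal):
    """Weighted kurtosis index of a candidate mode (IMF).

    Assumed form: plain kurtosis of the IMF multiplied by the absolute
    correlation coefficient between the IMF and the original signal.
    """
    k = kurtosis(imf, fisher=False)          # Pearson (non-excess) kurtosis
    rho, _ = pearsonr(imf, signal)
    return k * abs(rho)

# Example: an impulsive component scores higher than broadband noise
rng = np.random.default_rng(0)
t = np.arange(4096)
impulses = (np.sin(2 * np.pi * t / 64) * (rng.random(t.size) < 0.01)) * 5.0
noise = rng.standard_normal(t.size)
signal = impulses + noise
print(weighted_kurtosis_index(impulses, signal),
      weighted_kurtosis_index(noise, signal))
```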
Circular Mixture Modeling of Color Distribution for Blind Stain Separation in Pathology Images.
Li, Xingyu; Plataniotis, Konstantinos N
2017-01-01
In digital pathology, to address color variation and histological component colocalization in pathology images, stain decomposition is usually performed preceding spectral normalization and tissue component segmentation. This paper examines the problem of stain decomposition, which is a naturally nonnegative matrix factorization (NMF) problem in algebra, and introduces a systematic and analytical solution consisting of a circular color analysis module and an NMF-based computation module. Unlike the paradigm of existing stain decomposition algorithms where stain proportions are computed from estimated stain spectra using a matrix inverse operation directly, the introduced solution estimates stain spectra and stain depths via probabilistic reasoning individually. Since the proposed method pays extra attention to achromatic pixels in color analysis and stain co-occurrence in pixel clustering, it achieves consistent and reliable stain decomposition with minimum decomposition residue. Particularly, aware of the periodic and angular nature of hue, we propose the use of a circular von Mises mixture model to analyze the hue distribution, and provide a complete color-based pixel soft-clustering solution to address color mixing introduced by stain overlap. This innovation combined with saturation-weighted computation makes our study effective for weak stains and broad-spectrum stains. Extensive experimentation on multiple public pathology datasets suggests that our approach outperforms state-of-the-art blind stain separation methods in terms of decomposition effectiveness.
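As a point of reference for the NMF formulation mentioned above, a generic NMF-in-optical-density stain separation baseline can be sketched with scikit-learn. This is not the paper's circular von Mises method; the optical-density model and the choice of two stains are standard simplifying assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

def nmf_stain_separation(rgb, n_stains=2, eps=1e-6):
    """Generic NMF-based stain decomposition baseline.

    Works in optical density space, where stain contributions add roughly
    linearly: OD = -log10(I / I0) ~ C @ D, with D the stain OD spectra
    (n_stains x 3) and C the per-pixel stain depths.
    """
    h, w, _ = rgb.shape
    od = -np.log10(np.clip(rgb.reshape(-1, 3) / 255.0, eps, 1.0))  # pixels x 3
    model = NMF(n_components=n_stains, init="nndsvda", max_iter=500)
    conc = model.fit_transform(od)          # pixels x n_stains (stain depths)
    spectra = model.components_             # n_stains x 3 (stain OD colors)
    return conc.reshape(h, w, n_stains), spectra

# Example on a random image standing in for a stained tissue patch
rgb = np.random.default_rng(0).integers(1, 255, size=(64, 64, 3)).astype(float)
conc, spectra = nmf_stain_separation(rgb)
print(conc.shape, spectra.shape)
```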
Statistical estimation via convex optimization for trending and performance monitoring
NASA Astrophysics Data System (ADS)
Samar, Sikandar
This thesis presents an optimization-based statistical estimation approach to find unknown trends in noisy data. A Bayesian framework is used to explicitly take into account prior information about the trends via trend models and constraints. The main focus is on convex formulation of the Bayesian estimation problem, which allows efficient computation of (globally) optimal estimates. There are two main parts of this thesis. The first part formulates trend estimation in systems described by known detailed models as a convex optimization problem. Statistically optimal estimates are then obtained by maximizing a concave log-likelihood function subject to convex constraints. We consider the problem of increasing problem dimension as more measurements become available, and introduce a moving horizon framework to enable recursive estimation of the unknown trend by solving a fixed size convex optimization problem at each horizon. We also present a distributed estimation framework, based on the dual decomposition method, for a system formed by a network of complex sensors with local (convex) estimation. Two specific applications of the convex optimization-based Bayesian estimation approach are described in the second part of the thesis. Batch estimation for parametric diagnostics in a flight control simulation of a space launch vehicle is shown to detect incipient fault trends despite the natural masking properties of feedback in the guidance and control loops. Moving horizon approach is used to estimate time varying fault parameters in a detailed nonlinear simulation model of an unmanned aerial vehicle. An excellent performance is demonstrated in the presence of winds and turbulence.
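One concrete instance of the convex trend-estimation formulation described above is l1 trend filtering, which fits a piecewise-linear trend by penalizing second differences. The sketch below uses CVXPY and is a generic illustration under an assumed piecewise-linear trend prior; it is not the thesis' models or its moving-horizon machinery.

```python
import numpy as np
import cvxpy as cp

def l1_trend_filter(y, lam=50.0):
    """Estimate a piecewise-linear trend by solving a convex problem:
    least-squares data fit plus an l1 penalty on second differences."""
    n = y.size
    D = np.zeros((n - 2, n))                 # second-difference operator
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    x = cp.Variable(n)
    objective = cp.Minimize(0.5 * cp.sum_squares(y - x) + lam * cp.norm1(D @ x))
    cp.Problem(objective).solve()
    return x.value

# Example: noisy piecewise-linear signal
rng = np.random.default_rng(0)
t = np.arange(300, dtype=float)
truth = np.piecewise(t, [t < 150, t >= 150],
                     [lambda s: 0.05 * s, lambda s: 7.5 - 0.03 * (s - 150)])
trend = l1_trend_filter(truth + rng.standard_normal(t.size))
print(np.round(trend[:5], 2))
```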
Fat water decomposition using globally optimal surface estimation (GOOSE) algorithm.
Cui, Chen; Wu, Xiaodong; Newell, John D; Jacob, Mathews
2015-03-01
This article focuses on developing a novel noniterative fat water decomposition algorithm more robust to fat water swaps and related ambiguities. Field map estimation is reformulated as a constrained surface estimation problem to exploit the spatial smoothness of the field, thus minimizing the ambiguities in the recovery. Specifically, the differences in the field map-induced frequency shift between adjacent voxels are constrained to be in a finite range. The discretization of the above problem yields a graph optimization scheme, where each node of the graph is only connected with few other nodes. Thanks to the low graph connectivity, the problem is solved efficiently using a noniterative graph cut algorithm. The global minimum of the constrained optimization problem is guaranteed. The performance of the algorithm is compared with that of state-of-the-art schemes. Quantitative comparisons are also made against reference data. The proposed algorithm is observed to yield more robust fat water estimates with fewer fat water swaps and better quantitative results than other state-of-the-art algorithms in a range of challenging applications. The proposed algorithm is capable of considerably reducing the swaps in challenging fat water decomposition problems. The experiments demonstrate the benefit of using explicit smoothness constraints in field map estimation and solving the problem using a globally convergent graph-cut optimization algorithm. © 2014 Wiley Periodicals, Inc.
Application of Direct Parallel Methods to Reconstruction and Forecasting Problems
NASA Astrophysics Data System (ADS)
Song, Changgeun
Many important physical processes in nature are represented by partial differential equations. Numerical weather prediction in particular, requires vast computational resources. We investigate the significance of parallel processing technology to the real world problem of atmospheric prediction. In this paper we consider the classic problem of decomposing the observed wind field into the irrotational and nondivergent components. Recognizing the fact that on a limited domain this problem has a non-unique solution, Lynch (1989) described eight different ways to accomplish the decomposition. One set of elliptic equations is associated with the decomposition--this determines the initial nondivergent state for the forecast model. It is shown that the entire decomposition problem can be solved in a fraction of a second using multi-vector processor such as ALLIANT FX/8. Secondly, the barotropic model is used to track hurricanes. Also, one set of elliptic equations is solved to recover the streamfunction from the forecasted vorticity. A 72 h prediction of Elena is made while it is in the Gulf of Mexico. During this time the hurricane executes a dramatic re-curvature that is captured by the model. Furthermore, an improvement in the track prediction results when a simple assimilation strategy is used. This technique makes use of the wind fields in the 24 h period immediately preceding the initial time for the prediction. In this particular application, solutions to systems of elliptic equations are the center of the computational mechanics. We demonstrate that direct, parallel methods based on accelerated block cyclic reduction (BCR) significantly reduce the computational time required to solve the elliptic equations germane to the decomposition, the forecast and adjoint assimilation.
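The decomposition step described above reduces to solving Poisson-type elliptic equations, e.g. laplacian(psi) = vorticity for the streamfunction. The sketch below solves that equation spectrally on a doubly periodic grid; the periodicity is a simplifying assumption (the paper addresses the harder limited-area case, where the decomposition is non-unique), and it is not the block cyclic reduction solver used there.

```python
import numpy as np

def streamfunction_from_vorticity(zeta, dx=1.0, dy=1.0):
    """Solve laplacian(psi) = zeta on a doubly periodic grid via FFT
    (the mean of psi is set to zero)."""
    ny, nx = zeta.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
    KX, KY = np.meshgrid(kx, ky)
    k2 = KX**2 + KY**2
    zeta_hat = np.fft.fft2(zeta)
    psi_hat = np.zeros_like(zeta_hat)
    nonzero = k2 > 0
    psi_hat[nonzero] = -zeta_hat[nonzero] / k2[nonzero]  # psi_hat = -zeta_hat / |k|^2
    return np.real(np.fft.ifft2(psi_hat))

# The nondivergent wind then follows from psi: u = -dpsi/dy, v = dpsi/dx
x = np.linspace(0.0, 2 * np.pi, 128, endpoint=False)
zeta = np.sin(x)[:, None] * np.cos(x)[None, :]
psi = streamfunction_from_vorticity(zeta, dx=2 * np.pi / 128, dy=2 * np.pi / 128)
print(psi.shape)
```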
Knowledge-based approach to system integration
NASA Technical Reports Server (NTRS)
Blokland, W.; Krishnamurthy, C.; Biegl, C.; Sztipanovits, J.
1988-01-01
To solve complex problems one can often use the decomposition principle. However, a problem is seldom decomposable into completely independent subproblems. System integration deals with the problem of resolving the interdependencies and the integration of the subsolutions. A natural method of decomposition is the hierarchical one. High-level specifications are broken down into lower level specifications until they can be transformed into solutions relatively easily. By automating the hierarchical decomposition and solution generation, an integrated system is obtained in which the declaration of high level specifications is enough to solve the problem. We offer a knowledge-based approach to integrate the development and building of control systems. The process modeling is supported by graphic editors. The user selects and connects icons that represent subprocesses and might refer to prewritten programs. The graphical editor assists the user in selecting parameters for each subprocess and allows the testing of a specific configuration. Next, from the definitions created by the graphical editor, the actual control program is built. Fault-diagnosis routines are generated automatically as well. Since the user is not required to write program code and knowledge about the process is present in the development system, the user is not required to have expertise in many fields.
A technique for plasma velocity-space cross-correlation
NASA Astrophysics Data System (ADS)
Mattingly, Sean; Skiff, Fred
2018-05-01
An advance in experimental plasma diagnostics is presented and used to make the first measurement of a plasma velocity-space cross-correlation matrix. The velocity space correlation function can detect collective fluctuations of plasmas through a localized measurement. An empirical decomposition, singular value decomposition, is applied to this Hermitian matrix in order to obtain the plasma fluctuation eigenmode structure on the ion distribution function. A basic theory is introduced and compared to the modes obtained by the experiment. A full characterization of these modes is left for future work, but an outline of this endeavor is provided. Finally, the requirements for this experimental technique in other plasma regimes are discussed.
Augmented neural networks and problem structure-based heuristics for the bin-packing problem
NASA Astrophysics Data System (ADS)
Kasap, Nihat; Agarwal, Anurag
2012-08-01
In this article, we report on a research project where we applied augmented-neural-networks (AugNNs) approach for solving the classical bin-packing problem (BPP). AugNN is a metaheuristic that combines a priority rule heuristic with the iterative search approach of neural networks to generate good solutions fast. This is the first time this approach has been applied to the BPP. We also propose a decomposition approach for solving harder BPP, in which subproblems are solved using a combination of AugNN approach and heuristics that exploit the problem structure. We discuss the characteristics of problems on which such problem structure-based heuristics could be applied. We empirically show the effectiveness of the AugNN and the decomposition approach on many benchmark problems in the literature. For the 1210 benchmark problems tested, 917 problems were solved to optimality and the average gap between the obtained solution and the upper bound for all the problems was reduced to under 0.66% and computation time averaged below 33 s per problem. We also discuss the computational complexity of our approach.
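For context, the priority-rule heuristics that AugNN augments are simple constructive procedures; a standard one, first-fit decreasing, is sketched below. This is a reference baseline, not the AugNN metaheuristic or the decomposition approach of the article.

```python
def first_fit_decreasing(items, capacity):
    """First-fit decreasing heuristic for the bin-packing problem:
    sort items by size (largest first) and place each in the first bin
    with enough remaining capacity, opening a new bin if needed."""
    bins = []  # each bin is a list of item sizes
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:
            bins.append([item])
    return bins

items = [7, 5, 5, 4, 4, 3, 2, 2, 1]
print(len(first_fit_decreasing(items, capacity=10)))  # number of bins used
```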
NASA Astrophysics Data System (ADS)
Lezina, Natalya; Agoshkov, Valery
2017-04-01
Domain decomposition method (DDM) allows one to represent a domain with complex geometry as a set of essentially simpler subdomains. This method is particularly applicable to the hydrodynamics of oceans and seas. In each subdomain the system of thermo-hydrodynamic equations in the Boussinesq and hydrostatic approximations is solved. The difficulty in obtaining a solution in the whole domain is that the solutions in the subdomains must be combined. For this purpose, an iterative algorithm is created, and numerical experiments are conducted to investigate the effectiveness of the developed algorithm using DDM. For symmetric operators in DDM, Poincare-Steklov operators [1] are used, but they are not suitable for hydrodynamics problems. In this case, the adjoint equation method [2] and inverse problem theory are used instead. In addition, it is possible to create algorithms for parallel calculations using DDM on multiprocessor computer systems. DDM for a model of the Baltic Sea dynamics is studied numerically. The results of numerical experiments using DDM are compared with the solution of the system of hydrodynamic equations in the whole domain. The work was supported by the Russian Science Foundation (project 14-11-00609, the formulation of the iterative process and numerical experiments). [1] V.I. Agoshkov, Domain Decompositions Methods in the Mathematical Physics Problem // Numerical processes and systems, No 8, Moscow, 1991 (in Russian). [2] V.I. Agoshkov, Optimal Control Approaches and Adjoint Equations in the Mathematical Physics Problem, Institute of Numerical Mathematics, RAS, Moscow, 2003 (in Russian).
Combinatorial algorithms for design of DNA arrays.
Hannenhalli, Sridhar; Hubell, Earl; Lipshutz, Robert; Pevzner, Pavel A
2002-01-01
Optimal design of DNA arrays requires the development of algorithms with two-fold goals: reducing the effects caused by unintended illumination (border length minimization problem) and reducing the complexity of masks (mask decomposition problem). We describe algorithms that reduce the number of rectangles in mask decomposition by 20-30% as compared to a standard array design under the assumption that the arrangement of oligonucleotides on the array is fixed. This algorithm produces provably optimal solution for all studied real instances of array design. We also address the difficult problem of finding an arrangement which minimizes the border length and come up with a new idea of threading that significantly reduces the border length as compared to standard designs.
Error analysis of multipoint flux domain decomposition methods for evolutionary diffusion problems
NASA Astrophysics Data System (ADS)
Arrarás, A.; Portero, L.; Yotov, I.
2014-01-01
We study space and time discretizations for mixed formulations of parabolic problems. The spatial approximation is based on the multipoint flux mixed finite element method, which reduces to an efficient cell-centered pressure system on general grids, including triangles, quadrilaterals, tetrahedra, and hexahedra. The time integration is performed by using a domain decomposition time-splitting technique combined with multiterm fractional step diagonally implicit Runge-Kutta methods. The resulting scheme is unconditionally stable and computationally efficient, as it reduces the global system to a collection of uncoupled subdomain problems that can be solved in parallel without the need for Schwarz-type iteration. Convergence analysis for both the semidiscrete and fully discrete schemes is presented.
Fast Sampling Gas Chromatography (GC) System for Speciation in a Shock Tube
2016-10-31
GC sampling system validation experiments were performed for ethylene: results were obtained for cold-shock experiments, and both techniques capture similar ethylene decomposition rates for temperature-dependent shock experiments.
A Benders based rolling horizon algorithm for a dynamic facility location problem
Marufuzzaman, Mohammad; Gedik, Ridvan; Roni, Mohammad S.
2016-06-28
This study presents a well-known capacitated dynamic facility location problem (DFLP) that satisfies the customer demand at a minimum cost by determining the time period for opening, closing, or retaining an existing facility in a given location. To solve this challenging NP-hard problem, this paper develops a unique hybrid solution algorithm that combines a rolling horizon algorithm with an accelerated Benders decomposition algorithm. Extensive computational experiments are performed on benchmark test instances to evaluate the hybrid algorithm's efficiency and robustness in solving the DFLP problem. Computational results indicate that the hybrid Benders based rolling horizon algorithm consistently offers high quality feasible solutions in a much shorter computational time period than the standalone rolling horizon and accelerated Benders decomposition algorithms in the experimental range.
A structural model decomposition framework for systems health management
NASA Astrophysics Data System (ADS)
Roychoudhury, I.; Daigle, M.; Bregon, A.; Pulido, B.
Systems health management (SHM) is an important set of technologies aimed at increasing system safety and reliability by detecting, isolating, and identifying faults; and predicting when the system reaches end of life (EOL), so that appropriate fault mitigation and recovery actions can be taken. Model-based SHM approaches typically make use of global, monolithic system models for online analysis, which results in a loss of scalability and efficiency for large-scale systems. Improvement in scalability and efficiency can be achieved by decomposing the system model into smaller local submodels and operating on these submodels instead. In this paper, the global system model is analyzed offline and structurally decomposed into local submodels. We define a common model decomposition framework for extracting submodels from the global model. This framework is then used to develop algorithms for solving model decomposition problems for the design of three separate SHM technologies, namely, estimation (which is useful for fault detection and identification), fault isolation, and EOL prediction. We solve these model decomposition problems using a three-tank system as a case study.
CP decomposition approach to blind separation for DS-CDMA system using a new performance index
NASA Astrophysics Data System (ADS)
Rouijel, Awatif; Minaoui, Khalid; Comon, Pierre; Aboutajdine, Driss
2014-12-01
In this paper, we present a canonical polyadic (CP) tensor decomposition isolating the scaling matrix. This has two major implications: (i) the problem conditioning shows up explicitly and could be controlled through a constraint on the so-called coherences and (ii) a performance criterion concerning the factor matrices can be exactly calculated and is more realistic than performance metrics used in the literature. Two new algorithms optimizing the CP decomposition based on gradient descent are proposed. This decomposition is illustrated by an application to direct-sequence code division multiple access (DS-CDMA) systems; computer simulations are provided and demonstrate the good behavior of these algorithms, compared to others in the literature.
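The CP decomposition itself is commonly computed with alternating least squares; the sketch below is a generic CP-ALS for a third-order tensor, not the paper's gradient-descent algorithms with coherence constraints.

```python
import numpy as np

def cp_als(T, rank, n_iter=200, seed=0):
    """Generic CP-ALS for a 3-way tensor T of shape (I, J, K).

    Returns factor matrices A (I x R), B (J x R), C (K x R) such that T is
    approximated by sum_r outer(A[:, r], B[:, r], C[:, r])."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    for _ in range(n_iter):
        # MTTKRP via einsum, then solve the normal equations for each factor
        A = np.einsum('ijk,jr,kr->ir', T, B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = np.einsum('ijk,ir,kr->jr', T, A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = np.einsum('ijk,ir,jr->kr', T, A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Example: recover a synthetic rank-3 tensor
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((d, 3)) for d in (10, 12, 14))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(T, rank=3)
print(np.linalg.norm(T - np.einsum('ir,jr,kr->ijk', A, B, C)) / np.linalg.norm(T))
```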
A methodology to find the elementary landscape decomposition of combinatorial optimization problems.
Chicano, Francisco; Whitley, L Darrell; Alba, Enrique
2011-01-01
A small number of combinatorial optimization problems have search spaces that correspond to elementary landscapes, where the objective function f is an eigenfunction of the Laplacian that describes the neighborhood structure of the search space. Many problems are not elementary; however, the objective function of a combinatorial optimization problem can always be expressed as a superposition of multiple elementary landscapes if the underlying neighborhood used is symmetric. This paper presents theoretical results that provide the foundation for algebraic methods that can be used to decompose the objective function of an arbitrary combinatorial optimization problem into a sum of subfunctions, where each subfunction is an elementary landscape. Many steps of this process can be automated, and indeed a software tool could be developed that assists the researcher in finding a landscape decomposition. This methodology is then used to show that the subset sum problem is a superposition of two elementary landscapes, and to show that the quadratic assignment problem is a superposition of three elementary landscapes.
MARS-MD: rejection based image domain material decomposition
NASA Astrophysics Data System (ADS)
Bateman, C. J.; Knight, D.; Brandwacht, B.; McMahon, J.; Healy, J.; Panta, R.; Aamir, R.; Rajendran, K.; Moghiseh, M.; Ramyar, M.; Rundle, D.; Bennett, J.; de Ruiter, N.; Smithies, D.; Bell, S. T.; Doesburg, R.; Chernoglazov, A.; Mandalika, V. B. H.; Walsh, M.; Shamshad, M.; Anjomrouz, M.; Atharifard, A.; Vanden Broeke, L.; Bheesette, S.; Kirkbride, T.; Anderson, N. G.; Gieseg, S. P.; Woodfield, T.; Renaud, P. F.; Butler, A. P. H.; Butler, P. H.
2018-05-01
This paper outlines image domain material decomposition algorithms that have been routinely used in MARS spectral CT systems. These algorithms (known collectively as MARS-MD) are based on a pragmatic heuristic for solving the under-determined problem where there are more materials than energy bins. This heuristic contains three parts: (1) splitting the problem into a number of possible sub-problems, each containing fewer materials; (2) solving each sub-problem; and (3) applying rejection criteria to eliminate all but one sub-problem's solution. An advantage of this process is that different constraints can be applied to each sub-problem if necessary. In addition, the result of this process is that solutions will be sparse in the material domain, which reduces crossover of signal between material images. Two algorithms based on this process are presented: the Segmentation variant, which uses segmented material classes to define each sub-problem; and the Angular Rejection variant, which defines the rejection criteria using the angle between reconstructed attenuation vectors.
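A minimal sketch of the split/solve/reject pattern described above is given below, using non-negative least squares for each sub-problem and residual-based rejection. The attenuation matrix, energy-bin count, and material names are made-up placeholders, not MARS calibration data, and the rejection criterion here (lowest residual) is only one of the variants the paper discusses.

```python
import itertools
import numpy as np
from scipy.optimize import nnls

def rejection_decomposition(y, M, materials, n_active=2):
    """Solve an under-determined material decomposition y ~ M @ x by
    splitting it into sub-problems with only n_active materials each,
    solving each by NNLS, and rejecting all but the lowest-residual one."""
    best = None
    for subset in itertools.combinations(range(len(materials)), n_active):
        cols = list(subset)
        x_sub, resid = nnls(M[:, cols], y)          # solve one sub-problem
        if best is None or resid < best[0]:
            x = np.zeros(len(materials))
            x[cols] = x_sub
            best = (resid, x, [materials[i] for i in cols])
    return best  # (residual, sparse solution, accepted materials)

# Hypothetical 3-bin spectral measurement and 4 candidate materials
M = np.array([[1.0, 0.8, 0.3, 0.2],
              [0.7, 1.1, 0.4, 0.3],
              [0.4, 0.6, 1.0, 0.9]])
y = M @ np.array([0.0, 0.5, 0.0, 1.2])              # truth uses materials 1 and 3
print(rejection_decomposition(y, M, ['lipid', 'water', 'calcium', 'iodine']))
```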
Fast heap transform-based QR-decomposition of real and complex matrices: algorithms and codes
NASA Astrophysics Data System (ADS)
Grigoryan, Artyom M.
2015-03-01
In this paper, we describe a new look at the application of Givens rotations to the QR-decomposition problem, which is similar to the method of Householder transformations. We apply the concept of the discrete heap transform, or signal-induced unitary transforms, which had been introduced by Grigoryan (2006) and used in signal and image processing. Both cases of real and complex nonsingular matrices are considered and examples of performing QR-decomposition of square matrices are given. The proposed method of QR-decomposition for the complex matrix is novel, differs from the known method of complex Givens rotation, and is based on analytical equations for the heap transforms. Many examples illustrating the proposed heap transform method of QR-decomposition are given, algorithms are described in detail, and MATLAB-based codes are included.
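For comparison with the signal-induced heap transforms described above, a plain Givens-rotation QR factorization of a real matrix looks like the sketch below; the heap-transform variant of the paper is not reproduced here.

```python
import numpy as np

def givens_qr(A):
    """QR decomposition of a real matrix by Givens rotations.
    Each rotation zeroes one subdiagonal entry; Q accumulates the
    transposed rotations so that Q @ R reproduces the input."""
    R = np.array(A, dtype=float)
    m, n = R.shape
    Q = np.eye(m)
    for j in range(n):
        for i in range(m - 1, j, -1):
            a, b = R[i - 1, j], R[i, j]
            r = np.hypot(a, b)
            if r == 0.0:
                continue
            c, s = a / r, b / r
            G = np.array([[c, s], [-s, c]])           # 2x2 Givens rotation
            R[[i - 1, i], :] = G @ R[[i - 1, i], :]   # zero out R[i, j]
            Q[:, [i - 1, i]] = Q[:, [i - 1, i]] @ G.T
    return Q, R

A = np.random.default_rng(0).standard_normal((5, 4))
Q, R = givens_qr(A)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(5)))
```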
A two-stage linear discriminant analysis via QR-decomposition.
Ye, Jieping; Li, Qi
2005-06-01
Linear Discriminant Analysis (LDA) is a well-known method for feature extraction and dimension reduction. It has been used widely in many applications involving high-dimensional data, such as image and text classification. An intrinsic limitation of classical LDA is the so-called singularity problems; that is, it fails when all scatter matrices are singular. Many LDA extensions were proposed in the past to overcome the singularity problems. Among these extensions, PCA+LDA, a two-stage method, received relatively more attention. In PCA+LDA, the LDA stage is preceded by an intermediate dimension reduction stage using Principal Component Analysis (PCA). Most previous LDA extensions are computationally expensive, and not scalable, due to the use of Singular Value Decomposition or Generalized Singular Value Decomposition. In this paper, we propose a two-stage LDA method, namely LDA/QR, which aims to overcome the singularity problems of classical LDA, while achieving efficiency and scalability simultaneously. The key difference between LDA/QR and PCA+LDA lies in the first stage, where LDA/QR applies QR decomposition to a small matrix involving the class centroids, while PCA+LDA applies PCA to the total scatter matrix involving all training data points. We further justify the proposed algorithm by showing the relationship among LDA/QR and previous LDA methods. Extensive experiments on face images and text documents are presented to show the effectiveness of the proposed algorithm.
NASA Astrophysics Data System (ADS)
Omoragbon, Amen
Although the Aerospace and Defense (A&D) industry is a significant contributor to the United States' economy, national prestige and national security, it experiences significant cost and schedule overruns. This problem is related to the differences between technology acquisition assessments and aerospace vehicle conceptual design. Acquisition assessments evaluate broad sets of alternatives with mostly qualitative techniques, while conceptual design tools evaluate narrow sets of alternatives with multidisciplinary tools. In order for these two fields to communicate effectively, a common platform for both concerns is desired. This research is an original contribution to a three-part solution to this problem. It discusses the decomposition step of an innovative technology and sizing tool generation framework. It identifies complex multidisciplinary system definitions as a bridge between acquisition and conceptual design. It establishes complex multidisciplinary building blocks that can be used to build synthesis systems as well as technology portfolios. It also describes a graphical user interface designed to aid in the decomposition process. Finally, it demonstrates an application of the methodology to a relevant acquisition and conceptual design problem posed by the US Air Force.
An Orthogonal Evolutionary Algorithm With Learning Automata for Multiobjective Optimization.
Dai, Cai; Wang, Yuping; Ye, Miao; Xue, Xingsi; Liu, Hailin
2016-12-01
Research on multiobjective optimization problems has become one of the hottest topics in intelligent computation. In order to improve the search efficiency of an evolutionary algorithm and maintain the diversity of solutions, in this paper, learning automata (LA) are first used for quantization orthogonal crossover (QOX), and a new fitness function based on decomposition is proposed to achieve these two purposes. Based on these, an orthogonal evolutionary algorithm with LA for complex multiobjective optimization problems with continuous variables is proposed. The experimental results show that in continuous states, the proposed algorithm is able to achieve accurate Pareto-optimal sets and wide Pareto-optimal fronts efficiently. Moreover, the comparison with several existing well-known algorithms: nondominated sorting genetic algorithm II, decomposition-based multiobjective evolutionary algorithm, decomposition-based multiobjective evolutionary algorithm with an ensemble of neighborhood sizes, multiobjective optimization by LA, and multiobjective immune algorithm with nondominated neighbor-based selection, on 15 multiobjective benchmark problems, shows that the proposed algorithm is able to find more accurate and evenly distributed Pareto-optimal fronts than the compared ones.
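The decomposition-based fitness idea shared by this family of algorithms can be illustrated with the standard Tchebycheff scalarization: each weight vector defines one single-objective subproblem. The sketch below shows only this scalarization step for two objectives, not the learning-automata or orthogonal-crossover machinery of the paper.

```python
import numpy as np

def uniform_weights(n_subproblems):
    """Evenly spread 2-objective weight vectors (w, 1 - w)."""
    w = np.linspace(0.0, 1.0, n_subproblems)
    return np.column_stack([w, 1.0 - w])

def tchebycheff(f, weight, ideal):
    """Tchebycheff scalarization: max_i w_i * |f_i - z*_i|."""
    return np.max(weight * np.abs(f - ideal))

# Example: rank two candidate solutions on each scalar subproblem
ideal = np.array([0.0, 0.0])
candidates = {'a': np.array([0.2, 0.9]), 'b': np.array([0.8, 0.1])}
for w in uniform_weights(5):
    best = min(candidates, key=lambda k: tchebycheff(candidates[k], w, ideal))
    print(np.round(w, 2), '->', best)
```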
ERIC Educational Resources Information Center
Bryant, Peter; Rendu, Alison; Christie, Clare
1999-01-01
Examined whether 5- and 6-year-olds understand that addition and subtraction cancel each other and whether this understanding is based on identity or quantity of addend and subtrahend. Found that children used the inversion principle. Six- to eight-year-olds also used inversion and decomposition to solve a + b - (b + 1) problems. Concluded that…
NASA Technical Reports Server (NTRS)
Watson, Brian; Kamat, M. P.
1990-01-01
Element-by-element preconditioned conjugate gradient (EBE-PCG) algorithms have been advocated for use in parallel/vector processing environments as being superior to the conventional LDL^T decomposition algorithm for single load cases. Although there may be some advantages in using such algorithms for a single load case, when it comes to situations involving multiple load cases, the LDL^T decomposition algorithm would appear to be decidedly more cost-effective. The authors have outlined an EBE-PCG algorithm suitable for multiple load cases and compared its effectiveness to the highly efficient LDL^T decomposition scheme. The proposed algorithm offers almost no advantages over the LDL^T algorithm for the linear problems investigated on the Alliant FX/8. However, there may be some merit in the algorithm in solving nonlinear problems with load incrementation, but that remains to be investigated.
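The multiple-load-case argument above comes down to factoring the system matrix once and reusing the factorization for many right-hand sides. A minimal SciPy sketch is shown below, using a Cholesky factorization as a stand-in for LDL^T on a symmetric positive definite stiffness matrix.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(0)
n, n_loads = 200, 16
K = rng.standard_normal((n, n))
K = K @ K.T + n * np.eye(n)             # symmetric positive definite "stiffness" matrix
F = rng.standard_normal((n, n_loads))   # multiple load cases as columns

factor = cho_factor(K)                  # factor once (O(n^3))
U = cho_solve(factor, F)                # back-substitute for every load case (O(n^2) each)
print(np.allclose(K @ U, F))
```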
An Ensemble Multilabel Classification for Disease Risk Prediction
Liu, Wei; Zhao, Hongling; Zhang, Chaoyang
2017-01-01
It is important to identify and prevent disease risk as early as possible through regular physical examinations. We formulate disease risk prediction as a multilabel classification problem. A novel Ensemble Label Power-set Pruned datasets Joint Decomposition (ELPPJD) method is proposed in this work. First, we transform the multilabel classification into a multiclass classification. Then, we propose the pruned datasets and joint decomposition methods to deal with the imbalanced learning problem. Two strategies, size balanced (SB) and label similarity (LS), are designed to decompose the training dataset. In the experiments, the dataset is from real physical examination records. We contrast the performance of the ELPPJD method with the two different decomposition strategies. Moreover, the comparison between ELPPJD and the classic multilabel classification methods RAkEL and HOMER is carried out. The experimental results show that the ELPPJD method with the label similarity strategy has outstanding performance. PMID:29065647
Students' Understanding of Acid, Base and Salt Reactions in Qualitative Analysis.
ERIC Educational Resources Information Center
Tan, Kim-Chwee Daniel; Goh, Ngoh-Khang; Chia, Lian-Sai; Treagust, David F.
2003-01-01
Uses a two-tier, multiple-choice diagnostic instrument to determine (n=915) grade 10 students' understanding of the acid, base, and salt reactions involved in basic qualitative analysis. Reports that many students did not understand the formation of precipitates and the complex salts, acid/salt-base reactions, and thermal decomposition involved in…
Development of parallel algorithms for electrical power management in space applications
NASA Technical Reports Server (NTRS)
Berry, Frederick C.
1989-01-01
The application of parallel techniques for electrical power system analysis is discussed. The Newton-Raphson method of load flow analysis was used along with the decomposition-coordination technique to perform load flow analysis. The decomposition-coordination technique enables tasks to be performed in parallel by partitioning the electrical power system into independent local problems. Each independent local problem represents a portion of the total electrical power system on which a load flow analysis can be performed. The load flow analysis is performed on these partitioned elements by using the Newton-Raphson load flow method. These independent local problems will produce results for voltage and power which can then be passed to the coordinator portion of the solution procedure. The coordinator problem uses the results of the local problems to determine if any correction is needed on the local problems. The coordinator problem is also solved by an iterative method much like the local problem. The iterative method for the coordination problem will also be the Newton-Raphson method. Therefore, each iteration at the coordination level will result in new values for the local problems. The local problems will have to be solved again along with the coordinator problem until some convergence conditions are met.
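Both the local load-flow problems and the coordinator problem above rely on Newton-Raphson iteration. A generic Newton-Raphson solver with a finite-difference Jacobian is sketched below; the toy residual function is a placeholder, not a power-flow mismatch model.

```python
import numpy as np

def newton_raphson(residual, x0, tol=1e-10, max_iter=50, h=1e-7):
    """Solve residual(x) = 0 by Newton-Raphson with a finite-difference Jacobian."""
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            break
        J = np.empty((r.size, x.size))
        for j in range(x.size):            # forward-difference Jacobian, column by column
            xp = x.copy()
            xp[j] += h
            J[:, j] = (residual(xp) - r) / h
        x -= np.linalg.solve(J, r)         # Newton step
    return x

# Toy 2-variable nonlinear system (placeholder for power-flow equations)
f = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] - v[1] - 1.0])
print(np.round(newton_raphson(f, [2.0, 0.5]), 6))
```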
Smallwood, D. O.
1996-01-01
It is shown that the usual method for estimating the coherence functions (ordinary, partial, and multiple) for a general multiple-input/multiple-output problem can be expressed as a modified form of Cholesky decomposition of the cross-spectral density matrix of the input and output records. The results can be equivalently obtained using singular value decomposition (SVD) of the cross-spectral density matrix. Using SVD suggests a new form of fractional coherence. The formulation as an SVD problem also suggests a way to order the inputs when a natural physical order of the inputs is absent.
ERIC Educational Resources Information Center
Abele, Stephan
2018-01-01
This article deals with a theory-based investigation of the diagnostic problem-solving process in professional contexts. To begin with, a theory of the diagnostic problem-solving process was developed drawing on findings from different professional contexts. The theory distinguishes between four sub-processes of the diagnostic problem-solving…
Information Graphic Classification, Decomposition and Alternative Representation
ERIC Educational Resources Information Center
Gao, Jinglun
2012-01-01
This thesis work is mainly focused on two problems related to improving accessibility of information graphics for visually impaired users. The first problem is automated analysis of information graphics for information extraction and the second problem is multi-modal representations for accessibility. Information graphics are graphical…
Sparse time-frequency decomposition based on dictionary adaptation.
Hou, Thomas Y; Shi, Zuoqiang
2016-04-13
In this paper, we propose a time-frequency analysis method to obtain instantaneous frequencies and the corresponding decomposition by solving an optimization problem. In this optimization problem, the basis that is used to decompose the signal is not known a priori. Instead, it is adapted to the signal and is determined as part of the optimization problem. In this sense, this optimization problem can be seen as a dictionary adaptation problem, in which the dictionary is adaptive to one signal rather than a training set in dictionary learning. This dictionary adaptation problem is solved by using the augmented Lagrangian multiplier (ALM) method iteratively. We further accelerate the ALM method in each iteration by using the fast wavelet transform. We apply our method to decompose several signals, including signals with poor scale separation, signals with outliers and polluted by noise and a real signal. The results show that this method can give accurate recovery of both the instantaneous frequencies and the intrinsic mode functions. © 2016 The Author(s).
NASA Astrophysics Data System (ADS)
Wang, Lei; Liu, Zhiwen; Miao, Qiang; Zhang, Xin
2018-06-01
Mode mixing resulting from intermittent signals is an annoying problem associated with the local mean decomposition (LMD) method. Based on noise-assisted approach, ensemble local mean decomposition (ELMD) method alleviates the mode mixing issue of LMD to some degree. However, the product functions (PFs) produced by ELMD often contain considerable residual noise, and thus a relatively large number of ensemble trials are required to eliminate the residual noise. Furthermore, since different realizations of Gaussian white noise are added to the original signal, different trials may generate different number of PFs, making it difficult to take ensemble mean. In this paper, a novel method is proposed called complete ensemble local mean decomposition with adaptive noise (CELMDAN) to solve these two problems. The method adds a particular and adaptive noise at every decomposition stage for each trial. Moreover, a unique residue is obtained after separating each PF, and the obtained residue is used as input for the next stage. Two simulated signals are analyzed to illustrate the advantages of CELMDAN in comparison to ELMD and CEEMDAN. To further demonstrate the efficiency of CELMDAN, the method is applied to diagnose faults for rolling bearings in an experimental case and an engineering case. The diagnosis results indicate that CELMDAN can extract more fault characteristic information with less interference than ELMD.
A TV-constrained decomposition method for spectral CT
NASA Astrophysics Data System (ADS)
Guo, Xiaoyue; Zhang, Li; Xing, Yuxiang
2017-03-01
Spectral CT is attracting more and more attention in medicine, industrial nondestructive testing and the security inspection field. Material decomposition is an important issue in spectral CT for discriminating materials. Because of the spectral overlap of energy channels, as well as the correlation of basis functions, it is well acknowledged that the decomposition step in spectral CT imaging causes noise amplification and artifacts in the component coefficient images. In this work, we propose material decomposition via an optimization method to improve the quality of the decomposed coefficient images. On the basis of a general optimization problem, total variation minimization is imposed on the coefficient images in our overall objective function with adjustable weights. We solve this constrained optimization problem under the framework of ADMM. Validation on both a numerical dental phantom in simulation and a real phantom of a pig leg on a practical CT system using dual-energy imaging is carried out. Both numerical and physical experiments give visually better reconstructions than a general direct inverse method. SNR and SSIM are adopted to quantitatively evaluate the image quality of the decomposed component coefficients. All results demonstrate that the TV-constrained decomposition method performs well in reducing noise without losing spatial resolution, thereby improving image quality. The method can be easily incorporated into different types of spectral imaging modalities, as well as cases with more than two energy channels.
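As a rough illustration of why a TV term helps, the sketch below sets up a tiny one-dimensional two-material decomposition with an invented 2x2 sensitivity matrix, compares per-pixel direct inversion against minimizing a quadratic data-fit plus a total-variation penalty, and reports the errors. It uses a general-purpose quasi-Newton optimizer rather than the ADMM solver described in the abstract, and every number in it (sensitivities, noise level, penalty weight) is an assumption for illustration only.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 64
A = np.array([[1.0, 0.6],              # hypothetical channel sensitivities
              [0.4, 1.1]])
c_true = np.zeros((2, n))
c_true[0, 20:40] = 1.0                 # material 1 slab
c_true[1, 30:55] = 0.5                 # material 2 slab
meas = A @ c_true + 0.05 * rng.standard_normal((2, n))

c_direct = np.linalg.solve(A, meas)    # per-pixel direct inversion (noisy)

def objective(x, lam=0.1):
    c = x.reshape(2, n)
    fidelity = np.sum((A @ c - meas) ** 2)
    tv = np.sum(np.abs(np.diff(c, axis=1)))   # anisotropic TV along the line
    return fidelity + lam * tv

res = minimize(objective, c_direct.ravel(), method="L-BFGS-B")
c_tv = res.x.reshape(2, n)
print("rmse direct:", np.sqrt(np.mean((c_direct - c_true) ** 2)).round(3))
print("rmse TV    :", np.sqrt(np.mean((c_tv - c_true) ** 2)).round(3))
```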
Thermochemical generation of hydrogen and carbon dioxide
NASA Technical Reports Server (NTRS)
Lawson, Daniel D. (Inventor); England, Christopher (Inventor)
1984-01-01
Mixing of carbon in the form of high sulfur coal with sulfuric acid reduces the temperature of sulfuric acid decomposition from 830 °C to between 300 °C and 400 °C. The low temperature sulfuric acid decomposition is particularly useful in thermal chemical cycles for splitting water to produce hydrogen. Carbon dioxide is produced as a commercially desirable byproduct. Lowering of the temperature for the sulfuric acid decomposition or oxygen release step simplifies equipment requirements, lowers thermal energy input and reduces corrosion problems presented by sulfuric acid at conventional cracking temperatures. Use of high sulfur coal as the source of carbon for the sulfuric acid decomposition provides an environmentally safe and energy efficient utilization of this normally polluting fuel.
The Cognitive Toolkit of Programming--Algorithmic Abstraction, Decomposition-Superposition
ERIC Educational Resources Information Center
Szlávi, Péter; Zsakó, László
2017-01-01
When solving a problem, a programmer performs a number of conscious and unconscious cognitive operations. Problem-solving is a gradual and cyclic activity; as the mind adjusts the problem to the schemas formed by its previous experiences, the programmer gets closer and closer to understanding and defining the problem. The…
NASA Astrophysics Data System (ADS)
Cheng, Junsheng; Peng, Yanfeng; Yang, Yu; Wu, Zhantao
2017-02-01
Inspired by the ASTFA method, the adaptive sparsest narrow-band decomposition (ASNBD) method is proposed in this paper. In the ASNBD method, an optimized filter is established first. The parameters of the filter are determined by solving a nonlinear optimization problem. A regulated differential operator is used as the objective function so that each component is constrained to be a local narrow-band signal. Afterwards, the signal is filtered by the optimized filter to generate an intrinsic narrow-band component (INBC). ASNBD aims to solve problems that exist in ASTFA. The Gauss-Newton type method, which is applied to solve the optimization problem in ASTFA, is irreplaceable and very sensitive to initial values. However, a more appropriate optimization method such as a genetic algorithm (GA) can be utilized to solve the optimization problem in ASNBD. Meanwhile, compared with ASTFA, the decomposition results generated by ASNBD have better physical meaning because the components are constrained to be local narrow-band signals. Comparisons are made between ASNBD, ASTFA and EMD by analyzing simulated and experimental signals. The results indicate that the ASNBD method is superior to the other two methods in generating more accurate components from noisy signals, restraining the boundary effect, possessing better orthogonality and diagnosing rolling element bearing faults.
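The idea of choosing a filter by a population-based optimizer can be sketched as follows. This is not the ASNBD objective: the regulated differential operator is replaced by a surrogate that rewards a spectrally concentrated yet energetic filtered output, and scipy's differential evolution stands in for the genetic algorithm. The test signal, tap count, bounds and weighting are all invented.

```python
import numpy as np
from scipy.signal import firwin, filtfilt
from scipy.optimize import differential_evolution

fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
x = np.sin(2 * np.pi * 35 * t) + 0.4 * np.sin(2 * np.pi * 180 * t) \
    + 0.2 * np.random.default_rng(0).standard_normal(t.size)

def surrogate_objective(params):
    """Score a candidate band (centre frequency, bandwidth): small spectral
    spread of the filtered output is rewarded, captured energy is rewarded."""
    fc, bw = params
    lo, hi = max(fc - bw / 2, 1.0), min(fc + bw / 2, fs / 2 - 1.0)
    if hi - lo < 2.0:
        return 1e6                       # reject degenerate bands
    h = firwin(129, [lo, hi], pass_zero=False, fs=fs)
    y = filtfilt(h, [1.0], x)
    spec = np.abs(np.fft.rfft(y)) ** 2
    f = np.fft.rfftfreq(y.size, 1 / fs)
    if spec.sum() < 1e-12:
        return 1e6
    centroid = np.sum(f * spec) / spec.sum()
    spread = np.sqrt(np.sum((f - centroid) ** 2 * spec) / spec.sum())
    return spread - 0.1 * np.sqrt(spec.sum())

result = differential_evolution(surrogate_objective,
                                bounds=[(5, 450), (5, 200)], seed=1, tol=1e-3)
fc, bw = result.x
print(f"selected band: {fc - bw/2:.1f}-{fc + bw/2:.1f} Hz")
```

On this toy signal the optimizer settles on a band around the dominant 35 Hz component, which is the kind of adaptive narrow-band extraction the abstract describes, albeit with a much cruder criterion.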
A Novel Two-Component Decomposition for Co-Polar Channels of GF-3 Quad-Pol Data
NASA Astrophysics Data System (ADS)
Kwok, E.; Li, C. H.; Zhao, Q. H.; Li, Y.
2018-04-01
Polarimetric target decomposition theory is the most dynamic and exploratory research area in the field of PolSAR. However, most target decomposition methods are based on fully polarimetric (quad-pol) data and seldom utilize dual-polarization data. Given this, we propose a novel two-component decomposition method for the co-polar channels of GF-3 quad-pol data. This method decomposes the data into two scattering contributions, surface and double bounce, in the dual co-polar channels. To solve this underdetermined problem, a criterion for determining the model is proposed. The criterion, named the second-order averaged scattering angle, originates from the H/α decomposition, and we also put forward an alternative parameter for it. To validate the effectiveness of the proposed decomposition, Liaodong Bay is selected as the research area. The area is located in northeastern China, where various wetland resources grow and sea ice appears in winter. We use GF-3 quad-pol data as the study data; GF-3 is China's first C-band polarimetric synthetic aperture radar (PolSAR) satellite. The dependencies between the features of the proposed algorithm and comparison decompositions (Pauli decomposition, An&Yang decomposition, Yamaguchi S4R decomposition) were investigated in the study. Through several aspects of the experimental discussion, we draw the following conclusions: the proposed algorithm may be suitable for special scenes with low vegetation coverage or low vegetation in the non-growing season, and the proposed decomposition features, using only co-polar data, are highly correlated with the corresponding comparison decomposition features obtained from quad-polarization data. Moreover, they could become inputs for subsequent classification or parameter inversion.
Concurrent airline fleet allocation and aircraft design with profit modeling for multiple airlines
NASA Astrophysics Data System (ADS)
Govindaraju, Parithi
A "System of Systems" (SoS) approach is particularly beneficial in analyzing complex large scale systems comprised of numerous independent systems -- each capable of independent operations in their own right -- that when brought in conjunction offer capabilities and performance beyond the constituents of the individual systems. The variable resource allocation problem is a type of SoS problem, which includes the allocation of "yet-to-be-designed" systems in addition to existing resources and systems. The methodology presented here expands upon earlier work that demonstrated a decomposition approach that sought to simultaneously design a new aircraft and allocate this new aircraft along with existing aircraft in an effort to meet passenger demand at minimum fleet level operating cost for a single airline. The result of this describes important characteristics of the new aircraft. The ticket price model developed and implemented here enables analysis of the system using profit maximization studies instead of cost minimization. A multiobjective problem formulation has been implemented to determine characteristics of a new aircraft that maximizes the profit of multiple airlines to recognize the fact that aircraft manufacturers sell their aircraft to multiple customers and seldom design aircraft customized to a single airline's operations. The route network characteristics of two simple airlines serve as the example problem for the initial studies. The resulting problem formulation is a mixed-integer nonlinear programming problem, which is typically difficult to solve. A sequential decomposition strategy is applied as a solution methodology by segregating the allocation (integer programming) and aircraft design (non-linear programming) subspaces. After solving a simple problem considering two airlines, the decomposition approach is then applied to two larger airline route networks representing actual airline operations in the year 2005. The decomposition strategy serves as a promising technique for future detailed analyses. Results from the profit maximization studies favor a smaller aircraft in terms of passenger capacity due to its higher yield generation capability on shorter routes while results from the cost minimization studies favor a larger aircraft due to its lower direct operating cost per seat mile.
Optimal wavelet denoising for smart biomonitor systems
NASA Astrophysics Data System (ADS)
Messer, Sheila R.; Agzarian, John; Abbott, Derek
2001-03-01
Future smart-systems promise many benefits for biomedical diagnostics. The ideal is for simple portable systems that display and interpret information from smart integrated probes or MEMS-based devices. In this paper, we will discuss a step towards this vision with a heart bio-monitor case study. An electronic stethoscope is used to record heart sounds and the problem of extracting noise from the signal is addressed via the use of wavelets and averaging. In our example of heartbeat analysis, phonocardiograms (PCGs) have many advantages in that they may be replayed and analysed for spectral and frequency information. Many sources of noise may pollute a PCG including foetal breath sounds if the subject is pregnant, lung and breath sounds, environmental noise and noise from contact between the recording device and the skin. Wavelets can be employed to denoise the PCG. The signal is decomposed by a discrete wavelet transform. Due to the efficient decomposition of heart signals, their wavelet coefficients tend to be much larger than those due to noise. Thus, coefficients below a certain level are regarded as noise and are thresholded out. The signal can then be reconstructed without significant loss of information in the signal. The questions that this study attempts to answer are which wavelet families, levels of decomposition, and thresholding techniques best remove the noise in a PCG. The use of averaging in combination with wavelet denoising is also addressed. Possible applications of the Hilbert Transform to heart sound analysis are discussed.
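A compact sketch of the thresholding procedure described above is given below. It relies on the PyWavelets package; the wavelet family ('db6'), the decomposition level, the noise estimate from the finest-scale coefficients and the universal threshold are common textbook choices rather than the settings investigated in the paper, and the "heart sound" is a synthetic toy signal.

```python
import numpy as np
import pywt

fs = 4000
t = np.arange(0, 1.0, 1 / fs)
clean = np.sin(2 * np.pi * 30 * t) * np.exp(-((t % 0.8) / 0.05) ** 2)  # toy bursts
noisy = clean + 0.2 * np.random.default_rng(0).standard_normal(t.size)

coeffs = pywt.wavedec(noisy, 'db6', level=6)          # discrete wavelet transform
sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate, finest scale
thr = sigma * np.sqrt(2 * np.log(noisy.size))         # universal threshold
den = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
denoised = pywt.waverec(den, 'db6')[:noisy.size]

print("residual noise std before/after:",
      np.std(noisy - clean).round(3), np.std(denoised - clean).round(3))
```

Because the signal's energy is compacted into a few large wavelet coefficients while the noise is spread thinly across all of them, soft-thresholding the small coefficients removes most of the noise with little signal loss, which is exactly the argument made in the abstract.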
Hybrid Nested Partitions and Math Programming Framework for Large-scale Combinatorial Optimization
2010-03-31
optimization problems: 1) exact algorithms and 2) metaheuristic algorithms. This project will integrate concepts from these two technologies to develop...optimal solutions within an acceptable amount of computation time, and 2) metaheuristic algorithms such as genetic algorithms, tabu search, and the...integer programming decomposition approaches, such as Dantzig-Wolfe decomposition and Lagrangian relaxation, and metaheuristics such as the Nested
Hybridization of decomposition and local search for multiobjective optimization.
Ke, Liangjun; Zhang, Qingfu; Battiti, Roberto
2014-10-01
Combining ideas from evolutionary algorithms, decomposition approaches, and Pareto local search, this paper suggests a simple yet efficient memetic algorithm for combinatorial multiobjective optimization problems: memetic algorithm based on decomposition (MOMAD). It decomposes a combinatorial multiobjective problem into a number of single objective optimization problems using an aggregation method. MOMAD evolves three populations: 1) population P(L) for recording the current solution to each subproblem; 2) population P(P) for storing starting solutions for Pareto local search; and 3) an external population P(E) for maintaining all the nondominated solutions found so far during the search. A problem-specific single objective heuristic can be applied to these subproblems to initialize the three populations. At each generation, a Pareto local search method is first applied to search a neighborhood of each solution in P(P) to update P(L) and P(E). Then a single objective local search is applied to each perturbed solution in P(L) for improving P(L) and P(E), and reinitializing P(P). The procedure is repeated until a stopping condition is met. MOMAD provides a generic hybrid multiobjective algorithmic framework in which problem specific knowledge, well developed single objective local search and heuristics and Pareto local search methods can be hybridized. It is a population based iterative method and thus an anytime algorithm. Extensive experiments have been conducted in this paper to study MOMAD and compare it with some other state-of-the-art algorithms on the multiobjective traveling salesman problem and the multiobjective knapsack problem. The experimental results show that our proposed algorithm outperforms or performs similarly to the best so far heuristics on these two problems.
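To make the decomposition idea concrete, the sketch below scalarizes a tiny bi-objective 0/1 knapsack problem with a set of weight vectors, improves each scalar subproblem with a simple bit-flip local search, and keeps an external archive of nondominated solutions. It is a much-reduced, illustrative analogue of the approach (the Pareto local search stage and the three-population bookkeeping of MOMAD are omitted), and all problem data are randomly generated.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
profit1, profit2 = rng.integers(1, 30, n), rng.integers(1, 30, n)
weight, capacity = rng.integers(1, 20, n), 80

def objectives(x):
    if np.dot(weight, x) > capacity:
        return np.array([0, 0])                 # infeasible -> worthless
    return np.array([np.dot(profit1, x), np.dot(profit2, x)])

def scalar(x, w):                               # weighted-sum aggregation
    return np.dot(w, objectives(x))

def local_search(x, w, iters=200):
    best, best_val = x.copy(), scalar(x, w)
    for _ in range(iters):
        cand = best.copy()
        cand[rng.integers(n)] ^= 1              # flip one item
        val = scalar(cand, w)
        if val > best_val:
            best, best_val = cand, val
    return best

def dominates(a, b):
    return np.all(a >= b) and np.any(a > b)

archive = []                                    # external archive of nondominated solutions
for lam in np.linspace(0, 1, 11):               # decomposition into scalar subproblems
    w = np.array([lam, 1 - lam])
    sol = local_search(rng.integers(0, 2, n), w)
    f = objectives(sol)
    if not any(dominates(objectives(a), f) for a in archive):
        archive = [a for a in archive if not dominates(f, objectives(a))] + [sol]

print("nondominated points:",
      sorted(tuple(int(v) for v in objectives(a)) for a in archive))
```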
Optimized FPGA Implementation of Multi-Rate FIR Filters Through Thread Decomposition
NASA Technical Reports Server (NTRS)
Kobayashi, Kayla N.; He, Yutao; Zheng, Jason X.
2011-01-01
Multi-rate finite impulse response (MRFIR) filters are among the essential signal-processing components in spaceborne instruments where finite impulse response filters are often used to minimize nonlinear group delay and finite precision effects. Cascaded (multistage) designs of MRFIR filters are further used for large rate change ratios in order to lower the required throughput, while simultaneously achieving comparable or better performance than single-stage designs. Traditional representation and implementation of MRFIR employ polyphase decomposition of the original filter structure, whose main purpose is to compute only the needed outputs at the lowest possible sampling rate. In this innovation, an alternative representation and implementation technique called TD-MRFIR (Thread Decomposition MRFIR) is presented. The basic idea is to decompose the MRFIR into output computational threads, in contrast to a structural decomposition of the original filter as done in the polyphase decomposition. A naive implementation of a decimation filter consisting of a full FIR followed by a downsampling stage is very inefficient, as most of the computations performed by the FIR stage are discarded through downsampling. In fact, only 1/M of the total computations are useful (M being the decimation factor). Polyphase decomposition provides an alternative view of decimation filters, where the downsampling occurs before the FIR stage, and the outputs are viewed as the sum of M sub-filters with lengths of N/M taps. Although this approach leads to more efficient filter designs, in general the implementation is not straightforward if the number of multipliers needs to be minimized. In TD-MRFIR, each thread represents an instance of the finite convolution required to produce a single output of the MRFIR. The filter is thus viewed as a finite collection of concurrent threads. Each of the threads completes when a convolution result (filter output value) is computed, and is activated when the first input of the convolution becomes available. Thus, new threads get spawned at exactly the rate of N/M, where N is the total number of taps, and M is the decimation factor. Existing threads retire at the same rate of N/M. The implementation of an MRFIR is thus transformed into a problem of statically scheduling the minimum number of multipliers such that all threads can be completed on time. Solving the static scheduling problem is rather straightforward if one examines the Thread Decomposition Diagram, which is a table-like diagram that has rows representing computation threads and columns representing time. The control logic of the MRFIR can be implemented using simple counters. Instead of decomposing MRFIRs into subfilters as suggested by polyphase decomposition, the thread decomposition diagrams transform the problem into a familiar one of static scheduling, which can be easily solved as the input rate is constant.
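A minimal numerical check of the thread view is sketched below: each decimated output is computed as its own finite convolution sum (one "thread" per output), and the result is compared with the wasteful full-rate-FIR-then-downsample reference. The tap count, decimation factor and random data are illustrative, and no static scheduling of multipliers is attempted.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 16, 4                      # taps, decimation factor
h = rng.standard_normal(N)        # FIR coefficients
x = rng.standard_normal(256)      # input samples

# Naive reference: full-rate FIR, then keep every M-th output (most work wasted).
reference = np.convolve(x, h)[::M]

# Thread view: output n is the single convolution sum  y[n] = sum_k h[k] * x[n*M - k].
def thread_output(n):
    acc = 0.0
    for k in range(N):
        idx = n * M - k
        if 0 <= idx < len(x):
            acc += h[k] * x[idx]
    return acc

threads = np.array([thread_output(n) for n in range(len(reference))])
print("max difference:", np.max(np.abs(threads - reference)))
```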
Model-based multiple patterning layout decomposition
NASA Astrophysics Data System (ADS)
Guo, Daifeng; Tian, Haitong; Du, Yuelin; Wong, Martin D. F.
2015-10-01
As one of the most promising next generation lithography technologies, multiple patterning lithography (MPL) plays an important role in the attempts to keep pace with the 10 nm technology node and beyond. As feature sizes keep shrinking, it has become impossible to print dense layouts within one single exposure. As a result, MPL such as double patterning lithography (DPL) and triple patterning lithography (TPL) has been widely adopted. There is a large volume of literature on DPL/TPL layout decomposition, and the current approach is to formulate the problem as a classical graph-coloring problem: layout features (polygons) are represented by vertices in a graph G, and there is an edge between two vertices if and only if the distance between the two corresponding features is less than a minimum distance threshold value dmin. The problem is to color the vertices of G using k colors (k = 2 for DPL, k = 3 for TPL) such that no two vertices connected by an edge are given the same color. This is a rule-based approach, which imposes a geometric distance as a minimum constraint to simply decompose polygons within that distance into different masks. It is not desirable in practice because this criterion cannot completely capture the behavior of the optics. For example, it lacks sufficient information such as the optical source characteristics and the effects between polygons outside the minimum distance. To remedy this deficiency, a model-based layout decomposition approach that bases the decomposition criteria on simulation results was first introduced at SPIE 2013.1 However, the algorithm1 is based on simplified assumptions about the optical simulation model and therefore its usage on real layouts is limited. Recently AMSL2 also proposed a model-based approach to layout decomposition that iteratively simulates the layout, which requires excessive computational resources and may lead to sub-optimal solutions. The approach2 also potentially generates too many stitches. In this paper, we propose a model-based MPL layout decomposition method using a pre-simulated library of frequent layout patterns. Instead of using the graph G in the standard graph-coloring formulation, we build an expanded graph H where each vertex represents a group of adjacent features together with a coloring solution. By utilizing the library and running sophisticated graph algorithms on H, our approach can obtain optimal decomposition results efficiently. Our model-based solution can achieve a practical mask design which significantly improves the lithography quality on the wafer compared to rule-based decomposition.
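The rule-based starting point described in this and the preceding triple-patterning abstract, building a conflict graph from a distance threshold and k-coloring it, can be sketched in a few lines. The toy geometry below treats features as points rather than polygons, and the backtracking search is only practical for small instances; it is meant to show the formulation, not a production decomposer.

```python
from itertools import combinations

def build_conflict_graph(centres, d_min):
    """Toy geometry: two features conflict if their centres are closer than d_min."""
    n = len(centres)
    adj = [set() for _ in range(n)]
    for (i, a), (j, b) in combinations(enumerate(centres), 2):
        if (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2 < d_min ** 2:
            adj[i].add(j)
            adj[j].add(i)
    return adj

def k_colour(adj, k):
    """Backtracking assignment of one of k masks per feature, or None."""
    n = len(adj)
    colours = [None] * n
    def assign(v):
        if v == n:
            return True
        for c in range(k):
            if all(colours[u] != c for u in adj[v]):
                colours[v] = c
                if assign(v + 1):
                    return True
                colours[v] = None
        return False
    return colours if assign(0) else None

centres = [(0, 0), (1, 0), (0.5, 0.9), (2.5, 0), (3.0, 0.8), (2.0, 0.9)]
adj = build_conflict_graph(centres, d_min=1.2)
print("DPL (k=2):", k_colour(adj, 2))   # two triangles of conflicts -> no 2-colouring
print("TPL (k=3):", k_colour(adj, 3))   # a third mask resolves them
```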
Optimization-based additive decomposition of weakly coercive problems with applications
Bochev, Pavel B.; Ridzal, Denis
2016-01-27
In this study, we present an abstract mathematical framework for an optimization-based additive decomposition of a large class of variational problems into a collection of concurrent subproblems. The framework replaces a given monolithic problem by an equivalent constrained optimization formulation in which the subproblems define the optimization constraints and the objective is to minimize the mismatch between their solutions. The significance of this reformulation stems from the fact that one can solve the resulting optimality system by an iterative process involving only solutions of the subproblems. Consequently, assuming that stable numerical methods and efficient solvers are available for every subproblem, our reformulation leads to robust and efficient numerical algorithms for a given monolithic problem by breaking it into subproblems that can be handled more easily. An application of the framework to the Oseen equations illustrates its potential.
Spreadsheets and Bulgarian Goats
ERIC Educational Resources Information Center
Sugden, Steve
2012-01-01
We consider a problem appearing in an Australian Mathematics Challenge in 2003. This article considers whether a spreadsheet might be used to model this problem, thus allowing students to explore its structure within the spreadsheet environment. It then goes on to reflect on some general principles of problem decomposition when the final goal is a…
Distributed Cooperation Solution Method of Complex System Based on MAS
NASA Astrophysics Data System (ADS)
Weijin, Jiang; Yuhui, Xu
To adapt reconfigurable fault-diagnosis models to dynamic environments and to fully meet the needs of solving the tasks of complex systems, this paper introduces multi-agent and related technology into complicated fault diagnosis and studies an integrated intelligent control system. Based on the structure of diagnostic decisions and hierarchical modeling, and on a multi-layer decomposition strategy for the diagnosis task, a multi-agent synchronous diagnosis federation integrating different knowledge representation modes and inference mechanisms is presented; the functions of the management agent, diagnosis agent and decision agent are analyzed; the organization and evolution of agents in the system are proposed; the corresponding conflict resolution algorithm is given; and a layered structure of abstract agents with public attributes is built. The system architecture is realized on a distributed, layered MAS blackboard. A real-world application shows that the proposed control structure successfully solves the fault diagnosis problem of a complex plant and has particular advantages in the distributed domain.
A non-invasive implementation of a mixed domain decomposition method for frictional contact problems
NASA Astrophysics Data System (ADS)
Oumaziz, Paul; Gosselet, Pierre; Boucard, Pierre-Alain; Guinard, Stéphane
2017-11-01
A non-invasive implementation of the Latin domain decomposition method for frictional contact problems is described. The formulation requires dealing with mixed (Robin) conditions on the faces of the subdomains, which is not a classical feature of commercial software. Therefore we propose a new implementation of the linear stage of the Latin method with a non-local search direction built from the stiffness of a layer of elements on the interfaces. This choice enables us to implement the method within the open source software Code_Aster, and to derive 2D and 3D examples with performance similar to that of the standard Latin method.
Coherent mode decomposition using mixed Wigner functions of Hermite-Gaussian beams.
Tanaka, Takashi
2017-04-15
A new method of coherent mode decomposition (CMD) is proposed that is based on a Wigner-function representation of Hermite-Gaussian beams. In contrast to the well-known method using the cross spectral density (CSD), it directly determines the mode functions and their weights without solving the eigenvalue problem. This facilitates the CMD of partially coherent light whose Wigner functions (and thus CSDs) are not separable, in which case the conventional CMD requires solving an eigenvalue problem with a large matrix and thus is numerically formidable. An example is shown regarding the CMD of synchrotron radiation, one of the most important applications of the proposed method.
Decomposition of Time Scales in Linear Systems and Markovian Decision Processes.
1980-11-01
[Scanned-report fragment: table-of-contents entries (Introduction; Eigenstructure; Ordering of State Variables; Example - 8th Order Power System Model) and an excerpt noting that Chapter 3 considers the time-scale decomposition of singularly perturbed systems; the accompanying equations are not recoverable.]
ERIC Educational Resources Information Center
Humphreys, Patrick; Wisudha, Ayleen
As a demonstration of the application of heuristic devices to decision-theoretical techniques, an interactive computer program known as MAUD (Multiattribute Utility Decomposition) has been designed to support decision or choice problems that can be decomposed into component factors, or to act as a tool for investigating the microstructure of a…
Acceleration of aircraft-level Traffic Flow Management
NASA Astrophysics Data System (ADS)
Rios, Joseph Lucio
This dissertation describes novel approaches to solving large-scale, high fidelity, aircraft-level Traffic Flow Management scheduling problems. Depending on the methods employed, solving these problems to optimality can take longer than the length of the planning horizon in question. Research in this domain typically focuses on the quality of the modeling used to describe the problem and the benefits achieved from the optimized solution, often treating computational aspects as secondary or tertiary. The work presented here takes the complementary view and considers the computational aspect as the primary concern. To this end, a previously published model for solving this Traffic Flow Management scheduling problem is used as the starting point for this study. The model proposed by Bertsimas and Stock-Patterson is a binary integer program taking into account all major resource capacities and the trajectories of each flight to decide which flights should be held in which resource for what amount of time in order to satisfy all capacity requirements. For large instances, the solve time using state-of-the-art solvers is prohibitive for use within a potential decision support tool. With this dissertation, however, it will be shown that solving can be achieved in reasonable time for instances of real-world size. Five other techniques developed and tested for this dissertation will be described in detail. These are heuristic methods that provide good results. Performance is measured in terms of runtime and "optimality gap." We then describe the most successful method presented in this dissertation: Dantzig-Wolfe Decomposition. Results indicate that a parallel implementation of Dantzig-Wolfe Decomposition optimally solves the original problem in much reduced time and with better integrality and smaller optimality gap than any of the heuristic methods or state-of-the-art, commercial solvers. The solution quality improves in every measurable way as the number of subproblems solved in parallel increases. A maximal decomposition provides the best results of any method tested. The convergence qualities of Dantzig-Wolfe Decomposition have been criticized in the past, so we examine what makes the Bertsimas-Stock Patterson model so amenable to use of this method. These mathematical qualities of the model are generalized to provide guidance on other problems that may benefit from massively parallel Dantzig-Wolfe Decomposition. This result, together with the development of the software, and the experimental results indicating the feasibility of real-time, nationwide Traffic Flow Management scheduling represent the major contributions of this dissertation.
Niang, Oumar; Thioune, Abdoulaye; El Gueirea, Mouhamed Cheikh; Deléchelle, Eric; Lemoine, Jacques
2012-09-01
The major problem with the empirical mode decomposition (EMD) algorithm is its lack of a theoretical framework. So, it is difficult to characterize and evaluate this approach. In this paper, we propose, in the 2-D case, the use of an alternative implementation to the algorithmic definition of the so-called "sifting process" used in Huang's original EMD method. This approach, especially based on partial differential equations (PDEs), was presented by Niang in previous works, in 2005 and 2007, and relies on a nonlinear diffusion-based filtering process to solve the mean envelope estimation problem. In the 1-D case, the efficiency of the PDE-based method, compared to the original EMD algorithmic version, was also illustrated in a recent paper. Recently, several 2-D extensions of the EMD method have been proposed. Despite some effort, 2-D versions of EMD appear to perform poorly and are very time consuming. So in this paper, an extension to the 2-D space of the PDE-based approach is extensively described. This approach has been applied in cases of both signal and image decomposition. The obtained results confirm the usefulness of the new PDE-based sifting process for the decomposition of various kinds of data. Some results have been provided in the case of image decomposition. The effectiveness of the approach encourages its use in a number of signal and image applications such as denoising, detrending, or texture analysis.
NASA Astrophysics Data System (ADS)
Cork, Christopher; Miloslavsky, Alexander; Friedberg, Paul; Luk-Pat, Gerry
2013-04-01
Lithographers had hoped that single patterning would be enabled at the 20nm node by way of EUV lithography. However, due to delays in EUV readiness, double patterning with 193i lithography is currently relied upon for volume production for the 20nm node's metal 1 layer. At the 14nm and likely at the 10nm node, LE-LE-LE triple patterning technology (TPT) is one of the favored options [1,2] for patterning local interconnect and Metal 1 layers. While previous research has focused on TPT for the contact mask, metal layers offer new challenges and opportunities, in particular the ability to decompose design polygons across more than one mask. The extra flexibility offered by the third mask and the ability to leverage polygon stitching both serve to improve compliance. However, ensuring TPT compliance - the task of finding a 3-color mask decomposition for a design - remains a difficult task. Moreover, scalability concerns multiply the difficulty of triple patterning decomposition, which is an NP-complete problem. Indeed previous work shows that network sizes above a few thousand nodes or polygons start to take significantly longer times to compute [3], making full chip decomposition for arbitrary layouts impractical. In practice Metal 1 layouts can be considered as two separate problem domains, namely: decomposition of standard cells and decomposition of IP blocks. Standard cells typically include only a few tens of polygons and should be amenable to fast decomposition. Successive design iterations should resolve compliance issues and improve packing density. Density improvements are multiplied repeatedly as standard cells are placed multiple times. IP blocks, on the other hand, may involve very large networks. This paper evaluates multiple approaches to triple patterning decomposition for the Metal 1 layer. The benefits of polygon stitching, in particular, the ability to resolve commonly encountered non-compliant layout configurations and improve packing density, are weighed against the increased difficulty in finding an optimized, legal decomposition and coping with the increased scalability challenges.
NASA Astrophysics Data System (ADS)
Chen, Zhen; Chan, Tommy H. T.
2017-08-01
This paper proposes a new methodology for moving force identification (MFI) from the responses of a bridge deck. Based on the existing time domain method (TDM), the MFI problem eventually reduces to solving a linear algebraic equation of the form Ax = b. The vector b is usually contaminated by an unknown error e arising from measurement error; this vector e is often called "noise". Because of the ill-posedness of the inverse problem, the identified force is sensitive to the noise e. The proposed truncated generalized singular value decomposition (TGSVD) method aims at obtaining an acceptable solution and making it less sensitive to noise perturbations in the ill-posed problem. The illustrated results show that the TGSVD has many advantages such as higher precision, better adaptability and noise immunity compared with the TDM. In addition, choosing a proper regularization matrix L and a truncation parameter k is very useful for improving the identification accuracy and handling the ill-posedness when the method is used to identify moving forces on a bridge.
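A stripped-down illustration of why truncation helps an ill-posed Ax = b is given below. It uses plain truncated SVD rather than the generalized (TGSVD) variant with a regularization matrix L, on an invented smoothing-kernel matrix with synthetic noise, and compares the truncated solution with the naive least-squares one.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
# Discretised smoothing kernel -> severely ill-conditioned A
s, t = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
A = np.exp(-((s - t) ** 2) / 0.01) / n
x_true = np.sin(np.pi * np.linspace(0, 1, n))
b = A @ x_true + 1e-3 * rng.standard_normal(n)      # b contaminated by "noise" e

U, sig, Vt = np.linalg.svd(A)
print("condition number:", sig[0] / sig[-1])

def tsvd_solve(k):
    """Keep only the k largest singular values."""
    return Vt[:k].T @ ((U[:, :k].T @ b) / sig[:k])

x_naive = np.linalg.lstsq(A, b, rcond=None)[0]
x_tsvd = tsvd_solve(k=10)
print("error naive:", np.linalg.norm(x_naive - x_true).round(2),
      " error TSVD:", np.linalg.norm(x_tsvd - x_true).round(2))
```

The truncation parameter k plays the same role as in the abstract: too small and the solution is oversmoothed, too large and the noise is amplified by the tiny singular values.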
Spisák, Sándor; Molnár, Béla; Galamb, Orsolya; Sipos, Ferenc; Tulassay, Zsolt
2007-08-12
The confirmation of mRNA expression studies by protein chips is of high recent interest due to the widespread application of expression arrays. In this review the advantages, technical limitations, application fields and the first results of protein arrays are described. The bottlenecks of the growing protein array applications are the fast decomposition of proteins, the problem of nonspecific binding and the lack of amplification techniques. Today, glass-slide-based printed, SELDI (MS)-based, electrophoresis-based and tissue-microarray-based technologies are available. The advantage of the glass-slide-based chips is the simplicity of their application and their relatively low cost. The SELDI-based protein chip technique is applicable to minute amounts of starting material (<1 microgram) but is the most expensive. The electrophoresis-based techniques are still under intensive development. Tissue microarrays can be used for the parallel testing of the sensitivity and specificity of single antibodies on a broad range of histological specimens on a single slide. Protein chips have been successfully used for serum tumor marker detection, cancer research, cell physiology studies and the verification of mRNA expression studies. Protein chips are envisioned to become available for routine diagnostic applications if ongoing technology development succeeds in increasing sensitivity and specificity, reducing costs and reducing the necessary sample volume.
Yao, Shengnan; Zeng, Weiming; Wang, Nizhuan; Chen, Lei
2013-07-01
Independent component analysis (ICA) has been proven to be effective for functional magnetic resonance imaging (fMRI) data analysis. However, ICA decomposition requires iterative optimization of the unmixing matrix, whose initial values are generated randomly. Thus the randomness of the initialization leads to different ICA decomposition results, and a single one-time decomposition for fMRI data analysis is not usually reliable. Under this circumstance, several methods based on repeated decompositions with ICA (RDICA) were proposed to reveal the stability of ICA decomposition. Although utilizing RDICA has achieved satisfying results in validating the performance of ICA decomposition, RDICA costs much computing time. To mitigate this problem, in this paper we propose a method, named ATGP-ICA, for fMRI data analysis. This method generates fixed initial values with an automatic target generation process (ATGP) instead of producing them randomly. We performed experimental tests on both hybrid data and fMRI data to show the effectiveness of the new method and made a performance comparison of traditional one-time decomposition with ICA (ODICA), RDICA and ATGP-ICA. The proposed method not only eliminates the randomness of ICA decomposition, but also saves much computing time compared to RDICA. Furthermore, ROC (Receiver Operating Characteristic) power analysis also indicated better signal reconstruction performance for ATGP-ICA than for RDICA. Copyright © 2013 Elsevier Inc. All rights reserved.
Using domain decomposition in the multigrid NAS parallel benchmark on the Fujitsu VPP500
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, J.C.H.; Lung, H.; Katsumata, Y.
1995-12-01
In this paper, we demonstrate how domain decomposition can be applied to the multigrid algorithm to convert the code for MPP architectures. We also discuss the performance and scalability of this implementation on the new product line of Fujitsu's vector parallel computer, the VPP500. This computer uses Fujitsu's well-known vector processor as the PE, each rated at 1.6 GFLOPS. The high speed crossbar network rated at 800 MB/s provides the inter-PE communication. The results show that physical domain decomposition is the best way to solve MG problems on the VPP500.
Domain decomposition: A bridge between nature and parallel computers
NASA Technical Reports Server (NTRS)
Keyes, David E.
1992-01-01
Domain decomposition is an intuitive organizing principle for a partial differential equation (PDE) computation, both physically and architecturally. However, its significance extends beyond the readily apparent issues of geometry and discretization, on one hand, and of modular software and distributed hardware, on the other. Engineering and computer science aspects are bridged by an old but recently enriched mathematical theory that offers the subject not only unity, but also tools for analysis and generalization. Domain decomposition induces function-space and operator decompositions with valuable properties. Function-space bases and operator splittings that are not derived from domain decompositions generally lack one or more of these properties. The evolution of domain decomposition methods for elliptically dominated problems has linked two major algorithmic developments of the last 15 years: multilevel and Krylov methods. Domain decomposition methods may be considered descendants of both classes with an inheritance from each: they are nearly optimal and at the same time efficiently parallelizable. Many computationally driven application areas are ripe for these developments. A progression is made from a mathematically informal motivation for domain decomposition methods to a specific focus on fluid dynamics applications. To be introductory rather than comprehensive, simple examples are provided while convergence proofs and algorithmic details are left to the original references; however, an attempt is made to convey their most salient features, especially where this leads to algorithmic insight.
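As a concrete toy instance of the ideas surveyed above, the sketch below runs the classical alternating Schwarz iteration on -u'' = 1 with homogeneous Dirichlet conditions, using two overlapping subdomains whose local solves exchange boundary data. The grid size, overlap and sweep count are arbitrary, and the example is an illustration added here rather than one taken from the article.

```python
import numpy as np

n = 101                                       # global grid points incl. boundaries
h = 1.0 / (n - 1)
x = np.linspace(0, 1, n)
f = np.ones(n)
u = np.zeros(n)                               # global iterate, u = 0 on the boundary
left = list(range(1, 60))                     # interior indices of subdomain 1
right = list(range(45, n - 1))                # subdomain 2, overlapping indices 45..59

def subdomain_solve(idx, u):
    """Solve -u'' = f on idx with Dirichlet data taken from the current iterate."""
    m = len(idx)
    A = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h ** 2
    rhs = f[idx].copy()
    rhs[0] += u[idx[0] - 1] / h ** 2          # boundary data from the neighbour
    rhs[-1] += u[idx[-1] + 1] / h ** 2
    return np.linalg.solve(A, rhs)

exact = 0.5 * x * (1 - x)                     # analytic solution of -u'' = 1
for sweep in range(30):
    u[left] = subdomain_solve(left, u)        # subdomain 1 sweep
    u[right] = subdomain_solve(right, u)      # subdomain 2 sweep
print("max error after 30 sweeps:", np.max(np.abs(u - exact)))
```

Each sweep only ever solves small local problems, which is the property that makes the family attractive on distributed hardware; the overlap width controls how fast the boundary information propagates and hence the convergence rate.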
NASA Astrophysics Data System (ADS)
Klingbeil, A. E.; Jeffries, J. B.; Davidson, D. F.; Hanson, R. K.
2008-11-01
A two-wavelength, mid-IR optical absorption diagnostic is developed for simultaneous temperature and n-dodecane vapor concentration measurements in an aerosol-laden shock tube. FTIR absorption spectra for the temperature range 323 to 773 K are used to select the two wavelengths (3409.0 and 3432.4 nm). Shock-heated mixtures of n-dodecane vapor in argon are then used to extend the absorption cross section data at these wavelengths to 1322 K. The sensor is used to validate a model of the post-evaporation temperature and pressure of shock-heated fuel aerosol, which can ultimately be used for the study of the chemistry of low-vapor-pressure compounds and fuel blends. The signal-to-noise ratios of the temperature and concentration measurements are ~20 and ~30, respectively, illustrating the sensitivity of this diagnostic. The good agreement between model and measurement provides confidence in the use of this aerosol shock tube to provide well-known thermodynamic conditions. At high temperatures, pseudo-first-order decomposition rates are extracted from time-resolved concentration measurements, and data from vapor and aerosol shocks are found to be in good agreement. Notably, the n-dodecane concentration measurements exhibit slower decomposition than predicted by models using two published reaction mechanisms, illustrating the need for further kinetic studies of this hydrocarbon. These results demonstrate the potential of multi-wavelength mid-IR laser sensors for hydrocarbon measurements in environments with time-varying temperature and concentration.
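The two-wavelength principle can be illustrated numerically: because the ratio of the two absorbances depends only on temperature, T is found by root finding and the concentration then follows from the Beer-Lambert relation at either wavelength. The cross-section functions, path length and conditions below are invented placeholders, not the measured n-dodecane data used in the paper.

```python
import numpy as np
from scipy.optimize import brentq

L_path = 0.10                                     # absorption path length [m], assumed

def sigma1(T):                                    # placeholder cross sections [m^2/mol]
    return 2.0e-1 * np.exp(-T / 900.0)

def sigma2(T):
    return 1.2e-1 * np.exp(-T / 2500.0)

# Synthetic "measurement" at assumed true conditions
T_true, n_true = 1100.0, 0.4                      # K, mol/m^3
A1 = sigma1(T_true) * n_true * L_path
A2 = sigma2(T_true) * n_true * L_path

# Step 1: temperature from the absorbance ratio (concentration cancels out)
ratio = A1 / A2
T_est = brentq(lambda T: sigma1(T) / sigma2(T) - ratio, 300.0, 2000.0)

# Step 2: concentration from either wavelength via Beer-Lambert
n_est = A1 / (sigma1(T_est) * L_path)
print(f"T = {T_est:.1f} K, n = {n_est:.3f} mol/m^3")
```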
Applications of singular value analysis and partial-step algorithm for nonlinear orbit determination
NASA Technical Reports Server (NTRS)
Ryne, Mark S.; Wang, Tseng-Chan
1991-01-01
An adaptive method in which cruise and nonlinear orbit determination problems can be solved using a single program is presented. It involves singular value decomposition augmented with an extended partial step algorithm. The extended partial step algorithm constrains the size of the correction to the spacecraft state and other solve-for parameters. The correction is controlled by an a priori covariance and a user-supplied bounds parameter. The extended partial step method is an extension of the update portion of the singular value decomposition algorithm. It thus preserves the numerical stability of the singular value decomposition method, while extending the region over which it converges. In linear cases, this method reduces to the singular value decomposition algorithm with the full rank solution. Two examples are presented to illustrate the method's utility.
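The flavor of the partial-step idea, limiting the size of the SVD-based correction so that a poor initial state does not cause divergence, can be shown with a toy nonlinear least-squares fit. The model, bound value and data below are invented, and the loop is a plain Gauss-Newton iteration with a clamped update, not the orbit-determination implementation described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 5, 200)
p_true = np.array([0.6, 3.0])                  # decay rate, angular frequency

def model(p, t):
    return np.exp(-p[0] * t) * np.sin(p[1] * t)

def model_jacobian(p, t):
    e = np.exp(-p[0] * t)
    return np.column_stack([-t * e * np.sin(p[1] * t),   # d(model)/d p0
                             t * e * np.cos(p[1] * t)])  # d(model)/d p1

y = model(p_true, t) + 0.01 * rng.standard_normal(t.size)

p = np.array([1.5, 2.5])           # deliberately poor initial state
bound = 0.5                        # user-supplied cap on the correction norm
for _ in range(30):
    r = y - model(p, t)
    J = model_jacobian(p, t)
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    dp = Vt.T @ ((U.T @ r) / s)    # full-rank SVD correction, solves J dp ~= r
    if np.linalg.norm(dp) > bound: # partial step: shrink an oversized correction
        dp *= bound / np.linalg.norm(dp)
    p = p + dp
print("estimated:", p.round(3), " true:", p_true)
```

When the correction already fits within the bound, the clamp has no effect and the update reduces to the ordinary full-rank SVD solution, mirroring the behaviour described for the linear case.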
Resolvent estimates in homogenisation of periodic problems of fractional elasticity
NASA Astrophysics Data System (ADS)
Cherednichenko, Kirill; Waurick, Marcus
2018-03-01
We provide operator-norm convergence estimates for solutions to a time-dependent equation of fractional elasticity in one spatial dimension, with rapidly oscillating coefficients that represent the material properties of a viscoelastic composite medium. Assuming periodicity in the coefficients, we prove operator-norm convergence estimates for an operator fibre decomposition obtained by applying to the original fractional elasticity problem the Fourier-Laplace transform in time and Gelfand transform in space. We obtain estimates on each fibre that are uniform in the quasimomentum of the decomposition and in the period of oscillations of the coefficients as well as quadratic with respect to the spectral variable. On the basis of these uniform estimates we derive operator-norm-type convergence estimates for the original fractional elasticity problem, for a class of sufficiently smooth densities of applied forces.
NASA Technical Reports Server (NTRS)
Smith, Stephen F.; Pathak, Dhiraj K.
1991-01-01
In this paper, we report work aimed at applying concepts of constraint-based problem structuring and multi-perspective scheduling to an over-subscribed scheduling problem. Previous research has demonstrated the utility of these concepts as a means for effectively balancing conflicting objectives in constraint-relaxable scheduling problems, and our goal here is to provide evidence of their similar potential in the context of HST observation scheduling. To this end, we define and experimentally assess the performance of two time-bounded heuristic scheduling strategies in balancing the tradeoff between resource setup time minimization and satisfaction of absolute time constraints. The first strategy considered is motivated by dispatch-based manufacturing scheduling research, and employs a problem decomposition that concentrates local search on minimizing resource idle time due to setup activities. The second is motivated by research in opportunistic scheduling and advocates a problem decomposition that focuses attention on the goal activities that have the tightest temporal constraints. Analysis of experimental results gives evidence of differential superiority on the part of each strategy in different problem solving circumstances. A composite strategy based on recognition of characteristics of the current problem solving state is then defined and tested to illustrate the potential benefits of constraint-based problem structuring and multi-perspective scheduling in over-subscribed scheduling problems.
Gao, Yu-Fei; Gui, Guan; Xie, Wei; Zou, Yan-Bin; Yang, Yue; Wan, Qun
2017-01-01
This paper investigates a two-dimensional angle of arrival (2D AOA) estimation algorithm for the electromagnetic vector sensor (EMVS) array based on Type-2 block component decomposition (BCD) tensor modeling. Such a tensor decomposition method can take full advantage of the multidimensional structural information of electromagnetic signals to accomplish blind estimation for array parameters with higher resolution. However, existing tensor decomposition methods encounter many restrictions in applications of the EMVS array, such as the strict requirement for uniqueness conditions of decomposition, the inability to handle partially-polarized signals, etc. To solve these problems, this paper investigates tensor modeling for partially-polarized signals of an L-shaped EMVS array. The 2D AOA estimation algorithm based on rank-(L1,L2,·) BCD is developed, and the uniqueness condition of decomposition is analyzed. By means of the estimated steering matrix, the proposed algorithm can automatically achieve angle pair-matching. Numerical experiments demonstrate that the present algorithm has the advantages of both accuracy and robustness of parameter estimation. Even under the conditions of lower SNR, small angular separation and limited snapshots, the proposed algorithm still possesses better performance than subspace methods and the canonical polyadic decomposition (CPD) method. PMID:28448431
Kaplan, Daniel M
2010-10-01
The author argues that the well-formulated problem list is essential for both organizing and evaluating diagnostic thinking. He considers evidence of deficiencies in problem lists in the medical record. He observes a trend among medical trainees toward organizing notes in the medical record according to lists of organ systems or medical subspecialties and hypothesizes that system-based documentation may undermine the art of problem formulation and diagnostic synthesis. Citing research linking more sophisticated problem representation with diagnostic success, he suggests that documentation style and clinical reasoning are closely connected and that organ-based documentation may predispose trainees to several varieties of cognitive diagnostic error and deficient synthesis. These include framing error, premature or absent closure, failure to integrate related findings, and failure to recognize the level of diagnostic resolution attained for a given problem. He acknowledges the pitfalls of higher-order diagnostic resolution, including the application of labels unsupported by firm evidence, while maintaining that diagnostic resolution as far as evidence permits is essential to both rational care of patients and rigorous education of learners. He proposes further research, including comparison of diagnostic efficiency between organ- and problem-oriented thinkers. He hypothesizes that the subspecialty-based structure of academic medical services helps perpetuate organ-system-based thinking, and calls on clinical educators to renew their emphasis on the formulation and documentation of complete and precise problem lists and progressively refined diagnoses by trainees.
A data-driven method to enhance vibration signal decomposition for rolling bearing fault analysis
NASA Astrophysics Data System (ADS)
Grasso, M.; Chatterton, S.; Pennacchi, P.; Colosimo, B. M.
2016-12-01
Health condition analysis and diagnostics of rotating machinery requires the capability of properly characterizing the information content of sensor signals in order to detect and identify possible fault features. Time-frequency analysis plays a fundamental role, as it allows determining both the existence and the causes of a fault. The separation of components belonging to different time-frequency scales, associated with either healthy or faulty conditions, represents a challenge that motivates the development of effective methodologies for multi-scale signal decomposition. In this framework, the Empirical Mode Decomposition (EMD) is a flexible tool, thanks to its data-driven and adaptive nature. However, the EMD usually yields an over-decomposition of the original signals into a large number of intrinsic mode functions (IMFs). The selection of the most relevant IMFs is a challenging task, and the reference literature lacks automated methods to achieve a synthetic decomposition into few physically meaningful modes while avoiding the generation of spurious or meaningless modes. The paper proposes a novel automated approach aimed at generating a decomposition into a minimal number of relevant modes, called Combined Mode Functions (CMFs), each consisting of a sum of adjacent IMFs that share similar properties. The final number of CMFs is selected in a fully data driven way, leading to an enhanced characterization of the signal content without any information loss. A novel criterion to assess the dissimilarity between adjacent CMFs is proposed, based on probability density functions of frequency spectra. The method is suitable to analyze vibration signals that may be periodically acquired within the operating life of rotating machinery. A rolling element bearing fault analysis based on experimental data is presented to demonstrate the performances of the method and the provided benefits.
On the solutions of fractional order of evolution equations
NASA Astrophysics Data System (ADS)
Morales-Delgado, V. F.; Taneco-Hernández, M. A.; Gómez-Aguilar, J. F.
2017-01-01
In this paper we present a discussion of generalized Cauchy problems in a diffusion wave process, we consider bi-fractional-order evolution equations in the Riemann-Liouville, Liouville-Caputo, and Caputo-Fabrizio sense. Through Fourier transforms and Laplace transform we derive closed-form solutions to the Cauchy problems mentioned above. Similarly, we establish fundamental solutions. Finally, we give an application of the above results to the determination of decompositions of Dirac type for bi-fractional-order equations and write a formula for the moments for the fractional vibration of a beam equation. This type of decomposition allows us to speak of internal degrees of freedom in the vibration of a beam equation.
Decomposition Algorithm for Global Reachability on a Time-Varying Graph
NASA Technical Reports Server (NTRS)
Kuwata, Yoshiaki
2010-01-01
A decomposition algorithm has been developed for global reachability analysis on a space-time grid. By exploiting the upper block-triangular structure, the planning problem is decomposed into smaller subproblems, which is much more scalable than the original approach. Recent studies have proposed the use of a hot-air (Montgolfier) balloon for possible exploration of Titan and Venus because these bodies have thick haze or cloud layers that limit the science return from an orbiter, and the atmospheres would provide enough buoyancy for balloons. One of the important questions that needs to be addressed is what surface locations the balloon can reach from an initial location, and how long it would take. This is referred to as the global reachability problem, where the paths from starting locations to all possible target locations must be computed. The balloon could be driven with its own actuation, but its actuation capability is fairly limited. It would be more efficient to take advantage of the wind field and ride the wind that is much stronger than what the actuator could produce. It is possible to pose the path planning problem as a graph search problem on a directed graph by discretizing the spacetime world and the vehicle actuation. The decomposition algorithm provides reachability analysis of a time-varying graph. Because the balloon only moves in the positive direction in time, the adjacency matrix of the graph can be represented with an upper block-triangular matrix, and this upper block-triangular structure can be exploited to decompose a large graph search problem. The new approach consumes a much smaller amount of memory, which also helps speed up the overall computation when the computing resource has a limited physical memory compared to the problem size.
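A simplified sketch of the idea described above follows: on a space-time grid the adjacency only points forward in time (the upper block-triangular structure), so reachability can be propagated one time layer at a time instead of searching the whole graph at once. The wind-driven transition sets below are randomly generated stand-ins, and staying in place is always allowed.

```python
import numpy as np

n_cells, n_steps = 30, 12
rng = np.random.default_rng(0)

# transitions[t][c] = set of cells reachable from cell c between time t and t+1
transitions = [[set(rng.choice(n_cells, size=3, replace=False)) | {c}
                for c in range(n_cells)] for _ in range(n_steps)]

def earliest_arrival(start_cell):
    """Earliest time step at which each cell can be reached from start_cell."""
    arrival = np.full(n_cells, -1)
    arrival[start_cell] = 0
    frontier = {start_cell}
    for t in range(n_steps):                 # process time layers in order
        nxt = set()
        for c in frontier:
            nxt |= transitions[t][c]
        for c in nxt:
            if arrival[c] < 0:
                arrival[c] = t + 1
        frontier = nxt                        # reachable set at time t+1
    return arrival

arr = earliest_arrival(start_cell=0)
print("cells reached:", int(np.sum(arr >= 0)), "of", n_cells)
print("earliest arrival times:", arr)
```

Each block (time layer) is visited exactly once, which is the memory and runtime saving that the block-triangular decomposition exploits.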
Spreadsheets and Bulgarian goats
NASA Astrophysics Data System (ADS)
Sugden, Steve
2012-10-01
We consider a problem appearing in an Australian Mathematics Challenge in 2003. This article considers whether a spreadsheet might be used to model this problem, thus allowing students to explore its structure within the spreadsheet environment. It then goes on to reflect on some general principles of problem decomposition when the final goal is a successful and lucid spreadsheet implementation.
Iterative filtering decomposition based on local spectral evolution kernel
Wang, Yang; Wei, Guo-Wei; Yang, Siyang
2011-01-01
Synthesizing information, achieving understanding, and deriving insight from increasingly massive, time-varying, noisy and possibly conflicting data sets are some of the most challenging tasks of the present information age. Traditional technologies, such as the Fourier transform and wavelet multi-resolution analysis, are inadequate to handle all of the above-mentioned tasks. The empirical mode decomposition (EMD) has emerged as a new powerful tool for resolving many challenging problems in data processing and analysis. Recently, an iterative filtering decomposition (IFD) has been introduced to address the stability and efficiency problems of the EMD. Another data analysis technique is the local spectral evolution kernel (LSEK), which provides a near-perfect low-pass filter with desirable time-frequency localizations. The present work utilizes the LSEK to further stabilize the IFD, and offers an efficient, flexible and robust scheme for information extraction, complexity reduction, and signal and image understanding. The performance of the present LSEK-based IFD is intensively validated over a wide range of data processing tasks, including mode decomposition, analysis of time-varying data, information extraction from nonlinear dynamic systems, etc. The utility and robustness of the proposed LSEK-based IFD are demonstrated via a large number of applications, such as the analysis of stock market data, the decomposition of ocean wave magnitudes, the understanding of physiologic signals and information recovery from noisy images. The performance of the proposed method is compared with that of existing methods in the literature. Our results indicate that the LSEK-based IFD improves both the efficiency and the stability of conventional EMD algorithms. PMID:22350559
Microsupercomputers: Design and Implementation
1991-03-01
been ported to the DASH hardware. Hardware problems and software problems with DPV itself prevented its use as a debugging tool until recently. Both the...MP3D) [21], an LU-decomposition program (LU), and a digital logic simulation program (PTHOR) [28]. The applications are typical of those
Hybrid Monte Carlo approach to the entanglement entropy of interacting fermions
NASA Astrophysics Data System (ADS)
Drut, Joaquín E.; Porter, William J.
2015-09-01
The Monte Carlo calculation of Rényi entanglement entropies Sn of interacting fermions suffers from a well-known signal-to-noise problem, even for a large number of situations in which the infamous sign problem is absent. A few methods have been proposed to overcome this issue, such as ensemble switching and the use of auxiliary partition-function ratios. Here, we present an approach that builds on the recently proposed free-fermion decomposition method; it incorporates entanglement in the probability measure in a natural way; it takes advantage of the hybrid Monte Carlo algorithm (an essential tool in lattice quantum chromodynamics and other gauge theories with dynamical fermions); and it does not suffer from noise problems. This method displays no sign problem for the same cases as other approaches and is therefore useful for a wide variety of systems. As a proof of principle, we calculate S2 for the one-dimensional, half-filled Hubbard model and compare with results from exact diagonalization and the free-fermion decomposition method.
Performance of the Wavelet Decomposition on Massively Parallel Architectures
NASA Technical Reports Server (NTRS)
El-Ghazawi, Tarek A.; LeMoigne, Jacqueline; Zukor, Dorothy (Technical Monitor)
2001-01-01
Traditionally, Fourier transforms have been utilized for performing signal analysis and representation. But although it is straightforward to reconstruct a signal from its Fourier transform, no local description of the signal is included in its Fourier representation. To alleviate this problem, windowed Fourier transforms and then wavelet transforms have been introduced, and it has been proven that wavelets give a better localization than traditional Fourier transforms, as well as a better division of the time- or space-frequency plane than windowed Fourier transforms. Because of these properties, and after the development of several fast algorithms for computing the wavelet representation of any signal, in particular the Multi-Resolution Analysis (MRA) developed by Mallat, wavelet transforms have increasingly been applied to signal analysis problems, especially real-life problems in which speed is critical. In this paper we present and compare efficient wavelet decomposition algorithms on different parallel architectures. We report and analyze experimental measurements, using NASA remotely sensed images. Results show that our algorithms achieve significant performance gains on current high performance parallel systems, and meet the requirements of scientific and multimedia applications. The extensive performance measurements collected over a number of high-performance computer systems have revealed important architectural characteristics of these systems in relation to the processing demands of the wavelet decomposition of digital images.
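For readers who want to reproduce the basic Mallat multi-resolution decomposition (not the authors' parallel algorithms), a minimal serial sketch assuming the third-party PyWavelets package:

```python
import numpy as np
import pywt  # PyWavelets

# A stand-in for a remotely sensed image tile
image = np.random.rand(512, 512)

# Three-level 2D multi-resolution analysis with a Daubechies wavelet
coeffs = pywt.wavedec2(image, wavelet='db4', level=3)
approx = coeffs[0]            # coarse approximation at level 3
details = coeffs[1:]          # (horizontal, vertical, diagonal) details per level

# Perfect-reconstruction check
reconstructed = pywt.waverec2(coeffs, wavelet='db4')
print(np.allclose(image, reconstructed[:512, :512]))
```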
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tavakoli, Rouhollah, E-mail: rtavakoli@sharif.ir
An unconditionally energy stable time stepping scheme is introduced to solve Cahn–Morral-like equations in the present study. It is constructed based on the combination of David Eyre's time stepping scheme and the Schur complement approach. Although the presented method is general and independent of the choice of the homogeneous free energy density function term, logarithmic and polynomial energy functions are specifically considered in this paper. The method is applied to study spinodal decomposition in multi-component systems and optimal space tiling problems. A penalization strategy is developed, in the case of the latter problem, to avoid trivial solutions. Extensive numerical experiments demonstrate the success and performance of the presented method. According to the numerical results, the method is convergent and energy stable, independent of the choice of time stepsize. Its MATLAB implementation is included in the appendix for the numerical evaluation of the algorithm and reproduction of the presented results. -- Highlights: •Extension of Eyre's convex–concave splitting scheme to multiphase systems. •Efficient solution of spinodal decomposition in multi-component systems. •Efficient solution of the least perimeter periodic space partitioning problem. •Developing a penalization strategy to avoid trivial solutions. •Presentation of the MATLAB implementation of the introduced algorithm.
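Schematically, the convex-concave (Eyre-type) splitting behind such schemes can be written for a single-component Cahn-Hilliard equation with a polynomial free energy; the multi-component Cahn-Morral case treated in the paper adds the Schur-complement handling of the coupled system, and the concrete splitting below is an illustrative assumption rather than the paper's exact discretization:

```latex
% Split the homogeneous free energy f(c) = f_c(c) - f_e(c) into convex and
% concave parts; treat the convex part implicitly and the concave part explicitly:
\frac{c^{n+1} - c^{n}}{\tau}
  = \nabla\cdot\Big( M \,\nabla\big( f_c'(c^{n+1}) - f_e'(c^{n})
      - \epsilon^2 \Delta c^{n+1} \big) \Big),
\qquad
f(c) = \tfrac{1}{4}\,(1 - c^2)^2,\quad
f_c(c) = \tfrac{1}{4}\,(c^4 + 1),\quad
f_e(c) = \tfrac{1}{2}\, c^2 .
```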
Distributed-Memory Breadth-First Search on Massive Graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buluc, Aydin; Beamer, Scott; Madduri, Kamesh
This chapter studies the problem of traversing large graphs using the breadth-first search order on distributed-memory supercomputers. We consider both the traditional level-synchronous top-down algorithm as well as the recently discovered direction optimizing algorithm. We analyze the performance and scalability trade-offs in using different local data structures such as CSR and DCSC, enabling in-node multithreading, and graph decompositions such as 1D and 2D decomposition.
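A minimal single-node sketch of the level-synchronous top-down traversal mentioned above (the distributed 1D/2D decompositions, CSR/DCSC data structures, and the direction-optimizing switch are not shown; the adjacency dictionary is an illustrative stand-in):

```python
from collections import deque

def bfs_levels(adj, source):
    """Level-synchronous breadth-first search.
    `adj` maps a vertex to an iterable of neighbors (a stand-in for CSR)."""
    level = {source: 0}
    frontier = deque([source])
    while frontier:
        next_frontier = deque()
        for u in frontier:               # expand the current frontier
            for v in adj[u]:
                if v not in level:       # first visit defines the BFS level
                    level[v] = level[u] + 1
                    next_frontier.append(v)
        frontier = next_frontier
    return level

# Tiny example graph
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(bfs_levels(adj, 0))  # {0: 0, 1: 1, 2: 1, 3: 2}
```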
NASA Astrophysics Data System (ADS)
Ford, Neville J.; Connolly, Joseph A.
2009-07-01
We give a comparison of the efficiency of three alternative decomposition schemes for the approximate solution of multi-term fractional differential equations using the Caputo form of the fractional derivative. The schemes we compare are based on conversion of the original problem into a system of equations. We review alternative approaches and consider how the most appropriate numerical scheme may be chosen to solve a particular equation.
Relaxations to Sparse Optimization Problems and Applications
NASA Astrophysics Data System (ADS)
Skau, Erik West
Parsimony is a fundamental property that is applied to many characteristics in a variety of fields. Of particular interest are optimization problems that apply rank, dimensionality, or support in a parsimonious manner. In this thesis we study some optimization problems and their relaxations, and focus on properties and qualities of the solutions of these problems. The Gramian tensor decomposition problem attempts to decompose a symmetric tensor as a sum of rank one tensors. We approach the Gramian tensor decomposition problem with a relaxation to a semidefinite program. We study conditions which ensure that the solution of the relaxed semidefinite problem gives the minimal Gramian rank decomposition. Sparse representations with learned dictionaries are one of the leading image modeling techniques for image restoration. When learning these dictionaries from a set of training images, the sparsity parameter of the dictionary learning algorithm strongly influences the content of the dictionary atoms. We describe geometrically the content of trained dictionaries and how it changes with the sparsity parameter. We use statistical analysis to characterize how the different content is used in sparse representations. Finally, a method to control the structure of the dictionaries is demonstrated, allowing us to learn a dictionary which can later be tailored for specific applications. Variations of dictionary learning can be broadly applied to a variety of applications. We explore a pansharpening problem with a triple factorization variant of coupled dictionary learning. Another application of dictionary learning is computer vision. Computer vision relies heavily on object detection, which we explore with a hierarchical convolutional dictionary learning model. Data fusion of disparate modalities is a growing topic of interest. We do a case study to demonstrate the benefit of using social media data with satellite imagery to estimate hazard extents. In this case study analysis we apply a maximum entropy model, guided by the social media data, to estimate the flooded regions during a 2013 flood in Boulder, CO and show that the results are comparable to those obtained using expert information.
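To make the role of the sparsity parameter concrete, a minimal dictionary-learning sketch using scikit-learn; the data, dictionary size, and alpha value are illustrative assumptions and not the thesis' experiments:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

# Stand-in training data: 1000 vectorized 8x8 image patches
rng = np.random.default_rng(0)
patches = rng.standard_normal((1000, 64))

# The sparsity parameter `alpha` controls how sparse the codes are and,
# indirectly, what kind of atoms the dictionary learns.
dico = MiniBatchDictionaryLearning(n_components=100, alpha=1.0, random_state=0)
codes = dico.fit_transform(patches)      # sparse representations of the patches
atoms = dico.components_                 # learned dictionary atoms (100 x 64)
print(atoms.shape, np.mean(codes != 0))  # dictionary size and code density
```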
Curtis, Tyler E; Roeder, Ryan K
2017-10-01
Advances in photon-counting detectors have enabled quantitative material decomposition using multi-energy or spectral computed tomography (CT). Supervised methods for material decomposition utilize an estimated attenuation for each material of interest at each photon energy level, which must be calibrated based upon calculated or measured values for known compositions. Measurements using a calibration phantom can advantageously account for system-specific noise, but the effect of calibration methods on the material basis matrix and subsequent quantitative material decomposition has not been experimentally investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on the accuracy of quantitative material decomposition in the image domain. Gadolinium was chosen as a model contrast agent in imaging phantoms, which also contained bone tissue and water as negative controls. The maximum gadolinium concentration (30, 60, and 90 mM) and total number of concentrations (2, 4, and 7) were independently varied to systematically investigate effects of the material basis matrix and scaling factor calibration on the quantitative (root mean squared error, RMSE) and spatial (sensitivity and specificity) accuracy of material decomposition. Images of calibration and sample phantoms were acquired using a commercially available photon-counting spectral micro-CT system with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material decomposition of gadolinium, calcium, and water was performed for each calibration method using a maximum a posteriori estimator. Both the quantitative and spatial accuracy of material decomposition were most improved by using an increased maximum gadolinium concentration (range) in the basis matrix calibration; the effects of using a greater number of concentrations were relatively small in magnitude by comparison. The material basis matrix calibration was more sensitive to changes in the calibration methods than the scaling factor calibration. The material basis matrix calibration significantly influenced both the quantitative and spatial accuracy of material decomposition, while the scaling factor calibration influenced quantitative but not spatial accuracy. Importantly, the median RMSE of material decomposition was as low as ~1.5 mM (~0.24 mg/mL gadolinium), which was similar in magnitude to that measured by optical spectroscopy on the same samples. The accuracy of quantitative material decomposition in photon-counting spectral CT was significantly influenced by calibration methods which must therefore be carefully considered for the intended diagnostic imaging application. © 2017 American Association of Physicists in Medicine.
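A simplified sketch of image-domain material decomposition with a calibrated basis matrix; the study uses a maximum a posteriori estimator and phantom-derived calibration values, whereas the plain least-squares solve and the numbers below are illustrative assumptions:

```python
import numpy as np

# Calibrated material basis matrix: rows = energy bins, columns = basis
# materials (gadolinium, calcium, water). Values are purely illustrative.
M = np.array([[4.1, 1.9, 1.0],
              [3.2, 1.6, 0.9],
              [2.5, 1.3, 0.8],
              [5.8, 1.1, 0.7],   # bin straddling the Gd k-edge
              [4.9, 0.9, 0.6]])

# Measured attenuation of one voxel in the five energy bins
mu = np.array([1.9, 1.5, 1.2, 2.1, 1.7])

# Decompose into material fractions (non-negativity not enforced here)
fractions, *_ = np.linalg.lstsq(M, mu, rcond=None)
print(dict(zip(["gadolinium", "calcium", "water"], fractions)))
```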
NASA Astrophysics Data System (ADS)
Roehl, Jan Hendrik; Oberrath, Jens
2016-09-01
"Active plasma resonance spectroscopy" (APRS) is a widely used diagnostic method to measure plasma parameters such as the electron density. Measurements with APRS probes in plasmas of a few Pa typically show a broadening of the spectrum due to kinetic effects. To analyze the broadening, a general kinetic model in electrostatic approximation based on functional analytic methods has been presented [1]. One of the main results is that the system response function Y(ω) is given in terms of the matrix elements of the resolvent of the dynamic operator evaluated for values on the imaginary axis. To determine the response function of a specific probe, the resolvent has to be approximated by a huge matrix which has a banded block structure. Due to this structure a block-based LU decomposition can be implemented. It leads to a solution for Y(ω) which is given only by products of matrices of the inner block size. This LU decomposition allows one to analyze the influence of kinetic effects on the broadening and saves memory and calculation time. Gratitude is expressed to the internal funding of Leuphana University.
Bernaldo de Quirós, Yara; Seewald, Jeffrey S.; Sylva, Sean P.; Greer, Bill; Niemeyer, Misty; Bogomolni, Andrea L.; Moore, Michael J.
2013-01-01
Gas bubbles in marine mammals entangled and drowned in gillnets have been previously described by computed tomography, gross examination and histopathology. The absence of bacteria or autolytic changes in the tissues of those animals suggested that the gas was produced peri- or post-mortem by a fast decompression, probably by quickly hauling animals entangled in the net at depth to the surface. Gas composition analysis and gas scoring are two new diagnostic tools available to distinguish gas embolisms from putrefaction gases. With this goal, these methods have been successfully applied to pathological studies of marine mammals. In this study, we characterized the flux and composition of the gas bubbles from bycaught marine mammals in anchored sink gillnets and bottom otter trawls. We compared these data with those from marine mammals stranded on Cape Cod, MA, USA. Fresh animals or animals with moderate decomposition (decomposition scores of 2 and 3) were prioritized. Results showed that bycaught animals presented with significantly higher gas scores than stranded animals. Gas composition analyses indicate that gas was formed by decompression, confirming the decompression hypothesis. PMID:24367623
Price schedules coordination for electricity pool markets
NASA Astrophysics Data System (ADS)
Legbedji, Alexis Motto
2002-04-01
We consider the optimal coordination of a class of mathematical programs with equilibrium constraints, which is formally interpreted as a resource-allocation problem. Many decomposition techniques have been proposed to circumvent the difficulty of solving large systems with limited computer resources. The considerable improvement in computer architecture has allowed the solution of large-scale problems with increasing speed. Consequently, interest in decomposition techniques has waned. Nonetheless, there is an important class of applications for which decomposition techniques will still be relevant, among others, distributed systems (the Internet, perhaps, being the most conspicuous example) and competitive economic systems. Conceptually, a competitive economic system is a collection of agents that have similar or different objectives while sharing the same system resources. In theory, constructing a large-scale mathematical program and solving it centrally, using currently available computing power, can optimize such systems of agents. In practice, however, because agents are self-interested and not willing to reveal some sensitive corporate data, one cannot solve these kinds of coordination problems by simply maximizing the sum of the agents' objective functions with respect to their constraints. An iterative price decomposition or Lagrangian dual method is considered best suited because it can operate with limited information. A price-directed strategy, however, can only work successfully when coordinating or equilibrium prices exist, which is not generally the case when a weak duality is unavoidable. Showing when such prices exist and how to compute them is the main subject of this thesis. Among our results, we show that, if the Lagrangian function of a primal program is additively separable, price schedules coordination may be attained. The prices are Lagrange multipliers, and are also the decision variables of a dual program. In addition, we propose a new form of augmented or nonlinear pricing, which is an example of the use of penalty functions in mathematical programming. Applications are drawn from mathematical programming problems of the form arising in electric power system scheduling under competition.
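The price-decomposition (Lagrangian dual) idea discussed above can be summarized schematically; the notation is generic rather than the thesis' own:

```latex
% Agents i maximize private objectives subject to shared resource constraints
\max_{x_1,\dots,x_N} \; \sum_{i=1}^{N} f_i(x_i)
\quad \text{s.t.} \quad \sum_{i=1}^{N} g_i(x_i) \le b, \qquad x_i \in X_i .

% The Lagrangian is additively separable, so each agent solves its own
% subproblem for a given price vector \lambda (the coordinating prices):
L(x,\lambda) = \sum_{i=1}^{N} \big[ f_i(x_i) - \lambda^{\top} g_i(x_i) \big]
               + \lambda^{\top} b ,
\qquad
q(\lambda) = \lambda^{\top} b + \sum_{i=1}^{N}
             \max_{x_i \in X_i} \big[ f_i(x_i) - \lambda^{\top} g_i(x_i) \big],
\qquad
\min_{\lambda \ge 0} \; q(\lambda).
```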
Parto Dezfouli, Mohammad Ali; Dezfouli, Mohsen Parto; Rad, Hamidreza Saligheh
2014-01-01
Proton magnetic resonance spectroscopy ((1)H-MRS) is a non-invasive diagnostic tool for measuring biochemical changes in the human body. Acquired (1)H-MRS signals may be corrupted by a wideband baseline signal generated by macromolecules. Recently, several methods have been developed for the correction of such baseline signals; however, most of them are not able to estimate the baseline in complex overlapped signals. In this study, a novel automatic baseline correction method is proposed for (1)H-MRS spectra based on ensemble empirical mode decomposition (EEMD). This investigation was applied to both simulated data and in-vivo (1)H-MRS signals of the human brain. The results justify the efficiency of the proposed method in removing the baseline from (1)H-MRS signals.
NASA Astrophysics Data System (ADS)
Chen, Dongyue; Lin, Jianhui; Li, Yanping
2018-06-01
Complementary ensemble empirical mode decomposition (CEEMD) has been developed to address the mode-mixing problem of the empirical mode decomposition (EMD) method. Compared to ensemble empirical mode decomposition (EEMD), the CEEMD method reduces residue noise in the signal reconstruction. However, both CEEMD and EEMD need a sufficiently large ensemble number to reduce the residue noise, which incurs a high computational cost. Moreover, the selection of intrinsic mode functions (IMFs) for further analysis usually depends on experience. A modified CEEMD method and an IMF evaluation index are proposed with the aim of reducing the computational cost and selecting IMFs automatically. A simulated signal and in-service high-speed train gearbox vibration signals are employed to validate the proposed method in this paper. The results demonstrate that the modified CEEMD can decompose the signal efficiently with less computational cost, and that the IMF evaluation index can select the meaningful IMFs automatically.
Randomized interpolative decomposition of separated representations
NASA Astrophysics Data System (ADS)
Biagioni, David J.; Beylkin, Daniel; Beylkin, Gregory
2015-01-01
We introduce an algorithm to compute tensor interpolative decomposition (dubbed CTD-ID) for the reduction of the separation rank of Canonical Tensor Decompositions (CTDs). Tensor ID selects, for a user-defined accuracy ɛ, a near optimal subset of terms of a CTD to represent the remaining terms via a linear combination of the selected terms. CTD-ID can be used as an alternative to or in combination with the Alternating Least Squares (ALS) algorithm. We present examples of its use within a convergent iteration to compute inverse operators in high dimensions. We also briefly discuss the spectral norm as a computational alternative to the Frobenius norm in estimating approximation errors of tensor ID. We reduce the problem of finding tensor IDs to that of constructing interpolative decompositions of certain matrices. These matrices are generated via randomized projection of the terms of the given tensor. We provide cost estimates and several examples of the new approach to the reduction of separation rank.
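At the matrix level, the building block is a randomized column interpolative decomposition; the sketch below (random projection followed by pivoted QR and a least-squares fit) illustrates that generic technique, not the tensor-specific bookkeeping of CTD-ID:

```python
import numpy as np
from scipy.linalg import qr

def randomized_column_id(A, k, oversample=8, rng=None):
    """Pick k columns of A (the 'skeleton') and a coefficient matrix P such
    that A ~= A[:, idx] @ P, using a random projection to cheapen the pivoting."""
    rng = np.random.default_rng(rng)
    G = rng.standard_normal((k + oversample, A.shape[0]))
    Y = G @ A                                  # sketch of the row space
    _, _, piv = qr(Y, mode='economic', pivoting=True)
    idx = piv[:k]                              # selected column indices
    C = A[:, idx]
    P, *_ = np.linalg.lstsq(C, A, rcond=None)  # coefficients for all columns
    return idx, P

# Rank-50 test matrix
A = np.random.default_rng(1).standard_normal((200, 50)) @ \
    np.random.default_rng(2).standard_normal((50, 120))
idx, P = randomized_column_id(A, k=50)
print(np.linalg.norm(A - A[:, idx] @ P) / np.linalg.norm(A))
```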
The Enhancement of Gas Pressure Diagnostics in the P-ODTX System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hsu, Peter C.; Jones, Aaron; Tesillo, Lynda
The One Dimensional Time to Explosion (ODTX) system at the Lawrence Livermore National Laboratory is a useful tool for the thermal safety assessment of energetic materials. It has been used since the 1970s to measure times to explosion, threshold thermal explosion temperatures, and thermal explosion violence, and to determine decomposition kinetic parameters of energetic materials. ODTX data obtained over the last 40 years can be found elsewhere.
NASA Astrophysics Data System (ADS)
Jerez-Hanckes, Carlos; Pérez-Arancibia, Carlos; Turc, Catalin
2017-12-01
We present Nyström discretizations of multitrace/singletrace formulations and non-overlapping Domain Decomposition Methods (DDM) for the solution of Helmholtz transmission problems for bounded composite scatterers with piecewise constant material properties. We investigate the performance of DDM with both classical Robin and optimized transmission boundary conditions. The optimized transmission boundary conditions incorporate square root Fourier multiplier approximations of Dirichlet to Neumann operators. While the multitrace/singletrace formulations as well as the DDM that use classical Robin transmission conditions are not particularly well suited for Krylov subspace iterative solutions of high-contrast high-frequency Helmholtz transmission problems, we provide ample numerical evidence that DDM with optimized transmission conditions constitute efficient computational alternatives for these type of applications. In the case of large numbers of subdomains with different material properties, we show that the associated DDM linear system can be efficiently solved via hierarchical Schur complements elimination.
NASA Astrophysics Data System (ADS)
Kirilovskiy, S. V.; Poplavskaya, T. V.; Tsyryulnikov, I. S.
2016-10-01
This work is aimed at obtaining the conversion factors of free-stream disturbances as functions of the shock wave angle φ, the angle of acoustic disturbance distribution θ, and the Mach number M∞, by solving the problem of interaction of long-wave (with wavelength λ greater than the model length) free-stream disturbances with the shock wave formed in a supersonic flow around a wedge. Conversion factors at x/λ = 0.2, defined as the ratio between the amplitude of pressure pulsations on the wedge surface and the free-stream disturbance amplitude, were obtained. The conversion factors were described by their dependence on the angle θ of disturbance distribution, the shock wave angle φ, and the Mach number M∞. These dependences are necessary for solving the problem of mode decomposition of disturbances in supersonic flows in wind tunnels.
Kostic, Z G; Stefanovic, P L; Pavlović, P B
2000-07-10
Thermal plasmas may solve one of the biggest toxic waste disposal problems. The disposal of polychlorinated biphenyls (PCBs) is a long standing problem which will get worse in the coming years, when 180000 tons of PCB-containing wastes are expected to accumulate in Europe (Hot ions break down toxic chemicals, New Scientist, 16 April 1987, p. 24.). The combustion of PCBs in ordinary incinerators (at temperature T approximately 1100 K, as measured near the inner wall of the combustion chamber (European Parliament and Council Directive on Incineration of Waste (COM/99/330), Europe energy, 543, Sept. 17, 1999, 1-23.)) can cause more problems than it solves, because highly toxic dioxins and dibenzofurans are formed if the combustion temperature is too low (T<1400 K). The paper presents a thermodynamic consideration and comparative analysis of PCB decomposition processes in air or argon (+oxygen) thermal plasmas.
Structural design using equilibrium programming formulations
NASA Technical Reports Server (NTRS)
Scotti, Stephen J.
1995-01-01
Solutions to increasingly larger structural optimization problems are desired. However, computational resources are strained to meet this need. New methods will be required to solve increasingly larger problems. The present approaches to solving large-scale problems involve approximations for the constraints of structural optimization problems and/or decomposition of the problem into multiple subproblems that can be solved in parallel. An area of game theory, equilibrium programming (also known as noncooperative game theory), can be used to unify these existing approaches from a theoretical point of view (considering the existence and optimality of solutions), and be used as a framework for the development of new methods for solving large-scale optimization problems. Equilibrium programming theory is described, and existing design techniques such as fully stressed design and constraint approximations are shown to fit within its framework. Two new structural design formulations are also derived. The first new formulation is another approximation technique which is a general updating scheme for the sensitivity derivatives of design constraints. The second new formulation uses a substructure-based decomposition of the structure for analysis and sensitivity calculations. Significant computational benefits of the new formulations compared with a conventional method are demonstrated.
NASA Astrophysics Data System (ADS)
Heinkenschloss, Matthias
2005-01-01
We study a class of time-domain decomposition-based methods for the numerical solution of large-scale linear quadratic optimal control problems. Our methods are based on a multiple shooting reformulation of the linear quadratic optimal control problem as a discrete-time optimal control (DTOC) problem. The optimality conditions for this DTOC problem lead to a linear block tridiagonal system. The diagonal blocks are invertible and are related to the original linear quadratic optimal control problem restricted to smaller time-subintervals. This motivates the application of block Gauss-Seidel (GS)-type methods for the solution of the block tridiagonal systems. Numerical experiments show that the spectral radii of the block GS iteration matrices are larger than one for typical applications, but that the eigenvalues of the iteration matrices decay to zero fast. Hence, while the GS method is not expected to converge for typical applications, it can be effective as a preconditioner for Krylov-subspace methods. This is confirmed by our numerical tests. A byproduct of this research is the insight that certain instantaneous control techniques can be viewed as the application of one step of the forward block GS method applied to the DTOC optimality system.
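A minimal dense sketch of forward block Gauss-Seidel on a block tridiagonal system of the kind described above; block sizes and data are chosen purely for illustration:

```python
import numpy as np

def block_gauss_seidel(D, L, U, b, x0, sweeps=50):
    """Forward block Gauss-Seidel for a block tridiagonal system.
    D[i]: diagonal blocks, L[i]: sub-diagonal blocks (coupling to block i-1),
    U[i]: super-diagonal blocks (coupling to block i+1), b[i]: RHS blocks."""
    N = len(D)
    x = [xi.copy() for xi in x0]
    for _ in range(sweeps):
        for i in range(N):
            r = b[i].copy()
            if i > 0:
                r -= L[i] @ x[i - 1]      # already-updated neighbor
            if i < N - 1:
                r -= U[i] @ x[i + 1]      # old value of the next block
            x[i] = np.linalg.solve(D[i], r)
    return x

# Small diagonally dominant test problem: 4 blocks of size 3
rng = np.random.default_rng(0)
n, N = 3, 4
D = [rng.standard_normal((n, n)) + 5 * np.eye(n) for _ in range(N)]
L = [None] + [0.1 * rng.standard_normal((n, n)) for _ in range(N - 1)]
U = [0.1 * rng.standard_normal((n, n)) for _ in range(N - 1)] + [None]
xs = [rng.standard_normal(n) for _ in range(N)]
b = [D[i] @ xs[i]
     + (L[i] @ xs[i - 1] if i > 0 else 0)
     + (U[i] @ xs[i + 1] if i < N - 1 else 0) for i in range(N)]
sol = block_gauss_seidel(D, L, U, b, [np.zeros(n) for _ in range(N)])
print(max(np.linalg.norm(sol[i] - xs[i]) for i in range(N)))
```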
Methanol Oxidation on Pt3Sn(111) for Direct Methanol Fuel Cells: Methanol Decomposition.
Lu, Xiaoqing; Deng, Zhigang; Guo, Chen; Wang, Weili; Wei, Shuxian; Ng, Siu-Pang; Chen, Xiangfeng; Ding, Ning; Guo, Wenyue; Wu, Chi-Man Lawrence
2016-05-18
PtSn alloy, which is a potential material for use in direct methanol fuel cells, can efficiently promote methanol oxidation and alleviate the CO poisoning problem. Herein, methanol decomposition on Pt3Sn(111) was systematically investigated using periodic density functional theory and microkinetic modeling. The geometries and energies of all of the involved species were analyzed, and the decomposition network was mapped out to elaborate the reaction mechanisms. Our results indicated that methanol and formaldehyde were weakly adsorbed, and the other derivatives (CHxOHy, x = 1-3, y = 0-1) were strongly adsorbed and preferred decomposition rather than desorption on Pt3Sn(111). The competitive methanol decomposition started with the initial O-H bond scission followed by successive C-H bond scissions, (i.e., CH3OH → CH3O → CH2O → CHO → CO). The Brønsted-Evans-Polanyi relations and energy barrier decomposition analyses identified the C-H and O-H bond scissions as being more competitive than the C-O bond scission. Microkinetic modeling confirmed that the vast majority of the intermediates and products from methanol decomposition would escape from the Pt3Sn(111) surface at a relatively low temperature, and the coverage of the CO residue decreased with an increase in the temperature and decrease in partial methanol pressure.
Optimal Multi-scale Demand-side Management for Continuous Power-Intensive Processes
NASA Astrophysics Data System (ADS)
Mitra, Sumit
With the advent of deregulation in electricity markets and an increasing share of intermittent power generation sources, the profitability of industrial consumers that operate power-intensive processes has become directly linked to the variability in energy prices. Thus, for industrial consumers that are able to adjust to the fluctuations, time-sensitive electricity prices (as part of so-called Demand-Side Management (DSM) in the smart grid) offer potential economic incentives. In this thesis, we introduce optimization models and decomposition strategies for the multi-scale Demand-Side Management of continuous power-intensive processes. On an operational level, we derive a mode formulation for scheduling under time-sensitive electricity prices. The formulation is applied to air separation plants and cement plants to minimize the operating cost. We also describe how a mode formulation can be used for industrial combined heat and power plants that are co-located at integrated chemical sites to increase operating profit by adjusting their steam and electricity production according to their inherent flexibility. Furthermore, a robust optimization formulation is developed to address the uncertainty in electricity prices by accounting for correlations and multiple ranges in the realization of the random variables. On a strategic level, we introduce a multi-scale model that provides an understanding of the value of flexibility of the current plant configuration and the value of additional flexibility in terms of retrofits for Demand-Side Management under product demand uncertainty. The integration of multiple time scales leads to large-scale two-stage stochastic programming problems, for which we need to apply decomposition strategies in order to obtain a good solution within a reasonable amount of time. Hence, we describe two decomposition schemes that can be applied to solve two-stage stochastic programming problems: first, a hybrid bi-level decomposition scheme with novel Lagrangean-type and subset-type cuts to strengthen the relaxation; and second, an enhanced cross-decomposition scheme that integrates Benders decomposition and Lagrangean decomposition on a scenario basis. To demonstrate the effectiveness of our developed methodology, we provide several industrial case studies throughout the thesis.
Polar decomposition for attitude determination from vector observations
NASA Technical Reports Server (NTRS)
Bar-Itzhack, Itzhack Y.
1993-01-01
This work treats the problem of weighted least squares fitting of a 3D Euclidean-coordinate transformation matrix to a set of unit vectors measured in the reference and transformed coordinates. A closed-form analytic solution to the problem is re-derived. The fact that the solution is the closest orthogonal matrix to some matrix defined on the measured vectors and their weights is clearly demonstrated. Several known algorithms for computing the analytic closed form solution are considered. An algorithm is discussed which is based on the polar decomposition of matrices into the closest unitary matrix to the decomposed matrix and a Hermitian matrix. A somewhat longer improved algorithm is suggested too. A comparison of several algorithms is carried out using simulated data as well as real data from the Upper Atmosphere Research Satellite. The comparison is based on accuracy and time consumption. It is concluded that the algorithms based on polar decomposition yield a simple although somewhat less accurate solution. The precision of the latter algorithms increases with the number of the measured vectors and with the accuracy of their measurement.
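A compact sketch of the closest-orthogonal-matrix (polar factor) solution via the SVD, with a correction to a proper rotation; the weights and synthetic vectors below are illustrative, not the UARS flight data used in the paper:

```python
import numpy as np

def closest_rotation(ref_vecs, body_vecs, weights):
    """Weighted least-squares attitude: find the rotation R minimizing
    sum_i w_i ||body_i - R @ ref_i||^2 via the orthogonal polar factor."""
    B = sum(w * np.outer(b, r) for w, b, r in zip(weights, body_vecs, ref_vecs))
    U, _, Vt = np.linalg.svd(B)
    # Orthogonal polar factor of B, corrected to a proper rotation (det = +1)
    d = np.sign(np.linalg.det(U @ Vt))
    return U @ np.diag([1.0, 1.0, d]) @ Vt

# Synthetic check: recover a known rotation from noisy unit-vector pairs
rng = np.random.default_rng(0)
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
ref = [v / np.linalg.norm(v) for v in rng.standard_normal((6, 3))]
body = [R_true @ v + 1e-3 * rng.standard_normal(3) for v in ref]
R_est = closest_rotation(ref, body, np.ones(6))
print(np.linalg.norm(R_est - R_true))
```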
Characteristics of a Cognitive Tool That Helps Students Learn Diagnostic Problem Solving
ERIC Educational Resources Information Center
Danielson, Jared A.; Mills, Eric M.; Vermeer, Pamela J.; Preast, Vanessa A.; Young, Karen M.; Christopher, Mary M.; George, Jeanne W.; Wood, R. Darren; Bender, Holly S.
2007-01-01
Three related studies replicated and extended previous work (J.A. Danielson et al. (2003), "Educational Technology Research and Development," 51(3), 63-81) involving the Diagnostic Pathfinder (dP) (previously Problem List Generator [PLG]), a cognitive tool for learning diagnostic problem solving. In studies 1 and 2, groups of 126 and 113…
The use of the modified Cholesky decomposition in divergence and classification calculations
NASA Technical Reports Server (NTRS)
Vanroony, D. L.; Lynn, M. S.; Snyder, C. H.
1973-01-01
The use of the Cholesky decomposition technique is analyzed as applied to the feature selection and classification algorithms used in the analysis of remote sensing data (e.g. as in LARSYS). This technique is approximately 30% faster in classification and a factor of 2-3 faster in divergence, as compared with LARSYS. Also, numerical stability and accuracy are slightly improved. Other methods necessary to deal with numerical stability problems are briefly discussed.
The use of the modified Cholesky decomposition in divergence and classification calculations
NASA Technical Reports Server (NTRS)
Van Rooy, D. L.; Lynn, M. S.; Snyder, C. H.
1973-01-01
This report analyzes the use of the modified Cholesky decomposition technique as applied to the feature selection and classification algorithms used in the analysis of remote sensing data (e.g., as in LARSYS). This technique is approximately 30% faster in classification and a factor of 2-3 faster in divergence, as compared with LARSYS. Also numerical stability and accuracy are slightly improved. Other methods necessary to deal with numerical stability problems are briefly discussed.
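A brief sketch of how a Cholesky factorization replaces explicit covariance inversion in a Gaussian maximum-likelihood classifier of the LARSYS type; the class statistics below are synthetic assumptions:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def gaussian_discriminant(x, mean, cov):
    """Log-likelihood of feature vector x under N(mean, cov), using the
    Cholesky factor instead of an explicit matrix inverse."""
    c, low = cho_factor(cov)
    diff = x - mean
    maha = diff @ cho_solve((c, low), diff)          # Mahalanobis distance
    logdet = 2.0 * np.sum(np.log(np.diag(c)))        # log det from the factor
    return -0.5 * (maha + logdet + len(x) * np.log(2 * np.pi))

# Two synthetic spectral classes
means = [np.array([1.0, 2.0, 0.5]), np.array([0.0, 1.0, 1.5])]
covs = [np.eye(3) * 0.2, np.eye(3) * 0.3 + 0.05]
pixel = np.array([0.9, 1.8, 0.6])
scores = [gaussian_discriminant(pixel, m, c) for m, c in zip(means, covs)]
print("assigned class:", int(np.argmax(scores)))
```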
Yuan, Rui; Lv, Yong; Song, Gangbing
2018-04-16
Rolling bearings are important components in rotary machinery systems. In the field of multi-fault diagnosis of rolling bearings, the vibration signal collected from a single channel tends to miss some fault characteristic information. Using multiple sensors to collect signals at different locations on the machine to obtain a multivariate signal can remedy this problem. The adverse effect of a power imbalance between the various channels is inevitable and unfavorable for multivariate signal processing. As a useful multivariate signal processing method, adaptive-projection intrinsically transformed multivariate empirical mode decomposition (APIT-MEMD) exhibits better performance than MEMD by adopting an adaptive projection strategy in order to alleviate power imbalances. The filter bank properties of APIT-MEMD are also adopted to enable more accurate and stable intrinsic mode functions (IMFs), and to ease mode mixing problems in multi-fault frequency extraction. By aligning the IMF sets into a third order tensor, high order singular value decomposition (HOSVD) can be employed to estimate the fault number. Fault correlation factor (FCF) analysis is used to conduct correlation analysis in order to determine effective IMFs; the characteristic frequencies of multiple faults can then be extracted. Numerical simulations and the application to a multi-fault situation demonstrate that the proposed method is promising in multi-fault diagnosis of multivariate rolling bearing signals.
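To make the tensor step concrete, a small numpy sketch of the higher-order SVD of a third-order tensor of IMFs; the thresholding rule used to count dominant components is an illustrative assumption, not the paper's exact criterion:

```python
import numpy as np

def hosvd(T):
    """Higher-order SVD of a 3rd-order tensor T (channels x IMFs x samples).
    Returns mode factor matrices, the core tensor, and the mode-wise
    singular values of the unfoldings."""
    factors, singular_values = [], []
    for mode in range(3):
        unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, s, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U)
        singular_values.append(s)
    core = np.einsum('ai,bj,ck,abc->ijk', factors[0], factors[1], factors[2], T)
    return factors, core, singular_values

# Synthetic IMF tensor: 4 channels x 6 IMFs x 1024 samples
T = np.random.default_rng(0).standard_normal((4, 6, 1024))
factors, core, svals = hosvd(T)
# Estimate the number of dominant components along the IMF mode
dominant = int(np.sum(svals[1] > 0.1 * svals[1][0]))
print(dominant)
```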
Cai, C; Rodet, T; Legoupil, S; Mohammad-Djafari, A
2013-11-01
Dual-energy computed tomography (DECT) makes it possible to get two fractions of basis materials without segmentation. One is the soft-tissue equivalent water fraction and the other is the hard-matter equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models that account for the beam polychromaticity show great potential for giving accurate fraction images. This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative log. Using Bayesian inference, the decomposition fractions and observation variance are estimated by the joint maximum a posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem. This transforms the joint MAP estimation problem into a minimization problem with a nonquadratic cost function. To solve it, the use of a monotone conjugate gradient algorithm with suboptimal descent steps is proposed. The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also necessary to have accurate spectrum information about the source-detector system. When dealing with experimental data, the spectrum can be predicted by a Monte Carlo simulator. For materials between water and bone, less than 5% separation errors are observed on the estimated decomposition fractions. The proposed approach is a statistical reconstruction approach based on a nonlinear forward model accounting for the full beam polychromaticity and applied directly to the projections without taking the negative log. Compared to the approaches based on linear forward models and the BHA correction approaches, it has advantages in noise robustness and reconstruction accuracy.
Matrix with Prescribed Eigenvectors
ERIC Educational Resources Information Center
Ahmad, Faiz
2011-01-01
It is a routine matter for undergraduates to find eigenvalues and eigenvectors of a given matrix. But the converse problem of finding a matrix with prescribed eigenvalues and eigenvectors is rarely discussed in elementary texts on linear algebra. This problem is related to the "spectral" decomposition of a matrix and has important technical…
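The construction alluded to above is a single line of linear algebra: given a nonsingular matrix V of desired eigenvectors and desired eigenvalues lambda, form A = V diag(lambda) V^(-1). A small sketch with illustrative numbers:

```python
import numpy as np

# Desired eigenvectors (as columns) and eigenvalues
V = np.array([[1.0, 1.0],
              [1.0, -1.0]])
lam = np.array([3.0, -2.0])

# Matrix with exactly these eigenpairs (spectral decomposition in reverse)
A = V @ np.diag(lam) @ np.linalg.inv(V)

# Verify: A @ V[:, i] = lam[i] * V[:, i]
print(A @ V - V * lam)   # ~ zero matrix
```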
NASA Astrophysics Data System (ADS)
Wu, Binlin; Smith, Jason; Zhang, Lin; Gao, Xin; Alfano, Robert R.
2018-02-01
Worldwide breast cancer incidence has increased by more than twenty percent in the past decade. It is also known that in that time, mortality due to the affliction has increased by fourteen percent. Optical-based diagnostic techniques, such as Raman spectroscopy, have been explored in order to increase diagnostic accuracy in a more objective way, along with significantly decreasing diagnostic wait-times. In this study, Raman spectroscopy with 532-nm excitation was used in order to incite resonance effects to enhance Stokes Raman scattering from unique biomolecular vibrational modes. Seventy-two Raman spectra (41 cancerous, 31 normal) were collected from nine breast tissue samples by performing a ten-spectra average using a 500-ms acquisition time at each acquisition location. The raw spectral data were subsequently prepared for analysis with background correction and normalization. The spectral data in the Raman shift range of 750-2000 cm-1 were used for analysis, since the detector has the highest sensitivity in this range. The matrix decomposition technique nonnegative matrix factorization (NMF) was then performed on this processed data. The resulting leave-one-out cross-validation using two selective feature components resulted in sensitivity, specificity and accuracy of 92.6%, 100% and 96.0%, respectively. The performance of NMF was also compared to that using principal component analysis (PCA), and NMF was shown to be superior to PCA in this study. This study shows that coupling the resonance Raman spectroscopy technique with a subsequent NMF decomposition method shows potential for high characterization accuracy in breast cancer detection.
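A minimal scikit-learn sketch of the NMF step described above; the synthetic spectra and parameter choices are illustrative assumptions, not the study's pipeline:

```python
import numpy as np
from sklearn.decomposition import NMF

# Stand-in data matrix: 72 baseline-corrected, normalized Raman spectra
# sampled on 626 wavenumber points (all entries must be non-negative)
rng = np.random.default_rng(0)
X = np.abs(rng.standard_normal((72, 626)))

# Factor X ~ W @ H with two non-negative components
model = NMF(n_components=2, init='nndsvd', max_iter=500, random_state=0)
W = model.fit_transform(X)     # per-spectrum component weights (72 x 2)
H = model.components_          # component spectra (2 x 626)

# The two columns of W are the "selective features" fed to a classifier
print(W.shape, H.shape)
```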
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meng, F.; Banks, J. W.; Henshaw, W. D.
We describe a new partitioned approach for solving conjugate heat transfer (CHT) problems where the governing temperature equations in different material domains are time-stepped in an implicit manner, but where the interface coupling is explicit. The new approach, called the CHAMP scheme (Conjugate Heat transfer Advanced Multi-domain Partitioned), is based on a discretization of the interface coupling conditions using a generalized Robin (mixed) condition. The weights in the Robin condition are determined from the optimization of a condition derived from a local stability analysis of the coupling scheme. The interface treatment combines ideas from optimized-Schwarz methods for domain-decomposition problems together with the interface jump conditions and additional compatibility jump conditions derived from the governing equations. For many problems (i.e. for a wide range of material properties, grid-spacings and time-steps) the CHAMP algorithm is stable and second-order accurate using no sub-time-step iterations (i.e. a single implicit solve of the temperature equation in each domain). In extreme cases (e.g. very fine grids with very large time-steps) it may be necessary to perform one or more sub-iterations. Each sub-iteration generally increases the range of stability substantially and thus one sub-iteration is likely sufficient for the vast majority of practical problems. The CHAMP algorithm is developed first for a model problem and analyzed using normal-mode theory. The theory provides a mechanism for choosing optimal parameters in the mixed interface condition. A comparison is made to the classical Dirichlet-Neumann (DN) method and, where applicable, to the optimized-Schwarz (OS) domain-decomposition method. For problems with different thermal conductivities and diffusivities, the CHAMP algorithm outperforms the DN scheme. For domain-decomposition problems with uniform conductivities and diffusivities, the CHAMP algorithm performs better than the typical OS scheme with one grid-cell overlap. Lastly, the CHAMP scheme is also developed for general curvilinear grids and CHT examples are presented using composite overset grids that confirm the theory and demonstrate the effectiveness of the approach.
Electronics and Algorithms for HOM Based Beam Diagnostics
NASA Astrophysics Data System (ADS)
Frisch, Josef; Baboi, Nicoleta; Eddy, Nathan; Nagaitsev, Sergei; Hensler, Olaf; McCormick, Douglas; May, Justin; Molloy, Stephen; Napoly, Olivier; Paparella, Rita; Petrosyan, Lyudvig; Ross, Marc; Simon, Claire; Smith, Tonee
2006-11-01
The signals from the Higher Order Mode (HOM) ports on superconducting cavities can be used as beam position monitors and for structure alignment surveys. A HOM-based diagnostic system has been installed to instrument both couplers on each of the 40 cryogenic accelerating structures in the DESY TTF2 Linac. The electronics uses a single stage down-conversion from the 1.7 GHz HOM spectral line to a 20 MHz IF which is then digitized. The electronics is based on low cost surface mount components suitable for large scale production. The analysis of the HOM data is based on singular value decomposition. The response of the HOM modes is calibrated using conventional BPMs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, Xue; Niu, Tianye; Zhu, Lei, E-mail: leizhu@gatech.edu
2014-05-15
Purpose: Dual-energy CT (DECT) is being increasingly used for its capability of material decomposition and energy-selective imaging. A generic problem of DECT, however, is that the decomposition process is unstable in the sense that the relative magnitude of decomposed signals is reduced due to signal cancellation while the image noise is accumulating from the two CT images of independent scans. Direct image decomposition, therefore, leads to severe degradation of signal-to-noise ratio on the resultant images. Existing noise suppression techniques are typically implemented in DECT with the procedures of reconstruction and decomposition performed independently, which do not explore the statistical properties of decomposed images during the reconstruction for noise reduction. In this work, the authors propose an iterative approach that combines the reconstruction and the signal decomposition procedures to minimize the DECT image noise without noticeable loss of resolution. Methods: The proposed algorithm is formulated as an optimization problem, which balances the data fidelity and total variation of decomposed images in one framework, and the decomposition step is carried out iteratively together with reconstruction. The noise in the CT images from the proposed algorithm becomes well correlated even though the noise of the raw projections is independent between the two CT scans. Due to this feature, the proposed algorithm avoids noise accumulation during the decomposition process. The authors evaluate the method performance on noise suppression and spatial resolution using phantom studies and compare the algorithm with conventional denoising approaches as well as combined iterative reconstruction methods with different forms of regularization. Results: On the Catphan©600 phantom, the proposed method outperforms the existing denoising methods on preserving spatial resolution at the same level of noise suppression, i.e., a reduction of noise standard deviation by one order of magnitude. This improvement is mainly attributed to the high noise correlation in the CT images reconstructed by the proposed algorithm. Iterative reconstruction using different regularization, including quadratic or q-generalized Gaussian Markov random field regularization, achieves similar noise suppression from high noise correlation. However, the proposed TV regularization obtains a better edge preserving performance. Studies of electron density measurement also show that our method reduces the average estimation error from 9.5% to 7.1%. On the anthropomorphic head phantom, the proposed method suppresses the noise standard deviation of the decomposed images by a factor of ∼14 without blurring the fine structures in the sinus area. Conclusions: The authors propose a practical method for DECT imaging reconstruction, which combines the image reconstruction and material decomposition into one optimization framework. Compared to the existing approaches, our method achieves a superior performance on DECT imaging with respect to decomposition accuracy, noise reduction, and spatial resolution.
NASA Astrophysics Data System (ADS)
Wakamatsu, M.; Kitadono, Y.; Zhang, P.-M.
2018-05-01
One intriguing issue in the nucleon spin decomposition problem is the existence of two types of decompositions, which are respectively characterized by two different orbital angular momenta (OAMs) of quarks. One is the mechanical OAM, while the other is the so-called gauge-invariant canonical (g.i.c.) OAM, the concept of which was introduced by Chen et al. An especially delicate quantity is the g.i.c. OAM, which must be distinguished from the ordinary (gauge-variant) canonical OAM. We find that, owing to its analytically solvable nature, the famous Landau problem offers an ideal tool to understand the difference and the physical meaning of the above three OAMs, i.e. the standard canonical OAM, the g.i.c. OAM, and the mechanical OAM. We analyze these three OAMs in two different formulations of the Landau problem, first in the standard (gauge-fixed) formulation and second in the gauge-invariant (but path-dependent) formulation of DeWitt. Especially interesting is the latter formalism. It is shown that the choice of path has an intimate connection with the choice of gauge, but they are not necessarily equivalent. Then, we answer the question of what the consequence of a particular choice of path in DeWitt's formalism is. This analysis also clarifies the implication of the gauge symmetry hidden in the concept of the g.i.c. OAM. Finally, we show that the finding above offers a clear understanding of the uniqueness or non-uniqueness problem of the nucleon spin decomposition, which arises from the arbitrariness in the definition of the so-called physical component of the gauge field.
Image quality enhancement for skin cancer optical diagnostics
NASA Astrophysics Data System (ADS)
Bliznuks, Dmitrijs; Kuzmina, Ilona; Bolocko, Katrina; Lihachev, Alexey
2017-12-01
The research presents an image quality analysis and enhancement proposals in the biophotonics area. The sources of image problems are reviewed and analyzed. The problems with the most impact in the biophotonics area are analyzed in terms of a specific biophotonic task: skin cancer diagnostics. The results point out that the main problem for skin cancer analysis is skin illumination. Since it is often not possible to prevent illumination problems, the paper proposes an image post-processing algorithm: low-frequency filtering. Practical results show an improvement in diagnostic results after using the proposed filter. Moreover, the filter does not reduce the quality of diagnostic results for images without illumination defects. The current filtering algorithm requires empirical tuning of the filter parameters. Further work is needed to test the algorithm in other biophotonic applications and to propose automatic filter parameter selection.
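A minimal sketch of one generic low-frequency (illumination) correction, assuming a Gaussian estimate of the slowly varying illumination field; the kernel width and normalization are illustrative choices rather than the paper's algorithm:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_illumination(image, sigma=50.0, eps=1e-6):
    """Divide out a low-frequency illumination estimate from a grayscale
    image (values in [0, 1]) and rescale back to the original mean."""
    illumination = gaussian_filter(image, sigma)          # low-pass component
    corrected = image / (illumination + eps)              # flatten the lighting
    return corrected * image.mean() / (corrected.mean() + eps)

# Synthetic example: a flat skin patch under a lighting gradient
_, cols = np.mgrid[0:256, 0:256]
gradient = 0.5 + 0.5 * cols / 255.0
corrected = correct_illumination(0.6 * gradient)
print(corrected.std())   # much smaller spread than the uncorrected image
```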
Training for Retrieval of Knowledge under Stress through Algorithmic Decomposition
1986-10-01
Light Bulb and Dyslexia problems used by Lichtenstein & MacGregor (1985). The problems are presented in Appendix D. All aspects of the problems were...this bulb as defective. What is the probability that this bulb is really defective? The Dyslexia Problem: Dyslexia is a disorder characterized by an...
Diagnostic System for Decomposition Studies of Energetic Materials
2017-10-03
transition states and reaction pathways are sought. The overall objective for these combined experimental studies and quantum mechanics investigations... [Instrument specifications: 50,000:1 peak-to-peak signal-to-noise in 1 min (8.6×10-6 AU noise); UltraScan linear air-bearing interferometer scanner with True-Alignment aperture; true 24-bit dynamic range for all scan velocities; dual-channel data acquisition; internal validation unit, 6 positions, certified.]
Photo diagnosis of early pre cancer (LSIL) in genital tissue
NASA Astrophysics Data System (ADS)
Vaitkuviene, A.; Andersen-Engels, S.; Auksorius, E.; Bendsoe, N.; Gavriushin, V.; Gustafsson, U.; Oyama, J.; Palsson, S.; Soto Thompson, M.; Stenram, U.; Svanberg, K.; Viliunas, V.; De Weert, M. J.
2005-11-01
Persistent infections are recognized as an oncogenic factor, and STDs are common concomitant diseases in early precancerous genital tract lesions. Simple optical detection of early regressive pre-cancer in the cervix is the aim of this study. Hereditary immunosuppression is most likely a risk factor for cervical cancer development. Light-induced fluorescence point monitoring was fitted to live cervical tissue diagnostics in 42 patients. Human papilloma virus DNA in the cervix was tested by means of the Hybrid Capture II method. Ultraviolet (337 nm) laser-excited fluorescence spectra of the live cervical tissue were analyzed by the principal component (PrC) regression method and a spectra decomposition method. The PrC regression method best discriminated the pathology group "CIN I and inflammation" (AUC = 75%), related to fluorescence emission in the short-wavelength region. The spectra decomposition method suggested a few possible fluorophores in the long-wavelength region. Ultraviolet (398 nm) light excitation of the live cervix showed a sharp, selective enhancement of the spectral intensity in the region above 600 nm for high-grade cervical lesions. Conclusion: PrC analysis of UV (337 nm) excited fluorescence spectra provides information related to local immunity and low-grade cervical lesions. The addition of shorter and longer wavelengths is promising for progress of the multi-wavelength LIF point monitoring method in cervical pre-cancer diagnostics and for utility in cancer prevention, especially in developing countries.
Parallelization of PANDA discrete ordinates code using spatial decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humbert, P.
2006-07-01
We present the parallel method, based on spatial domain decomposition, implemented in the 2D and 3D versions of the discrete ordinates code PANDA. The spatial mesh is orthogonal and the spatial domain decomposition is Cartesian. For 3D problems a 3D Cartesian domain topology is created and the parallel method is based on a domain diagonal-plane ordered sweep algorithm. The parallel efficiency of the method is improved by pipelining over directions and octants. The implementation of the algorithm is straightforward using MPI blocking point-to-point communications. The efficiency of the method is illustrated by an application to the 3D-Ext C5G7 benchmark of the OECD/NEA. (authors)
Multidisciplinary Optimization Methods for Aircraft Preliminary Design
NASA Technical Reports Server (NTRS)
Kroo, Ilan; Altus, Steve; Braun, Robert; Gage, Peter; Sobieski, Ian
1994-01-01
This paper describes a research program aimed at improved methods for multidisciplinary design and optimization of large-scale aeronautical systems. The research involves new approaches to system decomposition, interdisciplinary communication, and methods of exploiting coarse-grained parallelism for analysis and optimization. A new architecture, that involves a tight coupling between optimization and analysis, is intended to improve efficiency while simplifying the structure of multidisciplinary, computation-intensive design problems involving many analysis disciplines and perhaps hundreds of design variables. Work in two areas is described here: system decomposition using compatibility constraints to simplify the analysis structure and take advantage of coarse-grained parallelism; and collaborative optimization, a decomposition of the optimization process to permit parallel design and to simplify interdisciplinary communication requirements.
Sparse Solution of Fiber Orientation Distribution Function by Diffusion Decomposition
Yeh, Fang-Cheng; Tseng, Wen-Yih Isaac
2013-01-01
Fiber orientation is the key information in diffusion tractography. Several deconvolution methods have been proposed to obtain fiber orientations by estimating a fiber orientation distribution function (ODF). However, the L2 regularization used in deconvolution often leads to false fibers that compromise the specificity of the results. To address this problem, we propose a method called diffusion decomposition, which obtains a sparse solution of the fiber ODF by decomposing the diffusion ODF obtained from q-ball imaging (QBI), diffusion spectrum imaging (DSI), or generalized q-sampling imaging (GQI). A simulation study, a phantom study, and an in-vivo study were conducted to examine the performance of diffusion decomposition. The simulation study showed that diffusion decomposition was more accurate than both constrained spherical deconvolution and the ball-and-sticks model. The phantom study showed that the angular error of diffusion decomposition was significantly lower than those of constrained spherical deconvolution at 30° crossing and the ball-and-sticks model at 60° crossing. The in-vivo study showed that diffusion decomposition can be applied to QBI, DSI, or GQI, and the resolved fiber orientations were consistent regardless of the diffusion sampling schemes and diffusion reconstruction methods. The performance of diffusion decomposition was further demonstrated by resolving crossing fibers on a 30-direction QBI dataset and a 40-direction DSI dataset. In conclusion, diffusion decomposition can improve angular resolution and resolve crossing fibers in datasets with low SNR and a substantially reduced number of diffusion encoding directions. These advantages may be valuable for human connectome studies and clinical research. PMID:24146772
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matuttis, Hans-Georg; Wang, Xiaoxing
Decomposition methods of the Suzuki-Trotter type of various orders have been derived in different fields. Applying them both to classical ordinary differential equations (ODEs) and to quantum systems allows one to judge their effectiveness and gives new insights for many-body quantum mechanics, where reference data are scarce. Further, based on data for a 6 × 6 system, we conclude that sampling with sign (the minus-sign problem) is probably detrimental to the accuracy of fermionic simulations with determinant algorithms.
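A concrete classical instance of a second-order (Strang/Suzuki-Trotter) decomposition applied to a linear ODE, where the exact matrix-exponential solution is available for comparison; the matrices and step size are illustrative:

```python
import numpy as np
from scipy.linalg import expm

# dy/dt = (A + B) y, split into the A-flow and B-flow
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) * 0.5
B = rng.standard_normal((4, 4)) * 0.5
y0 = rng.standard_normal(4)

def strang_step(y, dt):
    # Second-order symmetric splitting: e^{A dt/2} e^{B dt} e^{A dt/2}
    return expm(A * dt / 2) @ (expm(B * dt) @ (expm(A * dt / 2) @ y))

dt, n_steps = 0.01, 100
y = y0.copy()
for _ in range(n_steps):
    y = strang_step(y, dt)

y_exact = expm((A + B) * dt * n_steps) @ y0
print(np.linalg.norm(y - y_exact))   # O(dt^2) global error
```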
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brezov, D. S.; Mladenova, C. D.; Mladenov, I. M., E-mail: mladenov@bio21.bas.bg
In this paper we obtain the Lie derivatives of the scalar parameters in the generalized Euler decomposition with respect to arbitrary axes under left and right deck transformations. This problem can be directly related to the representation of the angular momentum in quantum mechanics. As a particular example, we calculate the angular momentum and the corresponding quantum hamiltonian in the standard Euler and Bryan representations. Similarly, in the hyperbolic case, the Laplace-Beltrami operator is retrieved for the Iwasawa decomposition. The case of two axes is considered as well.
NASA Astrophysics Data System (ADS)
Daftardar-Gejji, Varsha; Jafari, Hossein
2005-01-01
The Adomian decomposition method has been employed to obtain solutions of a system of fractional differential equations. Convergence of the method has been discussed with some illustrative examples. In particular, for the initial value problem where A = [aij] is a real square matrix, the solution can be expressed in terms of E(α1,...,αn),1, the multivariate Mittag-Leffler function defined for matrix arguments, where Ai is the matrix having its ith row equal to [ai1 ... ain] and all other entries zero. Fractional oscillation and Bagley-Torvik equations are solved as illustrative examples.
Study of Track Irregularity Time Series Calibration and Variation Pattern at Unit Section
Jia, Chaolong; Wei, Lili; Wang, Hanning; Yang, Jiulin
2014-01-01
Focusing on problems existing in track irregularity time series data quality, this paper first presents algorithms for abnormal data identification, data offset correction, local outlier data identification, and noise cancellation. It then proposes decomposition and reconstruction of the track irregularity time series through a wavelet decomposition and reconstruction approach. Finally, the patterns and features of the track irregularity standard deviation data sequence in unit sections are studied, and the changing trend of the track irregularity time series is discovered and described. PMID:25435869
Economic Inequality in Presenting Vision in Shahroud, Iran: Two Decomposition Methods
Mansouri, Asieh; Emamian, Mohammad Hassan; Zeraati, Hojjat; Hashemi, Hasan; Fotouhi, Akbar
2018-01-01
Background: Visual acuity, like many other health-related problems, does not have an equal distribution in terms of socio-economic factors. We conducted this study to estimate and decompose economic inequality in presenting visual acuity using two methods and to compare their results in a population aged 40-64 years in Shahroud, Iran. Methods: The data of 5188 participants in the first phase of the Shahroud Cohort Eye Study, performed in 2009, were used for this study. Our outcome variable was presenting vision acuity (PVA), which was measured using LogMAR (logarithm of the minimum angle of resolution). The living standard variable used for estimation of inequality was economic status, constructed by principal component analysis on home assets. Inequality indices were the concentration index and the gap between low and high economic groups. We decomposed these indices by the concentration index and Blinder-Oaxaca decomposition approaches, respectively, and compared the results. Results: The concentration index of PVA was -0.245 (95% CI: -0.278, -0.212). The PVA gap between groups with a high and low economic status was 0.0705 and was in favor of the high economic group. Education, economic status, and age were the most important contributors to inequality in both the concentration index and Blinder-Oaxaca decomposition. The percent contributions of these three factors in the concentration index and Blinder-Oaxaca decomposition were 41.1% vs. 43.4%, 25.4% vs. 19.1%, and 15.2% vs. 16.2%, respectively. Other factors, including gender, marital status, employment status, and diabetes, had minor contributions. Conclusion: This study showed that individuals with poorer visual acuity were more concentrated among people with a lower economic status. The main contributors to this inequality were similar in the concentration index and Blinder-Oaxaca decomposition. So, it can be concluded that setting appropriate interventions to promote literacy and income level in people with low economic status, formulating policies to address economic problems in the elderly, and paying more attention to their vision problems can help to alleviate economic inequality in visual acuity. PMID:29325403
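The two inequality measures used above can be computed directly from individual-level data. The Python sketch below shows a minimal version of both: a concentration index based on fractional economic rank and a two-fold Blinder-Oaxaca split of the mean gap. The function names and the use of plain least squares are illustrative assumptions, not the study's actual code.

```python
import numpy as np

def concentration_index(y, economic_status):
    """Wagstaff-style concentration index: C = 2*cov(y, r)/mean(y),
    where r is the fractional rank by the living-standard variable."""
    y = np.asarray(y, dtype=float)
    econ = np.asarray(economic_status, dtype=float)
    n = len(econ)
    r = np.empty(n)
    r[np.argsort(econ)] = (np.arange(n) + 0.5) / n   # fractional rank in (0, 1)
    return 2.0 * np.cov(y, r, bias=True)[0, 1] / y.mean()

def oaxaca_twofold(X_high, y_high, X_low, y_low):
    """Two-fold Blinder-Oaxaca decomposition of mean(y_high) - mean(y_low),
    using the high-group coefficients as the reference structure."""
    add = lambda X: np.column_stack([np.ones(len(X)), X])   # add intercept
    b_high, *_ = np.linalg.lstsq(add(X_high), y_high, rcond=None)
    b_low,  *_ = np.linalg.lstsq(add(X_low),  y_low,  rcond=None)
    xbar_high, xbar_low = add(X_high).mean(0), add(X_low).mean(0)
    explained   = (xbar_high - xbar_low) @ b_high   # differences in characteristics
    unexplained = xbar_low @ (b_high - b_low)       # differences in coefficients
    return explained, unexplained
```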
Resolution of singularities for multi-loop integrals
NASA Astrophysics Data System (ADS)
Bogner, Christian; Weinzierl, Stefan
2008-04-01
We report on a program for the numerical evaluation of divergent multi-loop integrals. The program is based on iterated sector decomposition. We improve the original algorithm of Binoth and Heinrich such that the program is guaranteed to terminate. The program can be used to compute numerically the Laurent expansion of divergent multi-loop integrals regulated by dimensional regularisation. The symbolic and the numerical steps of the algorithm are combined into one program. Program summary: Program title: sector_decomposition. Catalogue identifier: AEAG_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAG_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 47 506. No. of bytes in distributed program, including test data, etc.: 328 485. Distribution format: tar.gz. Programming language: C++. Computer: all. Operating system: Unix. RAM: depending on the complexity of the problem. Classification: 4.4. External routines: GiNaC (available from http://www.ginac.de) and the GNU Scientific Library (available from http://www.gnu.org/software/gsl). Nature of problem: computation of divergent multi-loop integrals. Solution method: sector decomposition. Restrictions: only limited by the available memory and CPU time. Running time: depending on the complexity of the problem.
Low rank approximation methods for MR fingerprinting with large scale dictionaries.
Yang, Mingrui; Ma, Dan; Jiang, Yun; Hamilton, Jesse; Seiberlich, Nicole; Griswold, Mark A; McGivney, Debra
2018-04-01
This work proposes new low rank approximation approaches with significant memory savings for large scale MR fingerprinting (MRF) problems. We introduce a compressed MRF with randomized singular value decomposition method to significantly reduce the memory requirement for calculating a low rank approximation of large sized MRF dictionaries. We further relax this requirement by exploiting the structures of MRF dictionaries in the randomized singular value decomposition space and fitting them to low-degree polynomials to generate high resolution MRF parameter maps. In vivo 1.5T and 3T brain scan data are used to validate the approaches. T1, T2, and off-resonance maps are in good agreement with those of the standard MRF approach. Moreover, the memory savings are up to 1000-fold for the MRF-fast imaging with steady-state precession sequence and more than 15-fold for the MRF-balanced, steady-state free precession sequence. The proposed compressed MRF with randomized singular value decomposition and dictionary fitting methods are memory efficient low rank approximation methods, which can benefit the usage of MRF in clinical settings. They also have great potential in large scale MRF problems, such as problems considering multi-component MRF parameters or high resolution in the parameter space. Magn Reson Med 79:2392-2400, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
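As a rough illustration of the compression step, the following sketch implements a generic randomized SVD (random sketching, a few power iterations, then an exact SVD of the small projected matrix). It is a textbook construction, not the authors' implementation, and the dictionary shape and rank in the trailing comments are hypothetical.

```python
import numpy as np

def randomized_svd(D, rank, n_oversample=10, n_iter=2, rng=None):
    """Randomized SVD of a large dictionary matrix D (n_entries x n_timepoints).

    A Gaussian sketch followed by a QR step captures the dominant subspace;
    the small projected matrix is then decomposed exactly.
    """
    rng = np.random.default_rng(rng)
    k = rank + n_oversample
    Omega = rng.standard_normal((D.shape[1], k))      # random test matrix
    Y = D @ Omega
    for _ in range(n_iter):                           # power iterations sharpen the spectrum
        Y = D @ (D.T @ Y)
    Q, _ = np.linalg.qr(Y)                            # orthonormal basis for an approximate range of D
    B = Q.T @ D                                       # small (k x n_timepoints) matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :rank], s[:rank], Vt[:rank]

# Compress a (hypothetical) MRF dictionary and project signals into the low-rank space:
# U, s, Vt = randomized_svd(dictionary, rank=25)
# compressed_signal = signal @ Vt.T   # match in the rank-25 subspace instead of full length
```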
Eigenvalue Solvers for Modeling Nuclear Reactors on Leadership Class Machines
Slaybaugh, R. N.; Ramirez-Zweiger, M.; Pandya, Tara; ...
2018-02-20
In this paper, three complementary methods have been implemented in the code Denovo that accelerate neutral particle transport calculations with methods that use leadership-class computers fully and effectively: a multigroup block (MG) Krylov solver, a Rayleigh quotient iteration (RQI) eigenvalue solver, and a multigrid in energy (MGE) preconditioner. The MG Krylov solver converges more quickly than Gauss Seidel and enables energy decomposition such that Denovo can scale to hundreds of thousands of cores. RQI should converge in fewer iterations than power iteration (PI) for large and challenging problems. RQI creates shifted systems that would not be tractable without the MG Krylov solver. It also creates ill-conditioned matrices. The MGE preconditioner reduces iteration count significantly when used with RQI and takes advantage of the new energy decomposition such that it can scale efficiently. Each individual method has been described before, but this is the first time they have been demonstrated to work together effectively. The combination of solvers enables the RQI eigenvalue solver to work better than the other available solvers for large reactor problems on leadership-class machines. Using these methods together, RQI converged in fewer iterations and in less time than PI for a full pressurized water reactor core. These solvers also performed better than an Arnoldi eigenvalue solver for a reactor benchmark problem when energy decomposition is needed. The MG Krylov, MGE preconditioner, and RQI solver combination also scales well in energy. Finally, this solver set is a strong choice for very large and challenging problems.
The Development from Effortful to Automatic Processing in Mathematical Cognition.
ERIC Educational Resources Information Center
Kaye, Daniel B.; And Others
This investigation capitalizes upon information-processing models that depend upon measurement of the latency of response to a mathematical problem and the decomposition of reaction time (RT). Simple two-term addition problems were presented with possible solutions for true-false verification, and accuracy and RT to response were recorded. Total…
The Rigid Orthogonal Procrustes Rotation Problem
ERIC Educational Resources Information Center
ten Berge, Jos M. F.
2006-01-01
The problem of rotating a matrix orthogonally to a best least squares fit with another matrix of the same order has a closed-form solution based on a singular value decomposition. The optimal rotation matrix is not necessarily rigid, but may also involve a reflection. In some applications, only rigid rotations are permitted. Gower (1976) has…
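The rigid variant described here can be sketched in a few lines: compute the SVD of the cross-product matrix and, if the unconstrained optimum turns out to be a reflection, flip the sign of the weakest singular direction. This is the standard SVD construction; variable names are illustrative.

```python
import numpy as np

def rigid_procrustes(A, B):
    """Best rigid (rotation-only) orthogonal Procrustes fit of A onto B.

    Minimizes ||A R - B||_F over rotations with det(R) = +1; flipping the
    sign of the smallest singular direction removes a reflection if needed.
    """
    U, _, Vt = np.linalg.svd(A.T @ B)
    R = U @ Vt
    if np.linalg.det(R) < 0:           # unconstrained optimum is a reflection
        U[:, -1] *= -1                 # re-optimize over rotations only
        R = U @ Vt
    return R
```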
Ferreira, M Teresa; Cunha, Eugénia
2013-03-10
Post mortem interval estimation is crucial in forensic sciences for both positive identification and reconstruction of perimortem events. However, reliable dating of skeletonized remains poses a scientific challenge, since decomposition of human remains involves a set of complex and highly variable processes. Many of the difficulties in determining the post mortem interval and/or the permanence of a body in a specific environment relate to the lack of systematic observations and research on human body decomposition modalities in different environments. In March 2006, in order to solve a problem of misidentification, a team of the South Branch of the Portuguese National Institute of Legal Medicine carried out the exhumation of 25 identified individuals buried for almost five years in the same cemetery plot. Even though all individuals shared similar post mortem intervals, they presented different stages of decomposition. In order to analyze the post mortem factors associated with the different stages of decomposition displayed by the 25 exhumed individuals, the stages of decomposition were scored. Information regarding age at death and sex of the individuals was gathered and recorded, as well as data on the cause of death and grave and coffin characteristics. Although the observed distinct decay stages may be explained by the burial conditions, namely by the micro-taphonomic environments, individual endogenous factors also play an important role in differential decomposition, as witnessed by the present case. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Sierra, Carlos A.; Trumbore, Susan E.; Davidson, Eric A.; Vicca, Sara; Janssens, I.
2015-03-01
The sensitivity of soil organic matter decomposition to global environmental change is a topic of prominent relevance for the global carbon cycle. Decomposition depends on multiple factors that are being altered simultaneously as a result of global environmental change; therefore, it is important to study the sensitivity of the rates of soil organic matter decomposition with respect to multiple and interacting drivers. In this manuscript, we present an analysis of the potential response of decomposition rates to simultaneous changes in temperature and moisture. To address this problem, we first present a theoretical framework to study the sensitivity of soil organic matter decomposition when multiple driving factors change simultaneously. We then apply this framework to models and data at different levels of abstraction: (1) to a mechanistic model that addresses the limitation of enzyme activity by simultaneous effects of temperature and soil water content, the latter controlling substrate supply and oxygen concentration for microbial activity; (2) to different mathematical functions used to represent temperature and moisture effects on decomposition in biogeochemical models. To contrast model predictions at these two levels of organization, we compiled different data sets of observed responses in field and laboratory studies. Then we applied our conceptual framework to: (3) observations of heterotrophic respiration at the ecosystem level; (4) laboratory experiments looking at the response of heterotrophic respiration to independent changes in moisture and temperature; and (5) ecosystem-level experiments manipulating soil temperature and water content simultaneously.
NASA Astrophysics Data System (ADS)
Gao, Pengzhi; Wang, Meng; Chow, Joe H.; Ghiocel, Scott G.; Fardanesh, Bruce; Stefopoulos, George; Razanousky, Michael P.
2016-11-01
This paper presents a new framework of identifying a series of cyber data attacks on power system synchrophasor measurements. We focus on detecting "unobservable" cyber data attacks that cannot be detected by any existing method that purely relies on measurements received at one time instant. Leveraging the approximate low-rank property of phasor measurement unit (PMU) data, we formulate the identification problem of successive unobservable cyber attacks as a matrix decomposition problem of a low-rank matrix plus a transformed column-sparse matrix. We propose a convex-optimization-based method and provide its theoretical guarantee in the data identification. Numerical experiments on actual PMU data from the Central New York power system and synthetic data are conducted to verify the effectiveness of the proposed method.
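The decomposition of a measurement matrix into a low-rank part plus a column-sparse part can be illustrated with a simple alternating proximal scheme: singular value thresholding for the low-rank term and column-wise soft thresholding for the sparse term. This is only a heuristic sketch of the general idea, not the convex program or guarantees described in the abstract; the thresholds tau and lam are hypothetical tuning parameters.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def col_threshold(X, lam):
    """Column-wise soft thresholding: proximal operator of the l2,1 norm."""
    norms = np.linalg.norm(X, axis=0, keepdims=True)
    return X * np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0)

def lowrank_plus_column_sparse(M, tau=1.0, lam=1.0, n_iter=100):
    """Split a PMU-like measurement matrix M into a low-rank part L
    (nominal behaviour) and a column-sparse part C (a few attacked channels)."""
    L = np.zeros_like(M)
    C = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - C, tau)             # shrink singular values of the residual
        C = col_threshold(M - L, lam)   # keep only columns with large energy
    return L, C
```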
Optical systolic solutions of linear algebraic equations
NASA Technical Reports Server (NTRS)
Neuman, C. P.; Casasent, D.
1984-01-01
The philosophy of, and the data encoding possible in, the systolic array optical processor (SAOP) are reviewed. The multitude of linear algebraic operations achievable on this architecture is examined. These operations include such linear algebraic algorithms as matrix decomposition, direct and indirect solutions, implicit and explicit methods for partial differential equations, eigenvalue and eigenvector calculations, and singular value decomposition. This architecture can be utilized to realize general techniques for solving matrix linear and nonlinear algebraic equations, least mean square error solutions, FIR filters, and nested-loop algorithms for control engineering applications. The data flow and pipelining of operations, design of parallel algorithms and flexible architectures, application of these architectures to computationally intensive physical problems, error-source modeling of optical processors, and matching of the computational needs of practical engineering problems to the capabilities of optical processors are emphasized.
A look at scalable dense linear algebra libraries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dongarra, J.J.; Van de Geijn, R.A.; Walker, D.W.
1992-01-01
We discuss the essential design features of a library of scalable software for performing dense linear algebra computations on distributed memory concurrent computers. The square block scattered decomposition is proposed as a flexible and general-purpose way of decomposing most, if not all, dense matrix problems. An object-oriented interface to the library permits more portable applications to be written, and is easy to learn and use, since details of the parallel implementation are hidden from the user. Experiments on the Intel Touchstone Delta system with a prototype code that uses the square block scattered decomposition to perform LU factorization are presented and analyzed. It was found that the code was both scalable and efficient, performing at about 14 GFLOPS (double precision) for the largest problem considered.
Diagnostics vehicle’s condition using OBD-II and Raspberry Pi technology: study literature
NASA Astrophysics Data System (ADS)
Moniaga, J. V.; Manalu, S. R.; Hadipurnawan, D. A.; Sahidi, F.
2018-03-01
Transportation accident rates remain a major challenge in many countries. Many factors can cause transportation accidents, especially problems in a vehicle’s internal systems. To overcome this problem, OBD-II technology has been created to diagnose a vehicle’s condition. An OBD-II scanner is plugged into the OBD-II port, usually called the Data Link Connector (DLC), and then sends the diagnostics to a Raspberry Pi. Compared with other microcontrollers such as the Arduino, the Raspberry Pi was chosen because it can receive real-time diagnostics, process them, and send commands to the vehicle at the same time, whereas the Arduino must wait for one process to finish before running another. The outcome of this application is to enable vehicle owners to diagnose their own vehicles. If something unusual or a problem is found, the application can report it to the user, so they know what to fix before using their vehicle safely.
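A minimal sketch of this kind of setup on a Raspberry Pi, assuming the python-obd package and an ELM327-compatible adapter plugged into the DLC, might look as follows; the command names follow the python-obd documentation, and port names, supported commands, and error handling will vary by vehicle and adapter.

```python
import obd  # python-obd package; assumes an ELM327-compatible OBD-II adapter

# Connect to the adapter plugged into the Data Link Connector (auto-detects the port).
connection = obd.OBD()

if connection.is_connected():
    # Query a few live parameters from the engine control unit.
    rpm = connection.query(obd.commands.RPM)
    coolant = connection.query(obd.commands.COOLANT_TEMP)
    print("Engine RPM:", rpm.value)
    print("Coolant temperature:", coolant.value)

    # Read stored diagnostic trouble codes so the user knows what to fix.
    dtcs = connection.query(obd.commands.GET_DTC)
    for code, description in dtcs.value or []:
        print("Trouble code:", code, "-", description)
else:
    print("No OBD-II adapter found")
```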
NASA Astrophysics Data System (ADS)
Li, Miao; Lin, Zaiping; Long, Yunli; An, Wei; Zhou, Yiyu
2016-05-01
The high variability of target size makes small target detection in Infrared Search and Track (IRST) a challenging task. A joint detection and tracking method based on block-wise sparse decomposition is proposed to address this problem. For detection, the infrared image is divided into overlapped blocks, and each block is weighted by the local image complexity and target existence probabilities. Target-background decomposition is solved by block-wise inexact augmented Lagrange multipliers. For tracking, a labeled multi-Bernoulli (LMB) tracker tracks multiple targets, taking the result of single-frame detection as input, and provides corresponding target existence probabilities for detection. Unlike fixed-size methods, the proposed method can accommodate size-varying targets, since it makes no special assumption about the size and shape of small targets. Because of the exact decomposition, classical target measurements are extended and additional direction information is provided to improve tracking performance. The experimental results show that the proposed method can effectively suppress background clutter, and detect and track size-varying targets in infrared images.
Multi-focus image fusion based on window empirical mode decomposition
NASA Astrophysics Data System (ADS)
Qin, Xinqiang; Zheng, Jiaoyue; Hu, Gang; Wang, Jiao
2017-09-01
In order to improve multi-focus image fusion quality, a novel fusion algorithm based on window empirical mode decomposition (WEMD) is proposed. WEMD is an improved form of bidimensional empirical mode decomposition (BEMD) whose decomposition process uses an adding-window principle, effectively resolving the signal concealment problem. We used WEMD for multi-focus image fusion and formulated different fusion rules for the bidimensional intrinsic mode function (BIMF) components and the residue component. For fusion of the BIMF components, the concept of the sum-modified Laplacian was used and a scheme based on visual feature contrast was adopted; when choosing the residue coefficients, a pixel value based on local visibility was selected. We carried out four groups of multi-focus image fusion experiments and compared objective evaluation criteria with three other fusion methods. The experimental results show that the proposed fusion approach is effective and performs better at fusing multi-focus images than some traditional methods.
Layout decomposition of self-aligned double patterning for 2D random logic patterning
NASA Astrophysics Data System (ADS)
Ban, Yongchan; Miloslavsky, Alex; Lucas, Kevin; Choi, Soo-Han; Park, Chul-Hong; Pan, David Z.
2011-04-01
Self-aligned double patterning (SADP) has been adopted as a promising solution for sub-30nm technology nodes due to its reduced overlay problems and better process tolerance. SADP is in production use for 1D dense patterns with good pitch control, such as NAND Flash memory applications, but it is still challenging to apply SADP to 2D random logic patterns. The favored type of SADP for complex logic interconnects is a two-mask approach using a core mask and a trim mask. In this paper, we first describe layout decomposition methods for spacer-type double patterning lithography, then report a type of SADP-compliant layout, and finally report SADP applications on a Samsung 22nm SRAM layout. For SADP decomposition, we propose several SADP-aware layout coloring algorithms and a method of generating lithography-friendly core mask patterns. Experimental results on 22nm node designs show that our proposed layout decomposition for SADP effectively decomposes any given layouts.
On the reliable and flexible solution of practical subset regression problems
NASA Technical Reports Server (NTRS)
Verhaegen, M. H.
1987-01-01
A new algorithm for solving subset regression problems is described. The algorithm performs a QR decomposition with a new column-pivoting strategy, which permits subset selection directly from the originally defined regression parameters. This, in combination with a number of extensions of the new technique, makes the method a very flexible tool for analyzing subset regression problems in which the parameters have a physical meaning.
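The core idea, QR with column pivoting as a subset-selection device, can be illustrated with standard library routines (the paper's new pivoting strategy is not reproduced here). The example below uses scipy's default pivoting and a hypothetical redundant-column scenario.

```python
import numpy as np
from scipy.linalg import qr

def select_subset(X, k):
    """Pick k regressors via QR with column pivoting.

    The pivot order ranks columns by how much new information each adds to
    the span of those already chosen, which is the essence of subset
    selection by column pivoting.
    """
    _, R, piv = qr(X, mode='economic', pivoting=True)
    return piv[:k]                       # indices of the selected regressors

# Example: keep the 3 most informative of 10 correlated regressors.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
X[:, 3] = X[:, 0] + 0.01 * rng.standard_normal(100)   # nearly redundant column
print(select_subset(X, 3))
```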
ICASE Computer Science Program
NASA Technical Reports Server (NTRS)
1985-01-01
The Institute for Computer Applications in Science and Engineering computer science program is discussed in outline form. Information is given on such topics as problem decomposition, algorithm development, programming languages, and parallel architectures.
A Tensor-Train accelerated solver for integral equations in complex geometries
NASA Astrophysics Data System (ADS)
Corona, Eduardo; Rahimian, Abtin; Zorin, Denis
2017-04-01
We present a framework using the Quantized Tensor Train (QTT) decomposition to accurately and efficiently solve volume and boundary integral equations in three dimensions. We describe how the QTT decomposition can be used as a hierarchical compression and inversion scheme for matrices arising from the discretization of integral equations. For a broad range of problems, computational and storage costs of the inversion scheme are extremely modest, O(log N), and once the inverse is computed, it can be applied in O(N log N). We analyze the QTT ranks for hierarchically low rank matrices and discuss its relationship to commonly used hierarchical compression techniques such as FMM and HSS. We prove that the QTT ranks are bounded for translation-invariant systems and argue that this behavior extends to non-translation invariant volume and boundary integrals. For volume integrals, the QTT decomposition provides an efficient direct solver requiring significantly less memory compared to other fast direct solvers. We present results demonstrating the remarkable performance of the QTT-based solver when applied to both translation and non-translation invariant volume integrals in 3D. For boundary integral equations, we demonstrate that using a QTT decomposition to construct preconditioners for a Krylov subspace method leads to an efficient and robust solver with a small memory footprint. We test the QTT preconditioners in the iterative solution of an exterior elliptic boundary value problem (Laplace) formulated as a boundary integral equation in complex, multiply connected geometries.
Enhanced algorithms for stochastic programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishna, Alamuru S.
1993-09-01
In this dissertation, we present some of the recent advances made in solving two-stage stochastic linear programming problems of large size and complexity. Decomposition and sampling are two fundamental components of techniques to solve stochastic optimization problems. We describe improvements to the current techniques in both these areas. We studied different ways of using importance sampling techniques in the context of stochastic programming, by varying the choice of approximation functions used in this method. We have concluded that approximating the recourse function by a computationally inexpensive piecewise-linear function is highly efficient. This reduced the problem from finding the mean of a computationally expensive function to finding that of a computationally inexpensive one. We then implemented various variance reduction techniques to estimate the mean of the piecewise-linear function. This method achieved similar variance reductions in orders of magnitude less time than applying variance-reduction techniques directly to the given problem. In solving a stochastic linear program, the expected value problem is usually solved before the stochastic solution, in part to speed up the algorithm by making use of the information obtained from the solution of the expected value problem. We have devised a new decomposition scheme to improve the convergence of this algorithm.
Dey, Nilanjan; Bose, Soumyo; Das, Achintya; Chaudhuri, Sheli Sinha; Saba, Luca; Shafique, Shoaib; Nicolaides, Andrew; Suri, Jasjit S
2016-04-01
Embedding of diagnostic and health care information requires secure encryption and watermarking. This research paper presents a comprehensive study of the behavior of some well established watermarking algorithms in the frequency domain for the preservation of stroke-based diagnostic parameters. Two different sets of watermarking algorithms, namely two correlation-based (binary logo hiding) and two singular value decomposition (SVD)-based (gray logo hiding) algorithms, are used for embedding an ownership logo. The diagnostic parameters in the atherosclerotic plaque ultrasound video are: (a) bulb identification and recognition, which consists of identifying the bulb edge points in the far and near carotid walls; (b) carotid bulb diameter; and (c) carotid lumen thickness all along the carotid artery. The tested data set consists of carotid atherosclerotic movies taken under IRB protocol from a University of Indiana Hospital, USA-AtheroPoint™ (Roseville, CA, USA) joint pilot study. ROC (receiver operating characteristic) analysis was performed on the bulb detection process, which showed an accuracy and sensitivity of 100% each. The diagnostic preservation (DPsystem) for the SVD-based approach was above 99% with PSNR (peak signal-to-noise ratio) above 41, ensuring that the diagnostic parameters are not degraded as an effect of watermarking. Thus, the fully automated proposed system proved to be an efficient method for watermarking the atherosclerotic ultrasound video for stroke application.
Computational Investigation of Combustion Instabilities in a Laboratory-Scale LDI Gas Turbine Engine
2013-06-01
…combustor by the insertion of a slotted inlet and an exit nozzle, whereas the reduced geometry is acoustically open. Table 2: Summary of Cases Considered … At the nozzle located at the right-end surface, an outlet condition is imposed by a characteristic back-pressure condition. The fuel spray is injected at the … computational mesh visualized around the fuel nozzle and swirler. III. Decomposition Methods for Combustion Dynamics Diagnostics: to understand the…
A Multilevel, Hierarchical Sampling Technique for Spatially Correlated Random Fields
Osborn, Sarah; Vassilevski, Panayot S.; Villa, Umberto
2017-10-26
In this paper, we propose an alternative method to generate samples of a spatially correlated random field with applications to large-scale problems for forward propagation of uncertainty. A classical approach for generating these samples is the Karhunen--Loève (KL) decomposition. However, the KL expansion requires solving a dense eigenvalue problem and is therefore computationally infeasible for large-scale problems. Sampling methods based on stochastic partial differential equations provide a highly scalable way to sample Gaussian fields, but the resulting parametrization is mesh dependent. We propose a multilevel decomposition of the stochastic field to allow for scalable, hierarchical sampling based on solving a mixed finite element formulation of a stochastic reaction-diffusion equation with a random, white noise source function. Lastly, numerical experiments are presented to demonstrate the scalability of the sampling method as well as numerical results of multilevel Monte Carlo simulations for a subsurface porous media flow application using the proposed sampling method.
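For contrast with the multilevel SPDE sampler, the classical KL approach mentioned above can be sketched directly: assemble a covariance matrix, solve the dense eigenproblem, and combine the leading modes with independent standard normal variables. The exponential kernel, correlation length, and truncation level below are illustrative assumptions.

```python
import numpy as np

def kl_samples(points, n_samples, corr_len=0.3, n_modes=50, rng=None):
    """Draw samples of a zero-mean Gaussian random field via a truncated
    Karhunen-Loeve expansion (the classical, dense-eigenproblem approach
    that becomes infeasible at large scale)."""
    rng = np.random.default_rng(rng)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    C = np.exp(-d / corr_len)                     # exponential covariance kernel
    eigvals, eigvecs = np.linalg.eigh(C)          # dense eigenproblem: O(N^3)
    idx = np.argsort(eigvals)[::-1][:n_modes]     # keep the dominant modes
    lam = np.clip(eigvals[idx], 0.0, None)        # guard tiny negative eigenvalues
    phi = eigvecs[:, idx]
    xi = rng.standard_normal((n_modes, n_samples))
    return phi @ (np.sqrt(lam)[:, None] * xi)     # field values at each point

# 1-D example on 200 grid points:
pts = np.linspace(0, 1, 200)[:, None]
fields = kl_samples(pts, n_samples=5)
```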
Resolving Some Paradoxes in the Thermal Decomposition Mechanism of Acetaldehyde
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sivaramakrishnan, Raghu; Michael, Joe V.; Harding, Lawrence B.
2015-07-16
The mechanism for the thermal decomposition of acetaldehyde has been revisited with an analysis of literature kinetics experiments using theoretical kinetics. The present modeling study was motivated by recent observations, with very sensitive diagnostics, of some unexpected products in high temperature micro-tubular reactor experiments on the thermal decomposition of CH3CHO and its deuterated analogs, CH3CDO, CD3CHO, and CD3CDO. The observations of these products prompted the authors of these studies to suggest that the enol tautomer, CH2CHOH (vinyl alcohol), is a primary intermediate in the thermal decomposition of acetaldehyde. The present modeling efforts on acetaldehyde decomposition incorporate a master equation re-analysis of the CH3CHO potential energy surface (PES). The lowest energy process on this PES is an isomerization of CH3CHO to CH2CHOH. However, the subsequent product channels for CH2CHOH are substantially higher in energy, and the only unimolecular process that can be thermally accessed is a re-isomerization to CH3CHO. The incorporation of these new theoretical kinetics predictions into models for selected literature experiments on CH3CHO thermal decomposition confirms our earlier experiment- and theory-based conclusions that the dominant decomposition process in CH3CHO at high temperatures is C-C bond fission with a minor contribution (~10-20%) from the roaming mechanism to form CH4 and CO. The present modeling efforts also incorporate a master-equation analysis of the H + CH2CHOH potential energy surface. This bimolecular reaction is the primary mechanism for removal of CH2CHOH, which can accumulate to minor amounts at high temperatures, T > 1000 K, in most lab-scale experiments that use large initial concentrations of CH3CHO. Our modeling efforts indicate that the observations of ketene, water, and acetylene in the recent micro-tubular experiments are primarily due to bimolecular reactions of CH3CHO and CH2CHOH with H-atoms, and have no bearing on the unimolecular decomposition mechanism of CH3CHO. The present simulations also indicate that experiments using these micro-tubular reactors, when interpreted with the aid of high-level theoretical calculations and kinetics modeling, can offer insights into the chemistry of elusive intermediates in high temperature pyrolysis of organic molecules.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zentner, I.; Ferré, G., E-mail: gregoire.ferre@ponts.org; Poirion, F.
2016-06-01
In this paper, a new method for the identification and simulation of non-Gaussian and non-stationary stochastic fields given a database is proposed. It is based on two successive biorthogonal decompositions aiming at representing spatio-temporal stochastic fields. The proposed double expansion makes it possible to build the model even in the case of large-size problems by separating the time, space and random parts of the field. A Gaussian kernel estimator is used to simulate the high dimensional set of random variables appearing in the decomposition. The capability of the method to reproduce the non-stationary and non-Gaussian features of random phenomena is illustrated by applications to earthquakes (seismic ground motion) and sea states (wave heights).
Frequency hopping signal detection based on wavelet decomposition and Hilbert-Huang transform
NASA Astrophysics Data System (ADS)
Zheng, Yang; Chen, Xihao; Zhu, Rui
2017-07-01
Frequency hopping (FH) signals are widely adopted by military communications as a kind of low probability of interception signal. Therefore, it is very important to research FH signal detection algorithms. Existing detection algorithms for FH signals based on time-frequency analysis cannot satisfy the time and frequency resolution requirements at the same time due to the influence of the window function. In order to solve this problem, an algorithm based on wavelet decomposition and the Hilbert-Huang transform (HHT) is proposed. The proposed algorithm removes the noise of the received signals by wavelet decomposition and detects the FH signals by the Hilbert-Huang transform. Simulation results show the proposed algorithm takes into account both the time resolution and the frequency resolution. Correspondingly, the accuracy of FH signal detection can be improved.
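A simplified version of this processing chain can be sketched with standard tools: wavelet decomposition and soft thresholding for denoising, followed by a Hilbert transform to track instantaneous frequency (the empirical-mode-decomposition stage of a full HHT is omitted here). The wavelet choice, threshold rule, and jump detector are illustrative assumptions.

```python
import numpy as np
import pywt
from scipy.signal import hilbert

def detect_fh(signal, fs, wavelet='db4', level=4):
    """Denoise a received signal by wavelet decomposition, then estimate the
    instantaneous frequency via the Hilbert transform; abrupt jumps in the
    frequency track suggest hop instants of a frequency-hopping emitter."""
    # Wavelet decomposition and universal-threshold denoising of the detail bands.
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise level from finest scale
    thr = sigma * np.sqrt(2.0 * np.log(len(signal)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
    clean = pywt.waverec(coeffs, wavelet)[:len(signal)]

    # Hilbert transform -> analytic signal -> instantaneous frequency.
    analytic = hilbert(clean)
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.diff(phase) * fs / (2.0 * np.pi)
    hop_instants = np.where(np.abs(np.diff(inst_freq)) > fs / 50.0)[0]  # crude jump detector
    return inst_freq, hop_instants
```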
Bayesian Hierarchical Grouping: perceptual grouping as mixture estimation
Froyen, Vicky; Feldman, Jacob; Singh, Manish
2015-01-01
We propose a novel framework for perceptual grouping based on the idea of mixture models, called Bayesian Hierarchical Grouping (BHG). In BHG we assume that the configuration of image elements is generated by a mixture of distinct objects, each of which generates image elements according to some generative assumptions. Grouping, in this framework, means estimating the number and the parameters of the mixture components that generated the image, including estimating which image elements are “owned” by which objects. We present a tractable implementation of the framework, based on the hierarchical clustering approach of Heller and Ghahramani (2005). We illustrate it with examples drawn from a number of classical perceptual grouping problems, including dot clustering, contour integration, and part decomposition. Our approach yields an intuitive hierarchical representation of image elements, giving an explicit decomposition of the image into mixture components, along with estimates of the probability of various candidate decompositions. We show that BHG accounts well for a diverse range of empirical data drawn from the literature. Because BHG provides a principled quantification of the plausibility of grouping interpretations over a wide range of grouping problems, we argue that it provides an appealing unifying account of the elusive Gestalt notion of Prägnanz. PMID:26322548
Program Helps Decompose Complicated Design Problems
NASA Technical Reports Server (NTRS)
Rogers, James L., Jr.
1993-01-01
Time is saved by intelligent decomposition into smaller, interrelated problems. DeMAID is a knowledge-based software system for ordering the sequence of modules and identifying a possible multilevel structure for a design problem. It displays the modules in an N x N matrix format. Although it requires an investment of time to generate and refine the list of modules for input, it saves a considerable amount of money and time in the total design process, particularly for new design problems in which the ordering of modules has not been defined. The program has also been implemented to examine an assembly-line process or the ordering of tasks and milestones.
Domain decomposition in time for PDE-constrained optimization
Barker, Andrew T.; Stoll, Martin
2015-08-28
Here, PDE-constrained optimization problems have a wide range of applications, but they lead to very large and ill-conditioned linear systems, especially if the problems are time dependent. In this paper we outline an approach for dealing with such problems by decomposing them in time and applying an additive Schwarz preconditioner in time, so that we can take advantage of parallel computers to deal with the very large linear systems. We then illustrate the performance of our method on a variety of problems.
Class and Home Problems. Modeling an Explosion: The Devil Is in the Details
ERIC Educational Resources Information Center
Hart, Peter W.; Rudie, Alan W.
2011-01-01
Within the past 15 years, three North American pulp mills experienced catastrophic equipment failures while using 50 wt% hydrogen peroxide. In two cases, explosions occurred when normal pulp flow was interrupted due to other process problems. To understand the accidents, a kinetic model of alkali-catalyzed decomposition of peroxide was developed.…
Characterising laser beams with liquid crystal displays
NASA Astrophysics Data System (ADS)
Dudley, Angela; Naidoo, Darryl; Forbes, Andrew
2016-02-01
We show how one can determine the various properties of light, from the modal content of laser beams to decoding the information stored in optical fields carrying orbital angular momentum, by performing a modal decomposition. Although the modal decomposition of light has been known for a long time, applied mostly to pattern recognition, we illustrate how this technique can be implemented with the use of liquid-crystal displays. We show experimentally how liquid crystal displays can be used to infer the intensity, phase, wavefront, Poynting vector, and orbital angular momentum density of unknown optical fields. This measurement technique makes use of a single spatial light modulator (liquid crystal display), a Fourier transforming lens and detector (CCD or photo-diode). Such a diagnostic tool is extremely relevant to the real-time analysis of solid-state and fibre laser systems as well as mode division multiplexing as an emerging technology in optical communication.
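At its core, the modal decomposition amounts to overlap integrals between the measured field and a chosen orthonormal basis; the optical implementation performs these inner products with holograms on the liquid crystal display and a Fourier lens. A numerical sketch of the same projection is below; the Laguerre-Gaussian basis named in the comments is a hypothetical example.

```python
import numpy as np

def modal_coefficients(field, basis_modes, dx):
    """Project a complex optical field onto an orthonormal set of basis modes.

    c_n = <u_n | E> approximated as sum(conj(u_n) * E) * dx^2; |c_n|^2 gives
    the modal power spectrum, and the coefficients can be used to reconstruct
    the intensity, phase, and wavefront of the field.
    """
    return np.array([np.sum(np.conj(u) * field) * dx**2 for u in basis_modes])

# Example with a (hypothetical) orthonormal Laguerre-Gaussian basis `lg_modes`:
# coeffs = modal_coefficients(measured_field, lg_modes, dx=pixel_pitch)
# reconstructed = sum(c * u for c, u in zip(coeffs, lg_modes))
# modal_power = np.abs(coeffs)**2 / np.sum(np.abs(coeffs)**2)
```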
NASA Astrophysics Data System (ADS)
Volegov, P. L.; Danly, C. R.; Fittinghoff, D.; Geppert-Kleinrath, V.; Grim, G.; Merrill, F. E.; Wilde, C. H.
2017-11-01
Neutron, gamma-ray, and x-ray imaging are important diagnostic tools at the National Ignition Facility (NIF) for measuring the two-dimensional (2D) size and shape of the neutron-producing region, for probing the remaining ablator, and for measuring the extent of the DT plasmas during the stagnation phase of Inertial Confinement Fusion implosions. Due to the difficulty and expense of building these imagers, at most only a few two-dimensional projection images will be available to reconstruct the three-dimensional (3D) sources. In this paper, we present a technique that has been developed for the 3D reconstruction of neutron, gamma-ray, and x-ray sources from a minimal number of 2D projections using spherical harmonics decomposition. We present the detailed algorithms used for this characterization and the results of reconstructed sources from experimental neutron and x-ray data collected at OMEGA and NIF.
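A least-squares fit of spherical-harmonic coefficients to sampled angular data conveys the basic idea of such a decomposition (the paper's full 3D reconstruction from projections involves considerably more machinery). The sketch below uses scipy's sph_harm convention and a hypothetical maximum degree.

```python
import numpy as np
from scipy.special import sph_harm

def fit_sph_harm(theta, phi, values, l_max):
    """Least-squares fit of spherical-harmonic coefficients to samples of an
    angular function (e.g. the angular shape of an emission region).

    theta: azimuthal angles in [0, 2*pi); phi: polar angles in [0, pi]
    (scipy's sph_harm convention). Returns coefficients indexed by (l, m).
    """
    terms = [(l, m) for l in range(l_max + 1) for m in range(-l, l + 1)]
    A = np.column_stack([sph_harm(m, l, theta, phi) for l, m in terms])
    coeffs, *_ = np.linalg.lstsq(A, values.astype(complex), rcond=None)
    return dict(zip(terms, coeffs))

# A low-order fit (l_max = 2 or 4) already captures the dominant low-mode
# asymmetries of a roughly spherical source from a handful of sampled directions.
```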
Algorithm For Solution Of Subset-Regression Problems
NASA Technical Reports Server (NTRS)
Verhaegen, Michel
1991-01-01
Reliable and flexible algorithm for solution of subset-regression problem performs QR decomposition with new column-pivoting strategy, enables selection of subset directly from originally defined regression parameters. This feature, in combination with number of extensions, makes algorithm very flexible for use in analysis of subset-regression problems in which parameters have physical meanings. Also extended to enable joint processing of columns contaminated by noise with those free of noise, without using scaling techniques.
Subgrid-scale physical parameterization in atmospheric modeling: How can we make it consistent?
NASA Astrophysics Data System (ADS)
Yano, Jun-Ichi
2016-07-01
Approaches to subgrid-scale physical parameterization in atmospheric modeling are reviewed by taking turbulent combustion flow research as a point of reference. Three major general approaches are considered for its consistent development: moment, distribution density function (DDF), and mode decomposition. The moment expansion is a standard method for describing subgrid-scale turbulent flows both in geophysics and engineering. The DDF (commonly called PDF) approach is intuitively appealing, as it deals with the distribution of variables at the subgrid scale in a more direct manner. Mode decomposition was originally applied by Aubry et al (1988 J. Fluid Mech. 192 115-73) in the context of wall boundary-layer turbulence. It is specifically designed to represent coherencies in a compact manner by a low-dimensional dynamical system. Their original proposal adopts the proper orthogonal decomposition (empirical orthogonal functions) as the mode-decomposition basis. However, the methodology can easily be generalized to any decomposition basis. Among these, wavelets are a particularly attractive alternative. The mass-flux formulation that is currently adopted in the majority of atmospheric models for parameterizing convection can also be considered a special case of mode decomposition, adopting segmentally constant modes for the expansion basis. This perspective further identifies a very basic but also general geometrical constraint imposed on the mass-flux formulation: the segmentally constant approximation. Mode decomposition can, furthermore, be understood by analogy with a Galerkin method in numerical modeling. This analogy suggests that the subgrid parameterization may be re-interpreted as a type of mesh refinement in numerical modeling. A link between the subgrid parameterization and downscaling problems is also pointed out.
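The proper orthogonal decomposition mentioned above is, numerically, just an SVD of a mean-subtracted snapshot matrix; the short sketch below shows that construction, with the snapshot layout (space by time) as an assumed convention.

```python
import numpy as np

def pod_modes(snapshots, n_modes):
    """Proper orthogonal decomposition of a snapshot matrix (space x time).

    The left singular vectors are the empirical orthogonal functions; the
    singular values rank how much variance each coherent mode captures.
    """
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
    energy = s**2 / np.sum(s**2)                      # fraction of variance per mode
    amplitudes = np.diag(s[:n_modes]) @ Vt[:n_modes]  # time coefficients of each mode
    return U[:, :n_modes], amplitudes, energy[:n_modes]
```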
Computer Code for Transportation Network Design and Analysis
DOT National Transportation Integrated Search
1977-01-01
This document describes the results of research into the application of the mathematical programming technique of decomposition to practical transportation network problems. A computer code called Catnap (for Control Analysis Transportation Network A...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pautz, Shawn D.; Bailey, Teresa S.
Here, the efficiency of discrete ordinates transport sweeps depends on the scheduling algorithm, the domain decomposition, the problem to be solved, and the computational platform. Sweep scheduling algorithms may be categorized by their approach to several issues. In this paper we examine the strategy of domain overloading for mesh partitioning as one of the components of such algorithms. In particular, we extend the domain overloading strategy, previously defined and analyzed for structured meshes, to the general case of unstructured meshes. We also present computational results for both the structured and unstructured domain overloading cases. We find that an appropriate amount of domain overloading can greatly improve the efficiency of parallel sweeps for both structured and unstructured partitionings of the test problems examined on up to 10^5 processor cores.
Turbulent fluid motion IV-averages, Reynolds decomposition, and the closure problem
NASA Technical Reports Server (NTRS)
Deissler, Robert G.
1992-01-01
Ensemble, time, and space averages as applied to turbulent quantities are discussed, and pertinent properties of the averages are obtained. Those properties, together with Reynolds decomposition, are used to derive the averaged equations of motion and the one- and two-point moment or correlation equations. The terms in the various equations are interpreted. The closure problem of the averaged equations is discussed, and possible closure schemes are considered. Those schemes usually require an input of supplemental information unless the averaged equations are closed by calculating their terms by a numerical solution of the original unaveraged equations. The law of the wall for velocities and temperatures, the velocity- and temperature-defect laws, and the logarithmic laws for velocities and temperatures are derived. Various notions of randomness and their relation to turbulence are considered in light of ergodic theory.
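A numerical illustration of the decomposition and of the unclosed second-moment term it produces is straightforward; the sketch below assumes an ensemble of velocity records stored as a 2D array, which is an illustrative convention rather than anything prescribed in the text.

```python
import numpy as np

def reynolds_decomposition(u):
    """Reynolds decomposition of a velocity record u(t) into mean + fluctuation.

    u is an ensemble of realizations with shape (n_realizations, n_times);
    averaging over the ensemble axis gives U(t), and u' = u - U.
    """
    U = u.mean(axis=0, keepdims=True)   # ensemble average
    u_prime = u - U                     # fluctuating part, <u'> = 0 by construction
    return U.squeeze(), u_prime

def reynolds_stress(u_prime, v_prime):
    """One-point second moment <u'v'>, which (times -rho) appears as the
    Reynolds stress in the averaged momentum equation: the unclosed term."""
    return (u_prime * v_prime).mean(axis=0)
```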
Definition of a parametric form of nonsingular Mueller matrices.
Devlaminck, Vincent; Terrier, Patrick
2008-11-01
The goal of this paper is to propose a mathematical framework to define and analyze a general parametric form of an arbitrary nonsingular Mueller matrix. Starting from previous results about nondepolarizing matrices, we generalize the method to any nonsingular Mueller matrix. We address this problem in a six-dimensional space in order to introduce a transformation group with the same number of degrees of freedom and explain why subsets of O(5,1), the orthogonal group associated with six-dimensional Minkowski space, are a physically admissible solution to this question. Generators of this group are used to define possible expressions of an arbitrary nonsingular Mueller matrix. Ultimately, the problem of decomposition of these matrices is addressed, and we point out that the "reverse" and "forward" decomposition concepts recently introduced may be inferred from the formalism we propose.
Low-rank matrix decomposition and spatio-temporal sparse recovery for STAP radar
Sen, Satyabrata
2015-08-04
We develop space-time adaptive processing (STAP) methods by leveraging the advantages of sparse signal processing techniques in order to detect a slowly-moving target. We observe that the inherent sparse characteristics of a STAP problem can be formulated as the low-rankness of clutter covariance matrix when compared to the total adaptive degrees-of-freedom, and also as the sparse interference spectrum on the spatio-temporal domain. By exploiting these sparse properties, we propose two approaches for estimating the interference covariance matrix. In the first approach, we consider a constrained matrix rank minimization problem (RMP) to decompose the sample covariance matrix into a low-rank positive semidefinite and a diagonal matrix. The solution of RMP is obtained by applying the trace minimization technique and the singular value decomposition with matrix shrinkage operator. Our second approach deals with the atomic norm minimization problem to recover the clutter response-vector that has a sparse support on the spatio-temporal plane. We use convex relaxation based standard sparse-recovery techniques to find the solutions. With extensive numerical examples, we demonstrate the performances of proposed STAP approaches with respect to both the ideal and practical scenarios, involving Doppler-ambiguous clutter ridges, spatial and temporal decorrelation effects. As a result, the low-rank matrix decomposition based solution requires secondary measurements as many as twice the clutter rank to attain a near-ideal STAP performance; whereas the spatio-temporal sparsity based approach needs a considerably small number of secondary data.
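The first, structural idea (sample covariance approximated as a low-rank positive semidefinite part plus a diagonal part) can be illustrated with a simple alternating eigenvalue-truncation heuristic; this is not the trace-minimization program of the abstract, and the loading term and steering vector in the final comment are hypothetical.

```python
import numpy as np

def lowrank_plus_diagonal(R_hat, rank, n_iter=50):
    """Approximate a sample covariance matrix as R ≈ L + D, with L low-rank
    positive semidefinite (clutter) and D diagonal (noise), by alternating an
    eigenvalue truncation for L with a diagonal refit for D."""
    D = np.diag(np.diag(R_hat)).astype(float)
    for _ in range(n_iter):
        w, V = np.linalg.eigh(R_hat - D)              # symmetric eigendecomposition
        w = np.clip(w, 0.0, None)                     # enforce positive semidefiniteness
        idx = np.argsort(w)[::-1][:rank]              # keep the clutter-rank largest modes
        L = (V[:, idx] * w[idx]) @ V[:, idx].T
        D = np.diag(np.clip(np.diag(R_hat - L), 0.0, None))  # refit the diagonal residual
    return L, D

# An adaptive weight could then use the structured estimate, e.g. (hypothetical):
# w = np.linalg.solve(L + D + 1e-6 * np.eye(len(D)), steering_vector)
```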
Aircraft Engine Gas Path Diagnostic Methods: Public Benchmarking Results
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Borguet, Sebastien; Leonard, Olivier; Zhang, Xiaodong (Frank)
2013-01-01
Recent technology reviews have identified the need for objective assessments of aircraft engine health management (EHM) technologies. To help address this issue, a gas path diagnostic benchmark problem has been created and made publicly available. This software tool, referred to as the Propulsion Diagnostic Method Evaluation Strategy (ProDiMES), has been constructed based on feedback provided by the aircraft EHM community. It provides a standard benchmark problem enabling users to develop, evaluate and compare diagnostic methods. This paper will present an overview of ProDiMES along with a description of four gas path diagnostic methods developed and applied to the problem. These methods, which include analytical and empirical diagnostic techniques, will be described and associated blind-test-case metric results will be presented and compared. Lessons learned along with recommendations for improving the public benchmarking processes will also be presented and discussed.
A stable and accurate partitioned algorithm for conjugate heat transfer
NASA Astrophysics Data System (ADS)
Meng, F.; Banks, J. W.; Henshaw, W. D.; Schwendeman, D. W.
2017-09-01
We describe a new partitioned approach for solving conjugate heat transfer (CHT) problems where the governing temperature equations in different material domains are time-stepped in an implicit manner, but where the interface coupling is explicit. The new approach, called the CHAMP scheme (Conjugate Heat transfer Advanced Multi-domain Partitioned), is based on a discretization of the interface coupling conditions using a generalized Robin (mixed) condition. The weights in the Robin condition are determined from the optimization of a condition derived from a local stability analysis of the coupling scheme. The interface treatment combines ideas from optimized-Schwarz methods for domain-decomposition problems together with the interface jump conditions and additional compatibility jump conditions derived from the governing equations. For many problems (i.e. for a wide range of material properties, grid-spacings and time-steps) the CHAMP algorithm is stable and second-order accurate using no sub-time-step iterations (i.e. a single implicit solve of the temperature equation in each domain). In extreme cases (e.g. very fine grids with very large time-steps) it may be necessary to perform one or more sub-iterations. Each sub-iteration generally increases the range of stability substantially and thus one sub-iteration is likely sufficient for the vast majority of practical problems. The CHAMP algorithm is developed first for a model problem and analyzed using normal-mode theory. The theory provides a mechanism for choosing optimal parameters in the mixed interface condition. A comparison is made to the classical Dirichlet-Neumann (DN) method and, where applicable, to the optimized-Schwarz (OS) domain-decomposition method. For problems with different thermal conductivities and diffusivities, the CHAMP algorithm outperforms the DN scheme. For domain-decomposition problems with uniform conductivities and diffusivities, the CHAMP algorithm performs better than the typical OS scheme with one grid-cell overlap. The CHAMP scheme is also developed for general curvilinear grids and CHT examples are presented using composite overset grids that confirm the theory and demonstrate the effectiveness of the approach.
2016-08-10
thermal decomposition and mechanical damage of energetics. The program for the meeting included nine oral presentation sessions. Discussion leaders...USA) 7:30 pm - 7:35 pm Introduction by Discussion Leader 7:35 pm - 7:50 pm Vincent Baijot (Laboratory for Analysis and Architecture of Systems, CNRS...were synthesis of new materials, performance, advanced diagnostics, experimental techniques, theoretical approaches, and computational models for
2013-08-01
transformation models, such as thin-plate spline (1-3) or elastic-body spline (4, 5), is locally controlled. One of the main motivations behind the...research project. References: 1. Bookstein FL. Principal warps: thin-plate splines and the decomposition of deformations. IEEE Transactions on Pattern...Rohr K, Stiehl HS, Sprengel R, Buzug TM, Weese J, Kuhn MH. Landmark-based elastic registration using approximating thin-plate splines. IEEE Transactions
2013-08-01
as thin-plate spline (1-3) or elastic-body spline (4, 5), is locally controlled. One of the main motivations behind the use of B-spline...FL. Principal warps: thin-plate splines and the decomposition of deformations. IEEE Transactions on Pattern Analysis and Machine Intelligence...Weese J, Kuhn MH. Landmark-based elastic registration using approximating thin-plate splines. IEEE Transactions on Medical Imaging. 2001;20(6):526-34
Xu, Andrew Wei
2010-09-01
In genome rearrangement, given a set of genomes G and a distance measure d, the median problem asks for another genome q that minimizes the total distance ∑_{g∈G} d(q, g). This is a key problem in genome rearrangement based phylogenetic analysis. Although this problem is known to be NP-hard, we have shown in a previous article, on circular genomes and under the DCJ distance measure, that a family of patterns in the given genomes--represented by adequate subgraphs--allow us to rapidly find exact solutions to the median problem in a decomposition approach. In this article, we extend this result to the case of linear multichromosomal genomes, in order to solve more interesting problems on eukaryotic nuclear genomes. A multi-way capping problem in the linear multichromosomal case imposes an extra computational challenge on top of the difficulty in the circular case, and this difficulty has been underestimated in our previous study and is addressed in this article. We represent the median problem by the capped multiple breakpoint graph, extend the adequate subgraphs into the capped adequate subgraphs, and prove optimality-preserving decomposition theorems, which give us the tools to solve the median problem and the multi-way capping optimization problem together. We also develop an exact algorithm ASMedian-linear, which iteratively detects instances of (capped) adequate subgraphs and decomposes problems into subproblems. Tested on simulated data, ASMedian-linear can rapidly solve most problems with up to several thousand genes, and it also can provide optimal or near-optimal solutions to the median problem under the reversal/HP distance measures. ASMedian-linear is available at http://sites.google.com/site/andrewweixu.
Reactive power planning under high penetration of wind energy using Benders decomposition
Xu, Yan; Wei, Yanli; Fang, Xin; ...
2015-11-05
This study addresses the optimal allocation of reactive power volt-ampere reactive (VAR) sources under the paradigm of high penetration of wind energy. Reactive power planning (RPP) in this particular condition involves a high level of uncertainty because of wind power characteristics. To properly model wind generation uncertainty, a multi-scenario framework optimal power flow that considers the voltage stability constraint under the worst wind scenario and transmission N-1 contingency is developed. The objective of RPP in this study is to minimise the total cost, including the VAR investment cost and the expected generation cost. Therefore, RPP under this condition is modelled as a two-stage stochastic programming problem to optimise the VAR location and size in one stage, then to minimise the fuel cost in the other stage, and eventually, to find the global optimal RPP results iteratively. Benders decomposition is used to solve this model with an upper level problem (master problem) for VAR allocation optimisation and a lower problem (sub-problem) for generation cost minimisation. The impact of the potential reactive power support from doubly-fed induction generators (DFIG) is also analysed. Lastly, case studies on the IEEE 14-bus and 118-bus systems are provided to verify the proposed method.
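Schematically (generic symbols, not the paper's notation), the two-stage stochastic program and the Benders optimality cut returned by scenario subproblem s at a trial VAR allocation û with optimal dual multipliers λ̂_s take the form

\[
  \min_{u}\; c^{\mathsf T} u + \sum_{s} p_s\, Q_s(u), \qquad
  Q_s(u) = \min_{x \ge 0} \bigl\{\, f_s^{\mathsf T} x \;:\; A_s x \ge b_s - B_s u \,\bigr\},
\]
\[
  \theta_s \;\ge\; \hat{\lambda}_s^{\mathsf T} \bigl( b_s - B_s u \bigr),
\]

with u the VAR siting/sizing decisions and x the scenario dispatch; the master (allocation) problem and the subproblems (generation cost) are solved alternately until the bounds meet.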
Stathopoulos, Panagiotis; Papas, Serafim; Tsikaris, Vassilios
2006-03-01
Decomposition of the resin linkers during TFA cleavage of the peptides in the Fmoc strategy leads to alkylation of sensitive amino acids. The C-terminal amide alkylation, reported for the first time, is shown to be a major problem in peptide amides synthesized on the Rink amide resin. This side reaction occurs as a result of the Rink amide linker decomposition under TFA treatment of the peptide resin. The use of 1,3-dimethoxybenzene in a cleavage cocktail almost quantitatively prevents the formation of C-terminal N-alkylated peptide amides. Oxidized by-products in the tested Cys- and Met-containing peptides were not observed, even when thiols were not used in the cleavage mixture. Copyright (c) 2005 European Peptide Society and John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Shaykhian, Gholam Ali; Baggs, Rhoda
2007-01-01
In the early problem-solution era of software programming, functional decompositions were mainly used to design and implement software solutions. In functional decompositions, functions and data are introduced as two separate entities during the design phase, and are followed as such in the implementation phase. Functional decompositions make use of refactoring through optimizing the algorithms, grouping similar functionalities into common reusable functions, and using abstract representations of data where possible; all these are done during the implementation phase. This paper advocates the usage of object-oriented methodologies and design patterns as the centerpieces of refactoring software solutions. Refactoring software is a method of changing software design while explicitly preserving its external functionalities. The combined usage of object-oriented methodologies and design patterns to refactor should also benefit the overall software life cycle cost with improved software.
Generalized Higher Order Orthogonal Iteration for Tensor Learning and Decomposition.
Liu, Yuanyuan; Shang, Fanhua; Fan, Wei; Cheng, James; Cheng, Hong
2016-12-01
Low-rank tensor completion (LRTC) has successfully been applied to a wide range of real-world problems. Despite the broad, successful applications, existing LRTC methods may become very slow or even not applicable for large-scale problems. To address this issue, a novel core tensor trace-norm minimization (CTNM) method is proposed for simultaneous tensor learning and decomposition, and has a much lower computational complexity. In our solution, first, the equivalence relation of the trace norm of a low-rank tensor and its core tensor is induced. Second, the trace norm of the core tensor is used to replace that of the whole tensor, which leads to two much smaller scale matrix TNM problems. Finally, an efficient alternating direction augmented Lagrangian method is developed to solve our problems. Our CTNM formulation needs only O((R^N + NRI)log(√(I^N))) observations to reliably recover an Nth-order I×I×…×I tensor of n-rank (r, r, …, r), compared with the O(rI^(N-1)) observations required by those tensor TNM methods (I > R ≥ r). Extensive experimental results show that CTNM is usually more accurate than them, and is orders of magnitude faster.
On some Aitken-like acceleration of the Schwarz method
NASA Astrophysics Data System (ADS)
Garbey, M.; Tromeur-Dervout, D.
2002-12-01
In this paper we present a family of domain decomposition methods based on Aitken-like acceleration of the Schwarz method seen as an iterative procedure with a linear rate of convergence. We first present the so-called Aitken-Schwarz procedure for linear differential operators. The solver can be a direct solver when applied to the Helmholtz problem with a five-point finite difference scheme on regular grids. We then introduce the Steffensen-Schwarz variant, which is an iterative domain decomposition solver that can be applied to linear and nonlinear problems. We show that these solvers have reasonable numerical efficiency compared to classical fast solvers for the Poisson problem or multigrid for more general linear and nonlinear elliptic problems. However, the salient feature of our method is that our algorithm has high tolerance to slow networks in the context of distributed parallel computing and is attractive, generally speaking, to use with computer architectures for which performance is limited by the memory bandwidth rather than the flop performance of the CPU. This is nowadays the case for most parallel computers using the RISC processor architecture. We will illustrate this highly desirable property of our algorithm with large-scale computing experiments.
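As a minimal scalar illustration of the acceleration idea (not the Aitken-Schwarz interface algorithm itself), Aitken's delta-squared formula extrapolates a linearly convergent fixed-point iteration toward its limit; applying it repeatedly, as in the sketch below, is essentially Steffensen's method, mirroring the Steffensen-Schwarz variant mentioned above.

import numpy as np

def aitken_delta2(x0, g, n_iter=5):
    """Accelerate the fixed-point iteration x_{k+1} = g(x_k) with Aitken's
    delta-squared formula; for a linearly converging sequence, the
    extrapolated value is a much better estimate of the limit."""
    x = x0
    for _ in range(n_iter):
        x1, x2 = g(x), g(g(x))
        denom = x2 - 2.0 * x1 + x
        if abs(denom) < 1e-15:          # already (numerically) converged
            return x2
        x = x - (x1 - x) ** 2 / denom   # Aitken extrapolation
    return x

# Example: x = cos(x) converges linearly; the accelerated iteration reaches
# the fixed point (~0.739085) in a handful of steps.
print(aitken_delta2(0.5, np.cos))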
Economic Inequality in Presenting Vision in Shahroud, Iran: Two Decomposition Methods.
Mansouri, Asieh; Emamian, Mohammad Hassan; Zeraati, Hojjat; Hashemi, Hasan; Fotouhi, Akbar
2017-04-22
Visual acuity, like many other health-related problems, does not have an equal distribution in terms of socio-economic factors. We conducted this study to estimate and decompose economic inequality in presenting visual acuity using two methods and to compare their results in a population aged 40-64 years in Shahroud, Iran. The data of 5188 participants in the first phase of the Shahroud Cohort Eye Study, performed in 2009, were used for this study. Our outcome variable was presenting vision acuity (PVA) that was measured using LogMAR (logarithm of the minimum angle of resolution). The living standard variable used for estimation of inequality was the economic status and was constructed by principal component analysis on home assets. Inequality indices were concentration index and the gap between low and high economic groups. We decomposed these indices by the concentration index and Blinder-Oaxaca decomposition approaches respectively and compared the results. The concentration index of PVA was -0.245 (95% CI: -0.278, -0.212). The PVA gap between groups with a high and low economic status was 0.0705 and was in favor of the high economic group. Education, economic status, and age were the most important contributors of inequality in both concentration index and Blinder-Oaxaca decomposition. Percent contribution of these three factors in the concentration index and Blinder-Oaxaca decomposition was 41.1% vs. 43.4%, 25.4% vs. 19.1% and 15.2% vs. 16.2%, respectively. Other factors including gender, marital status, employment status and diabetes had minor contributions. This study showed that individuals with poorer visual acuity were more concentrated among people with a lower economic status. The main contributors of this inequality were similar in concentration index and Blinder-Oaxaca decomposition. So, it can be concluded that setting appropriate interventions to promote the literacy and income level in people with low economic status, formulating policies to address economic problems in the elderly, and paying more attention to their vision problems can help to alleviate economic inequality in visual acuity. © 2018 The Author(s); Published by Kerman University of Medical Sciences. This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
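For reference, a minimal sketch of the "convenient covariance" formula commonly used to compute a concentration index (a generic estimator with synthetic data, not necessarily the exact estimator used in the study):

import numpy as np

def concentration_index(health, wealth):
    """Concentration index of `health` (e.g., LogMAR presenting vision)
    with respect to the ranking induced by `wealth` (economic status score),
    using the convenient covariance formula C = 2*cov(h, r) / mean(h)."""
    health = np.asarray(health, dtype=float)
    n = len(health)
    order = np.argsort(wealth)                      # poorest to richest
    rank = np.empty(n)
    rank[order] = (np.arange(1, n + 1) - 0.5) / n   # fractional rank
    return 2.0 * np.cov(health, rank, bias=True)[0, 1] / health.mean()

# Toy example: worse vision (higher LogMAR) concentrated among the poor
# gives a negative index, matching the sign reported in the study.
rng = np.random.default_rng(0)
wealth = rng.normal(size=500)
logmar = 0.3 - 0.1 * wealth + 0.05 * rng.normal(size=500)
print(concentration_index(logmar, wealth))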
Udhayakumar, Ganesan; Sujatha, Chinnaswamy Manoharan; Ramakrishnan, Swaminathan
2013-01-01
Analysis of bone strength in radiographic images is an important component of estimation of bone quality in diseases such as osteoporosis. Conventional radiographic femur bone images are used to analyze its architecture using bi-dimensional empirical mode decomposition method. Surface interpolation of local maxima and minima points of an image is a crucial part of bi-dimensional empirical mode decomposition method and the choice of appropriate interpolation depends on specific structure of the problem. In this work, two interpolation methods of bi-dimensional empirical mode decomposition are analyzed to characterize the trabecular femur bone architecture of radiographic images. The trabecular bone regions of normal and osteoporotic femur bone images (N = 40) recorded under standard condition are used for this study. The compressive and tensile strength regions of the images are delineated using pre-processing procedures. The delineated images are decomposed into their corresponding intrinsic mode functions using interpolation methods such as Radial basis function multiquadratic and hierarchical b-spline techniques. Results show that bi-dimensional empirical mode decomposition analyses using both interpolations are able to represent architectural variations of femur bone radiographic images. As the strength of the bone depends on architectural variation in addition to bone mass, this study seems to be clinically useful.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kempka, S.N.; Strickland, J.H.; Glass, M.W.
1995-04-01
A formulation to satisfy velocity boundary conditions for the vorticity form of the incompressible, viscous fluid momentum equations is presented. The tangential and normal components of the velocity boundary condition are satisfied simultaneously by creating vorticity adjacent to boundaries. The newly created vorticity is determined using a kinematical formulation which is a generalization of Helmholtz's decomposition of a vector field. Though it has not been generally recognized, these formulations resolve the over-specification issue associated with creating vorticity to satisfy velocity boundary conditions. The generalized decomposition has not been widely used, apparently due to a lack of a useful physical interpretation. An analysis is presented which shows that the generalized decomposition has a relatively simple physical interpretation which facilitates its numerical implementation. The implementation of the generalized decomposition is discussed in detail. As an example the flow in a two-dimensional lid-driven cavity is simulated. The solution technique is based on a Lagrangian transport algorithm in the hydrocode ALEGRA. ALEGRA's Lagrangian transport algorithm has been modified to solve the vorticity transport equation and the generalized decomposition, thus providing a new, accurate method to simulate incompressible flows. This numerical implementation and the new boundary condition formulation allow vorticity-based formulations to be used in a wider range of engineering problems.
Limited-memory adaptive snapshot selection for proper orthogonal decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oxberry, Geoffrey M.; Kostova-Vassilevska, Tanya; Arrighi, Bill
2015-04-02
Reduced order models are useful for accelerating simulations in many-query contexts, such as optimization, uncertainty quantification, and sensitivity analysis. However, offline training of reduced order models can have prohibitively expensive memory and floating-point operation costs in high-performance computing applications, where memory per core is limited. To overcome this limitation for proper orthogonal decomposition, we propose a novel adaptive selection method for snapshots in time that limits offline training costs by selecting snapshots according to an error control mechanism similar to that found in adaptive time-stepping ordinary differential equation solvers. The error estimator used in this work is related to theory bounding the approximation error in time of proper orthogonal decomposition-based reduced order models, and memory usage is minimized by computing the singular value decomposition using a single-pass incremental algorithm. Results for a viscous Burgers' test problem demonstrate convergence in the limit as the algorithm error tolerances go to zero; in this limit, the full order model is recovered to within discretization error. The resulting method can be used on supercomputers to generate proper orthogonal decomposition-based reduced order models, or as a subroutine within hyperreduction algorithms that require taking snapshots in time, or within greedy algorithms for sampling parameter space.
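A much-simplified sketch of the idea: greedy snapshot selection controlled by a projection-error tolerance. For clarity it recomputes a thin SVD of the kept snapshots, whereas the method described above uses a memory-saving single-pass incremental SVD.

import numpy as np

def select_snapshots(snapshots, tol=1e-3):
    """Greedily keep only those snapshots (columns) whose projection error
    onto the POD basis of the snapshots kept so far exceeds `tol`.
    Returns the kept indices and the POD basis of the kept snapshots."""
    kept, basis = [], None
    for j in range(snapshots.shape[1]):
        s = snapshots[:, j]
        if basis is not None:
            err = np.linalg.norm(s - basis @ (basis.T @ s)) / max(np.linalg.norm(s), 1e-300)
            if err < tol:
                continue                  # snapshot already well represented
        kept.append(j)
        # Rebuild a thin POD basis of the kept snapshots (a single-pass
        # incremental SVD would update the factors instead).
        basis, _, _ = np.linalg.svd(snapshots[:, kept], full_matrices=False)
    return kept, basis

# Example: a slowly translating 1D pulse needs only a few snapshots.
x = np.linspace(0.0, 1.0, 200)
t = np.linspace(0.0, 1.0, 100)
data = np.array([np.exp(-((x - 0.3 - 0.4 * tj) ** 2) / 0.01) for tj in t]).T
idx, _ = select_snapshots(data, tol=1e-2)
print(len(idx), "of", data.shape[1], "snapshots kept")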
Di Somma, Ilaria; Pollio, Antonino; Pinto, Gabriele; De Falco, Maria; Pizzo, Elio; Andreozzi, Roberto
2010-04-15
The knowledge of the substances which form when a molecule undergoes chemical reactions under unusual conditions is required by European legislation to evaluate the risks associated with an industrial chemical process. A thermal decomposition is often the result of a loss of control of the process which leads to the formation of many substances in some cases not easily predictable. The evaluation of the change of an overall toxicity passing from the parent compound to the mixture of its thermal decomposition products has been already proposed as a practical approach to this problem when preliminary indications about the temperature range in which the molecule decomposes are available. A new procedure is proposed in this work for the obtainment of the mixtures of thermal decomposition products also when there is no previous information about the thermal behaviour of investigated molecules. A scanning calorimetric run that is aimed to identify the onset temperature of the decomposition process is coupled to an isoperibolic one in order to obtain and collect the products. An algal strain is adopted for toxicological assessments of chemical compounds and mixtures. An extension of toxicological investigations to human cells is also attempted. 2009 Elsevier B.V. All rights reserved.
Matrix decompositions of two-dimensional nuclear magnetic resonance spectra.
Havel, T F; Najfeld, I; Yang, J X
1994-08-16
Two-dimensional NMR spectra are rectangular arrays of real numbers, which are commonly regarded as digitized images to be analyzed visually. If one treats them instead as mathematical matrices, linear algebra techniques can also be used to extract valuable information from them. This matrix approach is greatly facilitated by means of a physically significant decomposition of these spectra into a product of matrices--namely, S = PAP^T. Here, P denotes a matrix whose columns contain the digitized contours of each individual peak or multiplet in the one-dimensional spectrum, P^T is its transpose, and A is an interaction matrix specific to the experiment in question. The practical applications of this decomposition are considered in detail for two important types of two-dimensional NMR spectra, double quantum-filtered correlated spectroscopy and nuclear Overhauser effect spectroscopy, both in the weak-coupling approximation. The elements of A are the signed intensities of the cross-peaks in a double quantum-filtered correlated spectrum, or the integrated cross-peak intensities in the case of a nuclear Overhauser effect spectrum. This decomposition not only permits these spectra to be efficiently simulated but also permits the corresponding inverse problems to be given an elegant mathematical formulation to which standard numerical methods are applicable. Finally, the extension of this decomposition to the case of strong coupling is given.
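A small numerical sketch of the decomposition S = PAP^T (hypothetical peak contours and interaction matrix, only to illustrate that A is recoverable from S by least squares when P is known):

import numpy as np

# Digitized 1D peak contours: each column of P is one peak's line shape
# (here hypothetical Lorentzians on a 256-point frequency axis).
f = np.linspace(-1.0, 1.0, 256)
centers, width = [-0.5, 0.0, 0.6], 0.03
P = np.array([1.0 / (1.0 + ((f - c) / width) ** 2) for c in centers]).T

# Interaction matrix A: signed cross-peak intensities (symmetric, hypothetical).
A = np.array([[1.0, -0.2, 0.0],
              [-0.2, 1.0, 0.4],
              [0.0, 0.4, 1.0]])

S = P @ A @ P.T                         # simulated 2D spectrum, S = P A P^T

# Inverse problem: recover A from S given P, via the pseudoinverse of P.
P_pinv = np.linalg.pinv(P)
A_hat = P_pinv @ S @ P_pinv.T
print(np.allclose(A_hat, A))            # True in the noise-free case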
NASA Astrophysics Data System (ADS)
Roy, Satadru
Traditional approaches to design and optimize a new system often use a system-centric objective and do not take into consideration how the operator will use this new system alongside other existing systems. This "hand-off" between the design of the new system and how the new system operates alongside other systems might lead to a sub-optimal performance with respect to the operator-level objective. In other words, the system that is optimal for its system-level objective might not be best for the system-of-systems level objective of the operator. Among the few available references that describe attempts to address this hand-off, most follow an MDO-motivated subspace decomposition approach of first designing a very good system and then providing this system to the operator, who decides the best way to use this new system along with the existing systems. The motivating example in this dissertation presents one such problem that includes aircraft design, airline operations and revenue management "subspaces". The research here develops an approach that could simultaneously solve these subspaces posed as a monolithic optimization problem. The monolithic approach makes the problem a Mixed Integer/Discrete Non-Linear Programming (MINLP/MDNLP) problem, which is extremely difficult to solve. The presence of expensive, sophisticated engineering analyses further aggravates the problem. To tackle this challenge problem, the work here presents a new optimization framework that simultaneously solves the subspaces to capture the "synergism" in the problem that the previous decomposition approaches may not have exploited, addresses mixed-integer/discrete type design variables in an efficient manner, and accounts for computationally expensive analysis tools. The framework combines concepts from efficient global optimization, Kriging partial least squares, and gradient-based optimization. This approach then demonstrates its ability to solve an 11-route airline network problem consisting of 94 decision variables including 33 integer and 61 continuous type variables. This application problem is a representation of an interacting group of systems and provides key challenges to the optimization framework to solve the MINLP problem, as reflected by the presence of a moderate number of integer and continuous type design variables and expensive analysis tools. The results indicate that simultaneously solving the subspaces could lead to significant improvement in the fleet-level objective of the airline when compared to the previously developed sequential subspace decomposition approach. In developing the approach to solve the MINLP/MDNLP challenge problem, several test problems provided the ability to explore performance of the framework. While solving these test problems, the framework showed that it could solve other MDNLP problems including categorically discrete variables, indicating that the framework could have broader application than the new aircraft design-fleet allocation-revenue management problem.
Calculation of excitation energies from the CC2 linear response theory using Cholesky decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baudin, Pablo, E-mail: baudin.pablo@gmail.com; qLEAP – Center for Theoretical Chemistry, Department of Chemistry, Aarhus University, Langelandsgade 140, DK-8000 Aarhus C; Marín, José Sánchez
2014-03-14
A new implementation of the approximate coupled cluster singles and doubles CC2 linear response model is reported. It employs a Cholesky decomposition of the two-electron integrals that significantly reduces the computational cost and the storage requirements of the method compared to standard implementations. Our algorithm also exploits a partitioning form of the CC2 equations which reduces the dimension of the problem and avoids the storage of doubles amplitudes. We present calculations of excitation energies of benzene using a hierarchy of basis sets and compare the results with conventional CC2 calculations. The reduction of the scaling is evaluated as well as the effect of the Cholesky decomposition parameter on the quality of the results. The new algorithm is used to perform an extrapolation to complete basis set investigation on the spectroscopically interesting benzylallene conformers. A set of calculations on medium-sized molecules is carried out to check the dependence of the accuracy of the results on the decomposition thresholds. Moreover, CC2 singlet excitation energies of the free base porphin are also presented.
Gibbsian Stationary Non-equilibrium States
NASA Astrophysics Data System (ADS)
De Carlo, Leonardo; Gabrielli, Davide
2017-09-01
We study the structure of stationary non-equilibrium states for interacting particle systems from a microscopic viewpoint. In particular we discuss two different discrete geometric constructions. We apply both of them to determine non-reversible transition rates corresponding to a fixed invariant measure. The first one uses the equivalence of this problem with the construction of divergence free flows on the transition graph. Since divergence free flows are characterized by cyclic decompositions we can generate families of models from elementary cycles on the configuration space. The second construction is a functional discrete Hodge decomposition for translational covariant discrete vector fields. According to this, for example, the instantaneous current of any interacting particle system on a finite torus can be canonically decomposed into a gradient part, a circulation term and a harmonic component. All three components are associated with functions on the configuration space. This decomposition is unique and constructive. The stationary condition can be interpreted as an orthogonality condition with respect to a harmonic discrete vector field, and we use this decomposition to construct models having a fixed invariant measure.
Triple/quadruple patterning layout decomposition via linear programming and iterative rounding
NASA Astrophysics Data System (ADS)
Lin, Yibo; Xu, Xiaoqing; Yu, Bei; Baldick, Ross; Pan, David Z.
2017-04-01
As the feature size of the semiconductor technology scales down to 10 nm and beyond, multiple patterning lithography (MPL) has become one of the most practical candidates for lithography, along with other emerging technologies, such as extreme ultraviolet lithography (EUVL), e-beam lithography (EBL), and directed self-assembly. Due to the delay of EUVL and EBL, triple and even quadruple patterning is considered to be used for lower metal and contact layers with tight pitches. In the process of MPL, layout decomposition is the key design stage, where a layout is split into various parts and each part is manufactured through a separate mask. For metal layers, stitching may be allowed to resolve conflicts, whereas it is forbidden for contact and via layers. We focus on the application of layout decomposition where stitching is not allowed, such as for contact and via layers. We propose a linear programming (LP) and iterative rounding solving technique to reduce the number of nonintegers in the LP relaxation problem. Experimental results show that the proposed algorithms can provide high quality decomposition solutions efficiently while introducing as few conflicts as possible.
Translation and integration of CCC nursing diagnoses into ICNP.
Matney, Susan A; DaDamio, Rebecca; Couderc, Carmela; Dlugos, Mary; Evans, Jonathan; Gianonne, Gay; Haskell, Robert; Hardiker, Nicholas; Coenen, Amy; Saba, Virginia K
2008-01-01
The purpose of this study was to translate and integrate nursing diagnosis concepts from the Clinical Care Classification (CCC) System Version 2.0 to DiagnosticPhenomenon or nursing diagnostic statements in the International Classification for Nursing Practice (ICNP) Version 1.0. Source concepts for CCC were mapped by the project team, where possible, to pre-coordinated ICNP terms. The manual decomposition of source concepts according to the ICNP 7-Axis Model served to validate the mappings. A total of 62% of the CCC Nursing Diagnoses were a pre-coordinated match to an ICNP concept, 35% were a post-coordinated match and only 3% had no match. During the mapping process, missing CCC concepts were submitted to the ICNP Programme, with a recommendation for inclusion in future releases.
An informal paper on large-scale dynamic systems
NASA Technical Reports Server (NTRS)
Ho, Y. C.
1975-01-01
Large scale systems are defined as systems requiring more than one decision maker to control the system. Decentralized control and decomposition are discussed for large scale dynamic systems. Information and many-person decision problems are analyzed.
Decomposition Technique for Remaining Useful Life Prediction
NASA Technical Reports Server (NTRS)
Saha, Bhaskar (Inventor); Goebel, Kai F. (Inventor); Saxena, Abhinav (Inventor); Celaya, Jose R. (Inventor)
2014-01-01
The prognostic tool disclosed here decomposes the problem of estimating the remaining useful life (RUL) of a component or sub-system into two separate regression problems: the feature-to-damage mapping and the operational conditions-to-damage-rate mapping. These maps are initially generated in off-line mode. One or more regression algorithms are used to generate each of these maps from measurements (and features derived from these), operational conditions, and ground truth information. This decomposition technique allows for the explicit quantification and management of different sources of uncertainty present in the process. Next, the maps are used in an on-line mode where run-time data (sensor measurements and operational conditions) are used in conjunction with the maps generated in off-line mode to estimate both current damage state as well as future damage accumulation. Remaining life is computed by subtracting the instance when the extrapolated damage reaches the failure threshold from the instance when the prediction is made.
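A toy sketch of the two-map decomposition with hypothetical damage data and simple polynomial regressors (the disclosed tool allows any regression algorithm for each map; all names and numbers below are illustrative):

import numpy as np

rng = np.random.default_rng(1)

# --- Off-line training data (hypothetical ground truth) -----------------
damage = np.linspace(0.0, 0.8, 200)                     # known damage states
feature = 2.0 * damage + 0.02 * rng.normal(size=200)    # sensor-derived feature
load = rng.uniform(0.5, 1.5, 200)                       # operational condition
rate = 0.01 * load ** 2                                 # damage growth per cycle

# Map 1: feature -> damage.  Map 2: operating condition -> damage rate.
feat2dmg = np.polynomial.Polynomial.fit(feature, damage, deg=1)
cond2rate = np.polynomial.Polynomial.fit(load, rate, deg=2)

# --- On-line use: estimate current damage, extrapolate to the threshold --
current_feature, expected_load, threshold = 0.9, 1.2, 1.0
d = feat2dmg(current_feature)           # current damage state from the feature
cycles = 0
while d < threshold and cycles < 10_000:
    d += cond2rate(expected_load)       # accumulate predicted damage per cycle
    cycles += 1
print("estimated RUL:", cycles, "cycles")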
NASA Astrophysics Data System (ADS)
Rahman, P. A.
2018-05-01
This scientific paper deals with a model of the knapsack optimization problem and a method for solving it based on a directed combinatorial search in the Boolean space. The specialized mathematical model offered by the author for decomposing the search zone into separate search spheres, and the algorithm for distributing the search spheres across the cores of a multi-core processor, are also discussed. The paper also provides an example of decomposition of the search zone into several search spheres and distribution of the search spheres across the cores of a quad-core processor. Finally, a formula offered by the author for estimating the theoretical maximum of the computational acceleration that can be achieved by parallelizing the search zone into search spheres on an unlimited number of processor cores is also given.
NASA Astrophysics Data System (ADS)
Gao, Shibo; Cheng, Yongmei; Song, Chunhua
2013-09-01
The technology of vision-based probe-and-drogue autonomous aerial refueling is an amazing task in modern aviation for both manned and unmanned aircraft. A key issue is to determine the relative orientation and position of the drogue and the probe accurately for relative navigation system during the approach phase, which requires locating the drogue precisely. Drogue detection is a challenging task due to disorderly motion of drogue caused by both the tanker wake vortex and atmospheric turbulence. In this paper, the problem of drogue detection is considered as a problem of moving object detection. A drogue detection algorithm based on low rank and sparse decomposition with local multiple features is proposed. The global and local information of drogue is introduced into the detection model in a unified way. The experimental results on real autonomous aerial refueling videos show that the proposed drogue detection algorithm is effective.
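A compact sketch of a generic low-rank plus sparse decomposition (principal component pursuit solved by an inexact augmented-Lagrangian iteration); the local multiple features and video-specific details of the proposed drogue detector are not included:

import numpy as np

def rpca(M, lam=None, mu=None, n_iter=200, tol=1e-7):
    """Split M into a low-rank part L (static background) and a sparse part S
    (isolated moving structure) via inexact ALM for principal component pursuit."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / np.abs(M).sum()
    Y = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        # Singular-value thresholding step for the low-rank part.
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # Element-wise soft-thresholding step for the sparse part.
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        Y += mu * (M - L - S)                      # dual update
        mu = min(1.1 * mu, 1e7)                    # standard penalty growth
        if np.linalg.norm(M - L - S) <= tol * np.linalg.norm(M):
            break
    return L, S

# Toy data: a rank-one static background plus a few bright pixels.
rng = np.random.default_rng(2)
background = np.outer(rng.random(60), rng.random(40))
sparse = np.zeros((60, 40))
sparse[rng.integers(0, 60, 20), rng.integers(0, 40, 20)] = 5.0
L, S = rpca(background + sparse)
print(np.linalg.svd(L, compute_uv=False)[:3].round(2))  # one dominant singular value expected
print(int((np.abs(S) > 0.5).sum()))                     # roughly the injected sparse pixels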
Huang, Yulin; Zha, Yuebo; Wang, Yue; Yang, Jianyu
2015-06-18
The forward looking radar imaging task is a practical and challenging problem for the adverse-weather aircraft landing industry. The deconvolution method can realize forward looking imaging, but it often leads to noise amplification in the radar image. In this paper, a forward looking radar imaging method based on deconvolution is presented for adverse-weather aircraft landing. We first present the theoretical background of the forward looking radar imaging task and its application to aircraft landing. Then, we convert the forward looking radar imaging task into a corresponding deconvolution problem, which is solved in the framework of algebraic theory using the truncated singular value decomposition method. The key issue of selecting the truncation parameter is addressed using the generalized cross validation approach. Simulation and experimental results demonstrate that the proposed method is effective in achieving angular resolution enhancement while suppressing noise amplification in forward looking radar imaging.
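A minimal sketch of the truncated-SVD step on a hypothetical 1D blurring operator; here the truncation level is set by a simple singular-value threshold, whereas the paper selects it by generalized cross validation:

import numpy as np

def tsvd_deconvolve(A, b, k):
    """Solve A x ~= b keeping only the k largest singular values of A,
    which bounds the noise amplification of a naive inverse."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

# Toy 1D problem: a Gaussian antenna pattern blurs two smooth targets.
n = 120
theta = np.linspace(-1.0, 1.0, n)
A = np.exp(-((theta[:, None] - theta[None, :]) ** 2) / (2 * 0.05 ** 2))
x_true = np.exp(-((theta + 0.3) ** 2) / 0.01) + np.exp(-((theta - 0.4) ** 2) / 0.01)
rng = np.random.default_rng(3)
b = A @ x_true + 0.01 * rng.normal(size=n)

s = np.linalg.svd(A, compute_uv=False)
k = int((s > 1e-2 * s[0]).sum())        # keep modes well above the noise floor
x_tsvd = tsvd_deconvolve(A, b, k)
x_naive = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(x_tsvd - x_true).round(3),
      np.linalg.norm(x_naive - x_true).round(3))   # truncation gives the far smaller error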
NASA Technical Reports Server (NTRS)
Nguyen, D. T.; Watson, Willie R. (Technical Monitor)
2005-01-01
The overall objectives of this research work are to formulate and validate efficient parallel algorithms, and to efficiently design/implement computer software for solving large-scale acoustic problems arising from the unified frameworks of the finite element procedures. The adopted parallel Finite Element (FE) Domain Decomposition (DD) procedures should fully take advantage of the multiple processing capabilities offered by most modern high performance computing platforms for efficient parallel computation. To achieve this objective, the formulation needs to integrate efficient sparse (and dense) assembly techniques, hybrid (or mixed) direct and iterative equation solvers, proper preconditioning strategies, unrolling strategies, and effective processor communication schemes. Finally, the numerical performance of the developed parallel finite element procedures will be evaluated by solving a series of structural and acoustic (symmetrical and un-symmetrical) problems on different computing platforms. Comparisons with existing "commercialized" and/or "public domain" software are also included, whenever possible.
NASA Astrophysics Data System (ADS)
Qing, Zhou; Weili, Jiao; Tengfei, Long
2014-03-01
The Rational Function Model (RFM) is a new generalized sensor model. It does not need the physical parameters of sensors to achieve an accuracy comparable to that of rigorous sensor models. At present, the main method used to solve for RPCs is least squares estimation. But when the number of coefficients is large or the distribution of the control points is uneven, the classical least squares method loses its superiority due to the ill-conditioning of the design matrix. Condition Index and Variance Decomposition Proportion (CIVDP) is a reliable method for diagnosing multicollinearity in the design matrix. It can not only detect the multicollinearity, but also locate the parameters involved and show the corresponding columns in the design matrix. In this paper, the CIVDP method is used to diagnose the ill-conditioning problem of the RFM and to find the multicollinearity in the normal matrix.
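A generic sketch of the Belsley-style diagnostics that CIVDP builds on: condition indices from the SVD of the column-scaled design matrix, together with variance-decomposition proportions that localize the collinear columns (toy data, not an RFM design matrix):

import numpy as np

def civdp(X):
    """Condition indices and variance-decomposition proportions of the
    design matrix X, with columns scaled to unit length as in Belsley's
    collinearity diagnostics."""
    Xs = X / np.linalg.norm(X, axis=0)           # unit-length columns
    U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
    cond_idx = s.max() / s                       # one index per singular value
    phi = (Vt.T ** 2) / s ** 2                   # phi[j, k]: coefficient j, singular value k
    pi = phi / phi.sum(axis=1, keepdims=True)    # variance-decomposition proportions
    return cond_idx, pi

# Toy design matrix with two nearly collinear columns.
rng = np.random.default_rng(4)
x1 = rng.normal(size=100)
X = np.column_stack([np.ones(100), x1, x1 + 1e-4 * rng.normal(size=100)])
ci, pi = civdp(X)
print(ci.round(1))           # one very large condition index flags the collinearity
print(pi.round(2))           # its column shows which coefficients are involved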
Grau, L; Laulagnet, B
2015-05-01
An analytical approach is investigated to model ground-plate interaction based on modal decomposition and the two-dimensional Fourier transform. A finite rectangular plate subjected to flexural vibration is coupled with the ground and modeled with the Kirchhoff hypothesis. A Navier equation represents the stratified ground, assumed infinite in the x- and y-directions and free at the top surface. To obtain an analytical solution, modal decomposition is applied to the structure and a Fourier Transform is applied to the ground. The result is a new tool for analyzing ground-plate interaction to resolve this problem: ground cross-modal impedance. It allows quantifying the added-stiffness, added-mass, and added-damping from the ground to the structure. Similarity with the parallel acoustic problem is highlighted. A comparison between the theory and the experiment shows good matching. Finally, specific cases are investigated, notably the influence of layer depth on plate vibration.
Path planning of decentralized multi-quadrotor based on fuzzy-cell decomposition algorithm
NASA Astrophysics Data System (ADS)
Iswanto; Wahyunggoro, Oyas; Cahyadi, Adha Imam
2017-04-01
The paper aims to present a design algorithm for multi-quadrotor lanes in order to move towards the goal quickly and avoid obstacles in an area with obstacles. There are several problems in path planning, including how to get to the goal position quickly and how to avoid static and dynamic obstacles. To overcome these problems, the paper presents a fuzzy logic algorithm and a fuzzy cell decomposition algorithm. Fuzzy logic is one of the artificial intelligence algorithms which can be applied to robot path planning and is able to detect static and dynamic obstacles. Cell decomposition is an algorithm from graph theory used to make a robot path map. By using the two algorithms the robot is able to get to the goal position and avoid obstacles, but it takes considerable time because they are not able to find the shortest path. Therefore, this paper describes a modification of the algorithms by adding a potential field algorithm used to provide weight values on the map applied for each quadrotor using decentralized control, so that the quadrotor is able to move to the goal position quickly by finding the shortest path. The simulations conducted have shown that the multi-quadrotor system can avoid various obstacles and find the shortest path by using the proposed algorithms.
Feeding and Eating Disorders: Ingestive Problems of Infancy, Childhood, and Adolescence.
ERIC Educational Resources Information Center
Kerwin, MaryLouise E.; Berkowitz, Robert I.
1996-01-01
The fourth edition of the "Diagnostic Statistical Manual of Mental Disorders" (DSM) recognizes that feeding problems of infants and children are not typically the same as eating problems of adolescents, thus the addition of a broad diagnostic category, "Feeding and Eating Disorders of Infancy or Early Childhood." Subtypes are…
SVD analysis of Aura TES spectral residuals
NASA Technical Reports Server (NTRS)
Beer, Reinhard; Kulawik, Susan S.; Rodgers, Clive D.; Bowman, Kevin W.
2005-01-01
Singular Value Decomposition (SVD) analysis is both a powerful diagnostic tool and an effective method of noise filtering. We present the results of an SVD analysis of an ensemble of spectral residuals acquired in September 2004 from a 16-orbit Aura Tropospheric Emission Spectrometer (TES) Global Survey and compare them to alternative methods such as zonal averages. In particular, the technique highlights issues such as the orbital variation of instrument response and incompletely modeled effects of surface emissivity and atmospheric composition.
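A minimal synthetic sketch of SVD-based filtering of an ensemble of residual spectra (rows are spectra, columns are channels); the leading singular vectors capture coherent structure while the truncation suppresses uncorrelated noise:

import numpy as np

def svd_filter(residuals, k):
    """Keep the k leading singular vectors of the residual ensemble; the
    truncated reconstruction retains systematic structure and discards
    most of the uncorrelated noise."""
    U, s, Vt = np.linalg.svd(residuals, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k], s

# Synthetic ensemble: one systematic spectral feature shared by all spectra
# (e.g., an unmodeled emissivity signature) buried in random noise.
rng = np.random.default_rng(5)
channels = np.linspace(0.0, 1.0, 300)
signature = np.exp(-((channels - 0.4) ** 2) / 0.002)
amplitudes = rng.normal(1.0, 0.3, size=(150, 1))
residuals = amplitudes * signature + 0.5 * rng.normal(size=(150, 300))

filtered, s = svd_filter(residuals, k=1)
print(s[:4].round(1))        # a dominant singular value separates from the noise floor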
Approximation algorithms for a genetic diagnostics problem.
Kosaraju, S R; Schäffer, A A; Biesecker, L G
1998-01-01
We define and study a combinatorial problem called WEIGHTED DIAGNOSTIC COVER (WDC) that models the use of a laboratory technique called genotyping in the diagnosis of an important class of chromosomal aberrations. An optimal solution to WDC would enable us to define a genetic assay that maximizes the diagnostic power for a specified cost of laboratory work. We develop approximation algorithms for WDC by making use of the well-known problem SET COVER for which the greedy heuristic has been extensively studied. We prove worst-case performance bounds on the greedy heuristic for WDC and for another heuristic we call directional greedy. We implemented both heuristics. We also implemented a local search heuristic that takes the solutions obtained by greedy and dir-greedy and applies swaps until they are locally optimal. We report their performance on a real data set that is representative of the options that a clinical geneticist faces for the real diagnostic problem. Many open problems related to WDC remain, both of theoretical interest and practical importance.
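For reference, a sketch of the classical greedy heuristic for weighted set cover that the WDC heuristics build on (generic set cover with toy data, not the full diagnostic-cover objective; the marker names and costs are hypothetical):

def greedy_weighted_cover(universe, sets, cost):
    """Pick sets in order of best cost per newly covered element until the
    universe is covered; returns the chosen set names."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Most cost-effective set: lowest cost per newly covered element.
        name = min((s for s in sets if sets[s] & uncovered),
                   key=lambda s: cost[s] / len(sets[s] & uncovered))
        chosen.append(name)
        uncovered -= sets[name]
    return chosen

# Toy instance: genotyping markers (sets) covering chromosomal regions (elements).
regions = range(1, 7)
markers = {"m1": {1, 2, 3}, "m2": {3, 4}, "m3": {4, 5, 6}, "m4": {1, 6}}
lab_cost = {"m1": 2.0, "m2": 1.0, "m3": 2.0, "m4": 1.5}
print(greedy_weighted_cover(regions, markers, lab_cost))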
Formal and heuristic system decomposition methods in multidisciplinary synthesis. Ph.D. Thesis, 1991
NASA Technical Reports Server (NTRS)
Bloebaum, Christina L.
1991-01-01
The multidisciplinary interactions which exist in large scale engineering design problems provide a unique set of difficulties. These difficulties are associated primarily with unwieldy numbers of design variables and constraints, and with the interdependencies of the discipline analysis modules. Such obstacles require design techniques which account for the inherent disciplinary couplings in the analyses and optimizations. The objective of this work was to develop an efficient holistic design synthesis methodology that takes advantage of the synergistic nature of integrated design. A general decomposition approach for optimization of large engineering systems is presented. The method is particularly applicable for multidisciplinary design problems which are characterized by closely coupled interactions among discipline analyses. The advantage of subsystem modularity allows for implementation of specialized methods for analysis and optimization, computational efficiency, and the ability to incorporate human intervention and decision making in the form of an expert systems capability. The resulting approach is not a method applicable to only a specific situation, but rather, a methodology which can be used for a large class of engineering design problems in which the system is non-hierarchic in nature.
Vectorial finite elements for solving the radiative transfer equation
NASA Astrophysics Data System (ADS)
Badri, M. A.; Jolivet, P.; Rousseau, B.; Le Corre, S.; Digonnet, H.; Favennec, Y.
2018-06-01
The discrete ordinate method coupled with the finite element method is often used for the spatio-angular discretization of the radiative transfer equation. In this paper we attempt to improve upon such a discretization technique. Instead of using standard finite elements, we reformulate the radiative transfer equation using vectorial finite elements. In comparison to standard finite elements, this reformulation yields faster timings for the linear system assemblies, as well as for the solution phase when using scattering media. The proposed vectorial finite element discretization for solving the radiative transfer equation is cross-validated against a benchmark problem available in literature. In addition, we have used the method of manufactured solutions to verify the order of accuracy for our discretization technique within different absorbing, scattering, and emitting media. For solving large problems of radiation on parallel computers, the vectorial finite element method is parallelized using domain decomposition. The proposed domain decomposition method scales on large number of processes, and its performance is unaffected by the changes in optical thickness of the medium. Our parallel solver is used to solve a large scale radiative transfer problem of the Kelvin-cell radiation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gropp, W.D.; Keyes, D.E.
1988-03-01
The authors discuss the parallel implementation of preconditioned conjugate gradient (PCG)-based domain decomposition techniques for self-adjoint elliptic partial differential equations in two dimensions on several architectures. The complexity of these methods is described on a variety of message-passing parallel computers as a function of the size of the problem, number of processors and relative communication speeds of the processors. They show that communication startups are very important, and that even the small amount of global communication in these methods can significantly reduce the performance of many message-passing architectures.
Optimal pattern synthesis for speech recognition based on principal component analysis
NASA Astrophysics Data System (ADS)
Korsun, O. N.; Poliyev, A. V.
2018-02-01
The algorithm for building an optimal pattern for the purpose of automatic speech recognition, which increases the probability of correct recognition, is developed and presented in this work. The optimal pattern forming is based on the decomposition of an initial pattern into principal components, which makes it possible to reduce the dimension of the multi-parameter optimization problem. At the next step the training samples are introduced and the optimal estimates for the principal component decomposition coefficients are obtained by a numeric parameter optimization algorithm. Finally, we consider the experiment results that show the improvement in speech recognition introduced by the proposed optimization algorithm.
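A minimal sketch of forming a reduced pattern by principal component decomposition (generic PCA via the SVD on synthetic data; the subsequent optimization of the decomposition coefficients against training samples is not shown):

import numpy as np

def pca_basis(patterns, k):
    """Return the mean pattern and the k leading principal directions of a
    set of training patterns (rows = utterances, columns = features)."""
    mean = patterns.mean(axis=0)
    _, _, Vt = np.linalg.svd(patterns - mean, full_matrices=False)
    return mean, Vt[:k]

def to_coefficients(pattern, mean, components):
    return components @ (pattern - mean)          # reduced-dimension coefficients

def from_coefficients(coeff, mean, components):
    return mean + components.T @ coeff            # pattern rebuilt from coefficients

# Toy data: 50 noisy realizations of a 40-dimensional reference pattern.
rng = np.random.default_rng(6)
ref = np.sin(np.linspace(0.0, 3.0, 40))
train = ref + 0.1 * rng.normal(size=(50, 40))
mean, comps = pca_basis(train, k=3)
c = to_coefficients(train[0], mean, comps)
print(c.shape, np.linalg.norm(from_coefficients(c, mean, comps) - train[0]).round(3))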
Partial information decomposition as a spatiotemporal filter.
Flecker, Benjamin; Alford, Wesley; Beggs, John M; Williams, Paul L; Beer, Randall D
2011-09-01
Understanding the mechanisms of distributed computation in cellular automata requires techniques for characterizing the emergent structures that underlie information processing in such systems. Recently, techniques from information theory have been brought to bear on this problem. Building on this work, we utilize the new technique of partial information decomposition to show that previous information-theoretic measures can confound distinct sources of information. We then propose a new set of filters and demonstrate that they more cleanly separate out the background domains, particles, and collisions that are typically associated with information storage, transfer, and modification in cellular automata.
Ink-jet printing of silver metallization for photovoltaics
NASA Technical Reports Server (NTRS)
Vest, R. W.
1986-01-01
The status of the ink-jet printing program at Purdue University is described. The drop-on-demand printing system was modified to use metallo-organic decomposition (MOD) inks. Also, an IBM AT computer was integrated into the ink-jet printer system to provide operational functions and contact pattern configuration. The integration of the ink-jet printing system, problems encountered, and solutions derived were described in detail. The status of ink-jet printing using a MOD ink was discussed. The ink contained silver neodecanoate and bismuth 2-ethylhexanoate dissolved in toluene, the MOD ink decomposition products being 99 wt% Ag and 1 wt% Bi.
Dynamic Evaluation of Two Decades of CMAQ Simulations ...
This presentation focuses on the dynamic evaluation of the CMAQ model over the continental United States using multi-decadal simulations for the period from 1990 to 2010 to examine how well the changes in observed ozone air quality induced by variations in meteorology and/or emissions are simulated by the model. We applied spectral decomposition of the ozone time-series using the KZ filter to assess the variations in the strengths of synoptic (weather-induced variations) and baseline (long-term variation) forcings, embedded in the simulated and observed concentrations. The results reveal that CMAQ captured the year-to-year variability (more so in the later years than the earlier years) and the synoptic forcing in accordance with what the observations are showing. The National Exposure Research Laboratory (NERL) Computational Exposure Division (CED) develops and evaluates data, decision-support tools, and models to be applied to media-specific or receptor-specific problem areas. CED uses modeling-based approaches to characterize exposures, evaluate fate and transport, and support environmental diagnostics/forensics with input from multiple data sources. It also develops media- and receptor-specific models, process models, and decision support tools for use both within and outside of EPA.
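A short sketch of the Kolmogorov-Zurbenko (KZ) filter used for this kind of spectral decomposition: an m-point moving average iterated p times, with KZ(15, 5) a common choice for separating the baseline from the synoptic component of daily ozone series (the window choices and synthetic series below are illustrative):

import numpy as np

def kz_filter(x, m, p):
    """Kolmogorov-Zurbenko filter: apply an m-point centered moving average
    p times to the series x."""
    y = np.asarray(x, dtype=float)
    kernel = np.ones(m) / m
    for _ in range(p):
        y = np.convolve(y, kernel, mode="same")
    return y

# Synthetic daily ozone: long-term trend + seasonal cycle + synoptic noise.
days = np.arange(3 * 365)
series = 40 + 0.002 * days + 10 * np.sin(2 * np.pi * days / 365) \
         + 5 * np.random.default_rng(7).normal(size=days.size)
baseline = kz_filter(series, m=15, p=5)    # long-term + seasonal component
synoptic = series - baseline               # weather-induced (short-term) component
print(round(float(baseline.std()), 2), round(float(synoptic.std()), 2))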
Regularization of soft-X-ray imaging in the DIII-D tokamak
Wingen, A.; Shafer, M. W.; Unterberg, E. A.; ...
2015-03-02
We developed an image inversion scheme for the soft X-ray imaging system (SXRIS) diagnostic at the DIII-D tokamak in order to obtain the local soft X-ray emission at a poloidal cross-section from the spatially line-integrated image taken by the SXRIS camera. The scheme uses the Tikhonov regularization method since the inversion problem is generally ill-posed. The regularization technique uses the generalized singular value decomposition to determine a solution that depends on a free regularization parameter. The latter has to be chosen carefully, and the so-called L-curve method to find the optimum regularization parameter is outlined. A representative test image is used to study the properties of the inversion scheme with respect to inversion accuracy, amount/strength of regularization, image noise and image resolution. Moreover, the optimum inversion parameters are identified, while the L-curve method successfully computes the optimum regularization parameter. Noise is found to be the most limiting issue, but sufficient regularization is still possible at noise-to-signal ratios up to 10%-15%. Finally, the inversion scheme is applied to measured SXRIS data and the line-integrated SXRIS image is successfully inverted.
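A generic sketch of Tikhonov regularization via the SVD together with a scan of the regularization parameter for an L-curve-style choice (standard-form Tikhonov on a toy problem; the SXRIS scheme uses the generalized SVD and a curvature-based corner search):

import numpy as np

def tikhonov_svd(A, b, alphas):
    """Standard-form Tikhonov solutions x(alpha) = argmin ||Ax - b||^2 + alpha^2 ||x||^2
    computed from the SVD, together with the residual and solution norms
    needed to draw the L-curve."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    sols, res_norm, sol_norm = [], [], []
    for a in alphas:
        f = s ** 2 / (s ** 2 + a ** 2)          # Tikhonov filter factors
        x = Vt.T @ (f * beta / s)
        sols.append(x)
        res_norm.append(np.linalg.norm(A @ x - b))
        sol_norm.append(np.linalg.norm(x))
    return sols, np.array(res_norm), np.array(sol_norm)

# Toy ill-posed problem: smooth blurring operator and noisy line-integrated data.
n = 100
t = np.linspace(0.0, 1.0, n)
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / 0.002) / n
x_true = np.sin(2 * np.pi * t)
rng = np.random.default_rng(8)
b = A @ x_true + 1e-3 * rng.normal(size=n)

alphas = np.logspace(-6, 0, 25)
sols, rn, xn = tikhonov_svd(A, b, alphas)
# Crude stand-in for the L-curve corner: the alpha minimizing the product of the
# two norms (a curvature-based corner search is used in practice).
best = int(np.argmin(rn * xn))
print("selected alpha:", alphas[best],
      "error:", np.linalg.norm(sols[best] - x_true).round(3))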
Program Helps Decompose Complex Design Systems
NASA Technical Reports Server (NTRS)
Rogers, James L., Jr.; Hall, Laura E.
1994-01-01
DeMAID (A Design Manager's Aid for Intelligent Decomposition) computer program is knowledge-based software system for ordering sequence of modules and identifying possible multilevel structure for design problem. Groups modular subsystems on basis of interactions among them. Saves considerable money and time in total design process, particularly in new design problem in which order of modules has not been defined. Available in two machine versions: Macintosh and Sun.
A Study of Strong Stability of Distributed Systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Cataltepe, Tayfun
1989-01-01
The strong stability of distributed systems is studied and the problem of characterizing strongly stable semigroups of operators associated with distributed systems is addressed. Main emphasis is on contractive systems. Three different approaches to the characterization of strongly stable contractive semigroups are developed. The first one is an operator theoretical approach. Using the theory of dilations, it is shown that every strongly stable contractive semigroup is related to the left shift semigroup on an L^2 space. Then, a decomposition for the state space which identifies strongly stable and unstable states is introduced. Based on this decomposition, conditions for a contractive semigroup to be strongly stable are obtained. Finally, extensions of Lyapunov's equation for distributed parameter systems are investigated. Sufficient conditions for weak and strong stabilities of uniformly bounded semigroups are obtained by relaxing the equivalent norm condition on the right hand side of the Lyapunov equation. These characterizations are then applied to the problem of feedback stabilization. First, it is shown via the state space decomposition that under certain conditions a contractive system (A,B) can be strongly stabilized by the feedback -B*. Then, application of the extensions of the Lyapunov equation results in sufficient conditions for weak, strong, and exponential stabilizations of contractive systems by the feedback -B*. Finally, it is shown that for a contractive system dx/dt = Ax + Bu (where B is any linear bounded operator), there is a related linear quadratic regulator problem and a corresponding steady state Riccati equation which always has a bounded nonnegative solution.
Video Shot Boundary Detection Using QR-Decomposition and Gaussian Transition Detection
NASA Astrophysics Data System (ADS)
Amiri, Ali; Fathy, Mahmood
2010-12-01
This article explores the problem of video shot boundary detection and examines a novel shot boundary detection algorithm by using QR-decomposition and modeling of gradual transitions by Gaussian functions. Specifically, the authors attend to the challenges of detecting gradual shots and extracting appropriate spatiotemporal features that affect the ability of algorithms to efficiently detect shot boundaries. The algorithm utilizes the properties of QR-decomposition and extracts a block-wise probability function that illustrates the probability of video frames to be in shot transitions. The probability function has abrupt changes in hard cut transitions, and semi-Gaussian behavior in gradual transitions. The algorithm detects these transitions by analyzing the probability function. Finally, we will report the results of the experiments using large-scale test sets provided by the TRECVID 2006, which has assessments for hard cut and gradual shot boundary detection. These results confirm the high performance of the proposed algorithm.
Violent societies: an application of orbital decomposition to the problem of human violence.
Spohn, M
2008-01-01
This study uses orbital decomposition to analyze the patterns of how governments lose their monopolies on violence, therefore allowing those societies to descend into violent states from which it is difficult to recover. The nonlinear progression by which the governing body loses its monopoly is based on the work of criminologist Lonnie Athens and applied from the individual to the societal scale. Four different kinds of societies are considered: Those where the governing body is both unwilling and unable to assert its monopoly on violence (former Yugoslavia); where it is unwilling (Peru); where it is unable (South Africa); and a smaller pocket of violent society within a larger, more stable one (Gujarat). In each instance, orbital decomposition turns up insights not apparent in the qualitative data or through linear statistical analysis, both about the nature of the descent into violence and about the progression itself.
NASA Astrophysics Data System (ADS)
Lahmiri, Salim
2016-08-01
The main purpose of this work is to explore the usefulness of fractal descriptors estimated in multi-resolution domains to characterize biomedical digital image texture. In this regard, three multi-resolution techniques are considered: the well-known discrete wavelet transform (DWT) and empirical mode decomposition (EMD), and the newly introduced variational mode decomposition (VMD). The original image is decomposed by the DWT, EMD, and VMD into different scales. Then, Fourier-spectrum-based fractal descriptors are estimated at specific scales and directions to characterize the image. The support vector machine (SVM) was used to perform supervised classification. The empirical study was applied to the problem of distinguishing between normal and abnormal brain magnetic resonance images (MRI) affected by Alzheimer's disease (AD). Our results demonstrate that fractal descriptors estimated in the VMD domain outperform those estimated in the DWT and EMD domains, and also those directly estimated from the original image.
Tomographic reconstruction of tokamak plasma light emission using wavelet-vaguelette decomposition
NASA Astrophysics Data System (ADS)
Schneider, Kai; Nguyen van Yen, Romain; Fedorczak, Nicolas; Brochard, Frederic; Bonhomme, Gerard; Farge, Marie; Monier-Garbet, Pascale
2012-10-01
Images acquired by cameras installed in tokamaks are difficult to interpret because the three-dimensional structure of the plasma is flattened in a non-trivial way. Nevertheless, taking advantage of the slow variation of the fluctuations along magnetic field lines, the optical transformation may be approximated by a generalized Abel transform, for which we proposed in Nguyen van yen et al., Nucl. Fus., 52 (2012) 013005, an inversion technique based on the wavelet-vaguelette decomposition. After validation of the new method using an academic test case and numerical data obtained with the Tokam 2D code, we present an application to an experimental movie obtained in the tokamak Tore Supra. A comparison with a classical regularization technique for ill-posed inverse problems, the singular value decomposition, allows us to assess the efficiency. The superiority of the wavelet-vaguelette technique is reflected in preserving local features, such as blobs and fronts, in the denoised emissivity map.
Matching multiple rigid domain decompositions of proteins
Flynn, Emily; Streinu, Ileana
2017-01-01
We describe efficient methods for consistently coloring and visualizing collections of rigid cluster decompositions obtained from variations of a protein structure, and lay the foundation for more complex setups that may involve different computational and experimental methods. The focus here is on three biological applications: the conceptually simpler problems of visualizing results of dilution and mutation analyses, and the more complex task of matching decompositions of multiple NMR models of the same protein. Implemented in the KINARI web server application, the improved visualization techniques give useful information about protein folding cores, help examine the effect of mutations on protein flexibility and function, and provide insights into the structural motions of PDB proteins solved with solution NMR. These tools have been developed with the goal of improving and validating rigidity analysis as a credible coarse-grained model capturing essential information about a protein's slow motions near the native state. PMID:28141528
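Consistent coloring across decompositions can be viewed as an assignment problem: match the rigid clusters of one decomposition to those of another so that the number of shared atoms is maximized, then reuse the matched colors. The sketch below illustrates only that generic idea using the Hungarian algorithm from SciPy; it is not KINARI's matching procedure, and the toy atom sets are invented.

```python
# Hedged sketch: match the rigid clusters of two decompositions of the same
# protein by maximizing shared atoms (an assignment problem), so that matched
# clusters can be drawn in the same color.  Toy data; not KINARI's procedure.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_decompositions(decomp_a, decomp_b):
    """decomp_a, decomp_b: lists of sets of atom indices, one set per rigid cluster.
    Returns pairs (i, j): cluster i of A is matched with cluster j of B."""
    overlap = np.array([[len(a & b) for b in decomp_b] for a in decomp_a])
    rows, cols = linear_sum_assignment(-overlap)     # negate to maximize overlap
    return list(zip(rows, cols))

# Two decompositions of a 10-atom toy structure.
A = [{0, 1, 2, 3}, {4, 5, 6}, {7, 8, 9}]
B = [{4, 5}, {0, 1, 2}, {6, 7, 8, 9}]
print(match_decompositions(A, B))                    # [(0, 1), (1, 0), (2, 2)]
```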
A Graph Based Backtracking Algorithm for Solving General CSPs
NASA Technical Reports Server (NTRS)
Pang, Wanlin; Goodwin, Scott D.
2003-01-01
Many AI tasks can be formalized as constraint satisfaction problems (CSPs), which involve finding values for variables subject to constraints. While solving a CSP is an NP-complete task in general, tractable classes of CSPs have been identified based on the structure of the underlying constraint graphs. Much effort has been spent on exploiting structural properties of the constraint graph to improve the efficiency of finding a solution. These efforts contributed to the development of a class of CSP solving algorithms called decomposition algorithms. The strength of CSP decomposition is that its worst-case complexity depends on the structural properties of the constraint graph and is usually better than the worst-case complexity of search methods. Its practical application is limited, however, since it cannot be applied if the CSP is not decomposable. In this paper, we propose a graph based backtracking algorithm called omega-CDBT, which shares the merits of both decomposition and search approaches while overcoming their weaknesses.
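For readers who want a concrete picture of the search side of this trade-off, the sketch below shows plain chronological backtracking over a binary CSP whose constraints label edges of the constraint graph; it is only the generic baseline that decomposition-aware algorithms such as omega-CDBT improve upon, not omega-CDBT itself, and the toy coloring problem is invented.

```python
# Hedged sketch: plain chronological backtracking over a binary CSP whose
# constraints label edges of the constraint graph.  This is the generic baseline
# only; it does not implement the omega-CDBT algorithm described above.
def consistent(var, value, assignment, constraints):
    for (x, y), pred in constraints.items():
        if x == var and y in assignment and not pred(value, assignment[y]):
            return False
        if y == var and x in assignment and not pred(assignment[x], value):
            return False
    return True

def backtrack(variables, domains, constraints, assignment=None):
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if consistent(var, value, assignment, constraints):
            assignment[var] = value
            result = backtrack(variables, domains, constraints, assignment)
            if result is not None:
                return result
            del assignment[var]
    return None                                  # no value works: backtrack

# Toy example: 3-color a small constraint graph (adjacent variables must differ).
variables = ["A", "B", "C"]
domains = {v: ["red", "green", "blue"] for v in variables}
constraints = {("A", "B"): lambda a, b: a != b, ("B", "C"): lambda a, b: a != b}
print(backtrack(variables, domains, constraints))
```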
NASA Technical Reports Server (NTRS)
Morino, L.
1986-01-01
Using the infinite-space decomposition, the nonuniqueness of the Helmholtz decomposition for the problem of three-dimensional unsteady incompressible flow around a body is considered. A representation for the velocity that is valid both in the fluid region and in the region inside the boundary surface is employed, and the motion of the boundary is described as the limiting case of a sequence of impulsive accelerations. At each instant of velocity discontinuity, vorticity is shown to be generated by the boundary condition on the normal component of the velocity, for both inviscid and viscous flows. In viscous flows, the vorticity is shown to diffuse into the surroundings, and the no-slip conditions are automatically satisfied. A trailing edge condition must be satisfied for the solution of the Euler equations to be the limit of the solution of the Navier-Stokes equations.
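For reference, the classical Helmholtz decomposition underlying this discussion splits the velocity field into an irrotational and a solenoidal part,

\[
\mathbf{v} \;=\; \nabla\phi \;+\; \nabla\times\boldsymbol{\psi}, \qquad \nabla\times(\nabla\phi)=\mathbf{0}, \qquad \nabla\cdot(\nabla\times\boldsymbol{\psi})=0,
\]

a splitting that becomes unique only once suitable decay, gauge, and boundary conditions are imposed on \(\phi\) and \(\boldsymbol{\psi}\); it is precisely this nonuniqueness for flow around a body that the abstract above addresses. The formula is the standard textbook decomposition, not a reproduction of the author's specific representation.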
Painful Shoulder in Swimmers: A Diagnostic Challenge.
ERIC Educational Resources Information Center
McMaster, William C.
1986-01-01
This article discusses the incidence, diagnosis, and treatment of painful shoulder in swimmers, including: regional problems that can cause shoulder pain; physical, clinical, and laboratory tests for diagnostic use; and approaches to management of the problem. (Author/CB)
NASA Astrophysics Data System (ADS)
Reiter, D. T.; Rodi, W. L.
2015-12-01
Constructing 3D Earth models through the joint inversion of large geophysical data sets presents numerous theoretical and practical challenges, especially when diverse types of data and model parameters are involved. Among the challenges are the computational complexity associated with large data and model vectors and the need to unify differing model parameterizations, forward modeling methods and regularization schemes within a common inversion framework. The challenges can be addressed in part by decomposing the inverse problem into smaller, simpler inverse problems that can be solved separately, providing one knows how to merge the separate inversion results into an optimal solution of the full problem. We have formulated an approach to the decomposition of large inverse problems based on the augmented Lagrangian technique from optimization theory. As commonly done, we define a solution to the full inverse problem as the Earth model minimizing an objective function motivated, for example, by a Bayesian inference formulation. Our decomposition approach recasts the minimization problem equivalently as the minimization of component objective functions, corresponding to specified data subsets, subject to the constraints that the minimizing models be equal. A standard optimization algorithm solves the resulting constrained minimization problems by alternating between the separate solution of the component problems and the updating of Lagrange multipliers that serve to steer the individual solution models toward a common model solving the full problem. We are applying our inversion method to the reconstruction of the crust and upper-mantle seismic velocity structure across Eurasia. Data for the inversion comprise a large set of P and S body-wave travel times and fundamental and first-higher mode Rayleigh-wave group velocities.
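A generic version of the decomposition described above can be written as follows (this is an illustrative textbook form, not necessarily the authors' exact formulation). With component objectives \(f_i\) for the data subsets, local model copies \(m_i\), a common model \(m\), and multipliers \(\lambda_i\), the augmented Lagrangian is

\[
L_\rho\big(\{m_i\}, m, \{\lambda_i\}\big) \;=\; \sum_i \Big[\, f_i(m_i) \;+\; \lambda_i^{\mathsf T}\,(m_i - m) \;+\; \tfrac{\rho}{2}\,\lVert m_i - m \rVert^2 \Big],
\]

and the algorithm alternates between minimizing over each \(m_i\) separately, minimizing over the common model \(m\), and updating the multipliers via \(\lambda_i \leftarrow \lambda_i + \rho\,(m_i - m)\).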
Model based approach to UXO imaging using the time domain electromagnetic method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lavely, E.M.
1999-04-01
Time domain electromagnetic (TDEM) sensors have emerged as a field-worthy technology for UXO detection in a variety of geological and environmental settings. This success has been achieved with commercial equipment that was not optimized for UXO detection and discrimination. The TDEM response displays a rich spatial and temporal behavior which is not currently utilized. Therefore, in this paper the author describes a research program for enhancing the effectiveness of the TDEM method for UXO detection and imaging. Fundamental research is required in at least three major areas: (a) model based imaging capability i.e. the forward and inverse problem, (b) detector modeling and instrument design, and (c) target recognition and discrimination algorithms. These research problems are coupled and demand a unified treatment. For example: (1) the inverse solution depends on solution of the forward problem and knowledge of the instrument response; (2) instrument design with improved diagnostic power requires forward and inverse modeling capability; and (3) improved target recognition algorithms (such as neural nets) must be trained with data collected from the new instrument and with synthetic data computed using the forward model. Further, the design of the appropriate input and output layers of the net will be informed by the results of the forward and inverse modeling. A more fully developed model of the TDEM response would enable the joint inversion of data collected from multiple sensors (e.g., TDEM sensors and magnetometers). Finally, the author suggests that a complementary approach to joint inversions is the statistical recombination of data using principal component analysis. The decomposition into principal components is useful since the first principal component contains those features that are most strongly correlated from image to image.
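The statistical recombination by principal component analysis suggested at the end of the abstract can be illustrated with a minimal SVD-based sketch; the data layout and synthetic inputs below are assumptions, not the author's processing chain.

```python
# Hedged sketch: principal component analysis of co-registered sensor maps
# (e.g., TDEM time gates and magnetometer data flattened to vectors) via the
# singular value decomposition.  Synthetic data; not the author's processing chain.
import numpy as np

def principal_components(data, k=3):
    """data: (n_images, n_pixels) array.  Returns per-image scores and spatial PCs."""
    centered = data - data.mean(axis=0, keepdims=True)
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    scores = u[:, :k] * s[:k]          # projection of each image onto the top-k PCs
    components = vt[:k]                # spatial patterns correlated across images
    return scores, components

# Toy usage: 6 synthetic "images" of 100 pixels sharing one strong feature.
rng = np.random.default_rng(1)
shared = rng.standard_normal(100)
data = np.outer(rng.standard_normal(6), shared) + 0.1 * rng.standard_normal((6, 100))
scores, comps = principal_components(data, k=2)
print(scores.shape, comps.shape)       # (6, 2) (2, 100)
```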
Ma, Hong-Wu; Zhao, Xue-Ming; Yuan, Ying-Jin; Zeng, An-Ping
2004-08-12
Metabolic networks are organized in a modular, hierarchical manner. Methods for a rational decomposition of the metabolic network into relatively independent functional subsets are essential to better understand the modularity and organization principle of a large-scale, genome-wide network. Network decomposition is also necessary for functional analysis of metabolism by pathway analysis methods that are often hampered by the problem of combinatorial explosion due to the complexity of metabolic networks. Decomposition methods proposed in the literature are mainly based on the connection degree of metabolites. To obtain a more reasonable decomposition, the global connectivity structure of metabolic networks should be taken into account. In this work, we use a reaction graph representation of a metabolic network for the identification of its global connectivity structure and for decomposition. A bow-tie connectivity structure similar to that previously discovered for the metabolite graph is found to exist in the reaction graph as well. Based on this bow-tie structure, a new decomposition method is proposed, which uses a distance definition derived from the path length between two reactions. A hierarchical classification tree is first constructed from the distance matrix among the reactions in the giant strong component of the bow-tie structure. These reactions are then grouped into different subsets based on the hierarchical tree. Reactions in the IN and OUT subsets of the bow-tie structure are subsequently placed in the corresponding subsets according to a 'majority rule'. Compared with the decomposition methods proposed in the literature, ours is based on combined properties of the global network structure and local reaction connectivity rather than, primarily, on the connection degree of metabolites. The method is applied to decompose the metabolic network of Escherichia coli. Eleven subsets are obtained. More detailed investigations of the subsets show that reactions in the same subset are indeed functionally related. The rational decomposition of metabolic networks, and subsequent studies of the subsets, make it easier to understand the inherent organization and functionality of metabolic networks at the modular level. http://genome.gbf.de/bioinformatics/
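A stripped-down version of this workflow (path-length distances between reactions of the giant strong component followed by hierarchical clustering) might look like the sketch below; the graph, the symmetrization of directed path lengths, and the cut level are illustrative assumptions rather than the authors' actual choices, and the treatment of the IN and OUT subsets is omitted.

```python
# Hedged sketch: group reactions by hierarchical clustering of path-length
# distances inside the giant strongly connected component of a reaction graph.
# The toy graph, the distance symmetrization and the cut level are illustrative.
import networkx as nx
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def decompose_reaction_graph(g, n_subsets=3):
    giant = max(nx.strongly_connected_components(g), key=len)
    nodes = sorted(giant)
    sp = dict(nx.all_pairs_shortest_path_length(g.subgraph(nodes)))
    n = len(nodes)
    dist = np.zeros((n, n))
    for i, a in enumerate(nodes):
        for j, b in enumerate(nodes):
            if i != j:
                dist[i, j] = 0.5 * (sp[a][b] + sp[b][a])   # symmetrize path lengths
    tree = linkage(squareform(dist, checks=False), method="average")
    labels = fcluster(tree, t=n_subsets, criterion="maxclust")
    return dict(zip(nodes, labels))

# Toy reaction graph: a bidirectional 6-cycle split into two subsets.
g = nx.DiGraph([(1, 2), (2, 1), (2, 3), (3, 2), (3, 4), (4, 3),
                (4, 5), (5, 4), (5, 6), (6, 5), (6, 1), (1, 6)])
print(decompose_reaction_graph(g, n_subsets=2))
```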
NASA Astrophysics Data System (ADS)
Lohmann, Timo
Electric sector models are powerful tools that guide policy makers and stakeholders. Long-term power generation expansion planning models are a prominent example and determine a capacity expansion for an existing power system over a long planning horizon. With the changes in the power industry away from monopolies and regulation, the focus of these models has shifted to competing electric companies maximizing their profit in a deregulated electricity market. In recent years, consumers have started to participate in demand response programs, actively influencing electricity load and price in the power system. We introduce a model that features investment and retirement decisions over a long planning horizon of more than 20 years, as well as an hourly representation of day-ahead electricity markets in which sellers of electricity face buyers. This combination makes our model both unique and challenging to solve. Decomposition algorithms, and especially Benders decomposition, can exploit the model structure. We present a novel method that can be seen as an alternative to generalized Benders decomposition and relies on dynamic linear overestimation. We prove its finite convergence and present computational results, demonstrating its superiority over traditional approaches. In certain special cases of our model, all necessary solution values in the decomposition algorithms can be directly calculated and solving mathematical programming problems becomes entirely obsolete. This leads to highly efficient algorithms that drastically outperform their programming problem-based counterparts. Furthermore, we discuss the implementation of all tailored algorithms and the challenges from a modeling software developer's standpoint, providing an insider's look into the modeling language GAMS. Finally, we apply our model to the Texas power system and design two electricity policies motivated by the U.S. Environment Protection Agency's recently proposed CO2 emissions targets for the power sector.
Development of an intelligent diagnostic system for reusable rocket engine control
NASA Technical Reports Server (NTRS)
Anex, R. P.; Russell, J. R.; Guo, T.-H.
1991-01-01
A description of an intelligent diagnostic system for the Space Shuttle Main Engines (SSME) is presented. This system is suitable for incorporation in an intelligent controller which implements accommodating closed-loop control to extend engine life and maximize available performance. The diagnostic system architecture is a modular, hierarchical, blackboard system which is particularly well suited for real-time implementation of a system which must be repeatedly updated and extended. The diagnostic problem is formulated as a hierarchical classification problem in which the failure hypotheses are represented in terms of predefined data patterns. The diagnostic expert system incorporates techniques for priority-based diagnostics, the combination of analytical and heuristic knowledge for diagnosis, integration of different AI systems, and the implementation of hierarchical distributed systems. A prototype reusable rocket engine diagnostic system (ReREDS) has been implemented. The prototype user interface and diagnostic performance using SSME test data are described.
NASA Astrophysics Data System (ADS)
Vera, N. C.; GMMC
2013-05-01
In this paper we present results for macrohybrid mixed Darcian flow in porous media in a general three-dimensional domain. The global problem is solved as a set of local subproblems posed using a domain decomposition method. The unknown fields of the local problems, velocity and pressure, are approximated using mixed finite elements. For this application, a general three-dimensional domain is considered and discretized using tetrahedra. The discrete domain is decomposed into subdomains, and the original problem is reformulated as a set of subproblems communicating through their interfaces. To solve this set of subproblems, we use mixed finite elements and parallel computing. The parallelization of a problem using this methodology can, in principle, fully exploit the available computing equipment and also provides results in less time, two very important elements in modeling.
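For context, the local subproblems referred to above are of mixed Darcy type; in a generic form (written here only for illustration, not as the authors' exact macrohybrid formulation) one seeks, on each subdomain \(\Omega_k\), a velocity \(\mathbf{u}\) and pressure \(p\) satisfying

\[
\mathbf{u} \;=\; -K\,\nabla p \quad \text{in } \Omega_k, \qquad \nabla\cdot\mathbf{u} \;=\; f \quad \text{in } \Omega_k,
\]

with \(\mathbf{u}\) and \(p\) approximated in mixed finite element spaces (for instance Raviart-Thomas elements for the velocity and piecewise constants for the pressure), and with the subdomain problems coupled through transmission conditions on the interfaces.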
NASA Astrophysics Data System (ADS)
Latyshev, A. V.; Gordeeva, N. M.
2017-09-01
We obtain an analytic solution of the boundary problem for the behavior (fluctuations) of an electron plasma with an arbitrary degree of degeneracy of the electron gas in the conductive layer in an external electric field. We use the kinetic Vlasov-Boltzmann equation with the Bhatnagar-Gross-Krook collision integral and the Maxwell equation for the electric field. We use the mirror boundary conditions for the reflections of electrons from the layer boundary. The boundary problem reduces to a one-dimensional problem with a single velocity. For this, we use the method of consecutive approximations, linearization of the equations with respect to the absolute distribution of the Fermi-Dirac electrons, and the conservation law for the number of particles. Separation of variables then helps reduce the problem equations to a characteristic system of equations. In the space of generalized functions, we find the eigensolutions of the initial system, which correspond to the continuous spectrum (Van Kampen mode). Solving the dispersion equation, we then find the eigensolutions corresponding to the adjoint and discrete spectra (Drude and Debye modes). We then construct the general solution of the boundary problem by decomposing it into the eigensolutions. The coefficients of the decomposition are given by the boundary conditions. This allows obtaining the decompositions of the distribution function and the electric field in explicit form.
The Subharmonic Behavior and Thresholds of High Frequency Ultrasound Contrast Agents
NASA Astrophysics Data System (ADS)
Allen, John
2016-11-01
Ultrasound contrast agents are encapsulated micro-bubbles used for diagnostic and therapeutic biomedical ultrasound. The agents oscillate nonlinearly about their equilibrium radii upon sufficient acoustic forcing and produce unique acoustic signatures that allow them to be distinguished from scattering from the surrounding tissue. The subharmonic response occurs below the fundamental and is associated with an acoustic pressure threshold. Subharmonic imaging using ultrasound contrast agents has been established for clinical applications at standard diagnostic frequencies typically below 20 MHz. However, for emerging high frequency applications (above 20 MHz), subharmonic imaging is an area of on-going research. The effects of attenuation from tissue are more significant and the characterization of agents is not as well understood. Due to their specificity and controlled production, polymer agents are useful for high frequency applications. In this study, we highlight novel techniques to measure and characterize the mechanical properties of the shell of polymer contrast agents. The definition of the subharmonic threshold is investigated with respect to mono-frequency and chirp forcing waveforms which have been used to achieve optimal subharmonic content in the backscattered signal. Time frequency analysis using the Empirical Mode Decomposition (EMD) and the Hilbert-Huang transform facilitates a more sensitive and robust methodology for characterization of subharmonic content with respect to non-stationary forcing. A new definition of the subharmonic threshold is proposed with respect to the energy content of the associated adaptive basis decomposition. Additional studies with respect to targeted agent behavior and cardiovascular disease are discussed. NIH, ONR.
Approximating the 0-1 Multiple Knapsack Problem with Agent Decomposition and Market Negotiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smolinski, B.
The 0-1 multiple knapsack problem appears in many domains from financial portfolio management to cargo ship stowing. Methods for solving it range from approximate algorithms, such as greedy algorithms, to exact algorithms, such as branch and bound. Approximate algorithms have no bounds on how poorly they perform and exact algorithms can suffer from exponential time and space complexities with large data sets. This paper introduces a market model based on agent decomposition and market auctions for approximating the 0-1 multiple knapsack problem, and an algorithm that implements the model (M(x)). M(x) traverses the solution space rather than getting caught in a local maximum, overcoming an inherent problem of many greedy algorithms. The use of agents ensures that infeasible solutions are not considered while traversing the solution space and that traversal of the solution space is not just random, but is also directed. M(x) is compared to a branch and bound algorithm (BB) and a simple greedy algorithm with a random shuffle (G(x)). The results suggest that M(x) is a good algorithm for approximating the 0-1 Multiple Knapsack problem. M(x) almost always found solutions that were close to optimal in a fraction of the time it took BB to run and with much less memory on large test data sets. M(x) usually performed better than G(x) on hard problems with correlated data.
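For concreteness, a greedy baseline of the kind M(x) is compared against can be sketched as below; it follows the usual value-density heuristic (comparable in spirit to G(x), though the paper's random-shuffle variant is not reproduced), and the item data are invented.

```python
# Hedged sketch: value-density greedy heuristic for the 0-1 multiple knapsack
# problem, comparable in spirit to the simple baseline G(x) discussed above;
# it is not the market-negotiation algorithm M(x).  Item data are illustrative.
def greedy_multiple_knapsack(values, weights, capacities):
    remaining = list(capacities)
    assignment = [None] * len(values)             # knapsack index per item, or None
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    for i in order:
        for k, cap in enumerate(remaining):
            if weights[i] <= cap:                 # first knapsack the item fits in
                assignment[i] = k
                remaining[k] -= weights[i]
                break
    total = sum(values[i] for i, k in enumerate(assignment) if k is not None)
    return assignment, total

values = [60, 100, 120, 80, 30]
weights = [10, 20, 30, 25, 5]
capacities = [40, 30]
print(greedy_multiple_knapsack(values, weights, capacities))   # packs value 310
```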
Scalable High-order Methods for Multi-Scale Problems: Analysis, Algorithms and Application
2016-02-26
The objective of this project was to develop a general CFD framework for multifidelity simulations, targeting multiscale problems as well as resilience. Subject terms: simulation, domain decomposition, CFD, gappy data, estimation theory, gap-tooth algorithm.
ERIC Educational Resources Information Center
Albanese, Mark A.; Jacobs, Richard M.
1990-01-01
The reliability and validity of a procedure to measure diagnostic-reasoning and problem-solving skills taught in predoctoral orthodontic education were studied using 68 second year dental students. The procedure includes stimulus material and 33 multiple-choice items. It is a feasible way of assessing problem-solving skills in dentistry education…
Users manual for the Variable dimension Automatic Synthesis Program (VASP)
NASA Technical Reports Server (NTRS)
White, J. S.; Lee, H. Q.
1971-01-01
A dictionary and some example problems for the Variable Dimension Automatic Synthesis Program (VASP) are submitted. The dictionary contains a description of each subroutine and instructions on its use. The example problems give the user a better perspective on the use of VASP for solving problems in modern control theory. These example problems include dynamic response, optimal control gain, solution of the sampled data matrix Riccati equation, matrix decomposition, and pseudo-inverse of a matrix. Listings of all subroutines are also included. The VASP program has been adapted to run in the conversational mode on the Ames 360/67 computer.
Diagnostic reasoning strategies and diagnostic success.
Coderre, S; Mandin, H; Harasym, P H; Fick, G H
2003-08-01
Cognitive psychology research supports the notion that experts use mental frameworks or "schemes", both to organize knowledge in memory and to solve clinical problems. The central purpose of this study was to determine the relationship between problem-solving strategies and the likelihood of diagnostic success. Think-aloud protocols were collected to determine the diagnostic reasoning used by experts and non-experts when attempting to diagnose clinical presentations in gastroenterology. Using logistic regression analysis, the study found that there is a relationship between diagnostic reasoning strategy and the likelihood of diagnostic success. Compared to hypothetico-deductive reasoning, the odds of diagnostic success were significantly greater when subjects used the diagnostic strategies of pattern recognition and scheme-inductive reasoning. Two other factors emerged as independent determinants of diagnostic success: expertise and clinical presentation. Not surprisingly, experts outperformed novices, while the content area of the clinical cases in each of the four clinical presentations demonstrated varying degrees of difficulty and thus diagnostic success. These findings have significant implications for medical educators. They support the introduction of "schemes" as a means of enhancing memory organization and improving diagnostic success.
Empirical mode decomposition of the ECG signal for noise removal
NASA Astrophysics Data System (ADS)
Khan, Jesmin; Bhuiyan, Sharif; Murphy, Gregory; Alam, Mohammad
2011-04-01
Electrocardiography is a diagnostic procedure for the detection and diagnosis of heart abnormalities. The electrocardiogram (ECG) signal contains important information that is utilized by physicians for the diagnosis and analysis of heart diseases. A good quality ECG signal therefore plays a vital role in the interpretation and identification of pathological, anatomical and physiological aspects of the whole cardiac muscle. However, ECG signals are corrupted by noise which severely limits the utility of the recorded ECG signal for medical evaluation. The most common noise present in the ECG signal is high frequency noise caused by the forces acting on the electrodes. In this paper, we propose a new ECG denoising method based on the empirical mode decomposition (EMD). The proposed method is able to enhance the ECG signal while removing the noise with minimum signal distortion. Simulation is done on the MIT-BIH database to verify the efficacy of the proposed algorithm. Experiments show that the presented method offers very good results in removing noise from the ECG signal.
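A minimal sketch of EMD-based denoising along these lines is shown below. It assumes the third-party PyEMD package (installable as EMD-signal) exposes an EMD class whose .emd() method returns the intrinsic mode functions of a 1-D signal, and it simply discards the first, highest-frequency IMF; both the package usage and the noise-removal rule are stated assumptions rather than the authors' exact procedure.

```python
# Hedged sketch: EMD-based removal of high-frequency noise from an ECG-like signal.
# Assumes the third-party PyEMD package ("pip install EMD-signal") provides an EMD
# class with an .emd() method returning the IMFs of a 1-D signal; discarding only
# the first IMF is an illustrative rule, not necessarily the paper's criterion.
import numpy as np
from PyEMD import EMD

fs = 360                                          # MIT-BIH-like sampling rate, Hz
t = np.arange(0, 5, 1 / fs)
clean = np.sin(2 * np.pi * 1.2 * t) + 0.5 * np.sin(2 * np.pi * 8 * t)   # toy "ECG"
noisy = clean + 0.3 * np.random.default_rng(0).standard_normal(t.size)

imfs = EMD().emd(noisy)                  # rows: IMF_1 (highest frequency) ... residue
denoised = imfs[1:].sum(axis=0)          # drop the first, noise-dominated IMF

print("RMS error before:", np.sqrt(np.mean((noisy - clean) ** 2)))
print("RMS error after: ", np.sqrt(np.mean((denoised - clean) ** 2)))
```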
System-independent characterization of materials using dual-energy computed tomography
Azevedo, Stephen G.; Martz, Jr., Harry E.; Aufderheide, III, Maurice B.; ...
2016-02-01
In this study, we present a new decomposition approach for dual-energy computed tomography (DECT) called SIRZ that provides precise and accurate material description, independent of the scanner, over diagnostic energy ranges (30 to 200 keV). System independence is achieved by explicitly including a scanner-specific spectral description in the decomposition method, and a new X-ray-relevant feature space. The feature space consists of electron density, ρe, and a new effective atomic number, Ze, which is based on published X-ray cross sections. Reference materials are used in conjunction with the system spectral response so that additional beam-hardening correction is not necessary. The technique is tested against other methods on DECT data of known specimens scanned by diverse spectra and systems. Uncertainties in accuracy and precision are less than 3% and 2%, respectively, for the (ρe, Ze) results, compared to prior methods that are inaccurate and imprecise (over 9%).
Triple/quadruple patterning layout decomposition via novel linear programming and iterative rounding
NASA Astrophysics Data System (ADS)
Lin, Yibo; Xu, Xiaoqing; Yu, Bei; Baldick, Ross; Pan, David Z.
2016-03-01
As feature size of the semiconductor technology scales down to 10nm and beyond, multiple patterning lithography (MPL) has become one of the most practical candidates for lithography, along with other emerging technologies such as extreme ultraviolet lithography (EUVL), e-beam lithography (EBL) and directed self assembly (DSA). Due to the delay of EUVL and EBL, triple and even quadruple patterning are considered to be used for lower metal and contact layers with tight pitches. In the process of MPL, layout decomposition is the key design stage, where a layout is split into various parts and each part is manufactured through a separate mask. For metal layers, stitching may be allowed to resolve conflicts, while it is forbidden for contact and via layers. In this paper, we focus on the application of layout decomposition where stitching is not allowed such as for contact and via layers. We propose a linear programming and iterative rounding (LPIR) solving technique to reduce the number of non-integers in the LP relaxation problem. Experimental results show that the proposed algorithms can provide high quality decomposition solutions efficiently while introducing as few conflicts as possible.
Kocurek, P; Kolomazník, K; Bařinová, M; Hendrych, J
2017-04-01
This paper deals with the problem of chromium recovery from chrome-tanned waste and thus with reducing the environmental impact of the leather industry. Chrome-tanned waste was transformed by alkaline enzymatic hydrolysis promoted by magnesium oxide into practically chromium-free, commercially applicable collagen hydrolysate and filtration cake containing a high portion of chromium. The crude and magnesium-deprived chromium cakes were subjected to a process of thermal decomposition at 650°C under oxygen-free conditions to reduce the amount of this waste and to study the effect of magnesium removal on the resulting products. Oxygen-free conditions were applied in order to prevent the oxidation of trivalent chromium into the hazardous hexavalent form. Thermal decomposition products from both crude and magnesium-deprived chrome cakes were characterized by high chromium content over 50%, which occurred as eskolaite (Cr2O3) and magnesiochromite (MgCr2O4) crystal phases, respectively. Thermal decomposition decreased the amount of chrome cake dry feed by 90%. Based on the performed experiments, a scheme for the total control of chromium in the leather industry was designed.
A Dual Super-Element Domain Decomposition Approach for Parallel Nonlinear Finite Element Analysis
NASA Astrophysics Data System (ADS)
Jokhio, G. A.; Izzuddin, B. A.
2015-05-01
This article presents a new domain decomposition method for nonlinear finite element analysis introducing the concept of dual partition super-elements. The method extends ideas from the displacement frame method and is ideally suited for parallel nonlinear static/dynamic analysis of structural systems. In the new method, domain decomposition is realized by replacing one or more subdomains in a "parent system," each with a placeholder super-element, where the subdomains are processed separately as "child partitions," each wrapped by a dual super-element along the partition boundary. The analysis of the overall system, including the satisfaction of equilibrium and compatibility at all partition boundaries, is realized through direct communication between all pairs of placeholder and dual super-elements. The proposed method has particular advantages for matrix solution methods based on the frontal scheme, and can be readily implemented for existing finite element analysis programs to achieve parallelization on distributed memory systems with minimal intervention, thus overcoming memory bottlenecks typically faced in the analysis of large-scale problems. Several examples are presented in this article which demonstrate the computational benefits of the proposed parallel domain decomposition approach and its applicability to the nonlinear structural analysis of realistic structural systems.
NASA Astrophysics Data System (ADS)
Qin, Xinqiang; Hu, Gang; Hu, Kai
2018-01-01
The decomposition of multiple source images using bidimensional empirical mode decomposition (BEMD) often produces mismatched bidimensional intrinsic mode functions, either by their number or their frequency, making image fusion difficult. A solution to this problem is proposed using a fixed number of iterations and a union operation in the sifting process. By combining the local regional features of the images, an image fusion method has been developed. First, the source images are decomposed using the proposed BEMD to produce the first intrinsic mode function (IMF) and residue component. Second, for the IMF component, a selection and weighted average strategy based on local area energy is used to obtain a high-frequency fusion component. Third, for the residue component, a selection and weighted average strategy based on local average gray difference is used to obtain a low-frequency fusion component. Finally, the fused image is obtained by applying the inverse BEMD transform. Experimental results show that the proposed algorithm provides superior performance over methods based on wavelet transform, line and column-based EMD, and complex empirical mode decomposition, both in terms of visual quality and objective evaluation criteria.
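The selection-and-weighted-average rule applied to the high-frequency (IMF) components can be sketched as follows; the local window, similarity threshold, and blending rule are illustrative assumptions, and the BEMD decomposition itself is not reproduced.

```python
# Hedged sketch: fuse the high-frequency (IMF) components of two source images by
# comparing local area energy; similar-energy regions are blended with
# energy-proportional weights, otherwise the higher-energy component is selected.
# Window size, threshold and the toy inputs are illustrative; the BEMD step that
# would produce the IMF images is not shown.
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_high_frequency(imf_a, imf_b, window=7, threshold=0.8):
    e_a = uniform_filter(imf_a ** 2, size=window)      # local area energy
    e_b = uniform_filter(imf_b ** 2, size=window)
    similarity = np.minimum(e_a, e_b) / (np.maximum(e_a, e_b) + 1e-12)
    w_a = e_a / (e_a + e_b + 1e-12)
    weighted = w_a * imf_a + (1.0 - w_a) * imf_b       # weighted-average branch
    selected = np.where(e_a >= e_b, imf_a, imf_b)      # selection branch
    return np.where(similarity > threshold, weighted, selected)

rng = np.random.default_rng(2)
a, b = rng.standard_normal((2, 128, 128))
print(fuse_high_frequency(a, b).shape)                 # (128, 128)
```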
Bada, J.L.; Shou, M.-Y.; Man, E.H.; Schroeder, R.A.
1978-01-01
The diagenesis of the hydroxy amino acids serine and threonine in foraminiferal tests has been investigated. The decomposition pathways of these amino acids are complex; the principal reactions appear to be dehydration, aldol cleavage and decarboxylation. Stereochemical studies indicate that the α-amino-n-butyric acid (ABA) detected in foraminiferal tests is the end product of the threonine dehydration pathway. Decomposition of serine and threonine in foraminiferal tests from two well-dated Caribbean deep-sea cores, P6304-8 and -9, has been found to follow irreversible first-order kinetics. Three empirical equations were derived for the disappearance of serine and threonine and the appearance of ABA. These equations can be used as a new geochronological method for dating foraminiferal tests from other deep-sea sediments. Preliminary results suggest that ages deduced from the ABA kinetics equation are most reliable because "species effect" and contamination problems are not important for this nonbiological amino acid. Because of the variable serine and threonine contents of modern foraminiferal species, it is likely that accurate age estimates can be obtained from the serine and threonine decomposition equations only if a homogeneous species assemblage or a single-species sample isolated from mixed natural assemblages is used. © 1978.
Terascale Optimal PDE Simulations (TOPS) Center
DOE Office of Scientific and Technical Information (OSTI.GOV)
Professor Olof B. Widlund
2007-07-09
Our work has focused on the development and analysis of domain decomposition algorithms for a variety of problems arising in continuum mechanics modeling. In particular, we have extended and analyzed FETI-DP and BDDC algorithms; these iterative solvers were first introduced and studied by Charbel Farhat and his collaborators, see [11, 45, 12], and by Clark Dohrmann of SANDIA, Albuquerque, see [43, 2, 1], respectively. These two closely related families of methods are of particular interest since they are used more extensively than other iterative substructuring methods to solve very large and difficult problems. Thus, the FETI algorithms are part of the SALINAS system developed by the SANDIA National Laboratories for very large scale computations, and as already noted, BDDC was first developed by a SANDIA scientist, Dr. Clark Dohrmann. The FETI algorithms are also making inroads in commercial engineering software systems. We also note that the analysis of these algorithms poses very real mathematical challenges. The success in developing this theory has, in several instances, led to significant improvements in the performance of these algorithms. A very desirable feature of these iterative substructuring and other domain decomposition algorithms is that they respect the memory hierarchy of modern parallel and distributed computing systems, which is essential for approaching peak floating point performance. The development of improved methods, together with more powerful computer systems, is making it possible to carry out simulations in three dimensions, with quite high resolution, relatively easily. This work is supported by high quality software systems, such as Argonne's PETSc library, which facilitates code development as well as the access to a variety of parallel and distributed computer systems. The success in finding scalable and robust domain decomposition algorithms for very large number of processors and very large finite element problems is, e.g., illustrated in [24, 25, 26]. This work is based on [29, 31]. Our work over these five and half years has, in our opinion, helped advance the knowledge of domain decomposition methods significantly. We see these methods as providing valuable alternatives to other iterative methods, in particular, those based on multi-grid. In our opinion, our accomplishments also match the goals of the TOPS project quite closely.
ADM For Solving Linear Second-Order Fredholm Integro-Differential Equations
NASA Astrophysics Data System (ADS)
Karim, Mohd F.; Mohamad, Mahathir; Saifullah Rusiman, Mohd; Che-Him, Norziha; Roslan, Rozaini; Khalid, Kamil
2018-04-01
In this paper, we apply the Adomian Decomposition Method (ADM) to numerically analyse linear second-order Fredholm integro-differential equations. The approximate solutions of the problems are calculated with the Maple package. Some numerical examples are considered to illustrate the ADM for solving this type of equation. The results are compared with the existing exact solutions. Thus, the Adomian decomposition method can be the best alternative method for solving linear second-order Fredholm integro-differential equations: it converges to the exact solution quickly and at the same time reduces the computational work required to solve the equation.
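As an illustration of the scheme in the linear case (written generically, not necessarily in the exact form used by the authors), consider

\[
u''(x) \;=\; f(x) \;+\; \lambda \int_a^b K(x,t)\,u(t)\,dt, \qquad u(a)=\alpha,\; u'(a)=\beta .
\]

Writing \(L = d^2/dx^2\) with the twofold integral inverse \(L^{-1}[g](x)=\int_a^x\!\int_a^{s} g(r)\,dr\,ds\), the ADM seeks \(u=\sum_{n\ge 0}u_n\) with the components generated recursively by

\[
u_0(x) \;=\; \alpha + \beta\,(x-a) + L^{-1}[f](x), \qquad
u_{n+1}(x) \;=\; \lambda\, L^{-1}\!\Big[\textstyle\int_a^b K(\cdot,t)\,u_n(t)\,dt\Big](x), \quad n \ge 0,
\]

and the partial sums \(\sum_{n=0}^{N}u_n\) serve as the approximate solution.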
The radiation chemistry of nuclear reactor decontaminating reagents
NASA Astrophysics Data System (ADS)
Sellers, Robin M.
Processes involved in the radiation chemistry of some typical nuclear reactor decontaminating reagents including complexing, reducing and oxidising agents are described. It is concluded that radiation-induced decomposition is only likely to be a problem with dilute formulations, and/or with minor additives such as corrosion inhibitors which are not protected from attack by the other constituents. Addition of a "sacrificial" compound may be necessary to overcome this. The importance of considering loss of function, rather than the decomposition rate of the starting material, is emphasised. Reagents based on low oxidation state metal ions (LOMI) can be regenerated by the radiation field in the presence of formate ion.
Considerations for Storage of High Test Hydrogen Peroxide (HTP) Utilizing Non-Metal Containers
NASA Technical Reports Server (NTRS)
Moore, Robin E.; Scott, Joseph P.; Wise, Harry
2005-01-01
When working with high concentrations of hydrogen peroxide, it is critical that the storage container be constructed of the proper materials, ones which will not degrade to the extent that container breakdown or dangerous decomposition occurs. It has been suggested that the only materials that will safely contain the peroxide for a significant period of time are stainless steels or aluminum, for use as High Test Hydrogen Peroxide (HTP) containers. The stability and decomposition of HTP are also discussed, as well as various means suggested in the literature to minimize these problems. The dangers of excess oxygen generation are also touched upon.
Fast Boundary Element Method for acoustics with the Sparse Cardinal Sine Decomposition
NASA Astrophysics Data System (ADS)
Alouges, François; Aussal, Matthieu; Parolin, Emile
2017-07-01
This paper presents the newly proposed method Sparse Cardinal Sine Decomposition that allows fast convolution on unstructured grids. We focus on its use when coupled with finite element techniques to solve acoustic problems with the (compressed) Boundary Element Method. In addition, we also compare the computational performances of two equivalent Matlab® and Python implementations of the method. We show validation test cases in order to assess the precision of the approach. Finally, the performance of the method is illustrated by the computation of the acoustic target strength of a realistic submarine from the Benchmark Target Strength Simulation international workshop.
NASA Technical Reports Server (NTRS)
Consoli, Robert David; Sobieszczanski-Sobieski, Jaroslaw
1990-01-01
Advanced multidisciplinary analysis and optimization methods, namely system sensitivity analysis and non-hierarchical system decomposition, are applied to reduce the cost and improve the visibility of an automated vehicle design synthesis process. This process is inherently complex due to the large number of functional disciplines and associated interdisciplinary couplings. Recent developments in system sensitivity analysis as applied to complex non-hierarchic multidisciplinary design optimization problems enable the decomposition of these complex interactions into sub-processes that can be evaluated in parallel. The application of these techniques results in significant cost, accuracy, and visibility benefits for the entire design synthesis process.
Hiremath, S B; Muraleedharan, A; Kumar, S; Nagesh, C; Kesavadas, C; Abraham, M; Kapilamoorthy, T R; Thomas, B
2017-04-01
Tumefactive demyelinating lesions with atypical features can mimic high-grade gliomas on conventional imaging sequences. The aim of this study was to assess the role of conventional imaging, DTI metrics (p:q tensor decomposition), and DSC perfusion in differentiating tumefactive demyelinating lesions and high-grade gliomas. Fourteen patients with tumefactive demyelinating lesions and 21 patients with high-grade gliomas underwent brain MR imaging with conventional, DTI, and DSC perfusion imaging. Imaging sequences were assessed for differentiation of the lesions. DTI metrics in the enhancing areas and perilesional hyperintensity were obtained by ROI analysis, and the relative CBV values in enhancing areas were calculated on DSC perfusion imaging. Conventional imaging sequences had a sensitivity of 80.9% and specificity of 57.1% in differentiating high-grade gliomas (P = .049) from tumefactive demyelinating lesions. DTI metrics (p:q tensor decomposition) and DSC perfusion demonstrated a statistically significant difference in the mean values of ADC, the isotropic component of the diffusion tensor, the anisotropic component of the diffusion tensor, the total magnitude of the diffusion tensor, and rCBV among enhancing portions in tumefactive demyelinating lesions and high-grade gliomas (P ≤ .02), with the highest specificity for ADC, the anisotropic component of the diffusion tensor, and relative CBV (92.9%). Mean fractional anisotropy values showed no significant statistical difference between tumefactive demyelinating lesions and high-grade gliomas. The combination of DTI and DSC parameters improved the diagnostic accuracy (area under the curve = 0.901). Addition of a heterogeneous enhancement pattern to DTI and DSC parameters improved it further (area under the curve = 0.966). The sensitivity increased from 71.4% to 85.7% after the addition of the enhancement pattern. DTI and DSC perfusion add profoundly to conventional imaging in differentiating tumefactive demyelinating lesions and high-grade gliomas. The combination of DTI metrics and DSC perfusion markedly improved diagnostic accuracy. © 2017 by American Journal of Neuroradiology.
Diagnostic tolerance for missing sensor data
NASA Technical Reports Server (NTRS)
Scarl, Ethan A.
1989-01-01
For practical automated diagnostic systems to continue functioning after failure, they must not only be able to diagnose sensor failures but also be able to tolerate the absence of data from the faulty sensors. It is shown that conventional (associational) diagnostic methods will have combinatoric problems when trying to isolate faulty sensors, even if they adequately diagnose other components. Moreover, attempts to extend the operation of diagnostic capability past sensor failure will necessarily compound those difficulties. Model-based reasoning offers a structured alternative that has no special problems diagnosing faulty sensors and can operate gracefully when sensor data is missing.
Rocket Engine Oscillation Diagnostics
NASA Technical Reports Server (NTRS)
Nesman, Tom; Turner, James E. (Technical Monitor)
2002-01-01
Rocket engine oscillating data can reveal many physical phenomena ranging from unsteady flow and acoustics to rotordynamics and structural dynamics. Because of this, engine diagnostics based on oscillation data should employ both signal analysis and physical modeling. This paper describes an approach to rocket engine oscillation diagnostics, types of problems encountered, and example problems solved. Determination of design guidelines and environments (or loads) from oscillating phenomena is required during initial stages of rocket engine design, while the additional tasks of health monitoring, incipient failure detection, and anomaly diagnostics occur during engine development and operation. Oscillations in rocket engines are typically related to flow driven acoustics, flow excited structures, or rotational forces. Additional sources of oscillatory energy are combustion and cavitation. Included in the example problems is a sampling of signal analysis tools employed in diagnostics. The rocket engine hardware includes combustion devices, valves, turbopumps, and ducts. Simple models of an oscillating fluid system or structure can be constructed to estimate pertinent dynamic parameters governing the unsteady behavior of engine systems or components. In the example problems it is shown that simple physical modeling when combined with signal analysis can be successfully employed to diagnose complex rocket engine oscillatory phenomena.
Automated diagnoses of attention deficit hyperactive disorder using magnetic resonance imaging.
Eloyan, Ani; Muschelli, John; Nebel, Mary Beth; Liu, Han; Han, Fang; Zhao, Tuo; Barber, Anita D; Joel, Suresh; Pekar, James J; Mostofsky, Stewart H; Caffo, Brian
2012-01-01
Successful automated diagnoses of attention deficit hyperactive disorder (ADHD) using imaging and functional biomarkers would have fundamental consequences on the public health impact of the disease. In this work, we show results on the predictability of ADHD using imaging biomarkers and discuss the scientific and diagnostic impacts of the research. We created a prediction model using the landmark ADHD 200 data set focusing on resting state functional connectivity (rs-fc) and structural brain imaging. We predicted ADHD status and subtype, obtained by behavioral examination, using imaging data, intelligence quotients and other covariates. The novel contributions of this manuscript include a thorough exploration of prediction and image feature extraction methodology on this form of data, including the use of singular value decompositions (SVDs), CUR decompositions, random forest, gradient boosting, bagging, voxel-based morphometry, and support vector machines as well as important insights into the value, and potentially lack thereof, of imaging biomarkers of disease. The key results include the CUR-based decomposition of the rs-fc-fMRI along with gradient boosting and the prediction algorithm based on a motor network parcellation and random forest algorithm. We conjecture that the CUR decomposition is largely diagnosing common population directions of head motion. Of note, a byproduct of this research is a potential automated method for detecting subtle in-scanner motion. The final prediction algorithm, a weighted combination of several algorithms, had an external test set specificity of 94% with sensitivity of 21%. The most promising imaging biomarker was a correlation graph from a motor network parcellation. In summary, we have undertaken a large-scale statistical exploratory prediction exercise on the unique ADHD 200 data set. The exercise produced several potential leads for future scientific exploration of the neurological basis of ADHD.
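For readers unfamiliar with CUR decompositions of the kind used in the prediction model above, the sketch below builds a generic leverage-score CUR factorization; it is only an illustration of the decomposition itself, not the study's feature-extraction pipeline, and the matrix, rank, and sample sizes are invented.

```python
# Hedged sketch: leverage-score CUR factorization A ~= C @ U @ R, where C and R
# are actual columns and rows of A (which keeps them interpretable, e.g. as
# specific connectivity features or subjects) and U = pinv(C) @ A @ pinv(R).
# The rank, sample sizes and synthetic matrix are illustrative, not the study's.
import numpy as np

def cur_decomposition(a, k=5, n_cols=12, n_rows=12, seed=0):
    rng = np.random.default_rng(seed)
    u, _, vt = np.linalg.svd(a, full_matrices=False)
    col_lev = (vt[:k] ** 2).sum(axis=0) / k           # column leverage scores
    row_lev = (u[:, :k] ** 2).sum(axis=1) / k         # row leverage scores
    cols = rng.choice(a.shape[1], size=n_cols, replace=False, p=col_lev)
    rows = rng.choice(a.shape[0], size=n_rows, replace=False, p=row_lev)
    c, r = a[:, cols], a[rows, :]
    u_mid = np.linalg.pinv(c) @ a @ np.linalg.pinv(r)
    return c, u_mid, r, cols, rows

# Toy usage on a low-rank matrix standing in for an rs-fc feature matrix.
rng = np.random.default_rng(1)
a = rng.standard_normal((80, 8)) @ rng.standard_normal((8, 200))
c, u_mid, r, cols, rows = cur_decomposition(a)
print(np.linalg.norm(a - c @ u_mid @ r) / np.linalg.norm(a))   # near zero here
```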
NASA Astrophysics Data System (ADS)
Royle, S. H.; Montgomery, W.; Kounaves, S. P.; Sephton, M. A.
2017-12-01
A number of missions to Mars have analyzed the composition of surface samples using thermal extraction techniques. The temperatures of decomposition have been used as diagnostic information for the materials present. One material of great current interest is perchlorate, a relatively recently discovered component of Mars surface geochemistry that leads to deleterious effects on organic matter during thermal extraction. Knowledge of the thermal decomposition behavior of perchlorate salts is essential for mineral identification and possible avoidance of confounding interactions with organic matter. We have performed a series of stepped pyrolysis experiments on samples of magnesium perchlorate hydrate which were dehydrated to various extents - as confirmed by XRD and FTIR analysis. Our data reveal that the hydration state of magnesium perchlorate has a significant effect on decomposition temperature, with differing temperature releases of oxygen corresponding to different perchlorate hydration states. We find that the peak temperature of oxygen release increases from 500 to 600°C as the proportion of the tetrahydrate form in the sample increases and the hexahydrate form decreases. It was known previously that cation chemistry can affect the temperature of oxygen release and now our work shows that the hydration state of these salts can lead to similar variations. Consequently, incorrect identification of perchlorate species may occur if hydration state is not taken into account and a mixture of metastable hydration states (of one type of perchlorate) may be mistaken for a mixture of perchlorate salts. Our findings are important for Mars as the hydration state of salts in the regolith may change throughout the Martian year due to large variations in humidity and temperature.
The potential application of the blackboard model of problem solving to multidisciplinary design
NASA Technical Reports Server (NTRS)
Rogers, J. L.
1989-01-01
Problems associated with the sequential approach to multidisciplinary design are discussed. A blackboard model is suggested as a potential tool for implementing the multilevel decomposition approach to overcome these problems. The blackboard model serves as a global database for the solution with each discipline acting as a knowledge source for updating the solution. With this approach, it is possible for engineers to improve the coordination, communication, and cooperation in the conceptual design process, allowing them to achieve a more optimal design from an interdisciplinary standpoint.
Newman-Toker, David E; Austin, J Matthew; Derk, Jordan; Danforth, Melissa; Graber, Mark L
2017-06-27
A 2015 National Academy of Medicine report on improving diagnosis in health care made recommendations for direct action by hospitals and health systems. Little is known about how health care provider organizations are addressing diagnostic safety/quality. This study is an anonymous online survey of safety professionals from US hospitals and health systems in July-August 2016. The survey was sent to those attending a Leapfrog Group webinar on misdiagnosis (n=188). The instrument was focused on knowledge, attitudes, and capability to address diagnostic errors at the institutional level. Overall, 61 (32%) responded, including community hospitals (42%), integrated health networks (25%), and academic centers (21%). Awareness was high, but commitment and capability were low (31% of leaders understand the problem; 28% have sufficient safety resources; and 25% have made diagnosis a top institutional safety priority). Ongoing efforts to improve diagnostic safety were sparse and mostly included root cause analysis and peer review feedback around diagnostic errors. The top three barriers to addressing diagnostic error were lack of awareness of the problem, lack of measures of diagnostic accuracy and error, and lack of feedback on diagnostic performance. The top two tools viewed as critically important for locally tackling the problem were routine feedback on diagnostic performance and culture change to emphasize diagnostic safety. Although hospitals and health systems appear to be aware of diagnostic errors as a major safety imperative, most organizations (even those that appear to be making a strong commitment to patient safety) are not yet doing much to improve diagnosis. Going forward, efforts to activate health care organizations will be essential to improving diagnostic safety.
Youngstrom, Eric A
2014-03-01
To offer a practical demonstration of receiver operating characteristic (ROC) analyses, diagnostic efficiency statistics, and their application to clinical decision making using a popular parent checklist to assess for potential mood disorder. Secondary analyses of data from 589 families seeking outpatient mental health services, completing the Child Behavior Checklist and semi-structured diagnostic interviews. Internalizing Problems raw scores discriminated mood disorders significantly better than did age- and gender-normed T scores, or an Affective Problems score. Internalizing scores <8 had a diagnostic likelihood ratio <0.3, and scores >30 had a diagnostic likelihood ratio of 7.4. This study illustrates a series of steps in defining a clinical problem, operationalizing it, selecting a valid study design, and using ROC analyses to generate statistics that support clinical decisions. The ROC framework offers important advantages for clinical interpretation. Appendices include sample scripts using SPSS and R to check assumptions and conduct ROC analyses.
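The ROC and diagnostic-likelihood-ratio calculations described above can be reproduced on any score/diagnosis pair with standard tooling; the sketch below uses scikit-learn on synthetic data with placeholder cut-points, not the study's checklist data or results.

```python
# Hedged sketch: ROC curve, AUC and interval diagnostic likelihood ratios (DLRs)
# for a checklist-style score against a binary diagnosis.  Synthetic data and
# placeholder cut-points; not the Child Behavior Checklist data or results.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
n = 589
diagnosis = rng.integers(0, 2, size=n)                     # 1 = mood disorder
score = 15 + 12 * diagnosis + 8 * rng.standard_normal(n)   # higher if diagnosed

fpr, tpr, thresholds = roc_curve(diagnosis, score)
print("AUC:", roc_auc_score(diagnosis, score), "-", len(thresholds), "thresholds")

def interval_dlr(score, diagnosis, low, high):
    """DLR of a score band: P(band | case) / P(band | non-case).
    Assumes the band is populated in both groups."""
    in_band = (score >= low) & (score < high)
    return in_band[diagnosis == 1].mean() / in_band[diagnosis == 0].mean()

print("DLR, score < 8 :", interval_dlr(score, diagnosis, -np.inf, 8))
print("DLR, score > 30:", interval_dlr(score, diagnosis, 30, np.inf))
```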
Optical diagnostic of breast cancer using Raman, polarimetric and fluorescence spectroscopy
NASA Astrophysics Data System (ADS)
Anwar, Shahzad; Firdous, Shamaraz; Rehman, Aziz-ul; Nawaz, Muhammed
2015-04-01
We present the optical diagnosis of normal and cancerous human breast tissues using Raman, polarimetric and fluorescence spectroscopic techniques. Breast cancer is the second leading cause of cancer death among women worldwide. Optical diagnostics of cancer offers early intervention and the greatest chance of cure. Spectroscopic data were collected from freshly excised surgical specimens of normal tissues, with Raman bands at 800, 1171 and 1530 cm-1 arising mainly from lipids, nucleic acids, proteins, carbohydrates and amino acids. For breast cancer, Raman bands are observed at 1070, 1211, 1495, 1583 and 1650 cm-1. The results demonstrate that the spectra of normal tissue are dominated by lipids and amino acids. Polarization decomposition of the Mueller matrix and confocal fluorescence microscopy provide a detailed description of cancerous tissue and distinguish between normal and malignant tissue. Based on these findings, we successfully differentiate normal and malignant breast tissues at an early stage of disease. There is a need to develop a new tool for noninvasive, real-time diagnosis of tissue abnormalities and a test procedure for detecting breast cancer at an early stage.
System Characterizations and Optimized Reconstruction Methods for Novel X-ray Imaging Modalities
NASA Astrophysics Data System (ADS)
Guan, Huifeng
In the past decade many new X-ray based imaging technologies have emerged for different diagnostic purposes or imaging tasks. However, there exist one or more specific problems that prevent them from being effectively or efficiently employed. In this dissertation, four different novel X-ray based imaging technologies are discussed, including propagation-based phase-contrast (PB-XPC) tomosynthesis, differential X-ray phase-contrast tomography (D-XPCT), projection-based dual-energy computed radiography (DECR), and tetrahedron beam computed tomography (TBCT). System characteristics are analyzed or optimized reconstruction methods are proposed for these imaging modalities. In the first part, we investigated the unique properties of the propagation-based phase-contrast imaging technique when combined with X-ray tomosynthesis. The Fourier slice theorem implies that the high frequency components collected in the tomosynthesis data can be more reliably reconstructed. It is observed that the fringes or boundary enhancement introduced by the phase-contrast effects can serve as an accurate indicator of the true depth position in the tomosynthesis in-plane image. In the second part, we derived a sub-space framework to reconstruct images from a few-view D-XPCT data set. By introducing a proper mask, the high frequency contents of the image can be theoretically preserved in a certain region of interest. A two-step reconstruction strategy is developed to mitigate the risk of subtle structures being oversmoothed when the commonly used total-variation regularization is employed in the conventional iterative framework. In the third part, we proposed a practical method to improve the quantitative accuracy of the projection-based dual-energy material decomposition. It is demonstrated that applying a total-projection-length constraint along with the dual-energy measurements can achieve a stabilized numerical solution of the decomposition problem, thus overcoming the disadvantages of the conventional approach, which was extremely sensitive to noise corruption. In the final part, we described the modified filtered backprojection and iterative image reconstruction algorithms specifically developed for TBCT. Special parallelization strategies are designed to facilitate the use of GPU computing, demonstrating the capability to produce high-quality reconstructed volumetric images at very fast computational speed. For all the investigations mentioned above, both simulation and experimental studies have been conducted to demonstrate the feasibility and effectiveness of the proposed methodologies.
2013-06-24
Barrier methods for critical exponent problems in geometric analysis and mathematical physics, J. Erway and M. Hoist, submitted for publication.
ERIC Educational Resources Information Center
Geary, David C.; Hoard, Mary K.; Nugent, Lara
2012-01-01
Children's (N = 275) use of retrieval, decomposition (e.g., 7 = 4+3 and thus 6+7 = 6+4+3), and counting to solve addition problems was longitudinally assessed from first grade to fourth grade, and intelligence, working memory, and in-class attentive behavior were assessed in one or several grades. The goal was to assess the relation between…
Huang, Furong; Tang, Shuang; Sun, Pei; Luo, Jing
2018-05-15
Novelty and appropriateness are considered the two fundamental features of creative thinking, including insight problem solving, which can be performed through chunk decomposition and constraint relaxation. Based on a previous study that separated the neural bases of novelty and appropriateness in chunk decomposition, in this study, we used event-related functional magnetic resonance imaging (fMRI) to further dissociate these mechanisms in constraint relaxation. Participants were guided to mentally represent the method of problem solving according to the externally provided solutions that were elaborately prepared in advance and systematically varied in their novelty and appropriateness for the given problem situation. The results showed that novelty processing was completed by the temporoparietal junction (TPJ) and regions in the executive system (dorsolateral prefrontal cortex [DLPFC]), whereas appropriateness processing was completed by the TPJ and regions in the episodic memory (hippocampus), emotion (amygdala), and reward systems (orbitofrontal cortex [OFC]). These results likely indicate that appropriateness processing can result in a more memorable and richer experience than novelty processing in constraint relaxation. The shared and distinct neural mechanisms of the features of novelty and appropriateness in constraint relaxation are discussed, enriching the representation of the change theory of insight. Copyright © 2018 Elsevier Inc. All rights reserved.
ALKALINE NONCYANIDE ZINC PLATING WITH REUSE OF RECOVERED CHEMICALS
A metal finishing process can create environmental problems because it uses chemicals that are not only toxic but also resistant to degradation or decomposition. Study was undertaken at a zinc electroplating operation to achieve zero discharge of wastewater and total recycle of r...
Domain decomposition and matching for time-domain analysis of motions of ships advancing in head sea
NASA Astrophysics Data System (ADS)
Tang, Kai; Zhu, Ren-chuan; Miao, Guo-ping; Fan, Ju
2014-08-01
A domain decomposition and matching method in the time domain is outlined for simulating the motions of ships advancing in waves. The flow field is decomposed into inner and outer domains by an imaginary control surface, and the Rankine source method is applied to the inner domain while the transient Green function method is used in the outer domain. The two initial boundary value problems are matched on the control surface. The corresponding numerical codes are developed, and the added masses, wave exciting forces and motions in head sea for the Series 60 ship and the S175 containership are presented and verified. Good agreement is obtained when the numerical results are compared with experimental data and other references. The present method is more efficient because panel discretization is required only in the inner domain during the numerical calculation, and good numerical stability is demonstrated, avoiding the divergence problem for ships with flare.
Material Compatibility with Space Storable Propellants. Design Guidebook
NASA Technical Reports Server (NTRS)
Uney, P. E.; Fester, D. A.
1972-01-01
An important consideration in the design of spacecraft for interplanetary missions is the compatibility of storage materials with the propellants. Serious problems can arise because many propellants are either extremely reactive or subject to catalytic decomposition, making the selection of proper materials of construction for propellant containment and control a critical requirement for long-life applications. To aid in selecting materials and designing and evaluating various propulsion subsystems, available information on the compatibility of spacecraft materials with propellants of interest was compiled from literature searches and personal contacts. The compatibility of both metals and nonmetals with hydrazine, monomethyl hydrazine, nitrated hydrazine, and diborane fuels and nitrogen tetroxide, fluorine, oxygen difluoride, and Flox oxidizers was surveyed. These fuels and oxidizers encompass the wide variety of problems encountered in propellant storage. As such, they present worst case situations of the propellant affecting the material and the material affecting the propellant. This includes material attack, propellant decomposition, and the formation of clogging materials.
GPU-accelerated computing for Lagrangian coherent structures of multi-body gravitational regimes
NASA Astrophysics Data System (ADS)
Lin, Mingpei; Xu, Ming; Fu, Xiaoyu
2017-04-01
Based on a well-established theoretical foundation, Lagrangian Coherent Structures (LCSs) have elicited widespread research on the intrinsic structures of dynamical systems in many fields, including the field of astrodynamics. Although the application of LCSs in dynamical problems seems straightforward theoretically, its associated computational cost is prohibitive. We propose a block decomposition algorithm developed on Compute Unified Device Architecture (CUDA) platform for the computation of the LCSs of multi-body gravitational regimes. In order to take advantage of GPU's outstanding computing properties, such as Shared Memory, Constant Memory, and Zero-Copy, the algorithm utilizes a block decomposition strategy to facilitate computation of finite-time Lyapunov exponent (FTLE) fields of arbitrary size and timespan. Simulation results demonstrate that this GPU-based algorithm can satisfy double-precision accuracy requirements and greatly decrease the time needed to calculate final results, increasing speed by approximately 13 times. Additionally, this algorithm can be generalized to various large-scale computing problems, such as particle filters, constellation design, and Monte-Carlo simulation.
Exact solutions for the collaborative pickup and delivery problem.
Gansterer, Margaretha; Hartl, Richard F; Salzmann, Philipp E H
2018-01-01
In this study we investigate the decision problem of a central authority in pickup and delivery carrier collaborations. Customer requests are to be redistributed among participants such that the total cost is minimized. We formulate the problem as a multi-depot traveling salesman problem with pickups and deliveries. We apply three well-established exact solution approaches and compare their performance in terms of computational time. To avoid unrealistic solutions with unevenly distributed workload, we extend the problem by introducing minimum workload constraints. Our computational results show that, while for the original problem Benders decomposition is the method of choice, for the newly formulated problem this method is clearly dominated by the proposed column generation approach. The obtained results can be used as benchmarks for decentralized mechanisms in collaborative pickup and delivery problems.
Torque and Learning and Behavior Problems in Children.
ERIC Educational Resources Information Center
Zendel, Ivan H.; Pihl, R. O.
1980-01-01
Findings indicate minimal differences, on diagnostic tests, between children who exhibited torque and those who did not. Torque is defined as the circling of any X in a clockwise direction. Torque is not associated with learning problems in school. Diagnostic utility of torque should be carefully considered. (Author)
An exact algorithm for optimal MAE stack filter design.
Dellamonica, Domingos; Silva, Paulo J S; Humes, Carlos; Hirata, Nina S T; Barrera, Junior
2007-02-01
We propose a new algorithm for optimal MAE stack filter design. It is based on three main ingredients. First, we show that the dual of the integer programming formulation of the filter design problem is a minimum cost network flow problem. Next, we present a decomposition principle that can be used to break this dual problem into smaller subproblems. Finally, we propose a specialization of the network Simplex algorithm based on column generation to solve these smaller subproblems. Using our method, we were able to efficiently solve instances of the filter problem with window size up to 25 pixels. To the best of our knowledge, this is the largest dimension for which this problem was ever solved exactly.
Matrix decomposition graphics processing unit solver for Poisson image editing
NASA Astrophysics Data System (ADS)
Lei, Zhao; Wei, Li
2012-10-01
In recent years, gradient-domain methods have been widely discussed in the image processing field, including seamless cloning and image stitching. These algorithms are commonly carried out by solving a large sparse linear system: the Poisson equation. However, solving the Poisson equation is a computationally and memory intensive task, which makes it unsuitable for real-time image editing. A new matrix decomposition graphics processing unit (GPU) solver (MDGS) is proposed to address this problem. A matrix decomposition method is used to distribute the work among GPU threads, so that MDGS takes full advantage of the computing power of current GPUs. Additionally, MDGS is a hybrid solver (combining direct and iterative techniques) with a two-level architecture. These features enable MDGS to generate solutions identical to those of common Poisson methods and to achieve a high convergence rate in most cases. The approach is advantageous in terms of parallelizability, low memory consumption and breadth of application, enabling real-time image processing.
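For orientation, the sketch below assembles the discrete Poisson system that gradient-domain editing solves and solves it with a CPU sparse direct solver from SciPy. It is only a small illustrative stand-in for the problem the GPU solver (MDGS) targets, not the paper's implementation; the grid size and right-hand side are assumptions.

```python
# CPU-side sketch of the sparse linear system behind gradient-domain editing;
# the paper's MDGS solves the same kind of Poisson system on the GPU.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

n = 64                                   # interior grid is n x n (toy size)
I = sp.identity(n)
T = sp.diags([-1, 4, -1], [-1, 0, 1], shape=(n, n))
B = sp.diags([-1, -1], [-1, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(B, I)).tocsr()   # standard 5-point Laplacian

b = np.zeros(n * n)                      # divergence of the guidance field
b[n * n // 2 + n // 2] = 1.0             # a single "gradient source" for the demo

u = spsolve(A, b)                        # direct solve of the Poisson equation
print("solution range:", u.min(), u.max())
```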
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexandrov, Boian S.; Lliev, Filip L.; Stanev, Valentin G.
This code is a toy (short) version of CODE-2016-83. From a general perspective, the code represents an unsupervised adaptive machine learning algorithm that allows efficient and high-performance de-mixing and feature extraction of a multitude of non-negative signals mixed and recorded by a network of uncorrelated sensor arrays. The code identifies the number of mixed original signals and their locations. Further, the code also allows deciphering of signals that have been delayed with respect to the mixing process in each sensor. The code is highly customizable and can be used efficiently for fast macro-analyses of data. The code is applicable to a plethora of distinct problems: chemical decomposition, pressure transient decomposition, unknown sources/signal allocation, and EM signal decomposition. An additional procedure for allocation of the unknown sources is incorporated in the code.
Program Helps Decompose Complex Design Systems
NASA Technical Reports Server (NTRS)
Rogers, James L., Jr.; Hall, Laura E.
1995-01-01
DeMAID (Design Manager's Aid for Intelligent Decomposition) computer program is knowledge-based software system for ordering sequence of modules and identifying possible multilevel structure for design problems such as large platforms in outer space. Groups modular subsystems on basis of interactions among them. Saves considerable amount of money and time in total design process, particularly in new design problem in which order of modules has not been defined. Originally written for design problems, also applicable to problems containing modules (processes) that take inputs and generate outputs. Available in three machine versions: Macintosh written in Symantec's Think C 3.01, Sun, and SGI IRIS in C language.
Artificial intelligence in robot control systems
NASA Astrophysics Data System (ADS)
Korikov, A.
2018-05-01
This paper analyzes modern concepts of artificial intelligence and known definitions of the term "level of intelligence". In robotics, an artificial intelligence system is defined as a system that works intelligently and optimally. The author proposes to use optimization methods for the design of intelligent robot control systems. The article provides a formalization of robotic control system design problems as a class of extremum problems with constraints. Solving these problems is rather complicated due to their high dimensionality, polymodality and a priori uncertainty. Decomposition of the extremum problems according to the method suggested by the author reduces them to a sequence of simpler problems that can be successfully solved with modern computing technology. Several possible approaches to solving such problems are considered in the article.
Development of a low cost test rig for standalone WECS subject to electrical faults.
Himani; Dahiya, Ratna
2016-11-01
In this paper, a contribution to the development of a low-cost wind turbine (WT) test rig for stator fault diagnosis of a wind turbine generator is proposed. The test rig is developed using a 2.5 kW, 1750 RPM DC motor coupled to a 1.5 kW, 1500 RPM self-excited induction generator interfaced with a WT mathematical model in LabVIEW. The performance of the test rig is benchmarked against already proven wind turbine test rigs. In order to detect stator faults using non-stationary signals in the self-excited induction generator, an online fault diagnostic technique based on DWT multi-resolution analysis is proposed. It is experimentally shown that, for varying wind conditions, wavelet decomposition allows good differentiation between faulty and healthy conditions, leading to an effective diagnostic procedure for wind turbine condition monitoring. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
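A minimal sketch of DWT-based multi-resolution analysis of the kind described above is given below, assuming the PyWavelets package; the sampling rate, wavelet, decomposition level, fault signature, and band-energy comparison are all illustrative assumptions, not the authors' diagnostic rule.

```python
# Sketch of DWT multi-resolution analysis (assumes the PyWavelets package);
# the band-energy comparison is an illustrative criterion, not the paper's.
import numpy as np
import pywt

fs = 5000                                     # sampling rate in Hz (assumed)
t = np.arange(0, 1, 1 / fs)
healthy = np.sin(2 * np.pi * 50 * t)          # fundamental current component
faulty = healthy + 0.3 * np.sin(2 * np.pi * 150 * t)   # toy fault harmonic

def band_energies(signal, wavelet="db4", level=5):
    """Energy of each detail band from a multi-level DWT."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return [float(np.sum(c ** 2)) for c in coeffs[1:]]   # skip the approximation

print("healthy:", np.round(band_energies(healthy), 2))
print("faulty: ", np.round(band_energies(faulty), 2))
```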
Lin, Yu-Hsuan; Lin, Yu-Cheng; Lee, Yang-Han; Lin, Po-Hsien; Lin, Sheng-Hsuan; Chang, Li-Ren; Tseng, Hsien-Wei; Yen, Liang-Yu; Yang, Cheryl C H; Kuo, Terry B J
2015-06-01
Global smartphone penetration has brought about unprecedented addictive behaviors. We report proposed diagnostic criteria and the design of a mobile application (App) to identify smartphone addiction. We used a novel empirical mode decomposition (EMD) to delineate the trend in smartphone use over one month. The daily use count and the trend of this frequency are associated with smartphone addiction. We quantify excessive use by daily use duration and frequency, as well as the relationship between tolerance symptoms and the trend for the median duration of a use epoch. The psychiatrist-assisted self-reported use time was significantly lower than the total smartphone use time recorded via the App, and the degree of underestimation was positively correlated with actual smartphone use. Our study suggests the identification of smartphone addiction by diagnostic interview and via the App-generated parameters with EMD analysis. Copyright © 2015 Elsevier Ltd. All rights reserved.
DOT National Transportation Integrated Search
2016-09-01
We consider the problem of solving mixed random linear equations with k components. This is the noiseless setting of mixed linear regression. The goal is to estimate multiple linear models from mixed samples in the case where the labels (which sample...
Parallel Computation of Flow in Heterogeneous Media Modelled by Mixed Finite Elements
NASA Astrophysics Data System (ADS)
Cliffe, K. A.; Graham, I. G.; Scheichl, R.; Stals, L.
2000-11-01
In this paper we describe a fast parallel method for solving highly ill-conditioned saddle-point systems arising from mixed finite element simulations of stochastic partial differential equations (PDEs) modelling flow in heterogeneous media. Each realisation of these stochastic PDEs requires the solution of the linear first-order velocity-pressure system comprising Darcy's law coupled with an incompressibility constraint. The chief difficulty is that the permeability may be highly variable, especially when the statistical model has a large variance and a small correlation length. For reasonable accuracy, the discretisation has to be extremely fine. We solve these problems by first reducing the saddle-point formulation to a symmetric positive definite (SPD) problem using a suitable basis for the space of divergence-free velocities. The reduced problem is solved using parallel conjugate gradients preconditioned with an algebraically determined additive Schwarz domain decomposition preconditioner. The result is a solver which exhibits a good degree of robustness with respect to the mesh size as well as to the variance and to physically relevant values of the correlation length of the underlying permeability field. Numerical experiments exhibit almost optimal levels of parallel efficiency. The domain decomposition solver (DOUG, http://www.maths.bath.ac.uk/~parsoft) used here not only is applicable to this problem but can be used to solve general unstructured finite element systems on a wide range of parallel architectures.
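The reduced symmetric positive definite solve described above is a preconditioned conjugate-gradient problem. The sketch below is a minimal serial illustration with SciPy, in which a simple Jacobi (diagonal) preconditioner stands in for the algebraically determined additive Schwarz preconditioner and the DOUG solver used in the paper; the toy matrix with a highly variable diagonal is an assumption.

```python
# Sketch of a reduced SPD solve with preconditioned CG. A diagonal (Jacobi)
# preconditioner stands in for the additive Schwarz preconditioner in the paper.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

n = 200
# Toy SPD matrix with a highly variable "permeability-like" diagonal.
perm = np.exp(3.0 * np.random.default_rng(1).standard_normal(n))
A = sp.diags(perm) + sp.diags([-0.5, -0.5], [-1, 1], shape=(n, n))
A = ((A + A.T) / 2 + 2.0 * sp.identity(n)).tocsr()   # diagonally dominant, SPD

b = np.ones(n)
d = A.diagonal()
M = LinearOperator((n, n), matvec=lambda x: x / d)    # apply M^{-1} (Jacobi)

x, info = cg(A, b, M=M)
print("CG converged" if info == 0 else f"CG info = {info}")
```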
NASA Astrophysics Data System (ADS)
Magee, Daniel J.; Niemeyer, Kyle E.
2018-03-01
The expedient design of precision components in aerospace and other high-tech industries requires simulations of physical phenomena often described by partial differential equations (PDEs) without exact solutions. Modern design problems require simulations with a level of resolution difficult to achieve in reasonable amounts of time, even in effectively parallelized solvers. Though the scale of the problem relative to available computing power is the greatest impediment to accelerating these applications, significant performance gains can be achieved through careful attention to the details of memory communication and access. The swept time-space decomposition rule reduces communication between sub-domains by exhausting the domain of influence before communicating boundary values. Here we present a GPU implementation of the swept rule, which modifies the algorithm for improved performance on this processing architecture by prioritizing use of private (shared) memory, avoiding interblock communication, and overwriting unnecessary values. It shows significant improvement in the execution time of finite-difference solvers for one-dimensional unsteady PDEs, producing speedups of 2-9× for a range of problem sizes compared with simple GPU versions, and 7-300× compared with parallel CPU versions. However, for a more sophisticated one-dimensional system of equations discretized with a second-order finite-volume scheme, the swept rule performs 1.2-1.9× worse than a standard implementation for all problem sizes.
Cost decomposition of linear systems with application to model reduction
NASA Technical Reports Server (NTRS)
Skelton, R. E.
1980-01-01
A means is provided to assess the value or 'cost' of each component of a large scale system, when the total cost is a quadratic function. Such a 'cost decomposition' of the system has several important uses. When the components represent physical subsystems which can fail, the 'component cost' is useful in failure mode analysis. When the components represent mathematical equations which may be truncated, the 'component cost' becomes a criterion for model truncation. In this latter event component costs provide a mechanism by which the specific control objectives dictate which components should be retained in the model reduction process. This information can be valuable in model reduction and decentralized control problems.
Enhancements to the Design Manager's Aide for Intelligent Decomposition (DeMAID)
NASA Technical Reports Server (NTRS)
Rogers, James L.; Barthelemy, Jean-Francois M.
1992-01-01
This paper discusses the addition of two new enhancements to the program Design Manager's Aide for Intelligent Decomposition (DeMAID). DeMAID is a knowledge-based tool used to aid a design manager in understanding the interactions among the tasks of a complex design problem. This is done by ordering the tasks to minimize feedback, determining the participating subsystems, and displaying them in an easily understood format. The two new enhancements include (1) rules for ordering a complex assembly process and (2) rules for determining which analysis tasks must be re-executed to compute the output of one task based on a change in input to that or another task.
Domain decomposition methods in computational fluid dynamics
NASA Technical Reports Server (NTRS)
Gropp, William D.; Keyes, David E.
1991-01-01
The divide-and-conquer paradigm of iterative domain decomposition, or substructuring, has become a practical tool in computational fluid dynamic applications because of its flexibility in accommodating adaptive refinement through locally uniform (or quasi-uniform) grids, its ability to exploit multiple discretizations of the operator equations, and the modular pathway it provides towards parallelism. These features are illustrated on the classic model problem of flow over a backstep using Newton's method as the nonlinear iteration. Multiple discretizations (second-order in the operator and first-order in the preconditioner) and locally uniform mesh refinement pay dividends separately, and they can be combined synergistically. Sample performance results are included from an Intel iPSC/860 hypercube implementation.
The Design Manager's Aid for Intelligent Decomposition (DeMAID)
NASA Technical Reports Server (NTRS)
Rogers, James L.
1994-01-01
Before the design of new complex systems such as large space platforms can begin, the possible interactions among subsystems and their parts must be determined. Once this is completed, the proposed system can be decomposed to identify its hierarchical structure. The design manager's aid for intelligent decomposition (DeMAID) is a knowledge based system for ordering the sequence of modules and identifying a possible multilevel structure for design. Although DeMAID requires an investment of time to generate and refine the list of modules for input, it could save considerable money and time in the total design process, particularly in new design problems where the ordering of the modules has not been defined.
Dealing with noise and physiological artifacts in human EEG recordings: empirical mode methods
NASA Astrophysics Data System (ADS)
Runnova, Anastasiya E.; Grubov, Vadim V.; Khramova, Marina V.; Hramov, Alexander E.
2017-04-01
In the paper we propose a new method for removing noise and physiological artifacts in human EEG recordings based on empirical mode decomposition (the Hilbert-Huang transform). As physiological artifacts we consider specific oscillatory patterns that cause problems during EEG analysis and can be detected with additional signals recorded simultaneously with the EEG (ECG, EMG, EOG, etc.). We introduce the algorithm of the proposed method, whose steps include empirical mode decomposition of the EEG signal, selection of the empirical modes containing artifacts, removal of these modes, and reconstruction of the initial EEG signal. We show the efficiency of the method on the example of filtering a human EEG signal from eye-movement artifacts.
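The decompose, select, remove, and reconstruct steps above can be sketched as follows. The sketch assumes the PyEMD package (the `EMD().emd(signal)` call is an assumption about its API) and uses an illustrative correlation-with-EOG threshold to pick artifact modes; it is not the authors' selection criterion, and the signals are synthetic.

```python
# Sketch of decompose -> select -> remove -> reconstruct. The EMD call assumes
# the PyEMD package; the correlation threshold is an illustrative assumption.
import numpy as np
from PyEMD import EMD

fs = 250
t = np.arange(0, 10, 1 / fs)
eog = 40 * np.exp(-((t % 2.5 - 1.2) ** 2) / 0.01)        # toy blink artifact channel
eeg = (10 * np.sin(2 * np.pi * 10 * t) + eog
       + np.random.default_rng(0).normal(0, 2, t.size))  # contaminated EEG

imfs = EMD().emd(eeg)                                    # empirical modes of the EEG
keep = []
for imf in imfs:
    corr = abs(np.corrcoef(imf, eog)[0, 1])              # check against the EOG channel
    if corr < 0.5:                                       # threshold is an assumption
        keep.append(imf)

cleaned = np.sum(keep, axis=0) if keep else np.zeros_like(eeg)  # reconstruction
print(f"{len(imfs)} IMFs found, {len(keep)} kept after artifact removal")
```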
NASA Technical Reports Server (NTRS)
Chung, T. J. (Editor); Karr, Gerald R. (Editor)
1989-01-01
Recent advances in computational fluid dynamics are examined in reviews and reports, with an emphasis on finite-element methods. Sections are devoted to adaptive meshes, atmospheric dynamics, combustion, compressible flows, control-volume finite elements, crystal growth, domain decomposition, EM-field problems, FDM/FEM, and fluid-structure interactions. Consideration is given to free-boundary problems with heat transfer, free surface flow, geophysical flow problems, heat and mass transfer, high-speed flow, incompressible flow, inverse design methods, MHD problems, the mathematics of finite elements, and mesh generation. Also discussed are mixed finite elements, multigrid methods, non-Newtonian fluids, numerical dissipation, parallel vector processing, reservoir simulation, seepage, shallow-water problems, spectral methods, supercomputer architectures, three-dimensional problems, and turbulent flows.
Model of critical diagnostic reasoning: achieving expert clinician performance.
Harjai, Prashant Kumar; Tiwari, Ruby
2009-01-01
Diagnostic reasoning refers to the analytical processes used to determine patient health problems. While the education curriculum and health care system focus on training nurse clinicians to accurately recognize and rescue clinical situations, assessments of non-expert nurses have yielded less than satisfactory data on diagnostic competency. The contrast between the expert and non-expert nurse clinician raises the important question of how differences in thinking may contribute to a large divergence in accurate diagnostic reasoning. This article recognizes superior organization of one's knowledge base, using prototypes, and quick retrieval of pertinent information, using similarity recognition as two reasons for the expert's superior diagnostic performance. A model of critical diagnostic reasoning, using prototypes and similarity recognition, is proposed and elucidated using case studies. This model serves as a starting point toward bridging the gap between clinical data and accurate problem identification, verification, and management while providing a structure for a knowledge exchange between expert and non-expert clinicians.
NASA Astrophysics Data System (ADS)
Kakekhani, Arvin; Ismail-Beigi, Sohrab
2014-03-01
NOx are regulated pollutants produced during automotive combustion. As part of an effort to design catalysts for NOx decomposition that operate in oxygen-rich environments and permit greater fuel efficiency, we study the chemistry of NOx on (001) ferroelectric surfaces. Changing the polarization at such surfaces modifies electronic properties and leads to switchable surface chemistry. Using first-principles theory, our previous work has shown that the addition of a catalytic RuO2 monolayer on the ferroelectric PbTiO3 surface makes direct decomposition of NO thermodynamically favorable for one polarization. Furthermore, the usual problem of blockage of catalytic sites by strong oxygen binding is overcome by flipping the polarization, which helps desorb the oxygen. We describe a thermodynamic cycle for direct NO decomposition followed by desorption of N2 and O2. We provide energy barriers and transition states for key steps of the cycle and describe their dependence on polarization direction. We end by pointing out how a switchable order parameter of the substrate, in this case ferroelectric polarization, allows us to break away from some standard compromises in catalyst design (e.g., the Sabatier principle). This enlarges the set of potentially catalytic metals. Primary support from Toyota Motor Engineering and Manufacturing, North America, Inc.
Incremental k-core decomposition: Algorithms and evaluation
Sariyuce, Ahmet Erdem; Gedik, Bugra; Jacques-SIlva, Gabriela; ...
2016-02-01
A k-core of a graph is a maximal connected subgraph in which every vertex is connected to at least k vertices in the subgraph. k-core decomposition is often used in large-scale network analysis, such as community detection, protein function prediction, visualization, and solving NP-hard problems on real networks efficiently, like maximal clique finding. In many real-world applications, networks change over time. As a result, it is essential to develop efficient incremental algorithms for dynamic graph data. In this paper, we propose a suite of incremental k-core decomposition algorithms for dynamic graph data. These algorithms locate a small subgraph that is guaranteed to contain the list of vertices whose maximum k-core values have changed and efficiently process this subgraph to update the k-core decomposition. We present incremental algorithms for both insertion and deletion operations, and propose auxiliary vertex state maintenance techniques that can further accelerate these operations. Our results show a significant reduction in runtime compared to non-incremental alternatives. We illustrate the efficiency of our algorithms on different types of real and synthetic graphs, at varying scales. Furthermore, for a graph of 16 million vertices, we observe relative throughputs reaching a million times, relative to the non-incremental algorithms.
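As a point of reference for the incremental algorithms described above, the following minimal sketch computes core numbers non-incrementally with NetworkX and simply recomputes them after an edge insertion; it is a baseline illustration, not the authors' incremental code, and the random graph is an assumption.

```python
# Non-incremental baseline: k-core numbers via NetworkX. The paper's point is
# the *incremental* update after edge changes, which this sketch naively redoes
# from scratch for comparison.
import networkx as nx

G = nx.gnm_random_graph(1000, 5000, seed=42)
cores = nx.core_number(G)                 # full k-core decomposition
print("max core:", max(cores.values()))

G.add_edge(0, 1)                          # a single dynamic update...
cores = nx.core_number(G)                 # ...handled here by full recomputation
print("max core after insertion:", max(cores.values()))
```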
Application of Dynamic Logic Algorithm to Inverse Scattering Problems Related to Plasma Diagnostics
NASA Astrophysics Data System (ADS)
Perlovsky, L.; Deming, R. W.; Sotnikov, V.
2010-11-01
In plasma diagnostics scattering of electromagnetic waves is widely used for identification of density and wave field perturbations. In the present work we use a powerful mathematical approach, dynamic logic (DL), to identify the spectra of scattered electromagnetic (EM) waves produced by the interaction of the incident EM wave with a Langmuir soliton in the presence of noise. The problem is especially difficult since the spectral amplitudes of the noise pattern are comparable with the amplitudes of the scattered waves. In the past DL has been applied to a number of complex problems in artificial intelligence, pattern recognition, and signal processing, resulting in revolutionary improvements. Here we demonstrate its application to plasma diagnostic problems. Perlovsky, L.I., 2001. Neural Networks and Intellect: using model-based concepts. Oxford University Press, New York, NY.
Sequential Test Strategies for Multiple Fault Isolation
NASA Technical Reports Server (NTRS)
Shakeri, M.; Pattipati, Krishna R.; Raghavan, V.; Patterson-Hine, Ann; Kell, T.
1997-01-01
In this paper, we consider the problem of constructing near optimal test sequencing algorithms for diagnosing multiple faults in redundant (fault-tolerant) systems. The computational complexity of solving the optimal multiple-fault isolation problem is super-exponential, that is, it is much more difficult than the single-fault isolation problem, which, by itself, is NP-hard. By employing concepts from information theory and Lagrangian relaxation, we present several static and dynamic (on-line or interactive) test sequencing algorithms for the multiple fault isolation problem that provide a trade-off between the degree of suboptimality and computational complexity. Furthermore, we present novel diagnostic strategies that generate a static diagnostic directed graph (digraph), instead of a static diagnostic tree, for multiple fault diagnosis. Using this approach, the storage complexity of the overall diagnostic strategy reduces substantially. Computational results based on real-world systems indicate that the size of a static multiple fault strategy is strictly related to the structure of the system, and that the use of an on-line multiple fault strategy can diagnose faults in systems with as many as 10,000 failure sources.
Gilbert, Paul A; Marzell, Miesha
2017-11-29
Drinkers who report some symptoms of alcohol-use disorder (AUD) but fail to meet full criteria are "diagnostic orphans." To improve risk-reduction efforts, we sought to develop better epidemiologic profiles of this underrecognized subgroup. This study estimated the population prevalence and described AUD symptoms of diagnostic orphans using the 2012-2013 National Epidemiological Survey of Alcohol and Related Conditions-III. Multivariate logistic regression was used to model odds of being a diagnostic orphan or meeting mild, moderate, and severe AUD criteria versus no AUD symptoms. Models were adjusted for the complex survey design using sampling weights and survey procedures (e.g., proc surveylogistic). Among drinkers, 14% of men and 11% of women were classified as diagnostic orphans. The most common symptoms were drinking more or for longer periods than intended, wanting or trying unsuccessfully to quit or cut back, and drinking in ways that increased risk of injury. We noted broad similarities between diagnostic orphans and mild/moderate AUD groups. There were no differences in odds of diagnostic orphan status by race/ethnicity; however, female gender was associated with lower odds of diagnostic orphan status and all levels of AUD. Individual history of AUD, family history of problem drinking, concurrent smoking, and concurrent marijuana use were associated with greater odds of problem drinking, with stronger associations as AUD severity increased. Diagnostic orphans remain a sizeable and overlooked population of problem drinkers. Clarifying the array of symptoms and co-occurring disorders can improve screening and facilitate alcohol risk-reduction intervention efforts.
Wave Phenomena in an Acoustic Resonant Chamber
ERIC Educational Resources Information Center
Smith, Mary E.; And Others
1974-01-01
Discusses the design and operation of a high Q acoustical resonant chamber which can be used to demonstrate wave phenomena such as three-dimensional normal modes, Q values, densities of states, changes in the speed of sound, Fourier decomposition, damped harmonic oscillations, sound-absorbing properties, and perturbation and scattering problems.…
Multi-Parameter Linear Least-Squares Fitting to Poisson Data One Count at a Time
NASA Technical Reports Server (NTRS)
Wheaton, W.; Dunklee, A.; Jacobson, A.; Ling, J.; Mahoney, W.; Radocinski, R.
1993-01-01
A standard problem in gamma-ray astronomy data analysis is the decomposition of a set of observed counts, described by Poisson statistics, according to a given multi-component linear model, with underlying physical count rates or fluxes which are to be estimated from the data.
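One standard way to pose the estimation the abstract describes is to maximize the Poisson likelihood of the counts under a linear model for the rates. The sketch below does this with SciPy on synthetic responses; it is an illustration of the problem statement, not the authors' fitting pipeline, and the response matrix and fluxes are invented.

```python
# Sketch: estimate component fluxes f from Poisson counts with rates lambda = R @ f
# by maximizing the Poisson log-likelihood (synthetic data, illustrative only).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n_bins, n_comp = 50, 3
R = np.abs(rng.normal(1.0, 0.3, (n_bins, n_comp)))     # response of each component
f_true = np.array([5.0, 2.0, 8.0])                     # underlying fluxes (assumed)
counts = rng.poisson(R @ f_true)                       # observed Poisson counts

def neg_loglike(f):
    lam = R @ f
    return np.sum(lam - counts * np.log(lam))          # Poisson NLL up to a constant

res = minimize(neg_loglike, x0=np.ones(n_comp),
               bounds=[(1e-6, None)] * n_comp)         # rates must stay positive
print("estimated fluxes:", np.round(res.x, 2))
```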
Determination of selenium in the environment and in biological material.
Bem, E M
1981-01-01
This paper reviews the following problems: sampling, decomposition procedures, and the most important analytical methods used for selenium determination, e.g., neutron activation analysis, atomic absorption spectrometry, gas-liquid chromatography, spectrophotometry, fluorimetry, and X-ray fluorescence. This review covers the literature mainly from 1975 to 1977. PMID:7007035
A Decomposition Approach for Shipboard Manpower Scheduling
2009-01-01
generalizes the bin-packing problem with no conflicts (BPP), which is known to be NP-hard (Garey and Johnson 1979). Hence our focus is to obtain a lower...to the BPP; while the so-called constrained packing lower bound also takes conflict constraints into account. Their computational study indicates
Terminological debate over language impairment in children: forward movement and sticking points
Reilly, Sheena; Bishop, Dorothy V M; Tomblin, Bruce
2014-01-01
Background There is no agreed terminology for describing childhood language problems. In this special issue Reilly et al. and Bishop review the history of the most widely used label, ‘specific language impairment’ (SLI), and discuss the pros and cons of various terms. Commentators from a range of backgrounds, in terms of both discipline and geographical background, were then invited to respond to each lead article. Aims To summarize the main points made by the commentators and identify (1) points of consensus and disagreement, (2) issues for debate including the drivers for change and diagnostic criteria, and (3) the way forward. Conclusions & Implications There was some common ground, namely that the current situation is not tenable because it impedes clinical and research progress and impacts on access to services. There were also wide-ranging disagreements about which term should be adopted. However, before debating the broad diagnostic label it is essential to consider the diagnostic criteria and the systems used to classify childhood language problems. This is critical in order to facilitate communication between and among clinicians and researchers, across sectors (in particular health and education), with the media and policy-makers and with families and individuals who have language problems. We suggest four criteria be taken into account when establishing diagnostic criteria, including: (1) the features of language, (2) the impact on functioning and participation, (3) the presence/absence of other impairments, and (4) the language trajectory or pathway and age of onset. In future, these criteria may expand to include the genetic and neural markers for language problems. Finally, there was overarching agreement about the need for an international and multidisciplinary forum to move this debate forward. The purpose would be to develop consensus regarding the diagnostic criteria and diagnostic label for children with language problems. This process should include canvassing the views of families and people with language problems as well as the views of policy-makers. PMID:25142092
Portable parallel portfolio optimization in the Aurora Financial Management System
NASA Astrophysics Data System (ADS)
Laure, Erwin; Moritsch, Hans
2001-07-01
Financial planning problems are formulated as large scale, stochastic, multiperiod, tree structured optimization problems. An efficient technique for solving this kind of problem is the nested Benders decomposition method. In this paper we present a parallel, portable, asynchronous implementation of this technique. To achieve our portability goals we selected the programming language Java for our implementation and used a high-level Java based framework, called OpusJava, for expressing the parallelism potential as well as synchronization constraints. Our implementation is embedded within a modular decision support tool for portfolio and asset liability management, the Aurora Financial Management System.
NASA Technical Reports Server (NTRS)
Shooman, Martin L.; Cortes, Eladio R.
1991-01-01
The network complexity of LANs, and of LANs interconnected by bridges and routers, poses a challenging reliability-modeling problem. The present effort toward solving these problems attempts to simplify them by reducing the number of states through truncation and state merging, as suggested by Shooman and Laemmel (1990). Through the use of state merging, it becomes possible to reduce the 161-state Bateman-Cortes model to a two-state model with a closed-form solution. In the case of coupled networks, a technique which allows for problem decomposition must be used.
Adaptive multi-step Full Waveform Inversion based on Waveform Mode Decomposition
NASA Astrophysics Data System (ADS)
Hu, Yong; Han, Liguo; Xu, Zhuo; Zhang, Fengjiao; Zeng, Jingwen
2017-04-01
Full Waveform Inversion (FWI) can be used to build high-resolution velocity models, but there are still many challenges in processing seismic field data. The most difficult problem is how to recover the long-wavelength components of subsurface velocity models when the seismic data lack low-frequency information and long offsets. To solve this problem, we propose to use the Waveform Mode Decomposition (WMD) method to reconstruct low-frequency information for FWI and obtain a smooth model, so that the dependence of FWI on the initial model can be reduced. In this paper, we use the adjoint-state method to calculate the gradient for Waveform Mode Decomposition Full Waveform Inversion (WMDFWI). Through illustrative numerical examples, we show that the low-frequency information reconstructed by the WMD method is very reliable. WMDFWI, in combination with the adaptive multi-step inversion strategy, obtains more faithful and accurate final inversion results. Numerical examples show that even if the initial velocity model is far from the true model and lacks low-frequency information, we can still obtain good inversion results with the WMD method. Anti-noise tests show that the adaptive multi-step inversion strategy for WMDFWI has a strong ability to resist Gaussian noise. The WMD method is promising for land seismic FWI, because it can reconstruct the low-frequency information, lower the dominant frequency in the adjoint source, and has a strong ability to resist noise.
Rudraraju, Shiva; Van der Ven, Anton; Garikipati, Krishna
2016-06-10
Here, we present a phenomenological treatment of diffusion-driven martensitic phase transformations in multi-component crystalline solids that arise from non-convex free energies in mechanical and chemical variables. The treatment describes diffusional phase transformations that are accompanied by symmetry-breaking structural changes of the crystal unit cell and reveals the importance of a mechanochemical spinodal, defined as the region in strain-composition space, where the free-energy density function is non-convex. The approach is relevant to phase transformations wherein the structural order parameters can be expressed as linear combinations of strains relative to a high-symmetry reference crystal. The governing equations describing mechanochemical spinodal decomposition are variationally derived from a free-energy density function that accounts for interfacial energy via gradients of the rapidly varying strain and composition fields. A robust computational framework for treating the coupled, higher-order diffusion and nonlinear strain gradient elasticity problems is presented. Because the local strains in an inhomogeneous, transforming microstructure can be finite, the elasticity problem must account for geometric nonlinearity. An evaluation of available experimental phase diagrams and first-principles free energies suggests that mechanochemical spinodal decomposition should occur in metal hydrides such as ZrH2-2c. The rich physics that ensues is explored in several numerical examples in two and three dimensions, and the relevance of the mechanism is discussed in the context of important electrode materials for Li-ion batteries and high-temperature ceramics.
Smolinski, Tomasz G; Buchanan, Roger; Boratyn, Grzegorz M; Milanova, Mariofanna; Prinz, Astrid A
2006-01-01
Background Independent Component Analysis (ICA) proves to be useful in the analysis of neural activity, as it allows for identification of distinct sources of activity. Applied to measurements registered in a controlled setting and under exposure to an external stimulus, it can facilitate analysis of the impact of the stimulus on those sources. The link between the stimulus and a given source can be verified by a classifier that is able to "predict" the condition a given signal was registered under, solely based on the components. However, the ICA's assumption about statistical independence of sources is often unrealistic and turns out to be insufficient to build an accurate classifier. Therefore, we propose to utilize a novel method, based on hybridization of ICA, multi-objective evolutionary algorithms (MOEA), and rough sets (RS), that attempts to improve the effectiveness of signal decomposition techniques by providing them with "classification-awareness." Results The preliminary results described here are very promising, and further investigation of other MOEAs and/or RS-based classification accuracy measures should be pursued. Even a quick visual analysis of those results can provide an interesting insight into the problem of neural activity analysis. Conclusion We present a methodology for classificatory decomposition of signals. One of the main advantages of our approach is the fact that, rather than solely relying on often unrealistic assumptions about statistical independence of sources, components are generated in the light of the underlying classification problem itself. PMID:17118151
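For context, the sketch below shows only the plain-ICA baseline against which the classification-aware decomposition is motivated: FastICA components from scikit-learn fed to a simple classifier, on synthetic mixtures. The MOEA/rough-set hybridization of the paper is not reproduced, and the data-generating model is an assumption.

```python
# Plain-ICA baseline only: FastICA components fed to a simple classifier on
# synthetic mixtures; the paper's MOEA/rough-set machinery is not reproduced.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels = 300, 8
labels = rng.integers(0, 2, n_trials)                 # stimulus condition per trial

# Two latent sources; only the first depends on the condition label.
sources = np.column_stack([labels + 0.5 * rng.normal(size=n_trials),
                           rng.normal(size=n_trials)])
mixing = rng.normal(size=(2, n_channels))
X = sources @ mixing + 0.1 * rng.normal(size=(n_trials, n_channels))

components = FastICA(n_components=2, random_state=0).fit_transform(X)
score = cross_val_score(LogisticRegression(), components, labels, cv=5).mean()
print(f"condition decoded from ICA components: {score:.2f} accuracy")
```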
A general framework of noise suppression in material decomposition for dual-energy CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petrongolo, Michael; Dong, Xue; Zhu, Lei, E-mail: leizhu@gatech.edu
Purpose: As a general problem of dual-energy CT (DECT), noise amplification in material decomposition severely reduces the signal-to-noise ratio on the decomposed images compared to that on the original CT images. In this work, the authors propose a general framework of noise suppression in material decomposition for DECT. The method is based on an iterative algorithm recently developed in their group for image-domain decomposition of DECT, with an extension to include nonlinear decomposition models. The generalized framework of iterative DECT decomposition enables beam-hardening correction with simultaneous noise suppression, which improves the clinical benefits of DECT. Methods: The authors propose to suppress noise on the decomposed images of DECT using convex optimization, which is formulated in the form of least-squares estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance–covariance matrix of the decomposed images as the penalty weight in the least-squares term. Analytical formulas are derived to compute the variance–covariance matrix for decomposed images with general-form numerical or analytical decomposition. As a demonstration, the authors implement the proposed algorithm on phantom data using an empirical polynomial function of decomposition measured on a calibration scan. The polynomial coefficients are determined from the projection data acquired on a wedge phantom, and the signal decomposition is performed in the projection domain. Results: On the Catphan®600 phantom, the proposed noise suppression method reduces the average noise standard deviation of basis material images by one to two orders of magnitude, with a superior performance on spatial resolution as shown in comparisons of line-pair images and modulation transfer function measurements. On the synthesized monoenergetic CT images, the noise standard deviation is reduced by a factor of 2–3. By using nonlinear decomposition on projections, the authors’ method effectively suppresses the streaking artifacts of beam hardening and obtains more uniform images than their previous approach based on a linear model. Similar performance of noise suppression is observed in the results of an anthropomorphic head phantom and a pediatric chest phantom generated by the proposed method. With beam-hardening correction enabled by their approach, the image spatial nonuniformity on the head phantom is reduced from around 10% on the original CT images to 4.9% on the synthesized monoenergetic CT image. On the pediatric chest phantom, their method suppresses image noise standard deviation by a factor of around 7.5, and compared with linear decomposition, it reduces the estimation error of electron densities from 33.3% to 8.6%. Conclusions: The authors propose a general framework of noise suppression in material decomposition for DECT. Phantom studies have shown the proposed method improves the image uniformity and the accuracy of electron density measurements by effective beam-hardening correction and reduces noise level without noticeable resolution loss.
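The core estimator described in the Methods above, a least-squares data term weighted by the inverse variance-covariance plus a smoothness penalty, can be illustrated on a 1-D toy signal as below. This is a schematic analogue, not the authors' DECT implementation; the piecewise signal, noise variance, and regularization strength are assumptions.

```python
# Toy 1-D analogue of the penalized estimator: weighted least squares with the
# inverse variance as the data weight plus a smoothness (difference) penalty.
import numpy as np

n = 200
rng = np.random.default_rng(7)
truth = np.where(np.arange(n) < n // 2, 1.0, 2.0)        # piecewise "material" signal
noise_var = 0.25 * np.ones(n)                            # estimated per-pixel variance
noisy = truth + rng.normal(0, np.sqrt(noise_var))

W = np.diag(1.0 / noise_var)                             # inverse-variance weight
D = np.diff(np.eye(n), axis=0)                           # first-difference operator
beta = 5.0                                               # regularization strength (assumed)

# argmin_x (x - noisy)^T W (x - noisy) + beta * ||D x||^2, solved in closed form
x = np.linalg.solve(W + beta * D.T @ D, W @ noisy)
print("error std before/after:",
      round(float((noisy - truth).std()), 3), "->",
      round(float((x - truth).std()), 3))
```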
A Diagnostic Taxonomy of Adult Career Problems.
ERIC Educational Resources Information Center
Campbell, Robert E.; Cellini, James V.
1981-01-01
Developed a taxonomy for the differential diagnosis of adult career development problems. Problem categories identified were: (1) problems in career decision making; (2) problems in implementing career plans; (3) problems in organizational/institutional performance; and (4) problems in organizational/institutional adaption. (Author)
Modification of the Integrated Sasang Constitutional Diagnostic Model
Nam, Jiho
2017-01-01
In 2012, the Korea Institute of Oriental Medicine proposed an objective and comprehensive physical diagnostic model to address quantification problems in the existing Sasang constitutional diagnostic method. However, certain issues have been raised regarding a revision of the proposed diagnostic model. In this paper, we propose various methodological approaches to address the problems of the previous diagnostic model. Firstly, more useful variables are selected in each component. Secondly, the least absolute shrinkage and selection operator is used to reduce multicollinearity without the modification of explanatory variables. Thirdly, proportions of SC types and age are considered to construct individual diagnostic models and classify the training set and the test set for reflecting the characteristics of the entire dataset. Finally, an integrated model is constructed with explanatory variables of individual diagnosis models. The proposed integrated diagnostic model significantly improves the sensitivities for both the male SY type (36.4% → 62.0%) and the female SE type (43.7% → 64.5%), which were areas of limitation of the previous integrated diagnostic model. The ideas of these new algorithms are expected to contribute not only to the scientific development of Sasang constitutional medicine in Korea but also to that of other diagnostic methods for traditional medicine. PMID:29317897
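The LASSO-based variable-selection step mentioned above can be sketched with an L1-penalized logistic regression in scikit-learn, as below. The data are placeholders, not the KIOM body-measurement variables, and the regularization strength is an assumption.

```python
# Sketch of LASSO-style variable selection with L1-penalized logistic regression
# (placeholder data; the actual diagnostic variables are not reproduced here).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n, p = 400, 20                                   # subjects x candidate variables
X = StandardScaler().fit_transform(rng.normal(size=(n, p)))
y = (X[:, 0] - 0.8 * X[:, 3] + rng.normal(0, 1, n) > 0).astype(int)  # toy type label

model = LogisticRegression(penalty="l1", solver="liblinear", C=0.2).fit(X, y)
selected = np.flatnonzero(model.coef_[0])        # variables kept by the L1 penalty
print("selected variable indices:", selected)
```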
High-performance parallel analysis of coupled problems for aircraft propulsion
NASA Technical Reports Server (NTRS)
Felippa, C. A.; Farhat, C.; Lanteri, S.; Gumaste, U.; Ronaghi, M.
1994-01-01
Applications of high-performance parallel computation are described for the analysis of complete jet engines, considered as a multi-discipline coupled problem. The coupled problem involves the interaction of structures with gas dynamics, heat conduction and heat transfer in aircraft engines. The methodology issues addressed include: consistent discrete formulation of coupled problems with emphasis on coupling phenomena; the effect of partitioning strategies, augmentation and temporal solution procedures; sensitivity of the response to problem parameters; and methods for interfacing multiscale discretizations in different single fields. The computer implementation issues addressed include: parallel treatment of coupled systems; domain decomposition and mesh partitioning strategies; data representation in object-oriented form and mapping to hardware-driven representation; and tradeoff studies between partitioning schemes and fully coupled treatment.
DOT National Transportation Integrated Search
1982-04-01
A comprehensive review of existing basic diagnostic techniques applicable to the railcar roller bearing defect and failure problem was made. Of the potentially feasible diagnostic techniques identified, high frequency vibration was selected for exper...
The solution of the optimization problem of small energy complexes using linear programming methods
NASA Astrophysics Data System (ADS)
Ivanin, O. A.; Director, L. B.
2016-11-01
Linear programming methods were used for solving the optimization problem of schemes and operation modes of distributed generation energy complexes. Applicability conditions of the simplex method, applied to energy complexes including installations of renewable energy (solar, wind), diesel generators and energy storage, are considered. The analysis of decomposition algorithms for various schemes of energy complexes was made. The results of optimization calculations for energy complexes, operated autonomously and as a part of a distribution grid, are presented.
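As a minimal illustration of the class of linear programs discussed above, the sketch below solves a single-period dispatch problem with SciPy's linprog; the cost coefficients, resource limits, and load are invented for the example and are not taken from the paper.

```python
# Minimal LP sketch of a single-period dispatch problem (invented coefficients);
# scipy.optimize.linprog applies the kind of LP machinery discussed above.
from scipy.optimize import linprog

# Decision variables: [diesel_kW, solar_kW, battery_discharge_kW]
cost = [0.30, 0.02, 0.05]                 # $/kWh, assumed
A_ub = [[0, 1, 0],                        # solar limited by available irradiance
        [0, 0, 1]]                        # discharge limited by state of charge
b_ub = [40.0, 15.0]
A_eq = [[1, 1, 1]]                        # total supply must meet the load
b_eq = [60.0]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 80), (0, None), (0, None)])
print("dispatch (kW):", res.x, "cost:", round(res.fun, 2))
```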
Stability region maximization by decomposition-aggregation method. [Skylab stability
NASA Technical Reports Server (NTRS)
Siljak, D. D.; Cuk, S. M.
1974-01-01
The aim of this work is to improve the estimates of the stability regions by formulating and resolving a proper maximization problem. The solution of the problem provides the best estimate of the maximal value of the structural parameter and at the same time yields the optimum comparison system, which can be used to determine the degree of stability of the Skylab. The analysis procedure is completely computerized, resulting in a flexible and powerful tool for stability considerations of large-scale linear as well as nonlinear systems.
Fuzzy spaces topology change as a possible solution to the black hole information loss paradox
NASA Astrophysics Data System (ADS)
Silva, C. A. S.
2009-06-01
The black hole information loss paradox is one of the most intricate problems in modern theoretical physics. One proposal to resolve it relies on topology change; however, this approach has encountered obstacles related to unitarity and cluster decomposition (locality). In this Letter we argue that, by modelling the black hole's event horizon as a noncommutative manifold - the fuzzy sphere - we can overcome the problems with topology change, obtaining a possible solution to the black hole information loss paradox.
Computational structures for robotic computations
NASA Technical Reports Server (NTRS)
Lee, C. S. G.; Chang, P. R.
1987-01-01
The computational problems of inverse kinematics and inverse dynamics of robot manipulators are discussed, taking advantage of parallelism and pipelined architectures. For the computation of the inverse kinematic position solution, a maximally pipelined CORDIC architecture has been designed based on a functional decomposition of the closed-form joint equations. For the inverse dynamics computation, an efficient p-fold parallel algorithm that overcomes the recurrence problem of the Newton-Euler equations of motion and achieves the time lower bound of O(log₂ n) has also been developed.
Solving the multi-frequency electromagnetic inverse source problem by the Fourier method
NASA Astrophysics Data System (ADS)
Wang, Guan; Ma, Fuming; Guo, Yukun; Li, Jingzhi
2018-07-01
This work is concerned with an inverse problem of identifying the current source distribution of the time-harmonic Maxwell's equations from multi-frequency measurements. Motivated by the Fourier method for the scalar Helmholtz equation and the polarization vector decomposition, we propose a novel method for determining the source function in the full vector Maxwell's system. Rigorous mathematical justifications of the method are given and numerical examples are provided to demonstrate the feasibility and effectiveness of the method.
Xia, Hong; Luo, Zhendong
2017-01-01
In this study, we devote ourselves to establishing a stabilized mixed finite element (MFE) reduced-order extrapolation (SMFEROE) model holding seldom unknowns for the two-dimensional (2D) unsteady conduction-convection problem via the proper orthogonal decomposition (POD) technique, analyzing the existence and uniqueness and the stability as well as the convergence of the SMFEROE solutions and validating the correctness and dependability of the SMFEROE model by means of numerical simulations.
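The POD step underlying such reduced-order models can be sketched generically as a truncated SVD of a snapshot matrix; the Python fragment below is an assumed, generic illustration on synthetic data and not the SMFEROE implementation.

```python
# Generic proper orthogonal decomposition (POD) sketch: stack snapshots as
# columns, truncate the SVD, and project full-order states onto the basis.
import numpy as np

rng = np.random.default_rng(1)
n_dof, n_snapshots, r = 2000, 60, 8
# Synthetic snapshot matrix with a hidden low-dimensional structure plus noise.
modes = rng.normal(size=(n_dof, r))
coeffs = rng.normal(size=(r, n_snapshots))
snapshots = modes @ coeffs + 0.01 * rng.normal(size=(n_dof, n_snapshots))

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :r]                             # POD basis (first r left singular vectors)
energy = np.cumsum(s**2) / np.sum(s**2)
print("snapshot energy captured by r modes:", energy[r - 1])

u = snapshots[:, 0]                          # a full-order state
a = basis.T @ u                              # reduced coordinates (projection)
u_rom = basis @ a                            # reduced-order reconstruction
print("relative reconstruction error:", np.linalg.norm(u - u_rom) / np.linalg.norm(u))
```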
ERIC Educational Resources Information Center
International Labour Office, Islamabad (Pakistan). Asian and Pacific Skill Development Programme.
This screening and diagnostic procedure is intended to identify Level 1 adults with specific learning problems. The adults not meeting criteria on the assessments for visual and auditory functions should be referred to proper medical services for full evaluations. A prescriptive teaching program should be specifically designed to meet needs of…
Mathematical Frameworks for Diagnostics, Prognostics and Condition Based Maintenance Problems
2008-08-15
Mathematical Frameworks for Diagnostics, Prognostics and Condition Based Maintenance Problems (W911NF-05-1-0426). Approved for Public Release; Distribution Unlimited. ... parallel and distributed computing environment were researched. In support of the Condition Based Maintenance (CBM) philosophy, a theoretical framework
ERIC Educational Resources Information Center
Finch, Curtis R.
The objective of this study was to investigate the effects of three methods of teaching diagnostic problem-solving (troubleshooting) to automotive students. The sample consisted of 45 community college students enrolled in automotive courses. Initially, all students received a presentation on ignition principles, and the Otis Mental Ability Test…
The Multiscale Robin Coupled Method for flows in porous media
NASA Astrophysics Data System (ADS)
Guiraldello, Rafael T.; Ausas, Roberto F.; Sousa, Fabricio S.; Pereira, Felipe; Buscaglia, Gustavo C.
2018-02-01
A multiscale mixed method aiming at the accurate approximation of velocity and pressure fields in heterogeneous porous media is proposed. The procedure is based on a new domain decomposition method in which the local problems are subject to Robin boundary conditions. The domain decomposition procedure is defined in terms of two independent spaces on the skeleton of the decomposition, corresponding to interface pressures and fluxes, that can be chosen with great flexibility to accommodate local features of the underlying permeability fields. The well-posedness of the new domain decomposition procedure is established and its connection with the method of Douglas et al. (1993) [12] is identified, which also allows us to reinterpret the known procedure as an optimized Schwarz (or Two-Lagrange-Multiplier) method. The multiscale property of the new domain decomposition method is indicated, and its relation with the Multiscale Mortar Mixed Finite Element Method (MMMFEM) and the Multiscale Hybrid-Mixed (MHM) Finite Element Method is discussed. Numerical simulations are presented to illustrate several features of the new method. Initially we illustrate the possibility of switching from the MMMFEM to the MHM by suitably varying the Robin condition parameter in the new multiscale method. Then we turn our attention to realistic flows in high-contrast, channelized porous formations. We show that for a range of values of the Robin condition parameter our method provides better approximations for pressure and velocity than those computed with either the MMMFEM or the MHM. This is an indication that our method has the potential to produce more accurate velocity fields in the presence of rough, realistic permeability fields of petroleum reservoirs.
NASA Technical Reports Server (NTRS)
Kuo, Kenneth K.; Lu, Yeu-Cherng; Chiaverini, Martin J.; Harting, George C.; Johnson, David K.; Serin, Nadir
1995-01-01
The experimental study on the fundamental processes involved in fuel decomposition and boundary-layer combustion in hybrid rocket motors is being conducted on a continuing basis at the High Pressure Combustion Laboratory of The Pennsylvania State University. This research will provide a useful engineering technology base for the development of hybrid rocket motors as well as a fundamental understanding of the complex processes involved in hybrid propulsion. A high-pressure, 2-D slab motor has been designed, manufactured, and used to conduct seven test firings using HTPB fuel processed at PSU. A total of 20 fuel slabs have been received from the McDonnell Douglas Aerospace Corporation. Ten of these fuel slabs contain an array of fine-wire thermocouples for measuring solid fuel surface and subsurface temperatures. Diagnostic instrumentation used in the tests includes high-frequency pressure transducers for measuring static and dynamic motor pressures and fine-wire thermocouples for measuring solid fuel surface and subsurface temperatures. The ultrasonic pulse-echo technique and a real-time X-ray radiography system have been used to obtain independent measurements of instantaneous solid fuel regression rates.
NASA Astrophysics Data System (ADS)
Gruber, Ralph; Periaux, Jaques; Shaw, Richard Paul
Recent advances in computational mechanics are discussed in reviews and reports. Topics addressed include spectral superpositions on finite elements for shear banding problems, strain-based finite plasticity, numerical simulation of hypersonic viscous continuum flow, constitutive laws in solid mechanics, dynamics problems, fracture mechanics and damage tolerance, composite plates and shells, contact and friction, metal forming and solidification, coupling problems, and adaptive FEMs. Consideration is given to chemical flows, convection problems, free boundaries and artificial boundary conditions, domain-decomposition and multigrid methods, combustion and thermal analysis, wave propagation, mixed and hybrid FEMs, integral-equation methods, optimization, software engineering, and vector and parallel computing.
When Does Changing Representation Improve Problem-Solving Performance?
NASA Technical Reports Server (NTRS)
Holte, Robert; Zimmer, Robert; MacDonald, Alan
1992-01-01
The aim of changing representation is the improvement of problem-solving efficiency. For the most widely studied family of methods for change of representation, it is shown that the value of a single parameter, called the expansion factor, is critical in determining (1) whether the change of representation will improve or degrade problem-solving efficiency and (2) whether the solutions produced using the change of representation will or will not be exponentially longer than the shortest solution. A method of computing the expansion factor for a given change of representation is sketched in general and described in detail for homomorphic changes of representation. The results are illustrated with homomorphic decompositions of the Towers of Hanoi problem.
NASA Astrophysics Data System (ADS)
Clayton, J. D.
2017-02-01
A theory of deformation of continuous media based on concepts from Finsler differential geometry is presented. The general theory accounts for finite deformations, nonlinear elasticity, and changes in internal state of the material, the latter represented by elements of a state vector of generalized Finsler space whose entries consist of one or more order parameter(s). Two descriptive representations of the deformation gradient are considered. The first invokes an additive decomposition and is applied to problems involving localized inelastic deformation mechanisms such as fracture. The second invokes a multiplicative decomposition and is applied to problems involving distributed deformation mechanisms such as phase transformations or twinning. Appropriate free energy functions are posited for each case, and Euler-Lagrange equations of equilibrium are derived. Solutions are obtained for specific problems of tensile fracture of an elastic cylinder and for amorphization of a crystal under spherical and uniaxial compression. The Finsler-based approach is demonstrated to be more general and potentially more physically descriptive than existing hyperelasticity models couched in Riemannian geometry or Euclidean space, without incorporation of supplementary ad hoc equations or spurious fitting parameters. Predictions for single crystals of boron carbide ceramic agree qualitatively, and in many instances quantitatively, with results from physical experiments and atomic simulations involving structural collapse and failure of the crystal along its c-axis.
Liquid fuel reforming using microwave plasma at atmospheric pressure
NASA Astrophysics Data System (ADS)
Miotk, Robert; Hrycak, Bartosz; Czylkowski, Dariusz; Dors, Miroslaw; Jasinski, Mariusz; Mizeraczyk, Jerzy
2016-06-01
Hydrogen is expected to be one of the most promising energy carriers. Due to the growing interest in hydrogen production technologies, in this paper we present the results of experimental investigations of the thermal decomposition and dry reforming of two alcohols (ethanol and isopropanol) in a waveguide-supplied, metal-cylinder-based, nozzleless microwave (915 MHz) plasma source (MPS). The hydrogen production experiments were preceded by investigations of the electrodynamic properties of the MPS and by plasma spectroscopic diagnostics. All experimental tests were performed with the working gas (nitrogen or carbon dioxide) flow rate ranging from 1200 to 3900 normal litres per hour and an absorbed microwave power of up to 5 kW. The alcohols were introduced into the plasma using an induction-heating vaporizer. The thermal decomposition of ethanol resulted in a hydrogen selectivity of up to 100%. The hydrogen production rate was up to 1150 NL(H2) h-1 and the energy yield was 267 NL(H2) kWh-1 of absorbed microwave energy. Due to intense soot production, the thermal decomposition process was not appropriate for isopropanol conversion. For the dry reforming process, isopropanol was more efficient for hydrogen production than ethanol. The rate and energy yield of hydrogen production were up to 1116 NL(H2) h-1 and 223 NL(H2) kWh-1 of microwave energy used, respectively. However, the hydrogen selectivity was no greater than 37%. Selected experimental results were compared with the results of numerical modeling.
Rutland, J Brian; Sheets, Tilman; Young, Tony
2007-12-01
This exploratory study examines a subset of mobile phone use, the compulsive use of short message service (SMS) text messaging. A measure of SMS use, the SMS Problem Use Diagnostic Questionnaire (SMS-PUDQ), was developed and found to possess acceptable reliability and validity when compared to other measures such as self-reports of time spent using SMS and scores on a survey of problem mobile phone use. Implications for the field of addiction research, technological and behavioral addictions in particular, are discussed, and directions for future research are suggested.
Weak characteristic information extraction from early fault of wind turbine generator gearbox
NASA Astrophysics Data System (ADS)
Xu, Xiaoli; Liu, Xiuli
2017-09-01
Given the weak early degradation characteristic information during early fault evolution in gearbox of wind turbine generator, traditional singular value decomposition (SVD)-based denoising may result in loss of useful information. A weak characteristic information extraction based on μ-SVD and local mean decomposition (LMD) is developed to address this problem. The basic principle of the method is as follows: Determine the denoising order based on cumulative contribution rate, perform signal reconstruction, extract and subject the noisy part of signal to LMD and μ-SVD denoising, and obtain denoised signal through superposition. Experimental results show that this method can significantly weaken signal noise, effectively extract the weak characteristic information of early fault, and facilitate the early fault warning and dynamic predictive maintenance.
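A hedged sketch of the SVD-denoising ingredient is given below: the truncation order is chosen from the cumulative contribution rate of the singular values, as the abstract describes, while the Hankel embedding, the 90% energy threshold, and the test signal are assumptions of this illustration rather than details of the published method.

```python
# Assumed sketch of SVD-based denoising with the truncation order picked from
# the cumulative contribution rate of the singular values.
import numpy as np
from scipy.linalg import hankel, svd

def svd_denoise(x, embed=128, energy=0.90):
    H = hankel(x[:embed], x[embed - 1:])          # trajectory (Hankel) matrix
    U, s, Vt = svd(H, full_matrices=False)
    # Smallest rank whose cumulative singular-value energy reaches the threshold.
    k = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), energy)) + 1
    Hk = (U[:, :k] * s[:k]) @ Vt[:k, :]           # rank-k approximation
    # Average along anti-diagonals to map the matrix back to a 1-D signal.
    return np.array([np.mean(np.fliplr(Hk).diagonal(i))
                     for i in range(Hk.shape[1] - 1, -Hk.shape[0], -1)])

t = np.linspace(0, 1, 2048)
clean = np.sin(2 * np.pi * 37 * t) + 0.3 * np.sin(2 * np.pi * 123 * t)
noisy = clean + 0.3 * np.random.default_rng(2).normal(size=t.size)
print("noise std before/after:", np.std(noisy - clean), np.std(svd_denoise(noisy) - clean))
```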
Automated Decomposition of Model-based Learning Problems
NASA Technical Reports Server (NTRS)
Williams, Brian C.; Millar, Bill
1996-01-01
A new generation of sensor-rich, massively distributed autonomous systems is being developed that has the potential for unprecedented performance, such as smart buildings, reconfigurable factories, adaptive traffic systems and remote earth ecosystem monitoring. To achieve high performance these massive systems will need to accurately model themselves and their environment from sensor information. Accomplishing this on a grand scale requires automating the art of large-scale modeling. This paper presents a formalization of decompositional model-based learning (DML), a method developed by observing a modeler's expertise at decomposing large scale model estimation tasks. The method exploits a striking analogy between learning and consistency-based diagnosis. Moriarty, an implementation of DML, has been applied to thermal modeling of a smart building, demonstrating a significant improvement in learning rate.
Newton–Hooke-type symmetry of anisotropic oscillators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, P.M., E-mail: zhpm@impcas.ac.cn; Horvathy, P.A., E-mail: horvathy@lmpt.univ-tours.fr; Laboratoire de Mathématiques et de Physique Théorique, Université de Tours
2013-06-15
Rotation-less Newton–Hooke-type symmetry, found recently in the Hill problem and instrumental in explaining the center-of-mass decomposition, is generalized to an arbitrary anisotropic oscillator in the plane. Conversely, the latter system is shown, by the orbit method, to be the most general one with such a symmetry. Full Newton–Hooke symmetry is recovered in the isotropic case. Star escape from a galaxy is studied as an application. Highlights: Rotation-less Newton–Hooke (NH) symmetry is generalized to an arbitrary anisotropic oscillator. The orbit method is used to find the most general case for rotation-less NH symmetry. The NH symmetry is decomposed into Heisenberg algebras based on chiral decomposition.
The next organizational challenge: finding and addressing diagnostic error.
Graber, Mark L; Trowbridge, Robert; Myers, Jennifer S; Umscheid, Craig A; Strull, William; Kanter, Michael H
2014-03-01
Although health care organizations (HCOs) are intensely focused on improving the safety of health care, efforts to date have almost exclusively targeted treatment-related issues. The literature confirms that the approaches HCOs use to identify adverse medical events are not effective in finding diagnostic errors, so the initial challenge is to identify cases of diagnostic error. WHY HEALTH CARE ORGANIZATIONS NEED TO GET INVOLVED: HCOs are preoccupied with many quality- and safety-related operational and clinical issues, including performance measures. The case for paying attention to diagnostic errors, however, is based on the following four points: (1) diagnostic errors are common and harmful, (2) high-quality health care requires high-quality diagnosis, (3) diagnostic errors are costly, and (4) HCOs are well positioned to lead the way in reducing diagnostic error. FINDING DIAGNOSTIC ERRORS: Current approaches to identifying diagnostic errors, such as occurrence screens, incident reports, autopsy, and peer review, were not designed to detect diagnostic issues (or problems of omission in general) and/or rely on voluntary reporting. The realization that the existing tools are inadequate has spurred efforts to identify novel tools that could be used to discover diagnostic errors or breakdowns in the diagnostic process that are associated with errors. New approaches--Maine Medical Center's case-finding of diagnostic errors by facilitating direct reports from physicians and Kaiser Permanente's electronic health record--based reports that detect process breakdowns in the followup of abnormal findings--are described in case studies. By raising awareness and implementing targeted programs that address diagnostic error, HCOs may begin to play an important role in addressing the problem of diagnostic error.
USDA-ARS?s Scientific Manuscript database
Invasive species and acid rain cause global environmental problems. Limited information exists, however, concerning the effects of acid rain on the invasiveness of these plants. For example, creeping daisy, an invasive exotic allelopathic weed, has caused great damage in southern China where acid ra...
ERIC Educational Resources Information Center
Laski, Elida V.; Vasilyeva, Marina; Schiffman, Joanna
2016-01-01
Understanding of base 10 and place value are important foundational math concepts that are associated with higher use of decomposition strategies and higher accuracy on addition problems (Laski, Ermakova, & Vasilyeva, 2014; Fuson, 1990; Fuson & Briars, 1990; National Research Council, 2001). The current study examined base-10 knowledge,…
Zhang, Ming Jin; Chen, Liang Hua; Zhang, Jian; Yang, Wan Qin; Liu, Hua; Li, Xun; Zhang, Yan
2016-03-01
Nowadays, large areas of plantations have caused serious ecological problems such as soil degradation and biodiversity decline. Artificial tending thinning and the construction of mixed forest are frequently used measures in plantation management. To understand the effect of this operation mode on the nutrient cycle of the plantation ecosystem, we monitored the dynamics of microbial biomass carbon and nitrogen during foliar litter decomposition of Pinus massoniana and Toona ciliata in seven sizes of forest gap (G1: 100 m², G2: 225 m², G3: 400 m², G4: 625 m², G5: 900 m², G6: 1225 m², G7: 1600 m²) in 42-year-old P. massoniana plantations in a hilly area of the upper Yangtze River. The results showed that small and medium-sized forest gaps (G1-G5) were more advantageous for the increase of microbial biomass carbon and nitrogen during foliar litter decomposition. Over the course of the foliar litter decomposition experiment (360 d), microbial biomass carbon (MBC) and microbial biomass nitrogen (MBN) in P. massoniana foliar litter and MBN in T. ciliata foliar litter first increased and then decreased, reaching maxima of 9.87, 0.22 and 0.80 g·kg⁻¹, respectively, on day 180. The peak of MBC in T. ciliata foliar litter (44.40 g·kg⁻¹) appeared on day 90. Microbial biomass carbon and nitrogen in T. ciliata were significantly higher than in P. massoniana during foliar litter decomposition. Microbial biomass carbon and nitrogen in foliar litter were not only significantly associated with average daily temperature and the water content of the foliar litter, but also closely related to changes in litter quality. Therefore, in thinning operations, forest gap size could be controlled in the range of 100 to 900 m² to facilitate the increase of microbial biomass carbon and nitrogen during foliar litter decomposition, accelerate the decomposition of foliar litter, and improve the soil fertility of plantations.
ERIC Educational Resources Information Center
Albanese, Mark A.; Jacobs, Richard M.
Preliminary psychometric data assessing the reliability and validity of a method used to measure the diagnostic reasoning and problem-solving skills of predoctoral students in orthodontia are described. The measurement approach consisted of sets of patient demographic data and dental photos and x-rays, accompanied by a set of 33 multiple-choice…
ERIC Educational Resources Information Center
Lampert, T. L.; Polanczyk, G.; Tramontina, S.; Mardini, V.; Rohde, L. A.
2004-01-01
Objective: To evaluate the diagnostic performance of the Attention Problem Scale of the Child Behavior Checklist (CBCL-APS) for the screening of Attention-Deficit/Hyperactivity Disorder (ADHD) in a sample of Brazilian children and adolescents. Methods: The CBCL-APS was given to 763 children and adolescents. Child psychiatrists using DSM-IV…
A spectral analysis of the domain decomposed Monte Carlo method for linear systems
Slattery, Stuart R.; Evans, Thomas M.; Wilson, Paul P. H.
2015-09-08
The domain decomposed behavior of the adjoint Neumann-Ulam Monte Carlo method for solving linear systems is analyzed using the spectral properties of the linear operator. Relationships for the average length of the adjoint random walks, a measure of convergence speed and serial performance, are made with respect to the eigenvalues of the linear operator. In addition, relationships for the effective optical thickness of a domain in the decomposition are presented based on the spectral analysis and diffusion theory. Using the effective optical thickness, the Wigner rational approximation and the mean chord approximation are applied to estimate the leakage fraction of random walks from a domain in the decomposition as a measure of parallel performance and potential communication costs. The one-speed, two-dimensional neutron diffusion equation is used as a model problem in numerical experiments to test the models for symmetric operators with spectral qualities similar to light water reactor problems. We find, in general, that the derived approximations show good agreement with random walk lengths and leakage fractions computed by the numerical experiments.
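For orientation, the sketch below implements the simpler forward (direct) Neumann-Ulam Monte Carlo scheme for a Jacobi-split system x = Hx + f; it is an assumed illustration of the random-walk estimator itself, not the adjoint, domain-decomposed solver analyzed in the paper, and the test matrix and cutoff are invented.

```python
# Forward Neumann-Ulam Monte Carlo sketch: random walks weighted by H estimate
# the Neumann series solution of x = H x + f (requires spectral radius of H < 1).
import numpy as np

rng = np.random.default_rng(3)
n = 20
A = np.eye(n) * 4 + rng.normal(scale=0.1, size=(n, n))   # diagonally dominant test matrix
b = rng.normal(size=n)
D = np.diag(np.diag(A))
H = np.eye(n) - np.linalg.solve(D, A)                    # Jacobi-split iteration matrix
f = np.linalg.solve(D, b)                                # fixed point of x = H x + f solves A x = b

def mc_solve(H, f, n_walks=1000, w_cut=1e-4):
    n = H.shape[0]
    P = np.abs(H) / np.abs(H).sum(axis=1, keepdims=True)  # row-stochastic transition matrix
    x = np.zeros(n)
    for i in range(n):
        for _ in range(n_walks):
            state, weight, est = i, 1.0, f[i]
            while abs(weight) > w_cut:                    # walk until its weight is negligible
                nxt = rng.choice(n, p=P[state])
                weight *= H[state, nxt] / P[state, nxt]
                est += weight * f[nxt]
                state = nxt
            x[i] += est / n_walks
    return x

print("max error vs direct solve:",
      np.max(np.abs(mc_solve(H, f) - np.linalg.solve(A, b))))
```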
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sofronov, I.D.; Voronin, B.L.; Butnev, O.I.
1997-12-31
The aim of this work is to develop a 3D parallel program for the numerical calculation of a gas dynamics problem with heat conduction on distributed-memory computational systems (CS), satisfying the condition that the numerical results be independent of the number of processors involved. Two fundamentally different approaches to structuring the massively parallel computations have been developed. The first approach uses a 3D data matrix decomposition that is reconstructed at each temporal cycle and is a development of parallelization algorithms for multiprocessor CS with shared memory. The second approach is based on a 3D data matrix decomposition that is not reconstructed during a temporal cycle. The program was developed on the 8-processor CS MP-3 made at VNIIEF and was adapted to the massively parallel CS Meiko-2 at LLNL by the joint efforts of the VNIIEF and LLNL staffs. A large number of numerical experiments have been carried out with different numbers of processors, up to 256, and the parallelization efficiency has been evaluated as a function of the number of processors and their parameters.
NASA Astrophysics Data System (ADS)
Chang, Der-Chen; Markina, Irina; Wang, Wei
2016-09-01
The k-Cauchy-Fueter operator D₀^(k) on one-dimensional quaternionic space H is the Euclidean version of the spin k/2 massless field operator on Minkowski space in physics. The k-Cauchy-Fueter equation for k ≥ 2 is overdetermined and its compatibility condition is given by the k-Cauchy-Fueter complex. In quaternionic analysis, these complexes play the role of the Dolbeault complex in several complex variables. We prove that a natural boundary value problem associated to this complex is regular. Then, by using the theory of regular boundary value problems, we show the Hodge-type orthogonal decomposition, and the fact that the non-homogeneous k-Cauchy-Fueter equation D₀^(k) u = f on a smooth domain Ω in H is solvable if and only if f satisfies the compatibility condition and is orthogonal to the set ℋ¹_(k)(Ω) of Hodge-type elements. This set is isomorphic to the first cohomology group of the k-Cauchy-Fueter complex over Ω, which is finite dimensional, while the second cohomology group is always trivial.
NASA Astrophysics Data System (ADS)
Pan, Xiao-Min; Wei, Jian-Gong; Peng, Zhen; Sheng, Xin-Qing
2012-02-01
The interpolative decomposition (ID) is combined with the multilevel fast multipole algorithm (MLFMA), denoted by ID-MLFMA, to handle multiscale problems. The ID-MLFMA first generates ID levels by recursively dividing the boxes at the finest MLFMA level into smaller boxes. It is specifically shown that near-field interactions with respect to the MLFMA, in the form of the matrix vector multiplication (MVM), are efficiently approximated at the ID levels. Meanwhile, computations on far-field interactions at the MLFMA levels remain unchanged. Only a small portion of matrix entries are required to approximate coupling among well-separated boxes at the ID levels, and these submatrices can be filled without computing the complete original coupling matrix. It follows that the matrix filling in the ID-MLFMA becomes much less expensive. The memory consumed is thus greatly reduced and the MVM is accelerated as well. Several factors that may influence the accuracy, efficiency and reliability of the proposed ID-MLFMA are investigated by numerical experiments. Complex targets are calculated to demonstrate the capability of the ID-MLFMA algorithm.
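The low-rank compression step can be illustrated with the generic interpolative decomposition routines in SciPy; the block below is an assumed toy example on a synthetic low-rank coupling block, not the ID-MLFMA code.

```python
# Generic interpolative decomposition (ID) of a synthetic low-rank block.
import numpy as np
import scipy.linalg.interpolative as sli

rng = np.random.default_rng(4)
m, n, true_rank = 300, 250, 12
A = rng.normal(size=(m, true_rank)) @ rng.normal(size=(true_rank, n))   # low-rank test block

k, idx, proj = sli.interp_decomp(A, 1e-8)           # rank chosen to satisfy the tolerance
B = sli.reconstruct_skel_matrix(A, k, idx)          # k "skeleton" columns of A
P = sli.reconstruct_interp_matrix(idx, proj)        # interpolation matrix, A ≈ B @ P
print("ID rank:", k,
      "relative error:", np.linalg.norm(A - B @ P) / np.linalg.norm(A))

# The appeal for MLFMA-type solvers: a matrix-vector product with the block only
# needs the k skeleton columns, never the full matrix.
x = rng.normal(size=n)
print("matvec error:", np.linalg.norm(A @ x - B @ (P @ x)) / np.linalg.norm(A @ x))
```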
NASA Astrophysics Data System (ADS)
Sibileau, Alberto; Auricchio, Ferdinando; Morganti, Simone; Díez, Pedro
2018-01-01
Architectured materials (or metamaterials) consist of a unit cell with a complex structural design repeated periodically, forming a bulk material with emergent mechanical properties. One may obtain specific macro-scale (or bulk) properties in the resulting architectured material by properly designing the unit cell. Typically, this is stated as an optimal design problem in which the parameters describing the shape and mechanical properties of the unit cell are selected in order to produce the desired bulk characteristics. This is especially pertinent given the ease of manufacturing these complex structures with 3D printers. The proper generalized decomposition provides explicit parametric solutions of parametric PDEs. Here, the same ideas are used to obtain parametric solutions of the algebraic equations arising from lattice structural models. Once the explicit parametric solution is available, the optimal design problem becomes a simple post-process. The same strategy is applied in the numerical illustrations, first to a unit cell (then homogenized with periodicity conditions), and in a second phase to the complete structure of a lattice material specimen.
Nitrogen conservation and acidity control during food wastes composting through struvite formation.
Wang, Xuan; Selvam, Ammaiyappan; Chan, Manting; Wong, Jonathan W C
2013-11-01
One of the main problems of food waste composting is the intensive acidification due to initial rapid fermentation that retards decomposition efficiency. Lime addition overcame this problem, but resulted in significant loss of nitrogen as ammonia that reduces the nutrient contents of composts. Therefore, this study investigated the feasibility of struvite formation as a strategy to control pH and reduce nitrogen loss during food waste composting. MgO and K2HPO4 were added to food waste in different molar ratios (P1, 1:1; P2, 1:2), and composted in 20-L composters. Results indicate that K2HPO4 buffered the pH in treatment P2 besides supplementing phosphate into the compost. In P2, organic decomposition reached 64% while the formation of struvite effectively reduced the nitrogen loss from 40.8% to 23.3% during composting. However, electrical conductivity of the compost increased due to the addition of Mg and P salts that requires further investigation to improve this technology. Copyright © 2013 Elsevier Ltd. All rights reserved.
Koopman decomposition of Burgers' equation: What can we learn?
NASA Astrophysics Data System (ADS)
Page, Jacob; Kerswell, Rich
2017-11-01
Burgers' equation is a well known 1D model of the Navier-Stokes equations and admits a selection of equilibria and travelling wave solutions. A series of Burgers' trajectories are examined with Dynamic Mode Decomposition (DMD) to probe the capability of the method to extract coherent structures from ``run-down'' simulations. The performance of the method depends critically on the choice of observable. We use the Cole-Hopf transformation to derive an observable which has linear, autonomous dynamics and for which the DMD modes overlap exactly with Koopman modes. This observable can accurately predict the flow evolution beyond the time window of the data used in the DMD, and in that sense outperforms other observables motivated by the nonlinearity in the governing equation. The linearizing observable also allows us to make informed decisions about often ambiguous choices in nonlinear problems, such as rank truncation and snapshot spacing. A number of rules of thumb for connecting DMD with the Koopman operator for nonlinear PDEs are distilled from the results. Related problems in low Reynolds number fluid turbulence are also discussed.
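A bare-bones exact-DMD sketch is shown below for reference; it is a generic, assumed implementation applied to a linear toy system (where DMD recovers the eigenvalues exactly), not the Burgers'/Cole-Hopf study itself.

```python
# Exact DMD from snapshot pairs; the toy system and truncation rank are assumptions.
import numpy as np

def dmd(X, Y, r):
    """Exact DMD: X, Y are snapshot matrices with Y[:, k] the successor of X[:, k]."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Ur, Sr_inv, Vr = U[:, :r], np.diag(1.0 / s[:r]), Vt[:r].conj().T
    A_tilde = Ur.conj().T @ Y @ Vr @ Sr_inv        # reduced linear operator
    evals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vr @ Sr_inv @ W                    # exact DMD modes
    return evals, modes

# Demonstration on a linear toy system x_{k+1} = A x_k, where DMD is exact.
rng = np.random.default_rng(5)
A = np.array([[0.95, 0.10], [-0.10, 0.95]])
x, cols = rng.normal(size=2), []
for _ in range(50):
    x = A @ x
    cols.append(x)
snaps = np.column_stack(cols)
evals, _ = dmd(snaps[:, :-1], snaps[:, 1:], r=2)
print("recovered eigenvalues:", np.sort_complex(evals))
print("true eigenvalues:     ", np.sort_complex(np.linalg.eigvals(A)))
```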
Decomposition method for zonal resource allocation problems in telecommunication networks
NASA Astrophysics Data System (ADS)
Konnov, I. V.; Kashuba, A. Yu
2016-11-01
We consider problems of optimal resource allocation in telecommunication networks. We first give an optimization formulation for the case where the network manager aims to distribute a homogeneous resource (bandwidth) among the users of one region with quadratic charge and fee functions, and we present simple and efficient solution methods. Next, we consider a more general problem for a provider of a wireless communication network divided into zones (clusters) with common capacity constraints. We obtain a convex quadratic optimization problem involving capacity and balance constraints. By using the dual Lagrangian method with respect to the capacity constraint, we suggest reducing the initial problem to a one-dimensional optimization problem, in which evaluating the cost function requires the independent solution of the zonal problems, each of which coincides with the single-region problem above. Some results of computational experiments confirm the applicability of the new methods.
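The dual reduction described above can be illustrated with a toy single-zone allocation problem: for each value of the capacity multiplier the user subproblems solve in closed form, and the multiplier itself is found by one-dimensional bisection. All utilities, fees, and capacities in the sketch are invented.

```python
# Toy dual (Lagrangian) decomposition of a quadratic bandwidth allocation.
import numpy as np

a = np.array([8.0, 6.0, 5.0, 9.0])      # marginal utilities of the users (assumed)
b = np.array([1.0, 0.5, 2.0, 1.5])      # quadratic fee coefficients (assumed)
C = 10.0                                # shared capacity of the zone

def allocation(lam):
    # Each user's subproblem max_x a*x - 0.5*b*x**2 - lam*x has this closed form.
    return np.maximum(0.0, (a - lam) / b)

lo, hi = 0.0, a.max()                   # total demand decreases in lam on this bracket
if allocation(0.0).sum() <= C:
    lam = 0.0                           # capacity is not binding
else:
    for _ in range(60):                 # bisection on the single dual variable
        lam = 0.5 * (lo + hi)
        lo, hi = (lam, hi) if allocation(lam).sum() > C else (lo, lam)
print("multiplier:", lam, "allocation:", allocation(lam), "total:", allocation(lam).sum())
```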
Xing, Gusheng; Wang, Shuang; Li, Chenrui; Zhao, Xinming; Zhou, Chunwu
2015-03-01
To investigate the value of quantitative iodine-based material decomposition images obtained with gemstone spectral CT imaging in the follow-up of patients with hepatocellular carcinoma (HCC) after transcatheter arterial chemoembolization (TACE). Thirty-two consecutive HCC patients with previous TACE treatment were included in this study. For the follow-up, arterial-phase (AP) and venous-phase (VP) dual-phase CT scans were performed with a single-source dual-energy CT scanner (Discovery CT 750HD, GE Healthcare). Iodine concentrations were derived from iodine-based material-decomposition images in the liver parenchyma, tumors, and coagulation necrosis (CN) areas. The iodine concentration difference (ICD) between the arterial phase and venous phase was quantitatively evaluated in the different tissues. The lesion-to-normal-parenchyma iodine concentration ratio (LNR) was calculated. ROC analysis was performed for the qualitative evaluation, and the area under the ROC curve (Az) was calculated to represent the diagnostic ability of ICD and LNR. In the 32 HCC patients, the regions of interest (ROI) for iodine concentration included liver parenchyma (n=42), tumors (n=28) and coagulation necrosis (n=24). During the AP, the iodine concentration of CNs (median value 0.088 µg/mm³) was significantly higher than that of the tumors (0.064 µg/mm³, P=0.022) and liver parenchyma (0.048 µg/mm³, P=0.005), but showed no significant difference between liver parenchyma and tumors (P=0.454). During the VP, the iodine concentration in hepatic parenchyma (median value 0.181 µg/mm³) was significantly higher than that in CNs (0.140 µg/mm³, P=0.042). There was no significant difference between liver parenchyma and tumors, or between CNs and tumors (both P>0.05). The median ICD in CNs was 0.006 µg/mm³, significantly lower than that of the HCC tumors (0.201 µg/mm³, P<0.001) and hepatic parenchyma (0.117 µg/mm³, P<0.001). The ICDs in tumors and hepatic parenchyma showed no significant difference (P=0.829). During the AP, LNR showed no significant difference between CNs and tumors (median 1.805 vs. 1.310, P=0.389), and during the VP the difference was also non-significant (median 0.647 vs. 0.713, P=0.660). The mean Az value of ICD for the evaluation of surviving tumor tissue was 0.804, while LNR gave disappointing results in both AP and VP images. Quantitative iodine-based material decomposition images with gemstone spectral CT imaging can improve the diagnostic efficacy of CT imaging for HCC patients after TACE treatment.
Diagnostic reasoning by hospital pharmacists: assessment of attitudes, knowledge, and skills.
Chernushkin, Kseniya; Loewen, Peter; de Lemos, Jane; Aulakh, Amneet; Jung, Joanne; Dahri, Karen
2012-07-01
Hospital pharmacists participate in activities that may be considered diagnostic. Two reasoning approaches to diagnosis have been described: non-analytic and analytic. Of the 6 analytic traditions, the probabilistic tradition has been shown to improve diagnostic accuracy and reduce unnecessary testing. To the authors' knowledge, pharmacists' attitudes toward having a diagnostic role and their diagnostic knowledge and skills have never been studied. To describe pharmacists' attitudes toward the role of diagnosis in pharmacotherapeutic problem-solving and to characterize the extent of pharmacists' knowledge and skills related to diagnostic literacy. Pharmacists working within Lower Mainland Pharmacy Services (British Columbia) who spent at least 33% of their time in direct patient care were invited to participate in a prospective observational survey. The survey sought information about demographic characteristics and attitudes toward diagnosis. Diagnostic knowledge and skills were tested by means of 3 case scenarios. The analysis included simple descriptive statistics and inferential statistics to evaluate relationships between responses and experience and training. Of 266 pharmacists invited to participate, 94 responded. The attitudes section of the survey was completed by 90 pharmacists; of these, 80 (89%) agreed with the definition of "diagnosis" proposed in the survey, and 83 (92%) agreed that it is important for pharmacists to have diagnosis-related skills. Respondents preferred an analytic to a non-analytic approach to diagnostic decision-making. The probabilistic tradition was not the preferred method in any of the 3 cases. In evaluating 5 clinical scenarios that might require diagnostic skills, on average 84% of respondents agreed that they should be involved in assessing such problems. Respondents' knowledge of and ability to apply probabilistic diagnostic tools were highest for test sensitivity (average of 61% of respondents with the correct answers) and lower for test specificity (average of 48% with correct answers) and likelihood ratios (average of 39% with correct answers). Respondents to this survey strongly believed that diagnostic skills were important for solving drug-related problems, but they demonstrated low levels of knowledge and ability to apply concepts of probabilistic diagnostic reasoning. Opportunities to expand pharmacists' knowledge of diagnostic reasoning exist, and the findings reported here indicate that pharmacists would consider such professional development valuable.
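For reference, the probabilistic quantities the survey tested reduce to simple arithmetic; the snippet below (with invented numbers) converts sensitivity, specificity, and a pre-test probability into a post-test probability via the likelihood ratio.

```python
# Worked arithmetic for sensitivity, specificity, and likelihood ratios; the
# figures are illustrative, not taken from the study.
def post_test_probability(pretest, sensitivity, specificity, positive_result=True):
    lr = sensitivity / (1 - specificity) if positive_result else (1 - sensitivity) / specificity
    pre_odds = pretest / (1 - pretest)
    post_odds = pre_odds * lr                      # Bayes' rule in odds form
    return post_odds / (1 + post_odds)

# e.g. a test with 90% sensitivity and 80% specificity (LR+ = 4.5) applied at a
# 20% pre-test probability of the drug-related problem:
print(round(post_test_probability(0.20, 0.90, 0.80), 3))   # ≈ 0.529
```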
NASA Astrophysics Data System (ADS)
Cafiero, M.; Lloberas-Valls, O.; Cante, J.; Oliver, J.
2016-04-01
A domain decomposition technique is proposed which is capable of properly connecting arbitrary non-conforming interfaces. The strategy essentially consists in considering a fictitious zero-width interface between the non-matching meshes which is discretized using a Delaunay triangulation. Continuity is satisfied across domains through normal and tangential stresses provided by the discretized interface and inserted in the formulation in the form of Lagrange multipliers. The final structure of the global system of equations resembles the dual assembly of substructures where the Lagrange multipliers are employed to nullify the gap between domains. A new approach to handle floating subdomains is outlined which can be implemented without significantly altering the structure of standard industrial finite element codes. The effectiveness of the developed algorithm is demonstrated through a patch test example and a number of tests that highlight the accuracy of the methodology and independence of the results with respect to the framework parameters. Considering its high degree of flexibility and non-intrusive character, the proposed domain decomposition framework is regarded as an attractive alternative to other established techniques such as the mortar approach.
Integrating a Genetic Algorithm Into a Knowledge-Based System for Ordering Complex Design Processes
NASA Technical Reports Server (NTRS)
Rogers, James L.; McCulley, Collin M.; Bloebaum, Christina L.
1996-01-01
The design cycle associated with large engineering systems requires an initial decomposition of the complex system into design processes which are coupled through the transference of output data. Some of these design processes may be grouped into iterative subcycles. In analyzing or optimizing such a coupled system, it is essential to be able to determine the best ordering of the processes within these subcycles to reduce design cycle time and cost. Many decomposition approaches assume the capability is available to determine what design processes and couplings exist and what order of execution will be imposed during the design cycle. Unfortunately, this is often a complex problem and beyond the capabilities of a human design manager. A new feature, a genetic algorithm, has been added to DeMAID (Design Manager's Aid for Intelligent Decomposition) to allow the design manager to rapidly examine many different combinations of ordering processes in an iterative subcycle and to optimize the ordering based on cost, time, and iteration requirements. Two sample test cases are presented to show the effects of optimizing the ordering with a genetic algorithm.
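A minimal, assumed sketch of such a permutation genetic algorithm is shown below; it orders randomly coupled tasks to minimize feedback couplings and is meant only to illustrate the idea, not to reproduce DeMAID's algorithm or rule base.

```python
# Toy permutation GA that orders coupled tasks to minimize feedback couplings,
# i.e. dependencies that point backwards in the execution sequence.
import random
random.seed(0)

n = 8
# deps[j] = set of tasks whose output task j needs (random illustrative coupling)
deps = {j: {i for i in range(n) if i != j and random.random() < 0.3} for j in range(n)}

def feedbacks(order):
    pos = {t: k for k, t in enumerate(order)}
    return sum(1 for j in order for i in deps[j] if pos[i] > pos[j])

def crossover(p1, p2):            # order crossover: keep a slice of p1, fill the rest from p2
    a, b = sorted(random.sample(range(n), 2))
    middle = p1[a:b]
    rest = [t for t in p2 if t not in middle]
    return rest[:a] + middle + rest[a:]

def mutate(order):
    i, j = random.sample(range(n), 2)
    order[i], order[j] = order[j], order[i]
    return order

pop = [random.sample(range(n), n) for _ in range(40)]
for _ in range(200):
    pop.sort(key=feedbacks)
    elite = pop[:10]
    pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in range(30)]
best = min(pop, key=feedbacks)
print("best order:", best, "feedback couplings:", feedbacks(best))
```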
Ma, Jingjing; Liu, Jie; Ma, Wenping; Gong, Maoguo; Jiao, Licheng
2014-01-01
Community structure is one of the most important properties in social networks. In dynamic networks, there are two conflicting criteria that need to be considered. One is the snapshot quality, which evaluates the quality of the community partitions at the current time step. The other is the temporal cost, which evaluates the difference between communities at different time steps. In this paper, we propose a decomposition-based multiobjective community detection algorithm to simultaneously optimize these two objectives to reveal community structure and its evolution in dynamic networks. It employs the framework of multiobjective evolutionary algorithm based on decomposition to simultaneously optimize the modularity and normalized mutual information, which quantitatively measure the quality of the community partitions and temporal cost, respectively. A local search strategy dealing with the problem-specific knowledge is incorporated to improve the effectiveness of the new algorithm. Experiments on computer-generated and real-world networks demonstrate that the proposed algorithm can not only find community structure and capture community evolution more accurately, but also be steadier than the two compared algorithms. PMID:24723806
Signal decomposition for surrogate modeling of a constrained ultrasonic design space
NASA Astrophysics Data System (ADS)
Homa, Laura; Sparkman, Daniel; Wertz, John; Welter, John; Aldrin, John C.
2018-04-01
The U.S. Air Force seeks to improve the methods and measures by which the lifecycle of composite structures is managed. Nondestructive evaluation of damage - particularly internal damage resulting from impact - represents a significant input to that improvement. Conventional ultrasound can detect this damage; however, full 3D characterization has not been demonstrated. A proposed approach for robust characterization uses model-based inversion through fitting of simulated results to experimental data. One challenge with this approach is the high computational expense of the forward model used to simulate the ultrasonic B-scans for each damage scenario. A potential solution is to construct a surrogate model using a subset of simulated ultrasonic scans built with a highly accurate, computationally expensive forward model. However, the dimensionality of these simulated B-scans makes interpolating between them a difficult and potentially infeasible problem. Thus, we propose using the chirplet decomposition to reduce the dimensionality of the data and allow for interpolation in the chirplet parameter space. By applying the chirplet decomposition, we are able to extract the salient features in the data and construct a surrogate forward model.
Ordering Design Tasks Based on Coupling Strengths
NASA Technical Reports Server (NTRS)
Rogers, J. L.; Bloebaum, C. L.
1994-01-01
The design process associated with large engineering systems requires an initial decomposition of the complex system into modules of design tasks which are coupled through the transference of output data. In analyzing or optimizing such a coupled system, it is essential to be able to determine which interactions figure prominently enough to significantly affect the accuracy of the system solution. Many decomposition approaches assume the capability is available to determine what design tasks and interactions exist and what order of execution will be imposed during the analysis process. Unfortunately, this is often a complex problem and beyond the capabilities of a human design manager. A new feature for DeMAID (Design Manager's Aid for Intelligent Decomposition) will allow the design manager to use coupling strength information to find a proper sequence for ordering the design tasks. In addition, these coupling strengths aid in deciding if certain tasks or couplings could be removed (or temporarily suspended) from consideration to achieve computational savings without a significant loss of system accuracy. New rules are presented and two small test cases are used to show the effects of using coupling strengths in this manner.
Yang, Haixuan; Seoighe, Cathal
2016-01-01
Nonnegative Matrix Factorization (NMF) has proved to be an effective method for unsupervised clustering analysis of gene expression data. By the nonnegativity constraint, NMF provides a decomposition of the data matrix into two matrices that have been used for clustering analysis. However, the decomposition is not unique. This allows different clustering results to be obtained, resulting in different interpretations of the decomposition. To alleviate this problem, some existing methods directly enforce uniqueness to some extent by adding regularization terms in the NMF objective function. Alternatively, various normalization methods have been applied to the factor matrices; however, the effects of the choice of normalization have not been carefully investigated. Here we investigate the performance of NMF for the task of cancer class discovery, under a wide range of normalization choices. After extensive evaluations, we observe that the maximum norm showed the best performance, although the maximum norm has not previously been used for NMF. Matlab codes are freely available from: http://maths.nuigalway.ie/~haixuanyang/pNMF/pNMF.htm.
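The normalization question can be illustrated with a short, assumed pipeline (scikit-learn rather than the authors' Matlab code): factorize the data, rescale each factor by its maximum basis entry, and assign samples to their dominant factor.

```python
# Assumed sketch of NMF-based class discovery with max-norm normalization.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(6)
X = np.abs(rng.normal(size=(500, 40)))       # genes x samples, nonnegative (illustrative)

model = NMF(n_components=3, init="nndsvda", max_iter=1000, random_state=0)
W = model.fit_transform(X)                   # genes x factors
H = model.components_                        # factors x samples

# Max-norm normalization: rescale each factor so its largest basis entry is 1,
# compensating in H so that the product W @ H is unchanged.
scale = W.max(axis=0)
W_n, H_n = W / scale, H * scale[:, None]

clusters = H_n.argmax(axis=0)                # each sample goes to its dominant factor
print("cluster sizes:", np.bincount(clusters))
```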
ERIC Educational Resources Information Center
Gierl, Mark J.; Cui, Ying
2008-01-01
One promising application of diagnostic classification models (DCM) is in the area of cognitive diagnostic assessment in education. However, the successful application of DCM in educational testing will likely come with a price--and this price may be in the form of new test development procedures and practices required to yield data that satisfy…
A Matrix-Free Algorithm for Multidisciplinary Design Optimization
NASA Astrophysics Data System (ADS)
Lambe, Andrew Borean
Multidisciplinary design optimization (MDO) is an approach to engineering design that exploits the coupling between components or knowledge disciplines in a complex system to improve the final product. In aircraft design, MDO methods can be used to simultaneously design the outer shape of the aircraft and the internal structure, taking into account the complex interaction between the aerodynamic forces and the structural flexibility. Efficient strategies are needed to solve such design optimization problems and guarantee convergence to an optimal design. This work begins with a comprehensive review of MDO problem formulations and solution algorithms. First, a fundamental MDO problem formulation is defined from which other formulations may be obtained through simple transformations. Using these fundamental problem formulations, decomposition methods from the literature are reviewed and classified. All MDO methods are presented in a unified mathematical notation to facilitate greater understanding. In addition, a novel set of diagrams, called extended design structure matrices, are used to simultaneously visualize both data communication and process flow between the many software components of each method. For aerostructural design optimization, modern decomposition-based MDO methods cannot efficiently handle the tight coupling between the aerodynamic and structural states. This fact motivates the exploration of methods that can reduce the computational cost. A particular structure in the direct and adjoint methods for gradient computation motivates the idea of a matrix-free optimization method. A simple matrix-free optimizer is developed based on the augmented Lagrangian algorithm. This new matrix-free optimizer is tested on two structural optimization problems and one aerostructural optimization problem. The results indicate that the matrix-free optimizer is able to efficiently solve structural and multidisciplinary design problems with thousands of variables and constraints. On the aerostructural test problem formulated with thousands of constraints, the matrix-free optimizer is estimated to reduce the total computational time by up to 90% compared to conventional optimizers.
Extending substructure based iterative solvers to multiple load and repeated analyses
NASA Technical Reports Server (NTRS)
Farhat, Charbel
1993-01-01
Direct solvers currently dominate commercial finite element structural software, but do not scale well in the fine-granularity regime targeted by emerging parallel processors. Substructure-based iterative solvers--often also called domain decomposition algorithms--lend themselves better to parallel processing, but must overcome several obstacles before earning their place in general purpose structural analysis programs. One such obstacle is the solution of systems with many or repeated right hand sides. Such systems arise, for example, in multiple-load static analyses and in implicit linear dynamics computations. Direct solvers are well-suited for these problems because, after the system matrix has been factored, the multiple or repeated solutions can be obtained through relatively inexpensive forward and backward substitutions. On the other hand, iterative solvers in general are ill-suited for these problems because they often must restart from scratch for every different right hand side. In this paper, we present a methodology for extending the range of applications of domain decomposition methods to problems with multiple or repeated right hand sides. Basically, we formulate the overall problem as a series of minimization problems over K-orthogonal and supplementary subspaces, and tailor the preconditioned conjugate gradient algorithm to solve them efficiently. The resulting solution method is scalable, whereas direct factorization schemes and forward and backward substitution algorithms are not. We illustrate the proposed methodology with the solution of static and dynamic structural problems, and highlight its potential to outperform forward and backward substitutions on parallel computers. As an example, we show that for a linear structural dynamics problem with 11640 degrees of freedom, every time step beyond time step 15 is solved in a single iteration and consumes 1.0 second on a 32-processor iPSC-860 system; for the same problem and the same parallel processor, a pair of forward/backward substitutions at each step consumes 15.0 seconds.
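A generic, assumed sketch of the reuse idea is given below (it is not the paper's implementation): for each new right-hand side, the component of the solution lying in the span of previously computed solutions is obtained by a K-orthogonal projection, and the conjugate gradient iteration only has to resolve the remainder.

```python
# Conjugate gradients warm-started by K-orthogonal projection onto prior solutions.
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(7)
n = 400
M = rng.normal(size=(n, n))
K = M @ M.T + n * np.eye(n)                        # SPD stand-in for a stiffness matrix

def solve_with_reuse(K, b, prev_solutions):
    if prev_solutions:
        S = np.column_stack(prev_solutions)        # basis spanned by earlier solutions
        # K-orthogonal projection of the new solution onto span(S) gives the start vector.
        x0 = S @ np.linalg.solve(S.T @ K @ S, S.T @ b)
    else:
        x0 = np.zeros_like(b)
    x, _ = cg(K, b, x0=x0)
    return x

history = []
for step in range(5):
    # Slowly varying load cases, as in multiple-load statics or implicit dynamics.
    b = rng.normal(size=n) if step == 0 else b + 0.01 * rng.normal(size=n)
    x = solve_with_reuse(K, b, history)
    history.append(x)
    print("step", step, "residual:", np.linalg.norm(K @ x - b))
```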
Decomposition and model selection for large contingency tables.
Dahinden, Corinne; Kalisch, Markus; Bühlmann, Peter
2010-04-01
Large contingency tables summarizing categorical variables arise in many areas. One example is in biology, where large numbers of biomarkers are cross-tabulated according to their discrete expression level. Interactions of the variables are of great interest and are generally studied with log-linear models. The structure of a log-linear model can be visually represented by a graph from which the conditional independence structure can then be easily read off. However, since the number of parameters in a saturated model grows exponentially in the number of variables, this generally comes with a heavy computational burden. Even if we restrict ourselves to models of lower-order interactions or other sparse structures, we are faced with the problem of a large number of cells which play the role of sample size. This is in sharp contrast to high-dimensional regression or classification procedures because, in addition to a high-dimensional parameter, we also have to deal with the analogue of a huge sample size. Furthermore, high-dimensional tables naturally feature a large number of sampling zeros which often leads to the nonexistence of the maximum likelihood estimate. We therefore present a decomposition approach, where we first divide the problem into several lower-dimensional problems and then combine these to form a global solution. Our methodology is computationally feasible for log-linear interaction models with many categorical variables, each or some of which may have many levels. We demonstrate the proposed method on simulated data and apply it to a biomedical problem in cancer research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, X; Petrongolo, M; Wang, T
Purpose: A general problem of dual-energy CT (DECT) is that the decomposition is sensitive to noise in the two sets of dual-energy projection data, resulting in severely degraded qualities of decomposed images. We have previously proposed an iterative denoising method for DECT. Using a linear decomposition function, the method does not gain the full benefits of DECT on beam-hardening correction. In this work, we expand the framework of our iterative method to include non-linear decomposition models for noise suppression in DECT. Methods: We first obtain decomposed projections, which are free of beam-hardening artifacts, using a lookup table pre-measured on a calibration phantom. First-pass material images with high noise are reconstructed from the decomposed projections using standard filtered-backprojection reconstruction. Noise on the decomposed images is then suppressed by an iterative method, which is formulated in the form of least-squares estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, we include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-squares term. Analytical formulae are derived to compute the variance-covariance matrix from the measured decomposition lookup table. Results: We have evaluated the proposed method via phantom studies. Using non-linear decomposition, our method effectively suppresses the streaking artifacts of beam-hardening and obtains more uniform images than our previous approach based on a linear model. The proposed method reduces the average noise standard deviation of two basis materials by one order of magnitude without sacrificing the spatial resolution. Conclusion: We propose a general framework of iterative denoising for material decomposition of DECT. Preliminary phantom studies have shown the proposed method improves the image uniformity and reduces noise level without resolution loss. In the future, we will perform more phantom studies to further validate the performance of the proposed method. This work is supported by a Varian MRA grant.
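A hedged sketch of the penalized weighted least-squares idea is shown below for a single decomposed material image; the per-pixel inverse-variance weighting stands in for the paper's full variance-covariance matrix (the cross-material coupling is omitted), and the quadratic smoothness penalty and step size are illustrative choices.

```python
import numpy as np

def pwls_denoise(img, var, beta=0.2, iters=200, step=0.1):
    """Penalized weighted least-squares denoising of a noisy material image.

    Data fidelity is weighted by the inverse of the estimated per-pixel
    variance; the penalty is a quadratic smoothness (Laplacian) term.
    """
    w = 1.0 / var                        # inverse-variance data weights
    x = img.copy()
    for _ in range(iters):
        # gradient of 0.5*sum(w*(x-img)^2) + 0.5*beta*||grad x||^2
        lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
               np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4 * x)
        g = w * (x - img) - beta * lap
        x -= step * g
    return x

# Usage on a synthetic noisy decomposed image
rng = np.random.default_rng(1)
truth = np.zeros((64, 64)); truth[16:48, 16:48] = 1.0
sigma = 0.3 * np.ones_like(truth)
noisy = truth + sigma * rng.standard_normal(truth.shape)
denoised = pwls_denoise(noisy, sigma**2)
```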
Oxygen Mass Flow Rate Generated for Monitoring Hydrogen Peroxide Stability
NASA Technical Reports Server (NTRS)
Ross, H. Richard
2002-01-01
Recent interest in propellants with non-toxic reaction products has led to a resurgence of interest in hydrogen peroxide for various propellant applications. Because peroxide is sensitive to contaminants, material interactions, stability and storage issues, monitoring decomposition rates is important. Stennis Space Center (SSC) uses thermocouples to monitor bulk fluid temperature (heat evolution) to determine reaction rates. Unfortunately, large temperature rises are required to offset the heat lost into the surrounding fluid. Also, tank penetration to accommodate a thermocouple can entail modification of a tank or line and act as a source of contamination. The paper evaluates a method for monitoring oxygen evolution as a means to determine peroxide stability. Oxygen generation is not only directly related to peroxide decomposition, but occurs immediately. Measuring peroxide temperature to monitor peroxide stability has significant limitations. A bulk decomposition rate of 1% / week in a large volume tank can produce oxygen in excess of 30 cc / min. This oxygen flow rate corresponds to an equivalent temperature rise of approximately 14 millidegrees C, which is difficult to measure reliably; once heat transfer to the surroundings is accounted for, there would be essentially no measurable temperature rise. Temperature changes from the surrounding environment and heat lost to the peroxide will also mask potential problems. The use of oxygen flow measurements provides an ultra-sensitive technique for monitoring reaction events and will provide an earlier indication of an abnormal decomposition when compared to measuring temperature rise.
Diagnostics of breast cancer by analysis of diffuse reflection spectra
NASA Astrophysics Data System (ADS)
Kuzmina, Natalya V.; Plaksin, Fedor G.; Polovnikov, Eugeny S.
2001-05-01
The work is dedicated to the problem of diagnosing oncologic diseases by an optical spectroscopic method and continues long-term investigations carried out earlier by Vovk S.M., Naumov S.A., and Pushkarev S.V. Diffuse reflection spectra recorded in vivo and in vitro are presented for malignant and benign neoplasms, healthy tissue, and blood of the breast and other organs. Problems of clinical oncology remain at the center of attention in medicine because the incidence of malignant tumors continues to increase, a situation aggravated by the shortcomings of current diagnostic methods.
Some Cognitive Components of the Diagnostic Thinking Process.
ERIC Educational Resources Information Center
Gale, Janet
1982-01-01
Identifies 14 cognitive components of the diagnostic thinking process in clinical problem solving. Analyzes the differences between medical students, hospital house officers, and hospital registrars in London, England, on the relative use of such thinking processes. Suggests that diagnostic thinking processes cannot be incorporated into medical…
NASA Astrophysics Data System (ADS)
Imamura, N.; Schultz, A.
2016-12-01
Recently, a full waveform time domain inverse solution has been developed for the magnetotelluric (MT) and controlled-source electromagnetic (CSEM) methods. The ultimate goal of this approach is to obtain a computationally tractable direct waveform joint inversion to solve simultaneously for source fields and earth conductivity structure in three and four dimensions. This is desirable on several grounds, including the improved spatial resolving power expected from use of a multitude of source illuminations, the ability to operate in areas of high levels of source signal spatial complexity, and non-stationarity. This goal would not be obtainable if one were to adopt the pure time domain solution for the inverse problem. This is particularly true for the case of MT surveys, since an enormous number of degrees of freedom are required to represent the observed MT waveforms across a large frequency bandwidth. This means that for the forward simulation, the smallest time step must be finer than that required to represent the highest frequency, while the number of time steps must also cover the lowest frequency. This leads to a sensitivity matrix that is computationally burdensome when solving for a model update. We have implemented a code that addresses this situation through the use of cascade decimation decomposition to reduce the size of the sensitivity matrix substantially, through quasi-equivalent time domain decomposition. We also use a fictitious wave domain method to speed up computation time of the forward simulation in the time domain. By combining these refinements, we have developed a full waveform joint source field/earth conductivity inverse modeling method. We found that cascade decimation speeds computations of the sensitivity matrices dramatically, keeping the solution close to that of the undecimated case. For example, for a model discretized into 2.6×10^5 cells, we obtain model updates in less than 1 hour on a 4U rack-mounted workgroup Linux server, which is a practical computational time for the inverse problem.
Application of response surface techniques to helicopter rotor blade optimization procedure
NASA Technical Reports Server (NTRS)
Henderson, Joseph Lynn; Walsh, Joanne L.; Young, Katherine C.
1995-01-01
In multidisciplinary optimization problems, response surface techniques can be used to replace the complex analyses that define the objective function and/or constraints with simple functions, typically polynomials. In this work a response surface is applied to the design optimization of a helicopter rotor blade. In previous work, this problem has been formulated with a multilevel approach. Here, the response surface takes advantage of this decomposition and is used to replace the lower level, a structural optimization of the blade. Problems that were encountered and important considerations in applying the response surface are discussed. Preliminary results are also presented that illustrate the benefits of using the response surface.
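A hedged sketch of the generic response-surface technique described above, fitting a quadratic polynomial to samples of an expensive lower-level analysis and optimizing the cheap surrogate instead, is shown below; the "expensive_analysis" function and the sampling plan are stand-ins, not the rotor-blade model.

```python
import numpy as np
from scipy.optimize import minimize

def fit_quadratic_surface(X, y):
    """Least-squares fit of a full quadratic response surface in two variables."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    def surface(x):
        return coef @ np.array([1.0, x[0], x[1], x[0] * x[1], x[0]**2, x[1]**2])
    return surface

# Pretend this stands in for the lower-level structural optimization
expensive_analysis = lambda x: (x[0] - 1.0)**2 + 2.0 * (x[1] + 0.5)**2 + 3.0
rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, size=(30, 2))
y = np.array([expensive_analysis(x) for x in X])
surf = fit_quadratic_surface(X, y)
res = minimize(surf, x0=np.zeros(2))      # optimize the cheap surrogate
print(res.x)                               # approximately [1.0, -0.5]
```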
Mean-Variance Hedging on Uncertain Time Horizon in a Market with a Jump
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kharroubi, Idris, E-mail: kharroubi@ceremade.dauphine.fr; Lim, Thomas, E-mail: lim@ensiie.fr; Ngoupeyou, Armand, E-mail: armand.ngoupeyou@univ-paris-diderot.fr
2013-12-15
In this work, we study the problem of mean-variance hedging with a random horizon T∧τ, where T is a deterministic constant and τ is a jump time of the underlying asset price process. We first formulate this problem as a stochastic control problem and relate it to a system of BSDEs with a jump. We then provide a verification theorem which gives the optimal strategy for the mean-variance hedging using the solution of the previous system of BSDEs. Finally, we prove that this system of BSDEs admits a solution via a decomposition approach coming from filtration enlargement theory.
Co-simulation coupling spectral/finite elements for 3D soil/structure interaction problems
NASA Astrophysics Data System (ADS)
Zuchowski, Loïc; Brun, Michael; De Martin, Florent
2018-05-01
The coupling between an implicit finite element (FE) code and an explicit spectral element (SE) code has been explored for solving the elastic wave propagation in the case of a soil/structure interaction problem. The coupling approach is based on domain decomposition methods in transient dynamics. The spatial coupling at the interface is managed by a standard mortar coupling approach, whereas the time integration is handled by a hybrid asynchronous time integrator. An external coupling software package, handling the interface problem, has been set up in order to couple the FE software Code_Aster with the SE software EFISPEC3D.
Formulation of image fusion as a constrained least squares optimization problem
Dwork, Nicholas; Lasry, Eric M.; Pauly, John M.; Balbás, Jorge
2017-01-01
Fusing a lower resolution color image with a higher resolution monochrome image is a common practice in medical imaging. By incorporating spatial context and/or improving the signal-to-noise ratio, it provides clinicians with a single frame of the most complete information for diagnosis. In this paper, image fusion is formulated as a convex optimization problem that avoids image decomposition and permits operations at the pixel level. This results in a highly efficient and embarrassingly parallelizable algorithm based on widely available robust and simple numerical methods that realizes the fused image as the global minimizer of the convex optimization problem. PMID:28331885
A partitioning strategy for nonuniform problems on multiprocessors
NASA Technical Reports Server (NTRS)
Berger, M. J.; Bokhari, S.
1985-01-01
The partitioning of a problem on a domain with unequal work estimates in different subdomains is considered in a way that balances the work load across multiple processors. Such a problem arises, for example, in solving partial differential equations using an adaptive method that places extra grid points in certain subregions of the domain. A binary decomposition of the domain is used to partition it into rectangles requiring equal computational effort. The communication costs of mapping this partitioning onto different multiprocessors--a mesh-connected array, a tree machine, and a hypercube--are then studied. The communication cost expressions can be used to determine the optimal depth of the above partitioning.
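As a hedged illustration of the binary decomposition idea (not the paper's algorithm), the sketch below recursively splits a 2-D array of work estimates along alternating axes so that each resulting rectangle carries roughly the same total work.

```python
import numpy as np

def binary_decompose(work, depth, axis=0):
    """Recursive binary decomposition of a 2-D work-estimate array into
    rectangles of roughly equal total work.

    Returns a list of (row_slice, col_slice) rectangles.
    """
    rows, cols = work.shape
    if depth == 0:
        return [(slice(0, rows), slice(0, cols))]
    # Cut along 'axis' where the cumulative work is closest to one half.
    line = work.sum(axis=1 - axis)           # work per row (axis=0) or per column
    cum = np.cumsum(line)
    cut = int(np.argmin(np.abs(cum - cum[-1] / 2.0))) + 1
    if axis == 0:
        parts = [(work[:cut, :], (0, 0)), (work[cut:, :], (cut, 0))]
    else:
        parts = [(work[:, :cut], (0, 0)), (work[:, cut:], (0, cut))]
    rects = []
    for sub, (r0, c0) in parts:
        for rs, cs in binary_decompose(sub, depth - 1, axis=1 - axis):
            rects.append((slice(rs.start + r0, rs.stop + r0),
                          slice(cs.start + c0, cs.stop + c0)))
    return rects

# 16 rectangles of near-equal work for a non-uniform (adaptively refined) grid
work = np.ones((64, 64)); work[40:, 40:] = 8.0    # refined subregion costs more
rects = binary_decompose(work, depth=4)
print([round(work[r, c].sum(), 1) for r, c in rects])
```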
Cooperative path following control of multiple nonholonomic mobile robots.
Cao, Ke-Cai; Jiang, Bin; Yue, Dong
2017-11-01
The cooperative path following control problem of multiple nonholonomic mobile robots is considered in this paper. Based on a decomposition framework, the cooperative path following problem is transformed into a path following problem and a cooperative control problem. The cascade theory of non-autonomous systems is then employed in the design of controllers without resorting to feedback linearization. A time-varying coordinate transformation based on dilation is introduced to resolve the uncontrollability of nonholonomic robots when the whole group's reference converges to a stationary point. Cooperative path following controllers for nonholonomic robots are proposed for a persistent reference and for a reference target that converges to a stationary point, respectively. Simulation results using Matlab illustrate the effectiveness of the obtained theoretical results. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Huang, Nantian; Chen, Huaijin; Cai, Guowei; Fang, Lihua; Wang, Yuqiang
2016-11-10
Mechanical fault diagnosis of high-voltage circuit breakers (HVCBs) based on vibration signal analysis is one of the most significant issues in improving the reliability and reducing the outage cost for power systems. Because training samples are limited and do not cover all fault types, existing mechanical fault diagnostic methods tend to misclassify new fault types, for which no training samples exist, as either the normal condition or a wrong fault type. A new mechanical fault diagnosis method for HVCBs based on variational mode decomposition (VMD) and a multi-layer classifier (MLC) is proposed to improve the accuracy of fault diagnosis. First, HVCB vibration signals during operation are measured using an acceleration sensor. Second, a VMD algorithm is used to decompose the vibration signals into several intrinsic mode functions (IMFs). The IMF matrix is divided into submatrices to compute the local singular values (LSV). The maximum singular values of each submatrix are selected as the feature vectors for fault diagnosis. Finally, an MLC composed of two one-class support vector machines (OCSVMs) and a support vector machine (SVM) is constructed to identify the fault type. Two layers of independent OCSVMs are adopted to distinguish normal or fault conditions with known or unknown fault types, respectively. On this basis, the SVM recognizes the specific fault type. Real diagnostic experiments are conducted with a real SF₆ HVCB in normal and fault states. Three different faults (i.e., jam fault of the iron core, looseness of the base screw, and poor lubrication of the connecting lever) are simulated in a field experiment on a real HVCB to test the feasibility of the proposed method. Results show that the classification accuracy of the new method is superior to other traditional methods.
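A hedged sketch of this pipeline is given below; it assumes a third-party VMD implementation (here the `vmdpy` package, whose call signature is an assumption) and uses scikit-learn's one-class and multi-class SVMs, with all parameter values chosen for illustration only.

```python
import numpy as np
from vmdpy import VMD                        # assumed third-party VMD implementation
from sklearn.svm import OneClassSVM, SVC

def lsv_features(signal, K=5, n_sub=4, alpha=2000, tau=0.0, tol=1e-7):
    """Decompose a vibration signal with VMD and return the maximum local
    singular value of each submatrix of the IMF matrix (feature vector)."""
    imfs, _, _ = VMD(signal, alpha, tau, K, 0, 1, tol)   # positional args assumed
    subs = np.array_split(imfs, n_sub, axis=1)           # split along time axis
    return np.array([np.linalg.svd(s, compute_uv=False)[0] for s in subs])

def build_classifiers(X_normal, X_fault, y_fault):
    """Two one-class SVMs (normal, known-fault) plus a multi-class SVM."""
    ocsvm_normal = OneClassSVM(nu=0.05, gamma="scale").fit(X_normal)
    ocsvm_known = OneClassSVM(nu=0.05, gamma="scale").fit(X_fault)
    svm_type = SVC(kernel="rbf", gamma="scale").fit(X_fault, y_fault)
    return ocsvm_normal, ocsvm_known, svm_type

def diagnose(x, ocsvm_normal, ocsvm_known, svm_type):
    x = x.reshape(1, -1)
    if ocsvm_normal.predict(x)[0] == 1:       # layer 1: normal vs. fault
        return "normal"
    if ocsvm_known.predict(x)[0] == -1:       # layer 2: known vs. unknown fault
        return "unknown fault type"
    return f"known fault: {svm_type.predict(x)[0]}"
```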
Albarrak, Abdulrahman; Coenen, Frans; Zheng, Yalin
2017-01-01
Three-dimensional (3D) (volumetric) diagnostic imaging techniques are indispensable with respect to the diagnosis and management of many medical conditions. However there is a lack of automated diagnosis techniques to facilitate such 3D image analysis (although some support tools do exist). This paper proposes a novel framework for volumetric medical image classification founded on homogeneous decomposition and dictionary learning. In the proposed framework each image (volume) is recursively decomposed until homogeneous regions are arrived at. Each region is represented using a Histogram of Oriented Gradients (HOG) which is transformed into a set of feature vectors. The Gaussian Mixture Model (GMM) is then used to generate a "dictionary" and the Improved Fisher Kernel (IFK) approach is used to encode feature vectors so as to generate a single feature vector for each volume, which can then be fed into a classifier generator. The principal advantage offered by the framework is that it does not require the detection (segmentation) of specific objects within the input data. The nature of the framework is fully described. A wide range of experiments was conducted with which to analyse the operation of the proposed framework and these are also reported fully in the paper. Although the proposed approach is generally applicable to 3D volumetric images, the focus for the work is 3D retinal Optical Coherence Tomography (OCT) images in the context of the diagnosis of Age-related Macular Degeneration (AMD). The results indicate that excellent diagnostic predictions can be produced using the proposed framework. Copyright © 2016 Elsevier Ltd. All rights reserved.
Hiwatashi, Akio; Togao, Osamu; Yamashita, Koji; Kikuchi, Kazufumi; Yoshimoto, Koji; Mizoguchi, Masahiro; Suzuki, Satoshi O; Yoshiura, Takashi; Honda, Hiroshi
2016-07-01
Correction of contrast leakage is recommended when enhancing lesions during perfusion analysis. The purpose of this study was to assess the diagnostic performance of computed tomography perfusion (CTP) with a delay-invariant singular-value decomposition algorithm (SVD+) and a Patlak plot in differentiating glioblastomas from lymphomas. This prospective study included 17 adult patients (12 men and 5 women) with pathologically proven glioblastomas (n=10) and lymphomas (n=7). CTP data were analyzed using SVD+ and a Patlak plot. The relative tumor blood volume and flow compared to contralateral normal-appearing gray matter (rCBV and rCBF derived from SVD+, and rBV and rFlow derived from the Patlak plot) were used to differentiate between glioblastomas and lymphomas. The Mann-Whitney U test and receiver operating characteristic (ROC) analyses were used for statistical analysis. Glioblastomas showed significantly higher rFlow (3.05±0.49, mean±standard deviation) than lymphomas (1.56±0.53; P<0.05). There were no statistically significant differences between glioblastomas and lymphomas in rBV (2.52±1.57 vs. 1.03±0.51; P>0.05), rCBF (1.38±0.41 vs. 1.29±0.47; P>0.05), or rCBV (1.78±0.47 vs. 1.87±0.66; P>0.05). ROC analysis showed the best diagnostic performance with rFlow (Az=0.871), followed by rBV (Az=0.771), rCBF (Az=0.614), and rCBV (Az=0.529). CTP analysis with a Patlak plot was helpful in differentiating between glioblastomas and lymphomas, but CTP analysis with SVD+ was not. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
Requirements Analysis and Modeling with Problem Frames and SysML: A Case Study
NASA Astrophysics Data System (ADS)
Colombo, Pietro; Khendek, Ferhat; Lavazza, Luigi
Requirements analysis based on Problem Frames is getting increasing attention in the academic community and has the potential to become of relevant interest also for industry. However, the approach lacks adequate notational support and methodological guidelines, and case studies that demonstrate its applicability to problems of realistic complexity are still rare. These weaknesses may hinder its adoption. This paper aims at contributing towards the elimination of these weaknesses. We report on an experience in analyzing and specifying the requirements of a controller for the traffic lights of an intersection using Problem Frames in combination with SysML. The analysis was performed by decomposing the problem, addressing the identified sub-problems, and recomposing them while resolving the identified interferences. The experience allowed us to identify certain guidelines for decomposition and re-composition patterns.
Contrast-enhanced spectral mammography in patients with MRI contraindications.
Richter, Vivien; Hatterman, Valerie; Preibsch, Heike; Bahrs, Sonja D; Hahn, Markus; Nikolaou, Konstantin; Wiesinger, Benjamin
2017-01-01
Background Contrast-enhanced spectral mammography (CESM) is a novel breast imaging technique providing comparable diagnostic accuracy to breast magnetic resonance imaging (MRI). Purpose To show that CESM in patients with MRI contraindications is feasible, accurate, and useful as a problem-solving tool, and to highlight its limitations. Material and Methods A total of 118 patients with MRI contraindications were examined by CESM. Histology was obtained in 94 lesions and used as gold standard for diagnostic accuracy calculations. Imaging data were reviewed retrospectively for feasibility, accuracy, and technical problems. The diagnostic yield of CESM as a problem-solving tool and for therapy response evaluation was reviewed separately. Results CESM was more accurate than mammography (MG) for lesion categorization (r = 0.731, P < 0.0001 vs. r = 0.279, P = 0.006) and for lesion size estimation (r = 0.738 vs. r = 0.689, P < 0.0001). Negative predictive value of CESM was significantly higher than of MG (85.71% vs. 30.77%, P < 0.0001). When used for problem-solving, CESM changed patient management in 2/8 (25%) cases. Superposition artifacts and timing problems affected diagnostic utility in 3/118 (2.5%) patients. Conclusion CESM is a feasible and accurate alternative for patients with MRI contraindications, but it is necessary to be aware of the method's technical limitations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, S; Kang, S; Eom, J
Purpose: Photon-counting detectors (PCDs) allow multi-energy X-ray imaging without additional exposures and spectral overlap. This capability results in the improvement of accuracy of material decomposition for dual-energy X-ray imaging and the reduction of radiation dose. In this study, the PCD-based contrast-enhanced dual-energy mammography (CEDM) was compared with the conventional CEDM in terms of radiation dose, image quality and accuracy of material decomposition. Methods: A dual-energy model was designed by using Beer-Lambert's law and a rational inverse fitting function for decomposing materials from a polychromatic X-ray source. A cadmium zinc telluride (CZT)-based PCD, which has five energy thresholds, and iodine solutions included in a 3D half-cylindrical phantom, composed of 50% glandular and 50% adipose tissues, were simulated by using a Monte Carlo simulation tool. The low- and high-energy images were obtained in accordance with the clinical exposure conditions for the conventional CEDM. Energy bins of 20–33 and 34–50 keV were defined from X-ray energy spectra simulated at 50 kVp with different dose levels for implementing the PCD-based CEDM. The dual-energy mammographic techniques were compared by means of absorbed dose, noise property and normalized root-mean-square error (NRMSE). Results: Compared to the conventional CEDM, the iodine solutions were clearly decomposed for the PCD-based CEDM. Although the radiation dose for the PCD-based CEDM was lower than that for the conventional CEDM, the PCD-based CEDM improved the noise property and accuracy of decomposition images. Conclusion: This study demonstrates that the PCD-based CEDM allows quantitative material decomposition, and reduces radiation dose in comparison with the conventional CEDM. Therefore, the PCD-based CEDM is able to provide useful information for detecting breast tumor and enhancing diagnostic accuracy in mammography.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang Shaojie; Tang Xiangyang; School of Automation, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi 710121
2012-09-15
Purpose: The suppression of noise in x-ray computed tomography (CT) imaging is of clinical relevance for diagnostic image quality and the potential for radiation dose saving. Toward this purpose, statistical noise reduction methods in either the image or projection domain have been proposed, which employ a multiscale decomposition to enhance the performance of noise suppression while maintaining image sharpness. Recognizing the advantages of noise suppression in the projection domain, the authors propose a projection domain multiscale penalized weighted least squares (PWLS) method, in which the angular sampling rate is explicitly taken into consideration to account for the possible variation of interview sampling rate in advanced clinical or preclinical applications. Methods: The projection domain multiscale PWLS method is derived by converting an isotropic diffusion partial differential equation in the image domain into the projection domain, wherein a multiscale decomposition is carried out. With adoption of the Markov random field or soft thresholding objective function, the projection domain multiscale PWLS method deals with noise at each scale. To compensate for the degradation in image sharpness caused by the projection domain multiscale PWLS method, an edge enhancement is carried out following the noise reduction. The performance of the proposed method is experimentally evaluated and verified using the projection data simulated by computer and acquired by a CT scanner. Results: The preliminary results show that the proposed projection domain multiscale PWLS method outperforms the projection domain single-scale PWLS method and the image domain multiscale anisotropic diffusion method in noise reduction. In addition, the proposed method can preserve image sharpness very well while the occurrence of 'salt-and-pepper' noise and mosaic artifacts can be avoided. Conclusions: Since the interview sampling rate is taken into account in the projection domain multiscale decomposition, the proposed method is anticipated to be useful in advanced clinical and preclinical applications where the interview sampling rate varies.
NASA Astrophysics Data System (ADS)
Cha, Moon Hoe
2007-02-01
The NearFar program is a package for carrying out an interactive nearside-farside decomposition of the heavy-ion elastic scattering amplitude. The program is implemented in Java to perform numerical operations on the nearside and farside angular distributions. It contains a graphical display interface for the numerical results. A test run has been applied to elastic 16O + 28Si scattering at E=1503 MeV. Program summary: Title of program: NearFar Catalogue identifier: ADYP_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYP_v1_0 Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Licensing provisions: none Computers: designed for any machine capable of running Java, developed on PC-Pentium-4 Operating systems under which the program has been tested: Microsoft Windows XP (Home Edition) Program language used: Java Number of bits in a word: 64 Memory required to execute with typical data: case dependent No. of lines in distributed program, including test data, etc.: 3484 Number of bytes in distributed program, including test data, etc.: 142 051 Distribution format: tar.gz Other software required: A Java runtime interpreter, or the Java Development Kit, version 5.0 Nature of physical problem: Interactive nearside-farside decomposition of the heavy-ion elastic scattering amplitude. Method of solution: The user must supply an external data file or PPSM parameters from which the theoretical values of the quantities to be decomposed are calculated. Typical running time: Problem dependent. In a test run, it is about 35 s on a 2.40 GHz Intel P4-processor machine.
NASA Astrophysics Data System (ADS)
Li, Qianxiao; Dietrich, Felix; Bollt, Erik M.; Kevrekidis, Ioannis G.
2017-10-01
Numerical approximation methods for the Koopman operator have advanced considerably in the last few years. In particular, data-driven approaches such as dynamic mode decomposition (DMD) and its generalization, the extended-DMD (EDMD), are becoming increasingly popular in practical applications. The EDMD improves upon the classical DMD by the inclusion of a flexible choice of dictionary of observables which spans a finite dimensional subspace on which the Koopman operator can be approximated. This enhances the accuracy of the solution reconstruction and broadens the applicability of the Koopman formalism. Although the convergence of the EDMD has been established, applying the method in practice requires a careful choice of the observables to improve convergence with just a finite number of terms. This is especially difficult for high dimensional and highly nonlinear systems. In this paper, we employ ideas from machine learning to improve upon the EDMD method. We develop an iterative approximation algorithm which couples the EDMD with a trainable dictionary represented by an artificial neural network. Using the Duffing oscillator and the Kuramoto-Sivashinsky partial differential equation as examples, we show that our algorithm can effectively and efficiently adapt the trainable dictionary to the problem at hand to achieve good reconstruction accuracy without the need to choose a fixed dictionary a priori. Furthermore, to obtain a given accuracy, we require fewer dictionary terms than EDMD with fixed dictionaries. This alleviates an important shortcoming of the EDMD algorithm and enhances the applicability of the Koopman framework to practical problems.
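For reference, a minimal numpy sketch of plain EDMD with a fixed monomial dictionary (the baseline the paper improves on, not its neural-network dictionary) is shown below; the toy dynamical map and dictionary are illustrative.

```python
import numpy as np

def edmd(X, Y, dictionary):
    """Extended DMD: approximate the Koopman operator on the span of a fixed
    dictionary of observables.

    X, Y : snapshot pairs, shape (n_samples, n_states), with Y = F(X).
    dictionary : callable mapping (n_samples, n_states) -> (n_samples, n_obs).
    """
    PX, PY = dictionary(X), dictionary(Y)
    K = np.linalg.lstsq(PX, PY, rcond=None)[0]      # least-squares Koopman matrix
    eigvals, eigvecs = np.linalg.eig(K.T)           # Koopman eigenvalues/eigenfunctions
    return K, eigvals, eigvecs

# Monomial dictionary up to degree 2 for a 2-state system
def monomials(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

# Snapshot pairs from a simple nonlinear map
rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(500, 2))
Y = np.column_stack([0.9 * X[:, 0], 0.5 * X[:, 1] + 0.2 * X[:, 0]**2])
K, lam, _ = edmd(X, Y, monomials)
print(np.sort(np.abs(lam))[::-1][:3])   # leading Koopman eigenvalues: 1.0, 0.9, 0.81
```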
Likelihood-Based Clustering of Meta-Analytic SROC Curves
ERIC Educational Resources Information Center
Holling, Heinz; Bohning, Walailuck; Bohning, Dankmar
2012-01-01
Meta-analyses of diagnostic studies experience the common problem that different studies might not be comparable since they have been using different cut-off values for the continuous or ordered categorical diagnostic test value, defining different regions for which the diagnostic test is defined to be positive. Hence specificities and…
Carter, Cristina; Akar-Ghibril, Nicole; Sestokas, Jeff; Dixon, Gabrina; Bradford, Wilhelmina; Ottolini, Mary
2018-03-01
Oral case presentations provide an opportunity for trainees to communicate diagnostic reasoning at the bedside. However, few tools exist to enable faculty to provide effective feedback. We developed a tool to assess diagnostic reasoning and communication during oral case presentations. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Carpentier, Pierre-Luc
In this thesis, we consider the midterm production planning problem (MTPP) of hydroelectricity generation under uncertainty. The aim of this problem is to manage a set of interconnected hydroelectric reservoirs over several months. We are particularly interested in high dimensional reservoir systems that are operated by large hydroelectricity producers such as Hydro-Quebec. The aim of this thesis is to develop and evaluate different decomposition methods for solving the MTPP under uncertainty. This thesis is divided into three articles. The first article demonstrates the applicability of the progressive hedging algorithm (PHA), a scenario decomposition method, for managing hydroelectric reservoirs with multiannual storage capacity under highly variable operating conditions in Canada. The PHA is a classical stochastic optimization method designed to solve general multistage stochastic programs defined on a scenario tree. This method works by applying an augmented Lagrangian relaxation on the non-anticipativity constraints (NACs) of the stochastic program. At each iteration of the PHA, a sequence of subproblems must be solved. Each subproblem corresponds to a deterministic version of the original stochastic program for a particular scenario in the scenario tree. Linear and quadratic terms must be included in the subproblems' objective functions to penalize any violation of NACs. An important limitation of the PHA is due to the fact that the number of subproblems to be solved and the number of penalty terms increase exponentially with the branching level in the tree. This phenomenon can make the application of the PHA particularly difficult when the scenario tree covers several tens of time periods. Another important limitation of the PHA is caused by the fact that the difficulty level of NACs generally increases as the variability of scenarios increases. Consequently, applying the PHA becomes particularly challenging in hydroclimatic regions that are characterized by a high level of seasonal and interannual variability. These two types of limitations can slow down the algorithm's convergence rate and increase the running time per iteration. In this study, we apply the PHA on Hydro-Quebec's power system over a 92-week planning horizon. Hydrologic uncertainty is represented by a scenario tree containing 6 branching stages and 1,635 nodes. The PHA is especially well-suited for this particular application given that the company already possesses a deterministic optimization model to solve the MTPP. The second article presents a new approach which enhances the performance of the PHA for solving general multistage stochastic programs. The proposed method works by applying a multiscenario decomposition scheme on the stochastic program. Our heuristic method aims at constructing an optimal partition of the scenario set by minimizing the number of NACs on which an augmented Lagrangian relaxation must be applied. Each subproblem is a stochastic program defined on a group of scenarios. NACs linking scenarios sharing a common group are represented implicitly in subproblems by using a group-node index system instead of the traditional scenario-time index system. Only the NACs that link the different scenario groups are represented explicitly and relaxed. The proposed method is evaluated numerically on a hydroelectric reservoir management problem in Quebec. The results of this experiment show that our method has several advantages.
Firstly, it reduces the running time per iteration of the PHA by reducing the number of penalty terms included in the objective function and the amount of duplicated constraints and variables. Secondly, it increases the algorithm's convergence rate by reducing the variability of intermediary solutions at duplicated tree nodes. Thirdly, our approach reduces the amount of random-access memory (RAM) required for storing the Lagrange multipliers associated with relaxed NACs. The third article presents an extension of the L-Shaped method designed specifically for managing hydroelectric reservoir systems with a high storage capacity. The method proposed in this paper makes it possible to consider a higher branching level than conventional decomposition methods allow. To achieve this, we assume that the stochastic process driving the random parameters has a memory loss at time period t = tau. Because of this assumption, the scenario tree possesses a special symmetrical structure at the second stage (t > tau). We exploit this feature using a two-stage Benders decomposition method. Each decomposition stage covers several consecutive time periods. The proposed method works by constructing a convex and piecewise linear recourse function that represents the expected cost of the second stage in the master problem. The subproblems and the master problem are stochastic programs defined on scenario subtrees and can be solved with a conventional decomposition method or directly. We test the proposed method on a hydroelectric power system in Quebec over a 104-week planning horizon. (Abstract shortened by UMI.)
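A minimal sketch of the progressive hedging iteration on a toy scenario problem (not the hydro model) is given below; the quadratic scenario objectives and the penalty parameter are illustrative, and the scenario subproblems happen to have a closed-form solution here.

```python
import numpy as np

def progressive_hedging(a, rho=1.0, iters=50):
    """Progressive hedging for min E_s[ 0.5*(x - a_s)^2 ]: scenario copies x_s
    are driven to agree with the implementable policy x_bar by relaxing the
    non-anticipativity constraint x_s = x_bar with multipliers w_s."""
    a = np.asarray(a, dtype=float)
    w = np.zeros_like(a)                 # multipliers on non-anticipativity
    xbar = a.mean()
    for _ in range(iters):
        # scenario subproblems: argmin 0.5*(x-a_s)^2 + w_s*x + rho/2*(x-xbar)^2
        x = (a - w + rho * xbar) / (1.0 + rho)
        xbar = x.mean()                  # implementable (non-anticipative) policy
        w += rho * (x - xbar)            # multiplier update
    return xbar

print(progressive_hedging([2.0, 4.0, 9.0]))   # converges to the scenario mean, 5.0
```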
Li, Ming; Miao, Chunyan; Leung, Cyril
2015-01-01
Coverage control is one of the most fundamental issues in directional sensor networks. In this paper, the coverage optimization problem in a directional sensor network is formulated as a multi-objective optimization problem. It takes into account the coverage rate of the network, the number of working sensor nodes and the connectivity of the network. The coverage problem considered in this paper is characterized by the geographical irregularity of the sensed events and heterogeneity of the sensor nodes in terms of sensing radius, field of angle and communication radius. To solve this multi-objective problem, we introduce a learning automata-based coral reef algorithm for adaptive parameter selection and use a novel Tchebycheff decomposition method to decompose the multi-objective problem into a single-objective problem. Simulation results show the consistent superiority of the proposed algorithm over alternative approaches. PMID:26690162
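The Tchebycheff decomposition step mentioned above can be sketched as follows; this is a generic scalarization example (the paper couples it with a learning-automata coral reef algorithm rather than the gradient-free solver used here), and the objectives, weights, and ideal point are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def tchebycheff_scalarize(objectives, weights, z_star):
    """Weighted Tchebycheff decomposition: turn a multi-objective problem into
    a single-objective one relative to an ideal point z_star."""
    def g(x):
        f = np.array([obj(x) for obj in objectives])
        return np.max(weights * np.abs(f - z_star))
    return g

# Two conflicting objectives; z_star approximates the ideal point
f1 = lambda x: (x[0] - 1.0)**2 + x[1]**2
f2 = lambda x: (x[0] + 1.0)**2 + x[1]**2
g = tchebycheff_scalarize([f1, f2], weights=np.array([0.5, 0.5]),
                          z_star=np.array([0.0, 0.0]))
sol = minimize(g, x0=np.array([0.3, 0.3]), method="Nelder-Mead")
print(sol.x)     # a Pareto-optimal trade-off near x = [0, 0]
```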
Computational Study for Planar Connected Dominating Set Problem
NASA Astrophysics Data System (ADS)
Marzban, Marjan; Gu, Qian-Ping; Jia, Xiaohua
The connected dominating set (CDS) problem is a well-studied NP-hard problem with many important applications. Dorn et al. [ESA 2005, LNCS 3669, pp. 95-106] introduced a new technique to generate 2^{O(sqrt{n})}-time and fixed-parameter algorithms for a number of non-local hard problems, including the CDS problem in planar graphs. The practical performance of this algorithm is yet to be evaluated. We perform a computational study for such an evaluation. The results show that the size of instances that can be solved by the algorithm mainly depends on the branchwidth of the instances, coinciding with the theoretical result. For graphs with small or moderate branchwidth, CDS problem instances with up to a few thousand edges can be solved in practical time and memory space. This suggests that branch-decomposition based algorithms can be practical for the planar CDS problem.
ERIC Educational Resources Information Center
Arslan, Harika Ozge; Cigdemoglu, Ceyhan; Moseley, Christine
2012-01-01
This study describes the development and validation of a three-tier multiple-choice diagnostic test, the atmosphere-related environmental problems diagnostic test (AREPDiT), to reveal common misconceptions of global warming (GW), greenhouse effect (GE), ozone layer depletion (OLD), and acid rain (AR). The development of a two-tier diagnostic test…
DFT investigations of hydrogen storage materials
NASA Astrophysics Data System (ADS)
Wang, Gang
Hydrogen is a promising new energy source, producing no pollution and being abundant on earth. However, the most difficult problem in applying hydrogen is storing it effectively and safely, which can be addressed by keeping hydrogen in metal hydrides to reach a high hydrogen density in a safe way. There are several promising metal hydrides, whose thermodynamic and chemical properties are investigated in this dissertation. Sodium alanate (NaAlH4) is one of the promising metal hydrides, with a high hydrogen storage capacity of around 7.4 wt. % and a relatively low decomposition temperature of around 100 °C with a proper catalyst. Sodium hydride is a product of the decomposition of NaAlH4 that may affect the dynamics of NaAlH4. Oxygen contamination of the two materials, for example by OH- groups, may influence the kinetics of the dehydriding/rehydriding processes. Thus the solid solubility of OH- groups (NaOH) in NaAlH4 and NaH is studied theoretically by DFT calculations. Magnesium borohydride [Mg(BH4)2] has a higher hydrogen capacity of about 14.9 wt. % and a decomposition temperature of around 250 °C. However, one flaw restraining its application is that some polyboron compounds like MgB12H12 form and prevent further release of hydrogen. Adding transition metals that form magnesium transition metal ternary borohydrides [MgaTMb(BH4)c] may simplify the decomposition process so that hydrogen is released together with ternary borides (MgaTMbBc). The search for the probable ternary borides and the corresponding pseudo phase diagrams, as well as the decomposition thermodynamics, is performed using DFT calculations and the GCLP method to identify some possible candidates.
Algebraic multigrid domain and range decomposition (AMG-DD / AMG-RD)*
Bank, R.; Falgout, R. D.; Jones, T.; ...
2015-10-29
In modern large-scale supercomputing applications, algebraic multigrid (AMG) is a leading choice for solving matrix equations. However, the high cost of communication relative to that of computation is a concern for the scalability of traditional implementations of AMG on emerging architectures. This paper introduces two new algebraic multilevel algorithms, algebraic multigrid domain decomposition (AMG-DD) and algebraic multigrid range decomposition (AMG-RD), that replace traditional AMG V-cycles with a fully overlapping domain decomposition approach. While the methods introduced here are similar in spirit to the geometric methods developed by Brandt and Diskin [Multigrid solvers on decomposed domains, in Domain Decomposition Methods in Science and Engineering, Contemp. Math. 157, AMS, Providence, RI, 1994, pp. 135--155], Mitchell [Electron. Trans. Numer. Anal., 6 (1997), pp. 224--233], and Bank and Holst [SIAM J. Sci. Comput., 22 (2000), pp. 1411--1443], they differ primarily in that they are purely algebraic: AMG-RD and AMG-DD trade communication for computation by forming global composite “grids” based only on the matrix, not the geometry. (As is the usual AMG convention, “grids” here should be taken only in the algebraic sense, regardless of whether or not it corresponds to any geometry.) Another important distinguishing feature of AMG-RD and AMG-DD is their novel residual communication process that enables effective parallel computation on composite grids, avoiding the all-to-all communication costs of the geometric methods. The main purpose of this paper is to study the potential of these two algebraic methods as possible alternatives to existing AMG approaches for future parallel machines. As a result, this paper develops some theoretical properties of these methods and reports on serial numerical tests of their convergence properties over a spectrum of problem parameters.
NASA Astrophysics Data System (ADS)
Khachaturov, R. V.
2014-06-01
A mathematical model of X-ray reflection and scattering by multilayered nanostructures in the quasi-optical approximation is proposed. X-ray propagation and the electric field distribution inside the multilayered structure are considered with allowance for refraction, which is taken into account via the second derivative with respect to the depth of the structure. This model is used to demonstrate the possibility of solving inverse problems in order to determine the characteristics of irregularities not only over the depth (as in the one-dimensional problem) but also over the length of the structure. An approximate combinatorial method for system decomposition and composition is proposed for solving the inverse problems.
An efficient variable projection formulation for separable nonlinear least squares problems.
Gan, Min; Li, Han-Xiong
2014-05-01
We consider in this paper a class of nonlinear least squares problems in which the model can be represented as a linear combination of nonlinear functions. The variable projection algorithm projects the linear parameters out of the problem, leaving a nonlinear least squares problem involving only the nonlinear parameters. To implement the variable projection algorithm more efficiently, we propose a new variable projection functional based on matrix decomposition. The advantage of the proposed formulation is that the size of the decomposed matrix may be much smaller than those of previous ones. The Levenberg-Marquardt algorithm using the finite difference method is then applied to minimize the new criterion. Numerical results show that the proposed approach achieves a significant reduction in computing time.
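A minimal sketch of the basic variable projection idea (eliminating the linear coefficients by a least-squares solve inside the residual, not the paper's matrix-decomposition formulation) is shown below; the two-exponential model and starting values are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def varpro_residual(theta, t, y, basis):
    """Variable projection residual: for each value of the nonlinear parameters
    theta, the linear coefficients are eliminated by a least-squares solve."""
    Phi = basis(t, theta)                          # n_samples x n_basis matrix
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)    # projected-out linear params
    return Phi @ c - y

# Example model: y(t) = c1*exp(-theta1*t) + c2*exp(-theta2*t)
basis = lambda t, th: np.column_stack([np.exp(-th[0] * t), np.exp(-th[1] * t)])
rng = np.random.default_rng(4)
t = np.linspace(0.0, 4.0, 100)
y = 2.0 * np.exp(-0.5 * t) + 1.0 * np.exp(-2.0 * t) \
    + 0.01 * rng.standard_normal(t.size)
fit = least_squares(varpro_residual, x0=[0.3, 3.0], args=(t, y, basis))
print(fit.x)      # nonlinear decay rates, approximately [0.5, 2.0]
```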
The minimal residual QR-factorization algorithm for reliably solving subset regression problems
NASA Technical Reports Server (NTRS)
Verhaegen, M. H.
1987-01-01
A new algorithm to solve subset regression problems is described, called the minimal residual QR factorization algorithm (MRQR). This scheme performs a QR factorization with a new column-pivoting strategy. Basically, this strategy is based on the change in the residual of the least squares problem. Furthermore, it is demonstrated that this basic scheme can be extended in a numerically efficient way to combine the advantages of existing numerical procedures, such as the singular value decomposition, with those of more classical statistical procedures, such as stepwise regression. This extension is presented as an advisory expert system that guides the user in solving the subset regression problem. The advantages of the new procedure are highlighted by a numerical example.
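A hedged sketch in the spirit of this residual-driven pivoting (not the published MRQR algorithm itself) is shown below: each pivot column is the one that most reduces the current least-squares residual, applied through a Gram-Schmidt QR sweep; the data-generating example is illustrative.

```python
import numpy as np

def mr_pivoted_selection(A, y, k):
    """Greedy column selection where each pivot is the column that most reduces
    the least-squares residual, implemented as a Gram-Schmidt QR sweep."""
    A = A.astype(float).copy()
    r = y.astype(float).copy()
    selected = []
    for _ in range(k):
        # score each remaining column by the residual reduction it would yield
        norms = np.linalg.norm(A, axis=0)
        norms[norms == 0] = np.inf
        scores = np.abs(A.T @ r) / norms
        j = int(np.argmax(scores))
        q = A[:, j] / np.linalg.norm(A[:, j])
        selected.append(j)
        r = r - (q @ r) * q                 # update the least-squares residual
        A = A - np.outer(q, q @ A)          # orthogonalize remaining columns
        A[:, j] = 0.0                       # column is now exhausted
    return selected

# Example: only 3 of 10 candidate regressors actually matter
rng = np.random.default_rng(5)
X = rng.standard_normal((200, 10))
y = 3.0 * X[:, 1] - 2.0 * X[:, 4] + 0.5 * X[:, 7] + 0.01 * rng.standard_normal(200)
print(sorted(mr_pivoted_selection(X, y, k=3)))   # -> [1, 4, 7]
```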
The Diagnostic/Therapeutic Preabortion Interview.
ERIC Educational Resources Information Center
Boekelheide, Priscilla Day
1978-01-01
The therapeutic and diagnostic aspects of the preabortion interview are discussed, with attention to specifics that will identify students with the greatest likelihood for psychological problems and/or repeat abortions. (JD)
C-2W Magnetic Measurement Suite
NASA Astrophysics Data System (ADS)
Roche, T.; Thompson, M. C.; Griswold, M.; Knapp, K.; Koop, B.; Ottaviano, A.; Tobin, M.; TAE, Tri Alpha Energy, Inc. Team
2017-10-01
Commissioning and early operations are underway on C-2W, Tri Alpha Energy's new FRC experiment. The increased complexity level of this machine requires an equally enhanced diagnostic capability. A fundamental component of any magnetically confined fusion experiment is a firm understanding of the magnetic field itself. C-2W is outfitted with over 700 magnetic field probes, 550 internal and 150 external. Innovative in-vacuum annular flux loop / B-dot combination probes will provide information about plasma shape, size, pressure, energy, total temperature, and trapped flux when coupled with established theoretical interpretations. The massive Mirnov array, consisting of eight rings of eight 3D probes, will provide detailed information about plasma motion, stability, and MHD modal content with the aid of singular value decomposition (SVD) analysis. Internal Rogowski probes will detect the presence of axial currents flowing in the plasma jet at multiple axial locations. Initial data from this array of diagnostics will be presented along with some interpretation and discussion of the analysis techniques used.
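A generic sketch of the SVD analysis mentioned for the Mirnov array is shown below; the synthetic probe data, probe count, and mode frequency are illustrative, not C-2W measurements.

```python
import numpy as np

def mirnov_svd(signals):
    """SVD of a probe-array time series: columns of U give spatial structures,
    rows of Vt give their time evolution, s gives mode amplitudes."""
    data = signals - signals.mean(axis=1, keepdims=True)   # remove DC offsets
    U, s, Vt = np.linalg.svd(data, full_matrices=False)
    return U, s, Vt

# Synthetic 64-probe array (8 rings x 8 probes) seeing an n=1 rotating mode
n_probes, n_t = 64, 2000
theta = np.linspace(0, 2 * np.pi, n_probes, endpoint=False)
t = np.linspace(0, 1e-3, n_t)
rng = np.random.default_rng(6)
signals = (np.outer(np.cos(theta), np.cos(2 * np.pi * 20e3 * t)) +
           np.outer(np.sin(theta), np.sin(2 * np.pi * 20e3 * t)) +
           0.1 * rng.standard_normal((n_probes, n_t)))
U, s, Vt = mirnov_svd(signals)
print(s[:4] / s[0])   # a rotating mode appears as a pair of comparable singular values
```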
Finite elements and the method of conjugate gradients on a concurrent processor
NASA Technical Reports Server (NTRS)
Lyzenga, G. A.; Raefsky, A.; Hager, G. H.
1985-01-01
An algorithm for the iterative solution of finite element problems on a concurrent processor is presented. The method of conjugate gradients is used to solve the system of matrix equations, which is distributed among the processors of a MIMD computer according to an element-based spatial decomposition. This algorithm is implemented in a two-dimensional elastostatics program on the Caltech Hypercube concurrent processor. The results of tests on up to 32 processors show nearly linear concurrent speedup, with efficiencies over 90 percent for sufficiently large problems.
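The following is a serial sketch of the core operation described above, conjugate gradients in which the matrix-vector product is assembled element by element (the operation that the paper distributes across processors); the 1-D chain of "elements" is an illustrative stand-in for the 2-D elastostatics mesh.

```python
import numpy as np

def element_matvec(elements, x):
    """Matrix-vector product assembled element by element.

    elements : list of (dof_indices, element_stiffness_matrix) pairs.
    """
    y = np.zeros_like(x)
    for dofs, ke in elements:
        y[dofs] += ke @ x[dofs]          # gather, local multiply, scatter/add
    return y

def conjugate_gradient(matvec, b, tol=1e-10, max_iter=1000):
    """Standard CG for a symmetric positive-definite operator given as matvec."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rr = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

# 1-D chain of two-node elements, pinned at node 0, loaded at the far end
n = 100
elements = [(np.array([i, i + 1]), np.array([[1.0, -1.0], [-1.0, 1.0]]))
            for i in range(n - 1)]
elements.append((np.array([0]), np.array([[1.0]])))   # grounding spring at node 0
b = np.zeros(n); b[-1] = 1.0
u = conjugate_gradient(lambda v: element_matvec(elements, v), b)
```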
Finite elements and the method of conjugate gradients on a concurrent processor
NASA Technical Reports Server (NTRS)
Lyzenga, G. A.; Raefsky, A.; Hager, B. H.
1984-01-01
An algorithm for the iterative solution of finite element problems on a concurrent processor is presented. The method of conjugate gradients is used to solve the system of matrix equations, which is distributed among the processors of a MIMD computer according to an element-based spatial decomposition. This algorithm is implemented in a two-dimensional elastostatics program on the Caltech Hypercube concurrent processor. The results of tests on up to 32 processors show nearly linear concurrent speedup, with efficiencies over 90% for sufficiently large problems.
Research on air and missile defense task allocation based on extended contract net protocol
NASA Astrophysics Data System (ADS)
Zhang, Yunzhi; Wang, Gang
2017-10-01
Based on the background of distributed cooperative engagement of air and missile defense elements, the interception task allocation problem of multiple weapon units against multiple targets under networked conditions is analyzed. Firstly, a mathematical model of task allocation is established by decomposing the combat task. Secondly, an initial assignment based on auction contracts and an adjustment scheme based on swap contracts are introduced into the task allocation. Finally, through simulation of a typical situation, it is shown that the model can be used to solve the task allocation problem in a complex combat environment.
A modified ant colony optimization for the grid job scheduling problem with QoS requirements
NASA Astrophysics Data System (ADS)
Pu, Xun; Lu, XianLiang
2011-10-01
Job scheduling with customers' quality of service (QoS) requirements is challenging in a grid environment. In this paper, we present a modified ant colony optimization (MACO) for the job scheduling problem in grids. Instead of using the conventional construction approach to build feasible schedules, the proposed algorithm employs a decomposition method to satisfy the customer's deadline and cost requirements. In addition, a new mechanism for updating the state of service instances is embedded to improve the convergence of MACO. Experiments demonstrate the effectiveness of the proposed algorithm.
Aerospace plane guidance using geometric control theory
NASA Technical Reports Server (NTRS)
Van Buren, Mark A.; Mease, Kenneth D.
1990-01-01
A reduced-order method employing decomposition, based on time-scale separation, of the 4-D state space into a 2-D slow manifold and a family of 2-D fast manifolds is shown to provide an excellent approximation to the full-order minimum-fuel ascent trajectory. Near-optimal guidance is obtained by tracking the reduced-order trajectory. The tracking problem is solved as a set of regulation problems on the family of fast manifolds, using the exact linearization methodology from nonlinear geometric control theory. The validity of the overall guidance approach is indicated by simulation.
The Compressible Stokes Flows with No-Slip Boundary Condition on Non-Convex Polygons
NASA Astrophysics Data System (ADS)
Kweon, Jae Ryong
2017-03-01
In this paper we study the compressible Stokes equations with a no-slip boundary condition on non-convex polygons and show the best regularity that the solution can attain without subtracting corner singularities. This is obtained by a suitable Helmholtz decomposition u = w + ∇φ_R with div w = 0 and a potential φ_R. Here w is the solution of the incompressible Stokes problem and φ_R is defined by subtracting from the solution of the Neumann problem the leading two corner singularities at the non-convex vertices.